Soil organic matter dynamics in a North American tallgrass prairie after 9 yr of experimental warming

The influence of global warming on soil organic matter (SOM) dynamics in terrestrial ecosystems remains unclear. In this study, we combined soil fractionation with isotope analyses to examine SOM dynamics after nine years of experimental warming in a North American tallgrass prairie. Soil samples from the control plots and the warmed plots were separated into four aggregate sizes (>2000 µm, 250-2000 µm, 53-250 µm, and <53 µm) and three density fractions (free light fraction, LF; intra-aggregate particulate organic matter, iPOM; and mineral-associated organic matter, mSOM). All fractions were analyzed for their carbon (C) and nitrogen (N) content and their δ13C and δ15N values. Warming did not significantly affect soil aggregate distribution and stability but increased C4-derived C input into all fractions, with the greatest increase in LF. Warming also stimulated decay rates of C in the whole soil and in all aggregate sizes. C in LF turned over faster than C in iPOM in the warmed soils. The δ15N values of soil fractions were more enriched in the warmed soils than in the control, indicating that warming accelerated the loss of soil N. The δ15N values increased, while C:N ratios decreased, in the order LF, iPOM, mSOM, reflecting an increasing degree of decomposition and mineral association. Overall, warming increased the input of C4-derived C by 11.6 %, which was offset by the accelerated loss of soil C. Our results suggest that global warming simultaneously stimulates C input via a shift in species composition and the decomposition of SOM, resulting in a negligible net change in soil C.

Correspondence to: X.
Cheng (xlcheng@fudan.edu.cn)

Introduction

The recent Intergovernmental Panel on Climate Change report (IPCC, 2007) predicts that the global average temperature will increase by 1.1-6.4 °C during the current century. Global warming is expected to profoundly impact ecosystem processes such as soil organic matter (SOM) dynamics (e.g., Davidson and Janssens, 2006; Von Fischer et al., 2008). Carbon (C) in SOM accounts for 80 % of the terrestrial C pool and is regarded as an important potential C sink that may help offset the greenhouse effect (e.g., Lal, 2008; Maia et al., 2010). Small changes in the SOM stock under global change can potentially affect atmospheric CO2 concentrations (e.g., Batjes and Sombroek, 1997; Marin-Spiotta et al., 2009). In addition, warming-induced changes in SOM regulate the availability of nitrogen (N) for plant growth and ultimately influence the net primary productivity of terrestrial ecosystems. Hence, it is imperative to understand how global warming will affect SOM dynamics.

Effects of warming on SOM dynamics remain a widely debated topic (e.g., Pendall et al., 2004). For example, climatic warming increases soil temperature and hence accelerates organic matter decomposition rates, leading to losses of soil C and N (e.g., Rustad et al., 2001; Fontaine et al., 2004). Conversely, some studies have reported that warming leads to increases in soil C and N because of large increases in biomass and litter inputs in tundra ecosystems (e.g., Welker et al., 2004; Day et al., 2008). These differences are not surprising, given that the response of soils to warming depends on many factors, such as soil moisture and temperature and, in particular, on the plant species that provide carbon inputs to soils (e.g., Shaw and Harte, 2001; Fissore et al., 2008). Most SOM derives exclusively from the plant material growing on site. Changes in vegetation type are thus expected to alter the quality and quantity of SOM (Cheng et al., 2006; Fissore et al., 2008). Recent climatic warming has already led to dramatic shifts in plant functional groups (e.g., between high-quality C3 litter and low-quality C4 litter), and this can affect the accumulation and decomposition patterns of SOM by altering the quantity and quality of plant material entering the soil (Day et al., 2008; Fissore et al., 2008). Therefore, understanding the response of SOM to climatic warming is critical for accurate predictions of long-term ecosystem C and N cycling under future climatic scenarios.

However, detecting changes in the SOM stock of terrestrial ecosystems under global change can be difficult, because SOM is a complex mixture of components with different physical and chemical stabilities (Van Groenigen et al., 2002; Del Galdo et al., 2003; Marin-Spiotta et al., 2009). To characterize changes in soil C and SOM dynamics correctly, size and density fractionation techniques have been developed to separate bulk soil into fractions that differ in microbial degradability and turnover time (e.g., Jastrow, 1996; Six et al., 2000; Marin-Spiotta et al., 2009). Aggregate size fractionations have shown that C in SOM associated with larger aggregates has higher turnover rates than C in SOM associated with smaller aggregates (Jastrow, 1996; Six et al., 2000). Density fractionations yield a light fraction, which is composed of physically unprotected plant debris and is generally thought to have a rapid turnover, as well as a heavy, mineral-associated fraction, which is more recalcitrant and turns over on longer timescales (Balesdent, 1996).
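The contrast between a fast-cycling light fraction and a slow, mineral-associated fraction can be made concrete with a first-order decay sketch. The decay constants below are hypothetical round numbers chosen only to illustrate the fast/slow contrast, not values from this study:

```python
import math

# Hypothetical decay constants (yr^-1): a fast, unprotected light
# fraction versus a slow, mineral-associated fraction.
K_LIGHT = 0.30    # free light fraction (LF), rapid turnover
K_MINERAL = 0.01  # mineral-associated SOM (mSOM), slow turnover

def remaining_fraction(k, years):
    """Fraction of an initial C pool remaining after first-order decay."""
    return math.exp(-k * years)

# Over a nine-year experiment the two pools diverge sharply:
for label, k in [("LF", K_LIGHT), ("mSOM", K_MINERAL)]:
    left = remaining_fraction(k, 9)
    print(f"{label}: {left:.0%} of initial C remains after 9 yr")
```

With these illustrative rates, most of the light-fraction C is replaced within the experiment's duration while the mineral-associated pool barely changes, which is why bulk-soil C stocks can stay flat even as fraction-level dynamics shift.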
Natural abundance of stable C isotopes coupled with SOM fractionation techniques offers an approach to better quantify SOM dynamics when global change induces a shift in the dominant plant species composition between C4 and C3 (López-Ulloa et al., 2005; John et al., 2005; Auerswald et al., 2009; Marin-Spiotta et al., 2009). Theoretically, differences in the natural stable C isotope signature between C3 (average δ13C value of −27 ‰) and C4 (average δ13C value of −11 ‰) plants result in SOM with distinct isotopic signatures. Changes in δ13C values of SOM over time following a change in vegetation can be used to examine the relative contribution of C3- or C4-derived C to SOM formation (Del Galdo et al., 2003; Cheng et al., 2006) and to quantify SOM decomposition rates (e.g., Liao et al., 2006). Furthermore, soil δ15N values reflect the net effect of N-cycling processes as influenced by climate change and species composition (Robinson, 2001; Dawson et al., 2002; Bijoor et al., 2008). For instance, increased soil temperature has been suggested to enhance rates of N cycling and loss of N, resulting in 15N enrichment (Bijoor et al., 2008). Soil δ15N values can also be used to estimate the degree of SOM decomposition and humification (Kramer et al., 2003; Liao et al., 2006; Templer et al., 2007; Marin-Spiotta et al., 2009).

In the Great Plains of central Oklahoma, USA, a long-term, ongoing experimental warming and clipping experiment was initiated on 21 November 1999 in a tallgrass prairie (Luo et al., 2001) dominated by a mixture of C4 grasses and a few C3 forbs. Warming has resulted in a shift towards a more C4-grass-dominated plant community and increases in aboveground biomass and aboveground net primary productivity (ANPP) (Wan et al., 2005; Luo et al., 2009), and hence increased litter input and altered litter quality (An et al., 2005; Cheng et al., 2010). These changes provide a unique opportunity to use the natural abundance of δ13C and δ15N to
evaluate changes in SOM dynamics after nine years of experimental warming. We hypothesized that nine years of warming would significantly increase SOM storage owing to warming-induced increases in litter input and changes in litter quality (An et al., 2005; Cheng et al., 2010). To test this hypothesis, we measured the δ13C, δ15N, C, and N concentrations in all SOM aggregate and density fractions in the tallgrass prairie experiment. The specific objectives of this study were to: (1) evaluate the impact of the long-term experimental warming on the C and N pools in SOM fractions; (2) quantify the amounts of C derived from C4 vs. C3 sources in SOM fractions after nine years of experimental warming; and (3) estimate the turnover rate of C in SOM fractions in warmed soils.

Site description

The experiment was located at Kessler's Farm Field Laboratory (formerly Great Plain Apiaries; 34°58′54″ N, 97°31′14″ W), 40 km from the Norman campus of the University of Oklahoma, USA. Detailed descriptions of the site characteristics and the design of the experiment have been reported elsewhere (see Luo et al., 2001). Briefly, the site is a tallgrass prairie primarily dominated by C4 grasses (Schizachyrium scoparium and Sorghastrum nutans) and C3 forbs (Solidago rigida and Solidago nemoralis). S. scoparium comprises over 40 % of the plant cover, and S. nutans over 20 % (Sherry and Luo, unpublished data). Mean annual temperature is 16.0 °C, with monthly mean temperatures of 3.1 °C in January and 28.0 °C in July. Mean annual precipitation is 911.4 mm (Oklahoma Meteorological Survey). The soil is a silt loam with 36 % sand, 55 % silt, and 10 % clay in the top 15 cm. The proportion of clay increases with depth. The soil is part of the Nash-Lucien complex, which is characterized by low permeability, high available water capacity, and a deep, moderately penetrable root zone (USDA Soil Conservation Service and Oklahoma Agricultural Experiment Station, 1963).
Experimental design

The experiment used a paired factorial design with warming as the main factor and clipping as a nested factor. Pairs of 2 × 2 m control and warmed plots were replicated six times. One plot of each pair has been subjected to continuous 2 °C warming since 12 November 1999, while the control has remained at ambient temperature. One 165 × 15 cm radiant infrared heater (Kalglo Electronics Inc., Bethlehem, PA, USA) with an output of 100 W m−2 was suspended 1.5 m above the ground in each warmed plot as the heating device. The reflector surfaces of the heaters were adjusted to generate evenly distributed radiant input to the soil surface (Kimball, 2005). As a result, the temperature increments generated by the infrared heaters were relatively even over the entire area of the plots and similar at different soil depths (Wan et al., 2005). A "dummy heater" of the same shape and size as the infrared heater was suspended at the same height in each control plot to simulate the shading effect of the heater on the plant canopy. Within each pair, the distance between the warmed and control plots was approximately 5 m, to avoid heating of the control plot. The distance between pairs varied from 20 to 60 m. Each 2 × 2 m plot was divided into four 1 × 1 m subplots. Plants in two diagonal subplots were clipped yearly, usually in August, at a height of 10 cm above the ground to remove biomass. Clipping in this manner effectively mimics agricultural hay mowing, a widely practiced land use in the southern Great Plains, where farmers and ranchers usually mow pasture once or twice per year, depending on rainfall. Clipping also simulates biomass harvest for biofuel feedstock production, although the study was not originally designed to study bioenergy production. The other two diagonal subplots were left unclipped. The four treatments in the experiment were thus unclipped control (UC), clipped control (CC), unclipped warming (UW), and clipped warming (CW).
Litter and soil collection, and soil fractionation

Litter on the soil surface was collected in the clipped plots in August 2008. As the unclipped subplots were designed to experience minimal disturbance over the long term, no plant material was taken from them. Collected litter was separated into C3 and C4 species according to morphological traits. Litter from C3 and C4 species, and mixed litter (i.e., mixed C3 and C4 litter) from the warmed plots, were selected for analyses of C and N concentrations and stable C and N isotopes (δ13C and δ15N).

Soil samples were collected at a depth of 0-20 cm with a 4 cm diameter soil corer in the fall of 2008, nine years after the warming began. Soil samples were air-dried, after which large roots and stones were removed by hand. The method for aggregate separation and for density fractionation into the free light fraction (LF), intra-aggregate particulate organic matter (iPOM), and mineral-associated organic matter (mSOM) was adapted from Six et al. (1998). The fractionation sequences are summarized in Table 1. Four aggregate sizes were separated by wet sieving through a series of sieves (2000, 250, and 53 µm). A 100 g air-dried sample was submerged for 5 min in room-temperature de-ionized water on top of the 2000 µm sieve. Aggregate separation was achieved by manually moving the sieve up and down 3 cm, with 50 repetitions over a period of 2 min. After the 2-min cycle, the stable >2000 µm aggregates were gently back-washed off the sieve into an aluminum pan. Floating organic material (>2000 µm) was discarded, as by definition it is not considered SOM (Six et al., 1998). Water and soil that passed through the sieve were poured onto the next two sieves (one at a time) and the sieving was repeated in a similar fashion, but the floating material was retained. The density fractionation was carried out using a 1.85 g cm−3 solution of sodium polytungstate (SPT), following the method described in Six et al.
(1998). A subsample (5 g) of each oven-dried (110 °C) aggregate size fraction was suspended in 35 ml of SPT and slowly shaken by hand. The material remaining on the cap and sides of the centrifuge tube was washed into suspension with 10 ml of SPT. After 20 min under vacuum (138 kPa), the samples were centrifuged (1250 g) at 20 °C for 60 min. The floating material (light fraction, LF) was aspirated onto a 20 µm nylon filter, washed repeatedly with deionized water to remove SPT, and dried at 50 °C. The heavy fraction (HF) was rinsed twice with 50 ml of deionized water and dispersed in 0.5 % sodium hexametaphosphate by shaking for 18 h on a reciprocal shaker. The dispersed heavy fraction was then passed through a 53 µm sieve, and the material remaining on the sieve, i.e., the intra-aggregate particulate organic matter (iPOM), was dried (50 °C) and weighed.

Carbon, nitrogen, and isotope analyses

Samples of litter and soil were dried at 50 °C to constant weight and then ground to pass through 20-mesh (0.84 mm) sieves (Cheng et al., 2006). The C and N concentrations and the δ13C and δ15N values were measured for all soil fractions and litter materials. Subsamples from all fractions were treated with 1 N HCl for 24 h at room temperature to remove any soil carbonates (Cheng et al., 2006). The C and N concentrations and δ13C and δ15N of soil and litter were determined at the University of Arkansas Stable Isotope Laboratory on a Finnigan Delta+ mass spectrometer (Finnigan MAT, Germany) coupled to a Carlo Erba elemental analyzer (NA1500 CHN Combustion Analyzer, Carlo Erba Strumentazione, Milan, Italy) via a Finnigan Conflo II interface. Carbon and nitrogen contents of SOM fractions were calculated on an areal basis, correcting for soil depth and bulk density.
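The areal conversion in the last sentence is simple unit bookkeeping, sketched below; the numeric inputs are hypothetical round values chosen only for illustration, not measurements from this study:

```python
def areal_carbon(c_conc_pct, bulk_density, depth_cm):
    """Convert a C concentration (% by mass) to an areal stock (g C m^-2).

    bulk_density : soil bulk density (g cm^-3)
    depth_cm     : thickness of the sampled layer (cm)

    A 1 m^2 column of thickness d cm contains bulk_density * d * 10^4 g
    of soil (1 m^2 = 10^4 cm^2).
    """
    soil_mass_g_per_m2 = bulk_density * depth_cm * 1e4
    return soil_mass_g_per_m2 * c_conc_pct / 100.0

# Hypothetical silt loam: 1.0 % C, bulk density 1.3 g cm^-3, 0-20 cm layer.
print(round(areal_carbon(1.0, 1.3, 20)))  # -> 2600
```

The result, 2600 g C m−2, is of the same order as the whole-soil stocks reported in Table 3 (2371-2707 g C m−2), which is a useful sanity check on the units.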
The carbon and nitrogen isotope ratios of the soil fractions were expressed as

δhX = (Rsample/Rstandard − 1) × 1000 ‰, (1)

where X is either carbon or nitrogen, "h" is the heavier isotope, "l" is the lighter isotope, and R is the ratio of the heavier to the lighter isotope (hX/lX). Both CO2 and N2 samples were analyzed relative to internal working gas standards. Carbon isotope ratios (13C) are expressed relative to Pee Dee Belemnite (δ13C = 0.0 ‰); nitrogen stable isotope ratios (15N) are expressed relative to air (δ15N = 0.0 ‰). Standards (acetanilide and spinach) were analyzed after every ten samples; the analytical precision of the instrument was ±0.13 ‰ for δ13C and ±0.21 ‰ for δ15N.

Differences in δ13C composition due to photosynthetic pathways allow the proportion of soil C derived from C3 or C4 sources to be calculated using a two-compartment mixing model (Del Galdo et al., 2003; Cheng et al., 2006):

δX = fA δA + fB δB, with fA + fB = 1, (2)

where δX is the δ13C of a given fraction isolated from the warmed or control plots, δA and δB are the isotope values of C3 and C4 plants from these plots, fA is the fraction of C3-derived C, and fB (= 1 − fA) is the proportion derived from C4 grasses.

The fraction of new C, fnew, derived from the current vegetation in the warmed soils after nine years of warming was calculated using the isotope mass balance method (Marin-Spiotta et al., 2009):

fnew = (δ2 − δ0)/(δ1 − δ0), (3)

where δ2 and δ0 are the δ13C values of the SOM pools in the warmed and control plots, respectively, and δ1 is the average δ13C value of the mixed litter entering the SOM pool in the warmed plots, on the assumption that no shift in the ratio of C3 to C4 inputs occurred in the control soil over the past 9 yr. In Eqs.
(2) and (3), because δA (or δ1), δX (or δ2), and δB (or δ0) are independently measured, the standard error (SE) of f associated with the mass-balance approach can be calculated using partial derivatives (Phillips and Gregg, 2001), which reduce to

σf² = (σδX² + fA² σδA² + fB² σδB²)/(δB − δA)², (4)

where σδA², σδX², and σδB² are the variances of the mean δA (or δ1), δX (or δ2), and δB (or δ0), respectively, and σf is the SE of the proportion (f) estimate (Phillips and Gregg, 2001).

Furthermore, the decomposition rate constant (k) for old C (i.e., the C of the organic matter present before warming) in the different SOM fractions of the warmed plots was calculated as (Del Galdo et al., 2003)

fold = e^(−kt), so that k = −ln(fold)/t, (5)

where fold = (1 − fnew) is the proportion of old C, k is the net relative decomposition rate constant of old C, and t is the duration of warming.

Statistics

Analysis of variance (ANOVA) for a paired split-plot design (one pair of plots being considered a block) was conducted to examine the effects of warming on soil organic C and N contents, δ13C and δ15N values, C:N ratios in all soil fractions, and the weight distribution. Differences in soil organic C and N contents, δ13C and δ15N values, and C:N ratios between aggregate sizes and density fractions were analyzed using one-way ANOVA. All statistical analyses were performed using StatSoft's Statistica statistical software for Windows (Version 6.0, StatSoft, Inc., 2001).
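The mixing model, the mass-balance estimate of new C, the Phillips-and-Gregg error propagation, and the decay-rate calculation described above can be collected into a short computational sketch. The litter end-members are the means reported in this study (Table 2); the example soil inputs are hypothetical:

```python
import math

DELTA_C3 = -27.62  # mean δ13C of C3 litter in this study (‰)
DELTA_C4 = -12.74  # mean δ13C of C4 litter in this study (‰)

def c4_fraction(delta_soil, delta_c3=DELTA_C3, delta_c4=DELTA_C4):
    """Two-compartment mixing model: fraction f_B of C4-derived soil C."""
    f_c3 = (delta_soil - delta_c4) / (delta_c3 - delta_c4)
    return 1.0 - f_c3

def f_new(delta_warmed, delta_control, delta_litter):
    """Isotope mass balance: fraction of new C from current vegetation."""
    return (delta_warmed - delta_control) / (delta_litter - delta_control)

def mixing_se(f_b, se_soil, se_c3, se_c4,
              delta_c3=DELTA_C3, delta_c4=DELTA_C4):
    """SE of a mixing proportion (Phillips and Gregg, 2001)."""
    f_a = 1.0 - f_b
    var = se_soil**2 + f_a**2 * se_c3**2 + f_b**2 * se_c4**2
    return math.sqrt(var) / abs(delta_c4 - delta_c3)

def decay_constant(frac_new, years=9.0):
    """f_old = exp(-k t), hence k = -ln(1 - f_new) / t."""
    return -math.log(1.0 - frac_new) / years

# A soil δ13C exactly halfway between the end-members is 50 % C4-derived:
print(round(c4_fraction(-20.18), 3))        # -> 0.5
# A new-C fraction of 11.6 % over nine years implies:
print(round(decay_constant(0.116), 4))      # -> 0.0137  (yr^-1)
```

The worked decay example uses the study's whole-soil average increase in C4-derived C (11.6 %) purely to show the arithmetic; the paper's fraction-specific rate constants are given in Table 6.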
Results

Whole soil C and N dynamics

Whole-soil (i.e., total soil) organic C and N contents ranged from 2371 to 2707 g C m−2 and from 284 to 312 g N m−2, respectively, across all treatments. No significant differences in C and N contents or C:N ratios were found among treatments (Table 3). Nine years of warming significantly increased the δ13C signature of SOM in both clipped and unclipped plots. On average, the warmed-plot soils were 1.3 ‰ more enriched in 13C than the control plots. Based on these data, warming increased the fraction of C4-derived C by 11.6 % on average (Table 3). Warming also increased the δ15N values of SOM in clipped plots (Table 3).

Size distribution, C and N contents, and δ13C and δ15N of soil aggregates

Aggregate distribution was not significantly affected by warming or clipping (Fig. 1). Warming significantly decreased soil organic C and N contents in microaggregates (<250 µm) in clipped plots, but not in other aggregate size classes (Table 4). No significant differences in C:N ratios were found across aggregate sizes and treatments (Fig. 2). Generally, macroaggregates (>250 µm) contained significantly more C and N (78-84 %) than microaggregates in all treatments (Table 4).

Warming increased the δ13C values of all aggregate sizes relative to the control plots, although not significantly (Table 5), indicating that warming possibly stimulated the input of C4-derived C (Fig. 3a). Warming-induced increases in the fraction of C4-derived C were greatest in >2000 µm macroaggregates (Fig. 3b). The δ15N values of each aggregate size were significantly more enriched in the warmed plots than in the control (Table 5). There were no significant differences in δ15N values among the larger aggregates (>53 µm), but microaggregates (<53 µm) had significantly higher δ15N values than all other aggregate size classes (Table 5).
Density fractions: C and N contents, and δ13C and δ15N in LF, iPOM, and mSOM

LF accounted for the smallest fraction of total SOM, whereas mSOM accounted for the largest (74-79 %) in all aggregate size classes across all treatments (Table 4). Warming significantly decreased soil organic C and N contents in iPOM within macroaggregates (>2000 µm) in clipped plots, but not in any other SOM class. C and N contents in mSOM and iPOM decreased significantly with size class, whereas the highest C and N contents in LF were found in 250-2000 µm macroaggregates (Table 4). Under warming, the C:N ratio increased significantly in the LF of 53-250 µm microaggregates, but not in any other SOM class, relative to the control (Fig. 4). C:N ratios decreased from LF to iPOM to mSOM in all aggregate classes across treatments (Fig. 4).

Warming increased δ13C values across all density fractions in each aggregate size (Table 5). The warming-induced increase was significant for the δ13C values of LF in >2000 µm macroaggregates in clipped plots. The δ13C values were generally more enriched in mSOM than in LF and iPOM across aggregate sizes and treatments (Table 5). The warming-induced increase in C4-derived C was highest for LF in >2000 µm macroaggregates among all aggregates and density fractions (Fig. 5b). In general, warming stimulated more C4-derived C input into LF than into iPOM and mSOM across all aggregate sizes, and more into larger than into smaller aggregate sizes (Fig. 5b). Warming also significantly increased the δ15N values of LF, iPOM, and mSOM across aggregate sizes (Table 5). mSOM had the highest and LF the lowest δ15N values among density fractions in all aggregate sizes across treatments (Table 5).
Soil C turnover

Experimental warming stimulated both new C input from C4 photosynthesis and the decay rate of old C (Table 6). New C inputs in the whole soil were greater than those in any aggregate class. New C inputs were greater in LF than in iPOM, with the greatest in >2000 µm macroaggregates. Overall, new C inputs in soil fractions decreased with decreasing aggregate size, except for mSOM (Table 6). Accordingly, decay rates of old C in the whole soil were faster than those in any aggregate class. The fastest decay rates were found in the LF of >2000 µm macroaggregates, and LF had a greater decay rate than iPOM in all SOM classes (Table 6).

Discussion

Warming did not significantly increase total soil C and N storage, but other aspects of SOM dynamics did change. Warming effects at our study site were previously characterized by increased biomass growth and ANPP, a shift toward greater C4 species dominance, and increased litter input (Wan et al., 2005; Luo et al., 2009; Cheng et al., 2010). Our stable isotope analysis in the present study confirmed that the δ13C abundance in SOM was more enriched in the warmed soils than in the control soils (Table 3), resulting from a higher contribution of C4 residues. Indeed, warming-induced increases in C4 plants and decreases in C3 plants increased the fraction of C4-derived C by 11.6 % on average (Table 3). However, the increases in C inputs and changes in SOM quality after 9 yr of warming did not significantly increase total soil organic C and N contents (Table 3; Niu et al., 2010). SOM storage under warming is controlled mainly by the balance between litter input and soil C respiration (e.g., Shaw and Harte, 2001; Fissore et al., 2008). Our previous study found that warming increased soil respiration (Zhou et al., 2007), in line with other warming studies (e.g., Rustad et al., 2001; Fontaine et al., 2004). Furthermore, Wynn and Bird (2007) found that the active pool of SOM derived from C4 plants decomposes faster than the total
pool of SOM. Warming-induced increases in C4-derived C in the SOM pool (Table 3) therefore likely accelerated the decay rates of SOM in the warmed soils. Thus, the unchanged SOM stock in our warming experiment probably resulted from concurrent increases in litter inputs to soil (Luo et al., 2009; Cheng et al., 2010) and in decomposition rates (Zhou et al., 2007).

Similarly, warming did not significantly change the soil aggregate size distribution (Fig. 1) or soil organic C and N contents (Table 4). Although warming induced a shift from C3 to C4 plant species, which may affect SOM quality and input (Luo et al., 2009; Cheng et al., 2010), warming did not affect the level of soil aggregation (Fig. 1). These results are in agreement with Scott (1998), who reported that grass species had no effect on the size distribution of soil aggregates or on organic matter concentration. The C and N contents of the different size fractions are, however, primarily controlled by the amount of each aggregate size (Elliott, 1986; Van Groenigen et al., 2002). We found that macroaggregates (>250 µm) contained significantly more C and N than microaggregates (Table 4), consistent with reports that C and N contents decline with decreasing aggregate size (Elliott, 1986; Puget et al., 1995). Additionally, warming increased C4-derived C in all aggregate sizes, with the highest C4-derived C in >2000 µm macroaggregates (Fig. 3b), supporting the evidence that new C is incorporated more rapidly into coarse than into fine SOM fractions (Desjardins et al., 2004; Schwendenmann and Pendall, 2006).

It is well known that LF, iPOM, and mSOM have different chemical compositions and turnover times (Trumbore, 2000; Wynn and Bird, 2007). The higher C:N ratios of LF reflect more recent litter inputs, while mSOM had much lower C:N ratios (Fig.
4). Decreasing C:N ratios in soil C fractions have been associated with increasing SOM decomposition and mineral association (John et al., 2005; Marin-Spiotta et al., 2009). Moreover, the enrichment of δ15N values from LF to iPOM to mSOM provides further evidence of the increasing degree of decomposition and humification of SOM. Similar to other studies (Liao et al., 2006; Marin-Spiotta et al., 2009), the δ15N values of the <53 µm microaggregates were higher than those of the other aggregates (Table 5). In general, low δ15N values are related to recent organic matter inputs (litter, roots), whereas high δ15N values in silts and clays (<53 µm) are associated with increasing SOM transformation and humification.

The δ15N values of soil fractions were more enriched in the warmed soil than in the control soil (Table 5). This suggests that the natural abundance of 15N in the warmed soil became enriched through N loss from the soil via increased mineralization and possibly nitrate leaching, relative to the control (Rustad et al., 2001; Bijoor et al., 2008). Indeed, warming decreased total soil N pools in the clipped plots by 10 % on average, although not significantly (Table 3). SOM decomposition typically results in δ15N enrichment (Liao et al., 2006; Marin-Spiotta et al., 2009), so the higher δ15N of the warmed soils relative to the control soils suggests that the SOM in the warmed soils is more decomposed.
Even though there were no significant increases in SOM pools after nine years of warming, isotopic measurements and turnover estimates suggest different C decay rates among SOM fractions in the warmed soils. Increased new C inputs from plant residues can result in faster decomposition of SOM (Dijkstra and Cheng, 2007). The decay rates of old C in the whole soil were faster than those in any aggregate class, owing to greater new C inputs (Table 6). This finding supports the evidence that soil aggregates physically protect certain SOM fractions, resulting in pools with longer turnover times (Six et al., 1998).

Although LF generally represents only a small proportion of total soil C (Gregorich et al., 2006), changes in C stocks following changes in species composition can be more pronounced in LF than in bulk soil (Schwendenmann and Pendall, 2006). Our results showed that the warming-induced increase in C4-derived C was larger in LF than in iPOM and mSOM in all aggregate sizes (Fig. 5). Meanwhile, organic matter in LF is readily accessible to microbes, as reflected by its rapid initial loss (Fontaine et al., 2004; Pendall et al., 2004). Warming caused no significant decreases in the C content of LF in any SOM size fraction except the 250-2000 µm aggregates in clipped plots (Table 4), possibly because of a rapid loss of labile substrates from LF (Table 6). With an increasing degree of decomposition, organic matter may be transferred to more stabilized soil fractions. In contrast to the rapid decomposition of C4-derived C in LF, some C4-derived C remained in iPOM fractions with slower turnover rates (Table 6). iPOM and mSOM accounted for the largest fraction of soil organic C and N contents in all aggregate sizes (Table 4). Warming did not significantly affect soil organic C and N contents in the iPOM and mSOM fractions, supporting the view that the heavy, mineral-associated fractions remain recalcitrant through physical and chemical stabilization (e.g., Six et al., 1998).
To conclude, we found that nine years of experimental warming caused no significant increases in soil organic C and N contents in any soil fraction at our site. Warming did not significantly affect soil aggregate distribution and stability. However, warming did increase C4-derived C input into all fractions, particularly into the LF of all aggregate size classes. Significant C losses from the whole soil and from the labile components of LF under warming likely offset the increased overall SOM inputs. Under warming, the δ15N values of soil fractions were more enriched than in the controls, indicating increased N transformation. C:N ratios and differences in the natural abundance of δ13C and δ15N among SOM fractions are associated with an increasing degree of decomposition across density fractions with increasing mineral association. The δ13C value of SOM is controlled by multiple factors, including hydrology, soil temperature, substrate, and vegetation, but turnover times based on natural-abundance stable isotope methods tend to be most closely related to recent C inputs and to the C pools associated with the C3/C4 vegetation conversion (Six and Jastrow, 2002). The lack of variability in the controls in this study might not provide rigorous statistical tests of warming effects on changes in the δ13C of SOM. However, environment-induced changes in the δ13C of SOM are small relative to the changes caused by C3/C4 litter inputs and therefore do not strongly compromise the use of δ13C to trace SOM dynamics following changes in litter inputs (Wedin et al., 1995; Six and Jastrow, 2002). Our results suggest that shifts in species composition under warming could potentially modify SOM quality and decomposition and consequently affect ecosystem functions. The physical fractionation methods combined with isotope analyses in our study were an attempt to better understand SOM dynamics in response to global warming by acknowledging that SOM consists of a continuum of
substrates with different turnover times. To accurately predict the effects of global warming on ecosystem processes, we need ecological models and long-term experiments that project future changes in ecosystem C and N cycles in response to multifactor global change.

Fig. 1. Weight distribution among aggregate size classes under four treatments after nine years of warming and clipping. Values followed by a different lowercase letter are significantly different within aggregate size classes among treatments. Values followed by a different capital letter are significantly different among aggregate size classes under treatments. Abbreviations: UC, unclipped control; CC, clipped control; UW, unclipped warming; CW, clipped warming.

Fig. 3. Fraction of C4-derived C of aggregate size classes under four treatments after nine years of warming and clipping (a), and warming-induced increases in the fraction of C4-derived C of aggregate size classes in warmed soils (b). Values followed by a different lowercase letter are significantly different within aggregate size classes among treatments. Values followed by a different capital letter are significantly different among aggregate size classes under treatments.

Fig. 4. C:N ratios of LF, iPOM, and mSOM of aggregate size classes under four treatments after nine years of warming and clipping. Values followed by a different lowercase letter are significantly different within SOM classes among treatments of each aggregate size. Values followed by a different capital letter are significantly different among SOM classes under treatments of each aggregate size. See Fig. 1 for abbreviations.

Fig. 5. Fraction of C4-derived C in the LF, iPOM, and mSOM of aggregate size classes under four treatments after nine years of warming and clipping (a), and warming-induced increases in the fraction of C4-derived C in LF, iPOM, and mSOM of SOM classes in the warmed soils (b). See Fig.
4 for the explanation of the symbols.

Table 1. The physical soil fractions obtained from a tallgrass prairie after nine years of warming and clipping. Soil fractions analyzed for C and N concentrations, δ13C, and δ15N are denoted by bold numbers.

δ13C and δ15N values of plant litter
The δ13C values of C4 litter had a mean value of −12.74‰, while C3 litter had a mean value of −27.62‰. Warming significantly increased the δ13C values of mixed litter on average by 11.3% (Table 2). Warming significantly enhanced the δ15N values on average by 12.2% for C4 litter and by 26.1% for C3 litter, whereas warming only slightly increased the δ15N values on average by 8.4% for mixed litter (Table 2).

Table 2. The values of δ13C and δ15N of litter of C3 species, C4 species, and mixed litter under the warming treatment. Data are expressed as mean ± SE, n = 6. Different letters indicate statistical significance at P < 0.05 between the two treatments.

Table 3. Soil organic C and N content, δ13C and δ15N signatures, fraction of C4-derived C (fB), and the C:N ratio of whole soils in a tallgrass prairie (0-20 cm depth) after nine years of warming and clipping. Data are expressed as mean ± SE, n = 6. Different letters indicate statistical significance at P < 0.05 among the four treatments. Abbreviations: UC, unclipped control; CC, clipped control; UW, unclipped warming; CW, clipped warming.

Table 4. Soil organic C and N content of soil fractions under four treatments after nine years of warming and clipping. Data are expressed as mean ± SE, n = 6. Different letters indicate statistical significance at P < 0.05 among the four treatments. See Table 3 for abbreviations.

Table 5. δ13C and δ15N values of soil fractions under four treatments after nine years of warming and clipping. Data are expressed as mean ± SE, n = 6. Different letters indicate statistical significance at P < 0.05 among the four treatments. See Table 3 for abbreviations.

Table 6.
New C input (fnew) and decay rate (k, yr−1) of old C of soil fractions (0-20 cm) in warmed soils after nine years of experimental warming.
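The quantities in Table 6 follow from the standard two-end-member δ13C mixing model and first-order decay; the calculation is not spelled out in this excerpt, so the sketch below is illustrative. End-member values are taken from Table 2 of this study; the example bulk-soil δ13C of −20.5‰ is a hypothetical input, not a reported value.

```python
import math

# End-member delta13C values from Table 2 of this study (per mil):
DELTA_C3 = -27.62  # C3 litter
DELTA_C4 = -12.74  # C4 litter

def c4_fraction(delta_sample, delta_c3=DELTA_C3, delta_c4=DELTA_C4):
    """Two-end-member mixing model: fraction of sample C derived from C4
    vegetation, f = (delta_sample - delta_C3) / (delta_C4 - delta_C3)."""
    return (delta_sample - delta_c3) / (delta_c4 - delta_c3)

def decay_rate_old_c(f_new, years=9.0):
    """First-order decay rate of old C: if new (C4-derived) C has replaced
    a fraction f_new of the pool, the remaining old-C fraction is
    exp(-k * t), so k = -ln(1 - f_new) / t."""
    return -math.log(1.0 - f_new) / years

# Hypothetical bulk-soil delta13C of -20.5 per mil (illustration only):
f = c4_fraction(-20.5)   # ~0.48 of soil C would be C4-derived
k = decay_rate_old_c(f)  # ~0.072 yr^-1 over the 9-yr experiment
```

Faster turnover of LF than iPOM, as reported above for the warmed soils, would show up here as a larger k for the LF fraction at the same sampling date.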
Operant novelty seeking predicts cue-induced reinstatement following cocaine but not water reinforcement in male rats

Rationale
An important facet of cocaine addiction is a high propensity to relapse, with increasing research investigating factors that predispose individuals toward uncontrolled drug use and relapse. A personality trait linked to drug addiction is high sensation seeking, i.e., a preference for novel sensations/experiences. In an animal model of sensation seeking, operant novelty seeking predicts the acquisition of drug self-administration.

Objective
The primary goal of this research was to evaluate the hypothesis that sensitivity to the reinforcing effects of novel sensory stimuli predicts more intensive aspects of drug-taking behaviors, such as relapse.

Methods
Rats were first tested for operant novelty seeking, during which responses resulted in complex visual/auditory stimuli. Next, rats were trained to respond to water/cocaine reinforcers signaled by a cue light. Finally, rats were exposed to extinction in the absence of discrete cues and subsequently tested in a single session of cue-induced reinstatement, during which active responses resulted in cues previously paired with water/cocaine delivery.

Results
The present study showed that operant responses to produce novel sensory stimuli positively correlate with responding for cocaine during self-administration and during discrete cue-induced reinstatement, but show no association with performance during extinction. A different pattern of associations was observed for a natural reward, in this case, water reinforcement. Here, the degree of novelty seeking also correlated with responding for water reinforcement and with extinction responding; however, operant novelty seeking did not correlate with responding to water cues during testing of cue-induced reinstatement. Taken together, the incongruence of relationships indicates an underlying difference between natural and drug reinforcers.
Conclusion
In summary, we found a reinforcer-dependent relationship between operant novelty seeking (i.e., sensation seeking) and responsivity during extinction and to discrete cues signaling availability of cocaine (i.e., craving), demonstrating the validity of the operant novelty seeking model for investigating drug seeking and relapse.

Supplementary Information
The online version contains supplementary material available at 10.1007/s00213-023-06441-4.

Introduction
The US Department of Health and Human Services estimated that in 2019, approximately 5.2 million people used cocaine and 1.3 million met the criteria for a cocaine use disorder (SAMHSA n.d.). For those that achieve abstinence from cocaine, an estimated 25% (Simpson et al. 1999) to 45% (Carroll et al. 1993; Hall et al. 1991; McKay et al. 1999) relapse to using the drug within 1 year. Abstinence of longer than 1 year is reportedly achieved by as few as 5-15% of individuals (McCabe et al. 2018; White 2017). Together, these statistics indicate strong individual differences in vulnerability to drug addiction, and a subpopulation of individuals with a heightened risk of relapse (George and Koob 2017; Homberg et al. 2014; Le Moal 2009; Saunders et al. 2013; Sedighim et al. 2021). High individual variability, coupled with the lack of available and effective treatment for cocaine use disorder, highlights the need to identify factors, such as genetic and environmental variables, that can identify individuals with increased susceptibility to drug addiction and can potentially lead to therapies or strategies to attenuate the high rates of relapse and promote long-term recovery.
Interactions between genetic and environmental variables are often reflected in personality traits. One such trait of relevance to substance use disorders is the seeking of varied and novel experiences, referred to as sensation seeking (SS) (Zuckerman 1994). Sensation seeking correlates with aspects of drug addiction, including self-administration (SA) and acquisition (Cherpitel 1999; White 2017; Zuckerman 2007), such that individuals who score high on SS scales are more likely to abuse drugs than individuals with lower scores (Andrucci et al. 1989). There is strong evidence that sensation seeking plays a role in various stages of the substance abuse continuum (Aklin et al. 2009; Castellani and Rugle 1995; Freund et al. 2021; Kahler et al. 2009; Kelly et al. 2006; McCabe et al. 2021; McKay et al. 1999; Nieva et al. 2011; Patkar et al. 2003, 2004; Regan et al. 2020; Tatari et al. 2021; Teichman et al. 1989), although see Ersche et al. (2010) and Murray et al. (2003). However, few studies have examined the relationship between sensation seeking and drug relapse (McKay et al. 1995, 1999), although Ramos-Grille et al. (2015) showed that sensation seeking predicts relapse to pathological gambling.

Sensation seeking has been modeled in animals by assessing locomotor reactivity to an inescapable novel environment or free-choice preference for novel objects/places (Bardo et al. 1996; Cain et al. 2005, 2006, 2008; Piazza et al. 1989; Robinet et al. 1998). Another model involves an operant procedure in which novel sensory stimuli are presented (Dickson and Mittleman 2020, 2021; Dickson et al. 2019; Gancarz et al. 2013). In this model of operant novelty seeking (ONS), the primary reinforcement is a response-contingent presentation of stimuli of moderate intensity that is not related to a biological need (Gancarz et al. 2013; Kish 1955, 1966; Lloyd et al.
2012a, b; Stewart 1960). We previously found that the procedures for ONS and locomotor responsivity to novelty have similar behavioral features: both procedures (i) produce within- and between-session habituation (Gancarz et al. 2011), (ii) result in increased responding after systemic administration of psychomotor stimulants (Gancarz et al. 2012b, c), and (iii) predict acquisition of methamphetamine SA (Gancarz et al. 2011). We also found that performance during operant sensory reinforcement positively associates with the number of cocaine infusions an animal self-administers (Gancarz et al. 2013).

While these earlier studies provided important insight regarding the relationship between sensory reinforcement and acquisition/early exposure to drug use, here, we sought to determine whether operant novelty seeking predicts additional aspects of the addicted-like phenotype, such as relapse. Importantly, investigation of other animal models of response to novelty (e.g., locomotor, preference) has also focused on drug-taking behaviors, with little research exploring how such measures would map onto addicted-like phenotypes (e.g., relapse). The goal of the present study was to determine whether ONS also predicts relapse of cocaine taking. We hypothesized that responding to the reinforcing effects of novel stimuli would predict cue-induced responding for cocaine but not for a natural reinforcer.
Subjects
Ninety male Sprague-Dawley laboratory rats (12 weeks old at the beginning of testing, 250-275 g, Charles River Laboratories) were tested. Rats were allowed to acclimate to the colony room for 2 days upon arrival. Behavioral testing occurred 5-7 days/week during the light phase of the 12-h light/12-h dark cycle. The rats were assigned to either cocaine (n = 42; experiment 1) or natural (water, n = 48; experiment 2) reinforcer groups. All rats had ad libitum access to food. Rats assigned to the water experiment were restricted to 30 min of water access a day for each phase of the experiment, which was provided in home cages following test sessions. Rats assigned to cocaine groups had ad libitum access to water. Rats were singly housed for the duration of the SA phase of the experiments to protect the catheter/harness assembly. This study was conducted in accordance with guidelines set forth by the Institutional Animal Care and Use Committee (IACUC; Protocol 15-04) at California State University, Bakersfield.

Habituation/ONS
The experimental chambers had stainless-steel grid floors, aluminum front and back walls, Plexiglas sides, and a Plexiglas top. The chambers were 24 cm × 30 cm × 25 cm (inside dimensions). The right-side wall served as the intelligence panel, with two 3-cm-diameter circular snout poke holes (~2.5 cm above the floor grid) on either side of three LED lights (red, yellow, green; ~10 cm from the floor grid). A Sonalert was mounted on the back wall, approximately 17 cm from the floor and 3 cm from the right side of the chamber. Snout/head entries into the snout poke holes were monitored and recorded with infrared detectors located 0.5 cm behind the front panel. All chambers were housed in sound-attenuating boxes. The entire apparatus was computer controlled through a MED Associates interface with MED-PC (version 5). The temporal resolution of the system was 0.01 s.
SA
Distinct operant chambers were used for SA and reinstatement, with several important differences from those used for ONS. The two snout poke holes were water-dispensing receptacles (4 cm × 4 cm), and two additional stimulus lights were located above the receptacles. The stimulus lights were used as cues to indicate an infusion/reward was earned. The front right and left stimulus lights were equipped with a 28-V light bulb (SPC Technology, model #1819) with a white lens cap (Dialight, model #081-0135-303). A third stimulus light was in the middle of the back wall of the chamber, approximately 22 cm from the grid floor. Illumination of the back house light indicated the availability of the cocaine or water reinforcer. The house light was equipped with a 28-V light bulb (SPC Technology, model #1864) covered by a clear lens cap (Dialight).

Drugs
(-)-Cocaine hydrochloride (gifted by the NIDA drug supply program) solutions were prepared weekly by dissolving the drug in sterile 0.9% saline at a concentration of 4.5 mg/mL. Pump duration was adjusted according to body weight every other day in order to deliver the correct dose of the drug (1.0 mg/kg/infusion, i.v.).

Experiment 1: association between ONS and cocaine-seeking behavior
To examine the relationship between ONS and reinstatement to cocaine SA, a correlational research design was used with three phases (see Fig. 1, top). In phase 1, rats were habituated to dark experimental chambers and then tested for responding to novel operant sensory reinforcers. In phase 2, rats were tested for cocaine SA. In phase 3, rats were tested for extinction in the absence of discrete cues using a 1-day within-session extinction and then tested for cue-induced reinstatement the following day.
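The body-weight adjustment of pump duration described above is simple dose arithmetic from the stated dose (1.0 mg/kg/infusion) and stock concentration (4.5 mg/mL); a minimal sketch, where the 300 g rat is an illustrative weight rather than a value reported in this study:

```python
def infusion_volume_ml(body_weight_kg, dose_mg_per_kg=1.0, conc_mg_per_ml=4.5):
    """Volume (mL) the pump must deliver per infusion so that a rat of the
    given weight receives the target i.v. dose from the stock solution."""
    return body_weight_kg * dose_mg_per_kg / conc_mg_per_ml

# A 300 g (0.3 kg) rat needs 0.3 mg of cocaine per infusion,
# i.e. 0.3 / 4.5 ~= 0.067 mL of the 4.5 mg/mL stock.
vol = infusion_volume_ml(0.300)
```

Weighing every other day and recomputing this volume (realized as pump-on time at a fixed flow rate) keeps the delivered dose at 1.0 mg/kg as the animals grow.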
Habituation
Rats were introduced to the experimental chambers prior to testing to reduce potential confounds from exploratory or stress-related responding (i.e., to familiarize the rats with the chamber and decrease its novelty). Each day during the 5-day habituation phase, rats were placed in dark chambers for 30 min with no programmed stimuli: snout poke responses were recorded but did not produce any programmed consequences.

ONS
Following habituation, animals were placed in the experimental chambers for six 30-min test sessions to measure ONS as previously described, with a few modifications (Gancarz et al. 2012c). During this phase, the chambers were dark except when response-contingent visual and auditory stimuli were presented. Reinforced active responses produced a complex visual/auditory stimulus, which consisted of the illumination of three LED lights (yellow, red, and green; illumination duration varied between each presentation from 2 to 8 s) and the presentation of an auditory stimulus (activation of the Sonalert for between 1 and 3 s) on a variable interval 1-min schedule of reinforcement (on average, once every minute an active response was reinforced with some combination of the available stimuli). The stimuli were presented in random sequences and combinations to prolong their novelty. Responses to the inactive alternative had no programmed consequences.

Cocaine SA
Following ONS, rats were implanted with jugular vein catheters, maintained, and tested for patency as previously described (Gancarz et al.
2012b), and only rats with patent catheters were used in the study. After recovery, the rats were connected to the operant chamber as previously described and tested for cocaine SA (2-h sessions for 10 days). During SA sessions, responses to the active snout poke were reinforced with an infusion of 1.0 mg/kg cocaine according to a Fixed Ratio 3 (FR3) schedule of reinforcement. Reinforced responses resulted in the delivery of a drug infusion and the illumination of the cue light above the snout poke receptacle, indicating drug dispersal. Following drug delivery, the house light was extinguished and the chamber was dark for a 30-s time-out period. Re-illumination of the house light following the time-out period then indicated the availability of the cocaine reinforcer.

Extinction responding in the absence of the discrete cue
Twenty-four hours following the last SA session, subjects were tested for extinction responding in the absence of the discrete cue and underwent a one-day multiple within-session extinction as previously described (Gancarz et al. 2015; Gancarz-Kausch et al. 2014). Briefly, rats were returned to the operant chambers where cocaine testing occurred. In this phase, chambers were dark, and responses were recorded but resulted in no programmed consequences. Stimuli previously paired with cocaine infusions were no longer available, and the catheter/harness assembly was connected to a saline solution (no infusions occurred). The rats were allowed to respond for a minimum of twelve 30-min sessions to ensure the behavior was thoroughly extinguished (sessions were separated by 5-min intervals, during which rats were removed from the operant chamber and returned to the home cage until the next session began) and a maximum of 16 sessions, or until their responses fell to < 20 snout pokes per session. The first session was used as a test of extinction responding in the absence of the discrete cue (Crombag et al. 2008).
Discrete cue-induced reinstatement
The test for discrete cue-induced reinstatement occurred the following day, during which subjects were returned to the operant chambers for 1 h. The catheter/harness assemblies were connected to a saline solution, but no drug was delivered (Gancarz et al. 2015). Snout poke responses produced cues previously paired with drug delivery.

Experiment 2: association between ONS and water-seeking behavior
The same correlational research design was used to examine the relationship between ONS and reinstatement to a natural reinforcer, in this case, water (Fig. 1, bottom). In phase 1, rats were first habituated to dark experimental chambers and then tested for responding to novel operant sensory stimuli. In phase 2, rats were tested for water SA. In phase 3, rats underwent extinction responding in the absence of the discrete cue using a 1-day within-session extinction and were tested for cue-induced reinstatement the following day.

Habituation and ONS
Parameters were identical to those described in experiment 1.

Water SA
Water reinforcement was conducted for 10 days in 2-h sessions. Responses to the active snout poke were reinforced with a delivery of 30 μL of water directly into the water receptacle. All other parameters were identical to those used for cocaine SA. After the session each day, the rats were returned to the colony and given 30 min of access to water.

Extinction and cue-induced reinstatement
Parameters were identical to those described in experiment 1.
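Both experiments arm sensory reinforcers on the variable interval 1-min schedule described above, under which the first active response after a randomly timed arming point is reinforced. A minimal simulation sketch; drawing exponential inter-arming intervals is one common way to realize a VI schedule, and this is illustrative only (the original experiments were controlled by MED-PC):

```python
import random

def vi_arming_times(n_reinforcers, mean_interval_s=60.0, seed=1):
    """Times (s) at which a VI schedule arms: once armed, the next active
    response is reinforced. Intervals are drawn from an exponential
    distribution whose mean equals the schedule value (VI 60 s here)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n_reinforcers):
        t += rng.expovariate(1.0 / mean_interval_s)
        times.append(t)
    return times

# Roughly a 30-min session's worth of VI 1-min arming points:
times = vi_arming_times(30)
```

Because arming times are unpredictable, a VI schedule sustains steady responding, and the number of reinforcers earned depends mainly on session time rather than response rate, which is why active-response counts (not reinforcer counts) carry the individual-difference signal here.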
Data analysis
The primary measures of ONS were the number of responses to the active and inactive snout pokes (active responses and inactive responses, respectively) and the number of sensory rewards earned. The dependent variables used to measure responding for water and cocaine reinforcers were the same as those used for ONS (numbers of active and inactive responses and numbers of cocaine infusions and water presentations). The dependent variable used to measure extinction responding in the absence of discrete cues and cue-induced reinstatement was the number of active responses. Percent change in responding was used to normalize performance and determine relative changes in performance between days of testing, calculated as follows: [(day A − day B)/day B] × 100. Using this calculation, positive values indicate an increase in responding, whereas negative values indicate a decrease in responding from day B to day A. Analyses of variance (ANOVAs) were used where appropriate to compare experimental groups.

Pearson's correlation tests were used to determine the relationships between (i) ONS and cocaine or water SA, (ii) ONS and extinction responding in the absence of discrete cues, and (iii) ONS and cue-induced reinstatement (active responses for cocaine/water). A two-tailed alpha of < 0.05 indicated a significant association. All dependent variables for cocaine SA and water reinforcement were collapsed across all 10 days of testing for the correlation analysis. Dependent variables for ONS were averaged across the last 3 days (days 4-6) of ONS testing.
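The two statistics described above are straightforward to compute; a minimal sketch with made-up illustrative data (not values from this study):

```python
import math

def percent_change(day_a, day_b):
    """[(day A - day B) / day B] * 100: positive values mean responding
    increased from day B to day A."""
    return (day_a - day_b) / day_b * 100.0

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative per-rat data: ONS active responses vs SA active responses.
ons = [12, 30, 25, 41, 18]
sa = [55, 80, 70, 120, 60]
r = pearson_r(ons, sa)  # strong positive correlation (~0.96)
```

In practice a statistics package would also supply the two-tailed p-value tested against the alpha of 0.05; the hand-rolled `pearson_r` here just makes the computation explicit.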
An "extreme groups" approach was also used, in which animals that scored in the highest and lowest quartiles with respect to ONS were compared. Animals with ONS scores in the top quartile were classified as high-ONS (experiment 1: n = 10; experiment 2: n = 12), and animals with scores in the lowest quartile were classified as low-ONS (experiment 1: n = 10; experiment 2: n = 12). Data from animals in the middle half were not used for this analysis. The extreme groups approach has been used in research investigating individual differences in sensation/novelty seeking to predict the acquisition of SA (Piazza et al. 1989, 1990, 2000). An independent samples t test was used to compare the dependent measures defined above. An alpha of < 0.05 was used to identify a significant difference.

Sensation seeking has previously been shown to be stable across time (Lynne-Landsman et al. 2011; Pedersen 1991; Zuckerman 2007; Zuckerman et al. 1980); thus, we assessed the percent change in active responses throughout the ONS testing phase. There was a dramatic change in responding between day 5 of habituation and day 1 of ONS, demonstrating that the contingent sensory stimuli were reinforcing (Fig. 2c). The percent change in responding became progressively smaller between days 1 and 2 of ONS and between days 2 and 3, but was maintained thereafter, reflecting the stability of the behavior.

Cocaine SA and cue-induced reinstatement
A one-way ANOVA with SA sessions as the within-subject factor showed a significant main effect of session on the number of active responses during cocaine SA testing [F(9, 369) = 6.450, P < 0.02] (Fig.
3a). A follow-up paired-sample t test revealed that rats responded significantly more on day 10 than on day 1 of cocaine SA [t(41) = −5.078, P < 0.01], indicating the acquisition of responding for cocaine. No significant main effect of session on inactive responding was detected. The numbers of cocaine infusions earned were also stable across days of testing (day 1: mean = 17.50 ± 0.69 [cumulative dose of 17.50 ± 0.69 mg/kg]; day 10: mean = 18.45 ± 0.57 [cumulative dose of 18.45 ± 0.57 mg/kg]) (Supplemental Fig. 1a).

A one-way ANOVA with extinction sessions as the within-subject factor showed a significant main effect of session on the total number of responses [F(7, 287) = 23.932, P < 0.001]. A follow-up paired-sample t test revealed that the rats emitted significantly more active responses during extinction session 1 than during extinction session 8 [t(41) = 7.948, P < 0.01], indicating extinction had occurred (Fig. 3b). A within-subject t test revealed that rats emitted significantly more active responses than inactive responses during the testing of cue-induced reinstatement [t(41) = 8.594, P < 0.01] (Fig. 3c). These data indicate that the cues previously paired with cocaine elicited drug-seeking behavior. There was a positive percent change from active responses emitted on day 10 of cocaine SA to active responding during extinction and cue-induced reinstatement (Fig. 3d). These data indicate that rats made more active responses during tests of extinction session 1 and cue-induced reinstatement compared to day 10 of cocaine SA.

ONS is associated with cocaine SA, extinction, and cue-induced reinstatement
We sought to determine the association between ONS and cocaine self-administration. Therefore, we averaged performance on operant novelty seeking on days 4-6 of testing, when behavior was stable (Fig.
2c). The average number of active responses each rat emitted on days 4-6 during ONS positively correlated with the average number of active responses (averaged across days 1-10) emitted during cocaine SA (r = 0.34, P < 0.05) (Fig. 4a). However, active ONS responses did not predict the number of cocaine infusions (see Table 1) because infusions were limited to 20 per session (active responses were still recorded during drug time-out periods). The number of inactive responses during ONS did not correlate with the number of sensory reinforcers, cocaine infusions earned, or active responses (Table 1). ONS also did not correlate with performance during extinction session 1 (Fig. 4b). Interestingly, however, the number of active responses during ONS also positively correlated with the number of active responses during cue-induced reinstatement (r = 0.51, P < 0.05) (Fig. 4c). There were no correlations between inactive responses during ONS and the numbers of ONS sensory reinforcers earned or with any of the other dependent measures of cue-induced reinstatement.

Animals were sorted based on the number of active responses emitted during ONS, and the highest (high-ONS) and lowest (low-ONS) quartiles were compared on cocaine self-administration (average of days 1-10) using an independent samples t test. High-ONS rats emitted significantly more active [t(18) = 2.384, P < 0.05] but not inactive responses during cocaine SA than rats classified as low-ONS (Fig. 4d). There was no significant difference between high-ONS and low-ONS rats in the number of active and inactive responses emitted during a test of extinction (session 1; Fig. 4e). High-ONS rats also made significantly more active [t(18) = 3.508, P < 0.05] but not inactive responses than their low-ONS counterparts during the test of cue-induced reinstatement (Fig. 4f). Taken together, the data suggest that ONS predicts responding for cocaine during SA and cue-induced reinstatement, but not extinction responding in the absence of discrete cues.

Fig. 4. Correlations between ONS and cocaine SA. Relationships between measures of ONS (average across days 4-6) and: a active responses to receive an infusion of 1.0 mg/kg cocaine (averaged across days 1-10), b responses emitted during extinction (session 1), and c responses to cues previously signaling delivery of cocaine. Performance of the top quartile (high-ONS, n = 10) and the bottom quartile (low-ONS, n = 10) rats for novelty seeking on d number of responses emitted during cocaine SA (average of days 1-10), e extinction responding (session 1), and f cue-induced reinstatement. *P < 0.05; **P < 0.01.

Habituation and ONS
A one-way within-subject ANOVA revealed a significant main effect of session on the total number of responses during the habituation phase [F(4, 188) = 9.197, P < 0.001], indicating that the rats in this cohort also became familiar with the chamber context (Fig. 5a).

A one-way within-subject ANOVA revealed a significant main effect of session on the number of active responses made during ONS testing [F(5, 235) = 2.530, P < 0.05] (Fig. 5a). Furthermore, a one-way ANOVA with sessions as the within-subject factor revealed a significant main effect of time on the number of sensory reinforcers earned [F(5, 235) = 8.154, P < 0.001] (Fig.
5b). These data from a separate cohort of rats demonstrate the reliability of the reinforcing effects of the response-contingent sensory stimuli used in our experiments.

Similar to that observed with the cohort of rats in experiment 1, the percent change in active responding was highest between the last day of habituation and the first day of ONS and then decreased thereafter, indicating a stabilization in performance (Fig. 5c).

Water reinforcement and cue-induced reinstatement
A one-way ANOVA with sessions as the within-subject factor showed a significant main effect of session on active responses for a water reinforcer [F(9, 423) = 14.208, P < 0.01] (Fig. 6a). A follow-up paired-samples t test revealed that rats made significantly more responses on day 10 than on day 1 of water reinforcement [t(47) = −10.352, P < 0.01], indicating the acquisition of responding for water. Consistent with this, a one-way ANOVA with sessions as the within-subject factor showed a significant main effect of session on the number of water rewards earned [F(9, 423) = 8.717, P < 0.01] (Supplemental Fig. 1b). A follow-up paired-sample t test revealed that rats earned significantly more water on day 10 than on day 1 of water reinforcement [t(47) = −10.352, P < 0.01]. A one-way ANOVA with water reinforcement sessions as the within-subject factor also showed a significant main effect of session on inactive responses [F(9, 423) = 26.849, P < 0.01] (Fig.
6a), with follow-up paired-sample t tests indicating rats made significantly more inactive responses on day 1 than on day 10 [t(47) = 7.959, P < 0.01]. Taken together, these data indicate that acquisition of water reinforcement occurred across days of testing in thirsty rats. A one-way ANOVA with extinction sessions as the within-subject factor showed a significant main effect of session on total active responses for water reinforcement [F(7, 329) = 103.625, P < 0.001]. A follow-up paired-sample t test revealed significantly more active responses during session 1 than during session 8 [t(47) = 13.825, P < 0.01], indicating the reinforced behavior had been extinguished (Fig. 6b). A within-subjects t test revealed that rats made significantly more active responses than inactive responses during the test of cue-induced reinstatement [t(47) = 4.64, P < 0.01] (Fig. 6c), indicating that the cues previously paired with water elicited seeking behavior.

Contrary to the pattern of results for cocaine in experiment 1, there was a negative percent change for both extinction (session 1) and cue-induced reinstatement from performance on the last day of water reinforcement. These data indicate that rats made fewer active responses during extinction (session 1) and cue-induced reinstatement compared to day 10 of water reinforcement (Fig. 6d).

ONS is associated with water reinforcement but not cue-induced reinstatement of water seeking
Table 2 shows a correlational matrix for the relationships between ONS testing (average from days 4-6 for each animal) and water reinforcement testing (averages from days 1-10 for each animal). The number of ONS active responses positively correlated with the number of active responses for water reinforcement (r = 0.35, P < 0.01) (Fig.
7a). The number of ONS active responses also predicted the number of water reinforcers earned (r = 0.30, P < 0.01). Interestingly, there was a significant positive association between ONS active responses and responses emitted during extinction (session 1; Fig. 7b). However, there was no correlation between active responses for ONS and those for cue-induced reinstatement (Fig. 7c).

Using the extreme groups approach, independent-sample t tests showed that rats classified as high-ONS made significantly more active [t(22) = −2.195, P < 0.05] but not inactive responses than their low-ONS counterparts during water reinforcement testing (Fig. 7d). High-ONS rats emitted significantly more inactive responses [t(22) = −2.756, P < 0.05] and showed a trend toward more active responses (P = 0.1) than low-ONS rats during a test of extinction responding in the absence of discrete cues (Fig. 7e). However, there were no significant differences between the high- and low-ONS rats during a test of cue-induced reinstatement (Fig. 7f). Taken together, these data suggest ONS predicts responding for water reinforcers and extinction responding in the absence of discrete cues, but not cue-induced reinstatement of the behavior.
Discussion
The primary aim of this study was to examine the association between an animal model of sensation seeking and responsiveness during extinction responding in the presence and absence of discrete cues paired with cocaine and water reinforcers in adult male rats. We found a significant positive association between ONS performance and active responding in both cocaine self-administration and water reinforcement. Our data also demonstrate that an animal's level of responding to novel sensory stimuli predicts the propensity to seek discrete cues paired with cocaine use, but not extinction responding in the absence of discrete cues. These findings suggest that novelty or sensation seeking, as a measurable personality trait, may serve to predict individual responses that are associated with substance use disorder and relapse.

Our results replicate and extend those from studies showing that sensory reinforcement predicts responding to drugs and natural rewards (Gancarz et al. 2011, 2013; Mitchell et al. 2005). For example, we previously showed that responding to a visual stimulus reinforcer correlates with the number of cocaine infusions animals will self-administer (Gancarz et al. 2013). The present study utilized different experimental parameters, such as a higher dose of cocaine (1.0 mg/kg/inf), which has been shown to be more likely to support acquisition (Carroll and Lac 1997; Gancarz et al. 2013; Kosten et al.
1997), and an increased response requirement per reward, illustrating a robust relationship between novelty seeking and cocaine SA. Taken together, these experiments show that ONS is a reliable animal model of sensation seeking and is associated with drug-taking behavior. The present study is among the first to demonstrate the association between sensation seeking and the propensity to relapse to drug use. Whereas the readiness to self-administer cocaine is thought to reflect the sensitivity of an animal to the rewarding properties of the drug, the reinstatement of responding when a drug-paired cue is presented is thought to reflect the vulnerability to relapse. Our data suggest that an animal that finds novel stimuli highly rewarding is more likely to resume behaviors that were previously linked with a drug reward. However, the reinstatement of seeking behaviors did not extend to a natural reinforcer, in this case, water delivery to water-restricted rats. Although visual stimuli were used both as a reinforcer (in the ONS task) and as a cue associated with reinforcers (cocaine and water), the level of responding for the visual stimuli did not correlate with the reinstatement of responding for both reinforcers (i.e., only with the reinstatement of drug seeking). Thus, the rats did not generalize the reinforcement and cue properties of the visual stimuli.
A different pattern emerged when testing extinction responding (when rats were re-exposed to the chambers in the absence of cue or reinforcer availability). Here, a positive association was observed between ONS and responding in rats previously trained to respond for water reinforcers, but no relationship was observed between ONS and performance in rats previously trained to respond for cocaine. Taken together, these data illustrate a different pattern of relationships between ONS and extinction and discrete-cue seeking for water and cocaine reinforcers. Rats responding for cocaine showed a relationship to discrete cues but no relationship to extinction, whereas rats responding for water showed a relationship to extinction rather than discrete cues. From an ethological perspective, water itself is a discrete cue and encompasses a distinct motor and oral consummatory response (e.g., licking, swallowing). In contrast, cocaine as a stimulus is more perceptually diffuse and relies on interoceptive perception. This inherent difference in the basic properties of the reinforcer, the nature of the stimuli, and the consummatory behavior may result in differences in the associations made in guiding behavior. Overall, ONS similarly predicts the sensitivity to reinforcers but differentially predicts the seeking of those reinforcers.

Fig. 7 Correlations between ONS and water SA. Relationships between measures of ONS (average across days 4-6) and a active responses for delivery of 30 μL water (averaged across days 1-10), b and c responses to cues previously signaling delivery of water. Performance of top-quartile (high-ONS, n = 10) and bottom-quartile (low-ONS, n = 10) rats for novelty seeking on d number of responses emitted during water SA (average of days 1-10), e extinction responding (session 1), and f cue-induced reinstatement of water seeking. *P < 0.05; **P < 0.01
The seeking of natural reinforcers (e.g., food or water) is driven by an internal state, whereas the seeking of drugs is driven by many factors. We can begin to differentiate these by the responses for a given reinforcer. In the present study, the level of responding was markedly higher for water than for cocaine, in part because of the experimenter-imposed parameters (e.g., a limit on the number of available cocaine infusions to prevent overdose, economy type). These constraints aside, variances in operant responding to drug and non-drug rewards have been fully characterized (Gancarz et al. 2012b; Kearns 2019; Kearns et al. 2011). For example, Christensen et al. (2009) showed less elasticity for food than cocaine in a demand analysis for both Fischer and Lewis rats. Differences in reinstatement performance for drug and natural rewards have also been reported. Ahmed and Koob (1997) showed rats exhibited stress-induced reinstatement for cocaine but not food. Furthermore, while incubation of cue-induced reinstatement has been demonstrated for both sucrose and cocaine, greater responding was reported for the drug (Li and Frantz 2010; Lu et al. 2004). Given the distinct behavioral patterns between these two reinforcer types, it would be of interest to determine whether sensory reinforcement would predict the temporal profile of relapse-like behavior, such as the progressive enhancement of relapse behaviors across time (i.e., incubation; Grimm et al. 2001).
Another difference in responding between reinforcer types was revealed when considering the percent change between the last day of reinforcement testing and relapse. Whereas responding for cocaine was higher during reinstatement than on the last day of SA, responding for water was much lower than it was during SA. However, the patterns of extinction of responding for both reinforcers were similar. Our data suggest that although the same cues can drive behaviors to obtain natural and drug rewards, the strength of these cues differs depending on the type of reward. Indeed, the importance of cues in cocaine SA has been well established (Schenk and Partridge 2001). The stronger cue-induced reinstatement response for cocaine may reflect incentive salience, in which cues gain motivational incentive value (Flagel et al. 2009).

There is evidence that sensation seeking is more strongly associated with craving for cocaine and other stimulants (Gancarz et al. 2011, 2012a, c, 2013; Lloyd et al. 2012a, b; Walsh et al. 2010) than for cigarette smoking (Billieux et al. 2007; Doran et al. 2009; Reuter and Netter 2001) and alcohol consumption (Flaudias et al. 2019). Sensation seeking has also been implicated in relapse to cocaine use (McKay et al. 1995, 1999), cigarette smoking (Kahler et al. 2009), and alcohol (Kravitz et al. 1999; Marra et al. 1998; Meszaros et al. 1999; Müller et al. 2008) in humans. Recently, Xu et al. (2022) showed that high levels of sensation seeking reduced the motivation to remain abstinent from drug use. However, more definitive studies investigating the interaction between sensation seeking and craving/relapse are necessary.
As demonstrated here, ONS presents a valid model to explore the associations between sensation seeking and craving for reinforcers in males. The focus of this study was to examine the role of novelty seeking as a predictor of, or correlate to, drug-seeking behavior following exposure to cocaine. As novelty seeking has been shown to be a sexually dimorphic trait (Cross et al. 2013; Obst et al. 2021; Zuckerman et al. 1978), we chose to analyze this relationship in male rats. Ongoing studies will determine whether this procedure predicts these same relationships in females. Furthermore, research has extensively explored animal models of sensation seeking and the response to cocaine, d-amphetamine, and methamphetamine (Gancarz et al. 2011, 2012a, c, 2013; Lloyd et al. 2012a, b); however, it is unclear whether these relationships are unique to psychomotor stimulants or can be generalized to other drugs of abuse. Indeed, there is some evidence of divergent relationships between sensation seeking and other drug classes. For example, Galizio and Stein (1984) showed sensation seeking was more strongly associated with polydrug use than with opiate and depressant use alone in humans. Future research is needed to explore the relationship between operant novelty seeking and other drugs of abuse.

Other studies have shown a relationship between sensation seeking and enhanced responses to other non-drug rewarding stimuli in humans. For example, high sensation seekers show altered food preferences (Logue and Smith 1986; Terasaki and Imada 1988), elevated levels of gambling (Coventry and Brown 1993), and operant responses to monetary reinforcers (Bornovalova et al.
2009). To the best of our knowledge, no studies have investigated the role of sensation seeking in craving for natural rewards, such as food. The lack of research in this area highlights the key importance of the predictive validity of animal models. The present study is significant because it shows the predictive ability of ONS as a model of sensation seeking and that it could be used to examine the vulnerability to a variety of drug and non-drug reinforcers.

Conclusion

Our results indicate a reinforcer-dependent relationship between ONS and responsivity to cues signaling the availability of cocaine and water reinforcers. ONS correlated with cue-induced reinstatement to cocaine, an animal model of relapse; however, ONS was not associated with cue-induced reinstatement for water reinforcers. These data support our hypothesis that sensitivity to the reinforcing effects of novel visual and auditory stimuli predicts relapse as well as other drug-taking behaviors. Thus, ONS is an animal model of sensation seeking that can be used to predict drug craving and relapse to cocaine. Future studies are needed to determine if sensation seeking similarly predicts the use of other drugs in males and females.

Fig. 1 Illustration of experimental timeline for experiments 1 and 2

Fig. 2 Performance on ONS. a Performance during the habituation phase is shown in the shaded region, and performance in the ONS phase is shown on the right. b Numbers of sensory reinforcer presentations earned across sessions 6-11 of ONS. c Percent change in performance between days of testing. Data are averages (± SEM); n = 42

Fig. 3 Performance during cocaine SA.
a Total numbers of inactive responses and active responses for 1.0 mg/kg/infusion cocaine. b Total number of inactive responses and responses to the previously active snout poke during extinction. c Numbers of active and inactive responses during the 1-h test of cue-induced reinstatement. d Percent change of active responses from cocaine SA to extinction (Ext) and cue-induced reinstatement (Reins). Data are averages (± SEM). *P < 0.05

Fig. 5 Performance on ONS. a Performance during the habituation phase is shown in the shaded region, and performance in the ONS phase is shown on the right. b Numbers of sensory reinforcer presentations earned across sessions 6-11 of ONS. c Percent change in performance between days of testing. Data are averages (± SEM); n = 48

Fig. 6 Performance during water reinforcement. a Total numbers of inactive responses and active responses for delivery of 30 μL water. b Total number of inactive responses and responses to the previously active snout poke during extinction. c Numbers of active and inactive responses during the 1-h test of cue-induced reinstatement. d Percent change of active responses from water SA to extinction (Ext) and cue-induced reinstatement (Reins). Data are averages (± SEM). *P < 0.05
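The "percent change" panels above (panel c of Figs. 2 and 5, panel d of Figs. 3 and 6) use the ordinary percent-change-from-baseline computation; a trivial sketch with invented response counts (not the study's data):

```python
def percent_change(baseline, test_value):
    """Percent change of a measure relative to a baseline measure."""
    return (test_value - baseline) / baseline * 100

# Hypothetical counts: 80 active responses on the last SA day,
# 20 during the first extinction session
print(round(percent_change(80, 20), 1))  # → -75.0
```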
Association between protoporphyrin IX and sarcopenia: a cross-sectional study

Background: According to the European Working Group on Sarcopenia in Older People (EWGSOP), the diagnosis of sarcopenia primarily focuses on low muscle strength, with the detection of low muscle quality and quantity as a confirming index. Many studies have identified mitochondrial dysfunction as one of the multifactorial etiologies of sarcopenia. Yet no study has investigated the role of a biosynthetic pathway intermediate, found in mitochondria, in the development of sarcopenia. This study aimed to examine the association between protoporphyrin IX (PPIX) and components of sarcopenia. Method: The present study enrolled 1172 participants without anemia, surveyed from 1999 to 2002, from the National Health and Nutrition Examination Survey (NHANES) database. We employed multivariable logistic regression models to examine the relationship between PPIX and sarcopenia. Covariate adjustments were designated to each of three models for further analysis of the relationship. Results: In the unadjusted model, PPIX was significantly associated with sarcopenia (OR = 3.910, 95% CI = 2.375, 6.439, P value < 0.001). The significance persisted after covariate adjustments, as observed in the fully adjusted model (OR = 2.537, 95% CI = 1.419, 4.537, P value = 0.002). Conclusions: The findings of this study suggested a statistically significant association between PPIX and sarcopenia. Our study disclosed the potential of PPIX as a valuable indicator of sarcopenia.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12877-021-02331-6.

Introduction

Sarcopenia is a generalized skeletal muscle disorder that involves the progressive loss of muscle mass and function [1], leading to substantial functional decline, development of chronic diseases, disability, and frailty [2][3][4][5].
The European Working Group on Sarcopenia in Older People (EWGSOP) developed a set of clinical guidelines and consensus diagnostic criteria, which emphasized the presence of low muscle strength and low muscle mass as the definition of sarcopenia, with physical performance being only a severity gradient [6]. Sarcopenia that is largely related to age, in the absence of other identifiable causes, is defined as primary sarcopenia; sarcopenia is considered 'secondary' when factors other than age are identified [6]. The mechanisms associated with the development and progression of sarcopenia include endocrine dysfunction [7], neurodegenerative diseases [8], malnutrition [9], cachexia [10], aging [10], disuse [11], and cellular dysfunctions [12]. Studies have pointed out the significance of mitochondrial dysfunction in the pathogenesis of sarcopenia [12][13][14][15]. However, the pathway by which mitochondrial dysfunction leads to sarcopenia is still unclear, and more research and clinical studies on mitochondrial-dysfunction-induced sarcopenia are required. Several researchers have noted that decreased hemoglobin is linked to sarcopenia [16][17][18]. While anemia is evidently associated with sarcopenia, investigation of sarcopenia development in non-anemic individuals is warranted. We suspect that the heme biosynthetic pathway, which takes place in mitochondria and is responsible for generating the constituent of hemoproteins, is related to the development of sarcopenia. Protoporphyrin IX (PPIX) is the final intermediate in the heme biosynthetic pathway. PPIX is a heterocyclic organic compound that exhibits biological functions by chelating transition metals to form metalloporphyrins [19]. A mouse study highlighted that accumulating PPIX mediated alterations in mitochondrial membrane potentials and the formation of fragmented mitochondria in hepatocytes [20].
PPIX is commonly converted to heme by chelating iron under the catalysis of an enzyme known as ferrochelatase [19]. Nevertheless, the heme biosynthetic pathway can be interrupted under low iron concentrations and lead to toxicity. This results in the substitution of other transition metals, particularly zinc, for iron during chelation, causing an increase in zinc PPIX levels [21]. Taken together, PPIX plays a critical role as an intermediate of the heme biosynthetic pathway, for which sustained homeostasis is required. In this study, we hypothesize that alterations in serum PPIX concentration may play a role in the pathogenesis of sarcopenia. As no previous studies had investigated the association between PPIX and sarcopenia, we conducted a cross-sectional study on a nationally representative sample of the United States adult population to examine this issue.

Ethics statement

The present study obtained data from the National Health and Nutrition Examination Survey (NHANES) [22,23]. NHANES is a cross-sectional survey that collects demographic, clinical, behavioral, dietary, social, and laboratory data from noninstitutionalized individuals in the United States. The survey was performed by the National Center for Health Statistics (NCHS) and approved by the NCHS Institutional Review Board (IRB). All informed consents were obtained from eligible subjects before initiating data collection and NHANES health examinations.

Study sample

The study sample was collected from the NHANES database from 1999 to 2002. The interviews were conducted by trained examiners and collected demographic information such as gender, age, race/ethnicity, education level, smoking history, and individual comorbidities. Physical examination was conducted at the Mobile Examination Center (MEC). According to the flowchart of the study in Fig.
1, we included a total of 2430 participants who had household records and excluded 1258 participants with missing data, inadequate interview responses, or anemic conditions. The cutoff points used to determine anemia were hemoglobin concentrations less than 13 g/dL for men and 12 g/dL for women [24]. The population surveyed in the current study was between 60 and 85 years of age. The demographic sample included 1172 eligible participants (n = 626 men; n = 546 women), as shown in Table 1. The survey was conducted in the United States between 1999 and 2002 by the NCHS of the Centers for Disease Control and Prevention using a stratified, multistage, clustered probability sample design. Further details on the survey can be accessed on the NHANES website [25,26].

EWGSOP guideline

The definition and diagnostic criteria of sarcopenia in the present research were based on the revised EWGSOP guideline [6]. The diagnosis of sarcopenia was based on the documentation of low muscle strength and low muscle mass. Measurements of muscle strength included handgrip strength and knee flexion/extension techniques. The cutoff points for knee extension strength for men and women were, respectively, 56.1 and 38.1 kg for the one-repetition maximum, as reported by Abdalla et al. [27]; this is equivalent to 550.15 and 373.63 Newtons for men and women. The use of knee extension strength in the evaluation of muscle strength is endorsed by the EWGSOP guideline. The EWGSOP guideline advises utilizing imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and dual-energy X-ray absorptiometry (DXA) to estimate muscle mass or lean body mass. Low muscle mass was defined as a skeletal muscle index (SMI), i.e., appendicular skeletal muscle mass (ASM) per height squared, less than or equal to 7.0 kg/m² for men and 5.5 kg/m² for women [28]. Gait speed is a reliable and widely used test to evaluate physical performance.
Low physical function or low gait speed was defined as walking speed ≤ 0.8 m/s.

Measurement: muscle quality index

The standard protocols used to measure the muscle strength and gait speed of participants are available in the NHANES documentation [25,26]. Participants with medical histories of major surgery within 6 weeks, knee surgery, severe back pain, brain aneurysm, or stroke were excluded from the exam. Participants' muscle strength was measured using a Kin Com MP dynamometer manufactured by the Chattanooga Group, Inc., Chattanooga, TN. The peak torques of participants' knee extensor strength of the quadriceps were measured at one speed (60 degrees/second) and documented in Newtons. Participants' habitual gait speed was measured on a 20-ft-long test track set up in the MEC. All participants were required to walk at their habitual pace and were timed using a hand-held stopwatch. All examinations were conducted by certified health technicians. Muscle mass was estimated using DXA. DXA scans provide measurements of various body components, including bone and soft tissue. Female participants with a positive urine pregnancy test or self-reported pregnancy were not permitted to undergo the DXA scan. Participants weighing over 300 pounds or taller than six feet, five inches were excluded from the DXA examination. The DXA scan was conducted using the Hologic QDR-4500A fan-beam densitometer (Hologic, Inc., Bedford, Massachusetts) and Hologic software version 8.26:a3* by certified radiology technologists. The DXA scan was performed with the participants lying supine on the tabletop with feet in a neutral position and hands flat by their sides. Participants undergoing the examination were scanned with an x-ray of extremely low radiation exposure, less than 10 uSv. The scan acquired two low-dose x-ray images at different average energies to distinguish bone from soft tissue and to determine the percentage of fat in soft tissue where bone was not present.
The DXA scan measured the ASM, which comprises the lean soft tissue of the arms and legs without bone mineral content. Specific details on the measurement can be obtained from the NHANES documentation [25,26].

Measurement: PPIX

The measurement of PPIX involved extracting PPIX from EDTA whole blood into a mixture of ethyl acetate-acetic acid and then back-extracting it into diluted hydrochloric acid. In aqueous form, the PPIX was measured at an excitation wavelength of 404 nm and an emission wavelength of 658 nm. The PPIX concentration was expressed in μg/dL of packed red blood cells (RBC) after correction for the hematocrit of the individual specimen. Specific details of the measurement can be obtained from the NHANES documentation [25,26].

Measurement: covariates

Demographic information concerning variables such as race/ethnicity [29], sex [29][30][31], age, and medical history, including arthritis, congestive heart failure (CHF), coronary heart disease (CHD), angina, heart attack, stroke, and emphysema, was acquired from self-reported data. Participants' cigarette use was recorded by asking the question, "Have you ever smoked cigarettes?" [32]. Participants' education level was recorded by asking the question, "What is the highest grade or level of school you have completed or the highest degree you have received?" [31]. Laboratory data such as body mass index (BMI) [31,33] and hemoglobin [34] were analyzed in our study. Specific details on the measurements can be obtained from the NHANES documentation [25,26].

Statistical analysis

All statistical analyses in this study were performed using SPSS (Version 18.0 for Windows, SPSS, Inc., Chicago, IL, USA). Continuous data are indicated by their median and interquartile range (IQR). Categorical data are recorded by their frequency counts and percentages. The chi-square test and analysis of variance (ANOVA) were applied to categorical variables and continuous variables, respectively.
The distributions of PPIX levels were normalized using natural logarithm transformation. Odds ratios (OR) were calculated using logistic regression to evaluate the strength of the relationship between PPIX and sarcopenia. To test the robustness of the primary result, we conducted a sensitivity analysis by dividing participants into quartiles of PPIX concentrations, as shown in Additional file 1: Table S1. The present study further examined the strength of the relationship between PPIX levels and strength/SMI/physical performance by investigating Pearson correlation coefficients. We interpreted the association between PPIX and components of sarcopenia using multivariable linear regression models designed with progressive degrees of adjustment. Covariate adjustments were investigated by the following extended-model linear regressions: Model 1 was unadjusted; Model 2 was adjusted for race/ethnicity, sex, and age; Model 3 = Model 2 + BMI, comorbidity, smoking, education level, and hemoglobin. A receiver operating characteristic (ROC) curve analysis was performed to determine the optimal PPIX cutoff point.

Study sample characteristics

The present study included a total of 1172 participants. The characteristics of eligible study participants are shown in Table 1. Based on the EWGSOP definition of sarcopenia, we divided participants into a non-sarcopenic group (n = 998) and a sarcopenic group (n = 174) [6]. According to our analysis, 190 participants had low muscle strength, 174 participants had low SMI, and 230 participants had low gait speed. All continuous variables observed showed statistical significance (P value < 0.001). The median ages for the non-sarcopenic group and the sarcopenic group were 64.00 (IQR = 14.00) and 70.00 (IQR = 18.00), respectively. It was also observed that the median PPIX concentration was higher in the sarcopenic group, at 50.00 (IQR = 22.00), compared to 47.00 (IQR = 18.00) in the non-sarcopenic group.
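The ORs above come from SPSS logistic regressions. As a simplified illustration of what an unadjusted odds ratio expresses, the cross-product ratio of a dichotomized 2×2 exposure-outcome table can be computed directly; the counts below are invented for illustration, not NHANES data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table (cross-product ratio)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts: high-PPIX vs. low-PPIX, sarcopenic vs. non-sarcopenic
print(round(odds_ratio(60, 240, 30, 360), 2))  # → 3.0
```

A fitted logistic regression with a single binary predictor reproduces this same unadjusted OR; the adjusted models additionally condition on the covariates listed above.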
Subjects of older age, with higher BMI, increased PPIX concentrations, and more comorbidities tended to display signs of declining muscle quality, including muscle strength, SMI, and gait speed.

Association between PPIX and sarcopenia

Figure 2 highlights the optimal cutoff point for PPIX in the development of sarcopenia. The area under the ROC curve (AUROC) was 0.624 (95% CI = 0.580, 0.668). An optimal cutoff value of 48.50 μg/dL RBC was determined using the maximal Youden's index, with a sensitivity of 0.663 and a specificity of 0.561.

Sensitivity analysis

Sensitivity analysis using logistic regression to compare the analysis of the sarcopenic and non-sarcopenic groups with that of the quartiles of PPIX showed similar results for the association between PPIX and sarcopenia. Additional file 1: Table S2 shows a logistic regression analysis of the association between quartiles of PPIX and components of sarcopenia. In the unadjusted model, we observed that the higher quartiles of PPIX concentration were significantly associated with low muscle strength and low gait speed in comparison to the lowest quartile. In the fully adjusted model, the odds that participants in the highest quartile of PPIX concentration would have low muscle strength and low SMI were 1.934 (95% CI = 1.174, 3.185) and 2.258 (95% CI = 1.172, 4.349) times greater than for participants in the lowest quartile. This result was similar to that derived in Table 2, in which PPIX was significantly associated with sarcopenia before and after covariate adjustments.

Correlation between PPIX and muscle quality index

In Table 3, the correlation coefficients between PPIX and muscle strength, SMI, and gait speed were −0.182, −0.123, and −0.131, respectively.

Table 4 Association between PPIX and components of sarcopenia

Under low serum iron concentrations, PPIX was positively associated with low muscle strength and low muscle mass, as shown in Additional file 1: Table S3.
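Youden's index, used to select the 48.50 μg/dL RBC cutoff, is J = sensitivity + specificity − 1, maximized over candidate cutoffs along the ROC curve. A small sketch using the reported operating point (sensitivity 0.663, specificity 0.561) plus two invented neighboring points:

```python
def youden_j(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

def best_cutoff(points):
    """Pick the (cutoff, sensitivity, specificity) tuple maximizing Youden's J."""
    return max(points, key=lambda p: youden_j(p[1], p[2]))

# (cutoff in ug/dL RBC, sensitivity, specificity); only the middle point
# is from the study, the neighbors are invented for illustration
roc_points = [(40.0, 0.80, 0.35), (48.5, 0.663, 0.561), (60.0, 0.40, 0.75)]
cutoff, sens, spec = best_cutoff(roc_points)
print(cutoff, round(youden_j(sens, spec), 3))  # → 48.5 0.224
```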
The pattern of association with low muscle strength and low muscle mass persisted after covariate adjustments for race/ethnicity, gender, age, BMI, comorbidity, smoking, education level, and hemoglobin concentration. In study participants with normal serum iron concentrations, PPIX maintained a significant positive correlation with low muscle strength. However, no strong relationship was found between PPIX and low muscle mass in participants with normal serum iron concentrations. Our results indicated that the associations in the normal serum iron group could not fulfill the EWGSOP criteria. In contrast, the associations in the low iron group were consistent with the sarcopenia definition of the EWGSOP guideline.

Discussion

The present study closely inspected the relationship between PPIX concentration and the components of sarcopenia in a nationally representative sample of the United States adult population. Our results indicated strong relationships between PPIX concentrations and sarcopenia. The same correlations were also observed in participants with low serum iron concentrations. To the best of our knowledge, the present study is the first to investigate the role of PPIX in predicting sarcopenia in a group of 1172 males and females. The development of sarcopenia is multicausal and remains controversial owing to its complex pathogenesis. Current studies have discovered multiple factors that show strong associations with the development of sarcopenia [35][36][37]. The reduction in mitochondrial biogenesis and its respective cellular changes are recognized as major contributing factors in the progression of sarcopenia [14,38]. A decline in the function of mitochondria has been found to be associated with pathologic conditions such as Type 2 diabetes and Alzheimer's disease, both of which are common in the geriatric population [39].
Mitochondrial biogenesis is responsible for the clearance of damaged mitochondria and prompts the generation of new mitochondria [40]. The process of mitochondrial biogenesis is prone to the destructive effects of toxins and environmental exposures [41]. Heme oxygenase-1 (HO-1), on the other hand, plays a protective role in the elimination of dysfunctional mitochondria and the stimulation of mitochondrial biogenesis [42,43]. Recent studies by Takanche et al. stressed the upregulating effects HO-1 has on antioxidant genes and mitochondrial biogenesis [40,44]. A rat study by Chen et al. explained the role of HO-1 in the expression of miR-27b, which increased mitochondrial biogenesis and suppressed systemic inflammation [45]. We speculate that the HO-1 inhibitor zinc PPIX, which forms in the absence of an iron reservoir, may explain the positive correlation between PPIX and components of sarcopenia. While increased expression of miR-27b mediated cytoprotective modulation in mitochondria, the coadministration of zinc PPIX reversed the regenerating activities of HO-1 [46]. Yu et al. also observed similar inhibitory effects of zinc PPIX on the mitochondrial biogenesis and anti-inflammation induced by HO-1 and tetrahydroxystilbene glucoside [47]. Taken together, zinc PPIX may entail sarcopenia via inhibition of mitochondrial biogenesis. These discoveries are consistent with our finding of a positive association between PPIX concentration and components of sarcopenia. Our results also indicated an equivalent association under low serum iron concentrations, supporting the formation of zinc PPIX and its subsequent inhibitory activities. In addition to its augmentation of mitochondrial biogenesis, HO-1 has also been found to exhibit anti-inflammatory and neuroprotective properties in rats [48,49]. Yu et al.
discovered in their experiments that HO-1 inhibited muscle fiber atrophy by suppressing proinflammatory cytokines and downregulating specific enzyme activation [48]. Although they did not observe effects of zinc PPIX on HO-1 expression, they did recognize suppressed HO-1 activation caused by zinc PPIX. Khan et al., in their animal models, demonstrated the therapeutic effects of increased HO-1 in neurodegenerative disorders such as Alzheimer disease and Parkinson disease [49]. Their study also identified tin PPIX, also an HO-1 inhibitor, as a reversal agent of the cytoprotective effects of HO-1. According to accumulating studies, the buildup of PPIX causes degenerative mitochondrial function, inflammation, and reduced cytoprotection. The accumulation of PPIX and its inhibitory nature implicate an intrinsic role in the development of sarcopenia.

The present study has several strengths. While the screening involved in sarcopenia has been clearly identified in the EWGSOP guideline, a predictive biomarker for sarcopenia is lacking. Through our research, we offer PPIX as a highly promising biomarker for the prognostic prediction of sarcopenia. Moreover, the study included a large, racially mixed sample of older adults to examine the associations. A defined biomarker such as the PPIX proposed in the current study may assist clinical settings in diagnosing sarcopenia through blood analysis, which is not only effective but also economical in comparison to physical performance tests. Further research should aim toward establishing analytical validation of the proposed biomarker as well as its qualification.

Several limitations of the present study should be noted. First, this is a cross-sectional study; thus, a causal relationship between PPIX and sarcopenia cannot be established. Long-term observations are required to validate the relationship.
Second, information such as participants' medical histories and education levels was based on self-reported responses to questionnaires. Recall bias and other unknown errors may have distorted the results. Third, the design of the current study may have been subject to selection bias, which could lead to an inaccurate representation of the relationship. Lastly, participants' cognitive status [50], drug consumption [51], and nutritional status [52] are important factors in sarcopenia that were not taken into account; thus, the effects of confounding bias cannot be overlooked. Conclusion In conclusion, our findings suggest strong correlations between PPIX and sarcopenia in both sexes. We also found significant associations under the state of low serum iron concentrations in non-anemic participants. Although further studies are required to validate the underlying mechanisms, the concentration of PPIX may be a valuable indicator for assessing the risk of sarcopenia in clinical settings. Additional file 1 Table S1. Characteristics of study participants. Table S2. Association between quartiles of protoporphyrin IX and components of sarcopenia. Table S3. Association between components of sarcopenia and protoporphyrin IX
Comparison of optical performance among three dental operating microscopes: A pilot study Introduction: Two important aspects of the dental operating microscope (DOM) that factor into its overall effectiveness are resolution and depth of field. Therefore, the objective of this study was to evaluate and compare the resolution and depth of field of DOMs from three well-known manufacturers using standardized test targets. Materials and Methods: A resolution test, using the 1951 USAF Hi-Resolution Target (Edmund Optics, Barrington, NJ), and a depth of field test, using the Depth of Field Target 5-15 (Edmund Optics, Barrington, NJ), were performed by two calibrated observers. Three DOM systems, the Seiler IQ (Seiler Instrument Inc., St. Louis, USA), the Global G-Series 6-step (Global Surgical Corp., St. Louis, USA), and the Zeiss Extaro 300 (Carl Zeiss Meditec AG, Oberkochen, Germany), were used to compare resolution and depth of field. Results: The Zeiss Extaro 300 showed the highest maximum resolution and maximum DOF (64 lp/mm and 17 mm, respectively). The Seiler IQ showed the lowest maximum resolution and maximum DOF (35.9 lp/mm and 11 mm, respectively). Conclusion: Within the limitations of this study, the Zeiss Extaro 300 was superior in terms of resolution and depth of field as compared to the other two DOMs. INTRODUCTION The dental operating microscope (DOM) enhances the visualization of the tooth and its anatomic substructures during endodontic procedures. Numerous clinicians have noted its ability to improve treatment. [1-9] Resolution and depth of field are two important aspects that factor into the effectiveness of the DOM. Resolution is defined as the measure of the sharpness of an image or the ability to distinguish the individual parts of an object. [10] The unaided human eye can generally distinguish between two parallel lines that are 0.2 mm apart. If they are closer together, they appear as a single line.
[2] Consequently, resolution (or limiting resolution) is most often measured and reported as the highest number of line pairs per millimeter (lp/mm) at which the lines are still observed as a distinct pair. Depth of field, on the other hand, refers to the visible zone of clarity and focus. This is a pure function of the optics and is measured without the use of any focusing adjustments. A dental operating microscope should have a high resolution with a large depth of field to provide the best optical performance. Seiler Instrument Inc., Global Surgical Corp., and Carl Zeiss Meditec AG are well-known manufacturers of DOMs. A convenience sample of one microscope from each of those manufacturers was utilized for this study. The goal of this study was to measure differences in resolution and depth of field among the DOMs from these manufacturers. To the best of our knowledge, no prior study has quantitatively investigated and compared various DOMs in terms of optical performance. MATERIALS AND METHODS A resolution test, using the 1951 USAF Hi-Resolution Target (Edmund Optics, Barrington, NJ), and a depth of field test, using the Depth of Field Target 5-15 (Edmund Optics, Barrington, NJ), were performed by two observers [Figure 1]. Three DOM systems, the Seiler IQ (Seiler, St. Louis, USA), the Global G-Series 6-step (Global, St. Louis, USA), and the Zeiss Extaro 300 (Zeiss, Oberkochen, Germany), were used to compare resolution and depth of field. All the tested DOMs were equipped with a light-emitting diode light source. The two observers were calibrated in two 45-min pilot sessions by evaluating and reviewing their findings. In addition, short calibration sessions were conducted before testing each DOM to maintain interobserver agreement throughout the study. Each examiner conducted an initial setting of the DOM by adjusting the interpupillary distance and the eyepieces. Then, an individual microscope adjustment (parfocaling) was conducted as previously described.
[11] During the study, each examiner performed the tests independently. If there was a disagreement between their findings, the tests were repeated and reviewed together until an agreement was reached. Depth of field test The depth of field test was performed using the Edmund Optics Depth of Field Target at normal incidence. To ensure that the maximum depth of field was recorded, it was important to make sure that the top of the target marked one end of the depth of field. The top of the target was focused into view, the working distance was then decreased just enough to bring it out of focus, and it was finally adjusted slightly so that it barely came back into focus. To record the depth of field, the 5 lp/mm element was used, and the distance at which focus was lost was recorded. Resolution test The resolution test involved distinguishing two sets of three bars, one set horizontal and the other vertical. Being able to distinguish higher group numbers meant that the microscope was able to distinguish smaller objects, which were further divided into elements within each group. However, all objects at groups and elements lower than that of the smallest object also had to be distinguishable. The group and element at which the smallest object was distinguishable were noted. RESULTS The magnification, resolution, and DOF for the three DOMs are presented in Table 1. The end magnification ranged from ×3.4 to ×21.25 for the Zeiss Extaro 300, ×2.1 to ×19.2 for the Global G series, and ×4.2 to ×11 for the Seiler IQ. The Zeiss Extaro 300 showed the highest maximum resolution and maximum DOF (64 lp/mm and 17 mm, respectively). The Seiler IQ showed the lowest maximum resolution and maximum DOF (35.9 lp/mm and 11 mm, respectively). DISCUSSION Three DOMs from different manufacturers were tested and compared with regard to resolution and depth of field.
High-precision test targets were used to quantitatively measure the imaging characteristics of the three DOMs. The Zeiss Extaro 300 and Global G series were found to have a wide range of magnification, with the latter having the smallest minimum end magnification (×2.1). The minimum magnifications, due to their wide field of view, could provide comfort and confidence in performing certain procedures that require the visualization of a large area of the mouth. For example, in intraoral injections, rubber dam placement, and suturing, the minimum magnification proved to be more suitable than the higher magnifications. [2,12] Therefore, having the smallest minimum end magnification may be an advantage of the Global G series compared to the other two DOMs. In endodontics, high-resolution DOMs are in strong demand for precise visualization and procedures. In this study, the Zeiss Extaro 300 had the highest maximum resolution (64 lp/mm), which was substantially higher than the maximum resolutions of the Global G series and the Seiler IQ (40.3 lp/mm and 35.9 lp/mm, respectively). However, further studies are needed to determine whether these differences in maximum resolution have any clinical significance and whether they could result in improved performance. A study by Bowers et al. showed a significant improvement in fine motor skills with the use of a DOM compared with loupes and unaided vision. [6] Therefore, it is reasonable to assume that a higher resolution may result in better clinical performance. Furthermore, it is important to note that resolution and magnification are independent quantities. In other words, higher magnification does not necessarily result in higher resolution, which is in accordance with the findings of this study. For example, for the Global G series, no change in resolution was noted when the end magnification was increased from ×12.8 to ×19.2.
Furthermore, the Zeiss Extaro 300 at ×13.6 showed a higher resolution than the Global G series at ×19.2. The depth of field of a DOM is of utmost importance in endodontics. This feature allows the clinician to clearly visualize different levels of the internal root canal anatomy. [2] In other words, with a DOM with a large depth of field, different levels of the root canal system can be kept in focus without moving the patient or the DOM, or adjusting the fine focus. In this study, the Zeiss Extaro 300 showed the highest maximum DOF (17 mm), followed by the Global G series (13 mm). As expected, for each DOM, the DOF decreased as the magnification increased. To the best of our knowledge, this is the first study of its kind to quantitatively evaluate and compare various DOMs in terms of optical performance. However, this study had a number of limitations. First, the study could have benefited from including more observers for measuring the imaging characteristics. However, using the test targets was a subjective method of evaluation, and considerable time was required to calibrate the two observers adequately. Adding more observers could potentially have resulted in less accurate and reliable findings. Second, the limited number of DOMs evaluated could also be considered a limitation, and the results of this study should not be generalized to other products of these manufacturers. However, it is reasonable to assume that, within each manufacturer, different models may have very similar lens quality and optical performance. Third, the observer-dependent measurements used in this study introduced subjectivity into the results. The risk of subjectivity was minimized by calibrating the observers and using standardized targets. Nevertheless, it is still important to note that these methods are subjective, and the resolution and depth of field scores presented in this study are not absolute findings.
Therefore, using these test targets, the observations may vary among individuals. Nevertheless, the results are meaningful provided the observers are calibrated with each other and the findings are used for comparison purposes only. CONCLUSION Within the limitations of this study, among the three tested DOMs, the Zeiss Extaro 300 was superior in terms of resolution and depth of field. Further studies are needed to evaluate whether these differences in optical performance could result in improved clinical performance. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
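The lp/mm values reported above follow directly from the USAF 1951 target's standard group/element convention, resolution = 2^(group + (element − 1)/6) lp/mm. The sketch below is ours, not part of the study; the group/element pairs shown are simply the ones mathematically consistent with the reported maxima.

```python
# Resolution of a USAF 1951 target pattern, in line pairs per millimeter,
# computed from its group and element numbers (standard target convention).
def usaf_resolution_lp_mm(group: int, element: int) -> float:
    return 2 ** (group + (element - 1) / 6)

# Group/element pairs consistent with the maxima reported in this study:
# group 6, element 1 -> 64.0 lp/mm (Zeiss Extaro 300)
# group 5, element 3 -> ~40.3 lp/mm (Global G series)
# group 5, element 2 -> ~35.9 lp/mm (Seiler IQ)
for name, g, e in [("Zeiss Extaro 300", 6, 1),
                   ("Global G series", 5, 3),
                   ("Seiler IQ", 5, 2)]:
    print(f"{name}: {usaf_resolution_lp_mm(g, e):.1f} lp/mm")
```

Each finer element within a group increases the spatial frequency by a factor of 2^(1/6), so six elements span one doubling (one group).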
Convergence acceleration for the BLUES function method A detailed comparison is made between four different iterative procedures: Picard, Ishikawa, Mann and Picard-Krasnoselskii, within the framework of the BLUES function method and the variational iteration method. The resulting modified methods are subsequently applied to a nonlinear reaction-diffusion-advection differential equation to generate approximations to the known exact solution. The differences between the BLUES function method and the variational iteration method are illustrated by studying the approximants and the error between the obtained approximants and the exact solution. Introduction Finding solutions of nonlinear differential equations in mathematics, physics and other sciences is a challenge. Because of the inherent nonlinearity, regular solution methods such as superposition or direct integration are often impossible and hence no general solution strategy exists. Therefore, it is appropriate to look for approximate solutions instead of trying to solve the differential equations directly. To this end, (semi-)analytical iterative methods such as the variational iteration method (VIM) [1,2,3], Adomian decomposition method (ADM) [4,5], homotopy perturbation method (HPM) [6] or BLUES function method [7,8,9,10] have been proposed. The standard formulations of these methods all rely on iterating through a sequence of approximants by direct substitution of the nth approximant into some nonlinear operator to generate the (n + 1)th approximant. While this so-called Picard iteration is straightforward and provides a direct way to generate solutions, its convergence is often slow and its accuracy insufficient to provide useful approximations with limited computational resources. In this paper we propose to reformulate the BLUES function method and the VIM to incorporate three other iterative procedures: Mann's procedure, Ishikawa's procedure and a hybrid Picard-Krasnoselskii procedure. 
Subsequently, we compare the approximants generated by each of these methods and study whether the different iterative schemes result in an increased accuracy with respect to Picard iteration for a fixed number of iterations. The setup of this paper is as follows. In section 2 we introduce a nonlinear reaction-diffusion-advection PDE along with the approximate solutions obtained by regular Picard iteration for the BLUES function method and the VIM. Next, in section 3, we set up the aforementioned iterative procedures in a general formulation. In section 4, the different procedures are applied to the reaction-diffusion-advection PDE for both the BLUES function method and the VIM. A comparison is made between the methods as well as between the different iterative procedures, and the results are presented by studying the error between the approximants and the exact solution. Finally, in section 5, we present the conclusions and a future outlook. Nonlinear reaction-diffusion-advection equation We set the stage by considering a nonlinear reaction-diffusion-advection PDE [11,9] that can be used to describe, e.g., the propagation of a temperature or chemical concentration u through the combined mechanisms of diffusion, nonlinear advection and reaction, where N_{x,t} is a nonlinear operator that is defined through its action on u. Subscripts such as u_t, u_xx, ... denote differentiation with respect to the subscript variable. We consider a decaying exponential initial condition. This unbounded initial condition is unphysical but will serve as an ideal testbed for the comparison of the different approximation methods, as in this case a simple exact solution of (1) can be found. It can be shown [9] that for regular Picard iteration the VIM and BLUES function method give the following nth-order approximants to the exact solution (3), respectively. It is now easy to see that, in the limit for n → ∞, both methods converge to the exact solution (3).
Iterative methods and procedures To avoid confusion, we will from now on differentiate between methods and procedures. A method, e.g. the BLUES function method or the VIM, signifies a particular strategy employed to find an approximate solution u_n. A procedure, however, is the specific manner in which one iterates through the approximants generated by an iterative method. Examples are Picard or Mann iteration. Consider a sequence of functions {u_n}_{n=0}^∞ and a (nonlinear) mapping T : C → C, with C a nonempty convex subset of a normed linear space E. The Picard iterative procedure is the following: u_{n+1} = T u_n, (5) with n ∈ N. For the BLUES function method, the action of the operator T on the nth approximant u_n is T u^(n) = u^(0) + G ∗ R_{x,t} u^(n), (6) where G is the BLUES function, which we take to be the Green function of a judiciously chosen linear problem, i.e., L_{x,t} G = δ, where L_{x,t} is a linear operator. For the remainder of this work, we will assume that L_{x,t} is an operator that at most contains first-order time derivatives. The ∗-multiplication indicates the convolution operator, while R_{x,t} ≡ L_{x,t} − N_{x,t} is the residual operator. A detailed setup of the BLUES function method can be found in, e.g., Ref. [9]. Now, for the VIM, the action of the operator T can be written as T u_n = u_n + ∫_0^t λ(s) [L u_n(x, s) + N ũ_n(x, s) − f(x, s)] ds, (7) where L, N and f are respectively the linear operator, which is often the highest time derivative, the nonlinear operator, and an external source. The function λ(s) is a general Lagrange multiplier that can be identified via variational theory, and ũ_n is a restricted variation, i.e., δũ_n = 0. For the VIM, u_0 = u(x, 0). We now briefly discuss other iterative procedures that can be embedded into these two methods. Mann's iterative procedure Consider the following single-step procedure: u_{n+1} = (1 − α_n) u_n + α_n T u_n, (8) with n ∈ N and where (α_n)_{n∈N} is a sequence of positive real numbers. This process is called Mann's iterative procedure [12]. This scheme is sometimes used to accelerate the convergence of the VIM by considering the α_n's as convergence-control parameters [13,14].
These can be optimally determined by minimising the residual square error of the approximants with respect to the α_n in each iteration. This is numerically expensive but can result in needing a smaller number of approximants to achieve the desired accuracy. When α_n = α (constant) for all n ∈ N, the procedure is called Krasnoselskii's iterative procedure, while for α_n = 1, Mann's iteration (8) reduces to Picard iteration (5). Ishikawa iterative procedure Consider the following two-step procedure: v_n = (1 − β_n) u_n + β_n T u_n, u_{n+1} = (1 − α_n) u_n + α_n T v_n, (9) with n ∈ N and where (α_n)_{n∈N} and (β_n)_{n∈N} are sequences of positive real numbers. This process is called the Ishikawa iterative procedure [15,16]. As was the case for Mann's procedure, the parameters α_n and β_n can be used as convergence-control parameters. Obviously, for β_n = 0, Ishikawa's procedure (9) reduces to Mann's procedure (8). Hybrid Picard-Krasnoselskii procedure While the choice for any one of the above procedures may improve convergence for the BLUES function method or the VIM, there exists another option: combining Picard's and Krasnoselskii's methods. This so-called hybrid Picard-Krasnoselskii procedure can be proven to converge faster than Picard's, Mann's, Krasnoselskii's or Ishikawa's procedures [17]. It can be described as follows: u_{n+1} = T((1 − λ) u_n + λ T u_n), (10) with n ∈ N. Again, the parameter λ can be used to control the convergence. For λ = 0, the hybrid procedure reduces to Picard iteration, while for λ = 1, it becomes a two-step Picard iteration, where the (n + 1)th approximant is found by applying the T operator twice. Comparison Let us now consider the procedures that were discussed in section 3. The operator T from equations (6) and (7) takes an explicit form for the nonlinear reaction-diffusion-advection equation (1), for respectively the BLUES function method and the VIM. Note that for the BLUES method we have used superscripts to denote the iterations, while for the VIM we use subscripts. We will use these interchangeably when no confusion can arise.
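Before specialising to equation (1), the four procedures can be illustrated on a toy scalar fixed-point problem. The sketch below is purely illustrative: the mapping T(x) = cos x is a hypothetical stand-in for the integral operators (6) and (7), and the constant parameter values are arbitrary rather than the optimised convergence-control parameters computed below.

```python
import math

# One step of each iterative procedure for a generic mapping T.
def picard(T, x):
    return T(x)                           # Picard (5)

def mann(T, x, a):
    return (1 - a) * x + a * T(x)         # Mann (8); a = 1 recovers Picard

def ishikawa(T, x, a, b):
    v = (1 - b) * x + b * T(x)            # auxiliary step
    return (1 - a) * x + a * T(v)         # Ishikawa (9); b = 0 recovers Mann

def picard_krasnoselskii(T, x, lam):
    return T((1 - lam) * x + lam * T(x))  # hybrid (10); lam = 0 recovers Picard

T = math.cos  # toy contraction; unique fixed point x* = cos(x*) ≈ 0.7390851
steps = {
    "Picard": lambda x: picard(T, x),
    "Mann (a=0.7)": lambda x: mann(T, x, 0.7),
    "Ishikawa (a=0.7, b=0.5)": lambda x: ishikawa(T, x, 0.7, 0.5),
    "Picard-Krasnoselskii (lam=0.5)": lambda x: picard_krasnoselskii(T, x, 0.5),
}
for name, step in steps.items():
    x = 1.0
    for _ in range(200):
        x = step(x)
    print(f"{name}: x = {x:.10f}")
```

All four step functions drive the iterate to the same fixed point; in practice the parameters would be tuned per iteration, as done in the text by minimising the residual error.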
The function G(t) = e^{−at} Θ(t), where Θ(·) is the Heaviside function, is the Green function of the associated linear operator L_{x,t}u ≡ u_t + au. This choice of linear operator fixes the residual operator to be R_{x,t}u ≡ L_{x,t}u − N_{x,t}u = u_xx − u u_x − u². We now discuss the different procedures, first for the VIM and subsequently for the BLUES function method. Variational iteration method For the VIM, the Picard iteration approximants are given by equation (4a), and Mann's iterative procedure can be set up accordingly. Consider now the residual error E(n, x, T), which has to be minimized with respect to α_n for each n. With the (arbitrary) choice T = 1, the α_n with n ∈ {1, 2, 3} can be computed for a = 2. Note that the coordinate x does not have to be fixed for the minimisation of the residual error. By careful inspection of the different iterative procedures (5), (8), (9) and (10), one can see that the exponential containing the x-coordinate can be divided out when minimising (13). For the Ishikawa scheme, the procedure is set up analogously; minimizing the residual error E(n, x, T) with respect to α_n and β_n, with once again T = 1 and n ∈ {1, 2, 3}, gives the convergence-control parameters for a = 2. Note that the α_n converge to unity, indicating that the Ishikawa procedure converges to a hybrid Picard-Ishikawa procedure. Lastly, for the Picard-Krasnoselskii hybrid procedure with n = 3, the parameter λ can be calculated by minimising once again the residual error with respect to λ, resulting in λ = 0.85590 for T = 1 and a = 2. The (logarithmic) error of the different procedures with respect to the exact solution will be used to study the differences between the procedures. The different iteration procedures applied to the VIM for the solution of the nonlinear reaction-diffusion-convection equation (1) are compared in Fig.
1(a), where the errors between the exact solution and the approximants for the different procedures are shown in panel (b). BLUES function method Let us now study the application of the different iterative procedures to the BLUES function method. The Picard iteration approximants are given by equation (4b), and Mann's iterative procedure can be set up as before. We repeat the same strategy and minimize E(n, x, T) with respect to α_n for each n, with the choice T = 1 and the α_n computed for n ∈ {1, 2, 3} and a = 2. The Ishikawa scheme for the BLUES function method is set up analogously, and the convergence-control parameters are again obtained for a = 2. Note that once again the α_n converge to unity, so this Ishikawa procedure for the BLUES function method also converges to a hybrid Picard-Ishikawa procedure. Next, for the Picard-Krasnoselskii hybrid procedure with n = 3, the parameter λ is found as λ = 1.18323 for T = 1 and a = 2. The different iteration procedures applied to the BLUES function method for the solution of the nonlinear reaction-diffusion-convection equation (1) are compared in Fig. 2. It is clear from Fig. 2 that Mann's and Picard's schemes converge to the same iterative procedure; this is also the case for Ishikawa's scheme and the hybrid Picard-Krasnoselskii scheme. For t → ∞, all methods converge and the error decreases exponentially. Comparing the results obtained by the VIM and by the BLUES function method, it becomes clear that BLUES outperforms the VIM for longer times t. Conclusions In this work, we have applied four different iteration procedures (Picard, Mann, Ishikawa and hybrid Picard-Krasnoselskii) to both the variational iteration method and the BLUES function method in the framework of a nonlinear reaction-diffusion-advection partial differential equation.
By numerically determining the optimal convergence-control parameters, we have shown that, for the aforementioned equation, the Ishikawa and hybrid Picard-Krasnoselskii schemes produce significantly more accurate approximations to the exact solution for an equal number of iterations. Moreover, for the BLUES function method, the different schemes all produce globally convergent approximants, for which the difference between the approximants and the exact solution decreases exponentially in time. While implementing other iterative procedures into the VIM significantly improves its convergence and reduces the number of iterations required, the BLUES method seems to outperform the VIM for longer times. We envision implementing these iterative methods into more general formulations of the BLUES function method, such as higher-order PDEs, coupled PDEs, or stochastic differential equations (SDEs).
Adherence to oral anticancer hormonal therapy in breast cancer patients and its relationship with treatment satisfaction: an important insight from a developing country Background Hormone-positive breast cancer is the most common type and represents a burden in all countries. Treatment satisfaction might be a predictor of adherence, as higher satisfaction with medication encourages patients to adhere appropriately to the medication and, consequently, successfully achieve the treatment goals. The present study evaluated the adherence of women with hormone-positive breast cancer to oral hormonal drugs and correlated it with treatment satisfaction and other sociodemographic and clinical factors. Methods A cross-sectional design was applied. This study included two cancer centers. Data were collected from patients through face-to-face interviews and medical record reviews. The Medication Adherence Scale was adapted to assess medication adherence, and the Treatment Satisfaction Questionnaire for Medication (TSQM) version 1.4 was adopted to measure treatment satisfaction. Results The final analysis included 106 patients, with a mean age ± SD of 51.9 ± 1.2 years. Approximately 35% had been hospitalized in the past year. Current hormonal therapy among the patients included letrozole (38.7%), tamoxifen (31.1%), exemestane (17%), and anastrozole (13.2%). The median adherence score was 5.0 [4.8–6.0], and 62.3% adhered fully to their oral hormonal drugs in the past week. The median scores for effectiveness, side effects, convenience, and global satisfaction were 66.67 [61.11–72.22], 75.00 [48.44–100.00], 66.67 [66.67–72.22], and 71.43 [57.14–78.57], respectively. A significantly lower adherence score was identified in patients living in camps (p = 0.020). Patients with comorbidities and those who continued on the same hormonal therapy had higher adherence scores, although the differences were not statistically significant.
Multiple linear regression analysis showed that two domains of treatment satisfaction, side effects (p = 0.013) and global satisfaction (p = 0.018), were predictors of adherence to oral hormonal drugs. Conclusions The current study revealed a significant association between treatment satisfaction and adherence to oral hormonal therapy. We recommend creating a specialized scale to measure adherence, considering the psychosocial factors that affect hormonal anticancer medication adherence. Background In modern times, breast cancer is one of the most common health conditions faced by women worldwide [1]. It represents approximately 24.5% of all types of cancer in females and affects 1 in 8 women during their lifetime [1]. In Palestine, breast cancer is the most common cancer. In 2021, the number of new breast cancer cases in Palestine was 876 [2]. Based on statistics in the United States, hormone receptor-positive/human epidermal growth factor receptor 2 (HER2)-negative breast cancer is the most common subtype and represents approximately 68% of cases [3]. Oral hormonal anticancer drugs (i.e., tamoxifen and aromatase inhibitors) are prescribed for women with estrogen-positive and progesterone-positive breast cancer, with highly satisfactory results after using these treatments [4]. Hormonal therapy is often started as an adjuvant treatment following surgery, radiotherapy, chemotherapy, or a combination of these therapies and given for 5 to 10 years [5]. It can also be given as a neo-adjuvant [5]. In a systematic review, the adherence rate to adjuvant hormonal therapy was approximately 66% [6]. Furthermore, it was found that more than half of breast cancer patients had nonadherent behavior to their treatment [7]. This percentage is close to that in Arab nations, where Saudi Arabia reported a 69% adherence rate to antihormonal therapy [8]. Depression, comorbidities, side effects, and both older and younger age were associated with lower adherence.
However, therapy with aromatase inhibitors, receipt of chemotherapy, and prior medication use were associated with improved adherence [6]. In addition, it was found that adherence to hormone therapy increases disease-free survival [9]. Nevertheless, adherence to endocrine treatment decreased with years of therapy [10]. A previous study showed that satisfaction with oral anticancer drugs substantially affects adherence [11]. Breast cancer is a notable burden in all countries, and its incidence is high [12]. If a cancer patient follows the treatment plan and adheres to the medications directed by their physician, the survival rate improves and the likelihood of recurrence decreases [13]. Patient satisfaction with treatment is key to encouraging adherence to medications and successfully achieving short- and long-term results [11]. Moreover, assessing treatment satisfaction helps healthcare professionals know the exact level of their patients' satisfaction with a specific drug and subsequently modify the treatment plan or find other solutions. This study will be the foundation for other projects that aim to evaluate adherence and treatment satisfaction in different cancer populations or with other therapies. There are limited reports on endocrine therapy adherence and treatment satisfaction in Palestine. Therefore, this study aims to determine the adherence rate and to study factors associated with adherence. Study design and sampling technique We conducted the current multicenter cross-sectional study to assess breast cancer patients' adherence to and satisfaction with oral hormonal medications using two main sets of data: medical records (both paper and electronic) and interviews with women with breast cancer. This research was conducted using convenience sampling between November 2021 and January 2022. All patients who came to the hospital for treatment or follow-up care and met the inclusion criteria were asked to complete the questionnaire.
Study setting Our research was carried out in the oncology centers of Al-Watani Hospital and An-Najah National University Hospital in Nablus, Palestine. These hospitals are the largest and most important referral sites for cancer patients from all locations in Palestine. Sample size According to medical records, the number of women with breast cancer visiting the two hospitals during the study period was approximately 175. Therefore, the recommended sample size was 121 patients, calculated using an online calculator (Raosoft) with a response distribution of 50%, a 5% margin of error, and a 95% confidence interval. Exclusion and inclusion criteria This study included women over 18 years of age who had survived breast cancer and had been prescribed and initiated on oral hormonal drugs (neoadjuvant or adjuvant) at least four weeks prior to enrollment. Patients with comorbid delirium, dementia, bipolar disorder, substance dependence disorders, or untreated psychotic disorders, hospitalized patients, and those who were unable to participate or refused were excluded because of their inability to consent. We also excluded patients with missing findings in their medical records. Data collection instrument and procedure Two clinical pharmacists collected the data through face-to-face interviews with patients. Before beginning the data analysis, regular checks were performed for data integrity, proper sequences of information, and an evaluation of missing or incomplete variables. Questionnaires were completed by explaining the questions to the patients, filling in the information on paper using specific scales to assess cancer patients' adherence and satisfaction, and recording the patients' sociodemographic information (Table 1). In addition, medical records were used to record information related to disease and treatment characteristics. Medication adherence The Medication Adherence Rating Scale (MARS) is used to assess adherence to medication [14].
It is a 10-item self-report instrument with yes/no responses, with a summation yielding a maximum of 10 points. MARS scores can range from 0 (low likelihood of adherence) to 10 (high likelihood of adherence). It has three groups of items: "medication adherence behavior" (questions 1-4), "attitude toward taking medication" (questions 5-8), and "negative side effects and attitudes toward oral hormonal medication" (questions 9 and 10). However, three theoretically irrelevant items (questions 5, 7, and 9) were removed due to poor item-total correlation. These excluded items are "I take my medication when I am sick", "My thoughts are clearer on medication", and "I feel weird, like a 'zombie', on medication". Therefore, the maximum score became 7 (high likelihood of adherence). Treatment satisfaction The Treatment Satisfaction Questionnaire for Medication (TSQM) version 1.4 assesses patients' perceptions of treatment [15-18]. It evaluates effectiveness (items 1-3), side effects (items 4-8), convenience (items 9-11), and global satisfaction (items 12-14). The TSQM is a validated scale ranging from 0 to 100, with a higher score denoting better satisfaction [19]. The TSQM uses 14 questions to evaluate patient satisfaction. Questions 1 through 3 ask about the patient's satisfaction with the drug's efficacy in preventing and treating the disease, its capacity to relieve symptoms, and the length of time it takes to begin working. Questions 4-8 ask about the drug's adverse effects, the degree to which the patient finds them bothersome, how they affect physical and emotional well-being, and how much of an impact they have on satisfaction with the medication. The ninth and tenth questions concern the ease or difficulty of using the medication and scheduling a time for it to be used, while the eleventh question concerns how convenient it is to take the medication as directed.
The patient's confidence that the medication is helpful, that its benefits outweigh its drawbacks, and the degree of general satisfaction with the medication are evaluated in questions 12 through 14. The Arabic version of the TSQM 1.4 is a valid and reliable instrument for assessing patients' perceptions of treatment [19]. It has been used in several publications in Palestine [16,17,20-25]. In addition, IQVIA™ has given An-Najah National University permission to use this questionnaire in research. Pilot study The pilot study sample consisted of 10 breast cancer patients chosen using the same criteria as the study population, and the questionnaire was completed in the same manner as for the study population. Both scales, the TSQM and the MARS, were tested in this sample to evaluate the simplicity, understandability, and completion time of the questionnaire. Cronbach's alpha was 0.673 for the effectiveness domain of the TSQM, 0.899 for side effects, 0.747 for convenience, and 0.878 for global satisfaction. Ethical approval The Institutional Review Boards (IRB) of An-Najah National University and the Palestinian Health Authority approved every aspect of the study protocol, including the use of and access to the patients' data. Furthermore, before initiating data collection, we explained all parts of the questionnaire to patients and received their verbal consent. Statistical analysis The Statistical Package for the Social Sciences (IBM-SPSS) version 21 was used to enter and analyze the data. The results were reported using frequencies and percentages. Sociodemographic and clinical characteristics were described using descriptive and comparative statistics. Continuous variables were expressed as medians and interquartile ranges because the data were not normally distributed, as indicated by the Kolmogorov-Smirnov test.
The Mann-Whitney U and Kruskal-Wallis tests were therefore applied to examine differences between variables. The Spearman test (TSQM and MARS scores) was used to determine the association between treatment satisfaction and adherence. All variables (sociodemographics and treatment satisfaction domains) that were significant in the univariate analysis were then entered into a multiple linear regression analysis to determine the predictors of adherence. A p value of less than 0.05 was considered to indicate a significant association with the outcome variables. Description of associations between patient characteristics and adherence score Among the 106 women with breast cancer, the median adherence score was 5.0 [4.8-6.0] (range: 1.0-7.0). Approximately 62.3% of the patients reported a high likelihood of adherence to oral hormonal drugs in the past week. Regarding the associations between patient characteristics and adherence score, a significantly lower adherence score was identified in patients living in camps (p = 0.020). Patients with comorbidities and those who continued on the same hormonal therapy had higher adherence scores, although the differences were not statistically significant. In this study, patients with comorbidities had a mean rank of 58.18 (Table 2). Description of the association between treatment satisfaction and adherence As shown in Table 3, there were significant correlations between the MARS score and treatment satisfaction, including side effects (p = 0.024) and global satisfaction (p = 0.008). Women with a high adherence rate had higher satisfaction scores than women with a low adherence rate. Spearman's rank-order correlation coefficients between the MARS adherence score and the side effects and global satisfaction TSQM scores indicated significant positive correlations (r = 0.220 and 0.258, respectively). Description of associations between patient characteristics and treatment satisfaction As shown in Table 4, treatment satisfaction scores differed across patient subgroups.
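(As a methodological aside, the univariate procedures used throughout these results — the Mann-Whitney U test, the Kruskal-Wallis test, and Spearman's rank correlation — can be sketched with SciPy. The data below are synthetic and purely illustrative; nothing here uses the study's dataset.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical adherence scores (0-7) for two residency groups.
city_scores = rng.integers(3, 8, size=40)
camp_scores = rng.integers(1, 6, size=20)

# Mann-Whitney U: compares two independent groups without assuming normality.
u_stat, u_p = stats.mannwhitneyu(city_scores, camp_scores, alternative="two-sided")

# Kruskal-Wallis: extends the comparison to three or more groups.
village_scores = rng.integers(2, 8, size=25)
h_stat, kw_p = stats.kruskal(city_scores, camp_scores, village_scores)

# Spearman's rank correlation between adherence and a satisfaction domain.
satisfaction = city_scores + rng.normal(0, 1.5, size=40)  # loosely related
rho, sp_p = stats.spearmanr(city_scores, satisfaction)

print(f"Mann-Whitney p={u_p:.3f}, Kruskal-Wallis p={kw_p:.3f}, Spearman rho={rho:.2f}")
```

Nonparametric tests like these are the natural choice here because the Kolmogorov-Smirnov test indicated non-normal data.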
Postmenopausal patients had significantly higher satisfaction with side effects (p = 0.049). In addition, patients with comorbidities had a higher global satisfaction score (p = 0.010). Furthermore, the satisfaction score for side effects was significantly lower in patients who experienced side effects (p = 0.001) and in those hospitalized in the last year (p = 0.030). Moreover, letrozole therapy was significantly associated with higher satisfaction with perceived effectiveness (p = 0.002) and higher global satisfaction (p = 0.004). Multivariate analysis of adherence score In the univariate analysis, residency, side effects, and global satisfaction were statistically significant (p < 0.05). Multiple linear regression analysis revealed that the side effects domain (p = 0.013) and global satisfaction (p = 0.018) were predictors of oral hormonal drug adherence (Table 5). Discussion The current study examined the degree of adherence of Palestinian women with breast cancer to their oral hormonal therapy and described its correlation with treatment satisfaction and other variables. Our study sample is representative of the age distribution of the breast cancer population, as approximately half of breast cancer cases in Palestine fall within the 45-65 age group [26]. Oral hormonal therapy has improved overall survival and long-term outcomes in breast cancer, and adherence to the medication is an important element of treatment success. In the current study, 62.3% of patients adhered fully in the past week, with a median adherence score of 5.0 [4.8-6.0]. In general, reported adherence rates to oral hormonal drugs range from 45% to 95.7% [27]. In a systematic review, the mean adherence rate at five years for the implementation phase was 66.2%, and the mean persistence was 66.8% [6]. Our results showed that women living in refugee camps were less adherent than those who resided in cities or villages. This could be due to lower residential stability and social affluence.
Patients with comorbidities had a higher adherence score, similar to a previous study [28]. This may be explained by the fact that patients with multiple comorbidities are aware of their diseases and of the consequences of nonadherence to medications. In addition, patients with other conditions may use co-medications for those indications, which might encourage them to take antihormonal therapy, since they already have a 'cocktail' to take and follow a medication scheme. Importantly, women who switched from one hormonal drug to another showed lower adherence to the new medication. Similar findings were reported in previous studies [29-31]. This finding should be investigated further to identify the causes of switching and its effect on adherence. Our study found no significant differences in adherence scores between the hormonal drugs used. Similarly, another study did not show a significant difference in adherence between patients using tamoxifen and those receiving aromatase inhibitors [32]. However, our results contradict those of previous studies in which educational level, radiation therapy, age, and hospitalization were found to be significantly associated with adherence to hormonal therapy [33,34]. Concerning treatment satisfaction, we found that Palestinian patients had different scores in the four domains of treatment satisfaction, with lower scores for effectiveness and convenience. Patients on oral hormone therapy may not perceive an improvement in their health, and the long duration of this therapy (5-10 years) may affect treatment satisfaction. In addition, the side effects satisfaction score was significantly lower in patients who experienced side effects or were hospitalized in the past year. It is evident that side effects substantially decrease patients' satisfaction with treatment.
In this study, treatment satisfaction (the side effects and global satisfaction domains) was a predictor of adherence to oral hormonal drugs, meaning that a high adherence score was associated with fewer experienced side effects and higher global satisfaction. A previous study found that greater satisfaction with treatment led to better adherence to oral cancer drugs, including hormonal medications [11]. However, another study revealed no obvious correlation between adherence and patient satisfaction with medication information. The side effects domain had an essential impact on treatment satisfaction and adherence. Adverse effects of hormonal therapy are considered the main barrier to adherence [28,34-36], and they negatively affect quality of life [32]. In our study, the highest beta coefficient was for the side effects variable, suggesting that side effects contributed the most to explaining differences in hormonal drug adherence. Our result is close to those of other studies, which reported considerably high percentages of nonadherence [32,33,37]. Clinicians should pay close attention to this issue, as nonadherence is correlated with all-cause mortality in Asian women with breast cancer [38]. For example, physician-patient and pharmacist-patient communication should be enhanced [39], or new techniques such as app-based smartphone interventions [40] or bubble packaging [41] could be adopted. Strengths and limitations This is the first study to correlate adherence and treatment satisfaction in patients with breast cancer treated with oral hormonal drugs and to analyze twenty-five sociodemographic and clinical factors. However, the cross-sectional design, the small sample size, the inclusion of only two centers, the use of self-report questionnaires, and convenience sampling are limitations of the current study that affect the generalizability of our findings.
Additionally, certain factors, such as receiving counseling from an oncologist or clinical pharmacist about medications, time since treatment onset, and disease stage, were not analyzed, although these variables may have a notable impact on adherence. Furthermore, the TSQM scale has not been validated in the Palestinian population. Finally, the MARS was developed for a psychiatric population and has not been validated in a cancer population. Although the MARS was designed for psychiatric patients [14], it has shown convergent validity with biologically measured adherence, good internal consistency, and test-retest reliability, and it has been used in a previous study among cancer patients receiving oral anticancer agents [42]. Importantly, the adherence scale used in the current study was adapted by removing three irrelevant items from the original MARS scale. Conclusions The current study found that higher treatment satisfaction, especially with regard to side effects, was strongly associated with good adherence to oral hormonal therapy. Adjuvant hormone therapy appears to be an exceptional situation for medication adherence, because the relationship between psychosocial factors and adherence to hormonal therapy in breast cancer differs from that in other chronic conditions [43]. Therefore, we recommend creating a specialized scale to measure adherence that considers the psychosocial factors affecting hormonal anticancer medication adherence. In addition, pharmacists should counsel cancer patients about hormonal therapy, addressing the reasons for nonadherence and ways of managing them. Finally, awareness among healthcare professionals of oral hormonal drug adherence is the cornerstone of openly discussing risks for nonadherence with cancer patients.
A Systematic Literature Review of Mathematical Models for Coinfections: Tuberculosis, Malaria, and HIV/AIDS Abstract Tuberculosis, malaria, and HIV are among the most lethal diseases, with AIDS (acquired immune deficiency syndrome) being a chronic and potentially life-threatening condition caused by the human immunodeficiency virus (HIV). Individually, each of these infections presents a significant health challenge. However, when tuberculosis, malaria, and HIV co-occur, the symptoms can worsen, leading to an increased mortality risk. Mathematical models have been created to study coinfections involving tuberculosis, malaria, and HIV. This systematic literature review explores the importance of coinfection models by examining articles from reputable databases such as Dimensions, ScienceDirect, Scopus, and PubMed. The primary emphasis is on investigating coinfection models related to tuberculosis, malaria, and HIV. The findings demonstrate that each article thoroughly covers various aspects, including model development, mathematical analysis, sensitivity analysis, optimal control strategies, and research discoveries. Based on our comprehensive evaluation, we offer valuable recommendations for future research efforts in this field. Introduction Human immunodeficiency virus (HIV) is responsible for causing acquired immunodeficiency syndrome (AIDS) [1]. The transmission of HIV occurs when infected individuals come into contact with certain bodily fluids, such as blood, semen, breast milk, and vaginal secretions [2]. Even though a cure for HIV is not available, the virus can be effectively managed through the use of antiretroviral (ARV) drugs to control its progression [3]. The HIV epidemic significantly influences the prevalence of tuberculosis infections [4], and the virus compromises the immune system, making infected individuals highly susceptible to other infectious diseases [5].
Tuberculosis (TB) is transmitted through the air when infected individuals speak, sneeze, or cough, making it an airborne disease. It is caused by Mycobacterium tuberculosis, a bacterium that primarily targets the lungs but can also affect other organs [6]. Even after successful treatment, individuals who have been declared cured can still be susceptible to reinfection. Failure to consistently take the prescribed medication for the designated duration can lead to the development of multidrug-resistant tuberculosis (MDR-TB), in which the bacteria become resistant to drugs [7]. Even individuals undergoing treatment can still transmit tuberculosis while the bacteria remain active in their bodies [8]. According to the 2013 global report by the World Health Organization (WHO), approximately one-third of the world's population is affected by the infection. The 2020 global report estimated that around 9.96 million individuals contracted tuberculosis in 2020 [9]. Furthermore, between 2000 and 2019, diagnosis and treatment efforts saved the lives of approximately 63 million people [10]. Among individuals coinfected with HIV and tuberculosis, tuberculosis is the primary cause of mortality [11]. Globally, approximately 10.6 million individuals contracted tuberculosis in 2022, an increase from the estimated 10.3 million cases in 2021; a resurgence of the declining trend observed before the pandemic is anticipated in either 2023 or 2024 [12]. Malaria is caused by a parasite of the genus Plasmodium [13]. Common signs of malaria include fever, muscle pain, fatigue, and chills, and in severe instances the illness can be fatal. Malaria is a highly lethal contagious illness transmitted solely by Anopheles mosquitoes [12]. Transmission occurs when female mosquitoes infected with the Plasmodium parasite bite humans, passing on the infection [14]. Female mosquitoes require blood meals to produce eggs [15].
According to the World Health Organization (WHO), the number of malaria-endemic countries reporting fewer than 10,000 cases rose from 26 nations in 2000 to 47 in 2020. Moreover, the number of countries with fewer than 100 indigenous cases increased from 6 to 26 [16]. Worldwide in 2022, approximately 249 million cases of malaria were reported across 85 countries where malaria is endemic, and the incidence of malaria stood at 58 cases per 1000 population at risk [17]. Global initiatives to eliminate malaria are in progress, involving the development of novel vaccines and the use of insecticides to prevent mosquito bites. Based on these data, the imperative to act becomes evident in order to mitigate transmission risk and curb the spread of tuberculosis, malaria, and HIV [18]. As we progress towards controlling and potentially eliminating malaria, HIV, and TB, mathematical models offer a robust framework for assessing the potential impact of interventions. They help identify areas requiring additional empirical research, prioritize crucial policy and research questions, and, most importantly, necessitate improved communication between modelers and those involved in experiments or fieldwork. This collaboration is essential to fine-tune questions, pinpoint critical data, and ensure that analytical work contributes to enhanced policies and the effective control of all three infections [19]. A compartmental model divides the population into different compartments based on certain assumptions and characteristics [20]. Such models can also depict epidemic events involving interactions between two diseases, enabling the estimation of the number of infected individuals [21,22]. Subsequently, through the application of optimal control, effective recommendations can be identified to manage and curb the transmission of the disease.
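The compartmental approach described above can be illustrated with a minimal SEIR system of ordinary differential equations. This is a generic sketch with arbitrary parameter values, not any of the models reviewed below:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SEIR model: S -> E -> I -> R, with arbitrary illustrative parameters.
beta, sigma, gamma = 0.4, 1 / 5, 1 / 10   # transmission, incubation, recovery rates
N = 1_000_000

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

y0 = [N - 10, 0, 10, 0]                    # ten initial infectives
sol = solve_ivp(seir, (0, 365), y0, max_step=1.0)

S, E, I, R = sol.y
print(f"peak infectives: {I.max():.0f} on day {sol.t[I.argmax()]:.0f}")
```

The coinfection models in the reviewed articles follow the same pattern, only with more compartments and coupled forces of infection.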
In recent years, progress has been made in the development of coinfection models for tuberculosis, malaria, and HIV. This systematic literature review is dedicated to compiling existing models of coinfection among these diseases and evaluating the extent of research and its outcomes. Ultimately, the objective is to provide valuable recommendations for future investigations in this field. Materials and Methods The method used to select articles for this systematic literature review is the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [23]. PRISMA involves three key stages: identification, screening, and eligibility. In the identification stage, articles pertinent to the research are searched for in different databases using specific keywords. At the screening stage, all identified articles from the multiple databases are consolidated and duplicate entries are removed. The remaining articles are assessed for relevance based on their titles and abstracts: the title and abstract are checked against the keywords used to search for the articles in the previous stage. An article proceeds to the next stage if its title or abstract contains a combination of these keywords; otherwise, it is categorized as irrelevant at this stage. Additionally, accessibility is checked, and any articles that do not meet the criteria are excluded. Following the screening stage, the articles that pass the initial assessment proceed to the eligibility phase, where a comprehensive evaluation is conducted to ascertain their relevance. At this stage, an article's relevance is judged against the research topic of this review, namely mathematical models of tuberculosis, malaria, and HIV/AIDS coinfection in the form of ordinary differential equations. The article goes to the next stage if it is appropriate to the topic. However,
if it is inappropriate, the article is categorized as irrelevant at this stage. By following these stages, the selected articles serve as the research material for this systematic literature review, facilitating a thorough exploration of coinfection models for tuberculosis, malaria, and HIV. The search limitations for this systematic literature review were as follows: duplicate detection employed JabRef, while research topic mapping used VOSviewer. JabRef and VOSviewer are open-source software applications accessible to all users. JabRef enables users to store their data in a simple text-based file format without being tied to any specific vendor, whereas VOSviewer is a software tool designed for creating and visualizing bibliometric networks. Figure 1 shows the first PRISMA flow, for articles on tuberculosis and HIV/AIDS coinfection models. A total of 55, 47, 13, and 278 articles were obtained from Dimensions, Scopus, PubMed, and ScienceDirect, respectively, of which 38 were duplicates. The screening process yielded 52 articles that proceeded to the eligibility stage, and 25 of these articles were ultimately deemed suitable for inclusion in this systematic literature review. Figure 2 shows the second PRISMA flow, for articles on tuberculosis and malaria coinfection models. A total of 8, 3, 2, and 95 articles were obtained from Dimensions, Scopus, PubMed, and ScienceDirect. After going through all stages of PRISMA, only 2 articles met the criteria for this systematic literature review. Figure 3 shows the third PRISMA flow, for articles on malaria and HIV/AIDS coinfection models. A total of 22, 14, 7, and 122 articles were obtained from Dimensions, Scopus, PubMed, and ScienceDirect, of which 12 were duplicates. Screening left 15 articles that proceeded to the eligibility stage, and only 7 articles were included in this systematic literature review.
Figure 4 shows the fourth PRISMA flow, for articles on tuberculosis, malaria, and HIV/AIDS coinfection models. A total of 6, 1, 2, and 84 articles were obtained from Dimensions, Scopus, PubMed, and ScienceDirect. After going through all stages of PRISMA, only 1 article met the criteria for this systematic literature review. The articles retrieved from the four PRISMA searches were combined, and their keywords were mapped using VOSviewer. Figure 5 presents the resulting map, indicating connections between the keywords mathematical model, coinfection, HIV/AIDS, malaria, and tuberculosis. Each circle represents a keyword contained in the articles; the bigger the circle, the more frequently the keyword appears. The lines connecting the circles illustrate the relationships between keywords. The circle for the keyword HIV is larger than the circles for tuberculosis and malaria because the search results show that HIV coinfection is studied more often than tuberculosis or malaria coinfection. By number of occurrences, the keywords that appeared most were "model" (80 times), "HIV" (73 times), "disease" (49 times), and "co-infection" (40 times), while "tuberculosis" and "malaria" appeared 29 and 22 times, respectively. Results and Discussion The articles derived from the four PRISMA analyses were examined and assessed, focusing on aspects such as model construction, mathematical analysis, sensitivity analysis, and optimal control. Mathematical analysis can encompass various aspects of coinfection models, such as examining fundamental properties (positivity, uniqueness, and invariant regions) and conducting local and global stability analyses; additional analyses, such as bifurcation analysis, may be incorporated. If an article does not discuss any of these analyses, it is marked "No" for mathematical analysis, although all articles explain the formulation of their mathematical models.
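The keyword-based screening rule used in the PRISMA stages above (an article passes only if its title or abstract contains a combination of the search keywords) can be sketched as a simple filter. The keyword set, the threshold of two hits, and the records below are hypothetical illustrations:

```python
KEYWORDS = {"mathematical model", "coinfection", "tuberculosis", "malaria", "hiv"}

def passes_screening(title: str, abstract: str, min_hits: int = 2) -> bool:
    """An article is relevant if its title or abstract contains a
    combination (here: at least `min_hits`) of the search keywords."""
    text = f"{title} {abstract}".lower()
    hits = sum(1 for kw in KEYWORDS if kw in text)
    return hits >= min_hits

records = [
    ("Optimal control of a tuberculosis-HIV coinfection model", "..."),
    ("Crop yields under drought stress", "..."),
]
relevant = [title for title, abstract in records if passes_screening(title, abstract)]
print(relevant)  # keeps only the coinfection article
```

In practice this keyword check is done by hand on titles and abstracts, but the logic is the same; the eligibility stage then requires a full-text read.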
Sensitivity analysis is a method for measuring uncertainty in a model. In disease-spread models, it can be used to determine how much influence parameter changes have on the basic reproduction number (R₀) or on the compartments. Several sensitivity analysis methods can be used: the index method for R₀ and the Partial Rank Correlation Coefficient (PRCC). In models of tuberculosis, malaria, and HIV/AIDS coinfection, interventions in the form of various controls are often used to reduce the spread of these diseases. Mathematically, such controls are used to formulate an objective function that minimizes the spread of disease, and the solution can be found using optimal control theory. Optimal control theory is a subset of control theory focused on determining a control strategy for a dynamic system over a specific time frame so as to optimize an objective function. The articles were reviewed to identify the types of controls used in each. In this systematic literature review, a total of 35 articles from the four PRISMA analyses were examined. The analysis aimed to gain insight into the coinfection scenarios involving tuberculosis-malaria, tuberculosis-HIV, malaria-HIV, and tuberculosis-malaria-HIV. Detailed information was extracted from all the articles, including author names, publication years, details of the mathematical analysis, sensitivity methods, and the controls used in the models, as shown in Table 1. The specifics of the compartments employed in the models are available in Table 2, and detailed descriptions of the model parameters can be found in the reference articles. The research results of these articles are briefly explained in this systematic literature review.
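The index method mentioned above usually refers to the normalized forward sensitivity index, Υ_p = (p / R₀) · ∂R₀/∂p. A sketch for the simple case R₀ = β / (γ + μ) (an illustrative assumption, not one of the reviewed models) compares a finite-difference approximation with the known analytical indices:

```python
def r0(beta, gamma, mu):
    """Basic reproduction number of a simple SIR-type model with demography."""
    return beta / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index of f with respect to params[name],
    approximated by a forward finite difference with relative step h."""
    base = f(**params)
    bumped = dict(params, **{name: params[name] * (1 + h)})
    dfdp = (f(**bumped) - base) / (params[name] * h)
    return params[name] / base * dfdp

params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
for p in params:
    print(p, round(sensitivity_index(r0, params, p), 3))
```

An index of +1 for β means a 10% increase in the transmission rate yields roughly a 10% increase in R₀, which is exactly the kind of statement the reviewed articles report.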
Article [24] is the only article on a coinfection model of tuberculosis and malaria identified in the second PRISMA search. The formulation of the malaria and tuberculosis coinfection model can be seen in model (1). In model (1), the human population is categorized into subpopulations: susceptible individuals S(t), individuals exposed to malaria alone E_m(t), individuals infected exclusively with malaria I_m(t), individuals who have recovered from both TB and malaria Re(t), individuals exposed to TB E_tb(t), individuals infected with TB I_tb(t), TB-infected individuals undergoing treatment T_tb(t), and individuals concurrently infected with both TB and malaria I_mt(t). The authors carry out a fundamental analysis of the model to prove that it is well posed by establishing the positivity of the solution, an invariant region, and the local and global stability of the equilibrium points. The authors also prove that model (1) exhibits backward bifurcation. To obtain the best strategy for dealing with tuberculosis and malaria coinfection, the authors apply optimal control with five types of control and then carry out numerical simulations for several combinations of controls. Based on the results obtained from the third PRISMA search, six articles discuss malaria and HIV coinfection.
Of the six articles, article [27] offers the most detailed and coherent discussion. The authors divide the model into sub-models and then find the equilibrium points and establish the local and global stability of each. The formulation of the malaria and HIV coinfection model can be seen in model (2). In model (2), the human population is categorized into six mutually exclusive compartments: susceptible individuals S_h(t), productive HIV-only infected individuals I_pa(t), nonproductive HIV-only infected individuals I_na(t), AIDS patients A(t), nonproductive individuals infected with malaria only I_nm(t), and individuals dually infected with both HIV and malaria I_nd(t). Similarly, the vector population is divided into three exclusive compartments: susceptible mosquitoes S_v(t), exposed mosquitoes E_v(t), and infected mosquitoes I_v(t). Model (2) also exhibits backward bifurcation, with the force of infection λ_a defined in the source article. The authors additionally introduce five optimal controls and carry out numerical simulations to identify the best control. Based on the results obtained from the first PRISMA search, sixteen articles discuss tuberculosis and HIV coinfection.
Of the sixteen articles, article [52] presents a comprehensive study of a tuberculosis and HIV coinfection model, from basic analysis to model simulations. The formulation of the tuberculosis and HIV coinfection model can be seen in model (3). The overall population is divided into seven distinct classes: susceptible individuals S(t), those infected with tuberculosis I_1(t), individuals with both HIV and TB infections I_2(t), individuals with TB undergoing treatment T_1(t), individuals with both HIV and TB infections receiving treatment T_2(t), individuals affected by AIDS A(t) (HIV-infected individuals who have chosen not to undergo treatment and have progressed to AIDS), and individuals who have recovered from tuberculosis infection R_1(t). The authors carry out a fundamental analysis of the model to prove that it is well posed by establishing the positivity, invariance, and existence of the solution and the local and global stability of the equilibrium points. According to the fourth PRISMA search, only one article constructs a model of tuberculosis, malaria, and HIV coinfection [58]. The formulation of this coinfection model can be seen in model (4). The entire population is categorized into five distinct classes: the susceptible S(t), individuals with treatable malaria infections I_1(t), those with mTB infections I_2(t), the HIV-infected population I_3(t), and people living with AIDS A(t). Model (4) considers one-way progression from malaria or mTB infection to malaria-HIV coinfection and mTB-HIV coinfection; it does not consider coinfection between malaria and mTB. The author analyzes local stability, performs several numerical simulation cases, and concludes that malaria and mTB infection significantly increase HIV/AIDS infection. From this information, it is clear that this model still leaves much room for development.
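For compartmental models such as (1)-(4), R₀ is typically computed as the spectral radius of the next-generation matrix F V⁻¹ (van den Driessche and Watmough's construction). A sketch for a simple SEIR model (an illustrative assumption, not one of the reviewed models), where the closed form is R₀ = βσ / ((σ+μ)(γ+μ)):

```python
import numpy as np

beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.02

# Infected compartments ordered (E, I).
# F holds rates of new infections; V holds transitions out of / between
# the infected compartments.
F = np.array([[0.0, beta],
              [0.0, 0.0]])
V = np.array([[sigma + mu, 0.0],
              [-sigma,     gamma + mu]])

ngm = F @ np.linalg.inv(V)               # next-generation matrix
r0 = max(abs(np.linalg.eigvals(ngm)))    # spectral radius

r0_closed_form = beta * sigma / ((sigma + mu) * (gamma + mu))
print(round(r0, 4), round(r0_closed_form, 4))
```

The same recipe scales to the larger coinfection models: list the infected compartments, split the linearized infection subsystem into F and V, and take the dominant eigenvalue of F V⁻¹.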
Table 1 provides an overview of the included articles, showing 7, 25, 2, and 1 article on malaria-HIV coinfection model, tuberculosis-HIV coinfection model, tuberculosis-malaria and tuberculosis-malaria-HIV coinfection model. 33,40,45he authors did not incorporate mathematical analysis into their articles. 33][30]37,39,52 Furthermore, articles performed a sensitivity analysis on model, employing an indexing method to assess the impact on basic reproduction number.Basic Reproduction Number < 0 is a threshold value that can reference whether a disease will spread or be eliminated from a population.Sensitivity analysis of a parameter is carried out to see how the parameter influences changes in < 0 . 57Other Article use PRCC method to analyze the sensitivity of the parameter.Obtaining trustworthy data poses a common challenge in mathematical biology.Although specific parameter values are available in other articles, some parameters had to be estimated.Consequently, it is crucial for the estimation of < 0 to exhibit relatively low sensitivity to these parameter values. 26Example of sensitivity analysis result, if HIV infection predominantly influences < 0 , a 10% reduction in the transmission rate is approximately mirrored by a 10% decline in < 0 .However, a 10% reduction in the death rate results in a 7% rise in R0 for Malawi and an 8% increase for sub-Saharan Africa. 30Other article result, increasing infection rate for malaria by 10%, increases the reproduction number by 5%.Thus, increasing natural death rate δ by 10% decreases the basic reproduction number by 10%. 
Other articles33 employed sensitivity analysis with an indexing method applied to different variables. In a general transmission model, sensitivity analysis employs an index with respect to R0: the more modest a parameter's sensitivity with respect to R0, the less its estimation error affects the model. In some cases, the results of the sensitivity analysis are used as a reference for selecting the parameters to be controlled using optimal control theory, which demonstrates the influence of these parameters on the spread of disease in the population.

Optimal control plays a crucial role in determining the most appropriate and efficient interventions for a model, and several articles28,31,34,38,39,45,49,58 employ these methods, incorporating different types of prevention and intervention controls. Certain articles determine the best strategy to reduce infection by altering parameter values.26,33,40 However, some articles did not incorporate any control measures in their analysis.

Table 2 illustrates that the models with the lowest36 and highest37 numbers of compartments consist of 4 and 21 compartments, respectively. Despite the complexity of their model, the authors conducted mathematical and sensitivity analyses and utilized optimal control to obtain results.

The presented articles have yielded recommendations for mitigating the transmission of tuberculosis, malaria, and HIV. The integration of Long-Lasting Insecticide-Treated Nets (LLITN), malaria treatment, tuberculosis treatment, Indoor Residual Spraying (IRS), and tuberculosis prevention proves to be effective in reducing the spread of tuberculosis and malaria, as well as their coinfection.24
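For the PRCC method mentioned earlier,57 a minimal sketch: rank-transform the parameter samples and the model output, regress the other parameters out of both, and correlate the residuals. The linear test model in the accompanying usage is an assumption for illustration only.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each parameter column of X
    (n_samples x n_params) against model output y: rank-transform, regress
    out the remaining parameters, and correlate the residuals."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    Xr = np.column_stack([rank(c) for c in X.T])
    yr = rank(y)
    n, k = Xr.shape
    coeffs = []
    for j in range(k):
        # design matrix: intercept + ranks of all parameters except j
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        proj = lambda v: v - others @ np.linalg.lstsq(others, v, rcond=None)[0]
        coeffs.append(float(np.corrcoef(proj(Xr[:, j]), proj(yr))[0, 1]))
    return coeffs
```

In a sensitivity study the rows of `X` would typically come from Latin hypercube sampling of the parameter ranges, with `y` the corresponding model outputs (for example, R0 or the epidemic peak).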
The results obtained for the malaria-HIV coinfection model differ. According to one article,30 the most effective method to diminish malaria-HIV coinfection involves combining malaria prevention measures with antiretroviral (ARV) treatment. However, another article31 found that treating malaria and HIV individually was more effective in reducing infection than administering combined treatment. The escalation of HIV/AIDS prevalence due to coinfection with malaria highlights the significance of treatment in mitigating this interplay, particularly for individuals already affected by AIDS.28 The mortality rate rises with coinfection and doubles when the infectivity escalates by 30%.26 The most cost-effective control to inhibit the spread of HIV-malaria coinfection is prevention.27 Furthermore, significant reductions in, or the potential eradication of, HIV prevalence can be achieved by ensuring high bed-net coverage and a high rate of malaria treatment to effectively minimize the incidence of malaria-HIV coinfection.29

The outcomes derived from the coinfection models of tuberculosis and HIV also exhibit variations. The infections exert a significant impact on the population due to the presence of a hidden population with tuberculosis infection.40 Tuberculosis and HIV are intricately interconnected, while AIDS influences tuberculosis infection.33 Likewise, the presence of tuberculosis infection can expedite the progression of HIV, potentially leading to more rapid development of AIDS.34

In terms of optimal control, one article36 implemented effective isolation measures to restrict the contact and transmission of infections within the population in order to eliminate tuberculosis-HIV coinfection. Furthermore, the integration of case finding and the prevention of tuberculosis treatment failure proved effective in reducing the spread of tuberculosis and HIV.37
Screening plays a crucial role in managing and containing the transmission of HIV and tuberculosis.38 Individuals with weakened immune systems are more vulnerable to contracting HIV and tuberculosis; the overall burden arising from coinfection can be minimized by employing well-selected control strengths and initiating antiretroviral therapy (ARV).39 Additionally, maintaining a higher early treatment rate compared to the late treatment rate throughout the entire treatment program is essential.43 Early detection of HIV and tuberculosis cases and the prompt initiation of treatment can effectively decrease the rate of infection, slow the progression of HIV-infected individuals toward AIDS, and reduce the occurrence of coinfection.45 The combination of prevention and treatment gives good results from both economic and epidemiological perspectives.46 In addition, vaccination leads to rapid recovery in individuals.49 Optimal detection and integrated therapy, administered at the appropriate time, also yield superior clinical outcomes.48 The optimal result can be obtained by combining all forms of detection, or by detecting only tuberculosis or only HIV over a longer period.50

To decrease the prevalence of infection and fatalities caused by the disease, it is necessary to implement all control measures collectively and at an optimal level. However, this approach carries the potential risk of inducing immune reconstitution inflammatory syndrome (IRIS) in infected individuals.51 Enhancing awareness through education results in a decline in the cumulative occurrence of new cases of coinfection within a population.52 The most effective method to minimize infection, maximize the rate of recovery, and control disease progression is a combination of vaccination and treatment.53
Based on the method employed to search for articles, only one addresses a coinfection model involving tuberculosis, malaria, and HIV.58 This model refrains from employing vectors as compartments, and the management of malaria and tuberculosis can potentially decelerate the progression of HIV.

Some coinfection models consider the effects of prevention and intervention in various forms. Prevention takes the form of insecticide-treated nets and mass spraying to reduce mosquito populations, or of education and condom use, or is modeled without specifying a particular measure. Meanwhile, intervention can take the form of treatment for malaria and tuberculosis or various therapies for HIV/AIDS. Based on these articles, prevention and intervention significantly influence the dynamics of disease spread, and in their various forms they can be used as optimal controls for reducing the spread of disease in a population.

Mathematical models of coinfection between tuberculosis, malaria, and HIV/AIDS can help in deciding the intervention policies that must be implemented to suppress the spread of these diseases.11 One important use of mathematical models in infectious disease epidemiology is the application of country-specific dynamic models to estimate the incidence and mortality of TB in the period 2020-2022. Estimates for 2020-2022 were generated utilizing a dynamic model tailored to each country, considering the impact of disruptions to tuberculosis diagnosis and treatment caused by the Covid-19 pandemic. Determining the burden of TB during the Covid-19 pandemic and its aftermath poses challenges, and the current approach involves the use of country- and region-specific dynamic models, particularly for many low- and middle-income countries. These models are also used by the WHO.
One article52 used data relevant to Kogi State, Nigeria, to study the coinfection of tuberculosis and HIV. Analytical results supported by numerical simulation show that educational awareness campaigns and treatment can reduce the burden of tuberculosis, HIV, and their coinfection. Another article40 used population and health statistics from the Ministry of Interior and the Ministry of Public Health, Thailand, as well as the World Health Organization, to estimate the parameter values of the model. The extended duration of latent TB infection means that newly infected cases do not exhibit clinical symptoms immediately; consequently, these cases remain unnoticed for a considerable period. To account for this delay, the development of a time-delay differential equation model becomes essential. Global parameter estimates were derived from data collected in sub-Saharan Africa.26 The complete biological interactions between the malaria parasite and HIV are not yet fully understood; however, it is plausible that coinfection might result in an increase in the magnitude of the parasite or viral load. Future research endeavors should involve fitting parameters to data, and exploring coinfection at the cellular level would be needed as well.
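The time-delay formulation motivated by latent TB can be sketched numerically with a history buffer. The toy model below, with a constant latent delay τ on the infection term and purely illustrative parameters, is an assumption for exposition, not the reviewed model.

```python
def simulate_delay_si(beta, mu, tau, dt, steps, S0, I0):
    """Forward-Euler integration of a toy delay model in which people
    infected at time t become infectious only at t + tau (the latent delay):
        S'(t) = -beta * S(t) * I(t)
        I'(t) =  beta * S(t - tau) * I(t - tau) - mu * I(t)
    All parameters are illustrative placeholders."""
    lag = int(round(tau / dt))
    S, I = [S0], [I0]
    for n in range(steps):
        Sd = S[n - lag] if n >= lag else S0  # constant history while t - tau < 0
        Id = I[n - lag] if n >= lag else I0
        S.append(S[n] - dt * beta * S[n] * I[n])
        I.append(I[n] + dt * (beta * Sd * Id - mu * I[n]))
    return S, I
```

The delay slows early epidemic growth relative to the corresponding ODE model, which is exactly why latent TB motivates delay differential equations rather than plain ODEs.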
Conclusion

In conclusion, a comprehensive search was conducted for articles using four different databases and specific sets of keywords. The selection process involved removing duplicated articles, assessing titles and abstracts, checking accessibility, and evaluating the relevance of each full article. The search yielded a total of 761 articles with the specified keywords. After completing the selection process, 35 articles were identified for analysis in this systematic literature review. The articles covered different aspects, including model construction, mathematical analysis, sensitivity analysis, optimal control, and research findings, and presented valuable research contributing to the field of tuberculosis, malaria, and HIV coinfection modelling.

Infectious diseases are a global health problem, especially deadly diseases such as tuberculosis, malaria, and HIV/AIDS, and the problem becomes even more crucial when coinfection occurs between these diseases. Mathematical models of tuberculosis, malaria, and HIV/AIDS coinfection can identify the most effective intervention policies for reducing the spread of disease. Several studies on tuberculosis, malaria, and HIV/AIDS coinfection have been carried out in countries such as Nigeria, Thailand, and across sub-Saharan Africa using mathematical models, with parameters estimated from data obtained in collaboration with local health departments. This research produced several optimal solutions to reduce the spread of disease, including educational awareness campaigns and treatment. Mathematical models of disease spread can produce recommendations for governments to create policies that reduce the spread of disease. However, there is still potential for further development of coinfection models by considering factors such as lifestyle, hospitalization, traditional treatment, and bacterial or viral evolution. Future
research can also apply parameter-fitting estimation methods or use time-delay models. The research object can even be modified, namely from spread at the human population level to spread at the cellular level.

Figure 1 PRISMA of Tuberculosis and HIV/AIDS.
Figure 2 PRISMA of Tuberculosis and Malaria.
Figure 3 PRISMA of Malaria and HIV/AIDS.
Figure 5 Mapping research topics using VOSviewer.
Table 1 Detail of the Articles.
Table 2 Compartments of the Models.
Invadosome Formation by Lung Fibroblasts in Idiopathic Pulmonary Fibrosis

Idiopathic pulmonary fibrosis (IPF) is characterized by abnormal fibroblast accumulation in the lung leading to extracellular matrix deposition and remodeling that compromise lung function. However, the mechanisms of interstitial invasion and remodeling by lung fibroblasts remain poorly understood. The invadosomes, initially described in cancer cells, consist of actin-based adhesive structures that coordinate with numerous other proteins to form a membrane protrusion capable of degrading the extracellular matrix to promote an invasive phenotype. In this regard, we hypothesized that invadosome formation may be increased in lung fibroblasts from patients with IPF. Public RNAseq datasets from control and IPF lung tissues were used to identify differentially expressed genes associated with invadosomes. Lung fibroblasts isolated from bleomycin-exposed mice and IPF patients were seeded on fluorescent gelatin-coated coverslips for invadosome assays, with or without the two approved drugs for treating IPF, nintedanib and pirfenidone. Several matrix- and invadosome-associated genes were increased in IPF tissues and in IPF fibroblastic foci. Invadosome formation was significantly increased in lung fibroblasts isolated from bleomycin-exposed mice and IPF patients. The degree of lung fibrosis found in IPF tissues correlated strongly with invadosome production by neighboring cells. Nintedanib suppressed IPF and PDGF-activated lung fibroblast invadosome formation, an event associated with inhibition of the PDGFR/PI3K/Akt pathway and TKS5 expression. Fibroblasts derived from IPF lung tissues express a pro-invadosomal phenotype, which correlates with the severity of fibrosis and is responsive to antifibrotic treatment.
Introduction

Idiopathic pulmonary fibrosis (IPF) is a chronic and progressive lung disease characterized by alveolar epithelial cell injury, aberrant epithelium repair, fibroblast accumulation and excessive deposition of extracellular matrix (ECM). Fibroblasts and myofibroblasts form fibroblastic foci, the accumulation of which is associated with disease progression and poor prognosis [1]. Pharmacological treatment options are currently limited to two antifibrotic drugs, nintedanib and pirfenidone. Although their cellular mechanisms of action remain to be fully elucidated, these antifibrotics have been shown to slow the progressive decline in pulmonary function as measured by forced vital capacity (FVC) [2,3]. Originally developed as an anti-angiogenesis agent, nintedanib inhibits the receptor tyrosine kinases (RTK) VEGFR, FGFR and PDGFRα/β at the ATP-binding pocket site. Nintedanib also inhibits members of the Src family of kinases, Src, Lyn and Lck [4]. Moreover, this molecule has been reported to inhibit PDGFR and the downstream signaling pathways Akt and ERK in the bleomycin-induced lung fibrosis mouse model [5]. Fibroblasts are known to promote aberrant ECM remodeling in several respiratory diseases such as asthma, chronic obstructive pulmonary disease and particularly interstitial lung diseases including IPF [6]. In IPF, in addition to releasing excessive amounts of collagen and fibronectin, fibroblasts express mediators of matrix degradation, such as the metalloproteinases MMP-2 and MMP-9 [7], and enzymes responsible for ECM crosslinking, resulting in matrix stiffening [8]. The stiff matrix further activates fibroblasts [9] and promotes matrix invasion [10,11]. Fibroblasts also release ECM-bound TGFβ through matrix contraction and MMP-mediated cleavage [12,13]. First identified in Src-transformed fibroblasts [14], invadosomes (known as invadopodia when specific to cancer cells) have been observed in highly invasive cancer cells [15].
Briefly, the process of invadosome formation consists of an integrin-based matrix adhesion step that enables the cells to probe the ECM. Specific proteins, including TKS5, cortactin, N-WASP, AFAP110, supervilin (SVIL), CD44 and ARP2/3, act cooperatively to initiate actin polymerization. TKS5 binds to the phosphatidylinositol PI (3,4)P2 found at the membrane and acts as a scaffold protein for the polymerization complex. The colocalization of cortactin or TKS5 with f-actin is commonly used to identify invadosomes in cells [16,17]. The assembly of an actin bundle with microtubules and intermediate filaments such as vimentin provides a mechanical force to push the cell membrane protrusion into the ECM [18]. Local invasion of the matrix by invadosomes is facilitated by matrix degradation involving proteolytic enzymes, such as the serine protease FAP-α and the metalloproteinases MMP14 (MT1-MMP), MMP-2, MMP-9 and ADAM12 [19]. The formation of invadosomes is involved in the ECM remodeling observed in many malignant tumors and non-cancer cells such as macrophages, dendritic cells, osteoclasts and synoviocytes [20][21][22]. Invadosome formation is promoted in response to a variety of growth factors, signaling pathways and environmental cues [23]. In this regard, we recently reported the critical role of the PDGFR/PI3K/Akt signaling pathway in invadosome formation by arthritic fibroblast-like synoviocytes [24]. Although the involvement of fibroblasts in the dysregulation of ECM homeostasis is an important event in the development of fibrosis, little is known about the key players involved in this process. We hypothesized that invadosomes, which are known to drive ECM remodeling in cancer, might be assembled by IPF fibroblasts and that these structures could be targeted by first-line treatments. To address this question, fibroblasts were isolated from human and murine fibrotic lungs and assessed for their ability to form invadosomes. 
In this study, we report the presence of an invadosome-associated gene signature in IPF lungs and fibroblastic foci. Accordingly, IPF lung fibroblasts have an increased capacity to form invadosomes, an event that correlates with the severity of fibrosis observed in the patients' lungs. Furthermore, nintedanib and pirfenidone decreased invadosome formation by IPF fibroblasts, with nintedanib concomitantly inhibiting TKS5 expression and the PDGFR/Akt axis. These data suggest that invadosomes are associated with IPF pathogenesis and reveal an additional mode of action for some of the current first-line treatments.

Several Key Genes of Invadosome Formation Are Upregulated in IPF Lung Tissue Samples and in Fibroblastic Foci Areas

Given the possible presence and involvement of invadosomes in IPF disease, we first asked whether IPF lungs are characterized by a pro-invadosomal gene expression signature. For this, we used the public gene expression dataset GSE32537 and performed differential expression analysis between control and IPF subsets. The upregulation of fibrosis-related genes in IPF samples was used to validate the dataset. The results indicated that the fibroblast-associated genes COL1A1, COL3A1, ACTA2, FN1, HAS2 and TGFB1 were strongly increased in IPF samples compared with the controls ( Figure 1A). Next, using the same cohort, a panel of genes known to be associated with invadosome formation [17,25,26] was analyzed. The expression of FAP, ADAM12, SYNJ2, MMP2, VIM, AFAP1, SH3PXD2A (TKS5), SVIL and CD44 was significantly increased in IPF lungs ( Figure 1B). To further investigate whether fibroblastic cells expressed increased levels of invadosome-related genes, we performed differential expression analysis between control alveolar septae, IPF alveolar septae and IPF fibroblastic foci with the GSE169500 dataset.
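The significance calls in these differential expression comparisons rely on Benjamini-Hochberg false discovery rate adjustment of the per-gene p-values; a minimal sketch of the step-up procedure:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR adjustment: scale the i-th smallest
    p-value by n/i, then enforce monotonicity from the largest rank down."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * n
    running_min = 1.0
    for pos, i in enumerate(reversed(order)):
        rank = n - pos                       # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

benjamini_hochberg([0.01, 0.04, 0.03, 0.20])  # -> [0.04, 0.053..., 0.053..., 0.2]
```

Genes are then flagged at the usual thresholds (e.g. adjusted p < 0.05) rather than on raw p-values, which controls the expected fraction of false positives across the whole gene panel.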
The expression of SYNJ2, SH3PXD2B, FAP, SVIL, ADAM12, SH3PXD2A, SLC9A1, MMP2, MMP14 and AFAP1 was strongly increased in IPF fibroblastic foci compared to non-fibrotic IPF alveolar septae ( Figure 1C). Overall, IPF lung tissues and fibroblastic foci express more invadosomal markers than healthy tissues, suggesting an increased presence of invadosomes in IPF lung tissues.

Figure 1. Several invadosome-associated genes are increased in lungs from IPF patients. Heatmaps of differential expression analysis using public datasets. Expression levels are presented as Z-scores. Significance of each gene is shown on the left with Benjamini-Hochberg false discovery rate adjusted p-values. Gene expression in lung tissue from GSE32537 of (A) 6 fibrosis-associated genes and (B) 17 genes encoding proteins involved in invadosome formation. Ctrl (n = 50) and IPF (n = 119). (C) Gene expression from GSE169500 of 10 genes involved in invadosome formation. Control alveolar septae (n = 10), IPF alveolar septae (n = 10) and IPF fibroblastic foci (n = 10). * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.

Invadosome Formation Is Increased in Lung Fibroblasts Isolated from IPF Patients and Fibrotic Mice

To assess the ability of IPF fibroblasts to form ECM-degrading invadosomes, we isolated lung fibroblasts from healthy and IPF lung tissues, and the cells were subjected to invadosome assays. Fluorescent images show that IPF fibroblasts form more invadosomes and create larger areas of degraded matrix compared with fibroblasts from healthy donors. The areas of degraded matrix found 20 h after cell seeding often appear beside the cells, suggesting fibroblast motility on the gelatin matrix ( Figure 2A). Specifically, invadosome-producing cells are 1.8 times more abundant among IPF fibroblasts than healthy fibroblasts ( Figure 2B), and this cellular behavior was accompanied by a 2.4-fold greater capacity to degrade matrix ( Figure 2C). Of note, the capacity of IPF fibroblasts to form invadosomes was not associated with fibronectin expression ( Figure S1B) or with FVC clinical measures or smoking status ( Figure S2A,B). Cortactin and TKS5 proteins are essential for invadosome production. By confocal microscopy, the colocalization of f-actin with cortactin and of f-actin with TKS5 ( Figure 2D) specifically confirmed the presence of invadosomal structures in IPF fibroblasts. The number of f-actin- and cortactin-positive puncta was 1.8 times higher in IPF fibroblasts ( Figure 2E). In addition, the mRNA level of TKS5 was increased 2.1-fold in IPF fibroblast cultures ( Figure 2F) and was positively correlated with the ability of fibroblasts to form invadosomes ( Figure 2G). Accordingly, lung tissue immunohistochemistry revealed that cytoplasmic TKS5 staining is intensified in IPF lungs compared to healthy lungs ( Figure 2H). Knockdown of TKS5 in IPF fibroblasts reduced the ability of cells to produce invadosomes, confirming the involvement of TKS5 ( Figure S3).

To determine whether invadosome formation is also present in a model of induced fibrosis, we next studied lung fibroblasts isolated from mice 28 days after exposure to bleomycin. Actively degrading cells were 1.7 times more frequent among bleomycin fibroblasts ( Figure S4A), and these cells had a 2.6-fold greater capacity to degrade matrix ( Figure S4B). Moreover, fibroblasts from bleomycin-exposed mice produced two times more invadosomal structures, as measured by f-actin and cortactin clusters ( Figure S4C).

Fibroblast Invadosome Formation Correlates Positively with the Collagen Content of Neighbouring Tissue

To define whether fibroblasts from collagen-rich regions of the lung have an increased capacity to form invadosomes, tissues from apical and basal regions of the lung were collected for each IPF patient and processed for histology. High-resolution images of the tissues were then taken following Masson's trichrome staining to quantify collagen.
Histologically, these tissues consisted of a heterogeneous distribution of normal tissues as well as immature and mature fibrotic areas, alveolar inflammation, fibroblastic foci and dense areas with collagen deposition, which is entirely consistent with the diagnosis of IPF. In Figure 3A, tissues from the IPF-01 patient show predominant fibrosis at the bases of the lung (collagen 39%) compared to the apex (collagen 24%), which is consistent with a usual interstitial pneumonia (UIP) pattern. Four out of five patients have a greater amount of collagen in the basal region of their lungs compared to the apical region ( Figure 3B). Interestingly, fibroblasts derived from the lung bases of these patients, also showed an increased capacity to form invadosomes when compared with fibroblasts derived from the upper lung zones. Furthermore, the only patient with a greater amount of fibrosis in the apical sample had increased invadosome formation in apical-derived fibroblasts. This observation suggested that the potential of fibroblasts to form invadosomes may be related to the amount of collagen present in their vicinity ( Figure 3C). To further explore the potential correlation between collagen density and fibroblast invadosome formation within the same lung area, we examined 17 tissue samples from 12 IPF patients representing a broad range of collagen content. A strong positive linear correlation was found between the presence of collagen and the formation of invadosomes ( Figure 3D). To further assess the in vivo relevance of invadosomes in collagen-rich samples and fibroblastic foci, we used the GSE32537 and GSE169500 datasets to correlate the expression of TKS5 (SH3PXD2A) with collagen I (COL1A1) in lung tissues from IPF patients. Analysis of 119 IPF lung samples indicated a significant and strong correlation between type 1 collagen and TKS5 expression ( Figure 3E). 
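The linear correlations reported here (Figures 3D and 3E) are standard Pearson coefficients; a minimal sketch with synthetic numbers (an assumption for illustration, not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Synthetic illustration only (not the study's data):
collagen_pct = [24, 39, 18, 31, 45, 27]        # % collagen in the tissue sample
degrading_cells = [30, 52, 22, 40, 60, 35]     # % invadosome-forming fibroblasts
pearson_r(collagen_pct, degrading_cells)       # strongly positive, close to +1
```

A coefficient near +1 with a small p-value is what "a strong positive linear correlation" denotes in the text; for small cohorts a rank-based (Spearman) coefficient is a common robustness check.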
Similar results were observed with IPF fibroblastic foci samples regarding TKS5 and collagen I ( Figure 3F). Altogether, these results suggest that invadosome production is intensified in fibroblasts from collagen-rich regions of IPF lungs.

Nintedanib and Pirfenidone Inhibit Invadosome Formation by Lung Fibroblasts from IPF Patients

Given that nintedanib and pirfenidone, the two first-line IPF treatments, were shown to slow the rate of disease progression, it was of interest to investigate their effect on invadosome formation. Images of fibroblasts from an IPF patient illustrate the efficacy of nintedanib to impair invadosome-induced gelatin degradation ( Figure 4A). Nintedanib markedly decreases the percentage of invadosome-forming cells from 7 IPF patients to the level of healthy fibroblasts (22% ± 2) ( Figure 4B). The degraded area was also decreased by 60% and 73% using 0.2 µM and 0.5 µM nintedanib, respectively ( Figure 4C). Furthermore, f-actin and cortactin puncta counts demonstrate that nintedanib reduces the formation of invadosomal structures by 39% ( Figure 4D and Figure S5). Pirfenidone at 5 mM also reduces the percentage of invadosome-forming IPF fibroblasts to the level of healthy fibroblasts (22% ± 2) ( Figure 4E).

Nintedanib Reduces the Expression of TKS5 and p-Akt in IPF Fibroblasts

Among the receptor tyrosine kinases (RTK) targeted by nintedanib, PDGFRα/β receptors are activated during IPF due to the strong production of PDGF by alveolar macrophages [27]. Using healthy donor fibroblasts, we found that PDGF-BB induces TKS5 mRNA expression by 1.6-fold ( Figure 5A) and increases the capacity of cells to form invadosomes by 1.9 times ( Figure 5B), both of which were completely blocked by nintedanib, suggesting that PDGFR activation is sufficient to promote invadosome formation. Similarly, when IPF fibroblasts are stimulated with PDGF-BB, TKS5 mRNA expression is increased 1.4 times ( Figure 5C). Nintedanib reduces the expression of TKS5 at the mRNA level ( Figure 5C) as well as at the protein level ( Figure 5D) in unstimulated IPF fibroblasts. Under the same conditions, expression levels of the MMP2, MMP14 and ADAM12 metalloproteinases remained unchanged, while the pro-fibrotic genes COL1A1 and CTGF were significantly decreased, confirming the efficacy of nintedanib ( Figure S6). Given that PDGFR acts mainly through PI3K/Akt signaling as well as through non-receptor tyrosine kinases including Src, we sought to determine the effect of nintedanib on the Src/p-cortactin and PI3K/Akt signaling pathways. Nintedanib had no significant effect on Src phosphorylation at the Y416 activation site or on cortactin phosphorylation at Y421, an Src phosphorylation site [28] ( Figure S7). However, immunoblots performed on IPF fibroblasts reveal that 0.5 µM of nintedanib significantly decreases the phosphorylation of Akt ( Figure 5E).
Discussion

IPF is characterized by aberrant interstitial remodeling and abnormal ECM deposition in the lung interstitium. Fibroblastic foci (FF), comprised of activated fibroblasts and myofibroblasts, are a histologic hallmark of IPF and strongly correlate with disease progression and mortality [29]. Cells comprising FF likely originate from mesenchymal progenitor cells, EMT, circulating fibrocytes, pericytes and resident fibroblasts [30][31][32] and have the capacity to migrate, invade and remodel areas of alveolar damage [33][34][35]. Invadosomal structures, characteristically present in cancer cells, are known to facilitate metastatic dissemination through pericellular ECM degradation. Whether such specialized cellular structures exist in IPF fibroblasts and contribute to interstitial remodeling of the IPF lung has not been established. Other actin-based structures have been found in lung fibroblasts, such as focal adhesions [11] and filopodia [36]; however, these structures lack the proteolytic activity characteristic of invadosomes [26]. In the current study, we report for the first time that IPF fibroblasts have an increased capacity to form invadosomes, which correlates with the severity of lung fibrosis and is blocked by the antifibrotic drugs nintedanib and pirfenidone. These observations suggest that IPF fibroblasts can form invadosomes and provide new insights into the mechanism by which fibroblasts may contribute to IPF physiopathology. TKS5, an essential component of invadosomes, acts as a scaffold for the actin polymerization complex and takes part in MMP trafficking and ROS-mediated MMP activation [16].
Here, TKS5 (SH3PXD2A) expression levels were found to be elevated in tissues, fibroblastic foci and cells derived from patients with IPF, suggesting its involvement in the disease. Using a gelatin degradation assay and a colocalization assay with TKS5, cortactin and f-actin, we identified functional invadosomes in lung fibroblasts derived from IPF patients and bleomycin-exposed mice. Healthy lung fibroblasts could produce invadosome structures and degrade the gelatin matrix, but these features were robustly increased in fibroblasts isolated from fibrotic lungs. MMPs with gelatinase activity are augmented in lung samples, BALF and plasma derived from patients with IPF [37]. In addition, increased expression and activation of MMPs was shown to contribute to fibrocyte migration [38] and lung fibroblast invasion [39], two events associated with ECM remodeling. The degradation of ECM by invadosomes is mediated by MMP recruitment and secretion at the tip of the protrusion, where MMP14 can activate pro-MMP-2 and pro-MMP-9 [40]. Our RNAseq data analysis showed an upregulation of MMP14, MMP2 and ADAM12 in the fibroblastic foci, which is consistent with the ability of isolated IPF fibroblasts to degrade ECM in invadosome assays. TGFβ and multiple growth factors are bound to the ECM in a latent form and can be released and activated by MMPs and ADAMs [13,41]. It is therefore possible that the marked increase in invadosome formation observed in IPF lung-derived fibroblasts promotes ECM remodeling through activation of TGFβ and/or other growth factors via pericellular secretion and activation of proteases. Interestingly, several invadosome inducers are upregulated in lung fibrosis, such as TGFβ, PDGF and LPA [42]. These factors are notably involved in various cellular events associated with the fibrotic response, such as fibroblast proliferation, migration and differentiation into myofibroblasts, as well as ECM remodeling [43].
These pro-fibrotic factors may therefore contribute to establishing an invadosomal phenotype in IPF fibroblasts. It would be of interest to evaluate, in the future, the effect of TGFβ and LPA on the production of invadosomes by lung fibroblasts. Invadosomes have been largely studied in vitro, but growing evidence in different organisms indicates their role in vivo in both physiological conditions and malignancies [26,44]. To further explore the potential relevance of invadosomes in lung fibrosis, 17 IPF lung tissue samples were obtained to measure collagen content and invadosome formation by fibroblasts isolated from the same specimens. Interestingly, fibroblasts with the highest capacity to form invadosomes originated from areas of the lung tissue with the most severe fibrosis. Accordingly, in IPF lung tissues and fibroblastic foci, RNAseq analysis revealed that samples with high levels of collagen I were also those expressing increased levels of the invadosome marker TKS5. Previous work reported that dense fibrillar type I collagen is a strong inducer of invadosome production by cancer cells and human fibroblasts [45]. During lung fibrosis, substantial changes take place in the ECM composition and organization, resulting in increased rigidity [8]. Loss of lung compliance due to tissue stiffening is clinically manifested by traction bronchiectasis and restrictive ventilatory defects, such as a decreased FVC and an increased FEV1/FVC ratio. At the cellular level, a stiff matrix is known to increase invasion and migration of lung fibroblasts [10,46]. In liver fibrosis and cancer cell invasion, matrix stiffness clearly acts as an inducer of invadosomal activity [47]. Collectively, these studies suggest that the marked increase in invadosome formation by fibroblasts derived from areas of severe fibrosis found in this work may have been promoted by the rigid matrix present in these densely fibrotic portions of the lung.
We observed that nintedanib can achieve ~50% inhibition of invadosome formation at a lower concentration (0.5 µM) than pirfenidone (5.0 mM). These concentrations, which are commonly used for in vitro studies, correspond to approximately 7-fold and 60-fold, respectively, above the concentrations found in human plasma [48,49]. Consequently, nintedanib was selected for further mechanistic investigation. Nintedanib targets many receptor tyrosine kinases (RTKs) and non-RTKs, the latter including members of the Src family [4] that are known to participate in the initiation step of invadosome formation [28]. In this study, RNAseq data showed similar levels of Src (SRC) and cortactin (CTTN) expression between the control and IPF tissues. Immunodetection of the Src Y416 phosphorylation site and the Src-mediated cortactin Y421 phosphorylation site also suggested that the activity of these proteins was not modulated by nintedanib used at a concentration sufficient to block invadosome formation. Therefore, nintedanib seemed to inhibit invadosome formation through an Src-independent mechanism. Among the RTKs targeted by nintedanib, inhibition of the PDGFRβ receptor was shown to markedly attenuate bleomycin-induced lung fibrosis [50]. Our group found that PDGF-BB induces the formation of invadosomes in fibroblast-like synoviocytes through the PI3K/Akt pathway. Moreover, in rheumatoid arthritis fibroblast-like synoviocytes, inhibition of PDGFR efficiently blocked invadosome formation [23]. Consistent with this work, we found that PDGF-BB induced the formation of invadosomes and the expression of TKS5 in healthy lung fibroblasts, all of which could be prevented with nintedanib. In IPF fibroblasts, the inhibition of invadosomes and TKS5 by nintedanib was concomitant with reduced Akt phosphorylation. Specific inhibition of Akt with triciribine or MK-2206 impaired invadosome formation, suggesting that this process is dependent on the PI3K/Akt pathway, which is likely affected by nintedanib.
The downregulation of PI3K decreases the production of PI(3,4,5)P3 and PI(3,4)P2. Interestingly, the most significantly overexpressed gene found in IPF fibroblastic foci in the RNAseq analysis was SYNJ2, which encodes a phosphatase that produces PI(3,4)P2 [51]. PI(3,4)P2 synthesis is a prerequisite for TKS5 membrane attachment prior to invadosome formation [52]. Therefore, nintedanib could inhibit invadosomes by indirectly impairing TKS5 recruitment. Although this potential mechanism requires further investigation, the fact that nintedanib prevented invadosome formation represents a novel mode of action for this antifibrotic. Given the role of invadosomes in ECM remodeling, invasion and migration, the observation of an invadosomal phenotype in IPF lung fibroblasts suggests a contribution of invadosomes to the pathogenesis of IPF, the details of which remain to be defined. To further study the role of invadosomes in lung fibrogenesis, the bleomycin mouse model could be used, since invadosomes were more numerous in fibroblasts isolated from lungs of mice with bleomycin-induced fibrosis. In conclusion, this study provides evidence suggesting that invadosome formation is a cellular mechanism by which IPF fibroblasts can promote interstitial remodeling of the lung and perpetuate pulmonary fibrosis. Given that the ability of IPF lung-derived fibroblasts to form invadosomes is strongly associated with fibrotic ECM, the capacity of fibroblasts to form invadosomes is likely to be relevant to numerous diseases with ECM remodeling and a fibrotic component. Despite the availability of two treatments that slow the rate of fibrosis progression, IPF remains a fatal lung disease for which effective treatments are needed. Further studies will be required to define whether the mechanisms leading to the increased capacity of IPF fibroblasts to form invadosomes could become relevant therapeutic targets in IPF.
Lung Fibroblast Isolation and Culture

After lung transplantation, human lung tissue samples from healthy donors and IPF patients were obtained from the Respiratory Cell and Tissue Biobank of CRCHUM, affiliated with the Quebec Respiratory Health Research Network (biobanque.ca) (Table 1), after obtaining patients' written informed consent prior to enrolment (protocol CER CHUM #08.063) and approval of the research study (#2018-2437) by the Université de Sherbrooke Institutional Review Board. Lung samples were minced and incubated in Dulbecco's modified Eagle's medium (DMEM) containing DNase 1 (50 µg/mL) and Liberase TL (0.25 mg/mL). The enzymatic digestion was stopped with EDTA (100 mM). The cells were passed through a 70 µm strainer and pelleted by centrifugation. Isolated cells were cultured at 37 °C with 5% CO2 in DMEM supplemented with 15% fetal bovine serum, penicillin, streptomycin and glutamine. Additional normal human lung fibroblasts (NHLF, #CC-2512) and IPF diseased human lung fibroblasts (DHLF-IPF, #CC-7231) were purchased from Lonza and cultured under the same conditions. The purity of the fibroblast cultures was analyzed by flow cytometry (Figure S1A). Lung fibroblasts from healthy and 28-day post-bleomycin-challenge mice were isolated and cultured in a similar manner (data supplement). Human and murine cells were used between passages 3 and 7. Reagents, immunoblotting, mRNA expression analysis (Table 2) and public gene expression analyses of GSE32537 and GSE169500 are further described in the data supplement.

Invadosome Formation Assessment

Coverslips were coated with Oregon Green gelatin and prepared as previously described [53]. Lung fibroblasts were seeded on the gelatin-coated coverslips without FBS. After a 5-40 h period, fibroblasts were fixed with PFA, permeabilized with saponin and blocked with BSA. Actin and nuclei were stained with phalloidin-Texas Red and DAPI, respectively.
Invadosomes were identified as actin-rich dots near areas of degraded gelatin using an epifluorescence microscope (Carl Zeiss Inc., Thornwood, NY, USA) at a magnification of 40×. Three hundred cells were counted per coverslip to determine the percentage of invadosome-forming cells. The areas of gelatin degradation per cell were measured with Image-Pro software (Media Cybernetics, Rockville, MD, USA). To quantify the number of invadosome structures per cell, fibroblasts were stained with anti-cortactin antibody, phalloidin-Texas Red and DAPI. Micrographs of cortactin and actin clusters from 20-25 cells were acquired using an Olympus FV1000 spectral scanning confocal microscope (Olympus, Tokyo, Japan) at 60× magnification.

Lung Histology and Collagen Quantification

Lung tissues were fixed with 10% formalin before paraffin embedding. Tissue sections of 4 µm were stained with Masson's trichrome by the Histology Platform at the Université de Sherbrooke. TKS5 (NBP1-90454) immunohistochemical staining was detected using diaminobenzidine and counter-stained with Harris hematoxylin. Microscope slides were scanned at 20× and 40× magnifications with a Hamamatsu NanoZoomer 2.0 RS slide scanner (Hamamatsu Photonics, Bridgewater, NJ, USA). Collagen quantification was performed as described by Chen et al. [54]. The percentage of collagen-positive area was measured for each patient specimen on two separate samples (~0.5 cm³) and on two non-serial sections.

Statistical Analysis

GraphPad Prism 8.01 software (GraphPad Software Inc., La Jolla, CA, USA) was used to perform the statistical analysis. Each individual value in the histograms corresponds to a single donor. Significance between two groups of non-normally distributed data was measured with an unpaired Mann-Whitney test. A Kruskal-Wallis test followed by Dunn's multiple comparisons test was used to compare more than two groups. Correlation strength was measured with a two-tailed Pearson's test.
Results are presented as mean ± SEM and p values were identified as * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.
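The statistical workflow described above maps directly onto standard SciPy calls. The sketch below is illustrative only: the sample arrays are hypothetical, not the authors' donor-level data, and Dunn's post hoc test is not part of SciPy (it is available in the third-party scikit-posthocs package), so only the SciPy-native tests are shown.

```python
# Illustrative sketch of the statistical tests described above (SciPy).
# All data arrays here are hypothetical, generated for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
healthy = rng.normal(22, 3, 10)   # % invadosome-forming cells (hypothetical)
ipf = rng.normal(40, 5, 12)
treated = rng.normal(25, 4, 12)

# Two groups, non-normally distributed data: unpaired Mann-Whitney test
u_stat, p_mw = stats.mannwhitneyu(healthy, ipf, alternative="two-sided")

# More than two groups: Kruskal-Wallis (Dunn's post hoc comparisons would
# follow, via the third-party scikit-posthocs package, not SciPy itself)
h_stat, p_kw = stats.kruskal(healthy, ipf, treated)

# Correlation strength: two-tailed Pearson's test
collagen = rng.uniform(5, 40, 17)                 # % collagen-positive area
invadosomes = 0.8 * collagen + rng.normal(0, 3, 17)
r, p_corr = stats.pearsonr(collagen, invadosomes)

print(p_mw, p_kw, r)
```

With the strongly separated hypothetical groups above, both omnibus tests return p < 0.05 and the Pearson correlation is strongly positive, mirroring the qualitative pattern reported in the paper.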
Hair Thread Tourniquet Syndrome in an Infant: Emergency Exploration Saves Limbs

Hair thread tourniquet syndrome (HTTS) is a rare condition in which fibres constrict around appendages, causing ischaemia and necrosis. It is a sporadically reported condition, and almost all reported cases involve the fingers, toes or genitalia. A significant number of cases are infants aged two weeks to six months, in whom it is attributed to the mother's excessive hair fall due to hormonal changes after delivery. We present a two-month-old infant who had been irritable for two days, with her left ring finger exhibiting an ischaemic constriction and no apparent insulting agent. She was successfully treated surgically after we suspected incomplete removal of a hair thread in the emergency department. We would like to highlight the importance of a high index of suspicion in such cases, as early intervention saves the appendage.

Introduction

Hair thread tourniquet syndrome (HTTS) is an uncommon condition in which an appendage is constricted by a fibre, leading to ischaemia and subsequently necrosis. The annual incidence of HTTS is 0.02%, and it presents as an emergency [1]. Attending doctors must have a high index of suspicion, especially in children who are crying for no apparent reason, as this diagnosis is poorly recognised. Delay in diagnosis and treatment can lead to amputation of the affected appendage. Mat Saad et al. reported that 44.2% of HTTS cases involved the penis, 40.4% involved the toes and 8.6% involved the fingers [2]. One of the possible insulting agents causing HTTS is human hair, which is known to have a tensile strength greater than 29,000 pounds per square inch. Given its nature of expanding when wet and tightening when dry, human hair is capable of constricting any appendage of the body, compressing blood vessels in a way that may lead to a necrotic appendage [3].
Case Presentation

A two-month-old infant presented with persistent irritability despite all efforts to comfort her. After one day, the mother noted swelling and constriction of her left ring finger upon removing her mitten. Physical examination revealed marked oedema, redness and congestion distal to the metacarpophalangeal joint of the ring finger (Figure 1A). There were no signs of tissue necrosis, no recent infections, no relevant past medical history, no known congenital issues such as constriction band syndrome, and no trauma. HTTS was considered, but no obvious constricting thread could be visualised. We explored the finger in the emergency department using loupes (3× magnification) and a pen torch. A few fine hair strands were seen strangulating the finger just distal to the metacarpophalangeal joint and were cut free (Figure 1B). Although the perfusion of the finger was good, the congestion of the finger was only slightly reduced. The constricting mark was still very obvious, and we were not convinced that all the hair threads had been successfully removed. Subsequently, we explored the finger in the operating theatre under general anaesthesia, as the child was fretful and moving. Using the microscope in the operating theatre, we confirmed that there were no more hair threads. As the hair strands had strangulated the finger, and since we wanted to reduce the oedema, we decided to undermine the constricting mark and close it with a vertical mattress suture (Figures 2A, 2B). The oedema subsequently reduced, and finger circulation was monitored for one day before the infant was discharged. At three weeks, the wound over the finger had healed and the infant was actively moving the affected finger.

Discussion

The aetiology of HTTS is currently not well known. However, the risk of HTTS is increased in infants aged two weeks to six months due to telogen effluvium.
Telogen effluvium is pronounced following pregnancy due to the endocrine flux, which leads to the excessive hair loss experienced by mothers in the post-partum period. This condition poses a higher chance of hair strangulating an appendage, as in the case above [4]. Another hypothesis for HTTS is that most appendages, including the fingers and toes, are usually covered in mittens, which are frequently washed without being turned inside out. The accumulation of hair strands in these mittens could lead to the infant winding these fibres around their appendages when they move their hands or legs freely in the mittens, increasing the risk of HTTS [4]. As the hair may be deeply embedded, most patients with HTTS usually present three to four days later [5]. Although most cases are accidental, child abuse cannot be excluded in cases involving multiple appendages and ecchymoses noted on the infant's body [4]. Other differential diagnoses that should be considered include ainhum, pseudoainhum, infection, foreign body, insect bites and congenital constriction band [5]. The outcome of the affected appendage largely depends on prompt detection, correct diagnosis and early intervention to completely remove any encircling fibres and restore the circulation. This can be done with the aid of a magnifying glass in the emergency setting or in the operating theatre. When the appendage is severely oedematous, it is difficult to determine whether the constriction has been removed completely. This is exemplified in our case, where we had difficulty confirming complete removal of all hair strands due to the oedema. Surgical exploration is mandatory if complete removal cannot be achieved or satisfactorily confirmed, and should be done under general anaesthesia to allow optimal exploration. In this case, the Z-plasty approach was used in the exploration to prevent the potential risk of scar contracture.
If the constricting agent is hair, the use of a depilatory agent is possible with minimal discomfort to the child [6]. However, it may not be suitable in cases where the hair is deeply embedded and cannot be visualised [7]. Education should be targeted at both parents and attending doctors to raise awareness of this uncommon diagnosis. Parents should be advised to launder their children's clothes inside out and to avoid using coverings over their children's extremities for long periods of time without supervision [8].

Conclusions

HTTS is an uncommon occurrence. We should have a high index of suspicion, as early diagnosis is vital in determining the outcome of the appendage. Early intervention, either by a non-surgical or surgical method, to release the strangulation of the affected appendage is important to prevent ischaemic necrosis.

Disclosures

Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Decoding optimal ligand design for multicomponent condensates

Biomolecular condensates form via multivalent interactions among key macromolecules and are regulated through ligand binding and/or post-translational modifications. One such modification is ubiquitination, the covalent addition of ubiquitin (Ub) or polyubiquitin chains to target macromolecules for various cellular processes. Specific interactions between polyubiquitin chains and partner proteins, including hHR23B, NEMO, and UBQLN2, regulate condensate assembly or disassembly. Here, we used a library of designed polyubiquitin hubs and UBQLN2 as model systems for determining the driving forces of ligand-mediated phase transitions. Perturbations to the UBQLN2-binding surface of Ub, or deviations from the optimal spacing between Ub units, reduce the ability of hubs to modulate UBQLN2 phase behavior. By developing an analytical model that accurately described the effects of different hubs on UBQLN2 phase diagrams, we determined that introduction of Ub to UBQLN2 condensates incurs a significant inclusion energetic penalty. This penalty antagonizes the ability of polyUb hubs to scaffold multiple UBQLN2 molecules and cooperatively amplify phase separation. Importantly, the extent to which polyubiquitin hubs can promote UBQLN2 phase separation is encoded in the spacing between Ub units, as found for naturally occurring chains of different linkages and designed chains of different architectures, thus illustrating how the ubiquitin code regulates functionality via the emergent properties of the condensate. We expect our findings to extend to other condensates, necessitating the consideration of ligand properties, including concentration, valency, affinity, and the spacing between binding sites, in studies and designs of condensates.
Table of Contents: Theory for ligand-mediated phase transitions; Figure S1; Tables S1-S5.

Theory for Ligand-Mediated Phase Transitions

Solution Free Energy

We have developed a theory to reconcile the observation that monoubiquitin and polyubiquitin inhibit and promote phase separation, respectively. The first requirement of the model is that it must capture the binding equilibrium of ubiquitin and UBQLN2 into higher-order complexes. We begin by writing down the free energy of the dilute state, which we denote with the index V = "vapor" (Eq. S1). Here c_h,V and c_d,V are the concentrations of hub and driver molecules in the monomer state, respectively (i.e., not bound to any other hub or driver molecule). N is the number of driver molecules that a hub can bind (N = 4 in the case of Ub4), and c_n,V is the concentration of hubs that are bound to n driver molecules ("n-mers"). f_i is the free energy of species i, and c_i[ln(c_i/c_0) − 1] is the mixing-entropy contribution of that species, where c_0 is a reference concentration. The chemical potentials μ_h and μ_d serve as Lagrange multipliers to ensure that the concentrations of molecules in the monomer and n-mer states add up to the total concentrations of hubs, c_H,tot, and driver molecules, c_D,tot. (In our notation, a capital index (e.g. H, D) indicates the total concentration while a lowercase index (e.g. h, d) indicates the monomer state.) The free energy of the dense state (denoted with the subscript L = "liquid") has a similar form (Eq. S2), where ε_i is the energy to transfer species i from the dilute, vapor (V) state to the liquid (L) state (e.g. ε_h is the energy to transfer the hub from the vapor to the liquid state).

Chemical and Partitioning Equilibrium

Minimizing F_V and F_L with respect to the concentrations c_i, we find expressions for each of the species concentrations in the vapor phase (Eq. S3) and in the liquid phase (Eq. S4). The expressions for the monomers in the vapor phase can be rearranged to find expressions for the chemical potentials: (Eq.
S5) These expressions can be used with Eqs. S3 and S4 to obtain the conditions for (unbound) monomer phase partitioning (Eq. S6). Similarly, the chemical potentials can be substituted into the expressions for the n-mer concentrations (Eq. S7), where K_n,V is the dissociation constant for the formation of n-mers in the dilute (vapor) phase. Similarly, in the dense (liquid) phase we find the analogous relations (Eqs. S8 and S9). Note that if the transfer free energy of an n-mer is given by the sum of its parts, such that ε_n = ε_h + nε_d, then the dissociation constants are identical in each phase. Therefore, Δ_n = ε_n − ε_h − nε_d is a parameter that describes how oligomerization (hub-driver binding) modifies the interaction with the fluid (driver-only fluid). However, if the bonding constraints within the n-mer perturb the interactions with the surrounding phase, the oligomerization equilibrium will be different between the phases (Δ_n will be non-zero). The above formulation ensures that binding equilibria are satisfied in each phase and that the chemical potentials for each species are equal across phases. Our next task is to determine a condition for the onset of phase separation. At the onset, we can consider the dense phase to be infinitesimally small, so that all of the proteins are in the dilute (vapor) phase. Thus, c_H,V and c_D,V are equal to the total protein concentrations, which can be used with Eq. S7 to determine the monomer concentrations c_h,V and c_d,V (as described below). These concentrations, in turn, can be used with Eq. S8 to find the monomer concentrations in the infinitesimal droplet. Next, these concentrations are used with Eq. S9 to find the concentrations of n-mers in the dense phase.

Saturated Solution Condition

To assess whether these concentrations represent a subsaturated or supersaturated solution, we examine the total concentration of the driver molecules in the dense phase.
When polymers phase separate, the mass concentration of the dense phase is insensitive to the molecular weight (or, equivalently, the polymerization number) of the individual molecules (57). This is because the microscopic interactions and the mesh structure of the phase are both much smaller than the molecules. In agreement with this expectation, the concentration of UBQLN2 molecules in the dense phase is nearly constant regardless of whether the UBQLN2 are monomers or oligomerized by a hub (8). Therefore, we adopt, as the criterion for phase separation, the condition that the total concentration of UBQLN2 in the dense phase is equal to the concentration of the UBQLN2-only fluid (i.e. in a UBQLN2-only phase-separating solution) (Eq. S10), where the total concentration c_D,L is taken to be a constant: c_D,L = 10 mM for 450C and c_D,L = 2 mM for full-length UBQLN2 (8). An important insight from Eq. S10 is that hub molecules facilitate phase separation by lowering the chemical potential of driver molecules in the dense phase. This is most easily seen by examining the unbound-driver chemical potentials in the two limiting cases:

• In a subsaturated solution, the UBQLN2 concentrations will add up to less than the pure UBQLN2 fluid: c_d,L + Σ_n n·c_n,L < c_D,L. In this case, the dense phase will collapse to fill the voids and optimize the UBQLN2-UBQLN2 contacts. After this collapse, the monomer (unbound driver) concentration in the dense phase will increase such that the chemical potential in the dense phase is greater than in the dilute phase: ln(c_d,V/c_0) < ln(c_d,L/c_0) + ε_d. This will drive the monomers (unbound drivers) to leave the dense phase, causing it to shrink.

• Conversely, in a supersaturated solution we would find c_d,L + Σ_n n·c_n,L > c_D,L, implying that the driver molecules are packed too close together. In this case the dense phase will expand, lowering the concentration of monomers.
Again comparing the resulting chemical potentials of the monomers, we find ln(c_d,V/c_0) > ln(c_d,L/c_0) + ε_d. Therefore, the condition for the onset of phase separation is when Eq. S10 is satisfied and the monomer concentrations satisfy Eq. S11, which is equivalent to Eq. S6.

Temperature Dependence of Phase Separation

The temperature dependence of ε_d is modeled by fitting the experimental cloud-point temperatures to a quadratic function (Eq. S13). The best-fit parameters are shown in Fig. S10. Combining this with our previous result (Eq. S12), we obtain Eq. S14. Next, we express our condition for phase separation in terms of ε_d (Eq. S15). Using Eqs. S6 and S9, this becomes Eq. S16. Finally, to reduce the number of free parameters in the model, we make the approximation that Δ_n is independent of n (Δ_n = Δ) and obtain Eq. S17. We refer to the quantity ε_h + Δ as the "inclusion energy". It accounts for two detrimental effects of transferring an n-mer to the dense phase. The first is the repulsive solvation energy of the hub, and the second is the constraints preventing the bound drivers from attaining their optimal interactions with the fluid. This inclusion energy is the only free parameter in Eq. S17.
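As a rough numerical illustration of the mass-balance step described above, the sketch below solves for the unbound (monomer) hub and driver concentrations in the dilute phase. It assumes N independent, identical binding sites governed by a single dissociation constant Kd; this is a simplification for illustration (the theory above allows a distinct K_n for each n-mer), and all numbers are hypothetical.

```python
# Sketch: solve the dilute-phase binding equilibrium for unbound monomer
# concentrations, given total hub (c_H) and driver (c_D) concentrations.
# Assumes N independent, identical sites with one Kd -- a simplification
# relative to the theory above, which allows a distinct K_n per n-mer.
from scipy.optimize import brentq

def solve_monomers(c_H, c_D, Kd, N=4):
    """Return (c_h, c_d) satisfying both hub and driver mass balances."""
    def driver_balance(c_d):
        # Independent sites: hubs distribute binomially over occupancy
        # states, so total hubs = c_h * (1 + c_d/Kd)**N, and the bound
        # driver concentration is N*c_h*(c_d/Kd)*(1 + c_d/Kd)**(N-1).
        c_h = c_H / (1.0 + c_d / Kd) ** N
        bound = N * c_h * (c_d / Kd) * (1.0 + c_d / Kd) ** (N - 1)
        return c_d + bound - c_D
    # driver_balance is negative at 0 and positive at c_D, so a root exists.
    c_d = brentq(driver_balance, 0.0, c_D)
    c_h = c_H / (1.0 + c_d / Kd) ** N
    return c_h, c_d

# Illustrative, hypothetical numbers (micromolar scale):
c_h, c_d = solve_monomers(c_H=10.0, c_D=100.0, Kd=50.0, N=4)
```

In the full theory, these monomer concentrations would then be transferred to the dense phase (Eq. S8) and combined with the dense-phase dissociation constants (Eq. S9) to test the saturation condition (Eq. S10).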
Experimental Analysis of Semi-Open Impeller Pump as Turbine

Pump as turbine is one of the options for low-cost and simple electric power generation suitable for rural areas. Pump as turbine faces the same problem as any other hydropower plant, which is low efficiency. Researchers have focused on modification of the impeller, such as rounding, trimming, and surface smoothing. Most of the modified impellers were of the closed type. The objective of this research was to investigate the possibility of generating more electric power from a PAT by using a semi-open impeller. Simulation analysis was performed to analyze the fluid flow in the pump housing. The simulation results show that the fluid velocity for the open, semi-open and closed impellers is 646.87, 641.76, and 639.89 m/s, respectively, while the pressure for the open, semi-open and closed impellers is approximately 356, 361, and 388 kPa, respectively. The conclusion is that the open impeller had a higher velocity and lower pressure than the other two. In the experimental testing, a semi-open impeller was selected because it has a stronger and more rigid body when it encounters the water impact. Three semi-open impellers with blade thicknesses of 2, 4 and 6 mm, made of brass, were manufactured and tested. A closed impeller, which is the original impeller of the pump, was also tested as a comparison. It was found that the semi-open impeller with 2 mm blade thickness generated approximately 36% more electric power than the original impeller.

Introduction

Pump as Turbine (PAT) is a renewable power plant concept that uses a commercial pump, with the flow reversed, to generate electrical power [1] and the pump motor operated as a generator [2]. The impeller is one of the essential parts of a PAT and strongly affects PAT performance, because it converts the water flow into rotation of the generator shaft. Several studies have been performed to improve PAT performance by modifying [3][4], re-designing [5], and re-manufacturing the impeller.
Impeller trimming and rounding have been proven to improve PAT performance [6][7][8]. There are three types of impeller: open, semi-open, and closed. Commercial centrifugal pumps are supplied only with closed impellers; thus, most impeller modifications have been implemented on closed impellers. The objective of this research is to analyze the feasibility of improving the efficiency of a PAT by using a semi-open impeller. Before the experiment, a simulation analysis of the three impeller types (open, semi-open, and closed) was executed. Simulation Analysis In this research, a Grundfos NF 30-18 was used as a PAT. The model of the impeller was developed based on the dimensions of the pump housing, as shown in Figure 1. Figure 2(b) shows that the water velocity decreases as the flow passes through the housing and rotates the impeller. Table 1 shows the pressure and velocity values at points A to F. Figure 5 shows the maximum values of (a) pressure and (b) velocity for each impeller. It can be seen that the highest velocity occurs in the open impeller, which suggests that it will generate the most power. On the contrary, the closed impeller, which is mostly used in commercial pumps, has the lowest velocity and the highest pressure. Experimental Setup According to the simulation results, the open impeller should produce the most power. However, considering blade strength and rigidity under water impact, a semi-open impeller is the better choice: the open impeller is easily deformed when receiving the water pressure load. Three semi-open impellers made of brass, with blade thicknesses of 2, 4, and 6 mm, were manufactured, as shown in Figure 6. The original impeller of the pump, which is a closed impeller, was also tested in the experiment as a comparison. The experiments were carried out on a PAT laboratory-scale test facility [9], as shown in Figure 7(a). Figure 7(b) shows the semi-open impeller installed in the pump before testing.
Figure 8 shows the electric power generated by each impeller. More power is generated as the water flow rate, controlled by the valve opening, increases. At the maximum valve opening (90°), the closed impeller produced 49.65 W, while the semi-open impellers with blade thicknesses of 2, 4, and 6 mm produced 67.65 W, 63.43 W, and 61.37 W, respectively. The semi-open impeller with a 2 mm blade thickness therefore generated approximately 36% more electric power than the closed impeller (the original impeller of the pump), while the 4 mm and 6 mm impellers generated approximately 28% and 24% more power, respectively. From the simulation, it can be concluded that an open impeller will produce more power. However, considering the strength and rigidity of the blade, an open impeller is more fragile; therefore, the semi-open impeller was chosen to be manufactured and tested. Experimental Result Three semi-open impellers with blade thicknesses of 2 mm, 4 mm, and 6 mm were tested. The experimental results show that the semi-open impellers generated more power than the closed impeller: the 2 mm blade produced approximately 36% more electric power than the closed impeller, while the 4 mm and 6 mm blades produced approximately 28% and 24% more, respectively. It can be concluded that the thinner the blade, the more power is generated. However, blade deformation must be considered for long-term usage.
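The percentage gains quoted above follow directly from the reported wattages at the maximum valve opening. A minimal sketch of the arithmetic (measured values taken from the text; the rounding to whole percent is ours):

```python
# Relative power gain of each semi-open impeller over the original
# closed impeller, from the wattages reported at maximum valve opening.
P_CLOSED = 49.65  # W, original closed impeller

SEMI_OPEN = {"2 mm": 67.65, "4 mm": 63.43, "6 mm": 61.37}  # blade thickness -> W

def percent_gain(power_w, baseline_w=P_CLOSED):
    """Power increase relative to the baseline impeller, in percent."""
    return (power_w - baseline_w) / baseline_w * 100.0

for thickness, power in SEMI_OPEN.items():
    print(f"{thickness}: +{percent_gain(power):.0f}%")  # 36%, 28%, 24%
```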
2020-07-23T09:06:30.072Z
2020-07-21T00:00:00.000
{ "year": 2020, "sha1": "baf58fb29fc7579a22b79e11b7078b7489beb2b3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/852/1/012069", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "53efbbadec172b58c99e068d66f15cb15861bbfd", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
196811677
pes2o/s2orc
v3-fos-license
Silk fibroin-derived polypeptides additives to promote hydroxyapatite nucleation in dense collagen hydrogels Silk fibroin-derived polypeptides (FDPs) are polypeptides resulting from the enzymatic separation of the hydrophobic crystalline (Cp) and hydrophilic, electronegative amorphous (Cs) components of silk fibroin (SF). The role of these polypeptides in promoting the nucleation of hydroxyapatite (HA) has been previously investigated, yet it is still not fully understood. Here we study the potential for HA mineralization via FDPs incorporated at 1:10, 1:2 and 1:1 ratios in a plastically compressed (PC), dense collagen (DC) scaffold. Scaffolds were immersed in simulated body fluid (SBF) at physiological conditions (pH 7.4, 37 °C) to promote biomineralization. The ability of Cs and Cp to promote HA nucleation was investigated at different time points and compared to pure DC scaffolds. Characterization of Cs and Cp fragments using Liquid Chromatography-Mass Spectrometry (LCMS) showed little difference in the amino acid composition of the FDPs. Results obtained in vitro using Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Scanning Electron Microscopy (SEM), X-Ray Diffraction (XRD) and mass analysis showed little difference between scaffolds incorporating Cs or Cp and plain DC hydrogels. These results demonstrate that incorporation of silk FDPs is not yet suitable for promoting HA nucleation in vivo without further refinement of the collagen-FDP system. Introduction Native bone tissue is composed of organic collagen fibrils and inorganic hydroxyapatite (HA) crystals arranged in a strict hierarchical structure. The collagen fibrils provide a template for HA, with crystals nucleating and growing within the gaps of collagen [1,2]. While the nucleation of HA crystals occurs within collagen fibrils, collagen itself does not take an active role in HA nucleation, and only a small amount of apatite will form after a long period of time [3,4].
Instead, it is the non-collagenous proteins (NCPs) that are responsible for HA formation [5]. In the literature, it is demonstrated that NCPs having calcium-binding and HA-nucleating properties contain glutamic acid [6][7][8]. Glutamic acid has the ability to bind Ca²⁺ ions to negatively charged carboxylate groups, which in turn attract PO₄³⁻ ions and increase the local concentration past the point of supersaturation [9], with a small amount (4-8 wt% [10]) of carbonate ions remaining within the HA crystal. This supersaturation reaches a critical level, and leads to nucleation sites for HA crystals [8,11,12]. However, without the NCPs, the mineralization of collagen generally does not proceed [1]. As a consequence, collagen scaffolds used in bone implants without mineralization require more than 2-3 weeks for osteointegration [13]. Currently, an increasing number of bioactive materials are being developed to replace NCPs and promote the nucleation of HA and the mineralization of collagen for bone defect repair [2,[13][14][15]. Silk is a natural protein fiber that is used by certain insects such as spiders and silkworms (Bombyx mori), with the former using it to make webs and the latter to form cocoons. Though spider silk is known for its outstanding mechanical properties [16], Bombyx mori silk from domesticated silkworms has the advantage of being available in higher quantities [17]. The latter is composed of two main proteins, sericin and fibroin. The silk fibroin (SF) is the core structural component of silk, which exhibits high tensile strength [18], while the sericin surrounds the fibroin and acts as a glue, or "gum" [19]. Due to its ease of processing, biodegradability and high tensile strength, silk fibroin is especially attractive for biomedical applications [20][21][22].
In particular, due to the high concentration of glutamic acid in SF [23], it has been proposed that nucleating HA on SF is possible in vitro, and SF is considered a promising material for repairing bone defects [14,24,25]. As SF is composed of peptides that are hydrophilic (Cs) and hydrophobic (Cp), it is fairly straightforward to separate SF into its constituents by initially degumming the Bombyx mori silk (removing the sericin/silk gum by cleaving its peptide bonds) and then hydrolyzing the SF [26]. Then, by dissolving the SF with α-chymotrypsin (a digestive enzyme that breaks down proteins into polypeptides) and centrifuging the resultant, the supernatant, Cs, and the precipitate, Cp, can be separated [27]. Once separated, it has been shown that Cs and Cp have different compositions, with Cs having more anionic amino acids than Cp [23]. Furthermore, the Cs component of SF has excellent biomineralization properties in vitro. DC hydrogels that incorporate Cs have been shown to form HA under physiological temperature and pH when immersed in simulated body fluid (SBF), unlike Cp-containing DC gels, while Cp itself showed almost no mineralization [27]. It is thought that the high concentration of glutamic acid (Glu) in Cs [23] allows it to play a role similar to calcium-binding non-collagenous proteins (NCPs), which act as modulators for the nucleation and growth of HA nanocrystals [28][29][30] by attracting more Ca²⁺ ions, as well as providing more nucleation sites for HA crystals [31]. Found in bone, they play an important role in bone mineralization by acting as HA nucleation centers [43]. Additionally, plastic compression (PC) removes excess water and forms a dense collagen (DC) hydrogel, in which the collagen fibril density (CFD) increases from <0.5 wt.% to approximately 8 wt.% [44]. This results in a scaffold that closely mimics collagen in native bone tissue [32,33,36,44] with a greater potential to nucleate HA.
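The jump in collagen fibril density quoted above implies that nearly all of the gel's mass is expelled as water during plastic compression. A back-of-the-envelope sketch, assuming all collagen is retained and only water leaves:

```python
def mass_fraction_expelled(cfd_initial, cfd_final):
    """Fraction of the hydrogel's initial mass expelled as water when the
    collagen fibril density rises from cfd_initial to cfd_final
    (both as weight fractions), assuming the collagen mass is conserved."""
    return 1.0 - cfd_initial / cfd_final

# ~0.5 wt.% before plastic compression, ~8 wt.% after (values from the text)
print(f"{mass_fraction_expelled(0.005, 0.08):.1%} of the gel mass leaves as water")  # 93.8%
```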
The objective was to obtain a hydrogel that has the composite structure of flexible collagen and tough HA, which gives bone its requisite strength and high fracture toughness [45][46][47]. Thus, in this paper we study the potential for including FDPs in DC scaffolds for bone tissue engineering (BTE). Preparation of hydrogels and bioactive additives Silk fibroin polypeptides. Soluble silk fibroin-derived peptides were produced from raw Bombyx mori (silkworm) silk provided by Stazione Sperimentale per la Seta (Milan, Italy). The silk sericin was degummed and the fibroins were cleaved and separated into their soluble (Cs) and non-soluble (Cp) phases, using a method reported in the literature [27,48,49]. Silk fibroin fibers were produced through a heating cycle that removed the silk sericin. The resulting fibroin was dissolved in a concentrated aqueous solution of LiBr (Sigma) [50,51], LiBr being a chaotropic salt that disrupts the bonds in protein molecules [52,53]. The dissolved fibroin, containing both α-helical and β-folded structures [51], was heated, then separated using dialysis to produce a 2% silk fibroin solution [48]. The resulting silk fibroin was then diluted with a 0.1 M ammonium bicarbonate solution (NH₄HCO₃, Fisher Scientific) and enzymatically digested with α-chymotrypsin (Sigma) at an enzyme-to-substrate ratio of 1:100 to generate Cp and Cs in solution; the digest was incubated at 37 °C for 24 hours and then centrifuged to separate the supernatant (containing Cs) from the pellet (containing Cp). Both the supernatant and the pellet were freeze-dried to produce Cs and Cp [27]. Plastically compressed dense collagen hydrogels. A solution of 13:1 10X Dulbecco's Modified Eagle Medium (DMEM) (Gibco) to 5N NaOH (Fisher Scientific, Sodium Hydroxide Solution 5N) was created. A 5.83 mg/mL bovine dermis (BD) collagen solution (Devro Medical, Purified Soluble Collagen) was added at 4:1 by volume to the DMEM.
The pH was adjusted to 7.4 by adding NaOH while the solution was kept at ~4 °C using an ice pack or a box filled with ice [27,36]. To produce dense collagen (DC) gels, a 48-well plate (10.5 mm diameter per well) was filled with 1 mL of the above solution (DMEM, NaOH and collagen) and, similar to a previously established protocol [27], left in an incubator (Thermo Scientific, Forma Series II) at 37 °C for 30 minutes. Collagen gels were then subjected to a plastic compression method [27,33,34,36] by applying 1 kPa (40 g per 350 mm²; 160 g for four gels) for five minutes to collagen gels placed between two nylon meshes, on top of a steel mesh and blot paper (the latter to collect expelled water). The load was applied to expel water and retain collagen, producing DC gels. Hydrogels containing an FDP additive (Cs or Cp) were created following the previously outlined protocol [27]. The process for creating DC hydrogels containing Cs (DC-Cs) and Cp (DC-Cp) was identical to that outlined above, with an interim step of mixing the additive into the DMEM and then ultrasonicating the solution prior to adding NaOH and collagen. Collagen was added to the DMEM at 4:1, and FDP-to-polymer ratios of 1:10, 1:2 and 1:1 were used to determine the effect of FDPs on the biomineralization of collagen gels, with pure collagen gels used as the control (see Table 1). After adding collagen, the solution was magnetically stirred to ensure a homogeneous solution. Mineralization of hydroxyapatite within additive-incorporated hydrogels Both Cs and Cp were immersed in SBF at a 1:3 ratio (3 mg/mL) and placed in an incubator for 24 hours before being removed and analysed. The resulting gels were immersed in SBF (pH 7.4, 37 °C) for up to two weeks, using a hydrogel-to-SBF ratio of 1:3 (mg/mL). The solution was replaced with fresh SBF at two- or three-day intervals. Samples were taken at days 0, 3, 7, 10 and 14. The results were compared to gels that had no additives incorporated.
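The stated compression load can be sanity-checked: 40 g resting on 350 mm² corresponds to roughly 1.1 kPa, consistent with the nominal 1 kPa in the protocol. A quick check:

```python
G = 9.81  # m/s^2, standard gravity

def dead_weight_stress_kpa(mass_g, area_mm2):
    """Compressive stress (kPa) applied by a dead weight of mass_g grams
    resting on an area of area_mm2 square millimetres."""
    force_n = (mass_g / 1000.0) * G
    area_m2 = area_mm2 * 1e-6
    return force_n / area_m2 / 1000.0

# 40 g per 350 mm^2 per gel, as stated in the protocol
print(f"{dead_weight_stress_kpa(40, 350):.2f} kPa")  # ~1.12 kPa, i.e. about 1 kPa
```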
HA nucleation in bone occurs within the gaps of collagen fibrils [48], though for scaffolds constructed in vitro, a high concentration of carboxyl groups was shown to lead to HA nucleation. Charged amino acids act as nucleation sites, where calcium ions are gathered through electrostatic attraction; these then attract phosphate ions until a critical concentration is reached, leading to HA formation [49][50][51][52]. The nucleation of HA is thus expected to occur around the charged Cs particles within the collagen scaffold. Liquid chromatography-mass spectrometry Liquid Chromatography-Mass Spectrometry (LCMS) was carried out to determine the amino acid composition of the silk fibroin-derived polypeptides on a QTOF instrument (Agilent). LCMS was carried out at the IRIC-Université de Montréal Proteomics facilities. The results were analysed in Microsoft Excel. Particle size analysis Particle Size Analysis (PSA) (Horiba LA-920) was performed on Cs and Cp to determine the size distribution. Isopropanol was used to create a solution that was then ultrasonicated before analysis. A refractive index of n_D²² = 1.55 (n_D²² = 1.13 in isopropanol) was used to calculate the size distribution, based on values obtained from the literature [54,55]. Fourier transform infrared spectroscopy Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR) (Perkin Elmer, Spectrum 400) was used to characterize the mineralization that occurred in the sample, as well as any changes in the chemical composition of the sample itself. The bands associated with PO₄³⁻, indicative of the presence of HA, were compared between timepoints and samples to monitor HA mineralization [27,56].
[Table 1. Solution of 10 mL collagen with additives (added prior to plastic compression); columns: mass of collagen (mg), mass of Cs/Cp (mg), mass of polymer and additive in solution (mg).]
Additionally, IR is sensitive to the substitution of PO₄³⁻ ions by CO₃²⁻ ions, and can detect the presence of small amounts of carbonate, indicating the formation of carbonated hydroxyapatite [57]. Samples were prepared for ATR-FTIR analysis by first washing in dH₂O to remove any excess SBF, then freezing in liquid nitrogen, then freeze-drying (lyophilizing) the sample until all the excess moisture had been removed. The sample was then analysed at a resolution of 2 cm⁻¹ in the IR range of 4000 to 650 cm⁻¹ at 32 scans per sample. The resulting spectra were normalized against the amide I band found between 1800 and 1650 cm⁻¹ for comparison (Spectrum software, Perkin-Elmer) [36,58]. X-Ray diffraction X-Ray Diffraction (XRD) (Bruker D8, Bruker-AXS Corp.) was performed on Cp and Cs fragments, as well as on freeze-dried hydrogels that had been immersed in SBF. Samples were compressed and taped onto glass slides to produce a flat, fixed surface, and XRD analysis was performed using a Cu-Kα source. XRD patterns were recorded from 3° to 104° 2θ at 40 kV and 40 mA. Four frames of 25° were recorded for 150 seconds each and then merged during data post-processing. The resulting patterns were analysed (EVA 14.0.0.0, Bruker) and compared with spectra with peaks identified in the International Centre for Diffraction Data (ICDD) database. Statistical analysis The data were analyzed for statistical significance using a one-way ANOVA with a significance level of 0.05. Tukey and Holm-Bonferroni methods were used for comparison (Origin Pro v9.0 software, OriginLab). Materials characterization Characterization of the starting materials (collagen, Cs and Cp) was carried out using ATR-FTIR to determine their chemical composition, which can serve as a reference (control) for future tests. While the ATR-FTIR spectra of Cp and Cs are quite similar, there are slight differences, such as the shoulder at 1695 cm⁻¹ in Cp and the amide I band at 1650 cm⁻¹ for Cs.
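Peak positions in the recorded 2θ range map to lattice spacings through Bragg's law; for example, the strongest HA reflection, (211), sits near 2θ ≈ 31.8° under Cu-Kα radiation. A minimal sketch (the wavelength and peak position are standard reference values, not taken from this paper):

```python
import math

CU_K_ALPHA = 1.5406  # Å, Cu-Kα1 wavelength

def d_spacing(two_theta_deg, wavelength_angstrom=CU_K_ALPHA):
    """Bragg's law, n = 1: d = λ / (2 sin θ), with 2θ given in degrees."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta_rad))

# Hydroxyapatite (211) reflection near 2θ ≈ 31.8° under Cu-Kα
print(f"d ≈ {d_spacing(31.8):.2f} Å")  # ≈ 2.81 Å
```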
This can be attributed to Cp polypeptides being composed of the crystalline regions of silk fibroin and having a β-sheet type structure, while Cs has an α-helix type structure [16,23]. The spectra for Cp (see Fig 1A) exhibit a shoulder on the amide I band at 1695 cm⁻¹ that is typically associated with a β-sheet structure of SF, as well as amide I, II and III absorptions at 1622, 1515 and 1230 cm⁻¹ that are characteristic of SF [27,59]. The spectra for Cs (see Fig 1B) differed slightly in the amide I band at 1650 cm⁻¹, typical of an α-helix type structure, though they still have the amide II and III bands at 1530-1515 cm⁻¹ and 1239 cm⁻¹, respectively [27]. XRD patterns for solubilized Cs and Cp fragments (see Fig 1B) revealed a crystalline structure in Cp that is similar to XRD patterns of SF reported elsewhere [60], while Cs fragments produced patterns confirming their amorphous nature. SEM images of Cs and Cp also indicate that Cp has a crystalline structure, while Cs has an amorphous structure (see Fig 2). These results confirm that the structure of SF is composed of both the crystalline phase of Cp and the amorphous phase of Cs. The size distribution of Cp and Cs particles indicates that Cp is significantly larger than Cs (see Fig 3). The mean particle size of Cp is 91.02 μm, while that of Cs is only 22.46 μm. The difference in size can be attributed to Cp having a molecular weight 4-20 times higher than Cs [27], but is more likely due to the structure of Cp being semi-crystalline, whereas Cs is amorphous [61][62][63][64]. The α-chymotrypsin used to separate the two phases attacks the amorphous Cs phase while not affecting the crystalline Cp, resulting in larger crystalline Cp and smaller fragments of Cs [22,64,65]. The LCMS results contrast with the previously assumed composition of the silk fibroin-derived polypeptides (see Table 2). LCMS showed that Cs and Cp have a similar composition to silk fibroin.
The composition of Cp is similar to values found previously [23]. In contrast, Cs has a slightly more varied composition, though it has a lower concentration of sequences rich in aspartic acid and glutamic acid residues compared to similar results reported in the literature [23,55]. The values obtained from LCMS match the structure seen previously, with the glycine amino acids alternating with other amino acids (except for one Ala-Ala link) [66]. The NCBI database for the amino acid composition of silk fibroin polypeptides, which are composed of a mixture of Heavy Chains (HC) (NCBI Reference Sequence: NP_001106733.1) and Light Chains (LC) (NCBI Reference Sequence: NP_001037488.1), shows that Cs and Cp have a similar composition to silk fibroin as well, though with a lower concentration of glutamic acid (HC: 0.57%, LC: 1.91%) and aspartic acid (HC: 0.48%, LC: 6.49%). Regarding the charge of the FDPs, tabulating the LCMS results (see Table 3) shows that the FDPs are composed overwhelmingly of neutral amino acids (97.6% and 96.8% for Cp and Cs, respectively). Compared to the charge calculated from the composition of the FDPs obtained from the literature [55], where 7.6% of the amino acids are negatively charged, there is a significant difference in the charge of Cs, as only 1.5% of the amino acids within the samples examined are negatively charged. It is therefore unlikely that the Cs fragments incorporated into the hydrogel are attracted to the collagen, which is cationic in nature [27], as was previously assumed. The SBF that was used to imitate a physiological environment for in vitro testing was analysed using ion chromatography. The results were compared to the theoretical composition of the ionic species present in solution, as well as to a commercial SBF solution (Hank's Balanced Salt Solution, HBSS [67]) and extracellular fluid (ECF), specifically plasma [68].
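The neutral/charged breakdown quoted above is simply a sum over residue classes. A sketch of the tabulation, using a hypothetical Cs composition in which only the charged residues are spelled out (the individual values are illustrative, chosen only to reproduce the ~1.5% negative and 96.8% neutral totals stated in the text, not measured data):

```python
# Side-chain charge classes at physiological pH
NEGATIVE = {"Asp", "Glu"}
POSITIVE = {"Lys", "Arg", "His"}

def charge_summary(composition):
    """composition: {residue: mol %}. Returns (negative %, positive %, neutral %)."""
    neg = sum(v for k, v in composition.items() if k in NEGATIVE)
    pos = sum(v for k, v in composition.items() if k in POSITIVE)
    neu = sum(v for k, v in composition.items() if k not in NEGATIVE | POSITIVE)
    return neg, pos, neu

# Hypothetical Cs composition (mol %); "other" lumps all neutral residues together
cs = {"Asp": 0.8, "Glu": 0.7, "Lys": 0.6, "Arg": 0.6, "His": 0.5, "other": 96.8}
neg, pos, neu = charge_summary(cs)
print(f"negative {neg:.1f}%, positive {pos:.1f}%, neutral {neu:.1f}%")
```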
The carbonate present in the SBF could not be measured due to on-column neutralisation of CO₃²⁻ to HCO₃⁻ by dissolved CO₂. Polypeptides (Cs and Cp) immersed in SBF at 37 °C showed aggregation within a day. The HA aggregate was confirmed by XRD analysis (see Fig 4). While HA can exist in different forms depending on the conditions of its formation [69], the method of nucleation remains the same. It is known that HA forms via an intermediary amorphous calcium phosphate, that HA crystals exhibit a "polycrystalline character of the elemental particles," and that nucleation is favoured [70], which would indicate that the form HA takes is determined by the nucleation conditions. The resulting XRD pattern showed peaks that matched the corresponding pattern for HA (ICDD file 00-009-0432), indicating that both polypeptides lead to the nucleation of HA at physiological conditions. Given the similar composition of Cs and Cp (see Table 2), it is to be expected that both would nucleate HA. Previous work has shown that the presence of acid-rich proteins (specifically aspartic and glutamic acid) led to biomineralization in vitro [71,72] via epitactic nucleation, whereas heterogeneous nucleation occurs via the formation of critical nuclei on a surface [73]. The XRD measurements of Cs and Cp immersed in SBF result in a diffraction pattern identical to that arising from HA. Hydroxyapatite nucleation investigation Hydrogels incorporating Cs and Cp at a 1:10 additive-to-collagen ratio (by mass) were created as stated. SEM imaging was performed as described above. The results for DC, 1:10 DC-Cp and 1:10 DC-Cs gels after seven days of immersion in Kokubo's SBF were analysed to visually determine whether HA nucleation occurred. SEM reveals no major difference in the morphology of the as-made hydrogels, whether DC, 1:10 DC-Cs or 1:10 DC-Cp (see Fig 5).
Particles that differed were seen in the DC and 1:10 DC-Cs gels at day 7, though there were no signs of particle nucleation in the 1:10 DC-Cp gels. In the DC/1:10 DC-Cs gels, the particles were located in small clumps of 1-20 particles randomly spaced across the surface of the hydrogel; most of the hydrogel surface remained bare. No evidence of Cs is observed in the hydrogels via SEM at day 7, indicating that electrostatic interactions between collagen and the hydrophilic FDP [27] do not last. Spectroscopic and XRD analyses were conducted on DC hydrogels, both with and without additives incorporated, as described above. The resulting ATR-FTIR spectra showed that bands indicative of the presence of HA, ν₃ PO₄³⁻ (1030 and 1080 cm⁻¹) [58,74,75], as well as the bands for ν₃ and ν₂ CO₃²⁻ (1450 and 1400 cm⁻¹, and 850 cm⁻¹, respectively) [58,74], were present in all samples post-immersion in SBF. The presence of type I collagen is confirmed by the bands at 1630, 1550 and 1240 cm⁻¹, corresponding to the amide I, II and III groups, respectively [58,76] (see Fig 6), present in all samples. The spectra obtained show that there was an increase in the ν₃ PO₄³⁻ band over the first 10 days compared to the initial band, indicating the occurrence of HA nucleation/growth. This was particularly apparent in DC and 1:10 DC-Cs gels (see Fig 6A and 6B), as well as in the 1:10 DC-Cp gels (see Fig 6C). The spectra indicated that all hydrogels showed the presence of phosphate and carbonate groups, suggesting some HA nucleation occurs within the hydrogel. Comparing the peaks associated with phosphate and carbonate showed that the bands associated with phosphate increased with time (see Fig 7A), while those associated with carbonate did not (see Fig 7B), indicating that phosphate absorbance within the collagen increases over time. This change can be attributed to the mineralization of non-carbonated HA, and is consistent with previous findings [58].
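The band-ratio comparison described above (phosphate and carbonate bands normalized to amide I) can be sketched as a simple integration over fixed wavenumber windows. This is a generic illustration, not the authors' exact processing; the window limits are assumptions bracketing the band positions given in the text:

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Integrated absorbance between lo and hi cm^-1 (trapezoidal rule)."""
    m = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[m], absorbance[m]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def phosphate_to_amide_ratio(wavenumber, absorbance):
    """Ratio of the ν3 PO4(3-) region (window assumed 1000-1100 cm^-1,
    covering the 1030/1080 cm^-1 bands) to the amide I region
    (assumed 1650-1800 cm^-1, used as the normalization band)."""
    return (band_area(wavenumber, absorbance, 1000, 1100)
            / band_area(wavenumber, absorbance, 1650, 1800))

# Synthetic spectrum: one phosphate-like and one amide-I-like Gaussian band
wn = np.arange(650.0, 4000.0, 2.0)
spectrum = np.exp(-((wn - 1040) / 20) ** 2) + np.exp(-((wn - 1700) / 30) ** 2)
print(f"PO4/amide I ratio: {phosphate_to_amide_ratio(wn, spectrum):.2f}")
```

Doubling the phosphate band while holding amide I fixed doubles the ratio, which is the trend the paper tracks over immersion time.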
The hydrogels that had Cs incorporated did not show significantly higher values for the phosphate peak, contrary to earlier work [27], indicating that Cs does not serve to nucleate HA at a higher rate. The ratio of the phosphate and carbonate peaks to the amide I band remained nearly constant over time, indicating that little change takes place within the 1:10 DC-Cp or DC-Cs gels. It is likely that the FDPs distributed within the hydrogel lack the acidic amino acids, particularly glutamic acid, necessary for the nucleation of HA [71,72,77]. Additionally, the lack of electrostatic interaction between collagen and the mainly neutral FDP is also a barrier to HA formation, as such interaction was shown to be necessary for mineralization to occur [73,78]. The XRD patterns for DC hydrogels immersed in SBF showed that the formation of HA occurs in all samples and increases over time, though the extent of mineralization differs depending on whether it was a DC, DC-Cp or DC-Cs hydrogel (see Fig 8). XRD patterns for pure collagen hydrogels showed peaks at 2θ = 33° and 45° in all samples, which are associated with HA [79]. The latter also exhibit a peak at 2θ = 23° in an otherwise amorphous section of the pattern. As collagen is an amorphous material with no defined peaks, and the 2θ = 23° peak does not match any peaks associated with HA, the presence of this peak has been attributed to the
[Fig 7. Ratio of A) ν₃ PO₄³⁻ and B) ν₃ CO₃²⁻ bands to the amide I band, from ATR-FTIR spectra of DC gels containing no additives, Cs and Cp (at a 1:10 ratio to collagen), immersed in SBF (SD, n = 4, p < 0.05).]
XRD patterns confirmed the nucleation of HA in all samples. The 1:10 DC-Cs hydrogels (see Fig 8B) initially showed a similar extent of mineralization to the DC and 1:10 DC-Cp gels on days 3 and 7. However, the XRD patterns for DC and DC-Cs hydrogels (see Fig 8A and 8B) showed much higher HA peaks at day 14, indicating a greater long-term extent of mineralization than in the DC-Cp hydrogels (see Fig 8C).
In particular, the peak at 2θ = 45° was not as apparent as in the other DC and DC-Cs hydrogels. In general, however, both the 1:10 DC-Cs and DC-Cp hydrogels showed that their incorporation led to HA nucleation, which can be explained by the fact that both Cs and Cp have similar amino acid compositions, as seen in the LCMS results, and the fact that both Cs and Cp lead to the nucleation of HA, as confirmed by the XRD pattern from the particles immersed in SBF (see Fig 4). The major difference between Cs and Cp is related to the high, narrow peak at 2θ = 23°. The 2θ = 23° peak is present in all samples, but its intensity is much greater in the DC and DC-Cs samples. The broad peaks in the DC-Cp samples are attributed to a relatively small crystal size [80][81][82] or to the poorly crystalline nature [83,84] of HA within the sample. The XRD results match the ATR-FTIR results, which show that FDPs do not have a significant impact on HA nucleation, due to the lack of the necessary acidic amino acids [71,72,77] or of the electrostatic interaction between collagen and FDP [73,78]. ATR-FTIR was also conducted on hydrogels with a higher amount of FDP additive (1:2 and 1:1) that were immersed in SBF for 3 and 7 days (see Fig 9). Of the resulting spectra for the different types of hydrogels, only those of DC-Cp showed that the bands attributed to ν₃ PO₄³⁻ (1030 and 1080 cm⁻¹) [58,74,75] increased with time, while the bands for ν₃ and ν₂ CO₃²⁻ (1450 and 1400 cm⁻¹, and 850 cm⁻¹, respectively) [58,74] remained relatively constant, similar to the results obtained for additives added at a 1:10 ratio. These results are contrary to the expectation that the inclusion of a greater amount of FDPs would lead to the increased HA nucleation reported previously [27].
The mass of freeze-dried DC hydrogels with different ratios of Cs and Cp (1:2 and 1:1 relative to collagen) before and after immersion in SBF was measured to determine if the addition of FDPs led to a change in mass due to HA nucleation over time. Previously [27], it was reported that DC and DC-Cs hydrogels with 1:10 Cs to collagen should result in hydrogels that are 5 wt.% and ~60 wt.% HA, respectively, by day 7 after immersion in SBF. However, the present results show that there is no significant difference between plastically compressed DC gels and DC gels with Cs incorporated (see Fig 10). A comparison of the masses of the DC, DC-Cs and DC-Cp hydrogels with the theoretical value of an as-made (day 0) gel with the same amount of additive incorporated is reported in Table 4. The mass of the DC gels after PC matched the theoretical mass of a gel of the same volume closely (−3.6%). The results demonstrate that the DC-Cs hydrogels have a significantly lower mass than their theoretical value, while the mass of the DC-Cp gels is close to its theoretical value. Furthermore, the mass deficit in the DC-Cs gels was similar to the amount of Cs added to the solution, and statistically, the difference in mass between the DC and DC-Cs gels is not significant. This indicates that Cs is expelled with the water inside the gel during the plastic compression of the highly hydrated hydrogel, and that the remaining scaffold is only collagen. Conclusions and perspectives We examined the in vitro interaction between silk fibroin-derived polypeptides (FDPs) and plastically compressed collagen hydrogels developed as scaffolds for bone tissue engineering. In vitro results show that immersing FDPs (both Cs and Cp) in simulated body fluid leads to the nucleation of hydroxyapatite (HA), yet incorporating these same FDPs within a dense collagen (DC) hydrogel does not contribute to mineralizing the collagen, contrary to what was previously reported [27].
Characterization of the polypeptides using Liquid Chromatography-Mass Spectrometry showed the presence of glutamic acid, which is important for promoting HA nucleation within the protein. Glutamic acid was present in a lower quantity than expected, especially for Cs, though this may be attributed to the different experimental techniques used to quantify the amino acid composition, or to its processing. The results indicate that Cs is less suitable for promoting biomineralization than was initially thought, though this may vary depending on the source and processing of the silk. Furthermore, mass analysis of DC hydrogels containing Cs (DC-Cs) over time indicated that the Cs fragments are immediately expelled during plastic compression (PC). The silk fibroin-derived polypeptides incorporated into a collagen hydrogel show little difference compared to a pure collagen hydrogel in promoting the nucleation of hydroxyapatite, and they cannot serve as a replacement for non-collagenous proteins (NCPs) in bone tissue engineering (BTE) without first refining the method of linking the polypeptide to the collagen fibril. Supporting information S1 File. ATR-FTIR dataset for silk FDPs. An uncompressed (highly hydrated) hydrogel is placed between two pieces of nylon mesh, and placed over a steel mesh and a paper towel (blot paper). A weight (glass plate) is placed on top of the hydrogel to produce a constant force (1 kPa) and held for 5 minutes. Water is expelled from the bottom, through the nylon and steel meshes, and soaked up by the blot paper. The hydrogel is plastically compressed into a dense collagen hydrogel, and is then removed from between the nylon meshes. (TIF) S2 Fig. ATR-FTIR spectra for bone (bovine) i) as-received and ii) after heat treatment.
The FTIR spectra show that bovine bone has the characteristic collagen peaks of the amide I, II and III bands at 1630, 1550 and 1240 cm-1 [58,76], while the presence of HA is seen in the large phosphate bands at 1030 and 1080 cm-1 [58,74,75]. The results are supported by those reported in the literature [85][86][87] for FTIR conducted on bone and on bone samples that have had the collagen removed. (TIF)

S3 Fig. XRD spectra for bone (bovine) i) as-received and ii) after heat treatment; iii) ICDD file 00-046-0905. XRD analysis shows that the bone samples match those reported in the literature [88,89]. Heat treatment reveals the band patterns of bone and calcined bone, with the peaks of the latter matching those of calcium-deficient hydroxyapatite (CDHA) (ICDD file 00-046-0905). (TIF)

S4 Fig. Analysis of Kokubo's SBF via IC compared to theoretical values for Kokubo's SBF and HBSS, as well as ECF (plasma). Ion chromatography of Kokubo's SBF shows that its composition is nearly identical to the calculated theoretical values, as well as to the composition of commercial SBF (Hank's Balanced Salt Solution, HBSS) [67]. In addition, it is also similar to the composition of ECF [42], with the exception of the amount of carbonate, which is taken to be much lower than that of ECF given the theoretical value. (TIF)

Author Contributions

Formal analysis: Imran Deen.
Effects of Lane Width, Lane Position and Edge Shoulder Width on Driving Behavior in Underground Urban Expressways: A Driving Simulator Study This study tested the effects of lane width, lane position and edge shoulder width on driving behavior for a three-lane underground urban expressway. A driving simulator was used with 24 volunteer test subjects. Five lane widths (2.85, 3.00, 3.25, 3.50, and 3.75 m) and three shoulder widths (0.50, 0.75, and 1.00 m) were studied. Driving speed, lane deviation and subjective perception of driving behavior were collected as performance measures. The results show that lane and shoulder width have significant effects on driving speed. Average driving speed increases from 60.01 km/h in the narrowest lane to 88.05 km/h in the widest lane. While both narrower lanes and shoulders result in reduced speed and lateral lane deviation, the effect of lane width is greater than that of shoulder width. When the lane and shoulder are narrow, drivers in the left or right lane tend to shy away from the tunnel wall, even encroaching into the neighboring middle lane. As the lane or shoulder gets wider, drivers tend to stay in the middle of the lane. An interesting finding is that although few participants acknowledged that lane position had any great bearing on their driving behaviors, the observed driving speed is statistically higher in the left lane than in the other two lanes when the lane width is narrow (2.85, 3 and 3.25 m lanes). These findings provide support for amending the current design specifications of urban underground roads, such as the relationship between design speed and lane width, the speed limit, and the combination of lanes.

Introduction

With the rapid development of urban traffic and the limited availability of land, the construction of underground urban expressways has increased over the past few years.
Many cities have built underground urban expressways, for example, the Willits City Bypass on California Route 101, the Kallang-Paya Lebar Expressway (KPE) in Singapore, and the Bund Bypass in Shanghai. Several studies have shown the benefits of underground roads, such as protecting the environment from traffic noise and pollution, saving land resources for other purposes, and reducing traffic at transport nodes and in central business districts [1,2]. Driving in a dark, narrow environment can cause drivers anxiety and uncertainty, and thereby result in accidents [3,4]. In this situation, most drivers reduce their speed and increase their lateral distance to the tunnel wall [5]. The risk of accidents in a tunnel is approximately half of that on an open road.

Participants

The study included 24 subjects: 12 male and 12 female. All the subjects were licensed drivers with normal or corrected-to-normal vision, between the ages of 23 and 51 (mean = 34 years, Standard Deviation (SD) = 8.6). Each participant had over three years of driving experience. Eighteen of the drivers had used the same driving simulator before; the other six had no driving simulator experience. To ensure the reliability of the experiment and avoid negativity from the participants, participants were asked to sign a consent form declaring that they would take the tests seriously and act as they do in real traffic conditions as much as possible.

Apparatus

The driving simulator of Tongji University, shown in Figure 1, is a motion-base simulator with eight degrees of freedom (DOF). A real car is placed in the middle of the experimental cabin as the test vehicle. A visual system with five projectors projects the road model on a spherical curtain. The simulator provides a 250° (horizontal) × 40° (vertical) forward view of the roadway from the driver's position. The performance of the motion system, feedback systems and cockpit is at an internationally advanced level.
The simulator is monitored by the controlling software (Scaner™ Studio v1.4 by OKTAL, Paris, France) for system control, scene creation, data acquisition, data analysis, etc. The software enables the creation of accurate models and a variety of scenarios. The validation of the Tongji simulator for researching underground road driving behavior has been verified in several published studies [14,29].
The geometry data of the bypass expressway in the study were taken from the actual design of a one-way three-lane underground urban expressway (the Bund Bypass, Shanghai, China), the length of which is about 3.6 km. It contained five horizontal curves, and the minimum curvature radius was 500 m. The minimum radius of vertical curve was 1200 m. The maximum longitudinal slope was 5%. The vehicle model was a regular four-door sedan, with a width of 1.8 m and a length of 5 m. To avoid interference with the driving behavior of the participant, no other vehicles were placed in the lane in which the test car was driving in any of the scenarios [31]. A free traffic flow of 550 pcu/h in the other lanes was set in the scenarios to create a realistic environment.

Variables Studied

The impact on the driving behavior of the participants was studied in terms of three independent road alignment variables: lane width, shoulder width, and lane position. According to urban expressway design specifications in China [32,33], five lane width values are suggested (2.85, 3.0, 3.25, 3.5 and 3.75 m) and three shoulder width values are suggested (0.5, 0.75 and 1.00 m). Scenarios with different combinations of these suggested lane width and shoulder width values were tested. Figure 2 shows snapshot examples of the simulation scenario.
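The tested combinations of the suggested lane and shoulder widths can be enumerated directly; a minimal sketch:

```python
from itertools import product

# Suggested values from the urban expressway design specifications [32,33]
lane_widths_m = [2.85, 3.00, 3.25, 3.50, 3.75]
shoulder_widths_m = [0.50, 0.75, 1.00]

# One simulator scenario per lane-width/shoulder-width combination
scenarios = list(product(lane_widths_m, shoulder_widths_m))
print(len(scenarios))  # 15 combinations
```

The 15 combinations match the counterbalanced design described later in the Discussion.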
Objective Data

In the experiment, driving speed (km/h) and lane deviation (m) of the vehicle were recorded by the driving simulator at a sampling frequency of 10 Hz. In this study, lane deviation is defined as the offset between the position of the vehicle centroid and the centerline of the lane.

Subjective Data

After driving, each participant completed a subjective questionnaire covering subject information such as age, gender and driving experience, as well as perception and assessment of the simulation. Subjective perceptions in the questions were weighted with a score of 1 to 3 based on the impact level of the independent variable: no impact, 1; slight impact, 2; significant impact, 3.

Experiment Design and Procedure

The experiment consisted of four parts: (1) general instructions; (2) a 15 min training session in the simulator; (3) driving in different scenarios, as presented in Table 1; and (4) a questionnaire survey. The 15 min training session allowed participants to become familiar with the operation, and effectively eliminated the differences between participants with and without driving simulator experience. Participants were also requested to maintain a reasonable and safe speed according to the road condition, and to stay in the same lane throughout each scenario.
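The lane deviation measure defined above reduces to a signed offset of the vehicle centroid from the lane centerline; a minimal sketch (the sign convention here is an assumption, since the paper's convention equations are not reproduced above):

```python
def lane_deviation(y_centroid_m, y_centerline_m):
    """Lane deviation: offset of the vehicle centroid from the lane
    centerline (m). Zero means the vehicle is centered in its lane;
    the sign indicates the side of the centerline (which side is
    positive is an assumption in this sketch)."""
    return y_centroid_m - y_centerline_m
```

For example, a centroid at 1.95 m against a centerline at 1.80 m gives a deviation of 0.15 m.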
Participants were asked to drive twice in each lane (left, middle and right) of each scenario. To alleviate simulator sickness, each driver took a three-minute rest after each scenario. Analysis of variance (ANOVA) with repeated measurements was used to analyze the effects of the cross-section factors of the underground expressway on driving behavior. To minimize the impact of learning effects in the simulator, the order of the test scenarios was randomized. In each scenario, the participant had 200 m to accelerate at the beginning and 100 m to decelerate at the end; data from these two sections were removed to exclude the starting and braking processes.

Speed Results

A summary of the average travel speed and SD for driving in different lanes in each scenario is presented in Table 2. The statistical results for driving speed (minimum, median, maximum, first and third quartiles) also show that driving speed was positively correlated with lane width. The reason might be that, with the decrease of perceived risk in wide lanes, drivers engage in more aggressive behaviors such as speeding. Interestingly, operational speeds were always over the limit. For instance, although the design speed of a road with 3 m wide lanes is often less than 40 km/h in the design specifications of many countries [33], the operational speeds were in most cases over 60 km/h. This is consistent with the findings in [34].

Effect of Lane Position on Speed

Results for the effect of lane position on speed, as presented in Table 2, differed from the lane width cases. From Table 2, in most cases, the average speed in the left lane was the highest, while the average speed in the middle lane was the lowest.
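The per-scenario speed summaries above (minimum, quartiles, median, maximum, mean and SD) can be computed with Python's standard statistics module; a minimal sketch on illustrative values, not study data:

```python
from statistics import mean, median, quantiles, stdev

def speed_summary(speeds_kmh):
    """Descriptive statistics of a speed trace, matching those reported
    per scenario (min, Q1, median, Q3, max), plus mean and sample SD."""
    s = sorted(speeds_kmh)
    q1, _, q3 = quantiles(s, n=4)  # quartiles
    return {"min": s[0], "q1": q1, "median": median(s), "q3": q3,
            "max": s[-1], "mean": mean(s), "sd": stdev(s)}

# Illustrative trace only
stats = speed_summary([58.0, 60.5, 61.0, 62.5, 65.0])
```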
The reason might be that drivers in the left and right lanes tended to encroach on the road shoulder when the lanes were narrow (they perceived themselves to be driving in lanes "with extended width"); however, owing to the confined space, drivers in the middle lane had no extra space to compensate for the lane width, so the speed in the middle lane was lower than in the other two lanes. If the lanes were wide enough, participants felt it easy and safe to drive confidently at a high speed in all three lanes, which is why the differences among the three lanes were not significant at large lane widths.

Effect of Shoulder Width on Speed

In general, shoulder width has a positive effect on speed; however, the significance of the effect varies across lanes. The change in speed (Table 2 and Figure 4) caused by changes in lane width was larger than that caused by changes in shoulder width. In other words, lane width had a greater effect on speed than shoulder width.

Table 3 lists the summary of lane deviation for tests in different lanes.

Effect of Lane Position on Lane Deviation

The effect of lane position on lane deviation was found to be large and significant (F = 216.17, p < 0.01). From Table 3, in general, the mean values of lane deviation were negative in the left and middle lanes and positive in the right lane.
This indicated that drivers tended to drive far away from the tunnel wall.

Subjective Evaluation

The subjective evaluation was conducted based on the perception of the participants, the study of which could help investigate the gap between perception and the real traffic environment. From the questionnaire survey results, lane width had a great effect on driving performance, with an average impact score of 2.85 (out of 3). Three quarters of the 24 participants found that shoulder width affected their decisions to maintain speed and lateral position (average score 2.25). The results on the effect of lane width and shoulder width show a high alignment of participant perception with the simulation data. However, the results surprisingly showed that fewer than 20% of the participants felt that lane position had a great effect on their driving maneuvers (average score 1.3). This is in contrast to the analysis of the objective data, where speed and lane deviation were found to be significantly affected by lane position. From the comparison between subjective evaluation and actual data, driving behavior did not always match the drivers' perception of the road.
Discussion

Contrary to past studies, which focused on only one factor, this study used a counterbalanced experimental design in order to consider all the key factors; it included 15 possible combinations of the three independent measures. In general, all three factors (lane width, shoulder width and lane position) significantly affected driving behavior in terms of speed and lane deviation. From the results, in the underground urban expressway, wider lanes and shoulders gave the drivers more freedom in lateral space and higher perceived and objective safety, and increased driving speed. In practice, operating speeds were found to be consistently over the speed limit, leading to an increase in accident risk. These findings are generally consistent with past studies focusing on regular urban expressways, indicating that the advanced simulator potentially provides an effective method for investigating the impact of road design specifications on driver behavior, especially in a more comprehensive way with consideration of multiple factors.
Meanwhile, by investigating the impact of multiple factors using the counterbalanced experimental design, specific results from different combination cases were extracted, and some interesting findings that have not been investigated in past studies can be drawn:

• With a narrow road shoulder, drivers in the right or left lane tended to drive far away from the wall, with some deviating into the neighboring lane. As the shoulder got wider, this effect became insignificant.
• Participants drove faster and had a larger lane deviation with a wide shoulder than with a narrow one.
• Driver behavior was affected more by lane width than by shoulder width, which is in line with driver perception as suggested in [25], where drivers determined their lateral position mainly by recognizing the position of the lane marking.
• A wide shoulder gave drivers a higher perception of safety and reduced the proportion of dangerous maneuvers, e.g., transgressing-lane driving.
These findings indicate that the approach of using the advanced driving simulator provides a more detailed and potentially more precise way of validating the factors of underground expressway designs and related specifications. Several recommendations for design specifications can be drawn from the study in order to strengthen safety in underground urban expressways. For example, a reduced speed limit in the left lane could be used to slow down vehicles and reduce lane deviation in that lane; the study also provides evidence for setting specific speed limit values for different lanes. Additionally, a suitable roadside landscape and rumble strips could help reduce the speed variability between the middle lane and the other two lanes. Moreover, setting a proper shoulder width could help reduce lane deviations, especially in the left and right lanes.

However, results from the objective measures and the subjective perceptions show some inconsistency between driver perception and the real road situation. Lane position was found to have a significant impact on driving behavior, but only 20% of participants realized that they were affected by their lane position. Reasons for this might be that: (1) the impact of lane position arises mainly from other factors that are more noticeable to drivers, such as the extended maneuvering space provided by the shoulder, the shoulder width and the lane width; and (2) although greatly reduced by using the advanced simulation, a difference between driver perception in the virtual road environment and the real road context still exists.

Conclusions

This study investigated the effect of three independent road alignment variables (lane width, lane position and shoulder width) on driving behavior in an urban bypass expressway using an advanced driving simulator. Lane width, lane position and shoulder width were found to have a significant impact on driving behavior in terms of speed and lane deviation.
A high-end simulator was used in the study to build a driving environment very close to a real-world environment. The results were found to be reasonable, indicating that using the advanced driving simulator could be a practical way to validate the design and safety of an underground expressway in light of different affecting factors. As its main contribution, the study introduced the use of the advanced driving simulator for investigating the impact of the key cross-section factors of an underground urban expressway, which has been little studied in the past. This method provides a potentially more precise and detailed way to explore driver behavior and road safety on underground urban expressways more comprehensively. Additionally, advanced driving simulation can be used in practice to validate and amend current urban underground road design specifications and standards, such as the design of lane width, shoulder width and lane marking, and the setting of speed limits. Using a driving simulator could be a very practical way to assess design and safety before spending heavily on construction, since it provides the possibility of detecting unreasonable design aspects and potential safety issues at an early stage; however, the gap between the simulated environment and the real-world environment cannot be ignored and warrants further investigation. The limitations of a driving simulator, such as simulator sickness, ecological validity in a laboratory-based study, driver motivation, and the level of perceived risk in a simulated environment, should also be considered together according to the research objective in order to minimize their influence. The present work aims to study how drivers' behavior is influenced by the cross-sections of an underground expressway. To avoid the impact of other factors on this objective, the experimental scenarios were defined with no lane changing and no other traffic in the same lane.
Drivers' behavior such as car following and lane changing should be studied in the future. In addition, to comprehensively explain driving behavior in underground urban expressways, research on drivers' psychology and physiology, such as workload, eye movement, heart rate and blood pressure, would also be an interesting topic in driving behavior and road safety studies. Because the entire experimental session was long and monotonous, it could have impacted negatively on driver behavior, motivation and psychological state. The experimental approach will be further studied and optimized in the future.
Suitability of Peat Swamp Areas for Commercial Production of Sago Palms: The Sarawak Experience Realizing the potential of sago as a new commodity to contribute to the Sarawak economy, the government initiated the development of sago plantations to address the shortage of raw materials in order to support the sago industry in its starch export and downstream activities. Initially, the development of sago plantations was on peat, based on findings that sago can tolerate wet conditions including peat swamps. Furthermore, Sarawak has the largest peat soil area in Malaysia, about 1.5 million hectares. Over a period of 15 years, it was observed that only 4% of the palms showed good growth performance, and these palms are mainly on shallow peat areas (<1.5 m), while those on very deep peat areas (>2.5 m) showed poor growth performance at the trunking phase, characterized by small crowns, low leaf count, stunted growth, and few succession suckers. The Crop Research and Application Unit (CRAUN) conducted detailed studies of sago growth performance on peat to find solutions to the problems mentioned above. The study covers land preparation, prospection and selection of quality planting material, nursery management, nutritional and soil studies, cultural practices, and weed and pest control. Based on the agronomic and cultural practices carried out by CRAUN over the past 10 years, it was observed that sago on deep peat areas (>2.5 m) showed good growth for the first 4 years, on par with palms grown on mineral soil. However, upon reaching the trunking phase (4 years onward), growth performance began to deteriorate, exhibiting distinct elemental deficiency symptoms, low leaf count, tapering trunks, and low yield. A cost comparison of sago development on alluvial soil and on peat shows a significant difference between the two soil types: the latter incurred high development costs and low revenue, contributing to a low internal rate of return (IRR).
Therefore, it is neither economical nor feasible to cultivate sago on peat. Recommendations for any new sago expansion program should focus on mineral or shallow peat soils.

R. Yong Chiew Ming (*) • Y. Sobeng • F. Zaini • N. Busri CRAUN Research Sdn. Bhd, Jalan Sultan Tengah, Kuching, Sarawak, Malaysia e-mail: rldyong@yahoo.com

Introduction

Sago has been among the key agricultural trading commodities of Sarawak since the 1880s. Initially, sago was grown in Sarawak primarily as a smallholder crop for starch production, mainly for local food industries. Presently, although Sarawak is not the world's largest producer, it is the sole world exporter of sago starch. The export volume recorded in 2013 was 48,000 mt valued at MYR 71 million, while in 2014 it was 46,900 mt valued at MYR 81 million. The major export destinations are Peninsular Malaysia (60%) and Japan (30%), as reported by the Statistics Department, Malaysia. Nevertheless, the sago starch export volume has not increased for the past 10 years due to the low and unreliable supply of sago logs. In 1988, the State Government decided to establish sago plantations in Mukah Division with an area of 19,063 ha, mainly in a peat area. The decision was made based on the outcome of a feasibility study. The study covered mainly peat soils of the Anderson 3 series (>2 m peat depth) and observations on farmers' fields. The study recommended sago cultivation on swampy peat land with minimal water management and maintenance. Thirteen years after plantation establishment, there were no harvestable sago palms except those planted on shallow peat areas (<1.5 m), which represented approximately 4% of the total plantation area. Generally, the sago palms planted on deep peat exhibited stunted growth and remained in a suppressed vegetative stage after year 5. Palms that did form trunks were tapered, and their leaves had brownish necrotic leaflets.
In spite of an intensive and adequate fertilizer application program, none of the sago palms planted in deep peat areas of the plantation showed good growth performance. In view of these circumstances, the Crop Research and Application Unit (CRAUN) conducted its first detailed experiment on deep peat in 2003 at the Sebakong Sago Plantation (SSP) in Mukah, Sarawak, and followed up with a subsequent experiment at the Sungai Talau Research Station (STRS) in Dalat, Sarawak, in 2007. The experiments covered land preparation, water management, fertilizer application, and soil studies.

Plot Establishment

The study was conducted at Sebakong Sago Plantation and Sungai Talau Research Station, on peat more than 5 m deep. An experimental plot was also established on mineral soil at Sungai Talau Research Station as a control. In addition, observations were made in the smallholder areas in Senau, Dalat, and in Kampung Tellian and Kampung Sesok, Mukah (Fig. 7.1). Plot preparation was in accordance with the standard practices for oil palm land clearing and water management on peat (Tayeb 2005), except that the water table was maintained at 20-40 cm, compared to 50-75 cm for oil palm.

Planting Material and Field Planting

Selected suckers were nursed in polybags. Those with a minimum of three developed leaves and well-developed root systems were selected for field planting (Fig. 7.2). The planting density was 100 palms per hectare. The water table was maintained at 20 cm. Lime was applied 7 days prior to planting to increase the pH, and 200 g of rock phosphate fertilizer was applied during planting (Fig. 7.3).

Cultural Practices and Maintenance

Chemical weeding was done after land clearing and crop establishment to prevent regeneration of residual weeds and to stop the growth of new weeds in the recently cleared areas. Planting rows were slashed, followed by herbicide spraying, to prevent weed competition with sago palm growth.
Contact herbicide was used for the first 2 years after planting to reduce the risk of damage to the palms in the event of spray drift. Circular weeding was done three times per year, before each fertilizer application. Weeding prevents shading out of the suckers and the choking action of creepers, which can cause plant death before the crop is well established (Yusup et al. 2007). Cluster maintenance is necessary to minimize competition between the mother palm and the suckers for nutrients, sunlight, and space (Peter et al. 2012). Sucker pruning is done in year 4 (Fig. 7.4), when two successional suckers are retained, followed by one successional sucker every 18 months in subsequent years. This ensures sustainably harvestable palms. Dead leaves are pruned and stacked between rows to suppress weeds and to recycle nutrients and biomass to the soil (Fig. 7.5).

Fertilizer Application

Fertilizer was applied at the soil surface around the palm base, out to the edge of the leaf canopy, based on the P-32 isotope dilution technique (Roland et al. 2012), as shown in Fig. 7.6.

Fertilizer Trials

Peat is a marginal soil lacking most of the important major and trace elements, namely N, P, K, Cu, Zn, and B. A sago fertilizer program was initiated and formulated based on previous reports (Jong and Flach 1995) and foliar analysis (Tayeb 2005) for trials in peat areas. The fertilizer formulated for sago is shown in Tables 7.1a and 7.1b.

Palm Growth Assessment Parameters

Under optimal ecological conditions, the average number of leaves formed per year is 24 for both the rosette stage and the bole formation (trunking) stage (Flach 1977). The parameters observed and measured for growth performance on peat are given in Table 7.2.

Starch Yield Determination

Starch yield determination was done for all sago palms that had reached a harvestable stage (pelawei manit). The targeted harvestable period is less than 10 years, with a starch yield of more than 150 kg per palm.
Soil Studies
The objective of this study was to determine the edaphic factors that affect sago growth on peat. The study included:
• Soil profiling for physical characterization
• Chemical analysis
• Root incursion through the soil profile
A soil pit was excavated as shown in Figs. 7.7 and 7.8. Soil samples were collected from different depth horizons for nutritional analysis. The level of decomposition was determined using the von Post scale, which ranges from H1 (least humified) to H10 (most humified), and further grouped into three main peat types using US Department of Agriculture terminology, based mainly on fiber content, as shown in Table 7.3 (USDA 1993).

Statistical Data Analysis
The growth measurements and nutrient data were subjected to ANOVA using the general linear model (GLM) procedure of the Statistical Analysis System (SAS™). The least significant difference (LSD) test at p < 0.05 was employed for mean comparisons.

Fertilizer Study
Growth measurement data were taken at both the vegetative stage (1-4 years) and the trunking stage (>5 years) of growth. Figures 7.9 and 7.10 show the number of palm leaves under different soil types over 12 years of growth. At the vegetative stage, growth performance at all three locations was good: the number of leaves produced per palm was more than 14 (Fig. 7.9), and the highest leaf count was observed in year 2. No apparent symptoms of leaf nutrient deficiency were observed; foliar analysis showed that the levels of the major elements N, P, and K were sufficient, although Mg and the trace elements were deficient in palms planted on deep peat (Tables 7.4 and 7.5). After year 5, palm growth on deep peat started to weaken. The average number of leaves per palm dropped to less than 14 (Fig. 7.10) due to necrosis that led to rapid leaf senescence. Foliar analysis indicated that the palms were deficient in the major elements K and Mg (Fig. 7.11).
In addition, the trace elements Mn, B, and Cu were also found to be deficient, as indicated by symptoms of retarded young spears, crinkled leaflets, and chlorotic tips on the young leaves, respectively, as shown in Fig. 7.12. Trace elements are required in small quantities for the critical enzymatic reactions that metabolize sugar and starch, so any shortfall in their availability results in the above symptoms. Although remedial efforts were made in year 5 by increasing the rates of the deficient major and trace elements mentioned above, there was no significant improvement in crop growth in subsequent years. For example, the K leaf nutrient concentration of sago grown on deep peat was only 0.41%, whereas the critical value of K for good growth is 0.73-0.89%. When an additional 1.12 kg/palm of K was applied, the leaf nutrient concentration increased only to 0.44%, still below the critical value. All results for the deficient elements are provided in Tables 7.6 and 7.7. A total of 80% of the palms on peat were trunking, but only 36% reached a harvestable stage at year 11. Moreover, the harvestable palms recorded a very low starch yield of less than 20 kg/palm (Table 7.8). Furthermore, the growth of successional palms was very slow; based on current growth, second-generation harvesting can only be expected 4-5 years after the first harvest (mother palm). In contrast, palms planted on mineral clayey soil showed good growth for both mother and successional palms: the palms started reaching the harvestable stage in year 9 (9%), and 100% were harvestable by year 11. The starch yield was also higher, at more than 150 kg/palm. Sago palm growth performance on deep peat and mineral clayey soil is shown in Fig. 7.13.

Physical Properties
In the deep peat at Sebakong Sago Plantation, the top horizon (0-20 cm) is generally highly decomposed (sapric), with a humification value of H8.
The second horizon, below 20 cm, is moderately decomposed (hemic), with a humification value of H4. The color change from very dark brown in the top horizon to dark brown in the second horizon reflects the change in the organic matter decomposition process. The bulk density of the deep peat was less than 0.15 g/cm3, while that of the mineral soil was more than 1 g/cm3. This is because more than 70% of the deep peat consisted of undecomposed woody material, including preserved tree trunks and large roots (Fig. 7.14), compared with a low percentage of undecomposed woody material in the mineral soil (Fig. 7.15). This undecomposed material takes a long time to decay and contributes to a vacant zone that provides no growth medium for the sago palms. In addition, the poor soil physical properties limit root development and distribution: the roots were confined to the top 50 cm of the soil profile, which produced poor anchorage and led to poor trunking. In mineral clayey soil, the palms have better anchorage because the underlying solid foundation contributes to early trunking.

Chemical Properties
Peat soils have a very low mineral content, with organic matter reaching more than 90%, and are very acidic, with a pH range of 3.0-4.0. The low pH limits the availability of major nutrients to the palms (Table 7.9). In addition, excessive rainfall leaches key nutrients such as K, Mg, and Ca from the soil. The cation exchange capacity (CEC) determines the amount of nutrients the soil can hold in a form readily available to the palms. The CEC of the deep peat is very high (>70 meq/100 g), which should indicate a large pool of readily available nutrients; however, the high CEC is due not to the base cations K, Ca, and Mg but to exchangeable hydrogen ions (H+), as shown by the low base saturation.
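The relationship between CEC, exchangeable bases, and base saturation described above can be sketched with a small calculation. The individual cation values below are illustrative (hypothetical), chosen only to reproduce the approximate base saturations quoted in this chapter, about 15% for the deep peat and 65% for the mineral soil; they are not measurements from this study.

```python
# Base saturation = (sum of exchangeable base cations / CEC) * 100.
# All values in meq/100 g soil; the cation figures are illustrative
# (hypothetical), not measurements from this study.

def base_saturation(bases_meq, cec_meq):
    """Percentage of the CEC occupied by base cations (K, Ca, Mg, Na)."""
    return 100.0 * sum(bases_meq.values()) / cec_meq

# Deep peat: very high CEC, but dominated by exchangeable H+,
# so the bases occupy only a small fraction of the exchange sites.
peat = base_saturation({"K": 1.5, "Ca": 6.0, "Mg": 3.0}, cec_meq=70.0)

# Mineral clayey soil: lower CEC, but largely base-saturated.
mineral = base_saturation({"K": 1.0, "Ca": 7.5, "Mg": 1.6, "Na": 0.3},
                          cec_meq=16.0)

print(round(peat), round(mineral))  # -> 15 65
```

The calculation makes the chapter's point concrete: a high CEC alone says nothing about nutrient supply unless the exchange sites are actually occupied by base cations rather than H+.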
Hydrogen ions are generated from organic acids during organic decay. Because the exchange sites are saturated with H+ ions, the soil cannot retain other cations, and nutrients leach out. In mineral soil, the CEC is lower than in peat, but the base saturation is relatively high (65%, compared to 15% in peat soil), so nutrients in the soil are readily available to the palms. A relatively high base saturation (70-80%) should be maintained for most cropping systems, since base saturation largely determines the availability of bases for plant uptake and strongly influences soil pH as well (Steven 2000). The carbon-to-nitrogen (C:N) ratio in deep peat governs the humification rate of the organic materials. The C:N ratio of the deep peat was more than 29:1, which, coupled with the low pH, resulted in low mineralization. The rate of peat deposition exceeds the rate of decomposition because most humification proceeds by anaerobic decay, whose metabolic rate is much lower than that of aerobic decay. Most of the organic residues in these areas have a high C:N ratio, which forces the microbes to scavenge additional nitrogen for decomposition and creates a nitrogen deficit for the palms. For example, the deep peat areas are mostly covered with ferns, with a high C:N ratio of 43, and tree stumps, with a C:N ratio of 560, which require a long time to decay. The total P content of tropical peat is generally low. According to Zhang and Zhao (1997), phosphorus exists in soils in organic and inorganic forms; it may be present as solution P, active P, or fixed P (Busman 2009). In peat, P reacts with humic compounds to form complexes that dissolve readily in the soil solution and eventually leach out (Kim 1993). Because the soil contains no clay minerals, it cannot retain the soluble phosphate, which is leached out.
In mineral soil, the high amount of total phosphorus was due to fixation by iron (Fe) to form strengite (FePO4·2H2O) (Kim 1993). Potassium is generally very low in peat soil because the available K, which resides in the soil solution, is highly mobile and prone to leaching. The total K in the mineral clayey soil was higher than in the deep peat due to fixation by the negatively charged clay crystals. The levels of trace elements in the peat were very low. Total Cu, Zn, Mn, and B in particular were extremely low, averaging less than 20 ppm (20 μg/g), as these elements are removed from solution by complexation with humic compounds.

Economic Evaluation
From an economic perspective, a comparative cost analysis of sago development on peat and mineral soil was carried out. A highly significant difference was found in return on investment, as shown in Table 7.10. Despite appropriate agronomic and cultural practices, sago palms planted on peat did not respond with better growth. It took 11 years to reach the first harvestable stage, only 36% of the palms were harvestable, and their starch yield was less than 20 kg/palm. Furthermore, the cost of development and maintenance was high, leading to no return on investment. Therefore, commercial sago cultivation on deep peat appears economically unviable. On shallow peat (<2.5 m), however, sago can yield about 90 kg per palm, with 81% harvestable palms, giving a marginal internal rate of return (IRR) of 13.1%, while on mineral soil it can yield more than 150 kg per palm, giving an IRR of 23.4%.

Conclusion and Recommendation
All study plots, on both peat and mineral soil, showed good vegetative growth in the first 4 years. However, growth on the peat areas started to decline after year 5, with falling leaf numbers, while on the mineral soil the palms continued to grow well.
Only 36% of the palms reached the harvestable stage at year 11 when grown on peat, whereas palms planted on mineral clayey soil were 100% harvestable by year 11; in fact, the first 9% of the palms on mineral clayey soil could be harvested at year 9. The harvestable palms in the peat areas had a low starch yield of less than 20 kg/palm, while those on mineral soil yielded more than 150 kg/palm. Thus, the studies showed that both the number of harvestable palms and the starch yield on peat after 12 years of planting are lower than on mineral clayey soil, and planting sago on deep peat is therefore not feasible. Based on the economic evaluation, sago on peat with a yield of less than 20 kg per palm gives no return on investment. To be viable, the harvestable age must be less than 10 years with a yield of 150 kg per palm, which gives an IRR of 23% (compared to 21% for oil palm); this can be achieved if the palms are planted on mineral soil. Last but not least, the cost of development and maintenance on peat is comparatively higher, and peat development tends to affect the environment adversely if not managed properly.
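The IRR figures above come from the full cost analysis in Table 7.10. As a generic illustration of how an internal rate of return is obtained, the sketch below finds the discount rate at which the net present value (NPV) of a cash-flow series is zero; the cash-flow series used is hypothetical, not the study's data.

```python
# IRR: the discount rate r at which NPV = sum(cf_t / (1 + r)**t) = 0.
# Found here by bisection; the cash flows below are hypothetical.

def npv(rate, cashflows):
    """Net present value of a series of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection on NPV; assumes exactly one sign change in [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical series: a year-0 outlay of 100 returning 121 in year 2
# has an IRR of 10%, since 121 / 1.1**2 = 100.
print(round(irr([-100.0, 0.0, 121.0]), 4))  # -> 0.1
```

A project is economically viable when its IRR exceeds the cost of capital, which is why the 23% IRR on mineral soil clears the bar while deep peat, with no positive return at all, does not.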
Lack of a Clear Behavioral Phenotype in an Inducible FXTAS Mouse Model Despite the Presence of Neuronal FMRpolyG-Positive Aggregates

Fragile X-associated tremor/ataxia syndrome (FXTAS) is a rare neurodegenerative disorder caused by a 55–200 CGG repeat expansion in the 5′ untranslated region of the Fragile X Mental Retardation 1 (FMR1) gene. FXTAS is characterized by progressive cerebellar ataxia, Parkinsonism, intention tremors and cognitive decline. The main neuropathological hallmark of FXTAS is the presence of ubiquitin-positive intranuclear inclusions in neurons and astrocytes throughout the brain. The molecular pathology of FXTAS involves the presence of 2- to 8-fold elevated levels of FMR1 mRNA, and of a repeat-associated non-AUG (RAN) translated polyglycine peptide (FMRpolyG). Increased levels of FMR1 mRNA containing an expanded CGG repeat can result in cellular toxicity by an RNA gain-of-function mechanism. The increased levels of CGG repeat-expanded FMR1 transcripts may create RNA foci that sequester important cellular proteins, including RNA-binding proteins and FMRpolyG, in intranuclear inclusions. To date, it is unclear whether the FMRpolyG-positive intranuclear inclusions are a cause or a consequence of FXTAS disease pathology. In this report we studied the relation between the presence of neuronal intranuclear inclusions and behavioral deficits using an inducible mouse model for FXTAS. Neuronal intranuclear inclusions were observed 4 weeks after doxycycline (dox) induction. After 12 weeks, high numbers of FMRpolyG-positive intranuclear inclusions could be detected in the hippocampus and striatum, but no clear signs of behavioral deficits related to these specific brain regions were found. In conclusion, the observations in our inducible mouse model for FXTAS suggest a lack of correlation between the presence of intranuclear FMRpolyG-positive aggregates in brain regions and specific behavioral phenotypes.
INTRODUCTION
Fragile X-associated tremor/ataxia syndrome (FXTAS) is a late-onset neurodegenerative disease that is characterized mainly by essential tremor, cerebellar ataxia, Parkinsonism, peripheral neuropathy and cognitive decline (Hagerman et al., 2001; Tassone et al., 2007; Hagerman, 2013, 2016). FXTAS leads to cerebral and cerebellar atrophy, with increased T2 signal intensity in MRI images of the middle cerebellar peduncles as a diagnostic hallmark (Brown and Stanfield, 2015). Carriers of a premutation in the FMR1 gene, consisting of a 55-200 CGG repeat expansion, are at risk of developing FXTAS. Such intermediate repeat expansions lead to elevated levels of FMR1 mRNA (Tassone et al., 2000; Kenneson et al., 2001; Salcedo-Arellano et al., 2020). In contrast, longer repeat expansions, of more than 200 units, induce silencing of FMR1 mRNA, which results in a lack of FMRP protein, causing the neurodevelopmental Fragile X syndrome (Bassell and Warren, 2008; Salcedo-Arellano et al., 2020). Several mechanisms by which the premutation, and the consequent increase in FMR1 mRNA levels, may lead to the development of FXTAS have been proposed. Of these, arguably the most studied process is the formation of intranuclear inclusions, which has been well documented in patients as well as in animal models, and whose occurrence has been linked to alterations at the cellular level in neurons and astrocytes (Louis et al., 2006; Jin et al., 2007; Berman et al., 2014; Ma et al., 2019; Haify et al., 2020). The intranuclear inclusions are mainly composed of proteins, and to date more than 200 different proteins have been identified in nuclear inclusions (Iwahashi et al., 2006; Ma et al., 2019). FMR1 mRNA containing a CGG repeat expansion, although itself present only in relatively low concentrations in the nuclear inclusions, could act as a scaffold binding place for the other components (Cid-Samper et al., 2018; Langdon et al., 2018; Ma et al., 2019).
The putative pathogenicity of these inclusions could be based on the depletion of essential molecules, including RNA-binding proteins (Sofola et al., 2007; Qurashi et al., 2011; Sellier et al., 2013). Another, not necessarily mutually exclusive, potential pathogenic mechanism is repeat-associated non-AUG (RAN) translation, through which a toxic polyglycine (FMRpolyG) protein is produced from the elongated FMR1 CGG repeat mRNA (Todd et al., 2013; Sellier et al., 2017; Krans et al., 2019). To date, the relative contributions of the RNA-based inclusions and the expression of toxic FMRpolyG to human pathology are still a matter of debate. It has even been suggested that in the early disease state, the inclusions may serve a protective function by sequestering FMRpolyG (Hagerman and Hagerman, 2016). Our current clinical, molecular and histopathological understanding of FXTAS in patients is mostly derived from studies in mouse models. Several mouse models have been generated to study the (neuro)pathology and behavioral effects of FXTAS. Initially, two knock-in (KI) mouse models were generated: the Dutch (CGGdut) and the NIH (CGGnih) KI mouse models. Both KI mouse models display FXTAS pathology at the genetic, molecular, histological and behavioral levels, with slight differences. Both show ubiquitin-positive intranuclear inclusions throughout the entire brain, but these inclusions are more common in the CGGdut KI mice. Behavioral examination of the CGG KI mice revealed memory impairment (Hunsaker et al., 2009) and increased levels of anxiety in the CGGdut KI mice, whereas CGGnih KI mice show decreased levels of anxiety. Assessment of motor function in the CGGdut KI mouse model also showed impairment with increasing age (Van Dam et al., 2005). The observed cognitive decline and motor function impairment in these mice may reflect the progressive cognitive decline and functionality impairment observed in FXTAS patients.
Although both KI mouse models nicely recapitulate FXTAS disease pathology, the time needed to develop a phenotype is a major disadvantage: it takes roughly 52-72 weeks before any phenotype is observed in these mice. Therefore, several transgenic mouse models have been developed to study specific questions in FXTAS disease pathology, such as RAN translation, the effects of mRNA containing an expanded CGG repeat, and potential therapeutic interventions. We refer the reader to more detailed reviews covering all available mouse models for the premutation and FXTAS (Berman et al., 2014; Haify et al., 2020). All these mouse models show ubiquitin-positive and FMRpolyG-positive inclusions in neurons and astrocytes of the central nervous system (CNS), as well as in non-CNS organs, and thus display the most prominent neuropathological hallmark of FXTAS disease pathology; the intention tremor, however, is notably absent. We studied the occurrence of intranuclear inclusions in a novel inducible mouse model for FXTAS and related these to quantitative alterations in mouse behavior. To avoid interactions during development, we induced, in adult mice, the expression of a randomly integrated 103× CGG repeat expansion under the control of the neuron-specific Ca2+/calmodulin-dependent protein kinase II alpha (CamKII-α) promoter. The CamKII-α driver induces expression throughout the entire forebrain, including regions such as the hippocampus and the basal ganglia, which are known to be involved in FXTAS disease pathology (Greco et al., 2002; Wang et al., 2013). In this report we mainly focus on the dentate gyrus (DG) and CA3 region of the hippocampus, and on the striatum, part of the basal ganglia. These regions are believed to contribute to several behavioral impairments in FXTAS, such as impairments in motor learning, coordination, and memory (Scaglione et al., 2008; Hagerman and Hagerman, 2013; Haify et al., 2020).
Cognitive decline, reflected in spatial learning and memory tasks, executive motor function impairments, and anxiety-associated disorders, is also observed in premutation carriers and FXTAS patients (Hasegawa et al., 2009). For a period of 3 months after induction, we quantified the formation of inclusions in the brain and characterized the behavioral performance. As expected, and in line with the expression pattern of the CamKII-α promoter (Burgin et al., 1990), we found intranuclear inclusions in the hippocampus and the striatum, already appearing 4 weeks after dox-induction. To our surprise, however, virtually no impact on behavioral performance was detectable, even after 3 months of dox-induction. Based on this study, we therefore propose that intranuclear inclusions do not have an immediate detrimental effect on neuronal function, which may point to a protective function of inclusion formation in the early onset of disease progression in FXTAS.

Mice
For this study, male and female CamKII-α-rtTA/TRE-103CGG-GFP mice with a C57BL6/J background were used (Figure 1A). This CamKII-α inducible mouse model was generated, similarly to the ubiquitous inducible mouse model, by random integration of the transgenes into the genome (Hukema et al., 2014). The TRE-103CGG-GFP mice were crossed with the CamKII-α-rtTA driver line to generate double transgenic mice using the Tet-On system. Dox treatment was initiated at the age of 9 weeks. Dox drinking water contained 2 mg/ml doxycycline hyclate (Sigma) in 5% sucrose (Sigma) and was refreshed every 2-3 days.

Repeat Length PCR
Repeat length was determined according to an in-house PCR protocol. Brain tissue from mice carrying 11 CGGs (positive control), wildtype mice (negative control), and 4-week-old TRE-103CGG-GFP mice was incubated overnight at 55 °C in 300 µl tail mix buffer [50 mM Tris pH 7.5, 10 mM EDTA, 150 mM NaCl, 1% SDS and 20 µl proteinase K (10 mg/ml; Roche Cat. #3115852)].
The next day, 100 µl of 6 M NaCl was added to the samples, which were shaken vigorously to induce precipitation of cell debris.

FIGURE 1 | Schematic representation of the Tet-On system and behavioral testing of a new brain-specific mouse model for FXTAS. (A) Brain-specific expression of the expanded CGG repeat RNA coupled to GFP was studied in the CamKII-α-rtTA/TRE-103CGG-GFP inducible mouse model with a C57BL6/J background. The Tet-On system was used to generate double transgenic mice expressing the expanded CGG repeat at the RNA level. Expression of the reverse tetracycline transactivator (rtTA) is controlled by the CamKII-α promoter on a separate transgene. Upon dox administration, rtTA is activated and can bind the tet response element (TRE) on another transgene, which induces expression of the expanded CGG repeat at the RNA level and GFP at the protein level. As the transgene contains the 5'-UTR of the FMR1 gene with an expanded CGG repeat, the FMRpolyG polypeptide is produced from the expanded CGG repeat by RAN translation. (B) Schematic overview of the experimental schedule for histological analysis and behavioral testing. At around 9 weeks of age, dox-treatment started. Around 10 weeks later, ErasmusLadder tests were performed, followed by balance beam and grip tests. Finally, the mice were subjected to the Morris water maze test.

Samples were centrifuged (10,000 g at RT for 10 min) to remove cell debris. The supernatant was transferred to a new tube, and 1 ml of 100% EtOH was added with thorough shaking. Tubes were centrifuged at 10,000 g for 10 min to pellet the DNA. The supernatant was discarded, and the DNA pellet was washed with 500 µl of 70% EtOH. Samples were centrifuged at 10,000 g for 5-10 min. The supernatant with EtOH was discarded, and the DNA pellet was air-dried for a few minutes. The DNA pellet was resuspended in 100 µl of sterilized water.
Next, 1 µl of the supernatant was used as template DNA in the PCR reaction mix (total volume 21 µl). The following PCR mix was used: 10 µl betaine (5 M), 4 µl 5× Expand HF buffer without Mg2+, 1.5 µl MgCl2 (25 mM), 1 µl forward primer (10 µM), 1 µl reverse primer (10 µM), 0.2 µl dNTP mix (100 mM total; 25 mM each), 0.2 µl FastStart Taq DNA polymerase (5 U/µl; Roche) and 2.1 µl sterilized water. The PCR program consisted of 10 min denaturation at 98 °C, followed by 35 cycles of 35 s at 98 °C, 35 s at 58 °C and 3 min at 72 °C, and ended with a cooling step at 15 °C. For quantification of the DNA size, 1 µl of 1 Kb Plus DNA ladder (Thermo Fisher Scientific; Cat. #10787018) was used with and without 0.2% GelRed (Biotium) in dH2O. Staining with GelRed after the electrophoresis run is necessary because GelRed interacts with the DNA during migration and would otherwise distort the CGG repeat size measurement. To track the DNA front during gel electrophoresis, 10 µl of 30% Orange G (Sigma) loading dye was added to the 5 µl of PCR product loaded on the 1.5% agarose gel. After the electrophoresis run, the agarose gel was stained for 30 min in 500 ml 1× TBE buffer [1 L 5× TBE buffer: 54 g Tris (CAS #77-86-1), 27.5 g boric acid (CAS #10043-35-3) and 20 ml 0.5 M EDTA pH 8.0 (CAS #60-00-4)] with 50 µl 0.2% GelRed (Biotium). Gels were scanned using a Gel Doc XR+ (Bio-Rad) Molecular Imager with Image Lab software. The CGG repeat was amplified using the forward primer 5'-ATCCACGCTGTTTTGACCTC-3' and the reverse primer 5'-CCAGTGCCTCACGACCAAC-3'.

RNA Isolation and cDNA Synthesis
RNA isolation was performed on dox- and sucrose-treated 16-week-old CamKII-α-rtTA/TRE-103CGG-GFP mice, with n = 3 brains per treatment group. Prior to lysis, samples were thawed on ice and supplied with RIPA buffer containing 0.05% protease inhibitors (Roche), 0.3% 1 M DTT (Invitrogen) and 40 U RNase Out (Roche). Samples were mechanically lysed, followed by 30 min of incubation on ice.
After 30 min of incubation, mechanical lysis was repeated to ensure complete homogenization. The homogenate was added to RNA Bee (Tel-Test) in a 1:10 (v/v) ratio and mixed thoroughly. Chloroform (Millipore) was added to the mixture in a 1:5 (v/v) ratio, mixed thoroughly, and incubated on ice for 15 min. After incubation, the mixture was centrifuged for 15 min at 4 °C; the supernatant was collected and supplied with 0.6× (v/v) 100% 2-propanol (Honeywell). After 15 min of centrifugation at 4 °C, the supernatant was discarded. The remaining pellet was washed twice with 80% EtOH (Honeywell), with brief centrifugation at 4 °C between washes. Following removal of the residual supernatant, 50 µl of dH2O was added and the concentration was determined using a NanoDrop 2000 (Thermo Fisher Scientific).

Quantitative Real-Time PCR
Reverse transcription (RT) was performed using 1 µg of RNA with the iScript cDNA synthesis kit (Bio-Rad) according to the manufacturer's instructions. RNA was treated with DNase before cDNA synthesis. qPCR using iTaq Supermix (Bio-Rad) was performed on 0.1 µl of RT product. Cycling conditions were an initial denaturation of 3 min at 95 °C, followed by 35 cycles of 5 s at 95 °C and 30 s at 60 °C. GAPDH was used as the reference gene. For statistical analysis, the two-sample unpaired t-test assuming equal variance was used.

Immunohistochemical Staining
Tissues were fixed overnight in 4% paraformaldehyde (PFA) at 4 °C and embedded in paraffin according to in-house protocols. Sections of 6 µm were cut and placed on silane-coated slides (Klinipath). The sections were deparaffinized in decreasing concentrations of alcohol, starting with xylene and ending in demineralized H2O, before antigen retrieval was performed by microwave treatment in 0.01 M sodium citrate (pH 6). Endogenous peroxidase activity was blocked with 0.6% H2O2 in PBS.
When staining for FMRpolyG, an additional incubation with proteinase K (5 µg/ml) was performed for 20-30 min at 37 °C to ensure optimal antibody binding. Staining was performed overnight at 4 °C with primary antibodies diluted in PBS/0.5% milk/0.15% glycine (PBS+). Staining with secondary antibodies was performed at RT for 60 min. Antigen-antibody complexes were visualized using DAB substrate (DAKO), after which slides were counterstained with hematoxylin for 5 min and subsequently mounted with Entellan (Merck Millipore International). The antibodies used are listed in Table 1: mouse-specific anti-GFP and anti-FMRpolyG (8FM) antibodies were used to visualize GFP expression and the FMRpolyG protein aggregates in mouse brain, respectively.

Frontiers in Molecular Biosciences | www.frontiersin.org

Behavioral Testing
Muscle function was tested using a hanging wire test. A metal wire with a diameter of 2 mm was suspended around 20 cm above a cage. The mouse was brought to the wire so that it could grasp the wire with its front paws, after which the latency to fall was recorded. The maximal trial duration was 60 s. In addition, we used the Bioseb grip strength test (Bioseb, Vitrolles, France). For this test, the mouse was placed on a metal grid, and after it clamped onto the grid with all four limbs, it was gently pulled down by the base of its tail. The maximal force was measured, and the average of three consecutive trials was calculated. Fine motor coordination was tested on the balance beam. During 2 consecutive days, the mice were habituated to the setup, which consisted of a horizontal wooden beam with a diameter of 12 mm and a length of 100 cm, located approximately 50 cm above a table. Each mouse was placed on one side of the beam and walked over the beam to a home cage on the other side. After two trials, the beam was replaced by one with a diameter of 8 mm, and two trials were performed on this beam as well.
On the third day, the performance of the mice was quantified by counting the number of foot slips and falls. Each mouse crossed each beam twice, and the average time required to reach the other side of the beam was measured, taking only trials without falls into account. Locomotor patterns were recorded on a horizontal ladder flanked by two plexiglass walls spaced 2 cm apart (ErasmusLadder, Noldus, Wageningen, Netherlands), as described previously (Vinueza Veloz et al., 2015). The ladder consisted of two rows of 37 rungs placed in an alternating high/low pattern. The rungs were spaced 15 mm apart, and the height difference between high and low rungs was 9 mm. Each rung was connected to a pressure sensor recording rung touches. During a trial, the mouse had to walk from a shelter box on one side of the ladder to another one at the other end. The start of a trial was indicated by lighting an LED in the shelter box, followed 3 s later by a strong tailwind. Early escapes, i.e., leaving before the LED was switched on, were discouraged, as they triggered a strong headwind. In between trials, there was a resting period. Mice were first habituated to the setup by letting them freely explore the ladder for 15 min, during which no light or air cues were given. On the next day, training started, with 44 trials per day. The initial training consisted of six daily sessions, after which the mice were measured once a week. Sensor touches were filtered to delete single backsteps or spurious hind limb steps using the factory settings. For the further analysis, we used the touches of the front limbs, deleting the first and the last step of each trial. Using the water maze test, we quantified the spatial memory of the mice. Each mouse was placed at the border of a circular pool with a diameter of 120 cm, filled with a mixture of water and non-toxic white paint kept constant at 26 °C. In the pool, a platform with a diameter of 11 cm was hidden 1 cm below the water surface.
The time to find the platform was recorded in two trials per day on 5 consecutive days. When a mouse did not find the platform within 60 s, the trial was stopped. On days 6 and 7, a probe trial was given. During the probe trials, the platform was absent and the mice were allowed to swim for 60 s while their trajectory was tracked (EthoVision XT11, Noldus, Wageningen, Netherlands). The data of the probe trials were analyzed by subdividing the pool into four quadrants, with the original position of the platform in the middle of quadrant 3. We marked the original platform position, as well as the same shape at the corresponding position in the other three quadrants, and counted how often the mouse crossed the borders of each of these positions per trial. A crossing was counted if it involved more than 50% of the body of the mouse. In addition, we quantified the time spent in each quadrant. The battery of behavioral tests is schematically represented in time in Figure 1B.

Statistics
Behavioral performance on each paradigm was compared between mice treated with and without doxycycline. The statistical tests used are mentioned in the Results section; non-parametric tests were used for data that were not normally distributed. Throughout the manuscript, we considered a p-value of 0.05 or less as an indication of statistical significance.

Expanded CGG Expression Results in Inclusion Formation in the Hippocampus and Basal Ganglia
We first studied the expression pattern of the FMRpolyG-GFP fusion protein in CamKII-α-rtTA/TRE-103CGG-GFP mice after induction of transgene expression by the addition of doxycycline (dox) to the drinking water. First, the repeat length in the transgene was verified using an in-house PCR protocol. Repeat-length PCR showed the 103× CGG repeat at approximately 480 bp, compared to the 11× CGG control at 290 bp (Supplementary Figure S1A).
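For an amplicon spanning a pure trinucleotide repeat, the expected size is a fixed flanking length plus 3 bp per repeat unit. In the sketch below, the flank length is an assumption back-calculated from the 11× CGG control band reported at 290 bp; it is not a value from the protocol.

```python
# Expected amplicon size for a trinucleotide (CGG) repeat PCR:
#   size (bp) = flank + 3 * repeats
# FLANK_BP is an assumption back-calculated from the 11x CGG
# control band reported at ~290 bp; it is not a measured value.

FLANK_BP = 290 - 3 * 11  # = 257 bp of primer-flanked non-repeat sequence

def expected_amplicon_bp(repeats, flank_bp=FLANK_BP):
    """Arithmetic expectation of the amplicon size for a given repeat count."""
    return flank_bp + 3 * repeats

def estimated_repeats(amplicon_bp, flank_bp=FLANK_BP):
    """Inverse: estimate the repeat count from a measured band size."""
    return (amplicon_bp - flank_bp) // 3

print(expected_amplicon_bp(11))  # -> 290 (reproduces the control band)
print(estimated_repeats(290))    # -> 11
```

Under this assumption a 103× repeat would run near 566 bp; migration of long GC-rich amplicons on agarose is often anomalous, however, so gel-based band positions are only approximate estimates of repeat length.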
To verify that dox treatment did not affect murine Fmr1 mRNA expression, we performed quantitative real-time PCR on brain tissue of treated and control mice. The data show that dox treatment had no effect on Fmr1 mRNA expression in the brain, as tested in the hippocampus (Supplementary Figure S1B). Since transgene expression was under the control of the CamKII-α promoter, we expected the FMRpolyG protein to be present in neurons of, among other regions, the hippocampus, the neocortex, the basal ganglia, and the posterior part of the cerebellum, more specifically lobule X (Hasegawa et al., 2009; Wang et al., 2013). In our hands, already after 4 weeks of dox treatment, GFP expression, indicative of FMRpolyG expression, was found in all aforementioned brain regions. After 12 weeks of dox treatment, the expanded CGG repeat was strongly expressed in the striatum of the basal ganglia, the hippocampus, the neocortex, and lobule X of the cerebellum (Figures 2A,B,D,E). Low to modest expression of GFP was present at 12 weeks in the hypothalamus, the inferior and superior colliculi (Figures 2C,D), and other sub-regions of the midbrain. Next, we investigated whether FMRpolyG expression was associated with the formation of nuclear inclusions in the CamKII-α-rtTA/TRE-103CGG-GFP mice. To this end, we compared brain sections stained for FMRpolyG from mice that received dox with those from mice that did not. As expected, we could not detect any inclusions in the control mice. However, the mice treated with dox developed spherical FMRpolyG-positive inclusions in most of the brain regions in which GFP expression was observed. The highest density of inclusions was found in the striatum, the CA3 region of the hippocampus, and the hypothalamus (Figures 3B-D). Lower densities were present in the DG region of the hippocampus, as well as in the inferior and the superior colliculus (Figures 3A,E).
We did not observe a perfect correlation between GFP expression and the occurrence of inclusions: in lobule X of the cerebellum, no inclusions were found despite the presence of GFP (Figures 2E,F). In general, during the 12 weeks of dox treatment, the number of inclusions increased over time, with regional differences.

FIGURE 2 | GFP expression in multiple brain regions. GFP expression (brown staining) was visualized using immunohistochemical staining with a mouse-specific anti-GFP antibody in sagittal brain sections at 12 weeks after onset of dox treatment. Strong expression of GFP was present in the striatum (A), the hippocampus (B), the hypothalamus (C), and the cerebral cortex (D). Lower levels of expression were present in the superior and inferior colliculus (D). In the cerebellum, GFP expression was only observed in vermal lobule X (E, indicated area amplified in F). 3V, third ventricle; cc, corpus callosum; Ctx, cerebral cortex; DG, dentate gyrus; f, fornix; Hy, hypothalamus; IC, inferior colliculus; SC, superior colliculus; Str, striatum; Th, thalamus; TRS, triangular nucleus of the septum.

Quantification of FMRpolyG-positive inclusions (Figure 3F) was only done in the hippocampus and the striatum of the basal ganglia, since these regions are known to be involved in FXTAS disease pathology (Greco et al., 2002). Irrespective of the brain region involved, most inclusions were located intranuclearly. Sometimes two or more smaller inclusions were located in the same nucleus. In summary, dox induced the production of CGG RNA in CamKII-α-rtTA/TRE-103CGG-GFP mice in several brain regions, which resulted in the formation of FMRpolyG-positive nuclear inclusions, predominantly in the striatum and the hippocampal CA3 region.
Absence of Behavioral Phenotype in Mice Expressing FMRpolyG-Positive Inclusions

To test whether the expression of the CGG repeat and the resulting nuclear inclusions had any impact on mouse behavior, we subjected the mice to a battery of behavioral tests. To control for possible confounding problems with the general condition of the mice, we first tested muscle strength using the hanging wire and the Bioseb grip strength tests 12 weeks after the start of the dox treatment. The latency to fall was 22.0 ± 11.0 vs. 26.5 ± 10.2 s (control vs. dox mice, averages ± s.d., p = 0.385, t = 0.944, df = 17, t-test; Figure 4A) during the hanging wire test and the force was 1.79 ± 0.37 vs. 1.53 ± 0.39 N (control vs. dox mice, averages ± s.d., p = 0.336, t = 0.991, df = 17, t-test; Figure 4B) during the Bioseb grip strength test. We therefore conclude that there were no indications of changes in muscle strength due to the dox treatment. We continued by describing the behavior on the ErasmusLadder, a horizontal ladder consisting of two rows of rungs in an alternating high/low pattern spanning the space between two shelter boxes. After habituation and initial training, we measured performance at 10, 11, and 12 weeks after the start of dox treatment. The start of each trial was indicated by switching on an LED in the start box, followed by a strong tail wind 3 s later. In roughly 75% of the trials, the mice waited until the tail wind started before leaving the start box. Leaving upon perception of the visual cue, or even before that, was observed less often. Changes in this pattern could be a sign of cognitive impairment (Vinueza Veloz et al., 2012), but such changes were not observed between control and dox mice (p = 0.516, 3 × 2 Fisher's exact test, Figure 4D). Next, we characterized the stepping pattern on the ErasmusLadder.
Wild-type C57BL/6J mice have a tendency to avoid the lower rungs and typically make steps from one high rung to the next or the second-next high rung (Vinueza Veloz et al., 2015). We classified these as short and regular steps, respectively. Long steps, skipping at least two high rungs, and lower-rung steps occurred much less often, as did other irregular steps such as backward walking. Thus, also regarding the stepping pattern, no impact of the dox treatment was observed (Table 2 and Figures 4E-G). Finally, to test for putative defects in spatial memory formation, we subjected the mice to the Morris water maze test around 12 weeks after the start of the dox treatment. During 5 consecutive days, the mice were trained to find a hidden platform just below the surface of an opaque, circular pool. Over the sessions, both control and dox-treated mice became faster at finding the hidden platform, with no statistically significant differences between the two groups [p = 0.134, F(1,17) = 2.479, repeated measures ANOVA, Figure 4H]. On the next 2 days, the experiment was repeated, but without a hidden platform. On these probe trials we made video recordings of the mice (Figure 4I). First, we counted how often the mice crossed the location where the hidden platform had been during the training sessions and compared these counts with crossings of the analogous locations in the other three quadrants. During the first probe trial, both control and dox-treated mice had a preference for the real location (in quadrant 3) over the other areas (control: 2.2 ± 1.5 crosses per trial of the real location vs. 1.0 ± 0.9 crosses of the other locations, p = 0.021, U = 60.5, Mann-Whitney test; dox mice: 2.0 ± 2.2 vs. 0.6 ± 0.7 crosses, averages ± s.d., p = 0.107, U = 101.5, Mann-Whitney test; control vs. dox mice: p = 0.813, χ² = 0.95, χ² test). During the second probe trial, the preference of the control mice for the real location was gone (1.8 ± 1.2 vs.
1.6 ± 1.3 crosses, p = 0.624, U = 108.0, Mann-Whitney test), but remained present in the dox-treated mice (3.3 ± 2.3 vs. 1.1 ± 1.1 crosses per trial, p = 0.005, U = 63.0, Mann-Whitney test). This difference between control and dox mice was on the border of statistical significance (p = 0.061, χ² = 7.36, χ² test, Figure 4J). This might indicate that the dox-treated mice had more trouble understanding that the hidden platform was no longer in place. This, however, was not reflected in the relative dwell times per quadrant [p = 1.00, F(1,17) = 0.000, repeated measures ANOVA, Figure 4K], which leads us to conclude that the Morris water maze also did not reveal convincing differences in behavior due to activation of the premutation.

DISCUSSION

Wide-spread occurrence of nuclear inclusions is a major hallmark of FXTAS. To date, it is a matter of debate whether these inclusions contribute to cellular pathology in FXTAS, or, in contrast, slow down the disease process by sequestering toxic RNA and proteins. Such a protective function has been suggested for FXTAS (Hagerman and Hagerman, 2016), but also for other protein-aggregation disorders, such as Huntington's disease and SCA1 (Klement et al., 1998; Saudou et al., 1998; Cummings et al., 1999; Arrasate et al., 2004).

FIGURE 4 | Absence of a clear behavioral phenotype in dox-treated mice. Neither the hanging wire test (A) nor the Bioseb grip strength test (B) demonstrated an impact of dox treatment on muscle strength. (C) Also, the balance beam test failed to find consistent differences in either the number of slips (left) or the time to cross (right). The tests were performed on a thick (12 mm diameter) and a thin (8 mm diameter) wooden beam. (D) On the ErasmusLadder, trial starts were indicated by lighting an LED in the start box, followed 3 s later by a strong tail wind. The fractions of trials with starts before the visual cue ("Early"), during the visual cue ("Light") or after the start of the tail wind ("Air") were comparable between control and dox-treated mice. Data recorded at 10, 11, and 12 weeks after the start of dox treatment. (E) The ErasmusLadder consists of alternating high and low rungs. The fractions of short steps (from one high rung to the next), regular steps (from one high rung to the second high rung), long steps (from one high rung to another, skipping at least two) and lower rung touches were also similar for both groups, as further illustrated for the fraction of regular steps and lower rung touches (F) as well as for step times (G). The data in (F,G) show the medians with the shades indicating the inter-quartile range. See Table 2 for a more extensive statistical analysis of the ErasmusLadder test. (H) In the water maze, the mice had to find a platform hidden just below the water surface. As the water was made opaque, the mice could not see the platform. During 5 consecutive training days, the latency to find the platform decreased in both control and dox-treated mice. (I) On the next 2 days, the hidden platform was removed (probe trials) and the trajectories of the mice were recorded. The heat maps indicate the time spent per area for two exemplary mice. The original location of the hidden platform is indicated by a pink dashed circle in quadrant 3. (J) On the first probe day, the mice crossed the location where the hidden platform had been more often than the analogous regions of the other quadrants. On the second day, the control mice no longer searched more often in the area where the hidden platform had been (p = 0.624, Mann-Whitney test), while the dox-treated mice kept searching specifically around the location where the hidden platform had been (p = 0.005, Mann-Whitney test). (K) This retention, however, was not noticeable when comparing the times spent per quadrant. Unless indicated otherwise, behavioral tests were performed during the 12th week of dox treatment. Group sizes were 10 mice.

TABLE 2 | The percentages of steps to higher rungs, being either short, regular or long, as well as those to lower rungs (irrespective of stride length) and backward steps, were compared at 10, 11, and 12 weeks after onset of dox treatment. Note that the percentages do not add up to 100% as some irregular types of steps were not considered here (in particular, steps starting from lower rungs). For the two most frequent step categories, the step times are also indicated and compared. The values were first calculated per mouse, and then compared between the two groups (n = 10 mice/group). The median and interquartile range (IQR) values in this table refer to the recording session at 12 weeks after onset of dox treatment. All values refer to front paw movements. p-values reflect the between-subject comparisons of repeated measures ANOVAs. Since not a single p-value was close to the threshold for significance, no correction for multiple comparisons was applied.

To study the relation between the development of intranuclear inclusions and behavioral deficits, we used a novel, inducible and neuron-specific mouse model for FXTAS under the control of the CamKII-α promoter. Expression of an expanded 103CGG repeat RNA transgene is induced by dox and is under the control of the Tet-On system. This inducible mouse model shows no evidence of expression in the absence of dox (i.e., no leakage of expression), and was induced after completion of normal development to avoid interaction with developmental processes. Within a month after transgene induction, FMRpolyG-positive nuclear inclusions were found in the striatum and the CA3 region of the hippocampus. Two months after the occurrence of the first nuclear inclusions, the inclusions were abundant in most brain areas in which the CamKII-α promoter is active, such as the hippocampus, the neocortex and the striatum.
Yet, we could not identify a robust behavioral phenotype that could be attributed to the inclusion pathology in these mice. Several mouse models have significantly contributed to our understanding of the molecular mechanisms underlying FXTAS and have characterized disease progression. Previously, in a different inducible mouse model for FXTAS, using the heterogeneous nuclear ribonucleoprotein (hnRNP) promoter, we found rapid death after dox induction. The neuronal level of transgene expression in these mice was low, and nuclear inclusions were sparse or even absent in the brain (Hukema et al., 2014). In contrast, in a third mouse line, under control of the brain-specific protease-resistant-protein (PrP) promoter, we observed both the formation of nuclear inclusions and behavioral deficits (Hukema et al., 2015). These mice developed only a deficit in the compensatory eye movement pathway after 20 weeks of treatment with dox. Although expression of the transgene containing the expanded CGG repeat mRNA was found in the hippocampus, lobule X of the cerebellum and the striatum, these expression levels were low, with the exception of lobule X of the cerebellum where expression was most profound. Together, these results led us to question whether the development of nuclear inclusions is indeed the cause of FXTAS symptoms. Therefore, we developed a new inducible transgenic mouse model under the control of the CamKII-α promoter, expecting stronger expression in the brain. In our CamKII-α-rtTA/TRE-103CGG-GFP mouse model, the expression of GFP followed the previously described distribution of the CamKII-α promoter (Wang et al., 2013). Immunohistochemical staining showed the strongest GFP expression in the striatum, the CA3 region of the hippocampus and lobule X of the cerebellum. Moderate GFP expression was found in the neocortex, the dentate gyrus, the hypothalamus and several midbrain areas.
In all of these regions, with the notable exception of the cerebellum, nuclear inclusions were also formed. If nuclear inclusions in these areas resulted in functional deficits, a broad range of behavioral impairments would be expected. Typical cerebellar symptoms, although prominent in FXTAS patients (Hagerman et al., 2001; Tassone et al., 2007; Hagerman, 2013, 2016), were not expected in our mouse model, since the CamKII-α promoter is only active in a very limited part of the cerebellum. We therefore focused on spatial learning, which has previously been shown to be affected in a knock-in mouse model ("the Dutch mouse") (Van Dam et al., 2005; Hunsaker et al., 2009), and on striatal motor coordination functions, as deficits in the latter also occur as parkinsonism in patients (Hagerman et al., 2001). An intact hippocampus is essential for normal spatial learning in the water maze (D'Hooge and De Deyn, 2001; Okada and Okaichi, 2009; Laeremans et al., 2015). Our mice showed no, or only marginal, deficits in the water maze test, arguing against severely impaired hippocampal function. The striatum is vital for motor control, and striatal damage leads to impaired behavior on the balance beam (Shear et al., 1998; Feng et al., 2014), which was not observed in our mice. This lack of an effect on motor coordination was further substantiated by the equal performance of treated and control mice on the ErasmusLadder and the grip tests. Although we cannot exclude that there were subtle behavioral deficits that we did not observe, it is safe to state that there were no major changes in behavioral performance in spite of the abundance of nuclear inclusions in the dox-treated mice. The expanded CGG RNA and proteins can aggregate with many other molecules into nuclear inclusions (Ma et al., 2019).
The expanded CGG RNA by itself is not sufficient to induce toxicity; the production of an out-of-frame FMRpolyG protein through RAN translation is necessary for cellular toxicity (Galloway and Nelson, 2009; Hashem et al., 2009; Sellier et al., 2017; Derbis et al., 2018). Our present results indicate that the development of FMRpolyG-positive nuclear inclusions is probably not very detrimental to the function of neurons. It remains to be seen whether aggregation is an active process, aimed at sequestering toxic molecules and thereby slowing down disease progression, or rather an epiphenomenon that is a physical consequence of the molecular structure of the expanded CGG RNA and/or the RAN translation protein FMRpolyG.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.

ETHICS STATEMENT

All experiments involving mice were performed according to Dutch law and institutional guidelines (Erasmus MC, Rotterdam, Netherlands), in conformity with EU Directive 2010/63. Prior to the start of the experiments, project licenses were obtained from the national authority (Centrale Commissie Dierproeven, The Hague, Netherlands) after review by an independent ethical committee (DEC Consult, Soest, Netherlands) and filed under numbers AVD101002015290 and AVD1010020197846.

AUTHOR CONTRIBUTIONS

SH, RH, and LB conceived the project. SH, RM, ET, VB, and RV performed the experiments. SH, RM, RW, RH, and LB analyzed the data. SH, RW, and LB wrote the manuscript with input from all authors.
THE EFFECT OF VOCABULARY TOWARDS WRITING SKILL WITH READING SKILL AS MODERATING EFFECT

Vocabulary is one of the English components that needs to be mastered for acquiring writing and reading. Writing requires a varied vocabulary to build different sentences, and reading requires understanding the different meanings of words depending on their context in a sentence. Hence, vocabulary is important in English language learning to support writing and reading. Given the importance of vocabulary, writing, and reading, the aim of this research is to test the effect of vocabulary mastery on writing ability in English, and to test whether reading ability has a moderating effect that strengthens or weakens this relation. This study used moderated regression analysis (MRA), with tests as the research instruments. The instruments were shown to be valid and reliable. Classical assumption tests were applied before the main analysis. The samples were Management students of semesters four and six. Descriptive analysis of each variable was used to describe the students' ability in English. Linear regression analysis was used to analyze the effect of the independent variable on the dependent variable, and moderated regression analysis was used to analyze the moderating effect. The results showed that vocabulary had a positive, significant effect on writing skill, and that reading skill moderated the relation between vocabulary and writing skill. Further research can focus on experimental studies of vocabulary, writing, and reading.

INTRODUCTION

In the era of Industrial Revolution 4.0, students are expected not only to speak well but also to write well. Moreover, university students are encouraged to read and write well in preparation for their theses. Hence, vocabulary was chosen as the English component underlying good writing, with reading skill as a writing enhancement. between literal and implied meanings; and 10) to capitalize on discourse markers to process relationships.
Vocabulary and reading both relate to writing. Writing involves seven characteristics of written language, as stated by Brown (2007): permanence, in that the final written form can still be clarified; production time, in that writing goes through a process until the final version; consideration of a distant audience and its interpretation; orthography, by which writing encodes simple to complex ideas; complexity, involving skills in reducing redundancy, combining sentences, making inferences, and creating lexical variety; vocabulary use; and formality. Teaching writing follows six principles according to Brown (2007). Writing needs practice to produce good writing. Writing also needs a process, starting from drafting until it becomes a product. Since writing in a second language requires literary background, the teacher needs to introduce it to the students. Reading is connected to writing, so students need to read well before writing. Authenticity in writing is needed to make the product original. The last principle is the synchronization of prewriting, drafting, and revising. There are four relevant previous studies. Aida and Widiyati (2020) conducted a study on extensive reading to improve students' writing of explanation texts, finding that extensive reading improved writing. Khairunas, Pratama, and Iswanto (2019) studied the effect of learning motivation and vocabulary mastery on students' writing skill in argumentative texts, finding a significant effect of learning motivation and vocabulary mastery on writing skill. Dehkordi and Salehi (2016) studied the impact of explicit vocabulary instruction on writing achievement, concluding that more practice in vocabulary productivity was needed for writing.
Hastuti (2015) conducted a study on the effect of vocabulary and grammar mastery on writing skill, finding that vocabulary and grammar had positive influences on writing skill. Among these previous studies, no research has yet used reading skill as a moderating variable in the relation between vocabulary and writing skill. Hence, this study aims to fill that gap. This study was limited in four respects. Firstly, it was limited to the context of Palembang, where it was implemented at Universitas Katolik Musi Charitas. Secondly, the English component used in this study was limited to vocabulary. Thirdly, the English skills used in this study were limited to writing skill and reading skill. Fourthly, the test could not be administered in the classroom under supervision; it was conducted online using Google Form, since the data were collected during the COVID-19 pandemic period, when all students learned from home. Considering the importance of vocabulary and reading skill for writing skill, the writers focus the study on the effect of vocabulary on writing skill, with reading skill as a moderating effect. The research questions in this study are: 1) Is there an effect of vocabulary on writing skill? 2) Does reading skill have a significant role as a moderating effect on the effect of vocabulary on writing skill? Hence, after conducting this study, the effect of vocabulary on writing skill, with reading skill as a moderating effect, is revealed for teaching-learning activities.

Research Model

Based on the phenomena given, the research model is as follows. From the model, it can be seen that vocabulary, as the independent variable, affects writing skill, while reading skill acts as a moderating variable that strengthens or weakens this influence.
Hypotheses

Based on the research model above, the hypotheses in this study are:
H1: There is an effect of vocabulary on writing skill.
H2: There is a significant role of reading skill as a moderating effect on the effect of vocabulary on writing skill.

Participants / Subject / Population and Sample

The population in this study was all students of the Management study program at Musi Charitas Catholic University. The total number of respondents was 87 students of the Management study program. The respondents were chosen by nonprobability, purposive sampling. Semester 4 students were considered sufficiently capable in English, since they had passed the English in Economics course, while semester 6 students, who were taking the seminar course, were expected to read literature in management and related fields frequently in English.

Instruments

The instrument in this research was a test. According to Creswell (2012), an intelligence test is "a test that measures an individual's intellectual ability".

Data Analysis Procedures

The techniques for data analysis in this study were linear regression and moderated regression analysis (MRA). Ghozali (2016) states that regression is used to measure the strength of the relation between two or more variables, and that a moderating variable is one that strengthens or weakens the relation of independent variables to dependent variables. The independent variable in this study is vocabulary, while the dependent variable is writing. Moderated regression analysis is used since there is a moderating variable, namely reading skill. The study started with constructing the test questions. Since the samples in this study were Management students, the vocabulary, reading, and writing test questions were written from an economics perspective. After that, the test was distributed to the students online using Google Form.
Then, the test answers in the Google Form responses were scored. There were three total scores, namely the vocabulary total score, the reading total score, and the writing total score. The respondents' data were also described. The classical assumption tests were conducted: the normality test, the heteroscedasticity test, and the multicollinearity test (between the independent and moderating variables). Validity and reliability tests for each instrument were also conducted. Afterward, regression was run. Firstly, linear regression was conducted to discover the effect of vocabulary on writing skill; this answered research question 1 and hypothesis 1. Secondly, moderated regression analysis was conducted to discover the effect of reading skill as a moderator in the relation of vocabulary to writing skill; this answered research question 2 and hypothesis 2. The data from the test results were then analyzed. The first step was scoring the tests, yielding scores for the vocabulary, reading, and writing tests; the mean score was used as a descriptive statistic. The second step was to analyze the effect of the independent variable on the dependent variable (hypothesis 1) with linear regression, using SPSS version 20. The third step was to analyze the moderating effect (hypothesis 2). Validity, reliability, and classical assumption tests were conducted before the regression. Moderated regression analysis (MRA) used an analytical approach that maintains sample integrity and provides a basis for controlling the influence of moderator variables. To use MRA with one predictor variable (X), three regression equations had to be compared to determine the type of moderator variable (Ghozali, 2016; Sekaran & Bougie, 2017). The reading skill (RS) variable is a pure moderator of the effect of vocabulary on writing skill if equations (1) and (2) are not different, but both differ from equation (3). The reading skill variable is a quasi-moderator if equations (1), (2) and (3) all differ from one another.
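As an illustration, the three-equation MRA comparison described above can be sketched in Python with statsmodels on simulated data. This is only a hedged sketch, not the study's SPSS analysis: the variable names V (vocabulary), RS (reading skill), and W (writing) and all numbers below are hypothetical.

```python
# Hypothetical sketch of the three-equation MRA comparison (not the study's SPSS output).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 87  # same sample size as the study
df = pd.DataFrame({"V": rng.normal(70, 10, n),    # vocabulary score (simulated)
                   "RS": rng.normal(67, 10, n)})  # reading skill score (simulated)
# Simulated writing score with a small V x RS interaction built in
df["W"] = (20 + 0.4 * df["V"] + 0.2 * df["RS"]
           + 0.005 * df["V"] * df["RS"] + rng.normal(0, 5, n))

eq1 = smf.ols("W ~ V", data=df).fit()              # (1): predictor only
eq2 = smf.ols("W ~ V + RS", data=df).fit()         # (2): adds the moderator
eq3 = smf.ols("W ~ V + RS + V:RS", data=df).fit()  # (3): adds the interaction

# A significant V:RS coefficient in (3) indicates a moderating effect; whether
# RS is a pure or quasi moderator depends on comparing (1), (2), and (3).
p_interaction = eq3.pvalues["V:RS"]
```

In this setup, comparing the significance of RS in equation (2) with the significance of V:RS in equation (3) implements the pure/quasi moderator distinction described in the text.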
The reading skill variable is not a moderator variable if equations (2) and (3) are not significantly different. To test the moderating effect of reading skill on the effect of vocabulary on writing skill, equation 1 was compared with equations 2 and 3.

Respondents' Characteristics

Of the 120 students, 87 test results were valid and complete and could be used for further testing and analysis. The other 33 test results could not be analyzed further for several reasons, such as the test not being filled in completely, being written in Indonesian, or not being submitted. For the respondents' characteristics, as seen in Table 1, the gender distribution was nearly balanced, with three more female than male respondents. Most respondents were 20 years old, as the chosen respondents were students in semesters 4 and 6 who had passed the English and economics theory courses. Semester 4 respondents outnumbered semester 6 respondents, at 54 respondents, and morning-class respondents outnumbered evening-class respondents, at 57 respondents. Source: the primary data were processed.

Reading Skill

The average reading total score was 66.89 (Table 2), showing a medium level of understanding of English passages. Reading skill is the ability students need to understand English passages. This matters for students' understanding in deepening management science, since recent literature sources are journal articles that are mostly in English. As Krashen states, cited in Khoirunissa and Safitri (2018), continuous reading can enhance English skill performance, including writing.

Writing Skill

In this writing test, there was a weakness in that the test was conducted online during the COVID-19 pandemic period.
During data collection, there were several mistakes: some respondents misread the instructions, so they wrote in Indonesian or wrote fewer than the required 8-10 sentences. Hence, some respondents were contacted via a WhatsApp group to revise their writing and resubmit it in English, with the number of sentences stated in the instructions. As Brown (2007) also states, writing needs a process, starting from drafting until it becomes a product. Several assessment criteria were used in scoring the writing: content (66.79), grammar (65.57), generic structure (65.95), and originality (70.25). The average score of the 87 respondents was 67 (Table 3). Hence, compared to the reading scores (Table 2), the differences between the writing sub-scores and the other variables were not too high.

Normality Test

Before conducting the statistical tests, the multivariate normality assumption was checked; this means each variable and all linear combinations should be normally distributed. If this assumption is met, the residuals of the analysis are also normally and independently distributed. Screening the data for normality is the first step in any multivariate analysis, especially if the aim is inference. From the Kolmogorov-Smirnov values and significances in Table 4, it can be seen that the variable with normally distributed data was reading skill, with a significance value of > 0.01, while vocabulary and writing skill were not normally distributed, because their significance values were < 0.01 (Ghozali, 2016).

Multicollinearity Test

The multicollinearity test examines whether there is correlation among the independent variables in the regression model. There should be no such correlation in a good regression model. If the independent variables are correlated, then they are not orthogonal.
Orthogonal variables are independent variables whose mutual correlations equal zero (Ghozali, 2016). Multicollinearity can be detected by inspecting the correlation matrix of the independent variables and computing Tolerance and VIF values. In the correlation matrix (Table 5), the correlation between the independent variables is 0.301. Because this correlation is still below 95%, there is no serious multicollinearity (Ghozali, 2016). Source: the primary data were processed. No independent variable has a Tolerance value below 0.1, which means there is no correlation among the independent variables. The Variance Inflation Factor (VIF) results show the same: no independent variable has a VIF above 10. It can therefore be concluded that there is no multicollinearity among the independent variables in the regression model (Table 6). Heteroscedasticity Test The heteroscedasticity test examines whether the residual variance differs from one observation to another. If the variance is constant across observations, the model is homoscedastic; if it differs, it is heteroscedastic. A good regression model is homoscedastic (Ghozali, 2016). One heteroscedasticity test is the Park test, which models the variance σ²ᵢ as a function of the independent variables (Ghozali, 2016): σ²ᵢ = α Xᵢ^β. Taking logarithms makes this equation linear: Ln σ²ᵢ = α + β Ln Xᵢ + vᵢ. Since σ²ᵢ is generally unknown, the squared residual is used as a proxy, giving: Ln Res_1SQ = α + β Ln Xᵢ + vᵢ. The result of estimating Ln Res_1SQ = α + β₁ TV + β₂ TG + β₃ TR is shown in Table 7.
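The Park regression above can be sketched as follows, in a single-regressor form for brevity (the paper uses three regressors). All data below are invented placeholders, not the study's values.

```python
import math

def ols_simple(x, y):
    """Simple OLS: returns (intercept, slope) for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def park_test(x, residuals):
    """Park test: regress ln(residual^2) on ln(x). A slope that is
    statistically indistinguishable from zero suggests homoscedasticity."""
    lx = [math.log(xi) for xi in x]
    ly = [math.log(e ** 2) for e in residuals]
    return ols_simple(lx, ly)

# hypothetical vocabulary scores and the residuals of a first-stage regression
x = [55.0, 60.0, 62.0, 68.0, 70.0, 75.0, 80.0]
res = [1.2, -0.8, 0.5, -1.1, 0.9, -0.4, 0.7]
a, b = park_test(x, res)
```

A full test would also compute the standard error of the slope to judge significance, as in Table 7; this sketch only produces the point estimate.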
If the beta coefficients of this regression are statistically significant, the estimated model shows heteroscedasticity; if they are not significant, there is no heteroscedasticity, or in other words the homoscedasticity assumption cannot be rejected. Table 7 shows that the coefficients of the independent variables are not significant, so it can be concluded that the regression model is free of heteroscedasticity. Validity Test Validity in this research concerns the vocabulary, reading, and writing tests, and was assessed through bivariate correlations between each test item score and the variable's total score. Because the instrument used in this research is a test rather than a questionnaire, every item was initially included in the analysis. Based on the item-total correlations, vocabulary item no. 17 and reading items no. 2 and 8 were not significant (invalid) and were excluded from further analysis. **Correlation is significant at the 0.01 level (2-tailed). Source: the primary data were processed. Reliability Test Reliability here measures the consistency of right and wrong answers on the vocabulary and reading tests; for the writing variable, consistency is measured across the assessment criteria. The reliability results are shown in Table 9. Cronbach's Alpha indicates fairly good reliability for the vocabulary and writing tests but not for the reading test. A construct is considered reliable if its Cronbach's Alpha value is > 0.7 (Nunnally, 1994, in Ghozali, 2016).
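The Cronbach's Alpha criterion used above can be sketched directly from its definition. The rubric scores below are hypothetical stand-ins for the four writing criteria, not the study's data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# hypothetical 4-criterion writing rubric scores for 5 respondents
content = [66, 70, 64, 72, 68]
grammar = [65, 69, 63, 71, 66]
structure = [66, 71, 64, 70, 67]
originality = [70, 74, 68, 75, 71]
alpha = cronbach_alpha([content, grammar, structure, originality])
```

Because the four invented criteria move almost in parallel across respondents, alpha comes out high, mirroring the strong writing-skill reliability (0.981) reported in Table 9.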
Source: the primary data were processed. The Effect Analysis of Vocabulary towards Writing Skill with Reading Skill as Moderator Variable using Moderated Regression Analysis (MRA) Moderated Regression Analysis uses an analytic approach that maintains sample integrity and provides a basis for controlling the influence of the moderator variable. To use MRA with one predictor variable (X), three regression equations must be compared to determine the type of moderator variable (Ghozali, 2016; Sekaran & Bougie, 2017), where X1 is Vocabulary, Y is Writing Skill, and Z is Reading Skill. The Effect Analysis of Moderator Variable Reading Skill towards the relation of Vocabulary and Writing Skill (Hypotheses 1 and 2 Testing) To test the moderating effect of reading skill on the relation between vocabulary and writing skill, equation 1 is compared with equations 2 and 3. Source: the primary data were processed. From regression 2, the output is: WS = 25.720 + 0.308 Vc + 0.207 RS. DISCUSSION Classical Assumption Tests In the normality test, vocabulary and writing were not normally distributed because of the weakness of the test, which was conducted online. For the moderating-effect measurement, the V*R data also showed a significance value > 0.01, meaning the data were normally distributed. In the multicollinearity test, based on the correlation among the independent variables above (-0.301, which is below 95%), there was no multicollinearity. In the heteroscedasticity test, no independent variable had a significant parameter coefficient.
Hence, it could be concluded that there was no heteroscedasticity in the regression model. Validity and Reliability Table 8 shows that all test items correlate significantly with the total score, except item no. 2 of the reading variable, which is invalid (significance 0.073 > 0.05). Cronbach's Alpha is good for vocabulary (0.847), less good for reading skill (0.6), and very good for writing skill (0.981). Regression Regression equation 1 is WS = 28.813 + 0.429 Vc, showing that vocabulary has a significant positive effect on writing skill (sig 0.009 < 0.05). As Smith, in Nurdini and Marlina (2017), states, readers search the dictionary for words whose meaning they do not know while reading. This also supports Hastuti's (2015) finding that vocabulary can improve writing. Hypothesis 1, that vocabulary has a significant positive effect on writing skill, is therefore accepted, answering the first research question. The output of the regression equation is: WS = -8.706 + 0.694 Vc + 0.899 RS - 0.008 V*R. From equations 2 and 3, the significance of β3 is 0.006 < 0.05 (β2 = 0, β3 ≠ 0). Hence, reading skill is a pure moderator of the influence of vocabulary on writing skill. This is consistent with Krashen, in Khoirunissa and Safitri (2018), who states that continuous reading can enhance English skill performance, including writing, and with Dehkordi and Salehi (2016), whose research on the impact of explicit vocabulary instruction on writing achievement found that more practice in vocabulary productivity was needed for writing. Hypothesis 2, that reading skill acts as a moderator in the relation between vocabulary and writing skill, is therefore accepted.
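The three-equation MRA comparison described above can be sketched with a small OLS solver: equation 1 regresses writing on vocabulary, equation 2 adds reading, and equation 3 adds the V*R interaction whose significance decides moderation. The score vectors are invented for illustration, not the study's data.

```python
def ols(X, y):
    """OLS via the normal equations (X'X b = X'y), solved by Gaussian
    elimination with partial pivoting; X is a list of rows WITHOUT the
    intercept column (an intercept is prepended automatically)."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k                         # back substitution
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, k))) / A[i][i]
    return coef

# hypothetical vocabulary (V), reading (R) and writing (W) scores
V = [55, 60, 62, 68, 70, 75, 80, 58, 66, 73]
R = [50, 62, 58, 70, 66, 74, 82, 55, 64, 76]
W = [52, 61, 60, 69, 68, 76, 83, 56, 65, 75]

eq1 = ols([[v] for v in V], W)                       # W ~ V
eq2 = ols(list(zip(V, R)), W)                        # W ~ V + R
eq3 = ols([[v, r, v * r] for v, r in zip(V, R)], W)  # W ~ V + R + V*R
```

In the paper's terms, reading skill is a pure moderator when the interaction coefficient in equation 3 is significant while reading's main effect in equation 2 is not; this sketch produces only the point estimates, so significance testing would still be needed.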
This answers the second research question: writing skill increases as vocabulary increases, and if reading skill is strengthened through more frequent reading and practice in understanding passages, the positive effect of vocabulary on writing skill grows stronger. CONCLUSION AND SUGGESTION Two conclusions can be drawn from the analysis. Hypothesis 1, that vocabulary has a significant positive effect on writing skill, is accepted: a person with a rich vocabulary can express his or her ideas freely in writing. Hypothesis 2, that reading skill moderates the relation between vocabulary and writing skill, is also accepted: reading skill can influence the relation between vocabulary and writing skill, which is understandable given that a person who masters vocabulary will improve his or her writing. The limitation of this research is that the test was conducted online during the pandemic, so the results may not be fully representative. Further research can focus on experimental studies of vocabulary, reading, and writing, conducted directly in the classroom to minimize the possibility of bias and correct the weaknesses of online testing.
Preventive Maintenance Strategy for Train Doors Based on the Competitive Weibull Theory In view of the problems of over-maintenance and under-maintenance in the current urban rail transit maintenance strategy and the reliability of single processing of fault data, which is often inconsistent with the actual situation, an incomplete preventive maintenance strategy based on the competitive Weibull model is proposed in this paper. To make the fault mechanism processing method for urban rail vehicles more accurate, fault feature attributes and fault information sequences are introduced to classify fault data. Fuzzy cluster analysis of vehicle fault data can be performed using the formula of the competitive Weibull model, and parameter estimation of the reliability model can be performed by combining it with the graph parameter estimation method. In addition, the fault rate increase factor and service age reduction factor are introduced into the maintenance strategy, and the optimal preventive maintenance cycle and maintenance times are obtained by combining maintenance and replacement according to reliability. A quantum-genetic intelligent algorithm is used to optimize the model-solving process. Finally, the maintenance of urban rail transit train doors is taken as an example. The results of this study show that compared with the traditional maintenance strategy, the reliability of the proposed maintenance strategy is closer to the actual situation. At the same time, the proposed maintenance strategy can effectively reduce the number of parked vehicles, reduce maintenance costs, and ensure the safety of train operation, maintenance economy and performance of tasks. 
Introduction In recent years, along with the development of the economy, urban rail transit has also been developing continuously. Research on the safety and economy of rail vehicles is an important part of the urban rail transit system and has far-reaching significance, and the maintenance of rail vehicles is important [1]. According to statistics, the cost of vehicle maintenance accounts for approximately 40% of the total cost of subway maintenance [2]. Therefore, on the premise of ensuring train safety and the performance of tasks, reducing the cost of vehicle maintenance has become an important research topic in recent years. At present, the main maintenance modes of metro vehicles are fault maintenance and periodic maintenance, in which the maintenance effect is considered to be complete maintenance, that is, "repair as new". However, this does not match reality: maintenance cannot restore the reliability of the system to a completely new state. This maintenance mode therefore causes problems such as over-maintenance or under-maintenance, which increase maintenance costs and waste maintenance resources. In recent years, many scholars have actively explored the reliability-centered maintenance mode. Many studies on reliability are based on the single Weibull model, but for complex repairable systems, fault data are often not independent and identically distributed [3], which means that the single Weibull model is not suitable for metro vehicles. A metro vehicle is a complex of electromechanical equipment [4], so many kinds of failure mechanisms coexist in the vehicle system, which means that competitive failure objectively exists. For metro vehicles with multiple failure mechanisms, reliability can be evaluated with the hybrid Weibull model and the competitive Weibull model. The competitive Weibull model and its parameter estimation are discussed in the literature [5]. A competitive failure model is used to evaluate the reliability of a product in the literature [6][7], and competitive failure models for specific failure modes or processes have been established in the literature [8][9][10]. In this paper, fault data of metro vehicles are used, and the reliability of rail transit vehicles is solved based on the competitive Weibull model, which compensates for the disadvantage of single-fault-mechanism processing.

In the reliability-centered maintenance strategy, the maintenance effect mainly includes complete maintenance (repair as new), incomplete maintenance and minimum maintenance (repair as old) [11]. Because the effect of incomplete maintenance lies between that of complete maintenance and that of minimum maintenance, it is more suitable for engineering practice and has become an important issue in current maintenance modeling research [12]. Incomplete maintenance is usually expressed by the service age reduction factor and the fault rate increase factor.

Preventive Maintenance Strategy for Train Doors Based on the Competitive Weibull Theory
Deqiang He 1, Xiaozhen Zhang 1, Yanjun Chen 1, Jian Miao 1,*, Congbo Li 2, Xiaoyang Yao 3

The service age reduction factor and failure rate increase factor were improved by N. Kuboki [13] and Ronald M. Martinod [14]. At the same time, aiming at preventive maintenance, these authors proposed a nonlinear optimization preventive maintenance strategy according to the functional relationship between the failure rate and the preventive maintenance interval. A new virtual service age method was introduced by Nguyen D T et al. [15]; with it, an incomplete maintenance mode was constructed, and three modes (dynamic, static and fault limitation) were considered. The preventive maintenance interval optimization model under the condition of maximum availability was established by Shen Guixiang [16] and Wang Lingzhi [17]. For various types of repairable equipment, R. Mullo [18] et al.
used different methods to combine the occurrence of uncertain fault types with maintenance and to determine different maintenance intervals for different parts. A variety of new nonlinear selective maintenance optimization methods were introduced by A. Khatab [19] and Byczanski [20] to construct the relevant parameters. Two equivalent models of geometric age regression, GRA and GRI, were proposed by Laurent Doyen [21] and validated on data, describing incomplete maintenance from another perspective. Incomplete maintenance has therefore been introduced into practical engineering, and the theoretical models established are more practical.

In addition, clustering [22][23][24] is one of the most widely used techniques in data preprocessing. In general, clustering uses a distance-based [23] or model-based [25] method. An integrated clustering method based on multistage learning was proposed by Indrajit Saha [26] and F. Liang [27], which also solved classification without attribute-value data. Fuzzy clustering models for multi-attribute data were proposed by Pierpaolo D'Urso [28][29][30], G. Peters [31] and A. Foss [32]; the different measures of each attribute are combined using a weighting scheme so that fuzzy clustering analysis of multi-attribute data can be performed. In the absence of a quantitative probability model, fuzzy logic considering field data and expert opinions was proposed by Maryam Gallab [33] and K. Antosz [34], with which the classification and evaluation of key risks can be completed. A new type of multicriteria decision making (MCDM) was proposed by Soumava Boral [35]: the fuzzy analytic hierarchy process (FAHP) and improved fuzzy multi-attribute ideal comparative analysis (FMAIRCA) were combined to improve the robustness of fault evaluation. Therefore, when historical data with various fault types are preprocessed in this paper, fuzzy clustering analysis is used to improve feasibility, and the resulting data better match actual engineering.

A reliability model for key systems of metro vehicles based on the competitive Weibull model is adopted in this paper. The influence of fault types on reliability is considered, and fuzzy clustering analysis of the fault information sequence and fault data is performed to classify the fault data, making the resulting reliability more suitable for engineering practice. An incomplete preventive maintenance model based on the competitive Weibull theory is established, in which the service age reduction factor and the fault rate increase factor are introduced, and a maintenance mode combining incomplete preventive maintenance, fault maintenance and preventive replacement is adopted. At the same time, reliability is constrained, the preventive maintenance threshold is taken as the decision variable, and the minimum cost per unit time is taken as the objective function. The model thereby achieves the goal of improving the availability of metro trains and reducing the total maintenance cost.

The remainder of this paper is organized as follows. In Section 2, an incomplete preventive maintenance strategy based on the competitive Weibull theory is introduced. In Section 3, the pretreatment of fault data is presented in detail. In Section 4, the solution of the model is presented in detail. A numerical example is provided in Section 5. Conclusions are drawn in Section 6.
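The competing-risk Weibull quantities that this paper builds on can be sketched as follows. The (beta, eta) pairs below are invented placeholders for two door-fault mechanisms, not values from the paper.

```python
import math

def weibull_hazard(t, beta, eta):
    """Hazard rate of a two-parameter Weibull distribution."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def competing_reliability(t, params):
    """Series (competing-risk) system of independent Weibull mechanisms:
    R(t) = prod_i R_i(t) = exp(-sum_i (t/eta_i)^beta_i)."""
    return math.exp(-sum((t / eta) ** beta for beta, eta in params))

def competing_hazard(t, params):
    """Under competing failure mechanisms, the system failure rate is the
    sum of the mechanism failure rates: lambda(t) = sum_i lambda_i(t)."""
    return sum(weibull_hazard(t, beta, eta) for beta, eta in params)

# hypothetical (beta, eta) pairs for two door-fault mechanisms (days)
mechs = [(1.8, 400.0), (2.5, 900.0)]
r200 = competing_reliability(200.0, mechs)
```

With several mechanisms pooled this way, the whole-system reliability falls faster than any single mechanism's, which is why classifying the fault data by mechanism matters before fitting.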
Incomplete preventive maintenance strategy based on the competitive Weibull Theory This paper adopts the competitive Weibull model. The core of the competitive Weibull theory is to classify fault data, since different fault mechanisms have different effects on reliability. Assuming that system L has k fault mechanisms and that F_i(t) is the cumulative failure distribution function of mechanism i, the cumulative failure distribution function of system L is:

F_L(t) = 1 - ∏_{i=1}^{k} [1 - F_i(t)]

According to the competitive Weibull model, the failure rate of system L is the sum of the mechanism failure rates:

λ_L(t) = Σ_{i=1}^{k} λ_i(t)

The reliability of system L is:

R_L(t) = ∏_{i=1}^{k} R_i(t) = exp[-Σ_{i=1}^{k} (t/η_i)^{β_i}]

Incomplete preventive maintenance is then introduced, with the service age reduction factor a and the fault rate increase factor b. Let T be the incomplete preventive maintenance cycle of system L and N the total number of incomplete preventive maintenance actions; the N-th maintenance replaces system L, that is, completes a maintenance cycle. The recurrence formula of the failure rate under incomplete preventive maintenance is:

λ_{n+1}(t) = b λ_n(t + aT), 0 ≤ t ≤ T

Finally, the reliability of the incomplete preventive maintenance system L based on the competitive Weibull model is obtained. In this paper, the objective function is the maintenance cost per unit time of system L. Preventive maintenance costs are divided into a fixed part and a variable part. The cost of a single preventive maintenance interval depends on x_i, the degree of service age reduction, u_i, the time required for maintenance, and age, the service time of the system; simplifying yields C_p, the total cost of incomplete preventive maintenance. C_d is the shutdown cost and C_di the shutdown cost per unit time, where τ_p is the shutdown time for preventive maintenance, τ_m is the shutdown time for minor fault maintenance, and τ_r is the shutdown time for replacement maintenance. The total maintenance cost is the sum of these components. Therefore, the
optimal preventive maintenance times and the optimal incomplete preventive maintenance interval are obtained by optimizing the total maintenance cost per unit time C_L of system L. Incomplete preventive maintenance of system L is triggered in each maintenance interval when the cumulative failure risk reaches the reliability threshold R_0; the resulting reliability equation is transformed to solve for the maintenance interval. To improve the task performance of a metro train, high availability is necessary. Availability is defined as the ratio of the total running time of metro trains to the total time (including failure and maintenance time) [19]:

A = T_work / (T_work + T_notwork)

where T_work is the average working time and T_notwork is the average nonworking time. From this, the availability of the system during a replacement maintenance cycle is obtained, completing the incomplete preventive maintenance model based on the competitive Weibull theory.

Pretreatment of fault data To evaluate reliability with the competitive Weibull model, the problem of separating the fault data t_{i1}, t_{i2}, …, t_{in} must be solved. Fault data can in principle be separated by analyzing the fault mechanism. However, due to the lack of information and the huge workload, fault mechanism analysis is impossible to complete, so another solution is adopted in this article. First, a new concept, the characteristic attributes of faults, is established: in the process of metro vehicle operation, the characteristic attributes of faults are the set of random events, or minimum set of random events, that cause system L to fail. Second, through analysis of the fault data, the fuzzy relationship between a fault and fault stress is established, the similarity of the fault mechanism is represented by the fault stress similarity, and the fault information sequence representing the characteristic attributes of faults is obtained. Finally, the fault information sequence is analyzed by fuzzy clustering. Because the values in the
fault information sequences represent the eigenvalues of the corresponding fault mechanisms, the similarity of the eigenvalues is the similarity of the fault mechanisms. Thus, the fault data can be classified, analysis of the fault mechanism can be avoided, and the requirement of the competitive Weibull model can be satisfied.

The flow of the solution (Fig. 1) is as follows:
(1) Establish the set F of characteristic attributes of faults.
(2) According to expert scores, obtain the evaluation value W of the characteristic attributes F for each fault of the system.
(3) Build the fuzzy relation matrix R between the characteristic attributes of faults F and the fault stress S.
(4) Calculate the fault information sequence B from R and W.
(5) Analyze the fault information sequence B by fuzzy clustering to obtain the fault mechanism similarity.
(6) Complete the fault data classification.

Relationship between fault and fault stress Usually, a fault is represented by three factors: the fault mode, fault mechanism and fault stress [36]. A fault mechanism is a dynamic or static process in which fault stress acts until failure modes occur. Because of the complexity of mechanical systems, there are many combinations among these three elements [37]. There are many failure mechanisms in a metro vehicle system. Even if a simple part breaks, the cause and process of its formation are not singular but form a complex process of fault transmission, which makes it difficult to describe the fault mechanism clearly with a simple explanation or formula. The most important factor affecting the fault mechanism is the fault stress: the same process of fault stress action is similar, while different processes of stress action are certainly different. To avoid analysis of the fault mechanism, the similarity of fault stress is used to represent the similarity of the fault mechanism in this paper. The relationship between a fault and fault stress is established by the mathematical method of fuzzy evaluation, and the fuzzy evaluation results are used as the fault information sequence to characterize the fault mechanism corresponding to each fault.

In this paper, the characteristic attributes of faults are defined as the random events, or the minimum set of random events, that cause system L to fail during the operation of metro vehicles. Random fault events are therefore equivalent to the bottom events in fault tree analysis, and the characteristic attributes of faults are equivalent to the minimum cut sets of the fault tree. The fault is represented by F, with characteristic attributes f_1, f_2, …, f_n. The specific method of fuzzy evaluation is as follows. By combining the actual working conditions and the external environment of the system, the fault stress selection set S = {s_1, s_2, s_3, s_4, s_5} is determined, where S = {working stress, internal stress, working environment stress, accidental factor stress, artificial factor stress}. The fuzzy relation matrix between fault stress and the fault cannot be given an exact value; instead, according to expert knowledge, the fuzzy relation matrix R can be calculated using the binary comparison ranking method. The binary comparison ranking method is a common way to determine the membership function, the simplest version of which is the preferential ranking method. Assuming that one of the characteristic attributes of the fault is excited by m fault stresses, a sufficient number of experienced professionals compare the m fault stresses pairwise to decide which fault stress is more likely to cause the occurrence of that fault characteristic attribute, and the more probable of each pair is recorded once. The numbers of occurrences of the m fault stresses are thus obtained, and each is divided by the number of occurrences of the top-ranked fault stress. The fuzzy relation matrix between
the characteristic attributes of faults and fault stress is obtained as R = (r_ij), where r_ij is the fuzzy relationship between the characteristic attribute f_i of fault F and fault stress s_j.

Weight of the characteristic attributes of faults and the fault information sequence The weight of a characteristic attribute of a fault expresses not only its importance but also its degree of correlation with the fault. In this paper, the fuzzy complementary matrix A = (a_ij) of the characteristic attributes of faults is established by fuzzy evaluation, where a_ij is the relative importance of characteristic attributes i and j to the fault mode. The fuzzy complementary matrix A is modified so that it satisfies the fuzzy consistent matrix requirements: a_ii = 0.5, a_ij + a_ji = 1, and a_ij = a_ik - a_jk + 0.5 for all i, j, k. The weight w_i of each characteristic attribute can then be obtained as:

w_i = 1/n - 1/(2α) + (1/(nα)) Σ_{j=1}^{n} a_ij

where α is the resolution parameter of the weight allocation. Similarly, the weight vector W = (w_1, w_2, …, w_n) is obtained. Therefore, the fuzzy evaluation value of fault stress B (the fault information sequence) can be obtained by composing the weights with the relation matrix, B = W ∘ R, and in the same way the fault information sequence B = (b_1, b_2, …, b_m) of each system failure mode can be obtained.

Fuzzy clustering analysis of fault data After the fault information sequence of the fault data is obtained, the fault information sequence is used to represent the fault, the numerical values in the sequence represent the eigenvalues of the fault mechanism, and the similarity of the eigenvalues represents the similarity of the fault mechanism. The similarity of the fault mechanisms can then be obtained by analyzing the similarity of the fault information sequences with fuzzy clustering. According to the similarity of the fault information sequences, a fuzzy similarity matrix C = (c_ij) is constructed. Generally, the fuzzy relation C established in this way is only reflexive and symmetric and does not satisfy transitivity. Therefore, it is necessary
to solve for the transitive closure t(C) of the fuzzy matrix. Starting from the matrix C, the powers C², C⁴, C⁸, … are calculated by the square method (max-min composition) until C^k = C^{2k} is first found; then C^k is the transitive closure t(C) of C. t(C) is the fuzzy equivalence matrix of C, and if λ is the threshold of fuzzy clustering, the λ-cut equivalence matrix C_λ is obtained by setting the entries of t(C) that are at least λ to 1 and the rest to 0. According to C_λ, the fault information sequences B whose element value is 1 in each column are classified into one group, thus realizing classification of the fault mechanisms from the fault data.

The linear regression model of the Weibull distribution is obtained by taking logarithms twice:

ln ln[1/(1 - F(t))] = β ln t - β ln η

that is, Y = βX + b with Y = ln ln[1/(1 - F(t))] and X = ln t. Because of the small sample size, the median rank method is used to estimate the cumulative failure probability:

F(t_i) ≈ (i - 0.3) / (n + 0.4)

From these formulas, the Weibull probability plots of the fault time intervals are drawn on Weibull probability paper in turn, and the fitted line yields the estimates of the two Weibull parameters, completing the graph parameter estimation method.

Solution of the competitive Weibull Preventive Maintenance Model Based on Quantum-Genetic Algorithms The competitive Weibull model is substituted into the objective function. Through the objective function and constraints, the reliability threshold R_p ensuring the safe operation of trains and the optimal number N of preventive maintenance actions in a cycle are taken as decision variables, and the preventive maintenance interval T_N ensuring the safety and economy of trains is then obtained. In this paper, the quantum-genetic algorithm is used to optimize the process of solving the objective function.
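The graph parameter estimation just described (double-log linearization plus median ranks) can be sketched numerically instead of on probability paper. The failure times below are invented for illustration.

```python
import math

def weibull_fit_median_rank(times):
    """Graph-style Weibull parameter estimation: linearize
    F(t) = 1 - exp(-(t/eta)^beta) as
    ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta),
    with F estimated by median ranks (i - 0.3)/(n + 0.4),
    then fit the line by least squares."""
    ts = sorted(times)
    n = len(ts)
    xs = [math.log(t) for t in ts]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    # intercept = my - beta*mx = -beta*ln(eta)  =>  eta = exp(mx - my/beta)
    eta = math.exp(mx - my / beta)
    return beta, eta

# hypothetical times-to-failure (days) for one fault class
beta_hat, eta_hat = weibull_fit_median_rank([120, 210, 260, 340, 410, 520])
```

The slope of the fitted line is the shape parameter β and the intercept gives the scale parameter η, exactly as reading them off Weibull probability paper would.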
The genetic algorithm (GA) [38] comes from the observation of biological evolution and genetic phenomena in nature. The GA is a global optimization algorithm with parallel computing ability; its advantages are high search efficiency, good versatility, parallelism and robustness. However, the GA also has limitations, such as poor local search ability and slow search speed, and it can easily converge prematurely. To overcome these limitations, the quantum-genetic algorithm (QGA) provides a larger effective population and a stronger global search ability, while adopting the population evolutionary learning of the traditional GA [39]. When the population evolves to the t-th generation, its expression is shown in Formula (34), where n is the population size and each element is a chromosome. For the QGA, the common quantum gates have multiple operators, which can be selected according to the characteristics of the practical problem being solved. Because of its convenience of operation and high efficiency for individual evolution, the quantum rotation gate is the most commonly used quantum operation; its adjustment and updating operations follow the standard forms. The flow of the QGA is similar to that of the basic genetic algorithm: after the fitness function is determined, the quantum population is randomly initialized and evolved, and the optimal solution in the solution space is then obtained. The implementation steps of the quantum-genetic algorithm are as follows: (1) Initialize the algorithm parameters, including the individual binary coding length L, population size N and maximum number of iterations T.
(2) Randomly initialize the quantum population Q(t_0).
(3) Each chromosome in Q(t) is collapsed (measured) to a classical binary state value.
(4) The fitness of each individual is calculated from its state value.
(5) Determine whether the algorithm terminates: if so, the optimal individual and its corresponding fitness are recorded as the result of the algorithm and the algorithm stops; otherwise go to (6).
(6) Update the population with the quantum rotation gate.
(7) Obtain the new population Q for iteration t + 1 and return to (3).

The flow chart for the algorithm is shown in Fig. 2.

Fig. 2. Flow chart for the quantum-genetic algorithm

Example verification

To demonstrate the rationality and superiority of the maintenance optimization strategy proposed in this paper, the maintenance strategy of the train door system of the Nanning Metro is taken as an example.

Characteristic attributes and information sequence of faults analysis

First, according to the maintenance records of the train door system, the characteristic attributes of faults and the fault impact grade of the fault mode set are obtained, as shown in Table 2. Eight types of fault modes are then selected for reliability analysis, and the fault set is F = {F_1 abnormal sound of the metro door, F_2 air leakage through the metro door, F_3 the door pops open after closing, F_4 jitter of the metro door, F_5 buzzer failure, F_6 deformation of the door shield, F_7 door friction noise is too loud, F_8 interference between the door pages and the balanced press wheel}. Next, the weights of the characteristic attributes of the faults of the eight selected fault modes are obtained.
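The steps above can be sketched compactly. The rotation-angle schedule below is a simplified stand-in (rotate each qubit toward the best bit string found so far by a fixed Δθ) rather than the full angle lookup table the QGA literature usually tabulates, and the bit-counting objective is only a toy fitness function.

```python
import math, random

random.seed(1)

def qga_maximize(fitness, n_bits, pop_size=20, max_gen=50,
                 dtheta=0.05 * math.pi):
    """Minimal quantum-genetic algorithm sketch. Each gene is a qubit
    (alpha, beta) with alpha^2 + beta^2 = 1; 'collapse' samples a classical
    bit with P(1) = beta^2, and a quantum rotation gate nudges mismatched
    qubits toward the best bit string found so far."""
    inv = 1.0 / math.sqrt(2.0)
    pop = [[[inv, inv] for _ in range(n_bits)] for _ in range(pop_size)]
    best_x, best_f = None, -float("inf")
    for _ in range(max_gen):
        # (3) collapse every chromosome to a classical bit string
        xs = [[1 if random.random() < b * b else 0 for a, b in chrom]
              for chrom in pop]
        # (4)-(5) evaluate fitness and keep the best individual
        for x in xs:
            f = fitness(x)
            if f > best_f:
                best_f, best_x = f, x[:]
        # (6) quantum rotation gate: rotate each mismatched qubit
        for chrom, x in zip(pop, xs):
            for j, (a, b) in enumerate(chrom):
                if x[j] == best_x[j]:
                    continue
                theta = dtheta if best_x[j] == 1 else -dtheta
                chrom[j] = [a * math.cos(theta) - b * math.sin(theta),
                            a * math.sin(theta) + b * math.cos(theta)]
    return best_x, best_f

# toy objective: maximize the number of 1-bits (optimum = n_bits)
x, f = qga_maximize(lambda bits: sum(bits), n_bits=16)
print(f)  # typically converges at or near the optimum
```

In the paper the decision variables R_p and N would be decoded from the binary chromosome and the fitness would be the (negated) daily maintenance cost; the toy objective here only exercises the quantum operators.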
The metro plug door system is a complex mechatronic system that includes many subsystems. The system has a long working time and a high working frequency. Therefore, it bears a variety of complex fault stresses, which can be divided into three categories: (1) the stress on the plug door from performing its basic operational function; (2) the environmental stress on the plug door while it is working, which can in turn be divided into two kinds: the stress exerted on the plug door by the external environment, and the stress produced by the internal parts of the plug door during operation; (3) failures of the plug door caused by human factors, that is, man-made stress.

Take the failure mode "F_1 abnormal sound of the metro door" as an example. For the characteristic attribute f_1 of fault F_1 (loose fastening bolt for the lock tongue in the square hole of the door side roof), and according to the stress set, the fuzzy evaluation value B_1 (fault information sequence) of fault F_1 with respect to the fault stresses can be obtained:

B_1 = (1.001, 0.824, 0.609, 0.423, 0.235).

In the same way, the fuzzy evaluation values of the other faults are obtained.

Fuzzy Clustering Analysis of Fault Data

The fuzzy similarity matrix C of the clustering objects B is calculated, and according to formulas (27) and (28), the transitive closure matrix is obtained (the repeated off-diagonal entries follow from the symmetry of the matrix):

t(C) =
1.0000 0.8244 0.6551 0.8474 0.5940 0.7065 0.7065 0.4500
0.8244 1.0000 0.6551 0.8244 0.5940 0.7065 0.7065 0.4500
0.6551 0.6551 1.0000 0.6551 0.5940 0.6551 0.6551 0.4500
0.8474 0.8244 0.6551 1.0000 0.5940 0.7065 0.7065 0.4500
0.5940 0.5940 0.5940 0.5940 1.0000 0.5940 0.5940 0.4500
0.7065 0.7065 0.6551 0.7065 0.5940 1.0000 0.8008 0.4500
0.7065 0.7065 0.6551 0.7065 0.5940 0.8008 1.0000 0.4500
0.4500 0.4500 0.4500 0.4500 0.4500 0.4500 0.4500 1.0000

According to the fuzzy clustering method, the fuzzy clustering threshold λ can be 1, 0.8474, 0.8244, 0.8008, 0.7250, 0.6551, 0.5940, or 0.4500, and the larger the value, the greater the number of
clusters. When λ = 1, the number of clusters is 8; when λ = 0.4500, the number of clusters is 1. The results of dynamic clustering at the intermediate thresholds follow in the same way. Considering the requirement of small-sample data processing, the classification at λ_0 = 0.5940 is selected as the clustering result of the fault data set. Thus, the data preprocessing for incomplete preventive maintenance of the metro train door system based on the competitive Weibull model is completed.

Parameter analysis of the incomplete maintenance model based on the competitive Weibull theory

According to the method presented in Chapter III-A, the parameters of the competitive Weibull model are obtained, from which the failure rate and the reliability function of the competitive Weibull model follow. Similarly, the parameters and reliability of a single Weibull model are calculated. It can be seen that the reliability of the competitive Weibull model is significantly lower than that of the ordinary Weibull model due to the complexity of the faults and the diversification of the fault mechanisms. The objective existence of competitive failure and the phenomenon that "user reliability is significantly lower than the evaluation result" are thereby accounted for, which is deemed reasonable in this paper.

The parameters of the system are set as shown in Table 4.

Calculation of the maintenance threshold, maintenance interval and maintenance time

To highlight the superiority of the incomplete maintenance strategy based on the competitive Weibull model, we compare the more mature improved maintenance strategy in the current research field with the maintenance strategy proposed in this paper. Using the same parameter settings and the same intelligent algorithm, the availability and maintenance cost are quantitatively analyzed.
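The squaring-and-cutting procedure (max-min composition until C^k = C^(2k), then a λ-cut of the equivalence matrix) can be sketched as follows; the 4×4 similarity matrix is illustrative, not the paper's 8×8 data.

```python
def maxmin(A, B):
    """Max-min composition of two square fuzzy relation matrices."""
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(C):
    """Square the fuzzy similarity matrix (C -> C^2 -> C^4 -> ...) until a
    fixed point C^k = C^(2k) is reached; that power is the closure t(C)."""
    while True:
        C2 = maxmin(C, C)
        if C2 == C:
            return C
        C = C2

def lambda_cut_clusters(T, lam):
    """Cut the fuzzy equivalence matrix at threshold lam: entries >= lam
    become 1, and identical 0/1 rows form one cluster (1-based indices)."""
    n = len(T)
    rows = [tuple(1 if T[i][j] >= lam else 0 for j in range(n))
            for i in range(n)]
    groups = {}
    for i, r in enumerate(rows):
        groups.setdefault(r, []).append(i + 1)
    return sorted(groups.values())

# illustrative 4x4 similarity matrix (reflexive and symmetric)
C = [[1.0, 0.8, 0.4, 0.4],
     [0.8, 1.0, 0.4, 0.4],
     [0.4, 0.4, 1.0, 0.7],
     [0.4, 0.4, 0.7, 1.0]]
T = transitive_closure(C)
print(lambda_cut_clusters(T, 0.7))  # [[1, 2], [3, 4]]
print(lambda_cut_clusters(T, 0.4))  # [[1, 2, 3, 4]]
```

Sweeping lam over the distinct entries of t(C) reproduces the dynamic clustering described in the text, from 8 singleton clusters at λ = 1 down to a single cluster at the smallest entry.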
In the field of metro train maintenance, we adopt a relatively mature strategy for comparison: on the basis of reliability analysis, a maintenance strategy that combines preventive complete maintenance [40] and preventive replacement (referred to as the "complete maintenance strategy"). In the objective function of this maintenance strategy model, C_total2 is the total cost of one maintenance cycle for the complete maintenance strategy, C_m2 is the cost of minor repair, C_p2 is the cost of preventive maintenance, C_r2 is the cost of preventive replacement, and C_d2 is the cost of parking loss.

In this paper, the complete maintenance strategy and the incomplete maintenance strategy, both based on the competitive Weibull model, are simulated and calculated. The specific process is as follows. With all the parameters obtained, the quantum-genetic algorithm (QGA) described above is used to optimize the two objective functions. First, the QGA parameters are set: maximum iteration number MAXGEN = 50, population size sizepop = 100, variable binary length lenchrom = 20. Using MATLAB to program and simulate the calculation, the iteration curves of the best fitness for the two maintenance strategies are obtained. As shown in Fig. 5, the iteration curves of the complete maintenance strategy and the incomplete maintenance strategy, both based on the competitive Weibull model, are represented by the red curve and the blue curve, respectively. The red curve converges to the global optimum after 18 iterations, with a fitness function value of 1032.50 (i.e., a daily maintenance cost of 1032.50 yuan); the blue curve converges to the global optimum after 24 iterations, with a fitness function value of 920.17 (i.e., a daily maintenance cost of 920.17 yuan).
Result analysis

To demonstrate the superiority of the new maintenance strategy for metro trains based on the competitive Weibull model over the traditional strategy, a comparison is made from three aspects: economy, safety and task. The comparison results are presented in Table 5.

The company's current maintenance mode for the metro train door system is equal-interval maintenance, specifically a monthly inspection: the door system is maintained once a month, and a monthly inspection means regular maintenance. If any fault occurs during the period, troubleshooting is required. The door is then replaced after 12 months, completing one maintenance cycle.

The second mode is a mature preventive complete maintenance strategy in the maintenance field, which is also equal-interval maintenance. The maintenance interval is selected according to the preventive maintenance threshold. In this paper, the maintenance threshold is obtained by optimizing the two objective functions of highest availability and lowest maintenance cost with the quantum-genetic algorithm. Each maintenance cycle is found to consist of 8 periodic preventive complete maintenance actions, after which preventive replacement is carried out.
The preventive maintenance strategy with incomplete, unequal intervals is adopted in this model. The core of determining incomplete maintenance lies in the service-age reduction factor and the failure-rate increase factor. These two factors are calculated from an empirical formula and historical maintenance data, and are therefore highly accurate. According to the historical fault data and the competitive Weibull model, the reliability of the door system is obtained. Then, the objective function with the highest availability and the lowest maintenance cost is optimized by the quantum-genetic algorithm, and the incomplete maintenance threshold is obtained. After a certain number of preventive maintenance actions on the door system according to this threshold, preventive replacement of the door system components is carried out.

The pie charts in Fig. 8 show the proportions of the maintenance cost for the traditional, complete and incomplete maintenance strategies, respectively. Comparing the three pie charts, we find that the cost of shutdown accounts for more than 80% of the total maintenance cost. In addition, the shutdown cost of the traditional maintenance strategy is greater than that of either the complete or the incomplete maintenance strategy. The shutdown cost is determined by the number of shutdown days; that is, the traditional strategy requires more shutdown days than the complete strategy, which in turn requires more than the incomplete strategy. Therefore, the availability of the maintenance strategy proposed in this paper is also higher than those of the other two strategies. Second, minor fault maintenance refers to maintenance after a fault occurs during the operation of the train, which may affect the stable operation of the train, so
it is necessary to minimize the number of minor fault repairs. Under the incomplete maintenance strategy proposed in this paper, the proportion of minor fault repairs is the smallest of the three maintenance strategies. Therefore, compared with the other two strategies, the incomplete maintenance strategy presented in this paper has a lower maintenance cost and provides higher availability.

The model proposed in this paper adopts a preventive maintenance strategy, that is, an incomplete strategy with unequal intervals. After a certain number of preventive maintenance actions on the door system, the door system is preventively replaced. The traditional maintenance strategy is a model with replacement after equal periods.

(1) According to Table 5, the maintenance cost of the metro train door system based on the competitive Weibull model is 920.17 yuan per day, and the total cost of one maintenance cycle is 33261.2 yuan. Compared with the traditional regular maintenance cost of 1120.25 yuan per day and its total cost per maintenance cycle of 40329 yuan, the new maintenance strategy reduces the maintenance cost by 17.86%, saving 7067.8 yuan during one maintenance cycle of the door system. Compared with the daily complete maintenance cost of 1032.50 yuan and its total cost per maintenance cycle of 37686.5 yuan, the incomplete maintenance strategy proposed in this paper reduces the cost by 10.88%, saving 4425.3 yuan per maintenance cycle. Therefore, from the economic point of view, the new maintenance strategy in this paper is more advantageous than either the traditional maintenance strategy or the complete maintenance strategy.

(2) According to Table 5, compared with the availability of the traditional maintenance strategy (0.936) and of the complete maintenance strategy (0.944), the availability of the incomplete maintenance strategy is 0.947, representing increases of 0.011 and 0.003, respectively. Due to
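The reported percentage reductions follow directly from the daily costs in Table 5 and can be checked with a few lines:

```python
# daily costs (yuan/day) and per-cycle totals (yuan) quoted from Table 5
daily = {"traditional": 1120.25, "complete": 1032.50, "incomplete": 920.17}
cycle = {"traditional": 40329.0, "complete": 37686.5, "incomplete": 33261.2}

red_trad = (daily["traditional"] - daily["incomplete"]) / daily["traditional"]
red_comp = (daily["complete"] - daily["incomplete"]) / daily["complete"]

print(f"vs traditional: {red_trad:.2%}, "
      f"saving {cycle['traditional'] - cycle['incomplete']:.1f} yuan/cycle")
print(f"vs complete:    {red_comp:.2%}, "
      f"saving {cycle['complete'] - cycle['incomplete']:.1f} yuan/cycle")
# -> 17.86% (7067.8 yuan/cycle) and 10.88% (4425.3 yuan/cycle)
```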
the frequent regular maintenance performed, the increased shutdown frequency of the traditional maintenance strategy leads to a decrease in its availability. Due to the phenomenon of under-maintenance, the availability of the complete maintenance strategy decreases as the number of minor fault repairs increases. The improvement in availability guarantees that the metro train can better complete its operational tasks. Therefore, from the task aspect, the new maintenance strategy presented in this paper is better than the traditional one.

(3) In addition, the new maintenance strategy presented in this paper is based on the competitive Weibull model, whose parameters are calculated from the maintenance data of the Nanning metro company. Therefore, the classification of the fault data is more in line with the actual situation and the operating status of the metro trains. The fault data of the metro trains are first classified and processed, and only then is the maintenance plan formulated. This makes the maintenance strategy more reasonable and enhances its scientific basis.

(4) Incomplete maintenance is adopted by the new strategy in this paper, so the maintenance cycle has unequal intervals, which is more in line with reality. The traditional maintenance strategy is equal-interval (monthly) maintenance, performed once every 30 days. The maintenance threshold is then 0.9795 each time, which is a case of over-maintenance.
When the maintenance threshold is chosen to be 0.9795, frequent shutdowns occur, and the shutdown cost accounts for a large part of the total maintenance cost. After calculation, the shutdown cost accounts for 84.099% of the total maintenance cost, so frequent shutdowns result in a large increase in the maintenance cost. Equal-interval maintenance is adopted by the complete maintenance strategy, which uses reliability to determine the maintenance interval. Each preventive maintenance action is treated as complete maintenance, that is, the repair effect is assumed to be "as good as new". In actual engineering, however, this is not the case; under-maintenance therefore occurs, which leads to a large increase in the number of minor fault repairs, and the maintenance cost increases accordingly.

Based on the competitive Weibull incomplete maintenance model, the maintenance threshold is 0.9187 and the maintenance intervals are 62 days-48 days-45 days-42 days-39 days-37 days-35 days-31 days-26 days. Compared with the traditional maintenance strategy, the number of maintenance actions is reduced. At the same time, because the service-age reduction factor and the failure-rate increase factor are introduced on the premise of ensuring safety, the maintenance intervals are more in line with the actual situation, and the maintenance cost is greatly reduced.

Conclusions

(1) The maintenance strategy presented in this paper is based on the competitive Weibull model. Analysis methods for the fault feature attributes and the fault information sequence have been introduced, solving the problems of inaccurate calculation results and impractical maintenance decisions caused by the original single-step processing of maintenance data for the metro train door system.
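The shrinking interval sequence can be reproduced qualitatively with a small sketch. The age-update and hazard-inflation rules below are one common way to model imperfect repair, not necessarily the paper's exact formulation, and the parameter values are illustrative rather than the fitted Nanning data.

```python
import math

def pm_intervals(beta, eta, R_p, a, b, n_pm):
    """Sketch of unequal preventive-maintenance intervals under imperfect
    repair for a Weibull item, R(t) = exp(-(t/eta)**beta). After each PM the
    effective age is only partially reset (service-age reduction factor a in
    (0, 1)) and the hazard is inflated (failure-rate increase factor b > 1),
    so each interval that drives the conditional reliability down to the
    threshold R_p gets shorter."""
    budget = -math.log(R_p)          # allowed cumulative hazard per interval
    s = 0.0                          # effective age right after the last PM
    intervals = []
    for k in range(n_pm):
        scaled = budget / (b ** k)   # hazard budget shrinks as failures accrue
        # solve ((s+T)/eta)^beta - (s/eta)^beta = scaled for T
        T = eta * ((s / eta) ** beta + scaled) ** (1.0 / beta) - s
        intervals.append(T)
        s = a * (s + T)              # imperfect PM: only a partial age reset
    return intervals

ts = pm_intervals(beta=2.2, eta=180.0, R_p=0.92, a=0.3, b=1.1, n_pm=6)
print([round(t, 1) for t in ts])   # a strictly decreasing interval sequence
```

With beta > 1, both effects push the same way: the growing effective age and the inflated hazard make each successive interval shorter, mirroring the 62-48-...-26 day pattern reported above.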
(2) Based on the diversity of fault mechanisms, the problem of the limited applicability of the competitive Weibull model is solved: the method of fuzzy clustering analysis is adopted, and the fault data classification of the metro train door system is completed, which avoids the difficult and heavy workload of fault-mechanism detection and analysis.

(3) An incomplete maintenance strategy with unequal maintenance intervals, based on the competitive Weibull model, is adopted in this paper. On the premise of ensuring the safety of the metro train door system, the number of train overhauls and the shutdown time are reduced; thus, the maintenance cost of the door system is reduced and its availability is improved. The maintenance strategy therefore improves the security, economy and task completion of the metro train door system.

(4) In this paper, a maintenance mode combining multiple maintenance modes and replacement is adopted to accommodate the various failure modes of the mechanical and electrical equipment of the door system, which makes the model more robust.

Fig. 5. Iterative curve of the quantum-genetic algorithm
Fig. 1. Flow chart for data preprocessing.

The minimum cut sets for fault F to occur can be obtained, and they constitute the characteristic attributes of fault F, i.e., the basis for the clustering analysis of the fault times.

Parameter estimation in the competitive Weibull model. Metro vehicles have high reliability, so the effective fault data are few and must be classified, making it difficult to satisfy the statistical sample-size requirement; that is, the competitive Weibull model must be evaluated on a small sample. Therefore, the parameters of the competitive Weibull model are estimated by the graph parameter estimation method. (1) Linearization of the Weibull model: first, assuming that every fault distribution of system L obeys a two-parameter Weibull distribution, the fault rate function takes the standard analytic Weibull form. (4) Two asymptotic lines are fitted on the Weibull probability plot. One asymptote, with expression y = kx + b, is the asymptote as x → ∞; the other is perpendicular to the x-axis and lies to the left of all the scatter points, with expression x = x_0.

Table 2. Characteristic attributes and impact grades of the fault modes (reconstructed from the flattened original):

Fault mode | Characteristic attributes | Impact grade
F1 Abnormal sound of the metro door | loose fastening bolt for the lock tongue in the square hole of the door side roof; side door anti-spring wheel loosening; square hole lock anti-loosening line dislocation | Slight
F2 Air leakage through the metro door | abnormal size of door alignment; V-shape size abnormality of the door; abnormal gap of the finger protector tape; abnormal parallelism | Slight
F3 The door pops open after closing | passengers lean against the door; obstacles at the door; door controller failure | Serious
F4 Jitter of the metro door | abnormal clearance between the lower pin side and block; door opening beyond its normal range | Common
F5 Buzzer failure | loose connection of the buzzer | Slightly serious
F6 Deformation of the door shield | passenger extrusion deformation | Serious
F7 Door friction noise is too loud | abnormal clearance between the side of the lower gear pin and block | Slight
F8 Interference between the door pages and balanced press wheel | passengers squeeze doors; abnormal position of the balanced press wheel; long-term vibration; collision of door pages | Slightly serious

First, consider F1 abnormal sound of the metro door = {loose fastening bolt for the lock tongue in the square hole of the door side roof, side door anti-spring wheel loosening, square hole lock anti-loosening line dislocation}. The corresponding fuzzy consistent matrix is constructed, and the weights of the characteristic attributes of the fault are obtained by formula (24): W = (0.198, 0.226, 0.577). Following the same steps, the fuzzy evaluation values (fault information sequences) of the other faults with respect to the fault stresses are obtained. The fault information sequences represent the faults, the clustering object is the set of fault information sequences, the similarity coefficients are calculated from them, and the equivalence relation matrix is obtained.

Fig. 3. Cluster analysis diagram.

Fig. 4. Reliability function of the door system: comparison of the competitive Weibull distribution (red line) and the ordinary single Weibull distribution (blue line), without considering maintenance.

The parameters of the system are set as shown in Table 4. (5) To find the optimal solution of the objective function faster and better, the QGA is used to optimize the solution process.

Funding: [Grant No. 2017GXNSFDA198012], Guangxi Manufacturing Systems and Advanced Manufacturing Technology Key Laboratory Director Fund [Grant No. 19-050-44-S015], Science and Technology Planning Project of Nanning [Grant No. 20193027] and the Innovation Project of Guangxi Graduate Education [Grant No. YCSW2020017].
Neuroprotective Effects of Pomegranate Juice against Parkinson's Disease and Presence of Ellagitannins-Derived Metabolite—Urolithin A—In the Brain

Pomegranate juice is a rich source of ellagitannins (ETs) believed to contribute to a wide range of pomegranate's health benefits. While a lot of experimental studies have been devoted to Alzheimer disease and hypoxic-ischemic brain injury, our knowledge of pomegranate's effects against Parkinson's disease (PD) is very limited. It is suggested that its neuroprotective effects are mediated by ETs-derived metabolites—urolithins. In this study, we examined the capability of pomegranate juice for protection against PD in a rat model of parkinsonism induced by rotenone. To evaluate its efficiency, assessment of postural instability, visualization of neurodegeneration, determination of oxidative damage to lipids and α-synuclein level, as well as markers of antioxidant defense status, inflammation, and apoptosis, were performed in the midbrain. We also checked for the presence of a plausible active pomegranate ETs-derived metabolite, urolithin A, in the plasma and brain. Our results indicated that pomegranate juice treatment provided neuroprotection, as evidenced by the postural stability improvement, enhancement of neuronal survival, its protection against oxidative damage and α-synuclein aggregation, the increase in mitochondrial aldehyde dehydrogenase activity, and maintenance of antiapoptotic Bcl-xL protein at the control level. In addition, we have provided evidence for the distribution of urolithin A to the brain.

Introduction

Studies on dietary polyphenols suggest their beneficial role against Parkinson's disease (PD), which is mainly attributed to antioxidant, anti-inflammatory, and anti-apoptotic activity [1]. The current research trends also cover their metabolic derivatives, in particular, bioavailable gut microbiota metabolites, which offer a novel preventive approach for the disease [2]. The pomegranate (Punica granatum L.)
fruit is a rich source of ellagitannins (ETs) such as punicalagin, punicalin, pedunculagin, gallic and ellagic acid esters of glucose, and ellagic acid (EA) [3], which contribute to the antioxidative, anti-inflammatory, and antiapoptotic activity of pomegranate and are believed to play an essential role in its wide range of health benefits. A lot of research on the neuroprotective activity of pomegranate juice and extract has been done. Supplementation with pomegranate juice in the drinking water of pregnant and nursing dams has been demonstrated to protect the neonatal brain in an inflammatory [4] and a hypoxic-ischemic (H-I) model [5,6]. These neuroprotective effects have been shown to be attributed to the inhibition of oxidative stress and a decrease in the production of proinflammatory cytokines [4] and apoptotic proteins [4][5][6]. In adult male rats, pre-administration with pomegranate extract has provided dose-dependent neuroprotection against cerebral ischemia-reperfusion (I/R) brain injury and DNA damage via antioxidant, anti-inflammatory, and anti-apoptotic action [7]. Pomegranate juice and extracts have also been shown to act neuroprotectively against Alzheimer's disease (AD) in animal models [8][9][10][11][12][13][14]. In older subjects with age-associated memory complaints, who drank 8 ounces of pomegranate juice for four weeks, a significant improvement in verbal and visual memory as well as an increase in plasma Trolox-equivalent antioxidant capacity was observed. Noteworthily, individuals drinking pomegranate juice presented an increased level of a metabolite of pomegranate ellagitannins, urolithin A glucuronide, in plasma [15]. It is believed that pomegranate's neuroprotective effects are mediated by urolithins, the colonic microbiota ellagitannins (ETs)-derived metabolites [8]. The capability of the in vivo generated urolithins A and B to reduce the formation of advanced glycation end products has been demonstrated to be involved in the neuroprotective effect
of pomegranate [16,17]. Urolithin B has also been indicated to suppress neuroinflammation in the cortex, hippocampus, and substantia nigra (SN) of LPS-injected mice [18]. There is a rapidly growing body of literature dealing with mechanistic in vitro studies on urolithins' activities, which may contribute to the overall neuroprotective effects reported for pomegranate. Since mitochondrial impairment and the associated oxidative stress, neuroinflammation, and apoptosis are proposed to be critical processes for neurodegeneration, the inhibition of production of intracellular reactive oxygen species (ROS) [18,19], nitric oxide [18], and pro-inflammatory cytokines [18,20], and the prevention of activation of proapoptotic caspases 3 and 9 [19], caused by urolithins A and B in neuronal cell lines, support their involvement in the neuroprotection.

Despite the considerable effort devoted to studies on the beneficial effects of pomegranate in animal models of AD [9][10][11][12][13][14] and H-I brain injury [6,7,21], there is a gap for research involving experimental models of PD in vivo. To the best of our knowledge, merely two studies referring to this subject have been performed [22,23], and their findings were diverse.

PD is the second most prevalent human neurodegenerative disorder, after AD, which is characterized by motor dysfunction associated with a loss of dopaminergic neurons in the midbrain substantia nigra pars compacta (SNpc) and formation of Lewy bodies, mainly composed of misfolded α-synuclein. Around 95% of diagnosed PD cases are sporadic and are a result of a combination of environmental exposures and genetic susceptibility as well as aging, which is believed to be the predominant risk factor. The pathology of the disease is very complex. Nevertheless, oxidative stress, inflammation, and α-synuclein aggregation, which are tightly linked and interdependent, are regarded to play a crucial role in the neurodegeneration [24,25].
Since the knowledge of pomegranate effects against PD is based on very limited data [22,23], the aim of our study was to evaluate the potential neuroprotective capability of pomegranate juice in a rat model. We triggered a PD-like phenotype in rats by prolonged low-dose rotenone treatment [26]. First, we examined whether administration of pomegranate juice to rats intoxicated with rotenone provided any beneficial effects on postural stability and neuronal survival, and then we assessed the influence of the treatment on antioxidant, inflammatory, and apoptotic markers as well as the α-synuclein level in the midbrain. Finally, we extended the research with an additional experiment to examine whether urolithin A is present in the brain after treatment with pomegranate juice and could therefore contribute to the observed effects.

Bodyweight Gain

The mean body weight gain (Figure 1) during the first four weeks was similar across the groups, with no statistically significant differences. From the fifth week to the end of the experiment, treatment with rotenone alone (ROT) negatively affected body weight gain, which was significantly lower, by 105% in week 5 and about three-fold in weeks 6 and 7, as compared to the control values. This effect was significantly attenuated, by 57%, in the sixth week of pomegranate juice (PJ) treatment.

Postural Instability

To examine whether treatment with PJ could result in behavior improvement, the rats were tested for postural instability (Figure 2). Animals injected with rotenone exhibited statistically significant, 40% greater postural instability as compared to the control. The degree of postural impairment was 20% less in rats that received pomegranate juice and rotenone in combination, as compared to rats injected with ROT alone. Data are presented as mean values ±SEM of eight rats per group and analyzed using one-way analysis of variance (ANOVA) followed by Sidak's multiple comparisons. * p < 0.05 vs. Control. # p < 0.05 vs. ROT.

Microscopic Examination

Hematoxylin and eosin (H&E) staining showed marked cell neurodegeneration in SN tissue of rats injected with rotenone (Figure 3). Treatment with pomegranate juice ameliorated the rotenone-induced effect, as only small, deeply stained neurons were observed. Rats treated with pomegranate juice alone showed normal brain tissue. A rat treated with pomegranate juice and rotenone shows normal neurons (blue arrows) and a few cells with signs of degeneration (white arrows). Original magnification ×400; scale bar: 20 μm.
Immunofluorescence Staining of TH-Positive (TH+) Neurons in the Region of SN

To further examine the suggested beneficial impact of PJ treatment on neuron survival, we performed immunostaining to identify TH+ cells in the region of the SN (Figure 4), which confirmed the microscopic evaluation. The prolonged treatment with ROT resulted in a profound loss of TH+ neurons in comparison with the control, while administration of pomegranate juice improved neuron survival. Treatment with pomegranate juice alone did not affect TH+ cell survival.

Figure 4.
Representative photomicrographs of immunofluorescent staining of TH-positive cells in adjacent microtome sections of substantia nigra (SN) neurons. Rotenone (ROT) administration caused a substantial loss of TH+ neurons compared to a control rat (Control). Administration of pomegranate juice attenuated this loss (PJ + ROT). Pomegranate juice application alone (PJ) did not affect TH+ cell survival compared to control rats. Scale bar 10 µm.

Oxidative Stress Markers and Mitochondrial Aldehyde Dehydrogenase Activity

To assess the effects of pomegranate juice treatment against oxidative stress, malondialdehyde (MDA), as a measure of lipid peroxidation, and markers of the endogenous antioxidant system, including reduced glutathione (GSH) and antioxidant enzyme activities, were assayed in the midbrain. In addition, we assessed the activity of mitochondrial aldehyde dehydrogenase (ALDH2), the enzyme protecting against oxidative stress by detoxifying cytotoxic aldehydes, since we triggered mitochondria-mediated oxidative stress with rotenone [27]. Rotenone administration induced a significant 170% rise in the MDA level in comparison with control rats (Figure 5a). Consistently, mitochondrial ALDH2 activity was decreased by 51% (Figure 5f). Pomegranate juice administration to the rotenone-challenged animals attenuated lipid peroxidation by 60%, to a level similar to that in the control group. The inhibition of lipid peroxidation correlated with about a 2.5-fold increase in mitochondrial ALDH2 activity, even above the control level. The response of the endogenous antioxidant system in the experimental groups was diversified. The activities of catalase (CAT), glutathione peroxidase (GPx), and glutathione S-transferase (GST) in the ROT group were slightly decreased; however, their values did not differ significantly from the control. Administration of PJ to ROT-injected animals increased the activities of CAT, GPx, and GST by 85%, 98%, and 97%,
respectively, compared to those observed in the ROT group (Figure 5c-e), and they were even higher than those in the control group. Treatment with PJ alone also enhanced GPx activity by 55% and CAT activity by 41% vs. the control value, although the latter change was not statistically significant. The GSH level was slightly (by 16%), but not significantly, decreased in rats administered ROT, while combined treatment with PJ and ROT increased it by 114% (Figure 5b). The activity of SOD was not statistically different among the groups (data not shown).

Inflammation Markers

The expression of tumor necrosis factor-alpha (TNF-α) (Figure 6a) and the nitrite concentration (Figure 6b) were similar across all groups, with no statistically significant differences.

Apoptosis Markers

Rotenone caused a 20% decrease in the expression of the pro-survival protein Bcl-xL compared with the control values. PJ treatment of the ROT-challenged animals restored its expression by 18% (not significantly), almost to the level measured in the control rats. The expression of the second assayed Bcl-2 family member, the apoptosis regulator Bax, was not affected by pomegranate juice or rotenone administration, either individually or combined (Figures 7 and S2).
α-Synuclein Expression

To further confirm the observation that PJ treatment can protect neurons, we determined the level of α-synuclein, responsible for a deleterious impact when abnormally accumulated in neurons and widely regarded as a pathological hallmark of PD [25]. Western blot analysis revealed that ROT caused a 27% increase in the level of α-synuclein oligomeric species, which corresponded to an 11-fold increase in the ratio of α-synuclein oligomers/monomers in the midbrain compared to that of control rats. This alteration was significantly ameliorated by PJ treatment in rats challenged with ROT, as there was no significant difference in the oligomeric fraction of this protein between the PJ + ROT and control groups (Figure 8).
Urolithin A Determination

In order to examine whether urolithin A (UA) could be a neurologically active metabolite of pomegranate juice contributing to the overall neuroprotective effects observed in the rotenone model of Parkinson's disease, we checked whether it is distributed into the brain. We therefore evaluated the concentration of UA in plasma and its deposition in the brain. The detection of UA in brain and plasma samples was performed by high-resolution UPLC-ESI-QTOF-MS. Although the total ion chromatograms were too complex to allow the detection of UA (Figure 9a), the high selectivity and sensitivity of this mass spectrometer allowed the detection and identification of UA in the brain and plasma samples of rats treated with pomegranate juice, using the selected ion chromatograms for the UA high-resolution mass in negative mode, m/z 227.0344 for [C13H8O4 − H]− (Figure 9b). This peak was identified and confirmed to be UA by comparison of its UPLC retention time and mass spectrum with those of an authentic chemical standard (Figure 9c). Moreover, its identity was confirmed by determination of its molecular formula by the HRESIMS technique from the detected ion at m/z 227.0354 [M − H]− (m/z calcd 227.0344). The concentration of UA was 1.68 ± 0.25 ng/g tissue in the brain and 18.75 ± 3.21 ng/mL in plasma.
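The selected-ion search above hinges on the deprotonated-ion m/z for C13H8O4. A back-of-the-envelope check, using standard monoisotopic atomic masses (this is an illustration, not the paper's processing pipeline), reproduces the target value to within about a millidalton:

```python
# Compute the monoisotopic neutral mass of urolithin A (C13H8O4) and the
# [M - H]- ion m/z. Atomic masses are standard monoisotopic values.
MASS = {"C": 12.0, "H": 1.00782503, "O": 15.99491462}
ELECTRON = 0.00054858

formula = {"C": 13, "H": 8, "O": 4}
neutral = sum(MASS[el] * n for el, n in formula.items())

# [M - H]- : remove a hydrogen atom, keep its electron on the ion
mz = neutral - MASS["H"] + ELECTRON
print(f"neutral M = {neutral:.4f}, [M-H]- m/z = {mz:.4f}")
```

This yields an [M − H]− m/z of about 227.035, close to the reported calculated value of 227.0344 and the observed 227.0354.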
Discussion

Parkinson's disease itself is not considered fatal [28]; however, due to the lack of any disease-modifying therapy, as the disease progresses, swallowing becomes compromised, causing aspiration pneumonia that can be life-threatening [29]. Moreover, since the quality of life of PD patients is significantly diminished, there is an ongoing search for compounds capable of protecting neurons from a broad range of insults. As we previously reviewed [1], a number of studies support the generally agreed view that polyphenols and/or their metabolites contribute to the neuroprotective effects of plant-derived preparations, which appear to be promising neuroprotective agents. With regard to pomegranate, as mentioned earlier, there is a research bias away from PD. Studies on the neuroprotective properties of pomegranate have mainly focused on AD in animal models [9][10][11][12][13][14]. To the best of our knowledge, only a few studies have been performed using a PD model [22,23]. The study presented herein aimed to examine whether pomegranate juice may provide neuroprotection against PD. For this purpose, we applied the environmentally relevant rotenone model of PD. Rotenone, an inhibitor of respiratory complex I, disrupts mitochondrial electron transport and generates ROS and, due to its high
lipophilicity, it easily crosses biological membranes, including the blood-brain barrier. Rats exposed to prolonged, low-dose rotenone treatment develop selective degeneration of nigral dopaminergic neurons with histopathological hallmarks of PD and PD-like locomotor symptoms, due to sustained inhibition of complex I and the related oxidative injury in the brain [26]. Since body weight (b.w.) loss in animal rotenone models of PD has been demonstrated along with behavioral deficits, a loss of tyrosine hydroxylase-positive (TH+) neurons of the SN, increased apoptosis, and decreased antioxidant defense in the midbrain [30][31][32], we also monitored this parameter. In our study, exposure of rats to rotenone caused a significant decrease in b.w. gain in the last three weeks of the experiment, which is in line with findings by Binienda et al. [33]. The beneficial effect of pomegranate juice treatment against weight loss was noticed one week later.

As mentioned above, rotenone is known to produce PD-like behavioral features, which in our experiment were manifested as impaired postural stability. Pomegranate juice treatment attenuated the rotenone-induced behavioral deficit. This is in agreement with the finding that chronic pomegranate juice co-administration improved movements and reduced levodopa-induced dyskinesia in an MPTP mouse model of PD [23]. Our findings share some similarities with those for other polyphenols: curcumin administered for 50 days and piceid for five weeks have also been reported to improve rotenone-induced postural defects [34,35].
Given that loss of dopaminergic neurons triggers deregulation of motor function [36], which we had previously demonstrated in an inducible transgenic PD model [37], we performed both the microscopic examination and the immunofluorescent analysis of TH+ neurons in sections of the SN area, the region of interest in experimental PD models due to the vulnerability of its dopaminergic neurons [38]. In agreement with other authors' findings [39,40], microscopic examination showed that rotenone-induced neurotoxicity involved the midbrain, while sections of the cerebellum and cortex showed a largely normal structure. H&E staining revealed degenerative and necrotic changes in the form of shrunken neurons with dark cytoplasm and pyknotic nuclei. The neurodegeneration was reflected by the loss of TH-positive cells in the SN region, which is in line with previous studies [41,42]. The administration of pomegranate juice to rotenone-intoxicated rats ameliorated the damaging effect of the neurotoxin, since only some neurons with dark cytoplasm were still observed in the SN region, and enhanced survival of TH+ neurons in this area was noticed. This finding is in agreement with the previously reported lower neuronal loss in rats with I/R injury pretreated with punicalagin [43]. Interestingly, systemic administration of its metabolite UA has protected mice against ischemic brain injury, which correlated with an improved neurological deficit score [44]. Based on this, it seems likely that urolithin A could contribute to the protective effect of PJ treatment against rotenone-induced neuronal degeneration, which results in behavioral improvement.
The preferential degeneration of midbrain neurons, especially in the SNpc region, compared to other nearby catecholaminergic neurons is due to the tremendous oxidative stress associated with high dopamine turnover rates. Dopamine's reactive aldehyde metabolites, mainly 3,4-dihydroxyphenylacetaldehyde, have been demonstrated to contribute to the pathogenesis of PD. The autoxidation of dopamine also contributes to the increased generation of detrimental ROS, which, via lipid peroxidation, leads to the production of other reactive aldehydes, such as MDA, and consequently causes degeneration of dopaminergic neurons [24]. In this study, and in agreement with previous work [39,40], rotenone selectively affected the midbrain area, where an increased level of MDA was detected. Pomegranate juice treatment provided substantial protection against rotenone-induced lipid peroxidation, which was probably ensured by the increased activity of mitochondrial ALDH2, the principal enzyme involved in detoxifying aldehydes. ALDH2 converts MDA and other ROS-induced aldehydes to less toxic acid products and is highly expressed in the brain, especially in the dopaminergic neurons of the midbrain. Inhibition of ALDH2 by some pesticides in turn leads to the accumulation of reactive aldehydes, preferential degeneration of dopaminergic neurons, and the development of PD [27]. Consistent with this idea, Chiu et al.
[27] have shown that administration of a pharmacological activator of ALDH2 reduced the rotenone-induced accumulation of the lipid peroxidation end product 4-hydroxynonenal in the SN and, as a result, prevented loss of dopaminergic neurons. A decline in neurological deficit score and brain cell loss in a homocysteine rat model of AD and in a rat model of cerebral I/R injury has been attributed to the clearance of reactive metabolites by ALDH2 [45]. In line with this, resveratrol supplementation, by preserving the cortical ALDH2 level, provided substantial protection against oxidative stress-mediated neocortex damage in high-fat/sucrose (HFS)-fed rhesus monkeys [46].

On the other hand, the neuronal population is particularly susceptible to oxidative damage, since it has a relatively low antioxidant capacity [24]. Therefore, we assessed the effects of pomegranate juice treatment on the antioxidant defense system in the midbrain. Although many authors have shown a decrease in the GSH level and/or antioxidant enzyme activities in this brain area of rats exposed to rotenone [31,[40][41][42][47][48][49][50], in our experiment these markers were not affected significantly by rotenone administration. Differences in the response can be associated with the higher doses of rotenone, ranging between 1.5 mg/kg b.w. and 2.5 mg/kg b.w., and the route of its administration, which was mainly intraperitoneal [39][40][41][42]. Manjunath and Muralidhara [51] used a lower rotenone dose, i.e., 1 mg/kg b.w., and noticed no significant change in either SOD or GPx activity in the striatum of rats. The same authors have even reported an increase in striatal SOD and CAT activity of mice injected (i.p.)
with rotenone at a dose of 0.5 mg/kg b.w. [51]. In our experiment, the activities of antioxidant enzymes increased in response to combined treatment (ROT + PJ), with the exception of SOD activity (data not shown). Accumulating evidence supports the inducing effect of pomegranate [7,52,53] and its active compound punicalagin [43,54] on the endogenous antioxidant system in different experimental models. We demonstrated a trend toward enhancement of the endogenous antioxidant system following treatment with pomegranate juice alone; the increase was significant only for GPx activity. It is very likely that the ellagitannins present in pomegranate juice contributed to this effect, since increased activity and expression of antioxidant enzymes, including GPx activity, have been reported in the brains of rodents treated with punicalagin and berry-derived ellagitannin-enriched fractions, respectively [54,55]. Sun et al. [56] identified the AMPK-nuclear factor-erythroid 2 p45-related factor 2 (Nrf2) pathway as a mechanism contributing to the enhancement of the antioxidant defense system in the hypothalamus of hypertensive rats treated with pomegranate extract.

We did not find any significant difference in the inflammatory response among the groups, which might be related to the moderate level of rotenone-induced lipid peroxidation. Several authors have reported an increased expression of proinflammatory cytokines, including TNF-α, and/or level of total NO, accompanied by a very high (3-5-fold) increase in the MDA level, in the midbrain of animals administered rotenone [42,50,57].

Oxidative stress is suggested to initiate apoptotic neuronal cell death in PD. Neurotoxins such as rotenone cause cell death through the modulation of members of the B cell lymphoma 2 (Bcl-2) family of proteins [58]. The balance between antagonistic family members such as the apoptosis inhibitor Bcl-xL and its promoter Bax plays a key role in determining cell survival or death [59]. Dhanalakshmi et al.
[60], who observed a substantial (about 4.5-fold) increase in lipid peroxidation in the striatum of rats treated with rotenone (2.5 mg/kg/day, i.p., for 45 days), reported a significant rise in the striatal Bax level. In our study, chronic rotenone treatment significantly decreased only pro-survival Bcl-xL expression. Pomegranate juice administration maintained its expression at the control level, and rescue of midbrain neurons from rotenone toxicity was observed. This result shares a similarity with the findings of Bernier et al. [46], showing that resveratrol supplementation may overcome HFS-induced neocortex damage by protecting the expression of the anti-apoptotic Bcl-2 and ALDH2 proteins in a nonhuman primate.

Moreover, along with the neuronal loss in the SN [61,62] and motor impairment [61], the accumulation of α-synuclein, a hallmark of PD, in rats exposed to chronic subcutaneous low doses of rotenone has recently been reported [61,62]. α-Synuclein accumulation is thought to underlie the neurodegeneration in PD. The pathogenicity of this protein is attributed to the transition from a native α-helical conformation to a β-sheet structure that polymerizes into toxic forms [25]. We found that the rotenone-induced α-synuclein aggregation was significantly diminished by PJ treatment, as the level of its early oligomeric species was similar to that in the control group. It seems that PJ caused down-regulation of α-synuclein protein expression, since in rats treated with the juice alone the level of α-synuclein oligomers was decreased, while the ratio of this fraction to the monomeric one was higher than in the control. This effect is likely to be involved in the improvement of neuronal cell survival, which has also been reported for other natural preparations [42,62].
Because treatment with pomegranate juice protected rats against rotenone-induced motor deficit, neuron degeneration, lipid peroxidation, α-synuclein aggregation, and inhibition of mitochondrial ALDH2 activity in the midbrain, as well as maintained pro-survival Bcl-xL expression at the control level, we surmised that the ellagitannin-derived metabolite generated in vivo, urolithin A, might contribute to this effect. We therefore sought to determine its presence in plasma and in the brain. Our findings provide, to the best of our knowledge, the first evidence that urolithin A is distributed to the brain after the intake of pomegranate juice. Seeram et al. [63] reported the presence of urolithin A, at a level similar to that observed in our study, in the plasma and brain tissue of mice that received synthesized UA (0.3 mg/mouse) by the oral and intraperitoneal routes; however, in mice, following pomegranate extract administration, UA was detected in neither the plasma nor the brain. Based on the available data about its activity, it could be suggested that it contributed to the overall neuroprotective effects demonstrated in our study; however, further studies are required to confirm this assumption.

The results of this study indicate that treatment with pomegranate juice prevents PD-like features in rats. Its efficiency in suppressing lipid peroxidation correlated with the enhanced activity of mitochondrial ALDH2 and normalization of the expression of the anti-apoptotic Bcl-xL protein. The histological analysis demonstrated a substantially lower number of degenerated neurons in the SN as a result of pomegranate administration to rats challenged with rotenone. In addition, our study provides considerable insight into the neuroprotective potential of the pomegranate ellagitannin-derived metabolite urolithin A.
However, this study is only a first step towards understanding the capability of pomegranate for prevention of Parkinson's disease, and mechanistic research involving mitochondria-related processes and a metabolomic approach, as well as further studies in other PD models, should be undertaken.

Pomegranate Juice

Commercial 6-fold concentrated pomegranate juice (PJ) was obtained from Alter Medica (Żywiec, Poland). The product was manufactured in accordance with HACCP (hazard analysis and critical control point) principles, and the fruit ingredients are fully compliant with the Code of Practice of the European Fruit Juice Association (AIJN). Since pomegranate ellagitannins and their hydrolysis product, ellagic acid, have been demonstrated to be precursors of potentially neuroprotective urolithins, including urolithin A [8], which we detected in this study, we identified these phenolics in the tested juice. The ellagitannin composition of PJ was as follows: galloyl-hexoside, ellagic acid-hexoside, 3-bis-HHDP-hexoside (pedunculagin), 4-galloyl-bis-HHDP-hexoside (casuarinin), and ellagic acid (Figure S1). The total polyphenol content, expressed as g of ellagic acid (EA) equivalents per L of juice, was 18.90 ± 0.96 g/L. The ellagitannin identification was performed according to the protocol described previously by Oszmianski et al. [64], using a Waters Acquity Ultra Performance LC system composed of an autosampler, binary solvent manager, and photodiode array detector (PDA; Waters Corporation, Milford, MA, USA). The system was coupled to a quadrupole time-of-flight mass spectrometer (Waters, Manchester, UK) equipped with an electrospray ionization (ESI) source operating in negative and positive ion modes.
Animals

The animal experiment was performed on six-week-old male albino Wistar rats weighing 250-300 g. All the animals used in this study were bred in the Department of Toxicology, Poznan University of Medical Sciences (Poznań, Poland). Animals were held (four rats/cage) in polycarbonate cages (Tecniplast, Buguggiate, Italy) with wood shavings, in a room maintained under a 12 h light/dark cycle, 22 ± 2 °C, 40-54% relative humidity, and controlled air circulation. A commercial diet (ISO 22000-certified laboratory feed Labofeed H) and drinking water were available ad libitum.

Experimental Design

In order to induce PD in rats, rotenone (ROT, Sigma-Aldrich, Poznań, Poland) was injected subcutaneously once daily for 35 days at a dose of 1.3 mg/kg body weight. The dose and schedule of ROT used in the present study were established based on our preliminary studies, and they are similar to those previously described in published reports [33,39,57,65], with slight modifications (Figure 10). Forty rats (cohort #1) were divided randomly into four groups, with 10 animals in each. Group I: rats receiving a vehicle, designated as the control group (Control). Group II: rats treated with pomegranate juice alone at a dose of 500 mg/kg b.w./day (i.g.), designated as the pomegranate juice-treated group (PJ). Group III: rats injected with rotenone (1.3 mg/kg b.w./day, s.c.) alone from the 11th day of the experiment, designated as the rotenone group (ROT). Group IV: rats treated with pomegranate juice at a dose of 500 mg/kg b.w./day (i.g.)
and injected with rotenone from the 11th day, designated as the pomegranate juice + rotenone group (PJ + ROT). The experiment lasted a total of 45 days, including 10 days of pre-treatment with PJ and 35 days of combined treatment with PJ and ROT. The animals were observed daily for clinical signs of toxicity, and body weight was recorded weekly. Twenty-four hours after the last treatment, the rats were euthanized with ketamine/xylazine (100 U/7.5 mg/kg b.w., intraperitoneally) and perfused intracardially with isotonic sodium chloride solution. Following perfusion, the brain was removed quickly, the midbrain, cortex, and cerebellum were separated on ice, and the tissues were snap-frozen using dry ice and stored at −80 °C until further use. For the purpose of the microscopic examination, the brains of two rats from each group were harvested after intracardial perfusion with isotonic sodium chloride solution, followed by 4% (w/v) paraformaldehyde in 0.1 M sodium phosphate buffer, pH 7.4 (Merck, Warszawa, Poland). The brains were then fixed in the buffered paraformaldehyde at 4 °C with gentle shaking for 24 h and subsequently exchanged with graded ethanol three times a day for three consecutive days at 4 °C prior to cryostat sectioning.

In order to examine the distribution of urolithin A into the brain, 3 rats (cohort #2) were treated with pomegranate juice alone at a dose of 500 mg/kg b.w./day for 10 days. The rats were anesthetized 6 h after the last treatment with ketamine/xylazine (100 U/7.5 mg/kg b.w., intraperitoneally), and blood was withdrawn from the heart into heparinized tubes. The brain was harvested after whole-body perfusion with phosphate-buffered saline, pH 7.4, to avoid carryover of metabolites from residual blood.

Experiments were performed in accordance with Polish governmental regulations (Dz.U. 05.

Postural Instability Test

This test was performed according to the method of Cannon et al.
[30]. Each animal was held vertically, and one forelimb was allowed to contact the table, lined with sandpaper. The rat's center of gravity was then advanced and pushed forward until the rat initiated a step. The displacement distance required for the rat to regain its center of gravity was recorded. Three trials for each forelimb were recorded, and the average distance was reported.

Hematoxylin and Eosin (H&E) Staining

Histological examinations were performed in the INFO-PAT laboratory, Poznań, Poland. Following the fixation described in Section 2.3, brains were embedded in paraffin, sliced into 4 µm coronal sections, and stained with hematoxylin and eosin at 22-24 °C. The slides were evaluated by light microscopy (BX61VS, Olympus, Tokyo, Japan) and scanned using a digital camera (HVF22CL 3CCD, Hitachi, Tokyo, Japan) and the Panoramic Viewer software (3DHISTECH, Budapest, Hungary).
Immunofluorescent Staining of TH+ Neurons

Paraformaldehyde-fixed rat brains, embedded in paraffin as in Section 4.5, were cut into 7 µm coronal sections on a rotary microtome. Chosen sections from corresponding regions of the midbrain were deparaffinized in xylene, boiled in citrate buffer in a microwave oven for antigen retrieval, and blocked with 5% normal pig serum (NPS; Vector Laboratories, Burlingame, CA, USA) for 30 min. For immunofluorescent staining, sections were subsequently incubated with the anti-tyrosine hydroxylase antibody (sheep, 1:500, AB1542, Millipore, Temecula, CA, USA) in 5% NPS overnight at 4 °C. Afterwards, sections were rinsed, incubated with an anti-sheep Alexa-488 secondary antibody (1:100, Invitrogen, Carlsbad, CA, USA) in PBS for 30 min, and mounted with Vectashield Hard Set Mounting Medium with DAPI (Vector Laboratories, Burlingame, CA, USA). Stained sections were imaged and manually analyzed under an Eclipse50i fluorescence microscope (Nikon, Tokyo, Japan) equipped with a digital camera and digitalized with NIS Elements software.

Biochemical Examinations

Frozen brain tissues were homogenized with a lysis buffer (Cell Lysis Buffer 2; Bio-Techne-R&D Systems, Minneapolis, MN, USA) supplemented with a cocktail of protease and phosphatase inhibitors (Protease Inhibitor Cocktail I, Bio-Techne-Tocris, Minneapolis, MN, USA) at a weight:volume ratio of 1:2, using a handheld tissue homogenizer. The homogenate of each sample was centrifuged at 10,000× g for 20 min at 4 °C. The supernatant was collected for the biochemical assays, with the exception of the mitochondrial ALDH2 activity assay.

Lipid Peroxidation

Lipid peroxidation was determined by the reaction of its end product, malondialdehyde (MDA), with thiobarbituric acid (TBA), according to the manufacturer's protocol provided with the lipid peroxidation (MDA) assay kit (Sigma-Aldrich, Poznań, Poland).
Endogenous Antioxidants

The reduced glutathione (GSH) level and antioxidant enzyme activities were determined spectrophotometrically as previously described [66]. Briefly, GSH was quantified with Ellman's reagent. Superoxide dismutase (SOD) activity was measured using spontaneous epinephrine oxidation. Catalase (CAT) activity was assayed by measuring the decomposition of hydrogen peroxide. Glutathione peroxidase (GPx) activity was determined by measuring NADPH oxidation using hydrogen peroxide as a substrate. Glutathione S-transferase (GST) activity was determined using 1-chloro-2,4-dinitrobenzene (CDNB) as a substrate.

Mitochondrial Aldehyde Dehydrogenase (ALDH2) Activity

The activity of this enzyme was determined using the ALDH2 activity assay kit according to the manufacturer's protocol (ab115348, Abcam). Tissue samples of the brain were homogenized in three volumes of ice-cold phosphate-buffered saline. The homogenate was treated by adding extraction buffer to a sample protein concentration of 10 mg/mL, centrifuged at 16,000× g at 4 °C for 20 min, and incubated on ice for 20 min. Samples in a volume of 100 µL were then subjected to the microplate assay procedure.

Nitrite Concentration

Nitrite is formed by the spontaneous oxidation of nitric oxide (NO) under physiological conditions. As a measure of NO, we determined the nitrite concentration spectrophotometrically using the Griess diazotization reaction, according to Gilchrist et al. [67]. Briefly, equal volumes of brain homogenate (Section 4.6) and Griess reagent (1% sulfanilamide, 0.1% N-(1-naphthyl)ethylenediamine dihydrochloride, 2.5% H3PO4) were mixed. The absorbance at 540 nm was measured and the nitrite concentration was calculated from a sodium nitrite (NaNO2) solution standard curve.
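The standard-curve step above amounts to a linear fit followed by back-calculation. A minimal sketch, with hypothetical NaNO2 standards and A540 readings (real values depend on the assay range and plate reader):

```python
# Illustrative only: hypothetical NaNO2 standards and A540 readings.

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

std_conc = [0, 10, 20, 40, 80]            # NaNO2 standards (µM), assumed
std_abs = [0.00, 0.05, 0.10, 0.20, 0.40]  # corresponding A540 readings, assumed

slope, intercept = fit_line(std_conc, std_abs)

def nitrite_um(a540):
    """Back-calculate nitrite concentration (µM) from a sample's absorbance."""
    return (a540 - intercept) / slope

print(round(nitrite_um(0.15), 2))  # ≈ 30.0 µM for these example standards
```

The same fit-and-invert pattern applies to any single-analyte spectrophotometric assay calibrated against a dilution series.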
Protein Determination

The quantity of protein in samples was measured using the Bicinchoninic Acid Protein Assay Kit following the manufacturer's instructions (BCA1 and B9643, Sigma-Aldrich, Poznań, Poland).

Western Blotting

Western blot analysis was performed to determine the Bax, Bcl-xL, and α-synuclein protein levels. Samples containing 5 µg (100 µg for Bcl-xL) of protein were separated on 10% or 12% SDS-PAGE gels and transferred to nitrocellulose membranes. After blocking with 10% skimmed milk, the proteins were probed with rabbit Bax, mouse Bcl-xL, rabbit β-actin (Santa Cruz, CA, USA), and rabbit α-synuclein (Cell Signaling Technology) antibodies. The Western blotting detection system and SDS-PAGE gels (10%, 12%) were purchased from Bio-Rad Laboratories (Hercules, CA, USA). As secondary antibodies, an alkaline phosphatase-labelled anti-mouse IgG (Santa Cruz, CA, USA) or an HRP-linked antibody (Cell Signaling Technology) was used. The β-actin protein was used as an internal control. The amount of immunoreactive product in each lane was determined by densitometric scanning using a BioRad GS710 Image Densitometer (BioRad Laboratories, Hercules, CA, USA). The values were calculated as relative absorbance units (RQ) per mg protein.

The dried samples were re-suspended in methanol and filtered through a 0.45 µm PVDF filter (Merck, Warszawa, Poland) before analysis by UPLC-ESI-QTOF-MS.
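The densitometric normalization described above is a ratio-of-ratios: each target band is divided by its lane's β-actin band, then expressed as a percentage of the control group's ratio. A small sketch with invented intensity values:

```python
# Invented band intensities (arbitrary densitometric units), for illustration only.

def relative_quantity(target, actin, control_ratio):
    """Target band normalized to β-actin, expressed as % of the control ratio."""
    return (target / actin) / control_ratio * 100

control_ratio = 1200 / 1000   # control group: target band / β-actin band
rq_control = relative_quantity(1200, 1000, control_ratio)  # 100% by construction
rq_rot = relative_quantity(1800, 1000, control_ratio)      # hypothetical ROT lane

print(round(rq_control), round(rq_rot))  # → 100 150
```

Normalizing within each lane first cancels loading differences between lanes; expressing against the control group's ratio then puts all groups on the %-of-control scale used in the figures.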
UPLC-ESI-QTOF-MS Analysis

The UPLC-ESI-QTOF-MS system used was an Agilent 1290 Infinity (Agilent, Les Ulis, France) equipped with an ESI-QTOF-MS (Agilent 6530 Accurate Mass, Agilent, Les Ulis, France). Chromatographic separation was carried out on an Eclipse Plus C18 column (2.1 × 100 mm, 1.8 µm, Agilent, Les Ulis, France). The solvents used were water with 0.1% formic acid (solvent A) and methanol with 0.1% formic acid (solvent B), at a flow rate of 0.3 mL/min. The gradient of solvent B was as follows: held at 5% for 0.5 min; 5% to 15% in 2.5 min; 15% to 30% in 8 min; 30% to 50% in 4 min; 50% to 90% in 6 min; held at 90% for 4 min; the UPLC column was then equilibrated for 3 min. The gradient of solvent B for analysis was as follows: held at 4% for 10 min; 4% to 95% in 4 min; held at 95% for 2 min; the UPLC column was then equilibrated for 3 min under the initial conditions. ESI conditions were as follows: gas temperature and flow were 300 °C and 9 L/min, respectively; sheath gas temperature and flow were 350 °C and 11 L/min, respectively; capillary voltage was 3500 V. The fragmentor was always set at 150 V. The data obtained were analyzed with MassHunter Qualitative Analysis software. A calibration curve was established using commercially available UA in the range of 1 ng/mL to 100 ng/mL.

Statistical Analyses

The results are presented as mean values ± SEM. The experiments were performed in duplicate and 8 animals per experimental group were used. For the analysis of urolithin A distribution, 3 animals were used. Comparisons between the control and ROT groups were performed by one-way analysis of variance (ANOVA) followed by Sidak's multiple comparisons test. Differences were considered significant at p < 0.05. All statistical analyses and charts were performed using PRISM 6.0 software (GraphPad Software Inc., La Jolla, CA, USA).
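The Šidák step of the multiple-comparisons procedure adjusts each raw p-value for the number m of comparisons via p_adj = 1 − (1 − p)^m. A minimal sketch of that adjustment (hypothetical p-values; the published analyses used PRISM's implementation):

```python
def sidak_adjust(p_values):
    """Šidák adjustment: p_adj = 1 - (1 - p)^m for m comparisons."""
    m = len(p_values)
    return [1 - (1 - p) ** m for p in p_values]

raw = [0.01, 0.04, 0.20]           # hypothetical raw pairwise p-values
adj = sidak_adjust(raw)
significant = [p < 0.05 for p in adj]

print([round(p, 4) for p in adj])  # → [0.0297, 0.1153, 0.488]
print(significant)                 # → [True, False, False]
```

Unlike the Bonferroni bound (p·m), the Šidák formula is exact for independent comparisons and slightly less conservative.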
Figure 1. Effect of pomegranate juice treatment (PJ) on body weight gain of rats injected with rotenone (ROT). Data are presented as mean values of eight rats per group and analyzed using one-way analysis of variance (ANOVA) followed by Sidak's multiple comparisons test. * p < 0.05 vs. Control. # p < 0.05 vs. ROT.

Figure 2. Effect of pomegranate juice treatment (PJ) on postural instability in rotenone (ROT)-injected rats. Data are presented as mean values ± SEM of eight rats per group and analyzed using one-way analysis of variance (ANOVA) followed by Sidak's multiple comparisons test. * p < 0.05 vs. Control. # p < 0.05 vs. ROT.

Figure 3. Representative photomicrographs of hematoxylin and eosin (H&E) stained substantia nigra (SN) sections of rats. Control and pomegranate juice alone treated (PJ) rats show normal neurons (blue arrows). Rotenone (ROT) administration caused prominent degeneration of neurons (white arrows). A rat treated with pomegranate juice and rotenone shows normal neurons (blue arrows) and a few cells with signs of degeneration (white arrows). Original magnification ×400; scale bar: 20 µm.
Figure 4. Representative photomicrographs of immunofluorescent staining of TH-positive cells in adjacent microtome sections of substantia nigra (SN) neurons. Rotenone (ROT) administration caused a substantial loss of TH+ neurons compared to a control rat (Control). Administration of pomegranate juice attenuated this loss (PJ + ROT). Pomegranate juice alone (PJ) had no effect on TH+ cell survival compared to control rats. Scale bar: 10 µm.

2.5. Biochemical Examinations
2.5.1. Oxidative Stress Markers and Mitochondrial Aldehyde Dehydrogenase Activity

Figure 6. Effect of pomegranate juice treatment (PJ) on: (a) tumor necrosis factor (TNF-α) expression; (b) total nitrite concentration, in the midbrain of rotenone (ROT) injected rats. Data are presented as mean values ± SEM of eight rats per group and analyzed using one-way ANOVA followed by Sidak's multiple comparisons test.
Figure 7. Effect of pomegranate juice treatment (PJ) on: B-cell lymphoma-extra-large (Bcl-xL) expression shown as representative immunoblots (a) and as % of control value ± SEM of eight rats per group (b); apoptosis regulator Bax expression shown as representative immunoblots (c) and as % of control value ± SEM of eight rats per group (d), in the midbrain of rotenone (ROT) injected rats. The results are presented as relative levels normalized to β-actin, which was used as an internal control. Data were analyzed using one-way ANOVA followed by Sidak's multiple comparisons test. * p < 0.05 vs. Control.

Figure 8. Effect of pomegranate juice treatment (PJ) on levels of the 17 kDa isoform of α-synuclein (monomers) and the 50 kDa isoform of α-synuclein (oligomers) shown as representative immunoblots (a), on the level of α-synuclein oligomers (b), and on the ratio of α-synuclein oligomers/monomers (c), shown as % of control value ± SEM of eight rats per group, in the midbrain of rotenone (ROT) injected rats. The results are presented as relative levels normalized to β-actin, which was used as an internal control. Data were analyzed using one-way ANOVA followed by Sidak's multiple comparisons test. * p < 0.05 vs. Control; # p < 0.05 vs. ROT group.
Figure 9. Representative UPLC-UV-QTOF chromatograms: (a) total ion chromatogram for a brain sample of a rat treated with PJ; (b) selected ion chromatogram of m/z 227.0354 for a brain sample of a rat treated with PJ; (c) selected ion chromatogram of m/z 227.0354 for the pure UA standard.

Figure 10. Schematic representation of the experimental design used in the study.

perfusion with phosphate buffered saline, pH 7.4, to avoid overlapping of metabolites from the residual blood. Experiments were performed in accordance with Polish governmental regulations (Dz.U.
05.33.289) and with EU Directive 2010/63/EU for animal experiments. The study protocol was approved by the Local Ethics Committee on the Use of Laboratory Animals in Poznan, Poland (63/2015, 4 Sep 2015 and 14/2018, 27 Apr 2018).
Adherence and Association of Digital Proximity Tracing App Notifications With Earlier Time to Quarantine: Results From the Zurich SARS-CoV-2 Cohort Study Objectives: We aimed to evaluate the effectiveness of the SwissCovid digital proximity tracing (DPT) app in notifying exposed individuals and prompting them to quarantine earlier compared to individuals notified only by manual contact tracing (MCT). Methods: A population-based sample of cases and close contacts from the Zurich SARS-CoV-2 Cohort was surveyed regarding SwissCovid app use and SARS-CoV-2 exposure. We descriptively analyzed app adherence and effectiveness, and evaluated its effects on the time between exposure and quarantine among contacts using stratified multivariable time-to-event analyses. Results: We included 393 SARS-CoV-2 infected cases and 261 close contacts. 62% of cases reported using SwissCovid and among those, 88% received and uploaded a notification code. 71% of close contacts were app users, of which 38% received a warning. Non-household contacts notified by SwissCovid started quarantine 1 day earlier and were more likely to quarantine earlier than those not warned by the app (HR 1.53, 95% CI 1.15–2.03). Conclusion: These findings provide evidence that DPT may reach exposed contacts faster than MCT, with earlier quarantine and potential interruption of SARS-CoV-2 transmission chains. INTRODUCTION Contact tracing is a crucial public health measure for controlling the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2]. Traditionally, contact tracing involves interviewing all infected individuals (index cases) to systematically identify their close contacts. This aims at interrupting viral transmission chains by referring these close contacts to quarantine and SARS-CoV-2 testing [2][3][4]. 
However, such manual contact tracing (MCT) has inherent time delays, is resource intensive, and is limited by imperfect recall of encounters, especially those occurring briefly or by chance. Given the rapid SARS-CoV-2 transmission and the high proportion of asymptomatic cases, MCT alone is thus unlikely to be sufficiently effective [5,6]. Digital proximity tracing (DPT) has been developed as a scalable complementary method to identify transmission chains that are likely to be missed or identified late by MCT [7,8]. DPT applications register proximity encounters between individuals using the app with the aim to identify individuals that may have been exposed to an index case. Different technologies and architectures for DPT exist, one of which is decentralized, privacy-preserving proximity tracing (DP-3T) using Bluetooth Low Energy signals [9,10]. The Swiss DPT app, «SwissCovid», was one of the first to be launched in June 2020 and follows the DP-3T blueprint. In May 2021, SwissCovid had around 1.76 million active users, corresponding to 20.5% of the Swiss population [11]. Details on the design and implementation of SwissCovid are reported elsewhere [8,12]. DPT has three potential advantages over MCT [12]. First, the notification of exposed DPT app users is automatized once the index case has triggered the notification, leading to a potential speed advantage over MCT in interrupting transmission chains. Second, DPT still functions when MCT is at capacity due to high case numbers. Third, DPT has a wider reach than MCT because it does not rely on the infected individual's recollection of his or her encounters. However, DPT apps are complex interventions involving multiple, sequential steps and specific actions by app users to exert their effect (notification cascade [13], Supplementary File 1). To date, the possible impact of DPT apps on pandemic mitigation is only partially understood. 
Some modeling studies reported that DPT, alone or in combination with MCT, can have an effect in reducing SARS-CoV-2 transmission [7,14,15]. However, they relied on several strong assumptions, indicating that DPT effectiveness strongly depends on population uptake and the timeliness of case identification and quarantining of contacts [16,17]. Population-level data from the Isle of Wight [18] and more recently from England and Wales [19] provided evidence of the impact of the NHS COVID-19 app on SARS-CoV-2 incidence. In the latter analysis, Wymant et al. estimated that app notifications averted between approximately 200,000 and 900,000 cases between November and December 2020 [19]. Only a few empirical analyses of the effectiveness of SwissCovid exist. Two studies identified factors associated with DPT app uptake and reasons for non-use [20], as well as challenges related to the implementation of SwissCovid in Switzerland [21]. Salathé et al. analyzed publicly available performance indicators for SwissCovid, such as the number of app downloads and notification codes (CovidCodes) entered, and demonstrated proof-of-principle for the functioning of the app [22]. Furthermore, findings from a recent simulation study of the SwissCovid notification cascade in the Canton of Zurich suggest that DPT notifications may have led to an additional 5% of exposed persons entering quarantine in September 2020 [13]. Yet, critical questions pertaining to other conditions necessary for the functioning of DPT apps and their real-world impact remain unanswered. In particular, it is unclear whether the app indeed reduces the time between exposure and entering quarantine in close contacts. Using data from the Zurich SARS-CoV-2 Cohort, this study had two main aims.
First, we evaluated the adherence of SwissCovid app users with the recommended steps (i.e., uploading of CovidCodes by cases upon testing positive for SARS-CoV-2, and quarantine and testing by close contacts upon receiving an app notification). Second, we examined the effectiveness of SwissCovid by evaluating whether the time from exposure to quarantine differed between close contacts who have or have not received an app warning. Study Design and Participants The Zurich SARS-CoV-2 Cohort Study is an ongoing, prospective, longitudinal, population-based cohort study of individuals infected with SARS-CoV-2 and their close contacts in the Canton of Zurich. The cohort was established in collaboration with the Cantonal Health Directorate Zurich and aims to characterize clinical outcomes and immunological responses of index cases and examine patterns of transmission among index cases and their close contacts. Individuals diagnosed with SARS-CoV-2 infection and their close contacts were identified through mandatory laboratory reporting of positive cases to and routine contact tracing by the Cantonal Health Directorate. All identified cases and close contacts were screened for eligibility and invited if they were ≥18 years old, residing in the Canton of Zurich, had sufficient knowledge of the German language and were able to follow the study procedures. Random sampling of the two populations was performed on a daily basis. Sampling of cases was stratified by age and close contacts were sampled in clusters based on the respective index case. Informed consent was obtained from all individuals agreeing to participate in the study. In this analysis, we used data from cases and close contacts enrolled between August 07, 2020 and September 30, 2020, when conditions changed due to a sharp increase in case numbers in Switzerland in early October 2020 [23]. 
The study protocol was approved by the ethics committee of the Canton of Zurich (BASEC 2020-01739) and prospectively registered on the International Standard Randomised Controlled Trial Number Registry (ISRCTN14990068).

Data Collection

Data were collected and managed through the Research Electronic Data Capture (REDCap) system. Questionnaires for cases included questions on socio-demographics, comorbidities, details on the suspected transmission event, symptoms and disease burden. Similarly, questionnaires for close contacts elicited information regarding socio-demographics, symptoms, experiences with quarantine, and details on their contact with the case (e.g., exposure setting, timing). Both questionnaires included questions related to the use of the SwissCovid app, receipt and uploading of CovidCodes by cases, and app warnings received by close contacts (Supplementary File 2). Data from both questionnaires were available for ten individuals who were initially enrolled as close contacts and later tested SARS-CoV-2 positive. Individuals identified as close contacts by contact tracing who tested positive before study enrollment were directly enrolled as cases and thus provided data from only one questionnaire (n = 55; Supplementary File 3). Individuals pertaining to either group were considered "converted cases" in the analysis and contributed data on the level of close contacts and cases, as appropriate.

Definitions

Participants reporting permanent or occasional use of SwissCovid were considered app users. Self-reported exposure settings were classified as household if the participant reported living in the same household as the case. Non-household settings included workplace, private settings, public settings, healthcare facility, school or university, shared accommodation, and military.
In line with the definition by the Cantonal Health Directorate, the exposure date referred to the last day when the close contact was within 1.5 m of the case for ≥15 min up to 48 h before symptom onset (or positive test if asymptomatic) and without personal protective equipment. For household contacts, the exposure date corresponded to the first day the case was isolated. Exposure dates were recorded by two methods: self-reported by participants (main measure) and a proxy measure defined as 10 days prior to the last day of quarantine, as determined by contact tracing.

Outcomes

To evaluate adherence, primary outcomes included the frequency of cases who received and uploaded the CovidCode (thereby triggering a warning of contacts), the frequency of close contacts who received a SwissCovid app notification, and, among those, the frequency of close contacts who received the notification before being contacted by MCT. Regarding effectiveness, our primary outcome was the time interval (in days) between exposure date and the beginning of quarantine among close contacts, comparing those notified by the app to those not notified by the app.

Statistical Methods

Adherence with recommended actions was evaluated using descriptive statistics. Continuous variables are presented as medians and interquartile ranges (IQR) and categorical variables as frequencies (N) and percentages (%). Free text responses regarding reasons for non-use of the app, not uploading the CovidCode by cases, and steps taken by close contacts after receiving a warning were reviewed. Based on their context, responses were coded without a preconceived categorization and reported in frequency and percentage. To evaluate effectiveness (i.e., time from exposure to beginning of quarantine), close contacts were grouped into "app notified" and "not app notified." App non-users were considered "not app notified."
We assumed that notification time in household settings would be intrinsically faster than in non-household settings due to differences in information pathways and thus stratified participants by exposure setting (household vs. non-household). Concordance between self-reported and proxy exposure dates was examined. If the self-reported date of last exposure was later than the beginning of quarantine (e.g., in same-household contacts where the exposure date was not clearly defined), we used the proxy exposure date. If contacts entered quarantine on the day of exposure (leading to a 0-day interval), a delay of 0.5 days was added. Differences between groups were explored using Kaplan-Meier curves and a stratified log-rank test. To evaluate the association between app notification and time from exposure to quarantine, we used a Cox proportional hazards model stratified by exposure setting and adjusted for age group, sex, education and employment status. Non-proportionality and possible influential outliers were tested using the scaled Schoenfeld residuals and dfbeta values, respectively. The model was adjusted for the cluster effect of sampling using robust variance estimation. Hazard ratios (HRs) and 95% confidence intervals (CI) were reported. We explored the robustness of our findings by performing a sensitivity analysis using the proxy exposure date instead of the self-reported exposure date to estimate the time from exposure to quarantine. Furthermore, in a second sensitivity analysis, we restricted our analysis to those using the app to account for potential confounding mediated by app use and associated characteristics of the close contacts. All analyses were performed using R version 3.6.1.

Role of the Funding Source

Study funders had no role in the study design, data collection, analysis, interpretation, or writing of this report. All authors had access to the data in the study and accept responsibility to submit for publication.
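The interval rules just described (self-reported exposure date as the main measure, the proxy date of quarantine end minus 10 days as a fallback when the reported exposure postdates quarantine start, and a 0.5-day offset for 0-day intervals) can be sketched as follows; the dates are invented for illustration:

```python
from datetime import date, timedelta

def time_to_quarantine(reported_exposure, quarantine_start, quarantine_end):
    """Days from exposure to quarantine, following the rules described above."""
    exposure = reported_exposure
    if exposure > quarantine_start:
        # Implausible self-report (exposure after quarantine began):
        # fall back to the proxy date of 10 days before the end of quarantine.
        exposure = quarantine_end - timedelta(days=10)
    days = (quarantine_start - exposure).days
    return 0.5 if days == 0 else float(days)  # 0-day intervals get a 0.5-day offset

# Non-household contact: exposed Sep 1, quarantined Sep 4-14 (hypothetical dates).
print(time_to_quarantine(date(2020, 9, 1), date(2020, 9, 4), date(2020, 9, 14)))  # → 3.0
# Household contact quarantined on the day of exposure receives the 0.5-day offset.
print(time_to_quarantine(date(2020, 9, 4), date(2020, 9, 4), date(2020, 9, 14)))  # → 0.5
```

The 0.5-day offset keeps same-day quarantines inside the time-to-event analysis, since an event time of exactly zero cannot be handled by the Cox model.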
Study Population

Between August 06, 2020 and October 01, 2020, 2,519 individuals were diagnosed with SARS-CoV-2 in the Canton of Zurich and 6,316 close contacts were traced by contact tracing. Among cases, contact information and consent to be recontacted were available for 2,082 individuals, among which 1,134 were eligible and invited to participate in our study. 392 cases agreed to participate (participation rate 35%), of which 65 cases had converted after being originally traced as a close contact. Among all close contacts, contact information was available for 5,545 individuals. 1,808 met our eligibility criteria, of which 734 were contacts of invited cases. 640 close contacts were invited and 271 individuals (261 close contacts and 10 converted cases) agreed to participate (participation rate 42%; Supplementary File 3). In this analysis, we thus included 328 cases, 65 cases that converted from originally being traced as a close contact, and 261 close contacts. Cases and close contacts were largely similar with respect to socio-demographic characteristics (Table 1). Median age of cases and close contacts at time of identification was 38 and 35 years, respectively. Approximately 50% of the participants in both groups were female. Other characteristics such as Swiss nationality (79 and 84%), level of education (55 and 62% with a university or technical college degree), employment status (81 and 80% employed) and self-reported comorbidities (22 and 23% with at least one comorbidity) were also comparable between close contacts and cases. Converted cases were slightly different from the other two groups, with approximately 54% being female and 92% Swiss nationals. The exposure setting was reported as known or strongly suspected by 98% of close contacts and 93% of converted cases. Meanwhile, only 46% of cases knew or suspected the setting in which SARS-CoV-2 transmission occurred.
Among those with knowledge or suspicion regarding their exposure, household and private settings were most frequently reported among close contacts (29 and 26%) and converted cases (53 and

Reasons for app non-use are reported in Table 1 and Supplementary File 4. On average, app non-users were older and a higher proportion were female, retired and non-Swiss nationals compared to app users (Supplementary File 5). Among 243 cases using SwissCovid, 92% (n = 224) reported to have received a CovidCode from public health authorities. Of those, 96% (n = 215) uploaded the code in the app, thus triggering a notification to potentially exposed contacts. Main reasons for not uploading the code included receiving it too late or that their close contacts were already in quarantine (Table 2). Among the 192 close contacts using the app, 38% (n = 73) received an app notification within 7 days of the last relevant exposure. 43 of these reported a non-household exposure setting, corresponding to 34% of all non-household app users. Out of all contacts receiving a notification, 12% (n = 9) received the notification before being contacted by MCT. After receiving the app notification, 14% of the 73 close contacts followed the recommendation of calling the SwissCovid info-line, whilst the remainder undertook other (19%) or no actions (67%). Most participants taking no action stated that they had already been reached by MCT and were already in quarantine and/or tested for SARS-CoV-2 (Table 3).

Effectiveness

The median time from last exposure to beginning of quarantine among all close contacts was 2 days (IQR 1-3 days) based on the self-reported exposure date (main analysis). When using the proxy exposure date, the median time to quarantine was 1 day (IQR 0.5-3 days, sensitivity analysis). There was a 69% concordance between self-reported and proxy exposure dates.
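The CovidCode figures reported above compose multiplicatively along the notification cascade; a quick arithmetic check of the reported counts:

```python
# Counts reported above: 243 app-using cases, 224 received a CovidCode,
# 215 uploaded it (thereby triggering notifications to exposed contacts).
app_users, received, uploaded = 243, 224, 215

print(round(received / app_users * 100))  # → 92 (% of app users who received a code)
print(round(uploaded / received * 100))   # → 96 (% of recipients who uploaded it)
print(round(uploaded / app_users * 100))  # → 88 (% overall end-to-end adherence)
```

The 88% end-to-end figure matches the abstract's statement that 88% of app-using cases received and uploaded a notification code.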
Twenty close contacts reported to have had the last exposure after starting quarantine, 18 of which reported the case to be a household member and two a friend. We found that the time from exposure to quarantine differed across exposure settings and between contacts that received or did not receive an app notification (Figure 1). Overall, household contacts had a shorter median time from exposure to quarantine than non-household contacts (1 vs. 3 days). In non-household settings, we found a difference in time intervals indicating a shorter duration to quarantine in app notified (n = 43; median 2 days, IQR 1-3) compared to non-app notified contacts (n = 138, with missing data on the time interval for four people; median 3 days, IQR 2-4; p = 0.01). We found similar results after excluding non-app users (Supplementary File 6). Among the 43 app notified non-household contacts, 8 (18.6%) reported to have received the app notification before they were contacted by MCT. 47% of app notified non-household contacts reported to have decided themselves to initiate quarantine, compared to 31% of non-app notified non-household contacts. In app notified contacts that received the warning before MCT, 75% (6/8) reported self-quarantine as the initial reason for quarantine, compared to 40% (14/35) of those receiving the warning after MCT (Supplementary File 7). However, in household settings, there was no evidence for a difference in the time from exposure to quarantine between app notified (median 0.5 days, IQR 0.5-2.0) and non-app notified contacts (median 1 day, IQR 0.5-2.0; p = 0.11). In the stratified multivariable Cox model, we found strong evidence that contacts notified by the app had a greater probability of going into quarantine earlier than those not notified by the app while adjusting for age, sex, education, and employment status (HR 1.53, 95% CI 1.15-2.03; p = 0.004).
Age, education level, and employment status were not associated with a shorter time to quarantine (Table 4, Supplementary File 8). There was no evidence for an interaction between app notification and exposure setting. Sensitivity analyses using the proxy exposure date, as well as when restricting the analysis to app users, yielded similar results (Supplementary Files 9, 10).

DISCUSSION

In this study of 261 close contacts and 393 cases (including 65 converted individuals) identified through routine contact tracing in the Canton of Zurich, we evaluated use of SwissCovid and whether it provides a time advantage over MCT. Our analysis showed that non-household contacts notified by the app started quarantine earlier than those not notified by the app. This provides important evidence that DPT apps have an impact on the timely interruption of transmission chains. Most household contacts entered quarantine the same or the following day after exposure to a case in our study. This was expected, as they are easier to contact and are commonly informed directly by the case about their exposure. On the other hand, the contacting of non-household contacts through MCT is often more time-consuming and longer delays may be expected. We found evidence for a possible time advantage through the app in non-household settings, with app-notified contacts entering quarantine on average 1 day earlier than those not notified by the app. Considering that the testing delay (i.e., time from symptom onset to positive test) is 2.5 days on average in the Canton of Zurich [24], tracing delays and an overall reduced effectiveness of a contact tracing strategy are to be expected [14]. However, this does not explain differences between app notified and not app notified contacts. To explain this difference, we descriptively explored multiple hypotheses.
We found that a higher percentage of app notified non-household contacts reported to have entered self-quarantine compared to those not notified by the app (47 vs. 31%). This finding supports the hypothesis that receiving an app notification may lead to a shorter time between exposure and quarantine. Although only 8 (19%) of 43 app notified contacts received the app warning before being reached by MCT, app notifications received after being called by contact tracers may not be without effect. For example, these notifications could have a reinforcing effect on the quarantine recommendations by MCT. We additionally explored alternative hypotheses that could explain our findings, such as confounding by case characteristics (e.g., more symptomatic cases or earlier testing among app users). However, we found no indication for systematic confounding in our descriptive analyses. Additionally, as the non-app notified group in our main analysis also included non-app users, our finding of a difference in time to quarantine may have potentially been influenced by better compliance of app users. Nonetheless, we found similar results after restricting our analysis to only app users. Our findings constitute the first evidence that DPT may be effective in reaching close contacts faster than MCT. Albeit small, such a time difference may be relevant in reducing transmission in the population. Ferretti et al. demonstrated in a modeling study that reducing the time to quarantine from 3 to 2 days had a substantial impact on reducing the spread of SARS-CoV-2, assuming that a large fraction of the population is using the app [7]. This emphasizes the need to focus on behavioral aspects of app uptake and use for the implementation of DPT, as well as to consider specific subgroups in its evaluation, such as distinguishing between different exposure settings. In our study, participants were enrolled during a period when case numbers were comparatively low. 
One potential advantage of the app is that it may be even more effective in times when case numbers are high leading to capacity issues in MCT. However, this requires that app coverage is sufficient and an efficient process is in place to initiate the notification cascade [13,22]. Thus, our findings may underestimate the effectiveness of the SwissCovid app in situations where MCT is overwhelmed. In addition, this analysis is restricted to close contacts that were identified by MCT, due to the design of the Zurich SARS-CoV-2 Cohort study. However, another potential advantage of the app is in warning exposed individuals that were unknown to the case or about whom they had forgotten [7]. This setting is difficult to assess given the privacy-by-design principle implemented in the SwissCovid app and was not within the scope of this analysis. As a consequence, our analysis did not consider potential additional benefits arising in the context of such exposure events. A high percentage of participants reported using the app, exceeding previous estimates based on publicly available data and other population-based surveys [20,22]. This difference may be explained by participants enrolled in the Zurich SARS-CoV-2 Cohort study being better informed about COVID-19 and more compliant with preventive public health measures than non-participants. Based on our data, the generation and uploading of CovidCodes seemed efficient, which contrasts previous reports of an approximate 30% gap between generated and entered CovidCodes [22]. Nevertheless, some of the participants also reported significant delays in receiving the CovidCodes. Until recently, the SwissCovid notification cascade suffered a few bottlenecks relating to the CovidCode generation process. Prior to October 2020, issuance of the codes was linked to MCT. 
It relied on the contact tracers to clarify whether a case requires a code during the initial tracing call and to relay the information to another person responsible for issuing the codes. In parallel to the generation and sending of CovidCodes to the cases, the contact tracer would continue to call the contacts. As the app takes a few hours to push the notifications, MCT would in theory have reached the close contact first. This may also be reflected in the relatively low proportion of close contacts being notified by the app before being reached by MCT. However, although most contacts were reached the same day, contact tracers sometimes had trouble reaching the contacts by phone, leading to time delays and providing the app with a time advantage over MCT. Additional steps have been taken to improve the efficiency of the processes necessary for reducing such delays. In October 2020, the CovidCode generation process was separated from MCT and cases were able to personally request codes through text messages. Additionally, health service providers such as laboratories, pharmacies and testing centers were able to generate and issue CovidCodes to cases starting November 18, 2020 to ensure a more rapid initiation of the notification cascade. Since December 12, 2020, CovidCodes are generated and sent automatically through an online form completed by cases as the first step in MCT in the Canton of Zurich. Some further limitations of our study should be noted. Selection effects during enrollment may have led to a generally more health literate or compliant study population. However, such self-selection effects would not invalidate the proof-of-principle of our analysis but limit the transportability of our findings to the general population. Furthermore, despite consistent signals in our data, a causal relationship between app notification and faster quarantine could not be unequivocally demonstrated.
However, the observation of a small subgroup of contacts who received the app notification and entered quarantine before being reached by MCT instills confidence that SwissCovid, in principle, achieves one of its main goals. Further studies are needed to quantify the impact of our findings on pandemic mitigation. To our knowledge, our study is the first to evaluate the real-world effectiveness of a DPT app and leverages data from a population-based cohort study. While a more in-depth assessment of the exact sequence and timing of events related to the notification cascade may shed further light on the impact of SwissCovid on an individual level, our findings confirm the hypothesized benefit of DPT apps alarming non-household contacts earlier than MCT, thereby leading to earlier quarantine. DATA AVAILABILITY STATEMENT We are open to sharing individual participant data that underlie the results reported in this article, after de-identification, upon reasonable requests to the corresponding author. Data requestors will need to sign a data access agreement. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Ethikkommission Zürich (BASEC 2020-01739). The patients/participants provided their written informed consent to participate in this study.
Computational investigations and grid refinement study of 3D transient flow in a cylindrical tank using OpenFOAM

The fluid physics of liquid draining inside a tank is easily accessible using numerical simulation. However, numerical simulation is expensive when the liquid draining involves a multi-phase problem. Since an accurate numerical simulation can be obtained if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox and is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error has been progressively reduced. The fine grid has a GCI value below 1%. The extrapolated value from Richardson Extrapolation is in the range of the GCI obtained.

[Figure: liquid storage tank with the generation of an air-core.]

The purpose of this study is to determine the Grid Convergence Index (GCI) of three grid resolutions for the liquid draining. The GCI is the most familiar and reliable technique for the quantification of numerical uncertainty [5,6,7]. The result of this study is important for further analysis in order to correctly replicate the air core in the tank. Figure 2 shows the schematic diagram of the tank partially filled with water. The diameter of the tank is 90 mm and the initial height of the water is 350 mm. The liquid is drained by gravity, g, through an open nozzle at the bottom of the tank [2].

Navier-Stokes equations
The continuity and momentum equations are formulated using the incompressible condition.

Turbulence model
In this study, the Reynolds-Averaged Navier-Stokes (RANS) equations are employed to simulate the transient flows.
" is the density, Γ is the diffusion coefficient, # is all source terms and describes the midpoint of the control volume. where F is the mass flux through the cell's faces, as given in Equation 6. Here, # is the surface area element pointing outward, u is the local velocity in three spatial component (x, y and z) and index 6 describes the value at the surface control volume. It can be assumed that the mass flux from the Equation 6 can be calculated from the interpolated values of and . In this study, Second Order Upwind (SOU) differencing scheme is applied. This scheme is conditionally bounded and is more advanced compared to Central Differencing Scheme (CDS) [6,7,8]. For SOU, the critical cell Reynolds number of the result of purely convective transport of a scalar field is free from ∞ and it is infinite [6,9] when the cell Reynolds number is over two. According to [4], the face gradient of can be collected from two values around the face, which are: • Cell-centred gradient for the cells sharing the face: " + P ∑ # 3 3 (12) • Cell-centred gradient for interpolate it to the face: " 3 6 " ; 1 6 " " D (13) In this study, the diffusion terms are discretized using the 2 nd order deferred correction scheme. As Equation 11 is the 1 st order accurate uncorrected, the diffusion term is corrected by using Equation 14 when the vectors N and P are not coordinated [11]. Thus the term is separated into non-orthogonal and orthogonal parts as follows [12]: The first part of Equation 14 is for the orthogonal whereas the second part is for the non-orthogonal. Vector R is parallel to the distance vector, d and vector ) is aligned to the face of the gradient, 3 . In addition, Equation 14 must satisfy the following requirement [4]: The limiting Equation 16 is recommended for stability as the non-orthogonal part is much bigger than the orthogonal part [12]: where V W is user-specified parameter with 0 < V W < 1. Gradient term. 
The gradient term is discretized using the bounded central differencing scheme, which is a 2nd order Gaussian method. As in Equation 17, the gradient is calculated using integrals over the faces [4]: (∇φ)_P = (1/V_P) Σ_f S_f φ_f. The face value can be evaluated from the cell-centre values as φ_f = f_x φ_P + (1 − f_x) φ_N (19). Figure 3 shows that the interpolation factor f_x is the ratio between the distances fN and PN. The volume integrals can be converted into surface integrals by applying the Gauss theorem. They can then be written as a sum over the regarded control volume [4,13], and the resulting expression is called the "semi-discretized" form of the transport equation, shown in Equation 20 [14]. The time discretization can be split into two approaches: explicit and implicit methods. The explicit method needs low computational effort, as it only uses the previous time level in the discretization. However, when the Courant-Friedrichs-Lewy (CFL) number is bigger than one, there is a problem with stability [8]. On the other hand, the implicit method needs more computational effort, as it uses the new time level in the discretization. Robust solutions are normally retrieved because the coupling between flow properties is stronger than in the explicit method [8]. The Euler implicit scheme is a 1st order accurate linear approximation and it guarantees boundedness. The coupling in the system is more stable than with the explicit method, even if the Courant number limit is violated [14]. Boundary conditions Boundary conditions define the sets of faces in the computational mesh which correspond to the boundaries of the physical domain. They are separated into numerical and physical boundary conditions. Numerical boundary conditions are divided into von Neumann and Dirichlet boundary conditions. These stipulate, respectively, the gradient of the variable normal to the boundary and the value of the variable on the boundary (a fixed/constant value) [4].
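The stability contrast between explicit and implicit time discretization described above can be illustrated on a simple decay equation du/dt = −λu; a minimal sketch of the underlying idea (not OpenFOAM code), where the explicit step violates its stability limit while the implicit step stays bounded:

```python
def explicit_euler(lam, dt, steps, u0=1.0):
    """u_{n+1} = u_n * (1 - lam*dt); unstable when lam*dt > 2."""
    u = u0
    for _ in range(steps):
        u = u * (1.0 - lam * dt)
    return u

def implicit_euler(lam, dt, steps, u0=1.0):
    """u_{n+1} = u_n / (1 + lam*dt); unconditionally stable and bounded."""
    u = u0
    for _ in range(steps):
        u = u / (1.0 + lam * dt)
    return u

lam, dt, steps = 10.0, 0.3, 50   # lam*dt = 3 exceeds the explicit limit
print(abs(explicit_euler(lam, dt, steps)))   # grows without bound
print(abs(implicit_euler(lam, dt, steps)))   # decays toward zero
```

With the same oversized time step, the explicit solution blows up while the implicit one decays, mirroring the statement that the implicit method remains stable even when the Courant number limit is violated.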
Meanwhile, the physical boundary conditions for the incompressible flow are explained as follows:
• Inlet: the velocity value is determined and the pressure condition is specified as zero gradient
• Outlet: the outlet boundary is prescribed to satisfy the total mass balance. This can be performed in two ways: a) when the velocity distribution is projected from the inside of the domain, the boundary condition on pressure is set as zeroGradient; b) when the pressure distribution is stated, the boundary conditions on pressure and velocity are set as fixedValue and zeroGradient, respectively.
• Impermeable no-slip wall: the flow velocity on the wall is the same as that of the wall. Thus the flux through the wall is zero, so the pressure gradient is set as zeroGradient
• Slip wall: in the scalar case, the variable is set as zeroGradient, but in the vector case, the components normal and tangential to the wall are set as zero and zeroGradient, respectively
In this study, three different patches have been constructed in the computational domain. The top surface is represented as an inlet and the bottom surface is represented as an outlet. Moreover, the side surface is represented as a wall. A fixedValue boundary condition is set up at the outlet patch, while that at the wall patch is zeroGradient. In this case, the liquid is drained normally by gravity. Other details of the boundary conditions are listed in Table 2 and the description of each boundary is explained in Table 3. Multiphase solvers Multiphase solvers are available in OpenFOAM. They are listed as follows [13]:
• interFoam, LTSinterFoam and interDyMFoam: these are based on the Volume of Fluid (VOF) method
• twoPhaseEulerFoam, multiphaseEulerFoam and multiphaseInterFoam: these are based on the Eulerian-Eulerian method [17]
2.6.1. interFoam. In this study, the interFoam solver is applied because it is capable of simulating the flow of two fluids (liquid and air).
In this solver, only one momentum and one mass conservation equation is solved for both fluids. Therefore, the viscosity and density of both fluids are averaged based on the volume fractions in the cell; momentum and mass transfer between the phases are not considered [13,18]. Here, the subscripts l and g indicate the liquid and the gas, ρ is the density and μ is the dynamic viscosity. U is the mean velocity of the fluid and it is transported together with the volume fraction (indicator function) α. The VOF method can also be recognised as an isothermal fluid system, as shown in Equation 24. Only a single set of governing equations is needed in this method, so it is cheaper than the Eulerian-Lagrangian model, which requires a number of governing equations depending on the number of phases. Here k is the turbulent kinetic energy, ν_t is the turbulent kinematic viscosity, ν is the kinematic viscosity, σ is the surface tension, κ is the curvature of the free surface, ε is the dissipation rate of the turbulent kinetic energy, and C_1ε, C_2ε and σ_ε are characteristic constants of the k-ε model. In this study, following Ref. [18], the constants C_1ε, C_2ε and σ_ε are set to 1.44, 1.92 and 1.30, respectively. Algebraic equations for the Navier-Stokes equations The discretization of the transport equations creates a system of algebraic equations, as indicated by Equation 25 [20]. In particular, when the standard PISO algorithm is set, the complete procedure of the interFoam solver consists of the following steps [13,21]. In the explicit velocity correction stage, the new corrected pressure p* is used to produce the new corrected velocity field U**. This step is repeated to obtain two corrector steps (n = 2). These two corrector steps are adequate to attain a robust solution [8,19]. Grid refinement The grid refinement study is applied to three grid resolutions.
Case A has the finest grid, Case B the medium resolution and Case C the coarsest resolution. These cases are shown in Table 4 (specification) and Figure 4 (visualization). Richardson Extrapolation is acknowledged as "the deferred approach to the limit (h → 0)". It determines a higher-order estimate of flow fields from a series of lower-order discrete values (f_1, f_2, ..., f_n). The method of Richardson Extrapolation is shown in Equation 32 [22]:

f = f_exact + g_1 h + g_2 h² + g_3 h³ + ⋯   (32)

where f denotes the discrete solutions, h is the grid spacing, and g_1, g_2, ... are functions defined in the continuum that do not depend on any discretization. The quantity f is considered "second-order" when g_1 = 0, and f_exact is the continuum value at zero grid spacing. Without assuming the absence of odd powers in Equation 32, Richardson Extrapolation can be generalized to pth-order methods and a grid ratio r as follows:

f_exact ≈ f_1 + (f_1 − f_2)/(r^p − 1)

In this study, the grid refinement ratio r for the uniform mesh, taken as the ratio of the grid spacings of successive grids (medium to fine and coarse to medium), is 1.54. The refinement ratio is higher than the recommended minimum value of 1.3. The observed order of accuracy p can be measured by applying Equation 35:

p = ln[(f_3 − f_2)/(f_2 − f_1)] / ln(r)   (35)

The convergence condition of the system must be clarified first in order to assess the extrapolated value from the equations above. The convergence conditions are listed as follows:
• Monotonic convergence: 0 < R < 1
• Oscillatory convergence: R < 0
• Divergence: R > 1
where R is the convergence ratio:

R = (f_2 − f_1)/(f_3 − f_2)   (36)

For monotonic convergence, generalized Richardson Extrapolation is applied to estimate the errors and uncertainties, as explained in Equation 37. For oscillatory convergence, the results exhibit some oscillations. Lastly, for divergence, the results diverge and the errors and uncertainties cannot be determined [23].
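Under the assumption of monotonic convergence, the convergence ratio, the observed order of accuracy and the Richardson-extrapolated value follow directly from the three grid solutions; a minimal sketch with illustrative numbers (f1 = fine, f2 = medium, f3 = coarse; not the paper's values):

```python
import math

def grid_convergence(f1, f2, f3, r):
    """Convergence ratio R, observed order p and Richardson extrapolation
    from fine (f1), medium (f2) and coarse (f3) grid solutions with a
    constant refinement ratio r."""
    R = (f2 - f1) / (f3 - f2)                  # 0 < R < 1: monotonic
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)    # extrapolated (h -> 0) value
    return R, p, f_exact

# Illustrative three-grid solutions with the refinement ratio used here
R, p, f_exact = grid_convergence(f1=100.0, f2=102.0, f3=106.0, r=1.54)
print(R, p, f_exact)
```

Here the successive differences halve (R = 0.5), so the solution converges monotonically and the extrapolated value lies below the fine-grid solution.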
Table 5 shows the results of the order of accuracy for the draining time, t_drain, and the flow rate, Q. The results are calculated from the three different meshes defined previously; indices 1, 2 and 3 represent Cases A, B and C, respectively. The convergence conditions for t_drain and Q are monotonic, as the value of the convergence ratio R is more than zero and less than one in both cases. Hence these two parameters are applicable in the Grid Convergence Index (GCI) study. The draining time t_drain can be expressed theoretically with the following Equation 38:

t = (D/d_n)² √(2/g) (√h_i − √h)   (38)

Here, D is the tank diameter, d_n is the nozzle diameter, t is the draining time, h_i is the initial water level, h is the water level at time t and g is the gravitational acceleration. This equation is derived with the assumption that the flow is irrotational and inviscid. On the other hand, the flow rate Q can be estimated following Equation 39, Q = v A_n, where v is the flow velocity and A_n is the cross-sectional area inside the outlet nozzle. Roache [22] made a valuable contribution with a systematic and widely used methodology for grid refinement studies, called the Grid Convergence Index (GCI) method. The GCI is based on a grid refinement error estimator derived from generalized Richardson Extrapolation. The percentage difference between the computed and asymptotic values is calculated using the GCI; it demonstrates how far the computed value is from the asymptotic value, and how much the solution would change with further refinement. A small GCI percentage shows that the computed value is approaching the asymptotic range. The GCI for the fine grid can be interpreted as shown in Equation 40:

GCI_fine = F_s |ε| / (r^p − 1)   (40)

where F_s is a safety factor and ε = (f_2 − f_1)/f_1 is the relative difference between the two solutions. As three different grids are used in this study, the safety factor is F_s = 1.25 [24].
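The theoretical draining time of Equation 38 follows from Torricelli's law for an inviscid, irrotational flow; a small sketch using the tank dimensions given earlier (in SI units), with a nozzle diameter assumed for illustration only since it is not stated in the extracted text:

```python
import math

def draining_time(D, d_n, h_i, h, g=9.81):
    """Time for the level to fall from h_i to h in a tank of diameter D
    draining through a nozzle of diameter d_n (Torricelli, inviscid flow):
    t = (D/d_n)^2 * sqrt(2/g) * (sqrt(h_i) - sqrt(h))."""
    return (D / d_n) ** 2 * math.sqrt(2.0 / g) * (math.sqrt(h_i) - math.sqrt(h))

# Tank diameter 90 mm, initial level 350 mm (from the text); the nozzle
# diameter (15 mm) is an assumed value, not the paper's.
t_empty = draining_time(D=0.09, d_n=0.015, h_i=0.35, h=0.0)
print(round(t_empty, 2), "s")
```

Setting h = 0 gives the total time to drain the tank under these idealized assumptions.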
Otherwise, for a refinement study with only two grids, the safety factor is F_s = 3.0. Table 6 shows that the GCI for the draining time t_drain and the flow rate Q from the three meshes is good, with a decreasing value from GCI_32 to GCI_21 (GCI_21 < GCI_32). The results demonstrate that the dependency of the numerical solution on the mesh size has decreased, since the GCI for the finer grid (GCI_21) is lower than that for the coarser grid (GCI_32). Therefore, the simulation results will not change much with further refinement of the grid. Based on Table 8, Figure 6 is plotted and shows a different line pattern from Figure 5. It demonstrates that the Richardson-extrapolated value is slightly lower than the values obtained for the three meshes for the flow rate Q. However, both of these solutions are still within the range of the finer-grid GCI, as shown in Figures 5 and 6. Besides, the error of these two parameters (t_drain and Q) can be defined from the discrepancy between the Richardson-extrapolated value and the simulated value, as determined by Equation 41. Based on the calculation of the RMS error, Figure 7 shows that the relative error between the Richardson-extrapolated value and the finer grid is only 0.169% for the draining time t_drain and 0.216% for the flow rate Q. This illustrates that the finer grid has approached the asymptotic value, where the error due to the spatial discretization has been reduced significantly. Table 9 shows the comparison of the draining time for the liquid level obtained in this study with that from the study by Park and Sohn [2]. It highlights that an excellent agreement is achieved between the results, especially with the experimental result of Park and Sohn. Conclusion This grid refinement study was successfully performed using the OpenFOAM framework. OpenFOAM gives good results and simultaneously saves cost since no license is required.
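The GCI values and their decrease with refinement can be computed in the same way; a sketch with illustrative (not the paper's) numbers, using F_s = 1.25 for a three-grid study as stated above:

```python
def gci(f_coarser, f_finer, r, p, Fs=1.25):
    """Grid Convergence Index for the finer of a pair of grids:
    GCI = Fs * |eps| / (r^p - 1), eps = (f_coarser - f_finer) / f_finer."""
    eps = (f_coarser - f_finer) / f_finer
    return Fs * abs(eps) / (r**p - 1.0)

# Illustrative three-grid solutions (fine, medium, coarse) with r = 1.54
f1, f2, f3, r, p = 100.0, 102.0, 106.0, 1.54, 1.605
gci21 = gci(f2, f1, r, p)   # fine-grid GCI
gci32 = gci(f3, f2, r, p)   # medium-grid GCI
print(f"GCI21 = {gci21:.2%}, GCI32 = {gci32:.2%}")
```

As in Table 6, GCI_21 < GCI_32 indicates that the dependence of the solution on mesh size shrinks as the grid is refined.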
The flow behavior was simulated via the PISO algorithm with a first-order accurate time discretization scheme. The multiphase solver interFoam with the Volume of Fluid (VOF) method was adopted to track the multiphase problem. The value of the Grid Convergence Index (GCI) was found to decrease from the coarser to the finer grid as the grid was refined. The finest grid (Case A) is suitable for future studies, as its GCI value is less than 1%.
Decoupling Zero-Shot Semantic Segmentation Zero-shot semantic segmentation (ZS3) aims to segment the novel categories that have not been seen in training. Existing works formulate ZS3 as a pixel-level zero-shot classification problem, and transfer semantic knowledge from seen classes to unseen ones with the help of language models pre-trained only with texts. While simple, the pixel-level ZS3 formulation shows limited capability to integrate vision-language models that are often pre-trained with image-text pairs and currently demonstrate great potential for vision tasks. Inspired by the observation that humans often perform segment-level semantic labeling, we propose to decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping task to group the pixels into segments; 2) a zero-shot classification task on segments. The former task does not involve category information and can be directly transferred to group pixels of unseen classes. The latter task performs at the segment level and provides a natural way to leverage large-scale vision-language models pre-trained with image-text pairs (e.g. CLIP) for ZS3. Based on the decoupling formulation, we propose a simple and effective zero-shot semantic segmentation model, called ZegFormer, which outperforms the previous methods on ZS3 standard benchmarks by large margins, e.g., 22 points on the PASCAL VOC and 3 points on the COCO-Stuff in terms of mIoU for unseen classes. Code will be released at https://github.com/dingjiansw101/ZegFormer. Introduction Semantic segmentation aims to group an image into segments with semantic categories. Although remarkable progress has been made [10,11,36,56,57,62], current semantic segmentation models are mostly trained in a supervised manner with a fixed set of predetermined semantic categories, and often require hundreds of samples for each class.
In contrast, humans can distinguish at least 30,000 basic categories [6,18], and recognize novel categories merely from some high-level descriptions. How to achieve a human-level ability to recognize stuff and things in images is one of the ultimate goals in computer vision. [Figure 1 caption: ZS3 aims to train a model merely on seen classes and generalize it to classes that have not been seen in training (unseen classes). Existing methods formulate it as a pixel-level zero-shot classification problem (b), and use semantic features from a language model to transfer knowledge from seen classes to unseen ones. In contrast, as in (c), we decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping and 2) a segment-level zero-shot classification, which enables us to take full advantage of the pre-trained vision-language model.] Recent investigations on zero-shot semantic segmentation (ZS3) [7,54] have actually moved towards that ultimate goal. Following the fully supervised semantic segmentation models [10,11,36] and zero-shot classification models [1,24,30,47,60], these works formulate zero-shot semantic segmentation as a pixel-level zero-shot classification problem. Although these studies have reported promising results, two main issues still need to be addressed: (1) They usually transfer knowledge from seen to unseen classes via language models [7,27,54] pre-trained only on texts, which limits their performance on vision tasks. Although large-scale pre-trained vision-language models (e.g. CLIP [46] and ALIGN [26]) have demonstrated potential on image-level vision tasks, how to efficiently integrate them into the pixel-level ZS3 problem is still unknown. (2) They usually build correlations between pixel-level visual features and semantic features for knowledge transfer, which is not natural, since humans often use words or texts to describe objects/segments instead of pixels in images. As illustrated in Fig.
1, it is unsurprising to observe that pixel-level classification has poor accuracy on unseen classes, which in turn degrades the final segmentation quality. This phenomenon is particularly obvious when the number of unseen categories is large (see Fig. 6). An intuitive observation is that, given an image for semantic segmentation, humans can first group pixels into segments and then perform a segment-level semantic labeling process. For example, a child can easily group the pixels of an object, even though he/she does not know the name of the object. Therefore, we argue that a human-like zero-shot semantic segmentation procedure should be decoupled into two sub-tasks: -A class-agnostic grouping to group pixels into segments. This task is actually a classical image partition/grouping problem [44,50,52], and can be renewed via deep-learning-based methods [12,32,59]. -A segment-level zero-shot classification to assign semantic labels, either seen or unseen, to segments. As the grouping task does not involve the semantic categories, a grouping model learned from seen classes can be easily transferred to unseen classes. The segment-level zero-shot classification is robust on the unseen classes and provides a flexible way to integrate the pre-trained large-scale vision-language models [46] into the ZS3 problem. To instantiate the decoupling idea, we present a simple yet efficient zero-shot semantic segmentation model with transformer, named ZegFormer, which uses a transformer decoder to output segment-level embeddings, as shown in Fig. 2. It is then followed by a mask projection for class-agnostic grouping (CAG) and a semantic projection for segment-level zero-shot classification (s-ZSC). The mask projection maps each segment-level embedding to a mask embedding, which can be used to obtain a binary mask prediction via a dot product with a high-resolution feature map.
The semantic projection establishes the correspondences between segment-level embeddings and the semantic features of a pre-trained text encoder for s-ZSC. While the steps mentioned above can form a standalone approach for ZS3, a model trained on a small dataset struggles to generalize strongly. Thanks to the decoupling formulation, it is also flexible to use the image encoder of a vision-language model to generate image embeddings for zero-shot segment classification. We empirically find that the segment classification scores obtained with image embeddings and with s-ZSC are complementary, and we fuse them to obtain the final classification scores for segments. The proposed ZegFormer model has been extensively evaluated with experiments and demonstrated superiority on various commonly-used benchmarks for ZS3. It outperforms the state-of-the-art methods by 22 points in terms of mIoU for unseen classes on the PASCAL VOC [15], and 3 points on the COCO-Stuff [8]. Based on the challenging ADE20k-Full dataset [64], we also create a new ZS3 benchmark with 275 unseen classes, which is much larger than the number of unseen classes in PASCAL VOC (5 unseen classes) and COCO-Stuff (15 unseen classes). On the ADE20k-Full ZS3 benchmark, our performance is comparable to MaskFormer [12], a fully supervised semantic segmentation model. Our contributions in this paper are three-fold: • We propose a new formulation for the task of ZS3, by decoupling it into two sub-tasks, a class-agnostic grouping and a segment-level zero-shot classification, which provides a more natural and flexible way to integrate the pre-trained large-scale vision-language models into ZS3. • With the new formulation, we present a simple and effective ZegFormer model for ZS3, which uses a transformer decoder to generate segment-level embeddings for grouping and zero-shot classification.
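The decoupled inference described above can be sketched with numpy: segment embeddings from the decoder yield class-agnostic masks via a dot product with a high-resolution feature map, and segment classes via cosine similarity with text embeddings. All shapes and names below are illustrative, not the actual ZegFormer implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, H, W, K = 4, 16, 8, 8, 5   # segment queries, embed dim, feature map, classes

seg_emb  = rng.standard_normal((N, C))     # stand-in for transformer decoder outputs
pix_feat = rng.standard_normal((C, H, W))  # high-resolution per-pixel features
text_emb = rng.standard_normal((K, C))     # stand-in for text encoder class embeddings

# 1) Class-agnostic grouping: one binary mask logit map per segment query
mask_logits = np.einsum('nc,chw->nhw', seg_emb, pix_feat)
masks = mask_logits > 0                    # (N, H, W) binary masks

# 2) Segment-level zero-shot classification via cosine similarity
def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = l2norm(seg_emb) @ l2norm(text_emb).T   # (N, K) class scores
seg_cls = scores.argmax(axis=1)                 # class index per segment

# 3) Assemble the semantic map: each pixel takes the class of its
#    highest-scoring covering segment
sem_map = seg_cls[mask_logits.argmax(axis=0)]   # (H, W)
print(sem_map.shape, seg_cls)
```

Because the grouping step never touches class labels, swapping in text embeddings of unseen classes at step 2 is all that is needed to segment novel categories; the fusion with image-encoder scores mentioned above would combine two such score matrices.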
To the best of our knowledge, the proposed ZegFormer is the first model taking full advantage of a pre-trained large-scale vision-language model (e.g. CLIP [46]) for ZS3. • We achieved state-of-the-art results on standard benchmarks for ZS3. The ablation and visualization analyses show that the decoupling formulation is superior to pixel-level zero-shot classification by a large margin. Related Works Zero-Shot Image Classification aims to classify images of unseen categories that have not been seen during training. The key idea in zero-shot learning is to transfer knowledge from seen classes to unseen classes via semantic representations, such as semantic attributes [2,24,28,30], concept ontologies [16,37,39,47,48] and semantic word vectors [17,41,60]. Recently, some works use large-scale vision-language pre-training [26,46] via a contrastive loss. For example, by training a vision-language model on 400 million image-text pairs collected by Google Engine without any human annotations, CLIP [46] has achieved impressive performance on more than 30 vision datasets, even compared to supervised models. It has also shown potential for zero-shot object detection [19]. However, there is a large gap between the pixel-level features used in previous ZS3 models and the image-level features used in CLIP. To bridge this gap, we build the correspondence between segment-level visual features and image-level vision-language features. Zero-Shot Segmentation is a relatively new research topic [7,25,42,54,61], which aims to segment the classes that have not been seen during training. There have been two streams of work: discriminative methods [5,42,54] and generative methods [7,20,49]. SPNet [54] and ZS3Net [7] are considered the representative examples. In detail, SPNet [54] maps each pixel to a semantic word embedding space and projects each pixel feature into class probabilities via a fixed semantic word embedding [27,38] projection matrix.
ZS3Net [7] first trains a generative model to produce pixel-wise features of unseen classes from word embeddings. With the synthetic features, the model can then be trained in a supervised manner. Both of these works formulate ZS3 as a pixel-level zero-shot classification problem. However, this formulation is not robust for ZS3, since text embeddings are usually used to describe objects/segments rather than pixels. Later works [20,21,29,31,49,51] all follow this formulation to address different issues in ZS3. Under the weaker assumption that unlabelled pixels from unseen classes are available in the training images, self-training [7,42] is widely used. Although promising performances are reported, self-training often requires retraining the model whenever a new class appears. Different from pixel-level zero-shot classification, we propose a new formulation for ZS3 by decoupling it into a class-agnostic learning problem on pixels and a segment-level zero-shot learning problem. We then implement ZegFormer for ZS3, which does not require a complicated training scheme and can flexibly transfer to new classes without retraining. A recent work [63] also uses region-level classification for bounding boxes, but it focuses on instance segmentation instead of semantic segmentation. Besides, it still predicts class-aware masks. We are the first to use region-level classification for zero-shot semantic segmentation. Class-Agnostic Segmentation is a long-standing problem that has been extensively studied in computer vision [3,4,14,44,50,52,65]. There is evidence [22,23,43] that a class-agnostic segmentation model learned from seen classes can transfer well to unseen classes in the task of instance segmentation, under a partially supervised training paradigm. Recently, a class-agnostic segmentation task [45] called entity segmentation (ES) was proposed, which predicts segments for both thing and stuff classes.
However, ES is an instance-aware task, which differs from semantic segmentation. In addition, entity segmentation [45] does not predict the class names of unseen classes. Our work is inspired by the abovementioned findings, but we focus on semantic segmentation of novel classes and also predict the class names of unseen classes. With our formulation, the proposed method is simpler, more flexible, and more robust. Decoupling Formulation of ZS3 Given an image I on the domain Ω = {0, . . . , H − 1} × {0, . . . , W − 1}, the semantic segmentation of I can be defined as a process to find a pair of mappings (R, L) for I, where R groups the image domain into N piece-wise "homogeneous" regions and L assigns a class label from a category set C to each region. Fully-supervised semantic segmentation suggests that one can learn such a pair of mappings (R, L) from a large-scale semantically annotated dataset D. Methods of this type often assume that the category set C is closed, i.e., the categories appearing in testing images are contained in C, which, however, is usually violated in real-application scenarios. If we denote by S the category set of the annotated dataset D, i.e., the seen classes, and by E the categories appearing in the testing process, we have three settings for semantic segmentation: fully-supervised semantic segmentation, where E is contained in S; zero-shot semantic segmentation (ZS3), where only unseen classes are evaluated at test time; and generalized zero-shot semantic segmentation (GZS3), where both seen and unseen classes appear at test time. In this paper, we mainly address the problem of GZS3, and denote U = E − E ∩ S as the set of unseen classes. Relations to Pixel-Level Zero-Shot Classification. Previous works [7,20,54] formulate ZS3 as a pixel-level zero-shot classification problem, where a model learned from pixel-level semantic labels of S needs to generalize to pixels of U. This can be considered a special case of our decoupled formulation in which each pixel is its own segment R_i. Since the learning of R does not involve the semantic categories, our formulation separates a class-agnostic learning sub-task from ZS3.
The class-agnostic task generalizes well to unseen categories, as demonstrated in [43,45]. In addition, as humans often associate semantics with whole images or at least segments, establishing a connection from semantics to segment-level visual features is more natural than to pixel-level visual features. Therefore, our formulation is more effective than pixel-level zero-shot classification in transferring knowledge from S to U. Fig. 2 illustrates the pipeline of our proposed ZegFormer model. We first generate a set of segment-level embeddings and then project them for class-agnostic grouping and segment-level zero-shot classification by two parallel layers. A pre-trained image encoder is also used for segment classification. The two segment-level classification scores are finally fused to obtain the results. Figure 2. The pipeline of our proposed ZegFormer for zero-shot semantic segmentation. We first feed N queries and feature maps to a transformer decoder to generate N segment embeddings. We then feed each segment embedding to a mask projection layer and a semantic projection layer to obtain a mask embedding and a semantic embedding. The mask embedding is multiplied with the output of the pixel decoder to generate a class-agnostic binary mask, while the semantic embedding is classified against the text embeddings. The text embeddings are generated by putting the class names into a prompt template and feeding them to the text encoder of a vision-language model. During training, only the seen classes are used to train the segment-level classification head. During inference, the text embeddings of both seen and unseen classes are used for segment-level classification. We obtain two segment-level classification scores, from semantic segment embeddings and from image embeddings, and finally fuse these two classification scores into the final class prediction for each segment. ZegFormer Segment Embeddings.
Recently, several segmentation models [12,32,59] can generate a set of segment-level embeddings. We choose MaskFormer [12] as the base semantic segmentation model for simplicity. By feeding N segment queries and a feature map to a transformer decoder, we obtain N segment-level embeddings. We then pass each segment embedding through a semantic projection layer and a mask projection layer to obtain a mask embedding and a semantic embedding for each segment. We denote the segment-level semantic embedding (SSE) as G_q ∈ R^d and the segment-level mask embedding as B_q ∈ R^d, where q indexes a query. Class-Agnostic Grouping. Denote the feature maps output by the pixel decoder as F(I) ∈ R^(d×H×W). The binary mask prediction for each query is calculated as m_q = σ(B_q · F(I)) ∈ [0, 1]^(H×W), where σ is the sigmoid function. Note that N is usually smaller than the number of classes. Segment Classification with SSE. For training and inference, each class name in a set of class names C is put into a prompt template (e.g. "A photo of a {class name} in the scene") and then fed to a text encoder. We thus obtain |C| text embeddings, denoted T = {T_c ∈ R^d | c = 1, ..., |C|}, where C = S during training and C = S ∪ U during inference. Our pipeline also needs a "no object" category, assigned when the intersection over union (IoU) between a segment and every ground truth is low. Since it is unreasonable to represent the "no object" category by a single class name, we add an extra learnable embedding T_0 ∈ R^d for it. The predicted probability distribution over the seen classes and "no object" for a segment query is calculated as p_q(c) = exp(s_c(G_q, T_c)/τ) / Σ_{c′} exp(s_c(G_q, T_{c′})/τ), (1) where q indexes a query, s_c(e, e′) = e · e′ / (|e||e′|) is the cosine similarity between two embeddings, and τ is the temperature. Segment Classification with Image Embedding.
While the aforementioned steps can already form a standalone approach for ZS3, it is also possible to use the image encoder of a pre-trained vision-language model (e.g. CLIP [46]) to improve classification accuracy on segments, owing to the flexibility of the decoupling formulation. In this module, we create a suitable sub-image for each segment. The process can be formulated as follows: given a mask prediction m_q ∈ [0, 1]^(H×W) for a query q and the input image I, create a sub-image I_q = f(m_q, I), where f is a preprocessing function (e.g., masking or cropping the image according to m_q). We give detailed ablation studies in Sec. 4.5. We pass I_q to a pre-trained image encoder to obtain an image embedding A_q. Similar to Eq. 1, we then calculate a second probability distribution, denoted p′_q(c). Training. During the training of ZegFormer, only the pixel labels belonging to S are used. To compute the training loss, a bipartite matching [9,12] is performed between the predicted masks and the ground-truth masks. The classification loss for each segment query is −log(p_q(c_q^gt)), where c_q^gt belongs to S if the segment is matched with a ground-truth mask and is "no object" if the segment has no matched ground-truth segment. For a segment matched with a ground-truth segment R_q^gt, there is a mask loss L_mask(m_q, R_q^gt); in detail, we use the combination of a dice loss [40] and a focal loss [34]. Inference. During inference, we integrate the predicted binary masks and class scores of segments to obtain the final semantic segmentation results. According to the class probability scores used, we have three variants of ZegFormer. (1) ZegFormer-seg. This variant uses the segment classification scores from segment queries and computes a per-pixel class probability p(c, h, w) = Σ_q m_q(h, w) · p_q(c), where (h, w) is a location in the image. Because the training data is imbalanced, predictions are biased toward seen classes.
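The per-query operations above — the binary mask m_q = σ(B_q · F(I)), the Eq. 1-style classification of G_q against text embeddings, and the sub-image creation I_q = f(m_q, I) — can be sketched in NumPy as follows. This is a minimal illustration; the dimensions and random embeddings are toy values, not the trained ones, and the "crop and mask" function here is our reading of one of the preprocessing variants:

```python
import numpy as np

def predict_mask(B_q, F):
    """Class-agnostic grouping: m_q = sigmoid(B_q . F(I)), F of shape (d, H, W)."""
    logits = np.einsum("d,dhw->hw", B_q, F)
    return 1.0 / (1.0 + np.exp(-logits))

def classify_segment(G_q, T, tau=0.01):
    """Eq. 1 sketch: softmax over cosine similarities s_c(G_q, T_c) / tau.
    The last row of T stands in for the learnable 'no object' embedding T_0."""
    sims = T @ G_q / (np.linalg.norm(T, axis=1) * np.linalg.norm(G_q))
    logits = sims / tau
    logits -= logits.max()          # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def crop_and_mask(image, mask, thr=0.5):
    """A 'crop and mask' preprocessing f(m_q, I): crop the mask's
    bounding box and blank pixels outside the segment."""
    hard = mask > thr
    ys, xs = np.nonzero(hard)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    sub = image[y0:y1, x0:x1].copy()
    sub[~hard[y0:y1, x0:x1]] = 0
    return sub

rng = np.random.default_rng(0)
d, H, W, K = 32, 8, 8, 5
B_q, G_q = rng.normal(size=d), rng.normal(size=d)
F = rng.normal(size=(d, H, W))
T = rng.normal(size=(K + 1, d))     # K class names + "no object"
m = predict_mask(B_q, F)            # (H, W) soft mask in [0, 1]
p = classify_segment(G_q, T)        # (K + 1,) probability distribution
sub = crop_and_mask(rng.normal(size=(H, W)), m)
```

In the actual model, `sub` would be resized to 224×224 and passed to the CLIP image encoder to produce the second score p′_q(c).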
Following [54], we calibrate the prediction by decreasing the scores of seen classes. The final category prediction for each pixel is then calculated as argmax_c [p(c, h, w) − γ · I(c ∈ S)], (2) where γ ∈ [0, 1] is the calibration factor and the indicator function I equals 1 when c belongs to the seen classes. (2) ZegFormer-img. The inference process of this variant is similar to Eq. 2; the only difference is that p_q(c) is replaced by the image-embedding score p′_q(c). (3) ZegFormer. This variant is our full model. We first fuse the two scores for each query as p̂_q(c) = p_q(c)^(1−λ) · p′_q(c)^λ for c ∈ U, (3) i.e., a geometric mean of p_q(c) and p′_q(c) is returned if c ∈ U, with the contribution of the two classification scores balanced by λ. Since p_q(c) is usually more accurate than p′_q(c) when c ∈ S, we do not want p′_q(c) to contribute to the prediction of seen classes. Therefore, for c ∈ S we take the geometric mean of p_q(c) and p′_{q,avg} = Σ_{j∈S} p′_q(j)/|S|. In this way, the probabilities of seen and unseen classes are adjusted to the same range, and only p_q(c) contributes to distinguishing seen classes. The final semantic segmentation results are obtained by a process similar to Eq. 2. Experiments Since most previous works [7,20,42] focus on the GZS3 setting, we evaluate our method on the GZS3 setting in the main paper; see the appendix for our results in the ZS3 setting. Below we introduce the datasets and evaluation metrics that we use. Datasets and Evaluation Metrics COCO-Stuff is a large-scale dataset for semantic segmentation that contains 171 valid classes in total. We use the 118,287 training images as our training set and the 5,000 validation images as our test set. We follow the class split in [54] to choose 156 valid classes as seen classes and 15 valid classes as unseen classes. We also use a subset of the training classes as a validation set for tuning hyperparameters, following the cross-validation procedure of [54,55]. PASCAL-VOC has been split into 15 seen classes and 5 unseen classes in previous works [20,54].
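A toy sketch of the inference steps above — the MaskFormer-style per-pixel aggregation, the geometric-mean fusion of the two segment scores, and the seen-class calibration. The exact forms follow our reading of the text; the probabilities below are made up for illustration:

```python
import numpy as np

def per_pixel_probs(masks, p_cls):
    """ZegFormer-seg inference: p(c, h, w) = sum_q m_q(h, w) * p_q(c).
    masks: (N, H, W) in [0, 1]; p_cls: (N, K)."""
    return np.einsum("nhw,nk->khw", masks, p_cls)

def fuse(p_seg, p_img, seen, lam=0.5):
    """Score-fusion sketch: geometric mean of the two segment scores for
    unseen classes; for seen classes, the image score is replaced by its
    average over seen classes, so only p_seg distinguishes seen classes."""
    p_img_eff = np.where(seen, p_img[seen].mean(), p_img)
    return p_seg ** (1 - lam) * p_img_eff ** lam

def calibrated_argmax(p, seen, gamma=0.1):
    """Calibration sketch: subtract gamma from seen-class scores, then argmax."""
    return int(np.argmax(p - gamma * seen))

masks = np.array([[[1.0, 0.0], [0.0, 0.0]],
                  [[0.0, 1.0], [1.0, 1.0]]])          # two queries, 2x2 image
p_cls = np.array([[0.9, 0.1, 0.0], [0.1, 0.2, 0.7]])  # per-query class probs
P = per_pixel_probs(masks, p_cls)                     # (3, 2, 2)

p_seg = np.array([0.5, 0.3, 0.2])      # score from segment embeddings
p_img = np.array([0.2, 0.2, 0.6])      # score from the CLIP image encoder
seen = np.array([True, True, False])   # last class is unseen
pred = calibrated_argmax(fuse(p_seg, p_img, seen), seen)  # -> 2 (the unseen class)
```

Note how the image-embedding score rescues the unseen class here even though the segment-embedding score alone would pick a seen class.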
There are 10,582 images for training and 1,449 images for testing. ADE20k-Full is annotated in an open-vocabulary setting with more than 3,000 categories. It contains 25k images for training and 2k images for validation. We are the first to evaluate GZS3 methods on the challenging ADE20k-Full. Following the supervised setting in [12], we choose the 847 classes that are present in both the training and validation sets for evaluation, so that we can compare our ZegFormer with supervised models. We split the classes into seen and unseen according to their frequency: seen classes are present in more than 10 images, while unseen classes are present in fewer than 10 images. In this way, we obtain 572 seen classes and 275 unseen classes. Class-Related Segmentation Metric: we use the mean intersection-over-union (mIoU) averaged over seen classes, over unseen classes, and their harmonic mean, which has been widely used in previous works on GZS3 [7,54]. Class-Agnostic Grouping Metric: this has been well studied in [44]. We use the well-known precision-recall for boundaries (P_b, R_b, and F_b) as the evaluation metric, with the publicly available code for evaluation. Implementation Details Our implementation is based on Detectron2 [53]. For most of our ablation experiments, we use ResNet-50 as the backbone and FPN [33] as the pixel decoder. When comparing with the state-of-the-art methods, we use ResNet-101 as the backbone. We use the text encoder and image encoder of the ViT-B/16 CLIP model in our implementation. We set the number of queries in the transformer decoder to 100 by default. The dimensions of the query embedding and transformer decoder are set to 256 by default. Since the dimension of the text embeddings is 512, we use a projection layer to map the segment embeddings from 256 to 512 dimensions. We empirically set the temperature τ in Eq. 1 to 0.01. The resolution of the processed sub-images for segment classification with image-level embeddings is 224×224.
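The frequency-based seen/unseen split of ADE20k-Full described above can be sketched as follows. This is a toy sketch: the more-than-10-images threshold is as stated in the text, while the behavior for a class present in exactly 10 images is our assumption:

```python
from collections import Counter

def split_by_frequency(images, thresh=10):
    """Split classes into seen/unseen by the number of images containing
    them: classes present in more than `thresh` images become seen.
    `images` is a list of per-image class lists."""
    counts = Counter(c for img in images for c in set(img))
    seen = {c for c, n in counts.items() if n > thresh}
    return seen, set(counts) - seen

# toy check: "wall" appears in 12 images, "aqueduct" in only 2
images = [["wall"]] * 12 + [["aqueduct", "wall"]] * 2
seen, unseen = split_by_frequency(images)
```

On the real dataset this procedure yields the 572/275 split reported in the text.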
For all the experiments, we use a batch size of 32. We train models for 60k iterations on COCO-Stuff and 10k iterations on PASCAL VOC. We use AdamW as the optimizer with a learning rate of 0.0001 and a weight decay of 1e-4. Table 1. Comparisons with the baseline in the class-related segmentation metric on COCO-Stuff. We report the results with CLIP [46] text embeddings and with the concatenation of fastText (ft) [27] and word2vec (w2v) [38] for each algorithm; the improvements from ft+w2v to CLIP text are shown in brackets. Zero-Shot Pixel Classification Baseline To compare the decoupling formulation with pixel-level zero-shot classification, we choose SPNet [54] as our pixel-level zero-shot classification baseline, since SPNet and ZegFormer both belong to the discriminative methods with neat designs. For a fair comparison, we reimplement SPNet [54] with FPN (which is also used by ZegFormer) and denote this variant SPNet-FPN. We implement SPNet-FPN in the same codebase (i.e., Detectron2 [53]) as our ZegFormer, with the same common settings. Class-Related Segmentation Metric. We compare ZegFormer-seg with the baseline under two types of text embeddings: CLIP [46] text embeddings, and the concatenation of fastText (ft) [27] and word2vec (w2v) [38]. The ft + w2v embeddings widely used in GZS3 [54] are trained only on language data, whereas the CLIP text encoder is trained by multimodal contrastive learning of vision and language. From Tab. 1, we can conclude: 1) CLIP text embeddings are better than the concatenation of fastText [27] and word2vec [38]; 2) our proposed decoupling formulation for ZS3 is better than the commonly-used zero-shot pixel classification, regardless of the class embedding method; 3) ZegFormer-seg has a much larger gain than SPNet-FPN when the class embedding method is changed from ft + w2v to CLIP (10.6 points vs. 4.1 points improvement).
We argue that segment-level visual features align better with CLIP features than pixel-level visual features do. Class-Agnostic Grouping Metric. We compare the image grouping quality of ZegFormer-seg and the baseline on COCO-Stuff [8]. Grouping quality is a class-agnostic metric and provides additional insight. We can see in Tab. 2 that ZegFormer significantly outperforms the baseline on F_b, P_b, and R_b, regardless of the class embedding method. This verifies that the decoupled ZS3 formulation generalizes much better than pixel-level zero-shot classification in grouping the pixels of unseen classes. Ablation Study on ZegFormer Preprocessing for Sub-Images. We explored three preprocessing methods to obtain sub-images (i.e. "crop", "mask", and "crop and mask") in the full ZegFormer model. The three ways of preprocessing a "tree" segment are shown in Fig. 3. When we merely crop a region from the original image, more than one category may be present, which decreases classification accuracy. In the masked image, the nuisances are removed, but there are many unnecessary pixels outside the segment. The combination of crop and mask yields a sub-image better suited for classification. Table 4. Comparison with the previous GZS3 methods on PASCAL VOC and COCO-Stuff. "Seen", "Unseen", and "Harmonic" denote the mIoU of seen classes, unseen classes, and their harmonic mean. STRICT [42] proposed a self-training strategy and applied it to SPNet [54]; the numbers of STRICT and SPNet (w/ self-training) are from [42], and the other numbers are from their original papers. We compare the influence of the three preprocessing methods on ZegFormer in Tab. 3: the combination of crop and mask obtains the best performance, while using cropping alone performs worse than ZegFormer-seg, which does not use an image embedding for segment classification. ZegFormer-seg v.s. ZegFormer-img.
We compare their performance on the unseen categories of COCO-Stuff in Fig. 4. ZegFormer-img is better on thing categories (such as "giraffe", "suitcase", and "carrot"), but worse on stuff categories (such as "grass", "playingfield", and "river"). The two kinds of classification scores are therefore complementary, which explains why their fusion improves performance. Comparison with the State-of-the-art We compare our ZegFormer with the previous methods in Tab. 4. Specifically, ZegFormer outperforms Joint [5] by 31 points in unseen-class mIoU on PASCAL VOC [15], and outperforms SIGN [13] by 18 points in unseen-class mIoU on COCO-Stuff [8]. Compared to the results with self-training, ZegFormer outperforms SIGN [13] by 22 points in unseen-class mIoU on PASCAL VOC [15], and STRICT [42] by 3 points on COCO-Stuff [8]. It is worth noting that the generative and self-training methods need a complicated multi-stage training scheme and must be re-trained whenever new classes arrive (although semantic category labels of unseen classes are not required). Self-training also needs access to the unlabelled pixels of unseen classes to generate pseudo labels. In contrast, our ZegFormer is a discriminative method, which is much simpler than those methods and can be applied to any unseen classes on-the-fly. Like ZegFormer, SPNet [54] (without self-training) and Joint [5] are also discriminative methods that can be flexibly applied to unseen classes, but their performance is much worse than ours. Results on ADE20k-Full Since we are the first to report results on the challenging ADE20k-Full [64] for GZS3, there are no other methods for comparison. We compare ZegFormer with SPNet-FPN and with a fully supervised MaskFormer trained on both seen and unseen classes. From Tab. 5, we can see that our ZegFormer significantly surpasses the SPNet-FPN baseline by 4 points in unseen mIoU and is even comparable with the fully supervised model. We can also see that the dataset is very challenging: even the supervised model achieves only 5.6 points of unseen mIoU, which indicates there is still much room for improvement. Visualization For the visualization analyses, we use a SPNet-FPN and a ZegFormer-seg trained with 156 seen classes on COCO-Stuff. We then test the two models with different sets of class names. Results with 171 classes from COCO-Stuff. From Fig. 5, we can see that when segmenting unseen classes, the pixel-level baseline is usually confused by similar classes; since the pixels of a segment vary substantially, the pixel-level classifications are not consistent. In contrast, the decoupling formulation obtains high-quality segmentation results, on top of which the segment-level zero-shot classification is better aligned with CLIP pretraining, so ZegFormer-seg obtains more accurate classification results. Results with 847 classes from ADE20k-Full. As shown in Fig. 6, SPNet-FPN (pixel-level zero-shot classification) is much worse than ZegFormer-seg (our decoupling formulation). The reason is that there is severe competition among classes in pixel-level classification when 847 classes are used for inference. In contrast, the decoupling model is not influenced by the number of classes. These results confirm that the decoupling formulation is the right way to approach a human-level ability to segment objects given a large number of unseen classes. We can also see that the set of 847 class names contains richer information for generating text embeddings than the set of 171 class names in COCO-Stuff. For example, the unannotated "light, light source" is segmented by the two ZS3 models (white box in the 1st row of Fig. 6), and the labeled "motorcycle" is predicted as "tricycle" by ZegFormer (red box in the 2nd row of Fig. 6); in the yellow box of the second row, a segment labeled as "house" in COCO-Stuff is predicted as "garage". We provide more visualization results in our appendix. Figure 5. Results on COCO-Stuff, using the 171 class names in COCO-Stuff to generate text embeddings. ZegFormer-seg (our decoupling formulation of ZS3) is better than SPNet-FPN (pixel-level zero-shot classification) at segmenting unseen categories. Figure 6. Results on COCO-Stuff, using 847 class names from ADE20k-Full to generate text embeddings. SPNet-FPN (the pixel-level zero-shot classification baseline) is very unstable when the number of unseen classes is large; the set of 847 class names also provides richer information than the set of 171 class names in COCO-Stuff. Conclusion We reformulate the ZS3 task by decoupling it into class-agnostic grouping and segment-level zero-shot classification. Based on the new formulation, we propose a simple and effective ZegFormer for the task of ZS3, which demonstrates significant advantages over the previous works. The proposed ZegFormer provides a new way to study the ZS3 problem and serves as a strong baseline. Beyond the ZS3 task, ZegFormer also has the potential to be used for few-shot semantic segmentation; we leave this for future research. Limitations. Although the full model of ZegFormer shows superiority in all situations, we empirically find that ZegFormer-seg does not perform well when the scale of the training data is small. One possible reason is that the transformer structure needs a large amount of data for training, as discussed in [35]. A more efficient training strategy for ZegFormer-seg, or using other mask classification methods such as K-Net [59], may alleviate this issue and can be studied in the future. A.1.
Results with the ZS3 setting Apart from generalized ZS3 (GZS3), we also report results in the ZS3 setting, in which the models only predict unseen labels c ∈ U (see more details in Sec. 3.1 of our main paper) and pixels belonging to the seen classes are ignored. The results on COCO-Stuff [8] are reported in Tab. 6. Since most existing studies do not report results in the ZS3 setting, we compare with SPNet [54] and re-implement it with FPN and CLIP [46] text embeddings as a baseline, i.e., SPNet-FPN. The results on ADE20k-Full [64] are reported in Tab. 7. Table 6. Results on COCO-Stuff with the ZS3 setting. The result of SPNet is taken directly from its original paper [54]; SPNet-FPN is our re-implementation of SPNet [54] with FPN and CLIP [46] text embeddings, which can be considered a baseline. A.2. Speed and Accuracy Analyses Analysis of the Computational Complexity. Let C be the number of channels in a feature map, K the number of classes, H × W the size of the feature maps used for pixel-wise classification, and N the number of segments in an image. The complexity of the classification head in pixel-level zero-shot classification is O(H × W × C × K), while the complexity of the classification head in our decoupling formulation is O(N × C × K). N is usually much smaller than H × W; for instance, in our COCO-Stuff experiments, N is 100 while H × W is larger than 160 × 160. Therefore, when K is large, pixel-level zero-shot classification will be much slower than the proposed decoupling formulation of ZS3. Speed and Accuracy Experiments. We compare the speeds of ZegFormer, ZegFormer-seg, and SPNet-FPN. All three models use R-50 with FPN as the backbone. ZegFormer-seg is an implementation of the decoupling formulation of ZS3, while SPNet-FPN is our implementation of pixel-level zero-shot classification.
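The head-complexity comparison above can be checked with a quick back-of-the-envelope count, using the example numbers from the text (a raw multiply count of a linear classification head, not measured FLOPs):

```python
def head_mults(locations, C, K):
    # multiply count of a linear classification head: locations * C * K
    return locations * C * K

C, K = 256, 847                       # channels; number of ADE20k-Full classes
pixel = head_mults(160 * 160, C, K)   # pixel-level: one score per pixel
seg = head_mults(100, C, K)           # decoupled: one score per query
ratio = pixel // seg                  # 25600 / 100 = 256x fewer multiplies
```

Because K multiplies every term, the absolute gap between the two heads grows linearly with the number of classes, which matches the observation that SPNet-FPN slows down markedly on ADE20k-Full.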
ZegFormer is our full model, with an additional branch to generate image embeddings (see Sec. 3.2 of our main paper for details). As shown in Tab. 8 and Tab. 9, ZegFormer-seg performs better than SPNet-FPN in both speed and accuracy on COCO-Stuff and ADE20k-Full. ZegFormer improves over ZegFormer-seg by 12 points in terms of unseen-class mIoU on COCO-Stuff while maintaining an acceptable FPS. We can also see that SPNet-FPN is slow on ADE20k-Full, which verifies that the speed of pixel-level zero-shot classification is largely influenced by K (the number of classes), as we have discussed before. A.3. Comparisons with Different Backbones. Since ZegFormer-seg with R-50 is better than SPNet-FPN with R-50 in supervised semantic segmentation, we also report the results of SPNet-FPN with R-101. From Tab. 10, we can see that SPNet-FPN with R-101 is comparable with ZegFormer-seg in the supervised evaluation, but much worse than ZegFormer-seg under GZS3. B. More Visualization Results We visualize the results of ZegFormer-seg with R-50 and SPNet-FPN with R-50. The two models are trained on COCO-Stuff with 156 classes and required to segment 847 classes. The visualization results are shown in Fig. 7 and Fig. 8. C. More Implementation Details C.1. HyperParameters Following [12], we use an FPN [33] structure as the pixel decoder of ZegFormer and SPNet-FPN; the output stride of the pixel decoder is 4. Following [9,12], we use 6 transformer decoder layers and apply the same loss after each layer. The mask projection layer in ZegFormer consists of 2 hidden layers of 256 channels. During training, we crop images from the original images; the sizes of the cropped images are 640 × 640 for COCO-Stuff and 512 × 512 for ADE20k-Full [64] and PASCAL VOC [15]. During testing, we keep the aspect ratio and resize the short side of an image to 640 for COCO-Stuff and 512 for ADE20k-Full and PASCAL VOC. C.2.
Prompt Templates Following the previous works [19,46], for each category we use multiple prompt templates to generate text embeddings and then ensemble these text embeddings by averaging. The following are the prompt templates that we used in ZegFormer: To see the influence of prompt-template ensembling, we set a baseline using only one prompt template (i.e., "A photo of the {} in the scene."). The comparisons are shown in Tab. 11: the prompt ensemble slightly improves performance. Figure 7. Results on COCO-Stuff. ZegFormer-seg is our proposed model as an implementation of the decoupling formulation of ZS3, while SPNet-FPN is a pixel-level zero-shot classification baseline. Both models are trained with only 156 classes on COCO-Stuff and required to segment with 847 class names. We can see that pixel-level zero-shot classification fails completely when there is a large number of unseen classes. The yellow box shows a category that is unannotated in COCO-Stuff but segmented by our model.
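The template ensembling described above can be sketched as follows. This is a minimal sketch: `toy_encode` is a deterministic stand-in for the CLIP text encoder, and the second template is illustrative (the text only names the single-template baseline explicitly):

```python
import zlib
import numpy as np

def ensemble_text_embedding(class_name, templates, encode):
    """Average the unit-normalized text embeddings of one class over
    several prompt templates, then re-normalize the mean."""
    embs = []
    for t in templates:
        e = encode(t.format(class_name))
        embs.append(e / np.linalg.norm(e))
    mean = np.mean(embs, axis=0)
    return mean / np.linalg.norm(mean)

def toy_encode(text, d=64):
    # deterministic stand-in for a text encoder (NOT the CLIP encoder)
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    return rng.normal(size=d)

templates = ["A photo of the {} in the scene.",
             "A photo of a {}."]        # second template is illustrative
e = ensemble_text_embedding("tree", templates, toy_encode)
```

Normalizing before and after averaging keeps every class embedding on the unit sphere, so the cosine similarities in Eq. 1 remain comparable across classes.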
Effects of bromadiolone poisoning on the central nervous system Cases of rodenticide poisoning (second-generation long-acting dicoumarin rodenticide, superwarfarin) have occasionally been reported. The main symptoms of bromadiolone poisoning are skin and mucosal hemorrhage, digestive tract hemorrhage, and hematuresis. However, symptoms of central nervous system toxicity have rarely been reported. Our case concerns a 41-year-old male with no known exposure to bromadiolone. His main symptoms were dizziness, unsteady gait, and abnormal behavior. Laboratory tests revealed the presence of bromadiolone in his blood and urine, a prolonged prothrombin time and activated partial thromboplastin time, and a high international normalized ratio. Magnetic resonance imaging of the brain showed symmetrical patchy lesions in the bilateral posterior limb of the internal capsule, the splenium of the corpus callosum, and the bilateral centrum semiovale. The patient gradually recovered after treatment with vitamin K1 and plasma transfusion. Our clinical report could help improve the detection of bromadiolone poisoning and avoid misdiagnosis. Introduction Bromadiolone, a widely available superwarfarin, is a second-generation long-acting dicoumarin rodenticide. Since the mid-1970s, superwarfarins have been the most widely used rodenticides worldwide. Unfortunately, cases of poisoning have increased with the increasing usage of these compounds. As reported in Turkey, Croatia, Taiwan, China, Australia, Argentina, and America, superwarfarin poisoning is a worldwide health problem. [1][2][3][4][5][6][7] In recent years, cases of rodenticide poisoning have occasionally been reported in China, with symptoms including skin and mucosal hemorrhage, digestive tract hemorrhage, and hematuresis. However, damage to the central nervous system (CNS) has rarely been reported. We report a case of bromadiolone poisoning treated at our hospital.
The case A 41-year-old male driver was hospitalized on May 28, 2016, because of dizziness, unsteady gait, and abnormal behavior. Two days before admission, the patient experienced dizziness, eyeball rotation with transient blindness, unsteady gait, no headache, nausea, and one episode of emesis. However, the patient disregarded the symptoms and did not initially seek medical help. One day before admission to our hospital his symptoms worsened and alalia appeared. He was sent to a local hospital; on the way, the patient showed an irrational fear of smooth driving, although no visible abnormality was observed on a brain computed tomography (CT) scan. Nine hours before admission to our hospital, the patient exhibited sudden dysphoria. After an intravenous injection of diazepam, the dysphoria was alleviated. For further treatment he was sent to our hospital (Binzhou Medical University Hospital, Binzhou, China) and admitted to the Emergency Department because of his psychological and behavioral abnormalities. No visible abnormality was observed on re-examination of his brain CT. In the week before admission to our hospital, the patient had experienced dizziness once. Physical examination showed confusion, dysphoria, and alalia. A detailed physical examination found dicoria, sensitivity to light, a shallow right nasolabial groove, preserved body mobility, bilateral Babinski (-), lack of cooperation during coordinated movement assessment, and a soft neck. After admission, the patient still presented with dysphoria, was uncommunicative with his family, could not write, had dysdipsia, and was salivating. However, signs of cognition were still present, given that he could understand his family.
Magnetic resonance imaging (MRI) of the brain (Figure 1) showed symmetrical patchy lesions in the bilateral posterior limbs of the internal capsule, the splenium of the corpus callosum, and the bilateral centrum semiovale, with abnormal long T1 and T2 signals and high signal on fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI). Considering the possibility of brain intoxication, the hospital performed a poison detection test, and the results indicated the presence of bromadiolone (239 ng/mL). Given his abnormal blood coagulation, the patient was diagnosed with brain intoxication (bromadiolone poisoning) and treated with vitamin K1 and blood plasma. Written informed consent has been provided by the patient to have these case details and any accompanying images published.

Discussion

Bromadiolone is a strong and long-acting rodenticide. The compound is called a superwarfarin because of its high potency and long-acting anticoagulant effect, which depends on antagonism of vitamin K in the body. Reports show that superwarfarin is 100 times more potent than warfarin. 8 Given its long half-life, the liver detoxifies it slowly because of the compound's lipophilic properties. 9,10 Bromadiolone's maximum half-life is 56 days (mean 20-30 days). 7,11 Due to its high lipid solubility, bromadiolone might easily diffuse across the blood-brain barrier and, therefore, cause CNS toxicity. Blood-brain barrier models could be helpful to investigate its putative brain penetration. The chemical decreases the vitamin K-dependent blood coagulation factors (II, VII, IX, X) by inhibiting vitamin K epoxide reductase, producing an anticoagulant effect. Clinical manifestations are tissue and organ hemorrhages, such as skin and mucosal hemorrhage, digestive tract hemorrhage, and hematuresis. Laboratory examination shows a prolonged prothrombin time (PT) and activated partial thromboplastin time (APTT) and a rising international normalized ratio (INR) in bromadiolone poisoning.
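The reported half-life figures imply very slow clearance, which is why prolonged antidote therapy is typically needed. A minimal illustrative sketch of first-order elimination kinetics (assuming simple exponential decay, a deliberate simplification of real toxicokinetics; the function name and time points are ours, not from this report):

```python
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """First-order elimination: fraction of a drug dose left after t_days."""
    return 0.5 ** (t_days / half_life_days)

# Using the reported maximum half-life of 56 days: after one half-life,
# exactly half of the compound remains, and a quarter remains after two.
for t in (14, 28, 56, 112):
    print(t, round(fraction_remaining(t, 56), 3))
```

Under this assumption, more than 70% of the compound would still be present four weeks after exposure, consistent with the weeks-long anticoagulant effect described above.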
Vitamin K-dependent proteins function not only in coagulation but also in the CNS, where they are involved in maintaining normal brain cells and homeostasis. 12,13 Glutamine carboxylase plays an important role in neurons and neuroglial cells. In the CNS, a lack of vitamin K may decrease the activities of glutamine carboxylase and protein carboxylases in the brain, resulting in reduced synthesis of sulfatide, an important structural component of the myelin sheath. Warfarin has been shown to reduce rodent cerebroside sulfate (>40%); this effect can be reversed by vitamin K treatment. 14 This research shows that superwarfarins can cause lesions in the CNS. Our case is rare in clinical settings. The patient did not present any hemorrhage in tissues and organs such as the digestive tract, urinary system, or skin and mucosa. The patient's symptoms included dizziness, alalia, dysdipsia, inability to write, unsteady gait, a sense of fear, and dysphoria: symptoms of the nervous system. An MRI of the brain revealed long T1 and T2 signals and high signal on FLAIR and DWI in the bilateral brachium conjunctivum of the pons, the basal ganglia region, the splenium of the corpus callosum, and the corona radiata region, revealing multiple lesions. Coagulation indices showed a prolonged PT and APTT and a rising INR. Bromadiolone was detected in the blood. Thus, bromadiolone poisoning may cause lesions of the nervous system. Vitamin K1 is an effective antidote to bromadiolone poisoning. After treatment with vitamin K1 and plasma transfusion, the patient gradually recovered.

Conclusion

Bromadiolone poisoning should be diagnosed and treated as early as possible. Misdiagnoses can easily occur because some patients cannot identify their contact history with the poison. Clinical symptoms, laboratory examinations, and a brain MRI revealed that bromadiolone poisoning might cause lesions in the CNS. Vitamin K1 and plasma transfusion are an effective treatment.
Early detection, diagnosis, and treatment of this disease are essential.
Advances in Therapeutics to Alleviate Cognitive Decline and Neuropsychiatric Symptoms of Alzheimer’s Disease

Dementia exists as a ‘progressive clinical syndrome of deteriorating mental function significant enough to interfere with activities of daily living’, with the most prevalent type of dementia being Alzheimer’s disease (AD), accounting for about 80% of diagnosed cases. AD is associated with an increased risk of comorbidity with other clinical conditions such as hypertension, diabetes, and neuropsychiatric symptoms (NPS) including agitation, anxiety, and depression, as well as increased mortality in late life. For example, up to 70% of patients diagnosed with AD are affected by anxiety. As aging is the major risk factor for AD, this represents a huge global burden in ageing populations. Over the last 10 years, significant efforts have been made to recognize the complexity of AD and understand the aetiology and pathophysiology of the disease as well as biomarkers for early detection. Yet, earlier treatment options, including acetylcholinesterase inhibitors and glutamate receptor regulators, have been limited as they work by targeting the symptoms, with only the more recent FDA-approved drugs being designed to target amyloid-β protein with the aim of slowing down the progression of the disease. However, these drugs may only help temporarily, cannot stop or reverse the disease, and do not act by reducing NPS associated with AD. The first-line treatment options for the management of NPS are selective serotonin reuptake inhibitors/selective noradrenaline reuptake inhibitors (SSRIs/SNRIs) targeting the monoaminergic system; however, they are not rational drug choices for the management of anxiety disorders since the GABAergic system has a prominent role in their development.
Considering the overall treatment failures and side effects of currently available medication, there is an unmet clinical need for rationally designed therapies for anxiety disorders associated with AD. In this review, we summarize the current status of the therapy of AD and aim to highlight novel angles for future drug therapy in our ongoing efforts to alleviate the cognitive deficits and NPS associated with this devastating disease.

Introduction

General Overview of the Pathology of Alzheimer's Disease

Dementia is one of the most common neurological disorders affecting memory and behaviour. Alzheimer's disease (AD) is the most common form of dementia, constituting a minimum of 60% of dementia diagnoses [1][2][3]. The symptoms associated with AD include a specific onset and progression of cognitive and functional deterioration, such as neuronal loss and cognitive deficits [4,5]. AD is one of the leading mortality factors among the elderly. At least 50 million people worldwide are diagnosed with AD, and the number is expected to rise to 152 million by 2050 [6]. Over the previous decade in the United Kingdom (UK), the proportion of deaths due to AD tripled from 4.23% to 12.53%. In 2019, AD was the fifth most common cause of mortality among those above 64 years old [7], with a median survival duration of 8-10 years from diagnosis, with some exceptions where patients survived >20 years. The major risk factors for this disease include genetics; environmental factors; age; gender; lifestyle choices; comorbidities that influence disease onset, including vascular disease, diabetes, and infection; and traumatic brain injury (TBI). Familial AD is a hereditary form of AD caused by mutations in certain genes: notably, a polymorphism on chromosome 19 that forms the ApoE4 lipoprotein, which is inefficient at removing amyloid beta (Aβ) plaques from the brain's cortical regions, thereby escalating AD risk, and the presenilin 1 and 2 genes, responsible for the catalytic subunit
of γ-secretase transcription, which have been reviewed previously [8][9][10]. According to the World Health Organization (WHO, 2021), neuropsychiatric symptoms (NPS) including anxiety and depression are the leading cause of disability, and their high rate of inadequate treatment is a major concern [11,12]. The impact of the COVID-19 pandemic has increased the prevalence of depression and anxiety by 27.88% and 26.35%, respectively, in comparison to the pre-pandemic era in the UK, which indicates the growing need for effective interventions [13]. Furthermore, anxiety disorders are among the most prevalent and disabling NPS worldwide; in the UK alone, one in six adults in any given week is affected by NPS, with ~10% of the UK general population taking anxiolytics [14]. Moreover, up to 70% of patients diagnosed with AD are affected by anxiety [15,16]. As ageing is the major risk factor for AD, this represents a huge global burden in ageing populations.

Clinical Manifestations and Conventional Therapeutic Approaches for AD to Alleviate Cognitive Decline

Patients with AD typically experience linguistic impairment, poor judgement, disruptive behaviour, increased memory loss, disorientation, and difficulty learning new things [7,17]. However, the diagnostic procedure for AD is complex, as it requires the exclusion of conditions that can have similar symptoms, such as infection and depression, and there is no single test. Radiologically, images from computed tomography (CT) scans, magnetic resonance imaging (MRI), and positron emission tomography (PET) scans [18] can provide an indication of the brain atrophy and amyloid plaques associated with the disease [19][20][21]. Histologically, the abundant extracellular accumulations of amyloid plaques and intracellular neurofibrillary tangles (NFTs) are the main hallmarks of AD, which may interfere with normal communication between neurons and disrupt the brain's ability to function properly [4,[22][23][24]. Studies have
indicated a correlation between the severity of the cognitive deficits associated with AD and the hyperaccumulation of plaques and NFTs [25]. Conventional treatments for AD primarily focus on symptom management and enhancing the quality of life of patients. Until now, AD patients have had limited treatment choices, primarily consisting of supportive measures such as cognitive stimulation therapy (CST) or reminiscence. In addition to these, non-pharmacological approaches including cognitive stimulation, physical exercise, and a balanced diet also play an important role. Occupational therapy can also provide significant assistance in preserving patient independence and managing daily activities. Furthermore, recent research indicates the significant potential of combination therapies for AD, which can be extremely beneficial in slowing the progression of the disease and improving quality of life [26,27]. Such treatments typically involve a combination of medications, lifestyle changes, and non-pharmacological interventions such as cognitive stimulation, physical exercise, and dietary changes [28]. Notably, combination therapies have demonstrated greater efficacy than single treatments, offering notable improvements in cognitive function and slowing disease progression.
Current FDA-Approved Treatments for AD-Associated Cognitive Deficits

Current therapeutics and novel drug strategies are recurrent topics of discussion as we seek to further our understanding and design newer drug therapies for dementia. In general, recent reviews also cover these topical themes [29][30][31]. Current treatments for AD primarily aim to alleviate symptoms, with strategies ranging from patient care and support to exercise programs, which are associated with a reduced risk of dementia [32]. Pharmacological treatments include antidepressants, antipsychotics, and, notably, acetylcholinesterase inhibitors, a widely used approach [33]. Despite their effectiveness in symptom management, these treatments do not address the underlying causes of AD. Moreover, they can lead to undesirable side effects such as vomiting, headaches, or hallucinations, highlighting a significant gap in pharmacological strategies for treating the disease.

Until 2020, only four FDA-approved drugs for AD were available, shown to provide a modest benefit in symptom management but not to halt disease progression: donepezil, galantamine, rivastigmine, and memantine (the first three are inhibitors of the acetylcholinesterase (AChE) enzyme [17] and the latter is a glutamate N-methyl-D-aspartate (NMDA) receptor antagonist [19]). AChE inhibitors are used to compensate for the low levels of acetylcholine (ACh) in the brain in AD. They work by limiting the breakdown of ACh, not by treating the underlying cause of the increased breakdown, and have shown moderate efficacy and safety in patients with moderate-to-severe AD [34]. While AChE inhibitors are usually well tolerated by patients, they may also produce significant gastrointestinal side effects. Memantine, however, has a different mechanism of action than cholinergic drugs and is thought to be neuroprotective [35]. Memantine is recommended for more severe cases of dementia [36] and is also an agonist of certain dopamine receptors as
well as an antagonist of NMDA receptors, which mediate excitatory glutamatergic transmission. Given that glutamatergic transmission is thought to be impaired (hyperactive) in AD and to cause neuronal excitotoxicity, the use of memantine as a low-affinity glutamate/NMDA receptor blocker reduces excess Ca2+ influx and lowers excitotoxicity. However, memantine, like the AChE inhibitors, treats symptoms rather than actually preventing disease progression, and most patients do not show any real benefit in the long term [35,37]. This situation shows the urgent need for more effective treatments that directly target the main causes of AD.

Excitingly, over the past two years, antibody therapy has emerged as a breakthrough in AD treatment, as shown in Table 1. Aduhelm (aducanumab), the fifth FDA-approved drug for AD, is an example of this therapy and received approval in the United States in June 2021 [22,[38][39][40]. Further progressing antibody therapies, Leqembi (lecanemab-irmb) was approved in January 2023 for the treatment of early-stage AD [41,42]. Both treatments are monoclonal antibodies, meaning that they are designed to bind to a specific protein (in this case, amyloid-β) to reduce its levels in the brain and slow the progression of the disease [39,42]. Aducanumab, as the first drug to remove amyloid-β, effectively alleviates the cognitive and functional impairments caused by AD, especially in people living with early AD [43]. After a controversial accelerated FDA approval, Biogen decided to halt the development and commercialization of aducanumab due to little evidence that the reduction of amyloid helped patients improve their memory and cognitive problems. Additionally, trials indicated potential risks of brain swelling and bleeding associated with the drug [44,45]. Instead, Biogen has shifted its focus to developing lecanemab [46]. Lecanemab is also used to treat AD patients with mild cognitive impairment or mild dementia with a known amyloid-β pathology [47]. This
makes antibody therapy the most promising and groundbreaking treatment for AD. Lecanemab preferentially targets soluble aggregated Aβ and acts on Aβ oligomers, protofibrils, and insoluble fibrils [48,49]. Lecanemab was found to reduce the progression of AD by about 27% [50]. Recently, Eli Lilly reported a new AD drug, donanemab (LY3002813), which is also an antibody therapy [51,52]. Like the previous antibody therapies, donanemab is a humanized IgG1 monoclonal antibody, designed to treat early AD by targeting a different form of Aβ and clearing existing Aβ plaques to slow the decline in cognitive function associated with AD. Donanemab is directed against an epitope at the N-terminus of a specific type of Aβ, pyroglutamate Aβ, which is found only in the brain amyloid plaques associated with AD [52]. A phase II study conducted by Eli Lilly reported that donanemab slowed the progression of AD by about 35%, showing a greater ability to clear amyloid plaques than previous drugs [51]. However, antibody therapies have only been tested in people with early stages of the disease, and it is unknown how effective they are for more advanced forms.
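The "percent slowing" figures quoted for lecanemab (~27%) and donanemab (~35%) are relative reductions in clinical decline versus placebo rather than absolute improvements. A hedged sketch of that arithmetic (the function name and the numeric scores are illustrative assumptions chosen to reproduce a ~27% figure, not values taken from this review):

```python
def percent_slowing(decline_drug: float, decline_placebo: float) -> float:
    """Relative reduction in decline on a clinical scale, as a percentage."""
    return 100.0 * (1.0 - decline_drug / decline_placebo)

# Illustrative (assumed) 18-month worsening on a dementia rating scale:
# a drug arm declining 1.21 points vs. a placebo arm declining 1.66 points
# corresponds to roughly 27% slowing.
print(round(percent_slowing(1.21, 1.66), 1))
```

Note that a patient on such a drug still declines; the metric only expresses how much less they decline than the placebo arm over the trial window.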
Despite antibody therapies showing promise in alleviating symptoms of AD, they still face several challenges that limit their efficacy and applicability. Primarily, these therapies are used to treat early-stage AD patients and can only slow down, but not halt, disease progression [53]. Moreover, current antibody treatments target only Aβ; however, the pathology of AD is complex. Beyond Aβ accumulation, other critical aspects of AD pathology, such as NFTs (tau) and neuroinflammation, are increasingly being recognized as important factors in the disease's progression. All of this implies that we need to find new treatment options for AD. Additionally, a notable concern associated with amyloid-targeting antibody therapies is amyloid-related imaging abnormalities (ARIA) [54,55]. Aducanumab has been linked to ARIA, especially in APOE ε4 carriers, leading to a dose-dependent increase in occurrences of vasogenic edema [55]. Together, these factors suggest an urgent need to develop new AD treatment options that encompass a wider range of the disease's complex pathology.
In addition to therapies targeting Aβ, several treatment strategies focus on tau pathology, including tau antisense oligonucleotides and anti-tau oligomer antibodies. Increasing evidence supports the role of hyperphosphorylated tau aggregation as a central contributor to neurodegeneration in AD [56][57][58]. The tau protein, primarily expressed in neurons, is encoded by the microtubule-associated protein tau (MAPT) gene [59,60]. Preclinical studies suggest that reducing tau can prevent certain deficits mediated by amyloid-β (Aβ), underscoring tau's pivotal role in Aβ toxicity during the early stages of AD pathogenesis [61,62]. At the close of 2023, Biogen revealed new Phase 1b clinical data for BIIB080, an investigational antisense oligonucleotide (ASO) therapy targeting tau in patients with mild AD [62]. BIIB080 is engineered to target MAPT mRNA to lower tau protein production. Inhibiting MAPT expression to reduce tau levels is a crucial strategy that directly targets a key disease mechanism affecting AD patients [61]. While BIIB080 targets the reduction of tau protein production by inhibiting MAPT mRNA, another promising strategy involves the anti-tau oligomer antibody APNmAb005 [63][64][65]. APNmAb005 is a humanized monoclonal antibody designed to block the synaptic toxicity caused by tau oligomers [65]. It selectively binds to tau oligomers and aggregates, primarily within the synapses of pathological brain tissues, effectively inhibiting tau propagation [64,65]. Currently, APNmAb005 is undergoing Phase 1 clinical trials to evaluate its safety and tolerability, representing a novel approach in tau immunotherapy for neurodegenerative disorders [64].
In 2024, several clinical trials are actively exploring therapies for AD. A significant trial involves AL002, targeting early AD stages and assessing changes in cognitive and biomarker outcomes over up to 96 weeks; more information is given in Table 2 [66]. These trials reflect a broad effort to target various aspects of AD pathology and progression.

Table 1. FDA-approved drugs discussed above (name and class of drug, action, and side effects; chemical structures are omitted here).

Rivastigmine (approved in 1997) [69,70]. AChE inhibitor used to treat mild to moderate symptoms of AD. Rivastigmine inhibits both butyrylcholinesterase and AChE, preventing the hydrolysis of acetylcholine and thus leading to an increased concentration of acetylcholine at cholinergic synapses. Side effects: gastrointestinal, including nausea, vomiting, decreased appetite, diarrhoea, and abdominal pain; nervous system, including pain, headache, dizziness, syncope, fatigue, and malaise.

Galantamine (approved in 2001) [71,72]. AChE inhibitor used to manage mild to moderate AD.

Memantine. NMDA receptor antagonist; memantine blocks the neurotransmitter glutamate from acting on NMDA receptors that are partly responsible for neuronal excitability, thus preventing the hyperexcitability seen in early and late AD. Side effects: gastrointestinal, including nausea and vomiting; nervous system, including dizziness, headache, insomnia, and confusion; others, including falls and hypertension.
Potential Therapy to Target Neuroinflammation in AD

Neuroinflammation is considered to be a central factor in shaping neuronal vulnerability during AD pathogenesis and has gained more focus recently. This includes an increase in the density of glial cells such as astrocytes and microglia in the brain and a change in their secretory profile from protective and anti-inflammatory to acute and pro-inflammatory. This triggers a cascade of changes at both the molecular level and at higher, macro levels, which alters healthy brain homeostasis to promote pathology. Among the effects are impaired Aβ processing, a harmful increase in the level of cytokines such as tumour necrosis factor (TNF)-α [78], and glutamate excitotoxicity due to failure to reuptake excess neurotransmitters from the synaptic cleft [79], causing dysfunction of neuronal networks and memory impairment [80]. Studies in both humans and rodents show that neuroinflammation is elevated in AD brain tissue compared to brain tissue from healthy, age-matched subjects (reviewed in [81]). Thus, accumulating evidence suggests that an inflammatory response, driven by activation of the brain's innate immune cells, plays a crucial role in exacerbating neuronal damage and promoting disease progression [82]. Therefore, it is not surprising that new drug development and drug repurposing strategies aimed at targeting neuroinflammation have been considered in AD research. If these strategies were to work, it would be a revolutionary breakthrough in drug development, as it would accelerate the drug development process, reduce the costs and risks inherent to drug development, and provide new therapeutic implications for clinically approved drugs with different indications [83][84][85][86]. Here, we will briefly discuss these strategies and provide a novel perspective on targeting neuroinflammation.
Repurposing Established Antiviral and Anti-Inflammatory Drugs

Established antiviral and anti-inflammatory drugs are currently under scrutiny for their potential to modulate neuroinflammation in AD. This stems from recent evidence suggesting that pathogens such as viruses and bacteria are present in the AD brain [87]. Specifically, it has been proposed that the amyloid-β peptide, traditionally viewed as a hallmark of AD pathology, may function as an antimicrobial peptide within the framework of innate immunity. Antiviral drugs such as Acyclovir and Penciclovir (Figure 1), traditionally used to target herpes simplex virus 1 (HSV-1), have shown promise in reducing both viral presence and amyloid-β accumulation in the brain [87][88][89]. Therefore, antiviral therapies may be considered potential AD therapies.
Based on the central role of inflammation, anti-inflammatory drugs, mainly non-steroidal anti-inflammatory drugs (NSAIDs), are thought to have a protective effect against AD [90]. In one study, the use of ibuprofen, paracetamol, aspirin, and naproxen was linked to a significant reduction in AD compared to the non-pain-reliever group [91] (Figure 1). Recent systematic reviews and meta-analyses, including 18 observational studies and a randomized clinical trial (RCT), highlight that long-term NSAID use can lower the incident risk of AD by 28%, especially when used for durations longer than previously studied in trials such as the Anti-Inflammatory Prevention Trial, where 15 months of NSAID use proved to be insufficient [92,93]. These findings suggest that anti-inflammatory treatments, particularly long-term NSAID use, could potentially align with the neuroinflammatory hypothesis of AD, offering a promising avenue for reducing disease incidence and progression.
With the idea of neuroinflammation, the roles of resident glial cells such as microglia and astrocytes, along with endothelial cells and mast cells, are important in protecting the brain against foreign pathogens [94,95]. Microglia, the primary immune effector cells of the central nervous system (CNS), continuously survey the environment for potential threats, transitioning to an activated state characterised by cytosolic enlargement and the production of inflammatory cytokines and chemokines upon detecting invasive agents [96][97][98]. Similarly, astrocytes contribute significantly to mediating neuroinflammation and are integral to various neuroprotective functions, including maintaining the integrity of the blood-brain barrier (BBB) and buffering neurotransmitters [99]. Like microglia, astrocytes undergo morphological changes and increase their reactivity and secretion of cytokines and chemokines following injury, underlining their vital role in the neuroinflammatory response in AD [100]. This led us to propose a novel astrocyte-related target, the endothelin B receptor (ETBR).

The Endothelin B Receptor (ETBR) as a Potential Therapeutic Target

The ETBR is one of the receptors of the endothelin family, which are predominantly expressed in the hippocampus and amygdala. Notably, astrocytes expressing ETBRs showed high levels in ligand binding assays and studies of mRNA expression, highlighting the receptor's relevance in AD research [101,102]. Consequently, there has been an increasing focus on exploring the ETBR for its potential therapeutic applications [103][104][105]. Recent studies have revealed the crucial role of astrocytic ETBRs in the development of traumatic brain injury (TBI) and its potential implications in the progression of AD [106]. Since the ETBR can activate astrocytes through the autocrine effect of its ligand endothelin-1 (ET-1), targeting ETBRs is considered a precise way to regulate astrocyte activity.
Further research into the inhibitory effects of the ETBR in TBI models has provided evidence suggesting that the ETBR may be an effective target for controlling astrocyte activity during TBI [102,107]. This research also points to its implications in the progression of AD, making the ETBR a promising target in the field of neurodegenerative disease research [108,109]. A noteworthy advancement in this area is the use of the ETBR antagonist BQ788. Several studies have demonstrated its efficacy in reducing the number of reactive astrocytes, improving disruptions of the BBB, and decreasing brain swelling [102,110,111]. These outcomes indicate that ETBR antagonists could play a crucial role in diminishing astrocyte activation, thereby offering relief in neurological disorders such as TBI and AD, where astrocyte hyperactivation is a common factor. The connection between reactive astrocytes in the pathogenesis of TBI and the successful inhibition of ETBRs has laid a foundational basis for exploring the impact of ETBRs in AD. While it remains to be clarified whether the mechanisms observed in TBI are directly applicable to AD, this research provides fresh and exciting perspectives on potential new therapeutic strategies for the treatment of AD.
Current Therapies for NPS in AD, Limitations, and Challenges

An under-researched aspect of dementia in general is mood disorders such as anxiety and depression, among other NPS, which are often comorbid in patients with AD; a cross-sectional study assessing a cohort of 103 AD patients found that 51% presented with depressive symptoms: 23% had major depression (MDD) and 28% were found to have dysthymia (a milder but longer-lasting form of depression) [112]. Patients tend to present with symptoms of anxiety at the mild cognitive impairment stage of AD. Anxiety is commonly described as a negative emotional state, which can be characterised by psychological symptoms (hypervigilance and feelings of worry and dread) and physiological changes (increased sympathetic tone, resulting in sweating and increased blood pressure and heart rate) [113]. In non-AD elderly patients, there is often a higher prevalence of anxiety and depression [114]; however, this prevalence is even higher in patients with AD, with over 70% displaying symptoms of anxiety, as observed in one study of 523 community-dwelling AD subjects [114], and it can be reasonably assumed that there is an association between AD and the development of mood disorders. A behavioural analysis study using transgenic mouse models of AD found that when subjected to a series of stressful stimuli, the mice exhibited anxiety-like defensive behaviours and risk assessment behaviours that can be equated to hypervigilant and avoidance behaviours in anxious humans [113]. However, there is currently little known about the direct relationship between AD and concomitant NPS and the cellular causative factors behind this correlation; yet the prevalence of comorbidity between AD and anxiety is highly apparent, and there is an urgent clinical need to address this issue.
Currently prescribed medications for the treatment of anxiety and depression include SSRIs/SNRIs and short-term benzodiazepines. Whilst the use of SSRIs is in accordance with the pathophysiology of depressive disorders (although major depressive disorders often require additional treatment options such as bupropion [115,116]), it may not be the most rational choice of drug design for anxiolytics. The γ-aminobutyric acid (GABA) system is the main inhibitory system in the CNS and is involved in the pathophysiology of anxiety-related disorders. Synaptic and extrasynaptic γ-aminobutyric acid type A receptors (GABAARs) have different subtypes, and each subtype is composed of different subunits, which are encoded by different genes and determine the pharmacological properties of the GABAARs [117,118]. Synaptic GABAAR activation causes phasic inhibition, and most synaptic receptors consist of α1, β2, and γ2 subunits. Benzodiazepines can only modulate GABAA receptors containing the γ subunits, which are expressed synaptically; when bound to these subunits, they can produce immediate and effective relief of symptoms related to anxiety. However, their use is associated with concomitant adverse drug reactions (ADRs) and sedative, amnestic, or anticonvulsant effects as well as impaired coordination and visual disturbances, especially amongst the elderly, who are particularly susceptible due to changes in pharmacokinetic and pharmacodynamic profiles. This unpleasant side effect profile, together with addiction and dependence problems, has deemed benzodiazepines unsafe for long-term use. Therefore, SSRIs are currently considered the first-line therapy for the management of mood-related disorders even though this class of drugs may not be in accordance with the primary pathophysiology of anxiety. This highlights the need for the development of more rationally designed medications that can target the main pathophysiological pathways required for the management of anxiety
disorders.

Insights into More Targeted Therapy for NPS in AD

Experimental evidence suggests that cognitive impairment and emotional dysregulation in AD are linked to synaptic excitation-inhibition imbalance in the brain caused by damage to the hippocampus, prefrontal cortex (PFC), and limbic subcortical regions [113]. The GABAergic system, the major inhibitory neurotransmitter system in the brain, is often overlooked beyond the use of benzodiazepines, particularly its extrasynaptic components, which could offer an alternative solution to the problems associated with benzodiazepines. This view stems from evidence that GABA A R signalling deficits have been linked with many neurological disorders such as anxiety, depression, AD, chronic alcohol dependence, and schizophrenia [119,120], which suggests a clinical need for subtype-specific compounds that target the GABA A R. Although some studies suggest that GABAergic neurons are more resistant to neurodegeneration in AD than cholinergic and glutamatergic neurons [121,122], others have reported aberrant brain GABA levels in AD and mood disorders as well as in normal ageing [123–126]. Furthermore, evidence suggests a selective vulnerability of inhibitory interneurons during the disease progression of AD [127,128]. Therefore, there is scope for the development of drugs that preferentially bind to specific subunits of the GABA A R family and have fewer ADRs than the benzodiazepines currently used clinically. One area of focus for the future should be the extrasynaptic components of the GABAergic system, which are exclusively located on extrasynaptic terminals.
Activation of extrasynaptic GABA A Rs leads to tonic inhibition, indicating that only a low concentration of GABA is required for their activation. The tonic inhibition mediated by these GABA A Rs exerts a stronger inhibitory effect on excitatory neurons without affecting their sensitivity, which can reduce the risk of resistance [129]. The main subunits of interest in the extrasynaptic GABA A Rs are the δ subunit-containing receptors, which are mostly coupled with α4 subunits. These receptors have high GABA sensitivity with slow desensitization [130,131]. The αβδ and αβγ GABA A Rs have different pharmacological properties, and GABA is considered a partial agonist of αβδ GABA A Rs.

Another avenue for potential new therapies for general NPS is drug development centred around neuroactive steroids (NASs), which act as positive allosteric modulators (PAMs) of both synaptically- and extrasynaptically-expressed GABA A Rs. NASs also lead to increased expression of extrasynaptic GABA A Rs through a metabotropic mechanism [132]. There are also extrasynaptic GABA A R-selective drugs such as Gaboxadol (THIP), which acts on α4β3δ subunits and could potentially be used for the treatment of anxiety and depression. Gaboxadol selectively targets the extrasynaptic GABA A Rs that contain α4 and δ subunits. It has been shown that Gaboxadol has anxiolytic (as well as sleep-inducing) effects, but due to its side effect profile and inconsistent therapeutic benefit, its use was discontinued. However, due to its selectivity in targeting extrasynaptic GABA A Rs, which regulate tonic inhibition, it remains an investigational therapy of interest. Furthermore, altered expression of δ subunit-containing GABA A Rs has been demonstrated in conditions such as anxiety and depression, which further suggests that targeting extrasynaptic GABA A Rs can be useful in the management of mood disorders. Currently, other than Gaboxadol, such subunit-specific therapies have not been identified. However, there are
neurosteroid modulators such as Brexanolone (SAGE-547), Zuranolone (SAGE-217), and Ganaxolone that can target both synaptic and extrasynaptic GABA A Rs [133–136]. An analysis of the clinical trials with Zuranolone, Ganaxolone, and Brexanolone indicated that these neurosteroids were effective in the management of depressive-related disorders. The included studies, which were placebo-controlled, had limitations, including small sample sizes and unknown long-term effects of the therapies. Further studies are warranted in which the effectiveness of these therapies is compared with the standard SSRIs/SNRIs to identify which therapy is more effective. One of the main advantages of these neurosteroid medications is their fast onset of action in comparison to SSRIs, whose effects are commonly observed only after up to 12 weeks of use. Results of an indirect trial comparison (ITC) that compared HAM-D (Hamilton Depression Rating Scale) scores from trials of Brexanolone in post-partum depression with those of SSRIs indicated that Brexanolone showed a larger change from baseline in HAM-D scores. This, combined with the rapid response rate of Brexanolone, indicates that this therapy can be advantageous over SSRIs, particularly in conditions such as post-partum depression where a rapid response is crucial for the patient as well as the child [24]. Direct comparison studies are needed to investigate this hypothesis further.
Pregnenolone used for the management of bipolar depression did not indicate a notable improvement in the HAM-D score. For the primary measurements, there was a notable increase in the remission rate as indicated by the IDS-SR (Inventory of Depressive Symptomatology Self Report) score; however, this may be due to a lower baseline IDS-SR score in the pregnenolone group. The use of pregnenolone was also associated with an improvement in the HARS (Hamilton Anxiety Rating Scale). Figure 2 illustrates a summary of the discussion points in this review, summarizing the current treatments available for alleviating cognitive decline and NPS associated with AD, and potential targets for future drug design.
Conclusions

The proportion of the ageing population in the world is increasing; therefore, so is the incidence of AD. We need better medications for cognitive decline as well as for the other symptoms of AD. This problem is accentuated by the fact that current treatments provide neither a cure for AD nor a rationally designed therapy to manage the symptoms of anxiety or agitation that are commonly associated with the disease. Consequently, the call for developing more suitable and effective pharmaceutical treatments for AD and for the management of associated anxiety-related disorders is all the more urgent, as there is currently a lack of selective and rationally designed medications that can prevent, halt, or reverse disease progression. This review advocates for the exploration of novel therapeutic avenues, including the selective targeting of specific pathways such as the ET B R system to reduce neuroinflammation and GABA A Rs to mitigate the hyperactivity associated with cognitive deficits. By focusing on these unexplored angles, we aim to develop treatments that are not only more effective but also come with fewer side effects, providing a promising direction for future AD management strategies.
Figure 1. All the chemical structures in antiviral and anti-inflammatory drugs.

Figure 2. The summary diagram of the graphical representation of the current and potential future treatments for alleviating cognitive decline and NPS.

Table 1. Current, key FDA-approved treatments for AD.
- Donepezil (AChE inhibitor): By inhibiting AChE, donepezil improves the cognitive and behavioural signs and symptoms of AD, which may include apathy, aggression, confusion, and psychosis. Side effects: gastrointestinal (nausea, vomiting, anorexia, diarrhoea) and nervous (dizziness, confusion, insomnia).
- Rivastigmine (AChE inhibitor, approved in 1997 [69,70]): Used to treat mild to moderate symptoms of AD. Rivastigmine inhibits both butyrylcholinesterase and AChE, preventing the hydrolysis of acetylcholine and thus leading to an increased concentration of acetylcholine at cholinergic synapses. Side effects: gastrointestinal (nausea, vomiting, decreased appetite, diarrhoea, abdominal pain) and nervous (pain, headache, dizziness, syncope, fatigue, malaise).
- Galantamine (AChE inhibitor): Inhibits AChE in the synaptic cleft. Side effects: gastrointestinal, including nausea, vomiting, anorexia, and abdominal pain.
- Lecanemab (monoclonal antibody): Binds to amyloid-β, reducing amyloid plaques in the brain; binds Aβ plaques and prevents Aβ deposition with high selectivity for Aβ protofibrils. The treatment is associated with slowing the rate of progression of AD. Side effects: infusion-related reactions, headache, amyloid-related imaging abnormalities-edema (ARIA-E), ARIA superficial siderosis, cerebral microhaemorrhages, and falls.
- Donanemab [51,77] (monoclonal IgG1 antibody for earlier stages of AD): Works by inducing microglial-mediated clearance of existing Aβ plaques with the intent of slowing disease progression. Side effects: gastrointestinal, including nausea, diarrhoea, and vomiting.

Table 2. Clinical trials for AD therapy from 2024.
2024-05-12T15:17:47.434Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "9036c5769d8566d536c4827896630b53927ed3af", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/25/10/5169/pdf?version=1715257671", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "78a9ad10179b6da0500b1483aff447a38ace712b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246475628
pes2o/s2orc
v3-fos-license
Nodular scleritis – a rare presentation of COVID-19 and variation with testing

Purpose – To report a rare case of a patient presenting with nodular scleritis and SARS-CoV-2. Observations – This case highlights a unique presentation of a SARS-CoV-2-positive patient with nodular scleritis as a presenting feature. The patient initially had ocular symptoms and subsequently developed only mild systemic features, which did not require hospitalization. COVID testing done at different time points showed variable results which correlated with the ocular features. This patient was followed up during quarantine using tele-ophthalmology. Conclusion and importance – This case highlights a possible rare presentation of a SARS-CoV-2 patient with nodular scleritis and also the importance of telemedicine during these unprecedented times.

Introduction

An important aspect of the current efforts to control and reduce the impact of the COVID-19 pandemic has been to reduce the transmission of the virus between individuals. In this respect, it is vital to understand the routes and modes of transmission, including the possibility of spread via the ocular surface. Several clinical studies have reported the presence of SARS-CoV-2 in tear specimens from individuals with COVID-19. 1 There has been discordance in reports regarding the proportion of COVID-19 patients with presence of virus in ocular specimens, possibly relating to factors including sensitivity of tests, type of ocular specimen, and timing of specimen procurement. In a large study including 1099 hospitalised patients with laboratory-confirmed COVID-19 from 30 hospitals in China, conjunctival congestion was documented in 9 patients (0.8%). 2 In another study of patients diagnosed clinically as COVID-19, one third had ocular symptoms and signs, including conjunctival hyperaemia, chemosis, and epiphora. Early recognition and detection of these cases can help in adequate protection and reduced transmission of the disease.
3,4 Although viral infection of ocular cells has not yet been reported in patients, a recent report found that SARS-CoV-2 can infect the conjunctival epithelium in an ex-vivo culture system. 5 This case report highlights an unusual presentation of a patient positive for COVID-19 and the need for treating every patient with red eye in this pandemic with adequate precautions to prevent inadvertent cross-infection.

Case report

A 39-year-old male patient presented with redness and pain in the right eye of 2 days' duration. On examination, the patient had minimal chemosis with congestion in the right eye. He was diagnosed as having conjunctivitis (Fig. 1) and prescribed topical 0.5% moxifloxacin eye drops 4 times a day in the right eye. Since conjunctivitis has been described as a possible clinical feature in patients with COVID-19, the patient was tested for SARS-CoV-2 with real-time polymerase chain reaction (RT-PCR) on a nasopharyngeal swab sample and was found to be negative. Two days later, the patient developed fever and cough along with worsening of the ocular involvement and was reviewed through video consultation. He was found to have localised congestion and swelling in the superomedial quadrant of the right eye and was diagnosed clinically as having possible nodular scleritis. A repeat nasopharyngeal swab sample was taken for SARS-CoV-2 RT-PCR testing, and the patient was started on tablet azithromycin with an antipyretic (acetaminophen) for symptomatic relief while awaiting test results. The repeat swab tested positive for COVID-19, but since the patient had only mild systemic symptoms and signs, he did not require hospitalization and was advised home quarantine. The patient was started on topical betadine 0.25% drops and nepafenac eye drops twice a day.
Systemic blood investigations to look for underlying autoimmune disorders which could cause nodular scleritis (complete blood count, random blood sugar, RA factor, ANA, c-ANCA, p-ANCA, ESR, CRP, urine microscopy) were advised and found to be within normal limits. The patient was also advised a chest X-ray, which was normal, and TPHA and HIV 1 and HIV 2 testing to rule out associated infections, which were also negative. The patient was followed up via telemedicine during the quarantine period. Subsequently, there was an exacerbation of redness and pain in the right eye, but no worsening of the systemic condition, 5 days after onset of fever. Examination of the right eye via tele-ophthalmology showed increased conjunctival congestion with worsening of the nodular inflammation. Fever and cough had reduced significantly by this consultation. Owing to the non-resolving inflamed conjunctival/episcleral nodule, an MRI orbit was advised to look for other possible causes of such a nodule, such as neoplasms, and to rule out orbital extension. The MRI T1W image (Fig. 2) was reported as a nodular lesion with scleral thickening and a possible inflammatory etiology, suggesting a diagnosis of nodular scleritis. The patient was started on topical 1% prednisolone eye drops 4 times a day along with 0.25% betadine drops and the oral NSAID (non-steroidal anti-inflammatory drug) etoricoxib 60 mg once daily, and reported significant improvement in ocular symptoms and signs; this was continued for a week, followed by tapering of the steroid eye drops while continuing the oral NSAID. A repeat MRI orbit done after 4 days (Fig. 2) showed resolution of the inflammation and scleral thickening. The patient's systemic condition remained stable and resolved without worsening or requiring hospitalization. Twelve days after the onset of fever and the COVID-positive test, a repeat nasopharyngeal swab for SARS-CoV-2 was found to be negative.

Discussion

This is the first reported case of nodular scleritis being the presenting feature in a patient with COVID-19 infection.
Another unusual feature of the case is that the patient had predominantly ocular features with only very mild systemic features, which resolved with supportive treatment and did not require hospitalization. There has been a report of 2 cases of confirmed COVID-19 developing anterior scleritis after their systemic symptoms improved; in these cases a thorough systemic workup did not identify any underlying autoimmune disease. 6 One patient presented with necrotizing anterior scleritis and required intravenous cyclophosphamide and subcutaneous adalimumab in addition to oral prednisolone, while the other patient had only sectoral anterior scleritis and responded to topical betamethasone and oral prednisolone. 6 There have also been reports of patients with COVID-19 developing acute follicular conjunctivitis, conjunctival hyperaemia, chemosis, epiphora, and increased ocular secretions. 7,8 These manifestations, however, have been observed more frequently in patients with severe pneumonia and during the middle phase of illness. 4 Only 1 patient in a series of 38 cases was reported to have conjunctivitis as the initial manifestation of the disease. 4 There has also been a case report of a patient presenting with conjunctival congestion and then rapidly worsening to develop severe acute respiratory illness within a few hours. 9 This is a possible first report of scleritis associated with an active SARS-CoV-2 infection. Though the patient did develop fever and cough, he was only mildly symptomatic, requiring home quarantine and not hospitalization, in contrast to earlier reports of ocular manifestations presenting in patients with severe respiratory distress often requiring admission to the intensive care unit. 7 Scleritis is an inflammatory process involving the outer coating of the globe which is characterized by focal or diffuse hyperaemia, moderate to severe pain, and possible impairment of vision. Autoimmune scleritis constitutes the majority of cases of scleritis.
Topical and/or systemic corticosteroids are the management of choice in these cases. Seldom, scleritis may be caused by an infectious etiology, seen in 5%-10% of cases. There has been an association of scleritis with the herpes group of viruses; however, there has been no evidence of an association with SARS-CoV-2 yet. 10 This case also highlights an unusual trajectory of the disease, wherein the patient's fluctuating COVID test positivity correlated with fluctuating levels of the ocular manifestations as well. It is important to try to decipher the possible reasons for the varying COVID test results. A negative test result immediately followed by a positive test result within a short period of time could be due to a false-negative test result. RT-PCR is commonly used to detect SARS-CoV-2 in samples, and there are a number of reasons for false-negative results. It is well known that results from real-time RT-PCR can be affected by variation in viral RNA sequences. 10,11 In addition, according to the natural history of COVID-19 and the viral load kinetics in different anatomic sites of patients, sampling procedures can contribute to false-negative results. A study has reported sputum as the most accurate sample for laboratory diagnosis of COVID-19, followed by nasal swabs, while throat swabs were not recommended for the diagnosis. The role of the viral load kinetics of SARS-CoV-2 was documented in two patients in Korea, who showed a variation suggesting viral load kinetics different from those of previously reported coronavirus infections. 12,13 The virus was detected from upper respiratory tract (URT) and lower respiratory tract (LRT) specimens 2 days after onset of symptoms; however, an altered viral load led tests to be negative after 5 days, but on the 7th day, due to a spike in the viral load, the patient again turned out to be positive. These findings indicate the different viral load kinetics of SARS-CoV-2, suggesting that sampling timing and the period of disease development play an important role in real-time RT-PCR results.

Fig. 1. a) Diffuse anterior segment photographs reveal localised conjunctival congestion with dilated tortuous telangiectatic vessels in the superior and superonasal conjunctiva. b) Additional violaceous hue of the sclera, in addition to conjunctival congestion and chemosis, with dilated tortuous telangiectatic vessels and nodular swelling in the superonasal quadrant. c) and d) Gradual resolution of conjunctival congestion and chemosis with resolving nodular swelling.

Conjunctival swabs from patients with ocular manifestations have proven positive for SARS-CoV-2 in only 5% of patients. 14 This low detection rate could indicate a low prevalence of the virus in conjunctival secretions and tears, or viral loads below the detection thresholds of existing PCR diagnostic techniques. 15 This could explain the initial negative result on RT-PCR. One limitation of this case report was that we did not demonstrate the virus from the conjunctiva to conclusively link the nodular inflammation to SARS-CoV-2. Even though the virus was not detected, and low detection rates of SARS-CoV-2 from the conjunctiva are known, 14 this could still be a coincidental finding of nodular scleritis with a coexisting SARS-CoV-2 infection, even after ruling out other possible causes. Hence, a larger cohort of patients presenting with scleritis and COVID-19 needs to be evaluated to conclusively prove the association; however, this case does represent an interesting association of scleritis with a SARS-CoV-2 infection.
This case also illustrates the usefulness and relevance of tele-ophthalmology procedures during the COVID-19 epidemic, which, in addition to preventing the transmission of SARS-CoV-2, could help detect potential COVID-19 patients. Ophthalmologists should be aware of these unusual ocular presentations of COVID-19, since they could precede the development of systemic manifestations and help in the early identification and treatment of these cases.

Patient consent

Consent to publish this case report has been obtained from the patient in writing. This report does not contain any personal identifying information.

Disclosures

No funding or grant support.

Author contribution

All authors attest that they meet the current ICMJE criteria for authorship. Dr. Arif Adenwala: case imaging and data. Dr. Rohit Shetty: case imaging and concept. Dr. Sharon D'souza: manuscript writing and data processing. Dr. Padmamalini Mahendradas: imaging interpretation. Dr. Gairik Kundu: manuscript writing and corresponding author.

Declaration of competing interest

The following authors have no financial disclosures: AA, RS, SD, PM, GK.
2022-02-03T14:16:29.439Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "f7acac5f748c1a35ae91a08d97fd98ab5bbb9bd0", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ajoc.2022.101396", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "97194c71536fe79fa1e66c73403474d00f120a6b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244676713
pes2o/s2orc
v3-fos-license
A new frontier: Navigating hospital pharmacy practice during the COVID-19 pandemic

The novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first manifested in Wuhan, China in December 2019 as multiple cases of pneumonia of unknown etiology. This was the herald of an infectious catastrophe that would eventually affect millions of people across the world, claim countless lives, and uproot the very foundations of modern-day healthcare practice. Hospital pharmacists, alongside physicians, nurses, and numerous other disciplines, are an integral part of the healthcare team that responded to this pandemic. The purpose of this article is to highlight the teamwork, determination, and innovativeness demonstrated by clinical pharmacists at a 510-bed community hospital in response to the coronavirus disease of 2019 (COVID-19). Pharmacists rose to the occasion to ensure that patients continued to receive the best therapy possible during this pandemic, and they supported other disciplines to ensure a collaborative response. Despite the unprecedented challenges posed to hospital pharmacy practice in the setting of COVID-19, our pharmacy team's response has resoundingly proven the resiliency of the human spirit and shows that nothing is insurmountable in the face of collaboration, creativity, and an overwhelming desire to care for our community.

INTRODUCTION

The novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first manifested in Wuhan, China in December 2019 as multiple cases of pneumonia of unidentified etiology [1]. Unbeknownst to the rest of the world, this was the herald of an infectious catastrophe that would infect nearly 27 million people and claim over 900,000 lives globally as of September 2020 [2].
On March 11, 2020, the World Health Organization (WHO) officially declared the coronavirus disease of 2019 (COVID-19) a pandemic [3]. Two days later, the United States designated COVID-19 as a national emergency [3]. The very foundations of modern-day healthcare practice were subsequently uprooted amidst the implementation of sweeping policy changes in an urgent attempt at adaptation to this nebulous new threat.

Alongside the physicians and nurses that have been rightfully lauded as heroes, hospital pharmacists are another integral component of the healthcare team that responded to this pandemic. Their duties are myriad, including being active members of hospital multidisciplinary teams, leading hospital committees, participating in patient rounds, acting as a repository of drug information and providing recommendations to other healthcare professionals, verifying orders to ensure the administration of safe and effective medication regimens to patients, and completing clinical consults that range from dosing antimicrobial agents to managing electrolytes and total parenteral nutrition [4]. Hospital pharmacists, as with all healthcare workers, experienced an abrupt alteration of their daily workflow with the emergence of COVID-19 and the rapid influx of operational, communication, and clinical changes that accompanied it [5].

What follows is an account of the challenges and triumphs experienced by the pharmacy team in a 510-bed community hospital in Florida, United States during the early phase of the COVID-19 pandemic, highlighting the teamwork, determination, and innovativeness required to successfully navigate hospital pharmacy practice during a crisis.
Pharmacy operations

A strong operations team was the driving force that ensured our hospital continued to deliver the best care possible to patients during the COVID-19 pandemic. Before the first case of COVID-19 was reported at the state level, an interdisciplinary team worked tirelessly every day to ensure that patient care standards would not be compromised in the event of a local outbreak. Law enforcement, emergency medical services (EMS), pharmacists, respiratory therapists, physicians, nurses, and representatives from nearby hospitals conducted tabletop exercises to discuss the collaborative efforts needed to be successful once the pandemic inevitably reached our community [5].

The hospital pharmacy department was responsible for operational initiatives related to medications that could potentially lead to interruption in patient care, and for ensuring that all patients continued to receive the best therapeutic option despite any challenges that could arise. One method was to have a pharmacy team assess potential manufacturing issues for medications that would be impacted by the pandemic, helping to anticipate shortages before they occurred and to devise contingency plans. Upon experiencing a shortage of certain medications, the pharmacy department was able to expediently coordinate the implementation of alternative medications with other disciplines. For example, cisatracurium is the preferred neuromuscular blocking agent for patients needing continuous infusion of a paralytic. Once our facility was not able to secure an adequate supply of cisatracurium, the pharmacy team immediately collaborated with physicians and other healthcare disciplines to switch to vecuronium as the alternative neuromuscular blocking agent for infusion [6].
The pharmacy operations team made several adjustments throughout the course of the COVID-19 pandemic as deemed necessary. Weekly operations meetings with other regional hospitals in our system were changed to daily so as to improve communication and encourage sharing of best practices in real time. All in-person meetings were converted to an online platform to minimize the potential of COVID-19 transmission between employees and the consequent weakening of our workforce. The pharmacy department also implemented strategies to conserve personal protective equipment (PPE), such as outsourcing the compounding of select medications to an outside facility.

Multiple other operational changes were implemented to target employee safety. Before COVID-19, our hospital primarily utilized nebulizers for breathing treatments. To minimize the risk of infection through aerosol generation with nebulizer usage, the protocol was revised to instead prioritize inhaler use for patients with suspected or lab-confirmed COVID-19 requiring breathing treatment. Prior to the pandemic, all satellite pharmacy personnel were required to respond to emergency codes on their floors; this responsibility was eventually centralized to one pharmacy team to limit possible COVID-19 exposure of multiple pharmacists. Pharmacy also instituted stringent measures to clean used crash carts and returned medications that could have been potentially exposed to a patient with COVID-19, to avoid cross-contamination and reduce viral spread.

The staffing model for the pharmacy department constantly evolved throughout this pandemic based on hospital needs and patient volume. During the initial local onset of COVID-19, decreases in the hospital census necessitated a temporary reassignment of roles. Pharmacists assisted nursing staff with responsibilities within their scope, and were also enlisted to participate in screening patients and visitors as per the hospital's infection control measures.
Communications

Transparent and routine communication is a crucial element in maintaining team morale, fostering trust, and reducing panic in overwhelming situations. With new COVID-19 information being circulated each day through the news, journal articles, and even word of mouth, hospital pharmacists may struggle with an excess of information. Therefore, it is important to establish reliable chains of communication in the hospital through which pharmacists can feel assured they are receiving accurate and relevant COVID-19 updates.

Communication from leadership is key to ensuring healthcare workers feel supported. Regular COVID-19 updates are electronically communicated by facility administrators; these missives include the hospital census and current number of COVID-19 patients, PPE availability and conservation tips, masking and social distancing advice, and the process for COVID-19 testing. Furthermore, weekly online meetings are hosted by the company division's clinical pharmacy leaders. These sessions feature COVID-19 treatment guidance updates, review prominent new literature related to COVID-19 management, and provide an open platform for dialogue between corporate pharmacy leadership and local hospital pharmacy teams.

Additionally, smaller-scale communications are needed to give hospital pharmacists the opportunity to initiate discussion and voice concerns to their colleagues and supervisors. The pharmacy department schedules two 15-minute virtual huddles each day, in the morning and afternoon, which are attended by pharmacists, pharmacy technicians, residents, and students. These huddles serve to disseminate important policy and procedure changes or drug shortage information, to confer about important operational, patient care, and medication safety matters, and to perform a hand-off of any pressing patient care issues.
Clinical practice

One of the most arduous aspects of the COVID-19 pandemic for hospital pharmacists has been the rapidly changing treatment recommendations. The United States Food and Drug Administration (FDA), for example, withdrew its March 2020 emergency use authorization (EUA) for hydroxychloroquine and chloroquine secondary to clinical trials finding both a lack of efficacy and a risk of serious adverse events [7]. In May 2020, the WHO recommended against corticosteroid use in its interim guidance document for COVID-19, yet the randomized, controlled RECOVERY trial published in July 2020 supported dexamethasone usage in patients hospitalized with COVID-19 secondary to an observed mortality benefit [8,9].

Hospital pharmacists are fundamental in ensuring the use of the most up-to-date therapy for management of COVID-19 patients. The hospital antimicrobial stewardship program (ASP) committee, consisting of infectious diseases (ID) pharmacists, physicians, and other key stakeholders, creates and maintains a comprehensive COVID-19 treatment algorithm, incorporating recommendations from various regulatory and healthcare entities. The hospital's monthly ASP meetings are led by the ID pharmacist and the ID physician champion, which allows for multidisciplinary collaboration to revise the COVID-19 treatment recommendations as needed. Additionally, pharmacists have the opportunity to make significant clinical interventions at the order verification stage, such as intervening on polypharmacy issues with COVID-19 regimens. Pharmacists can discuss evidence-based treatment recommendations with physicians, discouraging off-label usage of medications that have not shown compelling evidence for treatment of COVID-19 [8]. As the use of vitamins and minerals such as ascorbic acid, vitamin D, and zinc becomes more prevalent (despite these supplements having inadequate data to recommend for or against their use in COVID-19), pharmacists can intervene by discussing available evidence with the
ordering provider and discouraging the use of these supplements, especially when there are patient safety concerns such as drug-drug interactions with other medications, or potential adverse events [10].

Due to the high acuity of COVID-19 patients, collaboration within a multidisciplinary team is one of the factors necessary for successful care of these patients. In pre-COVID-19 times, multidisciplinary patient rounds were performed daily at the bedside. However, due to the risk of viral transmission, the challenge of rounds became evident, since there are representatives from all disciplines, including rehabilitation services, physicians, nurses, pharmacists, dieticians, and more. During the course of this pandemic, some multidisciplinary rounds were moved to a virtual platform to minimize the potential to spread the virus. Bedside multidisciplinary rounds in critical care areas with COVID-19 patients continued to occur in person, but were restricted to physicians, mid-level practitioners, nurses, respiratory therapists, and pharmacists only.

Furthermore, COVID-19 patients require significant respiratory care and critical care management. Sedation and analgesia are used to ensure that the patient stays comfortable while on mechanical ventilation, and also to prevent agitation while receiving other therapies such as rehabilitation therapy, proning, and chest physiotherapy. COVID-19 patients can become hemodynamically unstable during the course of hospitalization, which requires support through the use of vasopressors (e.g., norepinephrine, vasopressin), inotropes (e.g., milrinone, dobutamine), and/or fluids [10]. Due to these increased requirements, hospital pharmacists are a critical component of the multidisciplinary team taking care of COVID-19 patients.
One of the findings for patients with COVID-19 is the increased risk of venous thromboembolism (VTE), as some COVID-19 patients present with or progress to a hyper-inflammatory, hypercoagulable state [10]. For example, one of the signs that a COVID-19 patient is hyper-inflammatory is an elevated D-dimer (2-3x higher than normal). This finding necessitated a proactive approach of using pharmacologic thromboprophylaxis in hospitalized patients with COVID-19, or empiric therapeutic-intensity anticoagulation in patients suspected to be at very high risk of developing VTE [11,12]. Pharmacists are instrumental in screening COVID-19 patients for the aforementioned hyper-inflammatory state through the ordering of inflammatory lab markers. In addition to ensuring that appropriate anticoagulation is selected for patients with COVID-19, hospital pharmacists monitor the use of these anticoagulant medications to ensure that patients are maintained at the therapeutic goal and to prevent adverse events such as bleeding [11].

Challenges

A significant number of challenges have been posed to hospital pharmacists by the COVID-19 pandemic. For the past few months, hospital policies and protocols have undergone frequent changes in a reflection of the fluctuating COVID-19 management recommendations by organizations such as the Centers for Disease Control and Prevention, the FDA, and the National Institutes of Health. Providing daily communication from pharmacy leadership, implementing electronic health record (EHR) order sets for COVID-19 management, and using automated EHR alerts to monitor different aspects of COVID-19 therapy can help pharmacists feel supported and optimize the care provided to patients.
Another barrier to overcome during this pandemic has been the provision of experiential learning to pharmacy students and residents: attempting to simultaneously ensure their safety on rotations while not diminishing the quality of their education. Social distancing requirements have necessitated the removal of students from smaller satellite pharmacy locations in the hospital, compelling some learners and preceptors to conduct rotations remotely with reduced face-to-face time. Learners' active participation during bedside patient rounds was impacted, with rounds either transitioning to virtual modalities or limiting physical participants to only essential personnel. The move from in-person to online meetings also affected learner presentations and journal clubs, with audience engagement and discussion becoming more difficult to foster.

Recruitment has also faced difficulties. Decreased revenue has at some points resulted in hiring freezes in many healthcare systems, and low patient census numbers caused many hospitals and systems to decrease their workforce during the early phase of this pandemic. Advertising for student and resident positions and job vacancies has been limited to virtual forums only. The interview process similarly underwent changes: the prevalence of phone and video interviews has increased, and on-site interviews may be hindered by measures to reduce the risk of COVID-19 transmission, such as shorter interview durations, limited group-interview capability due to social distancing, and difficulty gauging expressions and reactions due to mandated masks.
CONCLUSION

As with all pioneers traversing new and uncertain territory, hospital pharmacists have encountered unique challenges while responding to the COVID-19 pandemic. As described, pharmacists rose to the occasion to ensure that patients continued to receive the best therapy possible during this pandemic, and supported other disciplines to ensure a collaborative response. Many aspects of healthcare evolved rapidly during the COVID-19 pandemic, including our methods of communication, the flow of our day-to-day operations, and our clinical practice. Our experience during this pandemic has resoundingly served to demonstrate that nothing is insurmountable in the face of teamwork, creativity, and an overwhelming desire to care for one another and our community.
Novel curcumin- and emodin-related compounds identified by in silico 2D/3D conformer screening induce apoptosis in tumor cells

Background: Inhibition of the COP9 signalosome (CSN)-associated kinases CK2 and PKD by curcumin causes stabilization of the tumor suppressor p53. It has been shown that curcumin induces tumor cell death and apoptosis. Curcumin and emodin block the CSN-directed c-Jun signaling pathway, which results in diminished c-Jun steady state levels in HeLa cells. The aim of this work was to search for new CSN kinase inhibitors analogous to curcumin and emodin by means of an in silico screening method.

Methods: Here we present a novel method to identify efficient inhibitors of CSN-associated kinases. Using curcumin and emodin as lead structures, an in silico screening with our in-house database containing more than 10^6 structures was carried out. Thirty-five compounds were identified and further evaluated by Lipinski's rule-of-five. Two groups of compounds can be clearly discriminated according to their structures: the curcumin-group and the emodin-group. The compounds were evaluated in in vitro kinase assays and in cell culture experiments.

Results: The data revealed 3 compounds of the curcumin-group (e.g. piceatannol) and 4 of the emodin-group (e.g. anthrachinone) as potent inhibitors of CSN-associated kinases. Identified agents increased p53 levels and induced apoptosis in tumor cells as determined by annexin V-FITC binding, DNA fragmentation and caspase activity assays.

Conclusion: Our data demonstrate that the new in silico screening method is highly efficient for identifying potential anti-tumor drugs.

Background

The COP9 signalosome (CSN), a conserved multimeric protein complex, functions at the interface between signal transduction and ubiquitin (Ub)-dependent proteolysis [1]. Because of associated enzymes, the CSN possesses kinase activity. Two of the associated kinases are the protein kinase CK2 (CK2) and the protein kinase D (PKD) [2].
More than 200 proteins are known to be phosphorylated by CK2, which is located nearly everywhere in the cell. PKD is a serine/threonine kinase localized at either the plasma membrane or the cytosol of lymphocytes [3] and is associated with very diverse cellular functions, including Golgi organization, plasma membrane-directed transport, metastasis, immune response, apoptosis and cell proliferation [4]. It is assumed that the CSN is a platform that brings together the kinases and appropriate substrates [5]. Transcriptional regulators such as p53 and c-Jun are phosphorylated by the CSN kinases [6,7]. The phosphorylation of p53 at Thr155 results in Ub-dependent degradation of the tumor suppressor [6]. In contrast, the CSN-directed phosphorylation of c-Jun leads to the stabilization of the transcription factor towards the Ub/26S proteasome system [8].

Cellular functions such as regulation of transcription, DNA repair, cell cycle regulation, senescence and apoptosis are modulated by p53 as well as c-Jun. Defects most frequently observed during tumorigenesis are mutations in the p53 gene [9]. It is well known that wild type p53 provides a critical brake in tumor development [10]. In contrast, as a component of activator protein-1, the oncoprotein c-Jun is mostly a positive regulator of cell proliferation and is involved in oncogenic transformation (for review see [11]). Hence, the intracellular concentrations of p53 and c-Jun are decisive for tumor development. Therefore, in tumor therapy it is of great interest to control the stability of p53 and c-Jun in tumor cells. One strategy might be the inhibition of the CSN-associated kinases CK2 and PKD. It has been demonstrated before that blocking CSN-mediated phosphorylation causes an increase of p53 [6] and a decrease of c-Jun [12], very useful effects for anti-tumor drugs.
Curcumin, which is already in phase I clinical trials for evaluations concerning the prevention of colon, breast, lung and prostate cancer [14], has been identified as an inhibitor of CSN-associated kinases [13]. Former investigations showed that curcumin is a potent inhibitor of angiogenesis [15] and of the recombinant kinases CK2 and PKD and the purified CSN complex from erythrocytes [2,13]. In addition, the natural product emodin is also known as an inhibitor of CK2 (PDB code: 1F0Q), PKD and the CSN complex [2].

In this study we developed an in silico screening to identify novel, more effective inhibitors of CSN-associated kinases using our in-house database (more than 10^6 compounds). Curcumin and emodin served as lead structures in the screenings. Using a 3D superposition algorithm [16], the lead structures were compared with every compound of the database. For better coverage of the compounds and to account for their flexibility during use of the algorithm, a total of ~50 conformers were computed for every compound of the database. Compounds identified by the in silico screening were evaluated in kinase assays and cell culture experiments. With the new screening strategy, potential new drugs for tumor therapy were identified, which stabilized endogenous p53 and induced apoptosis in tumor cells.

Methods

In silico screening

Three-dimensional (3D) similarity search

Lead structures (curcumin and emodin) and compounds in the database were prepared for the 3D search, which is based on structural similarities. As a first step the centers of mass of each compound were determined and superimposed. The plane and the straight line of minimal quadratic distance to all atoms were computed to determine the least and largest (orthogonal) expansion. One structure was rotated such that the major directions coincide. In a further step the normalization of the atomic set was used to identify pairs of corresponding atoms.
The root mean square distance (rmsd) was calculated for the related atomic pairs. An improvement of this value was obtained by removing or adding atoms to this superposition [16].

Two-dimensional (2D) similarity search

Another possibility to find new inhibitors of CSN-associated kinases was the 2D search, which is based on chemical similarities. The presence or absence of common functional groups such as alcohols or ring systems such as pyrimidines was investigated. Each substructure element was assigned to one bit of a Boolean array. To calculate the 2D similarity between two structures, the Tanimoto coefficient was estimated, taking into consideration bits set in both structures (Bits_AB) and bits set in only one structure (Bits_A and Bits_B):

Tanimoto = Bits_AB / (Bits_A + Bits_B + Bits_AB)

The value varies between 0 (completely different molecules) and 1 (identical molecules) [17].

In further steps each compound was analyzed for its possible application as a drug. First we investigated absorption and permeability using Lipinski's rule-of-five, which implies that molecules should contain fewer than 10 H-bond acceptors and fewer than 5 H-bond donors, that the calculated logP value (which describes the lipophilic properties) should be less than 5, and that the molecular weight should be less than 500 g/mol [18]. Any compound violating more than one rule is not a promising candidate for a drug.

Based on the similar property principle, one can predict toxic effects from the molecular structure. We used two quantitative structure-toxicity relationship (QSTR) models to analyze the compounds for their toxicity. The toxicological data were obtained using the software TOPKAT® DS MedChem Explorer (Discovery Studio, Accelrys Inc., http://www.accelrys.com/dstudio/ [19]), which comprises the QSTR models [20].

Caspase-3/7 activity assay

To determine apoptosis, B8 cells (10^4 cells/well) were incubated with the indicated inhibitors for 4 hours.
After incubation, the caspase-3/7 reagent containing a DEVD peptide was added as recommended by the manufacturer (Promega). The fluorescence of each well was measured at an excitation wavelength of 485 nm and an emission wavelength of 530 nm.

Cell viability assay

Cell viability was assessed by the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay (Sigma-Aldrich), which is based on the ability of a mitochondrial dehydrogenase from viable cells to oxidize the tetrazolium rings of the pale yellow MTT and form dark blue formazan crystals, which are largely impermeable to cell membranes and, therefore, accumulate within healthy cells. The number of surviving cells is directly proportional to the concentration of the formazan produced. Cells were incubated with the indicated inhibitors at different concentrations (20 or 50 µM) for 24 h. Then a solution of MTT in phosphate-buffered saline (PBS) was added to each well to a final concentration of 0.5 µg/µl. After 4 h of incubation, the dark blue formazan was solubilized with 100 µl DMSO. Absorbance was measured at 590 nm using an ELISA reader.

Annexin-V-FITC/propidium iodide (PI) double staining

Annexin V binds to phosphatidylserine externalized to the outer leaflet of the plasma membrane bilayer during the initial stages of apoptosis. To measure cell staining by annexin V, the substance was labeled with FITC (fluorescein isothiocyanate). Simultaneously the cells were stained with PI. By the double staining the test discriminates between intact (FITC-/PI-), apoptotic (FITC+/PI-) and necrotic cells (FITC+/PI+) [21]. First, HeLa cells were incubated with the indicated inhibitors at different concentrations (20 or 50 µM) for 24 h. After harvesting and washing with PBS, cells were resuspended in 100 µl annexin V binding buffer (containing 10 mM HEPES/NaOH, pH 7.4, 140 mM NaCl, 2.5 mM CaCl2).
Subsequently cells were incubated with 5 µl of FITC-conjugated annexin V (ApoTarget) for 20 min at room temperature. After annexin-V-FITC staining, 400 µl of annexin V binding buffer containing PI (2.5 µg/ml) were added and the cells were analyzed by flow cytometry within 1 hour of staining.

DNA fragmentation

Cells were seeded at a density of 10^5 cells/ml and treated with different concentrations (20 and 50 µM) of the indicated emodin- and curcumin-like compounds. After 24 h the cells were collected, washed with PBS at 4°C and fixed in PBS/2% (vol/vol) formaldehyde on ice for 30 min.

[Table 1: Creation of two groups of potential inhibitors of CSN-associated kinases. The structures found in the database by 3D and 2D screening were divided into two groups (curcumin-group and emodin-group) depending on the lead structure, curcumin or emodin. The analysis of the 2D structures supports the division into the two groups. The double line marks the threshold that normally limits the hits (2D similarity <85%).]

[Figure 1: 3D superposition and 2D comparison of curcumin and piceatannol.]

[Figure 2: In vitro kinase assays with piceatannol and BTB14431. The demonstrated results are representative of at least four independent experiments.]

Results

In silico screening plays an important role in drug design [21]. The method used here was developed to identify potential new inhibitors of the CSN-associated kinases CK2 and PKD. Based on the structures of the known inhibitors curcumin and emodin [2], a search in our in-house database containing approximately 10^6 compounds was performed. A 3D superposition algorithm was developed to compare the structures of the known inhibitors (lead structures) with the database compounds. Using the 3D superposition algorithm, the identification of structures that do not exactly fit the pattern of the lead structure (scaffold hoppers, 2D similarity <85%) can be realized.
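The center-of-mass superposition and rmsd scoring described for the 3D search can be illustrated with a minimal Python sketch. It assumes unit atomic masses and already-assigned atom pairs, and omits the principal-axis rotation and the atom add/remove refinement steps of the actual algorithm [16]:

```python
import math

def center(coords):
    """Translate a conformer so its center of mass lies at the origin
    (unit masses assumed; the real algorithm also aligns principal axes)."""
    n = len(coords)
    cx = sum(x for x, _, _ in coords) / n
    cy = sum(y for _, y, _ in coords) / n
    cz = sum(z for _, _, z in coords) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in coords]

def rmsd(a, b):
    """Root mean square distance over pre-assigned atom pairs:
    the lower the value, the more similar the two conformers."""
    if len(a) != len(b):
        raise ValueError("atom pairs must be assigned one-to-one")
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(a, b))
    return math.sqrt(sq / len(a))
```

In this simplified form, two conformers that differ only by a translation yield an rmsd of zero after centering.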
To carry out specific searches, many features such as the size of a molecule (Å), the rmsd limit, the number of assigned atoms and the number of similar atoms were compared. With this approach 35 compounds similar to the lead structures curcumin and emodin were identified and further analyzed. As a first step we tested the behavior of the compounds with respect to Lipinski's rule-of-five using Accord for Excel from Accelrys Inc. Our investigation showed that no compound broke more than one rule (data not shown). Further, the toxicological effects of the compounds were tested. Two QSTR models were employed: Rat Oral LD50 (Lethal Dose) and NTP (National Toxicology Program) Rodent Carcinogenicity. All identified compounds were predicted to be harmless.

Based on the different lead structures and the calculated Tanimoto coefficients, the identified compounds were divided into two groups (see Table 1). The compounds of the first group were found by searching the database with curcumin, and the compounds of the second group are related to emodin. Fig. 1A shows the superposition of the structures of curcumin (blue) and piceatannol (green). In addition, the chemical properties of the two compounds are compared (Fig. 1B). Both structures contain two aromatic rings and a number of H-bond acceptors, which seem to be important for the inhibitory effect.

All 35 compounds selected from the database by the screenings described above were tested in kinase assays for their ability to inhibit CSN-associated kinases. In these assays, recombinant CK2 or PKD as well as purified CSN were incubated in the presence of [γ-32P]ATP and 50 or 200 µM of the potential inhibitors (data not shown). The data showed that only 7 out of 35 identified compounds inhibited the CSN-associated kinases significantly in the chosen concentration range. Therefore 7 reagents, 3 compounds of the curcumin-group and 4 of the emodin-group, were used for further analysis.
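The 2D Tanimoto comparison and the rule-of-five filter used above can be sketched as follows; fingerprints are represented as Python sets of set-bit positions, and the cutoff of more than one violation follows the criterion stated in the Methods:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient on substructure fingerprints given as sets of
    set-bit positions: bits shared by both structures divided by all bits
    set in either structure (0 = disjoint, 1 = identical fingerprints)."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count rule-of-five violations: MW < 500 g/mol, logP < 5,
    fewer than 5 H-bond donors, fewer than 10 H-bond acceptors."""
    return sum([mw >= 500, logp >= 5, h_donors >= 5, h_acceptors >= 10])

def is_drug_candidate(mw, logp, h_donors, h_acceptors):
    """A compound violating more than one rule is not a promising candidate."""
    return lipinski_violations(mw, logp, h_donors, h_acceptors) <= 1
```

With approximate curcumin-like properties (MW about 368 g/mol, logP about 3.2, 2 donors, 6 acceptors; illustrative values, not taken from the study's database), the filter reports zero violations.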
Next, IC50 values of these 7 compounds were determined with recombinant CK2, PKD or the purified CSN. Kinase assays were performed in the presence of different inhibitor concentrations. After incubation, the assays were analyzed by SDS-PAGE and autoradiography. Kinase activity (%) was estimated by densitometry. Fig. 2 demonstrates the results for piceatannol (curcumin-group) and BTB14431 (emodin-group). Values obtained by densitometry were plotted against inhibitor concentrations. The obtained curves were used to calculate IC50 values, which are summarized in Table 1 for all tested compounds and compared with the values for curcumin and emodin determined before [2]. The data show that curcumin-derived compounds seem to have a higher affinity for PKD, whereas inhibitors from the emodin-group were more efficient in suppressing CK2. In most cases, IC50 values obtained with the purified CSN were lower than those with recombinant kinases (Table 1).

Effect of curcumin- and emodin-derived inhibitors on the stability of c-Jun and p53

Because cell treatment with curcumin or emodin led to Ub- and proteasome-dependent degradation of c-Jun [2,12], we tested whether HeLa cells incubated with different concentrations of the new inhibitors possess decreased c-Jun levels. It can be seen in the Western blot analysis (Fig. 3A) that resveratrol and piceatannol (curcumin-group) as well as BTB14431 and JFD02836 (emodin-group) induced a significant reduction of endogenous c-Jun in a dose-dependent manner. It has been shown before that curcumin stabilizes endogenous p53 toward the Ub system in HeLa and MCF7 cells [6]. These cell lines possess wild type p53 [22-26]. We were interested to see whether our new inhibitors of CSN-associated kinases also increase the intracellular steady state p53 concentration. Mouse B8 fibroblasts, also expressing wild type p53 [27], were treated with the inhibitors. Subsequently cell lysates were tested by Western blot analysis using an anti-p53 antibody.
Data are shown in Fig. 3B. All tested compounds induced significant stabilization of endogenous p53 in B8 cells in a dose-dependent manner.

It has been shown that emodin, curcumin and resveratrol induce apoptosis through p53-dependent pathways [28,29]. Therefore we asked whether piceatannol, BTB14431, SEW04213 and JFD02836 also induce apoptosis. Several studies on compounds similar to curcumin have been published. The ability to induce apoptosis was measured in approximately 60 tumor cell lines, and effects were obtained in nearly all cells examined [30]. Most studies demonstrate the execution of apoptosis by the oxidation of tetrazolium [30], the measurement of DNA fragmentation as well as caspase activity, and annexin-V-FITC/propidium iodide (PI) double staining [31] — methods also applied in our investigations.

First, caspase activity was measured in B8 fibroblasts. Data shown in Fig. 4 demonstrate that all inhibitors tested caused an increase of caspase activity. The most pronounced increase was obtained with 50 µM curcumin. Piceatannol (50 µM) and BTB14431 (50 µM) elevated caspase activity approximately 5-fold, whereas significant effects with SEW04213 and JFD02836 were only obtained at compound concentrations of 200 µM. To corroborate these findings we investigated DNA fragmentation after treating HeLa cells with curcumin, emodin, resveratrol, piceatannol, BTB14431 and JFD02836 for 24 h. Except for JFD02836, apoptosis was triggered in all experiments (Fig. 5B), and the subG1 peak, which reflects the number of apoptotic cells, increased in a dose-dependent manner (Fig. 5A).

[Figure 4: Measurement of caspase activity after cell treatment with curcumin- and emodin-derived inhibitors.]

In addition, cell death was quantified by staining cells with annexin-V-FITC/PI [32]. After 24 h of incubation, a large number of apoptotic cells appeared in the experiments with the inhibitors curcumin, piceatannol and BTB14431 (Fig. 6B).
Emodin and resveratrol produced fewer apoptotic cells. In experiments with high concentrations (50 µM) of curcumin and piceatannol (Fig. 6A), increased amounts of necrotic cells were observed. The viability of the cells was checked by the MTT test, which detects the percentage of irreversibly damaged cells after treatment with the indicated inhibitors. On adding curcumin or BTB14431 at a concentration of 20 µM, the percentage of viable cells decreased to ~25%. A significant reduction of viable cells with emodin, resveratrol or piceatannol was only detected at high inhibitor concentrations (50 µM) (Fig. 7).

Virtual screening versus high-throughput screening

Recently we have shown that curcumin and emodin, inhibitors of CSN-associated kinases, induce Ub- and proteasome-dependent degradation of c-Jun in tumor cells [2,12]. Moreover, curcumin treatment causes stabilization of the tumor suppressor p53 towards the Ub system [6]. It has been demonstrated that the curcumin- and emodin-induced increase of p53 results in massive tumor cell death due to p53-dependent apoptosis [28,29]. Therefore, at least in tumor cells with wild type p53, elevation of cellular p53 levels could be of high therapeutic effect. Both events, the reduction of c-Jun and the increase of p53, are important for tumor therapy and can be accomplished by inhibition of CSN-associated kinases.

[Figure 5: Detection of DNA fragmentation in HeLa cells after 24 h incubation with kinase inhibitors.]

Therefore a new method was developed to identify compounds that are effective blockers of CSN-associated kinases and can potentially be used in tumor therapy. In our in silico screening we used curcumin and emodin as lead structures and compared them with approximately 10^6 compounds of our in-house database regarding their structural properties (similar property principle).
[Figure 6: Apoptosis or necrosis: annexin-V-FITC/PI double staining of HeLa cells after treatment with CSN kinase inhibitors.]

In contrast to high-throughput screening (HTS), our method allows us to exclude a large number of compounds before experimental testing. HTS is often used, and appropriate assay systems can evaluate more than 125,000 compounds a day. However, HTS is not without problems [33,34]. HTS experiments become more and more expensive, and the handling of the large amount of data is very time consuming. In addition, it is difficult to exclude false-positive hits [33]. By our virtual screening method we identified 35 compounds that seemed to be promising candidates for CSN kinase inhibition. Out of the 35 structures found by in silico screening, 7 compounds had an inhibitory effect on recombinant CK2 and PKD and on the kinase activity of the purified CSN complex. Thus, the hit rate of our virtual screening was 20%. In contrast, the hit rate of HTS is usually approximately 2% [35]. We used additional methods such as the Lipinski rule-of-five and the toxicological investigations for ranking the found substances. These methods, however, did not serve to exclude compounds. Summing up, the data demonstrate that our in silico screening is a reliable and efficient method to find new active molecules.

The curcumin- and emodin-derived inhibitors of CSN-associated kinases

The present study includes only compounds that were identified using curcumin or emodin as lead structures. Our screening revealed 3 compounds of the curcumin-group and 4 compounds of the emodin-group that showed inhibition of the kinases in vitro and in cell experiments. Interestingly, some of the identified compounds are more effective inhibitors than the lead structure. Therefore, in silico screening is a sensitive method for the identification of molecules with a specific biological function. The two groups can be clearly divided by structural features.
The different structures are most likely responsible for their different effects. Whereas members of the curcumin-group possess much better IC50 values with PKD, the members of the emodin-group are better inhibitors of CK2. However, all compounds inhibit both CK2 and PKD with relatively little specificity.

[Figure 7: MTT test for detecting the cell viability after treatment with kinase inhibitors. The cells were treated with curcumin, emodin, resveratrol, piceatannol or BTB14431 for 24 h at different concentrations (20 µM and 50 µM). As a control the cells were cultured with 0.1% DMSO. The bar chart displays the amount of viable cells after treatment.]

It has been shown before that emodin is a competitive CK2 inhibitor [36]. As demonstrated by the crystal structure, it binds into the ATP-binding pocket of the kinase [37]. Simulation on the basis of the emodin data revealed that curcumin also fits into the same ATP-binding pocket of CK2 (data not shown). In addition, the effects of curcumin are reversible [12], as would be expected of a competitive inhibitor. Therefore, we conclude that all identified compounds compete with ATP for the ATP-binding site of the kinases. Since the ATP-binding sites of CK2 and PKD are slightly different, members of the curcumin-group have a different preference as compared to the emodin-group. On the other hand, ATP-binding sites are highly conserved among kinases. Therefore the identified kinase inhibitors are rather unspecific.

Inhibitors of CSN-associated kinases are potential drugs for tumor therapy
Interestingly, many identified inhibitors of CSN-associated kinases are compounds of natural products such as curcumin, resveratrol, piceatannol, emodin and honokiol, which have been shown to inhibit angiogenesis and the development of malignancies [12,15,38,39]. Here we demonstrate possible anti-tumor mechanisms of known and new substances.
The inhibitors of CSN-associated kinases exhibit two important effects. As shown here, they reduce c-Jun levels in tumor cells. In addition, it has been demonstrated that inhibitors of CSN-associated kinases cause an increase of intracellular p53, which can be explained by the stabilization of the tumor suppressor towards the Ub/26S proteasome system [6]. However, because our data were obtained by estimating steady-state levels of p53, altered expression might also contribute to the increased protein concentration in the cells. Nonetheless, elevated steady-state levels of p53 are accompanied by massive cell death caused by apoptosis, as demonstrated here with B8 fibroblasts and HeLa cells. In addition, the viability of the cells decreased after treatment with the selected substances. Based on our data we cannot exclude that the tested compounds exert their pro-apoptotic effects independently of the CSN and p53. For example, in addition to CSN-associated kinases, curcumin inhibits NF-κB activation associated with the induction of apoptosis [40]. Moreover, although the exact role of c-Jun in apoptosis is not known, low c-Jun steady-state levels as measured in our experiments might also contribute to the induction of the apoptotic program (for rev. see [11]). In any case, the identified inhibitors exert their effects by stimulating apoptosis in tumor cells, which is most beneficial for tumor therapy. Based on its anti-tumor potential, the CSN kinase inhibitor curcumin is already in phase I clinical trials [14], and perhaps inhibitors identified here will follow.

Competing interests
For compound BTB14431 a patent application is pending for assignee Charité.
Changes in Coronary Perfusion after Occlusion of Coronary Arteries in Kawasaki Disease

Purpose: Myocardial infarction in children with total occlusion of a coronary artery after Kawasaki disease is rare due to multiple collateral vessels. We aimed to investigate the changes in coronary perfusion associated with coronary artery occlusion after Kawasaki disease.
Materials and Methods: Eleven patients with coronary artery occlusion after Kawasaki disease were investigated. Serial coronary angiographies after total occlusion of a coronary artery were reviewed and the changes were described in all patients, with additional information collected.
Results: The median age at the occlusion was 5.9 years old. The interval to occlusion was 6.2±6.9 years. Four left anterior descending coronary artery total occlusions and 10 right coronary artery total occlusions were detected. After immediate coronary artery bypass grafting for left anterior descending coronary artery total occlusion, right coronary artery total occlusion occurred in all except one patient, and the intervals thereof were 1 year, 1.8 years, and 4 years. Collaterals to the left coronary artery regressed after recanalization, while new collaterals to the right coronary artery developed. In three patients, collaterals to the right coronary artery decreased without recanalization and without clinical signs.
Conclusion: The right coronary artery should be followed up carefully because of possible occlusion of new onset or changes in collaterals.

INTRODUCTION
Approximately 5% of patients with Kawasaki disease (KD) have aneurysms of the coronary arteries despite receiving intravenous gamma-globulin therapy. 1 Coronary artery aneurysms tend to regress but sometimes lead to stenosis or total occlusion, especially in patients with giant coronary aneurysm. 2 Patients with severe coronary artery stenosis or obstruction due to KD frequently develop multiple collateral vessels.
3 For this potential reason, patients with coronary stenosis or obstruction remain asymptomatic. 4 Nevertheless, patients with clinical symptoms of acute myocardial infarction and revascularization require relief from symptoms. 5,6 It has been proposed that collateral vessels develop more easily in the immature heart with ischemia, and accordingly, collateral vessels in coronary stenosis are found more often in children than adults. 7 The development of collateral vessels associated with total occlusion of coronary artery is helpful to prevent myocardial infarction in infants and young children, and this may contribute to a better prognosis. 8 Therefore, the status of collaterals associated with total occlusion of coronary artery in KD is important when deciding on a particular treatment. While there are only a few reports on changes in myocardial perfusion after total occlusion of a coronary artery in KD patients, this study aimed to assess changes in patterns of myocardial perfusion after occlusion of coronary arteries in KD.

MATERIALS AND METHODS
We reviewed all coronary angiographies (CAG) performed in patients diagnosed with KD at Samsung Medical Center from April 1994 to December 2010; this cohort comprised 1974 patients with KD who were consecutively admitted to our hospital. The subjects consisted of patients who exhibited total occlusion of one or more coronary arteries confirmed by CAG; patients who did not undergo CAG after diagnosis of total occlusion were excluded. During the follow-up period, serial coronary angiography was performed in 67 patients. Total occlusion of a coronary artery with collaterals was found in 12 patients. Eleven patients who underwent CAG more than twice were enrolled in this study. For these enrolled patients, medical records and 38 serial CAGs were reviewed in order to investigate the changes in coronary perfusion, including collaterals, after coronary artery occlusion. Occlusion time was defined as the age of the first detection of total occlusion of one or more coronary arteries on CAG. The last follow-up age was the age at the last CAG. We determined serial changes in distal perfusion according to flow grading of coronary vessels and collateral blood flow according to flow grading of collateral vessels. All variables are reported as ranges (median) and mean±standard deviation. This study protocol was reviewed and approved by our Institutional Review Board, and informed consent for this research project was waived.

For every patient, two-dimensional (2D) echocardiography was performed before every CAG. From the echocardiography, global ventricular function and regional wall motion were evaluated. Left ventricular shortening fraction from 28% to 38% was regarded as normal. Right ventricular function and regional wall motion abnormality of the left ventricle were evaluated qualitatively by the investigator. 201Tl single photon emission computed tomography (SPECT) was checked at the time of the first diagnosis of total occlusion of a coronary artery on CAG. The extent and the area of infarct were determined on the basis of the normal values of circumferential profiles using conventional methods. A 17-segment analysis with 5-point scoring was used for SPECT image interpretation. Qualitative visual analysis was performed by a specialist in cardiac nuclear medicine. The first CAG was performed after the confirmation of the coronary abnormality proven by echocardiography, usually at 6-9 months after KD. Follow-up CAG was performed at an interval of 1 to 3 years according to lesion severity, even in those without any signs of myocardial ischemia. Selective CAG was performed for determining the location of occlusion and the degree of collateral vessels. In addition to occlusion, giant aneurysm greater than 8 mm in diameter, as well as stenosis (75-90%) or severe stenosis (>90%), was determined. We considered giant aneurysm to be at a very high risk of obstruction even without stenosis.

Flow grading of coronary vessels
The flow of coronary vessels was determined according to Thrombolysis in Myocardial Infarction (TIMI) grading. TIMI flow grade was assessed at the angiographic core laboratory as defined previously. 9,10

Flow grading of collateral vessels
The classification proposed and validated by Cohen and Rentrop 11 was used for assessment of collateral blood flow.

RESULTS
(Table 1) Among the eleven patients included for analysis, seven (63.6%) were male. The age at onset of KD ranged from 3 months to 7 years (median, 2.2 years old). The age at the occlusion ranged from 1.5 years old to 25.2 years old (median, 5.9 years old), and the follow-up duration till last CAG was 4.5±2.5 years. The interval from the onset of KD to the occlusion time was 6.2±6.9 years (1.5-20.8 years). Two patients (18.2%) complained of chest pain, and significant ST-T change on electrocardiography was found in two patients (18.2%) when the occlusion was detected. A positive result on SPECT that was compatible with CAG was recorded in seven patients (63.6%). In all four patients with left anterior descending coronary artery (LAD) total occlusion (TIMI gr. 0), SPECTs showed a compatible ischemic sign, whereas only three of six patients with isolated right coronary artery (RCA) total occlusion (TIMI gr. 0) had positive SPECT results. After recanalization, myocardial ischemia was recovered by SPECT in all patients. In three cases of consequent RCA total occlusion after LAD recanalization, SPECT was performed in only one patient and showed no significant change. Echocardiographically, only two patients showed abnormal findings of decreased left ventricular shortening fractions (13% and 20%) at the occlusion time. No significant changes in serial echocardiography for two patients were found. All patients had taken anti-platelet and/or anticoagulant medicine after diagnosis. No death was recorded during follow-up.

(Table 2) TIMI grade 0 left coronary artery (LCA) occlusion was found in five patients (45.5%): LAD in four patients and left circumflex coronary artery (LCX) in one patient; meanwhile, TIMI grade 0 RCA occlusion was found in ten patients (90.9%). Coronary artery bypass graft (CABG) was done for all patients with TIMI grade 0 LAD occlusion. Isolated RCA occlusion (TIMI grade 0) was found in six patients, and TIMI grade 0 RCA occlusion in one of two vessels was detected in two patients. TIMI grade 0 occlusion in two coronary arteries was found concurrently in two patients (LAD/RCA and LCX/RCA). Detailed descriptions of the patients with TIMI grade 0 LCA occlusion are listed below. In one patient the RCA remained patent after CABG, and the stenosis progressed from TIMI grade 2 to TIMI grade 1 with each CAG. Accordingly, the next CAG was scheduled for close monitoring of the RCA. A new TIMI grade 0 RCA total occlusion presented in one patient initially treated with PCI for TIMI grade 1 LAD severe stenosis. This occurred four years after the initial PCI, and LAD total occlusion was also found on the same CAG at the age of 18.2 years. Another patient also underwent PCI after LCX and RCA total occlusion with TIMI grade 1 LAD stenosis were found on the first CAG. Percutaneous coronary intervention was performed three times for patency of the LAD and LCX. Thereafter, the patient was stable with patent LAD and LCX, although TIMI grade 0 RCA occlusion persisted.

The subjects' CAG at the time TIMI grade 0 occlusion was detected showed various degrees of collaterals (Rentrop grade 1-3), and the collaterals came from undefined other coronary arteries. Among cases of isolated RCA total occlusion, most of the collaterals extended from the LCA and the RCA proximal to the occlusion. In patient 9, CAG of the RCA revealed TIMI grade 0 occlusion of the RCA origin; there were no collaterals from the proximal RCA. In the other patients, some bridging vessels were found from the proximal RCA. Among cases of LAD total occlusion, collaterals arose from the RCA and LCX. CAG of the LCA in patient 4 showed LAD total occlusion and grade 3 collaterals from the RCA, LCX, and proximal LCA, because the proximal LAD was preserved. We could not quantify the contribution of each patent coronary artery to collaterals.

Changes after coronary revascularization
Coronary intervention was performed in 6 patients: CABG in 4 patients and percutaneous coronary intervention (PCI) in 2 patients with TIMI grade 1 lesions. The sites of intervention included the LAD and LCX. No intervention was performed for the RCA. CABG was performed at the LAD in three TIMI grade 0 obstructions and one TIMI grade 1 stenosis. PCI was done at the LAD and LCX in cases of TIMI grade 1 stenosis and TIMI grade 0 obstruction. These showed good patency on follow-up CAG after CABG, but repeated interventions were necessary after PCI in two patients: CABG in one patient and a second PCI in the other.

In 4 patients, TIMI grade 0 RCA occlusion occurred after coronary intervention: three after CABG and one after PCI. The intervals from the time of CABG to RCA total occlusion were 6 months, 1 year, and 1.8 years. In cases of RCA occlusion after CABG to the LAD in three patients, a previous CAG showed that all RCAs had aneurysm or TIMI grade 1 stenosis (Fig. 1). Therefore, TIMI grade 0 RCA occlusion occurred in all cases that underwent CABG to the LAD except for one patient. This one patient underwent CABG to the LAD due to LAD total occlusion, whereas the LCX and RCA showed simultaneous aneurysm. Fortunately, RCA occlusion had not occurred till 4.5 years after CABG in this patient.

Collaterals to the LAD in patients with total occlusion disappeared after CABG to the LAD on following CAG. However, interestingly, in a case of RCA total occlusion after LAD recanalization, prominent collaterals could be seen from the re-vascularized distal LAD (Fig. 1). There were some changes in the degree of collaterals to the RCA in three patients. The collaterals to the RCA occlusion after LAD recanalization decreased in two patients: Rentrop grade 3 decreased to Rentrop grade 2 on CAG 2.5 years later in one patient and Rentrop grade 2 to Rentrop grade 1 on CAG 3 years later in the other patient. Additionally, spontaneous decrease in collateral perfusion to the RCA on CAG after 1 year was found in one patient (from Rentrop gr. 2 to Rentrop gr. 1) (Fig. 2). Nevertheless, in spite of small changes in collateral perfusion, we did not observe any changes in cardiac manifestations.

DISCUSSION
Coronary artery stenosis in KD occurs frequently in patients with giant aneurysm and steadily progresses with time. 2 However, severely stenotic or occlusive coronary arteries as a sequela of KD develop collaterals in the chronic phase of KD, most of which develop sufficiently to prevent myocardial infarction. 7 Therein, recanalization or collateral formation improves the perfusion to the distal myocardium of the occluded native coronary artery. 8,12 Contrary to this, Tatara, et al. 13 reported that the development of collateral vessels cannot protect against myocardial ischemia and that the degree of collaterals has no effect on myocardial imaging. From our observations, only two patients out of eleven showed clinical symptoms associated with myocardial ischemia. This could be attributable to collaterals that protected myocardial viability, but the results from the SPECT showed perfusion defects in seven patients. Patient 6, who presented with LCX and RCA total occlusion, complained of chest pain, but SPECT showed normal perfusion. Some discrepant results were thus found in our clinical manifestations. It has been mentioned that some of the perfusion defects on thallium-201 SPECT could be due to extensive fibrosis as a sequel of acute myocarditis of KD. 7 However, it has been reported that reversible perfusion defects can be seen in asymptomatic patients with KD. 14 Kato, et al. 15 reported that myocardial infarctions in KD were asymptomatic in 37% of patients. The reasons for asymptomatic myocardial ischemia in KD are not clear, but collateral circulation could ameliorate the degree of ischemia. For this reason, we performed regular CAG with SPECT, even in asymptomatic patients. A regular myocardial stress test and CAG should be done in patients with coronary arterial complications associated with KD. Fukuda, et al. 12 proposed that unnecessary CAG could be postponed until the time at which ischemic findings change in the dipyridamole stress SPECT. With myocardial ischemia, there are no evidence-based guidelines for follow-up and evaluation of patients with coronary abnormalities after KD. Therefore, more well-designed studies are needed to determine the clinical implications of changes in collateral vessels on myocardial ischemia.

Of interest, we observed many RCA occlusions that occurred after LAD recanalization. Only one patient showed a patent RCA after CABG to the LAD. The last CAG, performed at 4.5 years after CABG, also showed that RCA stenosis was becoming more severe. As we were not sure whether this RCA would keep its lifelong patency, it was decided to follow this patient up closely. The reason for RCA occlusion after LCA recanalization is unknown, but we propose that collaterals from recovered distal LCA flow could have provoked it. All cases of RCA occlusion occurred in less than 2 years after CABG. Therefore, we recommend regular follow-up CAGs until 2 years after CABG. Also, natural progression of RCA occlusion from severe stenosis could not be excluded because all RCAs had aneurysms or stenosis on a previous CAG. From our findings, changes in collaterals after LCA recanalization were natural, and we should expect significant changes in other coronary arteries after treatment. We could not explain the reasons for the decrease in collaterals to the RCA in three patients. Nevertheless, there could be personal bias when the grade was analyzed. Coronary arterial remodeling in KD has not yet been clearly described, 16 and to the best of our knowledge, very few reports on the assessment of the changes in collaterals that develop after occlusion of coronary arteries in KD have been published. 12 The development of collateral vessels associated with total occlusion of a coronary artery in infants and young children may contribute to a better prognosis. 8 Some investigators have reported that myocardial viability is highly associated with the distribution of collateral blood flow within the occluded infarct bed. 17 And obstructive lesions of the right coronary artery have been known to have a better prognosis than obstructive lesions of the left anterior descending coronary artery. 8 A better understanding of the course of coronary arterial remodeling in KD may lead to more innovative and effective treatment. In general, the guidelines for catheter intervention by the research committee of the Japanese Ministry of Health, Labor and Welfare have been accepted for coronary intervention after KD. 18 There are no clear indications for RCA intervention even in total occlusion. From our study, all patients with RCA occlusion had collaterals, and the patients did not have any symptoms, even those patients with decreased collaterals. Although RCA occlusion might be safe with respect to myocardial ischemia, we found some changes in collaterals on follow-up CAG. Therefore, careful evaluation is important not only for the LAD and LCX, but also for the RCA. Especially when considering CABG to the LCA, we should be careful of the possibility of RCA total occlusion after intervention. Close follow-up is recommended because of possible RCA occlusion of new onset. Further study regarding the development of myocardial perfusion in children is required to elucidate the process of change from localized stenosis to occlusion and the changes of collaterals.

There are several limitations in this study. First, this is a retrospective and descriptive study with a small number of patients, so we cannot expect our findings to be generalized to similar post-KD patients. Second, the follow-up duration was not long enough for complete evaluation. Third, the quantification of the collaterals was not specific for the coronary disease in KD. Finally, the grading system described by Rentrop lacks objectivity and could be biased.

In conclusion, even though clinical symptoms are subtle in coronary occlusion after KD, changes in collaterals and obstruction should be evaluated regularly for the prevention of further myocardial infarction. Regular evaluation of the RCA might be needed as well, especially after CABG due to LAD occlusion.
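The reporting convention stated in the Methods, "ranges (median) and mean±standard deviation", can be sketched as below. The interval values are hypothetical, not the study data:

```python
import statistics

def summarize(values):
    """Range, median, and mean±SD in the paper's reporting style."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return {
        "range": (min(values), max(values)),
        "median": statistics.median(values),
        "mean_sd": f"{mean:.1f}±{sd:.1f}",
    }

# Hypothetical KD-to-occlusion intervals in years (illustrative only).
intervals_years = [1.5, 2.0, 4.0, 6.0, 20.8]
print(summarize(intervals_years))
```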
Heterotic string on the CHL orbifold of K3

We study N = 2 compactifications of heterotic string theory on the CHL orbifold (K3 × T^2)/Z_N with N = 2, 3, 5, 7. Z_N acts as an automorphism on K3 together with a shift of 1/N along one of the circles of T^2. These compactifications generalize the example of the heterotic string on K3 × T^2 studied in the context of dualities in N = 2 string theories. We evaluate the new supersymmetric index for these theories and show that their expansion can be written in terms of the McKay-Thompson series associated with the Z_N automorphism embedded in the Mathieu group M24.
We then evaluate the difference in one-loop threshold corrections to the non-Abelian gauge couplings with Wilson lines and show that their moduli dependence is captured by Siegel modular forms related to dyon partition functions of N = 4 string theories.

JHEP02(2016)056

1 Introduction
N = 2 compactifications of heterotic string theory have proved to be a good testing ground to explore duality symmetries of string theory. One of the main motivations to explore these compactifications is that these vacua have a dual realization in terms of type II compactifications on Calabi-Yau manifolds. Identifying dual pairs on the heterotic and type II side enables highly non-trivial tests of dualities with N = 2 symmetry [1]. The simplest example of such theories is the heterotic string theory compactified on K3 × T^2. This theory was first constructed in d = 6 in [2,3]. An important observable for the test of duality in this theory is the dependence of the one-loop corrections to gauge and gravitational coupling constants on the vector multiplet moduli of the theory. The moduli dependence of these threshold corrections is encoded in automorphic forms of the heterotic duality group [4-9]. Our goal in this paper is to first consider more general compactifications of the heterotic string on (K3 × T^2)/Z_N, with N = 2, 3, 5, 7. Z_N acts by a 1/N shift on one of the circles of T^2 together with an action on the internal CFT describing the heterotic string theory on K3. This freely acting orbifold of K3 × T^2 was first studied on the type II side as duals of CHL compactifications [10,11] of the heterotic string [12-14]. We will call this orbifold the CHL orbifold of K3.
These compactifications of the heterotic string on the CHL orbifold of K3 preserve N = 2 supersymmetry and the number of vector multiplets, but reduce the number of charged and uncharged hypermultiplets in the theory. They also affect the vector multiplet moduli dependence of the one-loop corrections. The two main aspects of these compactifications we study in this paper are the new supersymmetric index and the gauge threshold corrections. We summarize the results obtained in the next few paragraphs.

The basic quantity from which one-loop thresholds of the heterotic string on K3 × T^2 are obtained is the new supersymmetric index [7,9,15-18], which is defined as

Z_new(q, q̄) = (1/η^2(τ)) Tr_R( F e^{iπF} q^{L_0 − c/24} q̄^{L̄_0 − c̄/24} ).   (1.1)

The trace in the above expression is taken over the Ramond sector in the internal CFT with central charges (c, c̄) = (22, 9). Here F is the world-sheet fermion number of the right-moving N = 2 supersymmetric internal CFT. For the standard embedding of the spin connection into an SU(2) of one of the E8's of the heterotic string, it was shown [7,9] that this index decomposes as

Z_new(q, q̄) = (8/η^12) Γ_{2,2}(q, q̄) E_4(q) × E_6(q)/η^12,   (1.2)
            = (8/η^12) Γ_{2,2}(q, q̄) E_4(q) × [...].   (1.3)

Here E_4, E_6 refer to Eisenstein series of weight 4 and 6 respectively, Z_K3(q, z) is the elliptic genus of the N = 4 conformal field theory of K3, and

Γ_{10,2}/η^10 = (1/η^10) Γ_{2,2}(q, q̄) E_4(q)   (1.4)

is the partition function for the second E8 lattice along with the lattice from T^2. In [19], it was shown that due to the factorization of the new supersymmetric index as given in the second equation, (1.3), the BPS states of the heterotic compactifications on K3 × T^2 have a decomposition in terms of representations of the Mathieu group M24. We will evaluate the new supersymmetric index for heterotic compactifications of the CHL orbifolds of K3 and show that the new supersymmetric index is given by the same form as in (1.3), but now with Z_K3(q, z) replaced by the twisted elliptic genus of the CHL orbifolds of K3.
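The Eisenstein series E_4 and E_6 entering the new supersymmetric index have standard q-expansions in terms of divisor sums, E_4(q) = 1 + 240 Σ σ_3(n) q^n and E_6(q) = 1 − 504 Σ σ_5(n) q^n. A small sketch computing their low-order coefficients (this standard normalization, not a formula from the paper itself):

```python
def sigma(n: int, k: int) -> int:
    """Sum of k-th powers of the divisors of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eisenstein_coeffs(weight: int, nmax: int):
    """Coefficients of q^0 .. q^nmax of E4 (weight 4) or E6 (weight 6)."""
    const = {4: 240, 6: -504}[weight]
    return [1] + [const * sigma(n, weight - 1) for n in range(1, nmax + 1)]

print(eisenstein_coeffs(4, 2))  # [1, 240, 2160]
print(eisenstein_coeffs(6, 2))  # [1, -504, -16632]
```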
We will evaluate the new supersymmetric index explicitly for the N = 2 CHL orbifold (K3 × T^2)/Z_2 and then generalize this for the other values of N using results of [20]. We then generalize the observation of [19] and show that the BPS states for heterotic compactifications of the CHL orbifolds of K3 have a decomposition in terms of representations of the Mathieu group M24.

Threshold corrections are important observables in string compactifications, and there has been a recent revival in studying their properties, mainly due to the work of [21-24]. Let us examine the threshold corrections evaluated in K3 × T^2 compactifications, which we will generalize in this work to CHL orbifolds of K3. For concreteness, consider the standard embedding in which the spin connection of K3 is equated to the gauge connection. Starting from the E8 × E8 theory, compactifying on K3 × T^2 at generic points of the moduli space of T^2 results in the gauge group E7 × E8 × U(1)^4. Let the E8 which is broken to E7 be referred to as G′ and the second E8 be called G. Let Δ_{G′}(T, U, V) and Δ_G(T, U, V) be the corresponding one-loop corrections to the gauge couplings. T, U refer to the Kähler and complex structure moduli of the torus T^2, and V is the Wilson line modulus on T^2. Then it was shown [25] that the difference in the thresholds is given by eq. (1.5) in terms of the Siegel cusp form Φ_10, where

Ω = ( U  V
      V  T ),   (1.6)

and Φ_10(T, U, V) is the unique cusp form of weight 10 transforming under the duality group Sp(2, Z) ≃ SO(3, 2, Z). In [25], it was also shown that this difference in thresholds is independent of the way K3 is realized and also holds for non-standard embeddings.
In this paper, we evaluate the difference for heterotic compactifications on CHL orbifolds of K3 and show that the difference in the threshold corrections for the two gauge groups G, G′ is given by the analogous expression (1.7), where Φ_k is a weight k modular form transforming under subgroups of Sp(2, Z) with

k = 24/(N + 1) − 2,   (1.8)

where N = 2, 3, 5, 7 labels the various CHL orbifolds. This generalizes the observation in [25]. Thus the gauge threshold corrections are automorphic forms under subgroups of the duality group of the parent un-orbifolded theory.

The cusp form Φ_10 also makes its appearance in the partition function of dyons in the heterotic string on T^6, a theory which has N = 4 supersymmetry [26-29]. This theory is related to type II on K3 × T^2 by string-string duality. In [20,31,32], it was shown that the partition functions of dyons for the CHL orbifolds of the heterotic string preserving N = 4 supersymmetry are captured by Siegel modular forms of weight k transforming under subgroups of Sp(2, Z), with k given by (1.8) for the various CHL orbifolds of the heterotic theory. These theories are related to type II on the CHL orbifold of K3, which has N = 4 supersymmetry. We show that the modular forms Φ_k obtained for the difference of the thresholds in (1.7) are related by an Sp(2, Z) transformation to the dyon partition function in CHL orbifolds. The relationship between the difference in the thresholds of the non-abelian gauge groups of the N = 2 heterotic compactification and the dyon partition functions in the N = 4 heterotic theory is certainly interesting and worth exploring further. We will comment on this relation in section 6.

This paper is organized as follows. In section 2, we discuss the spectrum of heterotic compactifications on the CHL orbifold (K3 × T^2)/Z_N and show that the orbifold preserves the number of vectors but reduces the number of hypers. In section 3, we evaluate the new supersymmetric index for compactifications on the CHL orbifold of K3.
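The weight formula (1.8), k = 24/(N+1) − 2, can be checked directly; including the unorbifolded case N = 1 recovers the weight-10 cusp form Φ_10 of Sp(2, Z):

```python
def chl_weight(N: int) -> int:
    """Weight of the Siegel modular form Phi_k for the Z_N CHL orbifold."""
    k, rem = divmod(24, N + 1)
    assert rem == 0, "N + 1 must divide 24"
    return k - 2

for N in (1, 2, 3, 5, 7):
    print(N, chl_weight(N))
# N = 1, 2, 3, 5, 7  ->  k = 10, 6, 4, 2, 1
```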
We discuss the case of N = 2 in detail, for which we realize K3 as a Z_2 orbifold, and then generalize the results to the other values of N. In section 4, we show that the new supersymmetric index for these orbifolds contains representations of the Mathieu group M_24. In section 5, we evaluate the difference in the gauge threshold corrections between the groups G and G' and show that it is captured by a modular form Φ_k transforming under subgroups of Sp(2, Z). Section 6 contains our conclusions and discussions. Appendix A contains various identities involving modular forms used to obtain our results. Appendix B contains details regarding lattice sums, and finally appendix C has the details of the calculations for the Z_2 CHL orbifold of K3.

Spectrum of heterotic on CHL orbifolds of K3

In this section we derive the spectrum of (K3 × T^2)/Z_N compactifications. Before we go ahead, let us recall how these manifolds are constructed. The non-zero Hodge numbers of K3 are given by

h^{0,0} = h^{2,2} = 1,  h^{2,0} = h^{0,2} = 1,  h^{1,1} = 20.

The Hodge numbers of T^2 are given by

h^{0,0} = h^{1,0} = h^{0,1} = h^{1,1} = 1.

To ensure N = 2 supersymmetry we need to preserve the SU(2) holonomy, which implies that the Z_N must act freely [12]. The orbifold action must also preserve the holomorphic 2-form on K3 and the holomorphic 1-form on T^2. It is known that the Z_N symmetry action on K3 always involves fixed points on K3 [33]; therefore it should act freely on T^2. This action is just a shift by 1/N of a period along one of the circles of T^2. Since the orbifold action involves both K3 and T^2, compactifications on the CHL orbifold of K3 cannot be thought of as obtained from an N = 1 vacuum in d = 6. The (0,0) and (2,2) forms are just the constant scalar form and the volume form on K3, which are preserved under the action of Z_N. Also, the 1/N shift on the circle does not project out any of the forms on T^2. Thus the orbifold acts only on the (1,1) forms of K3.
The number of such forms on K3 which are invariant is 2k, with [20]

h^{1,1}_{invariant} = 2k,  k = 24/(N + 1) − 2.

Among the (1,1) forms which are not projected out is the Kähler form. The Kähler form and the (0,2) and (2,0) forms are self-dual, while the remaining 2k − 1 forms are anti-self-dual. Thus the Euler number of the orbifold along the K3 directions reduces to 2k + 4. This information about the CHL orbifold (K3 × T^2)/Z_N is sufficient to obtain the spectrum of massless modes in d = 4. We generalize the method developed in [3] for K3 compactifications of the heterotic string. We first discuss the states arising from compactifying the d = 10 graviton multiplet and then examine the spectrum from the d = 10 Yang-Mills multiplet.

Universal sector

We call the spectrum from the d = 10 graviton multiplet the universal sector. This multiplet consists of the fields

(G_{MN}, B_{MN}, φ, Ψ^{(−)}_M, Ψ^{(+)}).

Here G_{MN} is the graviton, Ψ^{(−)}_M is a negative-chirality Majorana-Weyl gravitino, B_{MN} is the anti-symmetric tensor, φ is the dilaton, and Ψ^{(+)} is a positive-chirality Majorana-Weyl spinor. On dimensional reduction these fields should organize themselves into an N = 2 graviton multiplet, vector multiplets and hypermultiplets in d = 4. The field content of these multiplets is as follows. The N = 2 graviton multiplet in d = 4 consists of a graviton g_{μν}, two Majorana gravitinos ψ^i_μ, i = 1, 2, and the graviphoton a_μ. The vector multiplet consists of a gauge field A_μ, two Majorana spinors ψ'^i and two real scalars φ^i. The hypermultiplet consists of two Majorana spinors χ^i and 4 real scalars ϕ^a with a = 1, ..., 4. We label the 4 non-compact directions by μ, ν ∈ {0, 1, 2, 3}, the directions of the T^2 by r, s ∈ {4, 5} and the directions of K3 by m, n ∈ {6, 7, 8, 9}. Let us first examine the bosonic fields under dimensional reduction. The d = 10 graviton reduces as G_{μν} = g_{μν}(x) ⊗ 1 ⊗ 1, where 1 refers to the constant scalar form on (K3 × T^2)/Z_N.
There are 2 vectors from G_{μr} = A_μ(x) ⊗ f_r ⊗ 1, where f_r refers to the 2 harmonic 1-forms on T^2, which are unprojected by the orbifold. Similarly there are 2 vectors from B_{μr} = A_μ(x) ⊗ f_r ⊗ 1. These 4 vectors arrange themselves into the single graviton multiplet and 3 vector multiplets. Let us now count the total number of scalars; this will determine the number of hypers. There are in total 4 scalars from the components G_{44}, G_{55}, G_{45}, B_{45} of the metric and the anti-symmetric tensor in 10 dimensions. Now consider the scalars arising from the metric and the anti-symmetric tensor with indices along the K3 directions. The anti-symmetric tensor reduces as B_{mn} = φ(x) ⊗ 1 ⊗ f_{mn}, where the f_{mn} are the harmonic 2-forms on the CHL orbifold of K3. This results in 2k + 2 scalars. To obtain massless scalars from the metric we require solutions of the Lichnerowicz equation on the CHL orbifold of K3. These are constructed as follows. Let us use a, b ∈ {1, 2} to refer to the two complex directions along the CHL orbifold of K3. Then the zero modes of the metric are constructed from the harmonic (1,1) forms as [3]

h_{a b̄} = f'_{a b̄},

together with the modes h_{ab} and h_{ā b̄} obtained by contracting f' with the holomorphic 2-form. Here the f'_{a b̄} refer to the 2k harmonic (1,1) forms on the CHL orbifold of K3. Note that h_{ab} and h_{ā b̄} vanish when f'_{a b̄} is the Kähler form. Therefore there are 3 × 2k − 2 solutions of the Lichnerowicz equation on the CHL orbifold of K3. This leads to 6k − 2 scalars from the dimensional reduction of the metric with indices along the CHL orbifold of K3. The 10-dimensional dilaton reduces as φ = φ(x) ⊗ 1 ⊗ 1 to give rise to a single scalar. Finally, the anti-symmetric tensor reduces as B_{μν} = b_{μν}(x) ⊗ 1 ⊗ 1, but an anti-symmetric tensor in d = 4 is equivalent to a scalar by Hodge duality. Adding all the scalars we get 8k + 6 scalars. Among these, 6 scalars are needed to complete the 3 vector multiplets. The rest of the scalars arrange themselves into 2k hypermultiplets. To summarize, the dimensional reduction of the graviton multiplet in d = 10 yields the N = 2 graviton multiplet, 3 vector multiplets and 2k hypermultiplets (eq. (2.7)).
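As a cross-check of the counting above (this sketch is ours, not part of the paper), the weight formula (1.8) and the scalar count of the universal sector can be tabulated; the component counts 4, 2k+2, 6k−2, 1, 1 are taken from the text, and N = 1 stands for the unorbifolded K3.

```python
# k = 24/(N+1) - 2 (eq. (1.8)); N = 1 stands for the unorbifolded K3.
def chl_weight(N):
    k, rem = divmod(24, N + 1)
    assert rem == 0, "N + 1 must divide 24"
    return k - 2

# Universal-sector scalars: 4 (T^2 moduli) + (2k+2) (B_mn) + (6k-2) (metric
# zero modes on K3) + 1 (dilaton) + 1 (dualized B_{mu nu}) = 8k + 6.
def universal_scalars(k):
    return 4 + (2 * k + 2) + (6 * k - 2) + 1 + 1

def universal_hypers(k):
    in_hypers = universal_scalars(k) - 6  # 6 scalars sit in the 3 vector multiplets
    assert in_hypers % 4 == 0             # 4 real scalars per hypermultiplet
    return in_hypers // 4

for N in (1, 2, 3, 5, 7):
    k = chl_weight(N)
    assert universal_scalars(k) == 8 * k + 6
    assert universal_hypers(k) == 2 * k
    print(f"N={N}: k={k}, h11=2k={2*k}, chi=2k+4={2*k+4}, "
          f"universal hypers={universal_hypers(k)}")
```

For N = 1 this reproduces the familiar K3 data: k = 10, h^{1,1} = 20, Euler number 24 and 20 neutral hypermultiplets from the universal sector.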
To complete the analysis, let us verify that the fermions also arrange themselves into these multiplets. Before we go ahead we need to recall some facts from index theory. There is a one-to-one correspondence between solutions of the massless Dirac equation on a 4-dimensional complex manifold and the harmonic (0, p) forms [34, 35]. The (0,0) form and the (0,2) form on the CHL orbifold of K3 result in two real Dirac zero modes which have negative internal chirality [3]. Let us call these spinors Ω and ω. Consider the gravitino in d = 10; it reduces to Rarita-Schwinger fields in d = 4 through decompositions of the form

Ψ_μ = ψ_μ(x) ⊗ ξ^{(±)} ⊗ {Ω, ω},   (2.8)

where the ξ^{(±)} are the constant spinors on T^2 and the superscripts refer to the chirality. This gives 4 real gravitinos, which organize themselves into 2 Majorana Rarita-Schwinger fields ψ^i_μ in d = 4. These form the superpartners in the graviton multiplet R(4). Now consider again the gravitino in 10 dimensions and reduce it with the vector index along the T^2 directions; this results in spinors in d = 4. Using a reduction similar to (2.8), we conclude that there are 2 × 2 = 4 Majorana spinors in d = 4. Finally, reducing the d = 10 spinor Ψ^{(+)} along similar lines as in (2.8), we obtain 2 Majorana spinors in d = 4. In total we have 6 Majorana spinors, which form the superpartners of the 3 vector multiplets. Now let us move to the situation when the gravitino has indices along the CHL orbifold of K3. Given a harmonic (1,1) form, we can construct solutions to the Rarita-Schwinger equation on the CHL orbifold of K3 of the form ψ_m = f'_{mn} Γ^n Ω (and similarly with ω) [3], where the Γ's are the internal γ-matrices and the f' refer to the 2k (1,1) forms. Again, by reducing the d = 10 gravitino as in (2.8) but with its vector index along the CHL orbifold of K3, we obtain 2 × 2k = 4k Majorana spinors in d = 4, which form the fermionic content of the 2k hypermultiplets.
This completes the analysis of the dimensional reduction of the graviton multiplet in 10 dimensions, which results in the fields given in (2.7). Thus we see that it is only the number of hypers in the universal sector which is sensitive to the orbifolding.

Gauge sector

Now let us examine the spectrum that arises from the dimensional reduction of the Yang-Mills multiplet in d = 10. The field content of this multiplet is

(A_M, χ^{(−)}),

where the negative-chirality Majorana-Weyl fermions χ^{(−)} as well as the gauge bosons A_M are in the adjoint representation of E_8 ⊗ E_8, transforming as (248, 1) ⊕ (1, 248). This multiplet must decompose into N = 2 vectors and hypers in d = 4. To obtain the number of vectors and hypers we use index theory to find the number of fermion zero modes on the CHL orbifold of K3. To preserve supersymmetry in d = 4 the spin connection must be set equal to the gauge connection. Let us consider the standard embedding, in which we take an SU(2) out of the first E_8 and set it equal to the spin connection on the CHL orbifold of K3. As mentioned earlier, the SU(2) holonomy of the spin connection is preserved by the orbifolding procedure. This procedure breaks the E_8 to a subgroup; consider the maximal subgroup E_7 ⊗ SU(2), in which the SU(2) of the gauge connection is set equal to the SU(2) spin connection. Under the maximal subgroup E_7 ⊗ SU(2) ⊗ E_8, the Yang-Mills multiplet decomposes as

(248, 1) ⊕ (1, 248) → (133, 1, 1) ⊕ (56, 2, 1) ⊕ (1, 3, 1) ⊕ (1, 1, 248).   (2.11)

The fermions transforming as (133, 1, 1) and (1, 1, 248) are neutral under the SU(2) and therefore behave conventionally. That is, for these fermions we can use the two spin-1/2 zero modes of negative chirality on the CHL orbifold of K3, denoted by Ω, ω earlier, to construct two Majorana fermions in d = 4 in the same representations. These are the fermionic partners in the vector multiplets. Let us state the existence of the two spin-1/2 zero modes as an index theorem: essentially

n^{(−)}_{1/2} − n^{(+)}_{1/2} = 2.   (2.12)

Note that we have normalized the curvature integral appearing in the index by the Euler number of the CHL orbifold, and the integral is performed over the orbifold.
n^{(±)}_{1/2} counts the number of massless spin-1/2 zero modes of the appropriate chirality. Let us now examine the fermions which are charged under the SU(2) in the decomposition (2.11). Since the corresponding gauge connection is identified with the spin connection, these fermions must arrange themselves into N = 2 hypers. First consider the fermions which transform non-trivially under the SU(2). To obtain the number of fermions in d = 4 we need the index theorem for the Dirac operator on the CHL orbifold of K3. Since these fermions are charged under the SU(2) we need the expression for the twisted index (2.13), given in [36], which involves the integral of Tr_r F ∧ F together with the curvature term tr R ∧ R. We label the representation of the fermions by its dimension, denoted r. Note that, just as in (2.12), we have normalized the integral of the curvature term by the Euler number of the CHL orbifold of K3; for k = 10 the expression reduces to that for K3. Setting the gauge connection equal to the spin connection we obtain (2.14); the factor of 1/2 there arises because the trace in Tr(R ∧ R) is taken in the 4 of SO(4), which decomposes into two doublets of SU(2). One can relate the trace in the representation r to the trace in the doublet by

Tr_r F ∧ F = ( r(r^2 − 1)/6 ) Tr_2 F ∧ F.

Substituting this relation in (2.13) and using the last equality in (2.12) we obtain the index (2.16). Note that for the singlet, r = 1, the expression shows that there exist two negative-chirality modes, which were known by explicit construction as the spinors Ω^{(−)}, ω^{(−)}. Each pair of spin-1/2 zero modes counted by the index (2.16) gives rise to a pair of Majorana fermions in d = 4, which form the fermions of a single hypermultiplet. Thus the number of hypers in the representation r of SU(2) in d = 4 arising from the gauge sector is given by

n_h(r) = (k + 2) r(r^2 − 1)/6 − r.   (2.17)

Note that this is always an integer. Let us apply this formula to the fermions which transform non-trivially under SU(2). Consider the doublets transforming as (56, 2, 1).
Using (2.17) we conclude that there are k charged hypers in the (56, 1) representation of E_7 × E_8. Similarly, the triplets (1, 3, 1) lead to 4(k + 2) − 3 hypers uncharged under the gauge group. From the above discussion we see that the Yang-Mills multiplet in d = 10 results in the following multiplets in d = 4: vector multiplets in the adjoint of E_7 ⊗ E_8, k hypermultiplets in the (56, 1), and 4(k + 2) − 3 hypermultiplets which are singlets of E_7 ⊗ E_8; here we have also indicated the representations of E_7 ⊗ E_8. As a simple check, note that for K3 we have k = 10, which together with the 2k = 20 neutral hypers of the universal sector reproduces the well-known 10 charged hypers and 65 uncharged hypers [1]. The complete spectrum in d = 4 thus consists of the N = 2 graviton multiplet, the vector multiplets of E_7 × E_8 × U(1)^4, k hypermultiplets in the (56, 1) and 6k + 5 neutral hypermultiplets. Thus, compactification on the CHL orbifold of K3 changes only the number of hypers. It is important to note that these orbifolds involve the shift on S^1 together with the automorphism of K3 which reduces the number of (1, 1) forms. Therefore the internal space cannot be viewed as the product of a four-manifold with T^2, which implies that this compactification cannot be lifted to 6 dimensions. Thus the difference between the number of hypers and vectors is not constrained by anomaly cancellation in d = 6. Let us now discuss the generic spectrum of these models. The generic spectrum is labeled by the number of uncharged hypers M and the number of commuting U(1)'s, denoted by N. For the embedding of SU(2) we have considered, the model is given by

(M, N) = (6k + 5, 19),   (2.20)

i.e. (41, 19), (29, 19), (17, 19) and (11, 19) for k = 6, 4, 2, 1 respectively. Let us now consider compactifications in which an SU(n) with n = 3, 4, 5 of one of the E_8's is embedded in the spin connection. Doing so breaks that E_8 to E_6, SO(10) and SU(5) respectively. The number of uncharged hypers from the graviton multiplet remains invariant and is given by 2k. A similar analysis shows that the number of uncharged hypers from the Yang-Mills multiplet is given by the index

2n(k + 2) − (n^2 − 1).

Note that this expression reduces to 4(k + 2) − 3 for n = 2, as seen earlier in detail.
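The index counting above can be checked arithmetically. In the sketch below (an illustration of ours), the SU(2) trace ratio relating Tr_r to Tr_2 is computed directly from the J_3 eigenvalues, and the hypermultiplet count is taken in the form n_h(r) = (k+2) r(r^2−1)/6 − r, an assumption reconstructed from the two special cases quoted in the text (n_h(2) = k and n_h(3) = 4(k+2) − 3).

```python
from fractions import Fraction

# SU(2) trace ratio: tr_r(J3^2)/tr_2(J3^2) = r(r^2-1)/6, computed from the
# eigenvalues m = -j, ..., j of J3 in the spin-j representation, j = (r-1)/2.
def tr_J3_sq(r):
    j = Fraction(r - 1, 2)
    return sum((-j + n) ** 2 for n in range(r))

for r in range(2, 9):
    assert tr_J3_sq(r) / tr_J3_sq(2) == Fraction(r * (r * r - 1), 6)

# Hypermultiplet count in the dimension-r representation (our reconstruction,
# fixed by the special cases quoted in the text).
def n_hypers(r, k):
    return (k + 2) * r * (r * r - 1) // 6 - r   # r(r^2-1) is divisible by 6

for k in (10, 6, 4, 2, 1):
    assert n_hypers(2, k) == k                # charged hypers in the (56, 1)
    assert n_hypers(3, k) == 4 * (k + 2) - 3  # neutral hypers from the triplets

# K3 (k = 10): 10 charged hypers; 45 + 2k = 65 neutral ones, as in [1].
assert n_hypers(2, 10) == 10
assert n_hypers(3, 10) + 2 * 10 == 65
print("hyper counts consistent; k = 10 reproduces the K3 spectrum")
```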
Therefore, adding the 2k uncharged hypers from the universal sector, the total number of uncharged hypers for these compactifications is given by 2k(n + 1) − (n^2 − 4n − 1). Thus the (M, N) values for these models are

(M, N) = ( 2k(n + 1) − (n^2 − 4n − 1), 21 − n ).

Again we see that it is only the number of hypers that is affected by k. These models are the generalization of the ones considered in [1] for k = 10. Though the number of vectors is not affected by these compactifications, it will be clear from our analysis of the threshold corrections that the duality groups under which these models are invariant are subgroups of the duality group of the parent theory. For the rest of the paper we restrict our study to the case of the standard embedding, in which one of the E_8's is broken to E_7. However, we expect that our results for the new supersymmetric index as well as for the threshold corrections of the CHL orbifolds will generalize, with slight modifications, to other gauge groups.

New supersymmetric index for CHL orbifolds of K3

In this section we evaluate the new supersymmetric index for the CHL orbifold of K3. This index forms the basic ingredient for both the gauge and the gravitational threshold corrections of the heterotic compactifications considered in the previous section. The new supersymmetric index is defined as

Z_new(q, q̄) = (1/η^2(τ)) Tr_R ( F e^{iπF} q^{L_0 − c/24} q̄^{L̄_0 − c̄/24} ),

where the trace is taken over the internal CFT with central charge (c, c̄) = (22, 9). Note that the left movers are bosonic while the right movers are supersymmetric. The right-moving internal CFT has an N = 2 superconformal symmetry; it admits a U(1) current which can serve as the world-sheet fermion number, which we denote by F. The subscript R indicates that the trace is taken in the Ramond sector for the right movers. For K3 × T^2 compactifications this index was evaluated in [7] using the Z_2 orbifold realization of K3. We will first generalize this computation to the CHL orbifold (K3 × T^2)/Z_2.
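The hyper counts for the SU(n) embeddings can be checked for consistency. The sketch below (ours) assumes the gauge-sector count 2n(k+2) − (n^2 − 1), which reduces to 4(k+2) − 3 at n = 2 and, together with the 2k universal hypers, reproduces the stated total 2k(n+1) − (n^2 − 4n − 1).

```python
# Gauge-sector neutral hypers for the SU(n) embedding (assumed form):
def gauge_neutral(n, k):
    return 2 * n * (k + 2) - (n * n - 1)

# Total neutral hypers as quoted in the text:
def total_neutral(n, k):
    return 2 * k * (n + 1) - (n * n - 4 * n - 1)

for k in (10, 6, 4, 2, 1):
    # n = 2 reduces to the standard-embedding count 4(k+2) - 3
    assert gauge_neutral(2, k) == 4 * (k + 2) - 3
    for n in (2, 3, 4, 5):
        # universal sector (2k) + gauge sector = quoted total
        assert 2 * k + gauge_neutral(n, k) == total_neutral(n, k)
print("2k + [2n(k+2) - (n^2-1)] = 2k(n+1) - (n^2-4n-1) for all cases")
```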
Then, using observations from the explicit calculation done for the Z_2 orbifold, we will generalize and obtain the expression for the new supersymmetric index for the CHL orbifolds (K3 × T^2)/Z_N with N = 3, 5, 7.

The Z_2 orbifold

The N = 2 CHL orbifold of K3 admits the following simple orbifold realization. First, K3 is realized as a Z_2 orbifold of a torus T^4 by the inversion g: x^m → −x^m, m = 6, ..., 9; the CHL orbifold of K3 is then obtained by the action of g', which acts as a half-period shift along one of the circles of T^4 together with the shift x^4 → x^4 + π. Here the directions 4, 5 label the T^2 and the directions 6, 7, 8, 9 are the K3 directions. Note that the g' action involves a shift of π along one of the circles of T^2. This is embedded in the heterotic string by performing a shift of π along 2 of the directions of the E'_8 lattice, i.e. there is a shift

X^I → X^I + π γ^I,  γ = (1, 1, 0, 0, 0, 0, 0, 0),   (3.3)

where the X^I refer to the bosonic coordinates of the E'_8 lattice. If the action of g' is not implemented, the action of g together with the shift in (3.3) breaks E'_8 to E_7; the presence of g' ensures the CHL orbifolding. The shift in (3.3) is coupled to the g, g' action sector by sector, as written in (3.4): each (a, b) sector of the partition function Z[CHL; q, q̄] over (K3 × T^2)/Z_2 is multiplied by the correspondingly shifted E'_8 lattice sum and by Z[E_8; q], the partition function of the second E_8 lattice,

Z[E_8; q] = E_4(q)/η^8(q).

The Eisenstein series E_4 admits the following decomposition in terms of theta functions:

E_4 = (1/2) ( θ_2^8 + θ_3^8 + θ_4^8 ).

The partition function of the E'_8 involves a shifted lattice sum, in which the sum runs over all the lattice vectors λ of E_8, displaced and weighted by phases according to the sector (a, b); the lattice shift γ for the Z_2 case is the vector appearing in (3.3). In appendix B we have evaluated the shifted lattice sum for the various values of (a, b); the result (3.9) is expressed in terms of theta functions. What is now left is to define the partition function over (K3 × T^2)/Z_2, referred to as Z[CHL; q, q̄] in (3.4). For this we first define the lattice momenta on T^2,

p_L^2 − p_R^2 = 2 (m_1 n_1 + m_2 n_2),
p_R^2 = |−m_1 U + m_2 + n_1 T + n_2 T U|^2 / (2 T_2 U_2),   (3.10)

where the variables T, U refer to the Kähler and complex structure moduli of the torus T^2.
Then the partition function can be written as

Z[CHL](a, b; q, q̄) = (1/η^2) Σ_{m_1, m_2, n_1, n_2} q^{p_L^2/2} q̄^{p_R^2/2} F_{m_1,m_2,n_1,n_2}(a, b; q),   (3.11)

where the 1/η^2 factor arises from the left-moving bosonic oscillators and F_{m_1,m_2,n_1,n_2}(a, b; q) is independent of T, U. It is defined in (3.12) as a trace over the subspace of the Hilbert space carrying momentum (m_1, m_2) and winding (n_1, n_2), with insertions of the orbifold group elements; the subscripts g, g' on the trace indicate that the trace should be taken in the twisted sectors. The modified Virasoro generators L'_0, L̄'_0 appearing there have the T^2 zero-mode contributions removed, which ensures that F_{m_1,m_2,n_1,n_2} is independent of the T^2 moduli. Since the left-moving bosonic oscillators on T^2 have already been taken into account in (3.11), the trace does not involve these oscillators. Note that in the absence of the insertions of the Z_2 element g', which is responsible for orbifolding K3 × T^2, the coupling of the shifts in the E'_8 reduces to that of K3 realized as an involution of T^4 by the action of g. F_{T^4} is the right-moving world-sheet fermion number of the (0, 4) superconformal algebra of T^4; this U(1) is twice the U(1) of the SU(2) present in the (0, 4) superconformal algebra. Finally, F_{T^2} is the right-moving world-sheet fermion number of the (0, 2) superconformal algebra of T^2. It can be seen that unless the fermionic zero modes on T^2 are saturated, the trace given in the last line of (3.12) vanishes. Therefore we obtain the reduced trace

F_{m_1,m_2,n_1,n_2}(a, r; b, s; q) = Tr_{m_1,m_2,n_1,n_2; g^a g'^r} ( g^b g'^s F_{T^4} e^{iπ F_{T^4}} q^{L'_0 − c/24} ).   (3.14)

The detailed evaluation of this trace is provided in appendix C. The results for the various sectors are given in (3.15). The contributions in which the winding n_1 takes half-integer values arise from the sectors twisted by the element g'. The contributions proportional to (−1)^{m_1} arise from the insertions of the element g' in the trace.
Note that if one ignores the contributions where n_1 takes half-integer values and the ones proportional to (−1)^{m_1}, the result for the various sectors is proportional to that for K3 realized as a Z_2 orbifold of T^4. The expressions in (3.15) can then be substituted into (3.11) to obtain the partition function on the CHL orbifold of K3. Let us now use the results in (3.9) and (3.15) to obtain the new supersymmetric index defined in (3.4). Note that the dependence of the traces in (3.15) on the winding and momenta is mild: one just needs to consider the cases n_1 ∈ Z and n_1 ∈ Z + 1/2 separately. Multiplying the various sectors and summing over them, we obtain the expression (3.16) for the new supersymmetric index; the superscript (2) there refers to the fact that this is the index for the orbifold (K3 × T^2)/Z_2. Here we have used the decomposition of E_6 in terms of theta functions,

E_6 = (1/2) ( θ_2^4 + θ_3^4 )( θ_3^4 + θ_4^4 )( θ_4^4 − θ_2^4 ).

Note that (3.16) is the generalization of the new supersymmetric index obtained for the standard embedding in K3 × T^2 compactifications given in (1.3), for which one obtains just the term involving E_6 in the first line of (3.16). The result (3.16) is the expression for the new supersymmetric index for compactifications on (K3 × T^2)/Z_2. We will now discuss two equivalent ways of rewriting the expression (3.16) which are useful for the questions addressed in this paper.

Decomposition in terms of characters of D_6. From the general arguments in [7], we expect that the new supersymmetric index for K3 × T^2 decomposes in terms of characters of the sub-lattice D_6 of E'_8. The coefficients in this decomposition can be written in terms of the elliptic genus of the N = 4 superconformal field theory of the 4-dimensional compact manifold. For K3 × T^2 compactifications, this decomposition of the new supersymmetric index is given in (1.3).
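The two theta-function decompositions of E_4 and E_6 quoted above are classical identities; the sketch below (an independent check of ours, with all series handled as integer polynomials in t = q^{1/8}) confirms both to O(q^{12}).

```python
# q-expansion check of
#   E4 = (theta2^8 + theta3^8 + theta4^8)/2
#   E6 = (theta2^4 + theta3^4)(theta3^4 + theta4^4)(theta4^4 - theta2^4)/2
# Series are dicts {exponent: coeff}, exponents in units of t = q^(1/8).
TMAX = 8 * 12  # keep terms up to q^12

def mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            if ea + eb <= TMAX:
                c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v != 0}

def add(a, b, sign=1):
    c = dict(a)
    for e, v in b.items():
        c[e] = c.get(e, 0) + sign * v
    return {e: v for e, v in c.items() if v != 0}

def power(a, n):
    r = {0: 1}
    for _ in range(n):
        r = mul(r, a)
    return r

th2, th3, th4 = {}, {}, {}
for n in range(-40, 41):
    e = (2 * n + 1) ** 2          # theta2 = sum_n q^((2n+1)^2/8)
    if e <= TMAX:
        th2[e] = th2.get(e, 0) + 1
    e = 4 * n * n                 # theta3, theta4 = sum_n (+-1)^n q^(n^2/2)
    if e <= TMAX:
        th3[e] = th3.get(e, 0) + 1
        th4[e] = th4.get(e, 0) + (-1) ** n

def sigma(n, p):
    return sum(d ** p for d in range(1, n + 1) if n % d == 0)

E4 = {8 * n: (1 if n == 0 else 240 * sigma(n, 3)) for n in range(13)}
E6 = {8 * n: (1 if n == 0 else -504 * sigma(n, 5)) for n in range(13)}

lhs4 = add(add(power(th2, 8), power(th3, 8)), power(th4, 8))
assert lhs4 == {e: 2 * v for e, v in E4.items()}

a4, b4, c4 = power(th2, 4), power(th3, 4), power(th4, 4)
lhs6 = mul(mul(add(a4, b4), add(b4, c4)), add(c4, a4, sign=-1))
assert lhs6 == {e: 2 * v for e, v in E6.items()}
print("theta decompositions of E4 and E6 verified to O(q^12)")
```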
We will show that the new supersymmetric index for (K3 × T^2)/Z_2 can also be decomposed in terms of characters of D_6, with coefficients given by the twisted elliptic genus of K3. Let us first define the twisted elliptic genus for the CHL orbifolds of K3. Let g' be the generator of the Z_N action on K3 which results in the CHL orbifold. We define the twisted elliptic genus of K3 as

F^{(r,s)}(τ, z) = (1/N) Tr_{RR, g'^r} ( g'^s (−1)^{F_{K3} + F̄_{K3}} e^{2πi z F_{K3}} q^{L_0 − c/24} q̄^{L̄_0 − c̄/24} ),

where the trace is taken in the N = 4 superconformal field theory associated with K3, in the g'^r-twisted Ramond sector. F_{K3} and F̄_{K3} denote the left and right world-sheet fermion numbers, which can be written in terms of the U(1) charges of the SU(2) R-symmetry of this theory. The twisted elliptic genera for the various CHL orbifolds were provided in [20], and we use the explicit results for the N = 2 CHL orbifold. Using these expressions for the twisted elliptic genus, we can see that the new supersymmetric index (3.16) can be written in the form (3.20). Though this expression is lengthy, the structure of the index is quite easy to decipher. To see this, let us list the characters of the D_6 lattice. Considering the lattice in the fermionic representation, the partition functions of the various sectors are the level-one SO(12) characters

(θ_3^6 + θ_4^6)/(2η^6),  (θ_3^6 − θ_4^6)/(2η^6),  (θ_2^6 ± θ_1^6)/(2η^6).

It is important to note that the new supersymmetric index (3.16) was obtained by an explicit calculation, and it admits a decomposition of the form (3.20). It is interesting that the structure seen for K3 × T^2 in [7, 9], in which the elliptic genus of the internal CFT determines the new supersymmetric index, is generalized to the twisted elliptic genus for the CHL compactification.

Decomposition in terms of Eisenstein series. It is also useful to rewrite the new supersymmetric index (3.16) in another form in order to obtain the gauge threshold corrections. For this, note the identities between modular forms recorded in (3.22).
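The leading coefficients of the standard level-one SO(12) (= D_6) characters display the representation content directly. The sketch below (an independent numerical check of ours, with η^6 inverted as a power series) confirms the leading coefficients 1 + 66q + ..., 12 and 32, matching the dimensions of the adjoint, vector and spinor of SO(12); θ_1^6 vanishes at z = 0.

```python
from math import comb

QMAX = 10  # work to order q^QMAX

def theta34_pow6(sign):
    # (sum_n sign^n q^(n^2/2))^6; exponents stored in units of q^(1/2)
    base = {}
    for n in range(-15, 16):
        e = n * n
        if e <= 2 * QMAX:
            base[e] = base.get(e, 0) + sign ** (abs(n) % 2)
    r = {0: 1}
    for _ in range(6):
        s = {}
        for ea, ca in r.items():
            for eb, cb in base.items():
                if ea + eb <= 2 * QMAX:
                    s[ea + eb] = s.get(ea + eb, 0) + ca * cb
        r = s
    return r

th3_6, th4_6 = theta34_pow6(1), theta34_pow6(-1)

# coefficients of prod_{n>=1} (1-q^n)^6 and of its inverse power series
P = [1] + [0] * QMAX
for n in range(1, QMAX + 1):
    newP = [0] * (QMAX + 1)
    for e, ce in enumerate(P):
        if ce:
            for j in range(7):
                if e + n * j <= QMAX:
                    newP[e + n * j] += ce * comb(6, j) * (-1) ** j
    P = newP
I = [1] + [0] * QMAX
for e in range(1, QMAX + 1):
    I[e] = -sum(P[d] * I[e - d] for d in range(1, e + 1))

# identity character (theta3^6 + theta4^6)/(2 eta^6), integer q powers only
A = [(th3_6.get(2 * m, 0) + th4_6.get(2 * m, 0)) // 2 for m in range(QMAX + 1)]
adj = [sum(A[d] * I[m - d] for d in range(m + 1)) for m in range(QMAX + 1)]
assert adj[0] == 1 and adj[1] == 66          # 66 = dim adjoint of SO(12)
assert (th3_6[1] - th4_6[1]) // 2 == 12      # 12 = dim vector of SO(12)
assert 2 ** 6 // 2 == 32                     # leading term of theta2^6/(2 eta^6)
print("D6 character leading coefficients: 1 + 66q + ..., 12, 32")
```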
The identities in (3.22) have been verified by performing a q-expansion, which is detailed in appendix A. Substituting these identities in (3.16) we obtain the form (3.24). It is also instructive to derive the expression (3.24) for the new supersymmetric index directly from (3.20). For this we use the more general form of the twisted elliptic genus of the N = 2 CHL orbifold of K3 from [20]. Substituting these forms of the twisted elliptic genus in (3.20), it is easy to see that it organizes itself into the form (3.24); to show this it is convenient to use the identities satisfied by A(τ, z) and B(τ, z).

Modular invariance. The new supersymmetric index has the property that τ_2 Z_new(τ, τ̄) must be an SL(2, Z) non-holomorphic modular form of weight −2. This is essentially because it occurs in threshold integrals together with modular forms of weight 2, and the integrand of any threshold integral has to be modular invariant. Let us now verify that τ_2 Z_new indeed transforms as a weight −2 form. For this we need the modular transformation property of E_N; for the special case N = 2 it yields the relation (3.28). Let us also define lattice sums over T^2 restricted according to the parities of the momenta and windings. From the expressions for p_L, p_R given in (3.10) it is easy to see that under the shift τ → τ + 1 one obtains the relations (3.31) between these lattice sums, and using Poisson resummation one can show that under the transformation τ → −1/τ the relations (3.32) hold. Using the equations (3.28), (3.29), (3.31) and (3.32) it is easy to see that τ_2 Z^{(2)}_new, with the new supersymmetric index in the form (3.24), is a modular form of weight −2. To demonstrate this one also uses the fact that η, E_4, E_6 are modular forms of weight 1/2, 4, 6 respectively. This result ensures that the integrand of the threshold corrections is modular invariant.
The Z_N orbifold

From the explicit calculation and the discussion of the N = 2 CHL orbifold of K3 in the previous section, it is easy to arrive at the expression for the new supersymmetric index for the other values of N. To write down the expression for the index it is useful to define, for 0 ≤ r, s ≤ N − 1, sector-wise combinations of the twisted elliptic genus and the shifted lattice sums. Here F^{(r,s)}(τ, z) is the twisted elliptic genus of the CHL orbifold of K3, given in terms of the forms A(τ, z), B(τ, z) defined in (3.26) by the expressions of [20]. Using these definitions, the new supersymmetric index for the Z_N CHL orbifold of K3 is given by (3.36). A simple check of this formula is that it reduces to (3.24) for the N = 2 case. One can rewrite the expression by performing the sums over the phases wherever possible, but it is convenient to keep it as it is. It can be shown that τ_2 Z^{(N)}_new(q, q̄) is a modular form of weight −2 by generalizing the method discussed in detail for the N = 2 case. Therefore the structure of the new supersymmetric index for CHL orbifolds of K3 is such that the Eisenstein series E_6, which occurs for K3, is modified to the expression in the curly brackets of (3.36). Our analysis of the new supersymmetric index for heterotic compactifications on the CHL orbifolds of K3 was restricted to the case of the standard embedding, in which one of the gauge groups of the heterotic string is broken to E_7. However, we expect that our observation that the new supersymmetric index decomposes into a sum over twisted elliptic genera of K3 will hold for other embeddings and gauge groups. For the unorbifolded case, the fact that the elliptic genus of K3 determines the new supersymmetric index was explicitly shown by the study of various cases in [9, 25]. We expect similar results to hold for the compactifications considered in this paper, and it will be interesting to perform explicit checks for the various gauge groups.
Mathieu moonshine

From the analysis of the new supersymmetric index for CHL orbifolds of K3 we have seen that it is essentially determined by the twisted elliptic genus of K3. This property is seen in the expression (3.24) for the N = 2 orbifold and in (3.36) for the other values of N. It is known [37-40] that the twisted elliptic genus of K3 admits an M_24 symmetry. Therefore it must be possible to discover the M_24 representations in the new supersymmetric index for the CHL orbifolds of K3, just as was done for the new supersymmetric index of K3 compactifications in [19]. Let us first recall how Mathieu moonshine, i.e. the appearance of M_24 representations, is seen in the elliptic genus of K3, which is given by

Z_{K3}(τ, z) = 8 [ (θ_2(τ, z)/θ_2(τ, 0))^2 + (θ_3(τ, z)/θ_3(τ, 0))^2 + (θ_4(τ, z)/θ_4(τ, 0))^2 ].

Let us decompose the elliptic genus into the elliptic genera of the short and the long representations of the N = 4 superconformal algebra. These are given by [41]

ch_{h=1/4, l=0}(τ, z) = ( θ_1(τ, z)^2 / η(τ)^3 ) μ(τ, z),
ch_{h=n+1/4, l=1/2}(τ, z) = q^{n − 1/8} θ_1(τ, z)^2 / η(τ)^3,

where μ(τ, z) is the Appell-Lerch sum. Then we have

Z_{K3}(τ, z) = 24 ch_{h=1/4, l=0}(τ, z) + Σ_{n≥0} A_n ch_{h=n+1/4, l=1/2}(τ, z),

where the first few values of A_n are A_0 = −2, A_1 = 90 = 2 × 45, A_2 = 462 = 2 × 231, A_3 = 1540 = 2 × 770, with 45, 231, 770 dimensions of irreducible representations of M_24 [42]. The generalization of this observation to the twisted elliptic genus of K3 was carried out in [37, 38, 40]. Let us first discuss the N = 2 CHL orbifold of K3 and consider the twisted elliptic genus in the (0, 1) sector. It admits a decomposition in terms of N = 4 Virasoro characters in which the coefficient of the short character is 8, the twisted Euler number of K3 given in (4.6), and the first few coefficients A^{(2)}_n appearing there fall into the McKay-Thompson series (4.8). We have multiplied by a factor of 2 to agree with the normalization of the twisted elliptic genus of K3 used in [37]. Then the new supersymmetric index in the (0, 1) sector admits the decomposition

G^{(2)}(q) = 8 g_{h=1/4, l=0}(τ) + Σ_n A^{(2)}_n g_{h=n+1/4, l=1/2}(τ),   (4.9)

where the g's are products of characters of D_6 and N = 4 Virasoro characters. G^{(2)} given in (4.9) is the generalization of the corresponding decomposition (4.10) of the new supersymmetric index for K3 compactifications. Substituting the expressions for the g's from (4.11) into (4.10) and using (4.9), we can solve for the coefficients A_n.
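The standard theta-quotient representation of the K3 elliptic genus, Z = 8 Σ_{i=2,3,4} (θ_i(τ, z)/θ_i(τ, 0))^2, can be checked numerically (the formula is the well-known one; the check is ours): at z = 0 it gives the Euler number 24, and as Im τ → ∞ it approaches the familiar leading term 2y + 20 + 2/y with y = e^{2πiz}.

```python
import cmath

def theta(i, tau, z, nmax=25):
    # Jacobi theta_i(tau, z) with q = e^(2 pi i tau), y = e^(2 pi i z)
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * z)
    s = 0
    for n in range(-nmax, nmax + 1):
        if i == 2:
            s += q ** ((n + 0.5) ** 2 / 2) * y ** (n + 0.5)
        elif i == 3:
            s += q ** (n ** 2 / 2) * y ** n
        else:  # i == 4
            s += (-1) ** n * q ** (n ** 2 / 2) * y ** n
    return s

def Z_K3(tau, z):
    return 8 * sum((theta(i, tau, z) / theta(i, tau, 0)) ** 2 for i in (2, 3, 4))

tau = 6j          # |q| = e^(-12 pi), so O(q^(1/2)) corrections are tiny
z = 0.31 + 0.07j
y = cmath.exp(2j * cmath.pi * z)

assert abs(Z_K3(tau, 0.0) - 24) < 1e-9                 # Witten index = 24
assert abs(Z_K3(tau, z) - (2 * y + 20 + 2 / y)) < 1e-6  # q^0 term
print("Z_K3(tau, 0) = 24 and leading q^0 term 2y + 20 + 2/y confirmed")
```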
We have checked using Mathematica that the first 8 coefficients fall into the McKay-Thompson series for the Z_2 involution embedded in M_24, given in (4.8). Let us now proceed with the analysis for the other values of N. From (3.36) we see that the new supersymmetric index in the (0, 1) sector takes the form (4.13); here we have multiplied by a factor of N to agree with the normalization of the twisted elliptic genus of K3 in [37]. Let us write G^{(N)} in the form (4.14). By equating (4.14) and (4.13) we can solve for the coefficients A^{(N)}_n. As we have seen explicitly for the N = 2 case, the new supersymmetric index in the (1, 0) twisted sector is related to that of the (0, 1) sector by the modular transformation τ → −1/τ. This is also true for the other values of N, which implies that the new supersymmetric index in these sectors must also contain the modular-transformed versions of the McKay-Thompson series. It will be interesting to show this explicitly. There are 26 McKay-Thompson series, corresponding to the 26 conjugacy classes of M_24. It will be interesting to construct and study the properties of the new supersymmetric index corresponding to the remaining classes. The twisted elliptic genera of K3 for each of these classes have been constructed in [37-40], which will be a good starting point for this study.

Gauge threshold corrections

In this section we evaluate the one-loop threshold corrections for each of the two unbroken gauge groups E_7 and E_8, as functions of the Kähler and complex structure moduli and the Wilson line modulus on T^2, for the heterotic compactifications on CHL orbifolds of K3. To begin, we recall the evaluation of the threshold integrals for the gauge couplings of the heterotic string on K3 × T^2. We then generalize to the case of the Z_2 CHL orbifold and present the results for the Z_N orbifolds with N = 3, 5, 7.
We will show that the difference in the threshold integrals of the two unbroken gauge groups reduces to the Siegel modular forms associated with the dyon partition functions in N = 4 string compactifications studied in [20].

Thresholds in K3 × T^2

Let us first discuss the situation without the Wilson line turned on. The moduli dependence of the one-loop running of the gauge coupling is given by

Δ_G(T, U) = ∫_F (d^2τ / τ_2) ( B_G(T, U; τ, τ̄) − b(G) ),   (5.1)

where B_G is a trace over the internal Hilbert space with an insertion of Q^2(G), Q being the charge of the lattice vectors. The coefficient b(G) is the one-loop beta function, which is present to ensure that the integral is well defined in the limit τ_2 → ∞. Since we are interested only in the moduli dependence, this coefficient will not play a crucial role in our analysis. Note that B_G is closely related to the new supersymmetric index: in fact the term in B_G proportional to 1/(8πτ_2) is the new supersymmetric index. The easiest way to determine the term with the charge insertion Q^2(G) is to consider the action of q∂_q on the partition function of the appropriate lattice sum, such that τ_2 B_G is modular invariant. The integral in (5.1) is carried out over the fundamental domain. Let us recall how to evaluate the one-loop threshold integrands for the groups E_7 and E_8 in the K3 × T^2 compactification. For the group E_8 the integrand is given in (5.3); here we have suppressed the moduli dependence of B_G, which arises from the lattice sum Γ_{2,2} on T^2. Note that this is essentially an operation on the new supersymmetric index of these compactifications, which is given in (1.2). The charge insertion of the E_8 lattice is obtained by the action of q∂_q on the lattice sum E_4(q). The coefficient α_G is determined by demanding that τ_2 B_G be modular invariant.
To determine this coefficient, consider the following identity due to Ramanujan:

q∂_q E_4 = (E_2 E_4 − E_6)/3.

Substituting this identity in (5.3) we obtain

It is now clear that choosing α_G = 1/8 ensures that the quasi-modular form E_2 occurs in the combination Ẽ_2, which transforms as a good modular form of weight 2. Therefore the threshold integrand for the gauge group E8 is given by

Similarly, the threshold integrand for the group E7 is obtained by evaluating

Now we have the Ramanujan identity

q∂_q E_6 = (E_2 E_6 − E_4^2)/2.

This identity together with modular invariance determines α_{G′} = 1/12. Thus the threshold integrand for the gauge group E7 is given by

Finally, consider the difference in the threshold integrands for the gauge groups in (5.7) and (5.10). We obtain

To obtain the second line we have used the identity

E_4^3 − E_6^2 = 1728 η^24.

Therefore the threshold integral reduces to the trivial integral over the fundamental domain of just the lattice sum, which is given by

The constant (−1) can be obtained by carefully keeping track of the constants b(G) in the threshold integrand (5.1); essentially, the (−1) serves to regulate the integral as τ_2 → ∞. This integral was done in [4] and the result reduces to the product of Dedekind η functions:

Δ^{(1)} = −log( T_2 U_2 |η(T)|^4 |η(U)|^4 ).

Here we are ignoring moduli-independent constants; T_2, U_2 are the imaginary parts of the T, U moduli of the torus T^2. Note that the normalization of the thresholds used in this paper involves a division by the beta function compared to standard normalizations in the literature. This is to keep uniformity in the discussion when we evaluate the difference in thresholds, as well as when we turn to the CHL orbifolds.

Wilson line V ≠ 0

Let us now repeat this exercise with the Wilson line V on the torus T^2 turned on. The Wilson line can be embedded either in the gauge group E8 or E7; we will take it to be embedded in E8. The procedure to evaluate gauge thresholds with the Wilson line was given in [9]. Here we outline the steps.
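As an aside, the Eisenstein-series identities used in the threshold computation above (the Ramanujan identities fixing α_G and α_{G′}, and the discriminant identity invoked to reduce the difference of integrands) can be checked on truncated q-expansions. A minimal pure-Python sketch, not from the paper; the truncation order N is a free choice:

```python
# Truncated q-expansions of the Eisenstein series E2, E4, E6, used to verify
# q d/dq E4 = (E2 E4 - E6)/3, q d/dq E6 = (E2 E6 - E4^2)/2, and the
# discriminant identity E4^3 - E6^2 = 1728 eta^24.

N = 20  # truncation order in q

def sigma(k, n):
    """Divisor sum sigma_k(n) = sum of d^k over divisors d of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

# q-series stored as coefficient lists [a_0, a_1, ..., a_{N-1}]
E2 = [1] + [-24 * sigma(1, n) for n in range(1, N)]
E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

def mul(a, b):
    """Cauchy product of two q-series, truncated at order N."""
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def qdq(a):
    """Action of q d/dq on a q-series."""
    return [n * a[n] for n in range(N)]

E2E4, E2E6, E4E4 = mul(E2, E4), mul(E2, E6), mul(E4, E4)
rhs4 = [(E2E4[n] - E6[n]) // 3 for n in range(N)]
rhs6 = [(E2E6[n] - E4E4[n]) // 2 for n in range(N)]
disc = [c1 - c2 for c1, c2 in zip(mul(E4E4, E4), mul(E6, E6))]

# eta^24 = Delta = q * prod_{n >= 1} (1 - q^n)^24
eta24 = [0, 1] + [0] * (N - 2)
for n in range(1, N):
    for _ in range(24):  # multiply by (1 - q^n) twenty-four times
        eta24 = [eta24[m] - (eta24[m - n] if m >= n else 0) for m in range(N)]
```

All three checks are exact on integer coefficients up to the truncation order.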
Due to the presence of the Wilson line, the lattice sum over T^2 is enhanced to Γ_{3,2}, which is given by

and

Thus the lattice sum over T^2 is characterized by the five charges (m_1, m_2, n_1, n_2, b). The new supersymmetric index with the Wilson line is then determined by first re-writing the lattice sum over E8 in terms of a Jacobi form of index 1 given by

Note that E_{4,1}(τ, 0) = E_4(q); essentially we have decomposed the E8 lattice into D6 and D2 and introduced a chemical potential for the charges in the D2 sub-lattice. This breaks the gauge group E8 down to SO(12) × U(1); we will refer to this group as G. (The discussion can be generalized when the Wilson line is embedded in E7, with the same results.) We then decompose this Jacobi form of index one into SU(2) characters as follows:

E_{4,1}(τ, z) = E^even_{4,1}(q) θ_even(τ, z) + E^odd_{4,1}(q) θ_odd(τ, z), (5.19)

where

θ_even(τ, z) = θ_3(2τ, 2z), θ_odd(τ, z) = θ_2(2τ, 2z). (5.20)

This decomposition can be performed using the relations

Using these relations we get

Note that the even and odd parts depend only on the modular parameter τ. Finally, the modified new supersymmetric index in the presence of the Wilson line is written as (5.23). Here p_L, p_R contain the Kähler, complex structure and Wilson line moduli dependence of the T^2. A similar procedure can be carried out when the Wilson line is embedded in the unbroken group E7. In this situation the Jacobi form E_{6,1}, given by

must be decomposed into its even and odd parts. The coupling of the lattice sum Γ_{3,2} to the even and odd parts of E_{4,1} in (5.23) is compactly denoted as

Now we move on to evaluating the integrand B_G in the gauge thresholds with the Wilson line. Let us evaluate the threshold integrand for the group E8 first.
To determine the coefficient α_G in the action of q∂_q, we need the following identity, analogous to (5.4), which is given in [44, 45]

q∂_q E^{even,odd}

From (5.26) it is easy to see that to preserve modular invariance we need α_G = 1/7. Therefore we obtain

The threshold integrand B_{G′} for the group E7 is given by

Let us now take the difference between the threshold corrections corresponding to the two gauge groups. We obtain

Here we have ignored the constant term in the integrand, which can be determined by examining the behaviour of the integrand as τ_2 → ∞. The combination of Eisenstein series which occurs in (5.30) can be identified with the elliptic genus of K3 due to the following identities

where Z_K3(τ, z) = 8A(τ, z) is the elliptic genus of K3. The integral in (5.30) can be performed [46] and it results in

where Φ_10(T, U, V) is the unique cusp modular form of weight 10 under Sp(2, Z), also known as the Igusa cusp form. The observation that the difference in thresholds of the two gauge groups results in the Igusa cusp form was made in [25]. It is also important to note that the duality symmetry SO(3, 2) present classically in heterotic on K3 × T^2 is broken to Sp(2, Z) by this quantum correction. The modular form Φ_10(T, U, V) also determines the degeneracies of 1/4 BPS dyons in heterotic string theories compactified on T^6, or equivalently type II theories on K3 × T^2. Note that these theories are N = 4 string vacua, while we have evaluated the threshold correction (5.32) in heterotic compactified on K3 × T^2, which has N = 2 supersymmetry. It is also interesting that the difference in thresholds is in fact sensitive only to the elliptic genus of K3. In the next subsections we will generalize this property of the gauge thresholds to heterotic compactified on the CHL orbifolds of K3.

Thresholds in the Z2 orbifold

Let us first evaluate the threshold integrands without the Wilson line turned on for the Z2 orbifold of K3.
As we have seen in the previous subsection, the most suitable form of the new supersymmetric index for this task is the expression in (3.24) in terms of the Eisenstein series. Let us write it in a compact form using the lattice sums defined in (3.30). As discussed in the earlier subsection, the insertion of Q^2 in the construction of the integrand B in (5.2) is done by the action of α_G q∂_q, with α_G = 1/8 and α_{G′} = 1/12 when the derivative acts on the lattice partition functions E_4 and E_6, respectively. This ensures modular invariance of the resulting integrand.

Let us first evaluate the threshold integral for the gauge group E8. For this, α_G q∂_q acts only on the first E_4 in (5.33). This results in

where Ẽ_2 is given by (5.6). Similarly, the gauge threshold integrand for the E7 gauge group is given by

The modular integral with this difference can be performed using the methods in [20]. The difference in the gauge thresholds is given by

where p_R, p_L are the lattice momenta with the Wilson line given in (5.16). The new supersymmetric index with the Wilson line in the E8 gauge group is given by

Note here that the product ⊗ refers to the fact that the even/odd part of E_{4,1} multiplies the even/odd part of the various lattice sums, as explained in the earlier subsection. The threshold integrand for the gauge group E8 broken down to G is given by

To obtain this, note that the insertion of Q^2 to obtain the threshold integrand is realized by α_G q∂_q acting on E_{4,1} with α_G = 1/7.
The threshold integrand for the gauge group E7 is given by

Taking the difference in the threshold integrands given in (5.40) and (5.41), we obtain

We now use the identity in (5.31), as well as the following identity verified in appendix A:

(1/η^24) (E_{4,1}(τ, z) E_6 − E_{6,1}(τ, z) E_4) = −144

On comparing with the twisted elliptic genus for the N = 2 CHL orbifold of K3 given in (3.25), we can rewrite the above equation as (5.45). This is precisely the integrand in the modular integral that yields the Siegel modular form Φ_6(Ω) of weight 6. Using the result of the integration in [20], we obtain

The Siegel modular form Φ_6(T, U, V) transforms as a weight 6 form under a subgroup of Sp(2, Z). This subgroup is explicitly discussed in [20]. The appearance of Φ_6 in the threshold calculation here shows that the duality group of this compactification is a subgroup of Sp(2, Z). Just as in the case of the heterotic string on K3 × T^2, the modular form Φ_6 is also related to the partition function of 1/4 BPS dyons in type II theory on the CHL orbifold of K3. This theory has N = 4 supersymmetry; it is dual to the original CHL compactifications of the heterotic string studied in [10]. Let Φ̃_6 be the generating function of dyons in this theory; then the modular form Φ_6 is related to Φ̃_6 in (5.46) by the following Sp(2, Z) transformation.

Thresholds in the ZN orbifold

In this subsection we generalize the calculation of the one-loop gauge thresholds to the ZN orbifold for N = 3, 5, 7. Since we have discussed the case N = 2 in detail, we will directly present the results for the threshold with the Wilson line embedded in the unbroken gauge group E8. Again, to present the results it is convenient to define the following lattice sums.

From the expression for the new supersymmetric index in (3.36), it is easy to generalize to the situation with the Wilson line embedded in the E8 gauge group.
This is given by

Again, using the same manipulations to evaluate the difference in the threshold integrands for the two gauge groups, we obtain

Now, using the expressions for the twisted elliptic genus for the CHL orbifold of K3 given in (3.34), we can recast the above expression as

The integral of this function over the fundamental domain has been performed in [20]. The result of this integral is

Here Φ_k is the Siegel modular form of weight k transforming according to a subgroup of Sp(2, Z). This modular form is related to Φ̃_k, the generating function for 1/4 BPS dyons in type II theory compactified on the CHL orbifold of K3, by the Sp(2, Z) transformation

We have thus demonstrated that the moduli dependence of the difference in the gauge thresholds for the heterotic string compactified on the CHL orbifold of K3 is captured by Siegel modular forms Φ_k of weight k = 24/(N+1) − 2. These are related to the modular forms which are the generating functions for 1/4 BPS states in N = 4 string theories obtained by compactifying type II theories on the CHL orbifold of K3. We would like to again emphasise that our analysis was done only for the standard embedding, in which one of the E8's of the heterotic string was broken to E7. However, we expect that the difference in gauge thresholds will still be determined by the Siegel modular form Φ_k. For the unorbifolded case this was explicitly demonstrated in [25] by considering various embeddings and gauge groups.

6 Conclusions

We have introduced N = 2 string theories constructed by compactifying heterotic string theories on CHL orbifolds of K3. These generalize the well-studied example of the heterotic string compactified on K3 × T^2. The CHL orbifolding reduces the number of hypers in the resulting N = 2 theory and preserves the vectors in the theory. These models do not have a lift to 6 dimensions, since the orbifolding involves a shift on one of the circles of T^2.
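The weight formula quoted above can be tabulated; a trivial sketch (here N = 1 denotes the unorbifolded K3 × T^2 case, where Φ_10 is the Igusa cusp form):

```python
# Tabulating k = 24/(N+1) - 2: N = 1 reproduces the Igusa cusp form Phi_10,
# while N = 2, 3, 5, 7 give the CHL weights 6, 4, 2, 1.

def siegel_weight(N):
    """Weight k of the Siegel form Phi_k for the Z_N CHL orbifold."""
    assert 24 % (N + 1) == 0, "N + 1 must divide 24"
    return 24 // (N + 1) - 2

weights = {N: siegel_weight(N) for N in (1, 2, 3, 5, 7)}
# -> {1: 10, 2: 6, 3: 4, 5: 2, 7: 1}
```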
We evaluated the new supersymmetric index for these compactifications and showed that it admits an expansion in terms of the McKay-Thompson series of the group M24 associated with the ZN automorphism used to construct the CHL orbifold. We then studied the moduli dependence of one-loop corrections to the gauge couplings in the CHL orbifolds of K3. We showed that the moduli dependence of the difference in the gauge thresholds is captured by Siegel modular forms closely related to the partition functions of 1/4 BPS dyons in N = 4 string theories. These Siegel modular forms transform under subgroups of Sp(2, Z), which shows that the CHL orbifolding reduces the duality symmetry of the original K3 compactification to a subgroup of Sp(2, Z).

It will be interesting to evaluate gravitational thresholds in these theories to see if they also admit the nice structure seen for the gauge thresholds. Another direction to explore is to generalize the observations of this paper to examples involving different embeddings with other gauge groups. A simple example to study is the compactification of the heterotic string which will lead to the Siegel modular form that captured the degeneracies of dyons in the type II N = 4 theory constructed in [47]. Another generalization is to consider heterotic compactifications based on the new classes of twisted elliptic genera of K3 constructed in [37, 38, 40].

We observed that the difference in the integrands of the gauge thresholds reduces to the twisted elliptic genus of K3 for the CHL orbifold. This points to the fact that the difference in the thresholds is essentially sensitive only to a supersymmetric index of the internal CFT. It will be interesting to prove this in general. A similar phenomenon was observed in [48, 49], in which the authors evaluated the difference in thresholds in compactifications of the heterotic string which completely break supersymmetry.
They noticed that the difference in thresholds is purely a holomorphic function in the modular parameter, indicative of a supersymmetric index. Another direction worth exploring is the N = 2 string duality between heterotic string theory compactified on these CHL orbifolds of K3 and the appropriate Calabi-Yau on the type II side. Since the CHL orbifolds reduce the number of hypers, the appropriate Calabi-Yau should have the reduced Hodge number h^{2,1} = 6k + 4. It is interesting to study what symmetry action on the Calabi-Yau reproduces this Hodge number. In this context it will also be important to study the one-loop threshold corrections to gravitational couplings in these models. Note that the modular forms Φ_k obtained in the difference of thresholds of the CHL compactifications in this paper factorize in the V → 0 limit as [20]

lim

It is also interesting to investigate whether the difference in thresholds has other degeneration limits for discrete values of V, as seen in [48, 49]. This degeneration should correspond

(1 + q^{n−1/2})(1 + q^{n−1/2}),

One identity of theta functions which we repeatedly use is the triple product identity

Finally, we will also use the following shift properties of the theta functions.

B Lattice sums

In this appendix we provide the details of evaluating the lattice sum over the shifted lattice

Let us now perform the lattice sum without any shifts. This is the (0, 0) sector. The last term is zero. Hence the final expression for the lattice sum is

For the case (a, b) = (0, 1), the weight vectors are the same as in (B.3). To evaluate the phase we use the shift in (B.2) and the weight vectors to get

λ_A · γ = n_1 + n_2, λ_B · γ = n_1 + 1/2 + n_2 + 1/2. (B.9)

The lattice sum is

θ_2^6 θ_4^2 − θ_4^6 θ_2^2. (B.14)

Note that this is the case where there are corrections due to the shift, and factors present due to the even-integer constraint and the extra phase. The overall negative sign is due to the overall phase in the definition (B.1).
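The sum and infinite-product representations of the theta constants used in these appendices (special cases of the Jacobi triple product identity) can be checked numerically. A small sketch; the nome convention q = e^{2πiτ}, with the series evaluated at a real sample point, is my assumption, since the extracted text does not fix it:

```python
# Sum vs. product forms of the theta constants at z = 0, convention q = e^{2 pi i tau}:
#   theta2 = sum_n q^{(n+1/2)^2/2}       = 2 q^{1/8} prod (1-q^n)(1+q^n)^2
#   theta3 = sum_n q^{n^2/2}             = prod (1-q^n)(1+q^{n-1/2})^2
#   theta4 = sum_n (-1)^n q^{n^2/2}      = prod (1-q^n)(1-q^{n-1/2})^2

q = 0.1   # sample point 0 < q < 1
M = 60    # truncation of sums and products

theta2_sum = 2 * sum(q ** ((n + 0.5) ** 2 / 2) for n in range(M))
theta3_sum = 1 + 2 * sum(q ** (n * n / 2) for n in range(1, M))
theta4_sum = 1 + 2 * sum((-1) ** n * q ** (n * n / 2) for n in range(1, M))

theta2_prod = 2 * q ** 0.125
theta3_prod = 1.0
theta4_prod = 1.0
for n in range(1, M):
    theta2_prod *= (1 - q ** n) * (1 + q ** n) ** 2
    theta3_prod *= (1 - q ** n) * (1 + q ** (n - 0.5)) ** 2
    theta4_prod *= (1 - q ** n) * (1 - q ** (n - 0.5)) ** 2
```

At this truncation the two representations agree to machine precision.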
C Details for the Z2 orbifold

This appendix provides the details of the evaluation of the following trace:

F_{m_1,m_2,n_1,n_2}(a, r, b, s; q) = Tr_{m_1,m_2,n_1,n_2; g^a g′^r; RR} g^b g′^s e^{iπ(F^{T^4} + F^{T^2})} F^{T^2} q^{L′_0} q̄^{L̄′_0}. (C.1)

The orbifold actions g and g′ are defined in (3.2). We label the various sectors in terms of only the action of g; the action of g′ is summed over in each of these sectors. Also, in the above trace the bosonic oscillators in the holomorphic direction of the T^2 are not included, since they have already been included in (3.11). Due to the presence of the fermionic zero modes associated with T^4, which lies along the 6, 7, 8, 9 directions, the trace vanishes for a = 0, b = 0, irrespective of the values of r and s. Therefore we have

F_{m_1,m_2,n_1,n_2}(0, r, 0, s; q) = 0. (C.2)

Therefore from the definition of F in (3.12) we see that this implies

The only difference in this trace from that of (C.4) is the insertion of g′ in the trace. This picks up the factor (−1)^{m_1} on the state carrying m_1 units of momentum along the circle y^4. Now the following traces vanish:

F_{m_1,m_2,n_1,n_2}(0, 1, 1, 0; q) = F_{m_1,m_2,n_1,n_2}(0, 1, 1, 1; q) = 0. (C.6)

This is because in this sector the winding numbers along y^6 are half-integer moded, and the action of g as well as gg′ reverses the sign of these modes; therefore they do not contribute to the trace. Thus we have

Here the factor of 16 in the first line is due to the 16 twisted sectors localized at the 16 fixed points of T^4 at y^m = 0, π for m = 6, 7, 8, 9. To arrive at the second line we have used the product representation of θ_4 and the identity (A.5). Again, the bosonic and fermionic oscillators in the anti-holomorphic sector cancel, leaving behind the bosonic oscillators in the holomorphic sector. These oscillators are half-integer moded since they belong to the twisted sector. Now

F_{m_1,m_2,n_1,n_2}(1, 0, 0, 1; q) = 0.
(C.9)

This is because the action of g′ exchanges the fixed points pairwise; the twisted-sector states are off-diagonal, and therefore the trace vanishes.

Here the states twisted by gg′ are now labelled by the fixed points y^6 = π/2, 3π/2, y^m = 0, π for m = 7, 8, 9. The rest of the analysis to obtain the above equation is the same as that in (C.8), but note that here the winding n_1 ∈ Z + 1/2 due to the twisting by g′. Finally,

F_{m_1,m_2,n_1,n_2}(1, 1, 0, 1; q) = 0. (C.11)
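The counting behind the factor of 16 and the vanishing of the g-twisted trace with a g′ insertion can be illustrated with a toy enumeration (illustrative only; coordinates are taken with arbitrary unit radii, which is my normalization, not the paper's):

```python
import math
from itertools import product

PI = math.pi

# The inversion g: y -> -y on T^4 fixes y^m = 0 or pi on each of the four
# circles, giving the 2^4 = 16 fixed points behind the factor of 16 in the
# g-twisted sector.
fixed_points = set(product((0.0, PI), repeat=4))

# g' acts as the half shift y^6 -> y^6 + pi; it maps the fixed-point set to
# itself but moves every point (0 <-> pi on the first circle), i.e. it
# exchanges the fixed points pairwise, so g-twisted states are off-diagonal
# under g' and drop out of the trace -- the mechanism quoted for (C.9).
def g_prime(p):
    return ((p[0] + PI) % (2 * PI), p[1], p[2], p[3])

acts_freely = all(g_prime(p) in fixed_points and g_prime(p) != p
                  for p in fixed_points)
```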
PTEN hamartoma tumor syndrome: Clinical and genetic characterization in pediatric patients

Objective: The aim of this study was to provide a full characterization of a cohort of 11 pediatric patients diagnosed with PTEN hamartoma tumor syndrome (PHTS).

Patients and methods: Eleven patients with a genetic diagnosis of PHTS were recruited between February 2019 and April 2023. Clinical, imaging, demographic, and genetic data were retrospectively collected from their hospital medical histories.

Results: Regarding clinical manifestations, macrocephaly was the leading sign, present in all patients. Frontal bossing was the most frequent dysmorphism. Neurological issues were present in most patients. Dental malformations were described for the first time, being present in 27% of the patients. Brain MRI showed anomalies in 57% of the patients. No tumoral lesions were present at the time of the study. Regarding genetics, 72% of the alterations were in the tensin-type C2 domain of the PTEN protein. We identified four PTEN genetic alterations for the first time.

Conclusions: PTEN mutations appear with a wide variety of clinical signs and symptoms, sometimes associated with phenotypes which do not fit the classical clinical diagnostic criteria for PHTS. We recommend carrying out a genetic study to establish an early diagnosis in children with significant macrocephaly. This facilitates personalized monitoring and enables anticipation of potential PHTS-related complications.
Introduction

PTEN hamartoma tumor syndrome (PHTS) is a disease with a broad spectrum of signs and symptoms. First described in 1993, it includes the classical Cowden syndrome (CS), Bannayan-Riley-Ruvalcaba syndrome (BRRS), Lhermitte-Duclos disease (LDD), and Proteus and Proteus-like syndrome. The phenotypic spectrum of PHTS has been evolving and expanding, paralleling the increasing accessibility of genetic diagnosis. There are currently many other alterations related to PTEN pathogenic variants, such as neurodevelopmental disorders, segmentary overgrowth, autistic spectrum disorder (ASD), or macrocephaly. In the pediatric population, classic forms with mucocutaneous manifestations, hamartomatous lesions, and malignancies are less common than developmental delay, brain magnetic resonance imaging (MRI) alterations, and growth disorders such as macrocephaly, overweight, or limb asymmetries [1].

The etiology of PHTS lies with the PTEN (phosphatase and tensin homolog deleted on chromosome 10) gene. The protein encoded by this gene is a dual phosphatase with both lipid and protein activity. It is ubiquitously expressed in different cells. As regards the lipid activity, it dephosphorylates phosphatidylinositol-3,4,5-phosphate (PIP3), a mediator in the MAPK pathway. This action induces cell cycle arrest in G1 and apoptosis, regulating cell growth. It is known as a tumor suppressor, mutated in different cancer types. Moreover, it has been associated with insulin regulation pathways and mitochondrial metabolism [2]. Pathogenic variants of this gene have been reported to cause PHTS. The exact prevalence of PHTS is unknown, due to the high variability in the manifestations and the difficulties of carrying out massive genetic testing. The prevalence of CS was estimated at 1:250,000 in the Dutch population, although the global real prevalence of PHTS is likely to be higher [3].
Different types of genetic variants affecting the PTEN gene have been reported, without a clear genotype-phenotype correlation. An apparent lack of missense mutations has been described in LDD [3]. However, missense variants 5′ to or within the phosphatase core motif have been associated with involvement of five or more organs, although none of these genotype-phenotype correlations has been confirmed in large case series [4]. PTEN pathogenic variants include missense and nonsense nucleotide variants, deletions, insertions, and splicing mutations, all of them with autosomal dominant inheritance [5]. The frequency of de novo mutations is estimated to be between 10.7 and 47.6% [6]. Mosaicism for PTEN has also been described in PHTS.

The penetrance of PHTS is near 100% by the fourth decade in patients with a pathogenic variant in PTEN [2]. The clinical manifestations can vary between individuals: macrocephaly and neurodevelopmental disorders are more frequent in childhood, while classical symptoms (intestinal polyps, malignancies) are more frequent in patients diagnosed in adulthood, without a clear evolution from one spectrum to the other.

The literature regarding PHTS in children is limited. The largest series of cases published in PubMed includes just sixteen cases, so we aim to describe a cohort of eleven children with a pathogenic variant of PTEN, diagnosed in the "Reference Unit of Rare Diseases Advanced Diagnosis" (DiERCyL), to provide more data to the literature detailing the clinical, genetic, and other issues related to this condition.

Patients and methods

This study includes patients diagnosed with PHTS between February 2019 and April 2023 in DiERCyL. Demographic, clinical, and genetic data were recorded retrospectively from their hospital medical history.
Genetic studies

The genetic studies were carried out in the Laboratory of Molecular Genetics and Pharmacogenetics at the University Hospital of Salamanca. Genomic DNA was extracted from peripheral blood leukocytes by magnetic-bead or silica-membrane-based nucleic acid purification. Whole exome sequencing was performed on the NextSeq 500 platform (Illumina; San Diego, CA). For this purpose, DNA libraries were prepared using TruSeq technology (Illumina; San Diego, CA) and captured using xGen Exome technology (Integrated DNA Technologies, IDT; Coralville, IA). A bioinformatic study of the DNA sequences obtained was performed by comparison with the reference genome version GRCh37/hg19. The genetic variants detected were confirmed by Sanger sequencing. The pathogenicity of the variants was assessed according to the American College of Medical Genetics and Genomics (ACMG) guidelines, using the ClinVar database and VarSome scores. Genetic studies were extended to parents and siblings when possible (six out of eleven index cases), although they did not present clinical features of PHTS.

Signs and symptoms

Regarding clinical signs, macrocephaly was defined as a percentile > 97, and the standard deviation (SD) was calculated according to the anthropometric standards published for the Spanish population [7]. Height and weight percentiles and SD were calculated according to Spanish growth studies from 2010 [8]. When required, a clinical examination by a pediatric neurologist was performed in order to evaluate ASD and developmental delay (DD). MRI was performed in all but four patients. Other complementary tests and oncologic follow-up were individualized according to the clinical spectrum and the individual risk.

Clinical and demographic data

As summarized in Table 1, we have studied 11 patients, 7 males and 4 females, diagnosed with PHTS. The mean age was 6.87 ± 3.02 (range 1.91-11.5) years. In our cohort, we have two siblings with the same mutation.
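For orientation on the macrocephaly cut-off used above (percentile > 97): under a normal-distribution assumption, an SD (z) score maps to a percentile via the error function. This is a minimal sketch for illustration only; the study itself used published Spanish anthropometric references [7, 8], not this formula:

```python
import math

def percentile_from_sd(z):
    """Percentile for an SD (z) score under a normal-distribution assumption.
    Illustrative only: the study used published Spanish anthropometric
    references, not this formula."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# The percentile > 97 cut-off corresponds to roughly z > +1.9 under normality;
# for comparison, the cohort's mean head-circumference SD score was +4.3.
```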
The age at diagnosis ranged from 1 year and 11 months to 11 years and 6 months. All genetic studies except one were requested by a pediatric neurologist; the remaining one was requested by a pediatric endocrinologist. The most prevalent diagnoses prior to the genetic study were macrocephaly (11/11), developmental delay (5/11), and overgrowth (3/11). The mean head circumference SD score was +4.3, ranging from +2.6 to +5. Regarding growth, three of our patients were diagnosed with some kind of overgrowth: one of them presented overweight (+3 SD), another tall stature (+2.35 SD), and the last one was diagnosed with tall stature with no data available about her height.

In general, our patients did not present facial dysmorphism, and most of the anomalies were secondary to macrocephaly. The facial phenotype was not described in two of the patients. In the rest of them, frontal bossing (5/9), prognathism (3/9), and dental anomalies such as dental agenesis or dental malocclusion (3/9) were the most frequent phenotypic manifestations. No facial asymmetries were reported, although two of the patients showed slight asymmetry of the lower limbs.

Cutaneous signs were observed in three of the patients. Patient 1 had café-au-lait macules and a hamartoma on the thumb. Patient 2 also showed café-au-lait macules, besides follicular keratosis and a keloid scar. Patient 3 had had a cervical lipoma removed. No other dermatological findings were described. Three of our patients showed joint laxity without presenting joint pain, dislocations, a Marfan phenotype, or other manifestations related to collagen disturbance such as varicose veins, hernias, or uterine/anal prolapse. Four of the patients presented tonsillar hypertrophy, one of them requiring surgical removal due to recurrent tonsillitis and sleep apnea/hypopnea syndrome.
Seven of our patients (64%) suffered some kind of developmental delay, and five of them (71%) presented clinical improvement in their symptomatology. Our cohort showed a wide range of neuropsychiatric involvement: six patients showed both speech and motor delay, and two patients showed only motor delay. ASD was reported in three patients (27%); the rest showed normal social behavior. Specific intelligence quotient tests were not performed.

Currently, none of our patients has developed any malignancy, although they are still young and oncologic screening is being conducted. Only patient 5 has thyroid involvement, with multinodular goiter, and he is awaiting surgical removal due to mild dysphagia and aesthetic reasons. A special case is patient 4, with a diagnosis of intention tremor and macrocephaly without any other neurological symptoms. This patient's phenotype included frontal bossing, prognathism, dental agenesis, short limbs and fingers, low-set ears, a short philtrum, a low nasal bridge, and hypertelorism. She had a cervical lipoma removed prior to diagnosis. As far as we know, none of these signs has been described in PHTS. Brain MRI showed an enlargement of perivascular spaces and a 2-mm tonsillar descent. A de novo nonsense mutation in the PTEN gene was detected during the genetic study.

Imaging studies

A brain MRI was performed in seven of our patients following symptomatic criteria. Three of the patients (43%) showed no abnormalities on MRI. Two patients presented enlarged perivascular spaces, one of them with a 2-mm tonsillar descent. Another patient was diagnosed with diffuse leukoencephalopathy, and patient 8 showed cerebellar asymmetry and cortical dysplasia compatible with LDD.
Genetic studies

Genetic analysis was performed in all the patients, and Sanger sequencing confirmed nine heterozygous pathogenic mutations, three of them not described before, to our knowledge. The most common mutation type was a nonsense point mutation, identified in six patients (55%). Two of the patients presented a missense point mutation and the other two a frameshift insertion. In patient 10, a deletion involving the region of the PTEN gene was found by CGH-array and confirmed by quantitative PCR.

Table 2 contains the genetic data of our patients. We were not able to establish a clear genotype-phenotype correlation because of the sample size, although the two missense mutations were in a hotspot region according to VarSome. This region includes a phosphatase domain that contains the CX5R signature motif for phosphatases [2]. One of our patients presented a deletion involving part of the PTEN gene. The rest of our patients' mutations affect the tensin-type C2 domain, and all of them are point nonsense mutations except for one frameshift insertion (patient 6). The mutation detected in patient 3 has been reported to generate a truncated protein affecting the functionality of the C-terminal C2 domain, with different phenotypic expressions [9-11]. The mutation in patient 8 affects a putative tyrosine phosphatase domain, altering the tertiary structure of the protein [12, 13]. The molecular effect of the rest of the detected mutations has not been described for the moment. Figure 1 shows the PTEN structure and the location of the detected mutations.
Segregation studies were performed in six of our patients, including both parents in four of them. The mutation found in the siblings (patient 9 and patient 11) was also present in their mother, suggesting maternal inheritance, although for the moment she does not present any symptoms or oncological history. Moreover, a de novo mutation was confirmed in two patients. Finally, in patients 1 and 2, just the father or the mother was studied, and no PTEN mutations were found.

Discussion

PHTS involves a broad clinical spectrum, where the oncologic risk is the main point under consideration. There is not much literature on this syndrome in the pediatric population. Through this retrospective study, we try to offer new data on clinical, genetic, and neuroimaging findings in our PHTS cohort.

The molecular approach to certain pathologies has made it possible to know the underlying cause behind many of them. On some occasions, a known disease or syndrome has been subclassified into several different pathologies according to their molecular substrate. In other cases, such as PHTS, a variety of different syndromes (e.g., CS, BRRS, PS…) have shown a common etiology, in this case alterations in the PTEN gene affecting its protein functionality. PHTS has been widely studied in adults due to its tendency to develop different kinds of tumors. However, it has barely been described in the pediatric population. We aim to offer new data in pediatric patients and to correlate our patients with the existing literature. In our study, we have identified four new PTEN alterations that need to be considered when studying PHTS.

There are no differences in prevalence associated with sex described in the literature. Several series of cases with PHTS show male preponderance, while others do not show any sex predisposition [5, 14-16]. In our cohort of 11 patients, seven are male. As developmental disorder is more common in males and it was the leading sign in most of our patients, this
might be associated with our findings [17].

As shown in the "Results" section, we were not able to establish a clear genotype-phenotype correlation, in line with previous studies, including a review of the clinical literature [18]. However, some hypotheses can be considered. For example, there are no data about missense variants detected in LDD in the literature; all reported cases are caused by a nonsense or frameshift variant, which is in line with the clinical and genetic data of patient 8 [3]. It has been suggested that pathogenic variants 5′ to or within the phosphatase core are associated with higher severity and involvement of five or more organs; however, these studies are based on an adult population, and no conclusions can be established in our cohort, although a narrower oncologic screening may be implemented in these patients [1]. One study hypothesizes a correlation between the phenotype and post-transcriptional PTEN expression and its splice variants. However, we cannot add new data because none of our mutations was in the promoter region and no transcriptional studies were conducted [9]. A more recent study, using an artificial humanized yeast model and evaluating lipid phosphatase activity of different PTEN mutations, showed a higher phosphatase activity in ASD-associated variants [19]. This fact, albeit interesting, is difficult to evaluate in real patients, but it opens a window for future research and a better understanding of PHTS.

Macrocephaly is a major sign in PHTS. All our patients presented macrocephaly, as has been reported in several other cohorts [5,20]. Seventy-eight percent of our sample (7/9) had a head circumference SD score over +4, greater than a French cohort and similar to an Italian cohort, which are the studies closest to the Spanish population [5,20].
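The head-circumference SD score quoted above is a simple standardization against a reference distribution. A minimal sketch follows; the reference mean and SD used here are hypothetical placeholders, whereas real work would look them up in WHO or national growth-chart tables for the child's age and sex:

```python
# Hypothetical reference values; real analyses use age- and sex-specific
# growth-chart tables (e.g., WHO standards), not fixed numbers.
def hc_sd_score(hc_cm, ref_mean_cm, ref_sd_cm):
    """Return the head-circumference SD (z) score against a reference."""
    return (hc_cm - ref_mean_cm) / ref_sd_cm

# A child measuring 57 cm against an assumed reference of 50 +/- 1.5 cm
# exceeds the +4 SD threshold used in the text.
score = hc_sd_score(57.0, 50.0, 1.5)
print(score > 4.0)
```

The same formula underlies the "+4 SD" cutoff used to compare cohorts in the text; only the reference tables differ between studies.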
According to a PTEN review in childhood, the most common facial characteristics described were frontal bossing, depression of the nasal bridge, horizontal eyebrows, and dolichocephaly [9]. Frontal bossing was present in approximately half of our patients, which could be secondary to the macrocephaly. Other common features described in the literature were less frequent in our patients. Only two patients presented depression of the nasal bridge and one patient showed acrocephaly, although dolichocephaly has been reported in other series [21]. Prognathism was described in three of our patients, while we have found just one case in the literature reporting this sign [22]. It is also remarkable that teeth alterations such as agenesis and malocclusion were present in three of the patients. This has not been described to our knowledge and could be secondary to gingival disturbances, which are more common, but it is also possible that PTEN mutations imply dental issues, because PTEN plays an important role in osteogenesis and in proliferation of dental pulp cells [18,23].

Neuropsychiatric involvement has also been described in the literature, with a wide spectrum of signs including motor and speech delay, ASD, and intellectual disability, which tends to improve, although some alterations persist through adulthood [5,18]. Seven of our patients (64%) presented some grade of developmental delay, which was only motor in two of them and motor and language-related in the rest. This prevalence is lower than that reported in other case series and comparable to findings from an Italian study [5,21]. Four of the affected patients showed an improvement in successive controls, and in the two patients who did not present neurological improvement this might be due to a short follow-up period. Developmental impairment during childhood with a subsequent normal cognitive outcome in adulthood has been previously reported [24].
ASD is another of the main neuropsychiatric issues related to PTEN alterations. Three of our patients (27%) had some symptoms of the autistic spectrum. These data are similar to those of other studies suggesting a strong association between PTEN and ASD [5]. The underlying cause could be an alteration in neuronal growth, survival, and migration via the PI3K/AKT pathway, as well as a synergy between PTEN mutations and other alterations in autism susceptibility genes [25].

There are several manifestations described on brain MRI in PHTS. However, it is difficult to assess accurately the prevalence of these alterations, especially in the pediatric population, since MRI is not a diagnostic test exempt from certain risks. In general terms, it was conducted only in those patients who showed neuropsychiatric signs. In our series, brain MRI was performed in seven patients, and three were completely normal (43%), a greater percentage than reported in other studies, especially considering that, in the Italian cohort, MRI was performed in all patients, including those without neurological involvement [5,15]. Enlarged perivascular spaces were found in 29% of our patients, less frequent than the 94% prevalence previously described [15]. In that study, 36% of the patients presented white matter abnormalities, defined as changes in signal intensity. This condition was detected in only one of our patients. One of our patients presented a tonsillar descent of 2 mm, which represents a lower prevalence than the 33-37% reported by other studies involving adults and children, respectively [5,26]. LDD is difficult to assess in children due to the lack of the typical "tiger-stripe" pattern on T2. One of our patients presented cortical cerebellar dysplasia compatible with LDD. Despite being a pathognomonic criterion for CS in adults, this finding has not been reported in several case series in pediatric populations with PHTS, and the relation between LDD and PHTS in pediatric populations
remains unclear according to PHTS reviews in children, although in some case series of pediatric patients with LDD, some patients meet CS clinical criteria [18].

Vascular features such as hemangiomas, cavernomas, or arteriovenous malformations, which are a major sign in several of the classic syndromes related to PTEN alterations (CS, BRRS…), have not been reported in our cohort, although they may appear later in the natural evolution of the disease [18].

PTEN alterations have been frequently associated with skin features, which were a CS criterion even before the discovery of the PTEN gene. Hamartomatous growths of the same or different tissues (e.g., trichilemmomas, fibromas, lipomas) can be detected in almost every part of the body, although they tend to appear on the face and around orifices [5,18]. Acral keratoses and tongue alterations are some of the most prevalent skin alterations in PHTS [18]. The presence of several of these features, especially in a macrocephalic child, is very suggestive of PHTS [5]. In our cohort, one patient presented a hamartoma on the thumb and another patient presented a cervical lipoma. The prevalence is lower than what has been reported in other studies, but this may be due to the age-related penetrance described for skin lesions [5,18]. Two of our patients presented café-au-lait macules, which is not a typical manifestation of PHTS, and its prevalence has not been studied for the moment, although there are several case reports which include this alteration [27-29]. One of these patients also presented keloid scarring. The PTEN gene may play an important role in keloid scarring, as suggested in a case-control study that demonstrated underexpression of PTEN in keloid samples compared to normal controls [30]. Penile freckling is another classic hallmark of PHTS, especially in the pediatric population. It has been reported in childhood from the first year of life and was absent in our cohort [18].
One of the main concerns of PHTS is the oncological risk. PTEN mutations have been associated with thyroid, breast, endometrium, and kidney tumors, among others that are less frequent [1]. Defining the exact risk of developing malignancies is difficult. Studies involving CS, BRRS, or PS patients prior to molecular diagnosis had an important recruitment bias. In our cohort, many of the newly diagnosed children were studied just because of macrocephaly and developmental problems. There is a lack of longitudinal studies in this group of patients to accurately estimate oncological risk. Although some malignant tumors have been reported in children, such as thyroid and renal cell carcinoma, granulosa cell tumor of the ovary, or colonic ganglioneuroma, none of our patients has presented any malignancy for the moment [31]. This lack of evidence in the literature makes it difficult to establish the best clinical management of these children. A 2019 review of PHTS in children proposes several management considerations for these patients, considering that, in spite of the lack of data about the prevalence of oncologic complications in children, there have been some cases reported [18,31]. Achieving an early diagnosis through a PTEN genetic study in pediatric patients with macrocephaly would allow an early diagnosis of possible malignancies by performing an accurate follow-up of these patients.

Conclusions

This study shows the wide variety of clinical signs and symptoms associated with PTEN mutations, which sometimes express phenotypes that do not meet any of the classic diagnostic criteria for CS. All except one of our patients were referred by a pediatric neurologist, and macrocephaly and neurodevelopmental issues were the main reasons to initiate genetic studies. We highly recommend looking for PTEN mutations in children with pronounced macrocephaly, especially if they present other symptoms such as neurodevelopmental disorders, ASD, certain facial dysmorphisms, or thyroid nodules.
In PHTS, as in other heterogeneous syndromes, it is important to describe as many clinical manifestations as possible in order to gain a better knowledge of the disorder and help other clinicians reach an early diagnosis. Being able to diagnose PHTS during childhood makes it possible to keep a closer follow-up of the patients and to detect certain associated complications earlier, improving the treatment and, subsequently, the prognosis and the quality of life. This is especially relevant in oncologic issues, in which an individualized screening for malignancies may be useful to detect tumors in earlier stages. Besides, as it is a hereditary syndrome, genetic counselling becomes possible with an early diagnosis.

Molecular diagnosis may be difficult to carry out in many centers. Whole exome sequencing usually takes too much time and too many resources to reach a diagnosis accurately. That is why being able to direct the molecular study according to the clinical features will be very helpful to save both money and time.

Fig. 1 PTEN structure and location of the mutations detected

Table 1 Demographic and clinical data, as well as the reason for referral to genetic study, of the patients

Table 2 Genetic alterations detected in the patients. Mutation location was determined using the mRNA reference sequence NM_000314.8. Hotspot region refers to areas of DNA which are more likely to mutate according to the information from UniProt obtained via the VarSome platform
Self-Induced Quasistationary Magnetic Fields

The interaction of electromagnetic radiation with temporally dispersive magnetic solids of small dimensions may show very special resonant behaviors. The internal fields of such samples are characterized by magnetostatic-potential scalar wave functions. The oscillating modes have the energy orthogonality properties and unusual pseudo-electric (gauge) fields. Because of a phase factor that makes the states single valued, a persistent magnetic current exists. This leads to the appearance of an eigen electric moment of a small disk sample. One of the intriguing features of the mode fields is dynamical symmetry breaking.

Introduction

For a localized region with a finite-space domain of charge distribution, the standard equations of electrostatics may lead to the appearance of so-called self-induced electrostatic fields [1] (stationary states), with quantized permittivities corresponding to discrete potential-eigenfunction states [1]. The self-induced electric fields in small samples considered by Kapuścik [1] are pure static fields. The displacement currents in the Maxwell equations can be negligibly small, so the oscillating fields are quasistationary fields [3]. For the case considered by Fredkin and Mayergoyz, one neglects the magnetic displacement current and has quasistationary electric fields. A dual situation (with respect to quasielectrostatic resonances) is demonstrated for quasistationary magnetic fields in small samples with strong temporal dispersion of the permeability tensor: µt = µt(ω). In such small samples, variation of the electric energy is negligibly small compared to variation of the magnetic energy, and so one can neglect the electric displacement current in the Maxwell equations [3]. These magnetic samples can exhibit magnetostatic resonance behavior at microwaves [4-7]. For resonance modes, there are resonance values of the permeability µ. So one may call these modes the self-induced quasimagnetostatic fields.
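The quasi-magnetostatic limit described above can be written compactly: dropping the electric displacement current from the Ampère-Maxwell law makes the RF magnetic field curl-free, hence derivable from a scalar potential. The following is a standard writing of that limit in our own notation (consistent with Walker's formulation cited below, not the paper's garbled originals):

```latex
% Quasi-magnetostatic limit: neglect the electric displacement current
\nabla \times \vec{H} = 0 \quad\Rightarrow\quad \vec{H} = -\nabla\psi ,
\qquad
\nabla \cdot \vec{B} = 0 ,
\qquad
\vec{B} = \overleftrightarrow{\mu}(\omega)\,\vec{H} .

% Combining the three relations gives Walker-type equation
% for the magnetostatic potential:
\nabla \cdot \bigl[\, \overleftrightarrow{\mu}(\omega)\,\nabla\psi \,\bigr] = 0 .
```

Resonances then appear at frequencies where the dispersive permeability tensor admits nontrivial solutions ψ satisfying the sample's boundary conditions.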
When one neglects the displacement currents, one can introduce the notion of a scalar potential: an electrostatic potential φ for quasielectrostatic fields and a magnetostatic potential ψ for quasimagnetostatic fields. These potentials, however, do not have the same physical meaning as in a situation of pure electrostatics and magnetostatics. If a sample does not possess any magnetic anisotropy, we have Eq. (2). Similarly, for magnetostatic resonances in small magnetic objects one neglects the electric displacement current. If a sample does not possess any dielectric anisotropy, we have Eq. (4). As it follows from Eqs. (2) and (4), the electric field in small resonant dielectric objects, as well as the magnetic field in small resonant magnetic objects, varies linearly with time. This leads, however, to arbitrarily large fields at early and late times, and is excluded on physical grounds. An evident conclusion suggests itself at once: the electric (for electrostatic resonances) and magnetic (for magnetostatic resonances) fields are constant quantities. This contradicts, however, the fact of temporally dispersive media and any resonant conditions. Another conclusion is more unexpected: for the case of electrostatic resonances the Ampère-Maxwell law is not valid, and for the case of magnetostatic resonances the Faraday law is not valid. The purpose of this paper is to demonstrate that self-induced quasistationary fields of magnetostatic (MS) resonance modes in small samples are rather Schrödinger-like (or even Dirac-like) fields than Maxwell-like fields. MS oscillations in small objects are characterized by pseudo-electric (gauge) fields. The power flow density for propagating quasistationary magnetic modes is discussed below. MS ferromagnetism has a character essentially different from exchange ferromagnetism [9,10].
When field differences across the sample become comparable to the bulk demagnetizing fields, the local-oscillator approximation is no longer valid, and indeed, under certain circumstances, entirely new spin dynamics behavior can be observed. This dynamics behavior is the following. Precession of magnetization about the vector of a bias magnetic field produces a small oscillating magnetization m and a resulting dynamic demagnetizing field H, which reacts back on the precession, raising the resonant frequency. Vectors H and m are coupled by a differential relation. This, together with the Landau-Lifshitz equation, leads to a complicated integro-differential equation for the mode solutions. Usually, to calculate these effects, Walker's [5] differential formulation is used, and the general solution of this equation is expressed through a fictitious MS-potential function ψ. Such a way of solution is used both for continuous-wave FMR [11,12] and NMR [13] measurements. The question, however, arises: is the MS-potential wave function ψ really a fictitious function? The power flow density of MS waves propagating along the z axis is expressed as Eq. (6), where ez is the unit vector along the z axis. This expression can be obtained in two ways. As was shown in [14], one derives Eq. (6) from a spectral problem formulation based on quasistatic operator equations for two wave functions: the MS-potential function ψ and the magnetic flux density B. This operator equation involves a differential-matrix operator L acting on a vector function included in the domain of definition of the operator. In this derivation, no DME are used. Another derivation is based on the use of DME: the power flow density of MS waves formally corresponds to the Poynting vector obtained for the curl electric field and the potential (quasimagnetostatic) magnetic field [12]. This reveals (together with McDonald's remarks [8] shown above) a certain physical contradiction.
The contradiction becomes evident when one considers the gauge transformation for MS-wave fields derived from the DME. In the supposition that there exists a curl electric field E defined by the Faraday law, one can introduce a magnetic vector potential. Based on the Faraday law, one obtains an equation showing that formally two types of gauges are possible. The first type of gauge gives Eq. (12). The second type of gauge is written as Eq. (13) and leads to an equation showing that any sources of the electric field are not defined, and thus the electric field is not defined at all. So only the first type of gauge, giving Eq. (12), should be taken into account. The main point, however, is that the gauge transformation considered above does not fall under the known gauge transformations, neither the Lorentz gauge nor the Coulomb gauge [15], and cannot formally lead to the wave equation. Moreover, to have a wave process one should suppose that there exists a certain physical mechanism describing the effect of transformation between the curl electric field and the potential magnetic field. From a classical electromagnetic point of view, one does not have such a physical mechanism.

The gauge electric fields for magnetostatic oscillations

MS oscillations in a one-dimensional linear structure are completely described by the scalar wave function ψ. In the case of an MS wave propagating along the z axis in a lossless structure, one has a Schrödinger-like equation [14,16,17] with imaginary coefficients. Based on this equation one can find the normalized average MS energy of a propagating mode. The second-order homogeneous differential equation for the MS-potential wave function, the Walker equation [5], we write in a form involving a second-order differential operator. Let us represent the MS-potential wave function as a propagating wave in a certain waveguide structure, where χ is the MS-potential membrane function and k is a propagation constant along the z axis.
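For an axially magnetized ferrite rod, the Walker equation and the propagating-wave (membrane-function) ansatz named above take the following explicit form. The notation here is ours, reconstructed from the standard literature rather than from the paper's garbled originals; µ denotes the diagonal element of the Polder permeability tensor inside the ferrite:

```latex
% Walker's equation inside an axially (z-) magnetized ferrite,
% with diagonal permeability-tensor element \mu:
\mu \left( \frac{\partial^{2}\psi}{\partial r^{2}}
 + \frac{1}{r}\frac{\partial \psi}{\partial r}
 + \frac{1}{r^{2}}\frac{\partial^{2}\psi}{\partial \theta^{2}} \right)
 + \frac{\partial^{2}\psi}{\partial z^{2}} = 0 .

% Propagating-wave ansatz with membrane function \chi
% and propagation constant k along z:
\psi(r,\theta,z,t) = \chi(r,\theta)\, e^{\,i(\omega t - k z)} .
```

Outside the ferrite, µ = 1 and the same equation reduces to Laplace's equation, so the eigenvalue problem couples the interior gyrotropic solution to an exterior harmonic one through boundary conditions.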
The eigenvalue equation for MS mode q in an axially magnetized ferrite rod is expressed through the quantities defined for the ferrite region. In accordance with the Ritz method, it is sufficient to use basic functions from the energetic functional space with application of the essential boundary conditions [18]. For a constant bias magnetic field, the energy eigenvalue problem for MS waves in a ferrite disk resonator is formulated as the problem defined by the differential equation together with the corresponding (essential) boundary conditions [14,16,17]. A two-dimensional ("in-plane") differential operator and the energy are determined with use of a unit dimensional coefficient g. The energy orthonormality in a ferrite disk is described accordingly. Evidently, the equation is satisfied for the NBCs, but not for the EBCs. For the NBC problem described by Eqs. (7), we represent the function V for propagating waves through MS-wave membrane functions (denoted by a tilde), where κ is a propagation constant along the z axis. The eigenvalue equation for MS mode m follows. Formulation of the NBC spectral problem is based on the homogeneous boundary conditions for the radial component of B, into which the radial and azimuth components of the RF magnetic field enter. Since the Hamiltonian is conserved, there should be single-valuedness of the eigenfunctions [20]. Since the eigenstates of Eq. (28) are not single valued, one should find a phase factor that will make the states single valued. Following a standard way of solving boundary problems in mathematical physics [18,21], let us consider two joint boundary problems: the main boundary problem and the conjugate boundary problem. The problems are described by differential equations which are similar to Eq. (28).
The main problem is expressed by one differential equation and the conjugate problem by another. From a formal point of view, it is supposed initially that these are different equations: there are different differential operators, different eigenfunctions, and different eigenvalues. A form of the differential operator one gets from integration by parts; in this case the operator is a self-conjugate operator. We demand continuity of φ and Br on the border C, so the boundary condition (36) should be rewritten accordingly. We now uncover the expression for the magnetic flux density B in Eq. (37) for the ferrite region. In the above equation we represented the contour integral (37) as a sum of two contour integrals. Let us introduce a new membrane function η. The function φ changes sign when θ is rotated by 2π. Therefore, in order to cancel this sign change, γ± must change its sign to preserve the single-valued nature of φ. From this we conclude that the admissible phase factors satisfy a condition in which u is a real quantity. We introduce now a problem with the eigenvalue equation conjugate to Eq. (48). The transformation (39) restores the single-valuedness, but now there is a nonzero vector-potential-type term. There are the positive and negative vector-potential-type terms. The superscript "m" means that this is a magnetic vector-potential-type term. The function η is identical to the function χ. The confinement effect for magnetic-dipolar oscillations requires proper phase relationships to guarantee single-valuedness of the wave functions. To compensate for sign ambiguities and thus to make the wave functions single valued, we added a vector-potential-type term to the MS-potential Hamiltonian. This procedure is similar to the procedure made by Mead for the Born-Oppenheimer wave functions [22,23]. The corresponding flux of the pseudo-electric field ∈ (the gauge field) through a circle of radius ℜ is obtained analogously to [22].
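The sign-compensation argument above can be made concrete with a half-integer azimuthal phase factor, in the style of Berry-phase treatments of the Born-Oppenheimer problem. The expressions below are a sketch in our own notation, not the paper's exact formulas:

```latex
% A membrane function odd under \theta \to \theta + 2\pi ...
\varphi(\theta + 2\pi) = -\,\varphi(\theta)
\quad\Rightarrow\quad
\eta(\theta) = e^{\pm i\theta/2}\,\varphi(\theta)
\ \ \text{is single valued.}

% Restoring single-valuedness costs a vector-potential-type term,
% whose circulation round a closed path is the Berry-type phase:
\oint \vec{A}^{\,(m)} \cdot d\vec{\ell} = \pm \pi .
```

The two signs correspond to the positive and negative fluxes of the pseudo-electric gauge field discussed next, which must be inequivalent to avoid cancellation.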
The energy levels are periodic in the electric flux. There should be positive and negative fluxes. These different-sign fluxes should be inequivalent to avoid cancellation. Similar to electromagnetic theory, the vector potential A(m) is defined up to a gauge transformation. By performing the formal transformation it is easy to show the corresponding invariance. So the gauge electric field ∈ is not related to the Faraday-law electric field E.

Persistent magnetic currents in magnetic-oscillation disks

The above circulation can be observable. The above analysis of a phase factor that makes the states single valued, and so makes the total Hamiltonian conserved, is related to a topological effect in a closed system. In this case the results are gauge invariant and the Stokes theorem can be used. In such a closed system, there should be a certain internal mechanism which creates a non-zero vector potential. This internal mechanism becomes evident when one compares the EBC (providing single-valuedness) with the NBC. Let us formally introduce the quantity of a magnetic current. We can rewrite the boundary condition (31) in terms of a density of an effective boundary magnetic current. In the supposition that the membrane ("flat") functions χ form a complete basis in the energy functional space with use of boundary condition (21), it becomes evident that the effective boundary magnetic current slips from the main properties of this functional space. This current, being a persistent magnetic current, cannot be considered as a single-valued function. A singular "border" MS-potential function δ(m) is described by Eq. (43). For a certain MS oscillating mode in a ferrite disk we can represent the annular magnetic field through a function ξ(z) that describes the z-distribution of the MS potential in the ferrite disk [14].
For a circular effective boundary magnetic current we now obtain Eq. (58). The fields existing inside a ferrite disk form a very special field structure outside the disk. For nonzero circulation one can formally define an electric moment of the whole ferrite disk resonator (in a region far away from the disk), following [24], where h is the disk thickness. In the above consideration, transport round a closed path (which gives Berry's phase factor π) is the excursion of the system in time. The circular motion described by the "border" MS-potential functions δ(m) is a time-reversal-odd process. Also µa is a time-reversal-odd function and, therefore, the electric moment should be a time-reversal-even function. At the same time, since the magnetic current is an axial vector, it follows that the corresponding moment vector is a polar vector. So the electric moment ae is a parity-odd, time-reversal-even function.

Symmetry properties of oscillating magnetic modes

Self-induced quasistationary magnetic fields are characterized by dynamical symmetry breaking. Let us introduce the quantity Q. The energy eigenstate (see Eq. (25)) is determined by two waves propagating in a ferrite disk: the forward and backward waves with respect to the z axis [17]. Since in a normally magnetized disk the forward and backward waves propagate along opposite directions of a bias magnetic field, they are time-reversal-odd waves. These waves are characterized by different signs of µa [11]. The circular motion described by the "border" MS-potential functions is a time-reversal-odd process as well. Evidently, for a given energy eigenstate there should be the same sign of Q (and, therefore, the same direction of the electric moment) for the forward and backward waves. At the same time, the direction of the "spinning rotation" with respect to the direction of the polar vector is different for the forward and backward waves. So one has different symmetry properties of the forward and backward waves.
To a certain extent, this resembles the "particle-antiparticle" symmetry properties in elementary particle physics. The above analysis gives evidence for four types of oscillating modes: two different-symmetry (forward and backward) modes for Q > 0 and two different-symmetry (forward and backward) modes for Q < 0, as it follows from the theoretical analysis [14,16,17] and experimental studies [6,7,25].

Conclusion

The problem of the self-induced quasielectrostatic and quasimagnetostatic fields is especially important for understanding the mechanisms of interaction of small temporally-dispersive-material samples with electromagnetic radiation. In particular, electrostatic resonances of isolated nanoparticles have recently attracted substantial interest because of the intriguing possibility of obtaining very strong and localized electric fields. However, while the theory predicts multiresonance electrostatic (plasmon) oscillations in small temporally-dispersive-permittivity samples [2,26,27], experiments on the electromagnetic response [28] show, in fact, only a very few absorption peaks. Contrarily, in the case of small temporally-dispersive-permeability disks one can find (both from the theory [14,16,17] and experiments [6,7,25]) pictures of multiresonance magnetostatic oscillations. The present paper gives further explanation of these phenomena. Our main standpoint is that, unlike the known results for the self-induced quasielectrostatic fields, the self-induced quasimagnetostatic fields in small magnetic samples are Hilbert-space modes. Together with interaction with the magnetic component of the electromagnetic radiation, a small magnetic disk interacts with the electric component of the electromagnetic field. This is because of the dynamical symmetry properties of magnetostatic modes. The dynamical symmetry breaking in quasistatic magnetic oscillations shows a special-type gauge transformation for the fields.
GTExome: Modeling commonly expressed missense mutations in the human genome

A web application, GTExome, is described that quickly identifies, classifies, and models missense mutations in commonly expressed human proteins. GTExome can be used to categorize genomic mutation data with tissue-specific expression data from the Genotype-Tissue Expression (GTEx) project. Commonly expressed missense mutations in proteins from a wide range of tissue types can be selected and assessed for modeling suitability. Information about the consequences of each mutation is provided to the user, including whether disulfide bonds, hydrogen bonds, or salt bridges are broken, buried prolines are introduced, buried charges are created or lost, charge is swapped, a buried glycine is replaced, or the residue that would be removed is a proline in the cis configuration. Also, if the mutation site is in a binding pocket, the number of pockets and their volumes are reported. The user can assess this information and then select from available experimental or computationally predicted structures of native proteins to create, visualize, and download a model of the mutated protein using Fast and Accurate Side-chain Protein Repacking (FASPR). For AlphaFold-modeled proteins, confidence scores for native proteins are provided. Using this tool, we explored a set of 9,666 common missense mutations from a variety of tissues from GTEx and show that most mutations can be modeled using this tool to facilitate studies of protein-protein and protein-drug interactions. The open-source tool is freely available at https://pharmacogenomics.clas.ucdenver.edu/gtexome/

Introduction

Missense mutations are responsible for many genetic diseases through a variety of mechanisms 1 and explain many drug side effects.
2 There are over 28 million known missense mutations that have been discovered by genome-wide association studies to date 3 and a total of over 71 million feasible missense mutations are possible, 4 making experimental study of all of them intractable. Only 17% of native human proteins have experimental structures available 5 and missense structures are rarer still. AlphaFold is a deep learning method that predicts 3D protein structure from the amino acid sequence based on known protein structures with similar sequences. 6 By using artificial intelligence to model entire proteomes, including that of humans, 7 AlphaFold has opened new methods to study protein structure, and many groups continue to improve upon the publicly available code for AlphaFold. 8 AlphaFold structures have been used to study protein-protein interactions, 9 protein function, 10 and small molecule docking. 11,12 In one analysis, AlphaFold was used to model missense mutations in three proteins where a specific mutation was known to disrupt the main chain packing of the protein. 13 In those cases, AlphaFold did not capture the changes seen in experimental structures caused by the single nucleotide variation (SNV). A more systematic exploration of missense modeling has not been conducted and would be difficult and time-prohibitive given the currently available methods for generating structures. Using the publicly available cloud-based version of the AlphaFold software, called ColabFold, it takes about 45 minutes to create a single missense variant. 14 Here we describe an open-access, high-throughput, web-based tool, called GTExome, for identifying and visualizing the effect of single missense mutations on the three-dimensional structure of proteins. Native protein inputs can be selected from either experimental structures or AlphaFold-predicted structures. The missense mutation data come from aggregated large-scale sequencing data from 60,706 exomes available in the public database gnomAD.
15 To help focus searches on more commonly expressed proteins and to aid matching hypotheses to tissues where these proteins are common, tissue-specific gene expression data from the Genotype-Tissue Expression (GTEx) database was used to scaffold access to the SNV data. This tissue-specific data is coupled to the SNV data, allowing for searches based on protein expression levels measured from RNA-Seq experiments in different tissues. 16 GTExome provides lists of mutations for download that have been discovered for these commonly expressed proteins. Using GTExome, we created three-dimensional structures for 9,666 of the most common mutations in common proteins and analyzed their suitability as models. To select genes based on expression in different tissue types, the gtex tab is selected (Figure 1, step 1a). The genes can be selected based on the absolute expression in transcripts per million (TPM) or as a ratio of expression in one or more selected tissues to the remaining tissues. To enter a gene directly without using tissue-specific expression data, the exac tab can be selected. Any gnomAD gene name entered there will lead directly to a list of SNVs for that gene (Figure 1, step 1b). Alternatively, if both a gene (by ENSG number) and a specific SNV (by HGVS Consequence) are already known, one can select the refold tab, enter this information, and proceed directly to the mutation analysis page (Figure 1, step 1c).
Results

In steps 4 and 5, after a specific mutation is selected by one of the three entry points, the possibility that the mutation results in a protein with a fold similar to the native one is evaluated. In some cases, we expect the mutations will minimally disrupt backbone structure. In these cases, the user can create a new model for the missense variant using the same three-dimensional backbone coordinates from the experimental or modeled native protein. The user can select from the highest-resolution available experimental non-SNV-containing structure or from an AlphaFold model of the native protein. To accommodate the mutation, side chains can be repacked within a user-defined region to provide a structure as close to the native as possible while avoiding side chain conflicts. This is accomplished using Fast and Accurate Side-chain Protein Repacking (FASPR). 17 FASPR is used to sample the side-chain rotamers for each amino acid within the assigned radius of the missense location. Atomic interaction energies are calculated using a scoring function, and the side-chain packing search is performed using a deterministic searching algorithm combining self-energy checking, dead-end elimination, and tree decomposition. An analysis of how the repacking radius affects the number of residues that end up with different side chain conformations, in contrast to direct modeling using ColabFold (supporting info), reveals that a repacking radius of 30 Å is sufficient to allow the side chains in the region to adopt their lowest possible conformations. In contrast to ColabFold, GTExome allows for nearly instant structure creation for any of the millions of known or hypothetical missense mutations in the exome. FASPR repacks residue side chains while maintaining backbone structure. This minimizes changes to the native structure, providing a more accurate model for non-disruptive mutations. Users are provided with information that can help in deciding if the
accuracy of the model is sufficient for their use and can select a region of the protein in which the side chain atoms are repacked. Like Missense3D, 18 warnings include if there is a swapped charge in a residue, a cis proline is replaced, there is a gain or loss of a buried charge, loss of a buried glycine, a buried proline is introduced, or loss of a residue that hydrogen-bonds with another residue. Furthermore, for the 9,660 most common mutations, if the mutation is adjacent to a binding pocket as determined by fpocket, 19 the volume of any pockets that contain the residue at the site of the SNV is provided. Higher pLDDT scores correlate with better correspondence in small molecule binding sites compared to experimentally determined structures 11 and more accurate binding pocket predictions. 20 If a hydrogen bond is removed by the mutation (identified if hydrogen donor atoms are within a 3.0 Å distance of hydrogen acceptor atoms), the mutation is flagged. Similarly, disruption of a salt bridge, where oppositely charged atoms are within a 3.2 Å distance, is flagged. These distance values are the default parameters used in VMD to identify these types of bonds. 21 Disulfide bond presence was directly searched for using labeled ssbonds in the Protein Data Bank (PDB) file. Charge switches were identified if a negatively charged amino acid (Glu, Asp) was mutated to a positively charged amino acid (His, Lys, Arg) or vice versa. Gain or loss of a buried charge was determined if a positively or negatively charged amino acid either replaced or was replaced by a non-charged amino acid. The gain of a buried proline or loss of a glycine was flagged if the residue introduced by the mutation was a proline or the residue being replaced was a glycine. The gain of a buried hydrophilic residue was identified if the mutated residue was hydrophilic (Ser, Thr, Cys, Tyr, Asn, Gln) while the native residue was not already hydrophilic and a hydrogen bond acceptor was within binding distance.
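The residue-level consequence checks described above amount to simple set tests over amino acid classes. A minimal sketch, assuming three-letter residue codes; the 3.0 Å and 3.2 Å cutoffs are the VMD defaults cited in the text, while all function names here are illustrative rather than GTExome's actual code:

```python
# Sketch of the per-mutation consequence checks described above.
NEGATIVE = {"GLU", "ASP"}
POSITIVE = {"HIS", "LYS", "ARG"}
HYDROPHILIC = {"SER", "THR", "CYS", "TYR", "ASN", "GLN"}

HBOND_CUTOFF = 3.0        # donor-acceptor distance, Angstroms (VMD default)
SALT_BRIDGE_CUTOFF = 3.2  # distance between oppositely charged atoms (VMD default)

def charge_swapped(native, mutant):
    """Glu/Asp mutated to His/Lys/Arg, or vice versa."""
    return ((native in NEGATIVE and mutant in POSITIVE) or
            (native in POSITIVE and mutant in NEGATIVE))

def buried_charge_changed(native, mutant):
    """Charged residue replaced by a non-charged one, or the reverse."""
    charged = NEGATIVE | POSITIVE
    return (native in charged) != (mutant in charged)

def pro_gained_or_gly_lost(native, mutant):
    """Proline introduced by the mutation, or a glycine removed."""
    return mutant == "PRO" or native == "GLY"

def hydrophilic_gained(native, mutant):
    """Hydrophilic residue introduced where the native was not hydrophilic."""
    return mutant in HYDROPHILIC and native not in HYDROPHILIC
```

Geometric checks (hydrogen bonds, salt bridges) would additionally compare the relevant interatomic distances against the two cutoffs above.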
Prolines in the cis configuration, if mutated, will remain cis when repacked using FASPR. Torsion angles of prolines were calculated to determine whether they were in the cis or trans configuration before mutation. A torsion angle that deviates from the range of 150° to 190° is flagged as cis. Fpocket is used to determine residues that are part of binding pockets. 19 Pockets are determined based on the alpha spheres present in the protein. 22 If the mutation site is involved in any of the identified pockets, it is flagged. In addition, the druggability score for each pocket is provided to the user. 23 To model the mutation before repacking, the user can choose the radius around the mutation site where side chains will be repacked or select all residues to be repacked. The sequence of the protein is returned with the list of residues to be repacked listed in all caps and the mean pLDDT score of those residues provided. The repacked protein is shown to the viewer using a plotly-dash tool that includes a slider allowing the user to zoom in on specific residues, with the missense mutation highlighted (Figure 1, step 6). Finally, the user can select to download a structure file or go back to re-edit the input parameters (Figure 1, step 7).
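The omega-torsion test for cis prolines can be sketched as follows, assuming the four backbone atoms (CA and C of the preceding residue, N and CA of the proline) are available as coordinate arrays. The 150°-190° trans window follows the text; the dihedral formula is a standard construction, not code from GTExome:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle in degrees defined by four atomic positions."""
    b0 = p0 - p1
    b1 = p2 - p1
    b1 = b1 / np.linalg.norm(b1)
    b2 = p3 - p2
    # Project the flanking bond vectors onto the plane normal to the central bond.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

def is_cis_proline(ca_prev, c_prev, n_pro, ca_pro):
    """Flag the omega angle (CA-C-N-CA) as cis when it falls outside
    the 150-190 degree trans window used in the text."""
    omega = dihedral(ca_prev, c_prev, n_pro, ca_pro) % 360.0
    return not (150.0 <= omega <= 190.0)
```

For an ideal trans peptide bond the omega angle is near 180° and the residue is left unflagged; an eclipsed (cis) bond gives an angle near 0° and is flagged.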
In a sample trial that highlights one use of GTExome, we examined the genes that had an expression ratio at or above 1 relative to expression in all other tissues combined for each tissue in GTEx. Of the 9,666 genes identified, 78% of the SNVs were rare, with an allele frequency of <0.1. The mean pLDDT score across all the mutations was 70.43 at the site of mutation. No mutations resulted in the loss of a disulfide bond or of a buried glycine. Across all mutations studied, a change in charge from positive to negative or negative to positive occurred in 3% of the SNVs, loss of a cis proline in 7%, loss of a salt bridge in <1%, and loss of a possible buried hydrogen bond in 7%. Overall, we classified 80% of the SNVs as AlphaFold-suitable based on not having any of these deleterious changes, which are unlikely to be modeled correctly by applying FASPR to the native structure. Limiting to proteins with a pLDDT score at the SNV site above a stringent threshold of 90 would limit this further. The average pLDDT per tissue type for tissues with 10 or more proteins and the percentage suitable by tissue type are shown in Figure 3, and the rates of cis proline loss, salt bridge loss, or changes to buried hydrogen bonds or charge swaps are shown in Supporting Information (Figures S1-S5).
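The suitability classification described above can be expressed as a predicate over the per-SNV flags. The flag names below are hypothetical labels for the four deleterious changes listed in the text, and the optional pLDDT threshold mirrors the stringent cutoff of 90 mentioned there:

```python
# Hypothetical flag labels for the four deleterious checks in the text.
DELETERIOUS = {"charge_swap", "cis_proline_loss",
               "salt_bridge_loss", "buried_hbond_loss"}

def alphafold_suitable(flags, plddt_at_snv=None, plddt_threshold=None):
    """Suitable when no deleterious check fired; optionally also
    require the pLDDT at the SNV site to clear a threshold."""
    if DELETERIOUS & set(flags):
        return False
    if plddt_threshold is not None and plddt_at_snv is not None:
        return plddt_at_snv >= plddt_threshold
    return True
```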
AlphaMissense is a newly available tool that predicts the pathogenicity of the 71,140,163 feasible missense mutations that could occur in the human exome. We compared our trial set of mutations to predictions made by AlphaMissense and found that 88% of the 9,660 are classified as benign and 6% as pathogenic. This is higher than the 57% reported as benign in AlphaMissense for all potential missense mutations. This supports the idea that SNVs that are expressed in tissues are under more selective pressure and that this filtering method favors mutations that are likely to produce nonpathogenic proteins. In contrast, we found that the COSMIS score for the 8,780 mutations of this study set that were in the COSMIS database was -0.0156 for scores derived from AlphaFold structures and 0.02 for PDB-derived structures. Both values are higher than the -0.47 reported for the median SNV across the entire proteome. 24 This higher value indicates that the three-dimensional regions surrounding these mutations are under less evolutionary constraint than average. We also compared the 9,660 selected mutations against a list of mutations shown to cause adverse drug reactions that have been shown to be clinically relevant. 2 We identified 14 missense SNVs that overlap between our selected mutants and those in the study. These SNVs span 6 different genes of the 12 studied by Swen et al., all of which are available as AlphaFold models in GTExome. The SNV sites have an average pLDDT score of 95, and 10 out of 14 are rated as suitable for modeling based on the type of mutation (Supporting information, Table 1).
Currently, modeling the effects of missense mutations without experimental structures can be done by running AlphaFold directly on the sequence or running it remotely using ColabFold. We compared 47 SNVs modeled using GTExome and ColabFold and found very similar structures. The C-α RMSD values between the two structures after performing an alignment in Chimera had a median value of 0.52 Å for a range of protein sizes and types of mutations (Supporting information, Table 2, and Figure S1). An Application Programming Interface (API) endpoint is provided on the website to developers through a registration process to allow direct calls to many of the backend APIs that run the website. One access point provides the PDB file name, chain ID, and best resolution of an experimental structure for a given ENSG number. A second API takes a geneID and the HGVS consequence (CCID number from the ExAC browser) as input and returns the pLDDT score at the SNV, the mean pLDDT score for all residues in the protein, the number, volume, and druggability of any pockets adjacent to the SNV, the name of the AlphaFold file, a recommendation on whether the AlphaFold structure is suitable for modeling, and the individual results of each amino acid check (as provided in the web interface). The third API takes an input geneID, CCID, and repacking radius and returns the list of residues, the sequence length, the sequence of the protein with residues within the given radius in all caps, and the mean pLDDT score for the residues that would be repacked at that radius.
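A registered developer could assemble calls to the second and third endpoints along these lines. The `/api/mutation` path and the parameter names are assumptions for illustration (the real endpoint URLs come with API registration), so only the query construction is sketched:

```python
from urllib.parse import urlencode

BASE = "https://pharmacogenomics.clas.ucdenver.edu/gtexome"

def mutation_report_url(gene_id, ccid, radius=None):
    """Build a request URL for the mutation-report endpoint.
    gene_id: ENSG number; ccid: HGVS consequence (CCID).
    The '/api/mutation' path and parameter names are illustrative
    assumptions, not the documented API."""
    params = {"geneID": gene_id, "ccid": ccid}
    if radius is not None:
        # Third endpoint: also supply a repacking radius in Angstroms.
        params["radius"] = radius
    return f"{BASE}/api/mutation?{urlencode(params)}"
```

The JSON returned by such a call would then carry the pLDDT scores, pocket data, and per-check results described above.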
To compare predicted structures to actual experimental structures containing the same mutation, we searched the Structure Integration with Function, Taxonomy and Sequence (SIFTS) database 5 for files in the PDB that match the UniProt number of the parent protein for each of the 9,660 mutations. Then we examined each chain in each model of these PDB files to see if they had the same mutation. We found a total of 8 SNVs in our set that had verified matching experimental structures determined by X-ray crystallography in the PDB with high overlap to the modeled structure and that contain the same mutation. We examined the C-α RMSD values in comparing the two (supporting information, Table 3) after using Matchmaker in Chimera to align the two structures. The structures were similar in each case, with a mean RMSD of 0.55 Å for the overall structures; the region where the SNV occurred was very similar, and the side chains at the SNV had small rotational differences in only a few cases (Figure 4).

Materials and Methods

The GTExome web application was written using the Python-based Django framework. The backend comprises a MySQL relational database, and the frontend is rendered with a combination of Django, JavaScript, jQuery, and Vue for handling reactive elements, built alongside a second web application, Metabolovigilance. 25 Three ways to input protein mutation selection are provided: gtex, exac, and refold. For the gtex tab, data from GTEx is stored locally with expression levels for all proteins. The counts are available in TPM for each gene and stored in the database for the 54 tissue sites and 2 cell types obtained from nearly 1,000 individuals in the GTEx project. Gene lists are created either by a user-input range of TPM counts for genes in a tissue or combination of tissues, or from a ratio where the numerator is the sum of expression in selected tissues and the denominator is the sum of the remaining tissues.
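The ratio criterion for gtex-tab gene lists can be sketched as follows, assuming per-gene TPM values are held in a dict keyed by tissue; function names are illustrative, not GTExome's internals:

```python
def expression_ratio(tpm_by_tissue, selected):
    """Sum of TPM in the selected tissues divided by the sum of TPM
    in all remaining tissues, as described for gtex-tab gene lists."""
    num = sum(v for t, v in tpm_by_tissue.items() if t in selected)
    den = sum(v for t, v in tpm_by_tissue.items() if t not in selected)
    return num / den if den else float("inf")

def genes_above_ratio(expression, selected, threshold=1.0):
    """Genes whose selected-to-remaining TPM ratio meets the threshold
    (the trial in the text used a ratio at or above 1)."""
    return [g for g, tpm in expression.items()
            if expression_ratio(tpm, selected) >= threshold]
```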
The exac tab can be used to get a list of all SNVs for a single protein if the gene name is known. For the gtex and exac tabs, a button is provided to filter the results to display only missense mutations. The refold tab can be used if the specific gene ID and mutation ID are known. A search box on each page provides additional searching and filtering methods. Contact Set Missense Tolerance (COSMIS) code was modified to determine the experimental structure with the best match to the sequence provided. COSMIS was originally used to quantify the constraint of mutations on proteins. 22 The first part of their process, which utilizes the SIFTS database, 5 was repurposed. SIFTS was used to search for an experimental structure based on the geneID with the highest sequence overlap and the highest resolution, only if the SNV site was located within the structure file. 5 AlphaFold models created using AlphaFold2 are available on the GTExome server for 98.5% of human proteins. Proteins longer than 2,700 amino acids are excluded. 7 If both AlphaFold and experimental structures are available, a user can select which structure to use. The pLDDT score for AlphaFold structures and the resolution of experimental structures are provided to the user. pLDDT scores are a prediction of the global distance test (GDT) based on backbone RMSD values using the following equations:

GDT = (GDT_P1 + GDT_P2 + GDT_P4 + GDT_P8) / 4

where GDT_Px is the percentage of C-α atoms within a cutoff of x Å RMSD (x = 1, 2, 4, and 8), and the RMSD between predicted and actual positions in the x, y, and z coordinates is calculated as:

RMSD = sqrt((1/N) Σ_i [(x_i,pred − x_i,actual)² + (y_i,pred − y_i,actual)² + (z_i,pred − z_i,actual)²])

Both the average pLDDT score (the average pLDDT score of each residue in the protein) and the pLDDT score at the site of the mutation are presented. For proposed repacking, the mean pLDDT score of the residues that would be repacked is provided.
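The GDT calculation underlying these equations can be sketched directly; this assumes the predicted and actual C-α coordinates have already been superimposed:

```python
import numpy as np

def gdt_ts(pred, actual, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Mean over the 1/2/4/8 Angstrom cutoffs of the percentage of
    C-alpha atoms whose predicted-vs-actual distance falls within
    each cutoff. pred, actual: (N, 3) coordinate arrays, assumed aligned."""
    d = np.linalg.norm(pred - actual, axis=1)  # per-atom x,y,z distance
    return float(np.mean([(d <= c).mean() * 100.0 for c in cutoffs]))
```

A perfect prediction scores 100; a uniform 3 Å displacement passes only the 4 Å and 8 Å cutoffs and scores 50.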
Conclusion

Missense mutations in the human exome play a critical role in understanding human health; however, experimental structures of them are rare. We found 8 structures while examining close to 10,000 proteins that are commonly expressed in healthy adults. This underscores the need for accurate models of these structures. In this publication, a fast, high-throughput tool to model protein missense mutations is described. This tool increases the availability of mutated model proteins, as few such structures have been determined experimentally. 5 This tool makes 98.5% of the human proteome available through AlphaFold and/or experimental structures. Users can choose their preferred parameters for the model and are presented with information about the suitability of the model being created. The output structures can be used as a starting point for more advanced structure modeling, for docking studies, or to investigate energy differences between native and mutated structures using software such as Dynamut. 26 This fast tool provides the best available structures to users and can be updated as better models become available. Reporting whether the location of the missense mutation is part of a binding pocket facilitates the selection of models for studying the effects of missense mutations on drug-protein interactions. With this information, predicted structures can be used later in docking studies to understand possible side effects associated with missense mutations, or to research protein-protein interactions. A recent drug docking study using AlphaFold models 12 reported a median RMSD of 2.9 Å between AlphaFold models and corresponding experimentally determined structures, significantly more accurate than the RMSD of 4.3 Å for traditional models, indicating that despite imperfections in the side chain structures, the cavities are generally well represented. For virtual screening, this may be sufficient to identify possible hits.
In contrast to the three protein missense mutations reported previously, 13 which may have been selected precisely because they were anticipated to be difficult to model with AlphaFold, here we selected proteins based on expression levels and found high similarity between experimental and predicted structures and high similarity to models created directly by ColabFold. The comparison to experimental structures provides evidence of similarity between modeled and experimental structures, with the important caveat that the structures examined were deposited in the PDB between 1993 and 2015, which pre-dates AlphaFold, and were therefore likely part of the training set used to build the predictor.

Figure 1. Walkthrough of the GTExome website at https://pharmacogenomics.clas.ucdenver.edu/gtexome/ showing page views in the sequence used to generate a missense mutation model.

Figure 2. Number of missense SNVs, of the 9,660 examined, found by tissue type across the GTEx tissues with 10 or more SNVs above the ratio threshold.

Figure 3. (top) Percentage of the 9,660 SNVs examined found suitable for modeling, and (bottom) mean pLDDT score at the site of the SNV by tissue type (for tissues with 10 or more SNVs above the ratio threshold).
Two-dimensional Blue Native/SDS Gel Electrophoresis of Multiprotein Complexes from Helicobacter pylori*

The study of protein interactions constitutes an important domain to understand the physiology and pathogenesis of microorganisms. The two-dimensional blue native/SDS-PAGE was initially reported to analyze membrane protein complexes. In this study, both cytoplasmic and membrane complexes of a bacterium, the strain J99 of the gastric pathogen Helicobacter pylori, were analyzed by this method. It was possible to identify 34 different proteins grouped in 13 multiprotein complexes, 11 from the cytoplasm and two from the membrane, either previously reported partially or totally in the literature. Besides complexes involved in H. pylori physiology, this method allowed the description of interactions involving known pathogenic factors such as (i) urease with the heat shock protein GroEL or with the putative ketol-acid reductoisomerase IlvC and (ii) the cag pathogenicity island CagA protein with the DNA gyrase GyrA, as well as insight on the partners of TsaA, a peroxide reductase/stress-dependent molecular chaperone. The two-dimensional blue native/SDS-PAGE combined with mass spectrometry is a potential tool to study the differences in complexes isolated in various situations and also to study the interactions between bacterial and eucaryotic cell proteins.

Helicobacter pylori is a spiral, microaerophilic, Gram-negative bacterium that colonizes the gastric epithelium (1) in 40-60% of the world's population. Approximately 10-20% of these infected individuals suffer from diseases, such as peptic ulcer disease, or from conditions such as chronic atrophic gastritis, which can then evolve toward gastric cancer (2)(3)(4). Identification and characterization of multiprotein complexes are important steps in obtaining an integrative view of protein-protein interaction networks that determine protein function and cell behavior. The availability of complete DNA sequences of two H.
pylori strains (5,6) has led to the development of reliable proteome-wide approaches for a better understanding of the virulence mechanisms of the bacterium. One of the strategies of functional proteomics, a method used to identify gene function at the protein level, is the comprehensive analysis of protein-protein interactions related to the functional linkage among proteins and the analysis of functional cellular machinery to better understand the basis of the organism's functions. Recent progress in high-throughput technologies has allowed the characterization of protein-protein interactions more directly than ever before using procedures such as the two-hybrid assay (7), co-purification (8,9), or co-immunoprecipitation (10). The workhorse of experimental proteomics has been the two-hybrid screening method, although it has been criticized for its limited accuracy of results and its labor-intensive nature (11,12). Indeed, it is currently the most reliable technique for large-scale characterization of protein interactions in complete genomes (13). Protein chips may eventually provide large-scale simultaneous protein-protein interaction data (14,15), but technical problems (denaturation and substrate biocompatibility) must be overcome to scale up for high-throughput analysis. To date, these technologies have generated large interaction networks for bacteria (16), yeast (8,9), fruit flies (17), and nematode worms (18). Other approaches will undoubtedly become prominent as proteomics technology continues to evolve. The limiting factor for identifying protein complexes is the separation method, which must be performed under native conditions to prevent protein dissociation. Because of these limits, the two-dimensional blue native (2D BN) 1 /SDS-PAGE method was applied to study the complexome of the H. pylori reference strain J99. Protein identification was performed by using LC-MS/MS.
This highly resolvent separation method was initially described for the separation under native conditions of the membrane protein complexes of mitochondria (19). Later on, numerous studies focused on the protein complexes of the respiratory chain (20-27). More recently, because this method is reproducible, it was successfully used to detect protein complex deficiencies of mitochondria (28-32). In the same manner, this method was applied to the protein complexes of the chloroplast membranes of cyanobacteria (33) and protein complexes of the mitochondrial and chloroplast membranes of plants (34-41). Moreover, a similar method using agarose instead of acrylamide was developed to study protein complexes of approximate molecular mass greater than 1200 kDa (42). The anionic dye used in this method (Coomassie Brilliant Blue G-250) binds to the surface of all proteins, particularly on aromatic residues and on arginines. This binding of a large number of negatively charged dye molecules to proteins facilitates the multiprotein complex migration in a first-dimension native electrophoresis (BN-PAGE), and the tendency for protein aggregation is thus reduced considerably. Multiprotein complexes are separated according to their size and shape. Each multiprotein complex is denatured in a second-dimension electrophoresis (SDS-PAGE), and the protein alignment allows the identification of interacting proteins. Recent modifications have made it possible to apply this method to whole protein complexes (43)(44)(45). Indeed, this method was recently used to study the Escherichia coli cell envelope complexome (44), human embryonic kidney 293 cells (43), and human platelets (45). A dialysis must be performed on cytoplasmic extracts to eliminate salt and small molecules (43). This method was applied to analyze both H. pylori cytoplasmic and membrane complexes.
Purification steps such as liquid IEF or chromatography fractionation and enrichment were used to improve the multiprotein complex separation from the cytoplasm.

EXPERIMENTAL PROCEDURES

Bacterial Growth Conditions-H. pylori strain J99 (ATCC 700824) (6) was used for the experiments. H. pylori cells were cultured for 48 h on Wilkins Chalgren agar (Oxoid Ltd., Hampshire, UK) plates supplemented with 10% human blood and the following antibiotics: 10 mg/ml vancomycin (Lilly France S.A., Fergesheim, France), 2 mg/ml cefsulodin (Takeda France S.A., Puteaux, France), 5 µg/ml Fungizone (Bristol-Myers Squibb Co.), and 5 mg/ml trimethoprim (GlaxoSmithKline). A bacterial suspension was grown in brucella broth (BD Biosciences) supplemented with the same antibiotics cited above and with 10% fetal calf serum (Eurobio, Les Ulis, France). The plates were incubated at 37°C under microaerobic conditions (5% O2, 10% CO2, 85% N2), and the broth was incubated in a 1-liter loosely capped container, with 150 rpm agitation, in a microaerobic atmosphere at 37°C for less than 72 h. Bacteria harvested from two or three agar plates were suspended in 250 ml of brucella broth. The resulting liquid culture showed an approximate bacterial growth of 0.1% (w/v). For the development of the 2D BN/SDS-PAGE applied to H. pylori cytoplasmic extract and to obtain the final results for both the cytoplasmic and the membrane extract, a total of 10 g of H. pylori strain J99 was frozen.

Bacterial Lysate, Cytoplasmic, and Membrane Preparations-All of the bacteria and sample manipulations were performed at 4°C. Bacteria were harvested from culture by centrifugation at 2,500 × g for 10 min and washed twice in PBS buffer. Bacteria were suspended (v/v) in native extraction buffer A (750 mM 6-amino-n-caproic acid, 50 mM Tris/HCl, pH 7.0, at 4°C) supplemented with a 1 mM final concentration of PMSF and passed three times through a One Shot disruptor at 2 kilobars.
The lysate was centrifuged at 6,000 × g for 20 min, the supernatant was kept, a 0.2 mg/ml final concentration of DNase I was added, and digestion was carried out for 1 h at 25°C. Then the supernatant was centrifuged at 100,000 × g for 30 min at 4°C; the resulting supernatant and pellet contained the cytoplasmic proteins and the membrane proteins, respectively. For the cytoplasmic sample preparation, the supernatant was filtered with a Miracloth membrane (Calbiochem), and this sample was named fraction I. A concentration of 125-150 mg/ml protein was obtained, and the sample appeared limpid. For the membrane sample preparation, a bacterial lysate was centrifuged at 6,000 × g for 20 min, and the pellet was resuspended in buffer A and passed three times through a One Shot disruptor at 2 kilobars. The resulting lysate was centrifuged at 100,000 × g for 30 min, and the pellet was resuspended in buffer A and centrifuged once more. The washing step was performed three times. The protein extraction was then carried out by resuspending the membrane in 1-2 ml of buffer A supplemented with 2% β-D-dodecyl-n-maltoside detergent (Sigma). This sample was then centrifuged at 100,000 × g for 30 min, and the membrane multiprotein complexes contained in the supernatant were separated by 2D BN/SDS-PAGE. Concerning the membrane extract, the detergent used interfered with the Bradford protein dosage. For this reason, and to obtain sufficiently resolvent gels, a preliminary experiment was carried out with various sample dilutions directly loaded onto the first dimension of the 2D BN/SDS-PAGE gel to determine the quantity of material needed for the electrophoresis.

Purification Steps on the Cytoplasmic Sample-All of the steps were carried out at 4°C. Purification steps using physical or chemical properties of the multiprotein complexes constituted an important aspect to confirm protein-protein interactions.
Therefore, liquid IEF or exclusion filtration methods were used as purification steps before applying the 2D BN/SDS-PAGE. Liquid IEF purification was used to separate the multiprotein complexes according to their pI in a pH range from 3.5 to 10. An aliquot of fraction I (containing ~60 mg of protein) was analyzed in a Rotofor system (Rotofor Prep IEF Cell, Bio-Rad). The protein mixture was prepared according to the manufacturer's recommendations before filling the Rotofor chamber. The IEF method produced many protein precipitates in the most abundant protein fractions with a pI of ~5-6. Very low protein concentrations were found in the basic fractions although a concentration step with the Centricon kit (Amicon, Inc., Beverly, MA) was used. After the IEF of the protein sample, the resulting fractions were desalted in buffer A using a 5-ml HiTrap desalting column (Amersham Biosciences). Multiprotein complexes were recovered in 300-µl fractions. Indeed, for H. pylori cytoplasmic extracts, a preliminary dialysis is necessary to obtain highly resolvent gels. Here, dialysis was replaced by a desalting step, which allows the elimination of small molecules and salts, as was described for the purification of the human embryonic kidney cell line HEK293 (43). Gel filtration purification was also tested. An aliquot of fraction I (300 µl containing ~60 mg of protein) was loaded on a Superdex 200 column (Amersham Biosciences). Buffer A was run at a flow rate of 0.3 ml/min using the Akta FPLC (Amersham Biosciences). Multiprotein complexes were recovered in 250-µl fractions. The cytoplasmic sample was separated into five peaks of major interest: 1000, 580, 340, 100, and 93 kDa. This method allowed the adaptability of the 2D BN/SDS-PAGE acrylamide gradient according to the mass of interest of the complexes. The Centricon kit (Amicon, Inc.) with a cutoff of 50 kDa was also used to concentrate the purified samples when the sample concentration was lower than 50 mg/ml.
First Dimension: BN-PAGE-Sample preparation and BN-PAGE were carried out as described by Schagger and von Jagow (19) with the following minor modifications. The gel dimension was 22 cm × 16.5 cm × 1 mm. Separating gels with a linear 4-9% acrylamide gradient were used. Anode and cathode buffers contained 50 mM Tris and 75 mM glycine, and only the cathode buffer was supplemented with 0.002% Serva blue G (Serva, Heidelberg, Germany). Before loading the sample, 1 µl of sample buffer (500 mM 6-amino-n-caproic acid, 5% Serva blue G) was added. The gel was run overnight at 4°C at 1 watt. Thyroglobulin (669 kDa) and bovine serum albumin (66 kDa) were used for each BN-PAGE analysis as molecular weight size standards (Sigma). Different acrylamide gradients were tested for the BN-PAGE to improve the multiprotein complex separation. We found that the best multiprotein complex separation was obtained with a linear 4-9% acrylamide gradient. A certain balance needs to be found to optimize both focalization and separation of complexes with a mass greater than 60 kDa. The 4-9% acrylamide gradient was performed three times on the same sample and two more times on a sample originating from another extraction; this experiment showed that the migration distance of the multiprotein complexes was the same (data not shown). This is proof that the multiprotein complexes of H. pylori maintain their global conformation during the electrophoresis. Therefore, a molecular mass could be attributed to the cytoplasmic complexes based on the two different marker proteins mentioned above.

Second Dimension: SDS-PAGE-Individual lanes from the BN-PAGE were equilibrated for 5 min in an equilibrating buffer containing 1% SDS (w/v) and 0.125 mM Tris/HCl, pH 6.8, and then dipped into equilibrating buffer supplemented with 100 mM dithiothreitol (Acros Organics, Morris Plains, NJ) for 15 min.
Individual lanes were subsequently soaked in equilibrating buffer supplemented with 55 mM iodoacetamide (Sigma) for 15 min. A final washing step lasting 5 min was performed in the equilibrating buffer without supplement. Individual lanes were placed on a glass plate at the usual position for stacking gels. After positioning the spacers and covering with the second glass plate, the gel was brought into a vertical position. Then the 10% separating gel mixture was poured. After polymerization the stacking gel mixture was poured. Gel Staining-Silver staining was performed using a silver staining kit (Sigma) according to the manufacturer's instructions. In-gel Protein Digestion-Silver-stained proteins separated by SDS-PAGE were excised and destained using the PROTSIL2 silver staining kit (Sigma) according to the manufacturer's instructions. Spots were subsequently washed in distilled water/methanol/acetic acid (47.5:47.5:5) until completely destained. The solvent mixture was removed and replaced by acetonitrile. After shrinking of the gel pieces, acetonitrile was removed, and the gel pieces were dried in a vacuum centrifuge. Gel pieces were rehydrated in 10 ng/μl trypsin (Sigma) and 50 mM bicarbonate and incubated overnight at 37°C. The supernatant was removed and stored at −20°C, and the gel pieces were incubated for 15 min in 50 mM bicarbonate at room temperature under rotary shaking. This second supernatant was pooled with the previous one, and a distilled water/acetonitrile/acetic acid (47.5:47.5:5) solution was added to the gel pieces for 15 min. This step was repeated twice more. Supernatants were pooled and concentrated in a vacuum centrifuge to a final volume of 30 μl. Digests were finally acidified by the addition of 1.8 μl of acetic acid and stored at −20°C.
On-line Capillary HPLC Nanospray Ion Trap MS/MS Analysis-Peptide mixtures were analyzed by on-line capillary HPLC (LC Packings, Amsterdam, The Netherlands) coupled to a nanospray LCQ ion trap mass spectrometer (ThermoFinnigan, San Jose, CA). Peptides were separated on a 75-μm-inner diameter × 15-cm C18 PepMap™ column (LC Packings). The flow rate was set at 200 nl/min. Peptides were eluted using a 5–50% linear gradient of solvent B over 30 min (solvent A was 0.1% formic acid in 5% acetonitrile, and solvent B was 0.1% formic acid in 80% acetonitrile). The mass spectrometer was operated in positive ion mode at a 2-kV needle voltage and a 38-V capillary voltage. Data acquisition was performed in a data-dependent mode consisting of, alternately in a single run, a full scan MS over the range m/z 300–2000 and a full scan MS/MS in a dynamic exclusion mode. MS/MS data were acquired using a 3 m/z unit ion isolation window, a 35% relative collision energy, and a 5-min dynamic exclusion duration. Data Analysis-Data were analyzed by SEQUEST (ThermoFinnigan, Torrance, CA) against a subset of the National Center for Biotechnology Information (NCBI) database consisting of H. pylori strain J99 and 26695 protein sequences. Carbamidomethylation of cysteines (+57) and oxidation of methionines (+16) were considered as differential modifications. Only peptides with an Xcorr value greater than 1.5 (single charge), 2 (double charge), and 2.5 (triple charge) were retained. In all cases, ΔCn values had to be greater than 0.1.

RESULTS

Global Presentation of the Results-Multiprotein complexes from the cytoplasm were named "C," and those from the membrane were named "M." Putative multihomooligomeric protein complexes were named "H." Certain identifications (noted S1-S5) could not be attributed to complexes; they are not presented in Table I but are noted in the legends of the figures only.
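The charge-dependent SEQUEST acceptance criteria described above (Xcorr > 1.5/2.0/2.5 for 1+/2+/3+ peptides, ΔCn > 0.1 in all cases) amount to a simple per-match filter. A minimal sketch; the dictionary layout for a peptide-spectrum match is an assumption for illustration only:

```python
# Charge-dependent Xcorr thresholds from the Data Analysis section.
XCORR_MIN = {1: 1.5, 2: 2.0, 3: 2.5}

def accept_psm(psm):
    """Return True if a peptide-spectrum match passes both acceptance filters.

    psm is assumed to be a dict with 'charge', 'xcorr', and 'delta_cn' keys.
    """
    threshold = XCORR_MIN.get(psm["charge"])
    if threshold is None:  # charge states outside 1+ to 3+ are not scored here
        return False
    return psm["xcorr"] > threshold and psm["delta_cn"] > 0.1
```

Applying such a filter to every candidate match is what reduces the raw SEQUEST output to the retained peptide identifications.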
TABLE I. Description of the protein complexes identified in H. pylori reference strain J99 using 2D BN/SDS-PAGE. Multiprotein complexes named "C" and "M" correspond to complexes isolated from the cytoplasm and from the membrane compartments, respectively. Sample preparation before applying the 2D BN/SDS-PAGE is indicated for each protein complex. Putative multihomooligomeric protein complexes were named "H." The multiprotein complexes were all localized on 2D BN/SDS-PAGE gels performed in this study and are represented in Figs. c Protein complex identified when a liquid isoelectrofocalization purification (Rotofor system) was performed before applying 2D BN/SDS-PAGE (Fig. 2). d Protein complex identified when a gel filtration purification (Superdex 200 column) was performed before applying 2D BN/SDS-PAGE (Fig. 3). e Protein complex identified when no purification step was performed before applying 2D BN/SDS-PAGE (Fig. 1).

FIG. 1. Analysis of the crude cytoplasmic and membrane samples of H. pylori reference strain J99. The first dimension gel (BN-PAGE) was performed with an acrylamide gradient of 4–9% for the crude cytoplasmic sample (A) and 6–13% for the crude membrane sample (B). C and M represent complexes isolated from the cytoplasm and from the membrane compartments, respectively. H represents putative multihomooligomeric complexes. * represents spots where different proteins were identified (see Table I). A second migration of the C3 and M1 complexes is shown in Boxes 1 and 3, respectively. The migration of the C5 complex identified in Fig. 3 was also performed using the crude cytoplasmic extract, but the quantity of protein loaded on the gel was increased (Box 2). A second migration of spots S3 and S4 is shown in Box 4. Arrow A6 refers to Fig. 3. Two proteins were identified in spot S1: AlpB (or HopB; two peptides, coverage = 2%) and AlpA (or HopC; three peptides, coverage = 2%). Spot S2 corresponds to TsaA (five peptides, coverage = 31%). Spot S3 contains FrpB3 (22 peptides, coverage = 30%), CagA (four peptides, coverage = 2%), and ClpB (four peptides, coverage = 2%), and spot S4 contains BabB (seven peptides, coverage = 13%), HydB (or HyaB; 11 peptides, coverage = 18%), and HopM (or OMP5; six peptides, coverage = 7%). Spots D1–D14 correspond to monomers of AtpA, AtpD, GroEL, Tig, RecA, AddB, Tfs, TufA, JHP0971, JHP1356, Adk, PorC, CagL, and JHP1494, respectively.

FIG. 2. Protein complexes identified from H. pylori reference strain J99 when the cytoplasmic sample was purified using the liquid isoelectrofocalization method (Rotofor system) before applying 2D BN/SDS-PAGE. Analysis of the sample fraction containing protein complexes with a global pI of 5 is shown. The first dimension gel (BN-PAGE) was performed with an acrylamide gradient of 4–9%. H represents putative multihomooligomeric complexes. * represents spots where different proteins were identified (see Table I).

Proteins that are components of the same multiprotein complex co-migrate in the first dimension and are found aligned with a similar shape in the second dimension. This is the basic condition for the identification of a multiprotein complex. As an example, several spots were found to be perfectly aligned (Figs. 1A and 2), and the five proteins with a similar shape (spots 21–25) were attributed to the multiprotein complex named C8. However, the elongated form of these spots in Fig. 1A has a more rounded form in Fig. 2. In a further example, spots 4–6 (Fig. 3) fulfilled these criteria, and the three corresponding proteins were considered to be subunits of a multiprotein complex named C2. Aligned spots but with different shapes were considered to belong to different multiprotein complexes. Obvious examples are indicated by arrows (spots A1–A6) in Figs. 2 and 3. The spots A1 and A2 indicated by arrows (Fig. 2) are perfectly aligned; however, spot A1 has a rounded form, whereas spot A2 has a more elongated shape. For this reason, they were attributed to different complexes. Other examples of aligned spots with different shapes are indicated by arrows: 1) spots A3 and A4 in Fig. 2 and 2) spots A5 and A6 in Fig. 3. One can also notice that only spot A6 was identified in the crude extract (Fig. 1A). All these spots (spots A1–A6) were considered to belong to distinct complexes. The crude cytoplasmic and membrane protein complexes separated by 2D BN/SDS-PAGE are shown in Fig. 1, A and B, respectively. To increase the number of multiprotein complexes observed, non-denaturing purification steps were performed on the cytoplasmic extract before applying the 2D BN/SDS-PAGE separation, e.g. liquid IEF (Fig. 2) or exclusion filtration (Fig. 3). All of the multiprotein complexes identified in this study are described in Table I. Attempts to identify proteins were made on a large number of spots, but because some LC-MS/MS identifications failed, certain complexes have not been reported in this study. Different types of complexes were observed. Multiheterooligomeric complexes fulfilling the previously defined criteria were clearly identified, and in certain cases, these complexes were found several times. Complex C1 (spots 1–3) is an example; it is found in Figs. 2 and 3. The aligned spots are elongated in Fig. 3 but have a more marked curve in Fig. 2. This complex is no doubt present in Fig. 1A, but only spots 1 and 2 can be seen. Because spots 1 and 2 are more intensely colored than spot 3 (in Figs. 2 and 3) and spots 1 and 2 are very light in Fig. 1, a plausible explanation is that the latter could not be visualized in Fig. 1. One can also notice that the shape of spot 3 is more difficult to interpret because the gel migration is diffuse in this molecular mass range. Interestingly the genes jhp0631, jhp0632, and jhp0633 that encode these three proteins are located on the same operon (6). Complex C2 (spots 4–6) is clearly visualized in Fig. 3 but to a lesser extent than in Fig. 1A because spot 4 is not perfectly aligned with spots 5 and 6 in this figure. Indeed a mixture of proteins containing DnaN (23 peptides, coverage = 83%) and PorA (six peptides, coverage = 18%) was identified in spot 4 in the crude extract. In fact, spot 4 probably corresponds to two different spots, with the spot corresponding to DnaK masking the less intense spot corresponding to PorA. The complex C3 represented by spots 7–10 was identified several times in the crude extract (Fig. 1, A and Box 1). However, after the exclusion filtration purification step, not all of the spots comprising the complex were found. Only the most intense spots, i.e. spots 9 and 10, were found at a molecular mass of 500 kDa (Fig. 3). They both correspond to alkyl hydroperoxide reductase TsaA (Table I). After the liquid IEF purification step, only three intense spots were found at a molecular mass of 500 kDa, and they were also all identified as TsaA (Fig. 2, spots S5–S7). These results are in agreement with the fact that TsaA has been described as an abundant protein in H. pylori (46). The results also strongly suggest that spots 7 and 8 in complex C3, which were already very light in Fig. 1A, were probably not detected after the different purification steps due to their weak intensity or due to a loss of these two subunits during the purification process. In addition, after the liquid IEF purification step, all of the visible spots on the Fig. 2 gel were analyzed by LC-MS/MS, and the proteins corresponding to spots 7 and 8 were not found among the 75 analyzed spots (data not shown). Among the multiheterooligomeric complexes, certain ones were not identified until after the purification step. The exclusion filtration purification of the samples enabled the identification of complexes C4, C5, and C6 (Fig. 3).
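The apparent masses quoted for first-dimension spots (e.g. the 500-kDa TsaA spots) are read against the two BN-PAGE standards, thyroglobulin (669 kDa) and BSA (66 kDa). A minimal sketch of that interpolation, assuming migration distance is linear in log10(mass); the marker distances below are invented for illustration:

```python
import math

def bn_mass_kda(distance_mm, marker_near, marker_far):
    """Interpolate log10(mass) between two (distance_mm, mass_kDa) markers.

    marker_near / marker_far might be thyroglobulin (669 kDa) and BSA (66 kDa)
    measured on the same BN-PAGE lane as the complex of interest.
    """
    (d1, m1), (d2, m2) = marker_near, marker_far
    slope = (math.log10(m2) - math.log10(m1)) / (d2 - d1)
    return 10 ** (math.log10(m1) + slope * (distance_mm - d1))
```

A spot halfway between the two markers lands at their geometric-mean mass, which is a quick sanity check on any such calibration.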
The analysis of the sample fraction containing protein complexes eluted in the range of 580 kDa from the exclusion filtration allowed the identification of complex C4, comprised of two subunits. Indeed spots 11 and 12 are aligned and have the same elongated form. Similarly spots 13 and 14 with their lengthened form belong to complex C5 and were also found in the crude cytoplasmic extract when the first dimension gel was performed with a preparation containing more proteins (Fig. 1, A, H, and Box 2). Furthermore this complex was also found in other 2D BN/SDS-PAGE gels (data not shown). Lastly, complex C6 contains three subunits represented by the similar and aligned spots 15–17 (Fig. 3). Problems were encountered for certain identifications (Figs. 1A and 2). For example, spots 18–20 from complex C7 were found to be aligned with a similar shape, as were spots 21–25 from complex C8. Spots 18–20 (UreB, GroEL, and UreA) from complex C7 were also found in complexes C8 and C9, but the quantities of the corresponding proteins appeared lower in complexes C7 and C8 compared with complex C9, although this amount was estimated on a silver-stained gel, which is considered inadequate to provide a quantitative measure. This may suggest that the C7-"specific" subunits are present in C8 and C9 but are near or below the detection limit of the silver stain. However, these three complexes migrate at different molecular masses, suggesting different states of oligomerization. Consequently three different complexes were attributed. In addition, spots 26–32 that were found to be aligned and to all have the same shape in Fig. 1A were also identified in Fig. 2 but with the additional spots 33–35. These spots also seem to be present after purification by exclusion filtration (Fig. 3); unfortunately their identification after this purification step was not sufficient to attribute spots 26–32 and 33–35 either to a single complex or to two different complexes.
A hydroxyapatite affinity chromatography purification (HA Ultrogel column) was also performed, but the complex corresponding to spots 26–32 did not show an affinity to calcium and was directly eluted, as demonstrated by the 2D BN/SDS-PAGE analysis of this fraction (data not shown). In fact, the analyses by 2D BN/SDS-PAGE and LC-MS/MS of the unretained fraction and the fractions eluted with the 1–300 mM linear gradient of sodium phosphate buffer did not allow the confirmation of the presence of all of these spots in only one complex or in two different complexes because spots 26–32 were never identified in these different fractions (data not shown). The absence of spots 33–35 on the gel in Fig. 1A can be explained in two ways: 1) either the proteins present in spots 26–32 and 33–35 are all part of the same complex and spots 33–35, being less intense, were not identified in the crude extract, or 2) spots 26–32 belong to a different complex than that of spots 33–35 but with the same apparent pI (which would explain why they were not found in the crude extract except after the liquid IEF purification). As a conservative measure to avoid describing a false complex, these spots were attributed to two different complexes, i.e. complexes C9 and C10 containing spots 26–32 and 33–35, respectively. Another problem relates to the identification of a mixture of proteins in certain spots, and as a result, the same-shape criterion was more difficult to apply. Different complexes can indeed co-migrate in the first dimension, and therefore the spots are aligned on the second dimension. Accordingly complex C11 is comprised of spots 36 and 37 with a protein mixture in spot 37 (Fig. 1A). As a result, it is not possible to evaluate whether the protein present in spot 36 interacts with only one or with two proteins present in spot 37. To the contrary, because the proteins present in spot 37 are the subunits A and B of succinyl-CoA transferase (ScoA and ScoB) of H.
pylori and are known to interact together (16, 47), this complex was considered to contain JHP0739, with the two partners ScoA and ScoB (Table I). Furthermore spots H1–H5 (Figs. 1A and 2), which were very intense, were identified several times. The molecular mass observed during the migration in the first dimension of the complexes corresponding to these spots did not correspond to the molecular mass of the proteins identified in the second dimension. No other similarly formed spot was found in alignment with spots H1–H5 either before or after the different purifications. Several hypotheses could explain these incoherent migrations in the first and second dimensions: these spots could correspond to multihomooligomeric complexes, or they could also belong to multiheterooligomeric complexes that contain weakly expressed proteins that are not visible. Because these spots were identified several times after different purifications, it can be assumed that they are multihomooligomeric complexes. Furthermore these five complexes named H1–H5 have all already been described as multihomooligomers in other organisms (Table I), and most of the proteins involved in the H1–H4 complexes, e.g. TsaA, Pfr, SodB, and AroQ, have already been reported in multihomooligomeric protein complexes in H. pylori. For each of these five putative multihomooligomeric protein complexes named H1–H5, the putative number of subunits was estimated to be 35, 31, 12, 14, and 14, respectively. Concerning the crude membrane sample (Fig. 1B), the complex M1 was easily identified with the aligned spots 38–41 having a slightly elongated and oval form. The spots 42 and 43 were aligned; however, it was difficult to compare their forms in the second dimension clearly because the molecular mass corresponding to spot 43 is low, and the gel migration is diffuse in this molecular mass range. The co-migration in the first dimension seems to be correct because these two spots were observed several times (data not shown).
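The putative subunit counts for H1-H5 follow from dividing the complex mass observed in the first dimension by the monomer mass seen in the second dimension. A minimal sketch of that estimate; the example masses below are round illustrative values, not measurements from this study:

```python
def subunit_count(complex_kda, monomer_kda):
    """Nearest-integer stoichiometry estimate for a putative homooligomer.

    complex_kda: apparent mass from the first (native) dimension.
    monomer_kda: mass of the single protein found in the second dimension.
    """
    return max(1, round(complex_kda / monomer_kda))
```

Such an estimate is only as good as the native-gel mass calibration, which is one reason these stoichiometries are labeled putative in the text.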
Moreover, these proteins (spots 42 and 43) have been identified previously as the subunits A and B of the fumarate reductase (FrdA and FrdB) of H. pylori (48). This complex was named M2. We had difficulties identifying another complex. Indeed spots S3 and S4 were aligned and presented a similar, very elongated shape, and they were identified several times in the crude extract (Fig. 1, B and Box 4). However, the molecular mass observed during the first migration (~250 kDa) was lower than the molecular mass of all the proteins identified in the second dimension because protein mixtures were observed in these two spots: spot S3 contained the predicted iron-regulated outer membrane protein FrpB_3 (or FrpB3, JHP1405; 97 kDa), the cag pathogenicity island protein A CagA (or Cag26, JHP0495; 129 kDa), and the predicted ATP-dependent protease binding subunit/heat shock protein ClpB (JHP0249; 96 kDa); spot S4 contained the predicted adhesin BabB (or OMP19, JHP1164; 76 kDa), the large subunit of the quinone-reactive Ni/Fe-hydrogenase HydB (or HyaB, JHP0575; 64 kDa), and the predicted outer membrane protein HopM (or OMP5, JHP0212; 75 kDa). The difference between the molecular mass of the complex observed in the first dimension and that deduced from the migration of the proteins in the second dimension is too large for these proteins to belong to the same complex. These incoherent migrations in the first and second dimensions could be explained by the presence of at least two complexes that co-migrate in the first dimension and whose subunits have the same mass. Given these results, no complex including these proteins, considered to be important in the virulence of H. pylori, can be described yet, and complementary experiments must be carried out.
This example clearly reflects the various difficulties of interpretation of the results obtained by the 2D BN/SDS-PAGE technique as well as the frequent difficulty in assigning certain proteins to a complex when a mixture of proteins is present. A total of 13 multiprotein complexes containing 34 different proteins (46 in total) were identified from H. pylori strain J99. Among these complexes, 11 were obtained from the cytoplasm, and two were from the membrane. Seven multiprotein complexes identified in this study were described previously (Table I), either totally or partially, confirming the value of this method for studying the H. pylori complexome. As an example, several interactions between the different proteins from the cytoplasmic complex C9 (Figs. 1A and 2) were partially described in the literature (Table II). These interactions included the interaction between the two structural subunits, UreA and UreB, of the well known urease enzyme (49). Moreover, previous studies described the urease enzyme interacting with GroEL (Hsp60), a chaperone and heat shock protein (50). On the other hand, some protein interactions described in the complex C9 and reported by Hybrigenics (Rain et al. (16)) involved an intermediary protein (Table II) not identified in the current study, e.g. the interaction between GroEL and RpoB included the predicted outer membrane protein JHP0600. This cytoplasmic complex C9, which includes nine different proteins, contained the greatest number of interacting proteins observed in this study. In addition to heterooligomeric complexes, five putative multihomooligomeric protein complexes were identified from the cytoplasm (H1–H5, Table II). Concerning the membrane, very few complexes are reported in this study due to a poor resolution and/or visualization of the spots observed on the numerous gels performed, although the 2D BN/SDS-PAGE was initially reported for the separation of membrane protein complexes (51).
This suggests that membrane purifications should also improve the gel quality. Different oligomerization states were sometimes observed in certain complexes. For example, TsaA was found in different oligomerization states and in both cytoplasmic and membrane samples: C3 and spots H1 and S2 (Figs. 1, A and B; 2; and 3 and Table I). These results are in agreement with those of Backert et al. (46) who also identified TsaA, both in structure-bound and in soluble fractions of H. pylori, in eight oligomerization states in their 2D gel electrophoresis and mass spectrometry study. In addition, different migrations of TsaA were observed during the separation in the second dimension (Fig. 2, spots S5–S7; and Fig. 3, spots 9 and 10), suggesting that multiple isoforms of TsaA exist. Their occurrence can be explained by probable post-translational modifications modifying the physicochemical criteria (pI, molecular mass, and binding affinity) of the protein, as reported previously for numerous proteins (52, 53), thus altering the migration in the second dimension. In fact, H. pylori proteins are subject to a high degree of post-translational modification, and TsaA is among several proteins present in multiple isoforms in the bacterium (54). TsaA, originally identified as a 26-kDa antigen, belongs to a family of antioxidants called AhpC/TSA (alkyl hydroperoxide reductase/thiol-specific antioxidant) involved in the defense against oxidative stress and survival of the bacterium in the host.

TABLE II. Solid and dotted brackets represent direct protein-protein interactions and interactions including an intermediary protein (indicated in italics), respectively. UvrA corresponds to HP0705 and JHP0644 in H. pylori reference strains 26695 and J99, respectively. *, Dunn et al. (49); †, Evans et al. (50); §, Hybrigenics (Rain et al. (16)); ¶, the intermediary protein reported in this interaction is specific to H. pylori reference strain 26695, and no homolog exists in H. pylori reference strain J99.
More recently, TsaA was shown to switch from a peroxide reductase to a stress-dependent molecular chaperone function (55). H. pylori TsaA was initially reported to induce an immune response in H. pylori-infected patients with gastric adenocarcinoma (56) and also an immune response in H. pylori-infected Mongolian gerbils that is associated with the emergence of gastric diseases such as chronic active gastritis, gastric ulcer, and gastric cancer (57). TsaA belongs to the five immunodominant H. pylori proteins (58). Pentameric arrangements of homodimers of TsaA were described in H. pylori cytoplasm (59,60). In our study, TsaA was found in homooligomeric form but also in a complex including proteins JHP1030 and PepQ. The presence of JHP1030 with TsaA is not really surprising because JHP1030 is a predicted zinc-dependent mannitol dehydrogenase that possesses an oxidoreductase activity (inferred from electronic annotation). On the other hand, the presence of PepQ in this complex is more difficult to interpret because it is a predicted proline peptidase that has high homology with prolidases and X-prolyl dipeptidyl aminopeptidases and can be involved either in protein maturation or in nitrogen metabolism. Other experiments should make it possible to determine the exact role of these two partners of TsaA. Complexes Involved in H. pylori Virulence-Before considering virulence factors stricto sensu, complexes containing proteins involved in the energy metabolism (electron transport pathway, tricarboxylic acid cycle, and gluconeogenesis), which is a prerequisite for virulence, will be briefly presented. Three pyruvate ferredoxin oxidoreductases, PorA, PorB, and PorC (ex-PorG), involved in electron transport were found together in complex C2 (Table I and Figs. 1A and 3). The heterotetrameric POR complex is comprised of the subunits PorA, PorB, PorD, and PorC (61) and was reported previously to be essential because inactivation of porB appeared to be lethal for H. pylori (62). 
PorD, the fourth subunit of the POR complex, was not found in this study probably because its molecular mass (14.9 kDa) is near the border limit of the gel migration. POR has been implicated in the bioreduction of nitroimidazole drugs, particularly metronidazole used for H. pylori eradication (61,63). To understand the mechanisms of metronidazole activation and resistance, which are currently poorly characterized, a more detailed knowledge of the electron transport pathways of the enzymes involved would be helpful. Fumarate reductase catalyzes the reduction of fumarate to succinate in the Krebs cycle and appears to play an important role in the energy metabolism of H. pylori. The two catalytic subunits of H. pylori fumarate reductase, FrdA and FrdB (48), were found in the membrane complex M2 (Fig. 1B and Table I). Fumarate reductase is not necessary for H. pylori survival in vitro but is essential for H. pylori colonization of the mouse stomach (64). It was shown to be strongly immunogenic in sera from H. pylori-positive patients, suggesting that this protein involved in H. pylori energy metabolism could also be used as an anti-H. pylori vaccine candidate (65). Examples of multiprotein complexes including known H. pylori virulence factors are described below. The Urease Enzyme-H. pylori produces large amounts of urease that catalyzes the hydrolysis of urea to yield ammonia and carbon dioxide, neutralizing hydrogen ions before they can lower the intracellular pH, thus enabling the bacterium to survive in gastric acid and to colonize the gastric mucosa (66). This urease activity is thus required for colonization in vivo (67). The native urease complex was found in both the membrane and the cytoplasm of the bacteria. Different oligomerization forms of the urease were identified in the cytoplasm (Table I). 
Four urease complexes named C7, C8, C9, and M1 with a global pI of 5 and a molecular mass range of 900–1300 kDa were identified with crude and purified extracts (Figs. 1, A and B, and 2 and Table I). In fact, UreA and UreB are known to be present in multiple isoforms in H. pylori (54) and are among the predominant H. pylori proteins identified during cell infection (68). H. pylori is known to produce a 550-kDa heterohexameric enzyme composed of three α (UreA) and three β (UreB) subunits. Four heterohexamers (2200 kDa) form a 16-nm spherical complex also produced by the bacteria (69). This kind of oligomerization may be adapted for acid resistance. The separation resolution of the 2D BN/SDS-PAGE did not allow the visualization of large complexes with a molecular mass greater than 1500 kDa. All of the urease complexes identified in this study included the UreA/UreB/GroEL core. These three proteins were also reported to be present in structure-bound and in soluble fractions of H. pylori (46) and were strongly reactive to specific antibodies (58), indicating that this core is highly immunodominant in H. pylori. Under all pH conditions, the most abundant proteins observed were the urease structural subunit UreB and the chaperonin GroEL (70). A close interaction between GroEL, the co-chaperone protein GroES, and the urease enzyme had been suspected (50, 71), but until now no interaction between GroEL and the urease has been confirmed. The C8, C9, and M1 complexes included additional proteins besides the UreA/UreB/GroEL core. The C8 complex included the IlvC protein, a putative ketol-acid reductoisomerase involved in isoleucine-valine biosynthesis, and the thiol peroxidase Tpx involved in detoxification; the M1 complex included the FrpB_3 protein, a putative iron-regulated outer membrane protein; and the C9 complex included six other proteins (Tables I and II).
cag Pathogenicity Island (PAI) Proteins-CagA is the marker and one of the effectors of the cagPAI, translocated in gastric epithelial cells and conferring increased virulence to H. pylori strains (72). CagA was identified both in the cytoplasm and in the membrane: complex C5 and spot S3, respectively (Figs. 1, A and B, and 3 and Table I). This localization of CagA in the membrane was reported previously (46) and is probably due to its presence in the type IV secretion system in the H. pylori cell wall (73, 74). The multiprotein complex C5 including CagA and the DNA gyrase subunit A (GyrA) was found in the cytoplasmic sample with a molecular mass of 475 kDa (Figs. 1A and 3), suggesting a probable oligomerization of CagA and/or GyrA in this complex. This CagA-GyrA complex was also observed on other 2D BN/SDS-PAGE gels (data not shown). Cultures of CagA-negative H. pylori strains show slower growth than the CagA-positive strains (72, 75). This phenomenon could be explained by the CagA/GyrA interaction described in this study because this interaction may have a favorable effect on the normal DNA replication process, leading to a better development of the bacterial cell. However, Hybrigenics (Rain et al. (16)) described an interaction between GyrA and the cagPAI protein I (CagI), including a small intermediary cysteine-rich protein B (HcpB), which is a weak β-lactamase.

DISCUSSION

The classical methods widely used for large scale analysis of protein interactions are the yeast two-hybrid and the tandem affinity protein tag methods, with the main limitation of expressing chimerical proteins. With the yeast two-hybrid method, mostly binary interactions are identified. Furthermore the proteins of interest are often expressed in a heterologous system, the yeast, and the interactions are observed in a particular compartment, the nucleus. Moreover the reliability of high throughput yeast two-hybrid assays is ~50% (76).
In the tandem affinity protein tag method, the protein complexes are analyzed in vivo, but the organism being studied needs to be competent for exogenous DNA. Another disadvantage of this method is that multihomooligomeric complexes cannot be analyzed. The 2D BN/SDS-PAGE used for the study of the H. pylori complexome also has some limitations. First, it can be difficult to easily assign each spot to independent protein complexes when they are located on the same vertical line in the second dimension; complexes C8 and C9 are very representative of this problem as well as spots containing a mixture of proteins such as spots S3 and S4 for which no complex could be attributed. For this reason, it is sometimes necessary not only to analyze the crude extract but also to carry out various methods of purifications based on different physicochemical criteria (pI-, molecular mass-, or affinity-based purification). This makes it possible to validate the identified complex and to identify complexes sometimes with a better resolution and/or visualization. However, subunits of unstable complexes can be lost during these various purification steps, therefore leading to the description of incomplete complexes. Second, proteins that are weakly expressed in certain complexes may not be visible, again leading to the description of incomplete complexes or to false multihomooligomeric complexes. In addition, mixtures of proteins can sometimes be identified in the same spots, making it difficult to determine whether all of the identified proteins belong to the same complex. This was a frequently found case for the membrane samples analyzed in this study where the outer membrane proteins often had similar molecular mass and pI. For this reason, several complexes were not reported in this study. Third, the extraction is carried out under non-denaturing conditions, and therefore the complexes requiring denaturing conditions for extraction are not easily found. Examples include 1) H. 
pylori vacuolating cytotoxin VacA, which is quickly exported across the membrane (77, 78) but was not detected with this method, and 2) flagellar sheaths, anchored in the membrane and requiring denaturing extraction conditions (79), which were not detected either. However, the use of non-denaturing conditions is, in fact, the major advantage of this technique because it provides the native form of protein complexes and thus allows an analysis of the physiologic state of the organism. The method was highly reproducible, and even if it is not possible to determine the totality of the H. pylori complexes, this technique is of great interest for comparing the complexes of a bacterium in two different physiological states. Moreover, immunoblotting applied to the 2D BN/SDS-PAGE would lead to the identification of specific protein complexes, particularly those including strain-specific proteins and virulence factors. Therefore, the comparison of the currently available results obtained with all of the different methods used to analyze protein complexes will allow an accurate identification of the H. pylori protein-protein interaction network.
Acknowledgment-We are grateful to Corinne Asencio for technical support in bacterial cell culture.
* This work was supported in part by the Conseil Régional d'Aquitaine and the Association pour la Recherche sur le Cancer. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
¶ Recipient of a predoctoral fellowship from the Fondation pour la Recherche Médicale.
Study of the Viability of the Departmental Health Insurance Unit in the Health District of Koungheul (Senegal)
The promotion of departmental units of health insurance (UDAM) is an essential path advocated by the State of Senegal for access to universal health coverage. The UDAM is a professionalized departmental mutual with an expanded package of covered services, allowing beneficiary patients early recourse to the public health structures of the district. After two years of implementation (2014-2015) of this initiative in a pilot phase, this work studies the viability of the Koungheul UDAM. A quantitative and qualitative evaluation study was conducted from July 20 to August 15, 2016. For the quantitative part, the study population consisted of all the records (registers and membership records) of members of the Koungheul UDAM. A comprehensive recruitment of files was conducted, and an observation grid was crafted to collect the data. The qualitative part targeted UDAM officials, members, and non-members. Individual interview and focus group guides were used to collect the perceptions of these different targets on the UDAM. Quantitative data were entered and analyzed using Excel 2007. Content analysis was conducted on the qualitative data. Administratively, the UDAM had a good overall score for the quality of its operational monitoring, 85% in 2014 and 100% in 2015, but gaps were noted in the use of some tools for managing and monitoring its operation. Functionally, the numbers of members and beneficiaries increased during 2015, with monthly recovery rates around 100%. The penetration rate increased from 2% to 8% from 2014 to 2015. Technically, UDAM beneficiaries bore less of the cost of covered benefits than non-beneficiaries. Apart from this benefit to the populations, the UDAM was exposed to risks of abuse and adverse selection. Financially and economically, membership contributions failed to cover the cost of benefits, with loss ratios of 52% (2014) and 55% (2015).
The capital remained insufficient to cover its operating and investment expenses. Operating expense ratios for 2014 and 2015 were 85% and 176% respectively. The self-financing rate fell from 62% in 2014 to 35% in 2015. Apart from the administrative and operational dimensions, the other dimensions of sustainability (technical, financial and economic) are threatened once the partner withdraws. Thus, it is important that additional funding enable the UDAM to fill the financial gap left by the partner's withdrawal, so that it can face its charges and strengthen its outreach activities in order to reach the population and retain members.
Introduction
In Senegal, the percentages of women and men with no medical coverage are 94% and 92% respectively [1]. In 2012, the issue of UHC for Senegalese families was reaffirmed as a strong commitment to achieving the Sustainable Development Goals (SDGs). This priority resulted in the inclusion of UHC as one of the strategic axes of the National Health Development Program (PNDS) 2009-2018 and the National Strategy for Economic and Social Development (SNDES) 2013-2017 through its axis 2: "Accelerating access to basic social services, social protection and sustainable development" [2]. The new orientation of the Government of Senegal focuses on the informal and rural sector through community health mutuals as an essential strategic axis for achieving UHC. One of the objectives is "one local community, at least one mutual health insurance". It is in this dynamic that a departmental unit of health insurance (UDAM) has been in place since February 2014 in the department of Koungheul. Underutilization of care services is noted in this department, where the majority of the population is poor. As a result, this population cannot cope with direct payment, especially since the flat rates applied are considered high relative to their daily income.
In fact, in terms of the demand for care, apart from the free health initiatives (the Sésame plan, free health care for children under 5 years, and cesarean sections), the Koungheul populations continue to bear the costs of health care through the payment of a flat rate. To overcome these difficulties, a departmental mutual was set up under the Health Supply and Demand Support Program (PAODES) to promote people's buy-in. Membership in the departmental mutual would be a good alternative for reducing direct payment, given the financial contributions supported by the PAODES, the UDAM and the State. Recourse to the health mutual scheme as a means of financing health is only of interest if its viability as an organization is guaranteed [3]. The department of Koungheul, our study area, is located in the extreme east of the Kaffrine region, 350 km from Dakar. The population of Koungheul represents 29% of the total population of the Kaffrine region. In 2014, it was estimated at 179,201 inhabitants, with a density of 32 inhabitants per km² and a rural population representing 80%. On the economic front, the department is one of the poorest in Senegal, with a poverty rate of 52% according to the poverty monitoring survey in Senegal [4]. The labor force works in an informal sector where incomes are generally low and precarious. Moreover, this part of the population benefits little or not at all from social protection. The economy is essentially based on agriculture, livestock and informal trade. Agriculture is the main income-generating activity in the department, employing more than 60% of the workforce. According to the National Agency of Statistics and Demography, agricultural households represented 84.6% of the department in 2014 [5]. In terms of health, the district has one type II health center with an operating theater and 18 functional health posts.
In terms of the demand for care, other than the free health initiatives (Sésame plan, free care for children under 5 years and cesarean section), a UDAM was set up to facilitate access to care for populations within the district. Set up on May 3, 2014, Koungheul's UDAM met the challenge of negotiating with the small health mutuals, which ended up being absorbed by it, in order to establish a professionalized departmental health mutual with an agreement with all the health posts and the health center of the Koungheul district and a wider package of services. After two years of experience, it seems appropriate to evaluate the viability of the UDAM in the Koungheul department.
Methodology
This is a quantitative and qualitative evaluative study, which took place from July 20 to August 15, 2016.
Study Population
The study population was constituted by all the files (registers and membership records, activity reports) of the UDAM of Koungheul. Included were files (registers and membership records, activity reports) that belonged to the UDAM.
Sampling
Comprehensive recruitment of the files meeting the inclusion criteria was completed.
Collection Tools and Techniques
An observation grid provided information on the existence and use of management tools to assess administrative viability (Table 1).
Data Entry and Analysis
Microsoft Excel 2007 was used to enter and analyze the data. The data analysis made it possible to calculate the UDAM administrative, technical, functional, financial and economic viability indicators using the formulas illustrated in Table 1. The results were compared against International Labor Office (ILO) indicator standards (Table 3) [6]. Hidden costs correspond to the estimated cost of the resources made available to the UDAM free of charge.
Target Populations
The study focused on the following populations: UDAM officials (the director, the manager, the two collectors and the president), mutual members, and non-members.
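The viability indicators compared in the results (loss ratio, operating expense ratio, self-financing rate, penetration rate) are simple ratios. As a minimal sketch (the authors' Table 1 formulas are not reproduced in this excerpt, so the definitions below are the generic ones, and all monetary amounts are hypothetical), they can be computed as:

```python
# Generic mutual-health-insurance viability ratios, compared in the text
# against ILO-style thresholds.  NOTE: these definitions and all input
# amounts are illustrative assumptions, not the authors' Table 1.

def loss_ratio(benefit_costs: float, contributions: float) -> float:
    """Cost of covered benefits as a share of contributions (threshold: <= 75%)."""
    return benefit_costs / contributions

def operating_expense_ratio(operating_expenses: float, contributions: float) -> float:
    """Operating expenses as a share of contributions (threshold: <= 15%)."""
    return operating_expenses / contributions

def self_financing_rate(own_funds: float, total_expenses: float) -> float:
    """Share of total expenses covered by own funds (target: >= 100%)."""
    return own_funds / total_expenses

def penetration_rate(beneficiaries: int, target_population: int) -> float:
    """Share of the target population covered by the scheme."""
    return beneficiaries / target_population

contributions = 10_000_000  # hypothetical annual contributions, FCFA
print(f"{loss_ratio(5_500_000, contributions):.0%}")              # 55%
print(f"{operating_expense_ratio(8_500_000, contributions):.0%}") # 85%
print(f"{self_financing_rate(6_200_000, contributions):.0%}")     # 62%
# Koungheul's 2014 population (179,201) at roughly 8% penetration:
print(f"{penetration_rate(14_336, 179_201):.0%}")                 # 8%
```

With hypothetical amounts chosen to match the reported percentages, one can see directly why the text flags the scheme: a 55% loss ratio is acceptable, but an 85% operating expense ratio far exceeds the 15% threshold, and own funds cover only 62% of expenses.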
Data Collection Tools and Techniques
Individual interview guides were used for UDAM officials and members, and a focus group guide for non-members. The topics addressed with UDAM officials concerned: the process of setting up the UDAM, the steps that led to its establishment, the management of UDAM's activities, the difficulties encountered during the collection of contributions, UDAM's contribution to the state of health of the populations, and the strong and weak points of the UDAM. For the members, the topics were: information on the functioning of the UDAM, the degree of confidence in the UDAM, assessments of membership fees and contributions, the provision of care, and participation in decision-making. A focus group guide was used for non-members of the UDAM. The topics were: the therapeutic route in the event of illness, the difficulties encountered in the health structures, information on the existence of the UDAM, the acceptability of the UDAM, the reasons for non-membership in the UDAM, and the wish to join the UDAM. Individual interviews were conducted with UDAM officials. The 25 UDAM beneficiaries were interviewed at the UDAM headquarters or at the health center at the time of the general consultations. For the establishment of the focus group, with the support of the District Chief Medical Officer, participants were contacted directly at the main door of the health center; volunteers who agreed to participate formed a group of 12 non-adherents. Participants received clear and precise information on the objectives of the study and were free to accept or decline to participate in the survey. All gave their verbal consent. The focus group lasted about 30 minutes.
Data Analysis
The content analysis of the individual interviews and focus groups made it possible to categorize the information according to its frequency of appearance. This key information related to the perceptions of the populations regarding the UDAM.
Administrative Viability
At the end of the administrative viability analysis, the overall score for the quality of monitoring was respectively 85% and 100% in 2014 and 2015. The UDAM showed shortcomings in the use of certain tools for the management and monitoring of operations. These included deficiencies in the monitoring of the collection of contributions and membership fees, the monitoring of entitlement to benefits, the monitoring of the risk portfolio, and financial monitoring. Resolving these gaps remains important for the UDAM to be administratively viable (Table 3).
Functional Viability
The number of members and beneficiaries increased significantly in 2015, as attested by the gross growth rates. The UDAM has a great capacity to retain its members. With gross growth rates of over 100% and a retention rate above 80%, the UDAM registered many new members in 2015. The penetration rate increased from 2% to 8%. The monthly collection rates for 2015 ranged from 96% to 120%. Maintaining these assets and the UDAM's capacity to retain its members and register new ones suggests that the departmental mutual is functionally viable (Table 3).
Technical Viability
UDAM beneficiaries bore less of the cost of covered benefits than non-beneficiaries. Apart from these advantages for the populations, the UDAM was subject to major risks such as over-prescription, overconsumption and adverse selection. The technical viability of the UDAM is very threatened (Table 3). At the health posts and the health center, the co-payment of UDAM beneficiaries was lower than that of non-beneficiaries (Table 4).
Financial Viability
The UDAM did not have any debts with the public health structures (posts and health center) of the Koungheul department. With the co-financing of the PAODES, membership contributions were able to cover benefit costs and the UDAM bore less of the burden. The loss ratio remained high without co-financing (more than 100%), exceeding the threshold (75%).
Without partner support, the UDAM would have had to bear its operating costs alone, with operating expense ratios of 85% and 176% respectively in 2014 and 2015 (> 15%). The financial viability of the UDAM remains compromised after the withdrawal of the partner (Table 3).
Economic Viability
The UDAM could not cover all of its expenses with its own funds. The 2014 and 2015 self-financing rates were 62% and 35% respectively (< 100%). Similarly, the amount of contributions remained very low relative to total expenses. The financial support of the PAODES allowed the UDAM to pay part of the expenses excluding medical consumption. The economic viability of the UDAM would be threatened once this partner withdrew (Table 3).
Individual Interviews with UDAM Officials
i. Process of setting up the UDAM
This question made it possible to know what type of initiative was behind the creation of the mutual and whether the necessary conditions for the establishment of a health mutual were met. According to the director of the UDAM: "Like any initiative to create a health mutual, the goal was to allow people throughout the Koungheul department to access quality health care at a lower cost. With the advent of flat-rate pricing, the health mutuals that were active in the department and had a few hundred beneficiaries up to date on contributions were no longer able to support the share of reimbursement of care benefits that fell to them. In other words, these mutual health insurance companies did not have the financial viability that would allow them to reimburse health care benefits. It was necessary to set up a health insurance at the departmental level that would be able to fulfill its commitments to health care benefits." According to the chairman of the board: "Initially, with universal health coverage (CMU), the state wanted to set up a solid health system based on flat rates. And
iii.
Difficulties encountered during the collection of contributions
The director listed these difficulties: the weak purchasing power of the populations, who are mostly farmers affected by poverty; the difficulty of accessing certain areas at certain times of the year; the difficulty for heads of households to contribute for family members; the absence of a culture of mutuality, which means that beneficiaries always evaluate their level of consumption of care in relation to their total contribution, linked to ignorance of the principle of solidarity; and the internal organization in each village for collecting dues, which is not yet clear.
iv. Contribution of the UDAM to the health status of populations
The director said: "The UDAM contributes to the improvement of health for several reasons: with the payment of the co-payment, heads of household and their dependents no longer give up care. The affordability of the costs of medical care solves the problem of partial exclusion with regard to the drugs to buy. Populations are less inclined to resort to traditional healers (decline in parallel practices). Populations no longer have valid reasons for resorting to health care late. A beneficiary's medical care no longer has to wait for the head of household (the member) when he is traveling. Health indicators are improved (reduction of maternal and infant-juvenile mortality, reduction of home deliveries)." As an example, from 2013 to 2015 the rate of deliveries in health structures increased from 73% to 84.6%, and the contraceptive prevalence rate from 8.85% to 18.7%.
According to the administrative assistant: "If you belong to a mutual and you get sick, you can be consulted for 500 FCFA instead of 2,000 FCFA, because 1,500 FCFA are supported by the UDAM, or for 1,000 FCFA instead of 10,000 FCFA, because 9,000 FCFA are the responsibility of the UDAM and the PAODES, and you have the right drugs."
v. Highlights of the UDAM
This allows the mutual to see what to rely on to perpetuate its existence. The director cited several points: the professionalization, which means that at any moment there is staff to respond to members' requests; the effective involvement of mayors and other local elected representatives in the animation of the community structure, which is supposed to facilitate the medical care of populations, health being a transferred competence; the availability of logistical means to carry out field work; and the technical and financial support of the PAODES and the State to the UDAM.
vi. Weaknesses of the UDAM
The chairman of the board explains: "The material resources are low while the department is vast. The high illiteracy rate of the population is a factor limiting their membership in the UDAM." The director added: "The population is slow to see the difference between the UDAM and the previous mutuals, which experienced many difficulties, including the breakdown of partnerships with health care providers, and the delegates of the local authority branches are slow to appropriate the UDAM and to get fully involved in its animation at the local level." According to the manager of the UDAM: "Despite the affordable contributions, some members have trouble paying their dues. That is why agents were hired to collect dues door-to-door."
Individual Interviews with Members of the UDAM
i. Information on the operation of the UDAM
All members met acknowledged being informed about the operation of the mutual.
Most of the beneficiaries (18/25) explained that a UDAM agent had passed through their homes to explain how the UDAM works. The others (7/25) got the information through community radio. Most members (19/25) do not know the principles of mutuality based on mutual aid and solidarity between members.
ii. Degree of confidence of UDAM members and assessments of membership fees and contributions
Unanimously, the members we met said that they trusted the managers of the mutual. Some (15/25) explained that they were confident because the officials had communicated sufficiently, emphasizing the advantages of the UDAM compared to the basic mutuals that existed in the department. Others were reassured by neighbors who are beneficiaries. For all members, membership fees and contributions are affordable. They recognized that joining the mutual can significantly reduce the cost of access to health care. However, membership fees and contributions are considered high by some beneficiaries (7/25). According to some beneficiaries (17/25), the benefits covered by the health district meet their needs; however, others (13/25) believe that the costs of benefits and medicines are quite high, especially in private pharmacies.
iii. Participation in decision-making
All the members met felt that they were not involved in the decision-making and life of this departmental mutual. According to one beneficiary: "The mutual is made for the beneficiaries and it lives off our contributions, so it is quite normal that our opinions be taken into account. I suggest at least that a suggestion box be opened at the entrance of the structure and that members be informed of the holding of general meetings." The managers of the mutual acknowledge this lack of information and the irregularity of the meetings of the statutory bodies.
Focus Group Results for Non-members
A focus group of 12 participants was carried out.
i.
Therapeutic route in case of illness
The health district facility is the first resort for any illness, a fraction of non-members revealed, despite the parallel use of traditional medicines. According to one participant: "Although the first resort is the health center, they must always combine traditional medicines, which are also sometimes more effective than modern treatment, especially in children." Those who do not use modern medicine point to the high cost of the prescriptions issued in health facilities and to poverty.
ii. Difficulties encountered in health structures
Non-adherents all denounced the poor reception in the health structures and the behavior of some health workers: "They do not respect the patients; even to ask for information you go around the structure without satisfaction." The lack of qualified personnel available in the structures is also deplored: "You have to go to the clinics to meet the best doctors."
iii. Information on the existence of the UDAM
The principles of the mutual are largely unknown to the participants of the focus group. A minority had heard of the mutual: "At some point some people came here to talk about the mutual, but we found that it was something not very clear." Most non-adherents are aware of the existence of the UDAM. They were made aware by neighbors, health workers or the UDAM, and through community radio.
iv. Acceptability of the UDAM
The non-adherents of the UDAM believe that, like associations and cooperatives, mutuals are organizations managed by a minority that benefits only itself: "Mutuals do not do their job and diversions are commonplace." Others estimate that they have very little income to meet the priority needs of their families and categorically reject the idea of saving for an unlikely event. The tendency of some participants is indifference to the risk of the occurrence of diseases: "...anyway it is God who gives health or sickness, and no one can predict when it will happen..."
However, the UDAM is positively appreciated by some of the non-adherents. For them, it provides proximity to quality health care and helps cope with unannounced health expenses.
v. Reasons for non-accession to the UDAM
Non-adherents unanimously cite the high cost of living and household poverty as reasons for not joining the mutual. One of the participants said: "I cannot register my family at the UDAM because membership fees and contributions are expensive and what I earn is not enough to support my household." Another said: "One day there was in our neighborhood a very poor patient who did not have enough to pay for a consultation ticket; we went to the market to make a collection so that he had enough to pay for his care."
vi. Wish to join the UDAM
Most of the non-members would like to join the UDAM, but it would be necessary for those responsible to decrease the contributions. Others do not see the need to adhere; one said: "I will not join the UDAM, even for my family, because with the free initiatives set up by the state we are covered."
Discussion
Membership in the UDAM is voluntary and family-based. The UDAM recorded a score of 2, which means that the risk of adverse selection is average. Unlike a private insurance system of a commercial nature, the mutual health insurance company cannot select its beneficiaries or charge each of them premiums corresponding to their personal risk. According to Buter JD et al., a mutual health insurance is viable when it is designed and organized to the satisfaction of its customers and partners, has safeguards that protect it against major risks such as adverse selection and the risk of cost escalation, and is self-financing [7]. However, to minimize the risk of adverse selection, the UDAM required the minimum enrollment unit to be the family and introduced a two-month observation (waiting) period for new registrants, according to the payment modality: monthly, semi-annual, or annual.
This mandatory nature is a factor of non-adherence, especially since members do not see the link between such a formula and the mutual's need to counter adverse selection of risks (which occurs if the head of household affiliates only the most vulnerable members of his family, which obviously unbalances the insurance system) [8][9][10]. Managers should also strengthen advocacy with village and/or neighborhood leaders to promote group membership in order to reduce the risk of adverse selection. Several authors have demonstrated the effect of information and awareness-raising on the population's membership in mutual insurance, in particular that they strengthen trust in the mutual. In our context, this awareness-raising must build on the UDAM's attractiveness factors: it is unifying, with an extension of covered benefits to conditions such as chronic diseases. According to Basaza et al. (2008), the fact that the Ishaka Mutual Health Insurance Company in Uganda does not cover chronic diseases appears to be one of the causes of low enrollment rates [11]. A fortiori, the exclusion of a service would have a negative effect on the dynamics of membership [12]. To avoid escalating costs, the application of flat-rate pricing has led to good cost control. According to Betsi E, mutuals sometimes feel left to their own devices; health mutuals are exposed to all forms of exploitation, and the measures for securing and protecting their assets remain weak. Adverse selection is almost institutionalized; many cases of over-prescription and fraud exist. In his study, unlike ours, over-prescription exists, the costs of the acts are not uniform, and the costs of the benefits increase day by day. Thus, social control and the preservation of mutual benefit entitlements constitute one of the main advantages of community participation and member retention [13].
Members act as a lever vis-à-vis providers by defending their rights and therefore being more demanding. This place of beneficiaries in the management of the health mutual has been well described in the study by Criel B et al. [14]. The UDAM should have a medical adviser to verify certain services delivered to UDAM beneficiaries in order to prevent abuse or fraud. The UDAM had monthly contribution collection rates close to or above 100%. In Ghana, the premium level, considered high, was recognized as one of the reasons for the drop in the enrollment rate in mutual health insurance, which went from 8.3 points for the period 2007-2008 to 6.5 points for the period 2008-2009 [15]. The analysis of the monthly contribution recovery rates shows that the UDAM was recovering the maximum of its contributions. With respect to providers, the UDAM pays care bills without delay or debt. These aspects reinforce the mutual trust between providers and UDAM managers, in the service of the beneficiaries. With the support of the PAODES and the State in the form of grants, the loss ratio went from 52% to 55%. The financial involvement of the partner reduces the share of contributions needed to cover the costs of services. This result demonstrates the UDAM's ability to cope with its expenses. This situation is partly explained by the good indicators recorded, namely monthly recovery rates and a retention rate of around 100%. Therefore, the UDAM has no debts vis-à-vis the health structures. In this case, all the entities benefit mutually: the population, the health structures, the UDAM and even the State. Joining the UDAM relieves beneficiaries of part of the expenses incurred for care. According to Kagambega MT, affiliation to a mutual insurance company can therefore significantly reduce the cost of health services [16].
The system of health insurance via the departmental mutuals has a positive impact on members' care expenses, on their health, and perhaps even on the household economy [17]. Without co-financing, loss ratios were high (≥75%); these results were higher than those found in J Kanyeshuli's study of the Musanze mutual, calculated at 72% [18]. This finding reveals a risk to financial viability, because the contributions collected do not cover the expenses. Similarly, economic viability is threatened once this financial partner withdraws, because it supported the UDAM's operating and investment costs. Indeed, its own funds were insufficient to cover expenses. As a result, efforts must be made to collect contributions and to find mechanisms of financial autonomy, such as the creation of income-generating activities and the recruitment of a medical adviser to ward off additional sources of expenditure. The UDAM can certainly control its operating costs, but it cannot fully control the benefits it will support, because the occurrence of disease is unpredictable. The State of Senegal, through its universal health coverage program, must further support the UDAM once the partner withdraws. This additional funding would cover non-medical expenses, so that the UDAM's own funds from membership fees can finance the object for which it exists. In addition, the ACMU must also articulate free initiatives, such as those targeting seniors (namely the Sésame Plan), with the UDAM's targeting of the indigent through the National Health Equity Fund [19]. The main criteria for assessing the quality of care provision are reception (availability of health care providers, waiting times, respect and consideration on the part of carers), prescription and availability of medicines, and the speed of diagnostic results [20].
This study, based on a normative evaluation using the ILO reference framework [6], does not take into account the quality of the care offer at the level of the Koungheul district; this tool addresses the demand side of health. Similarly, deepening the qualitative survey on the issue of supply among UDAM beneficiaries, non-beneficiaries and service providers could have made it possible to understand the behavior of these different targets. Thus, an evaluation of the quality of the health supply could have provided, on the one hand, information on cases of over-prescription among providers or of abuse such as moral hazard among beneficiaries and, on the other hand, identified non-adherence factors related to the provision of care among non-beneficiaries of the UDAM.
Conclusion
The mutual may be an alternative for access to care. It is a focal point in the strategic plan for universal coverage. Thus, the Ministry of Health of Senegal, with the support of the PAODES, is experimenting with an initiative to promote a departmental unit of health insurance in the health district of Koungheul. The study of the viability of the UDAM shows that, at the administrative level, there is a good overall score for the quality of operational monitoring, but verification of the validity of care attestations, the list of those excluded from service provision, and the monitoring of beneficiary registration by providers are lacking. Functionally, the UDAM registered many new members in 2015. In addition, the risk of adverse selection and the absence of a medical adviser threaten the technical viability of the UDAM. In financial and economic terms, contributions are still insufficient to cover operating and investment costs. With the financial support of the PAODES, the UDAM has managed to meet its expenses, but the partner's withdrawal will compromise its financial and economic viability. This study showed that the UDAM is not viable.
Thus, to ensure the viability of the mutual in all its dimensions, it seems appropriate to strengthen communication with the population with a view to mass membership, to increase the UDAM's equity through additional ACMU funding, and to reduce the covered benefits package, such as benefits for chronic diseases, in order to lower the cost of care. Going forward, it is important to carry out a study of the estimated costs of this departmental mutual, to estimate the shares supported by the UDAM and the CAEP, and to evaluate the relevance of the unit contribution for sound cost control.
A lateral approach to the maxillary sinus for simultaneous extraction of an ankylosed maxillary molar and sinus graft: a case report (J Korean Assoc Oral Maxillofac Surg 2012;38:110-5)

An ankylosed tooth is defined as 'the discontinuance of normal passive tooth eruption without any mechanical barrier'. Ankylosed tooth treatment is a challenge to dental clinicians. In the treatment of maxillary molar ankylosis, there are risks of oro-antral fistula and displacement of root fragments into the maxillary sinus, as well as the necessity of providing additional sinus bone augmentation for future implant placement. In this study, we suggest a new technique using a piezoelectric device and a lateral approach to the maxillary sinus, allowing the simultaneous removal of an ankylosed maxillary molar and sinus grafting for the purpose of implant site development.

I. Introduction

Tooth ankylosis is defined as the condition wherein normal passive tooth eruption ceases without a physical impediment such as a supernumerary tooth or impacted wisdom tooth [1]. The pathogenesis of ankylosis is not completely understood, and the condition is also called infraocclusion, secondary dental retention, and submerged tooth. Clinical findings for dental ankylosis include reduced occlusal surface area, inclination of adjacent teeth, underdevelopment of the alveolar ridge at the site of concrescence, absence of dental mobility, and an abnormal percussion sound upon tapping (e.g., metallic percussion). Patients visit a clinic to address such clinical symptoms with no knowledge of ankylosis and normally seek orthodontic therapy [2]. Since the treatment of an ankylosed tooth is not an easy job even for seasoned orthodontic dentists, surgical orthodontic treatment is primarily considered to avoid extracting the ankylosed tooth and to conserve it instead [3]. Another possibility is prosthetic restoration.

II. Case Report

A 16-year-old female patient visited the Department of Advanced General Dentistry, College of Dentistry, Yonsei University, to have her right maxillary first molar extracted. The patient had made a prior visit to the Orthodontics Department of the hospital to address the difference in her vertical tooth height. This case report presents a case wherein an ankylosed tooth in the maxillary posterior region, with root canal and gold crown, was extracted and immediate sinus grafting was subsequently done for future implant placement.

The Orthodontic Department sent us a referral for surgical extraction of the ankylosed tooth, after estimating that orthodontic treatment would be difficult because the molar had been restored with a gold crown after root canal treatment. (Fig. 1) Root canal and prosthetic treatment had been done at a private dental clinic 8 years earlier, and approximately 3 mm of the crown was exposed upward from the gum. No dental mobility was observed, but sensitivity to percussion was noted. No gingival swelling or rubor was detected around the tooth, with a paradental cyst 3-4 mm deep. (Fig. 2) Symptoms associated with maxillary sinusitis, e.g., headache, runny nose, nasal congestion, and post-nasal drip, were not observed. Panoramic radiography revealed that two-thirds of the root of the right maxillary molar had subsided into the sinus. Severe sinus pneumatization was detected, but no sinusitis-related complexity was observed. The CT scan showed that one-third of the root of the upper first molar was fused with alveolar bone, and fusion with the sinus septa was observed in particular. The inside of the sinus was clean, and no lesion such as a mucous retention cyst was found. (Fig. 3) The patient and her guardian were advised with regard to the surgical procedure, and then surgical tooth extraction and bone grafting via the lateral approach to the sinus were performed.

Under 2% lidocaine local anesthesia, two vertical cuts around the upper right first molar and a crestal incision were made. Flap elevation was performed; the thin sinus bone wall around the buccal root of the ankylosed molar was noted. Using a bone rongeur, the full wall of the sinus was removed, and the sinus membrane linked with the root was carefully elevated. (Fig. 4) In the course of separating the sinus membrane from the root, a small perforation of 0.3 × 0.3 mm occurred, but it was automatically restored with sufficient elevation of the sinus membrane. Using a piezoelectric device (Surgybone; Silfradent, Santa Sofia, Italy), the fused sinus septa-root complex was incised, and the tooth piece and septal bone were successfully removed. (Fig. 5) Foreign matter such as gutta-percha was completely removed as well. (Fig. 6) The sinus membrane was reinforced with an absorbable collagen membrane (Biogide; Geistlich Biomaterial, Wolhusen, Switzerland), and 2 cc of allograft material (Orthoblast II; Isotis, Irvine, CA, USA) was placed in the elevated sinus space and the socket. (Fig. 7) Tension-free primary closure was achieved with a buccal advancement flap formed with a releasing incision. (Fig. 8) An antibiotic, a nonsteroidal anti-inflammatory agent, and a nasal decongestant were prescribed for one week. Since the patient was still in the physical development stage, future treatment includes implant placement after the completion of her bone development at a private dental clinic in the vicinity. (Figs. 9, 10)

III. Discussion

The treatment of an ankylosed tooth is not an easy task, since the surgery itself is difficult and patient age and ankylosis location must be considered. Chaushu et al. [19] suggested some treatment methods for maxillary molars with infraocclusion: periodic observation, surgical removal, prosthetic restoration, and orthodontic treatment. Raghoebar et al. [20] suggested that prosthetic restoration is advantageous for adult patients, but that it entails limited vertical growth of the alveolar bone around the ankylosed tooth in young patients, possibly leading to a worse ankylosed condition; in such cases, they recommend surgical removal instead. To assure proper vision and accessibility for instruments, the extraction of an ankylosed tooth should be done through surgical extraction. During treatment, a wide range of alveolar bone around the affected tooth needs to be cut for the successful removal of the ankylosed root. Such extensive surgery causes significant impairment to the bone around the affected tooth. Furthermore, in the maxillary posterior region, it poses consequential risks, i.e., oro-antral fistula in the event of surgical extraction of the ankylosed tooth, or displacement of root fragments into the sinus. Even after clean extraction of the ankylosed tooth, future site development, e.g., a sinus bone graft for implantation, should be considered.

In this case, via the lateral approach to the sinus, direct access to the ankylosed root was attempted for tooth extraction; at the same time, bone grafting was performed for future implant placement. This enables assured direct sight and access and facilitates future site development for subsequent implant placement, ultimately foregoing the need for additional pre-implant surgery. The most important condition for successful sinus bone grafting is to conserve the sinus membrane. For this, the author used a piezoelectric device, enabling safe conservation of the sinus membrane during the removal of the sinus septa. In this case, a small perforation occurred in the sinus membrane during surgery, but such small sinus membrane perforations close naturally upon sinus elevation in most cases. What matters is not to cause a large perforation. Without instrument manipulation in the perforated area, forming a slightly larger bone window ensures the safe elevation of the membrane; eventually, the reduced perforation closes naturally. Even with the natural closing of the perforated membrane, however, reinforcement of the sinus membrane with an absorbable membrane is recommended. If a large perforation occurs, it is better to delay bone grafting.

In this case, the timing of implant placement can become a crucial point. Implantation is not recommended for patients in the developing stage, except those with congenital diseases, e.g., (partial) edentia. Since an implant placed at a young age acts like an ankylosed tooth, delayed implantation until the full completion of physical development is generally recommended. Similar to this surgical treatment, sinus elevation at the time of tooth extraction with accompanying bone grafting can help future implant work. Depending on the growth, however, bone volume fluctuation is actually difficult to predict. Consequently, this issue is to be considered: even though this surgical method may decrease the volume required for a future bone graft, additional bone grafting may still be necessary for implant placement following developmental completion.

In conclusion, the direct lateral approach to the sinus can be a desirable treatment for the extraction of an ankylosed maxillary molar with simultaneous sinus floor augmentation for future implant placement. The appropriate use of a piezoelectric device can be expected to ensure the effective removal of the tooth-bone complex while conserving the sinus membrane.

Fig. 1. Panoramic radiograph taken at the patient's first visit. An ankylosed right maxillary first molar is noted with full gold crown and root canal.
Fig. 3. Computed tomography evaluation showing the fusion of the tooth and alveolar bone segments; this fused apparatus is connected to the maxillary sinus septa.
Fig. 4. Lateral approach to the ankylosed tooth after the removal of the anterior wall of the maxillary sinus. The Schneiderian (sinus) membrane was carefully detached from the root surface.
Fig. 5. A piezoelectric device was used to cut the residual fused tooth-bone segment to preserve sinus membrane integrity.
Fig. 6. The ankylosed tooth was completely removed. Some root-canal filling material was observed over the apex of the adjacent teeth (arrow).
Fig. 7. Allogenic graft material was grafted at the defect.
Fig. 8. A buccal advancement flap was formed with a periosteal releasing incision for closure.
Fig. 9. The extracted tooth.
Fig. 10. The extracted tooth.

Jae Ho Hwang et al: A lateral approach to the maxillary sinus for simultaneous extraction of an ankylosed maxillary molar and sinus graft: a case report. J Korean Assoc Oral Maxillofac Surg 2012
The political impacts of adaptation actions: Social contracts, a research agenda

Managing climate and disaster risk is a deeply political act sitting at the interface of popular expectations, legal mandate, and political fiat. This article makes the case for an expanded research agenda on social contracts in climate and disasters scholarship as a mechanism to better reveal activity across this interface, identify the winners and losers of adaptation, and improve the equity outcomes of negotiated and imposed risk management settlements. Social contracts are defined as multiple and constructed (not singular or fixed), and three distinct yet intersecting forms of social contracts are identified: imagined, practiced, and legal-institutional. The article argues that mapping the disjunctures, overlaps and transitions between these concurrent social contracts can help reveal gaps between responsibilities held de facto and de jure. This makes a timely contribution to understanding tensions between need, obligation and entitlement that underlie contestations over "who" is responsible for "what" in risk governance. It also helps reveal the dynamic boundaries of social acceptance at the centre of debates around fair adaptation governance. Such work can provide insight on how development relations, including but reaching beyond risk management and climate change adaptation, can be transformed progressively and fairly in a changing climate.

Social theory has much to offer in understanding these shifting outcomes; yet to date the adaptation literature has fallen short of exploring the relevance of social theory lenses to this question (Fazey et al., 2017). This has implications for research and for policy framing, prioritization, and legitimacy. In this article, we highlight the potential of social contracts as an emergent analytical lens on the politics of adaptation, and develop their conceptualization in an adaptation context.
We draw on Campbell's (2010) definition of a social contract as recognition of the legitimizing force of citizen consent to the authorities which limit their freedoms, and the reciprocal duty of social institutions to uphold the equal rights of all. However, we prefer the term "social contracts" (multiple) over "the social contract" (singular), in order to capture diversity and multiplicity in the form of those co-dependent relationships. The particular contribution of a social contracts lens to questions of climate change adaptation and its consequences lies in: (a) highlighting tensions between need, obligation, and entitlement that underlie contestations over "who" is responsible for "what" in risk governance; and (b) drawing attention to boundaries of social acceptance surrounding risk and risk management actions, and hence to the conditions under which legitimate adaptation pathways are negotiated and contested. Such concerns lie at the centre of debates around fair adaptation and just risk governance. These contributions allow for a lens that can be extended across other policy domains and practices to approach the cultural and political trade-offs between risk and development that often transcend economic rationality. Note the focus in this article on climate-driven adaptation is not to limit the application of a social contracts lens, when contestations over fairness may apply equally to other forms of risk management (economic, social, political, technological, reputational). This focus stems rather from the particularly complex questions of sociospatial and intergenerational equity that climate change raises, and its role as a risk amplifier (see Renn, 2011). To get at the above dynamics, the article first reviews the existing application of social contracts in adaptation and disaster risk management thinking. 
Challenges are then identified that have inhibited fuller deployment of social contracts to date: the assumed homogeneity, fixedness and consensual qualities of the social contract in classical contractarian theory. In response, a proposal is made for an analytical application that recasts the social contract into three intersecting yet distinct forms: social contracts that are imagined, practiced, and legal-institutional. The article reflects on these forms and the research agenda this opens onto human rights and responsibilities upheld de facto and de jure. The conclusion further clarifies the research and policy implications of a social contracts approach, namely to help identify and explain cultural and political tensions arising from climate change and natural hazard impacts, and the consequences of adaptation for sustainable development.

ADAPTATION AS POLITICAL: THE (RE)EMERGENCE OF SOCIAL CONTRACTS

It is widely recognized that the Anthropocene demands urgent critical reflection on the stability, equity, and future of currently dominant development trajectories (Folke et al., 2002; Steffen et al., 2011). Many now agree that transformational adaptations are necessary to achieve sustainable development, defined broadly to encompass social, ecological, and economic equity across generations (Eriksen et al., 2011; O'Brien, 2012; O'Brien, Eriksen, Inderberg, & Sygna, 2015). However, amid the shifting goalposts of global environmental change, the roadmap toward those futures remains far from clear. Important questions remain, not only in clarifying precisely what adaptation futures are sought, but also in defining what constitutes fair governance of those adaptive transitions (Pelling, O'Brien, & Matyas, 2015). What level of risk is tolerable, what trade-offs between risk and development are acceptable, and, most importantly, who decides (Ziervogel et al., 2017)?
Geographies of power and agency will ultimately determine the priorities that are embedded in adaptive pathways; hence addressing the above questions is of fundamental importance to defining precisely whose futures are protected and how costs are distributed. These are fundamentally political concerns with real-life implications, demanding heightened attention from critical scholarships. The social contract has already emerged as a language to describe shifts in governing behaviors that are either necessary for, or act as a pathway of, transformational adaptation. The application in this literature has been heterodox, tending not to invoke classical contractarian theory and toward a symbolic rather than analytical application, used to demarcate a tension between differing communities of practice or epistemology. Lubchenco (1998) and Demeritt (2000) were early to invoke social contracts in sustainability research, observing an increasing pressure on environmental scientists to produce work with demonstrable societal value (what they call a new social contract for science). DeFries et al. (2012), Castree et al. (2014) and Castree (2016) have transposed this into calls for action on global environmental change, arguing for more plural, action-oriented scholarship that pays due attention to social science and humanities alongside physical sciences. Others in the policy and business sphere have invoked the social contract as an argument for altered and/or strengthened accountability chains in environmental management and regulation (Miliband, 2006; White, 2007; Zadek, 2006). Alongside this, the language of the social contract has entered adaptation, disasters and development literatures as a broad analytical lens.
Current applications invoke the social contract to highlight inequalities resulting from specific development failures which underlie unequal geographies of disaster risk reduction (DRR) (Mitra et al., 2017), impact and recovery (Pelling & Dill, 2010); as a mechanism for adaptation (Adger et al., 2012; O'Brien, Hayward, & Berkes, 2009); as a lens on the evolution or stability of state-society relations in post-disaster settings (Blackburn, 2018; Siddiqi, 2013; Siddiqi & Canuday, 2018); and as the building block for more accountable development pathways (Hickey & King, 2016). The social contract has been used to articulate those conditions causing risk governance to be seen as illegitimate or unacceptable (Christoplos, Ngoan, Hoa Sen, Thanh Huong, & Lindegaard, 2017; Pelling, 2011), and to help conceptualize and visualize what fairer governance might look like in the future. Such literature does important work by situating adaptation squarely as a governance and development concern. It raises questions about the opportunities offered by the renegotiation of the social contract across multiple relationships as a mechanism for improved governance. This is part of a movement in the literature toward inviting critical reflection on what type of adaptation we need (or want), and the deeply political challenge of how we might get there. However, to date this literature remains vague about the precise definition of the social contract adopted, and tends to invoke it as a metaphor to describe the distribution of rights and responsibilities and/or citizen expectations and experiences of the state, rather than to interrogate the processes through which those relations are produced. This article argues that whilst existing literature does well to draw attention to the centrality of governance and state-society relations as a limiting factor to adaptation and resilience, social contracts can be made to work harder as an analytical frame.

WHAT MORE CAN SOCIAL CONTRACTS OFFER?
Those working with the social contract have tried hard to balance received classical contractarian ideas with the empirical observation and contemporary interpretation of risk governance and its social context. Two logical problems arise from attempting an application of received theory in this way. These constraints are discussed below, and stem from classical contractarian theory's starting point that a single social contract exists in the polity, and that this is controlled by individuals who collectively hold power over their (legitimate) ruler. Neither position is readily observable; but we argue a social contract lens does not need to assume the existence of a singular contract, nor this direction of authority. Rather than attempting to find the social contract (as described by contractarian theory) in contemporary contexts, we argue for using this theory as a starting point to problematise development relations, and as a common meta-theory for the synthesis and communication of questions about representation, leverage, empowerment, risk perception, and citizen agency. The two received constraints on contractarian logic, and the implications of moving beyond them from a social contract to a social contracts lens, are outlined below. First is classical contractarianism's singular concern with the social contract between a sovereign ruler and the people over whom they rule (see Boucher & Kelly, 1994; Campbell, 2010; Lessnoff, 1990; Morris, 1999). This excludes other, asymmetric power relations: the family, household, workplace, community, and so forth. Particularly given the ever-more powerful global forces of neoliberalism that challenge and reform the role of the state, alongside the near-ubiquitous resilience discourse which emphasizes local capacities for adaptation, governing for adaptation demands an urgent rethinking of the governance structures, lines of accountability and power relations that will define how and in whose interest adaptation occurs.
Intergovernmental agreements such as the Paris Agreement and the UN Sendai Framework for DRR set out ambitious targets for cross-sector, multilateral cooperation and embedded national policies on climate adaptation and DRR; yet these continue to be undermined by what Pearson and Pelling (2015) term the "awkward politicization of intergovernmental negotiations" (p. 2). In an increasingly complex governance landscape, new trade-offs, compromises and arrangements are inevitable. Understanding what sorts of protections individuals expect to receive from state and nonstate actors in a warming world, and the consequences for polities when these are not met, is an increasingly urgent issue, and one that may challenge current conceptions of citizenship and just governance. Social contracts describe the distribution of rights and responsibilities between parties, and thus provide a lens on conditions where previously assumed or stable geographies of right and responsibility can be visualized and questioned by researchers and policy actors alike. This has practical and policy value in the sense that it is difficult to design or pre-empt more progressive governance landscapes without fully understanding how current ones stand and evolve. This connects to concerns with geographies of blame, accountability and responsibility in risk governance (Bulkeley, 2001; Butler & Pidgeon, 2011), by drawing attention to the implications of institutional arrangements for democratic legitimacy and accountability. Social contracts add particular richness by placing greater emphasis on boundaries of social acceptability and perceived fairness (as called for by Paavola & Adger, 2006; Adger, Lorenzoni, & O'Brien, 2009). The second constraint is classical theory's proposition that the social contract is an outcome of collective societal acquiescence or consent, which implies a comfortable exchange of rights and responsibility between ruler and ruled (in particular, Rousseau, 1762 [1987]).
Contractarian theory is primarily interested in the shape of this relationship, rather than the mechanisms through which it is produced and the potential of the use of force to establish, maintain, resist, subvert or overthrow relationships culminating in (the) social contract(s). Many classical theorists, including John Locke (among the most famous of contractarian philosophers), argued that when the legitimacy of the state is lost, citizen resistance is justified (Lessnoff, 1990); we argue this aspect has been under-utilized analytically. In a disaster context, a social contract lens can emphasize the gap between formal civic rights or protections and on-the-ground realities of mutually constituted poverty and hazard vulnerability (Pelling, 2011). By drawing attention to instances where states fail to protect basic human rights (to life, to security, to essential services), for example through unacceptably slow response or exclusionary geographies of relief and rehabilitation which magnify pre-existing inadequacies or inequities in service provision, a social contract framing highlights the capacity for extreme events to reveal development and governance failures (what Pelling [2011, p. 95] describes as a "break" in the social contract; also Pelling & Dill, 2010). Social contracts provide a powerful lens for understanding crises of state legitimacy, and the ways in which these are captured (or not) by political and social actors as a moment for social-institutional change, a concern highly pertinent to the burgeoning literature on transformation. This lens could be applied in post-disaster settings, as well as to understand rationales of complicity with or resistance to particular adaptation (or maladaptive) policies and practices.
MOVING FORWARD: MULTIPLE RISK SOCIAL CONTRACTS

The above discussion demonstrates that, despite the constraints imposed by classical contractarian logic, certain principles of classical social contract thinking are strongly resonant for adaptation scholarship. We propose a framework which helps move beyond the conceptual challenges above in three ways. First, we argue that the idea of social contracts need not necessarily be confined solely to state and society, inspired by Boucher and Kelly who challenge the assumption "that there is a single unified tradition or a single model or definition of the contract" (1994, p. 1). In light of the need to recognize nonstate actors as governance players (as called for by White, 2007), we advocate a view of multiple social contracts in the plural (as opposed to the social contract, singular), between individual(s), organizations, collectives or institutions either in- or outside the state infrastructure. At a subsocietal level this includes intra- and inter-familial and community relationships of codependency, which may or may not reflect meso- and macro-scale power relations within society at large. Second, by emphasizing mechanisms through which social contracts are (re)produced or contested (rather than taking their existence for granted), we argue a social contracts lens can draw attention to the multiple, ongoing, everyday scalar politics through which power is centralized, distorted, or otherwise stripped from the local in ways that undermine community resilience. This is facilitated by an acceptance of the multiple pathways through which social contracts are established, and their multiple social-political construction.
For example, governments may claim to have decentralized decision-making and implementation plans, when in reality local agency is constrained by a lack of institutional support (Allen, 2006) or by weak channels of cross-scale communication, trust and representation that isolate local communities from spaces of decision-making (Blackburn, 2014). A social contracts approach offers a pertinent framing to such challenges, since it is fundamentally concerned with politics of relative power and agency between stakeholders, both at and between scales. Its pertinence stems from the inherently scaled nature of risk and vulnerability; vulnerability stems from action (and inaction) at multiple scales, and both responding to crises and reducing risk meaningfully in the long term demand collaborative, complementary actions across and between all scales. Third, responding to the constraint of classical contractarianism conceiving a social contract as inherently reciprocal, we propose drawing a separation between three intersecting yet differentiated social contracts: legal-institutional, imagined, and practiced. These represent three distinct realms in which rights and responsibilities are held in tension, which exist concurrently and may or may not overlap. Social contract analysis might either focus on one realm only, or on the relationships between them. Each form of social contract is explained below.

Legal-institutional social contract

The legal-institutional social contract (LSC) exists in the formal, legally sanctioned distribution of rights and obligations between societal actors, which is defined by and through legal and constitutional frameworks, whether or not this distribution is deemed fair by the individuals it governs. The LSC may be fixed over multiple generations but can also evolve quickly, and is a product of dominant institutionalized cultures, values, and social relations; it is not inherent but constructed.
As Angel and Loftus (in press) argue, the state (and its instruments) are not a "coherent thing" but rather a "form emerging out of a contradictory set of social relations and a process of struggle" (p. 3). | Imagined social contract The imagined social contract (ISC) constitutes individuals' own subjective vision of a just social order, which may or may not be reflected in policy or practice. It is imagined rather than material (although it likely informs, and is informed by, material struggles), and could be either perceptive ("this is what I believe it to be"), expectant ("this is how it should be") or hopeful ("this is how I wish it would be"). This social contract relates closely to Rousseau's assertion that the legitimacy of an authority is defined by those over whom it rules (1762 [1987]). Being sensitive to social relations, personal and collective history and culture, ISCs may associate in communities of shared experience or belief, but are also inevitably differentiated (between individuals, locales, social groups) and fluid over a lifetime. The ISC is independent of the law (although again, is likely influenced by it), the latter of which exists either in a state of compliance or breach of the fluid, heterogeneous ISC. The key challenge for the ISC, both theoretically and methodologically, is the diversity of societal values which exist within a single citizenry and, due to this subjectivity, the likely impossibility of unanimous agreement. | Practiced social contract Whilst the ISC is imagined, the extent to which it is reflected in practice is material. The practiced social contract (PSC) is the "real-life" balance of rights and responsibilities which are performed and claimed by individuals and state actors, and is observable in the everyday state-citizen and citizen-citizen relations. 
This is the social contract that is most frequently discussed in current literature-exemplified in Pelling's definition of the social contract as "the prevailing balance of rights and responsibilities in society and may be held in place by legitimate government or the rule of force" (2011, p. 172). The PSC is the product of negotiation between multiple conflicting ISCs (which coexist in society) and the LSC, and may sit closer to one, both or neither. Analyzing the disjunctures, overlaps and transitions between these social contracts offers a research frontier in its own right, but also an organizing framework for burgeoning research on the political and justice implications and contexts for climate change adaptation and disaster risk management research and practice. The relative closeness between contracts from different stakeholders' perspectives (and how this changes over time) could indicate the degree to which climate change adaptation policy reflects, justifies or challenges dominant public priorities, experiences, and expectations, bringing climate change research into broader debates on the social acceptability of government in practice. This is a core requirement for research and practice that recognizes the need for transformation in moving toward sustainable and just futures. In a perfect democracy, the ISC would shape the PSC and LSC in its image through democratic channels. More likely, however, is a PSC which reflects inequities of power and influence within society, since the most powerful are best able to shape social relations in their favor. Gaps between ISC, PSC and LSCs may arise where inherited constitutional arrangements are (or become) inappropriate to local history and culture-observed, for example, in many post-colonial contexts. Furthermore, even within a single legal jurisdiction, each of these social contracts-and the gaps experienced between them-will not be the same for all people. 
Differences may exist, for example, between recognized citizens and illegal migrants, or between majority and minority groups. The closeness between PSC and LSCs will also be a product of the strength and culture of enforcement, and may vary across scales. For example, there may be a disjuncture between the formal LSC, which stipulates that corruption is illegal, and the PSC at the local scale, where corruption is locally accepted as a legitimate pathway for resource access. This is perhaps more likely to occur where the state is absent or perceived to be acting against the will of the citizenry. The distance between social values and existing legal-institutional settings has previously been explored by Pelling and Manuel-Navarrete (2011) as a possible indicator of impending transformation. In addition to the differentiation of the three social contract forms, the reframing of social contracts as multiple (i.e., between multiple social actors and groups) marks a shift from a focus on the social contract towards a framework that can take account of nonstate organizations (particularly NGOs and private organizations, both domestic and international), that increasingly deliver essential services historically provided by the state (including water, sanitation, energy), but sit outside the state infrastructure. This new geography of service provision may alter public perceptions about the appropriate distribution of rights and responsibilities (White, 2007). Questions include how respective responsibilities and obligations are negotiated between these actors (are we witnessing the emergence of multiparty social contracts?) and to what extent and how accountability is ensured. | EVOLVING SOCIAL CONTRACTS: A RESEARCH AGENDA The framework introduced above opens four specific research avenues, detailed below. 
First, it offers a methodology to map current or projected allocations of responsibility and rights in adaptation governance, decision-making and action, and through this to highlight power/agency vacuums and areas of overlap and contestation. Such a project could be used in the policy sphere to focus resources and negotiation time on ensuring protective mechanisms for at-risk groups or sectors. It could also reveal mismatches between policy and practice-for example, to reveal the efficacy of decentralized governance frameworks, one might find that more and more risk management responsibilities are delegated to citizens (shifts in the LSC), yet lack of movement in citizens' capacities to enact those responsibilities (a static PSC) might only be revealed by a disaster event. Second, mapping social contracts could shed light on pathways of transition or acts of transformation, by exploring the contextual events/factors which contour, stimulate or reflect their evolution, and paying attention to which specific social contracts evolve in response to what. Methods could include historical root cause analysis or qualitative field research into postdisaster recovery. One might find, for example, that the Legal Social Contract can act either as a constraint to, benchmark of, or a stimulus for change. By analyzing shifts over time, the framework could reveal the speed as well as direction of movement between the ISCs, PSCs, and LSCs, for example, whether gaps/overlaps emerge in a creeping or sudden way. This has implications for those seeking to manage social change processes unfolding with climate change impacts and adaptation consequences. Third, the relative closeness of the PSC and ISC could indicate the capacity for citizen-led action to leverage local priorities for adaptation. 
ISCs describe boundaries of social acceptance, expectations and felt entitlements, and can thus help understand locally-specific logics of resistance, moments where new (or newly articulated) rights claims emerge, and the role of risk in crystallizing those claims, either in calls for or in response to particular adaptive strategies. Conversely, gaps between LSC and ISCs could also point to complacent citizenship, for example, denial or failure to claim rights (which may equally be due to passive dependency or political apathy, or to active political suppression or lack of visibility of rights). Through a clearer understanding of citizens' own perceived and enacted agency (within the Imagined and PSCs), this could help explain why state failures and/or crises of legitimacy get captured politically (or not), and the role of risk in driving a migration of previously-stable expectations, including the ISC-with implications for understanding post-disaster settings as transformative moments, building on Pelling and Dill (2010). Fourth, mapping ISCs could reveal social and cultural limits to adaptation. This includes investigating different stakeholders' subjective conceptions of tolerable loss and damage, to identify boundaries of social acceptance within the ISC-a pursuit of critical importance in designing fair and liveable adaptive policies. Alongside, attention must be paid to the PSC as it relates to adaptation stakeholders' relative power and agency over others. Revealing stakeholders' subjective priorities, in combination with political-economic analysis of social reproduction, could reveal whose values are more or less likely to become embedded in adaptive pathways. Such work is of critical importance in identifying the projected (and existing) winners and losers of adaptation activity, with a view to improve the equity outcomes of negotiated and imposed risk management settlements. 
| CONCLUSION This article has highlighted the potential of a social contracts lens to address complex questions around the politics of adaptation. It has defined social contracts as fluid, multiple, and fundamentally political constructs, that are shaped concurrently by the expectations and aspirations of the citizenry, the degree and means of fulfillment of those expectations, and the conditions for the legitimacy of formal security provisions. As an analytical framework, social contracts can bring questions around responsibility and entitlement for citizen security to the fore, inviting interrogation of the social processes reproducing uneven geographies of vulnerability and exposure, critical reflection on the norms and expectations dictating "who" is responsible for "what" in risk governance, and the conditions under which the legitimacy and practice of current ways-of-governing are challenged and renegotiated. Understanding convergences and disjunctures among LSC, PSC, and ISCs offers a timely lens for unpacking how blame and perceived responsibility for adaptation are constructed and contested, and how more legitimate, fair or otherwise socially progressive governance landscapes are defined or negotiated. These emphases open important analytical space, responding to mounting evidence that the possibilities, mechanics and limits of adaptation are as much social, political, and cultural as they are technical. It responds to the need-both academic and pragmatic-for a framework that emphasizes how (and with what implications) rights and responsibilities for adaptation are negotiated, and invites creative responses to this challenge across disciplinary divides (geography, philosophy, politics, and beyond). However, more than analytical space, a social contracts lens also opens reflective space for the contemplation of adaptation as a normative challenge. 
In a warming world beset by deep and growing social inequality and ecological crisis, it is insufficient for DRR and adaptation to focus narrowly on small-scale, incremental or localized improvements to infrastructures, livelihoods and emergency-response in isolation from mainstream development concerns. Doing so makes adaptation unable to address the underlying and systemic root causes of risk-including structural inequality, poverty, and social exclusion (Pelling, 2011). Business-as-usual development-and business-as-usual governance of development-is no longer tenable, and rather than viewing either DRR, adaptation or development in isolation, action is needed at what Solecki, Pelling, and Garschagen (2017) term the adaptation-development nexus. This is essential to meeting the Sustainable Development Goals' ambitious targets for climate action at the same time as building just, peaceful and inclusive societies (UN, 2018). Transitioning toward sustainable development is undoubtedly a wicked problem. This article sets out a specific response: one that contributes to the visioning and analysis of social navigation across the ever-more complex terrain of adaptation governance. ACKNOWLEDGMENTS This article draws on research conducted during an Economic and Social Research Council (ESRC) PhD Studentship hosted by the King's Interdisciplinary Social Science Doctoral Training Centre (KISS-DTP) from 2012 to 2018. The authors wish to thank Alex Loftus and colleagues in the Contested Development group in the Geography Department at King's College London for their valuable feedback during the development of these ideas. We thank two anonymous reviewers for their very helpful comments. The opinions expressed here remain those of the authors.
Tensile Buckling of a Rod with an End Moving along a Circular Guide: Improved Experimental Investigation Based on a Dynamic Approach

Investigation of buckling under tension is highly important from theoretical and practical viewpoints to ensure safety and the proper performance of mechanical systems. In the present work, tensile buckling is investigated experimentally, and the critical force is measured in systems where one end of an elastic tensile rod slides along a straight guide, while the other slides along a curve. An experimental setup is proposed and developed for determining the critical tensile load of the elastic rod by a dynamic method. This setup allows measuring free vibrations and frequency with the required accuracy. Improvement of the critical load accuracy is achieved by bringing the maximum test load close to the critical one. Limitations in selecting the test parameters are found according to the required extrapolation accuracy of the dependence of the dominant natural vibration frequency on the tensile load. Theoretical analysis and tests are performed for the rod connection schemes pinned-rigid, rigid-pinned, and rigid-rigid, considering imperfections in the fixation of the rod ends. It is experimentally shown that buckling of the system under tensile load is possible and that the experimental and theoretical values of the critical load are in good agreement. The achieved accuracy, estimated by the discrepancy between the calculated and the experimental values, is 2.1-3.5%.

Introduction

The phenomenon of tensile buckling is of considerable scientific interest, and numerous theoretical studies have been devoted to it, especially in the last decade. For example, tensile buckling has been studied for elastic rods [1,2], for elastomeric bearings [3], which lose stability due to shear deformation, and for bars with sliding connections [4] installed in separate sections, allowing elastic transverse movement in the beam sections. Various theoretical models of the elements and mathematical models for their analysis are used, and it has been shown that the results of theoretical analyses depend on the models and the methods of analysis [5]. As an example, contradictory analysis results have been provided for this phenomenon in a stretched, hinge-supported beam. In this regard, an experimental study is of particular importance in order to verify the existence of the phenomenon and to compare the calculated and experimental values of the critical force. Theoretical and practical issues related to an elastic tensile rod with one end sliding along a straight guide and the other along a curve have been widely investigated in recent years [6-8]. 
The possibility of buckling at some critical tensile force value was studied theoretically. The post-buckling behavior of the system was described, and calculation dependences allowing the critical load to be found were proposed [6,7]. Hinged and clamped connection schemes between the rod and the curved guide were considered. However, the investigated system was studied experimentally only after buckling. A thorough literature review on the topic was recently presented by Simão and Silva [8]. The present study was aimed at an experimental investigation of the buckling phenomenon for an elastic rod with an end sliding along a circular guide and with the following connection schemes: pinned-rigid, rigid-pinned, and rigid-rigid. To decrease the influence of friction in the guides, a dynamic method was used to study buckling and obtain the critical load [9,10]. With this method, the critical force value was assumed to correspond to the zero frequency of natural vibrations [10,11]. Since the zero frequency cannot be reached, the critical load was determined by extrapolating the results measured, with a certain error, at lower values of the axial force. It was shown theoretically [11] and experimentally [12] that for a compressed rod with rigid, pinned, and free end conditions, the buckling phenomenon is indicated by a monotonic decrease in the natural vibration frequency as the compression load increases. The same indicator for buckling at tension was used in the current research. The novelties of this study applying the dynamic method include: -developing a testing stand that reduces the influence of resistance to the movement of the rod end along a circular guide; -a methodology of the experiment that allows determining the critical load with sufficient accuracy, including the choice of the rod-curved-guide system parameters, taking into account the deformation of the rod at the connection nodes. 
To obtain the critical load by extrapolating the experimental values of the squared natural frequencies, a linear dependence between the squared frequency and the axial load is expedient, since it includes a minimum of parameters and is therefore less sensitive to frequency and force measurement errors. For the above-mentioned connection schemes, this dependence was proved previously [13]. It was shown that as the axial force becomes higher, the frequency increases and asymptotically approaches that of a cable under tension. There are no similar available data for a system in which the rod end moves along a circular trajectory, such as the one investigated in the present study. In known devices for testing rods with an end moving along a curved guide, roller guides are used [7,14,15]. The main direction for reducing the friction resistance of the rod end to movement, in the nodes connecting the rod ends to the supporting structure, is the use of bearing units, such as in the case of an axial rotary attachment [16] or the rod end's movement along a cylindrical surface [14], when the rod is loaded by a follower force directed towards the positive pole. In both cases, the friction resistance during rotation and movement creates significant damping and interference during the rod's free vibrations. This resistance limits the possibility of increasing the axial force in the rod to values close to critical and distorts the dependence of the vibration frequency on the axial force. Both factors reduce the accuracy of determining the critical force by extrapolating the measured values to zero frequency. To improve the extrapolation accuracy, the maximum test load should be brought closer to the critical one. At the same time, the possibility of measuring the natural vibration frequency with sufficient accuracy is correspondingly reduced, due to its small value. 
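The extrapolation procedure described above can be sketched numerically: fit a least-squares line to the squared measured frequencies versus the axial load, and take its zero crossing as the critical-load estimate. The measurement values below are illustrative, not data from the experiments.

```python
# Illustrative (load, frequency) measurements: tensile load [N], dominant frequency [Hz].
data = [(10.0, 9.5), (20.0, 8.4), (30.0, 7.1), (40.0, 5.5), (50.0, 3.2)]

# Least-squares line f^2 = a*N + b over the measured points.
n = len(data)
xs = [N for N, f in data]
ys = [f * f for N, f in data]
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# The critical load estimate is the extrapolated zero of f^2.
N_cr = -b / a
print(round(N_cr, 1))  # prints 55.1
```

Because the fit is done on squared frequencies, a small frequency error near buckling (where f is small) has a limited effect on the extrapolated intercept, which is why the linear f² vs. N form is convenient.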
Since the minimum frequency is limited by the friction resistance to movement of the rod's movable end, the direction for test device improvement should be a decrease in the resistance to movement. To plan the experiment and define the limits of the parameters that can be used for extrapolation, the dependence between the squared natural vibration frequency and the tensile load was theoretically investigated in the present study, considering the lumped mass of the sliding mechanism at the rod end. In the present study, which was aimed at an experimental investigation of buckling in systems including a tensile rod with an end sliding along a circular guide, the following problems are solved: -finding the dependence of the natural vibration frequency of the rod, with a mass at the end that moves along a curved guide, on the tensile force; -finding a range of the rod parameters in the form of L/R (L is the rod length, R is the radius of curvature) allowing a linear extrapolation to obtain the buckling load, using squared frequencies; -experimental verification of rod buckling at tension, finding the buckling load using experimental data and comparing the obtained value with the calculated one. The scientific novelty of the present study includes: -developing a scientific method for the experimental determination of the critical load for a rod that has an end moving along a circular guide; the method includes an originally developed stand, the idea and design of which allow reducing the influence of resistance to the movement of the rod's end along the circular guide and ensuring an accurate determination of the critical load; -experimentally proving the phenomenon of buckling under tension of a rod whose end moves along a curved guide under different end conditions, considering imperfections in the fixation of its ends; -experimental determination of the critical load and proven correspondence of the critical load value to theoretical calculations. 
Analytical Investigation of the Experimental Parameters Range

The stand for testing (Figure 1) consists of a wheel 1, placed on bearing 2, an elastic rod 3, and a sliding guide 4 that moves straight along a line passing through the wheel center (line OC in the figure). One end of the rod is connected to the wheel, and the other moves along the direction slide. The rod connection with the wheel and the direction slide enables changing the deformable rod length. It also allows pinned-rigid, rigid-rigid, or rigid-pinned connections. The axial load in the rod was applied by a weight connected to the clamp through a block by a cable.

Figure 1. (a) Scheme of a stand for testing systems, including a tensile rod with an end sliding along a circular guide: 1-curved guide, placed on bearing 2; 3-elastic rod; 4-sliding guide; (b) forces in the rod section and forces acting on the curved guide. Note that V_A and M_A are internal forces acting in the rod at its connection to the curved guide, N_A is the tensile force in the rod, and F is the external force applied to obtain the transverse stiffness of the system.

According to the Rayleigh method [17], it was assumed that the system's mode shape corresponds to that under a static force [18] applied to the mass. The dominant modal frequency of the system was obtained as

ω = [K/(m(δ + α))]^0.5, (1)

where K is the rod's static stiffness in the transverse direction, δ = M/m, M is the equivalent mass at the end of the rod, m is the elastic rod mass, and α is the equivalence coefficient between the distributed mass of the elastic rod and a lumped mass at the rod end:

α = (1/L) ∫ y²(x) dx, integrated over 0 ≤ x ≤ L. (2)

Here, y(x) is the shape of the elastic rod due to unit static displacement at its end, corresponding to the trajectory, and L is the length of the deformed part of the rod. According to the possible rod end displacement, the rod stiffness in the transverse direction is defined as the force F that yields a unit displacement at the rod end along the circle of radius R (see Figure 1). 
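As a numeric illustration of the lumped-mass frequency estimate built from these quantities, the sketch below assumes the Rayleigh form ω = [K/(m(δ + α))]^0.5 with δ = M/m; the input numbers are illustrative, not parameters of the test stand.

```python
import math

# Rayleigh lumped-mass frequency estimate assumed for this sketch:
# omega = sqrt(K / (m * (delta + alpha))), with delta = M/m.
def dominant_frequency(K, m, M, alpha):
    """K: transverse stiffness [N/m]; m: rod mass [kg];
    M: equivalent lumped mass at the rod end [kg];
    alpha: mass-equivalence coefficient of the distributed rod mass."""
    delta = M / m
    return math.sqrt(K / (m * (delta + alpha)))  # rad/s

# Illustrative numbers only: a light rod with a comparatively heavy slider at its end.
omega = dominant_frequency(K=500.0, m=0.1, M=0.7, alpha=0.2)
print(omega)
```

Note how the slider mass M enters only through δ = M/m: for the stand described here, δ is large (6.8 to 22.7 in the experiments), so the guide hardware, not the rod itself, dominates the inertia of the mode.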
According to the possible displacement at end A of the rod, the above-mentioned force direction is tangent to the trajectory (perpendicular to the radius connecting point A with the rotation center). From the equilibrium of the curved guide (Figure 1), the load F required for the deviation ϕ = 1/R, corresponding to the rigid connection, is expressed (Equation (3)) through V_A and M_A, the shear force and bending moment in the rod, respectively. At F = 0, Equation (3) corresponds to the equilibrium of a curved clamp [7,14]. For a pinned connection between the wheel and the rod end, the corresponding expression (Equation (4)) involves N_A, the axial force in the rod, γ, the angle between the rod axis and the line OC, and ϕ, the wheel rotation angle corresponding to unit displacement of the rod end along the circular trajectory (see Figure 1). To obtain the dominant vibration frequency of the system, it was assumed that the equivalent mass of the rod was lumped at its end [19]. The general equivalent mass of the system includes that of the rod and that of the curved guiding elements. The shape of the elastic rod and the stiffness in the transverse direction are obtained by integrating the differential equation of the rod deformation under tension,

d⁴y/dz⁴ − d²y/dz² = 0, (5)

where z = u·x/L and

u = L(N/EI)^0.5; (6)

E is the modulus of elasticity, I is the moment of inertia of the rod section, and N is the axial force. Thus u² = NL²/(EI) is a non-dimensional parameter of the axial force in the rod. The general solution of Equation (5) for tensile force is

y(z) = C₁ + C₂z + C₃cosh z + C₄sinh z. (7)

For all cases considered in the present study (rigid-rigid, pinned-rigid and rigid-pinned), y(0) = 0 and y(x = L) = 1. Additionally, for the rigid-rigid connection, considering the sign of the curvature, y_x(0) = 0 and y_x(x = L) = −1/R. 
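The coefficients of the general solution follow from the boundary conditions as a small linear system. The sketch below assumes the standard tension-beam solution form y(z) = C1 + C2·z + C3·cosh z + C4·sinh z and the rigid-rigid conditions quoted above (y(0) = 0, y(L) = 1, y_x(0) = 0, y_x(L) = −1/R); the numeric values of u, L, and R are illustrative.

```python
import math

def solve4(A, b):
    # Gaussian elimination with partial pivoting for a 4x4 linear system.
    n = 4
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def shape_coeffs(u, L, R):
    """Coefficients C1..C4 of y(z) = C1 + C2*z + C3*cosh(z) + C4*sinh(z),
    z = u*x/L, under the rigid-rigid boundary conditions (an assumed form)."""
    ch, sh = math.cosh(u), math.sinh(u)
    A = [
        [1.0, 0.0, 1.0, 0.0],   # y(z=0) = 0
        [1.0, u,   ch,  sh],    # y(z=u) = 1
        [0.0, 1.0, 0.0, 1.0],   # y'(z=0) = 0   (implies y_x(0) = 0)
        [0.0, 1.0, sh,  ch],    # y'(z=u) = -L/(u*R)  (from y_x(L) = -1/R)
    ]
    b = [0.0, 1.0, 0.0, -L / (u * R)]
    return solve4(A, b)

C1, C2, C3, C4 = shape_coeffs(u=1.0, L=1.0, R=0.5)
# Verify the displacement boundary conditions are reproduced.
y0 = C1 + C3
yL = C1 + C2 * 1.0 + C3 * math.cosh(1.0) + C4 * math.sinh(1.0)
print(y0, yL)
```

The same 4x4 pattern covers the pinned end cases: a pinned end swaps the slope row for a zero-curvature row, leaving the solve unchanged.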
Similar boundary conditions, with the slope condition replaced by a zero-moment condition at the pinned end, hold for the pinned-rigid and for the rigid-pinned connections. According to the boundary conditions, the coefficients C₁, C₂, C₃ and C₄ of the particular solutions can be found for the rigid-rigid, the pinned-rigid, and the rigid-pinned connections. The elastic rod shapes for the various connection schemes at N = 0, within the range 1.5 ≤ L/R ≤ 3 used in the experiments, are given in Figure 2. For comparison, shapes corresponding to a linear displacement of the rod end (L/R = 0) are also shown in the figure. Figure 3 presents the dependence of α(0) and α(u_cr) on L/R for the various connection schemes. Here, u_cr is the non-dimensional parameter that corresponds to the buckling load; its value is obtained later. A numerical analysis of the data showed that for 0 ≤ u ≤ u_cr, the dependence of α on u can be approximated as a linear function of u² (Equation (15)). Following Equations (3) and (4) and using known dependences for moments and shear forces, the rod stiffness in the transverse direction is obtained for the rigid connection between the rod and the wheel and for the pinned connection between the rod and the wheel. The expression for y(x) in these equations is a particular solution of Equation (7) corresponding to the combination of boundary conditions at the rod ends. For all connection types, the transverse stiffness can be represented as K = K'EI/L³, where K' is the non-dimensional stiffness for the rigid-rigid, the pinned-rigid, and the rigid-pinned connections. The non-dimensional stiffness depends on the non-dimensional parameters u and L/R. The values of u range from 0 to u_cr, which corresponds to the buckling load. The value of u_cr is obtained from K' = 0 [12], and the critical load N_cr follows from Equation (6) as N_cr = u_cr²EI/L². Table 1 presents the values of u_cr for the investigated end connection conditions. Figure 4 shows the dependence of u_cr on R/L. For the rigid-rigid and rigid-pinned cases, the values of u correspond to those obtained by Misseroni et al. [7] and Bigoni et al. [6], respectively. The symmetry of the rigid-rigid case relative to R/L = 0.5 was demonstrated previously [7]. 
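Once u_cr is known, converting it into a dimensional critical load via N_cr = u_cr²EI/L², the relation implied by u = L(N/EI)^0.5, is a one-line computation. The u_cr value and section properties below are illustrative, not values from Table 1.

```python
# Convert a non-dimensional buckling parameter u_cr into a critical load:
# N_cr = u_cr^2 * E * I / L^2  (from u = L*(N/EI)^0.5).
def critical_load(u_cr, E, I, L):
    """E: elastic modulus [Pa]; I: second moment of area [m^4];
    L: length of the deformed part of the rod [m]."""
    return u_cr ** 2 * E * I / L ** 2

# Illustrative slender steel strip, not the specimen of the experiments.
E = 200e9       # Pa
I = 1.0e-12     # m^4
L = 0.5         # m
print(round(critical_load(3.0, E, I, L), 2))  # prints 7.2 (N)
```

The quadratic dependence on u_cr means that a modest error in locating the zero of K' translates into roughly twice that relative error in the critical load, which is why the extrapolation accuracy discussed below matters.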
The present study analyzed the dependence of K′ on 0 ≤ u ≤ ucr using the normalized non-dimensional values K′/K′0 and u²/ucr², where K′0 is the non-dimensional stiffness at N = 0. This approach is convenient for investigating the deviation from a linear dependence. For all connection cases, the graphs of the normalized dependences pass through the points (1, 0) and (0, 1). For the rigid-rigid connection (Figure 5), the graphs approach the linear dependence as the ratio L/R approaches 2. For other values of L/R, the dependence is nonlinear. The effect of L/R for the pinned-rigid and rigid-pinned cases is similar.

In a similar way, we analyzed the dependence of ω²/ω0² on u²/ucr², where ω0 is the dominant vibration frequency at N = 0, according to Equation (1). For 1.5 ≤ L/R ≤ 3, in the experiments with constant R and variable L, the coefficient δ varies from 6.8 to 22.7. The coefficient α for the same L/R and for u ≤ ucr varies within a range between 0.14 and 0.28. As the values of α are rather small, the nonlinearity of ω²/ω0² vs. the tensile force N is determined by that of K′. There is a range of L/R for which the dependence is close to linear with a certain accuracy.
Considering a linear dependence of ω² on the rod tension, the maximum deviation of ω²/ω0² from it is analyzed according to Equations (23) and (24) in the range 0 ≤ u ≤ ucr. The value of ω²/ω0² according to Equation (23) is obtained considering the dependence α(u) from Equation (15) for the given values of δ. The maximum deviation occurs at about the middle of the interval 0 ≤ u ≤ ucr. Table 2 and Figure 6 present the results of this analysis.

The range of allowed L/R depends on the selected accuracy of the correspondence between the dependence and its linear approximation. As the accuracy of the experiment, dependent on measuring the frequency and using the squared frequency for extrapolation, was 2%, and the accuracy of force measurement was 0.5%, the total maximum error was assumed to be 2.5%. In this case, the range of allowed L/R values was 1.33 ≤ L/R ≤ 4 for the rigid-rigid connection, 1 ≤ L/R ≤ 3 for the pinned-rigid case, and L/R ≥ 1.5 for the rigid-pinned one. The common range for all connections was 1.5 ≤ L/R ≤ 3; therefore, this interval was selected for further experiments. For all end connection schemes, the following values, symmetric to R/L = 0.5, were selected from this interval: 0.33, 0.4, 0.5, 0.6 and 0.67.
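The deviation-from-linearity assessment can be mimicked generically: compare a mildly nonlinear dependence with the chord through its endpoints and locate the largest deviation. The quadratic used below is a hypothetical stand-in for the actual normalized stiffness curve, not the expression from Equations (23) and (24).

```python
def max_chord_deviation(f, n=1001):
    """Largest |f(x) - chord(x)| on [0, 1], chord through the endpoints of f."""
    chord = lambda x: f(0.0) + (f(1.0) - f(0.0)) * x
    xs = [i / (n - 1) for i in range(n)]
    devs = [abs(f(x) - chord(x)) for x in xs]
    i_max = max(range(n), key=devs.__getitem__)
    return xs[i_max], devs[i_max]

# Hypothetical normalized stiffness curve passing through (0, 1) and (1, 0)
curve = lambda x: 1.0 - x ** 2

x_star, dev = max_chord_deviation(curve)
```

For this quadratic the largest deviation (0.25) occurs exactly at mid-interval, mirroring the observation that the maximum deviation occurs at about the middle of 0 ≤ u ≤ ucr.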
Experimental Program and Setup

According to the main aim of the research, the experimental investigation of the critical tensile force and its comparison with the analytical value, the following problems were solved experimentally:
• measuring the moment of inertia of the rotating masses in order to obtain the equivalent mass M;
• finding the modulus of elasticity of the rod;
• studying the influence of semi-rigidity of the rod end connections, caused by the elasticity of the elements attaching the rod ends to the guide and slider;
• obtaining the buckling load for all investigated types of end connections.

To solve the above-mentioned problems, a stand corresponding to the general scheme shown in Figure 1 was used.
The view of the stand, presenting its construction, is shown in Figure 7a. The axis of wheel 1 with bearings and the linear guiding elements of the direction slide 2 are connected to the basic rod 3. Joint 4, which connects the elastic rod 5 to the wheel, allows a rigid or a pinned connection. Connection joint 6 between the elastic rod and the direction slide has a screw stopper that allows a rigid or a pinned connection. According to the construction of the joint connecting the elastic rod to the wheel, the radius R of the rod end displacement was 305 and 295 mm for the rigid and pinned connections, respectively.
The elastic rod was made of a hardened steel strip with a section of 19.5 × 1.47 mm²; the working length L was set by measurement according to the selected L/R ratio. For the rigid connection, the working distance was measured from contact A (Figure 7b,c), and for the pinned one, from the rotation axis (Figure 7d). The extension of the traction cable 7 (a pulley block with weights at the end) is not shown in Figure 7a; it was used as a typical solution for loading.

To obtain the vibration frequency, the wheel circular velocity was measured by an inductive gage consisting of a permanent magnet located on the wheel and a stationary coil. The signal was transmitted from a data logger to a PC every 0.01 s, which allowed transferring the data at the maximum measured frequency with an error of 0.8%. For linear measurements of static displacements, the error of the standard gage was 0.05 mm. In the proposed testing stand design, the resistance of the rod's movable end to movement was reduced by the increased movement radius and by the structural design of mobility through wheel rotation. If the radius of the rolling balls in the central bearing assembly is 12 mm and the wheel radius is 295 mm, the friction resistance to rod end movement decreases 24.6 times.
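The quoted 24.6-fold decrease in friction resistance is consistent with the lever-arm ratio of the wheel radius to the rolling-ball radius:

```python
wheel_radius_mm = 295.0   # wheel radius for the pinned connection
ball_radius_mm = 12.0     # rolling balls in the central bearing assembly

# Friction resistance to rod end movement scales inversely with the lever arm
reduction = wheel_radius_mm / ball_radius_mm
```

Here `round(reduction, 1)` gives 24.6, matching the value stated in the text.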
The design of the test setup and the methods for determining the investigated system parameters and the critical force allow testing of other systems that include a tensile or compressed element and an end that slides along a curved trajectory [8,20]. A theoretically investigated system that includes a rigid tensile rod and rotation springs at its ends [8] can be tested without changing the design of the bench. To measure the rotation spring stiffness, the guide element with a spring at one end is fixed on the wheel rim, and the second end of the rod is pivotally fixed on the guide element. The theoretically investigated system [21], the so-called Ziegler's beam [20], includes a stretchable elastic element on two hinged supports, one of which can move in the longitudinal direction. An additional rigid rod is rigidly fixed to the movable end of the elastic rod and is loaded by a force that has a constant direction when the rod is rotated. To test this system, the wheel axle is mounted on the linear sliding element 2 (Figure 7), one end of the elastic rod is fixed rigidly to the wheel axle, and the second one is hinged to the basic rod 3 in the figure. The wheel rim is loaded by a constant-direction force from the elastic rod side. With this arrangement, the wheel spokes act as a rigid rod. This avoids the possibility of contact between the compressed curved element and the rigid rod [20], since they are in different planes. To create a force that acts in a constant direction, a weight load can be used; with this aim, the basic rod 3 should be installed vertically.

Obtaining the Moment of Inertia of the Wheel's Rotating Mass

The experiment was carried out for a vertical location of the wheel (Figure 8). The elastic rod (1) was made of a ∅2 mm high-strength wire; its length was 600 mm, and its mass 15 g. Two equal additional masses (2) (Mb/2 each) were connected to a flexible belt (3) placed on the wheel, as shown in Figure 8.
From the expression for the frequency of the system, where I0 is the moment of inertia of the wheel's rotating mass, D is the rotational stiffness, and f is the free vibration frequency, I0 can be obtained. The measurements were carried out for three cases: (1) without additional mass; (2) with an additional mass of 1164 g; (3) with an additional mass of 2272 g. The experimental results at Rb = 315 mm are shown in Figure 9.

From a linear approximation of MbRb² vs. 1/f² by the least squares method, it follows that I0 = 0.131 kg·m². Correspondingly, for radii R = 305 mm and R = 295 mm for the rigid and pinned connections between the rod and the wheel, the equivalent masses were 1.40 kg and 1.51 kg, respectively. For the selected range of L/R and rod cross sections, the ratio δ = M/m is 7 to 15.
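The identification of I0 can be sketched as a least-squares line through MbRb² versus 1/f². Assuming the torsional-pendulum relation f = (1/2π)·sqrt(D/(I0 + MbRb²)), it follows that MbRb² = (D/4π²)·(1/f²) − I0, so the negative intercept of the fitted line is I0. The measured frequencies are not listed in the paper, so the values below are synthesized from an assumed rotational stiffness D.

```python
import math

I0_true = 0.131          # kg*m^2, the value reported in the paper
D = 2.0                  # N*m/rad, assumed rotational stiffness (not given)
Rb = 0.315               # m, radius of the belt carrying the added masses
masses = [0.0, 1.164, 2.272]   # kg, added masses Mb for the three cases

# Synthesize the free-vibration frequencies the stand would show
freqs = [math.sqrt(D / (I0_true + m * Rb ** 2)) / (2 * math.pi) for m in masses]

# Least-squares line  Mb*Rb^2 = a*(1/f^2) + b,  so that I0 = -b
xs = [1.0 / f ** 2 for f in freqs]
ys = [m * Rb ** 2 for m in masses]
n = len(xs)
xm, ym = sum(xs) / n, sum(ys) / n
a = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum((x - xm) ** 2 for x in xs)
b = ym - a * xm
I0 = -b

# Equivalent masses at the rod end: M = I0 / R^2
M_rigid = I0 / 0.305 ** 2    # paper reports 1.40 kg
M_pinned = I0 / 0.295 ** 2   # paper reports 1.51 kg
```

The equivalent-mass step reproduces the quoted 1.40 kg and 1.51 kg (to rounding) from I0 = 0.131 kg·m² and the two wheel radii.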
This estimation of δ illustrates the negligible influence of the rod mass on the calculated deviation of ω² vs. N and on the corresponding error caused by the assumptions regarding the rod modal shape for vibrations at the dominant frequency.
Measuring the Flexural Rigidity of the Rod

An experimental investigation of the rod flexural rigidity is necessary for the following reasons:
• the steel class and its heat and mechanical treatment;
• the influence of the ductility of the rigid connections at the ends on the rod's lateral stiffness.

The modulus of elasticity of the steel strip was measured using a standard procedure [22]; at a confidence level of 90%, the mean value was 204 ± 2 GPa. The ductility of the rigid connections is caused by contact deformations at the contact area of the rod and by the elastic compliance of the attachment screws. It results in a decrease in the rod's lateral stiffness [23-25]. In the present research, this decrease was taken into account through the stiffness reduction coefficient λ, which was determined experimentally by comparing the experimental and calculated values of the transverse stiffness at N = 0, where K0 is the experimental value of the rod flexural rigidity at N = 0, and K0* is the calculated value of the rod flexural rigidity, following Equations (18)-(20) at N → 0: for the rigid-rigid connection, for the pinned-rigid connection, and for the rigid-pinned connection.

The experimental values of the rod flexural rigidity were obtained by static testing according to the scheme shown in Figure 8, by adding a weight at one of the cable ends and measuring the corresponding displacement of the weight. The tests were carried out for all end connection types and specimen lengths planned for obtaining the buckling load. The dependence of the displacement X on the load at the end of the rod for the rigid-rigid connection and a rod length of 610 mm (L/R = 2) is shown in Figure 10. The experimental values of the rod flexural rigidity were obtained by the least-squares method [26]. It was found that, for the investigated length range of 443-915 mm, only the end connection combination has an essential effect on the end factors.
Independent of the length, the following values can be used: 0.89 for the rigid-rigid, 0.96 for the pinned-rigid, and 0.94 for the rigid-pinned connection.

Following the determined values of the stiffness reduction coefficient, from Equations (28)-(30), the effective rod length should be increased by factors of 1.039, 1.013, and 1.02 relative to the measured values. The value of Ncr was calculated using Equation (21), considering the influence of ductility through the increased effective rod length. Accordingly, the values of the critical load calculated from the measured rod length should be decreased by 8%, 3%, and 4%, respectively.
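The quoted corrections can be checked numerically, assuming the end compliance is modeled purely as an increased effective length: since the transverse stiffness at N = 0 scales as EI/L³, the effective length grows by λ^(-1/3), and since Ncr scales as EI/L², the critical load drops by 1 − λ^(2/3).

```python
lambdas = {"rigid-rigid": 0.89, "pinned-rigid": 0.96, "rigid-pinned": 0.94}

# Effective-length increase: K ~ EI/L^3 at N = 0, so L_eff/L = lambda**(-1/3)
length_factor = {k: lam ** (-1.0 / 3.0) for k, lam in lambdas.items()}

# Critical-load reduction: N_cr ~ EI/L^2, so the relative drop is 1 - lambda**(2/3)
ncr_drop = {k: 1.0 - lam ** (2.0 / 3.0) for k, lam in lambdas.items()}
```

This reproduces the length factors 1.039, 1.013 and 1.02; the load reductions evaluate to about 7.5%, 2.7% and 4.0%, consistent with the rounded 8%, 3% and 4% quoted in the text.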
Experimental Verification of the Design Scheme

The mode shapes and dominant modal frequencies for the selected structural scheme (see Figure 1) depend only on the inertia forces of the rod and rotating masses, the rod flexural rigidity, and the natural damping. The free vibrations should be harmonic, and their spectrum should have one dominant frequency. Additionally, the values of the dominant vibration frequencies for the selected end connection schemes should correspond to the calculated values (Equation (1)).

To verify the above-mentioned requirements, free vibration tests were carried out for the rigid-rigid connection scheme, and a spectral analysis of the measured data was performed. The L/R ratio was 2, and N = 0. The ratio δ between the equivalent mass of the wheel and the rod mass was 20.5, which allowed decreasing the error due to the approximation in calculating the coefficient α (Equation (2)). Figure 11 presents the vibration time histories of the structure and the corresponding spectra (obtained using an FFT over 100 vibration periods). The calculated value (according to Equation (1)) was 0.62 Hz.
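The spectral estimation of the dominant frequency can be sketched with NumPy's FFT. The record below is synthetic, not the measured one: a 0.62 Hz vibration with the measured 3% damping ratio, sampled every 0.01 s over about 100 periods, as in the experiment.

```python
import numpy as np

fs = 100.0                  # sampling rate, Hz (data logged every 0.01 s)
f0, zeta = 0.62, 0.03       # dominant frequency and measured damping ratio
t = np.arange(0.0, 100.0 / f0, 1.0 / fs)    # about 100 vibration periods

# Synthetic free-vibration record (exponentially decaying cosine)
x = np.exp(-2.0 * np.pi * zeta * f0 * t) * np.cos(2.0 * np.pi * f0 * t)

# One-sided amplitude spectrum; the dominant bin gives the modal frequency
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
f_dom = freqs[np.argmax(spec)]
```

With 100 periods of data, the frequency resolution fs/len(x) is well below 0.01 Hz, so the dominant bin lands close to 0.62 Hz despite the damping-induced peak broadening.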
The differences between the experimental and calculated values were up to 3%. The measured damping ratio was 3%. Thus, the experiments confirmed the selected design scheme and the validity of the assumptions used for the calculation of the modal frequencies.

Experimental Verification of the Proposed Dependencies

The possibilities of experimental verification are limited by the minimal natural vibration frequency of the system. As the tensile force becomes higher, the resistance of the bearing increases and the rod stiffness in the transverse direction decreases, which yields high damping; the vibrations then decay within a time shorter than one natural vibration period, which prevents finding the natural frequency. It was shown experimentally that, to obtain the natural vibration frequency with an accuracy of 1%, this frequency should be at least 0.3 Hz for the investigated system. The corresponding values of the maximal tensile load achieved in the experiments, Nmax, are presented in Table 3. The experiments also showed that, in order to obtain the critical load by extrapolating the measured values to zero frequency, at least 5 experimental frequency values are needed; with fewer values, the extrapolation accuracy decreases.
Increasing the number of measured values does not increase the accuracy, due to the errors in single measurements. To evaluate the accuracy of the dependence of N on f² and to find the value of Ncr, a linear extrapolation of the measured values was used, where Ncre is the extrapolated value of the critical force. The parameters Ncre and µ were obtained by approximating Equation (31) by the least squares method [26]. Figure 12 presents the dependence of N on f² for the rigid-rigid end connections and an L/R ratio of 1.5, together with the corresponding linear approximation.
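The extrapolation procedure can be sketched as follows: fit a line to the measured (f², N) pairs and read the intercept at f² = 0 as the critical-force estimate Ncre. The linear form N = Ncre − µf² is an assumption about Equation (31), and the data below are synthetic, generated from an assumed critical load.

```python
import numpy as np

N_cr_true, mu = 18.0, 25.0    # assumed critical load (N) and slope (N/Hz^2)

f = np.linspace(0.3, 0.6, 5)          # five measured frequencies, Hz
N = N_cr_true - mu * f ** 2           # synthetic tensile loads, N

# Least-squares line in the (f^2, N) plane; the intercept at f^2 = 0 is N_cre
slope, intercept = np.polyfit(f ** 2, N, 1)
N_cre = intercept

# Error metric from the paper: D_N% = abs[(N_cr - N_cre)/N_cr] * 100
D_N = abs((N_cr_true - N_cre) / N_cr_true) * 100.0
```

Five points is the minimum recommended in the text; with exact linear data the fit recovers the assumed critical load, and on real measurements the residuals give the deviation reported in Table 3.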
Estimation of the linear dependence error was carried out according to the absolute value of the deviation, where Nlin is calculated following Equation (31) with the obtained values of Ncre and µ and the experimental frequency values. The maximum deviation values are shown in Table 3. Evaluation of the error in obtaining Ncr was performed using the following dependence: DN% = abs[(Ncr − Ncre)/Ncr]%. The calculated and experimental values of the critical load, as well as an estimation of the measurement accuracy, are given in Table 3. The experimental values of Ncre are given only for cases in which the maximal tensile force is at least 60% of the calculated critical value. From the analysis of the error values given in Table 3, it follows that:
• the linear approximation model used in this study corresponds to the experimental dependence of f² on N;
• the calculation model of the system and the buckling conditions of the tensile rod correspond to the physical system.

Conclusions

The buckling of a system including a tensile rod with one end moving along a linear sliding guide and the second end connected to a circular sliding guide was studied theoretically and experimentally for pinned, rigid, and semi-rigid connection schemes. A device for the experimental investigation of the rod's critical tensile load is proposed. The device can provide higher accuracy compared to known alternative devices; this is achieved by bringing the maximum applied load close to the critical one. The achieved accuracy, estimated by the discrepancy between the calculated and experimental values depending on the L/R ratio, was 2.1-3.5%.
In the frame of the study, the following characteristics were determined:
• the dependence of the natural vibration frequencies on the tensile load, the cross-sectional dimensions of the elastic rod, and the mass at its end;
• the values of the critical load for different connection schemes, considering the elastic compliance in the attachment nodes;
• the range of the ratio between the rod length L and the radius R of the rod end trajectory that yields a linear dependence between the transverse stiffness of the system and the tensile load, allowing the application of a dynamic method to obtain the critical load with minimal error.

It was experimentally confirmed that:
• an elastic rod with various types of end connection schemes buckles under a tensile load;
• free vibrations are feasible as the tensile load changes from 0 to a value close to the critical load;
• a linear approximation model can be used to obtain the dependence of the squared natural vibration frequency f² on the tensile load N;
• the calculation model of the system, the buckling conditions, and the critical load value correspond to the physical model.

The results of this study can be used to design mechanisms that include elements with ends moving along curved sliding guides.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Cybersecurity Behaviour: A Conceptual Taxonomy

User cybersecurity behaviour is a concern for organisations as well as home users, because cyber-criminals have shifted from targeting security systems to targeting the users of those systems. As a result, an increasing number of studies have been conducted in an effort to understand user cybersecurity behaviour. The advantage of understanding user behaviour is that researchers and security practitioners can apply this knowledge to change behaviour in ways that benefit cybersecurity. Different studies have categorised similar cybersecurity behaviours, but the naming conventions differ across studies. This motivates the first contribution of the paper: a unified terminology for cybersecurity behaviour. Secondly, most studies were conducted in an organisational setting; user behaviour in other environments is yet to be identified and categorised. The second contribution of this study is therefore the identification and categorisation of home user cybersecurity behaviour. The identification and classification of more cybersecurity behaviours is intended to support the creation of strategic interventions to change and maintain good cybersecurity behaviour.

Introduction

To decrease the number of cyber incidents, it is key to understand users, and especially user behaviours, in order to change those behaviours. An initial step is the identification and classification of user cybersecurity behaviour.

Cybersecurity Behaviour (CSB)

Human behaviour refers to an individual's actions, reactions, mannerisms and conduct within different environments [4]. Cybersecurity behaviour (CSB) is therefore defined, by the current research, as an individual's actions, reactions, mannerisms, and general conduct in the cyber domain. The goal of studying user CSB is to promote good CSB while decreasing malicious or bad CSB.
Cybersecurity Behaviour Context

Researchers [5,6] have noted that users act differently in different settings. However, the categorisation of CSB has focused mainly on behaviour in organisations. Numerous targets of cybersecurity attacks fall outside the context of the organisation [7]. Therefore, a gap exists in the literature, where other cybersecurity contexts have been left out. The importance of including different contexts is the ability to accurately categorise CSB.

Cybersecurity Behaviour Taxonomy

To understand a system, it is necessary to understand the components that make up that system. A taxonomy is a tool used to classify components in a domain. Making use of a taxonomy allows for the structural organisation of concepts that make up the system. CSB has been expressed in the form of taxonomies. More recently, Bitton et al. presented a taxonomy for mobile cybersecurity awareness [8]. The current study builds on previously published taxonomies in the classification of users' CSB.

The remainder of the paper is presented as follows. Section 2 presents a literature review and Section 3 presents the proposed conceptual taxonomy. The conclusion and future work are presented in Section 4.

Literature Review

The following section presents literature that focuses on user CSB in the workplace as well as at home.

Cybersecurity Behaviour Context

Context is made up of the circumstances surrounding a behaviour, and it has an influence on behaviour [10]. An example related to CSB is that social engineering attacks may be more effective at certain times of the year, such as the festive season. In this section, CSB in the work and home contexts is discussed.
Cybersecurity Behaviour at Work

The CSB of employees is mostly governed by policies and regulations. Employees are held accountable for misconduct or for not adhering to organisational rules [11,12]. Furthermore, ICT departments assist users in adhering to policy by sending reminders about software updates, information on new threats, information on best practices, and by blocking dangerous or inappropriate sites [9].

Blythe strived to understand CSB in an organisational setting [13]. In an organisational setting, cybersecurity is usually evaluated as a function of compliance [14], and bad CSB is seen as non-compliance with the set policies. Blythe contended that behaviour is more complex than this. The study argued that the evaluation of compliance is limited in that it tests only a small scope of policies and procedures. Among other behavioural determinants, behaviour is a function of interior and exterior influences. Interior influences include self-motivation or drive, similar to the intentions mentioned in [9], while exterior influences are features such as the environment [13].

To categorise CSB in organisations, a six-element taxonomy was developed by Stanton et al. The dependent variables used to group the behaviours were 1) the amount of expertise required to carry out the behaviour, and 2) the intention towards the organisation when carrying out the behaviour. The six categories of the taxonomy were: intentional destruction, dangerous tinkering, aware assurance, detrimental misuse, naïve mistakes, and basic hygiene [9].
Intentional destruction, detrimental misuse, dangerous tinkering, and naïve mistakes are examples of bad CSB, while aware assurance and basic hygiene are examples of good CSB. To visualise the taxonomy, its categories were placed on a two-dimensional plane. On the x-axis, the intention of the user is plotted, ranging from malicious to unintentional. On the y-axis, user expertise is plotted, ranging from expert to novice (see Fig. 1).

Guo observed disparities in the conclusions of information systems behaviour research. The different, often contradictory results were hypothesised to be due to methodological issues or to imprecise definitions of information systems behaviour. The study recognised the need for clearer definitions of information systems behaviours. Through the review and synthesis of previous studies, a conceptual framework was designed which incorporates four categorisations of organisational behaviour: security assurance behaviour, security compliant behaviour, security risk-taking behaviour, and security damaging behaviour [17]. Chu, Chau, and So developed a typological theory for information security deviant behaviour in an organisational setting [18]. The study categorised such behaviour into four categories: misuse of information systems resources, information security carelessness, system protection deviance, and access control deviance.

In terms of CSB at work, Fig. 2 presents a graph with the cybersecurity categories taken from the studies presented in Section 2. The categories are plotted on the same graph that was used in the research by Stanton et al. [9]. The graph shows that most of the identified categories require some level of cybersecurity expertise. This implies that users in organisations generally have expertise in cybersecurity. An inference can be made that CSB in organisations is not hindered by a lack of cybersecurity expertise.
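The two-dimensional plane described above can be sketched in code. This is purely illustrative, not the paper's own artefact: the coordinates below are assumptions inferred from the category descriptions (intention running from malicious to benevolent, expertise from novice to expert).

```python
# Illustrative sketch of Stanton et al.'s six-element taxonomy on the two
# axes described in the text. Coordinates are our assumptions, not values
# from the paper: intention runs from -1 (malicious) to +1 (benevolent),
# expertise from 0 (novice) to 1 (expert).
TAXONOMY = {
    "intentional destruction": {"intention": -1.0, "expertise": 1.0},
    "detrimental misuse":      {"intention": -0.5, "expertise": 0.5},
    "dangerous tinkering":     {"intention":  0.0, "expertise": 1.0},
    "naive mistakes":          {"intention":  0.0, "expertise": 0.0},
    "aware assurance":         {"intention":  1.0, "expertise": 1.0},
    "basic hygiene":           {"intention":  1.0, "expertise": 0.0},
}

def quadrant(intention: float, expertise: float) -> str:
    """Map an (intention, expertise) pair to a coarse region of the plane."""
    side = "malicious" if intention < 0 else "benevolent/neutral"
    level = "expert" if expertise >= 0.5 else "novice"
    return f"{side}, {level}"

for name, pos in TAXONOMY.items():
    print(f"{name}: {quadrant(pos['intention'], pos['expertise'])}")
```

Representing categories as coordinates like this makes it easy to reproduce plots such as Fig. 1 and Fig. 2 with any plotting library.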
The next important observation is that the majority of the security behaviour categories are neither malicious nor benevolent in intent. This neutral attitude towards cybersecurity is a risk because users then show no interest in improving or applying their skills.

Currently, through the graph in Fig. 2, it is not clear what the difference in intention between security risk-taking behaviour and security compliance behaviour is. A clearer distinction between intentions therefore needs to be derived, and this distinction must also be represented through the graph.

Cybersecurity Behaviour at Home

Home users are individuals of different ages who use computers or mobile devices that connect to the Internet. In the home context, users are typically solely responsible for managing their CSB. It is often assumed that cybersecurity knowledge, awareness and skills are much lower for home users, as they are not exposed to training programs [19]. This assumption was later challenged by [20], where it was found that home users do have cybersecurity knowledge and skills. The knowledge may be gained from other environments such as work, but the behaviour at home is different [11,21].

Lastly, there are home users who do follow cybersecurity principles at home. 'Cybercitizens' is a term found in the study by Catherine et al. and describes home users who are proactive in being cybersecurity aware and in applying cybersecurity skills in their environments [19]. The study focuses on the intentions of cybercitizens and presents interventions to encourage more users to become cybercitizens. The types of behaviours that a cybercitizen exhibits include installing and updating antivirus software, being cautious of emails and email attachments, and choosing strong yet easy-to-remember passwords [19].
Proposed Conceptual Cybersecurity Behaviour Taxonomies

The proposed CSB taxonomy addresses two points: 1) ambiguity in cybersecurity intentions, and 2) completeness, by introducing context as an independent variable when categorising CSB.

Updated Work Cybersecurity Behaviour Taxonomy

The current section proposes an updated CSB taxonomy for the work environment. The contribution in this section is the division of behaviour intentions into four categories. Fig. 3 presents the CSBs at work with the derived intentions. This new graph offers the advantage of clearly showing the intentions associated with the categories of behaviours.

Intentions

Intentions are plans for performing a behaviour. The literature on CSB intentions can be divided into intentional and unintentional CSB [25].

Intentional Cybersecurity Behaviour

Intentional CSB refers to instances where the user purposefully wants to harm systems or disregard cybersecurity principles. Conversely, users can purposefully uphold and/or promote cybersecurity principles.

Intentional Good (IG) Cybersecurity Behaviour

There exist instances where the user purposefully upholds cybersecurity principles [9]. This category is adopted from Stanton et al., where aware assurance and basic hygiene are collapsed into intentional good CSB. An example of users performing intentional good CSB is users who create good passwords to protect information belonging to them or to an organisation. The literature has developed terms such as good cybersecurity hygiene and conscious care behaviour to describe this category of users [9,26].

Unintentional Cybersecurity Behaviour

Unintentional CSB refers to instances where the user intends neither to disregard nor to uphold cybersecurity principles. In these instances, good or bad CSB is a by-product of other actions or intentions.
Unintentional Bad (UB) Cybersecurity Behaviour

Behaviours categorised as unintentional bad CSB are those where the user does not intend to cause malicious harm or purposefully disregard cybersecurity principles. Ifinedo referred to these behaviours as counterproductive computer security behaviours [27], while Stanton et al. referred to them as dangerous tinkering and naïve mistakes [9] and Chu et al. refer to them as information security deviant behaviour [18]. An example of unintentional bad CSB is a user who writes their password down or recycles their password [28].

Unintentional Good (UG) Cybersecurity Behaviour

Behaviours categorised as unintentional good CSB are behaviours where users preserve cybersecurity as a consequence of other intentions or actions. The study by Virginia Tech found that even though users complied with password change policies, the users still felt that cybersecurity is an obstacle. In this case, the intention of the behaviour is to comply, not to practice good CSB [29]. Unintentional good CSB is not ideal, because for a behaviour to be repeated the user must be intentional in repeating as well as sustaining the behaviour.

Home Cybersecurity Behaviour Taxonomy

The categories captured in Fig. 3's graph focus on CSB that occurs at work. In the interest of completeness, the next section of the study aims to categorise the CSBs of home users. To do so, different CSBs were extracted from the literature. These behaviours were plotted on a similar graph, as shown in Fig. 4. However, the y-axis had to be adjusted. According to the literature presented, home users do have cybersecurity knowledge and skills. In the home environment, the application of this knowledge and these skills is more distinguishing of the behaviours than merely having them. Therefore, the y-axis is divided into None or Limited Knowledge and Skills; Knowledge and Skills but No Implementation; and, finally, Knowledge and Skills with Implementation.

In the Knowledge and Skills with No Implementation row there are three behaviour categories, namely Disrupting, Unconcerned and Knowledge Gaining. Disrupting behaviours are intentional bad CSBs; the behaviours under this category show neglect of cybersecurity principles through reckless actions such as downloading torrents from peer-to-peer networks. The Unconcerned category describes careless CSBs; however, the intention of these behaviours is not to intentionally cause harm. Knowledge Gaining CSB is intentionally good behaviour; however, without applying the knowledge and skills, these behaviours have little use in maintaining cybersecurity.

In the None or Limited Knowledge row there are two behaviour categories, namely Cognitive Laziness and Convenience. These terms were taken from [30]. Cognitive Laziness is unintentional bad CSB; this category describes behaviour that is done mindlessly without consideration of any cybersecurity. Finally, the Convenience category describes unintentional good CSB; these behaviours are performed only if the cybersecurity-related task is convenient.
Conclusion

The aim of the paper was to provide a clear representation and visualisation of CSB. The study reviewed literature on CSB in the workplace; the literature was consolidated and represented on one graph. Previous research had represented user intentions of CSB on an ordinal scale ranging from malicious to benevolent. The current study refined this measurement by dividing intentions into smaller units, resulting in four categories to describe user CSB intentions. The second half of the paper focused on home user CSB. Eight categories of home user CSB were presented. The categories were obtained by plotting home user CSBs found in the literature against knowledge and skill implementation and user intentions. The information in this study contributes to the understanding of user CSB and can be used by researchers and practitioners of cybersecurity. This research helps sharpen the question from 'How do we change CSB?' to, for example, 'How do we change the CSB of home users who exhibit Cognitive Laziness?'. Future work will need to conduct experiments to verify the conclusions of this study, and will also need to address the influences on CSBs.
Clinical Trials Corner Issue 10(2)

Study Rationale: EG-70 (detalimogene voraplasmid) is a non-viral, non-integrating gene therapy consisting of a nanoparticle formulation of plasmids that activates innate and adaptive immune responses. Stimulating the immune system locally in the bladder avoids systemic toxicities, and this therapy was evaluated in the treatment of BCG-unresponsive NMIBC.

Study Design: This is a Phase I/II study with dose escalation to assess the safety and tolerability of EG-70.

Endpoints: The primary endpoint of the Phase I portion was to assess the safety and tolerability of EG-70 through the first 12 weeks. Complete response (CR) rate and duration of response were secondary endpoints.

Results: Twenty-four patients received at least one dose of intravesical EG-70. Overall, 13 (54.2%) patients experienced TRAEs, which were mostly grade 1 and 2, with one grade 3 TRAE of renal failure in a patient with pre-existing renal failure. The most common TRAEs were urinary tract infection (12.5%), micturition urgency (12.5%), hematuria (12.5%), and dysuria (12.5%). Across all treatment doses, the anytime CR rate was 73%. The CR rate was 45% at 6 months, improving to 60% at 6 months in those patients who received the recommended phase II dose (RP2D).

Rationale: Nivolumab (NIVO) became a standard-of-care adjuvant treatment for patients with high-risk MIUC after radical surgery based on the initial results from the phase 3 CheckMate 274 trial, which assessed adjuvant NIVO for patients with high-risk MIUC after radical surgery and met both of its primary endpoints.
Study Design: Phase 3, randomized, double-blind, multicenter study of adjuvant NIVO versus placebo (PBO) for high-risk MIUC. Eligible patients had ypT2-ypT4a or ypN+ MIUC with prior neoadjuvant cisplatin chemotherapy, or pT3-pT4a or pN+ MIUC without prior neoadjuvant cisplatin chemotherapy who were not eligible for or refused adjuvant cisplatin chemotherapy. Patients underwent radical surgery within the past 120 days and had disease-free status within 4 weeks of randomization.

Primary endpoints: DFS in all randomized patients (ITT population) and DFS in all randomized patients with tumor PD-L1 ≥ 1%. Key secondary endpoints: time from date of randomization to first local non-urothelial tract or distant recurrence or death from any cause (NUTRFS), and OS.

Endpoints: With initial follow-up (median follow-up, 20.9 months for NIVO and 19.5 months for PBO), adjuvant NIVO improved DFS versus PBO in the ITT population (HR 0.70 [0.55-0.90]; P < 0.001) and in patients with tumor PD-L1 expression ≥ 1% (HR 0.55 [0.35-0.85]; P < 0.001). The authors present extended follow-up results from CheckMate 274, including the first report of OS outcomes from the trial.

Results: EAU new data: median (minimum) follow-up in the ITT population, 36.1 (31.6) months; median (minimum) follow-up in the PD-L1 ≥ 1% population, 37.1 (32.1) months. DFS was defined as time from date of randomization to date of first recurrence (local urothelial tract, local non-urothelial tract or distant) or death from any cause.

OS data from interim analyses favored adjuvant NIVO over PBO. In the ITT population, median OS reached 69.5 months with NIVO versus 50.1 months with PBO (HR 0.76 [0.61-0.96]). In the PD-L1 ≥ 1% population, median OS was not reached with either treatment (HR 0.56 [0.36-0.86]); 36-month OS rates were 71.3% with NIVO versus 56.6% with PBO. There was a trend for OS benefit with NIVO among prespecified subgroups of ITT patients.
Comments: With extended follow-up in CheckMate 274, adjuvant NIVO continued to show improved DFS, NUTRFS, and DMFS (time from the date of randomization to first distant, non-local recurrence or death from any cause) versus PBO in both the ITT and PD-L1 ≥ 1% populations. Continued follow-up of OS is ongoing. These results, along with those of the AMBASSADOR study presented at GU ASCO, provide additional support for adjuvant immunotherapy as a standard of care for high-risk MIUC after radical resection, potentially providing an opportunity for a curative outcome.
“I aspire to look and feel healthy like the posts convey”: engagement with fitness inspiration on social media and perceptions of its influence on health and wellbeing

Background

Fitspiration is a popular social media trend containing images, quotes and advice related to exercise and healthy eating. This study aimed to 1) describe the types of fitspiration content that users access and how they engage with content, 2) investigate the disordered eating and exercise behaviours and psychological distress of individuals who access fitspiration, and 3) understand the perceived influence of fitspiration on health and wellbeing.

Methods

Participants who access fitspiration content were recruited via social media to complete a cross-sectional online survey. Participants' psychological distress was measured using the Kessler 10 Psychological Distress Scale (K10); disordered eating behaviours using the Eating Attitudes Test-26 (EAT-26); and compulsive exercise behaviours using the Exercise Addiction Inventory (EAI). Participants also answered a series of open-ended questions about their experiences with fitspiration. A descriptive statistical analysis was conducted for the quantitative data. Responses to open-ended questions were analysed for key themes using an iterative process of open, axial and thematic coding.

Results

Participants (N = 180, 151 female, median age 23.0 years (IQR 19.0, 28.5)) most commonly accessed content posted by personal trainers and athletes (59.4%), posts tagged with the 'fitspiration' hashtag (53.9%) and posts by 'everyday' people (53.3%). Overall, 17.7% of participants were classified as at high risk for an eating disorder, 17.4% reported very high levels of psychological distress, and 10.3% were at risk of addictive exercise behaviours. Participants described both positive and negative influences of engaging with fitspiration content.
The influence on their health beliefs and behaviours was explained through four key themes: 1) Setting the 'healthy ideal', 2) Failure to achieve the 'ideal', 3) Being part of a community, and 4) Access to reliable health information.

Conclusions

Many participants reported benefits of fitspiration content including increased social support and access to health information. However, participants also reported that fitspiration content could negatively influence their wellbeing and perception of healthy goals. Content posted by relatable individuals or qualified experts was perceived as most trustworthy. Future research is needed to determine the individual and content-related factors associated with negative and positive fitspiration experiences.

Background

Increasingly, people look to social media to connect with each other, create expressions of self-identity, and seek information about health [1-3]. 'Fitness inspiration', often abbreviated to 'fitspiration' or 'fitspo', is a popular health trend on social media where individuals post or view images, quotes and advice about fitness and nutrition [4]. Some social media users who follow the fitspiration trend also engage in discussions that shape an online 'fitness culture', including expressing views around a 'healthy' appearance and 'correct' dieting and exercise behaviours [5]. Fitspiration content is generated and shared on major social media platforms such as Instagram, Facebook and Tumblr with 'friends', 'followers', or the general public [6]. Hashtags (#) accompanying content enable it to be followed by other social media users. To illustrate its popularity, a search (October 9, 2017) of '#fitspo' on Instagram returned over 48 million posts [7]. Despite an ostensible focus on healthy lifestyle behaviours, content analyses have revealed that fitspiration content commonly portrays several harmful themes [6,8,9].
For example, fitspiration content contains objectifying images that depict an idealised thin-athletic female body type and hypermuscular male body type [6,8,9]. Researchers have posited that while the shift from a focus on thinness to fitness may outwardly seem positive, the healthy looking ideal is still underpinned by aesthetic perfection [5]. In order to achieve this athletic-ideal body, individuals may be required to practice greater dietary restriction and engage in high intensity exercise regimes [10]. Content analyses also demonstrate that fitspiration content depicts themes related to restrictive eating and exercise practices [6,8]. A common focus of fitness inspiration content is on exercising for appearance reasons [9], which has been associated with more negative body image [11,12]. Messages on fitness inspiration pages such as 'clean eating' (eating foods perceived to be healthy and unprocessed) and guilt for eating 'unhealthy' foods may encourage people to diet or restrict certain food groups [9]. Experimental studies also suggest that acute exposure to fitspiration images can increase short-term body dissatisfaction and negative mood among female undergraduate students [4,13,14]. This is concerning as body dissatisfaction is a significant risk factor for eating disturbances including anorexia and bulimia nervosa [15,16]. In addition, a study conducted with female participants who posted fitspiration on Instagram found that these participants reported more disordered eating and compulsive exercise behaviours compared to participants who posted about travel [17]. A survey of social media users aged 15-29 years in Australia also found that 'liking' or 'following' diet and fitness content, including fitspiration, on social media was associated with greater odds of self-reported history of an eating disorder and misuse of detox products or diet pills [18]. 
From a sociological perspective, fitspiration has been described as entrenching dominant discourses on health and the ideal body [5]. Despite the online interactive platform providing a safe space for users to challenge current orthodox ideals surrounding thinness which are presented by traditional media, fitspiration users internalised these 'healthy' ideals as truths [5]. Further, individuals were held morally accountable for regulating their bodies by adhering to normative health choices and behaviours [5]. While previous research highlights the potential harms of fitspiration, to date, content analyses have had a narrow scope, focussing on posts and websites explicitly identified as fitspiration either through hashtags linked to the posts (e.g., Tiggemann and Zaccardo [4]) or using a keyword search engine for websites (e.g., Boepple, Ata, Rum and Thompson [9]). Further, in the study by Holland and Tiggemann [17], researchers recruited only female participants who posted on social media with one hashtag (#fitspo). Less is known about other types of fitspiration content aiming to motivate users to engage in healthy eating and exercise, or which types of content are the most commonly viewed. Thus, in this paper we offer a broadened definition of fitspiration as a category of social media content including images posted to social media with the hashtags (#) '#fitspiration' or '#fitspo'; as well as profile pages or blogs by personal trainers, fitness models or body builders; and content related to fitness challenges, diets, and health cleanses. Research that specifically investigates individuals who post or follow fitspiration content is also limited. Experimental studies investigating the effects of fitspiration content recruited female undergraduate students regardless of whether they choose to engage in fitspiration [4,13]. 
The study by Carrotte, Vella and Lim [18] included social media users who reported 'liking' or 'following' fitspiration; however, the survey did not use validated measures to capture information about body image or disordered eating and exercise behaviours. The study by Holland and Tiggemann [17] recruited individuals who posted fitspiration content; however, it did not include individuals who only view content. Individuals who view, but do not post, fitspiration content are also exposed to the potentially harmful imagery and themes and are also likely to experience the negative effects associated with fitspiration content. Therefore, the current study aims to explore the characteristics and perceptions of individuals who post as well as view fitspiration content. Finally, given that few studies have investigated people who choose to follow fitspiration content, there is little evidence about how individuals perceive its impact on their health and wellbeing, including both the positive and negative effects. Understanding the perceptions of individuals who access fitspiration content will also enable a greater understanding of how fitspiration content shapes health beliefs and attitudes, which are theorised as important predictors of health behaviours [19]. Given these gaps in the current literature, this study aimed to answer the following research questions: 1) What are the characteristics of individuals who choose to engage with fitspiration content including psychological distress, and risk of disordered eating and exercise behaviours?; 2) What types of fitspiration content do people access and how do they engage with this content?; and 3) What is the perceived influence of fitspiration content?

Study design

This study employed a cross-sectional online survey consisting of closed-ended questions to capture quantitative data and open-ended questions to capture qualitative data.
Participants

Participants were recruited through advertisements on Facebook and Instagram during a six-week period in June and July, 2016. Eligible participants were aged 16 years or older, reported engaging with at least one type of fitspiration content on social media, and lived in Australia. Advertisements were targeted to reach individuals whose profiles met the eligibility criteria and were also posted on Facebook group pages related to health and fitness where the researcher had gained approval from the group's administrator. In addition to the targeted advertising strategy, participants also indicated their eligibility prior to the survey and were excluded from completing the survey if they gave an answer that was discordant with the eligibility criteria. Clicking on the advertisements directed participants to the online survey. There were 813 clicks from social media advertisements to the survey webpage, and 22% (180/813) of those who clicked on the link completed the survey. No incentive or reimbursement was offered for participation.

Data collection

The survey was pre-tested by the primary researcher's colleagues, and feedback was sought to improve the clarity of questions and the functionality of survey administration. Total time to complete the survey (including collection of quantitative data) was approximately 30 min. A secure, web-based application, Research Electronic Data Capture (REDCap) [20], was used to collect and manage participant data. The following domains were included in the survey:

Demographic characteristics

Demographic characteristics were collected including age, gender, sexual identity, highest level of education completed, recreational spending money per week, and whether participants lived with their parents, or had any children. Body mass index (BMI) was calculated from participants' self-reported height and weight.
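The BMI calculation from self-reported height and weight follows the standard formula, weight in kilograms divided by height in metres squared. A minimal sketch (the function name is ours, and the example values are illustrative, chosen to land near the sample median of 24.5 kg/m2 reported in the Results):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Standard body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative values: 70 kg at 1.69 m gives a BMI of about 24.5 kg/m2.
print(round(bmi(70, 1.69), 1))
```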
Area of residence, defined as major city or regional, was determined using participants' self-reported postcode.

Psychological distress

Psychological distress was measured using the Kessler 10 Psychological Distress Scale (K10) [21,22]. The scale consists of 10 items relating to feelings of fatigue, nervousness, hopelessness, restlessness, depression and worthlessness during the past 4 weeks [22]. Each item is scored on a five-point scale ranging from 1, 'none of the time', to 5, 'all of the time', with scores of 30-50 indicating 'very high' psychological distress, 22-29 'high', 16-21 'moderate', and 10-15 'low' [23]. Previous research has shown the K10 scale to have good validity and reliability [22,24]. In the current sample, internal consistency was excellent (Cronbach's α = 0.93).

Disordered eating behaviours

Disordered eating behaviours were measured using the Eating Attitudes Test-26 (EAT-26), a 26-item scale that measures attitudes towards food and eating (e.g. 'I find myself preoccupied with food') [25]. Participants indicated how often each item applies to them using a 6-point scale (0 for 'never', 'rarely' or 'sometimes'; 1 for 'often'; 2 for 'usually'; 3 for 'always'). A score of 20 or more indicates risk of an eating disorder. The EAT-26 has been found to have acceptable to excellent internal consistency and good validity in past research [25,26]. The scale demonstrated good internal consistency in the present sample (Cronbach's α = 0.89).

Fitspiration content

Participants indicated which types of fitspiration content they access on any social media platform by selecting from a checkbox list with the following types of content: 'fitspiration' posts; weight loss or body transformation motivation; personal trainers or athletes, celebrities and models (for the purpose of weight loss or fitness motivation); body building or strength training; 'clean eating'; diets; fitness challenges; and detoxes/cleanses. Participants could select multiple responses.
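The K10 and EAT-26 cut-offs described in the measures above can be sketched as simple banding functions (the function names are ours; the thresholds are those reported in the text):

```python
def k10_band(score: int) -> str:
    """Band a K10 total (10-50) using the cut-offs reported above:
    30-50 very high, 22-29 high, 16-21 moderate, 10-15 low."""
    if not 10 <= score <= 50:
        raise ValueError("K10 totals range from 10 to 50")
    if score >= 30:
        return "very high"
    if score >= 22:
        return "high"
    if score >= 16:
        return "moderate"
    return "low"

def eat26_at_risk(score: int) -> bool:
    """EAT-26: a total of 20 or more flags risk of an eating disorder."""
    return score >= 20

print(k10_band(31), k10_band(18), eat26_at_risk(20))
```

Banding functions like these make the descriptive statistics in the Results (e.g. the percentage of participants in each distress band) straightforward to reproduce from raw totals.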
We also asked if they accessed 'thinspiration', another social media trend promoting a thin body type, extreme weight loss and advice for engaging in disordered eating behaviours, as this content is known to have negative effects on body perceptions and disordered eating behaviours [29]. We were interested in how many participants accessed both types of content, as fitspiration is often credited as a healthier reaction to the thinspiration trend [30]. However, participants who only viewed thinspiration, not fitspiration, were not included in the current study.

Fitspiration engagement

Participants indicated how they engage with fitspiration content by selecting from a checkbox list of engagement behaviours (Table 3). Engagement behaviours were subsequently categorised into two groups to determine the level of engagement with fitspiration: 1) activities that involve actively contributing and sharing content (e.g., 'Participate in discussions'), and 2) activities that involve more passive engagement through observing content (e.g., 'Scroll through posts or images') (Table 3). Previous studies investigating social media have used self-reported estimates of the frequency and duration of exposure to social media content; however, such estimates can be heavily affected by recall bias and do not capture the different ways that people can interact with social media content or their level of involvement [31]. Instead, researchers and marketing professionals have suggested classifying social media users based on their behavioural engagement with content and theorise that those who participate actively with social media content may be more likely to make behavioural changes [32,33].
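The active/passive grouping described above amounts to a simple set-membership rule: a participant counts as an active engager if they report any contributing behaviour. A minimal sketch, where the behaviour lists are illustrative examples in the spirit of Table 3 rather than the paper's exact item wording:

```python
# Illustrative behaviour lists, not the paper's exact Table 3 items.
ACTIVE = {"participate in discussions", "post images", "share content"}
PASSIVE = {"scroll through posts or images", "view videos"}

def engagement_level(behaviours: set) -> str:
    """Class a participant as 'active' if any reported behaviour involves
    contributing or sharing content; otherwise class them as 'passive'."""
    return "active" if behaviours & ACTIVE else "passive"

print(engagement_level({"scroll through posts or images"}))  # passive
print(engagement_level({"scroll through posts or images",
                        "participate in discussions"}))      # active
```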
Motives for accessing fitspiration content
We captured participants' reasons for accessing fitspiration from a predefined checkbox list that included reasons related to improving health and wellbeing, diet, appearance, muscular strength, body shape and size, and because their friends like it (Table 4). Participants could select multiple reasons.

Perceived influence of fitspiration on health and wellbeing
The online survey contained three open-ended questions (Table 1). Questions were designed to elicit both positive and negative experiences with fitspiration. Open-ended questions were asked before specific questions relating to health outcomes and behaviours to maximise the emergence of new ideas and minimise bias due to perceived knowledge of the researchers' intent and the subsequent questions about body image, mental health and exercise behaviours.

Data analysis
Quantitative data analysis was performed using Stata version 13 [34]. We used descriptive statistics to determine the percentage of participants who accessed each type of fitspiration content and the way they engaged with it. Descriptive statistics are also provided for BMI, K10, EAI, EAT-26 and socio-demographic characteristics. Qualitative data were analysed using NVivo 11 software [35]. A coding framework was developed after reading all responses, and was refined throughout an iterative process of open, axial, and thematic coding [36]. Open coding identified the discrete categories capturing perspectives, health behaviours and beliefs, and effects of fitspiration. Axial coding focused on identifying themes relating to similar meanings and ideas, relationships between participants' responses and comparisons to the open codes. Thematic coding identified overarching themes that emerged from the conceptual links between open and axial codes. The coding and conceptual links were discussed amongst the research team to ensure consistency and to resolve any discrepancies.
Illustrative quotes were copied verbatim, with 'sic' used to denote words that appeared erroneous, such as typing mistakes. To maintain anonymity, only individuals' gender and age are reported.

Sample characteristics
The overall sample included 180 participants. The majority of participants were female (n = 151, 84%), the median age was 23.0 years (IQR 19.0, 28.5) and the median BMI was 24.5 kg/m2 (IQR 21.6, 26.8) (Table 2). According to the EAT-26 scale, 17.7% (n = 21) of participants were classified as at high risk of an eating disorder. K10 scores indicative of psychological distress were highly prevalent, with 25.4% (n = 35) of participants having 'high' and 17.4% (n = 24) having 'very high' psychological distress. From the Exercise Addiction Inventory (EAI), 10.3% (n = 15) of participants were at risk of addictive exercise behaviours. Table 3 summarises the types of content accessed by participants and the ways that they engaged with fitspiration content. The most popular types of content accessed by participants were content posted by personal trainers and athletes (n = 107, 59.4%), followed by posts tagged with the 'fitspiration' hashtag (n = 97, 53.9%), content posted by 'everyday' people (n = 96, 53.3%), and 'clean eating' content (n = 93, 51.7%). The majority of participants (n = 117, 65.0%) reported only passive engagement activities, and 35% (n = 63) reported any activities involving contributing fitspiration content.

Reasons for accessing fitspiration content
The majority of participants (n = 159, 90.3%) reported that fitspiration content inspired them to exercise or eat healthily. Furthermore, the most common reasons for accessing fitspiration content related to health and wellbeing, such as "To inspire me to exercise to improve my health or wellbeing," selected by 73.9% (n = 133) of participants (Table 4).
However, reasons related to weight loss and appearance were also common, with 53.9% (n = 97) of participants selecting "To inspire me to exercise or diet to lose weight" and 42.2% (n = 76) selecting "To inspire me to improve my appearance."

Perceived influence of fitspiration
Responses from the 155 participants who completed the open-ended questions were analysed. The number of words provided per participant ranged from two to 227, with a median of 36 words. Most participants perceived that fitspiration had influenced their health through thinking about, changing, or maintaining their diet and exercise behaviours. Four key themes emerged regarding how fitspiration content had influenced their health beliefs and behaviours: 1) Setting the 'healthy ideal', 2) Failure to achieve the 'ideal', 3) Being part of a community, and 4) Access to reliable health information.

Setting the 'healthy ideal'
Approximately half of our participants saw fitspiration images as portraying an ideal representation of health and fitness that they wanted to strive towards, for example, "I use the images for goal setting i.e. I could look like her" (Female, 18). They described using the ideal image as their primary goal for increasing their exercise and healthy eating. Subsequently, their goal image became the benchmark against which participants compared themselves and measured their success. These participants were determined to work hard towards their ideal goal until they reached it. They did not discriminate between their own and others' potential for a certain appearance or body type, but felt that by applying enough determination, effort, discipline and hard work to their goal, it could be achieved: "It's helped [me] work harder and shows that when effort and determination is put in you can always reach your goal." (Female, 16). Furthermore, one participant described a lack of dedication and commitment to the ideal as a sign of weakness and disregard for health: "I don't think 'fat acceptance' should be a thing.
It's just laziness. By looking at pictures of fit girls and learning about good food and workout routines, it keeps your mind on the right track." (Female, 27). For female participants, entwined in the notion of an ideal goal was the perception that achieving this goal equated to success and would bring happiness: "…I want to look like them and be able to do the strength/flexibility feats they can do (Pole dance/yoga/handstands). I think I will be happier if I do." (Female, 25). The perceived ideal appearance for female participants was to look strong, fit and toned. The fitspiration trend had contributed to a shift in the ideal body image from thin to strong, and had reduced stigma around weight training and muscle building for females: "Women are taught cardio is the way and weights will make you 'too big' but at the moment there are so many strong insta[sic] famous ladies who are killin[sic] it." (Female, 23). As the trend in cultural body ideals had changed, so too had their personal body goal. Participants praised the shift towards strength and fitness as a healthier trend and also credited it with improving their sense of a positive body image and focussing more on health: "This content has motivated me to join the gym and has also changed my ideal body goal from thin/extreme weight loss to strong/healthy, therefore having a healthier body image." (Female, 18). It became apparent that, for about a fifth of participants, their idea of being healthy had often become conflated with looking healthy. Participants suggested that attaining the ideal appearance, as portrayed by fitspiration images, equated to optimal health, fitness and strength. For example, "I aspire to look and feel healthy like the posts convey" (Female, 18).
Among the relatively small number of male participants, about a third similarly emphasised slimness and strength as their ideal body goal; however, responses from males were more likely to specifically emphasise muscular "gains," size and definition. One participant explained his current routine of increasing muscle mass and tone: "Through reading and studying what certain athletes have done (how they eat and train) ... And because of this I went from 130kgs to 78kgs. Now currently in the routine of putting muscle on in the off season and toning in season." (Male, 27).

Failure to achieve the 'ideal'
Approximately one-quarter of responses conveyed an underlying sense of feeling inadequate in terms of appearance, fitness level and overall sense of worth, as many female participants cited a desire to be "better", "stronger" or "fitter" than they currently were. A small number of female participants blamed themselves and reported failing to reach their goal when they compared their progress and appearance to the fitspiration trend. Failure to meet their perceived ideal goal made them question their worth as a person, saying that, "…it makes me upset that I don't feel I look good enough to start with" (Female, 20). Another participant spoke of the pressure to look like the ideal and the resulting sense of inadequacy as negatively affecting her recovery from an eating disorder: "It has definitely impacted my mental health and has probably slowed recovery from my eating disorder. It can cause anxiety and hopelessness to know that I don't and will never look like 'fitspiration' people." (Female, 19). Only one male participant expressed body-image related concerns for males who may compare themselves to images depicting hyper-muscularity "when they see photos of a male claiming to be natural but using steroids." (Male, 22).
In contrast, a small number of participants demonstrated a critical awareness of the potential for fitspiration to negatively affect body image by setting unrealistic goals. These participants acknowledged that an ideal body was often unattainable and depended on genetic factors or "unrealistic" extreme regimes. They reported that, "… they look that way mostly because of their natural body type and that no amount to[sic] exercise or healthy eating is guaranteed to make someone else appear just as good." (Female, 19). Furthermore, a few participants were cynical towards the idea of appearance-related goals. These participants valued content that emphasised the skilful and functional aspects of fitness, and they preferred to set performance-related goals. For example, "If it's done correctly and it focuses on building strength or improving stamina, it can be motivating. A lot of us know we'll never be VS [Victoria Secret] angels so I think we're moving away from wanting to look like that and focusing more on improving ourselves to what our body can achieve" (Female, 16).

Being part of a community
Participants also perceived fitspiration as an online community for like-minded people to share their interest in health and fitness. This community offered participants a sense of support and of sharing in each other's health and fitness 'journey'. Participants felt a sense of accountability through a shared commitment to strive towards their health and fitness goals. Rather than having an individual goal, they saw improving health and fitness as a shared goal. Participants enjoyed "feeling a part of the larger fitness community and sharing it with friends" (Male, 24) and stated that, "We're all working towards something amazing" (Female, 27). A small number of participants felt that having an online fitness community was beneficial because they did not have access to such a community offline.
This related to the advantages of social media: increasing accessibility and connecting people with similar interests. For example, "… it gives you the accountability and support without having to have a group of people around you, can be very motivating" (Female, 25). While some participants stated that they followed their 'role models' or 'idols', more commonly reported, by about a third of participants, was feeling inspired by the success stories and progress of 'everyday', 'normal' people posting fitspiration content. This inspiration was predominantly due to being able to relate to the person posting; they were perceived as a person who faced similar challenges and barriers to getting healthy. Therefore, participants felt that the results of others were also achievable for themselves: "It is inspiring to see normal people doing what I wish to achieve" (Male, 16). "If they can do it then there is no reason why I can't." (Female, 25). There was also a greater sense of trust in those who were 'everyday' people, because participants felt that they gave a more honest account of their difficulties, as opposed to a paid fitness model or actress who could give a superficial, glorified view of becoming fit, be sponsored by a company, or understate the challenges of changing their lifestyle. These participants were selective about the content they followed and deliberately avoided content perceived to be 'superficial' or 'fake': "I have to make sure that the pages I follow are realistic. I don't follow any fashion models or people that will make me feel shit about myself. I like to follow people who are honest about how hard it can be to lose weight and to stay healthy." (Female, 25).

Access to reliable health information
About half of participants commented that they enjoyed learning about health and fitness through their online communities.
Being a part of the fitspiration trend reportedly gave participants greater access to healthy recipes, exercise ideas and knowledge about fitness and nutrition: "it shows me ways to make tasty looking healthy snacks and meals and gives me ideas for short workouts and new exercises to mix up my work out" (Female, 21). Participants reported that they found the advice practical and transferrable to their own lifestyle, as they could try exercises or recipes at home, or find information about upcoming fitness events. Another benefit mentioned was that access to advice and ideas on social media removed barriers such as having to pay for a gym membership or a personal trainer. For example, "It also inspires me to do 'at home' workouts with minimal equipment rather than paying memberships/attending personal training and classes." (Female, 21). While many participants enjoyed sharing advice with each other, a minority acknowledged the challenge of finding reliable and accurate information; they noted the unregulated nature of social media and the potential for underqualified people to give inappropriate advice. These participants were cynical towards people posting in the fitspiration trend, saying many are "uneducated and underqualified to give the advice they do" (Female, 26). Some participants explicitly reported being careful about whose advice to follow; they specifically chose to follow people with relevant qualifications to ensure that the information was scientifically reliable and could be trusted. For example, one participant explained that "the people i [sic] choose to follow have qualifications in the field … And base their methodology on science and evidence rather than on looks and their ability to market themselves or products." (Male, 23).

Discussion
This study aimed to determine which types of fitspiration content people choose to access and how they engage with this content.
Of interest, our results show that participants commonly accessed fitspiration content that has not previously been researched in great depth, such as content related to 'clean eating' and social media posts generated by athletes and personal trainers. This supports the need for further research to examine the potential health outcomes of engaging with these types of content. For example, the notion of 'clean eating' has the potential to lead to the restrictive eating practices characteristic of orthorexia nervosa [37]. Also interesting was that the majority of participants passively viewed or liked content, but did not actively contribute fitspiration content themselves. It would be important to factor this knowledge in when recruiting for future fitspiration studies, as a previous study recruited people who posted fitspiration on Instagram [17]. Furthermore, passive Facebook use has previously been associated with lower subjective wellbeing compared to active use [38]. We also sought to examine health and wellbeing in our sample of people who engage with fitspiration. Of concern, 43% of our sample had high or very high levels of psychological distress, which is considerably higher than estimates in the general Australian population of 20% of females and 11% of males aged 18-24 years [39]. This finding generally aligns with those of a previous experimental study, which found that brief exposure to fitspiration leads to lower self-reported mood states [4]. The current study extends this previous finding to a real-world sample that chose to access fitspiration content, supporting an association between fitspiration usage and psychological distress that warrants further investigation. The proportion of the sample at risk of an eating disorder was also high, at 17.7%.
This proportion is similar to that of Holland and Tiggemann [17], who found that 17.5% of female participants who posted fitspiration on Instagram were at risk of an eating disorder, significantly more than among participants in their study who posted content about travel. While the current study generally supports this finding, the two studies used different measures to determine risk of an eating disorder, limiting our ability to directly compare the proportions. The high proportion of our sample at risk of an eating disorder also further supports the finding of a previous study that individuals who liked or followed fitspiration-related content were more likely to self-report having an eating disorder and misusing diet pills [18]. A greater proportion of participants in this study also had scores indicative of compulsive exercise behaviours (10%) compared to other studies using the EAI, which estimated the risk of exercise addiction among males and females who regularly exercised at 3-7% [27,40,41]. Consistent with the findings of the present study, Holland and Tiggemann [17] found that people who posted in the fitspiration trend had higher scores on the Obligatory Exercise Questionnaire compared to those who posted about travel. Given this is a cross-sectional study, we are unable to determine whether the high proportions of psychological distress and disordered eating and exercise behaviours were caused by accessing fitspiration content. It is also possible that individuals with existing psychological distress and disordered eating and exercise behaviours may be predisposed to seek out fitspiration content. While several experimental studies support an immediate short-term effect of fitspiration on mood and body dissatisfaction [4,13,14], further research is required to explore the relationship between fitspiration and outcomes for health and wellbeing in the longer term.
The majority of participants in this study reported positive benefits, including motivation to exercise and eat healthily, access to exercise ideas and being part of an online community. By providing motivation and social support, fitspiration may lead to increased physical activity, which has significant benefits for physical and mental health [42,43]. However, a minority of participants reported having experienced negative impacts, which were often related to feeling inadequate or failing to achieve their goals. Furthermore, psychological distress and risk of an eating disorder or compulsive exercise behaviours were common in our sample. The presence of both positive and negative impacts suggests that fitspiration content may influence individuals differently, or that some pages and content are more harmful than others. Additionally, the frequency with which individuals access fitspiration content may influence their perceptions and may mediate the potential impact on their health and wellbeing. Future research is needed to determine the individual and content-related factors associated with these negative and positive fitspiration experiences. The benefits of social connection and interaction offered by the online fitspiration community can help to explain how fitspiration may enable active behaviour change. This concept is supported by previous studies, which have found that social support may enhance the efficacy of online interventions to improve physical activity and nutrition [44,45]. However, through a sociological lens, the notion of this supportive community can also be reinterpreted as a social mechanism for regulating adherence to the 'healthy practices' and pursuit of an unattainable ideal currently endorsed by the fitspiration community [5]. Further research is required to examine whether fitspiration can genuinely contribute to balanced, attainable and sustainable behaviour change.
Despite outwardly reporting the benefits of fitspiration, the language that participants used, and its underlying meaning, revealed concerning findings embedded within their beliefs. Participants did not appear to realise that they had conflated their appearance ideal with optimal health, but justified their efforts to look fit with the belief that they were improving their overall health. Likewise, experimental research suggests that focussing on the functional aspects of idealised images actually has a negative effect on the appearance satisfaction of young female participants [46]. Fitspiration may be contributing to the construction and reinforcement of an ideal, attractive body type that is also perceived to be healthy, while also providing misinformation about what it is to look and be healthy, with potentially harmful consequences. This aligns with previous studies that have found fitspiration posts often depict an appearance ideal that is lean, muscular and athletic [5,8,9]. Similar to Jong and Drummond [5], our results suggest that many users are internalising a particular appearance and perceiving it as the ideal appearance of a healthy individual. Open-ended responses from participants also offer some explanation for why content generated by 'everyday' individuals was popular among our sample. Content generated by 'everyday' individuals was considered more relatable and trustworthy compared to celebrities or models. Similarly, a content analysis of fitspiration on Instagram found that personal accounts were more popular than commercial accounts, as indicated by a higher number of likes and followers [47].
In contrast to the findings of Jong and Drummond [5], who found that their participants (of a similar age, and also Australian) displayed less discernment of credibility and vested interests in their fitspiration role models, some participants within our sample demonstrated additional critical awareness by selecting more 'realistic' role models and acknowledging that goals may differ between individuals. However, it is not clear whether this content actually contains more reliable and beneficial information. Content may actually portray an idealised version of the 'everyday person', and therefore individuals may be comparing themselves to a glorified version of 'normal', particularly as social media enables users to control how they present themselves to their social network by selectively posting content that reflects their desired image [3]. Furthermore, comparing one's appearance to peers on social media has been shown to have a greater indirect effect on body image concerns than comparing to celebrities [48]. Future research is needed to determine whether fitspiration content created by 'everyday' individuals and personal trainers is reliable and beneficial, given that these types of content are perceived as trustworthy and therefore have a greater potential to influence health beliefs and behaviours. The present study has some limitations that are likely to have influenced the sample's representativeness and our ability to confidently determine the relationship between health outcomes and engagement with fitspiration. This study used convenience sampling; in particular, the majority of participants were recruited via Facebook and very few via Instagram, which is known to be a common source of fitspiration content [6]. We may also have recruited participants who are likely to spend more time on social media than those in the wider population who access fitspiration content [49].
Alternatively, reaching individuals who spend more time on social media may capture the audience most 'at risk' and with the highest exposure to fitspiration. The study had a low response rate, although this is consistent with other studies that used social media recruitment for online surveys [17]. The amount of missing data (Table 2) suggests the survey may have been too lengthy, difficult or sensitive for some participants. Due to the study design, we were unable to collect in-depth qualitative data about participants' perceptions or to probe participants' responses further. In particular, fewer responses from male participants meant that their experiences, and how these differed from female participants' experiences, could not be explored in greater depth. However, this study can inform future qualitative research to capture a more comprehensive story of the influence of fitspiration. The questionnaire also included questions, particularly those relating to fitspiration content and engagement, that were generated by the researchers; these questions were not validated, nor has their reliability been assessed. Finally, this study did not differentiate between different types of fitspiration content or characteristics of participants, and therefore we were unable to determine how perceived harms and benefits may be attributed to particular types of content or individual characteristics. Comparison with a control group that does not engage with fitspiration would also be useful to determine the association between fitspiration and other health or demographic characteristics.

Conclusion
Social media is an increasingly popular and accessible way to gather and share health-related information. This study has described the types of fitspiration content that users engage with and offers direction for future research into the potential impact of lesser-researched but popular types of content.
Findings indicate that future research should broaden its scope to consider these other types of fitspiration-related content and their potentially associated harms, with larger and more representative samples. This study also increases understanding, in an emerging area of research, of the potential negative impact of fitspiration on individuals who engage with this content, highlighted by the high prevalence of risk for compulsive exercise and eating behaviours and psychological distress in our sample. Fitspiration content may also be contributing unreliable health information and endorsing unrealistic appearance-related goals. While many participants reported positive benefits associated with fitspiration, including social support and increased motivation, further and more in-depth research is needed to determine the individual risk factors, as well as the types and features of content, that are associated with either negative or positive fitspiration experiences.
Secure MIMO Transmission via Intelligent Reflecting Surface

In this letter, we consider an intelligent reflecting surface (IRS) assisted Gaussian multiple-input multiple-output (MIMO) wiretap channel in which a multi-antenna transmitter communicates with a multi-antenna receiver in the presence of a multi-antenna eavesdropper. To maximize the secrecy rate of this channel, an alternating optimization (AO) algorithm is proposed to jointly optimize the transmit covariance R at the transmitter and the phase shift coefficients Q at the IRS by fixing the other as a constant. When Q is fixed, an existing numerical algorithm is used to search for the globally optimal R. When R is fixed, three successive approximations of the objective function to a surrogate lower bound are applied and a minorization-maximization (MM) algorithm is proposed to find a locally optimal Q. Simulation results are provided to validate the convergence and performance of the proposed AO algorithm.

I. INTRODUCTION
The intelligent reflecting surface (IRS) has drawn wide attention for its applications in wireless communications. An IRS is a low-complexity, software-controlled passive metasurface that can significantly enhance users' transmission rates with very low power consumption [1]. Motivated by these advantages, the IRS was recently applied to the study of physical layer security, and several results on secrecy rate maximization of IRS-assisted multiple-input single-output (MISO) wiretap channels were established, including the single-user case [2]-[4] and the downlink multi-user case [5],[6]. All these studies indicate that the IRS significantly enhances users' secrecy rates. However, all the aforementioned contributions in the current literature [2]-[6] are restricted to MISO settings, i.e., only a single antenna at the receiver as well as the eavesdropper is considered.
When multiple-input multiple-output (MIMO) is considered, there are two significant differences in the optimization problems compared with the conventional MISO case. Firstly, in MIMO systems, beamforming is not always the optimal solution. Therefore, we need to optimize a transmit covariance matrix instead of a beamforming vector in the secrecy rate maximization problem for this case. Secondly, for secrecy rate optimization problems in the MIMO case, the objective function is a complicated log-determinant formula, compared with the simple log-of-scalar formula in the MISO case. Therefore, all the existing solutions for the MISO case fail in the MIMO case. To the best of our knowledge, the study of the IRS-assisted MIMO wiretap channel is still an open problem, and there are no existing numerical or analytical solutions to maximize its secrecy rate. Motivated by the aforementioned aspects, the main contribution of this letter is that we consider an IRS-assisted Gaussian MIMO wiretap channel and aim at maximizing the user's secrecy rate numerically. To solve this non-convex problem, an alternating optimization (AO) algorithm is proposed to jointly optimize the transmit covariance R at the transmitter as well as the phase shift coefficients Q at the IRS. When Q is fixed, an existing algorithm is applied to optimize R globally. When R is fixed, we approximate the objective function by a surrogate lower bound, and a locally optimal Q is obtained via a minorization-maximization (MM) algorithm. In particular, the key difficulty is how to find a proper surrogate function for the complicated objective function. Hence, we apply three successive approximations of the objective function to obtain a proper lower bound so that the MM algorithm can be applied, which is significantly different from the existing MM used in the simple MISO case [2],[4], in which only one approximation of the objective is needed due to its simple structure.
As convergence is reached, the result returned by the AO algorithm is guaranteed to be a Karush-Kuhn-Tucker (KKT) solution of the original problem.

Notations: A^T and A^H denote the transpose and Hermitian conjugate of A, respectively; λ_max(A) denotes the maximum eigenvalue of A; |A| and tr(A) are the determinant and trace of A; ⊙ denotes the Hadamard product; arg(a) denotes the phase of the complex value a; A_ij denotes the element in the i-th row and j-th column of A; a_i is the i-th element of a.

II. CHANNEL MODEL AND PROBLEM FORMULATION
Let us consider the IRS-assisted MIMO wiretap channel model shown in Fig. 1, which includes a transmitter Alice, a receiver Bob, an eavesdropper Eve and an IRS. The numbers of antennas deployed at Alice, Bob and Eve are m, d and e respectively, and the number of reflecting elements on the IRS is n. We assume that Alice, Bob and Eve are located in a city hot-spot area, and the direct link between Alice and Bob/Eve is blocked by a building. The IRS is therefore placed in a higher position to assist Alice's transmission by passively reflecting the signals to Bob. Due to the broadcast nature of wireless channels, the reflected signal can also reach Eve. Therefore, the main task of the IRS is to adjust the phase shifts applied by the reflecting elements so as to increase the information rate at Bob while decreasing the information leakage to Eve. Based on these settings, the received signals at Bob and Eve are expressed as

y_B = H_IB Q H_AI x + ξ_B,   y_E = H_IE Q H_AI x + ξ_E,   (1)

respectively, where x is the transmitted signal, Q is the diagonal phase shift matrix of the IRS, whose i-th diagonal element is e^{jθ_i} (i = 1, 2, ..., n) with θ_i the phase shift coefficient at reflecting element i, H_AI, H_IB and H_IE are the channel matrices of the Alice-IRS, IRS-Bob and IRS-Eve links respectively, and ξ_B and ξ_E represent the noise at Bob and Eve respectively, with i.i.d. entries distributed as CN(0, 1).
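The received-signal model above can be made concrete numerically. The sketch below (Python/NumPy; the dimensions, random channel draws and uniform-power covariance are illustrative assumptions, not values from the letter) evaluates the secrecy-rate objective log2|I + H_B R H_B^H| - log2|I + H_E R H_E^H| with the cascaded channels H_B = H_IB Q H_AI and H_E = H_IE Q H_AI:

```python
import numpy as np

def secrecy_rate(H_AI, H_IB, H_IE, Q, R):
    """Secrecy-rate objective: log2|I + H_B R H_B^H| - log2|I + H_E R H_E^H|,
    with cascaded channels H_B = H_IB Q H_AI and H_E = H_IE Q H_AI."""
    H_B = H_IB @ Q @ H_AI
    H_E = H_IE @ Q @ H_AI

    def rate(H):
        # I + H R H^H is Hermitian positive definite, so slogdet is safe
        M = np.eye(H.shape[0]) + H @ R @ H.conj().T
        return np.linalg.slogdet(M)[1] / np.log(2.0)

    return rate(H_B) - rate(H_E)

# Illustrative setup: m = d = e = n = 2, Rayleigh-like channel draws,
# uniform power allocation R = (P/m) I, and unit-modulus phase shifts.
rng = np.random.default_rng(0)
m = d = e = n = 2
cplx = lambda shape: (rng.standard_normal(shape)
                      + 1j * rng.standard_normal(shape)) / np.sqrt(2)
H_AI, H_IB, H_IE = cplx((n, m)), cplx((d, n)), cplx((e, n))
Q = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))  # |Q_ii| = 1
P = 10.0
R = (P / m) * np.eye(m)
print("secrecy rate (bits/s/Hz):", secrecy_rate(H_AI, H_IB, H_IE, Q, R))
```

Evaluating this function with Q held fixed while varying R (and vice versa) mirrors the alternating structure of the AO algorithm the letter proposes; note the rate can be negative for a bad (Q, R) pair.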
We consider that full channel state information (CSI) is available at Alice, which can be achieved by modern adaptive system design: the channels are estimated at Bob and Eve and sent back to Alice. We note that Eve is simply another user in the system; it also shares its CSI with Alice but is not trusted by Bob. A controller is used to coordinate Alice and the IRS for the channel acquisition and data transmission tasks [3]. Given (1), the secrecy rate maximization for this model can be expressed as the following problem P1:

P1:  max_{R, Q}  log_2|I + H_B R H_B^H| - log_2|I + H_E R H_E^H|
     s.t.  tr(R) ≤ P,  R ⪰ 0,  |Q_{i,i}| = 1, i = 1, ..., n,

where H_i = H_Ii Q H_AI, i ∈ {B, E}, P denotes the total transmit power budget of Alice, R = E{xx^H} is the transmit covariance of Alice, and the unit modulus constraint |Q_{i,i}| = 1 ensures that each reflecting element of the IRS does not change the amplitude of the signals. Obviously, the determinant terms in the objective function of P1 cannot be simplified to scalars as in the MISO case, which significantly increases the difficulty of the problem.

III. ALTERNATING OPTIMIZATION ALGORITHM

To solve this non-convex problem, we propose an iterative AO algorithm that optimizes R and Q alternately, fixing the other as a constant. We first fix Q and maximize over R. Note that when Q is fixed, H_B and H_E are also fixed, so P1 reduces to the secrecy capacity optimization problem of a general Gaussian MIMO wiretap channel. To solve it, we apply the key Theorem 1 in [7], which equivalently transforms the original problem into a convex-concave max-min optimization problem. We then apply the existing algorithm in [7], based on the barrier method in combination with Newton's method and backtracking line search, to globally optimize R. Note that in [7] the algorithm was developed only for real-valued channel matrices; we extend it to complex-valued channels by re-deriving the gradient and Hessian of the barrier objective function.
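For fixed Q, the objective of P1 is the standard MIMO wiretap secrecy rate C_s = log_2|I + H_B R H_B^H| - log_2|I + H_E R H_E^H|. The sketch below evaluates it for a simple isotropic covariance R = (P/m)I, an assumption used purely for illustration rather than the optimized R from [7]:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, e, n, P = 4, 2, 2, 8, 10.0  # illustrative sizes and power budget

def cn(rows, cols):
    return (rng.standard_normal((rows, cols))
            + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

H_AI, H_IB, H_IE = cn(n, m), cn(d, n), cn(e, n)
Q = np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n)))
H_B, H_E = H_IB @ Q @ H_AI, H_IE @ Q @ H_AI

def secrecy_rate(R, H_B, H_E):
    """C_s = log2|I + H_B R H_B^H| - log2|I + H_E R H_E^H| (bits per channel use)."""
    def rate(H):
        M = np.eye(H.shape[0]) + H @ R @ H.conj().T
        return np.linalg.slogdet(M)[1] / np.log(2.0)  # logabsdet, base-2
    return rate(H_B) - rate(H_E)

R_iso = (P / m) * np.eye(m)  # isotropic covariance satisfying tr(R) = P
C_s = secrecy_rate(R_iso, H_B, H_E)
```

With R = 0 (no transmission) the rate is zero, and any feasible R keeps tr(R) within the power budget P.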
Using the same steps of proof illustrated in [7], the non-singularity of the Hessian matrix and the global convergence of the algorithm can still be proved for the complex-valued channel case. We omit the detailed steps of this algorithm due to the page limit.

The next step is to optimize Q for fixed R, a subproblem that can be expressed as P2. P2 is a complicated non-convex problem, with both a non-convex objective function and non-convex constraints, and the existing solutions for the IRS-assisted MISO case (e.g., semi-definite relaxation and fractional programming) [2], [3] cannot be directly applied. To optimize Q, we apply the MM algorithm to P2; the key idea is to first obtain an approximate lower bound (i.e., a surrogate function) of the objective, and then iteratively compute the optimal value of this bound subject to the constraints. If the bound is constructed properly, any converged point generated by MM is a KKT point (i.e., a locally optimal point) of the original problem. For a detailed explanation of MM, please refer to [8]. Since the objective function f(Q) in P2 consists of two complicated log-determinant functions, directly finding a proper lower bound is significantly more difficult. Our approach is therefore to first find lower bounds for f_B(Q) and f_E(Q) separately, and then formulate a new approximated problem by adding these two bounds together. After that, we make two further successive approximations of this bound to formulate the final surrogate function of f(Q), and apply the MM algorithm to compute a locally optimal Q.

Firstly, let Q̃ be a feasible point satisfying the unit modulus constraint; a quadratic lower bound of the function f_E(Q) can then be expressed as in (4). The inequality in (4) is obtained via the lemma: for any positive definite matrices A ∈ C^{n×n} and Ã ∈ C^{n×n},

log_2|A| ≤ log_2|Ã| + (1/ln 2) tr(Ã^{-1}(A - Ã)).   (5)
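The lemma above is the first-order tangent-plane bound that follows from the concavity of the log-determinant over positive definite matrices. A quick numerical check of the natural-log form (base 2 only rescales both sides by 1/ln 2):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_pd(k):
    # Random Hermitian positive definite matrix.
    X = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return X @ X.conj().T + np.eye(k)

gaps = []
for _ in range(500):
    A, At = rand_pd(4), rand_pd(4)
    lhs = np.linalg.slogdet(A)[1]
    # Tangent-plane bound at At, from concavity of ln|.| over PD matrices:
    #   ln|A| <= ln|At| + tr(At^{-1} (A - At)).
    rhs = np.linalg.slogdet(At)[1] + np.trace(np.linalg.inv(At) @ (A - At)).real
    gaps.append(rhs - lhs)
```

Every gap should be non-negative (up to floating-point tolerance), with equality exactly when A = Ã.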
Then, to obtain the lower bound of f_B(Q), let T = H_IB Q L^{1/2}; according to the matrix inversion lemma, f_B(Q) can be rewritten in terms of T. Letting T̃ = H_IB Q̃ L^{1/2} and applying (5), f_B(Q) is also lower bounded as in (7). Hence, combining (4) and (7), the approximated problem of P2 can be expressed as P3. However, it is still difficult to optimize Q given Q̃, due to the complicated term h_B(Q) as well as the constraint (3). Therefore, a second approximation of the objective function in P3 is needed. In the following, we apply the key lemma on matrix fractional functions [9]: for any positive semi-definite matrix A ∈ C^{m×m}, positive definite matrices B, B̃ ∈ C^{n×n}, and X, X̃ ∈ C^{m×n}, the matrix fractional function is jointly convex and is therefore lower bounded by its linearization at (X̃, B̃). Applying this lemma to the term h_B(Q) by setting A = Q̃_B^{-1}, X = T, X̃ = T̃, B = I + T^H T, and B̃ = I + T̃^H T̃, and after some manipulations, the lower bound of f(Q) can be further expressed as in (8), where J_B = T̃(I + T̃^H T̃)^{-1}.

In the following, we express (8) in a more tractable form. Let q = [e^{jθ_1}, e^{jθ_2}, ..., e^{jθ_n}]^T. We first apply the matrix identity lemma in [10]: for any matrices A, B and diagonal matrix V of proper sizes, the trace expression can be written as a quadratic form in the vector v of diagonal elements of V. Using this lemma, f(Q) is lower bounded as in (9), where g(q) = q^H Z q, Z = A_2 ⊙ (A_3 A_1)^T + (H_IE^H A_5) ⊙ L^T, and the entries of a are the diagonal entries of A_4. Given a fixed feasible Q̃, the bound (9) is a quadratic convex function with respect to q. However, since q must satisfy the non-convex unit modulus constraints |q_i| = 1, it is still difficult to apply the MM algorithm to (9) directly. Hence, a third approximation of f(Q) is needed, obtained by finding a surrogate function of g(q) as in [8], where q̃ is the feasible point whose entries are the diagonal entries of Q̃. With this surrogate, f(Q) can be further approximated, and by dropping the constant terms of the bound, P3 reduces to P4.
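The closed-form MM step for a unimodular quadratic objective can be illustrated in isolation. The sketch below maximizes a generic q^H Z q + 2 Re{q^H a} over |q_i| = 1, where Z and a are random stand-ins rather than the matrices constructed above: since Z - λ_min(Z) I is positive semi-definite, the quadratic is minorized by its linearization, and maximizing the resulting linear surrogate reduces to a per-element phase rotation, mirroring the monotone-ascent guarantee of MM.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16

# Random Hermitian Z and linear term a: illustrative stand-ins only.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = (X + X.conj().T) / 2.0
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def objective(q):
    # Generic unimodular quadratic objective.
    return (q.conj() @ Z @ q).real + 2.0 * (q.conj() @ a).real

lam_min = np.linalg.eigvalsh(Z)[0]
q = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))  # feasible start, |q_i| = 1

vals = [objective(q)]
for _ in range(50):
    # Minorizer: q^H Z q >= 2 Re{q^H (Z - lam_min I) q_t} + const,
    # because Z - lam_min * I is positive semi-definite.
    b = (Z - lam_min * np.eye(n)) @ q + a
    q = np.exp(1j * np.angle(b))  # closed-form maximizer of Re{q^H b} over |q_i| = 1
    vals.append(objective(q))
```

The objective value is non-decreasing across iterations, and every iterate remains feasible for the unit modulus constraint.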
Therefore, by initializing a feasible point q̃ and using MM to optimize P4, a KKT solution of P2 for fixed R can be obtained. Based on the above analysis, the overall AO algorithm for maximizing the secrecy rate of the IRS-assisted MIMO wiretap channel is summarized as Algorithm 1. Since R and Q are optimized alternately, the objective function C_s(R_k, Q_k) is monotonically increasing with the iteration number k. Moreover, since R and Q are both bounded by the independent constraints in P1, the algorithm is guaranteed to converge according to Cauchy's theorem [5].

Algorithm 1 (AO algorithm for solving P1)
Require: starting point R_0 and Q_0.
...
3. Optimize Q_k for fixed R_k via the MM algorithm.

IV. SIMULATION RESULTS

To validate the convergence and performance of the proposed AO algorithm, simulations were carried out. We consider a fading environment, in which all the channels H_AI, H_IB, and H_IE are formulated as the product of large-scale and small-scale fading. The entries of the small-scale fading matrices are randomly generated as complex zero-mean Gaussian random variables with unit covariance. For the large-scale fading of all links (Alice-IRS, IRS-Bob, and IRS-Eve), we follow [3] by setting the path loss to -30 dB at the reference distance of 1 m and the path loss exponent to 3. In the AO algorithm, we set the target accuracy of the barrier method to 10^-8 and that of the MM algorithm to 10^-4. Fig. 2 illustrates the convergence of the objective C_s(R_k, Q_k) as a function of the iteration number k of the proposed AO algorithm, under randomly generated channels with different settings of m, d, e, and n. For the settings considered, C_s(R_k, Q_k) requires 42, 107, and 166 iterations, respectively, to converge within the accuracy 10^-4, and the convergence is monotonically increasing in each case.
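The simulated channel model above (large-scale times small-scale fading, with -30 dB path loss at the 1 m reference and exponent 3) can be sketched as follows; the link distances are hypothetical, since they are not restated here:

```python
import numpy as np

rng = np.random.default_rng(3)

def path_loss(d_m, pl0_db=-30.0, alpha=3.0):
    """Large-scale power gain: -30 dB at the 1 m reference distance, exponent 3."""
    return 10.0 ** (pl0_db / 10.0) * d_m ** (-alpha)

def channel(rows, cols, d_m):
    """Product of large-scale and small-scale (i.i.d. CN(0,1)) fading."""
    small = (rng.standard_normal((rows, cols))
             + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)
    return np.sqrt(path_loss(d_m)) * small

# Hypothetical link distances (the letter does not restate them here).
H_AI = channel(8, 4, d_m=10.0)  # Alice-IRS
H_IB = channel(2, 8, d_m=2.0)   # IRS-Bob
```

At the 1 m reference the power gain is exactly 10^(-30/10) = 10^-3, and the gain decays with the cube of distance.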
In fact, for a fixed target accuracy, larger settings of m and n lead to larger dimensions of the variables R and Q, so that the AO algorithm requires more iterations to optimize each element of these variables. Beyond these results, our extensive simulations show that the AO algorithm converges monotonically. In Fig. 3, we compare the performance of our AO algorithm with two benchmark schemes: 1) optimizing R with zero phase shift (i.e., Q = I) at the IRS; 2) optimizing R with random phase shifts at the IRS. The results are averaged over 100 randomly generated channels. The figure shows that the proposed AO algorithm significantly outperforms both benchmark schemes. For both the zero and random phase shift methods, optimizing R at the transmitter alone provides very limited secrecy rate enhancement. For the random phase shift method, the randomly generated Q can degrade the quality of the effective channel H_B while improving H_E in some channel realizations, which is why it performs worst.

V. CONCLUSION

In this letter, the secrecy rate maximization problem of an IRS-assisted Gaussian MIMO wiretap channel was studied. To solve this difficult non-convex problem, an AO algorithm was proposed to jointly optimize the transmit covariance at Alice and the phase shift coefficients at the IRS. Simulation results validated the monotonic convergence of the proposed AO algorithm and showed that its performance is significantly better than that of the benchmark schemes.
Templated growth of PFO-DBT nanorod bundles by spin coating: effect of spin coating rate on the morphological, structural, and optical properties

In this study, the spin coating template-assisted method is used to synthesize poly[2,7-(9,9-dioctylfluorene)-alt-4,7-bis(thiophen-2-yl)benzo-2,1,3-thiadiazole] (PFO-DBT) nanorod bundles. The morphological, structural, and optical properties of the PFO-DBT nanorod bundles are tuned by varying the spin coating rate (100, 500, and 1,000 rpm) of a common spin coater. Denser morphological distributions of PFO-DBT nanorod bundles are favorably yielded at the low spin coating rate of 100 rpm, whereas high spin coating rates show the opposite. The auspicious morphologies of the highly dense PFO-DBT nanorod bundles are supported by augmented absorption and photoluminescence.

Background

In recent years, poly[2,7-(9,9-dioctylfluorene)-alt-4,7-bis(thiophen-2-yl)benzo-2,1,3-thiadiazole] (PFO-DBT) has attracted considerable attention due to its exceptional optical properties. Applications in electronic devices such as solar cells and light-emitting diodes have elevated PFO-DBT thin films to be one of the most promising materials [1-6], in accordance with their capability of absorbing and emitting light effectively. In solar cell applications, the light harvested at longer wavelengths by PFO-DBT thin films matches the solar radiation [3,4]. Although PFO-DBT films and nanostructures have the same absorption properties, PFO-DBT nanostructures can exhibit more surface area, which can enhance light absorption. Nanostructured materials have been proven to exhibit extremely large surface areas and substantial light absorption intensity [7-9]. Considerations of nanostructure formation have been prioritized due to the superior morphological and optical properties [8,10-13].
Introducing nanostructures would enhance the light absorption intensity, overcoming the low-absorption issue of PFO-DBT thin films. Therefore, the fabrication of PFO-DBT nanostructures such as nanotubes, nanorods, and other novel nanostructure formations is essential and pragmatic. One common approach to fabricating nanostructures is the template-assisted method, which has been widely used to produce unique nanostructured materials [8,10,14-16]. By using a template, nanostructures with various shapes and properties can be formed. The dimensions of the nanostructures can be controlled by varying either the thickness or the pore diameter of the porous template, while the formation of zero-, one-, two-, or three-dimensional nanostructures can be controlled by applying various infiltration techniques during the deposition of the polymer solution into the porous alumina template [10,12-16]. Among the infiltration techniques are wetting-, vacuum-, and spin-based techniques. A spin-based technique using a spin coater is potentially a cost-effective and simple fabrication tool. Spin coating the solution onto the porous template can enhance the infiltration. On a planar substrate, the thickness of macroporous polymer films can easily be tuned by varying the spin coating rate [13], the main influence being the different behavior of the material during spin coating. Intuitively, the behavior of a polymer solution is likely to be affected by the spin coating rate during deposition onto the porous substrate of an alumina template, owing to changes in surface energy [16]. Modification of the morphological, structural, and optical properties of PFO-DBT nanostructures synthesized at different spin coating rates has not been widely studied. It is therefore worthwhile to study the effect of the spin coating rate on the morphological, structural, and optical properties of PFO-DBT nanostructures.
This work is important since it provides an alternative route that utilizes a facile fabrication technique.

Methods

The commercially available copolymer PFO-DBT from Lum-Tec (Mentor, OH, USA) was used without further purification. A 5-mg/ml solution of PFO-DBT was prepared in chloroform. A commercially available porous alumina template (Whatman Anodisc inorganic membrane; Sigma-Aldrich, St. Louis, MO, USA) with a nominal pore diameter of 20 nm and a thickness of 60 μm was cleaned by sonication in water and acetone for 10 min prior to the deposition of the PFO-DBT solution. The PFO-DBT solution was dropped onto the porous alumina template prior to the spin coating process. The spin coating rate was varied between 100, 500, and 1,000 rpm at a constant spin time of 30 s, using a standard spin coater, model WS-650MZ-23NPP (Laurell Technologies Corp., North Wales, PA, USA). To dissolve the template, 3 M sodium hydroxide (NaOH) was used, leaving the PFO-DBT nanorods, which were then purified in deionized water prior to characterization. The PFO-DBT nanorods were characterized using a field emission scanning electron microscope (FESEM; Quanta FEG 450, Beijing, China), a transmission electron microscope (TEM; Tecnai G2, FEI, Tokyo, Japan), and an X-ray diffraction spectroscope (Siemens).

[Figure 1. FESEM images of PFO-DBT nanorod bundles at different spin coating rates: (a) 100 rpm at lower magnification, (b) 100 rpm at higher magnification, (c) 500 rpm at lower magnification, (d) 500 rpm at higher magnification, (e) 1,000 rpm at lower magnification, and (f) 1,000 rpm at higher magnification. The insets show enlarged images (scale bar, 1 μm).]

Morphological properties

A common practice for producing nanostructured materials via the template-assisted method is to drop-cast the solution onto the template.
However, drop casting alone, without the assistance of a spin coating technique, does not efficiently allow the solution to infiltrate the template. Infiltration of the PFO-DBT solution into the cavities of the alumina template can be promoted by varying the spin coating rate. FESEM images of the PFO-DBT nanorod bundles are shown in Figure 1a,b,c,d,e,f. Distinct morphological distributions of the PFO-DBT nanorod bundles are produced by the different spin coating rates (100, 500, and 1,000 rpm). It is expected that varying the spin coating rate between low (100 rpm), intermediate (500 rpm), and high (1,000 rpm) will result in dissimilar morphological distributions. At all spin coating rates, the PFO-DBT nanorods are seen to assemble into bundles, albeit with different densities of morphological distribution. At the low spin coating rate of 100 rpm, denser PFO-DBT nanorod bundles are synthesized. Looking at the top of the bundles, the tips of the nanorods tend to join with one another, which could be due to van der Waals interactions. Apart from that, the high aspect ratio of the PFO-DBT nanorods obtained at the low spin coating rate may also contribute. However, the main contribution to the distinct morphological distributions is simply the different behavior exhibited by PFO-DBT during spin coating. The smallest diameters recorded at 100, 500, and 1,000 rpm are 370, 200, and 100 nm, respectively. An analysis of nanorod length is depicted as bar graphs in Figure 2. For 100, 500, and 1,000 rpm, the average lengths are 3 to 5 μm, 1 to 3 μm, and 1.5 to 2.5 μm, respectively. Although the lengths are fairly uniform, the nanorod length is still affected by the spin coating rate. Figure 3a,b,c shows proposed side-view diagrams of the PFO-DBT nanorod bundles synthesized at different spin coating rates. As reported elsewhere, the resulting polymer films are highly dependent on the characteristics of spin coating [17].
Thus, it is sensible to predict that the structure of the resulting films can be straightforwardly controlled by altering the spin coating rate. The mechanism controlling the PFO-DBT nanorod bundles is governed by the phase transitions of the spin-coated polymer solution; the infiltration properties of a static and a spinning polymer solution differ enormously. The most remarkable attribute of the spin coating rate is the resulting enhancement of infiltration. The PFO-DBT nanorods undergo three infiltration regimes: from low infiltration (1,000 rpm) to high infiltration (100 rpm), with medium infiltration achieved at 500 rpm. At a low spin rate, the low centrifugal force allows the polymer enough time to infiltrate all of the surrounding porous gaps from its starting position. Depending on the application, the morphological distribution of the PFO-DBT nanorods can thus be tuned simply via the spin coating of the template-assisted method. The effect of the spin coating rate is further corroborated by the ability of the PFO-DBT solution to occupy the cavities of the template. At the intermediate spin coating rate (500 rpm), gaps between the nanorod bundles started to form. The formation of these gaps may be due to the inability of the PFO-DBT solution to occupy the cavities; in other words, each gap corresponds to an unoccupied cavity that is later dissolved by NaOH. A further increase of the centrifugal force at higher spin coating rates creates pronounced gaps between the nanorod bundles, resembling scattered islands. Rapid evaporation of the PFO-DBT solution at 1,000 rpm caused the formation of these scattered islands. Top-view diagrams of the PFO-DBT nanorod bundles are illustrated in Figure 4; these correspond to the FESEM images taken from the top view (see Figure 1).
Highly dense PFO-DBT nanorods can be obtained at the low spin coating rate of 100 rpm. The morphologies of the PFO-DBT nanorod bundles are further supported by the TEM images (Figure 5a,b,c,d,e,f). As expected, distinct morphological distributions of the ensembles are recorded at the different spin coating rates. The most highly dense PFO-DBT nanorod bundles are obtained at 100 rpm; at this rate, greater numbers of nanorods are produced, which could cause the bundles to agglomerate. The agglomeration of bundles in the TEM images taken at the different spin coating rates agrees with the FESEM images; however, the rigorous TEM preparation caused some broken and defective nanorods. An individual TEM image confirms that nanorods are the type of nanostructure obtained in this synthesis, as can be seen from their solid structure, without the wall thickness that characterizes tubes.

[Figure 5. TEM images of PFO-DBT nanorod bundles at different spin coating rates: (a) 100 rpm at lower magnification, (b) 100 rpm at higher magnification, (c) 500 rpm at lower magnification, (d) 500 rpm at higher magnification, (e) 1,000 rpm at lower magnification, and (f) 1,000 rpm at higher magnification.]

Structural properties

The structural properties of the PFO-DBT nanorods were investigated by XRD. Figure 6 shows the XRD patterns of the template and of PFO-DBT nanorods grown inside templates at different spin coating rates. Diffraction peaks of the porous alumina template appear at 13.3° and 16.8°. All the PFO-DBT nanorods grown inside the template show an additional diffraction peak at 25.2°, nearly identical to that reported by Wang et al. [1]. The sharper diffraction peaks observed for the PFO-DBT nanorods indicate a semi-crystalline polymer.
The PFO-DBT nanorods are confined inside the cavities of the template, which alters their molecular structure toward more aligned and elongated chain segments [11,12]. The crystallite size of the PFO-DBT nanorods can be estimated using the Scherrer equation, shown as Equation 1:

L = Kλ / (β cos θ),   (1)

where L is the mean crystallite size, K is the Scherrer constant with value 0.94, λ = 1.542 Å is the X-ray source wavelength, β is the full width at half maximum (FWHM) of the diffraction peak, and θ is the Bragg angle. The PFO-DBT crystallite size is around 20 to 30 nm. The PFO-DBT nanorods deposited inside the porous template exhibit a semi-crystalline polymer with enhanced polymer chains due to the restricted intrusion into the cavities.

Optical properties

The absorption spectra of the PFO-DBT nanorod bundles at different spin coating rates are shown in Figure 7a. These spectra display two absorption peaks, assigned mainly to the PFO segments (short wavelength) and the DBT units (long wavelength). The absorption bands of the PFO-DBT thin film have been reported to be located at 388 nm (short wavelength) and 555 nm (long wavelength) [2,4]. Enhancement of PFO-DBT's optical properties is realized at the low spin coating rate of 100 rpm: with the denser distribution of PFO-DBT nanorod bundles, the absorption bands at short and long wavelengths shift to 408 and 577 nm, respectively. The short-wavelength absorption peak of the PFO-DBT nanorod bundles is thus redshifted by approximately 20 nm compared with that of the PFO-DBT thin film reported by Wang et al. [4]. The short-wavelength peak corresponds to the π-π* transition of the fluorene units [4], indicating that a strong π-π* transition occurs in the denser PFO-DBT nanorod bundles. At the long wavelength, the PFO-DBT nanorod bundles obtained at the low spin coating rate of 100 rpm show an absorption band at 577 nm, assigned to the DBT units [3].
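As a worked example of Equation 1, the sketch below evaluates the Scherrer estimate at the 25.2° PFO-DBT diffraction peak; the FWHM value β is a hypothetical input chosen for illustration (the text reports only the resulting 20 to 30 nm range):

```python
import numpy as np

K = 0.94          # Scherrer constant (given in the text)
lam_nm = 0.1542   # X-ray wavelength, 1.542 angstrom (given in the text)

def scherrer_size(beta_rad, two_theta_deg):
    """Mean crystallite size L = K * lambda / (beta * cos(theta)),
    with beta the FWHM in radians and theta the Bragg angle."""
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * lam_nm / (beta_rad * np.cos(theta))

# Hypothetical FWHM of ~0.34 deg (0.006 rad) at the 25.2 deg PFO-DBT peak.
L = scherrer_size(beta_rad=0.006, two_theta_deg=25.2)
```

With this assumed FWHM, the estimate falls inside the 20 to 30 nm range reported for the PFO-DBT nanorods.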
The 577-nm peak yields the highest intensity, which indicates that the absorption of the dioctylfluorene moieties is assisted by the thiophene [18]. The redshift of the absorption peaks correlates with the morphological distribution of the PFO-DBT nanorod bundles. It can be postulated that highly dense nanorod bundles with a close-packed arrangement give a better conjugation length and chain segment. Such an improvement in conjugation length can be exploited to enhance the photovoltaic properties of polymeric solar cells. The morphological distribution of the PFO-DBT nanorod bundles thus contributes significantly to their optical properties, which can easily be tuned by varying the spin coating rate, since this yields different morphological distributions. This postulation is further supported by the UV-vis spectra of the PFO-DBT nanorod bundles prepared at 500 and 1,000 rpm: at these spin coating rates, the long-wavelength absorption band is blueshifted by about 12 and 32 nm, respectively. The photoluminescence (PL) spectra of the PFO-DBT nanorod bundles synthesized at different spin coating rates are shown in Figure 7b. The emission of the fluorene segment, which normally lies between 400 and 550 nm [2,5,6], is absent from all of the spectra, indicating that the fluorene emission has been completely quenched and that efficient energy transfer from the PFO segments to the DBT units has occurred. The redshift of the PL emission of the DBT units (shown by the arrow) for the denser PFO-DBT nanorod bundles correlates well with the redshift of their UV-vis absorption. The PFO emission is completely quenched and dominated by the DBT emission. This phenomenon could be due to the incorporation of the DBT units into the PFO segments, which leads to the better conjugation length and chain alignment produced by the PFO-DBT nanorod bundles.
Conclusions

In the present study, the effect of different spin coating rates on the morphological, structural, and optical properties of PFO-DBT nanorod bundles is reported. The polymer solution has been demonstrated to have different characteristics and abilities to infiltrate the cavities at different spin coating rates. Highly dense PFO-DBT nanorod bundles are obtained at a low spin coating rate, with enhancement of the structural and optical properties.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MSF carried out the experiment, participated in the sequence alignment, and drafted the manuscript. AS participated in the design of the study,
Grayscale median (GSM) post-processing, posterizing, and color mapping for carotid ultrasound

Abstract

Factors related to atherosclerotic plaques may indicate instability, such as ulcerations, intraplaque hemorrhages, a lipid core, a thin or irregular fibrous cap, and inflammation. The grayscale median (GSM) value is one of the most widespread methods of studying atherosclerotic plaques, and it is therefore important to comprehensively standardize image post-processing. Post-processing was performed using Photoshop 23.1.1.202. Images were standardized by adjusting the grayscale histogram curves, setting the darkest point of the vascular lumen (blood) to zero and the distal adventitia to 190. Posterization and color mapping were then performed. A methodology that presents the current state of the art in an accessible and illustrative way should contribute to the dissemination of GSM analysis. This article describes and illustrates the process step by step.

INTRODUCTION

Randomized studies conducted between 1990 and 2000 identified the risk of stroke in symptomatic and asymptomatic patients with extracranial carotid disease based on the percentage of stenosis observed in the internal carotid artery; these studies encompassed a considerable number of patients and established guidelines, employed worldwide, that indicate carotid endarterectomy or angioplasty according to the degree of stenosis. 1-5 The main pathophysiologic mechanism underlying cerebral ischemic events is embolization of atherosclerotic debris or thrombi originating from unstable plaques, primarily located at the carotid bifurcations. 6 Cerebral hypoperfusion, by contrast, is not the principal pathophysiologic mechanism of strokes of extracranial carotid origin; in these cases, the principal causative factor is the displacement of atherosclerotic debris and thrombi originating in unstable plaques.
6 Use of percentage stenosis as the sole indicator for surgery has been shown to be insufficient, and there is debate about adding new markers related to plaque characteristics, such as ulcerations, lipid core volume, presence of intraplaque hemorrhage, fibrous cap thickness, and inflammation. This reinforces the concept that an unstable plaque, rather than stenosis alone, should be the determinant factor when indicating surgery. Many different factors can be associated with instability of carotid plaques, giving rise to a wide field of research attempting to define predictors of ischemic cerebral events that could replace use of percentage stenosis alone, such as ulcerations, intraplaque hemorrhages, lipid core, thin or irregular fibrous cap, and inflammation. 7,8 One of the most widely studied factors related to the atherosclerotic plaque is its grayscale median (GSM) value. A study published in 2010 by the Asymptomatic Carotid Stenosis and Risk of Stroke (ACSRS) study group reported that GSM values lower than 30 were associated with an increased risk of atherosclerotic embolic events, with a significant increase in relative risk at values below 15. 7 While study of the GSM has been widely reported in the literature, it is not routinely performed by most sonographers, and its use is restricted to a few teaching and research centers, possibly because the majority of ultrasound machines do not offer software for automatic GSM assessment. There is also considerable variation in the information available on methods for measuring GSM, ranging from use of built-in software 9 (programs installed in the ultrasound equipment) to post-processing (analysis of echographic images on a computer).
To contribute to dissemination of the technique and to improve reproducibility, it is important to standardize post-processing analysis in a manner that is clear and comprehensible to sonographers with technical training in image post-processing.

Image acquisition

Ultrasound images were obtained in B-mode by a single sonographer with qualifications from the Brazilian College of Radiology and Diagnostic Imaging (CBR) and the Brazilian Society of Angiology and Vascular Surgery (SBACV), using a single Logiq S8 machine (General Electric, Boston, Massachusetts, United States) with a multifrequency linear transducer (8.5-11 MHz) set to 10 MHz. The focal distance was defined at the posterior tunica adventitia, the time gain compensation (TGC) levers were set to the center, and all gain parameters were standardized at the machine's original presets, including a 69-dB dynamic range. All images were acquired longitudinally. The images comprised 1,552 x 970 pixels, containing a rectangular region of interest of 934 x 840 pixels, without the titles, configuration buttons, time, date, and grayscale bar. We chose to perform post-processing on the entire image (1,552 x 970 pixels) to avoid undesired exclusions and to maintain the grayscale bar.

Post-processing: standardization of the GSM

Post-processing was performed using the paid subscription version of Photoshop 23.1.1 (Adobe, Mountain View, California, United States), in common with the majority of studies that perform GSM analysis by post-processing. 10-13 The image files were converted to 8-bit, 300-dpi grayscale format (Figure 1) before adjustment of the curves and GSM analysis. Images were standardized by adjusting the curves of the grayscale histogram. The most widely adopted standardization method involves setting the darkest point of the vascular lumen (blood) to zero and the distal adventitia to 190 (Figure 2).
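The curve adjustment described above amounts to a linear rescaling of pixel tones so that the blood reference maps to 0 and the adventitia reference maps to 190. A minimal sketch of that mapping on a synthetic image (the sample tone values are assumptions):

```python
import numpy as np

def standardize(img, blood, adventitia):
    """Linearly rescale tones so the blood reference maps to 0 and the distal
    adventitia maps to 190, then clip to the 8-bit range; this mimics the
    histogram-curve adjustment described in the text."""
    out = (img.astype(float) - blood) * 190.0 / (adventitia - blood)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Synthetic example: blood sampled at tone 12, adventitia at tone 201 (assumed values).
img = np.array([[12, 100, 201, 255]], dtype=np.uint8)
std = standardize(img, blood=12, adventitia=201)
```

After standardization, the blood pixel lands exactly on 0 and the adventitia pixel on 190, so GSM values become comparable across acquisitions.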
12-16 After this step, the contrast pattern seen previously (Figure 3) changes to that of a correctly standardized image (Figure 4), maintaining similar histogram curve characteristics. When analyzing the GSM, it is essential to remember that each pixel in an 8-bit grayscale image has a value from zero to 255, i.e., 256 different tone options (two raised to the power of eight), where zero equates to black and 255 to white. After standardization, the region of interest is selected and analyzed using the "histogram" tool, which enables the user to obtain the mean, median, and standard deviation of the pixel values in a given area (Figure 5). The region of interest can be selected automatically, semi-automatically, or manually. The GSM intervals observed represent different tissues, as shown in Table 1.

Post-processing: posterizing and color remapping

Although they do not constitute a source of reproducible data (on the contrary, they are a result of the data), posterizing and color remapping enhance grayscale images by recoloring different ranges of tones and are, ultimately, useful additional methods for presenting data in an accessible and educational manner. 7,13,15,16 Posterizing converts an image with a continuous gradation of tones (as mentioned above, 256 different tones for 8-bit grayscale images) into an image with fewer tones, producing more abrupt transitions from one tone to the next that become perceptible to the human eye (Figures 6 and 7). Most image editing programs offer a posterizing function. The number of tones can be varied depending on how much information the researcher wishes to suppress to create a simplified version of the original grayscale image. We performed posterizing in the same software, following the path: image > adjust > posterize.
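The two operations described in this section can be sketched numerically: the GSM is simply the median of the pixel values in the region of interest, and posterizing quantizes the 256 tones down to a chosen number of levels (a simplified stand-in for Photoshop's image > adjust > posterize, not its exact algorithm):

```python
import numpy as np

def gsm(roi):
    """Grayscale median of an 8-bit region of interest."""
    return float(np.median(roi))

def posterize(img, levels):
    """Quantize the 256 tones down to `levels` evenly spaced tones
    (a simplified sketch of a posterize operation)."""
    step = 255.0 / (levels - 1)
    return (np.round(img.astype(float) / step) * step).astype(np.uint8)

roi = np.array([10, 25, 30, 40, 200], dtype=np.uint8)   # assumed sample tones
poster = posterize(np.arange(256, dtype=np.uint8), levels=4)
```

With four levels, the full 0-255 ramp collapses onto just four tones, making the transitions between tone bands visible to the eye.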
Color remapping is the conversion of specific ranges of tones to a color gradient, enabling colored images to be created from grayscale images (Figures 8 and 9). Although color mapping is widely used in all fields of science and radiology, there is no standard color map for analysis of GSM in ultrasound images, since it is not an essential step in the examination. The color map is therefore always subject to the researcher's personal choice or predetermined by the software of the machine being used. In the present case, we used a classic spectrum of visible colors, from violet to red, with black at the lower limit of the gradient (GSM = 0) and white at the upper limit (GSM > 190). Using this spectrum, blue and purple colors represent GSM < 30 and, therefore, greater risk. After color remapping, it is possible to restrict the color gradient to the region of interest only, for a visually instructive presentation (Figures 10 and 11).

DISCUSSION

Although there is robust literature supporting the applicability of GSM to identification of patients at high risk of stroke, demonstrating that it is an important instrument for therapeutic decision-making, it is still little used. 14 It is possible that this is because of the difficulties involved in conducting the analysis, which requires specific software that is not widely available in ultrasound equipment. As a contribution to dissemination of the technique, assessment using post-processing is a viable alternative, since it only requires image processing software and a computer, instruments that are widely available in vascular laboratories and radiology departments. When compared with other techniques for evaluation of atherosclerotic plaques, GSM offers an important advantage, since it eliminates the subjective component of evaluation. The examination's objectivity is shown by its high interobserver correlation.
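The band-based color remapping described above can be imitated programmatically. In the sketch below (NumPy assumed), the band edges and RGB values are our own illustrative choices guided by the text (black at 0, violet/blue below 30 signalling higher risk, white above 190); as noted, there is no standard color map:

```python
import numpy as np

# Illustrative GSM-to-color bands (our assumptions, not a standard map):
# black at 0, violet/blue for low GSM (< 30, higher risk), warmer colors
# toward red for higher GSM, and white above 190.
BANDS = [
    (0,   0,   (0, 0, 0)),        # GSM == 0 -> black
    (1,   29,  (90, 0, 160)),     # violet/purple: highest-risk range
    (30,  59,  (0, 0, 255)),      # blue
    (60,  99,  (0, 200, 0)),      # green
    (100, 149, (255, 220, 0)),    # yellow
    (150, 190, (255, 0, 0)),      # red
    (191, 255, (255, 255, 255)),  # GSM > 190 -> white
]

def remap_colors(img):
    """Convert a standardized 8-bit grayscale image to RGB by assigning
    each GSM band a fixed color."""
    img = np.asarray(img)
    rgb = np.zeros(img.shape + (3,), dtype=np.uint8)
    for lo, hi, color in BANDS:
        rgb[(img >= lo) & (img <= hi)] = color
    return rgb

pixels = np.array([[0, 20, 195]], dtype=np.uint8)
print(remap_colors(pixels)[0].tolist())  # [[0, 0, 0], [90, 0, 160], [255, 255, 255]]
```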
7,10,11 Notwithstanding, GSM assessment is still not completely free from subjectivity if a standardized method is not employed, especially considering the extent to which different ultrasound machines and a slight change in B-mode gain can cause large changes in GSM. 15 (To perform color remapping, we used a layer mask over the standardized and posterized image, once more using the same software as for the previous steps, following the path: layer > new adjustment layer > gradient map.) To achieve the greatest possible degree of standardization, it is essential to use post-processing by curve editing, rendering the new image after definition of known values, such as blood and the adventitia. Automatic image standardization by software, as is suggested by some authors, 10,17 can lead to significant distortions, compromise interobserver reproducibility and comparisons, and can become even more discrepant if observations are made automatically using different ultrasound machines. 15

CONCLUSIONS

We have access to a vast literature on the association between percentage stenosis observed in the extracranial internal carotid and its correlation with risk of stroke. Currently, boosted by developments in minimally invasive or noninvasive diagnostic techniques, most efforts are focused on assessment of the atherosclerotic plaque individually, attempting to identify factors that could contribute to therapeutic decision-making.

Abstract

Factors related to atherosclerotic plaques may indicate instability, such as ulcerations, intraplaque hemorrhages, lipid core, thin or irregular fibrous cap, and inflammation. The grayscale median (GSM) value is one of the most widespread methods of studying atherosclerotic plaques and it is therefore important to comprehensively standardize image post-processing. Post-processing was performed using Photoshop 23.1.1.202.
Images were standardized by adjusting the grayscale histogram curves, setting the darkest point of the vascular lumen (blood) to zero and the distal adventitia to 190. Posterization and color mapping were performed. A methodology that presents the current state of the art in an accessible and illustrative way should contribute to the dissemination of GSM analysis. This article describes and illustrates the process step by step.
Influence of soil physicochemical parameters on species composition and structure in the Togo Plateau Forest Reserve in Ghana

Soil-species correlation studies help in understanding the ecology of plateau ecosystems. However, this information is scarce, posing a challenge to the effective management of these ecosystems in Ghana. Hence, this study examined the influence of soil physicochemical parameters on species composition and structure in the six fringe communities that constitute the focus of the study: Bowiri (BO), Nkonya (NK), Akpafu (AK), Santrokofi (SA), Hohoe (HH) and Alavanyo (AL) in the Togo Plateau Forest Reserve in Ghana. Soil and vegetation parameters were recorded in a total of 180 plots (each measuring 25 m × 25 m) demarcated across the communities and analyzed. Canonical Correspondence Analysis (CCA) results showed that pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P (ppmP), Sand and Silt were the drivers of tree, sapling and seedling composition and structure (including density, richness, Shannon diversity, evenness and basal area (BA)) on the plateau. These vegetation attributes were highest, and most strongly correlated with soil parameters, for BO, NK and AK, which occupy lowland areas, and lowest for SA, HH and AL, which occupy highland areas of the plateau. The soil is weakly acidic to neutral, with a pH ranging between 4.17 and 7.06. The CV values revealed Base sat with the lowest values (c.v. < 15%), moderate values (c.v. = 15%-34%) for TK, and the highest values (c.v. > 35%) for TCa, TMg, Na, T.E.B, ex. Acidity, ECEC and AVI-P1. This study provides a better understanding of the current status of this plateau in Ghana.
Introduction

Soil physico-chemical parameters and vegetation are reportedly linked [7,14,5,8]: soils play a major role in the heterogeneity of habitats, thus contributing to the physiognomic differentiation of vegetation and ultimately changing the composition and structure of species across landscape sites [6,14], and they also provide support (moisture, nutrients, and anchorage) for vegetation to thrive. Vegetation, in turn, provides protective cover for soil, suppresses soil erosion, and helps to maintain soil nutrients through litter accumulation and subsequent decay (nutrient cycling) [7,14,11]. Soil-species correlation studies must be the first step in understanding the diversity and ecology of plateau ecosystems [1,5], because knowledge of the diversity and ecological needs of the species provides clues for developing them, particularly species growing in special locations across plateau fringe communities [8,14]. It also aids in the development of soil-vegetation ecosystem models [10,13]. Several studies have found strong correlations between soil physico-chemical parameters and vegetation composition and structure in plateau ecosystems, reporting that soil nutrient content and soil depth affect basal area and consequently influence the structure of plant communities [14], that soil fertility positively correlates with species richness [1], and that Organic Matter (OM) and Nitrogen (N) availability often constrain productivity [7,12]. Soil N and P are major nutrient elements for plant growth, influencing photosynthesis and other processes related to plant production [10,13], and the C/N ratio and pH positively correlate with diversity [6,8,11]. Other studies report a significantly positive correlation of vegetation with OC, TCa, TP, TK, TMg, Na, T.E.B, ex.
Acidity, ECEC and AVI-P1 in tropical forests [3,2]. Soil physico-chemical parameters have a role to play in vegetation composition and structure in plateau ecosystems, and they are important to vegetation managers and ecologists for meaningful management strategies [3,2,7,12]. Unfortunately, with regard to establishing soil-vegetation relationships, available information is scarce, particularly in Africa [2]. The Togo Plateau Forest Reserve is the largest forest reserve in the Volta Region of Ghana and is recognized as a biodiversity-significant area [1,8,14]. The reserve is characterized by horizontal layers of sedimentary rocks, a wide range in elevation, hundreds of escarpments, flat-topped plateaus and rocky mountain provinces. Its high topographic complexity has created different soil compositions and, as a result, different life zones have developed that support different vegetation communities. The reserve is important for the fringe communities, who rely on it heavily for their traditional healthcare needs; it also provides a cool climate, unique topography and a potential tourist site, and supports a high level of endemism [1,14]. Despite these enormous benefits, the composition and structure of the plateau have not been studied. Its ecological needs and major threats are thus not understood, and its conservation requirements are not appreciated [9]. This dearth of scientific information presents a major limitation for the effective management and conservation of the reserve. The current state of degradation of this ecosystem will certainly affect its prospects for future conservation action. This study therefore assessed the influence of soil physicochemical properties on plant species composition and structure in the six fringe communities of the Togo Plateau Forest Reserve so as to contribute to management measures. The study addressed two research questions: (a) What is the composition of the soil of the Togo Plateau Forest Reserve?
(b) Does the plant species composition and structure in the TPFR recognizably vary among species along the plateau and across the study communities? If so, are soil physicochemical parameters responsible? To answer these questions, we postulated the hypothesis that soil physicochemical parameters are closely linked to species composition and structure in the TPFR in Ghana.

Study Area

The Togo Plateau Forest Reserve was established by the British Colonial Administration in 1929, in the then Trans-Volta-Togoland, and gazetted in 1931 as a forest reserve in Ghana. The reserve occupies an area of 14,763 hectares, making it the largest in the Volta Region. It lies between longitudes 0°15'E and 0°45'E and latitudes 6°45'N and 7°15'N, with elevation between 250 and 2680 m a.s.l. (Figure 1). The reserve is surrounded by several communities which constitute the focus of the study, including Hohoe (HH) and Alavanyo (AL), which are located within the Hohoe Municipality of the Volta Region, and Santrokofi (SA) and Akpafu (AK) in the proposed SALL District, as well as Bowiri (BO) and Nkonya (NK) in the Biakoye District of the Oti Region, all in Ghana. The Hohoe Municipality has a total land area of 1,172 km2, representing 5.6% of the land area of the Volta Region, and has Hohoe as its capital. The municipality lies in the wet semi-equatorial climatic zone, with annual rainfall of 1016-1210 mm and a 4-5-month dry season between November and April. Temperatures are high throughout the year and range from 26 ºC to about 32 ºC. The population of the municipality in 2010 was 172,950 (Ghana Statistical Service 2010). The Biakoye District, on the other hand, has a total land area of 738.20 km2, representing about 4.1% of the total land area of the region. The district capital is Nkonya Ahenkro. The district experiences the wet equatorial rainfall regime, with peaks in July and September. The mean annual rainfall is about 1500 mm.
There is a rather short dry season, which is characterized by the cool, dry North-East trade winds from early December to mid-March. Temperatures vary between 22 ºC and 34 ºC. The district is estimated to have 63,645 people [4]. Major economic activities of the inhabitants include fishing, lumbering, carpentry, blacksmithing, distilling, palm oil extraction and gari processing.

Soil sampling design

To investigate the influence of soil physicochemical parameters on species composition and structure in the six fringe communities (Hohoe, Alavanyo, Santrokofi, Akpafu, Bowiri, and Nkonya) in the Togo Plateau Forest Reserve in Ghana, composite soil samples (mixing three sub-samples) were collected from mini-pits dug to a depth of 0-60 cm with a soil auger, and vegetation parameters were determined from the thirty plots (each of dimension 25 m × 25 m) demarcated in each of the six communities (n = 180). The collected samples were analyzed at the Soil Research Institute of Ghana, Analytical Services Division, Kumasi, for 16 edaphic variables. Soil samples were air-dried and sifted through a 2-mm mesh, and physical and chemical characteristics were determined. The analyses included granulometry (sand, clay and silt contents); active acidity (pH) in water; exchangeable acidity (Al); contents of TCa, TMg, TK, Na and available P; Base sat; total cation exchange capacity (CEC), including micronutrients (Zn, Fe, Mn, Cu) plus ex. acidity; electrical conductivity (EC); organic matter (OM) and organic carbon (OC). The mean value was then calculated for each plot. The soils were classified according to the World Reference Base for Soil Resources (WRB) (ISSS Working Group RB 2015).

Data Analysis

The statistical parameters, including mean, minimum, maximum, standard deviation and coefficient of variation (CV), of the 16 soil physicochemical parameters were obtained using ANOVA [13].
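The descriptive statistics used in this study (mean, minimum, maximum, standard deviation and CV) can be computed with the Python standard library alone. The sketch below also applies the CV variability bands cited later in the text (low < 15%, moderate 15-34%, high > 35%); the function names and the example pH readings are our own illustrations, and we resolve the unassigned 34-35% gap in the cited bands by treating everything below 35% as moderate:

```python
import statistics

def describe(values):
    """Mean, min, max, sample standard deviation and coefficient of
    variation (CV, %) for one soil parameter measured across plots."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD (n - 1 denominator)
    return {"mean": mean, "min": min(values), "max": max(values),
            "sd": sd, "cv": 100.0 * sd / mean}

def classify_cv(cv):
    """Variability class per the bands cited in the text."""
    if cv < 15:
        return "low"
    if cv < 35:
        return "moderate"
    return "high"

# Hypothetical pH readings from five plots (illustrative values only)
ph = [4.17, 5.2, 6.1, 6.8, 7.06]
stats = describe(ph)
print(round(stats["cv"], 1), classify_cv(stats["cv"]))  # 20.3 moderate
```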
To compare means of vegetation parameters (density, richness, Shannon diversity, evenness and BA) along the plateau and across the study communities, SPSS was used (p < 0.05, p < 0.01) [5]. The influence of soil physico-chemical properties on species composition and structure was determined by Canonical Correspondence Analysis using R software, version 4.0.3 [5].

Variation among the soil physicochemical properties in the TPFR

The results obtained for soil properties (pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P1, % Sand, % Clay and % Silt), statistical parameters and coefficients of variation (CV) are presented in Table 1. The soil of the study area tended to be weakly acidic to neutral, with a pH ranging between 4.17 and 7.06 (Table 1). The pattern of variability within the mechanical classes was 31.9 for clay, 26.5 for silt and 9.09 for sand. In general, the descriptive statistics showed high variability in soil properties in the study area (Table 1).

Soil and vegetation composition along and across the study communities in the TPFR

The study results showed that there was a significant difference in soil physicochemical parameters across the study communities in the TPFR (*p < 0.10, **p < 0.05, ***p < 0.01, Table 2). Table 3 presents similarities and differences in soil physicochemical parameters across the study communities (significant differences are shown by different letters; communities sharing the same letters are not statistically significantly different from each other at *p < 0.05; **p < 0.001). The degree of correlation among the sixteen soil properties is shown in Table 8. The soil physicochemical parameters were significantly positively or negatively correlated with each other, indicating similar and opposite spatial distribution patterns, respectively. Total Nitrogen (TN) and TP were positively correlated with soil pH; TK, however, correlated negatively with pH.
The concentrations of OC, TN and TP showed no correlation with each other; the absence of correlation coefficients between OC and TN, OC and TP, and TN and TP means that the C:N, C:P and N:P ratios were not constrained (Table 8). Across the communities fringing the landscape, soil properties including pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P (ppmP), Sand and Silt were seen to be the drivers of tree composition and structure based on the first two axes of the CCA (p < 0.05, Table 9). Many of the soil variables that were correlated with these two CCA axes were also strongly and significantly correlated with the first 20 most important trees across the study communities (Appendix 2, 3) and with vegetation parameters including density (P = 0.000, f = 5.21), richness (P = 0.000, f = 7.71), Shannon diversity (P = 0.000, f = 5.96), evenness (P = 0.000, f = 11.00) and BA (P = 0.000, f = 6.71) (Table 10). Only AL exhibited a different pattern, with only sand and silt having a strong influence on tree distribution (Appendix 2, 3). In another vein, all the vegetation attributes were highest for BO, NK and AK and lowest for the SA, HH and AL communities of the plateau (Table 10). Detailed similarities and differences in vegetation attributes across the study communities are presented in Table 11 (differences are shown by different letters, and the same letters show similarity between the communities at *p < 0.05; **p < 0.001). With respect to sapling composition and structure, soil characteristics including pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, Clay and Silt were found to have a significant influence based on the first two axes of the CCA across the study communities (p < 0.05, Table 10).
Many of the soil variables that were correlated with these two CCA axes were also strongly and significantly correlated with the first 20 most important saplings across the study communities (Appendix 4, 5) and with vegetation parameters including richness (P = 0.010, f = 6.92) and Shannon diversity (P = 0.004, f = 8.56) (Table 10). Except for HH and BO, which had insignificant soil correlations with diversity and evenness, all other communities had significant soil-vegetation correlations (p < 0.05, Appendix 4, 5, Table 10).

Effect of soil physicochemical parameters on vegetation composition and structure

In general, 16 soil variables (pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P (ppmP), Sand, Clay and Silt) had a significant influence on vegetation (tree, sapling and seedling) composition and structure across the six study communities (Table 8). These results suggest that soil plays an important role in determining the composition and structure of vegetation in plateau forest habitats. In relation to the study communities, 15 soil characteristics (pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P (ppmP), Sand and Silt) significantly influenced tree distribution on Axis 1 in the BO, NK, AK, SA and HH forest formation habitats (Table 9). However, this degree of influence varied across these communities, partly due to the different gradients created by these 15 soil characteristics. The distinct interaction of the 15 soil variables with each other on Axis 1 and Axis 2 (Table 9) accounted for all the vegetation parameter (density, richness, Shannon diversity, evenness and BA) values for trees decreasing in the order BO, NK, AK, SA, HH and AL (Table 10). The first 20 most important trees and their correlations with the soil variables across the study communities are presented (Figures 2, 3). Among the saplings, more than 13 soil characteristics (pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex.
Acidity, ECEC, Base sat and Silt) significantly influenced vegetation composition and structure on Axis 1 in the BO, NK, HH and AL forest formation habitats (Table 10). Akpafu (AK), however, was distinct and did not show any interaction with any soil characteristic on any axis in its forest formation habitats (Table 10). Similarly, this trend of influence accounted for the vegetation parameter (richness and Shannon diversity) values for saplings decreasing in the order BO, NK, AK, SA, HH and AL (Table 10). No significant difference was, however, seen in density or evenness across the communities (Table 10). The first 20 most important saplings and their correlations with the soil variables across the study communities are presented (Figures 4 and 2b). Among the seedlings, more than 13 soil characteristics (pH, OC, TN, OM, TCa, TMg, TK, Na, ex. Acidity, ECEC, Base sat, AVI-P (ppmP) and Sand) significantly influenced vegetation composition and structure on Axis 1 in the BO, AK, HH and AL forest formation habitats (Table 11). Nkonya (NK), however, was distinct and did not show any interaction with any soil characteristic on any axis in its forest formation habitats (Table 11). This trend of influence accounted for the density and Shannon diversity parameters showing a significant difference across the communities (Table 11). The first 20 most important seedlings and their correlations with the soil variables across the study communities are presented (Figures 6, 7).

Discussion

The study sought to analyse the influence of soil physicochemical parameters on vegetation (tree, sapling and seedling) composition and structure along the TPFR and across the fringe communities. The composition and structure of the various groups (trees, saplings and seedlings) were influenced by 15 soil physicochemical parameters along the plateau and across the study communities, based on the first two axes of the CCA.
Variation among the soil physicochemical properties in the Togo Plateau Forest Reserve

The coefficients of variation (CV) were used to estimate the variability in soil properties. Several studies have documented CV values for soil physico-chemical properties as low (c.v. < 15%), moderate (c.v. = 15%-34%) and high (c.v. > 35%) [6,8,10,13]. This study in the TPFR in Ghana indicated that the CV values for most of the soil properties, including OC, TN, OM, TCa, TMg, Na, T.E.B, ex. Acidity, ECEC and AVI-P1, were in the range of 37.5% to 61.7%, and the soils tended to be weakly acidic to neutral, with a pH ranging between 4.17 and 7.06 (Table 1), which generally is in the range of results obtained from other similar studies [6,10,13]. Therefore, soils in the study area were highly variable. This is evidenced by the plateau being floristically rich. An accounting factor for this trend could be that the vegetation was supplying a high amount of litter to the soil [10]. In addition, the most important species in the study area (see Table 9) have a strong nitrogen fixation function, and thus the available K and total N contents increased [6,10,13]. Studies have shown that high soil organic matter, P-retention capacity, ex. Acidity, ECEC and AVI-P1 present surface accumulation and have a significant influence on vegetation growth and development, and different vegetation types may also increase the soil organic matter to different degrees because a high amount of litter is introduced [6,10,13]. Decomposition by microorganisms resulted in a greater amount of humus and increases in soil organic matter. The pattern of variation observed along the landscape may be due to different amounts of organic matter deposition [6,8,10].

Soil and vegetation composition and structure across the six communities in the TPFR

The results demonstrated variations in the concentrations of soil physicochemical parameters across the six study communities.
These results, which are considered a major characteristic of several plateau forests [3,2,7,12], support the hypothesis that soil physicochemical properties are the main factor determining variations in species composition and structure in plateau ecosystems [2]. An accounting factor might be the influence of environmental heterogeneity on the landscape, which results in the creation of edaphic gradients [7]. The structural differences the trees exhibited across the six communities' forest formations on the landscape are in line with several documented studies showing that the composition and structure of vegetation can vary across landscapes that span at least a hundred meters of topographic range [1,14]. In this study, the vegetation attributes (density, richness, Shannon diversity, evenness and BA) all followed a similar pattern of distribution across the communities. All were highest for BO, NK and AK and lowest for SA, HH and AL. This trend of distribution was also seen among the saplings and seedlings. A probable explanation may be that BO, NK and AK lie in lowland areas of the plateau, and these habitats have better and more varied soil physicochemical parameters, which affected the structure of plant communities and therefore showed high values for density and basal area [9]. On the other hand, SA, HH and AL lie in the highland area of the plateau, and these habitats have relatively poor soil physicochemical properties, which affected the structure of their plant communities and therefore showed low values for density and basal area [2]. Therefore, the structural complexity of the vegetation of the TPFR in Ghana strongly correlates with soil physicochemical parameters, in agreement with reports from similar studies [3,2,7]. In the case of richness, Shannon diversity and evenness, which measure diversity, BO, NK and AK again showed higher significant differences among habitats than SA, HH and AL (Table 10).
This may be due to the communities exhibiting distinct soil types that result in habitat differentiation [2]. Soil characterization has been well documented as the main factor that influences plant diversity in landscapes [7,14,5,8]. Trees, for instance, have been documented to have a strong positive correlation with diversity under favourable environmental conditions [8].

Effect of soil physicochemical parameters on vegetation distribution

The results obtained from the CCA analysis suggest that soil plays an important role in determining the composition and structure of vegetation in plateau forest habitats. Canonical Correspondence Analysis (CCA) results showed that pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex. Acidity, ECEC, Base sat, AVI-P (ppmP), Sand and Silt were the drivers of species (tree, sapling and seedling) composition and structure (including density, richness, Shannon diversity, evenness and BA) along the plateau and across the study communities. However, vegetation attributes were highest, and most strongly correlated with soil physicochemical parameters, in BO, NK and AK, occupying the lowlands, and lowest in SA, HH and AL, which occupy the highland areas of the plateau. A similar relationship has been reported for a number of tropical forest sites [2,8,14]. Although five communities have similar soil parameters controlling the vegetation, with one distinct community, the concentrations of these parameters (which are controlled by nutrient and water availability) vary from one community to the other due to the different gradients created, and this consequently results in variation in vegetation distribution across landscape sites [8,14]. This also agrees with similar studies showing that the distribution of vegetation and soil characteristics varies across tropical forest sites [3,2,7].
At the plateau landscape scale, variation in functional community composition and distribution results from local-scale specialization of species in response to the filtering of the locally available species pool by physical and chemical soil properties [3]. Studies on differences in plant functional community composition and distribution across edaphic gradients in plateau ecosystems have reported a general trend that sites with lower resource availability and nutrient retention contain less diverse and less widely distributed plant communities than those with ample soil water and nutrient supply [2]. In this study, BO, NK and AK had more neutral pH levels than SA, HH and AL. This agrees with other studies showing that, across topo-edaphic gradients, plateau forests are generally characterized by more neutral pH levels in lower gradients than in higher gradients, which is beneficial to the decomposition of soil organic matter by microorganisms and affects the release of phosphorus, and this promotes the growth and wide distribution of species in lowland sites [1,14]. This result, which accounts for local habitat heterogeneity, also confirms that every species has a different growth and distribution strategy for local edaphic gradients and that plant community composition is therefore shaped by soil resource availability [1,14].

Conclusion

Soil-species correlation studies must be the first step in understanding the diversity and ecology of plateau ecosystems, because knowledge of the diversity and ecological needs of the species provides clues, particularly for species growing in special locations along and across plateau fringe communities. In the TPFR in Ghana, the Canonical Correspondence Analysis (CCA) results showed that pH, OC, TN, OM, TCa, TMg, TK, Na, T.E.B, ex.
Acidity, ECEC, Base sat, AVI-P (ppmP), Sand and Silt were the drivers of tree composition and structure (including density, richness, Shannon diversity, evenness and BA), of saplings (richness and Shannon diversity) and of seedlings (density and richness) along the plateau and across the study communities. However, vegetation attributes were highest, and most strongly correlated with soil physicochemical parameters, in BO, NK and AK, occupying the lowlands, and lowest in SA, HH and AL, which occupy the highland areas of the plateau. The soil of the study area tended to be weakly acidic to neutral, with a pH ranging between 4.17 and 7.06. The CV values are well within the standard values, with Base sat showing the lowest values (c.v. < 15%), moderate values (c.v. = 15%-34%) for TK, and the highest values (c.v. > 35%) for TCa, TMg, Na, T.E.B, ex. Acidity, ECEC and AVI-P1 on the landscape. The study provides a better understanding of the current status of the reserve so as to assist in the management of this plateau in Ghana.
PHYLOGENY OF HEDYCHIUM AND RELATED GENERA (ZINGIBERACEAE) BASED ON ITS SEQUENCE DATA

The phylogeny of Hedychium J. Koenig was estimated using sequence data of internal transcribed spacer regions 1 and 2 (ITS1, ITS2) and 5.8S nuclear ribosomal DNA. Sequences were determined for 29 taxa, one interspecific hybrid of Hedychium and one species in each of 16 other genera of Zingiberaceae representing tribes Hedychieae, Globbeae, Zingibereae and Alpinieae. Cladistic analysis of these data strongly supports the monophyly of Hedychium, but relationships to other genera are poorly supported. Within Hedychium, four major clades are moderately supported. These clades are also distinguishable on the basis of number of flowers per bract and distribution. Stahlianthus, Curcuma, and Hitchenia also form a strongly supported clade. Based on this limited sample, the currently defined tribes of Zingiberoideae are not monophyletic. The Asiatic genera form a monophyletic group within this broadly defined Hedychieae. The taxonomy and biogeography of Hedychium are reviewed.
INTRODUCTION

Within the Zingiberaceae four tribes are currently accepted (Smith, 1981): Globbeae (bow-shaped filament with unilocular ovary with parietal placentation; four genera); Zingibereae (pointed anther crest surrounding the style and staminodal lobes adnate to the labellum; one genus, Zingiber); Hedychieae (plane of leaf distichy parallel to the rhizome, lateral staminodes petaloid and free from the lip except in Siphonochilus and Curcumorpha, ovary trilocular with axile placentation or unilocular with basal or free columnar placentation; 21 genera); and Alpinieae (distichy of leaves perpendicular to the rhizome, lateral staminodes reduced to teeth or swellings or absent, ovary trilocular with axile placentation; 22 genera). Alpinieae have no capacity to shed their stems or inflorescences by abscission. The other three tribes readily shed their stems and form a corky abscission layer on the rhizome in response to old age, photoperiod, soil temperature or drought. The position of Hedychium within tribe Hedychieae is uncertain.

The tribe Alpinieae is pantropical, ranging from New Guinea and Fiji in the East through Asia, Africa, and Central and South America (Wood, 1991; Wu, 1994). The other three tribes occur mainly in southern Asia with sparse representation in Oceania (with no truly indigenous species east of the Moluccas). The only exception is Siphonochilus, which is endemic to Africa. All fossil Zingiberaceae, some of which date from the late Cretaceous, have been interpreted as having affinities with Alpinieae (Friedrich & Koch, 1970; Hickey & Peterson, 1978).

The terrestrial species in the circum-Himalayan region grow in cool, wet mountains up to 2400 m, whereas the Malesian species are mostly epiphytes. Species delimitation has varied among authors; for example, some divide H. coronarium into 9 species, and H.
coccineum into 7 species (Roscoe, 1828; Turrill, 1914). All authors agree that the genus is monophyletic. It is characterized by long, linear or lanceolate lateral staminodes; persistent, coriaceous bracts; and a long, exserted stamen with a lower dorsifixed anther. The labellum is showy and usually emarginate or cleft into two lobes. Wallich (1853) circumscribed the following subgeneric divisions: Coronariae (more or less tightly imbricate spikes); Spicatae (elongated spikes with distant, spreading bracts); Siphonium (one species, H. scaposa, with a slightly crested anther and a stemless habit similar to the genus Kaempferia, to which it has long since been transferred); and Brachychilum (H. horsfieldii, a cleistogamous plant almost lacking a labellum, with two wide lateral staminodes, formerly a segregate genus but recently placed within Hedychium (Newman, 1990)). Horaninow (1862) divided Hedychium into three groups: Gandasulium (stamen shorter than or equal to the length of the labellum), Macrostemium (stamen much longer than the labellum), and Brachychilum (one species, lacking a significant labellum). He also classified four Indonesian species as incertae sedis. Schumann (1904) redefined Gandasulium to include taxa with a dense, short and wide, ellipsoid, ovoid, rarely long cylindrical inflorescence; with bracts flat, densely imbricate, rarely arched; and with the rachis always hidden. He defined his other subgenus, Euosmianthus, to include species with a less dense inflorescence, longer than wide; bracts never densely imbricate, commonly patent, divergent, or distant from each other, clasping the flowers; and with the rachis not hidden. He also maintained a separate genus, Brachychilum, comprising a Moluccan species and H. horsfieldii. It is the aim of this paper to examine the validity of these classification schemes in light of modern molecular phylogeny.
The first author is writing a monograph of Hedychium. Preliminary phenetic analysis of 15 species (Wood, 1996) using 11 inflorescence characters showed some support for grouping of the species into imbricate- and tubular-bracted groups. Later analysis (Wood, unpublished data) using 110 specimens of 67 species did not indicate well-supported clustering of the two bract types in factor analysis; however, discriminant analysis correctly scored bract types in 90% of the observations, and five of the eleven misclassifications involved Malesian species. The only published molecular studies of Zingiberaceae use rbcL sequences (Clark et al., 1993; Kress, 1995). In these analyses Costaceae are unresolved among the other five families of Zingiberales, and only the four tribes of the Zingiberaceae form a single clade. When morphological characters are added to the rbcL data, the Alpinieae becomes the outgroup to the other three tribes. Wood (1991) hypothesized that the Costaceae and Alpinieae are the earliest groups in the Zingiberaceae and originated in Western Gondwanaland before the effective separation of South America and Africa, although fossil evidence from North America and Europe runs counter to this hypothesis. These two groups were rafted on the Indian subcontinent from Africa to Asia. While the subcontinent was in the middle latitudes and the paleoclimate was fairly dry, the progenitors of the other three tribes of the Zingiberaceae evolved in response to climate, and the ancestors of the African genus Siphonochilus dispersed to eastern Africa. Upon the collision of India with Asia, the uplift of the Himalayas provided many isolated and seasonally favorable habitats that prompted a massive radiation of genera of the Hedychieae, Globbeae, and Zingibereae in upland areas, while the Alpinieae flourished in the lowland tropics of Asia and Oceania.
MATERIALS AND METHODS Leaf samples of 29 species and cultivars of Hedychium, plus representative species from 16 genera in the tribes Hedychieae, Globbeae, and Zingibereae and one member of the tribe Alpinieae, were obtained from material cultivated by the first author. The Alpinia species was selected as the outgroup based on Kress (1995), fossil evidence, and the biogeographic evidence cited above. Also, one specimen that was thought to be an interspecific hybrid (Schilling, 1982) and a wide interspecific hybrid created by hand pollination were included in the analysis in order to evaluate our ability to detect natural interspecific hybrids. A list of taxa examined and voucher numbers is presented in Table 1. Vouchers are deposited in the University of Florida Herbarium (FLAS). Fresh or dried tissue was ground and extracted by the modified CTAB method (Doyle & Doyle, 1990). The ITS1 and ITS2 regions, along with the intervening 5.8S nrDNA region, were amplified using PCR with primers 5 and 4 of Baldwin (1992). The amplified products were cleaned on Qiagen columns according to the manufacturer's instructions. Dye terminator cycle sequencing reactions were performed using Applied Biosystems reagents and protocols. AutoAssembler software (Applied Biosystems) was used to assemble the complementary strands and edit nucleotide sequences. PAUP* 4.0b2 (Swofford, 1999) was used for parsimony analysis. Initial analyses consisted of 1000 replicates of random taxon addition using SPR and MULTREES, saving only three trees per replicate. These trees were then swapped to completion, or until 10,000 trees were saved. The data set was then subjected to three rounds of successive weighting (reweighted on the rescaled consistency index; 1000 replicates, saving 5 trees per replicate) to decrease the effects of highly homoplasious sites. Lledó et al. (1998) gave convincing reasons for using successive weighting. Successive weighting reduces the effects of highly homoplasious sites, and thus emphasises sites that are more consistent.
In addition to maximum parsimony analyses, we evaluated support of the clades by: 1, bootstrap analyses (Felsenstein, 1985) to obtain bootstrap support for nodes using both equally weighted and successively weighted trees, with 1000 replicates of bootstrapping using SPR swapping, MULTREES on, holding 10 trees per replicate; 2, use of Bremer support (Bremer, 1988, 1994) to obtain branch support for the equally weighted trees using the program Autodecay, version 4.0 (Eriksson, 1998) and PAUP* 4.0b2 (Swofford, 1999); and 3, the reliability percentages obtained by the quartet puzzling method of maximum likelihood (ML) (Strimmer and von Haeseler, 1996) as available in PAUP* 4.0b2. The ML parameters were the HKY model, ti/tv = 2, with other parameters set to default. Bremer support trees and ML trees were drawn in the TREEVIEW program (Page, 1996). RESULTS AND DISCUSSION The aligned matrix is 733 base pairs (bp) long, and consists of the 3′ end of 18S (30 bp), ITS1 (235 bp), the 5.8S region (165 bp), ITS2 (238 bp), and 16 bp of the 26S region. Twenty-four bases were excluded due to ambiguous alignment. Of the 709 included sites, 330 were variable and 169 parsimony informative. The initial unweighted analyses yielded 10,000+ trees with a length of 726 (CI = 0.618, RI = 0.644). After three rounds of successive weighting (100 replicates, saving 5 trees per replicate), the final analysis produced 500 trees of length 315.56 (CI = 0.875, RI = 0.838). The Fitch length of the successively weighted trees (equal weights reapplied) was 728 (CI = 0.617, RI = 0.641), two steps longer than the shortest unweighted trees. The CI for unweighted trees with uninformative characters excluded was 0.485, and the CI for weighted trees with uninformative characters excluded was 0.690. Fig.
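The CI and RI statistics quoted above follow directly from per-character step counts on a tree. As a minimal sketch (with toy per-character data, not the actual ITS matrix), the ensemble consistency and retention indices can be computed as:

```python
def consistency_index(min_steps, obs_steps):
    """Ensemble CI: minimum conceivable steps (M) over observed steps (S)."""
    return sum(min_steps) / sum(obs_steps)

def retention_index(min_steps, max_steps, obs_steps):
    """Ensemble RI: (G - S) / (G - M), where G is the maximum conceivable steps."""
    g, m, s = sum(max_steps), sum(min_steps), sum(obs_steps)
    return (g - s) / (g - m)

# Toy example: three characters with minimum, observed, and maximum step counts
min_steps = [1, 1, 1]
obs_steps = [2, 1, 1]   # one homoplasious character adds an extra step
max_steps = [3, 2, 2]

ci = consistency_index(min_steps, obs_steps)            # 3/4 = 0.75
ri = retention_index(min_steps, max_steps, obs_steps)   # (7-4)/(7-3) = 0.75
```

A CI of 1.0 would indicate no homoplasy at all; the successive-weighting procedure described in the methods downweights exactly those characters that drag the CI below 1.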
1 shows a randomly chosen reweighted tree with branch lengths above the lines, bootstrap support values (if ≥70%) directly under the lines, and decay values (given as d=) below the bootstrap values. Decay values are only given for major clades if ≥1. The monophyly of Hedychium is highly supported (bootstrap = 100%). The tree in Fig. 1 shows four moderately supported clades within Hedychium and illustrates poor resolution within these clades due to low sequence divergence. The four clades form an unresolved polytomy in the bootstrap analysis. Clade I plants occur only in southern Vietnam, the Malay Peninsula, and Oceania. They are short, generally epiphytes or calciphiles, with a short-day or day-neutral photoperiod, slender inflorescences, one or two flowers per bract (three in H. bousigonianum), and flowers much exserted from the bracts. Clade II is sister to Clade I in the successively weighted strict consensus tree, and these two clades have weak decay support (d=1). Clade III, represented here by only H. acuminatum, is sister to clades II and I in the strict consensus tree (not shown). Clade III and Clade II are high-altitude Himalayan species that have only one flower per bract and a strict dormancy requirement. Clade IV species have a wider circum-himalayan distribution at lower altitudes than Clades II+III. A comparison of collection information from 23 herbarium specimens from Clade II and 43 specimens from Clade IV showed a mean altitude of 1783 m for Clade II and 1260 m for Clade IV. This difference is highly significant (α < 0.0005) using a Student's t-test. These tall plants do not normally go dormant in the winter and have three or more flowers per bract.
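The altitude comparison above is a standard two-sample t-test. A minimal sketch, using Welch's unequal-variance form: the raw herbarium altitudes are not published here, so the samples below are synthetic, drawn around the reported means (1783 m, n = 23; 1260 m, n = 43) with an assumed 250 m spread.

```python
import math
import random

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Synthetic altitudes (m) around the reported clade means (assumed spread)
random.seed(1)
clade_ii = [random.gauss(1783, 250) for _ in range(23)]
clade_iv = [random.gauss(1260, 250) for _ in range(43)]

t = welch_t(clade_ii, clade_iv)
# a |t| this large is far beyond any conventional critical value for ~50 df
```

With a ~520 m mean difference and these sample sizes, the statistic lands far in the tail, consistent with the highly significant result reported in the text.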
Because artificial interspecific hybrids of Hedychium are easily created (Wood, personal observation), natural hybridization is a potential source of taxonomic confusion in Hedychium. The ITS sequence of the artificial hybrid (H. gardnerianum × H. hasseltii) was intermediate between those of the two parents. The parents differed by 17 bases; at each of these positions, the hybrid sequence displayed polymorphic states, indicating ITS copies from each parent. These positions were scored using ambiguity codes. When the hybrid sequence was included in the PAUP analyses (not shown), the hybrid was sister (on a zero-length branch) to one or the other parent, depending upon the addition-sequence replicate. In contrast, H. densiflorum and the aberrant variety named 'Stephen' possess nearly identical sequences and are sister taxa in the analyses with high bootstrap support. The sequence of cultivar 'Stephen' revealed no ambiguous sites. The similarity of these two sequences does not support the hypothesis that 'Stephen' is a cultivar of recent hybrid origin (Schilling, 1982) and suggests that it is merely an aberrant form of H. densiflorum. These examples show that ITS sequences may reveal recent interspecific hybridization that might otherwise confound phylogenetic analyses. Because ITS regions are known to undergo rapid concerted evolution (Baldwin, 1992), species of ancient hybrid origin may be difficult to detect without additional lines of evidence.
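The additive pattern described for the artificial hybrid can be checked programmatically. A minimal sketch (with made-up four-base sequences, not the real ITS data): at every site where the aligned parents differ, an F1 hybrid carrying ITS copies from both parents should show the IUPAC ambiguity code covering the two parental bases.

```python
# IUPAC two-base nucleotide ambiguity codes
IUPAC = {frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("GC"): "S",
         frozenset("AT"): "W", frozenset("GT"): "K", frozenset("AC"): "M"}

def expected_hybrid_states(parent1, parent2):
    """Predicted sequence of an additive F1 hybrid of two aligned parents."""
    out = []
    for b1, b2 in zip(parent1, parent2):
        out.append(b1 if b1 == b2 else IUPAC[frozenset((b1, b2))])
    return "".join(out)

def diverging_sites(parent1, parent2):
    """Positions (0-based) where the two aligned parental sequences differ."""
    return [i for i, (b1, b2) in enumerate(zip(parent1, parent2)) if b1 != b2]
```

For example, parents "ACGT" and "ACAT" differ at one site (G vs A), so the expected hybrid reads "ACRT". Scanning a putative hybrid's chromatogram for exactly these predicted polymorphic peaks is the logic behind scoring the 17 differing positions with ambiguity codes.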
On a higher taxonomic level, a clade consisting of Cautleya, Pommereschea, Rhynchanthus, and Roscoea is weakly supported (61% bootstrap), but not as the sister group to Hedychium, as we had expected on the basis of morphology. The clade of Pommereschea and Rhynchanthus has a bootstrap support of 94% and a decay value of d=5, with both values indicating very strong support for this clade. This supports the idea that these two genera belong in the tribe Hedychieae, not the Alpinieae as Schumann indicated based on the lack of lateral staminodes. This classification has been supported by Smith (1980) and by Z. Y. Chen (personal communication). Both genera lack petaloid staminodes, making them quite different from Hedychium. Roscoea is sister to Cautleya with 92% bootstrap support and a decay value of d=7. These are high-altitude Himalayan taxa that bear single flowers in each bract. The flowers have oblong, petaloid lateral staminodes and a bifid labellum like Hedychium. The four genera together form a weakly supported clade. Other related clades, while not reflecting on the phylogeny of Hedychium, do shed light on the evolution of the Zingiberaceae. A clade consisting of Hitchenia, Curcuma, and Stahlianthus has a high bootstrap support of 89% and very strong decay support of d=7. The separation of Hitchenia from Curcuma on the basis of exserted flowers and non-versatile anthers has never seemed adequate, but Stahlianthus (single bell-shaped bract) perhaps should be considered as a Curcuma with two adnate bracts.
The position of Zingiber makes the tribe Hedychieae polyphyletic. In spite of describing the genus Cornukaempferia as having an 'anther crest (that) shows a striking resemblance with the anther appendage characterizing Zingiber', Mood and Larsen (1997) placed it close to Kaempferia on the basis of vegetative habit. The ITS data indicate that Cornukaempferia is sister to Zingiber (92% bootstrap support) and might not be generically distinct. The placement of Gagnepainia and Globba makes tribe Globbeae paraphyletic. It is notable that the Asian species of Globbeae, Zingibereae, and Hedychieae form a monophyletic group with a bootstrap support of 99% and decay support of d=3. The fact that the African genus Siphonochilus appears as the sister group to the rest lends credence to the theory of an origin of these tribes on the Indian subcontinent 40-50 million years ago. In conclusion, sequence data show that the genus Hedychium is monophyletic and includes the genus Brachychilum. We propose these clades, which may later be described as subgenera: 1, Clade IV (three to five flowers per bract) and 2, Clade II (one flower per bract), both of which have a natural circum-himalayan distribution; and 3, Clade I (one or two flowers per bract), which occurs only in the Malay Peninsula, Philippines, Borneo, Sumatra, Java, Sulawesi, and the Moluccas. The position of Clade III (H. acuminatum) is uncertain but probably includes H.
venustum (not available for this study) on the basis of morphology. This molecular phylogeny runs counter to Schumann's classification because species in three of these clades occur in each of his subgenera. This analysis also emphasises that the most important factor in the evolution of this genus is geographic and ecological isolation. Because tribes Globbeae and Zingibereae make tribe Hedychieae paraphyletic, we recommend that only tribe Zingibereae be retained. The position of the African genus Siphonochilus is uncertain and must be evaluated along with other genera in the tribe Alpinieae and Costaceae, preferably including African species, to determine its placement. This data set suggests that tribal concepts need to be re-evaluated in the Zingiberaceae. Expanded data sets, such as the conserved matK gene, need to be used before the existing taxonomy is radically altered. The sister group of Hedychium is uncertain based on this sample. FIG. 1. One of 500 reweighted equally most parsimonious trees with branch lengths given above the lines and the bootstrap support values directly below the lines, with weighted bootstrap values followed by unweighted bootstrap values. Weighted bootstrap values are not given for clades with values <70%. Decay values are below the bootstrap values and indicated by d=(value). Additional statistics are given in the text. Genera not in the Hedychieae are indicated by (Z, Zingibereae), (G, Globbeae), and (A, Alpinieae). Sequences were easily aligned manually. The sequences are deposited in GenBank (accessions AF202374-AF202420) and the aligned matrix is available from the first author.
First principles study of Si(335)-Au surface The structural and electronic properties of the gold-decorated Si(335) surface are studied by means of density-functional calculations. The resulting structural model indicates that the Au atoms substitute some of the Si atoms in the middle of the terrace in the surface layer. The calculated electronic band structure near the Fermi energy features two metallic bands, one coming from the step-edge Si atoms and the other one having its origin in hybridization between the Au and neighboring Si atoms in the middle of the terrace. The obtained electronic bands remain in good agreement with photoemission data. Experiments show a single chain within each terrace, so one would naively expect a single metallic band in the electronic structure. However, ARPES data show not one but two bands crossing the Fermi level [12]. It is the purpose of the present work to solve this puzzle and to identify the bands observed in the ARPES experiment. Moreover, there is no structural model confirmed by first principles calculations. However, a model of this surface, based on an analogy with the Si(557)-Au surface, has been proposed in Ref. [12]. It is simply a truncation of the Si(557)-Au structure. So the second purpose of the present work is to check whether the model of Ref. [12] is a good candidate for the atomic reconstruction of the Si(335)-Au surface. In the following, I will focus on the presentation of the results regarding the structural model and electronic properties of the system. Details of calculations The calculations have been performed using the SIESTA code [32]-[35], which performs standard pseudopotential density functional calculations using a linear combination of numerical atomic orbitals as a basis set. I have used here the generalized gradient approximation (GGA) to DFT [36,37], Troullier-Martins norm-conserving pseudopotentials [38], and a double-ζ polarized (DZP) basis set for all the atomic species [33,34,39].
A Brillouin zone sampling of 24 inequivalent k points and a real-space grid equivalent to a plane-wave cutoff of 225 Ry (up to 82 k points and 300 Ry in the convergence tests) have been employed. This guarantees the convergence of the total energy to within ∼0.1 meV per atom in the supercell. The Si(335)-Au system has been modeled by slabs containing up to four silicon double layers plus a reconstructed surface layer. All the atomic positions were relaxed except the bottom layer. The Si atoms in the bottom layer were saturated with hydrogen and remained at the bulk ideal positions during the relaxation process. To avoid artificial stresses, the lattice constant of Si was fixed at the calculated bulk value, 5.42 Å, which is very close to the experimental value of 5.43 Å. Structural model The total energy calculations show that it is energetically very favorable for the Au atoms to substitute into the top Si layer. The surface energy gain per unit cell is more than 1 eV as compared to adsorption above the surface. Furthermore, the Au substitution in the terraces is more stable than adsorption of the Au atoms at the step edge (by about 0.5 eV per unit cell). Similar conclusions have been obtained for the case of the Si(557)-Au surface, where the Au atoms prefer to substitute in the topmost Si layer, too [12,21,23,24]. Therefore, in the following, I will focus on the structural models featuring the top-layer Si atoms substituted by the gold. The most stable model is shown in Fig. 1. However, the other models, in which the Au atoms occupy various top-layer silicon positions, from Si 1 to Si 5 (see Fig. 1 for labeling), have comparable energies. The differences are usually less than ∼0.7 eV per unit cell, and the next 'best' structural model, in which the gold occupies the Si 3 position, differs in energy by only 182 meV. The total energies of the above models with respect to the most stable model are summarized in Table 1.
The present model is simply the model proposed by Crain et al. [12], which has not been deduced from any total energy calculations but was based on an analogy with the Si(557)-Au model, being a simple truncation of the Si(557)-Au surface. Here, the DFT calculations confirm that this model is a good candidate for the Si(335)-Au surface reconstruction. As one can read off from Fig. 1, the Au atoms sit in the middle of the terrace, and the rigidity of the Si structure keeps the Au wire stable against dimerization. The calculated Au-Si 4 bond length of 2.43 Å, and 2.38 Å for Au-Si 5 (see Fig. 1), is quite close to the calculated bulk Si-Si distance of 2.35 Å, indicating that the Au atoms affect the Si structure very little. Another important feature is a strong rebonding of the Si atoms near the step edge. The step-edge atoms tend to saturate the dangling bonds in the neighboring terrace. As a result, a sort of 'honeycomb' building block is created at the step edge, originally proposed for the alkali-induced 3 × 1 reconstruction of the Si(111) surface [40]. It turns out that this sub-structure is a common feature of other Au-decorated Si vicinal surfaces [12]. Band structure The calculated band structure for this structural model, along the high-symmetry line of the two-dimensional Brillouin zone (Fig. 2), is shown in Fig. 3. The line defined by points Γ, K and M' is parallel to the steps, i.e. it goes along the Au chains. The Mulliken population analysis [41] has been performed in order to identify the main character of the bands. Although this analysis is not completely unambiguous, it is particularly useful for surface states. One of the bands shown in Fig. 3 has its origin in the unsaturated bonds of the Si 3 atoms. This band is also very flat, indicating its surface nature, and is located around 1 eV above the Fermi level. All the bands discussed above have also been identified in the case of the Si(557)-Au surface [21,23].
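The Mulliken analysis used to assign band character distributes the electron density over atoms via the density matrix P and overlap matrix S, with the gross population on atom A given by q_A = Σ_{μ∈A} (PS)_{μμ}. A minimal NumPy sketch of this bookkeeping (a toy two-site, H2-like dimer, not SIESTA's implementation):

```python
import numpy as np

def mulliken_populations(P, S, basis_to_atom, n_atoms):
    """Gross Mulliken population per atom: q_A = sum over mu in A of (P S)_{mu,mu}."""
    ps_diag = np.diag(P @ S)
    pops = np.zeros(n_atoms)
    for mu, atom in enumerate(basis_to_atom):
        pops[atom] += ps_diag[mu]
    return pops

# Toy homonuclear dimer: one basis function per atom, overlap s,
# two electrons in the normalized bonding combination.
s = 0.6
c = 1.0 / np.sqrt(2.0 * (1.0 + s))   # bonding MO coefficient
P = 2.0 * c * c * np.ones((2, 2))    # density matrix for 2 electrons
S = np.array([[1.0, s], [s, 1.0]])

pops = mulliken_populations(P, S, basis_to_atom=[0, 1], n_atoms=2)
# by symmetry each atom carries exactly one electron; the populations sum to 2
```

The "not completely unambiguous" caveat in the text reflects a known property of this partitioning: the shared overlap density is split equally between the two centers regardless of their electronegativities, so the assigned populations depend on the basis set.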
This similarity can be easily understood if one recalls that the Si(335)-Au surface is a truncation of the Si(557)-Au surface, as discussed previously. Thus, similar bands, although having different k dependence, should also be observed here. A comparison of the calculated band structure with the photoemission spectra of Ref. [12] is shown in Fig. 4. As one can see, the present DFT calculations agree very well with the experimental data. In particular, there are two bands crossing the Fermi energy, one associated with the step-edge Si atoms, and the other one coming mainly from the Si atoms neighboring the Au chain. Moreover, the shape of those bands and the values of k at which the bands cross the Fermi level remain in good agreement with experiment. Unfortunately, a more detailed comparison between the present calculations and the experiment of Ref. [12] is not possible, as the photoemission spectra were measured in the energy window between E_F and -0.5 eV. However, I expect the agreement not to be worse for lower energies. At this point I would like to comment on the band structure of the other models studied here. Since the energy differences between all these models are rather small, it is natural to compare their band structures with the ARPES data. This would be the convincing criterion that the structural model is the correct one, or at least is very close to the true Si(335)-Au surface reconstruction. Figure 5 shows a comparison of the ARPES data (Ref. [12]) and the band structure calculated for the next 'best' structural model, in which the Au atoms occupy the Si 3 positions (see Fig. 1). This model does not give as good agreement with the experimental data as the original one. In particular, there are three electronic bands crossing the Fermi energy, and none of them crosses E_F at the correct k. The other models studied here also give wrong values of k at which the electronic bands cross the Fermi energy.
Thus, this is an additional argument supporting the validity of the model shown in Fig. 1. Conclusions In conclusion, the structural and electronic properties of the Si(335)-Au surface have been discussed within density functional theory. The DFT calculations revealed that the most stable structural model contains one Au atom per unit cell, which substitutes the Si atom in the middle of the terrace in the surface layer. The calculated electronic structure agrees well with photoemission experimental data, showing two metallic bands. The less dispersive band comes from the Si atoms at the step edge, while the other one originates from hybridization between the Au and the neighboring Si atoms in the middle of the terrace in the surface layer. Table 1: Total energies of the structures of the Si(335)-Au surface for various positions of the Au atoms, as labeled in Fig. 1. The energies (in eV per unit cell) are relative to the most stable model, shown in Fig. 1. Figure 5: Comparison of the ARPES data (Ref. [12]) and the calculated band structure within the model in which the top-layer Si 3 atoms are substituted by the gold atoms.
Microbial Warfare on Three Fronts: Mixed Biofilm of Aspergillus fumigatus and Staphylococcus aureus on Primary Cultures of Human Limbo-Corneal Fibroblasts Background Coinfections with fungi and bacteria in ocular pathologies are increasing at an alarming rate. Two of the main etiologic agents of infections of the corneal surface, Aspergillus fumigatus and Staphylococcus aureus, can form biofilms. However, mixed fungal-bacterial biofilms are rarely reported in ocular infections. The implementation of cell cultures as a study model of biofilm-associated microbial keratitis will allow an understanding of its pathogenesis in the cornea. The cornea maintains a pathogen-free ocular surface, and human limbo-corneal fibroblast cells are part of its cell regeneration process. There are no reports of biofilm formation assays on limbo-corneal fibroblasts, nor of their behavior during a polymicrobial infection. Objective To determine the capacity for biofilm formation during this fungal-bacterial interaction on primary limbo-corneal fibroblast monolayers. Results The biofilm on the limbo-corneal fibroblast culture was analyzed by assessing biomass production and determining metabolic activity. Furthermore, the effect of the mixed biofilm on this cell culture was observed with several microscopy techniques. Single and mixed biofilm formation was higher on the limbo-corneal fibroblast monolayer than on abiotic surfaces. The A. fumigatus biofilm on the human limbo-corneal fibroblast culture showed a considerable decrease compared to the S. aureus biofilm on the limbo-corneal fibroblast monolayer. Moreover, the mixed biofilm had a lower density than that of the single biofilms. Antibiosis between A. fumigatus and S. aureus persisted during the challenge to limbo-corneal fibroblasts, but it seems that the fungus was more effectively inhibited.
Conclusion This is the first report of mixed fungal-bacterial biofilm production and its morphological characterization on a limbo-corneal fibroblast monolayer. Three antibiosis behaviors were observed between fungi, bacteria, and limbo-corneal fibroblasts. The mycophagy effect of S. aureus over A. fumigatus was exacerbated on the limbo-corneal fibroblast monolayer. During fungal-bacterial interactions, it appears that limbo-corneal fibroblasts showed some phagocytic activity, demonstrating tripartite relationships during coinfection. INTRODUCTION Biofilms are microbial consortiums of sessile cells fused inside an extracellular matrix (ECM) composed of biomolecules secreted by the different microbial species. Thus, biofilms are considered a link between microorganisms and the site they are trying to colonize (Fanning and Mitchell, 2012; Ramírez-Granillo et al., 2015). As such, biofilms are also considered a virulence factor influencing the pathogenesis of microbial diseases (Archer et al., 2011; Gibbons et al., 2012; Peters et al., 2012). The study of polymicrobial biofilms has gained increased attention in the last few years and has focused on the study of virulence factors such as adhesion and the production and secretion of enzymes, proteins, and toxins (Karkowska-Kuleta et al., 2009; Archer et al., 2011; Gabrilska and Rumbaugh, 2015). Fungal-bacterial interactions (FBIs) are an example of the link that exists between these microorganisms during biofilm formation over both biotic and abiotic surfaces (Frey-Klett et al., 2011; Tarkka and Deveau, 2016). Mixed fungal-bacterial biofilms (MFBBs) tend to be more prevalent than previously thought, especially in humans, and have been associated with antimicrobial resistance, postsurgical infections, and immunodeficiency diseases (Elder et al., 1996; Nucci and Marr, 2005; Wargo and Hogan, 2006; Jabra-Rizk, 2011; Peters et al., 2012; Diaz et al., 2014; Arvanitis and Mylonakis, 2015).
These risk factors are determinants of the development of MFBB infections in the eye. Structurally, the ocular surface is composed of the cornea, conjunctiva, and sclera; its main function is to protect the physical integrity of the eye (Busquet and Gabarel, 2008; Jiménez-Martínez et al., 2016; Lu and Liu, 2016). This protective process depends on its ability to regenerate the epithelial layer under the influence of human limbo-corneal fibroblast cells (HLFCs). On the other hand, when the ocular surface is altered, changes in the microbiota can be promoted, causing ophthalmological pathologies associated with either the microbiota located in the tissue adjacent to the cornea or the conjunctiva (paucibacterial) or the resident microbiota of high pathogenic potential (pathobionts) (Chow et al., 2011; Doan et al., 2016). The microbiome of the ocular surface is mainly composed of Gram-positive bacteria such as Propionibacterium sp., Corynebacterium sp., and Staphylococcus sp., and Gram-negative bacteria such as Pseudomonas sp. and Acinetobacter sp., among others (Dong et al., 2012; Kugadas and Gadjeva, 2016; Lu and Liu, 2016). A handful of retrospective studies around the world have assessed the finding of FBIs during ocular surface infections (Delgado et al., 2008; Mejia-Lopez et al., 2010). Erroneous sampling, low microbial populations on the ocular surface, and the presence of non-culturable microorganisms in ocular samples have been associated with underestimation of FBIs during ocular keratitis (Kugadas and Gadjeva, 2016; Lu and Liu, 2016). Clinical manifestations of ocular disease related to MFBBs are even harder to characterize (Samimi et al., 2013). The in vitro formation of polymicrobial biofilms of Aspergillus fumigatus (AF) and S. aureus (SA) isolated from patients with infectious keratitis has been demonstrated and showed an antagonistic behavior (Ramírez-Granillo et al., 2015).
To our knowledge, this is the first study in which the formation of polymicrobial biofilms (bacteria-fungi-cells) has been assessed using primary cultures. The aim of this study was to demonstrate the formation of mixed biofilms in vitro using primary cultures of HLFCs coinfected with the two main etiologic agents of infectious keratitis, A. fumigatus and S. aureus, as well as to study the interaction between these microbial agents on primary cultures of human limbo-corneal fibroblasts. Strains Clinical isolates of A. fumigatus (Ramírez-Granillo et al., 2015; González-Ramírez et al., 2016) and S. aureus (Ramírez-Granillo et al., 2015) were kindly donated by the Instituto de Oftalmología Fundación Conde de Valenciana (IOFCV). The characterization of both isolates was carried out at the IOFCV, and the identity of the isolates was corroborated as previously reported by Ramírez-Granillo et al. (2015). A. fumigatus was cultured on Potato Dextrose Agar (PDA) (MCD Lab, Tlalnepantla, Estado de México, Mexico) and incubated for 5 days at 37°C. The conidia from the A. fumigatus culture were harvested by flooding the plate with phosphate-buffered saline (PBS) supplemented with 10% v/v Tween 20. The surface of the fungal culture was scraped with a sterile glass scraper, and the microconidia suspension was then collected with a sterile pipette. Afterward, the conidia were filtered through two sterile nylon filters (44 and 37 µm) as previously reported (Mowat et al., 2010; Ramírez-Granillo et al., 2015). The conidial suspension should be used immediately after extraction for the infectivity test. To avoid rapid germination of the conidia, we used an ice bath during handling; it is possible to keep A. fumigatus conidia for 10 h without loss of viability. The S. aureus strain was seeded in BHI broth (MCD Lab, Tlalnepantla, Estado de México, Mexico) and incubated overnight (ON) at 37°C under agitation.
From this original culture, a stock suspension was prepared using RPMI 1640 medium supplemented with 10% v/v heat-inactivated fetal bovine serum (FBS) (Gibco, Waltham, MA, USA) and adjusted with the McFarland nephelometer (tube 0.5). The isolates used in this work can be shared with the scientific community upon request.

Primary Human Limbo-Corneal Fibroblast Cultures

Primary HLFCs were kindly donated by the IOFCV and obtained as previously reported by Luna-Baca et al. (2007). The primary cultures were used between the third and fifth passages to avoid the proliferation of particular clones. The culture of HLFCs was standardized to adapt the cells to the biofilm formation conditions. For the propagation of HLFCs, frozen vials of fibroblasts were thawed and seeded in T75 culture flasks (Sarstedt AG, Nümbrecht, Germany) using Dulbecco's Modified Eagle Medium/Nutrient Mixture F-12 Ham (DMEM/F-12) (Sigma-Aldrich Chemical Co., St. Louis, MO, USA) supplemented with 10% v/v heat-inactivated FBS. The cells were incubated at 37°C in a 5% CO2 atmosphere. For the microbial challenge, confluent cell monolayers were detached with a solution of 0.5% trypsin in PBS, and cell viability was assessed using 0.1% trypan blue. Viable cells were counted in a Neubauer chamber, and the cellular suspension was adjusted to the standardized multiplicity of infection (MOI = 1; equivalent to 50,000 HLFCs and 50,000 conidia or bacteria per well) (Luna-Baca et al., 2007; Castañeda-Sanchez et al., 2013). The counted fibroblasts were seeded on flat-bottom polystyrene multi-well plates with RPMI 1640 medium supplemented with 10% v/v FBS until reaching an adequate cell confluence (80%-90% fibroblasts). The final volume depended on the type of polystyrene plate used (96-well plate, 200 µl; 12-well plate, 3 ml; six-well plate, 4 ml), and the cultures were incubated under the conditions previously indicated.
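The cell-counting and inoculum arithmetic described above can be sketched as follows. This is a minimal illustration only: the square counts, dilution factor, and stock concentrations are invented for the example (the study does not report them), and the McFarland 0.5 equivalence of ≈1.5 × 10^8 CFU/ml is the conventional approximation, not a measured value.

```python
# Illustrative sketch of the Neubauer-chamber and MOI arithmetic.
# All numeric inputs below are assumptions for demonstration.

def neubauer_cells_per_ml(mean_count_per_large_square, dilution_factor=1):
    """Standard Neubauer formula: cells/ml = mean count x dilution x 10^4."""
    return mean_count_per_large_square * dilution_factor * 1e4

def inoculum_volume_ul(target_cells, stock_cells_per_ml):
    """Volume of stock (in microliters) that carries the target cell number."""
    return target_cells / stock_cells_per_ml * 1000

# Target per well for MOI = 1: 50,000 HLFCs and 50,000 conidia or bacteria.
target = 50_000

# Example count: 55 cells per large square, sample diluted 1:2 in trypan blue.
hlfc_stock = neubauer_cells_per_ml(55, dilution_factor=2)  # 1.1e6 cells/ml

# A McFarland 0.5 suspension is conventionally taken as ~1.5e8 CFU/ml.
sa_stock = 1.5e8

print(f"HLFC stock: {hlfc_stock:.2e} cells/ml")
print(f"Seed {inoculum_volume_ul(target, hlfc_stock):.1f} ul of HLFC stock per well")
print(f"Seed {inoculum_volume_ul(target, sa_stock):.3f} ul of S. aureus stock per well")
```

The same two helpers cover both the fibroblast seeding and the microbial inoculum, since MOI = 1 here simply means equal cell numbers per well.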
Flow Cytometry for Immune Typification of Primary Human Limbo-Corneal Fibroblast Cell Cultures

To demonstrate the phenotype of the HLFCs, the cells were stained with three monoclonal antibodies for flow cytometry analysis. Briefly, an anti-vimentin antibody (ab92547; Abcam, United Kingdom) and an anti-pan-cytokeratin antibody (CTK) (ab7753; Abcam, United Kingdom) were used as primary antibodies. Additionally, a third antibody directed against alpha smooth muscle actin (SMA) was used to stain limbo-corneal cells (ab5694; Abcam, United Kingdom). For staining, the culture was washed twice with PBS, and the cells were detached using 0.1% trypsin in PBS and suspended in DMEM/F-12 supplemented with 10% v/v FBS. The cells were fixed and permeabilized with the BD Cytofix/Cytoperm™ Fixation/Permeabilization Solution Kit (BD Biosciences, San Diego, CA) following the manufacturer's instructions. Afterward, the fixed cells were concentrated twice by centrifugation at 800 rpm for 5 min and subsequent washing with BD Perm/Wash™ buffer. The cells were then stained with the primary antibodies described above and the proper secondary antibodies. Flow cytometric analyses were carried out on a BD FACSVerse (BD Biosciences), acquiring 10,000 cells. The flow cytometry data were analyzed using FlowJo version 7.6.2 software (Ashland, OR, USA).

Single and Mixed Microbial Biofilm Formation on Primary Human Limbo-Corneal Fibroblast Cells

Single (AF, SA) and mixed (AF+SA) biofilms were developed using the protocol described by Ramírez-Granillo et al. (2015), except that RPMI 1640 with 10% v/v FBS was used as described below. Three different infection models (AF+HLFC, SA+HLFC, and AF+SA+HLFC) were assayed on HLFC monolayers grown to confluence in six-, 12-, and 96-well plates.
For the infection process, the cell media were discarded, and the cell monolayers were washed twice with PBS, followed by infection at an MOI of 1 with the microbial inocula previously prepared in culture medium supplemented with FBS as described above (Luna-Baca et al., 2007; Castañeda-Sanchez et al., 2013). The adhesion phase was allowed to proceed by incubating the inoculated cultures at 37°C under a 5% CO2 atmosphere for 4 h. After the adhesion phase, the culture medium in each well was replaced with fresh RPMI 1640 + 10% v/v FBS, and the incubation continued for up to 24 h to reach the maturation phase of the biofilm (Ramírez Granillo et al., 2015; González-Ramírez et al., 2016). For all assays, a monolayer of uninfected HLFCs was used as a control to verify that no significant cell culture changes occurred over time.

Biofilm Quantification by the Christensen Crystal Violet Method

Single and mixed biofilm cultures with and without HLFCs on 96-well plates (Nunc™, Roskilde, Denmark) were allowed to proceed for 6, 12, and 24 h. Afterward, the supernatant was discarded, and the biomass produced was evaluated as previously described by Christensen et al. (1985), with the modifications proposed by Ramírez-Granillo et al. (2015). Subsequently, adhered cells were fixed with 99% methanol (200 µl) for 15 min. After removing the methanol, 200 µl of 0.005% crystal violet were added and the cells were stained for 20 min. The excess dye was removed, and the plates were allowed to dry at room temperature. After drying, the contents of the wells were washed gently with distilled water until total elimination of the crystal violet reagent. Additionally, to extract the dye absorbed in the biofilm, 200 µl of 33% acetic acid were added, avoiding touching the bottom and walls of the wells. The acetic acid was allowed to act for 15 min.
Then, the acetic acid extract containing the solubilized dye was quantified at a wavelength of 595 nm using a Multiskan Ascent ELISA microplate reader (Thermo Labsystems). Three individual experiments for each infection model were evaluated.

Biofilm Metabolic Activity by the Tetrazolium Salt Reduction Method

The biofilm metabolic activity was evaluated by the reduction of 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) as previously described by Walenka et al. (2005). After the biofilm maturation process (6, 12, and 24 h), the supernatant of the infected cells was discarded, followed by one washing step with PBS. After the washing step, 100 µl of PBS and 100 µl of 0.3% MTT solution (SIGMA®, St. Louis, MO, USA) were added to each well. The cells were incubated for 2 h at 37°C. After the incubation period, the supernatant in each well was discarded, and 150 µl of dimethyl sulfoxide (DMSO) (Riedel-de Haën™, Seelze, Germany) in 25 µl of glycine buffer (0.1 M, pH 10.2) were added to each well, followed by an incubation of 15 min at room temperature under light shaking. Finally, the microplates were read at 540 nm using the Multiskan ELISA microplate reader. Three individual experiments for each infection model (monospecies biofilms and mixed biofilms, with and without HLFCs) were evaluated.

Assessment of Biofilm Formation by Scanning Electron Microscopy and Transmission Electron Microscopy

For SEM, HLFC cultures grown on 12-well plates (Santa Cruz Biotechnology, Santa Cruz, CA, USA) were infected as described above (monospecies biofilms and mixed biofilms, with and without HLFCs). After a biofilm maturation period of 24 h, cells were fixed for 2 h with 2% glutaraldehyde (Electron Microscopy Sciences®, Washington, PA, USA). Then, cells were washed twice with PBS, and a postfixation step with 1% osmium tetroxide (Electron Microscopy Sciences®, Washington, PA, USA) was carried out, incubating the cells for 2 h.
The bases of the wells were removed using a warm metal auger. The samples were placed in a polystyrene plate and dehydrated in a graded ethanol series (10%-90%) before a final dehydration step in 100% ethyl alcohol for 10 min, performed in triplicate (JT Baker, Phillipsburg, NJ, USA) (Bozzola and Russell, 1999; Vázquez-Nin and Echeverría, 2000). To desiccate the samples to the critical point, one drop of hexamethyldisilazane (Electron Microscopy Sciences®, Washington, PA, USA) was added to each sample and left to evaporate completely (Hazrin-Chong and Manefield, 2012). Biofilm samples were coated with a gold-palladium alloy for 70 s at 5.0 kV. Finally, samples were observed under a high-resolution scanning electron microscope (JEOL JSM-7800F Field Emission Scanning Electron Microscope, Japan). For TEM, samples were prepared in the same way as for SEM up to the ethanol dehydration step, after which the samples were embedded in resin ON at 60°C for the polymerization step. Semithin sections of the embedded samples were cut with a Leica Ultracut UCT (Wetzlar, Germany) and exposed to lead-uranyl solutions for contrast. Finally, samples were mounted for observation by TEM (JEOL, Tokyo, Japan) at the Central Microscopy Laboratory of the ENCB-IPN.

Statistical Analysis

The data obtained from the three different experiments of the biofilm quantification methods were analyzed by two-way ANOVA. To determine the statistical significance of the observed differences, a Student-Newman-Keuls (SNK) test was used, with a p-value of <0.05 considered significant. The mean values of the different samples in the assays performed were corrected by subtracting the value of the negative control. The negative control used in all experiments was RPMI 1640 + 10% v/v FBS. Data were plotted using SigmaPlot 12.0 (Systat Software Inc., San Jose, CA, USA). The statistical analyses were handled according to the recommendations of Allkja et al. (2020).
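The negative-control correction and group comparison described above can be sketched as follows. This is a minimal illustration, not the study's analysis: the OD values are invented, and a pure-Python one-way ANOVA F statistic stands in for the two-way ANOVA with SNK post hoc test reported by the authors (which would normally be run in a statistics package).

```python
# Sketch of the absorbance processing: subtract the sterile-medium blank
# (RPMI 1640 + 10% FBS) from each raw OD, then compare groups.
# All OD values below are invented for demonstration.

def background_correct(raw_ods, control_ods):
    """Subtract the mean negative-control OD from every raw reading."""
    blank = sum(control_ods) / len(control_ods)
    return [od - blank for od in raw_ods]

def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented crystal violet ODs (595 nm) for one model at 6, 12, and 24 h.
control = [0.09, 0.10, 0.11]  # uninfected medium blank, triplicate
od_6h = background_correct([0.35, 0.38, 0.33], control)
od_12h = background_correct([0.62, 0.65, 0.60], control)
od_24h = background_correct([1.10, 1.15, 1.08], control)

f_stat = one_way_anova_f([od_6h, od_12h, od_24h])
print(f"F(2, 6) = {f_stat:.1f}")
```

Because the blank is subtracted from every reading before testing, the correction shifts all groups equally and does not change the F statistic; it only makes the reported means comparable across plates.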
Characterization of the Human Limbo-Corneal Fibroblast Cell Primary Cultures

The phenotypic profile of the HLFC primary cultures was assessed by flow cytometry. Three different markers of limbo-corneal fibroblasts were selected: vimentin (VIM), cytokeratin (CTK), and alpha smooth muscle actin (SMA). The flow cytometric analysis revealed that 99.1% of the cells expressed VIM, while only 7.17% expressed CTK and 8.75% expressed SMA. Negative controls for each marker were also included in the analysis, corroborating the HLFC phenotype as VIM+ CTK- SMA- (Supplementary Figure S1).

Biofilm Formation on Human Limbo-Corneal Fibroblast Cells by the Christensen Crystal Violet Method

The amount of biomass was assessed by the CVM under each of the experimental conditions described, after 6, 12, and 24 h of biofilm formation. All of the experimental models showed optimal biomass production at 24 h postinfection. In the AF+HLFC model, the amount of biomass produced at 24 h was higher [absorbance units (AU) >1.0] than that produced by the fungal biofilm alone (AU <0.7) (Figure 1A). In the SA+HLFC model, an increase in biomass production was detected (AU >0.3) with respect to the biomass produced by the bacterial biofilm without HLFCs (AU <0.1) (Figure 1D). Finally, in the AF+SA+HLFC model, biomass production increased (AU <1.0) in comparison with the mixed biofilm without HLFCs, in which a statistically significant decrease in biomass production was observed (AU >0.6) (Figure 1G). The uninfected HLFC monolayer used as a control was evaluated, and no significant changes in monolayer biomass over time were detected.

Metabolic Activity of the Biofilms Formed on Human Limbo-Corneal Fibroblast Cells by MTT

The in vitro tetrazolium salt reduction (MTT) method revealed that the metabolic activity of the sessile cells embedded in the biofilms was optimal.
For all the biofilm models, the metabolic activity was determined at 24 h. For the AF+HLFC model, the metabolic activity of the fungal biofilm was AU >0.04, whereas the metabolic activity of the monospecies biofilm without the HLFC monolayer was higher (AU >0.06), a statistically significant difference (Figure 1B). We detail this aspect further on, since it points to a possible inhibition mechanism exerted by the HLFCs on A. fumigatus. For the SA+HLFC model, efficient metabolic activity was detected (AU <0.14), which was directly proportional to the incubation time. Compared with this S. aureus biofilm on fibroblasts, bacterial viability was significantly reduced in the bacterial biofilm without HLFCs (AU <0.12) (Figure 1E). The metabolic activity for both mixed biofilm models (AF+SA and AF+SA+HLFC) reached the maximum AU values (AU >0.20). Compared with the monospecies biofilm models with and without HLFCs, no statistically significant differences were observed between the two mixed biofilms in the MTT assay (Figure 1H). The uninfected control (HLFCs alone) maintained a basal absorbance.

Morphological Analysis of Biofilm Formation on Human Limbo-Corneal Fibroblast Cells by Scanning Electron Microscopy and Transmission Electron Microscopy

The biofilms were developed on different surfaces (polystyrene/abiotic and HLFC/biotic). The typical characteristics (specific to each microbial biofilm model) are shown in Table 1. Moreover, the formation of MFBB was observed in the AF+SA+HLFC model (Figure 2A), with formation of channels, hyphal development, and bacteria embedded in ECM (Figures 2B, C). Under SEM, the topography of the uninfected limbo-corneal fibroblast monolayer was characterized by a thick flat layer adhered to the abiotic surface. Most of the cells were embedded in an amorphous material (Figure 3A).
Fibroblasts showed a large fusiform morphology with a length of 170 µm, a concave zone related to the nuclear region, and a convex zone resembling the nucleolus. At some borders of the cellular membrane, filopodia could be observed (Figure 3A). When the monolayer of HLFCs was observed by TEM, residues of an amorphous material were observable. Nascent filopodia were also detected in the cellular membrane of some fibroblasts (Figure 3B). Elongated cells were distributed in palisades, with secreted material surrounding their cytoplasmic membrane. The intracellular structures were unmodified. The nucleus showed a size of ≈10 µm, with a highly electrodense and well-delimited elliptical nuclear membrane. A circular nucleolus was also observed inside the nucleus, with a diameter of around 1,500 nm. The cytoplasm was unaltered, and several cellular structures were observed, such as ribosomes (≈2,500 nm), intracellular vesicles (500 nm), and secretory granules (<500 nm) (Figure 3C). The cytoplasmic membrane of the HLFCs was unaltered, and collagen fibers adjacent to the outer nuclear envelope were observed. At 24 h of AF+HLFC biofilm formation, both types of electron microscopy revealed that hyphae are capable of generating cellular damage by two mechanisms. The first fungal process is the penetration of hyphae through the HLFCs, seen by SEM (Figure 3D) and TEM (Figures 3E, F). The second fungal process is the colonization of the cell surface accompanied by the secretion of extracellular polymeric substances (EPS) (Figures 3G-I). This phenomenon did not cause the loss of the nuclear envelope of the fibroblasts. In the SA+HLFC model, several indicators of cell damage were revealed by SEM, including cracks on the surface of the limbo-corneal fibroblast cells, which maintained their original size. Cocci were detected on the HLFC monolayer next to the filopodia (Figure 3J). By TEM, the bacterial population at the periphery of the HLFCs was observed.
Inside the cytoplasm, highly electrodense circular structures were observable (Figure 3K). At higher magnification, a single bacterium was identified. In some cocci, the bacterial septum was evident, indicating cell duplication (Figure 3L). Moreover, several interstitial spaces (<1.0 µm in diameter) were detected, and destabilization of the cell membrane caused abnormalities in the cytoplasm, while the nucleolus showed an irregular shape (Figure 3L) compared with uninfected cells. SEM observation of the mixed biofilm (AF+SA+HLFC) showed that the fungal population decreased. In addition, the increase of amorphous fungal structures and the absence of conidia were evident. The affinity of the bacteria for A. fumigatus was evident compared with that for the HLFCs. Apparent damage caused by the bacteria to the fungal wall, coating the hyphae, was also distinguished. As for the fibroblasts, only alterations in shape and size were observed. In fields where the fungus was not perceived, the bacteria were closer to the fibroblasts, which produced several filopodia (Figure 3M). In the micrographs obtained by TEM, the effect of the FBI on HLFCs was similar (Figure 3N). The cell wall of the hyphae secreted EPS that was surrounded by cocci. In the same field, an HLFC showing an interstitial space, but still with an unaltered nucleus, could be observed (Figure 3O). In other fields, it was possible to observe that S. aureus was able to infect HLFCs and cause cell lysis. However, in certain fields, the fibroblasts produced filopodia that appeared to surround the cocci (Figure 3J). Throughout the course of the fungal biofilm on HLFCs, severe damage to the hyphae was observed. In some fields, the hyphae retracted while colonizing the cellular surface (Figure 4A). During this interaction, spherical bodies with a diameter of ≈100 nm were detected over the cellular monolayer; these structures could be secretory granules or exosomes (Figure 4B).
In other fields, the fungi were able to colonize and degrade the cell monolayer; hyphae were observed enveloped in a dense material (Figure 4C). The hyphae showed a scalded appearance and terminated abruptly in the apical zone (Figure 4D).

Fungal-Bacterial Biofilm Composition on Primary Human Limbo-Corneal Fibroblast Cells Using Epifluorescence Microscopy

Uninfected HLFC monolayers were analyzed by EFM using several dyes to detect ECM components as well as chitin-like compounds in the monolayer (Figure 5A1). The detection of carbohydrate residues was weak (Figure 5A2), and PI clearly showed the nuclei of the fibroblasts, but not extracellular DNA (eDNA) (Figure 5A3). Bright-field images of the cell monolayer (Figure 5A6) showed a flat surface covered by a dense material identified as carbohydrate (Figure 5A5). In fungal biofilms with and without HLFCs, higher ECM production by A. fumigatus on fibroblasts was confirmed. The highest signal was observed in the AF+HLFC biofilm with all three dyes (Figure 5C), demonstrating that the ECM is composed of chitin, glucose and/or mannose residues, and eDNA (Figures 5B, C). Likewise, fibroblasts were weakly labeled with CW (Figure 5C1), whereas hyphae were strongly marked; a similar effect occurred with ConA (Figure 5C2). In the 3D model of the fungal biofilm with HLFCs, increased fluorescence was observed in the biofilm structures (Figure 6B) compared with the monospecific biofilm (Figure 6A). In addition, a higher amount of glucose or mannose (Figure 6B2) and eDNA (Figure 6B3) was detected in AF+HLFC than in the fungal biofilm without fibroblasts (Figures 6A2, A3). Moreover, a co-localization effect was observed, with dense fluorescence and structural integrity of the fungal biofilm developed on the HLFC monolayer (Figure 6B4).
On the other hand, the composition of the ECM was similar in the bacterial biofilms, with carbohydrates and eDNA being the main components (Figures 5D, E). In the 3D structure of the biofilm, the radius of the microcolonies was larger during colonization of the cell monolayer (Figure 6D). This HLFC monolayer was still organized (Figure 6D1) but with abundant cocci surrounding the cells. Also, eDNA and carbohydrates were located at the center of the bacterial microcolonies in both models (Figures 6C4, D4). During FBIs, CW showed the strongest labeling of the hyphae (Figures 5F, G), despite the reduction of these fungal structures in the mixed biofilm including HLFCs (Figure 5G) and the high number of bacteria surrounding the hyphae; these cocci were mainly marked by ConA (Figure 5G2), and eDNA detection was evident on the hyphae (Figure 5G3). Furthermore, 3D reconstructions corroborated that the hyphae are surrounded by numerous cocci and are scarce in the AF+SA+HLFC model (Figures 6E, F). Additionally, in the merged images, the co-localization of eDNA and complex and simple carbohydrates is evident (Figures 6E4, F4).

Interaction Model During Mixed Infection on In Vitro Human Limbo-Corneal Fibroblast Cell Culture

The set of results obtained in this study provided a backdrop for describing the possible events that occur during the establishment of mixed biofilms on HLFCs. Therefore, a graphical overview was constructed to illustrate these microbial effects through three different pathways (Figure 7). The data suggest that two microbiological behaviors occurred during the FBI. The first behavior was the formation of MFBBs, which initiates with the colonization of the monolayer surface by planktonic conidia, maintaining a stable union of the fungal surface with the HLFCs (Figure 7A). At this stage, the secretion of EPS promotes ECM formation.
A mature fungal biofilm was characterized by an abundant and rigid ECM; planktonic propagules contribute to forming structural bioscaffolds that reach a sessile stage. The co-aggregation pathway could follow two different routes (Figure 7B). In the first pathway, the secondary colonizer joins the surface of the mature fungal biofilm, forming new bioscaffolds of sessile cocci (Figure 7C). In the second pathway, planktonic bacteria induce co-aggregation in the fungal biofilm, reaching the sessile phase. At this point, it is possible that unaggregated planktonic cells can then migrate to another site (Figure 7D). On the other hand, the second behavior was an antibiosis relationship. During the FBI, S. aureus inhibits A. fumigatus, which may involve the production of unknown compounds that trigger cell lysis (Figure 7E). Regarding A. fumigatus against HLFCs, there are two possible routes of fungal spreading: the first is hyphal perforation [turgor mechanisms accompanied by the activity of the Spitzenkörper system (cell wall enzymes, microvesicles, and macrovesicles) as well as thigmotropic reactions], and the second is EPS secretion and hyphal adhesion (Figure 7F). Furthermore, the behavior of S. aureus against limbo-corneal fibroblasts led to the dissemination of the bacterium into the human cell, triggering pore formation and cell lysis; this behavior affects the cytoplasmic membrane and the cytoskeleton, with disruption of desmosomes (Figure 7G). Finally, the effect of HLFCs against the microbes is described, whereby self-defense is conducted through various innate immune mechanisms (exosomes, phagocytic microvesicles, and crescent formation) (Figures 7H, I).

DISCUSSION

The processes and factors involved in the establishment of polymicrobial biofilms remain poorly characterized. In the case of MFBBs, several studies suggest that FBIs are driven by physical interactions between the biofilm components.
An important example of this type of relationship occurs between two of the main etiologic agents of microbial keratitis worldwide: A. fumigatus and S. aureus. Previous studies have suggested that the interaction between these two microorganisms can form biofilms on abiotic surfaces (Ramírez Granillo et al., 2015); in fact, the methodologies used were quite similar, but in this study, we used RPMI supplemented with FBS to reproduce the conditions used for culturing fibroblasts. The results of this research demonstrated that this fungus and this bacterium are capable of forming biofilms on biotic surfaces. Additionally, to our knowledge, this is the first report in which the A. fumigatus-S. aureus interaction has been observed in a primary cell culture of HLFCs with biofilm formation. In general, the results showed that biofilm development is more efficient on biotic surfaces (HLFCs) than on abiotic surfaces (polystyrene); to directly compare both surfaces, FBS was added to all treatments (Figure 1, Table 1). Similarly, abundant extracellular material in single and mixed biofilms was demonstrated in the presence of the HLFC monolayers by EFM (Figures 5, 6). This qualitative biofilm technique has been used by our research group for the detection of biomolecules that constitute the ECM (Ramírez Granillo et al., 2015; González-Ramírez et al., 2016; Camarillo-Márquez et al., 2018; Bautista-Hernández et al., 2019). Other authors have reported that by using molecules that specifically eliminate ECM components, such as sodium periodate (which destroys carbohydrates), DNase (which digests DNA), and proteinase K (which digests proteins), it was possible to demonstrate that the fluorochromes specifically detect such biomolecules (Baillie and Douglas, 2000; Chandra et al., 2001; Córdova-Alcántara et al., 2019). This biofilm detachment assay represents a good approximation of the composition of the ECM.
However, some authors have noted that not all the components are available or susceptible to the action of the degrading molecules; for example, the oxidation of carbohydrates is not fully accomplished, as demonstrated by Ikeda et al. (2007) and Doern et al. (2009). When the human ocular surface constituted by HLFCs is compromised by mechanical damage, adhesion sites are exposed, generating an optimal environment for the adhesion and establishment of microbial populations in the eye. Likewise, on abiotic surfaces, the adhesion processes are nonspecific and are mediated by hydrophobic and electrostatic forces. This is demonstrated by the reversibility of the adhesion process on abiotic surfaces not pretreated with synthetic substrates, microorganisms, or tissues known to favor microbial adhesion (Rittman, 1989; Asaria et al., 1999; Fulcher et al., 2001; Dunne et al., 2002; Harris et al., 2002; Parsa et al., 2010; Sun et al., 2010; Percival et al., 2011; Abelson and McLaughlin, 2012; Sengupta et al., 2012; Zhang et al., 2012; Samimi et al., 2013; Bispo et al., 2015; Boukahil and Czuprynski, 2018; Ponce-Angulo et al., 2020). Thus, the development of mixed biofilms on abiotic surfaces previously conditioned with primary cell cultures is an opportunity to understand the features and processes of such polymicrobial associations. The intention to obtain a response close to that of the eye was the main reason we chose primary cultures of HLFCs for this work. Moreover, it is well known that primary cultures have a finite number of duplications and retain characteristics closer to those of the original host. We did not use cell lines because of their aneuploidy; cell lines have lost the original host's characteristics. On the other hand, we used biomass quantification and metabolic activity determination to understand the ecological relationships among our three microbial models.
When the HLFC culture was analyzed, neither the biomass production nor the metabolic activity of the culture was modified during the kinetics performed (Supplementary Figure S2A). These baseline readings indicated that both the CVM and MTT techniques are sufficiently sensitive to detect the microorganisms in the biofilm experiments (Mowat et al., 2007; Ramage et al., 2009; Camarillo-Márquez et al., 2018). In addition, the mixed biofilm and the fungal biofilm with HLFCs produced the highest amounts of biomass, while the bacterial biofilm on limbo-corneal fibroblasts produced significantly less. These results are similar to those reported by our research group for an in vitro mixed biofilm, with the exception of the MTT assays (Ramírez Granillo et al., 2015). We reported that A. fumigatus establishes a dense biofilm. This fungus is a better biofilm producer than S. aureus on abiotic surfaces; the same was confirmed on biotic surfaces. [Figure 6 legend: The fungal biofilm with HLFCs is denser, with strong co-localization of the ECM components on the monolayer. In the bacterial 3D model, microcolonies are enveloped in layers of polysaccharides and eDNA, with stronger co-localization in SA+HLFC (D) compared with SA (C). The ECM was scarce for the FBI models, AF+SA (E) and AF+SA+HLFC (F), with detection of a strong antibiosis over AF. Calcofluor White (CW: blue; chitin and glycosylated carbohydrates), concanavalin A (ConA: green; glucose and mannose residues), and propidium iodide (PI: red; nucleic acids). Co-localization was obtained by image merging. HLFCs, human limbo-corneal fibroblast cells; AF, Aspergillus fumigatus; SA, Staphylococcus aureus; Ct, chitin; GM, N-acetylglucosamine/glucose and mannose residues; eD, extracellular DNA; F, fibroblast; Co, co-localization; H, hypha; Cc, cocci; N, nucleus.] Therefore, we suggest that the formation of MFBB on HLFCs begins with the
interaction of the conidium with the limbo-corneal fibroblasts, leading to the expression of several molecular components that allow an initial stable adhesion, giving rise to the beginning of the colonization process of the biotic surface. These planktonic fungal populations (conidia and hyphae) adhere consecutively, originating "structural bioscaffolds", which are exploited by the Gram-positive bacteria. After cocci adhesion, bacterial aggregates appear, and a true mixed biofilm is formed on the HLFCs. This hypothesis needs to be tested in more detail to characterize the molecules that drive the process, leading to the identification of possible therapeutic targets that could aid in the treatment of polymicrobial keratitis. In contrast, the MTT reduction assays (Supplementary Figure S2B) also allowed the identification of a second microbial behavior: an antibiosis relationship within this FBI. The maximum absorbance value was for the mixed biofilm on HLFCs, followed by the bacterial biofilm, and finally the fungal biofilm. Likewise, the antibiosis effect was observable by SEM; bacteria were the predominant population during this microbial interaction on the limbo-corneal fibroblast cultures (Supplementary Figure S2F), while A. fumigatus and the HLFCs appeared diminished in number in the micrographs. Likewise, monolayer destruction was evident, and the HLFCs expressed numerous filopodia on the cellular membrane. Additionally, the size of the fibroblasts was altered (≈10-30 µm), and the monolayer of HLFCs showed dramatic erosion from the abiotic surface. These results are consistent with a previous report from our research team (Ramírez Granillo et al., 2015) in which antibiosis against this filamentous fungus by the action of S. aureus during in vitro biofilm formation was reported.
This mycophagy event, generated by the bacteria in the HLFC model, has a direct impact on the fungal population, since the bacteria take advantage of the fungal components for self-nutrition (Leveau and Preston, 2008). Mycophagy events have previously been reported for this Staphylococcus species in other in vitro fungal-bacterial models (Ikeda et al., 2007; Camarillo-Márquez et al., 2018; Bautista-Hernández et al., 2019). In summary, the correlation of the MTT values with the microscopic evidence obtained by SEM suggests that in this FBI on HLFCs, the prevalent microorganism is S. aureus. In this work, several antibiosis effects were detected, suggesting a microbial war taking place on at least three fronts: the FBI (AF-SA: the antagonistic relationship previously described), microbial interactions (AF, SA, AF-SA) with HLFCs, and interactions of HLFCs against the microorganisms (Figure 7). The fungal antagonism effect against the limbo-corneal fibroblast cells was also evidenced by SEM and TEM for the assayed models. The first fungal antibiosis effect is related to hyphal perforation of HLFCs (Figures 3D-F). Hyphal perforation of pulmonary endothelial cells by A. fumigatus has previously been reported and has been associated with disruption of the endothelial barrier to promote in vivo hematogenous dissemination of the fungus (Kamai et al., 2009). Non-mechanical perforation mediated by the turgor of the apical zone of the hyphae has also been described, accompanied by the Spitzenkörper process, which causes the accumulation of vacuoles filled with lytic enzymes. Likewise, thigmotropic reactions that deform the cell wall are associated with this mechanism (Bowen et al., 2007; Brand and Gow, 2009; Steinberg et al., 2017; Dimou and Diallinas, 2020). The other type of fungal damage is the secretion of EPS with subsequent adhesion of the hyphae, also detected in this work on the cell culture (Figures 3G-I).
Previous studies of pulmonary epithelia of patients infected with A. fumigatus have suggested that this fungus is capable of secreting metabolites such as sialic acid from conidia, which are beneficial for the invasion process. Sialic acid mediates the adhesion of the fungus to cellular components such as fibronectin and laminin. Aspergillus species can secrete other metabolites such as gliotoxins, fumagillin, and several types of proteases that can trigger changes in the cellular membrane and the cytoskeleton to facilitate the invasion process (Dagenais and Keller, 2009; Osherov, 2012; Croft et al., 2016; Gago et al., 2018). We recommend that further studies address the effect of these secondary metabolites and ligands on keratitis, since their participation during this infectious process in the eye has not been studied in detail. Bacterial antibiosis was related to intracellular invasion, observed in the SA+HLFC model. This behavior was observed in more detail by TEM during the FBI (Figures 3K, L). Several in vitro studies using osteoclasts, endothelial cells, and fibroblasts have suggested that S. aureus can act as a facultative intracellular bacterium; this phenomenon has not been observed in vivo. The most studied mechanisms involved in the intracellular dissemination of this bacterium in vitro are the interactions mediated by S. aureus adhesins and toxin synthesis. In fact, several lines of evidence suggest that fibronectin in the HLFCs plays an important role during in vitro infections and may be related to the keratitis caused by this microorganism (Lowy, 2000; Jett and Gilmore, 2002; Foster et al., 2014; Rollin et al., 2017). Both the formation of bacterial interstices and hyphal perforation (Figures 3M-O) were observed in the FBI+HLFC biofilm; intracellular spread was also specifically detected by EFM. These interstices, with a diameter <1.0 µm, similar to the size of the bacteria, are observed on the surface of the fibroblasts.
When overlaying the images (ConA: green halos; PI: red halos), the bacteria were distinguished as intense orange marks adjacent to the HLFCs; by TEM, the same bacterial invasion appears to be present (Supplementary Figure S3). Additionally, changes in cell-cell junctions, particularly in structures that resemble desmosomes, were observed. This phenomenon has been related to several S. aureus toxins, such as Exfoliative Toxin A (ETA), which acts directly on desmoglein, one of the main components of desmosomes (Lowy, 2000; Kowalczyk and Green, 2013; Johnson et al., 2014; Mariutti et al., 2017). Some of our observations suggest damage to the desmosomes (Supplementary Figures S4C, D), as reported during P. aeruginosa monoinfections of corneal epithelial cell cultures (Fleiszig et al., 1996; Lee et al., 1999). On the other hand, there is the antibiosis caused by HLFCs against the microorganisms. HLFCs have a direct impact on fungal growth, as evidenced by the presence of thin, poorly branched hyphae in several fields (Figures 4C, D). Furthermore, the presence of exosome-like structures in the cytoplasm of the infected HLFCs (Figures 4A, B) could be associated with proinflammatory immune responses, as has been reported for this type of cell during mycobacterial infections (Castañeda-Sanchez et al., 2013). HLFC-produced microvesicles, which have also been reported during murine fibroblast infections and have been associated with phagocytosis, were also observed (Masci et al., 2016). Thus, regarding fungal dissemination, we suppose that these structures represent some type of defense mechanism against this filamentous fungal agent (Supplementary Figures S4E, F). Several structures in the fibroblast monolayer were visualized throughout the micrographs in the SA+HLFC model.
These resemble cellular structures associated with innate immune responses, such as filopodia surrounding cocci (Figure 3J), which in other cellular models have been associated with phagocytic processes against bacterial agents mediated by Toll-like receptors. Similarly, HLFCs showed crescent formation (Supplementary Figures S4A, B) during this FBI on monolayers. We suggest that this behavior is related to a phagocytosis process. These phenomena, together with the filopodia (Figure 3M) and microvesicles (Supplementary Figure S4C), suggest phagocytic activity by the limbo-corneal fibroblast cells, as reported for macrophages. This behavior could be directly related to the immunology of the eye, which is considered an immune-privileged organ with an innate response (Kress et al., 2007; Heimer et al., 2010; Pérez et al., 2013; Jimenez-Martinez et al., 2016; Masci et al., 2016; Bautista-Hernández et al., 2017; Horsthemke et al., 2017; Rosales and Uribe-Querol, 2017). These ideas allowed us to represent the possible events that occur during the MFBB over the HLFCs (Figure 7). In this study, we were able to identify the first evidence on the MFBB of potential opportunistic pathogens (A. fumigatus-S. aureus) and the ocular host response (HLFC). In addition, our data suggest that both microbial agents are able to attack and destroy the limbo-corneal fibroblast cell monolayers, but that the HLFCs are able to strike back. Furthermore, our results suggest microbial warfare on three different fronts. The first is a clear antibiosis between A. fumigatus and S. aureus. The second and third are bidirectional antibiosis between the microorganisms and the fibroblasts. We believe that our experimentation could open a new research field to understand eye immunology and its interactions with polymicrobial biofilm infections. The ecological interactions are complex, and all of the members interact with each other.
Further understanding could permit the use of this microbial warfare as a source of new therapeutic molecules.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS

AR-G performed key experiments and drafted the manuscript. LAB-H, FSM-G, AD-L, and IMC-A worked in the laboratory with the cell cultures and biofilm formation and quantification. VMB-D, NOP, and MAM-R participated in the experimental design, paper editing, and discussions. AVR-T is the leader of the research line and funded the investigation. All authors contributed to the article and approved the submitted version.

Supplementary Figure 1 | Immune typification of the primary in vitro culture of HLFCs. Flow cytometry analysis is representative of a population of 100,000 cells, of which a 66.2% subset corresponds to the cell lineage (A). Anti-rabbit (B, H) and anti-mouse (E) antibodies were used as negative controls. Antibodies directed against vimentin (VIM) showed expression in more than 99% of cells (C), whereas cytokeratin (CTK) (F) and alpha smooth muscle actin (SMA) (I) were poorly expressed, demonstrating the absence of these markers in the study phenotype. The overlap of marker expression corroborated the identity of these proteins (D, G, J).

Supplementary Figure 2 | Comparison of the quantification and characterization methods for monoculture and mixed biofilms and the MFBB on HLFC cultures. CVM) AF+HLFC and AF+SA+HLFC showed the highest biomass production (≈1.0 AU), followed by SA+HLFC (≈0.3 AU), which presented a lower biomass production (A). TTM) AF+SA+HLFC showed the most efficient metabolic activity (<0.20 AU), followed by SA+HLFC (<0.15 AU) and AF+HLFC (<0.05 AU) (B). SEM) Infection-free HLFC cultures (C: 1000x; 2000x) were observed without apparent alteration (normal size).
Observation of the AF+HLFC model showed that hyphae produced an ECM, although with a worn-out appearance in some of them. The HLFCs were abnormal in size and shape (D: 1000x; 2500x). The SA+HLFC model exhibited the formation of a few microcolonies with EPS production. HLFCs have a normal size, with the presence of surface cracks and developed filopodia (E: 1000x; 5000x). The micrographs of the AF+SA+HLFC model reveal that bacterial growth exceeds that of the fungus and the fibroblasts. Moreover, the monolayer was limited to certain areas, and the HLFCs were abnormal in several fields (F: 1000x; 5000x). The results are from four replicates of three different experiments: n=12. Significance was determined using the Student-Newman-Keuls multiple-comparison procedure and is indicated as: (*), P<0.050. HLFCs, Human Limbo-Corneal Fibroblast cells; AF, Aspergillus fumigatus; SA, Staphylococcus aureus; H, hypha; F, Fibroblast; A, Anastomosis; Ch, Channels; Asterisk (*): Extracellular matrix; ML, Monolayer; Fp, Filopodia; Cc, Cocci.

Supplementary Figure 3 | Intracellular infection during FBI on HLFCs. Biofilms were grown for 24 h on in vitro monolayer cultures of HLFCs. EFM) Fibroblasts were detected with CW (A: 63x). Con A) stained bacteria enveloping fibroblasts (B: 63x). PI (C: 63x) showed the nuclei of the HLFCs as red halos. The overlay of the images shows the bacteria as intense orange marks surrounding the HLFCs (D: 63x). A digital zoom showed interstices of around 1 µm (D1; D2). TEM) AF+SA+HLFC: the fungal population was reduced compared to the cocci surrounding HLFCs (E: 20000x). Intracellular cocci were observed in the cytoplasm of HLFCs; fibroblasts formed interstices (<1.0 µm) (F: 40000x). HLFC: cells without infection were seen with their cytoplasm and internal structures unaltered. In addition, intracytoplasmic inclusions of approximately 0.3-0.5 µm were observed (G: 6000x; H: 20000x).
SA+HLFC: high bacterial population compared to the fibroblasts. In the center of the micrograph, an HLFC with interstices caused by intracellular infection by SA (I: 15000x). AF+HLFC: hyphae were seen secreting extracellular material between a group of fibroblasts (J: 7500x). In this model, HLFCs with interstices were not observed, and some cellular structures can be seen within the cytoplasm (K: 20000x). AF+SA: during this FBI, it is possible to observe abnormal hyphae with polar invaginations, as well as adjacent cocci attached to the fungal cell wall (L: 20000x).
Regulating Immunogenicity and Tolerogenicity of Bone Marrow-Derived Dendritic Cells through Modulation of Cell Surface Glycosylation by Dexamethasone Treatment

Dendritic cellular therapies and dendritic cell vaccines show promise for the treatment of autoimmune diseases, the prolongation of graft survival in transplantation, and in educating the immune system to fight cancers. Cell surface glycosylation plays a crucial role in cell-cell interaction, uptake of antigens, migration, and homing of DCs. Glycosylation is known to change with the environment and the functional state of DCs. Tolerogenic DCs (tDCs) are commonly generated using corticosteroids including dexamethasone (Dexa); however, to date, little is known about how corticosteroid treatment alters glycosylation and what functional consequences this may have. Here, we present a comprehensive profile of rat bone marrow-derived dendritic cells, examining their cell surface glycosylation profile before and after Dexa treatment as resolved by both lectin microarrays and lectin-coupled flow cytometry. We further examine the functional consequences of altering cell surface glycosylation on the immunogenicity and tolerogenicity of DCs. Dexa treatment of rat DCs leads to profoundly reduced expression of markers of immunogenicity (MHC I/II, CD80, CD86) and pro-inflammatory molecules (IL-6, IL-12p40, inducible nitric oxide synthase), indicating a tolerogenic phenotype. Moreover, by comprehensive lectin microarray profiling and flow cytometry analysis, we show that sialic acid (Sia) is significantly upregulated on tDCs after Dexa treatment, and that this may play a vital role in the therapeutic attributes of these cells. Interestingly, removal of Sia by neuraminidase treatment increases the immunogenicity of immature DCs and also leads to increased expression of pro-inflammatory cytokines, while tDCs are moderately protected from this increase in immunogenicity.
These findings may have important implications in strategies aimed at increasing tolerogenicity where it is advantageous to reduce immune activation over prolonged periods. These findings are also relevant in therapeutic strategies aimed at increasing the immunogenicity of cells, for example, in the context of tumor-specific immunotherapies.

Abbreviations: iDC, immature bone marrow-derived dendritic cell; tDC, tolerogenic bone marrow-derived dendritic cell; niDC, neuraminidase-treated bone marrow-derived immature dendritic cell; ntDC, neuraminidase-treated bone marrow-derived tolerogenic dendritic cell; mDC, mature bone marrow-derived dendritic cell (LPS-treated); MLR, mixed lymphocyte reactions; Sia, sialic acid; Dexa, dexamethasone.

Keywords: tolerogenic dendritic cells, glycosylation, dexamethasone, immunogenicity, tolerogenicity, sialic acid, autoimmunity, cell therapy

Introduction

Dendritic cells (DCs) are professional antigen-presenting cells and a component of the innate immune system that induces adaptive immune responses (1). DCs were first described by Steinman and Cohn in 1973 (2) and were subsequently identified as potent activators of the immune system when employed in mixed lymphocyte reactions (MLRs) (3). DCs are a heterogeneous population classified into different subsets depending on their origin (4). DCs have been extensively investigated for potential use as a cellular therapy due to their ability to maintain peripheral tolerance, which is of importance in the fields of transplantation and autoimmunity. Since mature DCs are potent activators of T-cell responses, pharmacological approaches have been used to maintain DCs in a maturation-resistant state (5)(6)(7).
The glucocorticoid dexamethasone (Dexa) has been widely used in this context (8)(9)(10)(11). Glucocorticoids are potent immunosuppressive drugs that are used in clinical regimens to treat both Th1- and Th2-mediated inflammatory diseases including allograft rejection (12). Dexa is known to exert potent effects on many immune cells including DCs (8,13). It has been consistently described in the literature that Dexa has inhibitory effects on the development of immature DCs (iDCs) (5,8,12,14), and that it also impairs lipopolysaccharide (LPS) (TLR4) stimulation of DCs, which would otherwise lead to their maturation (mDCs) (15)(16)(17). In addition, Dexa-treated DCs have a reduced capacity to activate naïve T lymphocytes by interfering with Signals 1-3, which are important for T-cell activation (17). In the context of transplantation, preclinical experiments have suggested the potential therapeutic use of both donor- and recipient-derived tolerogenic DCs to prevent organ graft rejection (18). In a rat model, we have recently shown that pretreatment of donor DCs with Dexa ex vivo prevents the maturation of DCs and prolongs rat corneal allograft survival upon injection into corneal transplant recipients (13). However, the mechanisms by which tolerogenic DCs engage with other immune cells and exert their immunomodulatory effects are not completely understood. Despite this, tolerogenic DCs have already been tested in humans suffering from various diseases. As of this writing, there are eight tolerogenic DC cell therapies listed in Phase I/II clinical trials for treatment of autoimmune disease and graft rejection (https://clinicaltrials.gov, September 2017, search for keywords "tolerogenic DCs"), which highlights the importance and urgency of understanding the mechanisms associated with the therapeutic effect. Glycosylation is one of the most vital and frequent forms of posttranslational modification and is involved in the function of many immune-associated molecules.
Some of the functions of glycosylation include, but are not limited to, protein folding and molecular trafficking to the cell surface (19)(20)(21)(22)(23). Glycosylation has also been implicated in the stability of proteins and protection from proteolysis (24). All immune cells are coated by a glycocalyx composed of a complex assortment of oligosaccharides (glycans), of which one frequent terminal component is sialic acid (Sia). Sias are a broad family of negatively charged, 9-carbon monosaccharides that are exposed to the cellular microenvironment and are involved in communication and cellular defense (25). It has been reported that a typical somatic cell surface presents millions of Sia molecules (26), and Sias have long been noted to be important in immune cell behavior (27). It has been suggested that Sias can play important roles both in acting as a recognizable molecule for cellular interactions and as a biological shield preventing receptors on cells from recognizing their ligands (28). Large amounts of Sias on the cell surface of immune cells result in an overall negative charge, which can have biophysical effects, such as the repulsion of cells from each other, subsequently disrupting cellular interactions (29). Since immune cell interactions form the basis of immune responses, glycosylation is likely to play a major role in dictating these responses. However, there is a significant knowledge gap as to how glycosylation modulates immune responses. Currently, little information exists on how DC glycosylation patterns change after Dexa treatment. Here, we present a comprehensive profile of bone marrow-derived DCs (BMDCs), examining their cell surface glycosylation before and after Dexa treatment as resolved by both lectin microarrays and lectin-coupled flow cytometry.
In this work, the composition of the glycocalyx of both iDCs and tolerogenic DCs (tDCs) was altered using neuraminidase (sialidase) treatment, and the functional consequences for immunogenicity and inhibition of T-cell proliferation were observed. We show that Sia is upregulated on tDCs, contributing to their tolerogenic state. However, removal of Sia leads to increased stimulatory activity of iDCs, leading to enhanced T-cell activation and proliferation. These findings have important implications in strategies aimed at increasing tolerogenicity where it is advantageous to reduce immune activation over prolonged periods. These findings are also relevant in therapeutic strategies aimed at increasing the immunogenicity of cells, for example, in the context of tumor-specific immunotherapies.

Isolation and Generation of iDCs and tDCs

Immature DCs were generated using an adapted version of a previously described protocol (13) (Figure S1 in Supplementary Material). Briefly, on day 0, male DA rats of the specified age were sacrificed and the tibia and femur were surgically removed postmortem. The epiphyses were cut, and the bone marrow was flushed from the long bones with a syringe/needle combination. The erythrocytes were removed from the suspension by lysis using a standard red blood cell lysis buffer (Sigma-Aldrich, Dublin, Ireland). After erythrocyte lysis, the cells were washed in RPMI-1640 (Gibco, Grand Island, NY, USA) medium supplemented with 10% heat-inactivated fetal bovine serum (FBS), 2 mmol/L L-glutamine, 100 mmol/L nonessential amino acids, 1 mmol/L sodium pyruvate, 100 U/mL penicillin, 100 µg/mL streptomycin, and 55 µmol/L 2-β-mercaptoethanol (2β-ME) (Gibco). Cells were resuspended at a concentration of 1.5 × 10⁶/mL and plated at 4.5 × 10⁶ cells per well of a 6-well plate.
The culture medium was supplemented with 5 ng/mL rat granulocyte-macrophage colony-stimulating factor (GM-CSF) (Invitrogen, Paisley, UK) and 5 ng/mL rat IL-4 (Peprotech EC, London, UK). Cells were incubated under standard cell culture conditions (37°C, 5% CO₂) and, on the third day of culture, half of the medium from each well was harvested and the cells were resuspended in fresh medium supplemented with rat GM-CSF and IL-4 and added back into the culture. On the fifth day, the supernatant was exchanged with fresh supplemented growth medium to remove dead granulocytes and lymphocytes. In experiments requiring tDCs, Dexa (Sigma-Aldrich) was added to the culture at 10⁻⁶ mol/L at this point. On the seventh day of culture, half of the medium was again removed and replaced with fresh supplemented medium (Dexa was added as required). To generate mDCs, LPS (1 µg/mL; Sigma-Aldrich) was added for the final 24 h of culture. Cultures were maintained until day 10 and then gently pipetted off the bottom of the wells for the in vitro assays.

RNA Isolation and RT-PCR

RNA was extracted from iDCs, tDCs, mDCs, niDCs, and ntDCs on day 10 using Bioline Isolate II RNA mini kits according to the manufacturer's protocols. All cDNA was produced using RevertAid™ H Minus Reverse Transcriptase (Thermo Fisher Scientific, MA, USA) with random primers. For primer sequences of GAPDH, TNF-α, IL-12p40, inducible nitric oxide synthase (iNOS), IL-10, IDO, IL-6, and IL-1β, see Table S1 in Supplementary Material. All samples were normalized to the expression of the housekeeping gene GAPDH and made relative to iDCs. All quantitative real-time PCR was performed according to the standard program using a real-time PCR system (StepOne Plus, Applied Biosystems, Thermo Fisher Scientific). For analysis of the assays involving lymphocytes from the lymph nodes and spleen, the following mAbs were used: CD3/PE, CD8/PE-Cy7, CD4/APC (BioLegend), and CD25/FITC (eBioscience, San Diego, CA, USA).
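As a quick check of the plating arithmetic above (4.5 × 10⁶ cells per well at a density of 1.5 × 10⁶ cells/mL, with cytokines at 5 ng/mL), the numbers can be sketched in a few lines of Python. The helper names are illustrative and are not part of the original protocol.

```python
# Sketch of the plating arithmetic from the protocol above.
# Function names are illustrative, not from the original methods.

def plating_volume_ml(cells_per_well: float, density_per_ml: float) -> float:
    """Volume of cell suspension needed per well."""
    return cells_per_well / density_per_ml

def cytokine_mass_ng(volume_ml: float, conc_ng_per_ml: float) -> float:
    """Mass of cytokine (e.g., GM-CSF or IL-4) contained in a given volume."""
    return volume_ml * conc_ng_per_ml

vol = plating_volume_ml(4.5e6, 1.5e6)   # 3.0 mL per well of a 6-well plate
gmcsf = cytokine_mass_ng(vol, 5.0)      # 15 ng GM-CSF per well at 5 ng/mL
print(vol, gmcsf)
```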
Prior to staining, cells were washed with FACS buffer. mAbs were diluted in 50 µL FACS buffer, added to the cells, and incubated for 15 min at 4°C. To remove any unbound antibodies, the cells were washed three times with FACS buffer. The cells were then filtered through a nylon mesh (40 µm) before analysis on the cytometer.

Mixed Lymphocyte Reaction/T-Cell Proliferation Assays

Lymphocytes were isolated from the spleen and lymph nodes of LEW rats. T cells were washed with phosphate-buffered saline and stained in prewarmed (37°C) CellTrace™ Violet (CTV) phosphate-buffered saline staining solution (Invitrogen, Carlsbad, CA, USA) as per the manufacturer's instructions. 2 × 10⁵ CTV-stained T cells were stimulated at a 1:1 ratio with anti-rCD3/anti-rCD28-labeled beads in supplemented RPMI 1640 medium. Assays were incubated at various BMDC:T-cell ratios in a humidified incubator for 4-5 days at 37°C, following which T-cell proliferation and CD4 and CD8 expression were assayed by flow cytometry (mAbs CD4-APC and CD8α-PE-Cy7; BioLegend). T-cell proliferation, activation, and differentiation were analyzed using a FACS Canto II.

Membrane Protein Extraction and Labeling

Membrane proteins were extracted from iDCs, tDCs, niDCs, and ntDCs using a commercial protein extraction kit (Mem-PER®, Thermo Fisher Scientific). Proteins recovered from 10⁶ cells were labeled with 100 µg (10 mg/mL in DMSO) Alexa Fluor® 555 succinimidyl ester dye (Thermo Fisher Scientific) as per the manufacturer's instructions. Labeled protein was separated from unconjugated dye with Bio-Gel® P6 (Bio-Rad Laboratories, Dublin, Ireland).

Lectin Microarray Construction and Sample Interrogation

Lectin microarrays were constructed essentially as described previously in Ref. (30).
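Proliferation dyes such as CTV report cell division by dye dilution: fluorescence roughly halves with each division, so the generation number of a gated peak can be estimated from its intensity relative to the undivided parent peak. The following is a minimal sketch of this standard principle, not a method from this paper.

```python
import math

def generations(parent_intensity: float, measured_intensity: float) -> int:
    """Estimate division count from dye dilution: intensity roughly halves
    with each cell division (standard proliferation-dye principle)."""
    return round(math.log2(parent_intensity / measured_intensity))

# A peak at 1/8 of the undivided parent peak corresponds to ~3 divisions.
print(generations(8000.0, 1000.0))  # -> 3
```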
Forty-four lectins (Table S2 in Supplementary Material) sourced from multiple vendors were diluted to 0.5 mg/mL in PBS supplemented with 1 mM of the respective haptenic sugar to maintain binding site integrity (see Table S2 in Supplementary Material) and printed on Nexterion® H (Schott, Mainz, Germany) functionalized glass substrates using a sciFLEXARRAYER S3 non-contact spotter (Scienion, Berlin, Germany). During printing, relative humidity and temperature were maintained at 62% (±2%) and 20°C, respectively. Following printing, slides were incubated in a humidity chamber overnight at 20°C to ensure completion of covalent conjugation. Unoccupied functional groups were deactivated by 1 h incubation with 100 mM ethanolamine in 50 mM sodium borate, pH 8. Finished slides were washed with PBS with 0.05% Tween-20 (PBS-T) three times for 3 min and once with PBS for 3 min, centrifuged dry (450 × g, 5 min), and stored at 4°C with desiccant until use. Labeled cellular proteins were incubated with the finished microarrays following extensive optimization as described in Ref. (30). All processes were carried out with limited light exposure. Samples were applied to microarrays using an 8-well gasket slide and incubation cassette system (Agilent Technologies, Cork, Ireland). 70 µL of each labeled glycoprotein at 0.5 mg/mL in incubation buffer [TBS-T; Tris-buffered saline (TBS; 20 mM Tris-HCl, 100 mM NaCl, pH 7.2, supplemented with 1 mM CaCl₂ and 1 mM MgCl₂) with 0.05% Tween®-20] was applied to each well of the gasket. A total of 18 technical replicates were carried out for iDC and tDC profiling (encompassing samples of five biological replicates). Each microarray slide was loaded into a cassette with an accompanying gasket slide and placed in a rotating incubation oven (23°C, approximately 4 rpm) for 1 h. Incubation cassettes were disassembled under TBS-T, and microarrays were washed in a Coplin jar twice in TBS-T for 2 min each and once with TBS for 2 min.
Microarrays were dried by centrifugation (450 × g) and imaged immediately using an Agilent G2505B microarray scanner at 5 µm resolution (532 nm laser, 100% laser power, 90% PMT).

Microarray Data Extraction and Analysis

Data extraction and analysis were performed essentially as previously described (30, 31). In brief, raw intensity values were extracted from high-resolution *.tif files using GenePix Pro v6.1.0.4 (Molecular Devices, Berkshire, UK) and a proprietary *.gal file (containing feature spot addresses and identities) using adaptive diameter (70-130%) circular alignment based on 230 µm features. Numerical data were exported as text to Excel (Version 2010, Microsoft, Dublin, Ireland). Local background-corrected median feature intensity data (F543median − B543) were analyzed. The median value, derived from data from six replicate spots per subarray, was handled as a single data point for graphical and statistical analyses. Lectin microarray intensity values were normalized to the median total intensity value for all features across all subarrays. The significance of differences between relative intensity data (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001) was evaluated for each set of replicates on a lectin-by-lectin basis using a standard Student's t-test (two-tailed, two-sample unequal variance). Unsupervised, hierarchical clustering of lectin-binding data was performed with Hierarchical Clustering Explorer v3.0 (http://www.cs.umd.edu/hcil/hce/hce3.html). For clustering analysis, previously normalized data were imported directly and clustered with the following parameters: no pre-filtering, complete linkage, Euclidean distance. Principal component analysis (PrCA) of previously normalized and pre-filtered data (those lectins which demonstrated p < 0.01 or better in the above t-tests, 15 in total) was performed using Minitab version 16.1.1 (Minitab, Inc., State College, PA, USA).
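The per-lectin data handling described above (median of six replicate spots as a single data point, normalization to the median intensity across features, and a two-tailed, two-sample unequal-variance t-test) can be illustrated with a small pure-Python sketch. The data values are invented and the functions are illustrative; they are not the authors' actual analysis scripts.

```python
import statistics as st

def feature_value(replicate_spots):
    """Median of background-corrected replicate spot intensities
    (six spots per lectin per subarray, handled as one data point)."""
    return st.median(replicate_spots)

def normalize(profile):
    """Normalize a subarray profile to the median intensity across its features."""
    m = st.median(profile.values())
    return {lectin: v / m for lectin, v in profile.items()}

def welch_t(a, b):
    """Two-sample unequal-variance (Welch) t statistic, as in the t-test above."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)  # sample variances
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

# Invented two-lectin profiles for one iDC and one tDC subarray:
idc = {"SNA-I": 1200.0, "MAL-II": 800.0}
tdc = {"SNA-I": 2400.0, "MAL-II": 820.0}
print(normalize(idc)["SNA-I"], normalize(tdc)["SNA-I"])
```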
Statistical Analysis

Data were analyzed using the software package FlowJo v10 (Tree Star, Ashland, OR, USA). All data were analyzed with GraphPad Prism V6 software (GraphPad Software, CA, USA) and are expressed as mean ± SEM unless otherwise indicated. Comparisons among multiple groups were made with one-way ANOVAs followed by Tukey's multiple comparisons test. Data sets with two groups were analyzed using an unpaired t-test. Differences were considered statistically significant when the p-value was <0.05.

Dexamethasone Treatment of BMDCs Induces a Tolerogenic Phenotype

Dexamethasone treatment of DCs has been reported to generate tolerogenic DCs (tDCs) (32). To generate iDCs, bone marrow was flushed from the long bones of the tibia and femur of DA rats and cultured in medium supplemented with GM-CSF, IL-4, and Dexa (for tDCs) as required (Figure S1A in Supplementary Material). Following isolation, cell surface characterization was performed using flow cytometry by gating on the CD11b/c population (Figure 1A). tDC generation did not result in any significant changes in cell size (Figure 1B, i), but the number of cells harvested from wells treated with Dexa was significantly lower than that from Dexa-free wells (Figure 1B, ii). This may be due to Dexa-induced apoptosis of the DCs, which has been reported by other groups (33). While lower numbers of cells were obtained from tDC wells, no significant changes in viability were noted after harvesting and washing of the cells (Figure 1B, iii).

[Figure 1D legend: The mRNA expression of interleukin 6 (IL-6), indoleamine 2,3-dioxygenase (IDO), interleukin 1 beta (IL-1β), inducible nitric oxide synthase (iNOS), and IL-12p40 was analyzed in iDCs and tDCs, normalized to GAPDH, with fold change relative to iDCs. Error bars: mean ± SEM; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; one-way ANOVA, Tukey's multiple comparisons test.]
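The one-way ANOVA used above reduces to comparing between-group and within-group mean squares. The following is a minimal sketch of the F statistic (Tukey's post hoc test omitted), using made-up expression values rather than data from the study.

```python
import statistics as st

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across several groups
    (between-group mean square / within-group mean square)."""
    all_vals = [x for g in groups for x in g]
    grand = st.mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# e.g., a marker measured in three hypothetical groups (iDC, tDC, mDC):
print(one_way_anova_f([1.0, 1.2, 0.9], [0.4, 0.5, 0.45], [2.0, 2.2, 1.9]))
```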
We also analyzed the expression levels of the costimulatory molecules CD80/CD86 and the major histocompatibility complex class I and II molecules (MHC I/II) as an indicator of the maturation status of the generated iDCs and tDCs (Figure 1C). The expression levels of CD80, CD86, MHC I, and MHC II indicate that the iDCs display a semi-mature phenotype. However, when the cells were treated with Dexa, a significant reduction in the expression level of MHC II was observed, with no change in MHC I (Figure 1C). To mature iDCs or tDCs in vitro, LPS was added to the cultures (1 µg/mL) for 24 h. A significant increase in CD80/CD86, MHC I, and MHC II was noted. However, tDCs following LPS treatment showed significantly reduced expression levels of CD80/CD86 and MHC I/II molecules compared to stimulated iDCs, indicating a maturation-resistant phenotype. iDC and tDC populations were also assessed for expression of pro- and anti-inflammatory markers with and without Dexa treatment by qRT-PCR (Figure 1D). The results indicate that LPS stimulation of iDCs leads to an increase in mRNA expression of pro-inflammatory molecules such as IL-6, IL-12p40, and iNOS. In contrast, tDCs are less sensitive to TLR4 stimulation compared to mDCs, indicated by no observed increases in IL-6, IL-12p40, and iNOS after LPS treatment. Higher levels of IDO mRNA, which is known as a marker of tolerogenic cells, are present in LPS-treated tDCs when compared to mDCs. Interestingly, IL-1β mRNA expression does not seem to be regulated by Dexa, as LPS stimulation leads to a profound increase that cannot be blocked by Dexa. Altogether, these data indicate that Dexa treatment of iDCs leads to the generation of a tolerogenic DC phenotype with reduced expression of markers of immunogenicity, reduced expression of pro-inflammatory molecules, and increases in immunoregulatory molecules.
tDC Generation Modulates the Glycocalyx by Significantly Increasing Levels of α2-6-Linked Sia

Changes in the DC glycocalyx after induction of a tolerogenic phenotype have not been investigated. To address this knowledge gap, lectin microarray profiling of proteins extracted from the membranes of iDCs and tDCs, together with lectin-coupled flow cytometry of intact iDCs and tDCs, was undertaken. Comparisons of all lectin microarray replicate profiles were made by unsupervised hierarchical clustering. This clustering approach revealed two major clusters with separation at 53% minimum similarity (Figure 2A). With the complete linkage method employed, two untreated iDC replicates were placed into the tDC group, while only three of the iDC replicates, two from biological set 2 and one from set 5 (Figure 2A), showed outlier behavior and were excluded from the major cluster containing the balance of the iDC replicate data. However, the well-defined separation of the vast majority of the iDC and tDC replicates into two groups (Figure 2A, Groups 1 and 2) supports the reliability of the subtle profile differences and the high reproducibility of the lectin profiling method in distinguishing membrane glycoprotein samples from iDCs and tDCs. Median values obtained from normalized lectin microarray profile data (n = 18) for iDCs and tDCs were broadly similar, with only small, but significant, differences in intensity noted for a subset of the lectin panel (Figure 2B). The general profiles of tDC glycoproteins remained similar to those of iDCs across lectin features. Furthermore, the lectin profiles displayed no obvious signs of cell stress, as evidenced by a lack of elevated signals suggesting increased endoplasmic reticulum- and proximal Golgi-associated glycan structures (i.e., increased evidence of high-mannose structures).
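The unsupervised hierarchical clustering with complete linkage described above can be sketched as follows. The 4×3 profile matrix (replicates × lectin features) is synthetic, and the Euclidean distance metric is an assumption, since the similarity measure used in the study is not specified here.

```python
# Minimal sketch of complete-linkage hierarchical clustering of replicate
# profiles; the data matrix is synthetic, not from the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

profiles = np.array([
    [1.0, 0.9, 0.1],   # two "iDC-like" replicates
    [0.9, 1.0, 0.2],
    [0.1, 0.2, 1.0],   # two "tDC-like" replicates
    [0.2, 0.1, 0.9],
])

# Complete-linkage clustering on Euclidean distances
Z = linkage(profiles, method="complete", metric="euclidean")

# Cut the dendrogram into two major clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the first two and last two replicates fall into separate clusters
```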
However, SNA-I showed a consistent intensity increase with tDC surface glycoproteins (p = 2 × 10−10) relative to iDCs, which is in line with previous findings from our group (13). PrCA performed using the 15 lectins that demonstrated p < 0.01 (SNA-II, BPA, PNA, DSA, LEL, SNA-I, RCA-I, CPA, ECA, LTA, UEA-I, EEA, GS-I-B4, MPA, and VRA) revealed a division of replicate lectin profiles dominated by distinct groups containing iDCs or tDCs with minimal overlap, further reinforcing the ability of these lectins to distinguish untreated iDCs from tDCs (Figure 2C). In short, these lectin microarray profiles demonstrate that the glycocalyxes of iDCs and tDCs are distinct. These changes were validated using lectin-coupled flow cytometry. The increase in SNA-I binding, suggesting an increase in the quantity of, or better accessibility to, α2-6-linked Sia, with no significant change suggested for α2-3-linked Sia (MAL-II), confirmed the lectin microarray findings (Figure 2D).

Neuraminidase Treatment of iDCs and tDCs Modulates Levels of α2-6-Linked Sia and Alters Expression Levels of Immunogenicity Markers

Sia has long been reported to be important in DC biology (28). Considering the dramatic increase observed after Dexa treatment, confirmed by both flow cytometry and lectin microarray (Figures 2B-D), we cleaved Sia using neuraminidase to study phenotypic and functional changes upon its removal. iDCs and tDCs were treated with neuraminidase (designated niDCs and ntDCs, respectively) and lectin binding profiles for SNA-I and MAL-II were analyzed using flow cytometry. Both niDCs and ntDCs showed a significant reduction in SNA-I binding intensities and trend decreases in MAL-II binding intensities, suggesting the successful removal of α2-6-linked and α2-3-linked Sia, respectively (Figure 3A, i-iv).
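Assuming PrCA denotes principal component analysis restricted to the 15 discriminating lectins, the dimensionality reduction step can be sketched with a mean-centred SVD; the two synthetic groups below merely stand in for iDC and tDC replicate profiles.

```python
# Hedged sketch of PCA over 15 "lectin" features via SVD; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated synthetic groups of replicate profiles
group_a = rng.normal(loc=0.0, scale=0.1, size=(9, 15))
group_b = rng.normal(loc=1.0, scale=0.1, size=(9, 15))
data = np.vstack([group_a, group_b])

# PCA via singular value decomposition of the mean-centred matrix
centred = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
pc1_scores = centred @ vt[0]  # projection onto the first principal component

# With well-separated groups, PC1 scores fall on opposite sides of zero
print(pc1_scores[:9].mean(), pc1_scores[9:].mean())
```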
Based on these results, we further investigated whether the removal of Sia resulted in a detectable increase in the expression of the MHC I, MHC II, CD80, and CD86 immunogenicity markers after treatment with neuraminidase. niDCs (Figure 3B, i) had small but significant increases in MHC II and CD86 expression when compared to iDCs. MHC I showed a trend increase in expression on niDCs compared to iDCs, and there was no change in CD80 expression after treatment with neuraminidase. ntDCs (Figure 3B, ii) displayed a significant increase in both MHC I and MHC II, with no change in CD80 and a trend increase in CD86 after neuraminidase treatment. niDC and ntDC populations were also assessed for expression of pro- and anti-inflammatory markers by qRT-PCR (Figure 3C). Although there was some sample-to-sample variation, our data indicate that neuraminidase treatment of iDCs leads to dramatic increases in pro-inflammatory mRNA expression of IL-6, IL-1β, iNOS, TNF-α, and IL-12p40. However, ntDCs are protected from this strong increase in pro-inflammatory cytokine expression in the case of iNOS and IL-12p40, although mRNA levels of IL-6, IL-1β, and TNF-α are increased. Interestingly, levels of anti-inflammatory IL-10 are lost after neuraminidase treatment in both iDCs and tDCs. In summary, these results indicate that neuraminidase treatment reduces Sia on the cell surface of both iDCs and tDCs and leads to the stimulation of pro-inflammatory cytokine mRNA expression, which can be largely inhibited by Dexa treatment.

Neuraminidase Treatment Alters Immunomodulatory Properties of iDCs and tDCs

Considering that the removal of Sia altered the immunogenic phenotype of both iDCs and tDCs, we further analyzed the effects of neuraminidase treatment on iDCs and tDCs through in vitro allogeneic coculture experiments. iDCs or tDCs from DA rats were treated with neuraminidase and cocultured with allogeneic lymphocytes.
The immunogenic potential, or the ability of niDCs and ntDCs to induce the proliferation and/or activation of allogeneic lymphocytes, was analyzed by T cell proliferation assays (Figure 4A). Responder LEW rat T cells were analyzed based on their co-expression of CD3+CD4+ or CD3+CD8+ (Figure 4B). Proliferation of lymphocytes was measured using CellTrace™ Violet (CTV) and activation of lymphocytes was measured using CD25 as an activation marker. DA iDCs (Figure 4C, i) and tDCs (Figure 4C, ii) did not induce an allogeneic response, as indicated by a lack of change in LEW CD3+CD4+ or CD3+CD8+ T cell proliferation when compared to unstimulated lymphocytes alone. Additionally, we observed no significant changes in CD3+CD4+CD25 or CD3+CD8+CD25 expression (data not shown), supporting our data on the reduced immunogenicity of iDCs and tDCs. However, niDCs (Figure 4C, i) significantly stimulated both CD3+CD4+ and CD3+CD8+ T cell proliferation when compared to both unstimulated lymphocyte controls and iDCs. This indicates the importance of Sia in the maintenance of an iDC phenotype. While ntDCs (Figure 4C, ii) showed a trend toward increased stimulation of CD3+CD8+ T cells, no significant changes were noted (Figure 4C). To eliminate the possibility of cell death as a potential cause of this increase in proliferation, we assessed cell death using Sytox Blue. We observed that iDCs have less cell death after neuraminidase treatment than tDCs (Figure S2 in Supplementary Material), enabling us to exclude this possibility. Finally, we investigated whether niDCs and ntDCs can regulate the proliferation of stimulated T cells.
LEW T cells were labeled with CTV, stimulated with CD3/CD28-coated beads, and cocultured with niDCs and ntDCs (Figure 5A), and CD3+CD4+ and CD3+CD8+ proliferation was measured by flow cytometry. Neuraminidase treatment completely abrogates the T cell inhibitory effect of iDCs, leading to full restoration of T cell proliferation (Figure 5A, i). Interestingly, Dexa treatment is not sufficient to enable iDCs to inhibit the proliferation of activated T cells, as no differences were observed between tDCs and ntDCs (Figure 5B, ii). In summary, these data indicate that the removal of Sia from iDCs increases their immunogenicity, as shown by their ability to stimulate CD4 and CD8 T cell proliferation, which can be prevented by Dexa treatment. In contrast, neuraminidase treatment completely restores the proliferation of polyclonally activated T cells, which cannot be prevented by Dexa treatment.

[Displaced figure caption: The mRNA expression of interleukin 6 (IL-6), interleukin 1 beta (IL-1β), inducible nitric oxide synthase (iNOS), tumor necrosis factor alpha (TNF-α), interleukin 12 subunit beta (IL-12p40), and interleukin 10 (IL-10) was analyzed in iDCs, niDCs, tDCs, and ntDCs. Normalized to GAPDH; fold change relative to iDCs. Representative histograms and bar charts displaying relative fluorescence intensity (RFI) for flow cytometric analysis of the DC cell surface. Median fluorescence intensities were established relative to iDCs in the case of niDCs and to tDCs in the case of ntDCs. Error bars: mean ± SEM; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; one-way ANOVA, Tukey's multiple comparisons test. Data sets with two groups were analyzed using an unpaired t-test.]

Discussion

Organ transplantation is often considered the only therapeutic option for patients with life-threatening organ disease and is now performed on a routine basis.
Due to incompatibilities between donor and recipient MHC molecules, patients are required to take immunosuppressive drugs to prevent the destruction of the transplanted organ by the recipient's immune system. Immunosuppressive drug regimens are associated with severe long-term side effects (34, 35). As a result, alternative immunosuppressive treatment strategies have been researched and developed, including the use of therapeutic DCs in the treatment of autoimmune diseases and in the prevention of allograft rejection. DCs promote central and peripheral tolerance through various mechanisms, such as T cell anergy, inhibition of memory T cell responses, and clonal deletion, amongst others (36). These characteristics form the basis of the use of DCs in the induction of tolerance. iDCs have even displayed the ability to convert naïve conventional T cells to regulatory T cells (Tregs) both in vitro (37, 38) and in vivo (39). As shown here, and by others, iDCs in non-inflammatory conditions display a poorly immunogenic phenotype. One of the major barriers to the use of iDCs in cellular therapies is that they respond to inflammatory stimuli, exemplified here by TLR4 (LPS) stimulation. In the context of autoimmunity and transplantation, iDCs are bound to encounter inflammatory environments if employed in therapeutic regimens. A potential solution is the use of tDCs, which are maturation resistant. The use of tDC cellular therapies in organ transplantation looks promising (18). tDCs are now routinely generated using different induction protocols, including the use of corticosteroids such as Dexa (11, 14, 15, 17, 40), and, in fact, we have recently shown in a rat model of corneal transplantation that Dexa-generated tDCs significantly prolonged allograft survival without the need for additional immunosuppression (13).
In this manuscript, we generate tDCs using Dexa and characterize their maturation-resistant phenotype by analyzing the expression of the immunogenicity markers MHC I, MHC II, CD80, and CD86 before and after TLR4 stimulation. We also analyze the expression of several immunomodulatory cytokine mRNAs. Dexa-generated, maturation-resistant tDCs have been well characterized by us (13, 32) and by other groups (17). However, to our knowledge, little is known about how Dexa induction of tDCs may affect the glycosylation profile of these cells and what functional consequences this may have. Glycosylation changes are not routinely assayed but are likely to play crucial roles in iDC and tDC biology. We describe here for the first time, using both lectin microarray and flow cytometry, that generation of tDCs by Dexa treatment leads to significant alterations in the cell surface glycosylation profile when compared to iDCs. We noted highly significant changes in lectin binding for α2-6-linked Sia (SNA-I), with no significant changes in lectin binding for α2-3-linked Sia (MAL-II). Interestingly, Jenner et al. (41), when comparing human iDCs with iDCs matured with a cytokine cocktail (IL-6, IL-1β, TNF-α, and prostaglandin E2), noted decreased α2-6-linked Sia with no changes in α2-3-linked Sia on the more immunogenic DC. This study also showed that Tregs have higher levels of α2-6-linked Sia when compared to activated conventional T cells. This suggests a possible link between α2-6-linked Sia content and tolerogenicity, where the increased α2-6-linked Sia may serve as ligands for inhibitory sialic acid-binding proteins (Siglecs) on the surface of effector cells (41). In fact, hyper-sialylated antigens loaded onto DCs were recently shown to impose a regulatory program in the DCs, resulting in the induction of Tregs via Siglec-E and the inhibition of effector T cells (42).
Looking more closely at the lectin microarray analysis, other differences in lectin profiles observed here also hint at significant changes in the total abundance, or potential branching alterations, of underlying oligosaccharide structures, particularly N-acetyllactosamine (LacNAc), which may have occurred. Sias have a reported importance in DC pattern recognition (41, 43), endocytosis/phagocytosis (44-47), antigen presentation (48), migration (28, 49-52), and T cell interactions (28, 53).

[Displaced figure caption: To test the immunomodulatory properties of iDCs, niDCs, tDCs, and ntDCs, they were placed into MLRs for 5 days. (A) Schematic representation of experimental design. DA iDCs, niDCs, tDCs, and ntDCs were placed in cocultures for 5 days with allogeneic LEW lymphocytes isolated from the spleen and lymph nodes. (B) Representative gating strategy. Cells were selected according to size and granularity (i), followed by live/dead discrimination based on Sytox AADvanced™-negative (live) cells (ii). After single-cell selection (iii), cells were selected by CD3 (PE) positivity (iv), further selected by CD4 (APC) and CD8 (PE-Cy7), and proliferation was measured by successive generations of CellTrace™ Violet-positive cells. (C) The ability of iDCs, niDCs, tDCs, and ntDCs to stimulate allogeneic LEW T cells was analyzed using unstimulated splenocytes/lymphocytes as a negative control (n = 3). (i) Representative histograms and bar charts displaying CD4+ and CD8+ T cell proliferation following a 5-day coculture with iDCs and niDCs. (ii) Representative histograms and bar charts displaying CD4+ and CD8+ T cell proliferation following a 5-day coculture with tDCs and ntDCs. Error bars: mean ± SEM; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; one-way ANOVA, Tukey's multiple comparisons test.]
Considering also that α2-6-linked Sia was the most significantly increased change after tDC generation by Dexa, we chose to investigate the importance of Sia in iDC and tDC immunogenicity in an allogeneic setting, which has potential implications for iDC and tDC cellular therapies. For this, we removed Sia from the surface of the cells by enzymatic digestion using neuraminidase (sialidase). These experiments showed that Sia is involved in maintaining the tolerogenic phenotype of both iDCs and tDCs, as removal of Sia resulted in an increase in immunogenicity markers and increases in pro-inflammatory TH1 mRNA transcripts, notably IL-6, IL-1β, iNOS (iDCs only), TNF-α, and IL-12p40 (iDCs only), with significant decreases in anti-inflammatory or tolerogenic IL-10. In experiments where neuraminidase-treated human monocyte-derived DCs were cultured with ovalbumin (45) or Escherichia coli (44), increases in immunogenicity markers and cytokine gene expression were also reported. Here, we show that even after Dexa treatment and tDC generation, the removal of Sia from the cell surface results in increases in both cell surface immunogenicity markers and TH1 pro-inflammatory cytokine gene expression, underpinning the importance of Sia for a non-immunogenic phenotype. In the context of allogeneic cell therapy for the treatment of autoimmune diseases and the prevention of allograft rejection, it is important that the cell therapy itself does not elicit a deleterious immune response. In unstimulated allogeneic cocultures using LEW responder lymphocytes, we show that iDCs and tDCs are non-immunogenic and elicit neither CD3+CD4+ nor CD3+CD8+ proliferation. This attribute makes them ideal candidates for DC cellular therapies. We show that removal of Sia from iDCs is sufficient to stimulate the allogeneic responders, again demonstrating the importance of Sia for a non-immunogenic phenotype.
This may indicate that the removal of Sia uncaps underlying structures, which are then recognized as a signal for T cell proliferation, or that the Sias may act as ligands for inhibitory Siglecs on the surface of effector cells, so that once removed, this inhibitory effect is lost. Sia removal from tDCs did not induce CD3+CD4+ proliferation, but we noted a trend increase in CD3+CD8+ proliferation. Interestingly, this indicates that, despite the increase in immunogenicity markers and the transcript increase in several pro-inflammatory mRNAs, Dexa treatment of iDCs was sufficient to keep the cells, at least partially, in a non-immunogenic state. In CD3/CD28-stimulated (hyperstimulated) allogeneic cocultures using LEW responder lymphocytes, we show that iDCs had an impressive ability to suppress stimulated allogeneic lymphocytes. Sia is critical in maintaining this suppressive ability, as when it was absent we observed complete restoration of T cell proliferation for both CD3+CD4+ and CD3+CD8+ populations. These results are supported by Crespo et al. (45), who showed increased T-lymphocyte proliferation in autologous mixed lymphocyte cultures using human monocyte-derived DCs, where the lymphocytes were stimulated with tetanus toxoid, inactivated with mitomycin C, and cocultured with neuraminidase-treated monocyte-derived DCs. Interestingly, we showed that tDCs do not have the ability to suppress hyperstimulated allogeneic lymphocytes to the same extent as iDCs. Sia removal had little effect on the suppressive ability of tDCs and did not exaggerate proliferation. Together, these experiments highlight that the tolerogenic properties of iDCs and tDCs are not inherently the same, and understanding these characteristics and limitations will inform how to optimize therapy strategies. The findings outlined here could also have numerous implications for our understanding of DC phenotype and function in the tumor microenvironment.
Efficient induction of antitumor responses requires that DCs in the tumor undergo proper maturation and activation (54). Understanding DC activation is important both in terms of their role in regulating immune responses locally in the tumor microenvironment (55) and in terms of their use in ex vivo cellular and vaccination strategies to induce tumor-specific immune responses. In the context of tumor vaccination strategies using DCs, the required response is to induce tumor-specific effector T cells that can eliminate tumor cells specifically and induce immunological memory to control tumor relapse. Our findings suggest that Dexa, a common component of chemotherapy regimens, could suppress DC maturation and activation, their ability to present antigen (56), as well as their ability to induce T cell proliferation and activation.

[Displaced figure caption: To test the T cell suppression properties of iDCs, niDCs, tDCs, and ntDCs, they were placed into stimulated MLR cultures for 4 days. Splenocytes/lymphocytes were stimulated with CD3/CD28 beads. (A) Schematic representation of experimental design. DA iDCs, niDCs, tDCs, and ntDCs were placed in cocultures for 4 days with CD3/CD28-stimulated allogeneic LEW lymphocytes isolated from the spleen and lymph nodes. (B) The ability of iDCs, niDCs, tDCs, and ntDCs to suppress CD3/CD28-stimulated allogeneic T cells was analyzed using stimulated splenocytes/lymphocytes as a positive control. (i) Representative histograms and bar charts displaying stimulated CD4+ and CD8+ T cell proliferation following a 4-day coculture with iDCs and niDCs. (ii) Representative histograms and bar charts displaying stimulated CD4+ and CD8+ T cell proliferation following a 4-day coculture with tDCs and ntDCs. Error bars: mean ± SEM; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; one-way ANOVA, Tukey's multiple comparisons test.]
Interestingly, our data indicate that these potent Dexa-induced effects could be somewhat reversed in the presence of a neuraminidase, suggesting a key role for sialylation in Dexa-generated tDCs. Removal of sialic acid has also previously been shown to increase tumor antigen-specific T cell responses (48). Our data likewise point to a more potent ability of desialylated DCs to induce CD8+ T cell activation. In terms of modulating the tumor microenvironment directly, locally delivered, targeted approaches using sialyltransferase inhibitors, delivered either to the tumor or the local lymph nodes, could be exploited. In terms of ex vivo generated DCs for either cellular therapy or vaccination strategies, treatment of DCs with sialyltransferase inhibitors could be sufficient to allow efficient priming of T cells systemically. As DCs provide an essential link between innate and adaptive immunity, these findings could have important implications for our understanding of the suppressive mechanisms within the tumor microenvironment that hinder adaptive antitumor immune responses and the potential mechanisms by which they could be overcome. Together, these results highlight the importance of Sias in DC biology, especially in the context of iDC allogeneic cellular therapy. While the precise implications of increased or decreased Sia expression on iDCs and tDCs remain to be elucidated in vivo, we show here strong evidence that supports a function of Sia in the therapeutic aspects of DC cellular therapies. Identification of the molecular mechanisms and factors that are regulated by Sias is important to exploit this phenomenon in the clinic. This study points toward the potential of DC surface sialylation as a therapeutic target to improve and diversify DC-based therapies and treatments. In the context of disease, cell glyco-engineering could have positive implications for the treatment of autoimmunity, DC-based vaccines, the tumor microenvironment, and transplant biology.
Funding

This work is supported by Science Foundation Ireland (12/TIDA/B2370 and 12/IA/1624) and European Cooperation in Science and Technology (COST) for the AFACTT project (Action to Focus and Accelerate Cell-based Tolerance-inducing Therapies; BM1305).

Table S2 | Lectin names and common major binding ligands. Table listing the abbreviations, kingdom, species, common name, and major binding ligand of the lectins used in lectin microarray profiling.
'Woe Betides Anybody Who Tries to Turn me Down.' A Qualitative Analysis of Neuropsychiatric Symptoms Following Subthalamic Deep Brain Stimulation for Parkinson's Disease

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) for the treatment of Parkinson's disease (PD) can lead to the development of neuropsychiatric symptoms. These can include harmful changes in mood and behaviour that alienate family members and raise ethical questions about personal responsibility for actions committed under stimulation-dependent mental states. Qualitative interviews were conducted with twenty participants (ten PD patient-caregiver dyads) following subthalamic DBS at a movement disorders centre, in order to explore the meaning and significance of stimulation-related neuropsychiatric symptoms amongst a purposive sample of persons with PD and their spousal caregivers. Interview transcripts underwent inductive thematic analysis. Clinical and experiential aspects of post-DBS neuropsychiatric symptoms were identified. Caregivers were highly burdened by these symptoms, and both patients and caregivers felt unprepared for their consequences despite having received information prior to DBS, desiring greater family and peer engagement before neurosurgery. Participants held conflicting opinions as to whether emergent symptoms were attributable to neurostimulation. Many felt that they reflected aspects of the person's "real" or "younger" personality. Those participants who perceived a close relationship between stimulation changes and changes in mental state were more likely to view these symptoms as inauthentic and uncontrollable. Unexpected and troublesome neuropsychiatric symptoms occurred despite a pre-operative education programme that was delivered to all participants. This suggests that such symptoms are difficult to predict and manage even if best practice guidelines are followed by experienced centres.
Further research aimed at predicting these complications may improve the capacity of clinicians to tailor the consent process.

Introduction

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an effective treatment for the motor symptoms (tremor, rigidity, bradykinesia) of Parkinson's disease (PD). It involves surgery to position electrodes within this target that emit continuous high-frequency stimulation to modulate dysfunctional basal ganglia activity. DBS is typically indicated when motor symptoms become difficult to manage with dopaminergic medication due to the development of motor fluctuations, dyskinesias, or medication-refractory symptoms. Bilateral STN stimulation increases ON time, reduces motor fluctuations and dyskinesias, enhances performance of activities of daily living and improves quality of life [1]. The dose of dopaminergic therapy is often substantially reduced [2]. Use of DBS of the STN (STN-DBS) in PD is set to grow, as contemporary evidence suggests that earlier intervention produces superior results to best medical therapy [3]. Approximately 10% of persons treated with STN-DBS develop unintended mood and behavioural changes as a consequence of electrical stimulation that disrupt postsurgical quality of life [4]. These include euphoria, irritability, pathological gambling, hypersexuality and impulsivity, as well as more subtle changes in drive and empathy [4-14]. Henceforth, these putative 'stimulation-dependent' phenomena are referred to as 'neuropsychiatric symptoms', whilst recognising that, more generally, other symptoms in PD such as anxiety, apathy, psychosis and cognitive dysfunction may also be encompassed by this term. The emergence of these issues may not be recognized by the person with PD or may not be viewed as problematic. They may alienate the person with PD from their support network, leading to estrangement or relationship separation.
Such neuropsychiatric symptoms also raise ethical challenges, including the responsibility of the person for actions committed whilst under the influence of stimulation-dependent mental states [15, 16]. Personality change may lead family members to contend that the person is no longer themselves, stimulating debate about the effect of DBS surgery on personal identity [17-19]. The occurrence of neuropsychiatric symptoms in PD is certainly not unique to STN-DBS. Indeed, PD has been referred to as the 'quintessential neuropsychiatric disorder' [20], such is the breadth of psychiatric and cognitive symptoms that may arise in the course of neurodegeneration. Dopamine replacement therapy (in particular dopamine agonist medication) has also been associated with the development of impulse-control disorders [21], and the rate of serious psychiatric side effects is similar amongst persons treated with STN-DBS as compared to matched individuals on best medical therapy [22]. Clinicians are therefore challenged: the risks of psychiatric side effects as a component of STN-DBS should be communicated to patients and their families, but placed into appropriate context, given that persons with PD may benefit greatly from the procedure and the alternatives are not without risk. The stimulation-dependent nature of psychiatric symptoms has been contested by Gilbert et al. [23], who suggest that they may reflect a worsening of pre-existing psychiatric disorders or aggravation of difficult family relationships in the setting of major surgery, less related to electrical stimulation than to premorbid psychiatric, personality and psychosocial functioning. In particular, these authors propose that debate regarding the neuroethical consequences of DBS relies largely upon speculative assumptions rather than empirical evidence.
However, clinical experience indicates that a substantial proportion of psychiatric symptoms arise de novo and in the absence of prior symptomatology, suggesting that there is not a clear 'at-risk' pre-surgical phenotype and that these symptoms may be an unintended consequence of the procedure [24]. Furthermore, the physiological role of the STN in decision-making lends biological plausibility to the view that modulation of this region may produce unintended cognitive and emotional side effects [25]. A direct relationship between the adjustment of electrical stimulation and the onset or remission of psychiatric symptoms has been reported, suggesting that STN-DBS is a proximate cause in many cases [8-11]. Furthermore, the precise site of stimulation within this nucleus is associated with the onset of psychiatric symptoms, supporting the existence of a biological gradient related to the locus and amplitude of stimulation [26]. Finally, the STN has been employed as a surgical target in DBS for obsessive-compulsive disorder [27], indicating that this subcortical nucleus can be a nexus for psychiatric as well as movement disorders, helping to explain why psychiatric symptoms may arise as a consequence of STN-DBS for PD. Complex changes in behaviour following STN-DBS are challenging to comprehensively assess with standard quantitative methods. Firstly, instruments that assess mood and personality only measure an operationalised subset of these phenomena; richer concepts such as 'identity' and 'autonomy' are not captured in these scales. Secondly, affected individuals may show deficits in their awareness of these difficulties, which are only revealed after consulting with an informant. Qualitative investigations employ open-ended questions that allow participants to disclose more than pre-determined scales. Moreover, the inclusion of spousal informants provides a second perspective that may corroborate or contrast with the experience of the person with PD.
Qualitative methods also capture the participant's 'own voice', meaning that issues relevant to the person with PD are uncovered, assisting with the delivery of patient-centred care. Qualitative studies with people with PD [28, 29] and a spouse [30] have increased our understanding of living with a DBS device. However, there has been little research regarding the impact of subthalamic stimulation-induced neuropsychiatric symptoms on persons with PD and their families [23]. The goal of the present investigation was to explore the meaning and significance of stimulation-related neuropsychiatric symptoms amongst a sample of persons with PD and their spousal caregivers. Here, participants and their spouses were purposively selected from a pool of consecutive surgical candidates based on the postoperative development of neuropsychiatric symptoms attributable to STN-DBS. Interviews were conducted 6-12 months postoperatively, after neuropsychiatric symptoms had been remediated following DBS manipulation. Findings from this study will enhance the capacity of clinicians to educate surgical candidates and respond to the emergence of neuropsychiatric symptoms in a manner that addresses the needs of the person with PD and their family. This will be of increasing importance as subthalamic DBS becomes a more widely utilised intervention [3].

Terminology

In the investigation that follows, a number of ethical and philosophical concepts are identified. To aid the clarity of subsequent discussion, we first define what we take these terms to mean and how we take them to be interrelated. We recognise that these concepts have a rich history of debate in the bioethics literature and it is beyond the scope of this study to engage in this analysis. In order for an agent to be morally responsible for an act (or omission), the consequences of acting (or not acting) must be foreseeable, and the agent must possess autonomous control over her cognitive and volitional capacities.
Acting deliberatively or purposely may enhance moral responsibility and blameworthiness. Autonomy is the exercise of a set of mental competences to make a judgement about one's best action in a given situation. Autonomous agents reason consistently, reaching similar conclusions under similar environmental contingencies (i.e. they are sufficiently rational). Furthermore, an autonomous agent reasons and acts on the basis of authentic desires, i.e. the attitudes of the agent that move her to act are identified as her own, being consistent with the agent's evaluation of her values [31]. According to this view, to act authentically and therefore responsibly is to do so in accordance with one's 'true self'. Selfhood is closely aligned with the concept of personal identity, a construct that is constitutive of responsibility. Broadly, there are two contrasting perspectives on personal identity and what constitutes someone's 'true self': a view of selfhood as a form of reflective, self-generated autobiographical narrative (referred to as existentialist) [32], contrasted with an essentialist model that proposes the existence of a deeply immutable inner 'core' of being [33]. In what follows we do not take a position on which conception of identity is correct. Identity and selfhood are distinct from personality, which refers to those temperamental or characterological traits that influence a distinctive array of behaviour within an individual.

Methods

Qualitative data was gathered from 20 semi-structured interviews conducted with persons with PD (10) and their spousal caregivers (10) following subthalamic DBS. This study was part of a larger investigation of neuropsychological and neuroanatomical aspects of psychiatric symptoms [26, 34, 35] after DBS. Ethical approval was granted by the Human Research Ethics Committees of the Royal Brisbane and Women's Hospital, the University of Queensland, UnitingCare Health and the QIMR Berghofer Medical Research Institute.
All participants received written information about the study and signed a consent form.

Participants

A larger cohort of persons with PD (from which these participants were drawn) comprised surgical candidates consecutively recruited at the Asia-Pacific Centre for Neuromodulation between 2013 and 2017, during the assessment of eligibility for STN-DBS. The diagnosis of PD was confirmed by a movement disorders neurologist according to the United Kingdom Queens Square Brain Bank criteria [36]. All persons with PD completed a psychiatric and cognitive evaluation prior to surgery. Individuals without a spousal caregiver or proficiency in English, and those with cognitive impairment, as defined by a Mini Mental State Examination (MMSE) score of 25 or less or a clinical diagnosis of PD dementia [37], were excluded from the study. Prior to consenting for surgery, persons with PD and their spousal caregivers completed a 60-min education session run by a psychiatrist (PM) and a nurse specialist, including the potential neuropsychiatric side effects of subthalamic stimulation. DBS electrodes were implanted in a single-stage procedure using a stereotactic apparatus, after the STN was identified via neuroimaging. Intraoperative microelectrode recordings (MER) were employed to establish localisation within the STN and intraoperative test stimulation was performed. Further imaging confirmed satisfactory postoperative lead placement. Postoperatively, stimulation parameters were adjusted non-invasively through an implanted pulse generator sited in the pectoral region. Stimulation titration began as an inpatient, with the amplitude of stimulation gradually increased as dopaminergic medication was slowly withdrawn. Persons with PD returned to the clinic frequently during the first 6 postoperative months for routine neurological and psychiatric assessment, with further DBS manipulation undertaken according to motor symptoms.
Identification of persons with PD who developed psychiatric symptoms (that the investigators had grounds for believing were) attributable to subthalamic DBS used the same process as that reported in prior work [26, 34].1 These persons were identified during a postoperative schedule of repeated neuropsychiatric assessments. A semi-structured diagnostic interview and mental state examination were conducted by the psychiatrist (PM) who had assessed all participants at baseline, with attention to mood elevation, disinhibition, compulsivity and loss of empathy. The contribution of neurostimulation to the presentation was confirmed if symptoms responded promptly to a reduction in the amplitude or change in the locus of stimulation, as assessed by serial mental state examinations and feedback from close family members. These individuals were invited to take part in a qualitative interview, which was also undertaken separately with their spousal caregiver. The present sample of 10 patient-caregiver dyads was drawn from a total cohort of 91 recruited to the overarching investigation. Persons with PD and their caregivers were only approached for interview after their psychiatric symptoms had definitively resolved, at an interval of 6-12 months post-DBS. No individuals declined participation.

Interviews

Interviews used a semi-structured template exploring common psychiatric symptoms attributable to subthalamic DBS and their impact on autonomy, identity and responsibility (Supplementary Material). Participants were encouraged to introduce topics that were not prompted by the interviewer. Persons with PD and their spousal caregivers completed separate interviews to enable open disclosure and the expression of discrepant perspectives. Interviews were conducted face to face by PM with an approximate duration of 60 min. PM maintained field notes and a reflective diary.
Audio recordings of each interview were transcribed verbatim and checked for accuracy, with removal of all potentially identifiable information. All participants were informed verbally and in writing that the content of the interviews would not form part of their medical record.

Data Analysis

Deidentified transcripts were imported into NVivo qualitative analysis software (Mac version 11.4.2, QSR International Pty Ltd., Doncaster, Australia) and analysed thematically [38]. Each transcript was read several times before extracts were coded to reflect the experience or perspective of the participants. Coding was an iterative and inductive process, with codes generated, refined and merged, and transcripts re-coded as the data corpus increased. Each transcript was re-coded until saturation, where no further excerpts could be identified. Both PM and KR carried out this initial coding step separately in order to generate diverse perspectives on the data. Discrepancies were discussed between PM and KR until a consensus was reached. Preliminary analyses of the transcripts were conducted in parallel with the interviews, to facilitate reflection during data collection. Subsequently, stable frameworks of codes were identified that cohered as themes, each describing a defined aspect of participants' experience of psychiatric symptoms after subthalamic DBS. When coding, the researchers' experience, preconceptions and biases were acknowledged. PM was a psychiatrist with involvement in over 400 cases of DBS for movement disorders. His position was that neurostimulation was causally responsible for the observed behavioural changes amongst these persons with PD, rather than a psychological adjustment to the relief of disability or changing roles in the patient-caregiver dyad. KR was a provisional psychologist with no prior clinical experience or knowledge of the participants.
In order to maximise the transparency of subsequent findings, the consolidated criteria for reporting qualitative research (COREQ) were employed [39].

Participant Characteristics

The data corpus consisted of 20 qualitative interviews, comprising 10 persons with PD (9 male, 1 female; mean age 59.4, range 36-71) and 10 corresponding spousal caregivers (9 female, 1 male; mean age 57.9, range 35-70). The demographic and clinical characteristics of the persons with PD selected for interview are summarised in Table 1. Persons with PD were predominantly male, but with a broad range of age and variable degree of premorbid psychiatric history. Three had no prior psychiatric history, four had mild-moderate depressive or anxiety disorders and three had more severe behavioural addictions or psychotic symptoms related to dopaminergic therapies. Neuroimaging confirmed that the DBS electrodes were accurately targeted to the STN in all patients (Fig. 1), with favourable motor outcomes from their procedure, manifested by a reduction in objective motor symptom scores and a reduction in the requirement for dopaminergic therapies. Quantitative data pertaining to these outcomes has been reported [26, 34].

Clinical Vignettes

Brief clinical vignettes are summarised below to provide a narrative context for each patient.

Person with PD 01

A 64-year-old male with an 11-year history of tremor-dominant PD. A retired senior government administrator with no personal or family history of psychiatric illness and no impulse control disorders despite long-term treatment with a dopamine agonist. One month after STN-DBS, he developed a coarsening of personality manifest with crude language, irritability and sexualised behaviour. He threatened to set up a rival DBS program, became preoccupied with sports betting and purchased a sports car on an internet auction.
His symptoms remitted at 3 months postoperatively when his stimulation was moved to a more dorsal contact on both electrodes and a bipolar configuration (anode and cathode both localised to the electrode, resulting in a more focussed stimulation field) was employed.

Person with PD 02

A 61-year-old male with a 5-year history of tremor-dominant PD. A retired sales executive with a history of depression emerging as an early symptom of PD, responsive to antidepressant medication. He developed an early postoperative hypomania characterised by euphoria and psychomotor agitation, which settled after 1 month. However, subsequent to an increase in stimulation 5 months postoperatively, he abruptly became irritable, began drinking heavily, purchased $2000 of camping equipment, assaulted his wife and attempted suicide by jumping from a hotel window. His symptoms remitted with a switch to bipolar stimulation on both electrodes.

Fig. 1 Localisation of subthalamic deep brain stimulating electrodes. Using the Lead-DBS toolbox [58], preoperative T1- and T2-weighted images were co-registered with the postoperative CT scan and spatially normalised into ICBM_2009b nonlinear asymmetric space. Medtronic 3389 and Boston Vercise electrodes were manually identified, their spatial position was corrected for brainshift, and their trajectory was evaluated with reference to a recent parcellation of the STN [59]. The full pipeline has been described in prior work [26]. A: coronal view of DBS electrodes; B: axial view of DBS electrodes. All electrodes were accurately targeted to the STN.

Person with PD 03

A 62-year-old male with a 23-year history of akinetic-rigid PD. On long-term sickness benefits due to his PD, he had developed dopamine dysregulation on a duodopa infusion and was manipulating his dose so as to engage in compulsive woodworking.
This behavioural addiction resolved after STN-DBS, but 3-7 months later, his wife complained of impulsive behaviour: he was attempting to open a nightclub and was apprehended by the police driving his mobility scooter on a busy highway. His wife also described a new habit of fetishistic masturbation. His symptoms remitted with a reduction in stimulation amplitude.

Person with PD 04

A 46-year-old male with a 4-year history of tremor-dominant PD. Serving in the armed forces, he had no prior psychiatric history. Three months after STN-DBS, he developed an elevated mood with irritability, verbal disinhibition, compulsive spending and hypersexuality. He purchased expensive wine and paintings and solicited sex on the internet. His symptoms remitted with a switch to a bipolar configuration, a move to more dorsal electrode contacts and a reduction in stimulation amplitude.

Person with PD 05

A 67-year-old male with a 13-year history of tremor-dominant PD. A former naval serviceman, he had developed a delusion of infidelity during treatment with a dopamine agonist. This had resolved following cessation of the drug, but was associated with a subsequent depressive episode, remitted at the time of DBS. He displayed euphoria and verbal disinhibition in the first week after STN-DBS, which settled spontaneously. However, subsequent to increases in stimulation amplitude during the following 6 months, he displayed abrupt changes in affect characterised by elation, irritability and hypersexuality, demanding sex from his spouse. These symptoms responded to moving the stimulation to more dorsal electrode contacts and a reduction in stimulation amplitude.

Person with PD 06

A 64-year-old male with a 5-year history of tremor-dominant PD. A factory worker, he had a history of recurrent depressive disorder treated in primary care. Immediately after STN-DBS, he reported a non-motor effect of stimulation with resolution of his depressive symptoms, a phenomenon that was also positively received by his family.
However, 9 months later a second contact was activated on the right electrode to manage residual motor symptoms, which led to the rapid development of a manic syndrome. This was associated with irritability, threats to his family, gambling and dangerous driving, eventuating in arrest and involuntary hospitalisation. His device was turned off and he was treated with mood stabilising medication and antipsychotics, with subsequent resumption of DBS under the initial postoperative settings. This case has previously been reported [35].

Person with PD 07

A 36-year-old male with a 5-year history of tremor-dominant PD. A manual labourer, he had no prior psychiatric history and no background of impulse-control disorders despite treatment with a dopamine agonist. One month after STN-DBS, his wife described the emergence of a 'forceful' personality (previously he had been reserved) associated with a preoccupation with sex and agitation discernible on mental state examination. His symptoms remitted with a bipolar configuration and a move to more dorsal electrode contacts, but re-emerged at 3 months subsequent to further stimulation increases and remitted again with a reduction in stimulation amplitude.

Person with PD 08

A 71-year-old male with a 5-year history of tremor-dominant PD. A retired scientist, he experienced non-motor fluctuations with cyclical anxiety symptoms in the interval between doses of his levodopa. Two months after DBS, he developed an elevated mood in the irritable range after a stimulation increase. He presented with an uncharacteristically entitled affect and accused his treating clinicians of being incompetent. Upon admission, he attempted to buy artwork on the walls of the hospital and tried to give cash to the nursing staff. His family reported that he had bought artwork for them against their wishes. His symptoms remitted with a reduction in stimulation amplitude.
Person with PD 09

A 61-year-old female with a 5-year history of tremor-dominant PD. A retired teacher, she had a history of generalised anxiety in the setting of her movement disorder and had been treated by a psychiatrist for these symptoms. In the first week after DBS, she became uncharacteristically irritable with outbursts of inappropriate anger directed towards her husband. She had poor insight into her changed behaviour and these outbursts persisted despite intensive DBS reprogramming. Her husband also reported compulsive spending. Eventually her right STN electrode was repositioned surgically (to a more dorsolateral region of the nucleus) and her symptoms remitted.

Person with PD 10

A 54-year-old male with a 5-year history of akinetic-rigid PD. A retired postal officer, he had a history of impulse control disorders during treatment with dopamine agonist medication. These included pathological gambling, compulsive spending and hypersexuality comprising the compulsive use of internet pornography. His behavioural addictions remitted after STN-DBS, corresponding with a reduction in his dopaminergic medication. However, 2 months after surgery, he became irritable and his wife reported dangerous driving and threats of aggression. On mental state examination, he was agitated with pressure of speech and verbal disinhibition. His symptoms remitted with the use of a bipolar configuration and dorsal electrode contacts.

Coding and Themes

A coding tree was developed from the data corpus, from which a network of primary and secondary themes was identified (Fig. 2). Illustrative excerpts are provided in the text below and as Supplementary Material.

Lack of Preparedness

Almost all participants, both persons with PD and their caregivers, reported being ill-prepared for the nature and impact of stimulation-dependent neuropsychiatric symptoms. This was despite the inclusion of an education session on the potential emergence of these symptoms for all participants during the preoperative multidisciplinary evaluation.

We haven't done any counselling at all and I think we need to. As a spouse, you need to be prepared that these things can happen and that husbands or partners can turn feral [wild] and not to - we were told not to take it to heart - whatever is said is said out of - they can't help it. But in saying that, that's…

Some participants denied ever receiving information about neuropsychiatric complications, whilst others acknowledged that their desperation to receive treatment for their motor symptoms clouded contemplation of this matter.

I probably sort of looked on the bright side and thought, oh well, I'll be right… Maybe it wasn't their fault that they - they probably did say it but you know when you're sort of a bit desperate I guess you don't sit on that negative sort of thing. (Person with PD 08)

Other participants recalled receiving education but were unable to reference the personal significance of this in the absence of any prior psychiatric history. While the changes were seen as being consistent with their 'real' pre-PD self, the degree of behavioural change was sometimes seen as exaggerated or a return to a much younger self.

To some extent I think that they're probably - at least in […]

However, the change to a less passive self caused problems for caregivers, particularly when the person with PD was no longer willing to adhere to established roles within the family system. This appeared to be driven by changes in mood, rather than a simple reduction in disability due to the relief of motor symptoms.

…he didn't want me looking after him and was calling me controlling, whereas normally it was - as I said, we were just a team. I don't call it controlling. I call it helping… it just triggered some… dark side. (Spouse 07)

Control

Participants who viewed neuropsychiatric symptoms as inauthentic attributed behaviours to the DBS and perceived a loss of control or free will.
However, the views of persons with PD and spouses were sometimes discrepant on this matter.

If she'd have seen a video of herself she'd have been surprised. In her mind, she thinks that she was fine, and she still believes that she was fine. But you understand from her point too because of where that wire was, she was high, for want of a better term, and feeling like a million bucks… that she's like superman. Almost like someone on drugs, but didn't believe that anything she was doing was wrong… It didn't get better. It just escalated. Every time they turned the unit, the voltage up she went up a level… You can't believe that a little bit of voltage would shift someone from there to there in that little bit of time. (Spouse 09)

Participants (both spouses and persons with PD) who observed this close relationship between stimulation changes and changes in mental state were more likely to view neuropsychiatric symptoms as inauthentic and uncontrollable.

Then on Monday when it had been increased to three that was when I was sort of - something was happening that wasn't typical of me… I sort of felt irritable. Something was going on there... I was aware that I was like that, but I couldn't seem to do too much about it. Then no, I don't understand what was happening or what was causing that. I was sort of over cooked, I was too stimulated, and maybe it had been increased too quickly. (Person with PD 08)

Discussion

The emergence of significant neuropsychiatric symptoms (such as mood changes, reckless decision-making and addictive behaviours) following STN-DBS can have a significant impact on the quality of life of both persons with PD and their families. We know little about the way in which these individuals understand the causes and emergence of these behaviours, how they impact upon their lives and relationships, and what information or support they receive. This study provides a qualitative examination of these issues.
Our study also enriches our understanding of the philosophical aspects of these phenomena, capturing how such symptoms, when they arise, impact on autonomy and identity.

Supporting Caregivers

From a clinical perspective, emergent neuropsychiatric symptoms (as operationalised and identified by a psychiatrist) were particularly burdensome for caregivers, who reported changes in relational dynamics and enduring difficulties even after a recovered episode. In particular, caregivers often reported feeling helpless and overwhelmed by the changes observed in their partner. This finding is consistent with previous reports demonstrating that burden amongst PD caregivers is highly correlated with comorbid psychiatric symptoms [40, 41]. We suggest that the wellbeing of caregivers should be explicitly considered by clinicians who encounter these neuropsychiatric symptoms in their patients. The persistent distress reported by caregivers may require provision of psychological assistance even after neuropsychiatric symptoms have abated, in order to facilitate relational readjustment. Future work will evaluate the effectiveness and acceptability of psychological care in this population.

Enhancing Understanding

These personal perspectives also highlight how neuropsychiatric symptoms are unexpected by persons with PD and their families, despite prior education about the potential for behavioural changes. Most participants identified knowledge gaps in the psychiatric domain, with the majority able to recall accurate information regarding surgical complications of DBS and motoric benefits. Some participants disregarded information about psychiatric risks as they were preoccupied with addressing their motor symptoms, or they discounted the likelihood and impact of developing psychiatric symptoms, especially if they had no significant prior experience of psychiatric illness.
However, it is important that surgical candidates and their families are explicitly prepared for this possibility, especially given that our participants perceived that ill-preparedness impaired their capacity to respond and cope with neuropsychiatric symptoms, and often delayed their help-seeking responses. Addressing this challenge may include the use of a structured instrument to deliver preoperative education. Further research is needed to develop such a tool, although the findings in this investigation will help identify knowledge gaps or situations in which families are not likely to process information about the risk of behaviour change. Furthermore, clinicians may wish to employ a process of 'corrected feedback' [42], whereby the clinician can test the level of comprehension of imparted information. Corrected feedback also views the communication of important and complex clinical information as an ongoing process both prior and subsequent to the relief of motor symptoms. Enhancing understanding may also necessitate greater clinician engagement with PD support groups, which offer fellowship and advice to many persons with PD. Perhaps it is easier for those persons who have some 'lived experience' of psychiatric symptoms to conceptualise themselves or their spouses receiving psychiatric care.

Managing Unpredictability

Given the relative unpredictability of postoperative psychiatric symptoms, it remains uncertain how forthcoming clinicians should be regarding the 'unknowns' of DBS. Unexpected and harmful neuropsychiatric symptoms may occur after STN-DBS, despite the oversight of a large and experienced movement disorders centre that follows best practice guidelines, including an embedded psychiatrist, preoperative psychiatric evaluation of all surgical candidates and a preoperative education programme delivered to spousal caregivers. It has previously been suggested that neuropsychiatric effects may be an integral, albeit unintended, consequence of STN-DBS for PD [24].
Other nuclei in the basal ganglia, specifically the internal segment of the globus pallidus (GPi), have been advanced as a 'safer' target for DBS [43], but these outcomes have been contested [44, 45]. Could the choice of target be adapted to favour the GPi in persons prone to psychiatric complications? Based on information that can be derived from a standard clinical assessment and mental state examination, it seems unlikely that a psychiatrist can accurately predict an 'at-risk' patient. Whilst false negatives are clearly a concern in this scenario, a false positive identification of an 'at-risk' individual may also harm a patient by implementing a bias towards a surgical treatment option that ultimately has a lesser benefit for their quality of life [44]. Furthermore, it is also difficult to prospectively quantify the magnitude of future harm arising from subthalamic DBS. Although the cases reported in this investigation are clearly at the most severe end of the spectrum, there are a greater number of cases in which no neuropsychiatric symptoms arise or any emergent symptoms are detected quickly and addressed through prompt intervention with minimal or no enduring harm. What is the threshold of potential harm at which alternative targets should be considered? We agree with previous suggestions that the ultimate choice of target should be undertaken by the neurologist and neurosurgeon after a discussion with the surgical candidate and their family, during which the benefits and risks of stimulation at available targets can be considered [24].

Contradictory Narratives of Causation and Control

'I regard the mind-body problem as wide open and extremely confusing' (Saul A. Kripke, Naming and Necessity [46]).

Many of us invoke differing narratives to explicate our behaviour, which may comprise neuroscientific, psychological and social understandings [47]. Our participants also employed a diverse explanatory framework, utilising both deterministic and moral paradigms.
As a result, their attempts to make sense of their experiences in the context of DBS were frequently contradictory. The attitude of some caregivers was reminiscent of Immanuel Kant's assertion: '... although we believe that the action is thus determined, we none the less blame the agent' [48], describing behaviour as 'bad' or 'childish' despite endorsing a biological model of causation. This parallels findings in addiction, in which clinicians employ a neurobiological framework but retain a belief in the capacity of the individual to exercise control [49]. It is also conceivable that the connection of neuropsychiatric symptoms with DBS titration, or the biological model espoused by treating clinicians, challenged existing attitudes held by participants, causing them to switch between determined and moral modes of explanation. It has been argued that neuromodulation confronts the 'folk dualism' of some persons [50]. Moreover, responsibility is not a unitary construct and can be seen as a syndrome of concepts, including causal relationships between intention, action and outcome, as well as moral judgements that an individual is blameworthy [51]. In addition, a distinction can be made between attributing moral character to an action and attributing moral responsibility for an action to an agent. This attributional ambiguity and complexity in the ways we tend to talk about responsibility may contribute to the distress experienced by many participants. One further possibility is that these contradictory narratives serve a purpose in providing moral justification for action. In parents of children with attention deficit hyperactivity disorder (ADHD), definitions of authenticity shift according to prevailing cultural norms and developmental ideals [52]. In our cohort, participants moved between particular frameworks of attribution and authenticity depending on their utility in explicating positive and negative behaviours.
For example, behaviours that were evaluated negatively, such as irritability, disinhibition and relationship disruption, were often construed as arising from the exogenous and malign influence of stimulation. However, phenomena such as increased energy, generosity and extraversion were often seen as an opportunity to return to a more authentic self, facilitated rather than imposed by DBS (see Supplementary Material for further excerpts). The perception of control exercised by participants with PD over neuropsychiatric symptoms after STN-DBS was variable. Some (e.g. person with PD 04) experienced a loss of autonomy, manifest in lowered inhibitions and the perception of acting contrary to his identified values for normative behaviour. Others (e.g. person with PD 09) recognised that her actions had been out of keeping with her pre-surgical temperament but reasoned that she was able to exert voluntary suppression of those behaviours identified as problematic. Still others (e.g. person with PD 02) actively sought out changes in mood engendered by higher levels of stimulation, reminiscent of other cases previously reported [53]. In this latter excerpt, there is even a suggestion that the DBS is controlling the person with PD. Interestingly, no participant (person with PD or caregiver) raised concerns about a change of identity per se subsequent to STN-DBS. The language used by many participants evoked a notion of an essentialist 'core' self, which had been suppressed by PD and released by DBS to a varying degree. However, DBS did not appear to disrupt the integration of these (sometimes radical) changes into the autobiographical narrative of the person with PD, even when viewed from the caregiver's perspective and even when acknowledging the causal role of brain manipulation in precipitating these changes [17, 54]. Instead, when concerns were expressed by participants, these were primarily in the domain of autonomy, using phrases such as 'on drugs' or 'no circuit breaker'.
Again, this language seems to reflect a perception of a dysfunction in the cognitive machinery of autonomous decision-making, leading to the expression of inauthentic behaviours rather than a shift in an underlying authentic selfhood. It appears that participants (both persons with PD and caregivers) who noted a close relationship between stimulation changes and changes in mental state were more likely to conclude that these symptoms were inauthentic and uncontrollable. One could speculate that a close temporal association emphasises the connection between psychiatric symptoms and brain manipulation, which makes participants more likely to externalise this relationship. It is also possible that neuropsychiatric symptoms arising abruptly subsequent to stimulation manipulation are more likely to be of a negative valence, reflecting a more severe phenotype of neuropsychiatric dysfunction.

Limitations

The biological model held by PM was acknowledged and may have affected data gathering and analysis. Furthermore, the dual clinical and investigative role of PM may have limited the information disclosed, owing to concerns regarding confidentiality. We endeavoured to overcome the first issue by developing an investigative team with clinical and non-clinical backgrounds, with a spectrum of prior knowledge about the participants, in order to allow diverse perspectives. Participants provided constructive criticism of the clinical team, suggesting that they were willing to offer opinions and that PM's dual role did not prevent frank disclosure in this domain. Also, over 90% of participants in this investigation were male. This reflects a bias towards male gender in those accessing DBS at this centre (78% of all participants in the recruited cohort). Previous studies of drug addiction have identified gender differences in the attribution of control and responsibility [55]. Further research is needed to determine whether these findings are also applicable to female patients.
The emergence of neuropsychiatric symptoms after STN-DBS for PD is a complex matter, with potential contributions from non-motor progression of neurodegeneration and dopaminergic therapies, as well as neurostimulatory effects [5]. However, in this investigation, our rigorous assessment schedule [26, 34], involving a multidisciplinary neurological and psychiatric evaluation, increased the likelihood that observed symptoms were attributable to stimulation rather than other causes. We nonetheless acknowledge that we are unable to definitively answer "the causal question" as posed by Gilbert et al. [23] and discussed by Pugh et al. [56]. In our cohort, neuropsychiatric symptoms arose alongside clinically meaningful reductions in motor disability. Therefore, it remains possible that observed changes in mood, cognition and behaviour were indirect effects resulting from an amelioration of the participant's condition. We suggest that our method of specifying stimulation-dependent neuropsychiatric symptoms (i.e. onset with adjustment of stimulation and offset with further adjustment of stimulation) increases the likelihood of a causal relationship, and we also point to the wealth of neuroscientific data implicating the STN in the genesis of psychiatric symptoms (reviewed in [25]). It is also important to acknowledge that STN-DBS has been shown to be of equivalent safety when compared with medical therapy [22], with some surgical centres reporting a postoperative reduction in problematic neuropsychiatric symptoms due to the reduction in dopaminergic medication afforded by neurostimulation [57]. To some degree this is reflected in our sample, in the change in the behavioural phenotype of participants 3, 5 and 10, who all had significant pre-surgical neuropsychiatric difficulties attributable to dopaminergic therapies.
Conclusions
In this investigation, we have shown that stimulation-dependent neuropsychiatric symptoms following STN-DBS are often harmful and burdensome to persons with PD and their spousal caregivers. Some participants did not fully integrate the information about potential psychiatric harms when it was delivered to them prior to surgery, and further research will be important to identify new ways of preparing candidates that are meaningful and memorable. Further work examining the neural basis of these symptoms may also assist clinicians to improve the informed consent process and deliver more reliable predictors of outcome, which at present remain affected by a degree of uncertainty. There is evidence to support a causal link between stimulation and the emergence of psychiatric symptoms in our participants, which corresponds with existing quantitative data. However, many persons with PD did not hold an exclusively deterministic view and gave justified reasons for their (or their spouse's) behaviour that did not rely on a biological model. Furthermore, some participants actively sought out changes in their mental state that were linked to stimulation, despite identifying that their behaviour under these conditions was markedly different from baseline. Whereas a "scientific", neurobiological analysis of this phenomenon has many potential benefits, including improved understanding of neural mechanisms, prognostication, effective therapies and a reduction in the stigma experienced by sufferers, further work is needed to clarify whether a bias towards deterministic or moral explanations helps or hinders the ability of participants to manage the burden and harms associated with neuropsychiatric symptoms. None of our participants considered that a change in identity had been precipitated by DBS, but some perceived that their autonomy (or that of their spouse) had been overridden by the device.
This was most common for symptoms with negative consequences and for those symptoms with a close temporal connection to DBS manipulation. For these participants, inauthentic behaviour was considered to arise from a dysfunction in the competencies of autonomous decision making, rather than from a shift in authentically held values. We hope that the empirical data that we have provided will contribute to further philosophical debate in this area.
Zinc Oxide Nanoparticles Induce Apoptosis in Human Breast Cancer Cells via the Caspase-8 and P53 Pathway
Nanoparticles are a distinctive class of materials with unique properties and significant applications in many biomedical fields. In the present work, zinc oxide nanoparticles were prepared by a sol-gel approach. The synthesised nanoparticles were characterised using X-ray powder diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The in-vitro anticancer activity of zinc oxide nanoparticles towards the MCF-7 cell line was investigated using several assays. Zinc oxide nanoparticles were found to arrest the growth of MCF-7 cells. The anti-proliferative effect of ZnO nanoparticles was due to cell death through the induction of apoptosis, which was confirmed by acridine orange/ethidium bromide dual staining, DAPI staining and a genotoxicity assay. Reverse transcription polymerase chain reaction (RT-PCR) analysis was performed to assess the expression of the Caspase-8, Caspase-9 and P53 genes. The results suggest that ZnO nanoparticles may find wide use in clinical applications and provide a new complement to chemotherapy drugs.
Introduction
Among women, breast cancer is the leading cause of cancer deaths and the most common cancer worldwide. Globally, 1.3 million new cases of breast cancer are diagnosed and approximately 465,000 deaths are recorded annually [1]. Cancer may be defined as a disease in which abnormal cells divide uncontrollably, disregarding the normal rules of cell division [2]. Normal cells are continually controlled by signals that determine whether the cell should divide, differentiate into another cell type or die. The exact cause of cancer remains unknown.
Although a genetic component is involved in five to ten percent of cancers, other factors including poor diet, certain infections, lack of physical activity, obesity, tobacco use and pollution may also directly or indirectly influence the activity of crucial genes and thereby lead to cancer development [3]. In recent years, scientists have pursued a new route for treating cancer based on nanotechnology [4]. Nanotechnology is a science built on techniques and tools from diverse disciplines, including biology, chemistry, engineering and medicine. This field can markedly improve drug bioavailability, in turn reducing the toxicity associated with the high doses typically required for an optimal response, and enables delivery of a substance to a specific organ [5]. Nanoscale materials have attracted the attention of many researchers and, as a result, have found wide application in stronger materials, high-capacity memory, smaller electronics, advanced medical treatments and improved sensors [6]. Nanoparticles (NPs) possess properties intermediate between atomic structures and bulk materials, which makes them very interesting. Zinc oxide (ZnO) is one of the most common commercially used metal oxides produced by synthetic techniques [7]. ZnO is safe and compatible with human skin, making it a suitable additive for textiles and surfaces that come into contact with the body. Compared with the bulk material, nanoscale ZnO has the potential to enhance the efficiency of material performance [8]. As an important semiconductor of considerable medical and technological interest, ZnO-NPs are widely applied in numerous commercial areas, including ultraviolet (UV) light-emitting devices, ethanol gas sensors, photocatalysts, and the pharmaceutical and cosmetic industries [9,10].
The purpose of this study was to manufacture zinc oxide nanoparticles by the sol-gel method, to characterise them, and to investigate their activity against a breast cancer cell line.
Experimental
Preparation of zinc oxide nanoparticles
The synthesis of zinc oxide nanoparticles followed Khan, et al. [11] with some modification (Fig. 1). Zinc acetate dihydrate (4.16 g per 100 mL) was dissolved in distilled water. Sodium hydroxide (3.5 g per 100 mL) was prepared in deionised water. The sodium hydroxide solution was added dropwise to the zinc acetate solution until a pH of 12 was reached. The mixture was stirred for a further 1 h until a precipitate appeared. The precipitate was washed with deionised water, then filtered and dried overnight in a hot-air oven at 70 °C.
Characterisation of the manufactured ZnO nanoparticles
The manufactured material was analysed by X-ray diffraction (Shimadzu, Japan) over a 2θ range of 25-50° to identify the structure and purity of the ZnO nanoparticles. Detection of light scattering from matter is a useful technique with applications in several disciplines, depending on the light source and detector. 1 mL of distilled water was placed into the cell, after which 50 µL of the stock dispersion was added. The size distribution of the nanoparticles was determined by dynamic light scattering (DLS). Finally, scanning electron microscopy (SEM; GENEX, USA) and transmission electron microscopy (TEM; Philips EM) analyses were conducted to determine particle size and morphology.
Cytotoxicity determination using MTT assay
To determine the cell-killing effect of nano ZnO, the MTT assay was used according to Suliaman, et al. [12]. Human breast cancer cells (MCF-7) were seeded at 7000-10000 cells/well and cultured for 24 h until a confluent monolayer was achieved. Cells were treated with nano ZnO at serial dilutions from 100 to 6.25 µg/mL in culture media.
Cell viability was measured after 72 h of exposure by removing the medium, adding 50 µL of MTT stain and incubating for 2 h at 37 °C. After removing the stain, the cells were washed with phosphate-buffered saline (PBS). Absorbance was read on a microplate reader at 492 nm. The assay was performed in triplicate.
Clonogenicity
The cells were plated in 12-well plates at a density of 1000 cells/mL. After 24 h, the cells were treated with zinc oxide nanoparticles at the IC 50 concentration. The medium was then removed and the cells rinsed with PBS. The colonies were fixed and stained with crystal violet, washed with distilled water to remove excess dye, and photographed [13].
Nuclear morphology change detection using DAPI stain
The DAPI assay was performed to visualize nuclear morphological changes in the treated cells using the 365-nm filter of a fluorescence microscope. The cells were treated with zinc oxide nanoparticles at the IC 50 concentration and incubated for 48 h. The wells were rinsed with filtered PBS and the cells fixed with absolute ethanol for 0.5 h, then washed with distilled water. The cells were stained with DAPI solution for 30 min and rinsed with distilled water to remove the excess stain [14]. Nuclear changes were visualized at 40× magnification under a fluorescence microscope.
Acridine orange/ethidium bromide (AO/EB) dual staining
The AO/EB double-staining assay was used to identify cell death. Acridine orange is taken up by viable cells and emits green fluorescence. Ethidium bromide is taken up only by non-viable cells and emits red fluorescence through intercalation with damaged DNA [15]. Cells were seeded at a density of 1×10^4 on cover slides placed in a 12-well plate and exposed to ZnO nanoparticles. After 48 h of incubation, the medium was removed and a 20 µL aliquot of the dye mixture was combined with 100 µL of cell suspension in a well.
After incubation for 15-30 min, the cover slides were observed under a fluorescence microscope at 100× magnification.
Gene alteration detection using real-time (RT) quantitative polymerase chain reaction (PCR)
Gene expression changes in the cell line were investigated using real-time quantitative PCR. Three main genes were measured to identify the apoptotic pathway and mechanism of action: p53, caspase-9 and caspase-8. RT-PCR was performed to investigate the changes in gene expression. The primer sets were designed based on sequences from the NCBI database. The sequences of primers used in the quantitative RT-PCR assay included: Each RT-PCR reaction mixture contained 1 µL cDNA, 7.5 µL SYBR Green, 0.3 µL ROX, and 0.3 µL of the relevant primers; the final volume was made up to 15 µL by adding 5.6 µL of distilled water. The assay was performed with the SYBR Premix Ex Taq kit. Real-time detection of the emission intensity of SYBR Green bound to double-stranded DNA was performed on an Applied Biosystems ABI PRISM Sequence Detection System. GAPDH mRNA was used as an internal control to determine the relative expression of the genes.
Immunofluorescence microscopy
MCF-7 cells were stimulated as indicated. The cells were washed with 1× PBS, fixed with 4% paraformaldehyde (PFA) for 30 min at room temperature, and then permeabilized in 0.5% Triton X-100 for 30 min at room temperature. The permeabilized cells were blocked with 10% normal goat serum for 30 min. The primary antibody (1 mg/mL anti-p53 or 1 mg/mL anti-caspase-8) was added to the cells for 24 h at 4 °C. After washing three times with 1× PBS, the secondary antibodies were added for 120 min at room temperature. The secondary antibodies were 1 mg/mL Alexa Fluor 488-conjugated goat anti-rabbit IgG or 1 mg/mL Alexa Fluor 568-conjugated goat anti-mouse IgG.
The cells were washed three times with PBS and viewed under a fluorescence microscope.
Statistical analysis
Data are presented as means ± standard error of the mean. Statistical analyses were performed using the GraphPad Prism 5 software package (GraphPad Software, Inc., San Diego, California).
Results and Discussion
Characterization of manufactured ZnO nanoparticles
X-ray powder diffraction (XRD) analysis of the ZnO samples was carried out over a 2θ range of 25-50° using CuKα radiation. Fig. 2 shows the XRD pattern of the ZnO samples, with numerous ZnO nanoparticle peaks indicating the random orientation of a polycrystalline material, from which the interplanar distances were measured. The higher XRD peak intensities reflect higher crystallinity, and the larger grain size may be attributed to agglomeration of particles [16]. The sharp diffraction pattern indicated that the sample has the hexagonal ZnO crystal structure, with lattice constants a = 3.256 Å and c = 5.31 Å, as reported in JCPDS card 36-1451 for bulk ZnO. The DLS technique was used to measure the Brownian motion of the spherically dispersed particles and to relate it to the hydrodynamic size of the particles in the dispersed solution via the dynamic fluctuations of the scattered light intensity. The scattered light intensity was then mathematically processed to yield the hydrodynamic size of the particles. A key characteristic of Brownian motion measured by DLS is that small particles move faster than large particles; the relationship between the size of a particle and its velocity due to Brownian motion is described by the Stokes-Einstein equation, d(H) = kB·T/(3πηD), where kB is the Boltzmann constant, T the absolute temperature, η the solvent viscosity and D the translational diffusion coefficient. As visualized in Fig. 3, the ZnO NP diameter was within the range of 4-10 nm. The morphology and size of the manufactured zinc oxide nanoparticles were determined by SEM and TEM. As visualized in Fig.
4(a) and (b), the morphology of the zinc oxide nanoparticles is shown under identical magnifications; the images reveal that the ZnO nanoparticles were round in shape, with average diameters ranging from 10-15 nm.
Cytotoxicity determination using MTT assay
The MTT results illustrated that treatment with ZnO nanoparticles inhibited the growth of cells significantly (P ≤ 0.05), and the reduction was concentration-dependent. The inhibitory concentration (IC 50 ) of the ZnO nanoparticles was 15.88 µg/mL, as visualized in Fig. 4-8. ZnO nanoparticles, with their specific properties including biocompatibility, high selectivity, enhanced cytotoxicity and ease of synthesis, may be a promising anticancer agent [17]. ZnO nanoparticles have a particular capacity to evoke oxidative stress in cancer cells, which has been found to be one of the mechanisms of ZnO nanoparticle cytotoxicity towards cancer cells. This property arises from the semiconductor nature of ZnO, in which electrical conduction takes place via the motion of free electrons [18].
Clonogenicity assay
As visualized in Fig. 6, the anti-proliferative effect on human breast cancer MCF-7 cells was further verified using the clonogenic assay. The colony formation assay is an in-vitro cell survival assay based on the ability of a single cell to grow into a colony [19]. It can be used to assess the effectiveness of different cytotoxic agents. ZnO nanoparticles markedly reduced colony formation by the cell line at the IC 50 concentration. The reduction in colony formation may indicate that the cancer cells under continuous treatment were killed within the first 48 h, suggesting that ZnO nanoparticles were taken up by the cells and triggered the death mechanism. Consequently, the results indicate that the synthesised compound can induce cell death.
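The IC 50 value quoted above comes from a concentration-dependent dose-response curve. As a minimal, illustrative way to estimate an IC 50 from triplicate-averaged viability readings, one can interpolate log-linearly between the two doses that bracket 50% viability; the readings below are invented for illustration and are not data from this study.

```python
import math

def ic50_log_interp(doses, viability):
    """Estimate IC50 by log-linear interpolation between the two doses
    that bracket 50% viability. Doses in ascending order; viability in %."""
    points = list(zip(doses, viability))
    for (d_lo, v_lo), (d_hi, v_hi) in zip(points, points[1:]):
        if v_lo >= 50 >= v_hi:  # the 50% crossing lies in this interval
            frac = (v_lo - 50) / (v_lo - v_hi)
            log_ic50 = math.log10(d_lo) + frac * (math.log10(d_hi) - math.log10(d_lo))
            return 10 ** log_ic50
    raise ValueError("50% viability is not bracketed by the tested doses")

# Hypothetical viability readings (%) at the paper's dose range:
print(ic50_log_interp([6.25, 12.5, 25, 50, 100], [80, 60, 40, 30, 20]))
```

A full four-parameter logistic fit (as dose-response software such as Prism performs) would be more robust, but the interpolation above captures the basic arithmetic.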
However, the mechanism of cell death is not fully understood and requires further analysis.
Nuclear morphology change detection using DAPI stain
To evaluate the cytotoxic effects of the manufactured compound on the nuclei of the cell line, changes in nuclear morphology were examined after exposure to the IC 50 of ZnO nanoparticles, as illustrated in Fig. 7. Apoptosis was characterized by cell shrinkage, preservation of plasma membrane integrity, chromatin condensation and nuclear fragmentation. Overall, the results suggested that ZnO nanoparticles can induce apoptosis in the breast cancer cell line. The reduction in cell growth frequently involves modification of various key signaling pathways, arising from disturbance of the apoptotic process or cell cycle and resulting in effects on mRNA gene expression [20].
Acridine orange/ethidium bromide (AO/EB) dual staining
AO/EB analysis was employed to examine the changes in nuclear morphology of treated human breast cancer MCF-7 cells. Apoptotic cells were identified on the basis of DNA damage. AO/EB dual staining reveals the distinct nuclear alterations characteristic of apoptosis. Viable and non-apoptotic cells appear green, while apoptotic cells appear orange or red. As shown in Fig. 8, exposing cells to ZnO nanoparticles increased membrane disruption and the formation of lysosomal vacuoles compared with untreated control cells. The strong capacity to kill cells is associated with the ability of the nanoparticles to penetrate the cell membrane and affect the mRNA expression of suppressor genes, leading to an increased level of reactive oxygen species (ROS) within the cell [21]. With elevated ROS levels and oxidative stress, ZnO NPs exert a deleterious effect on the lipids, proteins and nucleic acids of the cell [22].
Increased ROS can cause membrane damage via lipid peroxidation and protein denaturation, resulting in cell death by necrosis, and DNA strand damage, resulting in cell death by apoptosis [23].
Zinc oxide nanoparticles upregulate caspase-8 and p53
In this study, quantitative real-time PCR was used to identify changes in the mRNA expression of apoptotic genes (p53, caspase-8 and caspase-9) in cells exposed to ZnO nanoparticles for 24 h at concentrations of 6.25, 12.5 and 25 µg/mL. The PCR results revealed that apoptotic markers at the mRNA level were altered in treated cell lines as a result of ZnO nanoparticle treatment. The mRNA level of the tumor suppressor gene p53 increased in treated cells, as visualized in Fig. 9(a); the level of p53 increased in a concentration-dependent manner. Moreover, the effect of ZnO nanoparticles on the mRNA expression of caspase-8 and caspase-9 was observed. The expression of caspase-9 was downregulated at 24 h, as shown in Fig. 9(c), while caspase-8 was upregulated in treated cells compared with untreated control cells at 24 h of treatment, as visualized in Fig. 9(b). The tumor suppressor p53 gene and caspase enzymes help to survey cells regularly and prevent them from becoming cancerous [24]. If a cell shows any form of malignancy, a DNA repair mechanism is activated to restore the altered DNA [25]. Zinc is a co-factor of these enzymes and plays a vital role in host defense against the initiation and progression of cancer [26]. The specific DNA-binding domain of p53 has a complex tertiary structure that is stabilized by zinc [27]. Consequently, zinc plays a major role in maintaining the activity of the tumor suppressor gene p53 and a vital role in the activation of the caspase-8 enzyme, a principal enzyme responsible for apoptosis [28].
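The relative expression changes discussed above were, per the methods, normalised to a GAPDH internal control. For reference, the standard 2^(-ΔΔCt) relative-quantification arithmetic can be sketched as follows; the Ct values are hypothetical, purely to illustrate the calculation, and are not data from this study.

```python
# Sketch of the 2^(-delta-delta-Ct) method implied by the use of GAPDH
# as internal control. All Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_gapdh_treated,
                ct_target_control, ct_gapdh_control):
    """Relative expression of a target gene (treated vs. control),
    normalised to GAPDH, via the 2^(-delta-delta-Ct) method."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated  # delta-Ct, treated
    d_ct_control = ct_target_control - ct_gapdh_control  # delta-Ct, control
    dd_ct = d_ct_treated - d_ct_control                  # delta-delta-Ct
    return 2 ** (-dd_ct)

# Hypothetical: target amplifies 2 cycles earlier after treatment
# relative to control -> 4-fold upregulation.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # prints 4.0
```

A gene whose ΔΔCt is negative (earlier amplification after treatment) is reported as upregulated, as for caspase-8 and p53 here; a positive ΔΔCt corresponds to downregulation, as for caspase-9.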
Caspase-9 is a sensitive apoptosis-associated molecular target of zinc [29]; it is responsible for the activation of caspase-3 and other enzymes that mediate nuclear membrane dissolution leading to cell death. Zinc plays a crucial role in the response to oxidative stress, DNA replication, DNA damage repair, cell cycle progression and apoptosis; as a result, a deficiency of zinc leads to disruption of essential homeostasis in cells [30,31]. To confirm that ZnO NPs up-regulated the induction of p53 and caspase-8 and that the phenomenon is concentration-dependent, an immunofluorescence assay was conducted according to the manufacturer's protocol. Fig. 10 shows p53 and caspase-8 induction in MCF-7 cells after treatment with ZnO NPs at the indicated concentrations. The fluorescence of p53 and caspase-8 was very low in control cells, while it was very clear in treated MCF-7 cells, suggesting that ZnO NPs were able to induce apoptosis.
Conclusions
Nanoparticles such as ZnO have been examined and developed for cancer therapy. Recently, several research groups have suggested the use of ZnO nanoparticles as anti-cancer therapeutic agents. In this study, the cytotoxicity of ZnO nanoparticles to human breast cancer cells was investigated. The results confirmed that exposure of cancer cells to ZnO nanoparticles caused considerable cytotoxicity. The findings indicate that ZnO nanoparticles had high activity in inducing gene expression. Consequently, ZnO nanoparticle chemoprevention may be efficacious for the prevention and treatment of various cancers.
Expression and Processing of Mouse Proopiomelanocortin in Bovine Adrenal Chromaffin Cells A MODEL SYSTEM TO STUDY TISSUE-SPECIFIC PROHORMONE PROCESSING* Many neuroendocrine precursor proteins, such as proopiomelanocortin (POMC), are cleaved in a tissue specific manner at distinct pairs of basic amino acids. Elucidating the specificity of the prohormone endoprotease(s) is essential to understanding cleavage specificity. However, isolation of these enzymes has been difficult, due to the inability to distinguish authentic maturation enzyme from the many other trypsin-like activities present in tissue homogenates. Recently, a "signature" of the insulin cell endoprotease(s) was defined in vivo by assessing the processing of a series of mutant cleavage sites in a model prohormone, mouse POMC (mPOMC) (Thorne, B. A., and Thomas, G. (1990) J. Biol. Chem. 265, 8436-8443). To investigate mechanisms of tissue-specific processing, we sought to identify the endoprotease signature of a cell having a processing phenotype distinct from insulinoma cells. In this report, the cleavage site specificity of the endoprotease(s) expressed in bovine adrenal chromaffin cells is examined. High levels of mPOMC (1.6 pmol/10^6 cells) were expressed in these cells using a vaccinia virus vector, and the precursor was targeted to the regulated secretory pathway. Analysis of POMC-derived peptides revealed that chromaffin cells processed the prohormone to a set of peptides highly similar to anterior pituitary corticotrophs, including adrenocorticotropin hormone (ACTH) and beta-lipotropin, gamma-lipotropin, and beta-endorphin. This processing contrasted with the pattern of cleavage site utilization in Rin m5F insulinoma cells, which more closely resembled that of the intermediate pituitary melanotrophs.
However, the processing preference for the sequences of pairs of basic amino acids (as tested using the entire series of mutant cleavage sites; -LysArg- (native), -ArgArg-, -ArgLys-, -LysLys-, -HisArg-, -MetArg- at the ACTH/beta-lipotropin junction and -LysLys- (native), -LysArg-, -ArgArg-, -ArgLys- in beta-endorphin) was the same in both insulinoma and adrenal chromaffin cells, suggesting recognition and cleavage by similar enzymes in both cell types. The cell-specific processing of mPOMC may thus result from expression of a common core set of processing enzymes and factors unique to each cell type affecting the enzyme accessibility to precursor cleavage sites.
A common mechanism in the synthesis of peptide hormones and neurotransmitters is excision from larger precursor proteins. In most cases, maturation to bioactive forms occurs in a series of steps within the regulated secretory pathway of endocrine and neural cells (1). An early step in the process is endoproteolytic cleavage of the prohormone, typically at pairs of basic amino acids (e.g. -LysArg-) (2). Further modifications, such as removal of flanking basic amino acids by carboxyl- and/or aminopeptidases, NH2-terminal acetylation, and COOH-terminal amidation, may then occur (3). Many prohormones are processed in a tissue specific manner.
For example, proopiomelanocortin (POMC) is cleaved in the anterior lobe of the pituitary to ACTH, β-LPH, γ-LPH, and β-endorphin(1-31), but is processed in the intermediate lobe to α-melanocyte-stimulating hormone, corticotropin-like intermediate lobe peptide, and several carboxyl-shortened forms of β-endorphin (4) (refer to Fig. 1). Although the biochemical basis for tissue-specific processing is unclear, possible mechanisms include: (i) selective expression of distinct processing enzymes specific for cleaving different sites in the prohormone; (ii) modulation of the accessibility of potential cleavage sites by differential post-translational modification of the precursor (e.g. glycosylation, phosphorylation); and (iii) regulation of the microenvironment in the processing compartment(s): differences in pH, or in the concentration of ions, co-factors, or protease modulators, could potentially influence interactions of processing enzymes with their prohormone substrates. Identification of the prohormone endoproteases from cell types with different processing capabilities is essential to an understanding of the tissue specificity of the processing reactions. Indeed, diverse enzyme activities, many of which can cleave substrates containing paired basic amino acids in vitro, have been isolated from both endocrine and neuroendocrine sources (5-10). However, none of these candidate prohormone endoproteases has been authenticated. A major obstacle has been distinguishing a bona fide maturation activity from the many other trypsin-like enzymes present in tissue homogenates. One approach to overcoming this difficulty is to first characterize the cleavage site specificity of prohormone processing endoproteases in vivo and use this information as a guide in identifying putative prohormone processing enzymes in vitro. A comparison of the processing specificities of different cell types also provides a foundation for distinguishing between potential mechanisms of tissue-specific processing. Previously, we examined the cleavage site specificity of the prohormone endoprotease(s) in an insulinoma cell line, Rin m5F (11,12). Using a recombinant vaccinia virus (VV) expression vector, this cell line was shown to efficiently process mouse POMC (mPOMC) at many of the sites specifically cleaved in the intermediate pituitary, including the -LysLysArgArg- sequence within ACTH and the -LysArg- site within β-LPH. Unlike the intermediate pituitary, however, the -LysLys- site within β-endorphin was inefficiently cleaved. By comparing the efficiency of cleavage of the four permutations of paired lysines and/or arginines introduced at two of the cleavage positions in mPOMC, the sequence of the paired basic amino acids was shown to be critical for directing the degree of processing in insulinoma cells. However, the efficiency of cleavage for specific pairs of basic amino acid sequences was modulated by position within the prohormone. Furthermore, analysis of mutant POMC processing suggested that "paired basic amino acids" may not constitute the precise definition of a cleavage site in these cells. Mutation of an efficiently cleaved -LysArg- site to -MetArg- resulted in partial processing, whereas substitution with -HisArg- prevented endoproteolysis. Expression of mPOMC in a cell type exhibiting a processing specificity distinct from Rin m5F cells (ideally mimicking anterior pituitary) would provide a manipulable model system for studies on the mechanisms of tissue-specific processing. We selected chromaffin cells of the adrenal medulla as such a model for several reasons. First, these cells process a variety of neuropeptide precursors in a tissue specific manner (see Ref. 13 for review).
For example, proenkephalin is processed at select pairs of basic amino acids, resulting in the synthesis of large enkephalin-containing peptides in chromaffin cells. In contrast, the same peptide precursor is completely processed in the brain to enkephalin pentapeptides by processing at all paired basic amino acid cleavage sites (14, 15). Furthermore, chromogranin A, the major soluble protein of chromaffin vesicles, is partially processed in the adrenal medulla at selected sites, whereas insulin-producing cells efficiently process this precursor to lower molecular weight forms (16, 17). Second, large (>10^8 cells) pure populations of chromaffin cells can be isolated from bovine adrenal glands and maintained for weeks in culture (18). Finally, as a primary culture, this tissue may more accurately reflect processing and secretion of peptide hormones in endocrine tissues in situ than tumor-derived cell lines. In this report, we demonstrate efficient synthesis and proteolytic maturation of mPOMC in primary cultures of bovine adrenal chromaffin cells using a vaccinia virus expression vector. Processing of mPOMC in these cells was very similar to processing in the anterior pituitary, contrasting with processing in both the intermediate lobe and Rin m5F cells. The precursor was cleaved primarily to ACTH and β-LPH, with only partial conversion of β-LPH to γ-LPH and β-endorphin1-31. However, despite the differential processing at the tetrabasic sequence in ACTH and at the γ-LPH/β-endorphin junction, the adrenal chromaffin cells and the insulinoma cells displayed very similar processing efficiency at both the ACTH/β-LPH junction and the β-endorphin cleavage site, for the native sequences as well as all mutant sequences. Thus a strong similarity in at least some of the insulinoma and adrenal chromaffin cell prohormone endoproteases is indicated.
Together, these two cell types represent a manipulable model system to determine the factors which modulate the tissue specificity of prohormone processing. MATERIALS AND METHODS Vaccinia Virus-VV strain WR was used in these studies. Viral recombinants directing expression of native and mutant mPOMC were constructed and virus stocks were maintained as described (12). Chromaffin Cell Cultures-Bovine adrenal glands were obtained from the local slaughterhouse. All reagents were cell culture grade from either Sigma or Gibco unless otherwise specified. Chromaffin cells were isolated essentially as described (19). Briefly, eight glands were perfused retrogradely 3 times with 5 ml of W3 (145 mM NaCl, 5.4 mM KCl, 1.0 mM NaH2PO4, 11.2 mM glucose, 15 mM HEPES, 100 units/ml of penicillin, 100 units/ml of streptomycin, and 25 μg/ml of gentamicin, pH 7.4), incubating 10 min at 37 °C between perfusions. The glands were then perfused 3 times with 4 ml of W3 containing 2 mg/ml type I collagenase (Worthington) and 50 μg/ml of DNase I (Cooper Biomedical), incubating 15 min at 37 °C between perfusions. After the last perfusion, medullae were dissected out, minced, and incubated with 150 ml of W3 containing 1 mg/ml of collagenase and 25 μg/ml of DNase I for 30 min at 37 °C in a spinner flask. The cell suspension was filtered through nylon mesh and the cells were collected by centrifugation at 100 × g for 15 min. For most experiments, the chromaffin cells were then purified on a Percoll gradient as described (19). Cultures enriched in enkephalin-containing chromaffin cells were prepared essentially as described (20). Briefly, cell pellets from the filtrate of two glands were resuspended in 4 ml of W3 and layered on top of step gradients of bovine serum albumin (BSA) in two 16-ml tubes (1 ml of 30%, 2 ml of 23%, 3 ml of 19%, 3 ml of 15%, and 3 ml of 10% BSA/tube). The BSA solutions were made in W3 from a 35% stock solution (Path-o-cyte 4, Miles Diagnostics).
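The BSA step-gradient layers described above are each made by diluting the 35% stock with W3. A minimal sketch of the C1V1 = C2V2 dilution arithmetic (the helper function is ours; the layer volumes and concentrations are those listed in the text):

```python
# Dilution arithmetic (C1*V1 = C2*V2) for the BSA step gradient:
# each layer is prepared by diluting the 35% (w/v) stock with W3.
STOCK_PCT = 35.0  # % BSA in the Path-o-cyte 4 stock solution

def layer_recipe(final_pct, final_ml, stock_pct=STOCK_PCT):
    """Return (ml of stock, ml of W3) needed for one gradient layer."""
    stock_ml = final_pct * final_ml / stock_pct
    return stock_ml, final_ml - stock_ml

# (final %, ml per tube) for the five layers listed in the text
layers = [(30, 1), (23, 2), (19, 3), (15, 3), (10, 3)]
for pct, ml in layers:
    stock, w3 = layer_recipe(pct, ml)
    print(f"{pct:>2}% layer: {stock:.2f} ml stock + {w3:.2f} ml W3")
```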
Gradients were centrifuged at 16,000 × g (r_max) in a swinging bucket rotor for 30 min at 4 °C, and the cells at the interface between the 23 and 30% layers were collected. Chromaffin cells from both Percoll and BSA gradients were washed twice in W3 containing 1.8 mM CaCl2 and 0.8 mM MgSO4 and were collected by centrifugation at 100 × g for 15 min. Cells were then resuspended in culture medium (50% Dulbecco's modified Eagle's medium and 50% Ham's F-12 containing 10% fetal calf serum (HyClone), 5 mM HEPES, pH 7.0, 100 units/ml penicillin, 100 units/ml streptomycin, and 25 μg/ml gentamicin) and plated on untreated plastic tissue culture dishes (Nunc) at an approximate density of 4 × 10^7 cells/150-mm dish. Cells were incubated at 37 °C in a humidified atmosphere containing 5% CO2 (standard culture conditions) for 5-9 h to allow attachment of fibroblasts. Dishes were then gently swirled, and the medium containing unattached chromaffin cells was collected. Chromaffin cells were resuspended in fresh culture medium and replated on plastic tissue culture dishes (Nunc) which had been coated with 5 μg of rat tail collagen/cm2 (type I, Sigma). Cell densities were approximately 2 × 10^6/35-mm plate for secretion experiments. Chromaffin cells were inoculated with either wild type vaccinia virus or vaccinia virus recombinants expressing native or mutant mPOMC at a multiplicity of infection (m.o.i.) of 1 as described (11). After 2 h, the inoculum was replaced with fresh culture medium and the cells were maintained at 37 °C for 24 h before harvesting or performing secretion experiments. Purity of cultures and relative number of infected cells were assessed by immunofluorescence. Cells were fixed directly on culture dishes with formalin prior to addition of primary antibody. The number of chromaffin cells in a culture was determined by staining for dopamine β-hydroxylase (antiserum was obtained from Eugene Tech International, Inc.).
Enkephalin-containing cells were identified using a monoclonal antibody raised against proenkephalin (PE2) (21). The percent of cells infected with VV was determined using an antibody raised against wild type VV. Regulated Secretion-For measuring stimulated secretion, all conditions were performed in triplicate. Culture medium was removed from infected cells, and the plates were gently washed once in control collection medium, followed by a 15-min incubation at 37 °C in the same medium. For K+ secretion experiments, KRH (125 mM NaCl, 4.8 mM KCl, 1.2 mM KH2PO4, 2.2 mM CaCl2, 5.6 mM glucose, 1 mM ascorbic acid, 0.07% BSA, and 25 mM HEPES, pH 7.4) was used as control medium (22, 23), and for stimulation with Ba2+, calcium-free BSS (125 mM NaCl, 4.75 mM KCl, 1.4 mM MgCl2, 10 mM glucose, 0.07% BSA, and 25 mM HEPES, pH 7.35) was used (24). The above media were then replaced with 1 ml of collection medium ± secretagogues. Cells were incubated at 37 °C for 15 min for K+ experiments, and either 30 or 45 min for Ba2+ experiments. For stimulating secretion with K+, KRH was adjusted to 56 mM KCl and the NaCl was reduced to 74 mM to maintain isosmotic conditions. For stimulation with Ba2+, 3 mM BaCl2 was added to the BSS. Following the collection period, media were briefly centrifuged before assaying directly by radioimmunoassay (RIA). Peptides in cell extracts and media samples were resolved on a C4 reversed phase HPLC column as described (11). Briefly, the media was removed from the cells, clarified by low speed centrifugation (1000 × g for 5 min), and immediately frozen at −70 °C until further use. The cells were washed once with fresh warm medium and then harvested in 0.5 ml of 1 M acetic acid (pH 1.9 with HCl), 1 mM phenylmethylsulfonyl fluoride. Cell extracts were probe sonicated 10 s at 40 watts, clarified in a microcentrifuge, lyophilized to dryness, and stored at −70 °C until further use.
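The high-K+ stimulation medium stays isosmotic because the rise in KCl (4.8 to 56 mM) is offset by lowering NaCl (125 to 74 mM), so the total Na+ plus K+ contributed by the chloride salts is nearly unchanged (the 1.2 mM KH2PO4 contribution is the same in both media). A quick check of that bookkeeping:

```python
# Osmotic bookkeeping for K+ stimulation (concentrations in mM, taken
# from the KRH recipes above): the rise in KCl is matched by an almost
# equal drop in NaCl, keeping [Na+] + [K+] from the chloride salts constant.
control = {"NaCl": 125, "KCl": 4.8}   # control KRH
stim    = {"NaCl": 74,  "KCl": 56}    # 56 mM K+ stimulation KRH

def total_cation(medium):
    """Sum of Na+ and K+ contributed by the chloride salts (mM)."""
    return medium["NaCl"] + medium["KCl"]

print(total_cation(control), total_cation(stim))  # 129.8 vs 130 mM
```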
Molecular weights were estimated by migration on a Tricine SDS-polyacrylamide gel essentially as described (25). Briefly, C4 column fractions were lyophilized to dryness and resuspended in 10 mM Tris-phosphate, pH 6.8, 2.5% SDS, 5% β-mercaptoethanol, and 0.02% bromphenol blue. Samples were heated for 5 min in a boiling water bath and resolved by electrophoresis. The separating gel contained 12.5% acrylamide, 0.83% bisacrylamide, 0.1% SDS, 13% glycerol, 0.98 M Tris-HCl, pH 8.45, 0.05% ammonium persulfate (w/v), and 0.05% TEMED. The stacking gel contained 4% acrylamide, 0.13% bisacrylamide, 0.1% SDS, 0.72 M Tris-HCl, pH 8.45, 0.05% ammonium persulfate, and 0.05% TEMED. The upper reservoir buffer contained 0.1% SDS, 0.1 M Tris, and 0.1 M Tricine, pH 8.25. The lower reservoir buffer was 0.2 M Tris-HCl, pH 8.9. Molecular weight standards consisted of low molecular weight proteins (Bio-Rad low MW kit), tryptic fragments of myoglobin (Sigma MW-SDS-17 kit), and β-endorphin1-31. Lanes with standards were stained with Coomassie Brilliant Blue R-250. Lanes with samples were cut into 2.5-mm slices and incubated overnight in 0.8 ml of RIA buffer I (11) at room temperature on a rotating platform. Aliquots of the eluates were analyzed by RIA. Sequence analysis of processed peptides was performed by manual Edman degradation as previously described (12). RESULTS Expression of mPOMC in Primary Cultures of Adrenal Chromaffin Cells-Efficient expression and processing of mPOMC in a wide variety of established cell lines was previously achieved with a VV vector (11, 12). The wide host range and transient nature of this vector suggested that the same VV constructs could also direct expression in primary adrenal chromaffin cultures. Since all cells in a culture should be infected by the recombinant virus, including fibroblasts or other contaminating cell types, pure populations of chromaffin cells were essential.
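Apparent molecular masses from gel slices like these are conventionally read off a calibration of log10(mass) against relative mobility (Rf) of the standards. A sketch of that interpolation; the standard masses and Rf values below are illustrative placeholders we supply, not measurements from this study:

```python
import math

# Apparent molecular mass from SDS-PAGE migration: log10(mass) is
# roughly linear in relative mobility (Rf) over the resolving range.
# ILLUSTRATIVE (kDa, Rf) pairs for a low-molecular-weight standard set.
STANDARDS = [(21.5, 0.25), (14.4, 0.45), (6.5, 0.70), (3.5, 0.90)]

def apparent_mass(rf, stds=STANDARDS):
    """Linearly interpolate log10(kDa) between the flanking standards."""
    stds = sorted(stds, key=lambda s: s[1])  # order by mobility
    for (m1, r1), (m2, r2) in zip(stds, stds[1:]):
        if r1 <= rf <= r2:
            frac = (rf - r1) / (r2 - r1)
            log_m = math.log10(m1) + frac * (math.log10(m2) - math.log10(m1))
            return 10 ** log_m
    raise ValueError("Rf outside the calibrated range")
```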
Chromaffin cells isolated from bovine adrenal glands were first purified on a Percoll gradient followed by differential plating (see "Materials and Methods"). After 60 h in culture, >90-95% of the cells were chromaffin cells, as determined by staining with a dopamine β-hydroxylase antibody. The few cells which were not immunopositive for dopamine β-hydroxylase predominantly had fibroblast-like morphologies. Approximately 36 h after replating, these primary cultures were either mock infected or infected with either wild type vaccinia (VV:WT) or a VV recombinant which directs the expression of mPOMC (VV:mPOMC). The majority (>80%) of cells in both VV:WT- and VV:mPOMC-infected cultures stained immunopositive for VV, whereas no positive staining was detected in mock-infected cultures. Replicate cultures were harvested 24 h after infection, and expression was assayed by quantifying γ-LPH immunoreactivity (IR) by radioimmunoassay (RIA). Synthesis of mPOMC was essentially undetectable in either mock- or VV:WT-infected cells, while VV:mPOMC-infected cells accumulated 0.8 pmol/10^6 cells of γ-LPH IR in each of the cell extract and culture medium (1.6 pmol/10^6 cells total). Regulated Secretion of POMC IR from Adrenal Chromaffin Cells-Prohormone processing is thought to occur strictly within the regulated secretory pathway of endocrine and neuroendocrine cells (26, 27). To study the cleavage reactions, correct intracellular targeting of the exogenous prohormone substrate to the storage granules of this secretory pathway must be demonstrated in the model system. Intracellular stores of endogenous catecholamines and enkephalin-containing peptides can be released from chromaffin cells upon stimulation with a wide variety of secretagogues (23), including high concentrations of potassium (calcium dependent) (22) or barium (competitively inhibited by calcium) (24). These secretagogues also induced release of mPOMC (Fig. 2).
Following a 15-min stimulation with 56 mM KCl, a 3.5-fold increase (Ca2+ dependent) in secreted γ-LPH IR was observed (Fig. 2A). A 30-min incubation with 3 mM BaCl2 elicited an 11.5-fold increase in secreted γ-LPH IR, corresponding to a release of approximately 50% of the intracellular mPOMC IR. As expected, the barium-stimulated release was partially inhibited by 2.2 mM CaCl2 (Fig. 2B). Processing of mPOMC in Adrenal Chromaffin Cells-Processing of native mPOMC by adrenal chromaffin cells was characterized by identifying peptide products of the ACTH and β-LPH domains of the precursor (refer to Fig. 1). Extracts of VV:mPOMC-infected cells were resolved by reversed phase HPLC, and mPOMC-derived peptides were identified by retention time coupled with domain-specific RIA (Fig. 3). When assayed for ACTH, γ-LPH, and β-endorphin IR, peaks of immunoreactivity co-eluting with β-LPH, β-endorphin1-31, γ-LPH, and two prominent ACTH isoforms were found. The principal γ-LPH IR product co-eluted with the major peak of β-endorphin IR at the position of the β-LPH standard (48 min). Furthermore, both immunoreactivities co-migrated on an SDS-polyacrylamide gel as a single band (data not shown). The apparent molecular mass of this peptide (8-9 kDa) correlated well with the calculated molecular mass of authentic β-LPH (8.2 kDa). A second peak of γ-LPH IR co-eluted with authentic γ-LPH (35 min), and a β-endorphin peak of equivalent amount co-eluted with authentic β-endorphin1-31, indicating that about 40% of the β-LPH produced by these cells was further processed to γ-LPH and β-endorphin1-31. Prominent peaks of ACTH IR, eluting at 29 and 33 min, were observed in extracts of VV:mPOMC-infected cultures. By SDS-polyacrylamide gel electrophoresis (SDS-PAGE) these peptides had apparent molecular masses of approximately 14 and 4.5 kDa, respectively (data not shown).
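The ~40% conversion figure follows directly from the RIA peak quantities: the equimolar γ-LPH and β-endorphin1-31 peaks (processed product) are compared against the remaining intact β-LPH. A small sketch with arbitrary illustrative units (the 2:3 ratio is ours, chosen only to reproduce the ~40% value quoted above):

```python
# Fractional conversion of beta-LPH: processed / (processed + intact).
def percent_processed(processed, intact):
    """Percent of beta-LPH cleaved to gamma-LPH + beta-endorphin(1-31)."""
    return 100.0 * processed / (processed + intact)

# Illustrative units: 2 units of processed product against 3 units of
# intact beta-LPH reproduces the ~40% conversion reported in the text.
print(percent_processed(2, 3))  # 40.0
```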
In the anterior pituitary, ~50% of the ACTH is glycosylated to a 13-kDa isoform, with the 4.5-kDa peptide corresponding to unglycosylated ACTH (28). To confirm their identities, the peptides were radioiodinated, and the ACTH-immunoreactive peptides were immunoprecipitated and subjected to sequence analysis as described (12). For both the 29- and 33-min peaks, 125I was detected in the second cycle of Edman degradation, the position of tyrosine in ACTH (data not shown). Thus both ACTH forms contained the correct NH2 terminus. Although only ACTH peptides containing an intact internal -LysLysArgArg- sequence could be detected with the antiserum used in this study, no additional peptides could be identified with a COOH-terminal directed ACTH antiserum (anti-corticotropin-like intermediate lobe peptide, data not shown), indicating that the tetrabasic site was not cleaved by chromaffin cells. The peak of ACTH, γ-LPH, and β-endorphin IR at 65 min co-eluted with intact prohormone. In this experiment, approximately one-third of the mPOMC IR was in the form of precursor. While similar levels of intracellular prohormone were found in most experiments, variability was occasionally observed (ranging from <10% to almost 50% unprocessed mPOMC). However, the ratios of the different processed peptides (i.e. β-LPH to γ-LPH and β-endorphin) remained constant. The processing of mPOMC by chromaffin cell cultures was strikingly similar to processing in anterior pituitary corticotrophs (refer to Fig. 1): the two -LysArg- sites flanking ACTH were very efficiently cleaved, while less than half the β-LPH was processed at an internal -LysArg- site. Also, the -LysLys- site near the COOH terminus of β-endorphin remained completely intact. The only significant difference between the homologous and heterologous systems was the relatively high level of intact precursor in chromaffin cells.
mPOMC Processing in Chromaffin Cell Subtypes-Adrenal chromaffin cells can be classified into two subtypes by their catecholamine content: adrenalin-containing or noradrenalin-containing (29). In the bovine adrenal, enkephalin-containing peptides are localized to the adrenalin-containing cells (30), the major subtype in this species. Although the two subtypes are biochemically very similar, the possibility of differential mPOMC processing by adrenalin- and noradrenalin-containing cells was a concern. In particular, the apparent partial processing of β-LPH may have reflected complete processing in one cell type and lack of processing in the other. This possibility was examined by studying the processing of mPOMC in cultures enriched for enkephalin-containing cells. Enkephalin-containing cells were purified from total chromaffin cell preparations on a discontinuous BSA gradient. Greater than 90% of the cells collected from the 23/30% interface stained intensely for proenkephalin-derived products, as compared to ~35% in cultures purified by the standard Percoll gradient. Enriched cultures were infected with VV:mPOMC, and extracts of these cells were resolved by reversed phase HPLC as described above. Processing of mPOMC by the enkephalin-containing cells was essentially identical to processing in the unfractionated chromaffin cell cultures, as determined by the profile of γ-LPH IR: slightly more β-LPH was produced than the fully processed γ-LPH, and the precursor protein accounted for approximately one-third of the total immunoreactivity (data not shown). Because no differences in mPOMC processing were observed in enriched enkephalin-containing cells compared to unfractionated cells, we concluded that both cell subtypes processed mPOMC identically. All further experiments were therefore performed with unfractionated chromaffin cell populations.
Stimulated Secretion of Processed Peptides-Anterior pituitary corticotrophs secrete both β-LPH and its processed forms (β-endorphin1-31 and γ-LPH) in response to secretagogues, demonstrating that all three peptides are final products of processing. Accordingly, experiments were performed to determine whether the β-LPH and any of the intact prohormone present in chromaffin cell extracts (Fig. 3) were also final products of maturation, or were instead present as processing intermediates in a secretory compartment preceding formation of mature (secretion-competent) storage vesicles. Secretion from VV:mPOMC-infected chromaffin cells was stimulated for 45 min with 3 mM Ba2+ in the absence of Ca2+ as described above. Media from 100-mm plates of control and stimulated cultures were resolved on the C4 column, and fractions were assayed for ACTH, γ-LPH, and β-endorphin IR (Fig. 4). Very low levels of mPOMC IR were detected in control medium. The predominant form was intact precursor, although β-LPH was also detected (Fig. 4A). However, the medium of cultures incubated with Ba2+ contained predominantly processed peptides, including ACTH, β-LPH, γ-LPH, and β-endorphin1-31. Secretion of these peptides increased more than 50-fold in the presence of barium (Fig. 4B). Furthermore, the pattern of peptide products released was very similar to that found in cell extracts (compare Figs. 3 and 4B), except that prohormone accounted for only a minor portion of the secreted immunoreactivity. Thus β-LPH was apparently a final product of maturation, whereas most of the intracellular prohormone was sequestered in a nonsecretable compartment(s), presumably rough endoplasmic reticulum and/or Golgi. These results indicate that processing of mPOMC occurred within the regulated secretory pathway of chromaffin cells and that, after transiting this pathway, >90% of the precursor had been proteolytically processed to a set of peptides identical to those found in anterior lobe corticotrophs.
Expression of Mutant mPOMC in Chromaffin Cell Cultures-The processing of mPOMC by chromaffin cell cultures was strikingly distinct from the processing observed in Rin m5F cells (summarized in Fig. 5). The insulinoma cells cleaved mPOMC not only at the -LysArg- sites flanking ACTH but also at the -LysLysArgArg- site within ACTH. Additionally, β-LPH was processed completely to β-endorphin1-31 and γ-LPH. Note, however, that both cell types inefficiently cleaved the -LysLys- site within β-endorphin and efficiently processed the -LysArg- site at the ACTH/β-LPH junction (see Fig. 5). In order to compare "signatures" of Rin m5F and chromaffin cell processing enzymes, we chose to study the specificity of the chromaffin cell endoprotease(s) using the same series of mPOMC cleavage site mutants previously expressed in Rin m5F cells (12). Processing of all four permutations of lysine and arginine (-LysArg-, -ArgArg-, -ArgLys-, and -LysLys-) was determined to identify any sequence specificity of the processing enzyme(s). To control for positional effects, these four sequences were introduced at two positions in the precursor: the efficiently cleaved -LysArg- site at the ACTH/β-LPH junction, and the inefficiently cleaved -LysLys- site within β-endorphin. To test the presumed requirement for paired basic amino acids, -HisArg- and -MetArg- sequences were substituted for the efficiently cleaved -LysArg- at the ACTH/β-LPH junction. The structures of these eight constructs are summarized in Fig. 5. Processing of POMC Mutants in Adrenal Chromaffin Cells-Processing of the mutant precursors was characterized by resolving extracts of these cells on the reversed phase column and identifying mPOMC-derived peptides by retention time coupled with domain-specific RIA (Fig. 6). The peptide profiles of the five ACTH/β-LPH mutants fell into three classes. K163R-mPOMC (-LysArg- changed to -ArgArg-), the first class of mutant, was processed identically to the native prohormone (Fig. 6A).
Authentic β-LPH, γ-LPH, and β-endorphin1-31 were synthesized, as well as the two major forms of ACTH, indicating efficient cleavage of the -ArgArg- at this position in the precursor. The second class of mutants consisted of KR163RK-mPOMC (-ArgLys-), R164K-mPOMC (-LysLys-), and K163H-mPOMC (-HisArg-) (Fig. 6B). All produced normal levels of β-endorphin1-31 but almost no γ-LPH, β-LPH, or ACTH. Instead, two novel peaks containing both ACTH and γ-LPH IR (eluting at 44 and 47 min) and two novel peaks containing ACTH, γ-LPH, and β-endorphin IR (eluting at 51 and 53 min) were observed. These results suggested 1) that the mutant -LysLys-, -ArgLys-, and -HisArg- sites were not cleaved by the chromaffin cell endoprotease(s); and 2) that the partial cleavage of the -LysArg- site within β-LPH was not affected by the mutation. If this interpretation were correct, the two earlier eluting novel peaks should correspond to a fusion of ACTH and γ-LPH sequences, while the two later eluting peaks should be ACTH/β-LPH fusions. The production of two forms of each peptide is expected, due to heterogeneity within the ACTH domain. To corroborate these assignments, the four novel peptides arising from KR163RK-mPOMC (-ArgLys-), taken as representative of this class of mutants, were resolved by SDS-PAGE. Eluates of 2.5-mm gel slices were assayed for ACTH, γ-LPH, and β-endorphin IR. Each peak ran essentially as a single species on the gel, and the immunoreactive profiles completely overlapped (data not shown). The first two peaks, with HPLC retention times of 44 and 47 min, had apparent molecular masses of 16.5 and 8.5 kDa, respectively, consistent with the predicted sizes of glycosylated and nonglycosylated ACTH/γ-LPH fusions (the calculated molecular mass of the nonglycosylated fusion is 8.9 kDa).
The later eluting peaks, with HPLC retention times of 52 and 54 min, had apparent molecular masses of approximately 20 and 15 kDa, respectively, consistent with being glycosylated and nonglycosylated ACTH/β-LPH fusions (the calculated molecular mass of the nonglycosylated form of this fusion is 13 kDa). The third pattern of processing was obtained with K163M-mPOMC (-LysArg- changed to -MetArg-) (Fig. 6C). Low but detectable levels of authentic γ-LPH, β-LPH, and ACTH were produced in cells expressing this construct. Again, normal levels of β-endorphin1-31 were found. As in Rin m5F cells, the ACTH, γ-LPH, and β-endorphin IR peak from VV:K163M-mPOMC-infected chromaffin cells corresponding to intact precursor (eluting at 68 min) was retained several minutes longer on the C4 column than either native or any of the other mutant mPOMC. In addition, three novel peaks were apparent in VV:K163M-mPOMC-infected chromaffin cells, eluting as a partially resolved triplet. The earliest of these peaks (eluting at 57 min) contained only ACTH and γ-LPH IR, while the other two (eluting at 60 and 62 min) contained ACTH, γ-LPH, and β-endorphin IR. Rin m5F cells infected with this construct gave rise to a mPOMC-derived -MetArg-containing peptide that had a considerably longer retention time (15 min) than its -HisArg-containing counterpart. By analogy with the Rin m5F results, the novel triplet of peaks in the C4 profile of VV:K163M-mPOMC-infected chromaffin cells may correspond to the earlier eluting quartet of novel peptides produced by the second class of mutants (-LysLys-, -ArgLys-, and -HisArg- cleavage sites, Fig. 6B). This hypothesis was corroborated by SDS-PAGE of column fractions containing each of the three peaks. The first peak migrated on the gel as predicted for the glycosylated ACTH/γ-LPH fusion. The second peak resolved by gel electrophoresis into two ACTH and γ-LPH IR peptides, but only one of them was also immunoreactive for β-endorphin.
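The 13-kDa calculated mass of the nonglycosylated ACTH/β-LPH fusion is essentially the sum of the fragment masses quoted earlier plus the retained basic pair. A rough consistency check (the 0.284-kDa value for an uncleaved Lys-Arg pair is a standard residue-mass figure we supply; agreement is only approximate because the fragment masses in the text are rounded):

```python
# Mass of an uncleaved fusion ~= fragment 1 + retained basic pair + fragment 2.
ACTH_KDA  = 4.5    # nonglycosylated ACTH (value from the text)
B_LPH_KDA = 8.2    # calculated mass of authentic beta-LPH (from the text)
KR_KDA    = 0.284  # residue masses of an uncleaved Lys-Arg pair (assumed)

acth_blph_fusion = ACTH_KDA + KR_KDA + B_LPH_KDA
print(round(acth_blph_fusion, 1))  # ~13.0 kDa, matching the calculated value
```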
The apparent Mr of these peptides was also consistent with an ACTH/γ-LPH fusion (nonglycosylated) and a glycosylated ACTH/β-LPH fusion (i.e. the two peptides present in the second HPLC peak originating from the -MetArg- construct correspond to the peptides in the two middle HPLC peaks originating from the -ArgLys- construct). The third peak from the HPLC triplet migrated on the gel as predicted for the nonglycosylated ACTH/β-LPH fusion (data not shown). Processing of the β-Endorphin Cleavage Site Mutants-The processing patterns observed at the ACTH/β-LPH cleavage site suggested a clear preference of the chromaffin cell prohormone endoprotease(s) for specific sequences of basic amino acids; -LysArg- and -ArgArg- were more efficiently cleaved than -ArgLys- and -LysLys- at that position. To determine whether this hierarchy was position independent, processing of the four permutations of lysine and arginine was examined in the context of the β-endorphin cleavage site. As demonstrated above (Fig. 3), VV:mPOMC-infected chromaffin cells synthesized β-endorphin1-31 rather than the proteolytically processed forms (cleavage at -Lys28-Lys29) β-endorphin1-27 or β-endorphin1-26. Thus, the -LysLys- site near the COOH terminus of the molecule was not processed by these cultures (refer to Fig. 1). The mutant prohormones containing the three other permutations of lysine and arginine (refer to Fig. 5) were expressed in chromaffin cell cultures as described above. Extracts of these cells were resolved by reversed phase HPLC, and column fractions were analyzed as before. Both the level of mPOMC IR and the processing of the ACTH domain of all three constructs were identical to those of the native precursor (data not shown). When assayed for γ-LPH and β-endorphin IR, the peptide profile of each mutant prohormone was distinct, although extracts of all three cultures did contain similar levels of authentic γ-LPH and intact prohormone (Fig. 7).
With K233R-mPOMC (-LysLys- changed to -LysArg-), the primary form of β-endorphin produced co-eluted with authentic β-endorphin1-27 (46 min), the product expected if the mutant cleavage site was processed. Less than 20% of the β-endorphin eluted at the position expected for the unprocessed mutant β-endorphin1-31 (42 min). Two major β-LPH peaks (containing both γ-LPH and β-endorphin IR) were observed, eluting at 48-49 and 51 min, respectively. The earlier peak co-eluted with authentic β-LPH (containing an intact COOH-terminal cleavage site). The increased retention time of the second β-LPH-related peptide suggests removal of the four hydrophilic residues at the COOH terminus of the β-endorphin domain by processing at the mutant cleavage site. Partial proteolysis was observed with KK232RR-mPOMC (-LysLys- changed to -ArgArg-) (Fig. 7C). The major forms of β-endorphin and β-LPH eluted from the reversed phase column at the expected positions for peptides containing intact mutant cleavage sites. However, approximately 20% of the β-endorphin IR peptides consistently co-eluted with the carboxyl-shortened forms (47 and 51 min). In summary, the pattern of mutant cleavage site utilization in adrenal chromaffin cells was similar at the ACTH/β-LPH junction and within β-endorphin1-31. Neither -LysLys- nor -ArgLys- sites in mPOMC served as efficient substrates, whereas -LysArg- and -ArgArg- could both be cleaved. The efficiency of -ArgArg- processing, however, was influenced by position within the precursor: this sequence was cleaved more completely between the ACTH and β-LPH domains of mPOMC than near the COOH terminus of β-endorphin, although -LysArg- was efficiently processed at both positions. Finally, a mutant -MetArg-, but not -HisArg-, site could partially support proteolysis at the ACTH/β-LPH junction.
DISCUSSION In this report we demonstrated that primary cultures of bovine adrenal chromaffin cells express high levels of mPOMC (>1.6 pmol of mPOMC IR/10^6 cells in 24 h) when infected with a number of recombinant VV vectors. The native precursor was efficiently processed to peptides endogenous to the anterior pituitary, including ACTH, β-LPH, γ-LPH, and β-endorphin1-31 (Fig. 3). As in corticotrophs, processing occurred within the regulated secretory pathway (Fig. 4) (26). Analysis of peptides released from secretagogue-stimulated cultures demonstrated that both intact ACTH and β-LPH were final products of maturation, rather than simply processing intermediates. However, the majority of the unprocessed precursor was retained intracellularly, presumably within the endoplasmic reticulum/Golgi. Thus, the selective cleavage site utilization in transfected chromaffin cell cultures was apparently the same as in anterior corticotrophs. The -LysArg- sites flanking the ACTH domain were very efficiently processed, while the -LysArg- site within β-LPH was only partially cleaved, and both the -LysLysArgArg- tetrabasic site within ACTH and the -LysLys- site in β-endorphin remained intact. Chromaffin cells endogenously express and direct a variety of proteins containing multiple pairs of basic amino acids to the regulated secretory pathway. Of these, chromogranins A, B, and C comprise about 87% of the total soluble proteins stored in bovine chromaffin vesicles, with lesser amounts of dopamine β-hydroxylase (4%), proenkephalin-derived peptides (1%), and neuropeptide Y (0.2%) (16, 31). The extent of processing of these major vesicle soluble proteins varies markedly; dopamine β-hydroxylase, with six pairs of basic amino acids in its sequence (four are -ArgArg- and -LysArg-) (32), does not appear to be processed at all, whereas 50% of chromogranins A and C and much greater proportions of chromogranin B and proenkephalin-derived molecules are found as processed forms (31).
However, a significant proportion of the chromogranin- and proenkephalin-derived peptides present in mature chromaffin vesicles corresponds to partially processed products (14, 31, 33, 34). Thus, like the mPOMC processing reported here, cleavage site utilization of endogenous chromaffin cell peptide precursors is not obviously linked to the sequence of paired basic amino acids in the cleavage site. For example, peptide B, a prominent chromaffin vesicle proenkephalin-derived peptide, is excised from the precursor by selective cleavage of a -LysArg- doublet, while an internal -LysArg- doublet within the peptide E sequence remains unprocessed. This is similar to the positional preference for -LysArg- sequences observed during the processing of mPOMC by the cultured chromaffin cells. Thus, chromaffin cell processing activity shows positional and sequence hierarchy constraints as much in endogenously processed proproteins as in the heterologously expressed mPOMC. The mPOMC processing in chromaffin cells contrasted with the cleavage site utilization previously reported in the rat insulinoma, Rin m5F, which more closely resembled that of neurointermediate lobe melanotrophs (12). In addition to the mPOMC cleavage sites processed in the adrenomedullary cells, both the -LysArg- site in β-LPH and the tetrabasic sequence in ACTH were efficiently processed by the insulinoma cell endoprotease(s). Consistent with the more extensive processing of POMC in Rin m5F cells, the endogenous chromogranin A is similarly more efficiently processed in insulin-secreting cells than in chromaffin cells (16, 17). Together, the differential POMC processing recorded in these two heterologous cell types provides a manipulable system to study the factors responsible for the tissue specificity of prohormone processing.
To begin characterizing the cleavage site specificity and to identify an enzymatic signature of the chromaffin cell prohormone endoprotease(s), processing at a series of altered cleavage sites in mPOMC was examined (Figs. 6 and 7). All four permutations of lysine and arginine (-LysArg-, -ArgArg-, -LysLys-, and -ArgLys-) were introduced at two positions in the precursor: the ACTH/β-LPH junction and near the COOH terminus of β-endorphin. In two additional constructs, -HisArg- and -MetArg- were substituted for the efficiently cleaved -LysArg- at the ACTH/β-LPH junction. Despite the differential mPOMC processing by chromaffin medullary cells and Rin m5F cells, the two cell types displayed the same sequence selectivity at all mutant sites: -LysArg- and -ArgArg- sequences were preferentially cleaved, whereas -ArgLys-, -LysLys-, and -HisArg- directed cleavages were very inefficient. In addition, the lack of cleavage at -HisArg- is consistent with the lack of processing of this POMC mutant in Rin m5F cells (12), as well as with the processing of similar naturally occurring mutant sites; neither proalbumin Lille (-HisArg-) (35) nor a mutant proinsulin (-LysHis-) (36) is cleaved in vivo. Since histidine should be charged in the acidic environment of maturing secretory granules where processing is thought to occur, a positive charge followed by an arginine is apparently insufficient for recognition by either the insulinoma or the chromaffin cell enzyme(s). The partial processing of a -MetArg- site in K163M-mPOMC by the chromaffin cell enzymes is also in agreement with our previous studies in Rin m5F cells (12). It is not known whether the -MetArg- doublet is processed by a single basic-directed enzyme or whether the long unbranched side group of methionine, a property shared only with lysine and arginine, provides a critical structural component for recognition by a paired basic endoprotease.
The only apparent difference between the mutant cleavage site utilization in chromaffin cells and insulinoma cells was the extent of processing at the two "partially cleaved" sites. While the extent of cleavage of the mutant -MetArg- site in K163M-mPOMC (ACTH/β-LPH) and of -ArgArg- in KK232RR-mPOMC (β-endorphin) was only 20-25% in the chromaffin cells, close to 50% processing was observed in Rin m5F cells (12). For all the mutant precursors, the effect on processing was restricted to the altered cleavage site. Neighboring sites were apparently cleaved as in the native prohormone. For example, the proportion of intact β-LPH to processed γ-LPH and β-endorphin (somewhat greater than 1:1) was essentially the same for all constructs, independent of the extent of processing at either the ACTH/β-LPH junction or the COOH terminus of β-endorphin. However, processing at the native -LysArg- within β-LPH did appear to influence cleavage of the mutant -LysArg- in K233R-mPOMC. While greater than 80% of the β-endorphin produced in VV:K233R-mPOMC infected chromaffin cells had been carboxyl-shortened to β-endorphin1-27, only about 50% of the β-LPH had apparently been processed at the mutant site. The simplest explanation for the differential processing of native mPOMC by chromaffin and insulinoma cells would be the tissue-specific expression of endoproteases having distinct substrate specificities. However, the identical pattern of mutant cleavage site utilization in the two cell types suggests a similarity in the enzymes which performed these reactions. Consistent with this hypothesis is the recent identification of three DNA sequences, fur, PC2, and PC3 (or PC1), whose predicted translated products share significant structural homology with the yeast KEX2 precursor protein endoprotease and which are co-expressed in pituitary, pancreas, and adrenal gland (37-40).
The fur gene, which is expressed in a wide variety of tissues and cell lines, encodes a Golgi-localized KEX2-like endoprotease, furin, which can efficiently process pro-β-nerve growth factor and pro-von Willebrand factor in the constitutive secretory pathway of mammalian cells (40,41). In contrast, expression of PC2 and PC3 is apparently restricted to endocrine and neural tissues, including adrenal medulla and insulinoma cells as well as the anterior and neurointermediate lobes of pituitary (38,39). Whether furin, PC2, and/or PC3 can correctly process POMC in the regulated secretory pathway is currently being addressed. If chromaffin and insulinoma cells do indeed have one or more prohormone endoproteases in common, either of two mechanisms could account for the differential processing of ACTH and β-LPH. First, Rin m5F cells may express additional enzymes not present in chromaffin cells (nor in anterior pituitary corticotrophs; J. Hayflick and G. Thomas, unpublished observations) which efficiently cleave the -LysLysArgArg- sequence in ACTH and the internal -LysArg- sequence in β-LPH. A second possibility would be modulation of cleavage site accessibility. Instead of regulating enzyme expression, such a mechanism would rely on controlling the prohormone/enzyme interaction, potentially through post-translational modifications, complex formation with accessory proteins, or other adjustments in the microenvironment of processing compartments. For example, O-glycosylation has been implicated in modulating the tissue-specific excision of γ3-melanocyte-stimulating hormone from the NH2 terminus of mPOMC (42). Perhaps processing of ACTH and β-LPH is more efficient in Rin m5F cells because these cleavage sites are presented in a more accessible conformation than in chromaffin cells. Thus, a possibility which must be considered is that tissue-specific processing is a two-tiered mechanism.
Regulation of factors which influence cleavage site accessibility and enzyme specificity and/or rates of catalysis may act in conjunction with differential expression of a small core of processing enzymes. Although the biochemical basis of tissue-specific processing remains conjectural, the development of insulinoma and adrenal chromaffin cell cultures as a readily manipulable model system provides the means of addressing many of these fundamental questions.
Different sensitivities of vasoconstrictor responses to serotonin and KCl of isolated and perfused dog mesenteric arteries with and without endothelia.

The stainless steel cannula inserting method was used to investigate the effects of serotonin on isolated and perfused dog mesenteric arteries with and without intraluminal saponin treatment. By intraluminal administration, serotonin and potassium chloride caused dose-related vasoconstrictions. After intraluminal treatment with 3 mg of saponin, the potassium chloride-induced vasoconstrictor response was significantly enhanced, whereas the serotonin-induced one was not potentiated but rather slightly reduced.

Although serotonin reaches all parts of the body, since mammalian blood contains it, it has no recognized physiological role on blood vessel walls. Recently, we demonstrated that potassium chloride (KCl)-induced vasoconstriction was potentiated by removal of the endothelium in isolated and perfused arteries, produced by intraluminal administration of saponin (1, 2). Moreover, CaCl2-induced vasoconstriction was also significantly enhanced after removal of the endothelium (2). On the other hand, phenylephrine-induced constriction was not potentiated by removal of the endothelium (2). In isolated arterial ring preparations, it was reported that serotonin-induced constriction was potentiated by removal of the endothelium (3, 4). Thus, those authors considered that serotonin may contribute to certain pathological conditions by inducing strong vasoconstriction after damage to the endothelium. In the present study, using isolated and perfused mesenteric arteries prepared by the method developed by Hongo and Chiba (5) and modified by Tsuji and Chiba (6), we investigated whether the constrictor responses to serotonin differed with and without the endothelium in the same arterial vasculature.
Six mongrel dogs of either sex, weighing 8-18 kg, were anesthetized with sodium pentobarbital (30 mg/kg, i.v.), and the dogs were sacrificed by rapid exsanguination from the right common carotid artery. Arteries (which supplied the large middle portion of the small intestine) that are median branches of the cranial mesenteric artery were carefully isolated. Isolated arteries selected for study were 10-15 mm in length and 0.8-1.3 mm in outer diameter, and they were cannulated as described previously (5, 6). The isolated, cannulated artery was placed in a bath maintained at 37 °C and perfused with Krebs solution by means of a peristaltic pump. The perfusion solution was bubbled with 95% O2 and 5% CO2, which maintained the pH between 7.2 and 7.4. The flow rate was initially adjusted so that the perfusion pressure was between 50-100 mmHg; subsequently, it was kept constant throughout the experiment (0.5-2 ml/min). The vasoconstrictor response was, therefore, observed as an increase in perfusion pressure. Drugs used were saponin (Kanto Chem. Co.), serotonin creatinine sulfate (5-hydroxytryptamine, Sandoz), and potassium chloride. The drug solution was administered into the rubber tubing close to the cannula in a volume of 0.01-0.03 ml by use of a microinjector (Terumo Co.). On the other hand, the KCl-induced constriction was apparently potentiated in the same preparations. Figure 1 shows a typical experiment of vasoconstrictor responses to serotonin and KCl before and after intraluminal treatment with 3 mg of saponin in the same arterial preparation. As shown in Fig. 1, the vasoconstrictor response to 1 mg of KCl was markedly enhanced from 20 to 150 mmHg in maximum increase, but that to 0.1 μg of serotonin was rather depressed, from 200 to 160 mmHg. Summarized data are shown in Fig. 2. Previously, the cannula inserting method for isolated vessels was developed and modified (5, 6).
By use of this method, various regions of arterial vessels were perfused, and the effects of serotonin were examined. Serotonin usually caused a transient constriction in a dose-related manner, although the potencies were different in different regional vessels (5-10). More recently, we demonstrated that a bolus of intraluminal saponin (1-3 mg) readily caused removal of the endothelium in the isolated arterial vessels (1, 2). After removal of the endothelium, vasoconstrictor responses to CaCl2 and KCl were significantly enhanced in the same preparations (1, 2), suggesting that calcium ions may readily enter the vascular smooth muscle cell in the absence of the endothelium. On the other hand, phenylephrine-induced vasoconstriction was not potentiated by removal of the endothelium (2). In the present study, serotonin-induced constriction was not potentiated, although the KCl-induced one was obviously enhanced by treatment with intraluminal saponin. Cocks and Angus (3) reported that removal of the endothelium from both dog and pig coronary artery rings shifted the concentration-contraction curves for serotonin and norepinephrine to the left and increased their respective maxima compared with those of rings in which the endothelium remained intact. They also reported that the concentration-contraction curves for KCl were decreased by removal of the endothelium. They considered that serotonin released a vasodilator substance from endothelial cells. Cohen et al. (4) also reported that serotonin-induced contractions were larger in the absence of the endothelium in isolated canine coronary artery rings, whereas those caused by phenylephrine and KCl were not. They considered that the vasodilator response to serotonin mediated by the endothelium was initiated at serotonergic receptors on endothelial cells.
Thus, it has been postulated that serotonin-induced constriction may be enhanced by the disappearance of endothelium-dependent relaxation after removal of the endothelium. However, those authors could not explain the reason for the decreased constriction to KCl upon removal of the endothelium. In the present experiments, our results differed from theirs. We considered the reasons for the differences from the previous reports (3, 4). Since Cocks and Angus and Cohen et al. (3, 4) used coronary arteries from greyhounds, mongrel dogs, and pigs, it cannot be ruled out that different kinds of arteries give different results, in addition to the different methods for making the isolated arterial preparations. We need to confirm the effects of serotonin on coronary arterial preparations by the present method in the future. As KCl- or CaCl2-induced constrictions were potentiated in the absence of the endothelium in perfused preparations (1, 2), we consider that serotonin-induced constriction may be due to an increase in the release of Ca ions from intracellular stores, rather than to an increase in the entry of Ca ions from the extracellular space, and that a relatively large amount of saponin may act directly on vascular smooth muscle, which in turn causes the slight suppression of the serotonin-induced constriction.
Polymer composites based on oligosulfones

Halogen-containing oligosulfones based on 1,1-dichloro-2,2-di(4-hydroxyphenyl)ethylene and 4,4′-dichlorodiphenylsulfone of various degrees of condensation were synthesized in solution by high-temperature polycondensation. Using the obtained oligosulfones, we carried out a physical modification of industrial bisphenol A polycarbonate over a wide range of concentrations. The physical and mechanical properties of the composites were investigated. The compatibility of the obtained oligosulfones with polycarbonate was studied using viscometry, differential scanning calorimetry, and probe microscopy. It was shown that the introduction of oligosulfones into the polycarbonate matrix promotes an increase in the glass transition temperature, found by differential scanning calorimetry, by 3 to 200 °C depending on the composition; the composites have good dielectric and technological properties. The temperature dependences of the dielectric properties of PC and the PC-based composites with different oligosulfone contents are characterized by the presence of a single.

Introduction

Recently, a noticeable expansion of the applications of rigid-chain glassy polymers, which are characterized by low resistance to cracking, is largely due to the development of composites based on them. Bisphenol A polycarbonate (PC), along with a complex of valuable properties, has a number of disadvantages which significantly limit its fields of application: in particular, high internal (residual) stresses, which lead to cracking of products during operation, slow relaxation processes, low adhesion, low resistance to alkaline environments, and high melt viscosity, with the associated processing difficulties [1-5]. Despite numerous scientific studies on the modification of polycarbonate, its assortment is insignificant. Numerous attempts to modify polycarbonate with low-molecular-weight compounds did not give positive results.
Over 50 compounds of various classes have been tested. However, to date, the modification of polycarbonate in order to improve a number of its performance characteristics remains relevant. In this direction, the most promising route is physical modification, i.e., the development of compositions based on polycarbonate and various modifiers. In addition, it is known that the introduction of halogen atoms into a polymer contributes to increased fire resistance [6-11]. In order to create polymer composites based on polycarbonate, oligosulfones based on 1,1-dichloro-2,2-di(4-hydroxyphenyl)ethylene and 4,4'-dichlorodiphenylsulfone were synthesized, and some properties of the composites were studied. Into a three-necked flask equipped with a stirrer, a reflux condenser with a Dean-Stark trap, a gas-supply bubbler, and a thermometer are charged 5.62281 g (0.02 mol) of 1,1-dichloro-2,2-di(4-hydroxyphenyl)ethylene, 40 ml of dimethyl sulfoxide (DMSO), and 30 ml of toluene. With stirring, nitrogen is passed in and the temperature is raised to 70 °C. After complete dissolution of the 1,1-dichloro-2,2-di(4-hydroxyphenyl)ethylene, 3.98406 ml of 10.04 N sodium hydroxide solution is added. The temperature is raised to 140-145 °C and the azeotropic toluene-water mixture is distilled off until the water is completely removed. The reaction mass is cooled to 40-50 °C and 2.87294 g (0.01 mol) of 4,4'-dichlorodiphenylsulfone is added. The reaction is carried out at 140-145 °C for 2 hours. The resulting mass is diluted with 10 ml of DMSO and precipitated into acidified distilled water. The precipitate formed is filtered off and washed with distilled water until the filtrate gives a negative test for chloride ion. The resulting oligoether sulfone is dried at 100 °C under vacuum for 24 hours. An Ubbelohde viscometer was used to determine the intrinsic viscosity.
The concentration of the solutions was determined by the formula c = a·V1 / [V·(V1 + V2)], where a is the polymer sample weight, g; V, V1, V2 are, respectively, the volumes of the solvent used to prepare the solution, of the initial solution placed in the viscometer, and of the solvent added to the viscometer upon dilution, ml.

Study of the surface of the samples on a scanning probe microscope. The sample is mounted on a polycore substrate 20×25×10 mm in size and then attached to the scanner in the horizontal position. A measuring head with removable probes is installed above the sample. Rapid approach of the probe is carried out by a stepper motor in about 1-3 minutes. The measurement time depends on the speed and field of the scan and is approximately 0.5-5 minutes.

Melt indices of the polymers were measured with a constant-piston capillary viscometer at temperatures of 220-280 °C. Every 5 minutes, the extruded melt was cut from the capillary with a knife and weighed. The test result was taken as the arithmetic average of two determinations on three pieces of material, the difference in weight between which did not exceed 5%. The melt flow rate (melt index) was calculated by the formula MFR = 10·Q/t, where Q is the weight of the extruded polymer, g; t is the extrusion time, min. The main properties of these oligosulfones are described in detail in [12].

Unambiguous methods of studying compatibility in polymer-plasticizer and polymer-polymer systems are difficult to find. In the bulk state, compatible polymers form transparent films and fibers which, in a phase-contrast microscope at high magnification or in an electron microscope, do not show a heterogeneous structure under any method of contrasting. In addition, blends of compatible polymers should have the same glass transition temperature regardless of the research method. These criteria are, in principle, unambiguous criteria for compatibility, but in practice there may be some difficulties in using them.
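The two working formulas above can be sketched numerically. This is a minimal illustration; the function names and sample values are hypothetical and not taken from the paper, and the melt flow rate is expressed in the conventional units of g/10 min.

```python
def solution_concentration(a, V, V1, V2):
    """Concentration of the diluted solution in the viscometer, g/ml.

    The stock holds `a` grams of polymer in `V` ml of solvent; `V1` ml of
    stock is diluted with `V2` ml of solvent, so c = a*V1 / (V*(V1 + V2)).
    """
    return a * V1 / (V * (V1 + V2))


def melt_flow_rate(Q, t):
    """Melt flow rate in g/10 min: Q grams extruded over t minutes."""
    return 10.0 * Q / t


# 0.5 g of polymer in 50 ml of solvent; 10 ml of stock diluted with 10 ml
print(solution_concentration(0.5, 50.0, 10.0, 10.0))  # 0.005 g/ml
# 1.2 g of melt extruded in 5 min
print(melt_flow_rate(1.2, 5.0))  # 2.4 g/10 min
```

Note that the concentration relation is simply the stock concentration a/V scaled by the dilution factor V1/(V1 + V2).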
In particular, incompatible polymers form transparent films if the refractive indices of both polymers are the same, or if the refractive indices differ but the polymers form a two-layer film that appears transparent when it is obtained by evaporating the solvent from the polymer solution. Such cases are not particularly difficult to analyze, since transparent films from blends of incompatible polymers are characterized by two glass transition temperatures corresponding to the glass transition temperatures of the components, provided the latter differ sufficiently and can be resolved by the research method used. The above makes it necessary to treat any measurements on polymer blends critically, since it is very difficult to determine whether the mixture is in an equilibrium state. Usually it is assumed, without evidence, that a film cast from a solution of a polymer mixture is in a more equilibrium state than samples obtained by mixing the polymers in the bulk state. All of the above predetermined the need to determine the compatibility of polycarbonate with the obtained oligosulfones. To determine the compatibility, as well as the nature of the distribution of the synthesized oligosulfones in the PC matrix, we used scanning probe microscopy. Studies have shown that the properties of a surface depend on its structure, as illustrated in Figure 1. In samples with a modifier content of up to 10 wt.%, there is good compatibility between the components. The oligosulfones are evenly distributed in the polycarbonate matrix. The particle size of the modifier is from 0.31 to 3.13 μm and depends on the composition. Subsequently, the oligosulfone with a degree of polycondensation n = 5 (OS-5C-2) was used to modify PC.

Figure 1. SPM photographs of PC compositions with OS-5C-2 containing 5 (a) and 10 (b) wt.%.

X-ray phase analysis showed that all the composite compositions are amorphous.
To determine the compatibility of the obtained oligosulfones, we also used the viscometric method. The studies were carried out in an Ubbelohde viscometer at a temperature of 25 °C. At this temperature, polymer-solvent systems containing from 1 to 10 wt.% of oligosulfone were investigated. Methylene chloride was used as the solvent. It turned out that for these systems the maximum compatibility is observed at a ratio of 95 wt.% PC to 5 wt.% OS-5C-2. The data are presented in Figure 2. The glass transition temperatures of the composites found by differential scanning calorimetry are shown in Figure 3. As can be seen from the figure, Tg increases with increasing oligosulfone content in the composition. The increase in the glass transition temperature probably indicates interaction of the filler with the matrix. For the compositions obtained, the temperature dependences of the dielectric constant (ε′) and the dielectric loss tangent were studied at a frequency of 10^4 Hz. The values of the dielectric constant of all the investigated composite samples are ~3-3.6 and are stable in the temperature range from 20 to 200 °C. The curves of the dielectric properties as a function of temperature for PC and the PC-based composites with different oligosulfone contents have the same dielectric loss tangent. The values of ε′ for PC and all the composites do not depend on the composition and, within the limits of error, coincide, corresponding to values of 2.4-2.5. Comparison of the melt indices of the original PC and the composites based on it (Figure 4) showed that the introduction of small amounts of OS-5C-2 into PC significantly affects the melt flow rate (MFR) of these composites. Thus, the introduction of only 1-3 wt.% OS increases the MFR of PC by a factor of 1.5-2. Moreover, the introduction of small amounts, up to 5 wt.% OS, is the most effective.
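The viscometric study above rests on extracting intrinsic viscosity from Ubbelohde efflux times. A minimal sketch of the standard workflow, a Huggins-type linear extrapolation of the reduced viscosity to zero concentration, is given below; the dilution-series numbers are illustrative, not measured data from this work.

```python
import numpy as np

def intrinsic_viscosity(conc, t_solution, t_solvent):
    """Estimate [eta] by linear extrapolation of the reduced viscosity to c -> 0.

    conc       : concentrations of the dilution series, g/dl
    t_solution : efflux times of the solutions, s
    t_solvent  : efflux time of the pure solvent, s
    """
    conc = np.asarray(conc, dtype=float)
    eta_sp = np.asarray(t_solution, dtype=float) / t_solvent - 1.0  # specific viscosity
    eta_red = eta_sp / conc                                         # reduced viscosity
    slope, intercept = np.polyfit(conc, eta_red, 1)                 # Huggins plot fit
    return intercept  # [eta], dl/g

# Illustrative dilution series (assumed values, solvent efflux time 100 s)
c = [0.2, 0.4, 0.6, 0.8]          # g/dl
t = [118.0, 137.0, 157.0, 178.0]  # s
print(round(intrinsic_viscosity(c, t, 100.0), 3))  # 0.875
```

For blend compatibility screening, the same extrapolation is simply repeated for each PC/OS ratio and the deviation of [eta] from additivity is examined.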
All the composites obtained dissolve in inexpensive and easily volatile solvents, for example, chloroform, methylene chloride, dioxane, dimethylformamide, etc. This will allow their processing into film products from solutions, for example, in available and volatile methylene chloride.

Conclusion

Composite polymeric materials based on aromatic bisphenol A polycarbonate and oligosulfones of various molecular weights synthesized from 1,1-dichloro-2,2-di(4-hydroxyphenyl)ethylene and 4,4'-dichlorodiphenylsulfone were developed. The study of some properties of the PC compositions with oligosulfone showed that these OS can be used to improve some of the performance properties of polycarbonate. The compatibility of the obtained OS with industrial polycarbonate has been studied by various methods. It was shown that the introduction of oligosulfones into the polycarbonate matrix promotes an increase in the glass transition temperature by 3 to 200 °C, depending on the composition. The resulting composites have good dielectric properties. Comparison of the flow rates of the PC and composite melts under various processing conditions allows us to conclude that these composites can be processed by extrusion and injection molding under milder conditions than bisphenol A polycarbonate.
Variant NAXOS-Carvajal Syndrome with Rare Additional Features of Systemic Bulla and Brittle Nails: A Case Report and Literature Review

Intern Med 60: 1119-1126, 2021. DOI: 10.2169/internalmedicine.5899-20

Skin abnormalities are often indicative of cardiovascular diseases. Such a disease entity is called cardiocutaneous syndrome; however, the details regarding the involvement of bulla and nails remain largely unclear. A 49-year-old man with systemic bulla was admitted for heart failure. His bulla had previously been diagnosed as epidermolysis bullosa, but no known gene mutations for it had been identified. He had a triad of palmoplantar keratosis, curly and fine hair, and cardiomyopathy, which are characteristic of NAXOS-Carvajal syndrome. This case highlights the fact that bulla and brittle nails can accompany NAXOS-Carvajal syndrome, showing that these extra-cardiac findings can help identify otherwise overlooked serious cardiac conditions.

Introduction

Like a diagonal earlobe crease indicating atherosclerotic cardiovascular disease, apparently isolated skin abnormalities can be a clue suggesting systemic disease (1). Specifically, conditions in which cardiac and skin disorders coexist are termed cardiocutaneous syndromes (CCS), regardless of the degree of causality (2). This disease entity includes NAXOS disease and Carvajal syndrome, or NAXOS-Carvajal syndrome, in which cardiomyopathy of the right, left, or both ventricles occurs with hyperkeratosis and woolly hair in a hereditary manner (3,4). Various gene mutations in the desmosome complex have been identified as common underlying causes of these diseases, with or without additional features (5-7), although several unidentified genes remain. We herein report a novel familial case demonstrating the NAXOS-Carvajal phenotype with rare additional features of systemic bulla and brittle nails, together with a literature review. To our knowledge, this is the first report of bullous cardiocutaneous syndrome that is unrelated to desmoplakin, the most critical protein in the desmosome complex.

Case Report

A 49-year-old man with dyspnea and systemic bulla was admitted to our hospital due to heart failure (HF). His skin lesions had persisted for more than 25 years and had been diagnosed as epidermolysis bullosa (EB). He had no other comorbidities or allergies, nor was he taking regular medicine. His parents were nonconsanguineous; however, he had a family history of heart disease, blisters, and curly hair (Fig. 1). His mother had died of HF. Two of his siblings had died: one shortly after birth due to an unknown cause, and the other in his 20s due to dilated cardiomyopathy accompanied by systemic blisters. The other two siblings were alive and free of HF but suffered from premature senility and systemic blisters (one each). The patient's son had transposition of the great arteries without any skin disease. On examination, the patient's blood pressure was 144/102 mmHg, his heart rate was 95 bpm, and his oxygen saturation was 98%. His jugular vein was distended, and pitting edema was observed. A mixture of non-purulent bulla in association with erosion, erythema, and pigmentation was found throughout his body (Fig. 2). His scalp hair had been fine and curly since birth. His hair had changed from black to brown during adolescence, as had his father's, his two siblings', and his daughter's. Mild focal keratosis was found on his palms and soles. His toenails were brittle and mostly detached. His fingernails were thick, white, and dystrophic (Fig. 3). The patient's teeth were normal. His plasma brain natriuretic peptide (BNP) level was 2218 pg/mL, serum creatinine 0.89 mg/dL, and C-reactive protein 0.88 mg/dL. Serum antibodies against desmogleins and BP180 were negative.
Chest X-ray showed marked cardiomegaly and vascular redistribution (Fig. 4). An electrocardiogram revealed low voltage, a first-degree atrioventricular block, and epsilon waves (Fig. 5a). On echocardiography, the right ventricle was dilated to 76 mm, and its fractional area change was 11% (Fig. 6). Severe tricuspid regurgitation was also observed. The left ventricle was enlarged to 63 mm with a flattened ventricular septum. The ejection fraction had fallen to 21%. In response to diuretic therapy with oral furosemide 40 mg/day and spironolactone 25 mg/day, the pulmonary and peripheral edema resolved within days. The patient then underwent a diagnostic workup for HF. Ventricular late potentials were positive on a signal-averaged electrocardiogram (Fig. 5b). Coronary angiography revealed no significant stenosis. Computed tomography and magnetic resonance imaging showed fatty infiltration into the ventricular septum and extensive fibrosis in the left ventricle, respectively (Fig. 7a, b). Endomyocardial biopsy specimens from the right ventricular septum exhibited fibro-fatty replacement in approximately half of the area (Fig. 7c). These findings collectively led to the definite diagnosis of arrhythmogenic right ventricular cardiomyopathy (ARVC) with left ventricular involvement, meeting two major and two minor criteria (8). The patient experienced non-sustained ventricular tachycardias during admission and underwent implantation of a cardioverter-defibrillator as a class I indication (9). He was discharged on day 31, and his BNP level decreased to 482 pg/mL. A continued dermatological examination in an outpatient setting provided a definite diagnosis of junctional EB (JEB). Candidate genes for JEB, including the desmoplakin gene (DSP) and desmoglein genes (DSG), were extensively investigated; however, no mutations were identified. Electron microscopy identified no changes indicative of specific disorders.
Discussion

Disorganized desmosome complexes impair cellular integrity and the ability to accommodate stress. This induces disorders in multiple organs, especially those susceptible to stress, such as the heart and the skin (10). Three desmosomal genes have been identified as causing this type of cardiocutaneous syndrome: the plakoglobin gene (JUP), causing NAXOS disease; the DSP, causing Carvajal syndrome; and the desmocollin-2 gene (DSC2) (11). Although small differences exist, they share the characteristic triad of cardiomyopathy, palmoplantar keratosis, and woolly hair. Diseases compatible with the triad are thus called NAXOS-Carvajal syndrome and are thought to be associated with desmosomal gene disruption. Cardiomyopathy in NAXOS disease is characterized by right-dominant ventricular dilatation, hypokinesis, and tachyarrhythmia, which are compatible with ARVC (3). In contrast, Carvajal syndrome predominantly involves the left ventricle, resembling dilated cardiomyopathy (4). This ventricular preponderance initially served to define each syndrome. However, the distinction was later considered ambiguous, as even mutations in the same gene or within the same gene family can affect both ventricles (11,12). Similarly, ARVC, originally regarded as a pure right ventricular disease, was later found to involve the left ventricle. Such variant ARVC was once named left-dominant arrhythmogenic cardiomyopathy (LDAC) (13). Now these diseases may be collectively called arrhythmogenic cardiomyopathy, as the same gene can affect both ventricles (14). In this context, the biventricular cardiomyopathy in the present case was diagnosed as a common cardiac presentation of NAXOS-Carvajal syndrome. In addition, the myopathy can also be diagnosed as ARVC and LDAC, or arrhythmogenic cardiomyopathy. Blisters have been reported as a cutaneous variant in only four cases (Table 2) (12,18,19,27).
Common features were homozygous DSP mutations, biventricular and nail involvement, and mild keratosis. While the blisters varied in size and distribution, the involvement of teeth was indeterminate. The findings of the present case were consistent with these characteristics. However, the present case was novel in that the patient lacked a DSP mutation and exhibited relatively large bullae distributed throughout his body. The older age at the onset of HF in our case not only suggests a better prognosis but also has pathophysiological implications. The age at onset in patients with a DSC2 mutation is also older than in those with DSP mutations. This may be better explained by the interaction between desmocollin and desmoplakin than by the direct disruption of desmocollin (11). Thus, in addition to mechanical disruption of the desmosome complex, altered cell signaling pathways between desmoplakin and other desmosomal components or factors associated with desmosomes may underlie disease formation. There are only three known causative genes for NAXOS-Carvajal syndrome, but as exemplified by the case with a DSC2 mutation, all of the genes related to the desmosome complex have the potential to induce the phenotype. Indeed, there have been several reports showing the NAXOS-Carvajal phenotype in which the responsible gene was unclear but was not DSP (22,23). Furthermore, the genotype-phenotype association varies according to the site or mode of the mutation among cases with DSP mutations. These facts underscore the genetic heterogeneity of this syndrome. As we only examined DSP and DSGs in the present case, a thorough investigation of other related genes may elucidate the precise mechanism. Given the aggregation patterns of curly hair and HF, the disease may be transmitted through autosomal-dominant inheritance.
This notion is consistent with the fact that most cases of variant Carvajal syndrome with additional abnormalities of the teeth are autosomal-dominant, whereas classical ones are autosomal-recessive (25). However, this is only speculative, and other possibilities, including de novo or compound heterozygous mutations with or without consanguinity, may underlie the disease expression. Furthermore, reduced penetrance or the Lyon hypothesis can also affect the phenotypic expression, both of which are underrepresented in NAXOS-Carvajal syndrome with additional features. Genes causing EB may also affect multiple organs, including the heart, as in cases with lethal acantholytic EB that present with cardiomyopathy or HF (28,29). EB is classified into four subcategories: EB simplex, JEB, dystrophic EB, or a mixture thereof (30). As electron microscopy findings were indeterminate for the classification, JEB was chiefly diagnosed by physical findings. The presence of nail dystrophy and the lack of palmoplantar bullae were inconsistent with EB simplex, while the lack of scarring or milium on and around the healed bullae contradicted dystrophic EB. We therefore examined the gene abnormalities known to induce JEB. Two independent dermatologists clinically established the diagnosis. However, an extensive analysis identified no gene mutations for major JEB subtypes, except for two rare variants without heart involvement. This suggested that JEB-related genes did not contribute to the NAXOS-Carvajal phenotype in the present case. This case highlights the fact that NAXOS-Carvajal syndrome can be accompanied by additional bullous lesions and brittle nails through unknown inheritable gene mutations or modes of transmission. This case also demonstrates that bullae and brittle nails can serve as critical clues for identifying serious cardiac conditions that may otherwise go undetected.
Shadow of Schwarzschild-Tangherlini black holes

We study the shadow cast by the higher-dimensional (HD) Schwarzschild-Tangherlini black hole, and analytically calculate the influence of extra dimensions on the shadow of a black hole. A black hole casts a shadow as an optical appearance because of its strong gravitational field, which for a Schwarzschild black hole is known to be a dark zone bounded by a circle. We demonstrate that the null geodesic equation can be integrated by the Hamilton-Jacobi approach, which enables us to investigate the shadow cast by the HD Schwarzschild-Tangherlini black holes. Interestingly, it turns out that, for fixed values of the mass parameter, the shadows in HD spacetimes are smaller when compared with the 4D Schwarzschild black hole. Further, the shadows of HD Schwarzschild-Tangherlini black holes are concentric circles whose radius decreases with increasing $D$. We visualize the photon regions and the shadows in various dimensions for different values of the parameters, and the energy emission rates are also investigated. Our results, in the limit $D=4$, reduce exactly to the Schwarzschild black hole case.

I. INTRODUCTION

There is great interest in investigating the nature of black holes, i.e., the mass and spin of a black hole, which can possibly be determined by observation of the black hole shadow [1][2][3][4]. It is now a general belief that a black hole, if it is in front of a luminous background, will cast a shadow. To observe the shadow of the black hole at the center of the Milky Way, one will be looking for a ring of light around a region of darkness, which is called the black hole's shadow. That light is produced by matter circling at the very edge of the event horizon, and its shape and size are determined by the black hole's mass and spin. A shadow is the optical appearance cast by a black hole, and its existence was first studied by Bardeen [5].
The shadow of a nonrotating black hole is circular [6], while that of a rotating black hole is a distorted circle due to the presence of the spin parameter [5,7,8]. It was Synge [9] who studied the shadow of Schwarzschild black holes, and thereafter Luminet [10] discussed the optical properties of the static, spherically symmetric Schwarzschild black hole and constructed a simulated photograph of the shadow. Later, the subject received significant attention and has become a quite active research field (for a review, see [11]). The shadow of the Schwarzschild black hole [12] and of other spherically symmetric black holes [13] has been intensively studied, and the analysis has been extended to rotating black holes [14][15][16][17][18][19][20][21][22][23][24][25]. Over the past decade there has been increasing interest in the study of black holes, and related objects, in HD, motivated to a large extent by developments in string theory. Recently, it has been proposed that our universe may have emerged from a black hole in a higher-dimensional Universe. Black holes are very interesting gravitational as well as geometrical objects to study in 4D and may also exist in HD spacetimes. At present, work on HD can probably be most fairly described as extended theoretical speculation, with no direct observational or experimental support, in contrast to 4D general relativity. However, this theoretical work has led to the possibility of proving the existence of extra dimensions, which is best demonstrated by Reall and Emparan's 'black ring' solution in 5D [26]. If such a 'black ring' could be produced in a particle accelerator such as the Large Hadron Collider, this would provide evidence that extra dimensions exist.
There are other reasons for this interest in HD black holes [27][28][29][30][31]: e.g., the statistical calculation of black hole entropy using string theory was first done for certain 5D black holes [32], and there is the possibility of producing tiny HD black holes at the LHC in certain brane-world scenarios [33]. The study of shadows has been extended to HD spacetimes by several researchers; e.g., Papnoi et al. [34] studied the shadow cast by 5D rotating Myers-Perry black holes, rotating black holes in pure Gauss-Bonnet gravity were studied in [35], and shadows have also been considered in Kaluza-Klein gravity [19]. However, the results of these works do not go over to the Schwarzschild black hole. Hence, it is pertinent to investigate the apparent shape of HD Schwarzschild-Tangherlini black holes, to visualize the shape of the shadow, and to compare the results with images for the 4D Schwarzschild black hole. The apparent shape of a black hole is determined by the boundary of the shadow, which can be studied via the null geodesic equations. Clearly, the extra spacetime dimensions change the equations of motion, which may lead to a modification of the black hole shadow. The paper is organized as follows: in Sect. II, we review the Schwarzschild-Tangherlini black hole solutions and present the associated thermodynamical quantities. In Sect. III, we present the particle motion around the Schwarzschild-Tangherlini black hole by using the Hamilton-Jacobi approach necessary to discuss the shadow. The observables are introduced in Sect. IV to plot the apparent shapes of the black hole shadows, and finally, in Sect. V, we conclude with the results.

II. THE SCHWARZSCHILD-TANGHERLINI SPACETIME

We present the basic framework for general relativity in HD, and introduce the Schwarzschild-Tangherlini solutions that generalize the 4D Schwarzschild solution to HD.
The action in HD spacetime reads

S = \frac{1}{16\pi G_D} \int d^D x \sqrt{-g}\, R.

This is a straightforward generalization of the Einstein-Hilbert action to HD, and the only aspect that deserves some attention is the implicit definition of Newton's constant G_D in HD; without loss of generality we use units such that G_D = c = 1. Using the variational principle, one obtains the vacuum Einstein equations in HD as

R_{ab} - \frac{1}{2} R\, g_{ab} = 0,  (2)

where R_{ab}, R and g_{ab} are respectively the Ricci tensor, Ricci scalar and metric tensor. Tangherlini [36] found the asymptotically flat, static and spherically symmetric vacuum solution of (2) as a generalization of the Schwarzschild black hole,

ds^2 = -\left(1 - \frac{\mu}{r^{D-3}}\right) dt^2 + \left(1 - \frac{\mu}{r^{D-3}}\right)^{-1} dr^2 + r^2 d\Omega_{D-2}^2,  (3)

where d\Omega_{D-2}^2 is the metric on the (D-2)-dimensional unit sphere. The parameter μ is related to the black hole mass M by

\mu = \frac{16\pi M}{(D-2)\,\Omega_{D-2}},

where Ω_{D-2} is the volume of the (D-2)-dimensional unit sphere, given by

\Omega_{D-2} = \frac{2\pi^{(D-1)/2}}{\Gamma\left(\frac{D-1}{2}\right)}.

This suggests that the Schwarzschild solution generalizes to HD in the form given by the metric (3). The first thing to be noticed is that this simplifies to the usual Schwarzschild metric when D = 4. Secondly, the HD version is very similar to the 4D one; the notable difference is that in HD the fall-off term 1/r is replaced by 1/r^{D-3}. As shown by Tangherlini [36], this turns out to give the correct solution: the metric (3) is indeed Ricci flat, and this solution is called the Schwarzschild-Tangherlini solution. As can be seen from Eq. (3), there is no black-hole solution in D = 3. If the mass parameter μ < 0, we get a naked singularity, which is not physical. If μ > 0, the black hole horizon radius r_h is obtained by solving g_{tt}(r_h) = 0 as

r_h = \mu^{\frac{1}{D-3}}.

The event horizon is a nonrotating Killing horizon, and its spatial cross-section is a round (D-2)-sphere. The mass of the black hole in terms of the horizon radius r_h is

M = \frac{(D-2)\,\Omega_{D-2}\, r_h^{D-3}}{16\pi}.

The area of the event horizon for the metric (3) is

A = \Omega_{D-2}\, r_h^{D-2}.

The black hole entropy is expected to obey the area law [37]. Explicitly, the entropy S in terms of M can be written as

S = \frac{A}{4} = \frac{\Omega_{D-2}}{4}\left(\frac{16\pi M}{(D-2)\,\Omega_{D-2}}\right)^{\frac{D-2}{D-3}}.  (10)

Clearly, all the above results reduce to those of the 4D Schwarzschild black hole when D = 4 [37].
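As a numerical sanity check (a sketch added for illustration, not part of the paper's own calculations), the relations for μ, r_h and the horizon area can be evaluated for any D ≥ 4 in units G_D = c = 1:

```python
import math

def unit_sphere_volume(n):
    """Volume of the unit n-sphere: Omega_n = 2*pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def mass_parameter(M, D):
    """mu = 16*pi*M / ((D-2)*Omega_{D-2}), so that g_tt = -(1 - mu/r^(D-3))."""
    return 16 * math.pi * M / ((D - 2) * unit_sphere_volume(D - 2))

def horizon_radius(M, D):
    """Solve g_tt(r_h) = 0, i.e. r_h = mu^(1/(D-3))."""
    return mass_parameter(M, D) ** (1 / (D - 3))

def horizon_area(M, D):
    """A = Omega_{D-2} * r_h^(D-2); the entropy is then S = A/4."""
    return unit_sphere_volume(D - 2) * horizon_radius(M, D) ** (D - 2)
```

For D = 4 this reproduces μ = 2M and r_h = 2M, confirming that the Tangherlini relations collapse to the familiar Schwarzschild values.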
Although a black hole is invisible, it can cast a shadow when it is in front of a bright object. The aim of this work is to discuss the effect of extra dimensions on the black hole shadow in the Schwarzschild-Tangherlini background.

III. MOTION OF A TEST PARTICLE

When a black hole is in front of a light source, light reaches the observer after being deflected by the gravitational field of the black hole. However, some of the photons may fall into the black hole, resulting in a dark zone called the shadow; the apparent shape of the black hole is the boundary of the shadow. We present the calculations necessary for obtaining the shape of the Schwarzschild-Tangherlini black hole shadow, which demands the study of the motion of a test particle. We employ the Lagrangian and the Hamilton-Jacobi equation to obtain the equations of motion, which requires a study of the geodesic equations of a particle near the Schwarzschild-Tangherlini black hole. We begin with the Lagrangian

\mathcal{L} = \frac{1}{2} g_{\mu\nu} \dot{x}^\mu \dot{x}^\nu,

where an overdot denotes the derivative with respect to the affine parameter τ and g_{μν} is the metric tensor. The canonically conjugate momenta P_\mu = \partial \mathcal{L} / \partial \dot{x}^\mu for the Schwarzschild-Tangherlini black hole metric (3) can be calculated accordingly, where P_φ = P_{θ_{D-2}} [38], and E and L are respectively the energy and angular momentum of the test particle. For D = 4, we have i = 1 and the quantities P_{θ_i} and P_φ reduce to the Schwarzschild case. We use the Hamilton-Jacobi method to analyze photon orbits around the black hole, following the formulation of the geodesic equations by Carter's approach for the Schwarzschild black hole [39], which we extend here to HD. The Hamilton-Jacobi equation in HD reads

\frac{\partial S}{\partial \tau} = -\frac{1}{2} g^{\mu\nu} \frac{\partial S}{\partial x^\mu} \frac{\partial S}{\partial x^\nu},  (17)

where S is the Jacobi action. On using (3) in Eq. (17), we consider an additive separable solution for the Jacobi action S, expressed as

S = \frac{1}{2} m^2 \tau - E t + L \phi + S_r(r) + \sum_i S_{\theta_i}(\theta_i),

where S_r(r) and S_{θ_i}(θ_i) are respectively functions of r and θ_i only, and m is the mass of the test particle, which is zero for the photon.
The second and third terms on the right-hand side are related to the conservation of energy and angular momentum, respectively. The Hamilton-Jacobi Eq. (17), on using (19), can be recast into separated radial and angular parts, where K is the Carter separation constant [39]. By using Eqs. (13)-(15) in Eqs. (20) and (21), we obtain the equations of motion, where the "+" and "−" signs respectively describe motion of the photon in the outgoing and ingoing radial directions, and a dot denotes the derivative with respect to the affine parameter τ. For null curves, the expressions R(r) and Θ_i(θ_i) in Eqs. (24) and (25) take a simplified form. These equations lead to the radial equation of motion around the Schwarzschild-Tangherlini black hole, which is given by

\left(\frac{dr}{d\tau}\right)^2 = E^2 - V_{eff},

where V_eff is the effective potential for radial motion,

V_{eff} = \frac{L^2 + K}{r^2}\left(1 - \frac{\mu}{r^{D-3}}\right).

Thus it is straightforward to show that the effective potential for the Schwarzschild-Tangherlini black hole is maximized at the critical radius r_c [41],

r_c = \left(\frac{(D-1)\,\mu}{2}\right)^{\frac{1}{D-3}}.

For D = 4, the HD expression for the effective potential (29) reduces to that of the Schwarzschild spacetime. We have plotted the radial dependence of the effective potential in Fig. 1. For the Schwarzschild case, V_eff for a photon has a maximum at r = 3M, corresponding to an unstable circular orbit, and as r → ∞ the effective potential asymptotes to a constant value. One can see from Fig. 2 that the maximum value of the effective potential increases as we go to HD, which emphasizes that the unstable circular orbits become smaller in HD. The photon orbits corresponding to the maximum of the effective potential are circular and unstable. The unstable circular orbit determines the boundary of the apparent shape of the black hole and can be obtained by maximizing the effective potential, which yields the impact parameters η and ξ as functions of the dimension D. The contour of Eq. (32) describes the apparent shape of the Schwarzschild-Tangherlini black hole. For the Schwarzschild black hole the effective potential has its maximum at r_c = 3M.

IV. SHADOW OF THE BLACK HOLE

The shadow of a black hole and its photon orbit can be determined by geometrical optics.
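Using the critical-radius expression above (a sketch for illustration, not code from the paper; units G_D = c = 1, with μ fixed by the mass M), the photon-sphere radius r_c and the corresponding shadow radius, i.e. the critical impact parameter b_c = r_c / \sqrt{f(r_c)} with f(r) = 1 − μ/r^{D−3}, can be computed for any D:

```python
import math

def unit_sphere_volume(n):
    """Omega_n = 2*pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def mass_parameter(M, D):
    """mu = 16*pi*M / ((D-2)*Omega_{D-2})."""
    return 16 * math.pi * M / ((D - 2) * unit_sphere_volume(D - 2))

def photon_sphere_radius(M, D):
    """Maximum of V_eff: r_c^(D-3) = (D-1)*mu/2."""
    return ((D - 1) * mass_parameter(M, D) / 2) ** (1 / (D - 3))

def shadow_radius(M, D):
    """Critical impact parameter b_c = r_c / sqrt(f(r_c)),
    where f(r_c) = 1 - 2/(D-1) = (D-3)/(D-1)."""
    r_c = photon_sphere_radius(M, D)
    return r_c * math.sqrt((D - 1) / (D - 3))
```

For D = 4 and M = 1 this gives r_c = 3 and b_c = 3√3 ≈ 5.196, the classic Schwarzschild values; evaluating D = 5, 6, ... at fixed M shows the shadow radius shrinking monotonically, in line with the claim in the abstract.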
The apparent shape of a black hole is defined by the boundary of the shadow. From Eq. (32), the size of the Schwarzschild-Tangherlini black hole shadow depends upon the mass and the dimension of spacetime. To visualize the shadow of the black hole, it is appropriate to use the celestial coordinates α and β [40]. For the Schwarzschild-Tangherlini black hole the celestial coordinates are modified as [7]

\alpha = \lim_{r_0 \to \infty} \left(-r_0^2 \sin\theta_0 \frac{d\phi}{dr}\right), \qquad \beta_i = \lim_{r_0 \to \infty} r_0^2 \frac{d\theta_i}{dr},

where [P^{(t)}, P^{(φ)}, P^{(θ_i)}] are the tetrad components of the momentum. By using Eqs. (13)-(16) and the geodesic equations of motion (22)-(25), one can obtain the expressions for the celestial coordinates α and β_i. Eqs. (35)-(36) relate the celestial coordinates α, β_i to the constants of motion η and ξ. If we take the equatorial plane (θ_i = π/2), the celestial coordinates reduce to Eq. (37), which must satisfy the condition in Eq. (38). Eq. (38) governs the complete orbit of the photon around the black hole, which casts a shadow appearing as a circle. Taking the contour plot of Eq. (38) shows the shadow of the Schwarzschild-Tangherlini black hole, as clearly seen in Fig. 2.

A. Energy emission rate

In this section, we study the energy emission rate of the Schwarzschild-Tangherlini black hole. The energy emission rate can be calculated via [16]

\frac{d^2 E(\omega)}{d\omega\, dt} = \frac{2\pi^2 \sigma_{lim}}{e^{\omega / T_{BH}} - 1}\, \omega^3,

where T_BH is the Hawking temperature of the Schwarzschild-Tangherlini black hole, shown in Eq. (11), and σ_lim is the limiting value of the absorption cross section, which in HD can be expressed as [41,42]

\sigma_{lim} \approx \frac{\pi^{(D-2)/2}}{\Gamma\left(\frac{D}{2}\right)} R_s^{D-2};

for the Schwarzschild case (D = 4) the limiting constant reduces to σ_lim ≈ π R_s², where R_s is the radius of the shadow. As shown in Figs. 2 and 3, the shadow of the black hole is a circle, and its radius R_s can also serve as an observable [14],

R_s = \frac{(\alpha_t - \alpha_r)^2 + \beta_t^2}{2\,|\alpha_t - \alpha_r|},

where (α_t, β_t) and (α_r, 0) are the top and right points of the coordinates through which the reference circle passes. The complete form of the energy emission of the black hole can thus be expressed in terms of the dimension D. The variation of d²E(ω)/dωdt vs. ω can be seen in Fig. 5 for different dimensions D.
V. CONCLUSION

Observation of the black hole shadow may become realistic in the near future owing to observations of the black hole Sgr A* at the center of our galaxy. It was Synge [9] and Luminet [10] who first calculated the shadow of the non-rotating Schwarzschild black hole and demonstrated that a circular light orbit exists on the photon sphere at r = 3M, the critical radius where the effective potential has its maximum. We have generalized this and other results to study the shadow cast by Schwarzschild-Tangherlini black holes by studying the motion of a test particle, deriving the complete null geodesic equations by applying the Hamilton-Jacobi equation and Carter's separation method. We also derived the expression for the effective potential from the radial equation of motion and found that it depends on the spacetime dimension. It turns out that the apparent shape of the Schwarzschild-Tangherlini black hole is also a function of the spacetime dimension: the size of the black hole shadow decreases with an increase in the spacetime dimension, and the energy emission rate is likewise affected. We also observe that the peak of the effective potential shifts towards the central object. The results presented here generalize previous discussions of the Schwarzschild black hole shadow to a more general setting, and the possibility of a further generalization of these results to the HD rotating Kerr black hole, Myers-Perry black holes and Lovelock black holes is an interesting problem for future research.
TALES OF TWO GUBERNATORIAL TRANSITIONS: UNDERLYING SCRIPTS FOR PRESS COVERAGE OF POLITICAL EVENTS

The election of a new chief executive creates a number of needs, especially for information, for all other participants in the political system. Thus, the transition period, as the incumbent makes way for a successor, is a crucial instance of a rhetorical situation. Such discourse may be directly with the incoming chief executive for some key actors, but for most others that discourse is mediated by channels of mass communication. Given the crucial character of this frequently recurring rhetorical situation in the American political process, it is surprising not only that little research attention has been given to their rhetorical aspects, but that executive transitions generally are not well studied.1 Moreover, nearly all of these research reports are singular case studies. This report attempts to redress these shortcomings by examining the press coverage of two gubernatorial transitions, albeit both in a single state, Arkansas. Certainly, two transitions in one state constitute a very limited sample as a basis for generalizations. Still, we hope to make a significant start toward such generalizations as the two transitions represent distinctly different types of rhetorical situations for incoming governors.
The first transition (Type I) followed the election in 1978 of Bill Clinton, a Democrat, to succeed David Pryor, also a Democrat. The second transition (Type II) followed Clinton's defeat in 1980 by a Republican challenger, Frank White. Thus, these two transitions, separated by only two years in a single state, allow an opportunity to study the two classic situations: a change in the incumbent only, as well as a change in both incumbent and partisan affiliation.2

Media Coverage of Gubernatorial Transitions

Transitions are inevitably characterized by a hectic tempo: policy goals must be enunciated, key cabinet selections announced, legislative strategies devised, budgets studied, inaugural festivities planned and publicized. All this activity occurs, however, in a situation where the mantle of power has been lifted from the outgoing governor and is descending upon the governor-elect, but the scepter of power has not yet been conferred. In this governing hiatus the most casual remarks of the governor-elect (and other key political actors) regarding programmatic preferences or administrative intentions may be seized upon as significant.
Is the rhetorical agenda of gubernatorial transition as reported by the media set by these key political actors, or is it established by media personnel? We cannot answer that question directly by looking to media content. However, we can address the question of whether or not there is an agenda for media coverage by examining that content for structure, an underlying script, that points to what is newsworthy. Finn (1984; see also Dorsey, 1983) has pointed to the utility of this information-processing approach derived from research in cognitive psychology and artificial intelligence. However, he argues that news value is determined in large part by recognizing deviations from such scripts, defined as "stereotypical sequences of events." How are reporters to make such determinations if the sequence of events to be expected is characteristically ambiguous, as it is with a gubernatorial transition? Moreover, if a script can be determined for a given type of gubernatorial transition, can it be applied to other types of transitions?
Assuredly, journalists would recognize and no doubt devote much attention to certain gross deviations from the usual in gubernatorial transitions, such as the appointment of a member of the opposition party to a key administrative post or the announcement of detailed plans for the inauguration within a few days after the election. But these are generally infrequent occurrences. The more common problem for media reporters and their gatekeepers is the determination of the relative news value of given transition events vis-a-vis other events, including other transition events. (That determination may well be hampered by the fact that many transition "events" are trial balloons floated by the incoming administration or even merely rumors disseminated by interested parties. Again, after all, the transition is very much a rhetorical situation.) Media decision makers are faced with the problems, then, of what must be reported, may be reported, or should be ignored in covering the transition.3 Since there is no journalistic handbook for covering such situations, media reporters must look elsewhere for guidance.

Social Science Literature on Transitions as a Guidepost

We do not intend to argue here that journalists regularly look to the social sciences for scripting their reporting efforts. At the same time, social scientists may uncover (and even propagate) the stereotypical sequences of events that come to be associated with public practice. To that extent, then, social science (and related) literature is worthy of examination for the present purpose.
The problem is that the literature on gubernatorial transitions so often varies in its basic assumptions about the purpose or goals of a transition period. These variations depend largely upon the perspective adopted as to who is to be benefitted or impacted upon by the transition. The National Governors Association, in a how-to handbook for new governors, for example, makes the following suggestion: If a Governor wants to be remembered at the end of his term for having accomplished certain things, then those things must be identified early in the term so that they can in fact be accomplished and so the Governor can be associated with their accomplishment. (Governing the American States, 1978: 144). That is the gubernatorial perspective. Norton Long (1972: 84), from the perspective of other participants in a state's political system, describes the fundamental function of the governor-elect to be that of uncertainty absorption. Emphasizing the anxiety-laden nature of this period for a state's political actors, Long suggests the prime necessity of a clear gubernatorial definition of the new governing situation: Friends and foes alike demand that he define the situation so that the players may know the nature of the game being played. Even the adversary cooperation of the opposition requires that he set a target for them to shoot at. The press insists that he furnish a score card consisting of his musts so they can report the game. Whereas both of the above formulations stress the systemic need for stability and continuity, Beyle and Wickman (1972) and Ahlberg and Moynihan (1972) have stressed instead the difficulty and importance of impressing change upon an innovation-resistant governmental structure. As Beyle and Wickman (1972: 91-2) note: Incrementalism in personnel and policy change, budget constraints, entrenched habits of the old administration, and narrowly defined bureaucratic norms--all these factors contribute to what might be called systemic inertia . . .
So while the very term transition denotes change, perhaps the greatest challenge to the incoming governor is one of inducing change.

There is another possible characterization, surprisingly absent from the political science literature to date: the perspective of the responsible political parties doctrine. It has frequently been noted that the great achievement of political parties has been that of operationalizing the idea of democracy into a peaceful equivalent of revolution. Through a vigorous contest between those in power defining their achievements, and the vehement criticism of those out of office wishing to get in, the issues are publicized, the public informed, the choices presented in manageable form to the electorate. Elections, according to this conception, represent a legitimate overthrow of government. Through party competition the power struggle inevitable within the political system is stabilized and institutionalized.

Employing this conception, the transition represents a reluctant but peaceful surrender by those who have lost power, a joyous but orderly takeover by those who have achieved it. Since all contestants are loyal to the system, those bested will provide sufficient cooperation to the "revolutionaries" as they assume their new tasks that the government itself will not collapse. Still, the parties remain political rivals, and thus the new government will be largely on its own in adjusting to the new situation of being the government instead of its critic.
Clearly, countless features of American political reality have always departed, in varying degrees over time and place, from the competitive, responsible doctrine. Still, this conception of the transfer of power is as apt as ever for rhetorical analysis of gubernatorial transitions, since American politicians and journalists have traditionally envisioned this as the proper, if not always the actual, mode for social change in a democratic society. Indeed, the notion has been reiterated so often over the past two centuries in this nation as to acquire the stature of political myth. As such, the notion of responsible party government provides a subliminal foundation for evaluating political phenomena (see Nimmo and Combs, 1980), or put differently, an underlying script that provides guidance for understanding the pertinence and appropriateness of unfolding events.

Assuredly, given the complex transactions among many actors during the transition period and the institutional needs of the mass media, e.g., meeting deadlines and staff availability, the actual presentations in the press may reflect other approaches to this recurring political phenomenon. Still, the responsible party doctrine provides the most comprehensive rationale of the transition process, and consequently, it is the richest source of hypotheses for testing.

For such testing we examine two recent gubernatorial transitions in Arkansas, the first of which involved an intra-party shift from Democrat to Democrat, the second and more recent involving a party turnover from Democrat to Republican. According to the responsible party ideal, these two types of transitions should display some distinctive differences. Since, as previously noted, transitions are essentially power vacuums in which rhetoric substitutes for actual governing authority, we test this mythic conception through analysis of what was communicated by and about the two governors-elect during their respective transitions.
Procedures: The Data and Their Analysis

The data were obtained by reviewing and coding all accounts of Clinton as governor-elect during the period November 6, 1978, through January 8, 1979, and all accounts of White as governor-elect during the period November 6, 1980, through January 13, 1981, in four Arkansas newspapers. Two of the newspapers are located in Little Rock and have statewide circulation. The other two are located in the northwest area of the state and are largely limited to regional dissemination. Generally, the review used a code established by the authors before reading the newspaper items (see Table 2).4 Each author independently examined all items, encoding each category that appeared in a paragraph. No category was scored more than once per paragraph. Statements were also categorized as to source attribution: the governor-elect himself, other political leaders, editorial comment, and press background. Using the very conservative test, Scott's pi, intercoder reliabilities were 0.73 and 0.71 for the respective transitions, reasonable levels of agreement given the complexity of the code (see Holsti, 1969, 136-142).

Hypotheses

Using the responsible party doctrine, we offer a number of hypotheses which we believe will distinguish between the press coverage for a Type I (One-Party) Transition and that for a Type II (Two-Party) Transition. First, a Type II Transition should be characterized by far greater emphasis on public policy. This, after all, is the presumed essential purpose of throwing out one government and replacing it with another. The people have grown dissatisfied with the performance of the "ins" and have been attracted by the criticisms and alternative proposals of the challenger. In a Type II Transition, therefore, one should expect much more extensive discussion of the programs that will be mounted by the newly-chosen chief executive in response to a new popular mandate.
Second, there should also be a greater emphasis on personnel choices in a Type II Transition. It is also part of the ritualized exchange of power in a democratic system that a new leader will bring with him or her an entirely new cast of characters to assist in achieving the new objectives. Even with the moderation that civil service has imposed on the old spoils system, high-level officials will be replaced. New members of the cabinet, new staff personnel, new agency heads must all be chosen as part of the changing of the guard.

Third, since new policies can only be enacted by the legislature, we hypothesize much more discussion of executive-legislative relations in a Type II Transition. Only through skillful leadership of and bargaining with the members of the legislature will the new executive be able to fulfill the programmatic promises of the campaign, and these relationships may be especially problematic if the partisan makeup of the legislature is different from the newly-elected governor's.

Fourth, a Type II Transition should also dwell more extensively on relationships with other governmental officials and organizations than would a Type I Transition. The entire political system must respond to this new governor and his/her associates, and the amount of cooperation or recalcitrance encountered will heavily impact upon the ability of the new regime to effectuate administrative change.

Fifth, we also hypothesize a greater concern with political parties and party organization in a Type II Transition. This, of course, reflects another aspect of what the election has accomplished. There is a new set of victors and vanquished; new roles must be learned, new positions staked out through the press to the public. Those accustomed to criticizing must learn to defend; those accustomed to explaining and defending must begin to gather ammunition for what will now be their assault upon the establishment.
A sixth hypothesis is that Type II Transition coverage will contain fewer references to purely personal considerations. While a certain amount of biographical and behavioral information will be reported in any case, we expect much greater emphasis on such personalistic matters in a Type I Transition. In a Type I Transition, it is primarily the personal nature and style of the incumbents that is changing; in a Type II Transition, the voters presumably have mandated more fundamental changes in the very purpose of government.

A seventh hypothesis follows from the very underpinnings of the foregoing hypotheses. The differences between the two types of transitions flow from the presumed change in the character of the mandate passed by the voters to the Type II governor-elect. Since this is a more drastic change, we predict a stronger concern will be exhibited in a Type II Transition for ongoing popular support of the new regime.

Finally, flowing logically from all the above hypotheses, we expect much more press coverage for a Type II Transition. There is much more new information to be reported, speculated about, and communicated to the actors in a political system and to the people who have set this new course of action in motion. Indeed, that a new party has captured the State House points to deviations from the past and marks subsequent events as all the more newsworthy.

Findings

In order to exhibit the corresponding relative treatments of the two transitions as economically as possible, we resort to separate Q-factor analyses for the two transitions. As for each transition there are four attribution sources for each of four newspapers, a total of sixteen arrays of categorical treatment are available for each analysis.
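In a Q-technique analysis, it is the cases (here, the sixteen newspaper-by-source arrays) that are intercorrelated across the content categories, and factors are then retained by the eigenvalue-one criterion. A minimal sketch of that retention step, using made-up arrays rather than the study's data:

```python
import numpy as np

def q_factor_count(arrays):
    """Count principal components with eigenvalue > 1 (eigenvalue-one criterion)
    for a Q-analysis: rows are cases (newspaper/source arrays), columns are
    content categories; the cases, not the categories, are correlated."""
    # Correlate the cases with one another across categories.
    r = np.corrcoef(np.asarray(arrays, dtype=float))
    eigenvalues = np.linalg.eigvalsh(r)
    return int(np.sum(eigenvalues > 1.0))

# Hypothetical category-weight arrays for four sources (not the study's data):
rng = np.random.default_rng(0)
base = rng.random(10)
cases = [base + 0.05 * rng.random(10) for _ in range(4)]
print(q_factor_count(cases))
```

Highly similar arrays produce one dominant eigenvalue, which is what the emergence of a single factor indicates about the consistency of press treatment.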
Using the eigenvalue-one criterion, only a single factor emerged in each instance, indicating a high degree of cohesion in descriptions across newspapers and across their sources of attribution. Table 1 presents the factor matrices (principal components) for both transitions.

The consistency of treatment of the Type II Transition is especially remarkable, as even the weakest correspondence to the basic underlying pattern (White's own comments in Newspaper Alpha) still shows that the pattern explains about 64% of the variance. This newspaper featured not only more direct quotes by the governor-elect generally but also extensive in-depth interviews that allowed him more freedom to expand upon topics than the forums available through the other newspapers.

In general, the greater consistency of treatment of the Type II Transition augurs well for our hypotheses. The factor-score arrays in Table 2 show the relative weights of categories in press treatments of the two transitions. The results tend to support the hypotheses generally, but not without some qualifications. Indeed, in relative weight of coverage, the first hypothesis is disconfirmed. Coverage of public policy positions in toto was about the same for both transitions. The difference lies in the heavier emphasis placed upon fiscal considerations during Transition II. In fact, on the average, the four newspapers pointed to fiscal matters in nearly 25% of mentions devoted to transition coverage as opposed to just over 20% in Transition I. Moreover, the difference that does exist has no clear basis in the responsible party doctrine. White campaigned upon the basis of less government, which meant both fiscal constraints on, and less expansion of, programmatic activities of government. As a consequence, policy considerations in Transition II were much more likely to reflect fiscal concerns. Indeed, given their different philosophies of government action, transitions to Republican administrations may generally be
divergent from transitions to Democratic administrations in this regard. Hypotheses 2, 3, and 4, regarding relations with other governmental actors, however, are all affirmed. Still, some cautionary remarks are called for. The stress upon legislative relations, despite the factor-analytic results, was actually not very different for the two transitions, averaging about 11.5% for Clinton and about 12.8% for White. The biennial pre-session budget hearings of the Arkansas Legislative Council may, however, be an important mitigating factor in lessening the impact of party change in transition coverage, since much of that coverage is simply an outgrowth of press attention to the Council hearings. The incoming governor or his representatives are usually afforded ample opportunity to appear before the Council.

Hypotheses 2 and 4 are more strongly affirmed, particularly since personnel changes are involved in both areas. With regard to staff and cabinet this is very obviously the case. To amplify the factor-analytic results, the average percentages of mentions for staff and cabinet across the newspapers were 5.9 and 9.3, respectively. Personnel changes are also at issue with regard to other government organizations, as many of these are boards and commissions for which the governor's control is limited largely to his appointment power, which is a limited one indeed. These officials generally are appointed for specified terms that often overlap the governor's term of office. The press treatment in Transition II especially focused on personnel questions even in these agencies. The average coverage for the respective transitions was 1.5% and 7.1%, a very substantial difference.

Surprisingly, then, there is only weak confirmation of the fifth hypothesis. Differences in coverage of political party relationships are not confirmed in the factor-analytic results, and resorting to the relative coverage percentagewise (combining both Democratic and Republican Parties) shows only
slight support for the hypothesis, 1.9 and 2.9 for the respective transitions.

The one aspect predicted under the party responsibility model to receive relatively greater coverage in the Type I Transition is that of personal qualities. Table 2 provides strong confirmation of this for both biographical and behavioral traits. This is even more apparent considering the relative volume of treatment percentagewise: combining the two categories and averaging across the four newspapers results in 19.4% for Clinton compared to only 9.2% for White.

The only other hypothesis relating to relative treatment of categories is only lightly confirmed by the factor-analytic results. This would be rather damaging to the party responsibility model as an explanatory factor if the electoral mandate is accorded similar weight in both types of transition. Reexamination of the data, however, shows that one newspaper (Delta) emphasized this element in the Clinton transition much more than the other three newspapers. Disregarding Delta, then, and combining both categories relating to popular involvement produces average percentage scores of 5.4 and 12.0, respectively, much stronger support of the hypothesis.

The final hypothesis simply asserts that a change in political parties will result in a considerably larger volume of transition coverage than where there is only a change of persons. As shown in Table 3, this hypothesis receives the strongest degree of confirmation. Breaking out the volume for the two transitions in terms of both the four newspapers and the four attribution sources shows more paragraphs devoted to the White transition in every cell, resulting in very large differences in the composite or total for all newspapers. The respective ratios for the four newspapers of the number of paragraphs devoted to the Type II Transition for each one devoted to the Type I Transition are 3.9, 2.6, 5.9, and 4.7.
In general, then, seven of the eight hypotheses flowing from the application of the responsible party ideal to a comparative analysis of press coverage for two types of gubernatorial transition receive slight to very strong confirmation. In the discussion that follows, an explanation is suggested, as well as an exploration of the larger implications of the findings generally.

Discussion

Finding that the underlying script for press coverage of a gubernatorial transition involving a change in partisan control of the office appears to follow the responsible parties ideal does not mean that the political system itself is characterized by a party structure adhering to the responsible parties doctrine. Rather, it suggests that both political and media actors may more or less consciously fall back upon this mythic conception as a source of cues to guide them in what is a more stressful, perhaps even disorienting, situation than that occurring with a Type I Transition.

More than this, the finding suggests that the media, by falling back upon a standard model that points to a legitimate means of social change, acquire a paragovernmental role by assisting in the assurance of orderly continuity in the governmental system. That editorial commentary and press background categorizations of the transition are so similar to categorizations used by political actors further supports this contention. Assuredly, media people determine in part what statements by political actors are published. Still, where governor-elect White was allowed ample freedom to say what he wanted in Newspaper Beta, he varied from the overall pattern only by placing greater emphasis upon policy matters without fiscal considerations and by downplaying his relations with the state legislature (moving closer to the responsible party ideal on the one hand and further away on the other).
The major deviation from the responsible party conception was the lesser emphasis than expected upon public policy in the Type II Transition. However, we suspect that in reality a second factor intervened that may have produced greater policy emphasis in our case of Type I than would normally be expected and less in our Type II case than should be expected. Quite simply, the basic political philosophies, or more precisely, the orientations toward governmental action, of the two governors-elect contrasted with the change induced by the partisan switch in incumbency. The representative for a Type I Transition, Bill Clinton, is a dedicated activist, whereas Frank White had a much more limited conception of appropriate government activity.

These differences in philosophical premises affected only the relative coverage of policy in the two transitions and thus provide only a modest qualification of the role of the responsible parties ideal as an underlying script in press coverage of gubernatorial transitions. Indeed, given that political parties as candidate-recruiting, campaign-waging, fund-raising, and policy-making organizations probably have less consequential presence in Arkansas than in any other state, the general applicability of the mythic ideal here is as rigorous a test as can be constructed.5

Clearly, dimensions other than partisan affiliation may shape the character of gubernatorial transitions. Some obvious possibilities include the insider-outsider distinction, ideological or coalitional cleavages, and personality conflicts. Thus, where a governor-elect has presented himself/herself as one outside the political establishment, personnel concerns might well take on greater importance even than in a Type II Transition. Strong ideological conflict, whether in a Type I or Type II Transition, would probably bring great emphasis to policy concerns. Coalitional conflict in a Type I Transition would likely heighten concerns for partisan relationships and perhaps for
policy and personnel as well. Strong personality conflicts between incumbent and successor are probably less predictable given their idiosyncratic character. Whatever the nature of such conflicts, in all cases the volume of coverage is likely to be higher than for transitions of Type I where one old party hand passes the reins of government on to another partisan crony.

Press coverage of a transition, then, is a rhetorical situation of critical importance in a democratic polity, whether the purpose of the transition is viewed as establishing order or continuity, building a governing majority, inducing policy change, and/or serving the political ambitions of the governor-elect. Such purposes, however, point more to the words and actions of political leaders in given contexts than to the recounting of the media. Journalists will no doubt be sensitive to whatever cleavages emerge among political leaders but will look to their underlying scripts for evaluating those conflicts. For transitions of chief executives, the responsible parties doctrine, a strong and enduring myth in American political life, is a very comfortable script for those in the journalistic enterprise.

Notes

1. A review of literature on presidential transitions is to be found in Lee et al. (1979), the first thorough study of this phenomenon from a rhetorical perspective. For a bibliography of the literature on gubernatorial transitions, see Beyle (1985: 459-461). The first rhetorical study of gubernatorial transitions was Blair and Savage (1980); see also Blair (1985) and Savage and Blair (1985).

2. Actually, the most "stable" instance of a transition in opposition to one involving a change in party ties would be a same-party transition in a strongly competitive two-party state, which Arkansas clearly is not. Still, if predictable differences in the two Arkansas transitions examined here do appear, then generalizations will be all the more warranted given the stronger test.
3. As it happens, press releases from the governor-elect and his/her transition teams are often transmitted virtually verbatim, usually intermediately through the wire services, but by no means are all these releases disseminated by the media, either partially or totally. Interestingly, such reports are not always flagged as to their source; this practice, intentional or otherwise, deserves more examination from both an empirical and an ethical standpoint.

4. In large part this code followed the one devised by Lee et al. (1979) in their study of the Carter presidential transition, with certain additions made necessary by the obvious differences between a presidential and a gubernatorial transition.

5. For extensive discussion of the relative weakness of political parties in contemporary Arkansas, see Blair (1988: 98-104).

Table 1. Factor Loadings for Separate Q-Factor Analyses of the Categorical Treatments By the Press of the Clinton and White Transitions*
Table 2. Factor-Score Arrays for Separate Q-Factor Analyses of the Categorical Treatments
Table 3. Volume of Treatment of Clinton (BC)
Methods Used for Enhancing the Bioavailability of Oral Curcumin in Randomized Controlled Trials: A Meta-Research Study

It is unknown how randomized controlled trials (RCTs) approach the problem related to curcumin bioavailability. We analyzed methods and reporting regarding the bioavailability of systemic oral curcumin used in RCTs. We searched PubMed on 12 September 2020 to find articles reporting RCTs that used curcumin as an intervention. We extracted data about trial characteristics, curcumin products used, methods for improving curcumin bioavailability, and mentions of curcumin bioavailability. We included 165 RCTs. The most common category of intervention was simply described as "curcumin" or "curcuminoids" without a commercial name. There were 107 (64%) manuscripts that reported that they used methods to enhance the oral bioavailability of curcuminoids used in their intervention; 25 different methods were reported. The most common method was the addition of piperine (23%). Phospholipidated curcumin, a combination of curcumin and turmeric oils, nanomicellar curcumin, and colloidal dispersion of curcumin were the next most common methods. Fourteen trials (8.4%) compared more than one different curcumin product; nine (7.9%) trials compared the bioavailability/pharmacokinetics of curcumin products. In conclusion, a high number of diverse methods were used, and very few trials compared different curcumin products. More studies are needed to explore the comparative bioavailability and efficacy of different curcumin products.

Bioavailability is a term used to describe the percentage, or the fraction, of an administered dose of a xenobiotic that reaches the systemic circulation. Thus, bioavailability is essential in oral dosage form development [5]. Multiple studies have reported relatively low bioavailability after oral administration of curcumin [6,7].
Preclinical data and clinical studies on volunteers confirmed a small amount of absorption in the intestines, a hepatic first-pass effect, and a certain degree of intestinal metabolism as contributors to the poor systemic availability of curcumin when given orally [8]. It has been suggested that formulations that include adjuvants, nanoparticles, liposomes, micelles, and phospholipid complexes should improve bioavailability and enable longer circulation, better permeability, and resistance to curcumin's metabolic processes [9].

A randomized controlled trial (RCT) is a prospective experimental study used to measure the efficacy of an intervention. In an RCT, participants are randomized into multiple arms, which receive different interventions and comparators. Participants are followed and compared. If designed adequately, RCTs are considered to achieve adequate control over confounding factors, enabling an objective comparison of the interventions studied. However, as RCTs of curcumin accumulate, it is unknown how those trials approach the problem related to curcumin bioavailability and how it can impact trial results. This study aimed to analyze whether RCTs of curcumin pay any attention to the bioavailability of systemic oral curcumin, whether trialists attempted to use methods that could improve this bioavailability, and whether they discussed their results in terms of curcumin bioavailability.

Results

The search yielded a total of 319 records, of which we excluded 154 for reasons reported in Supplementary File S1. The remaining 165 studies were included in the analysis. The list and characteristics of included RCTs are reported in detail in Supplementary File S2. Summary characteristics of included studies are shown in Table 1. The included RCTs were published between the years 1980 and 2020. Trials were published in 112 journals, most commonly in the journal Phytotherapy Research (Table 1).
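The trial counts reported in the Results are accompanied by simple percentages of the 165 included trials; a minimal sketch of that computation (counts taken from the text, rounded to the nearest whole percent):

```python
def pct(count, total):
    """Percentage of trials, rounded to the nearest whole percent."""
    return round(100 * count / total)

TOTAL_TRIALS = 165
print(pct(112, TOTAL_TRIALS))  # trials mentioning bioavailability -> 68
print(pct(95, TOTAL_TRIALS))   # trials reporting protocol registration -> 58
```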
Protocol registration was reported in 95 (58%) studies; among those, most of the RCTs were registered on ClinicalTrials.gov (Table 1). The median number of participants randomized was 60. The median number of study arms was two. The median duration of patient follow-up was 4.5 weeks (Table 1). The most common categories of participants included were healthy participants, followed by patients with arthritis, diabetes mellitus type 2, skin diseases, nonalcoholic fatty liver disease, and metabolic syndrome (Table 1). All categories of participants are shown in Supplementary File S2.

Among the 165 included trials, there were 184 curcumin interventions, as some trials used more than one curcumin product as a tested intervention. The most common category of curcumin intervention was simply described as "curcumin" or "curcuminoids" without a commercial name (Table 2).

Table 2. The most common types of curcumin products used in analyzed trials (N = 165); interventions used in two trials or more are shown. Commercial names of the products are marked with an asterisk.

A complete description of the curcumin products reported in the Methods of included RCTs (commercial name, details, dose, instructions for use) is given in Supplementary File S2. Among the 165 included RCTs, 112 (68%) mentioned the bioavailability of curcumin throughout the manuscript, regardless of the context. The parts of the manuscript where this was most commonly mentioned were the Introduction, Methods, and Discussion. Verbatim extractions of those parts of the text where the bioavailability of curcumin was mentioned are shown in Supplementary File S2.

There were 107 (64%) manuscripts that reported the use of methods to enhance the oral bioavailability of curcuminoids used in their intervention. The trials reported 25 different methods for enhancing the bioavailability of oral curcumin. The most common method was the addition of piperine, used in 23% of the trials.
Phospholipidated curcumin, a combination of curcumin and turmeric oils, nanomicellar curcumin, and colloidal dispersion of curcumin were the next most common methods (Table 3).

Table 3. Methods used to enhance the oral bioavailability of curcuminoids in analyzed trials (N = 107). Method, N (%):
Piperine, 26 (24)
Phospholipidated curcumin, 19 (18)
Turmeric oils, 17 (14)
Nanomicellar curcumin, 12 (11)
Colloidal dispersion of curcumin, 12 (11)
Nanocurcumin, 4 (3.7)
Curcumin(oids)-galactomannoside complex, 2 (1.9)
Curcumin in a turmeric matrix formulation, 2 (1.9)
Dispersion of curcumin and antioxidants on a water-soluble carrier, 2 (1.9)
Solid-lipid particle formulation, 2 (1.9)

Most trials that used methods for enhanced oral bioavailability of curcumin were published in the five most recent analyzed years; only 12 trials using such methods were published before 2014. Not all trials that mentioned bioavailability used methods to enhance it, and vice versa. Among the 53 trials that did not mention bioavailability anywhere in the manuscript, 10 (19%) used methods to enhance curcumin bioavailability. Of 112 trials that mentioned bioavailability in the manuscript, 16 (14%) did not report the use of any methods to enhance curcumin bioavailability.

There were 14 (8%) trials that compared more than one type of standardized curcumin product. Most of them compared two curcumin products, while one compared three products and two compared four products (Table 4). Nine of those fourteen trials compared the bioavailability or pharmacokinetics of the tested curcumin products. Trials that compared the enhanced versions with standard curcumin reported that the bioavailability was higher with the enhanced products (Table 4). Six trials compared the effect of supplementation with various curcumin products in individuals with metabolic syndrome, osteoarthritis, or occupational stress-related anxiety and fatigue.
Results were heterogeneous, as half of those trials did not demonstrate any superiority of enhanced products on the investigated outcomes (Table 4).

Table 4. Trials that compared more than one curcumin product (products compared; aim; findings).

Products: phospholipidated curcumin; curcumin. Aim: to investigate the effects of curcumin on serum copper (Cu), zinc (Zn), and Zn/Cu ratio levels in patients with metabolic syndrome. Findings: serum Zn concentration was increased significantly in the phospholipidated curcumin and curcumin groups after intervention, and it was significantly higher (p < 0.001) in the phospholipidated curcumin group than in the curcumin group (p < 0.05). The effect of phospholipidated curcumin on zinc was higher than the effect of curcumin because phospholipidated curcumin has better bioavailability than curcumin.

PMID 29958053. Products: curcumin; Meriva*. Aim: to investigate the effects of unformulated curcumin and phospholipidated curcumin on antibody titers to heat shock protein 27 (anti-Hsp27) in patients with metabolic syndrome (MetS). Findings: the study used phospholipidated curcumin, which is known to be more bioavailable than unformulated curcumin, but no significant changes in serum anti-Hsp27 or anthropometric measures in patients with MetS could be found following supplementation.

Products: LipiSperse*; curcumin. Aim: to investigate the pharmacokinetics of a commercially available curcumin extract, with or without the curcumin-LipiSperse® delivery complex. Findings: the novel delivery system LipiSperse® is safe in humans and demonstrates superior bioavailability for the supply of curcumin when compared to a standard curcumin extract.

Products: BioCurc*; curcumin. Aim: to assess the bioavailability of a novel curcumin formulation compared to 95% curcumin and published results for various other curcumin formulations. Findings: the novel curcumin liquid droplet micromicellar formulation (CLDM) facilitates absorption and produces exceedingly high plasma levels of both conjugated and total curcumin compared to 95% curcumin.
Products: phospholipidated curcumin; curcumin. Findings: serum PAB increased significantly in the curcumin group (p < 0.001), but in the phospholipidated curcumin group the elevation of the PAB level was not significant (p = 0.053). The results of our study did not suggest any improvement of PAB following supplementation with curcumin in MetS subjects.

PMID 28198120. Products: curcumin-phospholipid complex; curcumin. Aim: to investigate the effect of curcumin on serum vitamin E levels in subjects with MetS. Findings: the results of the present study did not suggest any improving effect of curcumin supplementation on serum vitamin E concentrations in subjects with MetS.

Products: CGM; curcumin. Aim: to investigate the safety, antioxidant efficacy, and bioavailability of CurQfen (curcumagalactomannoside [CGM]), a food-grade formulation of natural curcumin with fenugreek dietary fiber that has been shown to possess improved blood-brain barrier permeability and tissue distribution in rats. Findings: the study demonstrated the safety, tolerance, and enhanced efficacy of CGM in comparison with unformulated standard curcumin. Further comparison of free curcuminoid bioavailability after single-dose (500 mg once per day) and repeated-dose (500 mg twice daily for 30 days) oral administration revealed enhanced absorption and improved pharmacokinetics of CGM upon both single-dose (30.7-fold) and repeated-dose (39.1-fold) administrations.

Findings: a formulation of curcumin with a combination of hydrophilic carriers, cellulosic derivatives, and natural antioxidants significantly increases curcuminoid appearance in the blood in comparison to unformulated standard curcumin, CS, CTR, and CP.

Products: microencapsulated curcumin; curcumin. Aim: to evaluate the human bioavailability of curcumin from breads enriched with 1 g/portion of free curcumin (FCB), encapsulated curcumin (ECB), or encapsulated curcumin plus other polyphenols (ECBB). Findings: curcuminoid encapsulation increased their bioavailability from enriched bread, probably preventing their biotransformation, with combined compounds slightly reducing this effect.
Products: Meriva*; curcumin. Aim: to investigate the relative absorption of a standardized curcuminoid mixture and its corresponding lecithin formulation (Meriva) in a randomized, double-blind, crossover human study. Findings: the improved absorption, and possibly also a better plasma curcuminoid profile, might underlie the clinical efficacy of Meriva at doses significantly lower than unformulated curcuminoid mixtures.

PMID 28204880. Products: curcumin; gamma-cyclodextrin complex containing curcumin; Meriva*; BCM95*. Aim: to investigate the bioavailability of a new γ-cyclodextrin curcumin formulation (CW8), compared to a standardized unformulated curcumin extract (StdC) and two commercially available formulations with purported increased bioavailability: a curcumin phytosome formulation (CSL) and a formulation of curcumin with essential oils of turmeric extracted from the rhizome (CEO). Findings: the data presented suggest that the γ-cyclodextrin curcumin formulation (CW8) significantly improves the absorption of curcuminoids in healthy humans.

PMID 27503249. Products: Meriva*; C3 complex*. Aim: to evaluate the relationship between steady-state plasma and rectal tissue curcuminoid concentrations using standard and phosphatidylcholine curcumin extracts in a randomized, crossover study. Findings: when adjusting for curcumin dose, tissue curcumin concentrations were five-fold greater for the phosphatidylcholine extract. Improvements in curcuminoid absorption due to phosphatidylcholine are not uniform across the curcuminoids. Furthermore, curcuminoid exposures in the intestinal mucosa are most likely due to luminal exposure rather than plasma disposition. Finally, once-daily dosing is sufficient to maintain detectable curcuminoids at a steady state in both plasma and rectal tissues.
PMID 29027274. Products: Cureit/Acumin*; Curcu-Gel*; Doctor's Best Curcumin Phytosome*. Aim: to assess the bioavailability of a completely natural turmeric matrix formulation (CNTMF) and compare its bioavailability with that of two other commercially available formulations, namely curcumin with volatile oil (volatile oil formulation) and curcumin with phospholipids and cellulose (phospholipid formulation), in healthy adult male subjects (15 per group) under fasting conditions. Findings: curcumin in a natural turmeric matrix exhibited greater bioavailability than the two comparator products.

Discussion

This study showed that 64% of the RCTs that analyzed the effects of curcumin used methods for enhancing the oral bioavailability of curcumin. The most common enhancement method was the addition of piperine to curcumin. Most of the trials that used methods for enhancing the oral bioavailability of curcumin were published in the five most recent analyzed years. However, very few trials (7.9%) compared the bioavailability/pharmacokinetics of various curcumin products.

Oral bioavailability is a crucial aspect of the bio-efficiency of bioactive food ingredients, as it influences the potential health benefits of food [10]. However, many biocomponents have a low oral bioavailability because of their low stability in gastrointestinal fluid and inadequate absorption through the intestinal epithelium [11,12]. Thus, researchers have started studying the possibilities of improving the oral bioavailability of food bioactive ingredients by incorporating bioactive agents into different colloidal delivery systems such as emulsions, nanoemulsions, microemulsions, solid lipid nanoparticles, biopolymer nanoparticles, microgels, et cetera [6,13-16].
As curcumin has low aqueous solubility and can be metabolized rapidly by the gastrointestinal system, resulting in low oral bioavailability, multiple pharmaceutical strategies for oral administration of curcumin have also been tested, including solid dispersions, nano/microparticles, polymeric micelles, nanosuspensions, lipid-based nanocarriers, cyclodextrins, conjugates, and polymorphs [17]. It is reasonable to expect that different formulations of curcumin, and the different methods used to improve its oral bioavailability, will affect the results of human clinical trials.

In a narrative review published in 2019, Ma et al. presented, in tabular form, 39 trials that used curcumin, but without reporting methods for searching the literature and without a clear conclusion regarding the methods used or their comparative advantage [17]. In 2019, Kunnumakkara published a non-systematic analysis of "over 200 clinical studies with curcumin", claiming that the "therapeutic potential of curcumin as demonstrated by clinical trials has overpowered the myth that poor bioavailability of curcumin poses a problem" [18]. However, their study did not report any research methods, analysis methods, or eligibility criteria [18]. Such generic statements cannot be made based on non-systematic narrative literature reviews.

In this study, we analyzed all RCTs indexed on PubMed up to the targeted search date that used curcumin, to explore methods used to enhance its oral bioavailability. In 107 trials that reported using such methods, 25 different methods were described. This is a widely heterogeneous methodological approach, and it is not clear what the comparative efficacy of all those different approaches is. Only 14 out of the 165 trials we analyzed compared different curcumin products (from two to four products), and of those 14 trials, 9 compared their bioavailability/pharmacokinetics.
Considering the vast heterogeneity of the methods described for enhancing the bioavailability of curcumin, the number of RCTs that have compared the bioavailability/pharmacokinetics of different curcumin products is extremely low. Moreover, six trials that compared different curcumin products compared their effects on various clinical outcomes regarding efficacy and/or safety. Some of those studies did not find that products using methods intended to enhance bioavailability performed better than standard curcumin. This indicates that we need not only trials comparing the bioavailability achieved with different methods, but also head-to-head trials of different curcumin products to test their comparative clinical efficacy. Of note, we found that among the 113 trials that mentioned bioavailability in the manuscript, 14% did not report using any methods to enhance curcumin bioavailability. This indicates that some trialists were aware of the issues related to curcumin bioavailability but still chose not to try to enhance it. While the focus of this study was to analyze aspects related to the bioavailability of curcumin, we noticed poor reporting of the investigated interventions. Some studies reported "generic" names or descriptions of the intervention, while some reported commercial names. Some studies reported both, but with an incomplete list or description of the investigated interventions. Furthermore, the trialists interchangeably used the terms curcumin and curcuminoids, even though they are not synonymous. Thus, we would urge trialists to devote more attention to detailed and correct reporting of the intervention that was used in a trial. Transparent reporting will foster the replicability of a trial. Evidence syntheses from this field of research should compare the effects of different methods for enhancing the oral bioavailability of curcumin using data from clinical trials.
Such analysis should involve data extraction on efficacy and safety outcomes for different indications, and comparative evaluations between trials that used different methods. However, that was beyond the scope of this study. This study had several limitations. First, we searched only PubMed to retrieve RCTs on curcumin. Since this was not a systematic review, we used a single search system to retrieve a comprehensive dataset of trials to analyze. A recent comparison of 26 different academic search systems that could be used in systematic reviews and meta-analyses showed that PubMed is suitable for use as a principal search system [19]. Second, we used very specific inclusion criteria. However, we provided detailed reasons for excluding the retrieved studies so that our decision-making is transparent.

Study Design

This was a methodological study in which we included RCTs that were indexed on PubMed.

Inclusion Criteria

We included RCTs in which curcumin was used via oral ingestion for systemic absorption and analyzed as an intervention or a comparator, regardless of the type of participants and the type of outcomes used in a trial. We also included RCT protocols if they were retrieved via our search. We included manuscripts that reported post hoc analyses of previously published RCTs.

Exclusion Criteria

We excluded studies in which mixtures of curcumin with other compounds were used as an intervention in such a way that the intervention effects could not be clearly attributed to curcumin (for example, curcumin as part of a spice mixture in food, or as part of an herbal intervention). The only exception was interventions in which curcumin was combined with a bioavailability enhancer. We excluded studies using turmeric powders. We excluded studies on animals.
We excluded studies in which curcumin was administered via modes other than oral ingestion for systemic absorption, such as curcumin applied in the mouth only as a local intervention in the oral cavity, or intravenous or rectal application of curcumin.

Search and Screening

We searched PubMed on 12 September 2020 using the following search syntax: curcumin OR curcuma OR turmeric OR theracurmin OR tetrahydrocurcumin OR NCB-02 OR Curcuma domestica Val. OR Curcuma xanthorrhiza OR diferuloylmethane OR curcuminoids OR Biocurcumax OR biocurcumin OR BCM-95 OR BCM-095, combined with the limit for randomized controlled trials. All bibliographic records found with this search were retrieved. Two authors independently screened all records to verify that they indeed fulfilled the inclusion criteria. We then retrieved the full texts of all RCTs that were deemed eligible or potentially eligible and repeated the screening of full texts against the inclusion criteria. Again, two authors independently screened each full text. Reasons for excluding records from the study were noted. In case of disagreement during the screening of bibliographic records and full texts, two authors resolved discrepancies via discussion or, if necessary, a third author was included in the decision-making.

Data Extraction

Two authors participated in data extraction for each study; one author conducted the extraction, and the second author verified it. For RCTs retrieved via PubMed, we extracted the following information: the last name of the first author, year of publication, title of the study, trial registration number, number of participants randomized, number of study arms, duration of follow-up, type of participants included, type of curcumin product that was tested (either as an intervention or a comparator), and a complete description of curcumin from the methods.
We also extracted any mentions of the bioavailability of curcumin throughout the manuscript, together with the manuscript section where considerations regarding curcumin bioavailability were mentioned. For the type of curcumin that was used, we extracted both the commercial and the generic names of the product, as described in the article. After data extraction, we categorized the responses and presented the data narratively.

Statistics

Data were presented as numbers and frequencies, median, and range. Microsoft Excel (Microsoft Inc., Redmond, WA, USA) was used for analyses.

Conclusions

The majority of trials reported using methods for enhancing the oral bioavailability of curcumin. However, a large number of diverse methods were used, and very few trials compared different curcumin products. More studies are needed to explore the comparative bioavailability and efficacy of different curcumin products.

Funding: The article processing charge for this manuscript was funded by the University of Split Faculty of Science from the institutional financial support awarded to Viljemka Bučević Popović.
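The search strategy described above can be composed programmatically. A minimal sketch, assuming the public NCBI E-utilities `esearch` endpoint and expressing the RCT limit as PubMed's publication-type filter (the helper names are illustrative; no request is actually sent):

```python
from urllib.parse import urlencode

# Terms from the published search syntax, OR-joined as in the methods.
TERMS = [
    "curcumin", "curcuma", "turmeric", "theracurmin", "tetrahydrocurcumin",
    "NCB-02", "Curcuma domestica Val.", "Curcuma xanthorrhiza",
    "diferuloylmethane", "curcuminoids", "Biocurcumax", "biocurcumin",
    "BCM-95", "BCM-095",
]

def build_query(terms):
    """OR-join the search terms and AND the RCT publication-type limit."""
    or_block = " OR ".join(terms)
    return f"({or_block}) AND randomized controlled trial[Publication Type]"

def esearch_url(query, retmax=10000):
    """Compose an NCBI E-utilities esearch URL for PubMed (query only)."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    return base + "?" + urlencode({"db": "pubmed", "term": query, "retmax": retmax})

query = build_query(TERMS)
url = esearch_url(query)
```

Fetching `url` would return the matching PubMed IDs; the sketch only shows how the OR block and the RCT filter combine into one query string.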
Antimicrobial pectin-gellan films: effects on three foodborne pathogens in a meat medium, and selected physical-mechanical properties

ABSTRACT

This is the first report of the elaboration and characterization of films (F2) containing 1% (w/v) citrus pectin, 0.2% (w/v) gellan gum, 0.5% (w/v) glycerol, 5 mM CaCl2, 0.05 M ethylenediaminetetraacetic acid (EDTA) and 90 arbitrary units (AU)/mL of an antimicrobial concentrated supernatant (ACS) from fermentation culture broths of the lactic acid bacterium Streptococcus infantarius. The functional films inhibited the growth of Listeria monocytogenes, Escherichia coli and Staphylococcus aureus in "Barbacoa" medium in 7-day cultures at 35°C. "Barbacoa" is a highly appreciated Mexican meat product. In contrast, the control cultures exhibited bacterial growth of up to 10^7-10^9 colony-forming units (CFU)/g. A synergy in antimicrobial activity between ACS and EDTA was demonstrated. Some physical properties of the films were modified by the EDTA-ACS incorporation [F2/control film]: Young's modulus (MPa), 1,394/707; elongation at break (%), 1.9/9.3; stress at break (MPa), 5.7/12.6; water vapor permeability (10^-11 g m Pa^-1 s^-1 m^-2), 3/20; and oxygen permeability (10^-12 g m Pa^-1 s^-1 m^-2), 1.9/1.2.

Introduction

Research on packaging for the conservation of food products is of worldwide interest due to its implications for food safety, biomaterials applications and sustainability, among others (Campos, Gerschenson, & Flores, 2010; Jabeen, Majid, Nayik, & Yildiz, 2015). Specifically, research on the packaging of meat products encompasses studies on products ranging from minimally to highly processed: fresh beef (Zinoviadou, Koutsoumanis, & Biliaderis, 2010), pork meat hamburgers (Vargas, Albors, & Chiralt, 2011) and fresh white shrimps (Meenatchisundaram et al., 2016), among others.
Furthermore, antimicrobial packaging materials can effectively control the growth of spoilage and pathogenic microorganisms on the surface of meat products; in this sense, one alternative is the use of edible biopolymer films enriched with bacteriocins (Pattanayaiying, H-Kittikun, & Cutter, 2015; Salmieri et al., 2014). Bacteriocins are natural antimicrobial peptides synthesized by one bacterial species that are active against other bacterial species. After rigorous evaluation, these antimicrobials can be used as safe additives in food products for human consumption, as is the case for the bacteriocins nisin and colicin (FDA, 2000, 2016). In order to broaden the antimicrobial spectrum of bacterial inhibition, bacteriocins are frequently used in combination with other substances, such as the chelating agent ethylenediaminetetraacetic acid (EDTA), which helps permeabilize bacterial outer membranes, resulting in effective antimicrobial activity against Gram-negative bacteria as well (Vaara, 1992). Furthermore, some biopolymers, like pectins, also exhibit antimicrobial activities that can contribute to more effective functional films for food packaging (Calce et al., 2014; Jindal, Kumar, Rana, & Tiwary, 2013). The present article reports the main results concerning the elaboration and characterization of films of gellan gum mixed with citrus pectin, enriched with EDTA and an antimicrobial concentrated supernatant (ACS) from fermentation culture broths of the lactic acid bacterium (LAB) Streptococcus infantarius, containing bacteriocin-like inhibitory substances (BLIS). The antimicrobial activity of the films was tested against Listeria monocytogenes, Escherichia coli and Staphylococcus aureus growing in a medium based on Mexican "Barbacoa", a highly appreciated meat product that is usually prepared with lamb meat wrapped in agave leaves and cooked overnight in customized ovens (Natividad-Bonifacio et al., 2010).
Also, some selected physical-mechanical properties of the films were determined (i.e., Young's modulus, stress and elongation at break, and water vapor and oxygen permeabilities). The development of bioconservation technologies for Mexican products like "Barbacoa" is of great interest for many reasons, including those concerning the food market (Rubio, Torres, Gutierrez, & Mendez, 2004). The indicator bacteria for testing the antimicrobial activity of the films were Listeria monocytogenes CFQ-103, Escherichia coli ATCC-25922 and Staphylococcus aureus ATCC-25923, kindly provided by Dr. G. Díaz-Ruiz (School of Chemistry, UNAM, Mexico). All bacterial strains were conserved at −80°C in 2 mL vials containing 1 mL of 24 h old culture broth of each bacterium mixed with 20% v/v glycerol. For this purpose, S. infantarius was grown in De Man-Rogosa-Sharpe broth, MRS (BD Difco, France), whereas the other bacteria were grown in Brain Heart Infusion broth, BHI (Bioxon, México).

Antimicrobial additives

The ACS of S. infantarius was obtained according to Calderón-Aguirre et al. (2015). Briefly, reactivated S. infantarius cells were cultured in MRS at 30°C for 6 h; then, the bacterial cells were removed by centrifugation. The supernatant was collected, adjusted to pH 6.5 and concentrated (60°C, 72 mbar) to 40% of its initial volume. To inactivate proteases, the ACS was heated at 110°C for 10 min; it was then cooled and stored until use. The ACS exhibited an antimicrobial activity of 6,400 arbitrary units (AU)/mL, determined by the spot-on-the-lawn method (Nuñez, Tomillo, Gaya, & Medina, 1996), this activity being due to the presence of BLIS, since the antimicrobial action was inhibited by proteases (i.e., Proteinase K (Invitrogen, U.S.A.); peptidase and trypsin (Sigma-Aldrich, U.S.A.)) (Mimila-Méndez, 2017). The obtained films were then conditioned in a desiccator for 48 h at a relative humidity (RH) of 50-55% and 23°C.
The average thickness of the films was determined from measurements at five points on each film with a micrometer reading to the nearest 0.0001 mm (Truper, Mexico).

Film effects on the growth of Listeria monocytogenes, Escherichia coli and Staphylococcus aureus in selective media

L. monocytogenes, E. coli and S. aureus were grown in BHI at 35°C for 24 h; then, decimal dilutions of each bacterial culture were made with isotonic salt solution (1% (w/v) NaCl). Samples of 200 µL of suitable dilutions of each bacterial culture, containing 25 CFU, were inoculated into Petri dishes (Interlux, 60 × 15 mm) containing selective media: Oxford (BD Difco, France) for L. monocytogenes, MacConkey (Sigma-Aldrich) for E. coli and Baird-Parker (BD Difco, France) for S. aureus. After surface spreading of the samples on the agar plates, the agar surfaces were aseptically covered with 6 cm diameter circular films, previously sterilized with UV radiation for 24 h (12 h per side). The cultures were then incubated at 25°C for 30 days. The solid cultures with film treatments and the controls without films (C) were tested in triplicate.

Film effects on the growth of Listeria monocytogenes, Escherichia coli and Staphylococcus aureus in a meat product medium

L. monocytogenes, E. coli and S. aureus were grown in BHI at 35°C for 24 h; then, appropriate dilutions of each bacterial culture were made with isotonic salt solution to inoculate 100 CFU/plate by surface spreading on Petri dishes (Interlux, 60 × 15 mm) containing a culture medium based on "Barbacoa", a lamb meat product. The "Barbacoa" medium contained 4.5% (w/v) Barbacoa (bought in a traditional market in Tulancingo, Hidalgo, México) and 1.5% (w/v) bacto-agar (BD Difco, France). A 45 g portion of Barbacoa was thoroughly ground with a food processor (Oster 3213, China), then mixed with 1 L of distilled water and 15 g of bacto-agar, maintaining constant agitation and heating until boiling.
The medium was then sterilized (121°C for 2 h in a Tuttnauer 3870ELV-D autoclave) and distributed into Petri dishes until use. The surface of the "Barbacoa" plates, inoculated with bacteria, was covered with 6 cm diameter circular films, previously sterilized with UV radiation for 24 h (12 h per side). The cultures were then incubated at 35°C for 7 days, taking samples at days 0, 1, 3, 5 and 7, in triplicate. Each sample (one plate containing 5 mL of medium) was homogenized with 45 mL of peptone water in a Seward Stomacher 400 Circulator at 300 rpm for 5 min. Viable cell counts were made by preparing decimal dilutions of the homogenate and mixing 1 mL of diluted sample in BHI soft agar in plates (Interlux, 90 × 15 mm), which were incubated at 35°C for 24 h. Plates with F2 and FC films were tested, as well as bacteria-inoculated plates without films (NF). A bacterial growth curve was obtained for each experiment and used to determine: (a) the initial viable cell count (X0; CFU/g); (b) the maximum bacterial concentration (Xmax; CFU/g); (c) the multiplication factor (Xmax/X0, dimensionless); and (d) the maximum specific growth rate (µmax; h^-1), calculated as the slope of the semi-logarithmic plot of the growth curve in the exponential growth phase (Liu, 2013).

Mechanical characterization of the films

Mechanical tests were done according to the ASTM D882-10 method (ASTM, 2010) in a Lloyd TA plus texture analyser. Film samples were cut into a "dog-bone" shape (Type M-I tension test specimen) according to the specifications of standard ASTM D638M-93 (ASTM, 1993). After conditioning the samples for 48 h at 50-55% RH and 25°C, they were gripped into the texture analyzer with an initial separation of 0.05 m. The tensile tests were carried out at a crosshead speed of 1 mm/s. At least 30 replicates per treatment were carried out to obtain the stress-Hencky strain curves of the samples from the force-distance data.
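The growth-curve parameters defined above (X0, Xmax, the multiplication factor and µmax as the semi-log slope over the exponential phase) can be computed from viable-count data. A minimal sketch with synthetic counts (the sampling times, counts and growth rate are illustrative, not the study's data):

```python
import math

def growth_parameters(times_h, counts_cfu_per_g, exp_phase):
    """Return X0, Xmax, the multiplication factor and µmax (h^-1).

    µmax is the least-squares slope of ln(count) vs time over the
    indices in exp_phase (the exponential growth phase).
    """
    x0, xmax = counts_cfu_per_g[0], max(counts_cfu_per_g)
    t = [times_h[i] for i in exp_phase]
    y = [math.log(counts_cfu_per_g[i]) for i in exp_phase]
    n = len(t)
    t_bar, y_bar = sum(t) / n, sum(y) / n
    num = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
    den = sum((ti - t_bar) ** 2 for ti in t)
    return x0, xmax, xmax / x0, num / den

# Synthetic example: growth at µ = 0.5 h^-1 from 100 CFU/g, with a
# plateau after 24 h (sampling days 0, 1, 3, 5 and 7, in hours).
times = [0, 24, 72, 120, 168]
counts = [100 * math.exp(0.5 * min(t, 24)) for t in times]
x0, xmax, factor, mu_max = growth_parameters(times, counts, [0, 1])
```

With this synthetic curve the fitted µmax recovers the 0.5 h^-1 used to generate the data, which is the consistency check one would apply before using the routine on real counts.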
Specimens that failed at the grip contact point were discarded. Young's modulus (EM; MPa) was determined from the slope of the linear region of the stress-strain curves. The ultimate mechanical properties of the films, stress (σT,max; MPa) and elongation at break (Emax; %), were determined at the rupture point (Calderón-Aguirre et al., 2015).

Water vapor permeability of the films

The water vapor permeability (WVP) of the films was determined with the ASTM E96-00 method (ASTM, 2000). Film disks, previously equilibrated at 53% RH and 25°C for 48 h, were mounted on permeation cells (aluminium cups) containing dried silica gel (0% RH); the cups were then placed inside a cabinet equilibrated at 75 ± 2% RH and 23 ± 2°C for 24 h prior to the WVP tests. The cups were weighed every hour for 8 h. The WVP was determined according to the procedure reported by Aguirre-Loredo, Rodríguez-Hernández & Chavarría-Hernández (2014). Four determinations were done per treatment.

Oxygen permeability of the films

The oxygen permeability (PO2) of the films was determined in accordance with the ASTM D1434-82 method (ASTM, 1982) using a film-package permeability tester (Labthink VAC-V2, China). Films were conditioned at 50-55% RH for 48 h prior to being placed into the equipment chambers. Four runs per treatment were carried out.

Statistical analysis

Data are presented as the mean ± standard deviation for each treatment. Results were analyzed for statistical significance using analysis of variance (ANOVA) followed by the Tukey test (p < 0.05). Differences between pairs of means were assessed using the t-test (p < 0.05) (SigmaPlot 12.5, SPSS Inc., USA).

Results and discussion

Antimicrobial activity of the films

First of all, the S. infantarius ACS exhibited important activity against both Gram-positive bacteria, L. monocytogenes and S. aureus, with no effect against the Gram-negative bacterium E. coli.
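Determining EM as the slope of the linear region of a stress-strain curve, as described in the mechanical methods above, can be sketched numerically. The synthetic data and the 1% strain cutoff for the "linear region" are illustrative assumptions, not values from the study:

```python
import numpy as np

def youngs_modulus(strain, stress_mpa, linear_limit=0.01):
    """Fit the slope (MPa) of the stress-strain curve over the region
    where strain <= linear_limit (the assumed linear-elastic region)."""
    mask = strain <= linear_limit
    slope, _intercept = np.polyfit(strain[mask], stress_mpa[mask], 1)
    return slope

# Synthetic curve: linear-elastic at 1,394 MPa (the F2 value reported
# in the abstract), softening beyond 1% strain.
strain = np.linspace(0, 0.019, 50)
stress = np.where(strain <= 0.01, 1394 * strain, 13.94 + 200 * (strain - 0.01))
em = youngs_modulus(strain, stress)
```

In practice the stress-Hencky strain pairs would come from the texture analyzer's force-distance data rather than being generated, and the linear-region cutoff would be chosen by inspecting each curve.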
The exhibited antimicrobial activity is due to the presence of BLIS involving molecules of 4 to 7 kDa molecular weight, which can be inactivated by proteases (Mimila-Méndez, 2017); furthermore, the production of bacteriocins and BLIS by other bacterial strains belonging to the Streptococcus bovis/Streptococcus equinus complex has been reported. For example, S. bovis HC5 produces the bacteriocin bovicin HC5, which exhibits important antilisterial activity (Mantovani & Russell, 2003). Besides, some of these bacterial strains have been isolated from traditional fermented dairy products (Jans et al., 2013). In the present work, in order to elaborate a film with antimicrobial activity against L. monocytogenes, S. aureus and E. coli, the metal chelator EDTA was added to the film-forming solutions to increase the bacterial sensitivity to the BLIS present in the S. infantarius ACS (Banin, Brady, & Greenberg, 2006). The determined MIC was a blend of ACS (90 AU/mL) with EDTA (0.05 M), which inhibited the growth of the three indicator bacteria (Figure 1). There are reports of combinations of EDTA with antimicrobial agents against both Gram-positive and Gram-negative bacteria. For example, Economou, Pournis, Ntzimani, and Savvaidis (2009) reported the combination of 500-1,500 International Units of nisin with 50 mM EDTA, which affected the populations of mesophilic bacteria, Pseudomonas sp., Brochothrix thermosphacta, lactic acid bacteria and Enterobacteriaceae during the storage of fresh chicken meat. Sinigaglia, Bevilacqua, Corbo, Pati, and Del Nobile (2008) used conditioning brines with 0.25 g/L lysozyme and 10-50 mM Na2-EDTA during the storage of mozzarella cheese, reporting significant inhibition of coliforms and Pseudomonadaceae. Furthermore, Banin et al. (2006) reported a synergic interaction of gentamicin (10 mg/mL) with 50 mM EDTA against P. aeruginosa.
The antimicrobial films were elaborated with a constant EDTA concentration of 0.05 M, testing three levels of ACS (i.e., 75, 90 and 120 AU/mL), in a complex biopolymer matrix of low-methoxyl pectin and deacetylated gellan gum, involving a gelation process greatly affected by the presence of calcium ions (Pérez-Campos, Chavarría-Hernández, Tecante, Ramírez-Gilly, & Rodríguez-Hernández, 2012; Thakur, Singh, & Handa, 1997). Figure 2 presents the effects of the films on the growth of the three indicator bacteria in selective media. All bacteria grew well in the control plates (C, bacteria-inoculated media without films), where the counts were 105 CFU/plate and 78 CFU/plate for L. monocytogenes and E. coli, respectively, after 2 days of incubation, whereas S. aureus exhibited 77 CFU/plate on the third day of incubation. In contrast, none of the plates with films (i.e., FC, F1, F2 and F3) exhibited any bacterial growth over a period of 30 days (Figure 2). The antimicrobial activity exhibited by the FC films, which contained neither ACS nor EDTA, would rely on their contents of gellan gum and pectin. The antimicrobial properties of several carbohydrate polymers have been reported (i.e., karaya gum, chitosan, algal polysaccharides (Ramawat & Mérillon, 2015)); specifically, pectins extracted from apple peel (pristine and modified samples) and from Aegle marmelos fruit have exhibited antimicrobial activity against E. coli and S. aureus (Calce et al., 2014) and against Bacillus cereus and E. coli (Jindal et al., 2013), respectively. Therefore, the bacterial growth inhibition exhibited by the FC films would be attributed to their pectin contents, this antimicrobial activity being associated with the uronic acid content of the biopolymer (Jindal et al., 2013); furthermore, pectic oligosaccharides have also been proposed as prebiotics with valuable antimicrobial properties (Gullón et al., 2013).
On the other hand, the FC films only partially inhibited the growth of the three bacteria inoculated in "Barbacoa" (Table 1). In fact, the bacterial concentrations reached ranged from 2.67 × 10^2 CFU/g for S. aureus to 1.6 × 10^3 CFU/g for L. monocytogenes, associated with multiplication factors of 1.8 × 10^1 to 8.3 × 10^1 times, respectively, involving increases in the bacterial concentrations of only 1 log cycle during the experiments. The partial antimicrobial activity exhibited by the FC films would be attributed to the presence of pectins (Jindal et al., 2013). Furthermore, outstanding antimicrobial activity against the three tested bacteria was exhibited by the F2 films, on which the bacteria did not grow during the cultures; moreover, from day 1 of the experiments, the viable cell counts were lower than 1 CFU/g (Figure 3, circle symbols). This bacterial inhibition would imply a synergy of the individual antimicrobial-activity contributions of the ACS, EDTA and pectins present in the bioactive F2 films. In an analogous study, Sivarooban, Hettiarachchy, and Johnson (2008) reported a maximum antimicrobial activity of soy protein films enriched with grape seed extract (1%), nisin (10,000 IU/g) and EDTA (0.16% w/w), which reduced the populations of L. monocytogenes, E. coli and Salmonella typhimurium by 3, 2 and 1 log (CFU/mL), respectively, for contact periods of 1 h at 25°C. In another study, concerning the antimicrobial activity of whey-protein-isolate films enriched with nisin (6,000 IU/g), EDTA (1.6 mg/mL) and malic acid (1%), Gadang, Hettiarachchy, Johnson, and Owens (2008) reported E. coli O157:H7 growth inhibition of 4.6 log cycles in turkey frankfurters inoculated with 6.15 log (CFU/g), after storage for 28 days at 4°C.

Mechanical characterization

The selected physical-mechanical properties of both the bioactive (F2) and control (FC) films are shown in Table 2. The mechanical parameters were statistically significantly different (p < 0.001).
The incorporation of EDTA and ACS into the films increased the Young's modulus (EM); F2 showed more resistance to axial deformation than FC. Accordingly, F2 was almost five times less extensible than FC, and the stress value at the end of stretching of FC was higher than that of the F2 film. The high stiffness of the bioactive film can be attributed to the development of a heterogeneous film structure, presumably due to the formation of crystals composed of calcium and H2EDTA^2- ions and water molecules (Zabel, Poznyak, & Pawlowski, 2006), which could form during either the drying or the storage of the films. This effect was not expected, since CaCl2 was added to the film-forming solution to promote gellan and pectin gelation and yield biopolymer matrices with a higher degree of cross-linking and, therefore, less gas-permeable films.

Water vapor permeability

Table 2 presents the WVP values of the FC and F2 films. The results obtained were statistically significantly different (p < 0.001). The WVP was affected by the incorporation of the antimicrobial additives, mainly EDTA; the WVP of the F2 film was nearly 85% lower than that of FC. This decrease could be explained by the structural modifications arising in the F2 network from the development of EDTA-calcium crystal aggregates. This assumption is based on observations reported in studies concerning films containing nanoparticles (nanoclays, nanofibers, nanowhiskers (Sanchez-Garcia, Lopez-Rubio, & Lagaron, 2010)). These studies have attributed the reduction in WVP of nanocomposite films to the high nanodispersion of the particles across the matrix, the high crystallinity and the good interfacial adhesion in the nanobiocomposites. However, it has also been reported that at high nanoparticle concentrations the WVP increases due to filler agglomeration, which usually results in the creation of preferential paths through which the permeants diffuse faster.
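For reference, a gravimetric WVP determination of the kind described in the methods reduces to the steady-state mass-gain rate normalized by film area, water-vapor partial-pressure difference and film thickness, which yields the g m Pa^-1 s^-1 m^-2 units reported in the abstract. A hedged sketch with illustrative numbers (not the study's raw weighing data):

```python
import math

def water_vapor_permeability(slope_g_per_s, thickness_m, area_m2, delta_p_pa):
    """WVP in g·m·Pa^-1·s^-1·m^-2: mass-gain slope x thickness / (area x Δp)."""
    return slope_g_per_s * thickness_m / (area_m2 * delta_p_pa)

# Illustrative values: a hypothetical steady-state mass-gain slope from
# the hourly weighings, a ~60 µm film, a 3 cm diameter cup mouth, and
# Δp for 75% vs 0% RH at 23°C (saturation pressure of water ≈ 2,810 Pa).
slope = 2.0e-6               # g/s
thickness = 60e-6            # m
area = math.pi * 0.015 ** 2  # m^2
delta_p = 0.75 * 2810        # Pa
wvp = water_vapor_permeability(slope, thickness, area, delta_p)
```

With these assumed inputs the result lands in the 10^-11 g m Pa^-1 s^-1 m^-2 range, the same order of magnitude as the values reported for the films.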
Thus, good dispersion of fillers in the biopolymer matrix, as well as the amount and nature of the plasticizers, are relevant aspects to consider for enhancing the water barrier properties of films. Furthermore, in the present study, the films conferred dehydration protection on the tested agar media owing to their WVP properties. The agar plates without films dried spontaneously during the experiments; in contrast, the agar plates with films remained without apparent signs of desiccation (Figure 2). These properties are important for the conservation of processed foods.

Oxygen permeability

The oxygen permeability (PO2) of the films was not affected by the presence of EDTA and ACS (Table 2). There was no statistically significant difference between the oxygen permeabilities of the two films, F2 and FC (p = 0.358). These values were lower than the corresponding WVP ones. The hydrophilic nature of the pectin-gellan films, their low plasticization and their microstructural organization provided a good barrier to oxygen diffusion, and this could have enhanced the antimicrobial activity of the films.

Conclusions

We report the elaboration and characterization of functional films containing citrus pectin, gellan gum, glycerol, CaCl2, EDTA and ACS from fermentation culture broths of S. infantarius. The bioactive films exhibited inhibitory effects against L. monocytogenes, E. coli and S. aureus inoculated both in selective media (Oxford, MacConkey and Baird-Parker, respectively) and in "Barbacoa" medium (the latter to mimic "Barbacoa", an important Mexican meat product). The recorded antimicrobial activity would be attributed to a synergic interaction of ACS, EDTA and pectin. On the other hand, the mechanical properties of the bioactive films were influenced by the EDTA content, giving stronger and less extensible films than the controls without EDTA; these effects might be attributed to the formation of EDTA-calcium crystals in the biopolymer matrix.
These crystals also enhanced the water barrier properties of the bioactive films. Nevertheless, more research must be done to improve the mechanical properties of the films without affecting their antimicrobial activity.
Change of pathotype and phylogenetic analysis of infectious bronchitis virus detected in Kagoshima prefecture, Japan

Infectious bronchitis (IB) is a highly contagious disease of chickens induced by IB virus (IBV) infection. The pathotypes and S1 genotypes of IBV field strains detected from 2008 to 2018 were investigated in Kagoshima prefecture, Japan. The frequency of cases in which the renal lesions characteristic of IBV infection were histopathologically confirmed was significantly higher from 2014 to 2018 than from 2008 to 2009, suggesting an altered pathotype of IBV. Of the 7 genotypes (JP-I, JP-II, JP-III, JP-IV, Mass, Gray, and 4/91) that have been detected in Japan, 6 genotypes (all except JP-II) have been detected since 2008, and JP-III and JP-I appeared to be predominant. JP-IV, which differs antigenically from the other genotypes, has been detected since 2009.

nephritis, tubular degeneration, and infiltration by heterophils in the kidney [3]. The frequency of cases in which the respiratory and renal lesions described above were histopathologically observed in the trachea and kidney was compared between 2008-2009 and 2014-2018 by Fisher's exact test. Eighteen RNA samples (6 from 2008-2009 and 12 from 2014-2018) utilized for diagnosis in the 17 cases were subjected to phylogenetic analysis (Table 1 and Fig. 1). Reverse transcription-polymerase chain reaction was performed with a One-Step RT-PCR kit (Qiagen, Hilden, Germany) using a primer set that amplifies the S1 gene of IBV [17]. After 1.5% agarose gel electrophoresis, the amplification products were purified with the QIAquick Gel Extraction Kit (Qiagen, Hilden, Germany). A commercial sequencing service (Fasmac DNA sequence service, http://fasmac.co.jp/gene_loupe) was used to sequence the amplification products.
Based on the sequences of the amplification products and several S1 gene sequences of IBV retrieved from GenBank (https://www.ncbi.nlm.nih.gov/genbank), a phylogenetic tree was constructed by the neighbor-joining method (500 bootstrap replicates) using MEGA 6 software (https://www.megasoftware.net). The S1 genotype of each sample was defined according to the classification identified by Mase et al. [17-19]. The detection rate of each genotype was calculated throughout the investigation period and in the two groups (2008-2009 and 2014-2018), respectively. The detection rates in the two groups were compared by Fisher's exact test. All statistical analyses were performed using the free statistical software R (version 3.5.2), distributed by The R Project (https://www.r-project.org). The significance level was P<0.05 in all tests. There was no significant difference between the two groups in the frequency of histopathological observation of respiratory lesions (Table 3).

[Table 1 footnotes: a) Identical letters indicate identical farms. b) In Case 8, two genotypes of infectious bronchitis virus were detected, from the trachea and kidney respectively, in a pooled sample of three chickens. c) +: the lesions were histopathologically observed; -: not observed. d) "Suspected" indicates cases in which a definitive diagnosis was difficult because the histopathological lesions were not observed or were very mild, despite pathogen detection such as virus isolation or detection of specific genes. IB: infectious bronchitis.]

In Japan, most reports regarding IB involved nephritis until around 1990 [7,9,10,16,24], following the first report of IB nephritis in 1971 [11], although there were a few reports of IB in which affected chickens developed respiratory signs alone [23].
Also, the major clinical sign caused by IBV isolates submitted from various regions of Japan for phylogenetic analysis has mostly been nephritis since 1989, whereas it was respiratory from 1951 to 1980 [17]. Hence, the pathotype of IBV field strains appears to have changed chronologically from respiratory to nephropathogenic in Japan. In Kagoshima prefecture, there had been no report on the pathotype of IBV field strains except one in 1981 [20], and the predominant pathotype remained unknown for a long period. The present study showed that detection of nephropathogenic IBV field strains has increased significantly in Kagoshima prefecture since 2014, suggesting that nephropathogenic strains have become predominant in the prefecture. Conversely, a change in pathogenicity from highly nephropathogenic to respiratory has been reported for field strains in Australia [8]; thus, IBV field strains appear to alter their pathogenic features constantly. Cavanagh et al. [6] described the relationship between the pathogenicity of IBV and the S protein as an "open question", although the S protein of IBV is known to be the determinant of cell tropism [4]. In this study, there was no significant difference in the detection rates of S1 genotypes between the two groups despite the change of pathotype in field strains, and the relationship between IBV pathogenicity and the S1 gene remains unclear. Recently, some studies have suggested that non-structural proteins of IBV, such as the replicase or accessory proteins, are involved in IBV pathogenicity [2,14]. Although the S gene may not be the major determinant of IBV pathogenicity, further studies are needed to elucidate this relationship.
In field cases of IB, the elicitation of histopathological lesions by IBV infection and the determination of IBV pathotype are presumably affected by various factors: genetic factors; host factors such as breed, age in days, and vaccination history; and environmental factors such as season, farm hygiene management, and pathogens other than IBV. However, there were no significant relationships between the observation of histopathological lesions and age in days, vaccination, or month of occurrence in the 17 cases of the present study (data not shown). Also, the histopathological lesions characteristic of IBV infection were not observed in 4 cases (Cases 2, 3, 6, and 7) despite complication with other diseases, as shown in Table 1. The factors determining IBV pathotype are therefore difficult to identify and explain from the data of the present study alone. The detection rate of each genotype, shown in Table 3, may not necessarily reflect the prevalent status of IBV genotypes in Kagoshima prefecture since 2008 because of the small sample size and bias in the regions and farms from which samples were obtained. However, JP-III and JP-I have been detected constantly over a long period across a wide area of the prefecture, as shown in Fig. 1. Therefore, these two genotypes appear to have been predominant in Kagoshima prefecture since 2008. JP-I is indigenous to Japan [17], and it is speculated that JP-I has also evolved over a long time in Kagoshima prefecture. On the other hand, the origin of JP-III is unclear. The phylogenetic analyses of the present and previous studies suggest that JP-III is closely related to an isolate from China [1,17]. IBV is believed to have a wide host range, infecting not only poultry but also wild birds such as ducks [6,15]. Mase et al. inferred the possible involvement of wild birds in the dissemination of IBV into Japan [17].
In Kagoshima prefecture, there is a large plain (the "Izumi plain") that serves as an overwintering site for wild migratory birds, including ducks that can be infected with viruses such as avian influenza virus and that migrate among neighboring countries including China [21]. The predominance of JP-III in Kagoshima prefecture may not be unrelated to the existence of such a "portal site" for viruses derived from neighboring countries. Estimating the S1 genotype of IBV field strains is essential for the prevention and control of IB, because the S1 genotype of the field strain guides vaccine selection on a chicken farm. All of the S1 genotypes except JP-II have been detected in Kagoshima prefecture since 2008, implying a diversity of IBV field strains in the prefecture. A flexible vaccination strategy would therefore be needed for the prevention and control of IB. JP-IV was detected in 3 of the 18 samples subjected to phylogenetic analysis in the present study. The 3 samples were collected from 3 different cases (Cases 2, 8, and 11 in Table 1), and the earliest detection was in January 2009 on a broiler farm (Case 2 in Table 1). On the other hand, JP-IV was initially reported as a novel genotype of IBV in Japan, identified from samples collected in September 2009 on a layer farm in Ibaraki prefecture [19], more than 1,000 km from Kagoshima prefecture. Although JP-IV was found on the farms in Ibaraki and Kagoshima prefectures during almost the same period, there was no geographic or epidemiological relationship between the two farms. Hence, JP-IV appears to have disseminated widely in Japan since at least 2009. In addition, the virulence of JP-IV was considered low in the previous study [19].
However, both respiratory and renal lesions were histopathologically confirmed in 2 of the 3 cases in which JP-IV was detected (Cases 8 and 11, Table 1), and JP-IV alone was detected in Case 11 (Table 1). These 2 cases suggest an altered pathogenicity of JP-IV strains in Japan, although the virulence to the affected chickens in these cases may not be attributable to IBV alone because of complications such as coccidiosis and colibacillosis. Moreover, the antigenicity of JP-IV differs from that of the other genotypes detected in Japan [19]. Therefore, JP-IV strains could pose a serious threat to chicken farms throughout Japan. Further studies on the pathogenicity and prevalence of JP-IV strains in Japan are needed for the prevention of IB caused by this genotype.
Tension-type headache and sleep apnea in the general population The main objective of this study is to investigate the relationship between tension-type headache and obstructive sleep apnea in the general population. The method is a cross-sectional population-based study. A random age- and gender-stratified sample of 40,000 persons aged 20–80 years residing in Akershus, Hedmark or Oppland County, Norway was drawn from the National Population Register. A postal questionnaire containing the Berlin Questionnaire was used to classify respondents as being at either high or low risk of obstructive sleep apnea. Included in this study were 297 persons at high risk and 134 persons at low risk of sleep apnea, aged 30–65 years. They underwent an extensive clinical interview, a physical and a neurological examination by physicians, and in-hospital polysomnography. Those with an apnea hypopnea index (AHI) ≥5 were classified as having obstructive sleep apnea. Tension-type headache was diagnosed according to the International Classification of Headache Disorders. Results showed that the prevalence of frequent and chronic tension-type headache was 18.7 and 2.1%, respectively, in the participants with obstructive sleep apnea. The logistic regression analyses showed no significant relationship between tension-type headache and obstructive sleep apnea, with adjusted odds ratios of 0.95 (0.55–1.62) for frequent and 1.91 (0.37–9.85) for chronic tension-type headache. The results did not change when using cut-offs for moderate (AHI ≥15) and severe (AHI ≥30) obstructive sleep apnea. Thus, we did not find any significant relationship between tension-type headache and the AHI. The presence and severity of sleep apneas do not seem to influence the presence and attack frequency of tension-type headache in the general population. Introduction Headache and sleep have been linked together for more than a century [1].
Although sleep in migraineurs has been studied repeatedly, less evidence exists regarding the relationship between tension-type headache and sleep [2]. Lack of sleep is frequently reported as precipitating both migraine and tension-type headache [3,4]. Two previous Danish studies found sleeping problems to be positively associated with tension-type headache [5,6]. Recently, two new studies showed a significant relationship between tension-type headache and a range of different sleep disturbances measured by validated sleep questionnaires [7,8]. One of the most common sleep disorders is obstructive sleep apnea syndrome, with an estimated prevalence of 2–4% among middle-aged adults [9,10]. Obstructive sleep apnea syndrome is defined as at least five apneas or hypopneas per hour of sleep in conjunction with symptoms such as daytime somnolence. When obstructive sleep apnea is defined solely by an apnea hypopnea index (AHI) ≥5, the estimated prevalence among middle-aged adults is approximately 20% in the general population [9,11,12]. The disorder involves partial or complete obstruction of the upper airways during sleep, which constitutes hypopneas and apneas and typically results in repeated airflow cessation, oxygen desaturation and sleep disruption. The disruption of sleep may then result in one or more of the following: excessive daytime sleepiness, unrefreshing sleep, daytime fatigue or reduced cognitive function [13]. Sleep apnea headache is recognized in the International Classification of Headache Disorders (ICHD II) as a brief recurrent morning headache in the presence of an AHI ≥5 [14]. There is, however, still controversy regarding the association between primary headaches and obstructive sleep apnea.
The apnea-related headache may present as migraine, tension-type, cluster or a non-specific headache, and several studies have found it to be merely a non-specific symptom with no clear relationship with obstructive sleep apnea [15][16][17][18]. The aim of the present study was to investigate the relationship between tension-type headache and obstructive sleep apnea in the general population. Sampling and representativeness This is a cross-sectional population-based study. An age- and gender-stratified random sample of 40,000 persons aged 20–80 years was drawn from the National Population Register. Each of the ages 30, 35, 40, 45, 50, 55 and 60 years included 2,000 persons of each gender, while the remaining ages included 1,000 persons of each gender. The participants were residing in Akershus, Hedmark or Oppland County, Norway. The Counties have both rural and urban areas, and Akershus County is situated in close proximity to Oslo. Data from Statistics Norway has shown that the sampling area was representative of the total Norwegian population regarding age, gender, marital status and level of education [19]. The employment rate was equal, but employment in trade, hotel/restaurant and transport was overrepresented while industry, oil and gas and financial services were underrepresented in the sampling area as compared to the total Norwegian population. As shown in Fig. 1, the sample size was reduced to 38,871 because of errors in the address list (n = 1,024), multihandicap (n = 4), dementia (n = 23), insufficient Norwegian language skills (n = 3) and death (n = 75). All participants received a mailed standard letter containing information about the project and a short questionnaire including the Berlin Questionnaire. The Berlin Questionnaire was used to classify respondents as being at either high or low risk of obstructive sleep apnea [20].
This instrument was developed in 1996 and is a ten-item self-report questionnaire designed to predict the risk of obstructive sleep apnea. It contains questions in three categories addressing snoring, daytime sleepiness, and the presence of hypertension and/or obesity. A positive score in at least two of the three categories is required to classify a person as at high risk of obstructive sleep apnea. All others are classified as at low risk of obstructive sleep apnea. Detailed definitions of each category have previously been published [21]. If the questionnaire evoked no response, a second mailing was issued. Replies could be either on paper or electronic. The overall response rate was 54.5% (21,177/38,871), and it was significantly higher among women than men (n = 11,120 vs. n = 10,057; p < 0.001). A total of 1,442 questionnaires were not eligible, due to late response (n = 41), a missing telephone number necessary for re-contact (n = 729), or incompletely filled-in questionnaires that could not be classified as high or low risk of obstructive sleep apnea (n = 672). An age- and gender-stratified sample of the respondents aged 30–65 years was then invited by mail to a clinical evaluation and contacted by telephone. The clinical evaluation was conducted over a period of 2 years. If participants could not be reached within three attempts, no further attempts were made (n = 202). Other exclusion criteria were: use of Continuous Positive Airway Pressure (n = 10), pregnancy (n = 9), lack of Norwegian language skills (n = 5) and severe physical impairment (n = 4). A total of 378 persons with high risk and 157 persons with low risk of sleep apnea were included for further investigations. In case of technical failure in the polysomnography (PSG) recordings, the participants were asked to return for a second recording (n = 6). Two persons refrained from such a second PSG recording.
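The Berlin Questionnaire's risk stratification described above reduces to a simple rule at the category level. A minimal Python sketch (category-level classification only; the per-item thresholds inside each category, published elsewhere, are not reproduced here):

```python
def berlin_risk(category_scores):
    """Classify Berlin Questionnaire results as high or low risk of OSA.

    `category_scores` is a (snoring, daytime_sleepiness,
    hypertension_or_obesity) triple of booleans, True if that category
    scored positive.  A person is high risk when at least two of the
    three categories are positive; everyone else is low risk.
    """
    return "high" if sum(bool(s) for s in category_scores) >= 2 else "low"

print(berlin_risk((True, True, False)))   # high
print(berlin_risk((True, False, False)))  # low
```

This mirrors the two-of-three rule used to split respondents into the high- and low-risk groups above.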
Finally, all participants diagnosed with migraine (n = 102) were excluded, in order to have a pure material without migraine as a confounder. The final study sample comprised 431 persons (297 high risk and 134 low risk); 585 persons refrained from participating. Participants and non-participants were not significantly different regarding self-reported headache, depression, gender or age, while simple snoring was overrepresented in the low risk group, as compared to all low risk respondents to the questionnaire. Clinical evaluation The participants were all admitted to Akershus University Hospital (Stensby Hospital), Norway and underwent an extensive clinical interview, including a semi-structured headache interview, and a physical and a neurological examination by one of three physicians. The physicians were blinded to the participants' replies on the questionnaire. The ICHD II was applied [14]. Since the participants with no headache and those with infrequent tension-type headache did not differ in any of the variables, they were grouped together in the analyses, as <12 days of headache per year. The Hospital Anxiety and Depression Scale (HADS) was used to screen for depression [22]. The replies were dichotomized, and depression was defined by a score ≥8 on the depression subscale (HADS-D) [23]. Excessive daytime sleepiness was assessed by the Epworth Sleepiness Scale [24]. The results were dichotomized into scores ≤10 and >10, the latter considered to represent clinically significant excessive daytime sleepiness [25]. Body mass index (kg/m²) was calculated from measured weight and height. All participants then underwent in-hospital PSG performed on standard, multichannel Embla A10 PSG devices (ResMed Corp, Poway, CA, USA).
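The dichotomizations just described (HADS-D ≥8 for depression, Epworth >10 for excessive daytime sleepiness) and the BMI calculation are mechanical; a small Python sketch with invented example values, not data from any participant:

```python
def bmi(weight_kg, height_m):
    """Body mass index (kg/m^2) from measured weight and height."""
    return weight_kg / height_m ** 2

def depressed(hads_d):
    """HADS depression subscale dichotomized at a cut-off of >= 8."""
    return hads_d >= 8

def excessive_daytime_sleepiness(ess):
    """Epworth Sleepiness Scale dichotomized: > 10 is clinically significant."""
    return ess > 10

# Invented illustrative values; 28.7 lands near the sample mean BMI of 28.9
print(round(bmi(85.0, 1.72), 1))  # 28.7
print(depressed(8), excessive_daytime_sleepiness(10))  # True False
```

Note the asymmetry: the HADS-D cut-off is inclusive (≥8) while the Epworth cut-off is exclusive (>10), matching the dichotomizations stated above.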
The recordings included a two-channel electroencephalograph (C4/A1, C3/A2 according to the 10-20 international electrode placement system), a two-channel electrooculogram, a one-channel submental electromyogram, leg EMG (tibialis), SaO2, breathing movements (Respitrace; Ambulatory Monitoring, Ardsley, NY, USA), air flow measured by a nasal air pressure transducer (Pro-Tech, Woodinville, WA, USA) and an oro-nasal thermistor, and body position monitoring. All electrophysiological signals were pre-amplified, stored and subsequently scored (30-s epochs using the Somnologica 3.2 software package, Flaga-Medcare, Buffalo, NY, USA) according to the Rechtschaffen and Kales scoring manual by two US board-certified PSG technicians who were blinded to the result of the Berlin Questionnaire [26]. Arousals were documented and classified [27]. Obstructive apneas were scored when a ≥90% decrease in flow occurred for more than 10 s. Hypopneas were defined as a ≥30% decrease in flow for more than 10 s with subsequent oxygen desaturation of at least 4%. The apnea hypopnea index (AHI) was calculated as the average number of apneas and hypopneas per hour of sleep. In this study, participants with an AHI ≥5 were classified as having obstructive sleep apnea. Statistical analyses The statistical analyses were performed using SPSS Base System for Windows 16. Ethical issues The project was approved by The Regional Committees for Medical Research Ethics and the Norwegian Social Science Data Services. Results In our screening population, 23.3% (4,942/21,177) had high risk and 76.7% (16,235/21,177) had low risk of obstructive sleep apnea when using the Berlin Questionnaire as a risk-stratifying tool. The distribution of demographic and clinical characteristics of the sample is shown in Table 1. Respondents with high risk of obstructive sleep apnea according to the Berlin Questionnaire were oversampled, resulting in obstructive sleep apnea occurring in 55.9% (241/431) of the participants.
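The event definitions and AHI calculation above translate directly into code. A short Python sketch follows; the event records and sleep duration are invented for illustration (real scoring works on the raw PSG signals, not pre-extracted event summaries):

```python
def apnea_hypopnea_index(events, total_sleep_hours):
    """Compute the AHI from scored respiratory events.

    `events` is a list of dicts with keys 'flow_drop' (fractional
    decrease in airflow), 'duration_s', and 'desat' (% oxygen
    desaturation).  Following the criteria above: an apnea is a >=90%
    flow decrease lasting more than 10 s; a hypopnea is a >=30% decrease
    lasting more than 10 s with >=4% desaturation.  AHI is the number of
    apneas plus hypopneas per hour of sleep.
    """
    def is_apnea(e):
        return e["flow_drop"] >= 0.90 and e["duration_s"] > 10

    def is_hypopnea(e):
        return (0.30 <= e["flow_drop"] < 0.90
                and e["duration_s"] > 10 and e["desat"] >= 4)

    count = sum(1 for e in events if is_apnea(e) or is_hypopnea(e))
    return count / total_sleep_hours

events = [
    {"flow_drop": 0.95, "duration_s": 18, "desat": 6},  # apnea
    {"flow_drop": 0.40, "duration_s": 15, "desat": 5},  # hypopnea
    {"flow_drop": 0.40, "duration_s": 15, "desat": 2},  # neither (desat < 4%)
]
ahi = apnea_hypopnea_index(events, total_sleep_hours=0.4)
print(ahi >= 5)  # True: this toy record would meet the AHI >=5 cutoff
```

Two qualifying events over 0.4 h of sleep yield an AHI of 5.0, exactly at the study's cut-off for obstructive sleep apnea.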
Men had a higher prevalence of obstructive sleep apnea, while women had a higher prevalence of frequent and chronic tension-type headache. The mean body mass index (kg/m²) in the study sample was 28.9 (SD 4.9). The prevalence of frequent and chronic tension-type headache was 18.7% (45/241) and 2.1% (5/241) in the participants with obstructive sleep apnea, while it was 31.6% (60/190) and 1.6% (3/190) in participants with AHI below 5. The clinical characteristics of tension-type headache were evenly distributed among participants with and without obstructive sleep apnea. The majority in both groups reported bilateral location, pressing/tightening quality, mild/moderate pain intensity, few accompanying symptoms and a duration somewhere between 30 min and 24 h. Table 2 illustrates the polysomnographic characteristics of the sample. As expected in a sample with a high number of participants with obstructive sleep apnea, the number of minutes of deep sleep (S3 and S4) and REM sleep, as well as the mean sleep efficiency, were somewhat low. The participants with chronic tension-type headache had a mean sleep efficiency of 88.6 (SD 7.5) and a mean arousal index of 12.7 (SD 6.4), which were not significantly different from participants without chronic tension-type headache. The AHI of the participants with no or infrequent tension-type headache was significantly higher than among those with frequent headache (p = 0.03). Table 3 shows the odds ratios for obstructive sleep apnea by tension-type headache, depression, gender, body mass index and age (age at sampling of questionnaire data). The crude odds for frequent tension-type headache were significantly decreased among participants with obstructive sleep apnea. However, in the adjusted analysis, the odds ratios for both frequent and chronic tension-type headache were non-significant.
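As a rough check on the crude association, the unadjusted odds ratio for frequent tension-type headache can be computed directly from the prevalences reported above (45/241 with obstructive sleep apnea vs. 60/190 without). This is only an illustration of the crude calculation, not the study's adjusted logistic model:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Crude odds ratio for a 2x2 exposure/outcome table."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Frequent tension-type headache: 45 of 241 with OSA, 60 of 190 without
or_frequent = odds_ratio(45, 241 - 45, 60, 190 - 60)
print(round(or_frequent, 2))  # 0.5: crudely decreased, before adjustment
```

The crude value of about 0.50 matches the direction reported above (significantly decreased crude odds), while the adjusted estimate of 0.95 shows how much of that association was explained by the covariates.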
Discussion Our main finding was the lack of a relationship between tension-type headache and obstructive sleep apnea in the general population. This is in concurrence with a previous clinical population study from Norway [18]. That study identified a subgroup of 1.5% of patients referred to a neurologist because of headache who fulfilled the criteria of obstructive sleep apnea, suggesting that obstructive sleep apnea is rather uncommon in patients with difficult headache. The prevalence of tension-type headache was equal in patients with and without obstructive sleep apnea (7 vs. 9%) [18]. The consistency of our results is further emphasized by the fact that mild (AHI ≥5), moderate (AHI ≥15) and severe (AHI ≥30) obstructive sleep apnea showed exactly the same. Such lack of a dose-response relationship between headache and severity of obstructive sleep apnea has previously been reported in two case-control studies based on clinic populations from the USA and Norway, respectively [16,28]. There are, however, several previous studies indicating a relationship between non-specific headache diagnoses and obstructive sleep apnea [29][30][31]. These studies often refer to morning headache, chronic daily headache or simply headache. In the present study we have focused strictly on tension-type headache. A recent Norwegian population-based survey found that severe sleep disturbances were three times more likely in subjects with tension-type headache than in headache-free individuals [8]. Sleep disturbance in that survey was based on the Karolinska Sleep Questionnaire, with a score in the upper quartile. This questionnaire assesses snoring, apnea, insomnia, daytime sleepiness and restless legs syndrome, and in the analysis of the separate items they did not find any differences in the prevalence of snoring or apnea between subjects with tension-type headache and headache-free individuals.
Methodological considerations The strengths of the present study were the use of interview and examination by physicians for the diagnosis of tension-type headache, as well as the use of PSG for diagnosing obstructive sleep apnea in participants from the general population. Although the response rate to the questionnaire was relatively low, similar replies to the first and second issued questionnaires, as well as the electronic responses, suggest that responders and non-responders are not different. A previous Danish epidemiological survey found no significant difference in the frequency of migraine among responders and non-responders [32]. In addition, the response rate is comparable to that of other sleep-related epidemiologic studies [33,34]. The relatively low participation rate may introduce a selection bias. However, participants and non-participants were not significantly different regarding self-reported headache, depression, gender or age. Regarding the difference between the participants and the study population, we found that self-reported simple snoring was somewhat overrepresented in the low risk group of the study sample as compared to the low risk respondents to the questionnaire. If there is a relationship between snoring and headache, this may have introduced a misclassification bias resulting in a slight overestimation of headache in participants without obstructive sleep apnea in our study [29,35]. This does not, however, affect our finding that tension-type headache and the AHI were not significantly related. As with most studies, a larger sample might have demonstrated greater precision of the results. Since this was an epidemiologic study of the general population, the number of participants with chronic tension-type headache was small. This requires a more cautious interpretation of the statistical findings regarding chronic tension-type headache, since we cannot exclude a type-2 error due to the small numbers.
Finally, it cannot be completely ruled out that the use of a single in-patient PSG may be a limitation of our study [36]. Although the mean total sleep time in this sample was 409.6 min, which may represent a first-night effect, we believe the latter is more important in measuring sleep quality than in diagnosing obstructive sleep apnea. Conclusion The presence and severity of sleep apneas do not seem to influence the presence and attack frequency of tension-type headache in the general population.
Pulmonary Embolism Presenting as Abdominal Pain: An Atypical Presentation of a Common Diagnosis Pulmonary embolism (PE) is a frequent diagnosis made in the emergency department and can present in many different ways. Abdominal pain is an unusual presenting symptom for PE. It is essential to maintain a high degree of suspicion in these patients, as a delay in diagnosis can be devastating for the patient, and untreated PE confers a high risk of mortality. Here, we report the case of a 53-year-old male who presented to the emergency department with worsening right upper quadrant abdominal pain and fevers. Initial imaging was benign, although lab work showed worsening leukocytosis and a rising bilirubin. Abdominal pathology seemed most likely, but the team kept PE on the differential. Further imaging revealed an acute pulmonary embolus in the segmental branch of the right lower lobe extending distally into subsegmental branches. The patient was started on anticoagulation and improved drastically. This case highlights the necessity of keeping a broad differential and maintaining a systematic approach when dealing with nonspecific complaints. Furthermore, the pathophysiology of why PE can present atypically with abdominal pain, as well as fevers, is reviewed. This information can hopefully aid the subtle diagnosis of PE in the future and lead to a life-saving diagnosis. Introduction Pulmonary embolism (PE) is a commonly encountered diagnosis in the emergency room. Given the devastating effects of a high clot burden, it is imperative that the diagnosis is made and treatment is initiated promptly. There is a very high risk of morbidity and mortality if treatment is delayed. However, a PE can present in many different ways, so a high clinical suspicion is necessary to avoid misdiagnosis. Abdominal pain, in particular, is a very uncommon presenting symptom for PE [1].
Herein, we report the case of a 53-year-old male who presented with worsening right upper quadrant abdominal pain and was later found to have a PE. Case Report The patient is a 53-year-old male with a history of hypertension who presented with two days of sudden-onset abdominal pain. He reported that the pain began in his lower right back and radiated over to the right upper quadrant of his abdomen. The pain was described as a constant dullness in the area with periodic bouts of sharp pain. Eating and deep inspiratory breaths exacerbated the symptoms. He also noted intermittent fevers during this time, with temperatures ranging from 100.8 to 102°F. The patient denied any chest pain, dyspnea, chills, sick contacts, or previous history of these symptoms. He had not traveled recently and had no recent surgeries or history of clots. There was no other significant medical history except for acute pancreatitis at age 15 without known cause. The patient was not taking any medications. Family history was unremarkable, with no known clotting disorders or malignancy. Social history was benign, with no reported cigarette, alcohol, or drug use. On admission, the patient had a temperature of 100.2°F, blood pressure of 137/83, pulse of 100, respiratory rate of 22, and O2 saturation of 96% on room air. Physical exam revealed a well-nourished male who appeared mildly uncomfortable in bed, with no evidence of JVD or calf tenderness/erythema. The heart was tachycardic but otherwise normal, with no rubs, murmurs, or gallops. The lungs were clear to auscultation bilaterally, with no rales, rhonchi, or wheezes. The abdomen was tender in the right upper quadrant but soft and nondistended. No rebound or guarding was appreciated on exam. Murphy's sign was negative. Significant labs included a white count of 12.1, liver function tests within normal limits, bilirubin of 1.6, and troponins negative twice.
Right upper quadrant ultrasound showed no evidence of gallstones, no gallbladder distention, a negative sonographic Murphy's sign, and no biliary dilation. CT scan of the abdomen/pelvis was read as a small right pleural effusion with atelectasis at the right base, with no evidence of an acute intra-abdominal process. CTA of the chest revealed an acute pulmonary embolus in the segmental branch of the right lower lobe extending distally into subsegmental branches; the infiltrate in the right lung base most likely represented infarcted lung (Figure 1). EKG performed the following day demonstrated sinus rhythm with classic findings consistent with S1Q3T3 (Figure 2) [2]. The patient was subsequently started on therapeutic enoxaparin. However, the abdominal pain continued to worsen and the bilirubin rose to 2.2 the next day. The pain seemed out of proportion to referred pain from the pulmonary embolus. Given continued low-grade fevers, Zosyn was initiated. Visceral ultrasound of the abdomen was performed to rule out portal vein thrombosis and was negative. Direct and indirect bilirubin were measured, showing a direct bilirubin of 0.5, consistent with indirect hyperbilirubinemia. Subsequent hemolysis labs were negative. The patient was eventually transitioned to rivaroxaban, and the abdominal pain slowly subsided. He was discharged on rivaroxaban with close outpatient follow-up. Discussion This case illustrates the necessity of having a high index of suspicion for pulmonary embolism even in patients presenting with nonclassical symptoms. This patient had a convincing story for an abdominal etiology, as evidenced by elevated bilirubin, increased white blood cell count, and low-grade fever. However, the pleuritic pain elicited on history, as well as tachycardia and tachypnea, kept pulmonary embolism on the differential. Given these findings, further workup was pursued, leading to the subtle diagnosis.
Many hypotheses have been proposed regarding the mechanism of abdominal pain in the setting of a pulmonary embolism. It has been postulated that right heart strain from the clot could cause backflow, leading to passive hepatic congestion; distension of Glisson's capsule resulting from the congestion could then produce the presenting symptoms [3]. However, in this patient there was no clinical or physical exam evidence of right heart strain, and the echocardiogram was within normal limits. Alternatively, the abdominal pain may be related to diaphragmatic pleurisy [4], thought to result from pulmonary infarction in the lung bases distal to the area of the clot [5]. Lastly, it has been suggested that the pain is related to tension on sensory nerve endings in the parietal pleura as an effect of the clot burden; this area innervates the intercostal muscles, leading to pain in the region [6]. The presentation was further clouded by fevers and a worsening bilirubin. Low-grade fever has long been studied as a well-known phenomenon accompanying PE [7], suggested to be secondary to infarction resulting in tissue necrosis and local inflammation [8]. It is not unusual to have concomitant leukocytosis, as seen here [9]. The degree of fever seen in our patient, coexisting with worsening abdominal pain, was somewhat curious, however, and prompted the decision to start empiric antibiotics. The increased bilirubin also called the diagnosis of PE into question and made an abdominal pathology seem more likely. Although the bilirubin initially rose, it eventually trended downwards; thus, it was suspected that the patient might have Gilbert's disease, with the increase likely secondary to the stress of the PE. Conclusion Here, we describe an interesting case of PE that could easily have been missed without a high index of suspicion. Missing such a diagnosis can lead to devastating consequences.
A variety of nonspecific complaints, such as abdominal pain, can mask the underlying disease. Many other labs seen here also confounded the picture. However, maintaining a systematic approach and recognizing the different signs, symptoms, and laboratory findings in PE can lead to a lifesaving diagnosis.
Extracellular DNA in blood products and its potential effects on transfusion Abstract Blood transfusions are sometimes necessary after a high loss of blood due to injury or surgery. Some people need regular transfusions due to medical conditions such as haemophilia or cancer. Studies have suggested that extracellular DNA, including mitochondrial DNA, present in the extracellular milieu of transfused blood products has biological actions that are capable of activating the innate immune system and potentially contribute to some adverse reactions in transfusion. From the present work, it becomes increasingly clear that extracellular DNA, encompassing mitochondrial DNA, is far from biologically inert in blood products. It has been demonstrated to be present in eligible blood products and thus can be transfused to blood recipients. Although the presence of extracellular DNA in human plasma was initially detected in 1948, some aspects have not been fully elucidated. In this review, we summarize the potential origins, clearance mechanisms, and relevant structures of extracellular DNA, its potential role in innate immune responses, and its relationship with individual adverse reactions in transfusion. Introduction The presence of extracellular DNA (ecDNA) was first reported by Mandel and Métais [1] in 1948. The term ecDNA describes any DNA existing in the extracellular environment, regardless of structure (association with protein complexes or extracellular vesicles) [2,3]. Although another term, cell-free DNA (cfDNA), is widely used at present, the term ecDNA is more accurate in this review because a large proportion of DNA in vivo is localized in complexes or is packaged in vesicles rather than being truly free [4,5]. Found in the extracellular milieu including serum, plasma, lymph, bile, milk, urine, saliva, spinal fluid, amniotic fluid, and cerebrospinal fluid, ecDNA can be isolated from individuals in both healthy and various disease states [6-10]. 
ecDNA comprises mainly nuclear DNA (nucDNA) and mitochondrial DNA (mitDNA). It is also present in the extracellular milieu of blood products (as shown in Table 1) [11][12][13][14][15][16][17][18][19][20]. However, a novel and significant role of ecDNA has emerged, involving its ability to trigger innate immune system responses and drive inflammation when released from mechanically injured cells [21]. ecDNA, along with other host molecules released upon cell damage, falls into the category of damage-associated molecular patterns (DAMPs) [22,23]. ecDNA in blood products Blood can be transfused without prior modification (whole-blood transfusion) or divided into red blood cell units (RBCUs), fresh frozen plasma (FFP), platelet concentrates (PCs) and sometimes granulocytes. These blood products for therapeutic use usually contain donors' plasma, platelets, residual leukocytes and erythrocytes. The DNA of a donor can, therefore, be transferred to a recipient via ecDNA in the plasma fluid, ecDNA bound to the surfaces of blood components (such as erythrocytes and platelets), or DNA localized in complexes or packaged in vesicles. According to the report of Garcia-Olmo et al. [24], ecDNA from human plasma can pass through the 0.4 micron filters of Corning Transwell plates, and there were no significant differences between the effects of human plasma administered to cell cultures indirectly through the Transwell plates and plasma administered directly to the cells, indicating that no DNA was lost to the filter. As early as 1995, detectable DNA was shown to be present in stored human donor blood at levels in the range of 250-1500 ng/ml with a Threshold Total DNA Assay Kit, and the total amount of DNA administered to a patient during the transfusion of a single unit of whole blood can be as much as 450 μg (based on 1.5 μg DNA/ml of CPD plasma and assuming a plasma volume of 60% in a 500 ml unit of blood) [11]. 
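The 450 μg dose estimate quoted above follows from simple arithmetic on the figures reported in [11]. A minimal sketch (the function name and structure are illustrative, not part of the cited assay):

```python
# Back-of-envelope check of the per-unit ecDNA dose quoted from [11]:
# a 500 ml whole-blood unit, ~60% plasma, at 1.5 ug DNA per ml of CPD plasma.

def total_ecdna_per_unit(unit_volume_ml: float,
                         plasma_fraction: float,
                         dna_ug_per_ml_plasma: float) -> float:
    """Total extracellular DNA (ug) transfused with one blood unit."""
    plasma_ml = unit_volume_ml * plasma_fraction   # plasma volume in the unit
    return plasma_ml * dna_ug_per_ml_plasma        # DNA mass in that plasma

dose_ug = total_ecdna_per_unit(500, 0.60, 1.5)
print(f"~{dose_ug:.0f} ug ecDNA per whole-blood unit")  # ~450 ug, as in [11]
```

The same sketch makes it easy to see how sensitive the estimate is to the assumed plasma fraction or DNA concentration within the 250-1500 ng/ml range reported.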
According to studies in the subsequent years, ecDNA was also observed in the plasma of transfusion blood components including RBCUs, PCs, and FFP [12][13][14][15][16][17][18][19][20]. The studies on ecDNA in transfusion products and the reported concentrations are summarized in Table 1. As can be seen from the table, in the report of Waldvogel Abramowski et al. [20], cellular products including RBCUs (290 ± 120 ng/ml) and PCs (339.6 ± 114 ng/ml) contained more ecDNA than did FFP (2.875 ± 0.996 ng/ml), suggesting a potential link with the number of cells found in the blood bag. Simmons et al. [19] found detectable levels of extracellular mitDNA in FFP (213.7 ± 65 ng/ml), PCs (94.8 ± 69.2 ng/ml), and RBCUs (3 ± 0.4 ng/ml). In that study, the concentration of extracellular mitDNA detected in RBCUs was the lowest of the three products tested, perhaps because RBCs do not contain mitochondria and the units are subjected to leukocyte reduction [19]. Of note, the generation or release of ecDNA present in blood products could be influenced by the details of the processing and manufacturing methods of blood components [16,17]. Shih et al. [16] have shown that the levels of ecDNA are affected by the RBCU processing method as well as product age: whole blood filtered (WBF), short-term storage RBCUs had more ecDNA than red cell filtered (RCF), aged RBCUs. However, according to the study of Dijkstra-Tiekstra et al. [12], the amount of ecDNA was not influenced by filtration of the PCs (1.7 ± 0.8 vs. 1.5 ± 0.8 leucocyte-eq/μl). Unfortunately, many studies of ecDNA in blood products were performed with dissimilar analytical procedures, rendering comparisons between their results infeasible. Some of these studies do not differentiate between nucDNA and mitDNA, which are thought to have separate evolutionary origins [25,26]. MitDNA contains a high number of unmethylated CpG motifs, as typically found in bacteria [27]. 
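The two studies quoted above rank the blood products in opposite orders, because they measure different analytes (total ecDNA vs. extracellular mitDNA). A small illustrative tabulation of the mean values quoted in the text makes this explicit (the dictionary layout is an assumption for illustration; values are the means reported above, in ng/ml):

```python
# Mean concentrations (ng/ml) quoted in the text; SDs omitted.
# Note the two studies measured different analytes, so rankings differ.
reported = {
    "total ecDNA (Waldvogel Abramowski et al. [20])": {
        "RBCUs": 290.0, "PCs": 339.6, "FFP": 2.875,
    },
    "extracellular mitDNA (Simmons et al. [19])": {
        "RBCUs": 3.0, "PCs": 94.8, "FFP": 213.7,
    },
}

for analyte, by_product in reported.items():
    # Rank products from highest to lowest mean concentration.
    ranking = sorted(by_product, key=by_product.get, reverse=True)
    print(f"{analyte}: " + " > ".join(ranking))
```

Running this prints cellular products (PCs, RBCUs) ahead of FFP for total ecDNA, but FFP ahead of RBCUs for mitDNA, mirroring the discussion above.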
Potential origins To date, no consensus has been reached with regard to the main origin of ecDNA. However, there are two contenders: (i) cellular breakdown mechanisms and (ii) active release mechanisms [28]. Cellular breakdown mechanisms such as apoptosis and necrosis are considered by some researchers to be the main processes producing ecDNA [29,30]. It may be of interest to note that other forms of cell death, such as NETosis, can also serve as sources of ecDNA [31]. Numerous studies have demonstrated that ecDNA can also be derived from active release mechanisms [32][33][34][35]. However, it is still unclear which mechanism accounts for the ecDNA observed in blood products. During storage, cellular breakdown mechanisms, such as necrosis, apoptosis and NETosis, could theoretically lead to ecDNA release in blood products. Neutrophils can release both nucDNA and mitDNA in structures known as neutrophil extracellular traps (NETs), which are composed of decondensed chromatin decorated with granular proteins [36][37][38]. The formation of NETs requires the activation of neutrophils and the release of their DNA in a process that may or may not result in neutrophil death. Additionally, ecDNA can be encapsulated in extracellular vesicles (EVs), such as microparticles (MPs) [39], which have also been detected in blood products [40,41]. This ecDNA, present in membrane-bound EVs, can be protected from nuclease-mediated degradation and can be released through the breakdown of EVs [4]. Moreover, activated platelets have been reported to release intact mitochondria, which can be hydrolysed by group IIA secretory phospholipase A2 (sPLA2-IIA), thereby releasing extracellular mitDNA [42]. In their study, Boudreau et al. [42] used fluorescence and transmission electron microscopy to show that non-activated platelets contain an average of ∼4 mitochondria each. 
Clearance mechanisms Work on autoimmune pathologies beginning in 1966 permitted the first characterization of ecDNA [43][44][45][46]: it is resistant to RNase and proteinase K [47] but can be hydrolyzed by DNase. Studies on ecDNA clearance revealed that extracellular DNases such as DNase1 and DNase1L3 can degrade ecDNA associated with a variety of structures [48][49][50][51][52]. DNase1 and DNase1L3, which have close structural and functional resemblance, may substitute for or cooperate with each other during DNA degradation [50]. The enzyme DNase1 plays a role in the clearance of chromatin during necrosis [49]. DNase1L3 is uniquely capable of digesting chromatin in microparticles released from apoptotic cells [51]. DNase1 along with DNase1L3 are essential for the disassembly of NETs [50,52]. The subsequent fate of such ecDNA in recipients' blood is still unknown. DNase activity is one possible pathway of degradation of ecDNA, but other mechanisms of clearance cannot be discounted. The clearance of ecDNA could also be achieved by renal excretion into the urine [53] or by uptake by the liver and spleen followed by macrophagic degradation [54]. Y-chromosome-specific sequences have been detected in the urine of women who had been transfused with blood from male donors [55]. Detailed information on the mechanisms associated with these processes is lacking and remains somewhat controversial. A combination of DNase degradation, renal clearance, and uptake by the liver and spleen is likely to play a role in the clearance of ecDNA in transfusion recipients. Whether the ecDNA is complexed with lipids/proteins or nucleosomes, or is encapsulated within membrane-enclosed particles, may influence the ability of DNases to clear it [4,[56][57][58]. It was shown that DNA bound to nucleosomes can be protected from nuclease cleavage during apoptosis [56,57]. The ecDNA contained in EVs was also shown to be protected from degradation [4,58]. 
The clearance of ecDNA may also vary with the physiological state of the recipient [59]. Moreover, ecDNA can be recognized by various cell-surface DNA-binding proteins and can be transported into cells for possible degradation to mononucleotides or for transport into the nucleus [60]. Therefore, the rate of ecDNA uptake by different cells may also affect the rate of its clearance. The potential origins and clearance of ecDNA in blood products are explained and summarized in Figure 1. (Figure 1 caption: Mitochondrial DNA can also be released by these mechanisms. Elimination of ecDNA could be achieved by DNase degradation, renal excretion into the urine, or uptake by the liver and spleen followed by macrophagic degradation. Abbreviations: NETs, neutrophil extracellular traps.) Particular structures relevant to ecDNA in blood products Depending on the mechanism of release, ecDNA is associated with different complex structures: particulate structures such as EVs, or macromolecular structures such as NETs, besides other less relevant structures that will not be detailed here [3]. EVs EVs were first discovered in 1946, when Chargaff and West identified 'subcellular factors' in cell-free plasma and showed that these factors played a role in blood clotting [61,62]. In 1967, Wolf [63] confirmed the presence of these subcellular factors using electron microscopy while studying the 'platelet dust' known to be shed by platelets during storage [64]. Activated or apoptotic cells and platelets in blood products can shed small membrane vesicles, called EVs, which are generally identified as MPs in transfusion research [62]. In particular, MPs derived from stored RBCs have been shown to contribute to neutrophil priming and activation, thereby enhancing the inflammatory response observed in patients who receive older RBCUs during transfusion [65]. Recently, the immunomodulatory potential of EVs in blood products has emerged as an important focus of studies in transfusion medicine. 
Based on the current knowledge, it has been suggested that EVs in stored blood are associated with a number of adverse outcomes, such as neutrophil activation and the promotion of an inflammatory response in the recipients [41,[65][66][67]. A study showed that EVs accumulating in RBC products during storage contribute to a strong inflammatory host response in recipients, which depends both on the number of EVs and on storage-related changes in the EVs [41]. Some studies have suggested that platelet-derived EVs, such as those that convey mitochondrial DAMPs, may be a useful biomarker for predicting the risk of adverse transfusion reactions [68]. NETs NET formation, or 'NETosis', was first described by Brinkmann et al. [36] in 2004. It occurs when neutrophils are activated by pathogens or under particular conditions: NETosis leads to chromatin decondensation, lysis of the cell and nuclear membranes, and finally the release of NETs. The principal function of NETs is believed to be to entrap and kill circulating pathogens. The composition of NETs was initially widely believed to be predominantly nucDNA; however, under specific stimulatory conditions, NETs composed exclusively of mitDNA have been demonstrated [69]. An emerging body of evidence suggests that NETs can indeed be composed exclusively or predominantly of mitDNA, which means that NETosis may represent a significant source of extracellular mitDNA in certain inflammatory conditions. In addition to the role of intracellular mitDNA in NET composition, mitDNA may also trigger NET formation, acting as a DAMP after major trauma with signaling mediated through a TLR9-dependent pathway [70]. Of note, Caudrillier et al. [71] reported that activated platelets induce the formation of NETs in transfusion-related acute lung injury (TRALI), which is the leading cause of death after transfusion therapy. 
Potential effects of ecDNA on transfusion Over the past several decades, the effects of ecDNA on transfusion have rarely been investigated. A report published in 2018 pointed out that cell-free nucleic acids in blood products consist mainly of double-stranded DNA (dsDNA), which has been shown to regulate genes of the innate immune response [20]. Total ecDNA encompasses nucDNA and mitDNA. It was found that only mitDNA and bacterial DNA (bacDNA) increased neutrophil viability as a consequence of their activation [72]. In another report, mitDNA induced neutrophil matrix metalloproteinase 8 (MMP-8) and MMP-9 release, while nucDNA did not [73]. This evidence suggests that the extracellular mitDNA contained in transfused ecDNA may be immunostimulatory or proinflammatory. Here, we review the role of extracellular mitDNA in innate immune responses and its relationship with individual adverse reactions in transfusion. This section will also describe the potential role of transfusion in horizontal gene transfer (HGT). Extracellular mitDNA in innate immune responses The DNA double-helix structure, particular sequence motifs, and molecular interactions are three factors underlying stimulation of the immune response [74]. Indeed, exposure of cells of the innate immune system to dsDNA can provoke activation of the genes of the innate immune response [20,74]. This stimulation drives a strong inflammatory response mediated by the secretion of cytokines. Abundant nucleic acid receptors in these cells play an important role in the innate immune system, which employs them to respond to DNA within the host [75,76]. How exactly mitDNA may mediate its immunological role in transfusion is unknown, but studies in other areas of medicine provide mechanistic insights. 
MitDNA has been shown to bind Toll-like receptors (TLRs) and nucleotide oligomerization domain (NOD)-like receptors (NLRs), and more recently it has been linked with the stimulator of interferon genes (STING) pathway, thus providing distinct mechanisms potentially leading to immunological and inflammatory responses [77,78]. NLRs The NLRs are another major class of pattern-recognition receptors in the innate immune system. Of the NLRs, the NLR pyrin domain 3 (NLRP3) inflammasome is the most widely studied, mainly due to its affinity for a wide variety of ligands [78]. The NLRP3 inflammasomes are targets of mitDNA, leading to the activation of caspase-1 in the inflammasome complex. Caspase-1 cleaves pro-IL-1β and pro-IL-18 into mature IL-1β and IL-18 [78]; IL-1β is a potent pyrogen that elicits a strong proinflammatory response [89]. Shimada et al. [90] showed that it is the oxidized form of mitDNA that confers its inflammatogenic potential: oxidized mitDNA can directly bind NLRP3 to activate the inflammasome. Interestingly, the genetic deletion of NLRP3 and caspase-1 results in less mitDNA release [91]. Conversely, NLRP3 inflammasome formation releases mitDNA [91]. This suggests a positive feedback loop, in which activation of the NLRP3 inflammasome by oxidized mitDNA further promotes mitDNA release. STING pathway Moreover, mitDNA can stimulate the innate immune system through the STING pathway, resulting in interferon (IFN) release. The STING pathway was recently mechanistically dissected to reveal how mitDNA triggers interferon release [92]. That study showed that depletion of mitochondrial transcription factor A (TFAM) disturbed mitDNA stability, causing enlargement of the mitochondrial nucleoid. Subsequently, fragmented mitDNA was released, activating peri-mitochondrial cyclic GMP-AMP synthase (cGAS) and causing increased cGAMP formation. 
The second messenger cGAMP then activates the endoplasmic-reticulum-bound STING pathway, which ultimately activates TANK-binding kinase 1 (TBK1) and results in the induction of type I IFN and other interferon-stimulated genes [92]. To conclude, mitDNA participates in a variety of innate immune pathways, including the mitDNA-TLR9-NFκB axis, the mitDNA-NLRP3-caspase-1 pathway, and mitDNA-cGAS-cGAMP-STING signaling (as summarized in Figure 2). Extracellular mitDNA appears to be a potent danger signal that can be recognized by the innate immune system and modulate the inflammatory response [93]. Extracellular mitDNA and adverse transfusion reactions The establishment of strict procedures to avoid the transfusion of microbial components has greatly reduced the transmission of infections to recipients. However, sterile inflammation and organ injury in transfused recipients still occur in the absence of any apparent infectious agent [94]. TRALI is the leading cause of transfusion-related death and is initiated by soluble mediators in plasma [95]. Nonhemolytic transfusion reactions (NHTRs) are more frequent [96,97]. In addition, transfusion of the plasma fraction alone (without cells or platelets) is sufficient to trigger these reactions [98]. Extracellular mitDNA falls into the category of DAMPs, which are known immune mediators associated with inflammation [22,23]. Studies have shown the potential contribution of extracellular mitDNA to adverse transfusion reactions [14,18,19,42]. Boudreau et al. [42] quantified mitDNA in the extracellular milieu of PCs that had induced adverse reactions and compared the levels to those in units that were transfused without incident. Interestingly, they confirmed that significantly higher levels of extracellular mitDNA correlated with adverse reactions [42]. Yasui et al. [18] further confirmed that elevated levels of mitDNA were present in PCs that induced NHTRs in platelet transfusion. Cognasse et al. 
[15] found that extracellular mitDNA did not correlate with cytokine levels and might be an independent risk factor in PC transfusion-linked inflammation. TRALI is defined as new acute lung injury that develops during or within 6 h of receiving a transfusion of any blood product and progresses to non-resolving severe respiratory failure such as acute respiratory distress syndrome (ARDS) [99]. Simmons et al. [19] found that the levels of mitDNA present in FFP and PCs correlate well with the levels of mitDNA in the serum of post-transfusion patients and are associated with a higher risk of ARDS. Mitochondrial DAMPs, including mitDNA, have been shown to potentiate inflammatory lung injury when introduced into healthy rats in a landmark paper by Zhang et al. [84]. In addition, mitDNA DAMPs could increase the endothelial cell permeability observed in acute lung injury and ARDS [100]. Extracellular mitDNA DAMPs present in transfusion products may act as a potential effector of TRALI, as hypothesized by Lee et al. [14]. Based on the evidence discussed above, the working hypothesis of the involvement of extracellular mitDNA in adverse transfusion reactions is summarized in Figure 3. The potential role of transfusion in HGT From the 1950s to the 1970s, blood transfusion experiments were found to alter the hereditary traits of offspring in poultry [101][102][103]. As early as 1871, Galton [104] carried out intervarietal blood transfusion experiments among rabbits, but failed to induce heritable changes. Later, Sopikov [103] re-established the use of this method to analyse the effect of blood transfusion on hereditary traits: repeated transfusion of the blood of Black Australorp roosters to White Leghorn hens, and subsequent mating of these hens with White Leghorn roosters, yielded progeny with modified inheritance. When donor and recipient roles were exchanged, similar results were obtained [103]. 
Afterward, Sopikov's observations were confirmed not only by many Soviet researchers [105,106] but also by investigators in other countries [107,108]. Subsequently, increasing evidence emerged that DNA injection could induce heritable changes [109,110]. For instance, DNA extracted from the Khaki Campbell was used to induce heritable changes in the Pekin duck in the study of Benoit et al. [111]. Hereditary modifications of morphological characteristics in ducks as a result of the injection of DNA and RNA from other breeds of ducks were also reported. It is clearly possible that when DNA-rich avian blood cells are transfused to other members of the same species, the transferred DNA can be expressed. Currently, many researchers have detected ecDNA in the plasma of donor-derived blood products, as listed in Table 1. Landman [112] suggested that the genes of organisms could be divided into two groups: most are inherited 'vertically' from ancestors, but some are acquired 'horizontally'. HGT is defined as the transfer of genes between organisms in a fashion other than traditional reproduction [113]. The transfer of biological material, including blood transfusion products, from donor to recipient will likely result in the presence of donor ecDNA in the recipient's blood circulation. This raises concern over the potential of ecDNA to transfer genetic or epigenetic information associated with clinical risk factors (genetically related illness, mutations, or pharmaceutically induced adverse effects) from donors to recipients. Stroun and Anker [114] have suggested that nucleic acids are released by living cells and circulate throughout the whole organism. Newly synthesized, actively released ecDNA can translocate to neighbouring and remote parts of the body, enter cells, and alter their biology [115,116]. 
Genometastasis experiments have shown that ecDNA in the plasma of cancer patients can indeed transfer oncogenic information to susceptible cells [24]. Thierry et al. [3] illustrated that many structures carrying ecDNA could be involved in this genometastasis. This implies that ecDNA in the plasma of blood transfusion products might be capable of transferring an activated human oncogene to recipients during blood transfusion. In addition, many pharmaceutical and botanical compounds have been found to induce epigenetic alterations [117][118][119]. It is, therefore, possible that the ecDNA of donors using medication can carry these pharmaceutically induced epigenetic alterations and that these alterations can be transferred to recipients during blood transfusion. Moreover, it has been shown that microchimerism might have real or potential health implications in autoimmune diseases, graft-versus-host reactions, and transfusion complications [120]. Microchimerism refers to a small number of cells or DNA in one individual that is derived from another, genetically distinct individual [121]. Regarding blood transfusion, microchimerism from non-leukoreduced cellular blood products has been found to persist from months to years after transfusion in traumatically injured patients [122]. Transfusion-associated microchimerism appears to be an identified complication of blood transfusion [123,124]. Whether patients with transfusion-associated microchimerism have consequent adverse health effects, such as transfusion-associated graft-versus-host disease, is still under investigation. Conclusion Blood transfusion is the intravascular transfer of blood products into a recipient. A certain amount of ecDNA, encompassing mitDNA, has been demonstrated to be present in the extracellular milieu of blood products for transfusion. This raises the question of whether such ecDNA carries a risk. 
We provide a comprehensive review of the potential origins, clearance mechanisms, and relevant structures of ecDNA in blood products in order to familiarize researchers with previous work and to show that there are still many unanswered questions relating to the nature and biological functions of ecDNA in blood products. Our review shows that ecDNA, especially mitDNA, participates in a variety of innate immune pathways and is relevant to some adverse transfusion reactions. In addition, there is concern that ecDNA might be capable of transferring genetic or epigenetic information associated with clinical risk factors from donors to recipients during blood transfusion. Notably, the methods of isolation and quantification of ecDNA are crucial when analysing data from different reports. At this time, there is a lack of uniformity in the methods of ecDNA extraction and quantification, which need to be standardized. Moreover, the detailed clearance mechanisms of ecDNA administered to recipients via transfusion also require further research, including (i) the activity of DNase, (ii) the rate of renal excretion into urine, and (iii) the rate of uptake by the liver and spleen. This remains an underexplored field in transfusion, and more insights will likely emerge in the near future. Hopefully, the material presented herein will stimulate researchers to give serious consideration to the potential harmful effects of ecDNA, especially extracellular mitDNA, in transfusion. Competing Interests The authors declare that there are no competing interests associated with the manuscript. Funding This study was supported by the Office of Science and Technology of Luzhou [grant number 2017LZXNYD-J23].
On the $C^1$-property of the percolation function of random interlacements and a related variational problem We consider random interlacements on $\mathbb{Z}^d$, $d \ge 3$. We show that the percolation function that to each $u \ge 0$ attaches the probability that the origin does not belong to an infinite cluster of the vacant set at level $u$, is $C^1$ on an interval $[0,\hat{u})$, where $\hat{u}$ is positive and plausibly coincides with the critical level $u_*$ for the percolation of the vacant set. We apply this finding to a constrained minimization problem that conjecturally expresses the exponential rate of decay of the probability that a large box contains an excessive proportion $\nu$ of sites that do not belong to an infinite cluster of the vacant set. When $u$ is smaller than $\hat{u}$, we describe a regime of "small excess" for $\nu$ where all minimizers of the constrained minimization problem remain strictly below the natural threshold value $\sqrt{u_*} - \sqrt{u}$ for the variational problem. Introduction In this work we consider random interlacements on Z^d, d ≥ 3, and the percolation of the vacant set of random interlacements. We show that the percolation function θ_0 that to each level u ≥ 0 attaches the probability that the origin does not belong to an infinite cluster of V^u, the vacant set at level u of the random interlacements, is C^1 on an interval [0, û), where û is positive and plausibly coincides with the critical level u_* for the percolation of V^u, although this equality is presently open. We apply this finding to a constrained minimization problem that for 0 < u < u_* conjecturally expresses the exponential rate of decay of the probability that a large box contains an excessive proportion ν bigger than θ_0(u) of sites that do not belong to the infinite cluster of V^u. 
When u > 0 is smaller than û and ν is close enough to θ_0(u), we show that all minimizers ϕ of the constrained minimization problem are C^{1,α}-functions on R^d, for all 0 < α < 1, and their supremum norm lies strictly below √u_* − √u. In particular, the corresponding "local level" functions (√u + ϕ)^2 do not reach the critical value u_*. We now discuss our results in more detail. We consider random interlacements on Z^d, d ≥ 3, and refer to [5] or [6] for background material. For u ≥ 0, we let I^u stand for the random interlacements at level u and V^u = Z^d ∖ I^u for the vacant set at level u. A key object of interest is the percolation function (0.1) θ_0(u) = P[0 ↮ ∞ in V^u], where {0 ↮ ∞ in V^u} denotes the event stating that 0 does not belong to an infinite cluster of V^u. One knows from [13] and [12] that there is a critical value u_* ∈ (0, ∞) such that θ_0 equals 1 on (u_*, ∞) and is smaller than 1 on (0, u_*). And from Corollary 1.2 of [15], one knows that the non-decreasing left-continuous function θ_0 is continuous except maybe at the critical value u_*. With an eye towards applications to a variational problem that we discuss below, see (0.9), we are interested in proving that θ_0 is C^1 on some (hopefully large) neighborhood of 0. With this goal in mind, we introduce the following definition. Given 0 ≤ α < β < u_*, we say that NLF(α, β), the no large finite cluster property on [α, β], holds when (0.2) there exist L_0(α, β) ≥ 1, c_0(α, β) > 0, γ(α, β) ∈ (0, 1] such that for all L ≥ L_0 and u ∈ [α, β], P[0 ↔ ∂B_L in V^u, 0 ↮ ∞ in V^u] ≤ c_0 e^{−L^γ}, where B_L = B(0, L) is the closed ball for the sup-norm with center 0 and radius L, ∂B_L its boundary (i.e. the subset of sites in Z^d ∖ B_L that are neighbors of B_L), and the notation is otherwise similar to (0.1). We then set (0.3) û = sup{u ∈ [0, u_*]; NLF(0, u) holds}. One knows from Corollary 1.2 of [7] that û is positive. It is open, but plausible, that û = u_* (see also [8] for related progress in the context of level-set percolation of the Gaussian free field). 
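For readability, the key displays of the introduction can be collected in LaTeX form. The exact decay bound in (0.2) is a plausible reconstruction from the stated parameters $c_0$ and $\gamma \in (0,1]$ rather than a verbatim quotation:

```latex
% (0.1) percolation function of the vacant set V^u
\theta_0(u) \;=\; \mathbb{P}\bigl[\, 0 \stackrel{\mathcal{V}^u}{\nleftrightarrow} \infty \,\bigr],
\qquad u \ge 0.

% (0.2) NLF(\alpha,\beta): there exist L_0 \ge 1, c_0 > 0, \gamma \in (0,1]
% such that for all L \ge L_0 and u \in [\alpha,\beta]
% (form of the stretched-exponential bound reconstructed, not verbatim):
\mathbb{P}\bigl[\, 0 \stackrel{\mathcal{V}^u}{\longleftrightarrow} \partial B_L,\;
                 0 \stackrel{\mathcal{V}^u}{\nleftrightarrow} \infty \,\bigr]
\;\le\; c_0\, e^{-L^{\gamma}}.

% (0.3) threshold below which the no-large-finite-cluster property holds
\widehat{u} \;=\; \sup\bigl\{\, u \in [0, u_*] \,:\; \mathrm{NLF}(0,u) \text{ holds} \,\bigr\}.
```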
Our first main result is: Theorem 0.1. The function θ_0 is C^1 on the interval [0, û). Incidentally, let us mention that in the case of Bernoulli percolation the function corresponding to θ_0 is known to be C^∞ in the supercritical regime, see Theorem 8.92 of [10]. However, questions pertaining to the sign of the second derivative (in particular the possible convexity of the corresponding function in the supercritical regime) are presently open. Needless to say, the convexity of θ_0 on [0, u_*) displayed in the heuristic graph in Figure 1 is an open mathematical problem. Our interest in Theorem 0.1 comes in conjunction with an application to a variational problem that we now describe. We consider (0.7) D, the closure of a smooth bounded domain, or of an open sup-norm ball, of R^d that contains 0. Given u and ν satisfying (0.8), we introduce the constrained minimization problem (0.9), where C^∞_0(R^d) stands for the set of smooth compactly supported functions on R^d and ⨏_D … dz for the normalized integral (1/|D|) ∫_D … dz, with |D| the Lebesgue measure of D. The motivation for the variational problem (0.9) lies in the fact that it conjecturally describes the large deviation cost of having a fraction at least ν of sites in the large discrete blow-up D_N = (ND) ∩ Z^d of D that are not in the infinite cluster C^u_∞ of V^u. One knows by the arguments of Remark 6.6 2) of [14] that the corresponding asymptotic lower bound (0.10) holds. It is presently open whether the lim inf can be replaced by a limit and the inequality by an equality in (0.10), i.e. whether there is a matching asymptotic upper bound. If such is the case, there is a direct interest in the introduction of a notion of minimizers for (0.9). Indeed, (√u + ϕ)^2(⋅/N) can be interpreted as the slowly varying local levels of the tilted interlacements that enter the derivation of the lower bound (0.10) (see Section 4 and Remark 6.6 2) of [14]). In this perspective, it is a relevant question whether minimizers ϕ reach the value √u_* − √u. 
The regions where they reach the value √ u * − √ u could potentially reflect the presence of droplets secluded from the infinite cluster C u ∞ and taking a share of the burden of creating an excess fraction ν of sites of D N that are not in C u ∞ (see also the discussion at the end of Section 2). The desired notion of minimizers for (0.9) comes in Theorem 0.2 below. For this purpose we introduce the right-continuous modification θ̄ 0 of θ 0 , see (0.11). Clearly, θ̄ 0 ≥ θ 0 , and it is plausible, but presently open, that θ̄ 0 = θ 0 . We recall that D 1 (R d ) stands for the space of locally integrable functions with finite Dirichlet energy that decay at infinity, see Chapter 8 of [11], and define, for D, u, ν as in (0.7), (0.8), the quantity J D u,ν in (0.12) as the analogue of (0.9) in which θ 0 is replaced by θ̄ 0 and C ∞ 0 (R d ) by the nonnegative functions of D 1 (R d ). Theorem 0.2 then states that I D u,ν = J D u,ν (see (0.13)), that the infimum defining J D u,ν is attained (see (0.14)), and that any minimizer ϕ in (0.14) satisfies (0.15): ϕ is harmonic outside D, tends to 0 at infinity, and ess sup ϕ ≤ √ u * − √ u.

Thus, Theorem 0.2 provides a notion of minimizers for (0.9), the variational problem of interest. Its proof is given in Section 2. Additional properties of (0.14) and the corresponding minimizers can be found in Remark 2.1. We refer to Chapter 11 §3 of [1] for other instances of non-smooth variational problems. In Section 3 we bring into play the C 1 -property of θ 0 on [0, u 0 ], see (0.16), and show that for any u ∈ (0, u 0 ) there are c 1 (u, u 0 , D) < θ 0 (u * ) − θ 0 (u) and c 2 (u, u 0 ) > 0 such that

(0.17) for ν ∈ [θ 0 (u), θ 0 (u) + c 1 ], any minimizer ϕ in (0.14) is C 1,α for all 0 < α < 1, and its supremum norm stays strictly below √ u * − √ u.

In view of Theorem 0.1 the above Theorem 0.3 applies to any u 0 < û (with û as in (0.3)). It describes a regime of "small excess" for ν where minimizers do not reach the threshold value √ u * − √ u. In the proof of Theorem 0.3 we use the C 1 -property to write an Euler-Lagrange equation for the minimizers, see (3.19), and derive a bound in terms of ν − θ 0 (u) of the corresponding Lagrange multipliers, see (3.20).
It is an interesting open problem whether a regime of "large excess" for ν can be singled out where some (or all) minimizers of (0.14) reach the threshold value √ u * − √ u on a set of positive Lebesgue measure. We refer to Remark 3.4 for some simple-minded observations related to this issue. Finally, let us state our convention about constants. Throughout we denote by c, c ′ , c̄ positive constants changing from place to place that simply depend on the dimension d. Numbered constants c 0 , c 1 , c 2 , . . . refer to the value corresponding to their first appearance in the text. Dependence on additional parameters appears in the notation.

1 The C 1 -property of θ 0

The main object of this section is to prove Theorem 0.1 stated in the Introduction. Theorem 0.1 is the direct consequence of the following Lemma 1.1 and Proposition 1.2. We let g(⋅, ⋅) stand for the Green function of the simple random walk on Z d .

Lemma 1.1. The function θ 0 is strictly increasing on [0, u * ), with difference quotients controlled as in (1.1).

Proof of Lemma 1.1: Consider u ≥ 0 and ε > 0 such that u + ε < u * . Then, denoting by I u,u+ε the collection of sites of Z d that are visited by trajectories of the interlacement with level lying in (u, u + ε], we have the identity (1.3). Dividing by ε both members of (1.3) and letting ε tend to 0 yields (1.1). This proves Lemma 1.1. ◻

We now turn to the proof of Proposition 1.2 (which states that θ 0 is C 1 on [α, (α + β)/2] when NLF(α, β) holds). An important tool in gaining control over the difference quotients of θ 0 is Lemma 1.3 below.

Proof of Proposition 1.2: We consider 0 ≤ α < β < u * such that NLF(α, β) holds (see (0.2)), and set up the quantities of (1.5). As mentioned above, an important tool in the proof of Proposition 1.2 is provided by Lemma 1.3, in which one considers levels u ′ ≤ u ′′ as in (1.6), (1.7) and sets the corresponding difference quotients ∆ ′ and ∆ ′′ . Then, with cap(⋅) denoting the simple random walk capacity, one has the comparison estimate (1.8). Let us first admit Lemma 1.3 and conclude the proof of Proposition 1.2. We will use Lemma 1.3 to show that (1.10) holds. Once (1.10) is established, Proposition 1.2 will quickly follow (see below (1.20)). For the time being we will prove (1.10). To this end, given η > 0, we introduce in (1.11), (1.12) a finite subdivision u 1 < . . . < u i η of the relevant interval with small increments δ ℓ . We also define the quantities of (1.13). We will apply (1.8) along this subdivision.
The application of (1.8) to u ′′ = u i+1 , u ′ = u i , for 1 ≤ i < i η , together with an elementary observation, yields a family of inequalities. Adding these inequalities, we obtain a telescoped bound. Then, a further application of (1.8) and multiplication of both members of (1.17) by e ∑ ℓ<i η δ ℓ , using (1.16) and (1.14) as well, show that the resulting inequality holds with a term inside the exponential that is at most c(α, β) √ η. We will now see how (1.10) follows. Letting Γ(⋅) stand for the modulus of continuity of θ 0 on the interval [α, (α + β)/2], the above inequality implies that the relevant difference quotients of θ 0 are Cauchy. Thus, letting w tend to v, we find that θ ′ 0 (v) exists. As a result we see that θ ′ 0 is the uniform limit on [α, (α + β)/2] of continuous functions, and as such θ ′ 0 is continuous. This is the claimed C 1 -property of Proposition 1.2. The last missing ingredient is the

Proof of Lemma 1.3: We introduce the notation of (1.25), for v ≥ 0 and L ≥ 1, and the approximations ∆̃ ′ and ∆̃ ′′ of ∆ ′ and ∆ ′′ in (1.6); these are good approximations, as we now explain. Indeed, by (1.25), (1.26), one has (1.27). We now proceed with the proof of (1.8). By (1.27), we have an expression for ∆̃ ′ , and likewise for ∆̃ ′′ . We will now compare ∆̃ ′ and ∆̃ ′′ . We first recall the elementary identity (1.31) for a Poisson-distributed random variable Z with parameter λ > 0. If N u,u ′ (B L ′ ) stands for the number of trajectories in the interlacements with labels in (u, u ′ ] that enter B L ′ , we find by (1.26) that (1.32) holds. If we consider an independent random walk X . with initial distribution e B L ′ , the normalized equilibrium measure of B L ′ , and write V̂ u = V u ∖ range X . , we find from (1.32), (1.31) a formula for ∆̃ ′ (this formula is close in spirit to Theorem 1 of [2]). Then, we note a corresponding identity. Inserting this identity into (1.33) and using (1.31) once again, we find a further expression. Note that L ′′ ≤ L ′ , and a similar calculation as (1.28) yields the analogous identity for ∆̃ ′′ (u ′′ plays the role of u ′ , L ′′ the role of L ′ , and L ′ the role of ∞ in (1.27)). The application of (0.2) with L ′′ as in (1.7) now yields the required error bound. Coming back to (1.35), we find the desired comparison of ∆̃ ′ and ∆̃ ′′ . Using (1.29), (1.30), it then follows that (1.8) holds. This completes the proof of (1.8) and hence of Lemma 1.3.
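For the reader's convenience, the elementary facts about a Poisson-distributed random variable Z with parameter λ > 0 that enter comparisons of this type are presumably the standard identities and bounds

```latex
P[Z = 0] \;=\; e^{-\lambda}, \qquad
\lambda e^{-\lambda} \;=\; P[Z = 1] \;\le\; P[Z \ge 1] \;=\; 1 - e^{-\lambda} \;\le\; \lambda .
```

Since N u,u ′ (B L ′ ) is Poisson-distributed with parameter (u ′ − u) cap(B L ′ ), such bounds make the event that at least one trajectory with label in (u, u ′ ] enters B L ′ comparable, to first order in u ′ − u, to the contribution of a single trajectory; this is plausibly what the independent walk X . with initial distribution e B L ′ encodes.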
◻

With this last ingredient the proof of Proposition 1.2 is now complete. ◻

2 The variational problem

The main object of this section is to prove Theorem 0.2, which provides a notion of minimizers for the variational problem (0.9), see (0.13) − (0.15). At the end of the section, Remark 2.1 contains additional information on the variational problem, in particular when D, see (0.7), is star-shaped or a ball.

Proof of Theorem 0.2: We will first prove (0.14) and (0.15). We consider D, u, ν as in (0.7), (0.8) and J D u,ν defined in (0.12). We let ϕ n ≥ 0 in D 1 (R d ), n ≥ 0, stand for a minimizing sequence of (0.12). Then, by Theorem 8.6, p. 208 and Corollary 9.7, p. 212 of [11], we can extract a subsequence still denoted by ϕ n and find ϕ ≥ 0 in D 1 (R d ) such that 1/(2d) ∫ R d |∇ϕ| 2 dz ≤ lim inf n 1/(2d) ∫ R d |∇ϕ n | 2 dz = J D u,ν and ϕ n → ϕ a.e. and in L 2 loc (R d ). Then, by the reverse Fatou inequality (recall θ̄ 0 ≤ 1 and θ̄ 0 is upper semicontinuous), one has

(2.1) ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz ≥ lim sup n ⨏ D θ̄ 0 (( √ u + ϕ n ) 2 ) dz ≥ ν.

This shows that ϕ is a minimizer for the variational problem in (0.12), and (0.14) is proved. If ϕ is a minimizer for (0.12), note that φ̂ = ϕ ∧ ( √ u * − √ u) ∈ D 1 (R d ) and, using Theorem 6.17, p. 152 of [11], its Dirichlet energy does not exceed that of ϕ. In addition, one has θ̄ 0 (( √ u + φ̂) 2 ) = θ̄ 0 (( √ u + ϕ) 2 ), so that φ̂ is a minimizer for (0.12) as well. It follows that ϕ = φ̂, i.e. ϕ ≤ √ u * − √ u (otherwise ϕ would not be a minimizer). With analogous arguments, one sees that the infimum defining J D u,ν in (0.12) remains the same if one omits the condition ϕ ≥ 0 in the right member of (0.12). Then, using smooth perturbations in R d ∖ D of a minimizer ϕ for (0.12), one finds that ϕ is harmonic outside D and tends to 0 at infinity (see Remark 5.10 1) of [14] for more details). In addition, see the same reference, |z| d−2 ϕ(z) is bounded at infinity and hence everywhere, since ϕ is bounded. This completes the proof of (0.15). We now turn to the proof of (0.13). As already stated above Theorem 0.2, we know by direct inspection that I D u,ν ≥ J D u,ν . Thus, we only need to show the reverse inequality (2.2). To this end, we consider a minimizer ϕ for J D u,ν and know that (0.15) holds.
As we now explain, if ψ ≥ 0 belongs to C ∞ 0 (R d ) and ψ > 0 on D, then one has

(2.3) ⨏ D θ̄ 0 (( √ u + ϕ + ψ) 2 ) dz > ν.

We consider two cases to argue (2.3). Letting m D stand for the normalized Lebesgue measure on D, either

(2.4) m D (ϕ < √ u * − √ u) = 0, or (2.5) m D (ϕ < √ u * − √ u) > 0.

In the first case (2.4), ϕ ≥ √ u * − √ u a.e. on D, so that the left member of (2.3) equals 1 and (2.3) holds since ν < 1 by (0.8). In the second case (2.5), since θ 0 is strictly increasing on [0, u * ) (cf. Lemma 1.1), one has θ̄ 0 (( √ u + ϕ + ψ) 2 ) > θ̄ 0 (( √ u + ϕ) 2 ) on a set of positive m D -measure, and (2.3) follows. We have thus proved (2.3). Using multiplication by a smooth compactly supported [0, 1]-valued function and convolution, we can construct a sequence ϕ n ≥ 0 in C ∞ 0 (R d ) which approximates ϕ + ψ in D 1 (R d ) and such that ϕ n → ϕ + ψ a.e. on D as n → ∞. Then, we have (2.7). Hence, for infinitely many n, one has I D u,ν ≤ 1/(2d) ∫ |∇ϕ n | 2 dz, so that I D u,ν ≤ 1/(2d) ∫ R d |∇(ϕ + ψ)| 2 dz. If we now let ψ tend to 0 in D 1 (R d ) and recall that 1/(2d) ∫ R d |∇ϕ| 2 dz = J D u,ν , we find (2.2). This completes the proof of Theorem 0.2. ◻

Remark 2.1. 1) Note that for D as in (0.7) and 0 < u < u * , the non-decreasing map

(2.9) ν ∈ (θ 0 (u), 1) → J D u,ν is continuous.

Indeed, by definition of I D u,ν in (0.9), the map is right-continuous. To see that the map is also left-continuous, consider ν ∈ (θ 0 (u), 1) and a sequence ν n smaller than ν increasing to ν. If ϕ n is a corresponding sequence of minimizers for (0.14), by the same arguments as above (2.1), we can extract a subsequence still denoted by ϕ n and find ϕ ≥ 0 in D 1 (R d ) so that 1/(2d) ∫ R d |∇ϕ| 2 dz ≤ lim inf n 1/(2d) ∫ R d |∇ϕ n | 2 dz = lim n J D u,ν n and ϕ n → ϕ a.e. Using the reverse Fatou inequality as in (2.1), we then have

(2.10) ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz ≥ ν.

This shows that J D u,ν ≤ lim n J D u,ν n and completes the proof of (2.9). 2) If D in (0.7) is star-shaped around z * ∈ D (that is, when λ(z − z * ) + z * ∈ D for all z ∈ D and 0 ≤ λ ≤ 1), then for u, ν as in (0.8), one has the additional fact:

(2.11) any minimizer ϕ in (0.14) satisfies ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz = ν.

Indeed, if ϕ is a minimizer of (0.14), one sets for 0 < λ < 1, ϕ λ (z) = ϕ(z * + (z − z * )/λ).
Then, one has ∫ R d |∇ϕ λ | 2 dz = λ d−2 ∫ R d |∇ϕ| 2 dz, and, with D λ ⊇ D the image of D under the dilation with center z * and ratio λ −1 , one finds ⨏ D θ̄ 0 (( √ u + ϕ λ ) 2 ) dz = ⨏ D λ θ̄ 0 (( √ u + ϕ) 2 ) dz, which tends to ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz as λ ↑ 1. Hence, the constraint ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz ≥ ν must actually equal ν, otherwise the consideration of ϕ λ for λ < 1 close to 1 would contradict the fact that ϕ is a minimizer for (0.14). This proves (2.11), and (2.12) readily follows. 3) If D satisfying (0.7) is a closed Euclidean ball of positive radius in R d , given a minimizer ϕ of (0.14), we can consider its symmetric decreasing rearrangement ϕ * relative to the center of D, see Chapter 3 §3 of [11]. One knows that ϕ * ∈ D 1 (R d ) and ∫ R d |∇ϕ * | 2 dz ≤ ∫ R d |∇ϕ| 2 dz, see p. 188-189 of the same reference. As we now explain:

(2.14) ϕ * is a minimizer of (0.14) as well.

The argument is a (small) variation on Remark 5.10 2) of [14]. With m D the normalized Lebesgue measure on D, one has a layer-cake representation of ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz, and a similar identity holds with ϕ * in place of ϕ. Hence, we have

(2.15) ⨏ D θ̄ 0 (( √ u + ϕ * ) 2 ) dz ≥ ⨏ D θ̄ 0 (( √ u + ϕ) 2 ) dz ≥ ν.

Thus, ϕ * is a minimizer of (0.14) as well, and the claim (2.14) follows. Incidentally, note that D is clearly star-shaped, so that (2.9) and (2.13) hold. ◻

With Theorem 0.2 we have a notion of minimizers for the variational problem corresponding to (0.9). As mentioned in the Introduction, it is a natural question whether there is a strengthening of the asymptotics (0.10): is it the case that equality holds, with the lim inf replaced by a limit, see (2.16)? Given a minimizer ϕ in (0.14), the function ( √ u + ϕ) 2 ( ⋅ /N ) can heuristically be interpreted as describing the slowly varying local levels of the tilted interlacements that enter the derivation of the lower bound (0.10) for (2.16), see Section 4 of [14]. Hence the special interest in analyzing whether the minimizers ϕ for (0.14) reach the value √ u * − √ u. Indeed, if ϕ remains smaller than √ u * − √ u, the local level function ( √ u + ϕ) 2 remains smaller than u * , and so takes values in the percolative regime of the vacant set of random interlacements.
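The dilation-scaling identity at the start of part 2) of Remark 2.1 is a routine change of variables, spelled out here: with ϕ λ (z) = ϕ(z * + (z − z * )/λ) and the substitution y = z * + (z − z * )/λ (so that dz = λ d dy),

```latex
\int_{\mathbb{R}^d} |\nabla \varphi_\lambda|^2\, dz
= \lambda^{-2}\int_{\mathbb{R}^d} \Big|(\nabla\varphi)\Big(z_* + \tfrac{z-z_*}{\lambda}\Big)\Big|^2\, dz
= \lambda^{d-2}\int_{\mathbb{R}^d} |\nabla \varphi|^2\, dy .
```

Since d ≥ 3, one has λ d−2 < 1 for λ < 1: the dilated function has strictly smaller Dirichlet energy, which is what produces the contradiction when the constraint in (0.14) is not saturated.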
On the other hand, the presence of a region where ϕ ≥ √ u * − √ u raises the question of the possible occurrence of droplets secluded from the infinite cluster of the vacant set that would take part in the creation of an excessive fraction ν of sites of D N outside the infinite cluster of V u (somewhat in the spirit of the Wulff droplet in the case of Bernoulli percolation or of the Ising model, see [4], [3]).

3 An application of the C 1 -property of θ 0 to the variational problem

The main object of this section is to prove Theorem 0.3 of the Introduction, which describes a regime of small excess ν for which all minimizers of the variational problem (0.14) remain strictly below the threshold value √ u * − √ u. At the end of the section, Remark 3.4 contains some simple observations concerning the existence of minimizers reaching the threshold value √ u * − √ u. We consider D as in (0.7), and u 0 as in (0.16). To prove Theorem 0.3, we will replace θ 0 by a suitable C 1 -function θ̃, which agrees with θ 0 on [0, u 0 ], see Lemma 3.1, and show that for 0 < u < u 0 and ν ≥ θ 0 (u) the variational problem J̃ D u,ν attached to θ̃, see (3.15) and Lemma 3.3, has minimizers that satisfy an Euler-Lagrange equation, see (3.19), involving a Lagrange multiplier that can be bounded from above and below in terms of ν − θ 0 (u), see (3.20). Using such tools, we will derive properties such as stated in (0.17) for the minimizers of J̃ D u,ν and show that they coincide with the minimizers of the original problem J D u,ν in (0.14) when 0 < u < u 0 and ν is close to θ 0 (u), see below (3.28). We select functions fulfilling (3.2) − (3.6) and from now on view them as fixed. With u ∈ (0, u 0 ), D as in (0.7), and η̃ as in (3.3), we now introduce the map à of (3.8). We collect some properties of à in the next lemma (recall m D stands for the normalized Lebesgue measure on D).

(3.9) à is Lipschitz-continuous on D 1 (R d ).
(3.10) à is a C 1 -map, and à ′ (ϕ), its differential at ϕ ∈ D 1 (R d ), is the corresponding linear form.
(3.11) For any ϕ ≥ 0, à ′ (ϕ) is non-degenerate.

Proof.
The claim (3.9) is an immediate consequence of the Lipschitz property of η̃ resulting from (3.4). We then turn to the proof of (3.10). For ϕ, ψ in D 1 (R d ), we set up the decomposition (3.12). With the help of the uniform continuity and boundedness of η̃ ′ , see (3.4), for any δ > 0 there is a ρ > 0 such that (3.13) holds for any ϕ, ψ in D 1 (R d ). Since the D 1 (R d )-norm controls the L 2 (m D )-norm, see Theorem 8.3, p. 202 of [11], we see that for any such ϕ, ψ the error term in (3.12) is o(‖ψ‖ D 1 (R d ) ). Hence, à is differentiable with differential given in the second line of (3.10). In addition, with δ > 0 and ρ > 0 as above, for any ϕ, γ, ψ in D 1 (R d ) one obtains a corresponding control on (à ′ (ϕ) − à ′ (γ))(ψ). This readily implies that à is C 1 and completes the proof of (3.10). Finally, (3.11) follows from (3.5) and the fact that u > 0. This completes the proof of Lemma 3.2.

In the next lemma we collect some useful facts about the auxiliary variational problem (3.15) and its minimizers. We denote by G the convolution with the Brownian Green function g(z, z ′ ) = c d |z − z ′ | 2−d (where | ⋅ | denotes the Euclidean norm on R d ).

Lemma 3.3. For D as in (0.7), u ∈ (0, u 0 ), ν ≥ θ̃(u) (= θ 0 (u)), one has (3.16). Moreover, one can omit the condition ϕ ≥ 0 without changing the above value, and

(3.17) any minimizer of (3.15) satisfies Ã(ϕ) = ν.

Now for ν as above, consider ϕ a minimizer of (0.14). Then, we have 1/(2d) ∫ R d |∇ϕ| 2 dz = J D u,ν = J̃ D u,ν , and since θ̃ ≥ θ 0 , we find that ϕ is a minimizer of (3.15) as well.

This discussion naturally raises the question of finding some plausible assumptions on the percolation function θ 0 , and a regime for u, ν, ensuring that minimizers of J D u,ν in (0.14) achieve the maximal value √ u * − √ u on a set of positive measure. But there are many other open questions. For instance, what can be said about the number of minimizers for (0.14)? Is the map ν → J D u,ν in (2.9) convex? An important question is of course whether the asymptotic lower bound (0.10) can be complemented by a matching asymptotic upper bound.
Metabolic Consequences after Urinary Diversion

Metabolic disturbances are well-known, but sometimes neglected, immediate consequences or late sequelae following urinary diversion (UD) using bowel segments. Whereas subclinical disturbances appear to be quite common, clinically relevant metabolic complications are rare. Exclusion of bowel segments for UD results in loss of absorptive surface for physiological bowel function. Previous studies demonstrated that at least some of the absorptive and secreting properties of the bowel are preserved when exposed to urine. For each bowel segment, typical consequences and complications have been reported. The use of ileal and/or colonic segments may result in hyperchloremic metabolic acidosis, which can be prevented if prophylactic treatment with alkali supplementation is started early. The resection of ileal segments may be responsible for malabsorption of vitamin B12 and bile acids, with subsequent neurological and hematological late sequelae as well as potential worsening of the patient's bowel habits. Hence, careful patient and procedure selection, meticulous long-term follow-up, and prophylactic treatment of subclinical acidosis are of paramount importance in the prevention of true metabolic complications.

METABOLIC ALTERATIONS AND COMPLICATIONS

Whenever bowel segments are excluded from the gastrointestinal tract for urinary diversion (UD), the absorptive surface of the respective segment is irreversibly lost for physiological bowel function. Functional bowel loss affects the absorption of nutrients and water from the small and large bowel (1-5). Some of the absorptive and secreting properties of bowel are preserved if exposed to urine, as clearly demonstrated by previous studies (3,5). Typical metabolic consequences and complications have been reported for each bowel segment.
They have been demonstrated to occur more frequently in patients having undergone continent UD, due to the use of longer intestinal segments compared with the shorter segments required for ileal and colonic conduits (2,5,6). Metabolic disturbances mainly involve the electrolytes, loss of bile acids, and malabsorption of vitamin B12. Their extent depends on the length and type of bowel segment used, the duration of the storage interval, the concentration of the urinary constituents the bowel is exposed to, the degree of atrophy of the bowel mucosa, and, importantly, the patient's renal and liver function. Further factors include patient age, previous irradiation or chemotherapy, and primary or secondary diseases, which may promote or give rise to subsequent metabolic complications (3).

ELECTROLYTES

Among the most common metabolic consequences and complications are electrolyte imbalances. Whereas the majority of electrolytes freely traverse the intestinal segments across the apical surface of intestinal cells (transcellular movement), there is also some electrolyte movement between the cell borders (paracellular transport) (7).

GASTRIC SEGMENTS

Hypochloremic hypokalemic metabolic alkalosis may occur when gastric segments are used for UD; this has been reported to be life-threatening in some cases (8). The intestinal hormone gastrin seems to play a major role in this syndrome, as metabolic alkalosis becomes more severe with higher gastrin levels (9).

JEJUNUM

The use of jejunum may entail hyponatremia, hypochloremia, hyperkalemia, azotemia, and acidosis, paralleled by excessive loss of sodium chloride and free water, which in turn can result in dehydration with subsequent hypovolemia and increased renin and aldosterone levels (6,10). The more proximal and the longer the jejunal segment used, the more clinically relevant these disturbances may become (11,12).
ILEAL AND COLONIC SEGMENTS

Exclusion of ileal and colonic bowel segments may result in hyperchloremic metabolic acidosis. During the past 60 years, the underlying pathophysiological mechanisms have been the subject of intensive studies (7). Besides a complex interplay of various factors, ammonium ions (NH4+) are believed to play a major role. When colonic or ileal segments are exposed to urine, ionized ammonium and chloride (Cl−) are reabsorbed by the mucosa (13-15). Mediated by a sodium-hydrogen antiport, ammonium absorption occurs in exchange for sodium (Na+) (16). The exchange of ammonium (NH4+) for a proton (H+) in turn is coupled with the exchange of bicarbonate for chloride (Cl−) (6). Furthermore, ionized ammonium may also be absorbed into the blood through potassium (K+) channels (17), resulting in potential bicarbonate and potassium losses.

ACIDOSIS IN RELATION TO THE DIFFERENT TYPES OF URINARY DIVERSION

In patients having undergone ileal conduit diversion, mild to moderate acidosis can be expected in up to 15%. Up to 10% of patients will require antiacidotic treatment (18-20). In patients with a continent UD, the contact time of urine is markedly longer and the exposed surface of bowel mucosa is much larger. As a consequence, this can lead to a higher incidence of electrolyte disturbances. Accordingly, metabolic acidosis has been reported in up to 50% of patients (7,21). An elevated serum chloride concentration is associated with a decreased base excess (2,7,22). Treatment of hyperchloremic acidosis consists of administration of alkalizing agents. Prophylactic alkali substitution should be commenced at a base excess below −2.5 mmol/L, with the aim of avoiding the long-term complications of clinically evident acidosis (23).

BONE DENSITY

Incorporation of ileal and/or colonic segments into the urinary tract can lead to chronic acidosis, which may play a major role in a decrease of bone mineral density following UD.
The occurrence of rickets in children has been previously reported, but constitutes an extremely rare complication in practice. Similarly, osteomalacia and osteoporosis may develop in adults. However, the definition of osteoporosis in younger adults is not clearly established. At least three possible mechanisms underlying osteoporosis after UD have been described. Bone carbonate has the potential to buffer chronic acidosis in exchange for hydrogen ions, with subsequent release of calcium into the circulation. Calcium is then cleared by the kidneys (24-26). Importantly, renal tubule calcium reabsorption is directly inhibited by sulfate (7,27), with chronic acidosis being a cause of increased intestinal sulfate absorption. Furthermore, acidosis has been reported to activate osteoclasts, resulting in further bone resorption (28). Finally, impaired intestinal absorption of both calcium and vitamin D may additionally develop in response to ileal resection (2). With all these different pathophysiological pathways in mind, one could hypothesize that a substantial proportion of patients following UD using bowel segments will suffer from decreased bone mineralization. Indeed, there are some early reports of osteomalacia after ureterosigmoidostomy (29-32), Kock pouch (33), and segmental ileal ureteric replacement (34). Interestingly, remineralization of the bones was shown to be possible by correction of acidosis in two early reports (35,36). Both animal and clinical studies have demonstrated that the metabolic disturbances are not as severe as commonly assumed and, even more importantly, that they can be prevented provided that prophylactic treatment is initiated early (1,26,37-39). In clinical practice, mild chronic acidosis was shown to be preventable if a base deficit of more than −2.5 mmol/L was corrected early. In these patients, no signs of bone demineralization were observed (40).
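The monitoring rule stated above (prophylactic alkali substitution once the base excess falls below −2.5 mmol/L) amounts to a one-line threshold check. The following sketch is purely illustrative — the function name and labels are ours, and it is of course not a clinical decision tool:

```python
# Illustrative sketch only -- not a clinical decision tool.
# The -2.5 mmol/L base-excess cutoff is the prophylactic threshold cited above.

BASE_EXCESS_CUTOFF = -2.5  # mmol/L

def needs_alkali_substitution(base_excess_mmol_l: float) -> bool:
    """Flag prophylactic alkali substitution per the cited threshold."""
    return base_excess_mmol_l < BASE_EXCESS_CUTOFF

# A base excess of -3.1 mmol/L lies below the cutoff; -1.0 mmol/L does not.
print(needs_alkali_substitution(-3.1))
print(needs_alkali_substitution(-1.0))
```

The point of the rule is its simplicity: a single serum value checked at follow-up decides whether prophylaxis is started before clinically evident acidosis develops.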
If osteomalacia occurs, correction of acidosis, dietary supplements with calcium and vitamin D and, in severe cases, bisphosphonates are recommended (35,36,41,42).

GROWTH RETARDATION

In 1992, Mundy and Nurse as well as Wagstaff et al. found delayed linear growth in some children after UD (43,44). The Baltimore group in the States observed decreased linear growth in patients with bladder exstrophy who had undergone intestinal augmentation, as opposed to those without augmentation (45). In a frequently cited long-term study of 93 patients after various treatment modalities for meningomyelocele (colonic conduit n = 2, ileal conduit n = 28, conservative treatment n = 63), it was shown that those with UD had decreased linear growth; the rate of complications following orthopedic surgery was 17% compared to 3% in the conservative group (46). Recurrent pyelonephritis occurred in 60% of the patients with a conduit as opposed to "only" 21% in the conservative group; likewise, deterioration of the upper urinary tract was observed in 57 and 8%, respectively. All these factors may compromise renal function and thereby decrease the ability of the kidneys to counteract acidosis, which, as already stated above, has a negative impact on bone mineral density. Twenty percent of the patients with a conduit diversion had intermittent metabolic acidosis. At that time, the incidence of complications following orthopedic procedures in patients with meningomyelocele ranged between 16 and 29% (47,48). It is of note, however, that the conclusions of all of the above retrospective studies are limited by methodological shortcomings. For instance, it has been clearly demonstrated that reduced bone mineral density is markedly more common in children with myelomeningocele than in others undergoing UD (49).
In support of this, another recently published study provided compelling evidence that there is a significant correlation between low bone density and wheelchair-dependence in children. Moreover, an association between reduced bone density and higher neurological deficit was suggested (50). Gerharz et al. emphasized that it is worthwhile to take a second look at the linear growth of patients who had undergone enterocystoplasty in childhood. In their study, the initial series by Wagstaff et al. was incorporated (43,51). Eighty-five percent of patients remained on the same or reached a higher centile after surgery; only 15% were in a lower position, and clinically relevant growth retardation was recognized in only four patients. All these patients underwent a complete endocrinological evaluation, demonstrating that enterocystoplasty was not the underlying cause of growth retardation in a single case. Therefore, it seems very unlikely that the post-operative loss of position on the growth curve is a consequence of the UD. Rather, it seems more likely to be a non-specific phenomenon that should be considered in any clinical population of similar size and age distribution after the same length of time (51).

VITAMIN B12

Vitamin B12 (cobalamin) cannot be synthesized by mammals and must be ingested from food. It plays an important role in DNA synthesis and neurological function. The acidic environment in the stomach facilitates the uncoupling of vitamin B12 from food. The parietal cells in the stomach secrete intrinsic factor, which binds to vitamin B12 in the duodenum. In the ileum, the vitamin B12-intrinsic factor complex mediates the absorption of vitamin B12 (52). Moreover, the so-called cubilin receptor, which is found in the entire ileum and not only in the terminal ileum, also has a physiological role in the absorption of vitamin B12 (53-55).
Additionally, there is some evidence of the existence of an alternative system, which is obviously independent of the intrinsic factor and the terminal ileum: about 1% of orally administered vitamin B12 is absorbed by an additional, as yet unknown, pathway (52,56,57). Collectively, current knowledge suggests that the terminal ileum is not the only site of vitamin B12 absorption (56-61). A normal Western-style diet contains approximately 5-15 µg of vitamin B12 per day; however, the daily requirement amounts to only 1-2 µg. Furthermore, the large hepatic vitamin B12 depot in humans prevents symptoms of vitamin B12 deficiency over a period of 2-5 years, even in the presence of severe malabsorption (62). On the other hand, overt vitamin B12 deficiency may result in megaloblastic macrocytic anemia, Hunter's glossitis, and funicular myelosis, an irreversible degeneration of spinal cord white matter. In most cases, however, vitamin B12 deficiency can remain completely asymptomatic (63,64). In adults, resection of more than 60 cm of ileum increases the risk of development of vitamin B12 deficiency, as shown by some earlier studies (65-69). In children, resection of more than 45 cm of ileum was shown to impair vitamin B12 absorption (70), whereas another study concluded that vitamin B12 absorption in children normalizes after ileal resection (71). Other studies reported low vitamin B12 levels in adults in whom ileal segments had been used for UD. This was particularly evident if longer segments of ileum were used (22,72-81). However, the length of excluded ileum is not the only contributing factor (82): radiation therapy (83,84), patient age (85-87), as well as administration of omeprazole (88) may all cause malabsorption of vitamin B12.
In children, no decrease in vitamin B12 levels was observed following bladder augmentation and substitution using ileal segments in most cases after a follow-up period of up to 8 years (89). By contrast, Rosenbaum et al. demonstrated an increasing risk of vitamin B12 deficiency after the seventh post-operative year when ileal segments had been used for bladder augmentation. In their study, 6 out of 29 patients had low vitamin B12 levels (90). In a further study, in which the ileocecal segment had been used, a slow decrease of the serum vitamin B12 level was detected in more than 60% of patients; 8% underwent substitution after completion of a follow-up of 23 years (91). In up to 35% of adult patients, substitution therapy is initiated following UD with ileal segments, such as an ileal neobladder or ileocecal pouch (22,74,76,77,79). It is of note, however, that there are only single case reports of a total of five patients with clinical symptoms (one with neurological symptoms) seemingly related to vitamin B12 deficiency (72,76,79). There are some controversies about the normal levels of vitamin B12 and the impact of age on requirements. A serum vitamin B12 level <100 ng/L is considered pathological, and some authors regard a range between 100 and 200 ng/L as borderline, whereas others recommend substitution of vitamin B12 if the serum level is below 200 ng/L (52,64,85,92). It should be noted that there is wide variation in vitamin B12 serum levels in both adults and children. In children, however, supplementation should be considered if serum levels drop below 200 ng/L, to be on the safe side (90,91). Serum vitamin B12 levels should be checked annually, starting at year five following UD. With regard to the substitution of vitamin B12 in patients with cobalamin deficiency, a randomized study provided evidence that oral therapy (2 mg per day) is as effective as parenteral application (1 mg intramuscularly) (57).
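The serum ranges discussed above can be summarized in a small helper. The cutoffs (100 and 200 ng/L) are taken from the text, while the function name and the category labels are illustrative only:

```python
# Illustrative summary of the serum vitamin B12 ranges discussed above (ng/L).
# Cutoffs from the text; the labels are ours, not a guideline.

def classify_b12(level_ng_l: float) -> str:
    if level_ng_l < 100:
        return "pathological"
    if level_ng_l < 200:
        return "borderline (substitution recommended by some authors)"
    return "within normal range"

for level in (80, 150, 350):
    print(level, "->", classify_b12(level))
```

Note that the borderline band is exactly where practice diverges: some authors observe, others substitute, and in children the text advises erring toward supplementation below 200 ng/L.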
BOWEL DYSFUNCTION

Chologenic diarrhea due to the loss of bile acids via the large bowel is one potential source of bowel dysfunction following UD using bowel segments. The pool of bile acids amounts to around 2-4 g and circulates 5-10 times per day (referred to as the "enterohepatic circulation") (93). In the ileum, active reabsorption of conjugated bile acids involves a Na+-coupled co-transport system (94), and most of the conjugated bile acids are absorbed in the ileum (95). However, alongside active transport, the enterohepatic circulation also includes passive absorption of deconjugated bile acids from the jejunum and ileum. Under the influence of bacterial enzymes in the colon, deconjugation, 7α-dehydroxylation, and dehydrogenation of the conjugated bile acids occur (96), and only 0.2-0.4 g are lost through fecal excretion. This amount is normally synthesized by the liver. Therefore, the pool of bile acids remains by and large constant over time (97). Resection of longer ileal segments may entail malabsorption of bile acids, with subsequent excess transition of bile acids, water, and sodium into the colon. This can result in chologenic (bile acid) diarrhea (98). Multiple mechanisms of bile acid diarrhea are known (96,99-102), but it remains unclear which is the most important one (103). An increase of hepatic bile acid synthesis compensates for the loss. However, with increasing length of ileal resection, depletion of the bile acid pool can occur, resulting in malabsorption of fatty acids, which in turn can cause steatorrhea (104,105). If the ileocecal valve is resected during UD, colonic organisms (e.g., Bacteroides spp.) can enter the ileum, which is usually free of bacterial colonization. These microorganisms cleave bile acids from their conjugates. Free bile acids emulsify fat to a very low extent. As a consequence, micellar formation is reduced, resulting in decreased fat absorption. This is a further cause of steatorrhea (106-108).
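Back-of-the-envelope arithmetic with the figures quoted at the start of this section (pool 2-4 g, 5-10 circulations per day, fecal loss 0.2-0.4 g/day) shows why hepatic synthesis easily keeps the pool constant; the sketch below simply multiplies out these numbers:

```python
# Rough enterohepatic arithmetic from the figures cited in the text.
POOL_G = (2.0, 4.0)          # circulating bile acid pool (g)
CYCLES_PER_DAY = (5, 10)     # enterohepatic circulations per day
FECAL_LOSS_G = (0.2, 0.4)    # daily fecal excretion (g)

# Total bile acid passing the ileum per day: pool size times number of cycles.
daily_flux = (POOL_G[0] * CYCLES_PER_DAY[0], POOL_G[1] * CYCLES_PER_DAY[1])

# Fraction of the daily flux lost in stool (and resynthesized by the liver).
loss_fraction = (FECAL_LOSS_G[0] / daily_flux[1], FECAL_LOSS_G[1] / daily_flux[0])

print(f"bile acid passing the ileum: {daily_flux[0]:.0f}-{daily_flux[1]:.0f} g/day")
print(f"fraction of the daily flux lost: {loss_fraction[0]:.1%}-{loss_fraction[1]:.1%}")
```

So only on the order of a few percent of the recirculating load escapes reabsorption each day, an amount the liver readily replaces; with extensive ileal resection this balance fails, as described above.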
Moreover, exclusion of the ileocecal valve decreases the intestinal transit time (0.8-2.5 h) (109) and may thus increase stool frequency. Reconstruction of the ileocecal valve was previously recommended in patients with risk factors for developing post-operative diarrhea (110). However, in a long-term study using matched pairs, no difference was found between patients with and those without reconstruction of the ileocecal valve (111). In the literature, only a few reports have focused on bowel dysfunction after UD. An increase of stool frequency was reported in 4-33% of patients having undergone ileal or colonic conduit diversion, in 7-59% following bladder augmentation or substitution, and in 3-23% after continent cutaneous diversion (112-119). In this context it is noteworthy, however, that stool incontinence has been reported as surprisingly common in epidemiological studies, with an estimated prevalence of up to 20% depending on age, gender, and population (120, 121). In patients with chologenic diarrhea, the principle of treatment is a reduction of bile acids in the colon. It has long been known that cholestyramine effectively binds bile acids and reduces stool frequency after ileal resection (122). With long-term use, however, cholestyramine may interfere with the absorption of fat-soluble vitamins such as vitamins A, D, and K (123, 124); therefore, vitamin levels should be checked. In patients with an ileocecal pouch, no changes in these vitamins were observed (125). Patients suffering from steatorrhea due to UD should be advised to follow a low-fat diet. In patients with more severe bile acid malabsorption, cholylsarcosine (a bile acid analog) can be used for replacement (126), and substitution of fat-soluble vitamins may become necessary.
Interestingly, there are only two reports on complications due to the lack of fat-soluble vitamins after bowel resection in adults (127, 128), whereas there are no reports thus far in patients after UD (22, 76). CONCLUSION Metabolic consequences and disturbances are quite common in patients after UD using intestinal segments. However, careful patient selection for the various types of UD, prophylactic substitution therapy (alkali supplementation, vitamin B12), early intervention in case of overt clinical symptoms, and life-long follow-up can avert or successfully treat major clinical problems in the majority of cases.
Almost analytic solutions to equilibrium sequences of irrotational binary polytropic stars for n=1 A solution to an equilibrium of irrotational binary polytropic stars in Newtonian gravity is expanded in powers of \epsilon=a_0/R, where R and a_0 are the separation of the binary system and the radius of each star for R=\infty. For the polytropic index n=1, the solutions are given almost analytically up to order \epsilon^6. We have found that in general an equilibrium solution should have a velocity component along the orbital axis and that the central density should decrease as R decreases. Our almost analytic solutions can be used to check the validity of numerical solutions. Coalescing binary neutron stars (BNSs) are considered to be among the most promising sources of gravitational waves for laser interferometers such as TAMA300, GEO600, VIRGO, and LIGO [1]. We can determine the mass and the spin of neutron stars from the gravitational wave signals in the inspiraling phase. We may also extract information on the equation of state of a neutron star from the signals in the pre-merging phase [2]. For this purpose it is important to complete theoretical templates of gravitational waves in the pre-merging phase as well as in the inspiraling phase. Recently, Bonazzola, Gourgoulhon, and Marck numerically calculated quasi-equilibrium configurations of irrotational BNSs in general relativity [3]*. Although their results seem reasonable, we do not have any analytic solutions in general relativity with which to check their validity. On the other hand, Uryū and Eriguchi have numerically constructed stationary structures of irrotational BNSs in Newtonian gravity [6]. We can use the semi-analytic solutions produced by Lai, Rasio, and Shapiro (hereafter LRS) [7] for checking the validity of these results.
However, in the numerical solutions of Uryū and Eriguchi, a velocity component along the orbital axis exists, while in those of LRS such a component is assumed to be zero from the beginning. When we extend the analytic solutions to the general relativistic ones [8], we should include this velocity component. This is because, in a numerical calculation, there is a possibility of obtaining another solution with almost the same binding energy of the BNS, which could lead to a different conclusion [9]. In order to include the velocity component along the orbital axis, we solve the equation of continuity together with the other basic equations. [* The irrotational state is considered to be a realistic state for binary neutron stars before merger [4,5].] The method we use in this Letter is to seek a solution to an equilibrium of irrotational binary polytropic stars in Newtonian gravity by expanding all physical quantities in powers of ε ≡ a_0/R, where R and a_0 are the separation of the binary system and the radius of each star for R = ∞. We extend the method developed by Chandrasekhar more than 65 years ago for corotating fluids [10] to irrotational fluids. Although a binary system consists of two stars, we pay particular attention to one of them. We call it star 1, with mass M_1, and the companion star 2, with mass M_2. In this Letter we adopt two corotating coordinate systems. The first one is X, whose origin is located at the center of mass of the binary system. For calculational convenience, we choose the orbital axis as X_3, and we take the direction of X_1 from the center of mass of star 2 to that of star 1. The second coordinate system is the spherical one r = (r, θ, ϕ), whose origin is located at the center of mass of star 1. We use units of G = 1.
Since we treat irrotational fluids in Newtonian gravity, the basic equations are the equation of state, the Euler equation, the equation of continuity, and the Poisson equation, where P, ρ, n, U, and Ω are the pressure, the density, the polytropic index, the gravitational potential, and the orbital angular velocity, respectively. The gravitational potential U is separated into two parts, i.e., the contribution from star 1 to itself, U_{1→1}, and that from star 2 to star 1, U_{2→1}. U_{2→1} is written as a multipole expansion in which P_l and I⁻_11 denote the Legendre function and the reduced quadrupole moment, and the superscript ′ marks terms concerned with star 2. K is a constant related to entropy and v represents the velocity field in the inertial frame. In the irrotational fluid case, we can express v as the gradient of a scalar function Φ, i.e., v = ∇Φ. Following the Lane-Emden approach, we first express the density as ρ = ρ_c Θ^n, with ρ_c being the central density. Using α ≡ [K(1+n)ρ_c^{1/n−1}/(4π)]^{1/2}, we also introduce ξ = r/α. We expand Θ in a power series of the parameter ε as Θ = Σ_{i=0}^∞ ε^i Θ_i. Since the shape of star 1 is spherical when R is large, the lowest order term of Θ is the solution of the Lane-Emden equation. We then expand each Θ_i in spherical harmonics (a sum over l and m). The radius of a spherical star a_0 is given by a_0 = αξ_1 as usual; note that ξ_1 = π for the n = 1 case. Now we consider the orbital motion of star 1. In the spherical coordinate system it takes the form of Eq. (6), where (Ω × ξ)_{fig} = (0, 0, ξ sin θ) and p ≡ M_1/M_2. The first term on the right-hand side of Eq. (6) comes from the orbital motion of the center of mass of star 1, and the second term comes from the fluid motion around the center of mass of star 1. Next, we rewrite the equation of continuity (3). The condition for Φ at the stellar surface is (∇Φ − Ω × r)·(∇Θ)|_surf = 0, since Θ = 0 at the surface. We expand Φ as well, Φ = Σ_{i=0}^∞ ε^i Φ_i.
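The lowest-order solution can be cross-checked numerically. Below is a sketch (pure-Python RK4, assuming the standard Lane-Emden form Θ'' + (2/ξ)Θ' + Θ^n = 0 with the regular series start near the center); for n = 1 the exact solution is Θ₀ = sin ξ/ξ, so the first zero should come out as ξ₁ = π:

```python
import math

def lane_emden_first_zero(n=1.0, h=1e-4):
    """Integrate the Lane-Emden equation outward with RK4, starting from the
    series expansion Theta ~ 1 - xi^2/6 near the center, and return the first
    zero xi_1 of Theta (i.e., the dimensionless stellar radius)."""
    def deriv(xi, th, dth):
        # (Theta', Theta'') with Theta^n clipped at 0 to avoid complex values
        return dth, -max(th, 0.0) ** n - 2.0 * dth / xi

    xi = 1e-3
    th = 1.0 - xi * xi / 6.0
    dth = -xi / 3.0
    while th > 0.0:
        k1 = deriv(xi, th, dth)
        k2 = deriv(xi + h / 2, th + h / 2 * k1[0], dth + h / 2 * k1[1])
        k3 = deriv(xi + h / 2, th + h / 2 * k2[0], dth + h / 2 * k2[1])
        k4 = deriv(xi + h, th + h * k3[0], dth + h * k3[1])
        th_new = th + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dth_new = dth + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if th_new <= 0.0:
            # linear interpolation across the zero crossing
            return xi + h * th / (th - th_new)
        xi, th, dth = xi + h, th_new, dth_new

print(lane_emden_first_zero(1.0))  # should be close to pi = 3.14159...
```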
The gradient of the lowest order term of Φ should agree with the orbital motion of the center of mass of star 1, i.e., ΩR(Ω × ξ)_orb, because when R is large, the shape of star 1 is spherical and star 1 has only the orbital motion of its center of mass in the inertial frame, with no intrinsic spin. This leads us to normalize Φ as Φ̃ = εΦ/(Ωαa_0). We again expand each Φ̃_i in spherical harmonics (a sum over l and m). The orbital angular velocity is derived from the first tensor virial relation, Eq. (10) [11], where x_1 = r sin θ cos ϕ. If we substitute Eq. (6), Θ_0, and Φ_0 into Eq. (10), we obtain the orbital angular velocity at lowest order as Ω_0² = M_tot/R³, where M_tot = M_1 + M_2. Note that we also expand Ω² as Ω² = Σ_{i=0}^∞ ε^i Ω_i². Finally, we express the gravitational potential by rewriting Eq. (2) as Eq. (11), where U_0 is constant. Substituting Eq. (11) into the Poisson equation (4), we obtain the equation that determines the equilibrium figure, Eq. (12). Now a solution can be obtained iteratively. First, Θ_i is determined by demanding that the gravitational potential and its normal derivative are continuous at the stellar surface [10], that is, U_int|_{ξ=Ξ} = U_ext|_{ξ=Ξ} and ∂U_int/∂ξ|_{ξ=Ξ} = ∂U_ext/∂ξ|_{ξ=Ξ}, where Ξ(θ, ϕ) expresses the surface (Θ(ξ = Ξ(θ, ϕ)) = 0). Substituting Θ_i and Eq. (6) into Eqs. (9) and (10), we obtain Φ_i and Ω_i². After that we substitute these equations into Eq. (12) and derive Θ_{i+1}. We continued this procedure up to order ε⁶ in this Letter. Since we would like to present almost analytic results, we calculate only the n = 1 case here. For other polytropic indices, we must solve differential equations numerically; the results of those cases will be given in a subsequent paper [12]. The density profile up to O(ε⁶) becomes Θ = Θ_0 + ε³Θ_3 + ε⁴Θ_4 + ε⁵Θ_5 + ε⁶Θ_6, where j_l and P_l^m in the explicit expressions denote the spherical Bessel function and the associated Legendre function.
The function ^(6)ψ_22 is obtained by solving a differential equation in which ^(4)φ_2(ξ), defined below, appears. We point out that the terms Θ_1 and Θ_2 disappear from the equation for Θ, since there are no tidal terms in the gravitational potential to produce them. We can write the velocity potential as Φ̃ = Φ̃_0 + ε⁴Φ̃_4 + ε⁵Φ̃_5 + ε⁶Φ̃_6. The functions ^(4)φ_2, ^(5)φ_3, and ^(6)φ_4 are determined by solving the corresponding differential equations, where Θ′_0 denotes dΘ_0/dξ. Note that the terms Φ̃_1, Φ̃_2, and Φ̃_3 disappear from the equation for Φ̃. In Table I we show the results for the velocity potentials and their derivatives for an identical-star binary (p = 1). From the expressions of Φ̃_4, Φ̃_5, and Φ̃_6, it is clear that the velocity has non-zero components along the orbital axis: since ^(4)φ_2 is not proportional to ξ², there remains the velocity component ∂Φ̃_4/∂x̃_3, where x̃_3 = ξ cos θ.† We can also show the existence of velocity components along the orbital axis for Φ̃_5 and Φ̃_6. This analytic result does not agree with the semi-analytic solution given by LRS [7] but qualitatively agrees with the numerical solutions given by Uryū and Eriguchi [6]. The orbital angular velocity is then calculated, with Ī⁻_11 ≡ I⁻_11/ε³. The effect of the quadrupole moments on Ω² is O(ε⁸); one can see from Eq. (5) that the quadrupole term is O(ε⁵) higher than the monopole term, since I⁻_11 is O(ε³). In the case of a different-mass binary, i.e., a neutron star-black hole binary, we can express the orbital angular velocity analogously, where we assume that the black hole is a point source. [† In the case of n = 0, ^(4)φ_2 is proportional to ξ².] We are now ready to calculate the relevant physical quantities. The mass of star 1 is calculated as M_1 = ∫_1 d³x ρ = 4πρ_c α³ ξ_1 [1 + 45ε⁶/(2p²ξ_1²)]. Since we consider the sequence for a given baryon mass, the mass of star 1 should stay the same, which yields Eq. (28), where ρ_c0 is the central density for R = ∞.
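For n = 1 the exponent 1/n − 1 vanishes, so α is independent of ρ_c; holding M_1 fixed in the mass relation above then directly gives the central-density ratio. A quick numerical sketch (the identical-star case p = 1 is assumed by default, with ξ₁ = π):

```python
import math

def central_density_ratio(eps, p=1.0):
    """rho_c / rho_c0 implied by holding M1 fixed in
    M1 = 4*pi*rho_c*alpha**3*xi1*(1 + 45*eps**6 / (2*p**2*xi1**2)),
    with xi1 = pi for n = 1 and alpha independent of rho_c."""
    xi1 = math.pi
    return 1.0 / (1.0 + 45.0 * eps ** 6 / (2.0 * p ** 2 * xi1 ** 2))

# The density deficit scales as eps^6, i.e., as R**-6 for fixed a0.
for eps in (0.1, 0.2, 0.3):
    print(eps, 1.0 - central_density_ratio(eps))
```

The ε⁶ scaling of the deficit reproduces the R⁻⁶ dependence of the central-density decrease discussed in the text.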
We note that the central density of star 2 is obtained by changing p into 1/p in Eq. (28). Equation (28) means that the central density of star 1, up to order ε⁶, decreases below that of the spherical star as the separation decreases. Although we did not assume ellipsoidal figures as in Ref. [7], the result is essentially the same as that derived by Lai [13], since the calculation of the central density up to O(ε⁶) includes only quadratic terms. In both calculations the central density of the irrotational binary system decreases as R⁻⁶. This dependence is different from that of corotating systems, i.e., R⁻³ [10]. In the energy expressions, the reduced quadrupole moment of star 1 is calculated as Ī⁻_11 = (2M_1a_0²/3p)(15/ξ_1² − 1); that of star 2 is obtained by changing M_1 into M_2 and p into 1/p. We note that the above energies satisfy the virial equation for n = 1, i.e., 3Π_tot + (W_self)_tot + (W_int)_tot + 2T_tot = 0. Accordingly, the total energy is given by the sum of these contributions; in the case of a different-mass binary we obtain the corresponding expression. Our solution is correct if ε ≪ 1, so it can be used to check the validity of numerical solutions. For any numerical code, one can solve an equilibrium for large R and compare the numerically derived density and velocity distributions with our almost analytic solutions. However, for small R, our expansion up to ε⁶ may not be enough. We can include the effects of the quadrupole terms of the stars in physical quantities such as the total energy at order ε⁶. Since the next higher order terms are the octupole ones at order ε⁸, and their coefficients are of order unity, we think that this expansion converges as far as the effect of the deformation is concerned. However, at order ε⁹ the spin kinetic energy term appears in the total energy, which may change the behavior of the total energy for small R.
Therefore, in order to apply our solution in this regime, further higher order calculations in our scheme, as well as checks of numerical codes against our almost analytic solutions for large R, are urgently necessary. In Figs. 1 and 2 we show the total energy for binary systems with p = 1 and p = 0.1 as functions of the orbital separation. We find from these figures that the results of numerical and semi-analytic calculations coincide with our analytic solutions up to ε⁶ rather well in the region R/a_0 > 3 for p = 1 and R/a_0 > 5 for p = 0.1, although they differ in detail. We would like to thank K. Ioka and K. Nakao for useful discussions. This work was partly supported by a Grant-in-Aid for Scientific Research Fellowship (No. 9402; KT) and a Grant-in-Aid for Scientific Research (No. 09640351; TN) of the Japanese Ministry of Education, Science, Sports and Culture.
Implementing ICT in classroom practice: what else matters besides the ICT infrastructure?
Background The large-scale International Computer and Information Literacy Study (2018) has an interesting finding concerning Luxembourg teachers. Luxembourg has one of the highest reported levels of technology-related resources for teaching and learning, but a relatively lower reported use of ICT in classroom practice.
Methods ICT innovation requires a high initial level of financial investment in technology, and Luxembourg has achieved this since 2015. Once the necessary financial investment in ICT technology has been made, the key question is what else matters to increase the use of ICT in teaching. To identify the relevant factors, we used the "Four in Balance" model, aimed explicitly at monitoring the implementation of ICT in schools.
Results Using data for 420 teachers in Luxembourg, we identify that within such a technology-driven approach to digitalization, teachers' vision of ICT use in teaching, level of expertise, and use of digital learning materials in class are significant support factors. Leadership and collaboration, in the form of an explicit vision setting ICT as a priority for teaching in the school, also prove to be important.
Conclusions Through these findings, we show that the initial investment in school ICT infrastructure needs to be accompanied in its implementation by teachers' ICT-related beliefs, attitudes, and ICT expertise.
Supplementary Information The online version contains supplementary material available at 10.1186/s40536-022-00144-6.
Introduction The importance of ICT, and teachers' key role in integrating ICT into classroom practice, has become even clearer since the COVID-19 crisis. The International Computer and Information Literacy Study (ICILS), performed in 2018 and covering 13 educational systems, allows for an in-depth assessment of the relevant factors.
Luxembourg, one of the countries participating for the first time in ICILS in 2018, provides an interesting finding. Based on the secondary schools and teachers included in the study, Luxembourg has one of the highest levels of technology-related resources available for teaching and learning, but a relatively lower reported use of ICT by teachers in classroom practice. The main purpose of this study is to identify important factors that account for variation in ICT implementation in a context of high availability of ICT resources. The findings of this study will indicate in what respects schools and teachers need support with regard to ICT implementation when ICT resources are not an issue. The theoretical model The Four in Balance model considers the factors for successful ICT implementation in education, approached through technology-driven or education-driven innovation (Kennisnet, 2011; Koster et al., 2009; Law et al., 2008) (see Fig. 1). The underlying theoretical model has already been tested internationally and comparatively in other countries, such as France, Germany, Japan, the Netherlands, Switzerland, and the US (Tondeur et al., 2009; Tuijnman & Brummelhuis, 1992). Technology-driven innovation in ICT starts with technology and digital learning materials, while education-driven innovation starts with human factors, in terms of people's vision and expertise. Considering these two approaches to ICT innovation, we conclude that Luxembourg embraced the technology-driven innovation approach in introducing ICT in schools, through its national strategy programs and its highly robust ICT infrastructure. The Four in Balance model specifies four basic elements that support teachers' pedagogical use of ICT, namely ICT infrastructure, digital learning materials, vision, and expertise (Kennisnet, 2011; Brummelhuis, 2011) (see Fig. 2).
ICT infrastructure refers to the availability of computers, access to the Internet, and all other similar facilities that relate to the use of ICT. Digital learning materials comprise all the digital educational content and tools that teachers use in their educational practice. Expertise relates to teachers' knowledge and their technical and pedagogical skills in using ICT to achieve the educational objectives, while Vision refers to a school's objectives and the role of teachers, students, and management in achieving specific ICT goals (Kennisnet, 2011, p. 9). Teachers play a crucial role in executing this model, in close relationship with leadership and collaboration between professionals in the school. Fig. 1 "Education-driven" versus "technology-driven" innovation in ICT (Source: Kennisnet (2011, p. 11, as Figure 1.2 in the source)) [Lomos et al., Large-scale Assessments in Education (2023) 11:1] Teachers' perceptions of the Four in Balance model were identified and assessed in this study through the following aspects, presented here from the "technology-driven" innovation approach:
• ICT infrastructure: the availability of computers, interactive whiteboards, and Internet connection in their school.
• Digital learning materials: their use of computer programs in teaching and their use of digital learning materials from different sources.
• Expertise: their familiarity with ICT, level of skills for usage, and general pedagogical ICT skills.
• Vision: their pedagogical vision of using ICT for knowledge transfer and knowledge construction.
In addition, with regard to the general school context and facilitating conditions:
• Collaboration and leadership: teachers' perceptions of collaboration between teachers, the support and source of support around ICT use, the presence of an ICT policy setting out the pedagogic vision for ICT, and the level of joint agreement regarding this ICT policy plan in the school.
ICT infrastructure (computers, tools such as whiteboards, and a well-functioning Internet connection; Kennisnet, 2011) is a necessary condition for teachers' ICT use, but of course not a sufficient one in itself. The ICT infrastructure has proved to be the first and main barrier to teachers' use of ICT for learning and e-learning, especially in many developing educational contexts (e.g., Auma & Achieng, 2020; Mailizar et al., 2020; Marwan, 2008; Owusu-Fordjour et al., 2020). Teachers need to have ready access to the necessary technology (Becker, 2000), and moreover, they need to have the time and opportunities to use this infrastructure in their practice. Accordingly, we expect the availability of ICT infrastructure to support the use of ICT by teachers. For digital learning materials, the Four in Balance model posits that the more teachers use digital learning tools and computer programs, the more they will tend to use ICT regularly for teaching and learning. Nevertheless, the use of digital materials and tools does not automatically imply the pedagogical use of ICT in the classroom. The Technological Pedagogical Content Knowledge (TPACK) model defines the domains of expertise that are necessary for teachers' pedagogical use of ICT. The model underlines the importance of mastering the technology through technological knowledge, technological content knowledge, and technological pedagogical knowledge, in order to integrate ICT pedagogically into classroom practice (Koehler & Mishra, 2005; Koh et al., 2013). Technological knowledge, represented by teachers' skills in using digital tools and digital material, has proved to be a strong predictor of the use of ICT (Koh et al., 2013).
In terms of expertise, the Four in Balance model indicates that teachers' familiarity with ICT, level of skills for its usage, and general pedagogical ICT skills are essential for its pedagogical use (Kennisnet, 2011). Teachers need to have personal, hands-on experience of the technology in teaching for their intentions and actual behavior with ICT to increase (Ertmer, 2005; Kim et al., 2021). In fact, teachers' initial professional development is seen as a precondition for them to learn how best to integrate ICT into their professional practices (Kopcha, 2012), to help them change their beliefs concerning ICT usage (Kim et al., 2012), and to make the effective steps for ICT implementation clearer (Thoma et al., 2017). In a meta-analysis, Wilson et al. (2020) conclude that pre-service training in ICT significantly increases teachers' technical knowledge, even after only one course. Learning how to integrate ICT in teaching has also proved important, with the study by Wilson et al. (2020) showing that combining theory and hands-on lab sessions has a significant effect in increasing technological knowledge. In line with this empirical evidence, we expect that teachers who are more familiar with ICT and have higher levels of initial technological knowledge and ICT self-efficacy will use it to a greater extent in teaching and learning. In terms of vision, the Four in Balance model indicates the importance of teachers' beliefs and values regarding ICT, especially its role in knowledge construction. Knowledge transmission is the pedagogical approach through which teachers decide about and guide students in terms of when and how to learn, while through knowledge construction, teachers facilitate learning as part of an investigation process (Kennisnet, 2011). Schmid et al.
(2021) found that, for planned and actual use of technology in education, teachers' beliefs need to be considered, and only a combination of such relevant predictors will be able to account for the differences in teachers' use of technology. Kreijns et al. (2013) found that teachers' intention to use digital learning materials was determined by, among other things, their attitude regarding its use and utility and their general beliefs about teaching (Ertmer et al., 2012; Schmid et al., 2021). In line with this empirical evidence, our expectation is that teachers with more positive beliefs about ICT's role in teaching are more likely to report high frequencies of pedagogical use of ICT. Collaboration and leadership, as contextual characteristics, refer to teachers' collaboration, their support through professional development within schools, and a joint agreement about ICT priorities in the school (Kennisnet, 2011). Empirical evidence has shown that relevant collaborative professional development can lead to improvement in teachers' use of ICT for instruction (Aldunate & Nussbaum, 2012; Kim et al., 2012), especially through collaborative forms of learning (Kopcha, 2012; Thoma et al., 2017). Teachers need to observe similar others (e.g., colleagues) carrying out the tasks and need access to multiple models in building their competence in order to change their intentions and behaviors. As Putnam and Borko (2000) note, teachers' practice is more likely to change as they participate in learning communities that discuss new materials, methods, and strategies. In addition, a clear ICT policy plan and a centralized leadership ICT vision in a school (Eickelmann, 2011; Kennisnet, 2011) are necessary to accompany greater ICT use by teachers in their working practice.
If schools lack a digital policy plan and a shared vision about why and how ICT needs to be integrated, they will miss the opportunity to support ICT teaching and learning in the school (Costa et al., 2021; Howard et al., 2020). Our expectation is that teachers who report a higher level of collaboration, a good level of support, and a clear centralized vision of ICT as a priority in their school will also report greater use of ICT in their working practice. To summarize, our main hypothesis is that within a technology-driven approach to ICT innovation, teachers' expertise and vision will vary more between teachers and will accordingly have a stronger significant relationship with the reported use of ICT in teaching. Context: ICT educational policy in Luxembourg Luxembourg has a unique socio-economic context with direct implications for its educational system. First, Luxembourg's small population is very diverse, as more than 45 percent of the population is of foreign origin. The largest foreign nationality group is the Portuguese, followed by the French and then other smaller groups of nationalities (Italian, Belgian, German, Balkan/Ex-Yugoslavian, British, other EU, and other non-EU). Moreover, Luxembourg is a trilingual country. Luxembourgish is the national language, French is used for legislation, and Luxembourgish, French, and German are used for social, administrative, and legal purposes (MENJE, 2020a). It should be noted that 43 percent of the pupils in public education do not have Luxembourg nationality and 62 percent of them do not speak Luxembourgish at home (MENJE, 2020a). These unique student demographics have an important impact on student performance in Luxembourg, as measured via large-scale studies in education such as PISA and ICILS 2018 (SCRIPT, 2018).
In terms of ICT, Luxembourg ranks ninth on the ICT Development Index, with 97.8 percent of individuals aged 16-74 having used the Internet in the three months preceding the survey (Fraillon et al., 2020b). Moreover, Luxembourg has placed a major focus on the ICT dimension at the political and social level. In 2014, Luxembourg launched Digital Luxembourg, a multidisciplinary government initiative to support digitalization for social and economic transformation (MENJE, 2015). This governmental action was transposed into education in 2015, with the launch of the Digital (4) Education national strategy. Digital (4) Education defined five major domains of competences: digital peer, digital citizen, digital entrepreneur, digital worker, and digital learner. These broad domains were to be achieved through various programs and projects. In ICILS 2018, however, with regard to the national curriculum for secondary schools, Luxembourg reported only an implicit emphasis on teaching aspects related to computer and information literacy and no emphasis on computational thinking (Fraillon et al., 2020b). As the technology scene has evolved since 2018, Luxembourg has put in place a new strategy for digital education in schools. In 2020, a new digital approach termed Einfach Digital was introduced, centered on five C's: critical thinking, creativity, communication, collaboration, and coding (MENJE, 2020b). The core principles of this new approach for primary and secondary schools have been included in the Guide de référence pour l'éducation aux/et par les médias (SCRIPT, 2020). This framework and working document was the starting point for integrating media and digital competences in everyday practice and for guiding teachers in developing expertise and transferring new digital skills into teaching.
Moreover, in 2021, a new subject termed Digital Sciences was introduced in secondary education (MENJE, 2021) through a pilot phase with eighteen schools; it is intended to be embraced progressively by all schools, and in up to three grades, by 2025. The Einfach Digital strategy, introduced in 2020, endorses a skill-based perspective that is transversal across subjects. It makes reference to digital citizenship and the production of information, with a focus on using the principles of technology and automation to develop skills to solve ecological, societal, and technological problems (MENJE, 2021). The wide availability of tailored professional development programs, conferences, practical sessions, presentations of pedagogical materials via exchange, and collaborative working groups across schools (MENJE, 2020b) offers space and support to teachers to integrate their attitudes, knowledge, and skills in this process. Method A multilevel perspective on investigating the factors associated with the use of ICT in schools is facilitated by the assessment framework of the International Computer and Information Literacy Study (ICILS, 2018). ICILS 2018 offers a rich opportunity to identify and measure important ICT factors and their impact on ICT practice. Its numerous and comprehensive measurement scales make it possible to test relevant theoretical frameworks, in this case the Four in Balance theoretical model. Variables All the variables of interest in this study were addressed in the ICILS 2018 teacher questionnaire. All the data refer to teachers' perceptions of the latent concepts analyzed. Using the conceptualization of the four basic elements of the Four in Balance model, we selected the most appropriate scales from the ICILS 2018 teacher questionnaire as the empirical assessment of the theoretical elements, presented in Table 1. We prioritized the selection of scales over single items when theoretically corresponding scales were available in the data.
We selected three single items, not part of any scale, only when no scale was available to measure an aspect of the basic elements. The dependent variable is teachers' pedagogical use of ICT, measured through eight items, each on a 4-point Likert scale. The independent variables we chose are in line with the Four in Balance theoretical model, capturing the four basic elements (ICT infrastructure, digital learning materials, expertise, and vision), guided through collaboration and leadership. Most concepts of interest were measured through scales of items, usually a set of 4-point Likert-scale items with responses ranging from low to high (strongly disagree to strongly agree; 1-4) (see Appendix 1). All items and scales were developed by the ICILS 2018 team based on a comprehensive assessment framework (Fraillon et al., 2019), built on previous theoretical developments and empirical evidence. Appendix 1 details the items and scales used in this study to test the theoretical model of interest, showing excellent proxy coverage of the theoretical model through these final selected ICILS scales. We considered the theoretical fit of other available scales as independent variables, but ultimately did not include them in the analysis. More specifically, the scales measuring teachers' emphasis on ICT capabilities in class (T_ICTEMP) and on teaching coding tasks (T_CODEMP) were not included because of their explicit focus on very specific skills taught in class with the aim of supporting students to develop them. Most of the items used in this study were integrated into Likert scales with high internal consistency (Cronbach's alpha), as illustrated in Table 2. All the scales were built as IRT WLE scores with values on a continuum with an ICILS 2018 average of 50 and a standard deviation of 10, for equally weighted national samples (see the study's Technical Report, Fraillon et al., 2020a).
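As a rough illustration of this metric (a minimal sketch with hypothetical raw scores; the actual IRT WLE estimation is described in the Technical Report), a set of scores can be linearly rescaled so that the reference sample has a mean of 50 and a standard deviation of 10:

```python
import statistics

def to_icils_metric(scores, ref_mean, ref_sd):
    """Linearly rescale raw scores to the ICILS metric (mean 50, SD 10),
    where ref_mean/ref_sd come from the equally weighted reference sample."""
    return [50 + 10 * (s - ref_mean) / ref_sd for s in scores]

raw = [-1.2, -0.3, 0.0, 0.4, 1.1]  # hypothetical raw WLE scores
ref_mean, ref_sd = statistics.mean(raw), statistics.pstdev(raw)
scaled = to_icils_metric(raw, ref_mean, ref_sd)
```

On this metric, a national scale mean below 50 (such as the 45.54 reported later for Luxembourg) simply means the national sample lies below the equally weighted international average.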
The scales were created by IEA (the International Association for the Evaluation of Educational Achievement, in charge of the ICILS 2018 study) to allow for comparable results across countries in the study, and we accordingly chose to use the same scales to facilitate future applicability. We did not test the relationship of interest with individual items of these scales, and instead only focused on the relationships with the relevant validated scales. In order to appropriately test the Four in Balance model, relevant demographic characteristics were taken into account, in line with previous empirical evidence (Siddiq & Scherer, 2016). The demographic characteristics tested are gender, age, and subject domain, with the expectation that female, younger, and mathematics and science teachers will report greater use of ICT for teaching and learning. Table 3 presents the frequency distribution for the categorical demographics and for the three single items selected to measure the domains of the model, in addition to the scales presented in Table 2. Dummy variables and their baseline categories are indicated, as are the single-item numerical variables, with their corresponding mean and standard deviation.

Sample

The analyses focus on teacher data from the ICILS 2018 study in Luxembourg, which was carried out as a census. All eighth-grade students from all secondary schools were selected to participate. Out of the 41 secondary schools in Luxembourg, 38 participated in the study; the other three declined to participate for various reasons. In each of the participating schools, a sample of teachers teaching eighth-grade students was selected. The number of teachers to select was increased to 25 instead of the requested minimum of 15, due to the small number of secondary schools in Luxembourg. In schools with fewer than 25 teachers, all of them were selected for participation. This resulted in 927 teachers from 38 schools selected to complete the teacher questionnaire.
The participating teachers cannot be directly linked to individual participating students. ICILS 2018 followed a stratified two-stage sampling design, in which first schools and then a certain number of teachers within the sampled schools were sampled. Two weighting factors are considered: the school base weight and the teacher base weight, each the reciprocal of the respective sampling probability. A third weighting factor accounts for teachers working in more than one school. Usually, not all schools or teachers are able or willing to participate, which results in nonresponse at the different levels. To account for this, two nonresponse factors (at the school and the teacher level, respectively) were calculated. Non-participating teachers were taken into consideration by adjusting the weights of the participating teachers within the same schools (see Table 4). The nonresponse adjustment assumes that nonresponse is completely at random, which implies that, within one school, participating and non-participating teachers are otherwise similar. Furthermore, in the international ICILS 2018 data, teachers in schools where more than 50 percent of the teachers did not participate were not included in the international database. Therefore, for Luxembourg, 494 teachers from 28 schools were considered as participating in ICILS 2018. Moreover, ten of the 38 participating schools had a teacher participation rate lower than 50 percent. The implication is that if the teachers in these schools could be considered using an alternative weighting approach, we might obtain a better picture of the teacher population. The final teacher weight used in the international database is the product of all the teacher weighting factors. The calculation procedure for the final teacher weight and for all the weight factors in the ICILS 2018 international database is described in the ICILS 2018 Technical Report (Fraillon et al., 2020a).
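The composition of the final teacher weight can be sketched as follows (a minimal illustration with hypothetical counts; the real factors, including stratification details, are documented in the Technical Report): base weights are reciprocals of selection probabilities, nonresponse adjustments inflate the weights of participants, and the final weight is the product of all factors.

```python
def base_weight(selection_probability):
    """Base weight: the reciprocal of the selection probability."""
    return 1.0 / selection_probability

def nonresponse_adjustment(n_selected, n_participating):
    """Redistributes the weight of nonrespondents over the participants
    in the same unit (school or nonresponse cell)."""
    return n_selected / n_participating

# Hypothetical example: a school sampled with certainty, 25 of 40 eligible
# teachers selected, 15 of those 25 responding, teacher works in one school.
school_bw = base_weight(1.0)
teacher_bw = base_weight(25 / 40)
school_nr = nonresponse_adjustment(41, 38)    # 38 of 41 schools participated
teacher_nr = nonresponse_adjustment(25, 15)
multiplicity = 1.0                            # teacher teaches at one school only
final_teacher_weight = school_bw * teacher_bw * school_nr * teacher_nr * multiplicity
```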
Although this is acceptable at the international level, we believe these teacher weights could be optimized for the Luxembourg national teacher data. Considering the high rate of nonresponse and the relevance of the sampling demographic characteristics to the outcomes of interest, teacher and school weights were re-estimated to work with a more precise population estimate for the teacher population. All the analyses and adjustments presented subsequently were performed with anonymized data for teachers and schools. First, all 41 schools in Luxembourg were asked to participate; thus, the teacher sample was a stratified systematic sample of all teachers. It was stratified because, within schools, teachers were systematically sampled from a list sorted by gender, year of birth, and subject domain. This ensures a proportional teacher sample according to these characteristics, and allows us to define an alternative weighting approach in which these characteristics are used to create alternative nonresponse cells. Second, based on an analysis of the international data, we observe that teachers in the same age group, with the same subject domain, and within certain types of schools give similar answers to the questionnaire. This is not the case for the dependent variable in the current study, but it is for other independent variables of interest (e.g., gender and collaboration between teachers using ICT, teachers' use of digital tools in class, teachers' ICT self-efficacy, age and experience with ICT use during lessons, positive views on using ICT in teaching and learning, use of ICT utility software, teacher participation in participative professional development, etc.). Thus, we believe that by defining the adjustment cells for non-participating teachers in line with these characteristics, the weights generated using this alternative weighting approach are better optimized for teacher analysis in Luxembourg.
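The cell-based adjustment can be sketched in a few lines (hypothetical teachers and cells; the actual cell definitions and factors are given in Additional file 1): selected teachers are grouped by gender, age group, and subject domain, and within each cell the participants' weights are inflated by the ratio of selected to participating teachers.

```python
from collections import defaultdict

# Hypothetical selected teachers: (gender, age_group, subject, participated)
selected = [
    ("F", "30-39", "math", True),
    ("F", "30-39", "math", False),
    ("F", "30-39", "math", True),
    ("M", "50+",   "lang", True),
    ("M", "50+",   "lang", False),
]

# Count selected and participating teachers per adjustment cell
cells = defaultdict(lambda: [0, 0])   # cell -> [n_selected, n_participating]
for gender, age, subject, participated in selected:
    cells[(gender, age, subject)][0] += 1
    cells[(gender, age, subject)][1] += int(participated)

# Adjustment factor per cell: selected / participating
adjustment = {cell: n_sel / n_part for cell, (n_sel, n_part) in cells.items()}
```

The design choice here is that nonrespondents are assumed similar to respondents within the same gender-by-age-by-subject cell, rather than merely within the same school.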
Technical details of the re-weighting procedure and of the final teacher and school weights used in this alternative weighting approach are presented in Additional file 1.

Data analysis

Multiple linear regression analysis is used to determine the relationship between the selected Four in Balance variables and teachers' pedagogical use of ICT. The dependent variable, teacher use of ICT for teaching practices in class (T_ICTPRAC), had 64 cases (13%) with "missing by design" values (meaning "practice not applicable at this class"); their deletion resulted in a working sample of 430 teachers. After deleting other missing "98-not applicable" values in some of the independent variables used, we worked with a final sample of 420 teachers in 28 schools. Multilevel data analysis was considered in line with the theoretical model, but the intraclass correlation coefficient (ICC) showed that the bulk of the variance to be explained in teachers' use of ICT is between teachers and not between schools (ICC = 0.05). Consequently, multiple linear regression analysis was used to test the Four in Balance model on our data for the 420 teachers in 28 schools in Luxembourg. SPSS software was used for data management, together with the IEA's IDB Analyzer to perform the regression analysis, accounting for the specifics of the ICILS/IEA large-scale data. The IEA's IDB Analyzer takes into account information from the sampling design in the computation of sampling variance, statistics, and standard errors. Moreover, it makes use of appropriate sampling weights. Standard errors for the statistics are computed according to the variance estimation procedure required by the design of the corresponding study (IEA IDB Analyzer, version 4.0). Our analysis in this way takes into account the nested structure of teachers within schools and corrects the standard errors for the clustered sampling design (Fraillon et al., 2020a).
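The ICC behind the choice of a single-level analysis can be illustrated with the one-way ANOVA estimator (a sketch on synthetic, balanced data; the study's own ICC comes from the ICILS variance estimation): it expresses the share of outcome variance lying between schools rather than between teachers within schools.

```python
import statistics

def icc_oneway(groups):
    """ICC(1) from a one-way ANOVA on balanced groups (equal group size n0):
    (MSB - MSW) / (MSB + (n0 - 1) * MSW)."""
    k, n0 = len(groups), len(groups[0])
    grand = statistics.mean(x for g in groups for x in g)
    msb = n0 * sum((statistics.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g) / (k * (n0 - 1))
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Three hypothetical "schools" with three teachers each
icc = icc_oneway([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

A value near zero, like the 0.05 observed in this study, indicates that almost all of the variance lies between teachers within schools, so a single-level model with design-corrected standard errors is defensible.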
Moreover, through weighting, the software also corrects for the more general sampling design in relation to the population of interest. However, in this study, we used the teacher and school weights re-estimated through the alternative weighting approach to try to account for teacher nonresponse, in order to produce more accurate estimates of the teacher population. We tested different regression models separately, to identify the effect and the variance explained by the different factors/domains of the theoretical model. More specifically, we first tested a demographics model, considering only the demographic characteristics. These demographic variables were kept as antecedents in all the following models tested. An ICT infrastructure model, a digital learning materials model, an expertise model, a vision model, and a collaboration and leadership model were tested separately in order to identify the contribution and effect of each domain on the dependent variable. Lastly, a final model was tested, taking into consideration all the significant variables identified previously. Standardized regression coefficients and their standard errors are reported, together with the adjusted R-square explained variance for each domain of the Four in Balance proxy model and for the final model.

Results

This section presents the results, in line with the Four in Balance theoretical model and its application to the ICILS 2018 data for Luxembourg. Regarding the first research question on the status quo of teachers' ICT use in practice, the dependent variable, the pedagogical use of ICT scale, has a mean of 45.54 (SD = 9.38) on a metric with a mean of 50 and a standard deviation of 10 for equally weighted countries. When looking at the specific items of the scale, we can see that a relatively small percentage of teachers report using ICT often or always in their lessons for specific teaching practices (see Table 5).
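The two statistics reported for every model, the standardized coefficient and the adjusted R-square, can be sketched for the weighted single-predictor case (a pure-Python illustration with hypothetical data; the actual estimates come from the IDB Analyzer, which also computes standard errors from the design):

```python
def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_std_beta(y, x, w):
    """Standardized coefficient of a weighted regression of y on a single
    predictor x: equal to the weighted Pearson correlation of x and y."""
    mx, my = weighted_mean(x, w), weighted_mean(y, w)
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    syy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return sxy / (sxx * syy) ** 0.5

def adjusted_r2(r2, n, k):
    """Adjusted R-square for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical teacher weights and perfectly linear toy data
beta = weighted_std_beta([2, 4, 6, 8], [1, 2, 3, 4], [1.2, 0.8, 1.0, 1.5])
adj = adjusted_r2(0.33, 420, 2)  # e.g., a hypothetical two-predictor model on 420 teachers
```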
Most of the practices used are in line with the aim of knowledge transmission: providing remedial or enrichment support, whole-class discussions and presentations, and reinforcement of learning through repetition of examples. Less frequent are practices using ICT to support collaboration between students or to provide feedback about their work.

Page 13 of 28 Lomos et al. Large-scale Assessments in Education (2023) 11:1

To answer the second research question, the results of the regression analysis are presented in Table 6, with the details presented next, by model.

Demographic characteristics

We see that gender, age, and the subject domain have no significant relationship with teachers' reported use of ICT for educational purposes in their classes. Considering that these variables might produce spurious effects of other variables on the pedagogical use of ICT, we will keep these variables in all the models presented subsequently.

ICT infrastructure

The ICT infrastructure model has a minor explanatory role (R-square = 0.03). We see a small but significant effect, with available ICT resources for teachers proving to be a significant positive predictor (β = 0.15, SE = 0.06). When investigated item by item, time and opportunity for developing ICT expertise prove to be more important for teachers' use of ICT in practice than the availability of computers and a good Internet connection.

Digital learning materials

The use of digital learning materials during lessons proves to be a strong predictive model for teachers' pedagogical use of ICT (R-square = 0.33). When teachers report more use of digital learning tools (β = 0.38, SE = 0.07) and of general utility software (β = 0.28, SE = 0.06), they also report greater use of ICT for their teaching and learning. Both the use of digital learning tools (r = 0.52, SE = 0.06) and the use of general utility software (r = 0.46, SE = 0.05) have a strong correlation with the dependent variable.
However, the conceptual meanings are different. The dependent variable refers to pedagogical practices with ICT (e.g., remedial or enrichment support, student assessment, feedback, collaboration, and inquiry learning), whereas the other two scales simply list the digital tools (e.g., digital learning games and e-portfolios) and software (e.g., Word and wikis) teachers make use of in their work.

Table 5 Percentages of teachers who reported the use of ICT (often or always) in their lessons for different teaching practices (excluding the teachers who did not use this practice with the reference class):
- The provision of remedial or enrichment support to individual students or small groups of students: 45
- The support of student-led whole-class discussions and presentations: 36
- The reinforcement of learning of skills through repetition of examples: 29
- The support of inquiry learning: 25
- The support of collaboration among students: 16
- The assessment of students' learning through tests: 15
- The provision of feedback to students on their work: 14
- The mediation of communication between students and experts or external mentors: 11

Expertise

The model measuring expertise obtains an R-square of 0.21, the variable with the strongest effect being teachers' ICT self-efficacy (β = 0.35, SE = 0.05). Moreover, initial teacher training in using ICT and in using ICT in teaching has a significant positive effect (β = 0.12, SE = 0.05) compared with no such initial training. By comparison, never using ICT during lessons has a negative effect (β = −0.16, SE = 0.08) compared with using ICT during lessons for more than 5 years.

Vision

We see that teachers' vision about the role of ICT and its use in teaching and learning has an important explanatory value for teachers' pedagogical use of ICT (R-square = 0.15).
Teachers' positive views about the outcomes of using ICT in education (β = 0.38, SE = 0.06), and especially about using ICT in knowledge construction practices (e.g., developing skills in planning and self-regulating work, developing problem-solving skills, and collaborating more effectively), are strongly and positively related to teachers' reported pedagogical use of ICT.

Collaboration and leadership

The model for collaboration and leadership obtains an R-square of 0.19, containing two scales measuring collaboration and one scale measuring leadership priority for ICT in the school. Greater reported collaboration between teachers using ICT (β = 0.18, SE = 0.07) and more reported participation in collaborative professional development related to ICT (β = 0.15, SE = 0.05) are both significantly associated with greater self-reported pedagogical use of ICT by teachers. In addition, when teachers perceive greater emphasis by the leadership in their school on ICT as a priority in teaching (β = 0.26, SE = 0.06), they also report more use of ICT in their classroom practice.

Final model

In the final model, we consider together all the significant variables from the previous models, as presented in Table 6. We find an R-square of 0.42, and three domains appear significant: digital learning materials, expertise, and vision. In addition, the variable measuring leadership agreement on ICT being considered a priority for teaching in the school also appears as significant in the final model. In terms of the use of digital learning materials, the use of digital learning tools (β = 0.27, SE = 0.07) and of utility software in class (β = 0.17, SE = 0.06) proves to be positively related to teachers' pedagogical use of ICT. With regard to expertise, teachers' ICT self-efficacy (β = 0.10, SE = 0.05) has a significant positive relationship with the dependent variable.
In terms of vision, teachers' holding positive views about using ICT in teaching and learning (β = 0.17, SE = 0.05) is a significant predictor of the dependent variable. Lastly, one variable from the leadership domain, ICT being considered a priority in the school (β = 0.13, SE = 0.06), also proves relevant for teachers' use of ICT in practice. None of the other variables, notably the collaboration domain variables, remain significantly related to the dependent variable. This suggests many possible indirect paths between these variables that could be hypothesized and tested more thoroughly in future studies. Table 7 presents a summary of the results, showing the adjusted R-square of each single-domain model and of the entire theoretical model. We can see again that, together in the final model, the Four in Balance proxy model explains 42 percent of the variance in teachers' pedagogical use of ICT, with digital learning materials, expertise, and vision playing the most important roles. The final teacher weight of the alternative weighting approach used here gave a population estimate similar to that of the final teacher weights in the international ICILS 2018 database. The variance explained by some of the models tested changed slightly. Table 7 shows the explained variance of the models (adjusted R-square) obtained using both approaches. The change that deserves mention is the relatively smaller explained variance for the expertise model when using the international database weights, a model that tests variables closely related to age (e.g., earlier experience with ICT use during lessons and the teachers' ICT self-efficacy scale).

Discussion

The current study has looked at the factors identified in previous empirical investigations from the perspective of ICT implementation in schools, within an educational system characterized by high initial ICT technology support.
We find that high financial investment in ICT for schools and communities becomes a support condition only when other staff-related and context-related factors are in place. Again, the scale mean of the availability of ICT resources in schools in Luxembourg (M = 53.72, SD = 8.7) compared with that across all countries in ICILS 2018 (M = 50.0, SD = 10.0) shows a clear teacher agreement on the availability of ICT equipment, digital learning resources, and even the time and opportunity to prepare and develop expertise in ICT. However, regarding the level of teachers' pedagogical use of ICT, we find that many teachers in Luxembourg report using ICT in most lessons, but particularly for knowledge transmission (Kennisnet, 2011). More specifically, ICT is mostly used often or always in the provision of remedial or enrichment support to individual students or small groups (45 percent of teachers) and as a support in student-led whole-class discussions and presentations (36 percent of teachers), followed by the reinforcement of learning of skills through repetition of examples (29 percent of teachers). Fewer teachers in Luxembourg report using ICT in most lessons with the aim of knowledge construction: specifically, for inquiry learning (25 percent of teachers) or provision of feedback to students about their work (14 percent of teachers). The other teaching practices indicated, such as assessment of students' learning through tests (15 percent of teachers) or the support of collaboration among students (16 percent of all teachers) can be supported through ICT, but not many teachers report this for most lessons, although it could be justified for some of these types of practices, such as tests. These results indicate that teachers are still in an initial stage of ICT integration in teaching practices, making use of it for transmitting information and not yet for constructing more abstract knowledge. 
This underlines why it is very important to address the second research question and identify which less-tangible factors support teachers' use of ICT in educational practice in Luxembourg. First, it is important to mention that none of the demographic characteristics tested have a significant relationship with teachers' use of ICT in pedagogical practice. Based on previous empirical evidence, we would expect age and subject domain to have a significant relationship, but this is not found in our sample. There are other studies that have not found a significant association between age or gender and teachers' use of ICT or their ICT self-efficacy (Hämäläinen et al., 2021; Hatlevik & Hatlevik, 2018). A similar lack of association has been noted among pre-service teachers (Tondeur et al., 2018). In our findings, age becomes significantly related to teachers' ICT use in practice only in the vision model, hinting at a possible interaction between age and teachers' views on the utility of ICT. This is something that could be further investigated in future research. In terms of digital learning materials, we expected that using simple software or digital learning platforms would be a precursor for the pedagogical use of ICT for remedial support, provision of feedback, inquiry learning, and more. The data analysis provides strong support for this expectation. The use of digital learning tools and of utility software, as indicators of teachers' technological knowledge, are strong predictors of teachers' reported ICT use in practice (R-square = 0.33). What is surprising, however, is the relatively lower level of the reported use of digital learning tools, when comparing the scale mean in Luxembourg (M = 44.4, SD = 8.6) with the scale mean across all countries (M = 50.0, SD = 10.0). Luxembourg has developed and made available to teachers a large number of high-quality digital learning platforms and tools (MENJE, 2015, 2020b).
This again proves to be a necessary but not a sufficient condition for actual use in practice. In turn, this implies that more effort is needed to support teachers in using these digital learning tools in practice (e.g., digital learning games, concept mapping software, simulation and modelling software, a learning management system, etc.), in order to increase the pedagogical use of ICT. In terms of expertise, as expected, all the variables measuring different aspects of expertise show significant relationships with ICT use in practice: specifically, teachers' use of ICT during lessons, initial teacher training with ICT, and ICT self-efficacy. First, the teachers with no earlier experience of ICT use during lessons report the lowest pedagogical use in practice when compared with teachers having five or more years of experience. Teachers who never used ICT in teaching practice represent only 4 percent of the teachers in the present data, while most teachers (60 percent) report five or more years of experience. In terms of initial training with ICT, only teachers who report having followed initial training both in using ICT and in using ICT in teaching (22 percent) report significantly more use of ICT in teaching when compared with those with no initial training (63 percent). Initial teacher training in ICT proves essential, as expected (Kopcha, 2012), but it is important to focus not only on ICT per se, but also on ICT in teaching. However, teachers' perceived ICT self-efficacy makes the most important contribution in this model, being strongly and positively related to their use of ICT in educational practice (β = 0.35, SE = 0.05). Teachers who report a higher level of ICT self-efficacy also report greater use of ICT in practice. This result confirms the expectation that teachers' perceived ICT self-efficacy influences their intention to use ICT for teaching (Hatlevik, 2017; Teo, 2008), as well as their perceived use of it (Paraskeva et al., 2008).
In terms of vision, the findings are in line with our expectations. There is a lower level of agreement with the possible positive effects of using ICT in teaching in Luxembourg (M = 44.7, SD = 8.6) when compared with the set mean score of the scale across all participating countries (M = 50.0, SD = 10.0). Moreover, teachers who agree with the potentially positive outcomes of using ICT in practice also report greater use of ICT in teaching (β = 0.38, SE = 0.06). The scale of negative views on using ICT in teaching and learning is also part of the data, but it was not included in this model because of its high correlation with the scale of positive views. When introduced in the model, however, the scale of negative views also shows a significant negative association with teachers' reported use of ICT in practice. This association could also explain some reluctance in terms of conscious choices for the use (or non-use) of ICT, and draws attention to the perception of the appropriate use of ICT. In terms of collaboration and leadership, as reported by the teachers, we see that all variables are significantly related to teachers' use of ICT in practice. Collaboration between teachers in using ICT proves to be a facilitator of its use, in line with previous research (Kopcha, 2012; Thoma et al., 2017). If ICT is considered a leadership priority for teaching in the school, this also proves an important stimulus for ICT use in teaching practice. However, the mean of this scale indicates a low level of teachers' agreement on ICT being considered a priority at their school, with only 58 percent of the participating teachers agreeing or strongly agreeing. This is an important aspect, considering that Ertmer (1999) found that having a common school vision, especially one with ICT as a priority, is a predictor of teachers' ICT use in practice in many countries. Liu et al.
(2020) also concluded their extensive review of 131 articles with a message "for institutional leaders on the importance of a clear expression of strategy intent supplemented by established decision making mechanisms and support" (p. 12). The final model brings together all the significant predictors selected in the Four in Balance model and the different domains. We find that the digital learning materials domain, with its use of digital learning tools and utility software as a reflection of teachers' technological knowledge, is the main predictor of teachers' use of ICT in practice. It is followed by the expertise and vision domains, with an important contribution of teachers' agreement with the positive outcomes of using ICT and of teachers' ICT self-efficacy. Lastly, ICT being considered a priority for teaching in the school, representing the collaboration and leadership contextual domain in the final model, also proves to support teachers' use of ICT. Following a 3-year longitudinal study, Müller et al. (2007) recommended that teachers need space and time for exchange and debate, guided by clearly formulated goals and visions for the future. If we reflect back on the barriers that could hinder the use of ICT in teaching and learning, we see that our model identifies many second-order barriers as relevant, such as attitudes, self-perceived competence, and skills (Hämäläinen et al., 2021). These could be focused on in future strategies.

Conclusion

Our findings enrich the knowledge of what factors need to be enhanced when implementing ICT in schools, within a technology-driven approach to innovation. Even though ICT infrastructure (MENJE, 2015) and digital learning materials have been made available to secondary schools in Luxembourg on a very wide basis, we see that teachers still differ in their level of use of these in practice.
As in many TPACK studies (Howard et al., 2020), teachers' technological knowledge, represented here by their use of digital tools and digital learning materials, proves to be a direct and important predictor of their use of ICT in classroom practice. Teachers' vision concerning the positive outcomes of using ICT and integrating it in teaching, especially for knowledge construction, needs to be supported to a greater extent. Schmid et al. (2021) also found that for the planned and actual use of technology in education, teachers' vision and beliefs need to be considered, and only a combination of relevant predictors will be able to determine the differences in teachers' use of technology. Teachers' perceived ICT self-efficacy and long-term use of ICT in practice also need to be facilitated within schools. To summarize, we find teachers' vision and expertise concerning ICT to be key differentiators of its use in teaching; a key message for all educational systems that take a technology-driven approach to innovation. At this stage in the process, more attention needs to be paid to understanding and guiding teachers' vision regarding the use and utility of ICT in teaching practice, as well as teachers' ICT self-efficacy and use. This could be achieved through a common school vision and through a common agreement that ICT is a priority in the school. Policy directions, such as an explicit ICT-focused curriculum, need to be embraced and implemented in each school, through a common school vision on ICT, to actually transfer the impact into teachers' vision and use of digital learning materials in class. The high demand for ICT use in teaching during the COVID-19 health crisis has affected how teachers will use ICT in the future.
The ICILS 2020 teacher panel (Strietholt et al., 2021) surveyed the same teachers as in ICILS 2018 in three countries (Denmark, Finland, and Uruguay) and found a substantial increase in the use of ICT for learning, measured through reported frequency of use. Moreover, the study found little reported change in teachers' vision, measured through their positive and negative views on using ICT for teaching and learning. This result shows that the COVID-19 health crisis might have accelerated the accumulation of ICT skills, but did not necessarily contribute to improving teachers' vision regarding the use and utility of ICT in practice.

Limitations

These results offer opportunities for further research, especially for clarifying the limitations of this study and the empirical elements left open for future investigation in this article. The first limitation of this study is the data analysis approach, which considers only direct effects and not a path or moderation model (Sung et al., 2016). The important implication of this limitation is for the collaboration and leadership domain, which loses its influence in the final model when all the domains are considered. The fact that these are antecedents or contextual factors opens the way for more in-depth investigations through structural equation modeling, in order to identify the paths through which these variables might affect the outcome of interest. Such an approach could identify not only the factors that influence teachers' use of ICT, but also the path through which schools and policymakers can enhance this use (Liu et al., 2020). In relation to this, a second limitation of this study is that the data analysis exclusively addresses the teacher level. This approach does not allow us to account for the schools' and leaders' perspectives.
The leaders' perspective would be useful in particular to corroborate the teachers' perceptions of collaboration and leadership in the school, but also the availability of ICT materials and ICT priorities. Having a school level in the model would allow for testing the level of teacher agreement within schools in terms of collaboration and leadership, in addition to individual perceptions (Costa et al., 2021; Howard et al., 2020). Moreover, having a school level would allow us to test which schools are more "in balance" in terms of support, cooperation, and vision, and to examine whether these schools show better results in terms of ICT use in teaching. School profile analysis could be another approach, identifying a typology of school types (Drossel et al., 2020) characterized by different levels of the basic elements. This brings us to a third limitation, which is the choice of data analysis approach. Taking into consideration the contextual domain of collaboration and leadership in the Four in Balance model, as well as the nested structure of the data, multilevel data analysis may appear to be the most appropriate approach. However, the intra-class correlation coefficient for these data is only 0.05. This small value of the ICC indicates that the amount of variance in the use of ICT between schools is relatively small, whereas the variance between teachers within schools is much larger. Therefore, our data analysis considered only the teacher level, accounting statistically for the nested structure of the data by using the IEA IDB Analyzer. As a robustness check of the results, we also ran the same model using multilevel data analysis, the results of which are presented in Appendix 2 of this paper. No change in the relationships of interest here appeared when using multilevel data analysis, but future research could consider this approach if the focus is explicitly on the role of schools and leadership.
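The ANOVA-based ICC(1) estimator behind this kind of check can be sketched as follows. The data are simulated and hypothetical, constructed only to mimic a small between-school variance of roughly 5 percent of the total.

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools, k = 100, 15  # hypothetical balanced design: k teachers per school

# Two-level structure: small between-school variance (~0.05 of total),
# in the spirit of the ICC of 0.05 reported for these data.
school_effect = rng.normal(0.0, np.sqrt(0.05), n_schools)
scores = school_effect[:, None] + rng.normal(0.0, np.sqrt(0.95),
                                             (n_schools, k))

# One-way ANOVA estimator of ICC(1) for a balanced design:
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
group_means = scores.mean(axis=1)
msb = k * group_means.var(ddof=1)                       # between-school mean square
msw = ((scores - group_means[:, None]) ** 2).sum() \
      / (n_schools * (k - 1))                           # within-school mean square
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"Estimated ICC: {icc:.3f}")
```

With an ICC this small, nearly all of the variance to be explained sits between teachers rather than between schools, which motivates the single-level analysis used here.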
A fourth limitation is related to the high level of teacher nonresponse in Luxembourg, as well as in other countries in the ICILS 2018 study (Fraillon et al., 2020b). For our purposes, teacher and school weights were re-estimated using observed demographic characteristics to work with a more precise population estimate for the teacher population. The risk of nonresponse bias determined by unobserved variables related to the use of ICT could still be present. We acknowledge that we still do not know whether teachers' nonresponse is related to the variables of interest. For example, we do not know whether teachers who do not share a positive vision of ICT refuse to participate more than others, or alternatively are more likely to participate because they want to make their voice heard. Such limitations remain present in the data, although the alternative weighting approach helps in some ways. However, we see that Luxembourg teachers show a similarly low participation rate in other international studies (e.g., the International Civic and Citizenship Education Study 2009), these being the only two large-scale studies that required teacher participation in Luxembourg. In addition, the very small number of secondary schools in Luxembourg means that all national and international studies ask for the participation of all schools, resulting in a very high demand for participation. It is common for schools, and different participants within schools, to decide not to participate in one or another study, most probably regardless of the topic. The latter remark is also supported by the fact that we do not see any pattern in the school nonresponse in ICILS 2018 by type of secondary school or educational track. A fifth limitation is related to the data and the results being based on self-reports, which mostly indicate teachers' attitudes and perceptions of their knowledge and skills.
Hämäläinen et al. (2021) found that teachers' self-reported levels of skills are conditioned by what they consider to be adequate for their tasks. The authors used both actual measurements of teachers' digital skills from PIAAC data and self-perceived levels of digital skills from TALIS data, and found that some teachers reported an adequate level of skills but exhibited low skills. This suggests that under specific expectations, a lower level of skill is considered adequate. In the current study, we did not work with teachers' measured levels of skills and competence, but instead aimed to identify what could support teachers' use of ICT in teaching practice, with a focus on teachers' attitudes, self-perceived efficacy, and use. This limitation concerning the measurements suggests caution should be exercised with regard to the expectation that a higher perceived level of ICT use in teaching will be positively associated with student learning and remedial teaching, considering that teachers with weak measured skills also perceived themselves as able to support student learning through digital technology (Hämäläinen et al., 2021). Considering that the ICILS 2018 data allow for very good coverage of the Four in Balance model, it would be relevant to apply such a model to other countries that participated in the study, especially those where teachers report a high use of ICT in teaching practice and that might have taken an "education-driven" approach (e.g., Denmark, Finland). Denmark and Finland would be very interesting cases to study further, considering that both countries have also ensured adequate investment in ICT infrastructure. In Denmark, ICT resources and learning platforms have been widespread for a long time (Strietholt et al., 2021). In Finland, ICT equipment and learning materials for teaching have also been made available since 2010, through the Basic Education Act (Strietholt et al., 2021).
Both countries, nevertheless, place great importance on ICT-related professional development opportunities for teachers, as was also the case during the COVID-19 health crisis. Furthermore, it would also be important to identify which domain or domains prove significant in predicting teachers' pedagogical use of ICT, and especially whether the impact of the domains differs by innovation approach or other contextual factors. Testing this model across all countries participating in ICILS 2018, and later in ICILS 2023, would allow for a more in-depth understanding of how teachers' use of ICT could be supported by schools and policymakers, and furthermore, how such approaches to ICT innovation relate to student performance.

Appendix 1

Scale: teachers' use of ICT in teaching practices
Items measured on a 4-point Likert scale ranging from "I never use ICT in this practice" to "I always use ICT in this practice"; built as an IRT WLE score with mean 50 and standard deviation 10, higher values indicating more frequent use.
How often do you use ICT in the following practices when teaching your reference class?
- The provision of remedial or enrichment support to individual students or small groups of students
- The support of student-led whole-class discussions and presentations
- The assessment of students' learning through tests
- The provision of feedback to students on their work
- The reinforcement of learning of skills through repetition of examples
- The support of collaboration among students
- The mediation of communication between students and experts or external mentors
- The support of inquiry learning

Scale: availability of computer resources at school (T_RESRC)
Items measured on a 4-point Likert scale ranging from "Strongly disagree" to "Strongly agree"; built as two IRT WLE scores with mean 50 and standard deviation 10, higher values indicating stronger agreement.
To what extent do you agree or disagree with the following statements about using ICT in teaching at your school?
- My school has sufficient ICT equipment (e.g., computers)
- The computer equipment in our school is up to date
- My school has access to sufficient digital learning resources (e.g., learning software or [apps])
- My school has good connectivity (e.g., fast and stable) to the Internet
- There is enough time to prepare lessons that incorporate ICT
- There is sufficient opportunity for me to develop expertise in ICT
- There is sufficient technical support to maintain ICT resources

Scale: use of digital learning tools (T_USETOOL)
Items measured on a 4-point Likert scale ranging from "Never" to "In every or almost every lesson"; built as IRT WLE scores with mean 50 and standard deviation 10, higher values indicating more frequent use.
How often did you use the following tools in your teaching of the reference class this school year?

Expertise

Single item: experience with ICT use during lessons
Item measured on a 4-point Likert scale ranging from "Never" to "More than 5 years."
Approximately how long have you been using ICT for teaching purposes? During lessons:

Single item: initial teacher education on ICT
Two items measured with "Yes" or "No."
Did your [initial teacher education] include the following elements?
(a) Learning how to use ICT
(b) Learning how to use ICT in teaching
For this study, the two items were collapsed into one: Did your [initial teacher education] include the following elements?
Measured with "None," "Either (a) or (b)," and "Both (a) and (b)."

Scale: teacher ICT self-efficacy (T_ICTEFF)
Items measured on a 3-point Likert scale ranging from "I do not think I could do this" to "I know how to do this"; built as IRT WLE scores with mean 50 and standard deviation 10, higher values indicating a higher level of self-efficacy.
How well can you do these tasks using ICT?
- Find useful teaching resources on the Internet
- Contribute to a discussion forum/user group on the Internet (e.g., a wiki or blog)

Vision

Scale: positive views on using ICT in teaching and learning (T_VWPOS)
About the use of ICT for knowledge transmission and knowledge construction.
Items measured on a 4-point Likert scale ranging from "Strongly disagree" to "Strongly agree"; built as IRT WLE scores with mean 50 and standard deviation 10, higher values indicating stronger agreement.
To what extent do you agree or disagree with the following practices and principles in relation to the use of ICT in teaching and learning? Using ICT at school:
- Helps students develop greater interest in learning
- Helps students to work at a level appropriate to their learning needs
- Helps students develop problem-solving skills
- Enables students to collaborate more effectively
- Helps students develop skills in planning and self-regulation of their work
- Improves the academic performance of students
- Enables students to access better sources of information

Collaboration and leadership

Scale: collaboration between teachers in using ICT (T_COLICT)
Items measured on a 4-point Likert scale ranging from "Strongly disagree" to "Strongly agree"; built as IRT WLE scores with mean 50 and standard deviation 10, higher values indicating stronger agreement.
To what extent do you agree or disagree with the following statements about your use of ICT in teaching and learning at your school?
- I work together with other teachers on improving the use of ICT in classroom teaching
- I collaborate with colleagues to develop ICT-based lessons
- I observe how other teachers use ICT in teaching
- I discuss with other teachers how to use ICT in teaching topics
- I share ICT-based resources with other teachers in my school

Scale: teacher participation in collaborative reciprocal professional learning development related to ICT (T_PROFREC)
Items measured on a 3-point Likert scale ranging from "Not at all" to "More than once"; built as IRT WLE scores with mean 50 and standard deviation 10, higher values indicating more frequent participation.
How often have you participated in any of the following professional learning activities in the past two years?
- Observations of other teachers using ICT in teaching
- An ICT-mediated discussion or forum on teaching and learning
- The sharing of digital teaching and learning resources with others through a collaborative workspace
- Use of a collaborative workspace to jointly evaluate student work

Single item: ICT as a school priority
Item measured on a 4-point Likert scale ranging from "Strongly disagree" to "Strongly agree."
" To what extent do you agree or disagree with the following statements about using ICT in teaching at your school? ICT is considered a priority for use in teaching Multilevel data analysis approach Due to the two-level character of the data, with teachers nested within schools (Snijders & Bosker, 2012), we also used hierarchical linear modeling (HLM) to investigate the Four in Balance proxy model. The HLM statistical package was used and the ICILS 2018 specific teacher and school weights were composed for the multilevel approach (the product of the alternative weighting approach teacher/school weight factor and the alternative weighting approach teacher/school weight adjustment). As indicated before, the empty model shows that most of the variance occurs at the teacher level, with only 5 percent of the variance to be explained being at the school level. When running all models following the same approach as previously presented (see Table 6), we find the same results in terms of the size and direction of the significant relationships, and their relative contribution to the explained variance of all models. No
Role of MRI in staging and surgical planning and its clinicopathological correlation in patients with renal cell carcinoma

Abstract

Background and Aims: Radiological evaluation of renal cell carcinoma (RCC) is used for non-invasive staging for better surgical planning. However, the correlation of radiological staging using magnetic resonance imaging (MRI) with histopathological findings has not been done so far. The aim of this study is to assess the role of MRI in pre-operative staging of RCC in patients undergoing radical nephrectomy and nephron sparing surgery (NSS) and to correlate it with histopathological findings.

Settings and Design: This prospective observational study was conducted from February 2015 to October 2016 at a tertiary care hospital in northern India.

Methods: MR imaging was done on a 3 Tesla MR scanner (Signa HDxt, General Electric, Milwaukee, USA). Preoperative staging was based on the 2010 TNM staging system. The preoperative parameters on MRI were tumour size, detection/breach of the pseudocapsule, tumour extension into perirenal fat, and detection of tumour venous thrombus. The staging on MRI was compared with surgical and pathological staging.

Statistical Analysis Used: The agreement between these three staging methods was determined using the kappa statistic (0.0-0.2, poor; 0.2-0.4, fair; 0.4-0.6, moderate; 0.6-0.8, good; 0.8-1.0, excellent).

Results: 30 patients with suspected RCC underwent NSS (n = 10) or radical nephrectomy (n = 20). Mean tumour size was 9.66 ± 2.99 cm in the radical nephrectomy group and 4.06 ± 1.16 cm in the NSS group. There was perfect agreement between MRI, surgical, and pathological staging for breach of the pseudocapsule (κ = 1.0, percentage of agreement = 100%, P < 0.05). In none of the patients did MRI miss extension beyond Gerota's fascia or the presence of venous thrombus.
Conclusion: MRI staging of RCC is an accurate predictor of the surgical and pathological stage and has the potential to become a useful tool for preoperative identification of patients with RCC who can undergo NSS.

Cite this article as: Lal H, Singh P, Jain M, Singh UP, Sureka SK, Yadav RR, et al. Role of MRI in staging and surgical planning and its clinicopathological correlation in patients with renal cell carcinoma. Indian J Radiol Imaging 2019;29:277-83.

Introduction

Renal cell carcinoma (RCC) is the most common malignant tumour of the kidney, accounting for 85-90% of adult renal malignancies and 1-2% of all malignancies. [1] The worldwide incidence of RCC is 150,000 cases annually. [2] The percentage of incidentally discovered RCC ranges from 15-60%. These tumours are generally smaller, with a lower tumour stage, and therefore have a better prognosis. [3] Currently, nephron sparing surgery (NSS) is the preferred approach for the treatment of localized renal tumours. [4] Other minimally invasive techniques, such as radiofrequency ablation (RFA) and cryotherapy, are being used increasingly for the same indication. [5] The goal of preoperative radiological evaluation and staging of RCC is to evaluate tumour size, tumour location, presence or absence of a pseudocapsule, feeding vessels, and the presence and extent of any thrombus in the renal vein (RV) or inferior vena cava (IVC), and to identify invasion of perirenal fat/Gerota's fascia/adjacent organs or lymph nodes. However, it is difficult to accurately predict whether NSS would be feasible for many localized renal tumours, especially those near the renal hilum.
Multidetector CT (MDCT) is the preferred modality for imaging and staging in patients with a renal mass due to its wider availability, high resolution, high speed of acquisition, isotropic imaging, and reformatting in any plane, which can provide excellent anatomical detail. However, MDCT examination also causes exposure to ionising radiation. Use of iodinated contrast can cause nephrotoxicity and contrast reactions, which may also affect residual renal parenchymal function after NSS. Magnetic resonance imaging (MRI) is not associated with ionising radiation and does not require an iodinated contrast agent. Gadolinium contrast is safer than iodinated contrast for kidneys with a normal glomerular filtration rate (GFR).

The purpose of this study was to assess the role of MRI in pre-operative evaluation of surgical and vascular anatomy, as well as staging of RCC, in patients undergoing NSS and radical nephrectomy, and to correlate it with histopathological findings.

Methods

This was a prospective observational study conducted at a tertiary care hospital in northern India from February 2015 to October 2016 and was approved by the Institution's Ethics Committee (IEC code: 2015-26-MD-EXP, dated 11/02/2015). Patients aged 18 years and older who had suspected RCC and were planned for NSS or radical nephrectomy were included in the study. Patients were excluded if they had contraindications for MRI (claustrophobia, pacemaker, or other electromagnetic or non-MR-compatible implants), cardiac conditions (unstable angina, cardiac arrhythmia, and congestive heart failure), or allergy to intravenous gadolinium contrast media; patients who did not undergo surgery, and those in whom postoperative histopathological examination revealed a tumour other than RCC, were also excluded.
MR imaging was done on a 3 Tesla MR scanner (Signa HDxt, General Electric, Milwaukee, USA). The coil used was a phased-array Torso PA (body) coil, with the patient in the supine position. Sequences were entirely breath-hold with field coverage of the area of interest, and the imaging protocol included pre- and post-contrast sequences in axial, coronal, and sagittal planes, including MR angiography. Gadobenate dimeglumine contrast (MultiHance®) was used at a dose of 0.1 mmol/kg of body weight, or a contrast volume of 15 mL (maximum).

MRI image analysis

The tumour diameter was measured in three planes. The largest was chosen to represent the tumour size. The preoperative staging was based on the 2010 TNM staging system [Table 1 and Figures 1-4]. [6] Tumour thrombus in the RV and IVC was diagnosed in case of direct continuity with the renal mass, high signal intensity and signal heterogeneity compared with skeletal muscle on T2-weighted imaging, and contrast enhancement. Lymphadenopathy was diagnosed if there were regional lymph nodes (nodes along the renal arteries, para-caval nodes for right-sided and para-aortic nodes for left-sided RCC) showing diffusion restriction, and/or measuring greater than 1 cm in the short axis, and/or showing contrast enhancement.

Statistical analysis

MRI staging was compared with surgical staging and pathological staging, the latter taken as the gold standard, and agreement between the staging systems was determined using the kappa statistic (0.0-0.2, poor; 0.2-0.4, fair; 0.4-0.6, moderate; 0.6-0.8, good; 0.8-1.0, excellent) using SPSS. A P value of 0.05 or less was considered statistically significant.
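The kappa statistic and its interpretation bands can be computed with a short hand-rolled sketch. The 2×2 example below is illustrative, not the study's raw data: it is constructed to mirror the pseudocapsule-detection result reported later (13 concordant positives, one MRI false positive, one MRI miss, and 15 concordant negatives among 30 patients).

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)  # observed agreement
    # Chance agreement from the marginal proportions of each category
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1 - pe)

def interpret(kappa):
    """Bands used in the study: poor/fair/moderate/good/excellent."""
    bands = [(0.2, "poor"), (0.4, "fair"), (0.6, "moderate"),
             (0.8, "good"), (1.01, "excellent")]
    return next(label for cut, label in bands if kappa < cut)

# Hypothetical ratings (1 = pseudocapsule present, 0 = absent) mirroring
# the reported agreement pattern; not the actual patient-level data.
mri  = [1] * 13 + [1] + [0] + [0] * 15
path = [1] * 13 + [0] + [1] + [0] * 15
k = cohens_kappa(mri, path)
agreement = np.mean(np.asarray(mri) == np.asarray(path))
print(f"kappa = {k:.2f} ({interpret(k)}), agreement = {agreement:.0%}")
```

With two discordant cases out of 30, this reproduces the κ of about 0.87 with 93.33% agreement, i.e., "excellent" on the study's banding.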
Results

A total of 30 patients with suspected RCC were finally included. Of these, 10 patients underwent NSS while 20 patients underwent radical nephrectomy. The mean age was 51.56 ± 14.74 years. There were a total of 22 males and 8 females. 24 patients had clear cell RCC, while 3 patients each had chromophobe and papillary RCC [Figure 6]. The mean size of the tumour was 9.66 ± 2.99 cm in the radical nephrectomy group and 4.06 ± 1.16 cm in the NSS group. Overall, the mean size of the tumour on MRI was 7.79 ± 3.67 cm, compared to 7.35 ± 3.39 cm during surgery and 7.02 ± 3.27 cm on pathological examination.

There was significant agreement in detection of the pseudocapsule among the three modalities (κ = 0.87, SE = 0.0915, percentage of agreement = 93.33%, P = 0.01) [Table 2]. In one patient, MRI failed to detect the pseudocapsule, while in another patient, MRI falsely reported a pseudocapsule compared to surgery and pathology. There was perfect agreement among all three for detecting breach of the pseudocapsule (κ = 1.0, SE = 0.0, percentage of agreement = 100%, P < 0.05). MRI falsely reported tumour extension into the perirenal fat in 2 patients, while missing perirenal extension in one patient compared to surgery and pathology (κ = 0.77, SE = 0.1256, percentage of agreement = 90%, P < 0.05). Extension beyond Gerota's fascia was reported in one patient on MRI in whom surgery and pathology confirmed no such extension (κ = 0.87, SE = 0.1271, percentage of agreement = 96.67%, P < 0.05). In none of the patients did MRI miss extension beyond Gerota's fascia.
Venous thrombus was reported on MRI in 7 patients, and all of them were found to have a thrombus during surgery as well as on pathology (κ = 1.0, SE = 0.0, percentage of agreement = 100%, P < 0.05). MRI did not miss any thrombus. However, one patient had a bland thrombus rather than a tumour thrombus, which was confirmed on pathological examination. One patient had invasion of tumour into the ipsilateral adrenal gland, which was detected preoperatively on MRI and confirmed at surgery as well as on pathology, suggesting perfect agreement. In all 20 patients undergoing radical nephrectomy, as well as all 10 patients undergoing NSS, there was perfect agreement between MRI and the intraoperative findings of the actual number of feeder arteries supplying the kidney/tumour. Stage-wise agreement among MRI, surgery, and pathology is shown in Tables 3 and 4.

Discussion

Since nephrectomy is still the only curative method in the treatment of RCC, preoperative evaluation of RCC is of great importance. Partial nephrectomy, or nephron sparing surgery (NSS), is considered the standard surgical treatment of small renal tumours. [7] RCC of TNM class T1a without evidence of metastasis at primary staging is considered a small renal tumour: the oncologic efficacy and safety of NSS for its treatment are equivalent to radical nephrectomy. [7] In addition, MRI is of great importance for detection of the pseudocapsule, its thickness, and its integrity, which is particularly associated with small renal tumours and serves as a good indication for partial nephrectomy. [8] Both MDCT and MRI perform highly in the T-staging of local tumour extent but perform poorly in N-staging. [9] In our study, the κ test revealed excellent agreement between MRI, intraoperative staging, and pathological staging, which is consistent with the results of Kamel et al. and Spero et al., [10,11] who reported 80-85% accuracy of MRI in staging organ-confined renal cell carcinoma, as well as Ergen et al.
[12] who reported good agreement between MRI and pathological staging for T and M staging, and poor agreement for N staging.

The mean age of the patients included in our study was 51.56 years, ranging from 22 to 80 years, which is in agreement with reported trends. [7] The mean age in the radical nephrectomy group was 55.55 years, and in the partial nephrectomy group 43.60 years, almost a decade less. It was also noted that in the radical nephrectomy group more patients lay toward the upper end of the age range, whereas in the partial nephrectomy group more patients lay toward the lower end. Both observations might suggest that a later age of presentation is associated with more extensive tumours.

The mean tumour size measured on MRI was 7.79 cm, which is marginally larger than that measured by the surgeons (mean: 7.35 cm) and the pathologists (mean: 7.02 cm). However, this difference was statistically insignificant.

A pseudocapsule was detected in 14 cases by MRI, of which the surgeons and pathologists ruled out one case. Additionally, the surgeons and pathologists detected a pseudocapsule in a case which was missed on MRI. The inter-investigation kappa agreement for both MRI vs. surgery and MRI vs. pathology was 0.87, which indicated excellent correlation with a percentage of agreement of 93.33%.

Regarding breach of the pseudocapsule, there was perfect agreement between all the investigations, with a kappa value of 1.0 and a percentage of agreement of 100%. Pseudocapsule integrity is an important factor for surgical planning: in the case of an intact pseudocapsule, the surgeon can simply perform an enucleation, resulting in maximum preservation of unaffected renal parenchyma and thus improved post-operative renal function. [13] In our study, the mean distance of the pseudocapsule from the adjacent pelvicalyceal system was zero; that is, the tumour was abutting the adjacent pelvicalyceal system in all the patients.
The T-staging is determined by the tumour size, extension into perirenal fat and Gerota's fascia, invasion into the ipsilateral adrenal gland, and the possibility of venous involvement comprising the renal vein, infradiaphragmatic IVC, and supradiaphragmatic IVC. For renal cell carcinoma, inter-investigation agreement in our study was excellent for T-staging: it was best in T2, followed by T4, then T1, and then T3.

MRI overstaged one T1a tumour as T3a; in this patient, involvement of perirenal fat was suspected. One T3a tumour was also understaged as a T1b tumour, where involvement of perirenal fat was not suspected. In both cases, the surgeons and the pathologist did not confirm the MRI finding. The MRI findings were attributed to compression of the perirenal fat by the tumour, which obscured the renal capsule, making it difficult to exclude capsular invasion.

Clinically, this was probably not important because both tumours were managed with radical nephrectomy. Other studies have also brought forward the challenges in distinguishing between RCC with and without confinement to the renal capsule on cross-sectional imaging. Catalano et al. [14] diagnosed perirenal fat infiltration by MDCT on 1-mm scans with 96% sensitivity, 93% specificity, and 95% accuracy; Roy et al. [15] reported 84% sensitivity, 95% specificity, and 91% accuracy in T3a staging by MRI; and Ergen et al. [12] concluded that MRI is a reliable method for preoperative staging of RCC. It appears that radiological distinction between confinement of RCC to the true renal capsule and extension into the perirenal fat is currently not fully reliable. Therefore, surgical planning should be individualized in patients whose cross-sectional imaging raises concern over involvement of the renal capsule and the perirenal fat.
[9,12] One T2a tumour was overstaged as T3a both by the radiologists and the surgeons, as extension of the tumour into the renal vein was suspected by both. The pathologists found it to be a bland thrombus rather than a tumour thrombus. One T3a patient was also overstaged as T4, in whom tumour extension beyond Gerota's fascia was suspected, but the surgeons and the pathologist did not confirm the finding. This was due to the presence of a large renal mass abutting the surrounding organs, which made it difficult to exclude extension beyond Gerota's fascia. It is important to rule out Gerota's fascia involvement, as it makes the tumour locally invasive and alters surgical planning. There was no disparity in the decision about invasion of the ipsilateral adrenal gland, and all the investigations were in perfect agreement.

Venous tumour thrombus is present in 4-10% of patients with RCC. It is important to detect the presence and extent of RV and/or IVC tumour thrombus, as well as invasion of the IVC wall, preoperatively for planning the subsequent surgical approach. In a small study conducted by Aslam et al.,
[16] MRI had 100% sensitivity and 89% specificity in the detection of IVC wall involvement: the most reliable sign of IVC wall invasion was tumour signal both inside and outside the vessel wall, while altered signal in the vessel wall and its enhancement were nonspecific. In our study, all seven tumours with venous involvement, three with RV involvement, three with RV plus IVC involvement below the diaphragm, and one with RV plus IVC involvement above the diaphragm, were correctly assessed by MRI in relation to surgery, with a kappa value of 1.0 and a percentage of agreement of 100%. However, in one patient with an RV thrombus, the pathologists found it to be a bland thrombus. Still, there was good correlation with respect to the presence or absence of tumour thrombus on MRI in relation to pathology, with a kappa value of 0.90 and a percentage of agreement of 96.67%. With regard to the extension of tumour thrombus, there was perfect agreement among all three modalities.

Regional lymph node involvement, classified as the N-class of the TNM system, is one of the major factors influencing the prognosis of patients with RCC: the incidence of metastasis in regional lymph nodes without simultaneous distant metastasis is 10-15%, and the 5-year survival rate with lymph node involvement is 8-35%. Whether one uses MDCT or MRI, the commonest criterion for assigning lymph node metastasis remains size assessment.
[17] On histopathology, non-neoplastic causes of lymph node enlargement include hyperplastic or inflammatory changes related to RCC. The specificity of cross-sectional imaging for regional lymph node involvement is poor, but the use of contrast agents may improve the situation. Gadolinium chelates in MRI reach lymph nodes directly via their feeding arteries, and regional lymph nodes enlarged because of metastases show contrast enhancement. In addition, diffusion restriction on MRI is also a criterion for lymph node involvement. In the present study, 4 cases of lymph node involvement categorized as N0 on MRI were found to be N1 by the surgeons. Furthermore, 4 cases with lymph nodes categorized as N1 on MRI were found to be N0 by the pathologists. Intraoperatively, there may be a tendency for the surgeons to assign a lymph node as N1, especially when it is present in the region providing lymphatic drainage to the part of the kidney containing the mass. In other words, false positivity may be higher with intraoperative assessment of the lymph nodes.

In our study, regarding the vascular anatomy of the kidney with tumour, perfect agreement was found between MRI and surgery with respect to detection of the number of arteries and veins supplying the kidney in all cases. In the partial nephrectomy cases, perfect agreement was found between MRI and surgery with respect to detection of the feeding artery to the tumour in all 10 cases. Delineating the feeding artery to the tumour is of utmost importance if partial nephrectomy is planned, as it helps prevent unnecessary vessel ligation, thus reducing the chance of residual renal parenchymal ischemia, which in turn helps preserve maximum post-operative renal function.

The present study is limited by the small number of patients. Further, the effect of preoperative imaging characteristics on operative variables such as the surgical approach (laparoscopic vs. open), surgical technique (radical vs.
NSS), operative time and complications has not been studied. However, this is a unique study that has correlated the imaging characteristics of renal tumours with surgical and pathological characteristics, and it provides impetus for future research to compare the different surgical techniques based on preoperative MRI findings.

In conclusion, the present study found good agreement for MRI TNM staging with respect to surgical and pathological findings. The use of MRI may enhance the urologist's ability to judiciously use organ-preserving surgery for patients with renal cell carcinoma.

Financial support and sponsorship Nil.

Figure 1 (A-D): Contrast MRI in coronal plane showing stage T1a tumour at the upper pole of the right kidney, (A) and corresponding gross pathology specimen, (B); stage T1b tumour at the upper pole of the left kidney, (C) and corresponding gross pathology specimen, (D).

Figure 2 (A-D): Contrast MRI in coronal plane showing stage T2a tumour at the lower pole of the right kidney, (A) and corresponding gross pathology specimen, (B); stage T2b tumour almost entirely replacing the left kidney, (C) and corresponding gross pathology specimen, (D).

Figure 6 (A-D): Histopathological slides of renal cell carcinoma (RCC). (A) Conventional clear cell RCC. Tumour shows large uniform cells with abundant cytoplasm that is glycogen rich. (B) Papillary RCC type I. Tumour papillae are lined by short cuboidal cells with basophilic cytoplasm. Nuclei are small with few inconspicuous nucleoli. (C) Papillary RCC type II. Tumour shows papillae lined by columnar to pseudostratified cells that have striking eosinophilic cytoplasm. (D) Chromophobe RCC. Tumour cells have abundant pale flocculent cytoplasm, prominent cell membranes, perinuclear halos, and wrinkled nuclei.
Sex Differences in the Associations of Traditional Risk Factors and Incident Heart Failure Hospitalization: A Prospective Cohort Study of 102 278 Chinese General Adults

Background Evidence regarding sex differences in the associations of traditional risk factors with incident heart failure (HF) hospitalization among Chinese general adults is insufficient. This study aimed to evaluate the potential sex differences in the associations of traditional risk factors with HF among Chinese general adults. Methods and Results Data were from a subcohort of the China PEACE (Patient-Centered Evaluative Assessment of Cardiac Events) Million Persons Project. The traditional risk factors were collected at baseline, and the study outcome was HF-related hospitalization identified from the Inpatients Registry. A total of 102 278 participants (mean age, 54.3 years; 39.5% men) without prevalent HF were recruited. A total of 1588 cases of HF-related hospitalization were captured after a median follow-up of 3.52 years. The incidence rates were significantly higher in men (2.1%) than in women (1.2%). However, the observed lower risk of HF in women was significantly attenuated or even vanished when several traditional risk factors were poorly controlled (P for sex-by-risk factor interactions <0.05). The selected 11 risk factors collectively explained 62.5% (95% CI, 55.1-68.8) of the population attributable fraction for HF in women, which is much higher than in men (population attributable fraction, 39.6% [95% CI, 28.5-48.9]). Conclusions Although women had a lower incidence rate of hospitalization for HF than men in this study, the risk for HF increased more remarkably in women than in men when several traditional risk factors were poorly controlled. This study suggests that intensive preventative strategies are immediately needed in China.

Similarly, striking sex differences in risk factors for incident HF have been well documented.
6 In a meta-analysis of 47 cohorts involving >10 million participants, Ohkuma and colleagues found that the excess risk of HF attributed to diabetes was significantly greater in women than men, with a women-to-men ratio of relative risk of 1.47 (95% CI, 1.44-1.90). 7 Another pooled analysis of 4 cohorts of 22 681 US adults showed that women with adiposity were more susceptible to having HF with preserved ejection fraction (HFpEF) than men. 8 Additionally, a range of sex disparities in risk factors for incident HF was previously reported, including hypertension, 9 socioeconomic status, 10 and several female-specific risk factors. 11,12 Nonetheless, these studies were mainly from Western European and North American populations; whether the conclusions are consistent among Asian populations remains unknown because regional and ethnic differences in HF have been recently reported. 13,14 Currently, although China has a rapidly growing number of HF cases and a significant proportion of overall patients with HF globally, 5,15 little evidence exists regarding the sex differences in the associations of conventional risk factors and incident HF hospitalization in Chinese general populations. 16 Accordingly, leveraging a community-based, longitudinal subcohort of the China PEACE (Patient-Centered Evaluative Assessment of Cardiac Events) Million Persons Project, we aimed to evaluate the potential sex disparities in the associations of traditional risk factors with HF among Chinese community adults.

Availability of Data and Materials The deidentified participant data will be shared on a request basis. Please directly contact the corresponding author to request data sharing.
Study Design and Participants Detailed information on the study design and participants of the China PEACE Million Persons Project has been reported previously, 17,18 and the current study was conducted in a subcohort of this project. Briefly, this program aims to screen subjects with high cardiovascular disease risk in China; individuals aged 35 to 75 years with local residence registration were eligible and were enrolled after obtaining written informed consent. The study participants were from 8 sites in Guangdong province, Southern China, because we received official permission to access baseline and outcome data only for Guangdong. A total of 102 358 community-dwelling participants were recruited between January 2016 and December 2020, and 80 participants with prevalent HF (self-reported) at baseline were excluded. The final analysis included 102 278 subjects. The current study was approved by the Ethics Committee of Guangdong Provincial People's Hospital (No. GDREC2016438H [R2]). All participants provided written informed consent.

Data Collection and Variables Local trained medical staff used face-to-face interviews to collect information on demographic characteristics, socioeconomic status, and comorbid conditions. After enrollment, a physical examination was performed for each participant. Seated blood pressures were measured twice on the right arm, and the staff recorded the average readings. Anthropometrics were measured using standard protocols, and body mass index (BMI) was calculated by dividing the weight by the height squared. Lipid profile, including total cholesterol, triglyceride, high-density lipoprotein cholesterol and low-density lipoprotein cholesterol, and fasting blood glucose (FBG) were tested according to the predetermined protocols. 18

CLINICAL PERSPECTIVE What Is New?
• The lower incident risk of HF hospitalization observed in women was strikingly attenuated when the number of traditional risk factors increased or the risk factors were poorly controlled. • The 11 selected risk factors together explained more than half of the population attributable fraction for HF-related hospitalization, and the population attributable fractions of the 11 risk factors were much higher in women than in men. • Older age, suboptimal control of blood pressure, and waist circumference were the 3 leading attributable risk factors for HF.

What Are the Clinical Implications?

Ascertainment of Study Outcome The outcome of the current study was hospitalization for HF, which used the code (I50) of the International Classification of Diseases, Tenth Revision (ICD-10) and was identified from the Inpatients Registry. All HF events were independently verified and ascertained by 3 experienced experts. All-cause death was used as the competing risk of incident HF hospitalization and was identified from China's Centre for Disease Prevention and Control's National Mortality Surveillance System. The date of death, the date of HF events, or the date of the last follow-up (December 31, 2021) was used to compute the follow-up time.

Statistical Analysis Participants were divided into 2 groups by sex. Continuous variables were presented as mean±SD and analyzed using Student's t test, and categorical variables were presented as number (proportion) and analyzed using the Pearson χ² test.
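For the categorical baseline comparisons described above, the Pearson χ² statistic on a 2×2 table can be sketched as follows. The counts are purely hypothetical illustrations; the study's own tables give the real frequencies:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    laid out as [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Hypothetical counts: current smokers among 200 men vs 300 women.
stat = chi2_2x2(a=60, b=140, c=30, d=270)
# 3.84 is the critical value for df=1 at alpha=0.05
print(round(stat, 2), stat > 3.84)  # → 32.52 True
```

A statistic above the df=1 critical value of 3.84 corresponds to P<0.05, the conventional significance threshold used in the paper.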
Eleven conventional risk factors were selected in the present study: age, systolic blood pressure (SBP), diastolic blood pressure, heart rate, BMI, waist circumference (WC), triglyceride, total cholesterol, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and FBG. The selection criteria for the risk factors were (1) available in the China PEACE Million Persons Project; (2) significantly related to cardiovascular health; and (3) relatively prevalent and amenable to beneficial interventions. Age was divided into 4 groups (35-44 years, 45-54 years, 55-64 years, and 65-75 years), and the other 10 risk factors were categorized on the basis of clinical thresholds or quartiles. For each risk factor, Fine and Gray models were fitted to account for the competing risk of all-cause death in the analyses of HF-related hospitalization, with adjustment for the above 11 risk factors and demographics, socioeconomic status, and comorbidities. The detailed definitions of all covariates are provided in Data S1. The male group served as the reference group for each category, and subhazard ratios (sHRs) and 95% CIs were reported. In addition, restricted cubic splines using Cox proportional hazards models with adjustment for the same covariates as in the Fine and Gray models were used to plot the flexible relationships between the risk factors as continuous variables and HF-related hospitalization among men and women. The proportional hazards assumption was checked using the Schoenfeld residual test, and the nonlinearity of the continuous variables was confirmed using the Martingale residual test. The interactions between sex and risk factors were tested using interaction terms and likelihood ratio tests. After confirming the significant interactions between sex and 4 risk factors (age, blood pressure, obesity, and triglyceride) in the associations with incident HF hospitalization, we also assessed the relationships between sex and
hospitalization for HF among participants with 0 to 4 joint risk factors (age ≥60 years; high blood pressure: SBP/diastolic blood pressure ≥130/80 mm Hg; obesity: BMI ≥24 kg/m² or WC ≥90 cm (men)/85 cm (women); high triglyceride: >1.7 mmol/L).

Additional and Sensitivity Analyses We also calculated the population attributable fractions (PAFs) and 95% CIs for incident HF hospitalization attributed to the 11 risk factors at the median follow-up time using a model-based method, 19 with adjustment for the same covariates used in the Fine and Gray models. Furthermore, given the significant associations of smoking and alcohol drinking with HF, we additionally assessed the relationships between tobacco and alcohol use and incident HF hospitalization by sex, although the prevalences of current smokers and drinkers were relatively limited in our data.

Several sensitivity analyses were also performed. First, we excluded individuals with prevalent cardiovascular diseases and repeated the analyses. Second, we used the relative excess risk of interaction and the attributable proportion of interaction to estimate additive interactions, 20 categorizing the risk factors as binary variables according to the clinical risk thresholds. All analyses were conducted using R version 4.

Baseline Characteristics of Participants As shown in Table 1, among the 102 278 participants, 39.5% (n=40 399) were men, the mean age was 54.3 years, and the mean SBP and diastolic blood pressure were 130.1 (±19.0) and 79.2 (±11.3) mm Hg, respectively. Hypertension (22.7%) and diabetes (7.6%) were the 2 most prevalent comorbidities. Women were younger, less likely to be smokers and drinkers, and had lower socioeconomic status and comorbid burdens than men.
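The joint risk factor definition above reduces to a simple count per participant. The thresholds below are taken directly from the text; the example participant values are hypothetical:

```python
def joint_risk_factors(age, sbp, dbp, bmi, wc, tg, male):
    """Count how many of the 4 joint risk factors are present:
    age >= 60 y; blood pressure >= 130/80 mm Hg;
    obesity (BMI >= 24 kg/m2, or WC >= 90 cm in men / 85 cm in women);
    triglyceride > 1.7 mmol/L."""
    return sum([
        age >= 60,
        sbp >= 130 or dbp >= 80,
        bmi >= 24 or wc >= (90 if male else 85),
        tg > 1.7,
    ])

# Hypothetical 62-year-old woman: BP 128/82, BMI 23.5, WC 86 cm, TG 1.9
print(joint_risk_factors(age=62, sbp=128, dbp=82,
                         bmi=23.5, wc=86, tg=1.9, male=False))  # → 4
```

Note that blood pressure and obesity each count once even when both of their subcriteria are met, which is why the total ranges from 0 to 4.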
The characteristics of individuals with and without events are presented in Table S1. Participants without incident HF hospitalization were younger, less likely to be current smokers, had lower socioeconomic status, and had lower levels of blood pressure, heart rate, and anthropometrics than their counterparts with HF. By contrast, participants with hospitalization for HF had higher comorbid burdens and a higher level of FBG compared with those without HF.

Sex Differences in the Associations of Risk Factors and Incident HF Hospitalization A total of 1588 cases of HF-related hospitalization were captured after a median follow-up of 3.52 years (4.62 [95% CI, 4.40-4.85] per 1000 person-years). The incidence rates were significantly higher in men (2.1%) than in women (1.2%; crude sHR, 0.55 [95% CI, 0.50-0.60]). When the 11 risk factors were considered as categorical variables, the lower incidence of HF hospitalization in women was significantly attenuated with age, SBP, BMI, WC, and triglyceride increment (P for sex-by-risk factor interactions <0.05; Figure 1). When considering the risk factors as continuous variables, the results remained approximately identical. The lower incident HF hospitalization risk in women was observably attenuated with age, diastolic blood pressure, BMI, WC, and triglyceride increment (P for sex-by-risk factor interactions <0.05; Table 2). We also plotted the flexible relationships between the risk factors and HF-related hospitalization. According to Figure S1, the incidence rate of female HF hospitalization neared or exceeded that observed in men when the risk factors were poorly controlled. The Schoenfeld residuals for sex were plotted against follow-up time and showed that the proportional hazards assumption was valid (Figure S3). After confirming the significant interactions between sex and 4 risk factors (age, blood pressure, obesity, and triglyceride) in the associations with HF-related hospitalization, we evaluated the associations
between sex and incident HF hospitalization among participants with 0 to 4 joint risk factors. Women had the lowest risk of HF when the 4 risk factors were absent (sHR, 0.32 [95% CI, 0.17-0.61]). With increasing numbers of risk factors present, the observed lower risk of HF hospitalization in women was significantly attenuated (P for sex-by-joint risk factor interaction=0.028; Figure 2).

In the additional analyses, we did not find that smoking and alcohol drinking significantly altered the associations between sex and incident HF hospitalization (Table S2). Additionally, 12.1% (95% CI, 9.3-14.7) of incident HF hospitalizations were attributable to smoking and alcohol drinking. The PAFs of these risk factors for HF-related hospitalization by sex are also displayed in Table S4.

Qiu et al. Sex Differences in Risk Factors for Heart Failure
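The study estimates PAFs with a model-based method (its reference 19). As a simpler illustration of the concept only, not the paper's estimator, Levin's classic formula relates a factor's prevalence and relative risk to its PAF; the prevalence and RR below are hypothetical:

```python
def levin_paf(prevalence, rr):
    """Levin's classic population attributable fraction:
    PAF = p*(RR - 1) / (p*(RR - 1) + 1),
    where p is the exposure prevalence and RR the relative risk.
    Shown for illustration; the study used a model-based estimator."""
    x = prevalence * (rr - 1.0)
    return x / (x + 1.0)

# Hypothetical risk factor: 30% prevalence, RR of 2.0
print(round(levin_paf(0.30, 2.0), 3))  # → 0.231
```

The formula makes explicit why PAFs can differ by sex even for the same RR: a factor that is more prevalent in one sex contributes a larger attributable fraction in that group.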
Sensitivity Analysis After excluding participants with prevalent cardiovascular diseases, the results remained consistent (Tables S5 and S6, Figure S4). We also estimated the additive interactions using the relative excess risk of interaction and the attributable proportion of interaction by categorizing the risk factors as binary variables. Similarly, the lower incident HF hospitalization rate in women was markedly attenuated with age, BMI, WC, and triglyceride increment. Moreover, the results of the additive interaction showed that the lower risk of HF hospitalization in women was also masked with FBG increment (Table S7).

DISCUSSION This large-scale, community-based study showed that men had a higher crude incidence rate of HF hospitalization than women. However, the observed lower risk of the female sex for HF-related hospitalization was strikingly attenuated when several traditional risk factors (ie, age, blood pressure, BMI, WC, and triglyceride) developed or were poorly controlled, indicating that women had a stronger association with older age, SBP, BMI, and WC but not heart rate, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, and FBG (Figure 3). Moreover, the 11 conventional risk factors together explained more than half of the PAFs for incident HF hospitalization, and the PAFs of the 11 risk factors were much higher in women than in men. Older age and suboptimal control of SBP and WC were the 3 major attributable factors for HF. HF is prevalent and rapidly developing in China. Our study showed that the incidence rate of HF hospitalization was ≈4.62 (95% CI, 4.40-4.85) per 1000 person-years, consistent with previous epidemiological studies in China. 21 Also similar to prior studies, the current study demonstrated a lower incidence rate of HF-related hospitalization in women than in men. Recently, a pooled study of 12 417 US adults aged >45 years showed a higher lifetime risk for HF in men than women through age 90 years.
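The crude rate of ≈4.62 per 1000 person-years quoted above follows from simple arithmetic. The person-years figure below is back-calculated from the reported rate and event count, so it is approximate rather than taken from the paper:

```python
def incidence_rate_per_1000(events, person_years):
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# 1588 HF hospitalizations over roughly 343,700 person-years
# (approximate, back-calculated from the reported 4.62/1000 PY)
print(round(incidence_rate_per_1000(1588, 343_700), 2))  # → 4.62
```

Dividing the implied person-years by the 102 278 participants gives a mean follow-up of roughly 3.4 years, in line with the reported median of 3.52 years.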
22 Another community-based European study involving >70 000 participants also observed a lower risk of incident HF hospitalization in women (5.9%) compared with men (7.3%) after a follow-up of over 10 years. 23 Nevertheless, the potential protection conferred by female sex has not been widely reproduced across major epidemiological studies, 6,24 which merits further investigation. Indeed, sex disparities in HF-related epidemiology have been well described in European and North American populations 6 but remain poorly investigated in China. Our current study added new evidence that middle-aged women had a lower risk of incident HF hospitalization than men in China. Further studies are needed to investigate the sex differences in HF incidence stratified by HF subtype. Another interesting finding of our study was that the observed lower risk of the female sex for HF in our data might be masked when the number of traditional risk factors increased or risk factors were poorly controlled. Previous studies showed that men had a higher risk of HF than women at age <75 years, whereas women of older age experienced HF more frequently. 25,26 Evidence from the Cardiovascular Health Study and the Framingham cohort also demonstrated that the association of SBP and HF was stronger in women than men. 9,27 In addition, women with adiposity tended to harbor a greater risk of incident HF than men, especially in the development of HFpEF. 8 Controversy over sex differences in risk factors for HF definitely exists. Contrary to our study, a study conducted in European populations showed that several risk factors, including SBP, heart rate, C-reactive protein, and NT-proBNP (N-terminal pro-B-type natriuretic peptide), had a more hazardous impact on HF development in men than women, 23 and such disparity may be partly due to differences in race, the younger age, and the higher SBP of the male group in these European cohorts.
23 Our current study was the first to display the sex disparities in the majority of conventional risk factors for HF among Chinese general populations and to show that, compared with men, age and several modifiable risk factors had a more deleterious effect on incident HF hospitalization in women. Furthermore, the lower incident risk of HF-related hospitalization in women might even vanish as more risk factors become poorly controlled (Figure 2). Meanwhile, several regional and ethnic variations in the sex differences in HF were also reemphasized in our study.

The underlying mechanisms of these observations were not investigated in the current study because of its observational and epidemiological nature, and the explanatory theories remain postulated. First, it is reported that men tend to develop cardiovascular risk earlier than women, which is commonly attributed to the protective effect of estrogen, 28 and most of the cardiovascular risk factors, comorbidities, and structural changes shown by women are clinically related to advanced age and usually coincide with diastolic dysfunction. 29,30 Second, women are usually in an unfavorable position in the primary and secondary prevention of cardiovascular risk factors and diseases, contributing to a higher disease burden and death. 31 Third, the Framingham Offspring Study showed that women experienced a steeper increase in left ventricular mass with advancing age and increasing BMI compared with men, accelerating HF development in women with age and BMI increment. 32 Fourth, the impact of pulse pressure amplification on cardiovascular death was more prominent in women than in men, especially in postmenopausal women, 33 indicating that increased aortic stiffness, a critical factor in blood pressure, had a more hazardous influence on female cardiovascular health.

Although it is commonly believed that women are at a lower risk of cardiovascular diseases than men, 2 the prognosis for women tends to be generally worse.
34,35 This phenomenon is mainly due to the older age, heavier comorbid burden, and lower probability of receiving standardized treatment observed in women. 36 Our study extended the previous work and showed that most of the traditional risk factors displayed a more pronounced impact on female cardiac function, which might be a critical reason why female patients with cardiovascular diseases (usually in an unhealthy status with risk factors out of control) were more susceptible to having HF and a higher mortality rate than men. Moreover, the PAF of the 11 risk factors for incident HF hospitalization was nearly two-thirds among women in our study, much higher than in men, indicating that controlling the risk factors would further decrease the risk of HF hospitalization in women. Taken together, sex-specific and intensive prevention strategies are immediately needed to improve cardiovascular health and address the public health challenges in China.

Limitations Several limitations of the current study should be noted. First, this study did not use a nationally representative sample; hence, caution is warranted in extrapolating the conclusions to different races and other populations. Second, comprehensive laboratory tests were performed only in those with high cardiovascular disease risk in the China PEACE Million Persons Project; hence, important laboratory indicators (eg, NT-proBNP, C-reactive protein, creatinine) could not be collected. Similarly, for this project, echocardiography was performed only on individuals with high cardiovascular disease risk, so we could not classify HF subtypes according to the left ventricular ejection fraction. Further studies on this topic between different HF subtypes are warranted. Third, information on demographics, socioeconomic status, and comorbidity was self-reported, which is apt to generate recall bias. Finally, women are susceptible to having HFpEF, 37 whereas patients with HFpEF are commonly undetected 38 ; hence, an
underdiagnosis of HFpEF in women might have been possible at baseline and during the study. Nonetheless, using HF-related hospitalization as the end point obtained from administrative registers benefits from high specificity in a population study. 39

CONCLUSIONS Our large, community-based, prospective cohort demonstrated that women had a lower rate of incident HF hospitalization than men in China. Nevertheless, the observed lower incident risk of HF hospitalization was significantly attenuated or even vanished when several traditional risk factors developed or were poorly controlled. The PAFs of the 11 risk factors were much higher in women than in men, and older age and suboptimal control of SBP and WC were the 3 major attributable factors for HF in both sexes. Women might gain more benefit from controlling risk factors in the prevention of HF, and intensive preventative strategies are immediately needed to improve cardiovascular health.

Figure 1. Associations of sex and incident heart failure hospitalization in different categorical risk factors. Associations of sex and incident heart failure hospitalization among individuals with different age (A), SBP (B), DBP (C), HR (D), BMI (E), waist circumference (F), TC (G), triglyceride (H), LDL-C (I), HDL-C (J), and FBG (K). Fine and Gray models were conducted to account for the competing risk of all-cause death in the analyses of HF-related hospitalization, with multiple adjustment for the selected 11 risk factors and demographics, socioeconomic status, and comorbidities. The interactions between sex and risk factors were tested using interaction terms and likelihood ratio tests. BMI indicates body mass index; bpm, beats per minute; DBP, diastolic blood pressure; FBG, fasting blood glucose; HDL-C, high-density lipoprotein cholesterol; HR, heart rate; LDL-C, low-density lipoprotein cholesterol; SBP, systolic blood pressure; TC, total cholesterol; TG, triglyceride; and WC, waist circumference.

Figure 2.
The associations of sex and incident heart failure hospitalization with different joint risk factors. *Joint risk factors included: (1) age ≥60 years; (2) SBP ≥130 mm Hg/DBP ≥80 mm Hg; (3) BMI ≥24 kg/m² or WC ≥90 cm (men)/85 cm (women); (4) triglyceride ≥1.7 mmol/L. Fine and Gray models were conducted to account for the competing risk of all-cause death in the analyses of HF-related hospitalization, with multiple adjustment for the selected risk factors and demographics, socioeconomic status, and comorbidities. The interactions between sex and risk factors were tested using interaction terms and likelihood ratio tests. BMI indicates body mass index; DBP, diastolic blood pressure; SBP, systolic blood pressure; and WC, waist circumference.

Figure 3. Study summary. BMI indicates body mass index; DBP, diastolic blood pressure; HF, heart failure; PEACE, Patient-Centered Evaluative Assessment of Cardiac Events; SBP, systolic blood pressure; and WC, waist circumference.

Table 1. Baseline Characteristics Comparisons Between Men and Women

Table 2. Associations Between the 11 Risk Factors as Continuous Variables and Incident Heart Failure Hospitalization Among Men and Women
Towards a cultural political economy of the illicit

This article intervenes in debates on the illicit in economic geography, notably in the tensions between cultural and political economic approaches. First, it assesses critiques of political economic evaluations of the illicit. It then offers a 'trading zone', drawing upon both cultural and political economy, and argues that the two economic epistemologies are complementary, not mutually exclusive. The article instates political and ecological missing links in cultural political economy to foster multidimensional analyses of illicit practices in discursive, material and ecological registers. It concludes by discussing the broader implications of a cultural political economy of the illicit for economic geography.

Introduction Illicit practices are often to be found behind closed doors as a constitutive element of economic processes. Bribes, tax evasion, criminal operations, corporate profit shifting, money laundering and other illegal activities are estimated to account for trillions of dollars annually worldwide (OECD, 2014). However, this topic remains relatively marginal within economic geography today (Hudson, 2018). Partly due to past critiques of the geography of crime (Peet, 1975), the vast majority of the work investigating the illicit has thrived outside the boundaries of economic geography (Hall, 2010; LeBeau and Leitner, 2011). This situation is somewhat surprising, not only because of the topic's empirical relevance but also because of the significant theoretical implications that this rubric has for critical economic thinking. Illicit economies cover a broad range of scales, from global crime (Hall, 2018; Madsen, 2009) to everyday practices of urban survival (Ghiabi, 2018; Hubbard, 2019; Inverardi-Ferri, 2018b). Similarly, the emergence of organizationally fragmented and geographically dispersed production networks has widened the space for illegality (Gregson and Crang, 2016).
This fact calls into question conventional narratives on the economic, with the illicit offering a different map of globalization (Palan et al., 2013; Zook, 2003), and opens up possibilities for a profitable re-engagement with key categories in economic geography, such as production, circulation, consumption and social reproduction. In recent years, a renewed interest in the theme has arisen from different perspectives within geography. While literature on diverse economies has shown how capitalist processes are articulated in a variety of economic activities, also pertaining to the multiple space-times of the informal and the illicit (Gibson-Graham, 2006; Smith and Stenning, 2006), scholarship on disarticulation has highlighted the way violence underpins mechanisms of value production and accumulation (Alford et al., 2019; Bair and Werner, 2011; McGrath, 2018; Werner, 2018). By now, illicit activities are recognized to be inherently connected to capitalist dynamics (Hudson, 2018), to play a central role in geographical processes (Hall, 2012) and to participate in the uneven development of cities and regions. Political economy has been the dominant paradigm of this scattered subfield, providing key contributions that raised broader questions on the relationship between the illicit and the dynamics of capitalism. Within this context, a relatively recent intervention by Gregson and Crang (2016) has aimed to reorient the field towards cultural economy perspectives on the basis that geography would benefit from investigating the illicit at the intersection of the moral economy, customary illegality and circulation. According to Gregson and Crang, economic geographers have tended to conflate the illicit with the illegal and have provided a dichotomous description of markets, examining the theme only in 'limited ways' and finding themselves in a theoretical and empirical 'impasse' (2016: 1).
The authors suggest, therefore, that research on the illicit should make 'a shift in conceptual perspective, away from the political economy-based toolkit of economic geography, and towards the possibilities of cultural economy' (2016: 12). Accompanying their critique, Gregson and Crang (2016) provide a reconceptualization of the illicit as a transient quality of circulation and introduce the notion of 'customary illegality'. While the authors' claim that the study of the illicit can benefit from cultural economy perspectives is helpful, this article suggests that different epistemologies are complementary rather than mutually exclusive and should be retained (Hudson, 2004). Political economy has much to offer in the analysis of material processes and power relations underpinning illicit activities, and it would be detrimental for this emerging subfield to move away from it. In this vein, the article offers a cultural political economy framework as a potential trading zone between different approaches to the illicit. This move does not seek to subsume distant positions within a new monism. Instead, it promotes dialogue and takes an engaged pluralist stance, suggesting that productive 'trading zones' can '[catalyze] new understandings and possibilities' (Barnes and Sheppard, 2010: 195). Here, it should be said from the outset that most scholars working on the illicit are attentive to semiotic dimensions and many cultural elements are inherently embedded within most of the scholarship already produced. As such, the article tries to make these implicit elements visible and attempts to move discussions towards a different appreciation of these components. As pointed out by Martin Jones (2008: 393), the point is not that cultural political economy introduces new elements, 'but the way the components made visible by CPE interact and relate to each other'.
Further, the article integrates a 'geographical sensibility' within the cultural political economy perspective, reflecting on several critiques that have been levelled at the framework, in particular relating to the 'political' and the 'ecological' missing links (Hudson, 2008; Jones, 2008). This is done to tune cultural political economy towards a profitable study of the illicit, concurrently opening up new areas of interest for the field. As such, the article develops a nuanced version of CPE that holds together registers relating to discursive, material and ecological dimensions, enabling multidimensional and multiscalar analyses. In so doing, it offers an original geographical perspective to investigate the illicit, how it is socially produced and how this process in turn contributes to replicating capitalist social relationships in space and time. In fact, the recent turn to the illicit within economic geography reflects both an empirical and theoretical concern that pertains to questions of difference within capitalism (Peck, 2016). Moving beyond contrasting binaries (e.g. legal/illegal, formal/informal), the economic geographies of the illicit contribute to theories of heterogeneity (Gidwani and Wainwright, 2014). The remainder of the article first outlines the intervention of Gregson and Crang (2016). Subsequently, it elaborates a broad-brush summary of key contributions in the economic geographies of the illicit to provide the background against which their critique has emerged, and evaluates their arguments vis-à-vis this scholarship. Here a caveat is necessary. Rather than reviewing the literature on the illicit in its entirety, the article limits its scope to illicit economies. While this appears a more feasible project, selecting the works relating to the 'economic' remains arbitrary. Different choices could have been made.
Then, the article moves on to elaborate on a cultural political economy approach and by way of illustration provides a brief operationalization of the framework. Finally, conclusions assess the broader implications of cultural political economy for developing a renewed research agenda on the illicit.

The Illicit and Political Economy

Economic sociology, economic anthropology, criminal studies and other branches of critical social sciences have made the illicit a central concern for studying the governance of the global economy and its uneven contours (Beckert and Wehinger, 2012; Bhattacharyya, 2005; Gootenberg, 2008; Nordstrom, 2007; Phoenix and Oerton, 2013). However, within geography the picture is less defined. While some original strands of research have always highlighted the variegated dimensions of the economic (Berndt, 2018; Gibson-Graham, 2006; Peck and Theodore, 2007; Smith, 2010; Smith and Stenning, 2006), Hudson (2019) notes that certain economic geographical scholarship has tended to overemphasize analyses of the formal legal economy in explanations of uneven development. Traditionally, this accent has considered the political economy of uneven development as the result of geographical and organizational shifts that the decline of Fordism has stimulated in the world market since the late 1970s (Rossi, 2017; Sheppard, 2019). As the argument goes, in the last few decades, driven by the capitalist dynamics of cost, flexibility and speed, companies have undergone a significant restructuring of their manufacturing processes. They have relocated segments of production to those emerging economies that could provide considerable labour supply and profit maximization (Peck, 2017), engendering, in turn, uneven regional development trajectories.
This account implies that most economic actors operate within the boundaries of the formal economy and according to its rules, often playing down the role of alternative practices in subsidizing global capitalism (Nagar et al., 2002). Similarly, this narrative often neglects how business decisions are not only informed by the competitive advantage that minimum regulatory frameworks can offer economic actors but also by the degree of tolerance of particular legal systems vis-à-vis illicit behaviours (Andrijasevic and Sacchetto, 2016). As an example, the possibility to derogate from labour laws and allow excessive working hours makes some areas of the world more attractive than others (Hudson, 2018). As such, recent years have seen a growing body of economic geographical scholarship on the illicit that has tried to fill this gap. There is a broader recognition that investigating this theme is useful for understanding how processes of uneven development unfold in the contemporary global economy. Many activities are hybrid outcomes of both licit and illicit behaviours, making it challenging to draw clear definitions of what the legal/illegal economy is. Within this context, a central methodological question concerns, therefore, what accounts for the illicit and how this should be studied. Far from being a merely theoretical problem, different conceptualizations have critical practical and political implications, involving alternative forms of inquiry and analysis. In particular, these relate to understandings of the illicit either as a fundamental trait of contemporary capitalism or instead as one of its contingent elements (Sayer, 2000). Political economy has traditionally abstracted from lifeworld elements (i.e. aspects of life that are the product of embodied actors and their cultures) to focus on systemic processes (i.e. formal organizational logics that go beyond actors' experiences) (Sayer, 2001).
In contrast, cultural economy has usually reversed the equation (Sayer, 2001). The illicit, however, encompassing both individual dimensions and collective practices, is a porous concept that eludes clear-cut definitions and cannot be easily categorized either as a system mechanism or as a lifeworld phenomenon. While it appears a tautology to relate illegality to the legislation of a given institutional space, scholars have suggested that the illicit encompasses more nuanced understandings of human activities, including those behaviours that may be legal but unethical as well as those that are illegal but socially accepted (Gregson and Crang, 2016). For instance, while some economic practices, such as gambling or financial speculation, could be de jure licit, they may nevertheless be considered unethical. At the same time, some activities that are proscribed by law are de facto permitted in particular places (Tellman et al., 2020). The latter is the case of the informal economy (Behzadi, 2019; Tucker and Devlin, 2019). Illegal activities, such as the trade of counterfeit goods, may be tolerated by governments as a necessary means of survival for poorer social groups. While some actors may elude taxation or employ workers under illegal labour conditions, these behaviours might be seen as morally licit (Olson, 2018) and even promoted as a way to control social unrest in some regions of the world (Briassoulis, 1999). In this regard, Gregson and Crang (2016) have advanced a sharp critique of most research on the subject, taking issue with existing accounts and definitions. They have suggested that economic geographers have tended to conflate the illicit with the illegal, associating it with particular spaces, objects and actors. According to the authors, this stance has reinforced Northern-centric preconceptions and confined the illicit to practices carried out by 'dodgy people doing dodgy things with dodgy goods in dodgy places' (Gregson and Crang, 2016: 208).
Instead, they suggest that the illicit is often situated at the intersection of moral economies, illegality and customs. This move enables the authors to shift their attention away from marginal places to focus on habits, customs and norms, deploying their preferred theoretical perspective, namely cultural economy (Gregson and Crang, 2016). As such, they propose to reorient geographical research away from a territorial understanding of the illicit and towards those activities that, despite being illegal, are nevertheless socially accepted and therefore licit. A central idea in the authors' critique is that illicitness appears in the global economy as a transient quality of circulation, rather than a property of specific goods. They suggest that it is in the practices of logistics, which move goods across borders and between regulatory systems, that opportunities for illegal behaviours are to be found (Gregson and Crang, 2016). As such, they observe that when we make visible fraudulent practices of trade and circulation, it also becomes evident that previous research has given too little attention to the cultural dimensions of the illicit. The authors conclude their intervention pointing out that 'there is a need for economic geography to shift its gaze, [...] away from the political economy-based toolkit of economic geography, and towards the possibilities of cultural economy' (2016: 12). The critique developed by Gregson and Crang (2016) shows that cultural economy is a useful tool, particularly in conjunction with the focus on circulation. Their intervention provides, therefore, a productive corrective to previous works that have conflated the illicit with the illegal. However, it would be detrimental for the field to move away from political economy altogether, as advocated in their concluding paragraph.
Political economy has much to contribute to the analysis of the illicit, offering manifold perspectives to investigate how politics and power play out in space and time (MacKinnon et al., 2009), a contribution that can hardly be reduced to a monolithic endeavour (Sheppard, 2011). Although earlier publications within this tradition were characterized by structural frameworks, since the 1990s, growing numbers of interventions have significantly diversified their theoretical apertures (Sheppard, 2011). A mounting appreciation for different epistemologies has been combined with a growing sensibility for grounded research and interdisciplinary influences (Peck, 2016). Within this context, feminist scholarship has offered one of the most straightforward critiques of geographical political economy, rethinking the role of diverse practices, identities and subjectivities in constructing the economic (Gibson-Graham, 2006). This scholarship has called into question capitalocentric theories of economic change, moving away from grand narratives towards a performative rethinking of the economy to reveal the diverse landscapes of human practices (Gibson-Graham, 2014). Alongside this emergent post-Marxian experience, a materialist perspective considering economic processes as co-constituted of both cultural and political economic dimensions has thrived and expanded (Werner et al., 2017). This research did not reduce its analysis to structural processes and contributed to conceptualizations of capitalism as a culturally embedded system (Hart, 2002). This perspective is pivotal in rethinking the illicit as a phenomenon operating through practices and in relation to structural dynamics. For instance, this appears in geographical scholarship on critical race theory. Building on the seminal work of Cedric Robinson (1983) on racialized capitalism, this literature has originally combined elements of black geographies, feminist thinking and political economy (Strauss, 2020).
The pioneering work of Ruth Gilmore (1999) has shown how racial, economic and cultural elements forge composite political economies of incarceration that have significant implications for conceptualizations of the illicit. Interestingly, Gilmore (1999) describes a politics of criminality haunted by moral panic over deviant behaviours rather than legal concerns. Denouncing and contesting the uneven relations and material circumstances that shape these political economic processes, critical race theory has, therefore, explored how difference is integral to the functioning of capitalism and the multiple dimensions of knowledge production that relate to it (Werner et al., 2017). The politics of everyday practices, the role of cultural ideologies and the reproduction of discursive dimensions have long been understood as the underpinning factors of economic formations (Nagar et al., 2002; Wright, 2006), posing the terms of how capitalism is 'reproduced and differentiated over time and space' (Werner et al., 2017: 4). Many cultural elements are, therefore, sedimented in political economy, which has always been cultural (Sayer, 2001), and the geographical scholarship on the illicit has often intrinsically mobilized concepts of Gregson and Crang's (2016) cultural economy. What follows are some of the older and more recent voices in this literature, acknowledging this pluralist dimension and making it visible.

Economic Geographies of the Illicit

In their intervention, Gregson and Crang (2016) point out that existing geographical scholarship has examined the illicit only in limited ways, reinforcing binaries relating to illegality and the mainstream economy. According to the authors, geographical works on the illicit have mainly produced studies of illegal activities as defined in the global north but carried out in the marginal spaces of the global south.
Furthermore, they suggest that economic geographers have neglected those behaviours that are illegal, but socially accepted, and have ignored the role that circulation plays in opening up opportunities for illicitness (Gregson and Crang, 2016). In sum, they hint that economic geographers have mainly focused on the 'familiar list' of illegal activities, pushing the field into a theoretical and empirical impasse (Gregson and Crang, 2016: 3). However, when looking at the studies that have been produced in recent years, a variegated scholarship seems to emerge. The following pages review this geographical research on the illicit to evaluate Gregson and Crang's (2016) criticism vis-à-vis this evolving field. The publications reviewed retrace how different ideas of the illicit in geography have been mobilized to think about particular activities, people and places. An important group of publications that discuss ideas on the illicit is research on criminal organizations. A prominent voice in this literature is that of Tim Hall, who has articulated the interdisciplinary scholarship on the subject through a political economy perspective (Hall, 2010, 2012, 2013, 2018). While criminal and illicit economies are not always the same, many overlaps exist between the two (Pinheiro-Machado, 2018). In this regard, Hall (2010) argues that the illicit represents a valuable category to make sense of the interdependence of formal economies and criminal activities, and the many practices that destabilize clear-cut divisions between the two. For example, he notes that in some countries criminal organizations are de facto licit actors and 'have recognized or accepted roles in the mediation of business and civil disputes or in the provision of goods and services' (2010: 843), making it difficult to study these economic actors through the legal/illegal binary.
According to Hall (2012), while a vast multidisciplinary scholarship beyond our discipline has produced valuable ethnographic accounts of criminal activities in particular regions, geographers are better positioned to examine their multiscalar dimensions. He has thus advocated for research that brings together the elements of different organizations, regulations and flows as a way to capture the complex spatial and organizational dynamics of criminality across scales; and for a rethink of conventional accounts of the global economy and its actors (Hall, 2012). Several monographs, edited books and journal articles have recently appeared in the literature following this line of thinking. Scholars have suggested that while depressed economic localities often generate the terrain for criminality to thrive, a global map of the illicit needs to take into account the multiscalar interconnections that these liminal spaces retain within the formal global economy (Hudson, 2014). Consequently, while critical criminologists have long examined processes of marginalization and criminalization in specific sites (Chadwick and Scraton, 2001), geographers have started to investigate the relationship between endogenous and exogenous factors that create fertile ground for fraudulent behaviours across scales (Chiodelli, 2018; Doshi and Ranganathan, 2018; Penna and O'Brien, 2018). Broader questions thereby arise regarding the illicit and the multiscalar dynamics of the economy. This research does not confine criminality to a separate realm (e.g. the illegal) or space (e.g. the global south) but conceives it as strictly interconnected with broader societal processes (Hall, 2010).
For instance, contributors to the volume The illicit and illegal in regional and urban governance and development have provided examples of spatial understandings of illicitness that open up theoretical and empirical questions related to the governance of cities and regions, calling into question conventional legal/illegal binaries. These include a broad spectrum of processes and practices, such as the rise and fall of marijuana regulations in the US, the separation between illegal/illicit/informal housing in Italy and the illicitness of non-state groups in the 2011 post-earthquake and tsunami reconstruction in Japan. Similarly, Hall and Scalia's edited book A Research Agenda for Global Crime (2019) brings together controversies on crime that destabilize traditional narratives on globalization. A transnational perspective enables the authors to examine illegality and corruption comparatively and deploy a critical revaluation of unlawful practices in a global age. At a broader level, these works suggest that the liberalization of exchanges, the deregulation of finance and the advancement of technology have brought about the conditions for new illicit networks to emerge (Hall and Scalia, 2019). While most of these studies focus on 'criminality' and are informed by political economy and its vocabulary, this phenomenon is not hermetically sealed; instead, these works understand criminal activities as inherently linked to the formal economy. Therefore, they engage ideas on the illicit that relate to the cultural un/acceptability of particular economic practices, and their opposites (Hudson, 2019), which are exposed in Gregson and Crang's (2016) critique. Within urban geography, much of the conversation on criminality has also revolved around themes of policing and surveillance (Mitchell and Heynen, 2009).
The geographies of homelessness and survival have described how changing legal and institutional landscapes, linked to revanchist political economies (Smith, 1996), have profoundly transformed contemporary cities, creating new patterns of criminalization and exclusion (Davis, 2006). Through the analysis of the increasing commodification of the urban space and the creation of punitive landscapes, this literature has brought to light the processes and patterns that deprive vulnerable social groups of the right to the city (DeVerteuil et al., 2009; Mitchell, 2003), annihilating the space they inhabit (Mitchell, 1997), both in the open areas of the city and in the closed places of care (Hennigan and Speer, 2019). Reverberating with this theme, carceral geography has also reflected on the processes and practices that render particular social and racial groups illicit (Shantz, 2017), providing insights into the extension of neoliberal policies and how this process has brought prison logics into the realm of everyday life (Moran et al., 2018). From gated communities to 'no go' zones, the carceral state has colonized many aspects and scales of the social world (Brown, 2014), defining new frontiers for the geography of the criminal (Jefferson, 2018). Ideas on the illicit that came under the scrutiny of Gregson and Crang's (2016) intervention are also mobilized in the geography of drugs (Taylor et al., 2013). Some of these studies understand drug trades as co-constituted by local and global dynamics that involve the circulation of people and things, within and without the formal economy. The formal economy is examined as inherently linked to drugs, which has significant effects for the governance of territories (Polson, 2015) and their transformations (Campbell and Heyman, 2015).
This literature has analysed the different ways the contraband of narcotic substances is underpinned by capitalist relations of accumulation, production and social reproduction (Agnew, 2015; Banister et al., 2015; Boland et al., 2018; Boyce et al., 2015; Campbell and Heyman, 2015; Massaro, 2015; Slack and Campbell, 2016), highlighting the impacts that illicit trades have in urban governance (Boland et al., 2018) and the reframing of social policies (Proudfoot, 2017). At the same time, these works have also shown how the legal/illegal boundary is fictitious. For instance, Taylor (2015) anticipates key elements of Gregson and Crang's (2016) critique, showing that the illicit is not a characteristic of particular goods or places. Discussing the synthetic drug known as butylone (bk-MBDB), Taylor (2015) shows that this substance is legal in some countries (e.g. Denmark, Poland and India), but illegal in others (e.g. UK, Sweden and Israel). While regulatory differences go beyond the north/south divide, they may equally include moments of production, circulation and consumption. The materiality of illicitness is also peculiar in Taylor's (2015) account, since 'one tiny tweak, or the addition of an extra atom to a banned drug's molecular structure, creates an entirely new, putatively legal drug' (Taylor, 2015: 7) and therefore renders an illegal object, legal. As such, drugs may easily move in and out of the illegal category, showing that illicitness is not an ontological attribute of particular objects, but rather the curious result of the entanglement of both material practices and semiotic dimensions. This continuous interaction between the legal and the illegal is also inherent in some of the most intimate aspects of human life. Jeffrey (2020) shows that the human body has become a frontier of the illicit, making life itself a site of speculation, when rights over bodily integrity are subjected to different legal authorities and moral regimes.
Legal geographers have investigated how utilitarian biomedical narratives have contributed to the global trade of organs (Parry et al., 2015), which has developed into a complex ecosystem of actors and practices situated at the border of the illicit (Lundin, 2012; Mendoza, 2011), moving the body into the realm of the economic. Questions on the body have also brought scholars to investigate those practices that relate to sex work (Hubbard et al., 2009), physician-assisted suicide (Shondell Miller and Gonzalez, 2013) and female reproduction (Moore, 2018; Sheldon, 2018). For instance, Calkin (2019) describes a complex political economic geography of abortion in which social control and regulation over female bodies have pushed many women to seek abortion provisions outside of state legal frameworks, through various illicit practices and mobilities, such as informal online telemedicine, transnational pill trades and abortion travels. Finally, the subdiscipline of financial geography has also mobilized ideas on the illicit in recent years, unsettling conventional narratives on the spaces and processes of the global economy (Clark, 1997; Haberly and Wójcik, 2015; Hall, 2013; Ledyaeva et al., 2015; van Hulten and Webber, 2009; Wójcik and Boote, 2011). While the spatial organization of the world market has considerably changed the structure of financial activities in the last decades, the emergence of global financial networks (GFNs) (Coe et al., 2014) is often seen as intertwined with dynamics of tax evasion, corruption and secrecy (Christensen, 2012). GFNs co-produce spaces characterized by activities that are deemed to blur the boundary of what is licit, providing platforms for transactions that would have otherwise been impossible. Within this context, offshore financial centres (OFCs), namely countries that provide a combination of low tax rates, minimal regulation and secrecy, among other characteristics, are a paradigmatic example of this phenomenon.
OFCs destabilize the association of illicit activities with particular marginal locations or actors. A great number are established in global cities such as Dublin, Zurich or Singapore. Producing an environment that mixes 'fiscal subsidies, tax exemptions, legalized opacity, weak information exchange treaties, and minimal regulation' (Christensen, 2012: 327), OFCs have become centres for financial flows on an enormous scale. While the shared imaginary of tax havens is usually associated with money laundering and criminal activity, recent studies have pointed out that this is only part of their story. Tax havens attract a variety of actors including major multinational corporations seeking to avoid tax around the world (Barrera and Bustamante, 2018) and parties aiming to hide licit capital from corrupt authorities in home countries (Ledyaeva et al., 2015). For instance, Ledyaeva et al. (2015) show that the phenomenon of round-trip investment, that is, money sent abroad to be reinvested in the home country, has emerged as a way for firms to bargain better conditions when returning as foreign investors. Here the boundary between legality and illegality becomes particularly blurred. As such, this literature has pointed out how spaces and times of globalization are co-produced by dynamics that pertain to both licit and illicit realms, raising questions on the functioning of the economy, the place of institutions and the role of the state (Smith, 2015; van Hulten, 2012). It is not the aim of this intervention to provide a systematic review of this evolving field. As such, this article cannot do justice to all the diverse strands of research, variegated issues and empirical cases that have appeared to date. However, this account should be sufficient for the sake of the argument. When looking at how geographical research has articulated ideas on the illicit, this scholarship appears as a heterogeneous field.
Therefore, while sympathetic to Gregson and Crang's (2016) intervention for a more serious engagement with cultural economy, this article recognizes the proposed move as complementary rather than antagonistic to other perspectives in the field (Hudson, 2004). In this vein, it advocates for an engaged pluralism (Barnes and Sheppard, 2010) through a cultural political economy approach. Cultural political economy is here mobilized to develop a middle-range research agenda that takes into account multidimensional and multiscalar processes in the production of the economic geographies of the illicit and their articulations. To further highlight this argument, the following section turns to cultural political economy, as this approach integrates elements of political and cultural economy in what Sum and Jessop define as an attempt 'to navigate between a structuralist Scylla and a constructivist Charybdis' (Sum and Jessop, 2013: 22).

Towards a Cultural Political Economy of the Illicit

This final section develops a cultural political economy perspective as a potential trading zone of different epistemologies in the subfield of the illicit. To do so, it first situates the call of Gregson and Crang (2016) vis-à-vis the history of the 'cultural turn' in economic geography. Gregson and Crang's (2016) intervention speaks to broader critiques of political economy that have grown within the social sciences and the humanities since the 1980s. When the cultural turn emerged as a dominant paradigm, scholars progressively abandoned grand interpretations of human history to embrace perspectives that privileged heterogeneity and difference (Harvey, 1989). In economic geography, this shift brought researchers to examine 'what was previously secondary, merely superstructural' (Ray and Sayer, 1999: 1), namely cultural meanings and social practices (Lash and Urry, 1994). At first, this change appeared as a reflection of a broader 'culturalization' of economic life (Du Gay and Pryke, 2002).
The number of companies producing cultural artefacts had increased in the post-Fordist era. It was also suggested that meanings, practices and identities played a growing role in the way people think and act at work, contributing to the success and failure of organizations (Du Gay and Pryke, 2002). Culture, broadly conceived, was understood as a way of enhancing performance in an increasingly competitive knowledge-based economy (Du Gay and Pryke, 2002). This 'cultural turn' not only reoriented empirical agendas in economic geography but also represented a more profound shift in theoretical perspectives (Castree, 2004). Gone were the days when culture and the economy appeared as separate realms. A novel paradigm, 'cultural economy', bore the promise of providing a new analytical grammar to study the economy as a cultural formation (Amin and Thrift, 2004), saving geography from a 'musty oblivion' (Thrift, 2000: 692). Scholars moved away from structural accounts that rose to prominence in previous decades to embrace fresh theoretical perspectives (Ray and Sayer, 1999). As such, the cultural turn not only appeared as a critique of orthodox, that is, neoclassical economic thinking, but also of political economy, and Marxian scholarship in particular. Inspired by poststructuralist thought, and more broadly by theoretical insights from cultural studies, cultural economy substantially rethought categories of production, circulation and consumption. Geographers highlighted the way elements such as norms, meanings and values shape the economic and how discourses contribute to constituting the object of economic analysis itself (James, 2006). This shift did not happen without friction. A cleavage between political economic and cultural economic approaches appeared within the discipline (Yeung, 2001), producing conflicting and almost irreconcilable positions (Amin and Thrift, 2000; Smith, 2005). Yet not all voices emphasized division over unity.
Hudson (2004), for instance, recognized considerable merit in cultural economy epistemologies, particularly relating to bottom-up methods of enquiry that he saw as complementary, rather than alternative, to top-down political economic analyses. Similarly, Barnes and Sheppard (2010) advocated for an anti-monist and anti-reductionist economic geography that promotes an engaged pluralism between different epistemologies. It is along these lines that this article suggests a potential synthesis between political economic and cultural economic perspectives on the illicit, rather than a revival of debates about the merits of competing theoretical approaches. To this end, the following pages develop a cultural political economy of the illicit that aims to combine a rigorous analysis of capitalist relations with an appreciation of semiotic dimensions co-producing economic processes (Sum and Jessop, 2013). As such, cultural political economy (hereafter CPE) is particularly well situated for an analysis of the multiple aspects that the economic geographies of the illicit have so far studied only in dispersed ways, as presented in the previous section of the article. CPE is an overarching term encompassing several post-disciplinary approaches that first emerged in the 1990s when the cultural turn appeared as a dominant force in the social sciences. Developing as a reaction to the shift of research agendas (Ray and Sayer, 1999), CPE aimed at renewing interest in critical political economy (Jessop, 2010). While there is no consensus on the nature of CPE, several paradigms have appeared under this label. Sum and Jessop (2013) identify five projects and at least eight different research agendas that propose a cultural shift in the study of capitalist processes more generally. 
Insights from CPE have gained considerable traction in economic geography, where scholars have critically engaged with this framework in various ways to think about the underlying mechanisms of globalized systems of production, the state, the urban, natural resources and waste, among other themes (Arnold and Hess, 2017; Hudson, 2008; Jones, 2008; Pickren, 2015; Ribera-Fumaz, 2009; Su et al., 2018). The article primarily engages with CPE as articulated by Bob Jessop and Ngai-Ling Sum over a period of two decades (Jessop and Sum, 2006; Sum and Jessop, 2013), integrating criticisms that have been advanced within geography, notably on the missing political (Jones, 2008) and ecological (Hudson, 2008) links. In a larger context, 'culture' in cultural political economy does not refer to an ontological category but to the processes through which social agents, individuals or groups, make sense of the world and act in everyday life through systems of meanings (Sum and Jessop, 2013). For decades, geography was characterized by implicit and explicit assumptions about culture's superorganic nature (Sauer, 1925; Zelinsky, 1973), implying culture to be an entity above human action. Critiques developed over the years (Duncan, 1980; Mikesell, 1978). However, essentialist perspectives remained influential until the seminal intervention of Don Mitchell (1995), when the 'idea of culture' and its underpinning historical processes were placed as the primary objects of study. This conception, operationalized in CPE, is a legacy of the Gramscian tradition and is indebted in particular to Williams' cultural Marxism (Williams, 1983) and the influence of spatial linguistics on Gramsci's thought (Ekers and Loftus, 2012). Following this tradition, CPE involves analysing the narratives, imaginaries and regulatory technologies that underpin the reproduction of accumulation regimes.
Simultaneously, this approach details a vocabulary to analyse capitalist social relationships and their contradictions from a materialist (and ecological) perspective, taking the economy's materiality more seriously. In sum, this opens up possibilities to study who is in the position to define what is socially accepted as licit and what is not in a particular time-space, and what differential economic consequences these processes generate. CPE integrates material and discursive analysis through two forms of complexity reduction, semiosis and structuration, that are interconnected and co-constitutive of the dynamics of social relations (Jessop, 2010). Semiosis extends beyond linguistic modes of signification and communication and refers to the different processes that enable 'the production of linguistic meaning' and the 'apprehension of the natural and social world' (emphasis in original) more generally (Sum and Jessop, 2013: 3). The social world is conceived as 'always-already meaningful' and 'its analysis must acknowledge the importance of sense- and meaning-making' (Sum and Jessop, 2013: 3). Since agents have no direct access to reality, they need to strategically select some aspects of it to describe, interpret and participate in social life (Jessop, 2004). From this perspective, semiosis relates to those practices that define and reproduce shared understandings, behaviours and customs of what the licit/illicit economy is. Semiosis is related to structuration, namely the emergent patterns of human interactions with the natural world. Structuration represents, therefore, a mode of complexity reduction, which aims to 'transform relatively meaningless and unstructured complexity into relatively meaningful and structured complexity' (Sum and Jessop, 2013: 148). Everyday practices based on shared meanings translate into informal (e.g. habits, routines and customs) and formal (e.g.
legal systems and regulatory technologies) institutions that reproduce capitalist social relations in heterogeneous but always structured varieties (Hudson, 2004). Within these emerging social formations, CPE conceives that agents are unevenly empowered and have a differential capacity to understand and shape the world. They select strategies and tactics according to their knowledge of a given conjuncture and contribute in differential ways to structure building. Therefore, structures are not conceived in a deterministic fashion, nor can they be ascribed to the power of a single agent. Instead, they are the result of asymmetrical interactions and blind co-evolutions (Sum and Jessop, 2013). Within this context, scholars have provided several productive criticisms to advance a geographical sensibility within a postdisciplinary CPE. In particular, Martin Jones (2008) suggested that earlier versions of CPE missed a link with geographical understandings of space, failing to promote dialogue between poststructural and Marxist epistemologies on the matter. At the same time, Jones also underlined several methodological limits of CPE and proposed to move the framework in the direction of a 'practical turn', a process of 'trial-and-error [...] and dialogue with other theoretical currents and emerging empirical research' (Jones, 2008: 395). For him, one way to do so is to build upon the work of Dixon and Hapke (2003) on semantic geographies. One tool for this approach is critical reflection on what the authors call the 'play of binaries' (Dixon and Hapke, 2003: 143), the social construction of semantic artefacts through the use of opposites (e.g. rural/urban, safety/risk, family/corporate). Interestingly, this perspective has important implications for analysing ideas on the illicit in debates over public policies, as exemplified in the empirical example of the following pages.
Related to the methodological link are also broader reflections on the validity of data, risk assessments and ethical issues in conducting research on the illicit (Clark, 1998). There is usually little official and reliable data on these phenomena. Crime, corruption and a range of other practices at the boundary between formality and informality, and between legality and illegality, are constitutive elements that are difficult, though not impossible, to evaluate from a quantitative perspective. This does not mean that illicit economies are limited to these practices; they are certainly much more than this. However, these elements need to be considered, without exoticizing them, in the analysis of the deep texture of local circumstances that characterize illicit economies. An account that ignores their diversity in time and space will inevitably fall short. Qualitative and ethnographic research is an appropriate tool to capture this variety of facets (McDowell, 1992), while other methods can always complement these approaches (Yeung, 2003). Likewise, investigating the illicit sometimes involves working with stigmatized communities, where the design of non-exploitative research projects and the employment of risk assessments are critical. Scholars are confronted with two sets of issues: the first concerns the confidentiality and privacy of the participants; the second, the safety of the researcher. Particular contexts can involve considerable risks. Together these challenges may dishearten the researcher or lead to situations where traditional fieldwork has little empirical value (Shaver, 2005), either because informants deliver untrustworthy accounts when asked or because they refuse to participate in the study. However, 'it is better to tell the story with admittedly imperfect and incomplete data than to simply throw up one's hands' and not tell it at all (Andreas, 2013: xii).
Therefore, investigating the illicit pushes scholars to encounter major barriers and experience a wide array of situations that need to be confronted with flexible and ad hoc approaches beyond disciplinary guidelines and textbook prescriptions. In this regard, CPE offers a particularly flexible framework to integrate different methodologies and confront on-the-ground issues in creative ways. A second missed link in CPE is the material or ecological one (Hudson, 2008). Hudson noted that CPE has failed to seriously engage with the materiality of the economy 'beyond the recognition that the production of use values necessarily involves people working on and with elements of the natural world' (2008: 422). He suggested that engagement with the political economic and semiotic registers should be accompanied by consideration of the material one, which is defined here as the 'ecological' register. This is done to distinguish between 'materiality' related to the historical materialist tradition, in this paper referred to as the material register, and materiality related to ecology. While the former relates to political economy, the latter engages with elements of both Marxian scholarship and new materialisms (Barua, 2019; Castree, 2002). The ecological register refers, therefore, to the production and accumulation of capital (Moore, 2015) in light of its metabolic relationship with nature (Smith, 2010). On the one hand, this register refers to the Epicurean and Feuerbachian roots of Marxism (Foster, 2000). On the other hand, this move brings into CPE performative elements that enable relational analyses of the economic (Castree, 2002). In so doing, it underlines the ecological transformations (physical, biological and chemical) that underpin the flows of energy and materials in the capitalist economy, adding critical geographical perspectives to CPE. The ecological register is particularly relevant for an investigation of the illicit.
Besides the growing importance in public discourse of the environmental dimensions of criminal activities, there are implications relating to the governance of the material world more widely. Economic processes in the capitalist economy are reflections of different understandings and materializations of the metabolic relationship between society and nature (Swyngedouw and Heynen, 2003). Whether and how it is considered licit for social actors to engage with this relationship is often a contested process. In other words, the extraction, manipulation and consumption of material resources, be they raw materials, commodities or waste products, always relate to a 'moral economy of the right, the good, the proper, their opposites and all values in between' (Scanlan, 2005: 22). As the empirical paragraphs will show, ecological narratives are sometimes mobilized to construct oppositional binaries that render particular actors, activities or places illicit, changing, in turn, metabolic processes. A more nuanced CPE, refined through this geographical sensibility, therefore offers a powerful framework to investigate the multidimensional and multiscalar processes at play in illicit economies. It provides a tool to examine how and why people engage in different economic activities, which underlying material, discursive and ecological dimensions define these practices as illicit, and how this process in turn contributes to reproducing capitalist social relationships in variegated ways. As such, CPE offers a flexible framework that is particularly powerful when applied to the study of the illicit through meso-level concepts that hold together questions relating to its discursive production (e.g. discourses, narratives, imaginaries), material constitution (e.g. accumulation and labour regimes) and ecological composition (e.g. flows of energy and matter), deploying a broader conceptual taxonomy.
Contra stigmatizations of epistemological convergences (Barnett, 2005), this article suggests that cross-fertilizations from different theoretical traditions are welcome and useful (Castree, 2002; Ekers and Loftus, 2008). To show how CPE could be deployed in an empirical analysis, the following paragraphs provide an illustrative example that is drawn from the literature on the geographies of waste (Gregson and Crang, 2015; Hobson, 2015; MacFarlane, 2019; Millington and Lawhon, 2019; Moore, 2012; Oteng-Ababio, 2010). Representing what is abject and rejected, waste is a strange object related to different discursive, material and ecological registers in a spectacular way (Pickren, 2015). Alternative meanings that society attaches to the material world shape contested practices of valuation (Crang et al., 2013; Lepawsky and Mather, 2011) and devaluation (Gidwani and Reddy, 2011; Herod et al., 2014) that in turn define what waste is, how it should be managed and who is entitled to handle it. In recent years, a vast scholarship on this subject, embracing different theoretical perspectives, has grown within geography, generating lively debates (Herod et al., 2013; Lepawsky and Mather, 2013). Within these works, several contributions have focused on the way ecological discourses have been mobilized to promote transitions towards formal waste management systems in developing countries (Demaria and Schindler, 2015; Inverardi-Ferri, 2018a; Lawhon, 2013; Samson, 2015; Schulz and Lora-Wainwright, 2019; Tong et al., 2015). This process of 'formalization' is particularly illustrative of how discursive dimensions are related to political economic outcomes, providing a testing ground for a cultural political economy of the illicit.
Today, the secondary processing of discarded products has developed into an economic sector that generates employment opportunities, produces cheap products for lower classes (Inverardi-Ferri, 2018b) and offers material inputs to other industries on a global scale (Gutberlet, 2012). It is usually estimated that in developing countries between 15% and 20% of waste collection is performed by some form of informal labour (World Bank, 2019). In 2013, more than 20 million people were making a living through the secondary processing of discarded products in China alone (Wang et al., 2013). As several reports have shown, waste trading is often conducted by legal actors that 'walk on a thin line between legal and illegal activities' (Geeraerts et al., 2015: 41). In public discourse, these practices are usually portrayed as criminal operations, through a sophisticated linguistic deployment of the formal/informal binary. Networks of actors that include transnational corporations, governmental agencies and international NGOs, such as the United Nations and the World Economic Forum, have recurrently produced reports denouncing informal practices of waste recycling (Baldé et al., 2017), advocated for bans on waste exports (Puckett et al., 2019) and promoted transitions to large-scale formal industrial processing (UNEP et al., 2019). Particular conceptualizations of the licit economy in conjunction with dominant ecological discourses have informed public policies in many places of the global south (Inverardi-Ferri, 2018a), often resulting in processes of dispossession for poorer classes (Inverardi-Ferri, 2020; Ortega, 2016), through the discursive portrayal of these groups, and their activities, as illicit (Gidwani and Maringanti, 2016).
The particular materiality of discards, as repositories of valuable raw materials, that is, crystallized labour, makes them the site of conflicting political economic interests. This engenders value struggles and usually co-determines the emergence of new accumulation regimes through the subsumption of previously accepted activities within new spatio-temporal fixes (Inverardi-Ferri, 2018a). For instance, in China, bans on waste imports, the criminalization of traditional modes of recycling and the implementation of management schemes for different waste streams have profoundly reshaped the geography of this industrial sector in recent years, sometimes with significant geopolitical consequences (Gregson and Crang, 2019). Particularly illustrative is the case of e-waste recycling, which has been documented in several publications (Inverardi-Ferri, 2017; Lepawsky et al., 2015; Lora-Wainwright, 2017; Schulz, 2015). Today e-waste is one of the fastest growing waste streams and represents a considerable source of wealth. It is estimated that as much as 7% of the total amount of gold worldwide is contained in discarded electronic products (LeBlanc, 2020). Yet the extraction of precious metals from these discards, if not handled properly, often involves processes with significant social and environmental consequences. While Chinese authorities tolerated, and in some cases even promoted, recycling initiatives that spontaneously developed during the first decades of the marketization transition, in recent years they have severely repressed these activities (Inverardi-Ferri, 2017). Official narratives have increasingly portrayed small-scale modes of recycling, that is, informal recycling, as illicit practices and described them as a source of uncontrolled social and environmental damage (Lepawsky et al., 2015). This semiotic shift reflects a profound cultural rearticulation of China as a modern nation.
The redefinition of circuits of waste (and value) represents one of its manifestations. The result has been the implementation of new legislation and the establishment of an 'extended producer responsibility' scheme that demands complex certification procedures (Tong et al., 2015). This has penalized traditional recyclers that did not have the resources to meet the new requirements, rendering their operations illegal overnight and forcing them to close down or displace their activities to areas where these practices were still considered licit (Schulz and Lora-Wainwright, 2019). Concurrently, it has resulted in the development of large-scale recycling groups, to the benefit of global capital, and in the reshaping of regional development trajectories in astonishing ways (Goldstein, 2017). The point of this very brief digression into the geographies of waste has been to show that the analysis of social processes cannot abstract from the curious mix of semiotic and material dimensions that co-produces them. As such, combining elements of both cultural economic and political economic perspectives, rather than posing them as alternative paradigms, CPE offers a powerful device to generate theoretical and empirical research on the illicit. At the same time, integrating an ecological perspective within a cultural political economy framework traces new avenues of research for the illicit.

Conclusions

The illicit raises theoretical, empirical and methodological questions that strongly resonate with the broader interests of economic geographers, who are arguably well equipped to contribute to this prominent conversation in the social sciences. Conceiving the economy as institutionally mediated and socially embedded, their multiscalar and relational conceptualizations appear to foreground a fruitful meeting with the illicit, an intriguing vantage point to investigate the material and semiotic constitution of the economic.
Recent attempts to engage with this subject have developed original geographical perspectives. These works have nourished a nascent body of literature (Gregson and Crang, 2016; Hall, 2012; Hudson, 2018), creating fertile grounds for intellectual exchange and interdisciplinary dialogue. While this endeavour is relatively novel, it has already been animated by vivid debates, in which two major competing perspectives have appeared. The first, inspired by political economy, has conceived of the illicit as an element inherently connected to capitalist dynamics and the process of value creation and extraction (Hudson, 2018). The second, rooted in cultural economy, has restated the centrality of semantic dimensions for an understanding of illicit practices and advocated a shift away from political economy to overcome several theoretical and empirical limits of previous literature (Gregson and Crang, 2016). The article has evaluated this debate vis-à-vis existing studies and, while sympathetic to an investigation of cultural dimensions, has suggested that different paradigms can be conceived as complementary rather than mutually exclusive (Hudson, 2004). Illicit economies, qua economies, are reproduced through varying material and discursive mechanisms that are the result of struggles and different understandings of the world. They provide, therefore, a terrain to 'capture and hold in suspension the simultaneously "economic" and "cultural" dimensions' of social life (Bridge and Smith, 2003: 262). As such, oppositions between categories such as culture and the economy are detrimental and should be avoided (Castree, 2004). Moving beyond antinomies, a CPE perspective on the illicit offers a trading zone for dialogue, developing a more rounded approach and reflecting on some of the limits that scholars are confronted with while conducting research on the illicit.
Drawing on work that has tried to integrate a 'geographical sensibility' within cultural political economy, the article has therefore engaged different methodologies to challenge the licit/illicit binary. Concurrently, operationalizing a political ecology grammar, it has developed a register to underline the wider material transformations that co-constitute the capitalist economy, opening new areas of interest for the field. In so doing, it has offered a tentative vocabulary to analyse why and how illicit economies emerge, how illicit practices are understood by different actors, what governance mechanisms and institutions underpin them, and how these processes contribute to reproducing heterogeneous patterns of uneven development in time and space. Advocating for an engaged pluralism (Barnes and Sheppard, 2010) that can develop the potential of different perspectives, the article has made connections between different theoretical approaches without subsuming them within a new monism. This is particularly significant in light of Gregson and Crang's (2016) intervention. In fact, their considerations seem reminiscent of debates on the cultural turn that animated the discipline almost two decades ago (Amin and Thrift, 2000; Martin and Sunley, 2001) and resonate with concerns about the identity and role of economic geography today (James et al., 2018; Yeung, 2019). Working towards a pluralist economic geography of the illicit contributes to the diversity of our discipline more broadly. This is an effort that entails dialogue (Valentine, 2008) and a profound rethinking of the practices, politics and institutions of our discipline (Rosenman et al., 2020). Whether the conditions for this change will be realized is a matter for future research.

Acknowledgements

I am grateful to Adrian Smith, who provided valuable feedback on earlier versions of the article. I have greatly benefited from rich discussions with him.
I also thank the editor Alex Hughes and four anonymous reviewers for their productive criticisms. I presented some preliminary ideas for this article in early 2019 at the workshop on Labour and Global Production at Queen Mary University of London and received helpful comments from the participants. All errors, omissions and misinterpretations remain my own responsibility.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The article has been written under the auspices of the British Academy Postdoctoral Fellowship (Grant No. PF19\100031).
Histochemical Characterization, Distribution and Morphometric Analysis of NADPH Diaphorase Neurons in the Spinal Cord of the Agouti

We evaluated the neuropil distribution of the enzymes NADPH diaphorase (NADPH-d) and cytochrome oxidase (CO) in the spinal cord of the agouti, a medium-sized diurnal rodent, together with the distribution pattern and morphometrical characteristics of NADPH-d reactive neurons across different spinal segments. Neuropil labeling pattern was remarkably similar for both enzymes in coronal sections: reactivity was higher in regions involved with pain processing. We found two distinct types of NADPH-d reactive neurons in the agouti's spinal cord: type I neurons had large, heavily stained cell bodies while type II neurons displayed relatively small and poorly stained somata. We concentrated our analysis on type I neurons. These were found mainly in the dorsal horn and around the central canal of every spinal segment, with a few scattered neurons located in the ventral horn of both cervical and lumbar regions. Overall, type I neurons were more numerous in the cervical region. Type I neurons were also found in the white matter, particularly in the ventral funiculum. Morphometrical analysis revealed that type I neurons located in the cervical region have dendritic trees that are more complex than those located in both lumbar and thoracic regions. In addition, NADPH-d cells located in the ventral horn had a larger cell body, especially in lumbar segments. The resulting pattern of cell body and neuropil distribution is in accordance with proposed schemes of segregation of function in the mammalian spinal cord.
INTRODUCTION

Since the early 1960s it has been shown that the enzyme nicotinamide adenine dinucleotide phosphate diaphorase (NADPH-d) reveals a sub-population of inhibitory neurons in the mammalian central nervous system (CNS) (Thomas and Pearse, 1964). Later, it was demonstrated that NADPH-d corresponds to the nitric oxide synthase enzyme (NOS), which catalyzes production of nitric oxide (NO) (Dawson et al., 1991; Hope et al., 1991), a highly diffusible signaling molecule. NO is involved with several physiological and pathological processes in the nervous system, including synaptic plasticity and local regulation of blood flow (Contestabile, 2000; Estrada and DeFelipe, 1998; Holscher, 1997; Iadecola, 1993; Wallace et al., 1996; see Calabrese et al., 2007 for a recent review). NADPH-d histochemistry has since been used as an indirect method for revealing the distribution of NO in aldehyde-fixed brains (Scherer-Singler et al., 1983).

Two distinct types of non-pyramidal, NADPH-d-containing cells have been consistently identified in the mammalian CNS: a more reactive group, with a Golgi-like appearance (type I cells), and a weakly stained sub-population (type II cells) (Luth et al., 1994). Both are co-localized with GABA in cortical neurons, but type II neurons also express calbindin (Yan et al., 1996). Type I neurons have large, darkly stained cell bodies and dendritic trees (Luth et al., 1994) and are present in the brain of all mammals examined, from echidna to primates (Hassiotis et al., 2005; Yan and Garey, 1997). Type II neurons, conversely, are weakly stained, with small cell bodies and few or no labeled processes (Luth et al., 1994), and are reported to be especially numerous in the primate brain (Sandell, 1986; Yan and Garey, 1997). In the present work, we will report data from type I neurons, henceforth called NADPH-d neurons.
A considerable amount of evidence has implicated NO as a nociceptive mediator in both peripheral tissues and the spinal cord. It has been proposed, for instance, that hyperalgesia is mediated by activity-dependent long-term potentiation (LTP) induced in spino-periaqueductal gray projection neurons located in spinal dorsal horn lamina I (Ikeda et al., 2006) and that NO is essential for its induction (Ikeda et al., 2006). Recently, it was shown in rats that LTP in lamina I neurons could be induced by NO secreted by interneurons located in laminae II and III (Ruscheweyh et al., 2006).

Cytochrome oxidase (CO) is a key mitochondrial enzyme involved with oxidative metabolism (see Wong-Riley, 1989 for review). The strict coupling observed between neuronal activity and oxidative metabolism (Kasischke et al., 2004) justifies the use of CO as an endogenous metabolic marker in the brain. Analysis of CO distribution in the CNS, for instance, has helped to unveil key principles of compartmental organization in the brain (Freire et al., 2004, 2005; Wong-Riley, 1979; Wong-Riley and Welt, 1980) and the spinal cord of several mammalian species (Wong-Riley and Kageyama, 1986).
Our animal model, the agouti (Dasyprocta spp., Dasyproctidae: Rodentia), is a medium-sized terrestrial and burrowing rodent (about ten times as heavy as the rat) with a widespread distribution over Central and South American Neotropical forests (Silvius and Fragoso, 2003). Agoutis are diurnal animals that rely mostly on a frugivorous diet, usually manipulating seeds with high dexterity with their forelimbs (Henry, 1999). Agoutis play a critical role in dispersing tree and plant seeds due to their caching behaviour (Silvius and Fragoso, 2003). This behaviour also underlies the necessity of a well-developed spatial memory for successful seed retrieval. The agouti is always on the lookout for predators, and possesses a well-defined retinal visual streak that, together with its lateralized eyes (Silveira et al., 1989), provides for good panoramic vision. The agouti has been successfully used as a model for comparative studies of cortical organization (Elston et al., 2006; Picanço-Diniz et al., 1989; Rocha et al., 2007).
In this work we evaluate the neuropil reactivity, distribution pattern and morphometrical characteristics of NADPH-d neurons in different segments along the spinal cord of the agouti. Our results show that NADPH-d neurons are more numerous in the dorsal horn, more specifically in laminae I and II, and around the central canal (lamina X). Morphometric analysis shows that while NADPH-d neurons have cell bodies that are larger in the ventral horn, their dendritic fields are larger and more complex in the dorsal horn, especially in cervical segments. The spatial distribution of CO and NADPH-d neuropil enzymatic activity is somewhat similar, appearing more intense in the superficial laminae of the cervical and lumbar enlargements, which are associated with innervation of the fore- and hindlimb, respectively. The resulting pattern of cell body and neuropil distribution is in accordance with known schemes of segregation of function in the mammalian spinal cord and further illuminates the role played by NO in dorsal horn circuitry.

Animals

We used five adult male agoutis (Dasyprocta spp.) (2.7-3.2 kg) in the present study. Animals were donated by The Emilio Goeldi Museum, under license of the Brazilian Institute for Environmental Protection (IBAMA, license 207419-0030/2003). The experimental protocols are in accordance with NIH Guidelines for the Care and Use of Laboratory Animals. We made all efforts to use as few animals as possible and to minimize unnecessary animal discomfort, distress and pain.
Perfusion and histological procedures
Animals were deeply anesthetized with an intramuscular injection of a mixture of ketamine hydrochloride (90 mg/kg) and xylazine hydrochloride (10 mg/kg). After extinction of the corneal reflex, animals were perfused transcardially with 500 ml of heparinized 0.9% saline solution followed by 2,000 ml of 4% paraformaldehyde (Sigma Company, USA) in 0.1 M phosphate buffer (PB). The spinal cord was dissected and stored in 0.1 M Tris buffer, pH 8.0. Before sectioning, segments of equal length from the cervical enlargement (C4-T1), thoracic region (T7-T10) and lumbar enlargement (L4-S1) were separated. The spinal cord blocks were cut into 200 µm-thick coronal sections using a Vibratome (Pelco International, Series 1000).

Histochemical processing
We used adjacent sections to reveal the enzymatic activity of NADPH-d and CO. NADPH-d was revealed with the indirect method (modified from Scherer-Singler et al., 1983) as follows: free-floating sections were incubated in a medium containing 0.6% malic acid, 0.03% nitroblue tetrazolium, 1% dimethylsulfoxide, 0.03% manganese chloride, 0.5% β-NADP and 1.5-3% Triton X-100 in 0.1 M Tris buffer, pH 8.0, protected from light. Sections were monitored every 60 min to avoid overstaining. The incubation period lasted from 6 to 7 h and was interrupted by rinsing sections in 0.1 M PB (pH 7.4). The criteria to stop the reaction were the presence of strongly stained cell bodies and a less intense but heterogeneously labelled neuropil.
CO histochemistry was performed on adjacent, 200 µm-thick coronal sections from the same spinal cord segments (Wong-Riley, 1979). Briefly, sections were incubated in a solution containing 0.05% diaminobenzidine (DAB), 0.03% cytochrome c and 0.02% catalase. The reaction was monitored every 30 min in order to avoid overstaining. As for NADPH-d, sections were incubated for the same period of time and in the same solution for all animals. The duration of incubation ranged from 7 to 8 h, and was interrupted by rinsing the sections five times in 0.1 M PB (pH 7.4). All reagents were purchased from Sigma Company (USA).

Additional alternate sections were Nissl-stained for cytoarchitectonic analysis. Sections were mounted on gelatin-coated glass slides, left to air-dry overnight, dehydrated and coverslipped with Entellan (Merck).

Qualitative and quantitative analyses
Both NADPH-d and CO stained sections were qualitatively surveyed under light microscopy, and images of representative fields were obtained with a digital camera (Nikon Coolpix 950E) attached to an optical microscope (Nikon AFX-DX Optiphot-2). A few selected NADPH-d neurons from each region of interest were also photographed for the illustrations. Only the global contrast and/or brightness of pictures were adjusted, using Photoshop CS (Adobe Systems, Inc., USA).
We used the software Neurolucida (MicroBrightField Inc., USA) to characterize the spatial distribution of NADPH-d neurons in ten sections obtained from the cervical, thoracic and lumbar regions of each animal. Within each section, we counted the number of cells in the following regions: dorsal horn (laminae I-IV), central canal (lamina X) and ventral horn (laminae VII-IX). In order to obtain an unbiased estimate of cell numbers, we applied Abercrombie's correction factor (Abercrombie, 1946; see Williams and Rakic, 1988 for details), which compensates for the overcounting of sectioned profiles, using the equation P = A × M / (M + L), where P is the corrected value, A is the raw density measure, M is the section's thickness (in micrometers) and L is the average diameter of cell bodies along the axis perpendicular to the plane of section. Descriptive statistics were obtained for all groups and we evaluated differences among groups by one-way analysis of variance (ANOVA) followed by the Newman-Keuls post hoc test. Statistical significance was set at the 95% confidence level (p < 0.05). Table 1 depicts the average diameter of spinal cord cells. In general, the largest neuronal cell bodies were located in the ventral horn (p < 0.05).

We chose thirty cells from each region (dorsal horn, central canal and ventral horn) for reconstruction using the Neurolucida software. The software ran on a PC attached to a Nikon microscope (60× objective) equipped with a motorized stage. The criterion to select cells for analysis was the presence of a complete, uncut dendritic arborization. The following parameters were measured using the Neuroexplorer software (MicroBrightField Inc., USA): dendritic field size (the area contained within a polygon joining the outermost distal tips of the dendrites), cell body area (values expressed in µm²), number of branch points and number of dendrites. Mean values were compared among groups by ANOVA followed by the Newman-Keuls post hoc test, as described above.
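As a concrete illustration of Abercrombie's correction, P = A × M / (M + L), the short script below applies it to a hypothetical count; the numeric values are invented for the example and are not data from the study.

```python
def abercrombie_correction(raw_count, section_thickness_um, mean_diameter_um):
    """Abercrombie (1946) correction for overcounting of sectioned cell
    profiles: P = A * M / (M + L), where A is the raw count (or density),
    M is the section thickness and L is the mean cell-body diameter along
    the axis perpendicular to the plane of section."""
    return raw_count * section_thickness_um / (section_thickness_um + mean_diameter_um)


# Hypothetical example: 40 profiles counted in a 200-um-thick section of
# cells averaging 20 um in diameter along the sectioning axis.
corrected = abercrombie_correction(40, 200.0, 20.0)
print(round(corrected, 2))  # 40 * 200 / 220 = 36.36
```

The correction shrinks the raw count because cells split at a section face are otherwise counted in two adjacent sections.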
Distribution of NADPH-d neurons
Two types of reactive neurons can be readily identified in the spinal cord of the agouti: type I and type II cells (Luth et al., 1994). Both correspond to aspiny, non-pyramidal neurons with distinct reactivity profiles (Figure 2). Type I neurons have the largest cell bodies and are more intensely labeled, revealing in high contrast details of both the cell body and the dendritic tree. Type II neurons, on the other hand, have a ghost-like appearance against the nonspecific background, with only their tiny somata barely visible (Figure 2).

The distribution of type I cells was examined in coronal sections of the spinal cord (Figure 3A). In every segment, the number of NADPH-d cells is greater in the dorsal horn and the central canal than in the ventral horn (p < 0.05; Figure 3B). The exception is the thoracic segment, where the distribution of labeled cells is more homogeneous (p > 0.05; Figure 3B). When comparing the distribution of cells in the dorsal horn and the central canal, however, the number of NADPH-d neurons is significantly greater in the former than in the latter in both the lumbar (p < 0.05) and cervical segments (p > 0.05).

Another comparison shows that NADPH-d neurons have a different distribution along spinal segments. For instance, labeled neurons are always found in greater number in the cervical segment (p < 0.05; Figure 3B), with the exception of the dorsal horn, which has a similar number of neurons in both the cervical and lumbar enlargements (p > 0.05). For the central canal, however, all segmental comparisons show differences, with more neurons labeled in the lumbar enlargement, followed by the cervical enlargement and thoracic segments (Figure 3B; p < 0.05). In the ventral horn, the number of NADPH-d neurons is greater in the lumbar segment than in either the cervical or the thoracic segments (p < 0.05), which show no statistical difference between them (Figure 3B; p > 0.05).
Morphometric analysis of NADPH-d neurons
We evaluated the morphology of reactive NADPH-d neurons both qualitatively and quantitatively. NADPH-d neurons are present throughout the rostrocaudal axis of the spinal cord (Figure 3A) and have a varied morphology, including poorly ramified bipolar neurons in the central canal and extensively ramified multipolar cells in the dorsal horn of the cervical enlargement (see examples in Figure 4).

NADPH-d neurons are present in all laminae of both the cervical and lumbar enlargements, but are mostly found in the superficial layers of the dorsal horn (laminae I-IV) and clustered around the central canal (lamina X) (Figures 3A and 4). Cells in the cervical enlargement have the most complex pattern of dendritic arborization of all segments, especially in the dorsal horn (Figure 4). NADPH-d neurons in the ventral horn display the simplest arborization pattern, regardless of segmental location (Figures 4 and 5).

The dendritic arborization of NADPH-d cells in the thoracic segment is relatively poor when compared to cells in both the cervical and lumbar enlargements (Figure 5). In the thoracic segment, NADPH-d neurons are mostly located in Rexed's laminae I-IV, VII, IX and X (Figure 4) and the intermediolateral cell column (IML). Thoracic NADPH-d cells have in general a multipolar aspect and exhibit a compact dendritic arbor (Figure 5), with cells located in the dorsal horn and central canal having the most prominent dendritic arborization (Figure 5).

Similar to what is observed in the cervical segment, NADPH-d cells located in the inferior portion of both the dorsal horn and the central canal of the lumbar enlargement also have a relatively complex morphology (Figure 5). Although neurons in the ventral horn display intense NADPH-d reactivity, they tend to have stumpy dendritic arbors (Figure 5).
We also observed scattered NADPH-d neurons located in the white matter of every spinal cord segment, mostly in the ventral and dorsal funiculi (Figure 4B). The dendritic tree of these cells shows a complex ramification pattern, especially in the ventral white matter. These cells are commonly found in regions bordering gray matter, usually projecting their dendrites toward it.

The quantitative analysis confirmed and expanded the qualitative description of morphological characteristics of NADPH-d neurons across spinal cord segments. Cells located in the ventral horn have the largest cell bodies, especially in the lumbar segment (p < 0.01) (Figure 6A). Conversely, cells in the dorsal horn (especially in cervical and lumbar segments) have a larger dendritic field coverage (p < 0.01) (Figure 6B). Overall, the dendritic tree of neurons in the thoracic segment covers a volume smaller than that of cells located in other segments (p < 0.05; p < 0.01) (Figure 6B). Two other measured parameters, i.e., number of ramification points and number of dendrites by order, follow the same trend: neurons in the dorsal horn of the cervical and lumbar enlargements have a greater number of branch points than cells in the same location in the thoracic segment (Figure 6C). This is also observed in the ventral horn, although cells in this region are noticeably less ramified than cells in the dorsal horn (Figure 6C) (p < 0.05). Only in the central canal of the thoracic segment do neurons have more branch points than those in the lumbar enlargement, but not the cervical enlargement (Figure 6C).
The neurons located in the dorsal horn of the cervical and lumbar enlargements have the most exuberant dendritic arborization (Figure 7A, p < 0.05), exhibiting up to fifth-order dendrites, especially in the central canal (Figure 7B). Conversely, neurons of the thoracic segment ramify only to third-order dendrites (Figure 7B). We found a greater density of branches in the second-order dendrites of cells located in both the dorsal horn and central canal of the cervical enlargement (Figure 7B) (p < 0.01). Although less noticeable, the same trend is observed in the lumbar enlargement, except for the ventral horn, where the peak of branching is observed in first-order dendrites (Figure 7B, p < 0.01). In the thoracic segment, in a way similar to the lumbar enlargement, cells located in the ventral horn have the most exuberant ramification pattern (Figure 7B, p < 0.01).

DISCUSSION
In the present work, we evaluated the enzymatic reactivity of NADPH-d and CO in coronal sections of the spinal cord of the agouti. Our main results are the following. First, the reactivity pattern for both enzymes is very similar throughout the spinal cord. Second, NADPH-d neurons are heterogeneously distributed across distinct spinal cord segments, with more neurons located in the dorsal horn and around the central canal in the cervical and lumbar enlargements. Third, the neurons found in the cervical enlargement are always more numerous and more ramified than those found in other segments. The significance of these findings is discussed below.
Cytoarchitectonic organization of the agouti's spinal cord: similarities with other mammalian species
Our results show that the agouti's spinal cord possesses a laminar organization that follows the classical pattern described by Rexed in the cat (Rexed, 1952, 1954). It is also very similar to the organization described for the rat (Molander et al., 1984, 1989), corroborating the claim that the laminar scheme proposed by Rexed is valid for all mammals, including man (Schoenen, 1973).

The histochemical reaction pattern for NADPH-d was spatially heterogeneous in coronal sections, as described in other species (Dun et al., 1993; Wong-Riley and Kageyama, 1986). Neuropil was usually more reactive in the dorsal horn of the cervical and lumbar enlargements, but not the thoracic segment (see Figure 1). The superficial laminae I and II are the target of inputs from nociceptive receptors in the periphery. Hyperalgesia is caused by inflammation and injury of peripheral tissues and is thought to be mediated by the amplification of nociceptive information in dorsal horn projection neurons (Woolf, 1983). It has been proposed that NO in this region could be involved in the LTP underlying hyperalgesia (Ikeda et al., 2006). Additionally, there is ample, consistent evidence of the involvement of NO in nociception, provided, for instance, by the pharmacological blockade of NOS using the intravenous administration of N-omega-nitro-L-arginine methyl ester (L-NAME) (Yonehara et al., 1997).
Morphology and distribution of NADPH-d neurons in the agouti's spinal cord: possible functional significance
Overall, the laminar distribution of NADPH-d neurons in the spinal cord of the agouti is similar to that described for other mammalian species, including the rat (Anderson, 1992; Saito et al., 1994; Spike et al., 1993), the rabbit (Kluchova et al., 2002), the hamster (Reuss and Reuss, 2001), the monkey (Dun et al., 1993) and humans (Foster and Phelps, 2000; Smithson and Benarroch, 1996). We show that NADPH-d neurons tend to concentrate in three main regions: the dorsal horn, the IML and the central canal (see Figure 3A). Other studies, however, show a qualitatively distinct distribution pattern in other species (Blottner and Baumgarten, 1992; Mizukawa et al., 1989). Blottner and Baumgarten (1992), for instance, report that NADPH-d neurons are found only in the IML of the rat. Mizukawa et al. (1989), on the other hand, describe a relative absence of NADPH-d neurons in the same region of the cat's spinal cord. Other studies also fail to show the presence of NADPH-d neurons in the ventral horn of the rat (Kluchova et al., 2001; Spike et al., 1993) and the rabbit's spinal cord (Kluchova et al., 2001). Such differences are probably related to methodological issues rather than genuine differences among species. For instance, the period of tissue incubation in Blottner and Baumgarten's study (about 30 min) is much shorter than ours (at least 6 h). In our experience, a longer incubation period is necessary to allow proper visualization of the population of NADPH-d neurons.
We were able to identify two distinct types of NADPH-d-reactive neurons in the spinal cord: a more reactive group with a Golgi-like appearance (type I) and a small, weakly reactive subpopulation (type II) (Luth et al., 1994). Similar results have been observed in other mammalian species (Dun et al., 1993; Saito et al., 1994), suggesting that the presence of both types of NADPH-d neurons is universal in the mammalian spinal cord. Yan et al. (1996) described a strict co-localization of type II neurons with both GABA and calbindin in the primate cortex. The same authors also proposed that type II neurons are typical of the brain of higher mammals, such as primates (Yan et al., 1996). Our own results in both the rat and the mouse show that this conclusion has to be reconsidered: the brain of rodents also has a large population of type II neurons (Freire et al., 2004, 2005). Type II neurons, however, are much harder to discriminate than type I neurons in the mammalian brain and are very sensitive to overfixation (Freire et al., 2005). Thus, it is easy to overlook the presence of type II neurons in otherwise successful histochemical reactions.
The presence of NADPH-d neurons in the superficial laminae of the dorsal horn is in accordance with the role played by NO in hyperalgesia and chronic pain in this region (Aley et al., 1998; Dolan et al., 2000; Ikeda et al., 2006; Tang et al., 2007; Wu et al., 1998). NO would act on lamina I spinothalamic projection neurons, contributing to activity-dependent LTP of synaptic responses evoked by peripheral inputs (Ruscheweyh et al., 2006; Simone et al., 1991). In the rat, for instance, sustained production of NO and subsequent activation of soluble guanylate cyclase in the spinal cord mediates thermal hyperalgesia (Meller et al., 1992). NADPH-d neurons are also heavily concentrated around the central canal (lamina X), which has been shown to receive convergent input from visceral afferents and also from cutaneous, muscular and noxious afferents (Ness and Gebhart, 1987; Wall et al., 2002). It is hypothesized that lamina X could mediate general arousal in the brain, in a way similar to the brainstem reticular formation (Wall et al., 2002).

Reactive NADPH-d neurons were also found in the IML, as described in other species (Kluchova et al., 2001; Marsala et al., 1999; Saito et al., 1994; Valtschanoff et al., 1992a). The IML is the anatomical substrate, or final common pathway, through which the control of the sympathetic nervous system is mediated by several inputs from the periphery (Appel and Elde, 1988), including noxious information. NADPH-d cells in this region are autonomic preganglionic neurons (Necker, 2004; Saito et al., 1994; Valtschanoff et al., 1992a). These cells form a local circuit which sends its projections, among other targets, to lamina X (Saito et al., 1994). Thus, NO could act as a neuromodulator or neurotransmitter within sensory and motor circuits related to the pelvic viscera (Vizzard et al., 1993).
NO also plays a significant role in regulating cerebral blood flow in the CNS (Estrada and DeFelipe, 1998; Ishikawa et al., 2005; Yoshida et al., 1997). In the rabbit spinal cord, NADPH-d somata and fibers are closely associated with longitudinally oriented blood vessels (Marsala et al., 1999), suggesting that NO may also help to couple neural activity with regional blood flow in the spinal cord (Valtschanoff et al., 1992a).

The agouti uses its forelimbs for locomotion and to manipulate food while eating (see Santiago et al., 2007). Complex motor acts like these require continuous sensory feedback aimed at various levels of the motor hierarchy. It should be expected that in spinal cord segments involved in controlling movement of the limbs, especially the forelimbs, the underlying circuitry would be more complex than in other segments. This complexity could be expressed as a greater number of interneurons, for instance, or a tendency for neurons to have larger and more ramified dendritic trees. A larger dendritic field, for example, could serve to sample a greater diversity of inputs to the spinal cord circuits. We see this increase in morphological complexity, in terms of NADPH-d cells, both in the dorsal horn and central canal of the cervical enlargement. Conversely, and as a comparison, less morphological complexity and a smaller number of interneurons and motoneurons should be present in the thoracic segment, which controls axial muscles. The results presented here reinforce this hypothesis, supporting the notion that spinal cord regions that control axial muscles have fewer and morphologically simpler neurons than regions involved in more complex movements.

NADPH-d neurons in the white matter exhibited a well-ramified dendritic arborization. The physiological role played by these cells is not yet clear, but it has been suggested that they belong to local circuits connecting different spinal segments (Marsala et al., 2003; Valtschanoff et al., 1992a).
CONCLUSIONS
In conclusion, the neuropil distribution of NADPH-d and CO in the spinal cord of the agouti is remarkably similar, both enzymes being more reactive in superficial laminae. This pattern is in accordance with the role played by NO in LTP of lamina I spinothalamic neurons. Two distinct types of NADPH-d-reactive neurons were also found: type I and type II neurons. Type I neurons are heavily stained and are located preferentially in superficial laminae, lamina X, the IML, and the white matter. The dendritic field of type I neurons is more exuberant in cervical segments, which contain the final common pathways controlling skilled movements of the forelimbs. The resulting pattern of cell body and neuropil distribution is in accordance with segregation of function in the mammalian spinal cord and further illuminates the role played by NO in dorsal horn circuitry.

Figure 1. NADPH diaphorase and cytochrome oxidase reactivity in coronal sections of the agouti's spinal cord. The enzymatic reaction pattern is illustrated in alternate sections of three spinal cord segments. The spinal cord cytoarchitectonic organization can be observed in alternate sections stained with cresyl violet. Neuropil reactivity for both NADPH-d and CO is higher in the dorsal horn, except in thoracic segments. Scale bar: 10 mm.

Figure 2. Comparative morphology of type I and type II neurons in the central canal of the lumbar segment of the agouti's spinal cord. Note both the small size of the cell body and the weakly stained dendritic arbor of type II neurons (arrowheads) when compared to the much larger type I cell. Scale bar: 30 µm.

Figure 3. Distribution of NADPH-d neurons in sub-regions of the cervical, thoracic and lumbar segments of the agouti's spinal cord. Qualitative analysis points to a heterogeneous distribution of NADPH-d neurons across spinal cord segments (A), confirmed by the quantitative analysis depicted in the histogram (B). For any given spinal segment, the number of NADPH-d cells is greater in both the dorsal horn and central canal than in the ventral horn (B). In addition, these cells are found in greater number in both the cervical and lumbar segments (B). Asterisks in horizontal bars indicate statistically significant comparisons (*p < 0.05; **p < 0.01; ANOVA/Newman-Keuls post hoc test). Scale bar in A: 10 mm.

Figure 4. Morphology of NADPH-d neurons located in the spinal cord. Gray matter neurons have a varied morphology, including poorly ramified bipolar neurons from the central canal and extensively ramified multipolar cells from the dorsal horn of the cervical enlargement (A). The dendritic arborization of white matter cells is exuberant, frequently projecting towards the gray matter (B). Scale bars: 100 µm (lower magnification); 50 µm (enlargements).

Figure 6. Comparative morphometry of spinal cord NADPH-d neurons. Cells located in the ventral horn have the largest cell bodies, especially in lumbar segments (A). Conversely, cells found in the dorsal horn (especially in both cervical and lumbar segments) possess a larger dendritic coverage (B). In addition, neurons found in the dorsal horn of the cervical and lumbar segments have a greater number of branch points than cells found in the same region of the thoracic segment (C). Asterisks in horizontal bars indicate statistically significant comparisons (*p < 0.05, **p < 0.01, ANOVA/Newman-Keuls post hoc test).

Figure 7. Ramification pattern in the spinal cord. Cells in the cervical enlargement are more ramified (in terms of average number of dendrites) in the dorsal horn and the central canal (A). Cells of the lumbar enlargement come second and have the same ramification pattern in their three sub-regions (dorsal horn, central canal and ventral horn). Cells in the lumbar and cervical enlargements have more complex dendritic patterns (B). Asterisks in horizontal bars indicate statistically significant comparisons (*p < 0.05; **p < 0.01).
Transporter Gene Regulation in Sandwich-Cultured Human Hepatocytes Through the Activation of Constitutive Androstane Receptor (CAR) or Aryl Hydrocarbon Receptor (AhR)

The induction potentials of ligand-activated nuclear receptors on metabolizing enzyme genes are routinely tested for new chemical entities. However, regulation of drug transporter genes by nuclear receptor ligands is underappreciated, especially in differentiated human hepatocyte cultures. In this study, gene induction by ligands of the constitutive androstane receptor (CAR) and the aryl hydrocarbon receptor (AhR) was characterized in sandwich-cultured human hepatocytes (SCHH) from multiple donors. The cells were treated with 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), omeprazole (OP), 6-(4-chlorophenyl)imidazo[2,1-b][1,3]thiazole-5-carbaldehyde O-(3,4-dichlorobenzyl)oxime (CITCO) and phenobarbital (PB) for three days. RNA samples were analyzed by the qRT-PCR method. As expected, CITCO, the direct activator, and PB, the indirect activator of CAR, induced CYP3A4 (31- and 40-fold), CYP2B6 (24- and 28-fold) and UGT1A1 (2.9- and 4.2-fold), respectively. Conversely, TCDD and OP, the activators of AhR, induced CYP1A1 (38- and 37-fold) and UGT1A1 (4.3- and 5.0-fold), respectively. In addition, OP, but not TCDD, induced CYP3A4 by about 61-fold. Twenty-four hepatic drug transporter genes were characterized, and of those, SLC51B was induced the most, by about 3.3- and 6.5-fold by PB and OP, respectively. Marginal inductions (about 2-fold) of the SLC47A1 and SLCO4C1 genes by PB, and of the ABCG2 gene by TCDD, were observed. In contrast, the SLC10A1 gene was suppressed about 2-fold by TCDD and CITCO. While the clinical relevance of SLC51B gene induction or SLC10A1 gene suppression warrants further investigation, the results verified that assessment of transporter gene induction is not required for new drug entities when a drug does not remarkably induce metabolizing enzyme genes through CAR and AhR activation.
INTRODUCTION
There are about 450 membrane transporters divided into two superfamilies, the solute carrier (SLC) and the ATP-binding cassette (ABC) transporters. Membrane transporters move ions, essential nutrients, hormones, etc. across biological membranes. Among these membrane transporters, nearly thirty are involved in drug transport and are currently designated as clinically relevant drug transporters (International Transporter et al., 2010; Hillgren et al., 2013). The liver is the most important organ in the body for drug metabolism, accounting for about 70% of drug and metabolite elimination in humans (Patel et al., 2016). Many transporters are expressed on the sinusoidal and canalicular membranes of hepatocytes to transport poorly permeable endogenous substances into and out of the liver. About 20 of these membrane proteins are also involved in the hepatic uptake and biliary excretion of a wide variety of prescription drugs and their metabolites (Vildhede et al., 2015), and play a key role in determining drug pharmacokinetics (PK) and hepatic exposure. Therefore, fluctuations in the expression and functional activity of drug transporters can result in changes in the plasma level and/or liver exposure of substrate drugs. For example, active hepatic uptake mediated by sinusoidal transporters, e.g., organic anion transporting polypeptides (OATPs), can be the rate-determining step of drug elimination for substrates that are either metabolically stable or unstable (Hagenbuch and Meier, 2004). The functional inhibition or induction of transporter gene expression can cause drug-drug interactions (DDIs) with co-administered drugs that are substrates of the transporters (International Transporter et al., 2010; Hillgren et al., 2013). Given the critical roles played by hepatic transporters in systemic clearance and liver exposure, it is important to understand the factors affecting transporter expression.
Therefore, characterizing transporter gene regulation is a crucial step toward improving the understanding of transporter-mediated drug absorption, disposition and elimination, and subsequently their association with drug efficacy and toxicity. It is well documented that metabolizing enzymes can be induced by the activation of various cytoplasmic and orphan nuclear receptors, such as the pregnane X receptor (PXR) and constitutive androstane receptor (CAR). For example, phase I cytochrome P450 3A (CYP3A) and CYP2C, and phase II uridine diphosphate glucuronosyltransferase 1A1 (UGT1A1), are induced by PXR ligands (Niu et al., 2019). Similarly, CAR ligands induce CYP2B, 3A, 2C and UGT enzymes (Yang and Wang, 2014). Unfortunately, details of transporter gene regulation by these nuclear receptors remain sparse in the literature (Mottino and Catania, 2008; Tolson and Wang, 2010), and systematic evaluations have not been conducted to the same extent as for metabolic enzyme induction, especially in differentiated hepatocyte cultures with correct transporter localization. Recently we investigated transporter gene induction by rifampin, a known prototypical activator of PXR for enzyme and efflux transporter genes, in sandwich-cultured hepatocytes (Niu et al., 2019). Since sandwich-cultured human hepatocytes (SCHH) have collagen/Matrigel™ layers on both sides of the hepatocytes, which allow the hepatocytes to differentiate and form bile canaliculi, the cells can restore the expression of drug-metabolizing enzyme and transporter proteins and, most importantly, the correct localization of hepatic drug transporters (Kimoto et al., 2012a; Kimoto et al., 2012b). We also observed similar induction patterns of metabolic enzyme genes by rifampin between sandwich-cultured hepatocytes and plated hepatocytes (Niu et al., 2019).
Interestingly, hepatic transporter gene induction by the PXR ligand rifampin was generally not significant in either human or monkey hepatocytes, except for SLC51B, for which a greater than 10-fold induction was observed (Niu et al., 2019).

Ethics Statement
Cryopreserved primary human hepatocytes isolated from deceased donor livers were purchased from In Vitro ADMET Laboratories (Columbia, MD 21045) and BioIVT (Westbury, NY). Pre-experiments were conducted to confirm the induction of CYP genes in selected hepatocyte lots. The donor demographics are listed in Table 1. Consent was obtained from the human donor or the donor's legal next of kin for the use of these samples and their derivatives for research purposes, using IRB-approved authorizations.

Sandwich-Cultured Human Hepatocytes and Treatment with AhR or CAR Activators
Cryopreserved human hepatocytes were quickly thawed at 37°C and then transferred into hepatocyte thawing medium. Cells were pelleted at 100 × g for 10 min and then resuspended in hepatocyte complete plating medium (CP medium). The cells were then seeded at 65,000 cells per well in 96-well collagen I-coated plates. After cell attachment (4-6 h post-seeding), CP medium was exchanged for ice-cold complete incubation medium (HI medium) containing BD Matrigel™ at a concentration of 0.25 mg/ml. The cells were then incubated overnight at 37°C in a humidified 5% CO2 incubator. The HI medium was replaced by fresh HI medium containing TCDD (0.08-50 nM), OP (0.08-50 μM), CITCO (0.008-5 μM), PB (1.6-1,000 μM) or dimethyl sulfoxide (DMSO) only. The media containing TCDD, omeprazole, CITCO, PB or DMSO were subsequently changed daily for a total of three days. Cell death or loss of adhesion of hepatocytes during or after treatment was examined by microscopy. In addition, β-actin mRNA levels were assessed to confirm that the treatments with the transcriptional activators did not cause significant cytotoxicity.
Real-Time Quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR)
An RNeasy 96 Kit (Qiagen) was used to extract total cellular RNA from sandwich-cultured hepatocytes in 96-well plates following the manufacturer's instructions. A total volume of 200 μL of RNA was obtained via the column elution provided in the RNeasy kit. Five μL of the RNA extract was mixed with TaqMan® Fast Virus 1-Step Master Mix (Life Technologies) in a final reaction volume of 20 μL, and qRT-PCR was performed on a QuantStudio 6 Flex Real-Time PCR System (Life Technologies), following the manufacturer's instructions. β-actin mRNA expression in the SCHH was used as the housekeeping control for normalization of target gene expression. The mRNA levels in SCHH treated with AhR or CAR activators were expressed as fold changes relative to the gene expression without treatment. The 2^−ΔΔCt method (Livak and Schmittgen, 2001) was used to analyze relative changes in gene expression, where Ct is the cycle number at which the fluorescence in the reaction crosses the preset arbitrary threshold, ΔCt is the difference between the Ct of the target and that of the reference, and ΔΔCt is the difference between the ΔCt of the test and the ΔCt of the preassigned control. Gene expression for each condition is expressed as a fold change (mean ± standard deviation, SD) from three independent donors (each performed in triplicate). All primer and probe sets were purchased from Life Technologies (Table 2). When concentration-dependent induction was observed, the in vitro mRNA induction data were fitted with a sigmoidal three-parameter equation in GraphPad Prism (La Jolla, CA) to estimate the in vitro inducer concentration that produced half of the maximum induction (EC50).

The TCDD induction was concentration-dependent (Figure 1A), and the EC50 was estimated to be 0.5 and 1.56 nM for CYP1A1 and UGT1A1, respectively (Table 4).
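The 2^−ΔΔCt relative-quantification calculation described above reduces to a few lines of arithmetic. In this sketch the Ct values are invented for illustration and are not data from the study:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs. control),
    normalized to a reference (housekeeping) gene such as beta-actin."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # dCt of the test
    delta_ct_control = ct_target_control - ct_ref_control   # dCt of the control
    delta_delta_ct = delta_ct_treated - delta_ct_control    # ddCt
    return 2 ** (-delta_delta_ct)

# Example: the target crosses the threshold 3 cycles earlier after
# treatment while the reference gene is unchanged -> 2^3 = 8-fold induction.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 8.0
```

A fold change of 1 means no change; values below 1 indicate suppression of the target gene.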
Likewise, CYP1A1, CYP3A4 and UGT1A1 mRNA induction by OP was observed in a concentration-dependent manner, reaching maxima of 37-, 61- and 5.0-fold at 50 μM, respectively (Table 3). The EC50s of OP induction were 3.34, 7.37 and 28.7 μM for CYP1A1, CYP3A4 and UGT1A1, respectively (Table 4). The mRNA of CYP2B6, CYP3A4 and UGT1A1, but not CYP1A2, was induced by the CAR activators CITCO and PB. Average increases of 24-, 31- and 2.9-fold in CYP2B6, CYP3A4 and UGT1A1 mRNA expression were detected in the hepatocytes treated with CITCO at 5 μM (Table 3), with EC50s of 0.16, 1.14 and >5 μM, respectively (Table 4). Similarly, PB induced CYP2B6, CYP3A4 and UGT1A1 mRNA expression by about 40-, 28- and 4.2-fold at the highest concentration tested (1,000 μM) (Table 3), and the EC50s were 726, 343 and 639 μM, respectively (Table 4). Although robust enzyme induction was detected in SCHH treated with activators of AhR or CAR, it is worth noting that donor-to-donor variability was observed among the three lots of human hepatocytes. The CYP3A4 fold induction by CITCO (5 μM) in hepatocytes from donor OQA was nearly 8 times greater than that of donor AOS (Supplementary Figure 1).

FIGURE 1 | Induction of metabolizing enzyme and transporter mRNA by TCDD. The concentration-dependent induction studies were conducted in SCHH (lots HH1117, HH1142 and YTW) treated with TCDD at concentrations ranging from 0.08 to 50 nM. The gene expression was quantitated by qRT-PCR. Gene expression for each condition is expressed as a fold change (mean ± standard deviation, SD) from three independent donors (each performed in triplicate). Dotted lines indicate a 2-fold change. (Frontiers in Pharmacology | www.frontiersin.org, January 2021 | Volume 11 | Article 620197)
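The EC50 fitting step, performed in GraphPad Prism in the study, can be mimicked with a small pure-Python grid search over a three-parameter concentration-response curve (bottom, top, EC50; hill slope fixed at 1). The concentrations and parameter values below are illustrative and only loosely echo the CYP1A1/TCDD estimate; they are not the study's data:

```python
def sigmoid(conc, bottom, top, ec50):
    # Three-parameter concentration-response model (hill slope = 1).
    return bottom + (top - bottom) * conc / (conc + ec50)

# Noise-free synthetic fold-induction data from a "true" EC50 of 0.5 nM.
concs = [0.08, 0.4, 2.0, 10.0, 50.0]
observed = [sigmoid(c, 1.0, 40.0, 0.5) for c in concs]

def fit_ec50(concs, observed, bottom, top):
    """Grid-search the EC50 that minimizes the sum of squared errors."""
    candidates = [0.01 * 1.1 ** i for i in range(130)]  # ~0.01 to ~2200
    def sse(ec50):
        return sum((sigmoid(c, bottom, top, ec50) - y) ** 2
                   for c, y in zip(concs, observed))
    return min(candidates, key=sse)

est = fit_ec50(concs, observed, 1.0, 40.0)
print(round(est, 2))  # recovers a value close to the true EC50 of 0.5
```

In practice one would also fit the bottom and top plateaus (and often the hill slope) with a proper least-squares optimizer; the grid search just makes the idea concrete.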
In addition, a greater than 50% reduction of CYP3A4 mRNA (<0.5-fold) by TCDD was detected in two of the three lots of hepatocytes, resulting in an average fold change of about 0.4 (Table 3) (Supplementary Figure 1).

Induction of Transporter Genes by TCDD, OP, CITCO and PB
Twenty-four hepatic drug transporters were tested for potential gene induction in SCHH treated with prototypical activators of AhR (TCDD and OP) or CAR (CITCO and PB) for 3 days. All 24 transporter genes were found to be expressed in hepatocytes from the five donors. In the US FDA guidance for industry on in vitro DDI studies, CYP induction is presumed when a ≥2-fold concentration-dependent increase in CYP mRNA is observed (USFDA-Guidance-2020, 2020). By this criterion, the average fold changes of transporter genes by CAR or AhR activators in SCHH were generally less than 2-fold, with the exception of the SLC51B, SLC47A1, SLCO4C1, ABCG2 and SLC10A1 genes (Table 3 and Figures 1-4). A modest (∼2-fold) and concentration-dependent induction of ABCG2 mRNA by TCDD (50 nM) was detected (Figure 1C). In contrast, marginal suppression of SLC10A1 mRNA by TCDD and OP was observed at the highest concentrations tested, and the suppression appeared to be concentration-dependent (Figure 1B and Figure 2B). A dose-dependent induction of the SLC51B gene by PB and OP was detected (Figure 2D and Figure 3D); at the highest concentrations tested, the induction of SLC51B was about 3.4- and 6.5-fold, respectively. Additionally, about 2-fold increases in SLC47A1 and SLCO4C1 mRNA expression were found in SCHH treated with PB at 1,000 μM. The fold induction by CITCO for all transporter genes in SCHH was generally less than 2-fold (Table 3).

DISCUSSION
Drugs or xenobiotics can induce the expression of metabolizing enzyme and/or transporter proteins by activating xenobiotic-sensing receptors that regulate gene transcription.
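The ≥2-fold decision rule quoted above from the FDA in vitro DDI guidance can be expressed as a small screen. The gene names match those discussed in the text, but the fold-change values here are invented for illustration:

```python
def is_induced(fold_changes, threshold=2.0):
    """fold_changes is ordered from lowest to highest concentration; flag a
    gene when the response is non-decreasing and reaches the threshold."""
    rising = all(b >= a for a, b in zip(fold_changes, fold_changes[1:]))
    return rising and max(fold_changes) >= threshold

screens = {
    "SLC51B":  [1.1, 1.8, 3.0, 6.5],  # concentration-dependent induction
    "ABCB1":   [0.9, 1.1, 1.2, 1.3],  # below the 2-fold threshold
    "SLC10A1": [1.0, 0.9, 0.7, 0.5],  # suppressed, not induced
}
flagged = [gene for gene, fc in screens.items() if is_induced(fc)]
print(flagged)  # ['SLC51B']
```

A strict monotonicity test is a simplification; real guidance-driven analyses also weigh EC50 and Emax estimates and donor-to-donor variability.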
As a result, fluctuations in systemic exposure can occur for drugs that are transported or metabolized by the induced enzymes or transporters. Thus, early assessment of the potential for CYP and transporter induction has become an important de-risking step for planning appropriate clinical DDI studies in the drug research and development cycle. Primary cultures of hepatocytes are commonly used as tools to investigate the induction potential of metabolizing enzyme genes, and the model has been adapted to investigate transporter gene regulation (Nishimura et al., 2002; Williamson et al., 2013). However, when hepatocytes are grown in a conventional plated monolayer format, many drug metabolism and transport genes are substantially downregulated (Sun et al., 2019), which can result in false positive or negative outcomes, particularly for drug transporters that require the unique polarized architecture present in the liver. In fact, Courtois et al. (2002) observed remarkable in vitro induction of multidrug resistance-associated protein 2 (MRP2) mRNA and protein levels in conventional plated rat and human hepatocytes exposed to phenobarbital. Surprisingly, the MRP2 induction observed in vitro was not reproduced in rats treated in vivo with the barbiturate (Courtois et al., 2002), suggesting that the positive in vitro induction of MRP2 could be a false positive. Additionally, inconsistent results have been reported in the literature when conventional plated hepatocytes are used to characterize transporter gene induction by PXR or CAR ligands, compared with results obtained from sandwich-cultured hepatocytes (Jigorel et al., 2006). With that in mind, in the current investigation, transporter induction potential by AhR or CAR activators was assessed in SCHH to overcome the loss of hepatocyte polarity from which conventional hepatocyte cultures suffer. OP is a proton pump inhibitor used for the treatment of acid-related disorders.
TCDD, also known as dioxin, is a contaminant of the herbicide infamously known as Agent Orange, and is found in animals and humans through the contaminated food chain. OP and TCDD are AhR ligands (Quattrochi and Tukey, 1993). Ligands bind to AhR, and the receptor forms a heterodimeric complex with the AhR nuclear translocator protein, acting as a transcription factor to stimulate the transcription of CYP1A family genes. Accordingly, induction of the CYP1A1 and UGT1A1 genes in SCHH by TCDD and OP was observed in a concentration-dependent manner (Figures 1, 2). In addition, OP, but not TCDD, also induced the CYP3A4 gene to a level comparable to the induction of CYP1A1 (61-fold at 50 μM) (Table 3). These results agree with literature reports (Yoshinari et al., 2008; Donato et al., 2010) and confirm the reliability and robustness of SCHH for investigating gene induction. Regarding drug transporter regulation, it is known that the efflux transporters ABCG2 (BCRP) and ABCB1 (P-gp) are induced by TCDD-mediated AhR activation in Caco-2 and other human carcinoma cells, including HepG2, LS180, LS174T and MCF7 cells (Ebert et al., 2005; Tan et al., 2010). Induction of ABCG2 mRNA in primary hepatocytes has also been observed; however, the magnitude of induction is relatively small in human hepatocytes compared with the human carcinoma cell lines, which suggests cell-dependent induction (Tan et al., 2010). Accordingly, a modest concentration-dependent induction of ABCG2, but not ABCB1 mRNA, by TCDD and OP was observed in SCHH (Figures 1, 2). Additionally, the SLC10A1 gene, which encodes the sodium-dependent taurocholate co-transporting polypeptide (NTCP) for hepatic uptake of bile acids, was reduced about 2-fold by TCDD and OP (Figures 1, 2).
Although we cannot rule out that the marginal effects observed were due to slight cell stress at the high concentrations, this observation is supported by previous studies showing that TCDD can alter genes involved in cholesterol metabolism and bile acid homeostasis (Fletcher et al., 2005). Interestingly, the SLC51B gene was the most highly induced transporter gene by OP, but not by TCDD (Figures 1, 2). The SLC51A and SLC51B genes encode the OSTα and OSTβ proteins, respectively, which together form the heteromeric transport protein OSTα/β. This is an important transporter in bile acid homeostasis and can undergo adaptive regulation during the progression of obstructive cholestasis or primary biliary cholangitis (Ballatori et al., 2009). OSTα/β expression can be regulated by other transcription factors, including small heterodimer partner (SHP), liver receptor homolog-1 and the liver X receptor (LXR)/retinoid X receptor (RXR) heterodimer (Ballatori et al., 2009). In addition, the farnesoid X receptor (FXR), also called the bile acid receptor, and LXR may exert a coordinated role in maintaining bile acid homeostasis in hepatocytes, resulting in a negative feedback loop in which bile acids induce or suppress bile acid transporter genes, including OSTα/β. In fact, we previously reported that the SLC51B gene was also induced in hepatocytes treated with rifampin (Niu et al., 2019). Rifampin induces CYP3A4 and CYP2C9 through activation of PXR (Maglich et al., 2002). The observation of CYP3A4 and SLC51B induction is consistent with the current literature (Novotna and Dvorak, 2014; Benson et al., 2016; Niu et al., 2019), suggesting that PXR is involved in this induction. Since AhR and PXR can share common regulatory pathways (Maglich et al., 2002; Lim and Huang, 2008), our results implicate AhR-PXR crosstalk in human hepatocytes (Gerbal-Chaloin et al., 2006). CITCO and PB are known as direct and indirect CAR activators, respectively.
As one of the xenosensors, CAR regulates hepatic drug metabolizing enzyme genes and plays a role in mediating various hepatic functions, including fatty acid oxidation, insulin signaling and biotransformation of bile acids (Gao et al., 2009; Lynch et al., 2013). For example, induction of CYP2B6 gene expression is mediated by CAR via responsive elements located in the promoter region of the CYP2B6 gene. PB does not directly bind to CAR and is considered an indirect activator that translocates CAR into the nucleus, where it leads to transcriptional activation of target genes (Kawamoto et al., 1999; Wang et al., 2004). PB is also a human PXR ligand that induces the gene expression of numerous hepatic metabolizing enzymes and transporters (Li et al., 2019). In contrast, CITCO binds to CAR and is a direct activator of CAR; it also activates human PXR (Li et al., 2019). Here we observed that both CYP2B6 and CYP3A4 were greatly induced by PB and CITCO, and to a lesser extent UGT1A1 was also induced, in a concentration-dependent manner (Figures 3, 4), consistent with literature reports (Li et al., 2019). Burk et al. reported that CAR can also regulate the MDR1 gene in cells stably expressing CAR (Burk et al., 2005). Our findings suggest that CITCO and PB may also activate the PXR receptor to induce the CYP3A4 gene (Novotna and Dvorak, 2014; Li et al., 2019).

FIGURE 4 | Induction of metabolizing enzyme and transporter mRNA by PB. The concentration-dependent induction studies were conducted in SCHH (lots AOS, OQA and YTW) treated with PB at concentrations ranging from 1.6 to 1,000 μM. The gene expression was quantitated by qRT-PCR. Gene expression for each condition is expressed as a fold change (mean ± standard deviation, SD) from three independent donors (each performed in triplicate). Dotted lines indicate a 2-fold change.
Conversely, neither CITCO nor PB induced the ABCB1 (MDR1) gene in SCHH (Figures 3, 4), suggesting that transporter induction can be cell line (or organ) specific. Interestingly, the SLC51B gene was also induced by PB, but not by CITCO (Table 3). Together with the SLC51B induction by OP, PB and rifampin, compounds that can activate PXR (Benson et al., 2016; Niu et al., 2019), SLC51B induction appears unlikely to be mediated by the CAR/AhR receptors. In addition, OSTβ heterodimerizes with OSTα to form the functional transporter complex OSTα/β. Therefore, the functional outcome of inducing SLC51B (OSTβ), but not SLC51A (OSTα), remains unclear, and further investigation of OSTβ induction is warranted. Collectively, following the regulatory guidance that a concentration-dependent 2-fold increase in mRNA is considered the threshold for a positive in vitro induction signal (USFDA-Guidance-2020, 2020), our results revealed that, except for the SLC51B gene, hepatic transporter gene regulation by CAR and AhR ligands was generally marginal and significantly lower than the induction of metabolizing enzyme genes. The current findings suggest that the assessment of transporter gene induction is not required when a drug does not markedly induce metabolizing enzyme genes through CAR and AhR activation.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS
YL, CN, and BS participated in research design. CN conducted experiments. YL and CN performed data analysis. YL, CN, and BS wrote or contributed to the writing of the manuscript.
Wearable Smart Spectacles using Arduino

Smart spectacles, or smart glasses, are wearable glasses that add information alongside what the user sees. Alternatively, smart glasses are sometimes defined as wearable computer glasses that are able to change their optical properties at runtime; smart sunglasses programmed to change tint by electronic means are an example of this latter type. Superimposing information onto a field of view is achieved through an optical head-mounted display (OHMD) or embedded wireless glasses with a transparent heads-up display (HUD) or augmented reality (AR) overlay. These systems have the capability to reflect projected digital images while also allowing the user to see through them or see better with them. While early models could perform only basic tasks, such as serving as a front-end display for a remote system (as in smart glasses utilizing cellular technology or Wi-Fi), modern smart glasses are effectively wearable computers that can run self-contained mobile apps. Some are hands-free and can communicate with the Internet via natural-language voice commands, while others use touch buttons. Like other computers, smart glasses may collect information from internal or external sensors, may control or retrieve data from other instruments or computers, and may support wireless technologies such as Bluetooth, Wi-Fi, and GPS. A small number of models run a mobile operating system and function as portable media players, sending audio and video files to the user via a Bluetooth or Wi-Fi headset. Some smart glasses models also feature full lifelogging and activity-tracker capability, and may have features found on a smartphone.

INTRODUCTION
As technology grows rapidly and integrates itself into all aspects of people's lives, designers and developers have tried to provide a more pleasant experience of technology to people.
One of the technology trends that aims to make life easier is wearable computing. Wearables aim to assist people in being in control of their lives by augmenting real life with extra information, constantly and ubiquitously. One of the growing trends in wearable computing is head-mounted displays (HMDs), as the head is a great gateway for receiving audio, visual and haptic information. This makes such devices useful for all kinds of people, including people with disabilities. In this project, we make a wearable extension that is used to send notifications of calls and messages received on mobile phones, and also to show the time and date, all in front of the wearer's eye. The smart glasses are a wearable computing device used as an extension, which can be attached to the spectacles or sunglasses of the wearer and paired with smartphones via Bluetooth. This extension contains an Arduino board with an ATmega328P microcontroller, which is programmed to connect with smartphones through a smartphone application. A Bluetooth module, the HC-05, is interfaced with the ATmega328P and used to connect with smartphones. A 5 V battery/rechargeable battery is used as the power supply for the smart glasses. An SSD1306 0.96" OLED display is interfaced with the ATmega328P and used to display the data received from the smartphone. The smartphone application is used to transmit the phone's data, i.e. date, time, and notifications of phone calls and text messages.

A) MIT App Inventor
MIT App Inventor is a web-based integrated development environment originally provided by Google and now maintained by the Massachusetts Institute of Technology (MIT). It allows newcomers to computer programming to create application software for two operating systems: Android and iOS.
It uses a graphical user interface (GUI) very similar to the programming languages Scratch and StarLogo, which allows users to drag and drop visual objects to create an application that can run on mobile devices. In creating App Inventor, Google drew upon significant prior research in educational computing and work done within Google on online development environments. Our system has to work with the Bluetooth module to access the notifications of a smartphone and display them on the screen, so building an Android application was the first concern, with the different modules and buttons to be accessed and placed accordingly. Certain blocks in App Inventor that interact with the Bluetooth client are built according to the requirements to pass the smartphone's notifications on to the display unit connected to the spectacles. App Inventor programs describe how the phone should respond to certain events: a button has been pressed, the phone is being shaken, the user is dragging a finger over a canvas, etc. Some commands require one or more input values (also known as parameters or arguments) to completely specify their action. Specific blocks were therefore created to access the Bluetooth client and pass the notifications on to the display unit for optimal working of the system.

PROPOSED SYSTEM
The smart glasses module works on the principle of reflection and focusing of light. The information displayed on the OLED screen is shown on the anti-reflective glass by reflecting it with the help of a mirror, and it is then focused on the screen with the help of a focal lens. The module is powered by a 280 mAh lithium-polymer battery, which can be charged with the help of a USB charger circuit, and the power to the Arduino Nano is controlled with a switch. The HC-05 Bluetooth module is controlled by the Arduino Nano for displaying the received output on the OLED display.
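The phone-to-glasses data path just described (App Inventor app → HC-05 → Arduino Nano → OLED) needs some agreed wire format, which the text does not specify. The sketch below, modelled entirely in Python for clarity, assumes a hypothetical delimiter-framed ASCII protocol (TYPE|SENDER|BODY terminated by a newline); the field names and the '|' delimiter are our own assumptions, not the project's actual format:

```python
def encode_notification(kind, sender, body):
    """Phone side: pack one notification into a single serial line."""
    # Strip the delimiter and newlines from user text so framing can't break.
    clean = lambda s: s.replace("|", " ").replace("\n", " ")
    return f"{kind}|{clean(sender)}|{clean(body)}\n"

def decode_notification(line):
    """Receiver side (the Arduino's job, modelled here in Python):
    split the frame back into fields for the OLED."""
    kind, sender, body = line.rstrip("\n").split("|", 2)
    return {"kind": kind, "sender": sender, "body": body}

frame = encode_notification("SMS", "+15550000000", "Hello from tester")
print(decode_notification(frame))
# {'kind': 'SMS', 'sender': '+15550000000', 'body': 'Hello from tester'}
```

On the real device the decoding side would be a few lines of Arduino C++ reading from the serial port until '\n' and splitting on the delimiter; the Python model just makes the framing explicit.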
The Arduino Nano acts as the CPU in this module; it is interfaced with the Bluetooth module and the OLED display. The following are the main steps implemented during the whole process. The circuit interacts with the application through the Bluetooth module in the following manner:
1. The smartphone receives the notifications.
2. Each notification is encoded by the mobile application to be sent via the Bluetooth client.
3. The encoded signal data is received by the HC-05 Bluetooth module and passed on to the Arduino Nano for decoding and forwarding.
4. The Arduino forwards the signal to the display unit, which renders the output.

Experimental results
The system provides basic notifications, which are accessed through the smartphone connected via the HC-05 Bluetooth module. We are able to display the local time on the system, exactly synchronized with the smartphone. The real-time battery percentage is shown and is updated every time the cycle repeats. The smartphone's primary SIM is detected by the system and displayed accordingly. When a text message is sent from another device to the system's phone, the system shows the delivered message as a notification, with the phone number as well as the message itself.
Here the message received by the phone is displayed in the form of a notification, with the unknown number shown as well as the message that was delivered.

CONCLUSIONS
The system provides satisfactory results: it delivers basic notifications to the display unit, such as SMS, time, date and information about the SIM card the phone is using, in a continuous loop. The system is placed in a 3D-printed case that fits any spectacles and can be used as a portable device. The wearer is thus able to observe the date, time, cellular operator and the text message.

FUTURE SCOPE
According to the results we obtained, the system tends to take time to update new notifications, so its efficiency can be increased. The display can hold only 5 words of a received text message at one time, so anything longer cannot be shown in full. Calling support could be added to this system; at present, general OS permissions could not be accessed through our system to provide the call notification facility.
Role of Hepatic Stellate Cells in the Early Phase of Liver Regeneration in Rat: Formation of Tight Adhesion to Parenchymal Cells

We investigated the activation mechanisms of hepatic stellate cells (HSCs), which are known to play pivotal roles in the regeneration process after 70% partial hepatectomy (PHx). Parenchymal liver cells (PLCs) and non-parenchymal cells (NPLCs) were isolated and purified from regenerating livers at 1, 3, 7 and 14 days after PHx. Each liver cell fraction was stained by immunocytochemistry using an anti-desmin antibody as a marker for HSCs, anti-alpha-smooth muscle actin (alpha-SMA) as a marker for activated HSCs, and 5-bromo-2'-deoxyuridine (BrdU) for detection of proliferating cells. Tissue sections from regenerating livers were also analyzed by immunohistochemistry and compared with the results obtained for the isolated cell fractions. One and 3 days after PHx, the PLC-enriched fraction contained HSCs adhered to PLCs. The HSCs adhered to PLCs were double-positive for BrdU and alpha-SMA and formed clusters, suggesting that these HSCs were activated. However, the HSC-enriched fraction, containing HSCs not adhered to PLCs, stained positive for the anti-desmin antibody but negative for the anti-alpha-SMA antibody. These results suggest that HSCs are activated by adhering to PLCs during the early phase of hepatic regeneration.

Introduction
The liver regenerates in size and function 7 to 14 days after 70% partial hepatectomy in rodents [1]. Recent reports have demonstrated that not only the proliferation of parenchymal liver cells (PLCs) but also the activation of sinusoidal liver cells, namely Kupffer cells, hepatic lymphocytes, sinusoidal endothelial cells, pit cells, stem cells and HSCs, is involved in the regeneration process through cell-cell interactions and cytokine networks [2]. The activated hepatic stellate cells (HSCs) proliferate vigorously, lose vitamin A and synthesize a large quantity of extracellular matrix.
The morphology of these cells also changes from that of star-shaped stellate cells to that of fibroblasts or myofibroblasts [3]. However, the molecular and cellular mechanisms of this process, especially the roles of cell-cell interaction between PLCs and HSCs in HSC activation, remain unknown. In the present study, we isolated and purified HSCs and PLCs from regenerating liver after PHx in rats and investigated the mechanisms of HSC activation from the viewpoint of adhesion between PLCs and HSCs in vivo and in vitro. This article is available from: http://www.comparative-hepatology.com/content/3/S1/S29

Animals and Partial Hepatectomy (PHx)
Female Lewis rats (200-250 g body weight) were used. Under ether anesthesia, rats were subjected to PHx using the method described by Higgins and Anderson [4] with minor modifications.

Isolation of PLC- and HSC-enriched Fractions
The isolation and enrichment methods for PLCs and HSCs were modifications of previously described isolation methods for PLCs [5] and HSCs [6]. Briefly, the liver was perfused with Ca2+, Mg2+-free HBSS containing 0.05% collagenase at 37°C. The liver was then removed, cut into small pieces, and incubated in the same solution at 37°C for 30 minutes. PLCs were separated from the non-parenchymal cells (NPLCs) by low-speed centrifugation. After washing with cold HBSS, the PLC-enriched fraction was obtained. HSCs were isolated from the NPLC-enriched fraction by 8.2% Nycodenz density gradient centrifugation. The HSC-enriched fraction was obtained from the upper whitish layer.

Immunohistochemistry
Indirect immunohistochemical examination of desmin and alpha-smooth muscle actin (alpha-SMA) was performed on formalin-fixed, paraffin-embedded sections of rat liver.

5-Bromo-2'-deoxyuridine (BrdU) Labeling for Proliferation Assay
BrdU (50 mg/kg body weight) was given to rats by intraperitoneal injection 3 days after PHx. One hour later, the rats were used for the isolation of liver cells.
Immunocytochemistry of BrdU, Desmin and alpha-SMA
Each liver cell fraction freshly isolated from normal or PHx rats was resuspended in PBS and adhered to microscope slides using a cytospin. Double immunocytochemical staining of desmin and BrdU was performed to demonstrate proliferating HSCs, while activated HSCs were shown by double immunocytochemical staining of desmin and alpha-SMA. Slides were observed under a fluorescence microscope and digitally photographed.

Immunohistochemistry
To investigate the behavior of HSCs after PHx, we chronologically observed the regenerating liver by desmin and alpha-SMA immunohistochemistry and analyzed the activation of HSCs (data not shown). In summary, there was a clear increase of HSCs starting on day 1, which peaked on day 3 and declined again by day 7. HSC activation on day 14 was not different from day 0.

HSC "Contamination" in the PLC-enriched Fraction
After PHx, we counted the number of NPLCs in the PLC-enriched fractions. We stained the PLC-enriched cell fraction with Giemsa, counted the number of NPLCs present in the fraction, and calculated their percentage of the whole cell population. PLCs and NPLCs were readily discerned by cell and nucleus size (Fig. 1). In the PLC-enriched fraction obtained from normal rat liver, the percentage of NPLCs was 3%, and this increased to 27% and 20% at 1 and 3 days after PHx, respectively (data not shown). To identify HSCs among the NPLCs "contaminating" the PLC-enriched cell fraction, the fraction was analyzed by desmin immunocytochemistry (Fig. 2). Many desmin-positive cells were found in the PLC-enriched fraction at 1 and 3 days after PHx. To investigate the proliferation of HSCs in the PLC-enriched fraction, we analyzed the cell fraction by double immunocytochemistry using the anti-desmin antibody and BrdU staining (data not shown). Three days after PHx, HSC proliferation coincided with cluster formation.
In addition, BrdU and desmin were co-localized within the same cells, indicating that these HSCs were proliferating.

Activation of HSCs by Adhering to PLCs
Further, we identified activated HSCs by double immunocytochemistry using anti-desmin and anti-alpha-SMA antibodies (data not shown). "Contaminating" HSCs in the PLC-enriched fraction at 3 days after PHx showed a double-positive reaction against the antibodies (data not shown). However, when HSCs were isolated separately from the same animal on a Nycodenz density gradient, the HSCs were desmin-positive but not alpha-SMA-positive (data not shown). Thus, HSC activation was detected only when HSCs adhered to PLCs in the PLC-enriched fraction. Activation was not demonstrated when HSCs were isolated and further purified as an HSC fraction from the NPLC-enriched fraction.

Discussion
We have demonstrated previously that PLCs synthesize and release various cytokines, such as GM-CSF [5], M-CSF and IL-1, and activate the hematopoietic and immune systems in the liver after PHx. In previous reports we have proposed the hypothesis that the liver functions as a hematolymphoid organ. In our present study, the morphology of HSCs surrounding PLCs in the PLC-enriched fraction was round, and the HSCs were desmin-positive at 1 day after PHx. Three days after PHx, the morphology of the HSCs changed to that of myofibroblast-like cells, and the HSCs were alpha-SMA-positive. The PLC-enriched fractions at 7 or 14 days after PHx contained virtually no HSCs. These data indicate that regeneration of liver structure, including the hepatic sinusoidal architecture, involves transient cluster formation by HSCs in the early phase.

Figure 1. Giemsa staining of the isolated PLC-enriched fraction. PLC-enriched fractions from normal (A) and PHx rats at 1 day (B), 3 days (C), 7 days (D) and 14 days (E) after PHx. Contaminating NPLCs (arrows) can be seen abutting or within regenerating PLC clusters. Bar = 100 µm.
Music and Time Perception in Audiovisuals: Arousing Soundtracks Lead to Time Overestimation No Matter Their Emotional Valence

Abstract: One of the most tangible effects of music is its ability to alter our perception of time. Research on waiting times and on the estimated duration of musical excerpts has attested to these effects. Nevertheless, contrasting results exist regarding the influence of several musical features on time perception. When considering emotional valence and arousal, there is some evidence that positive-affect music fosters time underestimation, whereas negative-affect music leads to overestimation. Contrasting results, instead, exist with regard to arousal. Furthermore, to the best of our knowledge, a systematic investigation has not yet been conducted within the audiovisual domain, wherein music might improve the interaction between the user and the audiovisual media by shaping the recipients' time perception. Through the current between-subjects online experiment (n = 565), we sought to analyze the influence that four soundtracks (happy, relaxing, sad, scary), differing in valence and arousal, exerted on the time estimation of a short movie, as compared to a no-music condition. The results reveal that (1) the mere presence of music led to time overestimation as opposed to the absence of music, and (2) the soundtracks that were perceived as more arousing (i.e., happy and scary) led to time overestimation. The findings are discussed in terms of psychological and phenomenological models of time perception.

In the tradition of studies conducted on music's influence, a promising thread of research has focused on time perception, starting from the seminal work of Rai [14].
As we will see, aside from the research traditionally focusing on music (i.e., perceived duration of musical excerpts), some work has pointed at the domain of audiovisual stimuli [15] and film-induced mood [16], with some interest in the dynamic interaction between the auditory (i.e., musical) and visual elements [17]. For example, it has been found that music is capable of modulating and altering visual perception [18][19][20] due to phenomena such as auditory driving [21]. In this study, we are interested in the modalities through which music affects time perception. We firmly believe that this particular research theme has the merit of meeting the needs of a growing number of scientists, artists, and professionals who are interested in shaping the interaction between users and audiovisuals with some background music, whether in movies [22], educational videos [23], interactive games [24], or videogames [25,26]. For instance, there may be some utility in decreasing the self-perceived passage of time in interactive educational or tutorial contexts, especially considering that the majority of educational or tutorial videos contain background music. In such contexts, the challenge is to reach a balance between the entertaining effects of music that improve learning [3,27] and its distracting properties [28] that dampen attention. We are confident that proficient management of the background music could act upon the recipients' time perception, thus improving the interaction between the users and the audiovisual devices at hand. Before discussing the existing tradition of research, a clarification is needed on the assessment of time estimation.
As suggested by [29], two main paradigms exist in the literature for the study of time estimation: prospective and retrospective. In the former, participants are aware that, after the presentation of a given stimulus, questions will follow on the perceived duration of the elapsed time; this paradigm studies experienced time (i.e., the subject attends to the passage of time itself). In the latter, participants are not informed that questions about time perception will follow; this paradigm analyzes remembered time (i.e., the subject's attention is not focused on time perception) (for a detailed discussion of the different cognitive processing of prospective and retrospective time judgement, see also [30][31][32]). In the current work, as we are interested in analyzing the soundtrack's influence on the time estimation of an audiovisual experience, we focus on the retrospective paradigm (i.e., remembered time). We opted for this paradigm because we wanted our experimental viewers to be completely unaware of the task at hand; in other words, we wanted to avoid conscious time counting. To begin with, in Section 2, a summary of previous studies on the relationship between music and time perception is presented, alongside a focus on the musical parameters that have been proven to perform a role in affecting time perception in several contexts. In Section 3, more attention is devoted to two psychological models (i.e., Dynamic Attending Theory and Scalar Expectancy Theory) that attempt to explain how time perception works within the audiovisual domain. In Section 4, we present our online experiment, which we discuss in Section 5, where we also list some of the limitations of this work and a few suggestions for future research. A brief conclusion is presented in Section 6.
Previous Works on Music and Time Perception A variety of studies focused on how music alters our subjectively perceived time in an indirect fashion, for instance, considering the waiting times [33] in retail settings [34], restaurants [35], queue contexts [36,37], and on-hold waiting situations [38]. North and Hargreaves [33] compared the waiting times of four groups of participants who were waiting for an experiment to begin; three of the groups were provided with background music differing in complexity, while the last group was not provided with music (control condition). They found no differences between the music conditions, but the controls showed a significantly lower waiting time. Areni and Grantham [39] reported that when waiting for an important event to begin, their participants tended to overestimate the waiting time when they disliked the background music that was playing, whereas they underestimated the waiting time in the presence of background music they liked. Fang [35] found that slow-paced background music extended the customer waiting time in a randomly selected restaurant, whereas fast-paced background music shortened the waiting time of customers, i.e., they decided to leave earlier. Guéguen and Jacob [38] shed further light on the issue by analyzing the cognitive mechanisms that come into play in an on-hold telephone scenario. Their results proved that in comparison with the no music condition, the simple presence of music led to both an underestimation of the time elapsed and an overestimation of the projected time passed before a person would hang up. Finally, the most up-to-date meta-analytic review of the effects of background music in retail settings [34] (p. 761) concludes that, "A higher volume and tempo, and the less liked the music, the longer customers perceive time duration. Tempo has the greatest effect on arousal." 
In most of these studies, the dependent variables are indirect measures of time perception because participants do not explicitly report their time awareness; their behavior is simply annotated. In some other cases [40,41], an actual self-report of the wait length was assessed. Evidence that music alters the representation of time also stems from qualitative research on altered states of consciousness (ASCs) [42]. In such studies, subjects' reports often mention feelings of timelessness, time dilation, and time-has-stopped in correspondence with music listening activities [43]. To sum up, there exists a consensus that music has a robust role in influencing how we perceive time [15,25,37,40,[44][45][46], although fostering small-to-moderate effects, as Garlin and Owen [34] pointed out in their meta-analytic review. On the contrary, less consensus exists on the musical parameters that are responsible for the alteration of time perception. Research with both direct and indirect measures of time estimation has focused on diverse music parameters.

Musical Parameters and Time Perception

Several studies have investigated time estimation as a function of several musical parameters and types of music, primarily by assessing the perceived length of musical excerpts. To begin with, musical structure complexity has been found to increase time estimation [47]; on the contrary, the results on tempo are not consistent: while [44] found no evidence, other works proposed that a slower tempo seems to lead to time underestimation [37,48,49]. Coherently, [37] found temporal perception (perceived minus actual wait duration) to be a positive function of musical tempo. In relation to musical modes, a study [50] proved that the Locrian mode (diminished, thus more likely to be unpleasant) led to time overestimation as opposed to the Ionian and Aeolian modes. The modes of ancient Greece can be described as a kind of musical scale coupled with a set of particular melodic behaviors.
The Ionian mode is equal to the modern-day major scale. The Aeolian mode is today's natural minor scale (i.e., 1, 2, ♭3, 4, 5, ♭6, ♭7). The Locrian mode is a minor scale with the second and fifth scale degrees lowered a semitone (i.e., 1, ♭2, ♭3, 4, ♭5, ♭6, ♭7). Arguably, the Locrian mode tends to yield more negative valence than the Minor due to the different composition of the tonic triads (i.e., the principal chord that determines the tonality of the mode). When compared with the Minor mode, wherein the tonic triad is constituted by a minor 3rd and a perfect 5th (i.e., a minor chord), the Locrian mode's tonic is composed of a minor 3rd and a diminished 5th (i.e., a diminished chord). Because of their composition, diminished chords are considered dissonant and prove to be responsible for conveying the lowest valence and very high tension [51]. The music volume also might play a role; indeed, [45] proposed that people listening to quieter music tend to underestimate the time passed. Lastly, [52] reported that an overestimation of passed time was observed for pop music played in a major vs. minor mode, while [53] concluded that listening to familiar music leads to an underestimation of passed time.

Time Perception in Audiovisuals - Models and Mechanisms

When it comes to audiovisuals (i.e., complex multimodal stimuli that present at least two channels of information: auditory and visual), it can be assumed that different mechanisms come into play, especially since the viewer engages in a conscious elaboration of the overall stimulus by integrating its various interacting parts. The integration process can be easy or difficult depending on the stimulus' internal congruency, and this can, in turn, impact the time estimation. For instance, as elaborated by [54] (p. 504), "Because of its effects on information processing, stimulus congruity may influence the retrospective estimation of event duration.
Specifically, underestimation of lapsed time might be expected when the elements comprising the event are incongruent, because incongruent information tends to be more difficult to encode and retrieve because of the absence of a preexisting cognitive schema […], and because of weaker linkages between unrelated nodes in an associative network. […] exposure to incongruent information, like elevated arousal states, may create a distraction that reduces attention to one's internal "cognitive timer [55]". In their recent review on time perception in audiovisual perception, Wang and Wöllner [15] clarify that two Internal Clock models can account for the effects of music on time perception in the audiovisual context: the Dynamic Attending Theory (DAT), also known as the oscillator model [56], and the Scalar Expectancy Theory (SET), also referred to as the pacemaker-counter model [57]. Contrary to the SET, which postulates regularly emitted pulses by an independent internal clock, the DAT claims that the estimation of the duration of past events depends on the coupling between attentional pulses and the occurrences of external events [56]. For this reason, this model is sometimes referred to as an attention-based model. Crucial to this theory is the idea that the emission of attentional pulses or oscillations is a non-linear (i.e., dynamic) process, and that attention regulates the pulses to a greater extent than working memory (on the relationships among time perception, attention, and working memory, see [32]). Indeed, the adjective "dynamic" stems from the fact that, contrary to the SET model, the emission of the attentional pulses is not a static process; rather, it varies depending on the salience of the external events (i.e., stimuli). The DAT model also suggests that when the attention toward a stimulus is low, fewer pulses are emitted, thus leading to an underestimation of time.
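The pulse-accumulation idea shared by these internal-clock accounts can be made concrete with a toy simulation. The sketch below is ours, not the models' formal specification: a pacemaker emits pulses at an assumed base rate; arousal speeds the clock up (SET-style), while low attention lets fewer pulses be registered (DAT-style), and the judged duration is the pulse count read out against the calibrated base rate.

```python
# Toy sketch (illustrative only, with assumed parameter values) of a
# pacemaker-counter: modulation > 0 mimics heightened arousal (faster
# clock, overestimation); modulation < 0 mimics low attention (missed
# pulses, underestimation); modulation = 0 yields a veridical estimate.

def estimated_duration(actual_seconds, base_rate=10.0, modulation=0.0):
    """Subjective duration implied by a pacemaker-counter.

    base_rate  -- assumed pulses per second of the calibrated clock
    modulation -- fractional change in registered pulses, e.g. +0.2
                  under high arousal, -0.3 under low attention
    """
    registered_pulses = base_rate * (1.0 + modulation) * actual_seconds
    return registered_pulses / base_rate  # read out against the calibrated rate

estimated_duration(90)                   # -> 90.0  (veridical)
estimated_duration(90, modulation=0.2)   # -> 108.0 (overestimation)
estimated_duration(90, modulation=-0.3)  # -> 63.0  (underestimation)
```

The same 90 s clip is thus judged longer or shorter purely as a function of how the clock's effective pulse count is modulated, which is the mechanism both models invoke to explain arousal effects.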
Furthermore, other results agreeing with the DAT model indicate a positive correlation between musical tempo and time estimation in both auditory [58][59][60] and visual perception [61]. Conversely, the latter model (i.e., Scalar Expectancy Theory) proposes a linear accumulation of regularly emitted attentional pulses and a pacemaker that counts them. According to this memory-based theoretical framework, more akin to that of [62], a more important role is assigned to working memory, as it is a three-step model in which memorization constitutes the second and central phase, following the clock phase and followed by the time judgement. In particular, the storage-size model of memory of time perception [62] claims that richer stimuli (i.e., with a high amount of information) lead to the perception that a greater number of events occur in a given interval, thus favoring time overestimation. To sum up, among the models that account for the influence of music on time perception, two share the hegemony: the attention-based (DAT) and the memory-based (SET and the storage-size model of memory of time perception). Nevertheless, various results exist within the literature because various studies claim that different features of the stimuli influence time estimates (Table 1; e.g., the major mode has been associated with overestimation and the minor mode with underestimation [52]). As we are interested in the time estimation of an audiovisual, we focus our attention on two basic parameters, namely the valence and arousal of the emotions conveyed by music, for two reasons: 1. It is known that certain pieces of music can, through their emotional valence, foster positive affective states, to the point that music has traditionally been considered a valid mood inductor [68].
Therefore, in accordance with previously collected results from outside of the audiovisual domain [66], we can hypothesize that the positive affect experienced by the recipients while viewing may be negatively correlated with the estimation of the time elapsed [31]; that is, the better the viewers feel as they watch the scene (i.e., positive affective state), the less they perceive the passing of time. 2. A great deal of research suggests that the arousal (i.e., the physiological and psychological state of activation) conveyed by music might lead to time overestimation [34,54], possibly due to an effect on the internal clock system speed (in both attention- and memory-based models of time perception). Nevertheless, no one, to our knowledge, has ever shown such a phenomenon in an audiovisual domain. These two points underpin our research questions, which are introduced in the following section.

The Present Study

As stated above, from the literature on the influence of music on time perception, one cannot draw definitive conclusions about a variety of factors. With this study, we aim to investigate how the perceived length of a visual scene is affected by the background music (i.e., soundtrack) and, more specifically, by the following two particular features conveyed by the music: emotional valence and arousal.

Research Questions

We hypothesize that both the emotional valence and arousal conveyed by music have a key role in the time estimation of an audiovisual piece, although in contrasting ways. First, coherently with the studies on waiting times [38], we expect the mere presence of music to lead to a decrease in the perception of the elapsed time (i.e., time underestimation) (Hypothesis 1). Secondly, considering the literature on music pieces [66], we expect positively valenced music to result in time underestimation, and negatively valenced music to induce time overestimation (Hypothesis 2).
Third, in accordance with both the attention- and memory-based models of time perception, we hypothesize that the arousal level should lead to time overestimation (Hypothesis 3). Below, we describe the experimental paradigm, each construct, and its related measurement separately.

Method

We designed a between-subjects experiment wherein the participants watched a modified version (01′30″) of a short movie by Calum Macdiarmid [69] (Figure 1).
Using Reaper 6.29, we created five versions of the short movie, varying under the five experimental conditions, with the video accompanied respectively by a happy piece (Appalachian Spring, VII: Doppio movimento, by A. Copland), a sad cello melody accompanied by a piano (After Celan by D. Darling and K. Bjørnstad), a frightening track from the Original Motion Picture Soundtrack of the film Proxy (Murder by the Newton Brothers), a relaxing piece specifically composed to control anxiety [70], or by no music at all (i.e., control condition). This method allowed us to present all the possible combinations of valence and arousal (Table 2). Similarly to [20], the four pieces were chosen by considering the findings of [71] and the subsequent studies enumerated by [72] concerning a plethora of psychoacoustic parameters associated with emotional expression in music. Two of the pieces evoked negative affects but differed in the arousal dimension: the After Celan track's soft tone and morbid intensity fosters sadness and tenderness [73]. Conversely, the Newton Brothers' track's great sound level variability and the rapid changes in its sound level could be associated with the experience of fear [71], while its increasingly louder volume can evoke restlessness, agitation, tension [74] or rage, fear [75] and scariness [76].
In a similar way, the two other pieces both foster positive feelings, but with a marked difference in the arousal dimension: while Copland's piece's orchestration, fast tempo, and high pitch all cultivate a sense of highly exciting joy, the relaxing piece was specifically composed with the goal of controlling anxiety. To elaborate, it presents a relatively constant volume, narrow melodic range, legato articulation, and regular beat [70]. To mitigate any loudness perception effects, the perceived loudness of all the tracks was normalized via Loudness, K-weighted, relative to Full Scale (LKFS) [77].

We also aimed at ecological validity; thus, in order to allow people to participate in a less detached situation than a lab, we built an online procedure on Qualtrics.com. The participants accessed a single-use link (an anti-ballot-box-stuffing setting was enabled to prevent multiple participations from the same device) through which they could run the experiment. As a result of the online procedure, they were able to participate directly from home on their laptops, smartphones, or tablets, just as if they were watching an actual movie. An introductory screen summarily presented the task to the participants without mentioning the question about the time estimation. Immediately after this introductory screen, the informed consent statement was presented. After viewing the scene, a questionnaire was administered with three questions: the first two, which might be considered as a manipulation check, were designed to verify whether the emotional valence and arousal self-reported by participants were the same as those expected for each music condition. The last question aimed to assess the dependent variable, namely the participants' perception of elapsed time (i.e., time estimation).
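The loudness normalization mentioned above relies on an integrated LKFS/LUFS measurement. A full ITU-R BS.1770 implementation applies K-weighting filters and gating before averaging, which we omit here; the sketch below (our own, with made-up loudness values) only shows the final matching step: the linear gain that brings a track's measured integrated loudness to a common target.

```python
# Simplified sketch of loudness matching. We assume each track's
# integrated loudness (in LUFS/LKFS) has already been measured; the
# target level and the per-track values below are hypothetical.

def gain_to_target(measured_lufs, target_lufs=-23.0):
    """Linear gain that moves a track from measured_lufs to target_lufs."""
    gain_db = target_lufs - measured_lufs
    return 10.0 ** (gain_db / 20.0)

# Hypothetical measured loudness values for the four soundtracks:
tracks = {"happy": -18.5, "sad": -27.0, "scary": -16.0, "relax": -24.5}
gains = {name: gain_to_target(lufs) for name, lufs in tracks.items()}
# A louder-than-target track gets a gain < 1 (attenuation),
# a quieter-than-target track a gain > 1 (amplification).
```

Matching integrated loudness rather than peak level is what keeps perceived volume, rather than raw amplitude, constant across conditions.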
To avoid sequence effects (i.e., the possibility that a previous question could affect the following one), the order of the questions was completely randomized for each participant.

Affective States of the Recipients

To measure the affective state of the viewers, we needed to identify what we might call the emotional nuclei of the viewing session and the emotional nuances each soundtrack could add to the narration. To this aim, as we were interested in a fast and immediate answer that caught the gist of the emotional content of the vision, we decided against a Likert scale with several emotions as the items, because this would have resulted in increased fatigue for the recipients. Instead, we resorted to Plutchik's wheel of emotions [78]. In brief, we presented our participants with the image of the wheel (Figure 2), asking them to select with a click the region that best represented the emotion they were experiencing while viewing the video.

Arousal

To assess emotional arousal, we used a 100-point slider, asking our participants how active they felt while viewing the scene. The slider was initially set to 0; the recipients were required to place it at their desired point. As it would have been suboptimal to use a single adjective to refer to the concept of arousal unambiguously, we provided a note in the question to our participants that read: "When we say active, we also mean awake or ready".

Time Estimation

We asked our participants to indicate the length of the video by dragging a slider that ranged between 60 and 120 s (i.e., minimum and maximum values admitted); the slider was initially placed at the center of the bar (i.e., 90 s). Later, as was the case with [37], we created a measure of the gap between the estimated time and the actual time: time gap = estimated duration − actual duration.
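The gap measure used as the dependent variable can be sketched as follows (variable names are ours): the signed difference between the reported and actual duration, where negative values indicate underestimation and positive values overestimation.

```python
# Minimal sketch of the time-estimation gap. The clip length is the
# paper's 90 s; the example estimates are made up.

ACTUAL_DURATION = 90  # seconds

def time_gap(estimated_seconds, actual_seconds=ACTUAL_DURATION):
    """Signed estimation error: negative = under-, positive = overestimation."""
    return estimated_seconds - actual_seconds

time_gap(75)   # -> -15 (underestimation)
time_gap(110)  # -> 20  (overestimation)
```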
Participants and Preliminary Sample Data Analysis

As a first step, six hundred and three (n = 603) Italian participants were recruited by sharing the link of the study on social media and through university mailing lists (i.e., snowball procedure). Participation was voluntary, and the participants were not incentivized with any reward. Before our data analysis, to improve the reliability of our sample, we performed exclusions based on the following pre-established criteria:
• An attention check question, in which a short Likert scale was presented with the explicit instruction to avoid completing it; we excluded all those participants who completed the scale.
• A time counter on the screen displaying the video was incorporated (visible to the experimenters only) so as to exclude all participants who had not watched the whole video (i.e., time spent on that screen < 90 s).
• All those participants who completed the task in less or more than the mean duration ± 3 SD were excluded.
• All participants who did not complete the questionnaire in all its parts were also excluded.
After the above exclusions, our sample size decreased from 603 to 565 valid participants (mean age = 26.01, SD = 10.53; 339 females, 60%). The five experimental groups were comparable in the number of participants (range 104-119) and were gender-balanced (p = 0.41).

Results

For the statistical analyses, IBM SPSS 26.0 was used; the path analysis was processed through Mplus 8.5 [79]. The violin plots were made by means of R (ggplot2 package).
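The exclusion criteria listed above can be sketched as a simple filter (field names are ours, and the sample records below are made up for illustration):

```python
import statistics

# Sketch of the pre-established exclusions: failed attention check,
# incomplete viewing (< 90 s on the video screen), incomplete
# questionnaire, and overall completion time outside mean +/- 3 SD.

def apply_exclusions(participants):
    durations = [p["total_seconds"] for p in participants]
    mean, sd = statistics.mean(durations), statistics.pstdev(durations)
    lo, hi = mean - 3 * sd, mean + 3 * sd
    return [
        p for p in participants
        if not p["failed_attention_check"]
        and p["video_seconds"] >= 90
        and p["questionnaire_complete"]
        and lo <= p["total_seconds"] <= hi
    ]

sample = [
    {"failed_attention_check": False, "video_seconds": 95,
     "questionnaire_complete": True, "total_seconds": 300},  # kept
    {"failed_attention_check": True, "video_seconds": 95,
     "questionnaire_complete": True, "total_seconds": 310},  # excluded
    {"failed_attention_check": False, "video_seconds": 50,
     "questionnaire_complete": True, "total_seconds": 305},  # excluded
]
kept = apply_exclusions(sample)
```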
For each test, the effect size is provided by employing η (eta, for the chi-square statistics) or η² (eta squared, for the ANOVA statistics). In the ANOVA tests, the post hoc computed observed power is provided in terms of (1 − β). In the results of the model (Section 4.5.2), for each path, we provide the standardized path coefficient (β), the relative Standard Error (S.E.), the level of statistical significance (p value), and a 95% Confidence Interval (95% CI). In the case of the indirect effects, 95% Bias-Corrected Confidence Intervals are indicated (BCa).

Affective States of the Recipients

The heatmaps of Figure 2 provide a first and intuitive point of view of the participants' affective states. A common emotional nucleus emerges in all conditions, specifically the bottom region of Plutchik's wheel of emotions, which is the axis that includes pensiveness, sadness, and grief. It is worth mentioning that the other soundtracks add or subtract diverse emotional nuances in comparison with the control condition. For instance, comparing the controls with the happy group, the region of serenity/joy becomes more populated. When considering the scary condition, the serenity/joy axis loses relevance, while the expectancy area remains active, and apprehension and awe gain saliency. Conversely, when considering the sad condition, all the other axes aside from the pensiveness/sadness/grief axis become unnoticeable. Upon further analyses, considering that our participants simply clicked once on the Plutchik's wheel of emotions image in correspondence with the emotion they were feeling, we created an emotional score by assigning 1 point to the participants who chose a positively valenced emotion (21.9%), 0 points to non-valenced emotions (expectation, interest, surprise, and distraction, 17.3%), and −1 point to negatively valenced emotions (60.7%).
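The scoring step can be sketched as a small lookup. The category assignment below is a partial, illustrative mapping (only the non-valenced set is named explicitly in the text; the positive set is our assumption from the wheel's serenity/joy region):

```python
# Sketch of the +1 / 0 / -1 emotional score derived from a participant's
# single click on Plutchik's wheel. POSITIVE is an assumed subset;
# NEUTRAL follows the paper's list of non-valenced emotions; everything
# else on the wheel is treated as negatively valenced.

POSITIVE = {"serenity", "joy", "ecstasy", "acceptance", "trust", "admiration"}
NEUTRAL = {"expectation", "interest", "surprise", "distraction"}

def emotion_score(emotion):
    if emotion in POSITIVE:
        return 1
    if emotion in NEUTRAL:
        return 0
    return -1

scores = [emotion_score(e) for e in ["joy", "surprise", "sadness", "grief"]]
# -> [1, 0, -1, -1]
```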
We then performed a chi-square test to evaluate the distribution of the emotional valence in dependence of the condition, finding it to be significant, χ²(8, N = 565) = 101.34, p < 0.001, η(condition dependent) = 0.08, η(affective state dependent) = 0.40 (Table 3).

Time Estimation

Before proceeding with the analysis of variance, we studied the descriptive statistics. The first aspect to consider is that the majority of participants in our sample (71.3%) underestimated the actual length of the scene (M = −14.98, SD = 27.01, min = −62, max = 60). We then proceeded to the verification of our hypotheses (Section 4.1).

Hypothesis 1 (H1). Does the presence of music lead to time underestimation?

To verify Hypothesis 1, that is, whether the mere presence of music negatively influenced time estimation, a one-way ANOVA was performed, which revealed a main effect of the music [F(1, 563) = 6.46, p = 0.011, η² = 0.011, (1 − β) = 0.72]. Contrary to the hypothesis, the control group reported the video to be shorter (M = −21.03, SD = 26.10) as opposed to the music group (M = −13.62, SD = 27.01) (Figure 3). We can therefore state that Hypothesis 1 was not verified.
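The effect size reported alongside each ANOVA is the standard eta squared, η² = SS_between / SS_total, i.e., the proportion of total variance in the time-estimation gaps accounted for by the condition. A minimal sketch from raw group scores (the values below are made up, not the study's data):

```python
# Eta squared computed from raw scores, one list of time-gap values per
# condition. This is the textbook definition, not the SPSS output itself.

def eta_squared(groups):
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    return ss_between / ss_total

# Two hypothetical conditions whose means differ clearly:
effect = eta_squared([[-21, -20, -22], [-14, -13, -15]])
```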
Hypothesis 2 (H2). Valence in time estimation.

After analyzing all the groups in greater detail, we still found an effect of the music [F(4, 560) = 4.93, p = 0.001, η² = 0.034, (1 − β) = 0.96]. Subsequent custom hypothesis contrasts revealed that the significant differences against the control condition were those of the happy (M = −14.00, SD = 24.74, p = 0.050), scary (M = −7.37, SD = 29.08, p < 0.001), and relaxation conditions (M = −13.28, SD = 27.88, p = 0.031) (Table 4 and Figure 4). As concerns the specific roles of valence and arousal, we resorted to a path analysis that we describe in the following paragraph.

Hypothesis 3 (H3). Arousal in time estimation.

As for the verification of Hypotheses 2 and 3, a path analysis was performed to analyze the role of the valence and arousal as conveyed by the music and self-reported by our participants with regard to time estimation. The model presents two exogenous variables, namely the valence and the arousal conveyed by the music (i.e., the experimental conditions). Both variables were operationalized on three levels: the valence was denoted as −1 (negative valence: sad and scary), 0 (neutral valence/no music), and 1 (positive valence: happy and relaxation); the arousal was denoted as −1 (low arousal: relaxation and sad), 0 (neutral arousal/no music), and 1 (high arousal: happy and scary). For the next step (i.e., order of the model), the endogenous variables were the self-reported affective state and arousal. The first part of our model can be considered as a manipulation check, conducted to ensure that our participants' affective state and arousal were effectively and coherently affected by the pieces of music that we selected. Finally, the last endogenous variable was the time estimate.
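The −1/0/+1 coding of the two exogenous variables follows directly from the experimental condition and can be sketched as (condition labels are ours):

```python
# Coding of the path model's exogenous variables from the condition.
# Valence: sad/scary = -1, control = 0, happy/relax = +1.
# Arousal: sad/relax = -1, control = 0, happy/scary = +1.

VALENCE = {"happy": 1, "relax": 1, "control": 0, "sad": -1, "scary": -1}
AROUSAL = {"happy": 1, "scary": 1, "control": 0, "sad": -1, "relax": -1}

def code_condition(condition):
    return {"valence": VALENCE[condition], "arousal": AROUSAL[condition]}

code_condition("scary")  # -> {"valence": -1, "arousal": 1}
```

Note that the four music conditions cover all four valence-arousal combinations, which is what lets the model separate the two factors.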
To avoid normality issues, Robust Maximum Likelihood (MLR) was used as the estimator. Discussion Firstly, our results suggest that the mere presence of music causes an increase in time estimation in an audiovisual context. This finding seems to contradict that of other studies, which found that music presence, as opposed to music absence, led to longer waiting times, therefore suggesting a decrease in the perception of elapsed time (i.e., time underestimation) [33,38,81], because music leads listeners to perceive time as passing more slowly. We can account for such an apparent contradiction by considering that music has a twofold nature: on the one hand, when it is reproduced in the background (as in most of the studies on waiting times mentioned above), it may be conceived as a distractor that draws the focus of attention away from conscious time perception. On the other hand, when music is paired with a visual stimulus (as in the film music domain), it becomes a key part of the meaning of that scene, the integration processing of which requires added attentional and memory resources. Indeed, with respect to the above-mentioned models of time perception (Section 3), this effect of the presence of music might point in the direction of some memory-based phenomenon related to added complexity. In further detail, an audiovisual stimulus requires more information to be processed than a visual stimulus alone. Not only do audiovisuals require several parallel levels of processing, such as visual, music, kinesthetic, and, possibly, speech, sound FX (i.e., sounds recorded and presented to make a specific storytelling or creative point without the use of words or soundtrack, e.g., sounds of real weapons or fire), and text [82], but they also require their coherent integration aimed at building a working narrative, namely the subjective interpretation of a scene.
Such integration involves both bottom-up (sensory-perceptual) and top-down (expectative) processes: on the one hand, a recipient perceives information using their senses; on the other, one integrates this information using previous knowledge and cognitive schemas stored in long-term memory [82]. Other studies have already revealed that, under the influence of differently valenced soundtracks for the same video, not only do viewers generate diverse plot expectations [20,83] and alter their recall of the scene [84,85] (i.e., high-level processing), but they can also be driven and even deceived in a way that impacts their visual perception (i.e., low-level processing) [18][19][20][21]. Therefore, it can be assumed that such a to-be-processed integration, present only in the music conditions, could be the cause of time overestimation, in accordance with the memory-based model of time perception. Elaborating on the differences between the soundtracks in more detail, when the four music conditions are considered separately, both the positively valenced soundtracks (i.e., happy and relaxing) and the highly arousing ones (i.e., happy and scary) seemingly result in time overestimation (Figure 4). Nevertheless, to better clarify the roles of valence and arousal, we implemented a more sophisticated path analysis that considered not just the experimental conditions but also the subjectively perceived affective state and arousal level as plausible predictors of time estimation. The results of the model (Figure 5) clarified that the soundtracks' impact on the study participants was coherent with our hypotheses and, more importantly, that only the subjectively perceived level of arousal positively predicted the time estimation (in contradiction with [54,65]).
It appears that our results contrast with the well-known traditional adage that "time flies when you're having fun", or at least they correct this adage in quite a counter-intuitive fashion, that is: "Time flies when you're not activated". This outcome is coherent with those studies that examined several musical parameters, showing that fast musical tempi [37,58,60] and high musical structural complexity [47], features present only within the highly arousing soundtracks, led to overestimations. Similarly, our findings also overlap with those of studies that found that music in major mode (widely associated with positive affect) and music in minor mode (largely associated with negative affect) do not differ in their influence on time perception [64]. Were valence decisive, the music valence should have behaved as a negative predictor, given that the two positively valenced pieces were both in the major mode, whereas the two negatively valenced ones were both in the minor mode. It is also worth noting that these findings contradict those of [66], where systematic overestimation in the judgment of the duration of joyful musical excerpts was found, and the opposite was noticed for the sad tracks. We may account for such a difference by drawing attention to two significant differences between their procedure and ours: first, although both studies employ a retrospective paradigm, Bisson and colleagues [66] inserted a cognitive task between the two musical excerpts, thus fostering a relevant change in the participants' foci of attention that could have biased the internal clock mechanism. Secondly, and most importantly, it should be considered that their results (i.e., positive valence music in a major key fosters time overestimation as opposed to negative valence in a minor key) might also be explained in terms of arousal. In fact, the positive valence musical piece that was used by Bisson et al.
[66] (i.e., the 1st movement of Johann Sebastian Bach's Brandenburg Concerto No. 2 in F major, BWV 1047) can undoubtedly be considered to be an arousing composition, incomparably more arousing when contrasted with the negative valence musical piece that was used (Samuel Barber's Adagio for Strings in B minor, from the 2nd movement of String Quartet, Op. 11). As for the debate between the attention- and memory-based models, it is worth mentioning that no safe conclusion can be drawn from the current study. The attention-based model posits that time overestimations are due to a higher number of attentional pulses emitted in highly arousing situations, whereas the memory-based model posits time overestimation to be a phenomenon caused by stimulus complexity: the more complex the stimulus, the more processing is required, and a greater number of traces remain in memory, thus leading to overestimations. To put this in terms of the Scalar Expectancy Theory, the pacemaker regularly emits attentional pulses, but the counter device, in the presence of richer stimuli, counts an increased number of pulses. The issue here is that the two arousing soundtracks both present, apart from their faster tempi, greater perceived complexity compared to both the relaxing and sad tunes. To disambiguate between these two differently oriented models, in future works it could be profitable to compare, for instance, two arousing soundtracks differing in their degree of harmonic and melodic complexity (for example, a very fast bebop jazz tune with a techno track), and the same might be done with two scarcely arousing pieces. Rather than through the attention- and memory-based models, a phenomenological approach appears promising in explaining this and other aforementioned results. Such a phenomenological approach, promoted by Flaherty [86], has philosophical roots in the thought of Heidegger, Husserl [87], and Merleau-Ponty [88,89].
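The pacemaker-counter mechanism of Scalar Expectancy Theory mentioned above can be illustrated with a toy simulation. This is not part of the study, and all parameter values (pulse rate, gate probabilities) are arbitrary choices for the sake of the sketch.

```python
import random

def perceived_duration(actual_seconds, pulse_rate=10.0, gate_prob=0.8, seed=0):
    """Toy pacemaker-accumulator: pulses are emitted at a fixed rate, and an
    attentional gate passes each pulse to the counter with probability
    gate_prob. Higher arousal / richer stimuli are modeled as a higher
    gate_prob, so more pulses are counted and perceived duration grows."""
    rng = random.Random(seed)
    n_pulses = int(actual_seconds * pulse_rate)
    counted = sum(1 for _ in range(n_pulses) if rng.random() < gate_prob)
    return counted / pulse_rate  # counted pulses converted back to seconds

low = perceived_duration(120, gate_prob=0.6)   # e.g., a low-arousal soundtrack
high = perceived_duration(120, gate_prob=0.9)  # e.g., a high-arousal soundtrack
assert low < high  # more gated-in pulses yield a longer perceived duration
```

With the same random seed, every pulse that passes the 0.6 gate also passes the 0.9 gate, so the high-arousal estimate is necessarily the larger one, mirroring the arousal-driven overestimation discussed in the text.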
It proposes that time consciousness cannot be fully analyzed through perception because of the intrinsic nature of time, which is considered a construction more than an objectively perceived entity. As such, no fixedly emitted pulses of any sort can exist; on the contrary, our experience of the now arises from the integration of diverse perceivable stimuli into a single unit of content within consciousness. Yet, the number (i.e., how many of these stimuli we process) and the saliency of these stimuli vary depending on several factors, including memory, personality, affect, and physiological conditions. Two of Flaherty's forms of temporal experience (i.e., "temporal compression" and "protracted duration") deal with retrospective time judgement. Temporal compression happens when the listening activity is not so engaging (e.g., in our sad and relaxing conditions); in these cases, the listener's brain works almost automatically, so that "time will be experienced and retroactively constructed as having flowed quickly" [90] (p. 256) [31]. Conversely, the protracted duration phenomenon arises in cases of intense, novel, or extraordinary experiences (the highly arousing soundtracks of our study, as opposed to the less arousing ones, might belong to this category to a greater extent), and, similarly to the memory-based models, it is mainly due to a more complex structure of information that needs to be processed. Such a difference is eminently important as it accounts for our results in a coherent fashion. Limitations The four main limitations of this study must be highlighted. Firstly, all of the measures employed are self-reported. On the one hand, self-reported measures in psychological studies on music have been consistently applied to the study of musically elicited emotions over the last 35 years and have presented good reliability as long as they stem from validated theories or models of emotion [91].
Moreover, there is some evidence suggesting a consistent overlap between self-reported and psychophysiological measures such as skin conductance levels [92][93][94][95], heart rate [94,96], finger temperature, and zygomatic facial muscle activity [95]. On the other hand, it must be acknowledged that these two sets of measures cannot always be considered equally valid in all contexts, and other studies found more complex relationships between them [97], even in the audiovisual domain [98]. For these reasons, it would be good practice to replicate these findings in a laboratory setting by employing one or several psychophysiological measures [99]. Secondly, some studies insist on the role of music preference [25] and familiarity [100] in time perception. In our study, to construct a more condensed online task (and to avoid further losses in participation), we did not ask our participants to express their musical preferences, nor did we measure the extent to which they had previously been exposed to the genre of the soundtrack they were listening to. Similarly, we did not ask for their movie preferences. All of these personal characteristics could have slightly biased our findings. Thirdly, as regards stimulus complexity in audiovisuals, we need to mention that an assessment of the subjectively perceived musical fit [101], that is, the degree to which, according to a viewer, musical and visual information overlap with no semantic friction, could have been profitable. Nevertheless, so far, such an assessment has been validated for audiovisual advertising only [101,102]. Lastly, the short movie we used as the stimulus was not completely neutral from an affective standpoint; indeed, we found that 64.42% of the viewers in the no-music condition reported a negative affect during their viewing.
Although we are confident that such an "affect negativity" of the visual stimulus could not have jeopardized the validity of the results per se, we are less certain that it did not impact the perceived congruity of the audiovisual; namely, the fact that a negative visual stimulus was in some conditions paired with a positive soundtrack could have led to a decrease in the stimulus congruity, thus eliciting a slightly different (and perhaps more complex) processing. Indeed, in this design, we did not include a measure of the musical experience per se. In other words, we did not assess the self-reported valence and arousal of the musical pieces separately from the video. As a consequence, what we have referred to as a self-reported measure of the affective state of the participants is a measure of the overall audiovisual stimulus, subsequent to the aforementioned cognitive process of integration of the visual and auditory channels of information. On the one hand, the results of our model (Section 4.5.2) support the claim that the musical pieces were representative of the desired valence; on the other, there is also evidence that visual information influences the perception and memory of music [103]. In future studies, we plan to investigate the bi-directional influence between music and video stimuli, especially with reference to the interaction of musical fit [101,102] with time perception. Conclusions To conclude, two main results have been found in this study. The first is that the mere presence of music, regardless of its valence and arousal, leads to time overestimation in an audiovisual context, possibly due to the cognitive process of integrating the visual and auditory information.
Secondly, and most importantly, the subjectively perceived level of arousal, which is in turn increased by faster musical tempi and greater stimulus complexity (i.e., the happy and scary soundtracks), positively predicts the time estimation of an audiovisual (i.e., arousal leads to time overestimation). In the light of the studies mentioned in Section 3, the supposedly causal role of arousal in time overestimation appears to be solid. Further studies need to identify the underlying mechanism by distinguishing between attention- and memory-based models of time perception. It is our intention to underline the potential that these findings and this research niche might present in the audiovisual domain. The notion that the interaction between the soundtrack and the moving image can affect the viewers' time perception should receive further attention from media psychologists, video content creators, filmmakers, and, in general, any scholars or professionals interested in shaping and improving the interaction between viewers and an audiovisual. As the development of new technologies continues, their interactive uses become more and more explored and exploited. It is not far-fetched to claim that better management of the background music within audiovisuals could improve the interaction between the user and audiovisual devices by shaping the recipients' time perception. Our results confirm previously collected evidence [16,59,64] revealing that musically conveyed arousal, and not specific emotions, fosters time overestimation within a narrative audiovisual scene. We are aware that, from a naïve point of view, the fact that arousing music steers listeners towards time overestimations might appear paradoxical. Instead, this is far from unknown among music composers.
For instance, it is said that Maurice Ravel was very disappointed by Wilhelm Furtwängler's rendition of his Boléro, which was so fast that he thought it would last forever [104]. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Memory shapes visual search strategies in large-scale environments

Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.

In natural behavior, visual information is actively sampled from the environment by a sequence of gaze changes. These gaze changes reflect shifts of attention, and are frequently the result of a visual search operation that specifies a region of the peripheral retina as the target of the next eye movement. Locations in scenes are selected as gaze targets in order to direct the fovea to a region where new information can be gathered for ongoing visual or cognitive operations. A large body of work in visual search has revealed many of the factors that influence search efficiency, such as the stimulus features of the image, top-down guidance, and scene semantics 1 . However, most of this work has been done using 2D displays on computer monitors, and only a relatively small number of studies have examined visual search in the natural world [2][3][4] .
This is an important issue because the nature of the stimulus in standard experimental paradigms differs considerably from that in everyday experience 3,[5][6][7] . For example, experimental paradigms typically entail brief exposures to a large number of different images, whereas in natural settings humans are immersed in a relatively small number of environments for longer durations. This provides viewers with the opportunity to develop memory representations of particular environments, and to use such representations to guide search, rather than interrogating the visual image. In addition to the greater opportunity to build memory representations in normal experience, the involvement of body movements may influence the development of such representations. For example, when moving around a room it is necessary to store information about the spatial layout in order to be able to orient to regions outside the field of view. Land et al. 8 noted instances when subjects made large gaze shifts to locations outside the field of view that involved eye, head, and body movements, and were remarkably accurate. Another aspect of 3D environments is that movement is self-initiated and accompanied by proprioceptive and vestibular feedback that allow the development of abstract spatial memory representations 9 . These components of active behavior are absent in experiments that attempt to understand scene learning and visual search using 2D stimuli. Jiang et al. 3 investigated visual search in an outdoor environment and found that the ability to move the body during search allowed subjects to code targets in both egocentric and allocentric frames, whereas coding in laboratory environments was primarily egocentric. They also demonstrated that subjects learnt statistically likely regions for the target location and used this to speed up search. Thus the structure of memory representations and strategies for search are likely influenced by experience in immersive environments. 
Another factor to consider in understanding the role of memory is the cost of the movements when exploring large spaces, which is presumably tied to the pervasive effects of the neural reward machinery [10][11][12] . For example, in locomotion, subjects choose a preferred gait that reflects the energetic minimum determined by their body dynamics 13 . Other experiments have demonstrated that subjects store information in spatial memory if a large head movement would be needed to update the information, presumably because head movements are energetically costly [14][15][16][17] . In a real-world search task, Foulsham et al. 18 found that 60-80% of the search time was taken up by head movements, making minimization of such movements a significant factor. Visual search typically occurs in complex, multi-room environments, and involves planning and coordination of movements of the eyes, head and body through space [18][19][20] . Use of spatial memory representations should therefore allow more efficient planning of the path through the space. Experiments on the role of memory in visual search have revealed a complex picture. In 2D experiments, subjects often do not use memory when the search can be accomplished easily without it 21 . On the other hand, the semantic structure of the scene, learnt from past experience, is an important factor in allowing more efficient search in both 2D and 3D environments 22,23 . In 2D environments, the locations of objects that have been searched for previously are remembered, but objects fixated incidentally are not 24 . Previously we examined visual search in immersive virtual apartments, and found results similar to those of Võ and Wolfe, where memory was good for task-relevant objects, but not for other objects in the room 25,26 . In the present experiment we explore further how memory is used to guide search in immersive environments.
Our previous results suggested that much of the advantage of experience might come from learning the spatial layout of the apartment and remembering the correct room to explore. Previous work also suggested that spatial memory may be organized hierarchically, where object location is represented relative to a sub-region of the environment, and the sub-region is represented relative to the larger scale environment 27,28 . In the present experiment we explore how representations at different spatial scales are used to aid search. Do subjects remember the exact coordinates of the target or a less precise location, such as the side of the room the target is on? If memory is at a coarse scale, such as which room, which side of the room, or which surface, then it may be faster to use visual cues rather than an imprecise memory, especially when the targets are relatively easy to locate and within the field of view. To probe how memory was used, we employed the same virtual reality apartment as in Li et al. 25 . Subjects searched for targets on three occasions. We then randomly changed the target locations and asked how this affected search (Fig. 1). If subjects use a visually based search strategy once they are in the correct room, this manipulation would have little effect. On the other hand, if the specific target location relative to the spatial layout of the apartment is remembered, subjects may orient their gaze to the previous target location despite the fact that it is no longer there. We found that subjects frequently fixated the previous target location despite its absence, indicating that, in these instances, search was driven by memory of spatial location. Additionally, we found that attention increasingly narrowed down to relevant parts of the scene at different spatial scales. Finally, we examined whether the cost of movement was a factor in determining the usage of memory.
We show that the total distance traveled and the extent of head movements decreased sharply with experience. Thus visual search in natural environments is strongly influenced by spatial memory and this may reflect the constraints imposed by moving the body in space. Results Effect of changing target location on search. We first examined the consequences of moving the targets in the 4th block of trials. An example of gaze and head direction at different points in time during a search trial from one subject is shown in Fig. 2. A movie showing the sequence of gaze and head direction change can be found at https://youtu.be/hgxAo1vOT7w. The subject was searching for a pink cone, which was moved to the new location indicated in the figure. The red and blue arrows in the figure indicate head and eye direction, respectively. Figure 2A shows that the subject was pointing her gaze and head toward the target even before seeing what was inside the room. In Fig. 2B, once the room became visible, the subject maintained gaze on the old target location and continued fixating the new object there while approaching it during the next second, before fixating the actual new target location. This behavior suggests that the old target location was stored in spatial memory relative to the coordinates of the apartment and that the fixation was programmed solely on the basis of that memory. We analyze the head movements in more detail below. Across 8 shuffling trials we found that subjects fixated the previous location of the target on 58.3% of the trials. Of all the fixations made in the room where the target used to reside, 57.9% were made to its previous location. This results in an average of 0.9 fixations to the previous location of the target. Once in the correct room, 30.6% of first fixations and 44.4% of the first two fixations were made to the previous target location.
These results suggest that subjects have memory representations of the target locations and are using them to guide gaze on many of the trials. Average fixation duration on the old location was compared with average fixation duration to other locations in the same room, but no differences were found (paired t-test: t(8) = 0.94, p = 0.38, Cohen's d = 0.31). Figure 3 shows search time and number of fixations as a function of search block. Each block consists of 8 search trials with a different target on each trial. In repeated blocks the same set of 8 targets were searched for again, in a different random order. Search time and fixations in the correct room (containing the target) are separated from those in the incorrect room (target absent). Note that there were initially large numbers of fixations and long search times in the incorrect room, both of which declined rapidly on the 2nd and 3rd search blocks (time: Welch's F(2, 12.14) = 9.4, p = 0.003, η 2 = 0.52; fixations: Welch's F(2, 11.5) = 6.6, p = 0.012, η 2 = 0.42). In the correct room there were fewer fixations and more rapid search initially, which also declined in the first three blocks (time: Welch's F(2, 14) = 6.48, p = 0.01, η 2 = 0.42; fixations: Welch's F(2, 14) = 6.06, p = 0.013, η 2 = 0.47). In Block 4, when target locations changed, there was a small (but non-significant) increase in fixations and search time in the incorrect room, but essentially no change in fixations in the correct room (Games-Howell test for Block 3 vs. Block 4, time/correct room: p = 0.42, time/incorrect room: p = 0.26; fixation/correct room: p = 0.7; fixation/incorrect room: p = 0.18). Thus, as long as subjects chose the correct room by chance on the 4th block, they found the target rapidly even though it was in a new location. This means that subjects also learned general characteristics of the search space that allowed rapid search.
In Block 5, when the target locations reverted to their original configuration, search time and number of fixations were similar to those in Block 3. There was a small but non-significant decrease in the number of fixations in the incorrect room (post-hoc test results for Block 3 vs. Block 5, time/correct room: p = 0.72, time/incorrect room: p = 0.9; fixation/correct room: p = 1; fixation/incorrect room: p = …). Attention and memory structure. In this section we explore what subjects learned about the space, given the small impact on search performance from changing target location. Figure 4 plots heat maps of gaze distribution on different trials of one subject searching for one specific target, and illustrates how fixations change over search blocks. In the first search block there were numerous fixations in the incorrect room, and a number of fixations on regions other than the four surfaces that contain targets. In the second and third blocks there were fewer fixations overall, and fewer fixations on non-surface areas. When the target was moved to the other room in Block 4, almost all fixations fell on surfaces, even in the incorrect room. When the target returned to its previous location in Block 5, the regions fixated in the correct room were similar to those in the 2nd and 3rd searches. To examine this further, we tested whether subjects learn to attend to relevant regions and whether the space is represented in a coarse-to-fine manner. To find a target, subjects must first choose the correct room and direct their heads towards the correct side of the room. Within the room there are four surfaces that contain targets, one of which will be correct on the current trial. We explored gaze distribution at these different spatial scales in the first three search blocks. First we assessed the percentage of trials in which subjects chose the correct room. Figure 5A shows that by the third block subjects chose the correct room 73.8% of the time, increasing from 45% in Block 1.
This increase in accuracy in the first three blocks was significant (F(2, 56) = 6.92, p = 0.002, η 2 = 0.2). Figure 5B shows the proportion of trials in which initial fixations, once subjects were in the correct room, were on the same side of the room that the target was on. Figure 5C shows the proportion of fixations that were on relevant surfaces, in which other targets were located. Figure 5D shows the proportion of fixations on the correct surface, where the target of the current trial is located. Across the first three blocks there were significant improvements in: (i) the percentage of trials in which subjects first fixated the correct side of the room after it became visible (F(2, 23) = 7.07, p = 0.004, η 2 = 0.4), (ii) the percentage of trials in which participants fixated relevant surfaces in both the correct and the incorrect room (F(2, 214) = 14.9, p = 0, η 2 = 0.12), (iii) the percentage of trials in which the correct surface was fixated (F(2, 214) = 16.82, p = 0, η 2 = 0.14). Thus subjects quickly learned the relevant regions of the space at different scales. Once gaze landed on the correct surface, around half of the time it was on the target. Changing target location in Block 4 also came with the cost of decreasing accuracy. This was true for room choice (p = 0.02), side of the room first fixated (p < 0.001), fixations to relevant surfaces (p = 0.022), and percentage of fixations to correct surfaces (p = 0.003). A notable feature of the change in allocation of attention was the rapid reduction of fixations that occurred, even when subjects went into the incorrect room. Since subjects appeared to rapidly learn to look at surfaces, we analyzed if the faster exclusion of the incorrect room could result from restricting search to surfaces. Fixation count to non-surface areas dropped quickly by 6.28 fixations after Block 1, which is consistent with the rapid improvement in search performance. 
Fixation count to non-surface areas in the incorrect room also dropped significantly in the first three blocks (Welch's F(2, 96.3) = 10.28, p = 0, η 2 = 0.076). Taken together, these results suggest that memory of the spatial structure and allocation of attention to the relevant parts of the scene are the dominating factors that lead to improvement in search performance. Guidance of movements from spatial memory. Given that subjects rapidly learn the structure of the space, it seems likely that this allows more efficient planning of body movements. To address this issue we examined the change in total distance traveled while searching as a function of trial block. This is plotted in Fig. 6. We summed the change in the coordinates of head position to calculate the total path distance in a trial. The distance traveled differed significantly across the five blocks (F(4, 94) = 50.52, p = 0, η 2 = 0.69), and the distance traveled in Blocks 2-5 was significantly lower than that of Block 1 (p = 0). There was a significant increase in distance traveled from Block 3 to Block 4 when target locations changed (p = 0.045), but no significant difference was found. To understand the extent to which subjects learnt to direct the head towards the targets, the angular difference between head direction and the direction from head to target was calculated and is plotted in Fig. 8 for each of the 4 target locations, as a function of block. Note that for Block 4 the old target location was used as the reference. The angle between head direction and direction to target one second before the room was visible is shown in Fig. 8A, and the angle during the first fixation in the room is shown in Fig. 8B. Note that the averages of angles in the thirty-frame window around 1 sec before the scene became visible and around the first fixation were used.
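The two movement measures described in this section, total path distance from summed head-position changes and the angular difference between head direction and the head-to-target direction, can be sketched as follows. This is a minimal illustration with hypothetical data, not the authors' analysis code.

```python
import numpy as np

def path_length(head_positions):
    """Total distance traveled: sum of Euclidean distances between
    successive head-position samples (an N x 3 array of coordinates)."""
    steps = np.diff(np.asarray(head_positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

def head_to_target_angle(head_pos, head_dir, target_pos):
    """Angle (degrees) between the head's facing direction and the
    direction from the head to the target."""
    to_target = np.asarray(target_pos, float) - np.asarray(head_pos, float)
    head_dir = np.asarray(head_dir, float)
    cos = np.dot(head_dir, to_target) / (
        np.linalg.norm(head_dir) * np.linalg.norm(to_target))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical samples: a 3-4 right-angle path and a 90-degree head error.
assert path_length([[0, 0, 0], [3, 0, 0], [3, 4, 0]]) == 7.0
assert round(head_to_target_angle([0, 0, 0], [1, 0, 0], [0, 0, 5])) == 90
```

Clipping the cosine before `arccos` guards against floating-point values marginally outside [-1, 1] for nearly parallel vectors.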
In the first three blocks, the angle 1 sec before the scene became visible differed significantly between the four locations but not between blocks (two-way ANOVA). Location 1 required the smallest change in head angle in order to orient to the target when subjects walked down the hallway before entering the room, which might underlie the effect for the target at that location. Overall, the head error data do not provide a good case that head error is reduced across trials, possibly because the data are quite noisy, and the constraints of the head-mounted display may have led subjects to restrict head rotations. The question requires further investigation with a lighter HMD.

Discussion

In this study we examined the nature of the memory representations that guide search in 3D immersive environments. When targets were moved to different locations on the fourth search episode, subjects fixated the old location of the target on 58% of trials. On 44% of trials the old location was one of the first two fixations. These saccades were presumably programmed on the basis of spatial memory, since the targets were easy to find and quite salient. Since subjects still made 20% errors in room selection on the third search episode, memory was not perfect, but search was still heavily weighted towards a spatial memory-based strategy. Surprisingly, however, general search performance was not disturbed much when locations were shuffled. Thus memory for the structure of the space was sufficient for efficient search even when the specific spatial coordinates of the target were not known. When analyzing what was learned about the structure of the environment, we found that fixations were progressively restricted to relevant parts of the space. Consistent with the findings in Li et al. 25, better choice of room was found as a result of experience. Over blocks the proportion of fixations to the correct side of the room, to relevant surfaces, and to the correct surface also increased.
This finding is compatible with the idea that subjects learned the structure of the space at different spatial scales: room, room side, surfaces, and specific location. Subjects learned very quickly where not to look. Such a representation allows efficient guidance of eye movements, since subjects can restrict search to only some parts of the space. This is supported by the reduction in total path length and cumulative head rotation during search that we found. There was also some suggestion that subjects were orienting their heads towards the targets even before they were visible, in the case of the two targets at the sides of the rooms, although this requires further investigation. The improvement of search performance in the first three blocks, especially in the incorrect room, replicates our previous findings suggesting that spatial context is learned very quickly 25. Draschkow & Võ also found similar learning of context in a real apartment, which appeared to result from incidental fixations 4. The new manipulation of shuffling the locations in Block 4 barely affected performance. In the cases in which the target was moved to a different surface within the same room, subjects scanned mostly the relevant surfaces and were able to find the target as fast as in previous blocks, although they were likely to check the old location of the target as well, which might add a small cost. The primary cost was the greater probability of entering the incorrect room. In that case subjects could rapidly exclude the room by restricting gaze to the relevant surfaces (a maximum of four). Within a few trials after targets were shuffled, subjects appeared to realize that the locations of the targets changed on every trial and thus adopted a strategy of searching for targets based mostly on their knowledge of the general layout of the room. In Block 5, targets returned to their original locations.
Within a few trials, subjects appeared to realize this and directed their attention to the old location reliably, so that performance was at the same level as in Blocks 2 and 3. Thus the search strategy again incorporated the use of memory for both the global structure and target locations. Manginelli and Pollmann performed a similar manipulation in a 2D search paradigm with T's and L's 29. Despite the difference in the paradigms and the overall learning rate, the results are quite similar, indicating a tendency to code targets by their location even when the context is hard to learn. The search targets we used in the experiment were geometric objects that do not have obvious associations with their surrounding items. As shown by Võ and Wolfe 24,30, in naturalistic images scene semantics, rather than episodic memory from search experience, often dominate search performance. Using geometric objects allowed us to assess the effect of episodic memory on search without the effects of scene semantics. The choice of the correct room, the correct side of the room, the correct surface, and the correct location on the surface all suggest episodic memory of spatial location. However, there is evidence for some effect of scene semantics, since subjects preferentially fixated surfaces in the early trials, indicating that a prior that small objects will be found on surfaces was used. Thus spatial learning in this kind of context could occur very rapidly, since humans have presumably learned the basic statistical regularities of indoor environments. For example, in the present experiment subjects rarely looked at the floor or ceiling for targets, demonstrating the existence of strong priors. Previous work with 2D images has also provided evidence for memory representations at different spatial scales. Hollingworth 31,32 showed that both local and global image structure learnt on previous trials affected search, and that disrupting spatial relationships impaired search performance.
With simple stimuli, an effect of local context is only found after extensive experience [33][34][35], although global aspects of the scene dominate the contextual cueing effect with real-world scene images 36. With realistic immersive environments, we found that local contextual cues provided by neighboring objects had little effect 25, and the current results indicate that global cues such as the correct room and correct surface are more important. Although the memory representation exists at different spatial scales, it does not need to be accessed in a coarse-to-fine manner. Indeed, the restriction of fixations to surfaces developed quickly even though room choice was occasionally incorrect. Marchette, Ryan and Epstein 37 found that memories at different spatial scales are to some extent independent, in conflict with Brooks et al.'s finding 37,38. In our experiments, subjects appeared to learn the room, the side of the room, and the correct surface in the room at similar rates, and it is unclear whether memory representations at these different scales are organized hierarchically or independently. The present results suggest that visual search strategies in realistic environments are strongly dependent on spatial memory. This most likely becomes an important factor when body movements are involved, because of the accompanying energetic costs. Recent approaches to understanding sensory-motor decisions reveal a critical role for rewards and costs in momentary action choices such as those involved in the search process here 39,40. Although there is extensive evidence about the role of reward on oculomotor neural firing in neurophysiological paradigms 41, it is unclear how this machinery controls natural behaviors. In addition, less is known about dopaminergic modulation of the neural circuitry underlying head and body movements, at least at the cortical level.
Our previous work compared visual search in immersive environments with search in 2D images, where the head was fixed and only the eyes moved. We found that in 3D, subjects initially stayed in the incorrect room for a longer time and were better at learning the correct room, presumably because of the high cost of moving between rooms in 3D. These results, together with the present findings, suggest that the costs of moving the head and the body are an important aspect of visually guided behavior in natural environments, and must be considered for an integrated understanding of decision making in natural tasks. Although it is plausible to suggest that spatial memory representations are adopted in order to minimize energetic costs, the findings in our experiment are somewhat equivocal. While the total distance traveled dropped sharply with experience, it did not increase very much in Block 4, when the targets were moved to new locations. Thus it might be argued that subjects simply learned an efficient strategy for searching the space, unrelated to memory of target location. After experience, subjects walked just inside the door and quickly scanned the surfaces, a strategy that is effective even if the correct surface is not known. However, the adoption of the shorter travel paths depended on spatial knowledge of the layout, so there is clearly a tendency to reduce walking distance. Teasing apart these different explanations will require a paradigm where visually guided search is more difficult. Some aspects of gaze behavior are unaffected by moving around 42, so exactly how costs affect search in ambulatory contexts requires further examination. It seems clear that examination of scene memory needs to explore realistic environments where subjects move around. In a comprehensive review, Chrastil and Warren concluded that idiothetic information deriving from efferent motor commands and sensory re-afference aids the development of spatial memory 9.
They also concluded that the allocation of attention is a critical factor. This has been demonstrated by other experiments using real environments that do not explicitly investigate visual search. For example, Tatler & Tatler found that memory in a real environment was best when subjects were asked to attend to tea-related objects 43. Draschkow & Võ found that object manipulation influenced memory 4. Droll & Eckstein found that changes were easier to detect if the objects had previously been fixated 44. In a paradigm similar to the one used here, where objects were changed following experience in virtual environments, Karacan & Hayhoe and Kit et al. found evidence that gaze was attracted to the changed region 26,45. However, the present paradigm involved active search, so the results are difficult to compare. In summary, there are many complex factors underlying the development and structure of scene memories, many of which involve active body movements, and it appears that search in 2D displays reveals only a subset of those factors. To summarize, humans typically conduct visual search in dynamic, large-scale environments requiring coordination of eye, head and body movements. Our results demonstrated that learning of the structure of the environment occurred rapidly, and attention was quickly narrowed down to relevant regions of the scene at multiple spatial scales. Memory of the large-scale spatial structure allows more energetically efficient movements, and this may be an important factor that shapes memory for large-scale environments.

Scientific Reports (2018) 8:4324 | DOI:10.1038/s41598-018-22731-w

Methods

Experimental Environment. The virtual environment was composed of a bedroom and a living room (3 m × 6 m each) connected by a hallway (1 m × 6 m) in between (Fig. 1A). It was generated in FloorPlan 3D V11 (IMSI) and rendered by Vizard 4 (WorldViz).
Subjects saw the virtual environments through an nVisor SX111 head-mounted display (NVIS), which was fitted with a ViewPoint eye tracker (Arrington Research). The resolution of the HMD was 1,280 × 1,024 and the field of view (FOV) was 102° in total horizontally (76° per eye) and 64° vertically. Motion tracking was performed by a HiBall motion tracking system (ThirdTech) at 600 Hz, and the latency for updating the display after head movement was 50 to 75 ms. The left eye was tracked at a sampling rate of 60 Hz. Tracker accuracy was about 1°. The calibration of the eye tracker was performed before, in the middle of, and at the end of the experiments on a nine-point (3 × 3) grid shown in the HMD. Recalibration was conducted when track loss or drift was observed, although we tried to minimize the amount of recalibration as it interrupts the task. Data with frequent track loss and poor calibration at the end of the experiment were excluded. At the end of the experiment, the synchronized videos of the scene display, eye tracking data, and metadata (head position, object position, and event timing) were saved in an MOV file for subsequent data analysis and for verification of the automated analysis. Subjects clicked a button on a Wiimote (Nintendo) when the target was found, but the button press could end the trial only when they were within 1.5 m of the target and were looking at it, so that the target was within the central 70% of the screen. This prevented subjects from clicking the button randomly without locating the target. When the Wiimote button was triggered successfully, a 'Trial Done' message appeared on the screen to signal the completion of a trial. Targets. Target objects were a set of eight different uniform-colored, geometric-shaped objects placed on eight different pieces of furniture, such as a cabinet, dining table, desk, side tables, and TV table, four in each room.
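The trial-completion rule described above (the button press counts only when the subject is within 1.5 m of the target and the target lies in the central 70% of the screen) amounts to a simple geometric check. A minimal Python sketch, assuming screen coordinates are normalized to [0, 1]; the function name and argument layout are hypothetical, not the study's code:

```python
import math

def can_end_trial(head_pos, target_pos, target_screen_xy,
                  max_dist=1.5, central_frac=0.70):
    """Return True if a button press would be accepted: subject within
    max_dist metres of the target, and the target projected into the
    central fraction of the (normalized) screen."""
    dist = math.dist(head_pos, target_pos)
    half = central_frac / 2.0
    x, y = target_screen_xy
    in_center = abs(x - 0.5) <= half and abs(y - 0.5) <= half
    return dist <= max_dist and in_center
```

A press with the target near the edge of the display, or from across the room, would simply be ignored until both conditions hold.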
The visual angle that targets subtended was approximately 2° to 2.5° on average when viewed from the entrance to the center of a room. Subjects usually found the target before reaching the center of the room. The same set of targets was searched for in five search blocks. Each target was searched for once in each block of trials, so there were eight trials in each block. The locations of the targets were the same during the first three blocks. In Block 4, each target was moved to a different surface, randomly chosen from all the other locations that contained target objects in the first three blocks. That is, the target locations were shuffled. This was true for each trial in the fourth block. Targets could move to a different surface in the same room or in the other room. In Block 5, the targets returned to their original locations, where they had been in the first three blocks. Subjects were not told that the targets were shuffled in Block 4 or that they moved back to the old locations in Block 5. Procedure. Prior to the start of the experiment, the eye tracker was calibrated and subjects were given instructions about the procedure while standing in front of the TV in the hallway within the VR environment. They were then instructed to proceed from the hallway to one of the two rooms for one minute to become familiar with navigating in a virtual environment and looking around freely. Following free exploration the subjects returned to the hallway to start the main task: visual search for targets over five blocks, for a total of 40 trials. Each trial followed the same structure: at the beginning of a trial, subjects returned to the hallway from whichever room they had visited and approached the TV screen at the end of the hallway, on which an image of the next target object appeared. Subjects then had to turn around and decide which room to enter to look for the target.
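The Block 4 manipulation described above, in which every target moves to a different surface chosen from the other occupied locations, can be read as a derangement of the eight locations (a permutation with no fixed points). A minimal Python sketch under that reading; `shuffle_target_locations` is a hypothetical helper, not code from the study:

```python
import random

def shuffle_target_locations(locations, rng=None):
    """Reassign each target to a location drawn from the OTHER occupied
    locations, so no target stays on its original surface. Uses rejection
    sampling: reshuffle until the permutation has no fixed points."""
    rng = rng or random.Random()
    n = len(locations)
    while True:
        perm = list(range(n))
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            return [locations[perm[i]] for i in range(n)]
```

For eight locations, roughly 1/e of random permutations are derangements, so the rejection loop terminates after about e tries on average.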
They were allowed to traverse freely between the two rooms until the target was located, which usually took one to three room visits. Each room had a door that opened when approached but blocked the view of the target until the room had been entered. The order of the target objects was randomized within each search block and across subjects. The targets on successive trials were never repeated. Gaze analysis. An in-house program written in Vizard was used to reconstruct the environment from the metadata and generate a data file that included the positions of head, eye, and objects, and the identity of the object that the gaze intersected at each frame. The eye tracking data were then analyzed in Matlab. A median filter was first applied to remove outliers, and then a three-frame moving average filter smoothed the signals. Next, the data were segmented into fixations and saccades using a velocity-based algorithm. A fixation was detected when the velocity of the signal was lower than 60 degrees/s and lasted at least 100 ms. Note that a relatively high velocity threshold was used because of the signal from the low-velocity vestibulo-ocular reflex, which adds to the velocity of the eye movement signal while the head is moving. Transient track losses were ignored if the fixation was made to the same object before and after the loss. When consecutive fixations were less than 80 ms and 1.5° apart, they were combined into one fixation. The labeling of the object being fixated was determined by the program with a window of 80 × 80 pixels, spanning approximately 5° × 5° of visual angle, centered at the gaze point on each frame. This maps the 2D gaze coordinates onto scene objects for each frame of the data. A relatively large window was used to allow for possible drifts of the eye signal and inaccuracy of the calibration.
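The fixation-parsing pipeline described above (median filter, three-frame moving average, 60 deg/s velocity threshold, 100 ms minimum duration, merging of fixations under 80 ms and 1.5° apart) can be sketched as follows. The original analysis was in Matlab; this is an illustrative Python version with hypothetical helper names:

```python
import numpy as np

FS = 60               # eye tracker sampling rate (Hz), from the Methods
VEL_THRESH = 60.0     # deg/s: below this counts as fixational
MIN_DUR_S = 0.100     # s: minimum fixation duration
MERGE_GAP_S = 0.080   # s: merge fixations closer in time than this...
MERGE_DIST = 1.5      # deg: ...and closer in space than this

def _filt3(x, fn):
    """Apply a 3-sample sliding function (median or mean) along axis 0."""
    p = np.pad(x, ((1, 1), (0, 0)), mode="edge")
    return fn(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def detect_fixations(gaze_deg):
    """gaze_deg: (N, 2) horizontal/vertical gaze angles in degrees at FS Hz.
    Returns a list of (start, end) sample indices (inclusive) of fixations."""
    g = np.asarray(gaze_deg, dtype=float)
    g = _filt3(g, np.median)     # median filter to remove outliers
    g = _filt3(g, np.mean)       # three-frame moving average
    vel = np.linalg.norm(np.diff(g, axis=0), axis=1) * FS   # deg/s
    slow = np.concatenate([[vel[0] < VEL_THRESH], vel < VEL_THRESH])
    # collect runs of slow samples lasting at least MIN_DUR_S
    runs, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(slow) - 1))
    runs = [r for r in runs if (r[1] - r[0] + 1) / FS >= MIN_DUR_S]
    # merge fixations separated by < MERGE_GAP_S and < MERGE_DIST
    merged = []
    for r in runs:
        if merged:
            p = merged[-1]
            gap = (r[0] - p[1]) / FS
            d = np.linalg.norm(g[p[0]:p[1] + 1].mean(0)
                               - g[r[0]:r[1] + 1].mean(0))
            if gap < MERGE_GAP_S and d < MERGE_DIST:
                merged[-1] = (p[0], r[1])
                continue
        merged.append(r)
    return merged
```

The object-labeling step (the 80 × 80 pixel window around the gaze point) would then be applied per fixation to the reconstructed scene, which this sketch does not cover.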
Another reason worth noting is that the target was easily visible even without direct fixation, so subjects may not have executed corrective saccades for precise fixation on the targets. Next, the eye movement data were segmented into trials. We defined the start of each trial in our later analysis as the point in time when the first room entry was made. The end of the trial was the time when the target was found, defined as the time when the subject made the first fixation on the target without making subsequent fixations to other objects before the Wiimote button was clicked. This eliminated cases where subjects were unaware that they had fixated the target and fixated other objects before returning to it. For analyzing fixations to surfaces, another analysis was performed in which a box region around each relevant surface, 40 cm above and 20 cm below, was defined. Fixations that fell within these boxes were identified as fixations to the surfaces. Again, the large size of the boxes was used to tolerate drifts in the eye signal. Participants. Twenty-one students from the University of Texas at Austin participated in the experiment, which was approved by the University of Texas Institutional Review Board (IRB: 2006-06-0085) and was performed in accordance with relevant guidelines and regulations. All of the subjects had normal vision and provided informed consent to participate. Class credit or monetary compensation was provided. Of all the subjects, two skipped the first trial and were excluded from all analyses, and ten subjects did not have eye tracking data reliable enough to allow parsing of fixations, so they were excluded from all data analysis related to fixations, including search time (since the end of the trial was defined by the first fixation on the target), fixations in general, and fixations to surfaces.
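The surface-box criterion described in the gaze analysis above (a box around each relevant surface extending 40 cm above and 20 cm below it) reduces to a containment test. A Python sketch, assuming metre units with y as the vertical axis; the data layout is hypothetical:

```python
def fixation_on_surface(fix_point, surface, above=0.40, below=0.20):
    """Return True if a 3D fixation point (x, y, z) falls within the
    surface's horizontal footprint and within a box extending `above`
    metres above and `below` metres below the surface height."""
    x, y, z = fix_point
    return (surface["xmin"] <= x <= surface["xmax"]
            and surface["zmin"] <= z <= surface["zmax"]
            and surface["y"] - below <= y <= surface["y"] + above)
```

The generous vertical margins play the same role as the large gaze window: tolerating drift and calibration error in the eye signal.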
Track loss is a common issue in the HMD and eye tracker combination we used in this experiment, primarily because of the awkward geometry in which the eye tracker camera must fit between the helmet and the eye. In more recent helmets the eye tracker optics are embedded within the HMD optics, allowing for much easier and more reliable tracking. The heavy weight and tethering cables also make the alignment precarious and can lead to track losses during the session if the helmet moves on the head. Thus only nine subjects were included in those analyses. For other analyses, such as the analysis of the percentage of correct room choices and analyses of head movements that were not related to fixations, 19 subjects with full behavioral data (no trials skipped) were included. Data analysis. All analyses of the data extracted from the reconstruction were performed in Matlab. Search time (time spent finding the targets) and number of search fixations were calculated and used as the indicators of search performance. Time and fixation counts in the correct room (the room that contained the target on that trial) and the incorrect room were calculated. Inside the rooms, the numbers of fixations to surfaces and non-surface areas were computed. For head movement analysis, the angles between head direction and the direction from the participant to the target in the thirty-frame windows around 1 sec before the room became visible and around the first fixation were calculated separately. The cumulative angular change in head direction was also calculated. Data that were more than three standard deviations from the mean were excluded. An alpha level of 0.05 was used for all statistical tests. Levene's test of homogeneity was used to test the equality of variances between groups. When variances were equal, standard ANOVA was used to test differences of means, and Tukey's HSD test was used for post-hoc analysis.
When the assumption of homogeneity of variances was violated, Welch's test was used to test differences in means, and the Games-Howell post-hoc test was used instead. Data availability. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
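The statistical decision described above (standard ANOVA when Levene's test indicates equal variances, Welch's test otherwise) uses Welch's one-way ANOVA for the unequal-variance case. A stdlib-only Python sketch of the Welch statistic; this is an illustrative implementation of the standard formula, not the study's Matlab code, and p-values would come from the F distribution (e.g. `scipy.stats.f.sf(F, df1, df2)`):

```python
from statistics import mean, variance

def welch_anova(*groups):
    """Welch's one-way ANOVA for groups with unequal variances.
    Returns (F, df1, df2), where df2 is the Welch-Satterthwaite
    denominator degrees of freedom."""
    k = len(groups)
    n = [len(g) for g in groups]
    m = [mean(g) for g in groups]
    v = [variance(g) for g in groups]        # sample variances (ddof = 1)
    w = [ni / vi for ni, vi in zip(n, v)]    # precision weights n_i / s_i^2
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, m)) / W
    num = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, m)) / (k - 1)
    tmp = sum((1 - wi / W) ** 2 / (ni - 1) for wi, ni in zip(w, n))
    F = num / (1 + 2 * (k - 2) / (k ** 2 - 1) * tmp)
    return F, k - 1, (k ** 2 - 1) / (3 * tmp)
```

When group means are identical the statistic is zero regardless of how unequal the variances are, which is the sanity check used below.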
Natural α-Glucosidase and Protein Tyrosine Phosphatase 1B Inhibitors: A Source of Scaffold Molecules for Synthesis of New Multitarget Antidiabetic Drugs

Diabetes mellitus (DM) represents a group of metabolic disorders that leads to acute and long-term serious complications and is considered a worldwide health emergency. Type 2 diabetes (T2D) accounts for about 90% of all cases of diabetes, and although several drugs are currently available for its treatment, in the long term they show limited effectiveness. Most traditional drugs are designed to act on a specific biological target, but the complexity of the current pathologies has demonstrated that molecules hitting more than one target may be safer and more effective. The purpose of this review is to shed light on the natural compounds known as α-glucosidase and Protein Tyrosine Phosphatase 1B (PTP1B) dual inhibitors that could be used as lead compounds to generate new multitarget antidiabetic drugs for treatment of T2D.

Introduction

Type 2 diabetes is a complex pathology characterized by hyperglycemia and metabolic abnormalities affecting different organs and tissues, such as liver, muscle, adipose tissue, and pancreas. To date, subjects affected by T2D can rely on several oral antihyperglycemic drugs with different mechanisms of action to keep glycaemia under control. These drugs include: inhibitors of intestinal α-glucosidases, which delay intestinal absorption of glucose; metformin, which blocks hepatic gluconeogenesis; different types of secretagogues, which stimulate the release of insulin from pancreatic β-cells; thiazolidinediones, which stimulate the storage of circulating fatty acids in adipocytes, thereby improving insulin sensitivity in several peripheral tissues; and the sodium/glucose cotransporter 2 (SGLT-2) inhibitors, which impair the reabsorption of glucose in the renal tubules [1].
The choice of the most appropriate hypoglycemic drug for a patient depends on several factors, such as the patient's general condition, the presence of comorbidities, tolerance, and the patient's response to the drug. Generally, most diabetic patients showing hyperglycemia without further pathological complications respond positively to single-drug therapy (Figure 1), experiencing a decrease in blood sugar levels and an improvement in their general condition. However, many clinical studies showed that the benefits obtained with this approach are transient, and in the medium to long term patients experience a gradual rise in blood sugar and a worsening of general health. In some cases, up-scaling the drug dosage can allow the glycemic target to be regained, with the hope that no adverse effects related to high doses of the drug occur [2]. The failure of mono-drug therapy is mainly due to the inability of such drugs to replace the physiological functions of insulin. Indeed, even if these drugs are able to compensate for a specific metabolic defect, they may unexpectedly induce severe imbalances in other metabolic pathways.

Figure 1. Antidiabetic strategy based on mono-drug therapy: each antidiabetic drug is administered alone and works on a specific target.

Combination versus Mono-Drug Therapy for Treatment of T2D

The combination of two or more anti-hyperglycemic drugs acting on different biological targets is a therapeutic option often used for the treatment of diabetic patients who do not respond adequately to mono-drug therapy [3]. From a theoretical point of view, the purpose of this strategy is to generate a synergistic effect by acting on different targets involved directly or indirectly in the control of glycemia (Figure 2), improving glycemic control while using a lower dosage of each drug than mono-drug therapy requires. However, despite the undoubted advantages of such a pharmacological approach, many clinical trials revealed that multi-drug therapies are difficult for the majority of patients to manage, for different reasons. For example, a fine adjustment of dosages is often required, or patients are asked to take medications at different times of the day because of their different pharmacokinetics. These factors make it very difficult for patients to comply with the therapeutic protocol, thus increasing the risk of not reaching the glycemic target. The consequent uncontrolled fluctuations in blood sugar can be deleterious, forcing patients to change treatment to avoid further complications. A practical solution to this problem may arise from taking combinations of oral hypoglycemic drugs in fixed, pre-established doses depending on the desired effects. This strategy reduces the complexity of the therapeutic regimen and improves patient adherence to treatment [4]. Overall, the clinical studies carried out so far confirmed that combinatorial therapies imply, in the short to medium term, significant benefits compared to mono-therapy; in the long term, however, the efficacy of this therapeutic approach remains to be confirmed [5]. In conclusion, all evidence suggests that multitarget therapies seem to guarantee a better quality of life for people affected by T2D.

Toward the Multiple Designed Ligands (MDLs) for Treatment of T2D

Studies performed in recent decades have unexpectedly shown that, contrary to what was originally hypothesized, numerous single-target drugs behave as multiple ligands in vivo [6]. In clinical practice, it is not easy to predict the in-vivo effects of a molecule capable of acting on different targets. In fact, depending on the dose and its pharmacokinetics, it could generate either positive or negative effects, especially if used in the treatment of chronic multifactorial diseases. However, the belief that MDLs can offer many advantages over combinatorial mono-therapies has prompted many researchers to develop new multiple-ligand drugs for T2D treatment (Figure 3). In principle, a molecule capable of acting as an MDL should offer the advantages of a combination therapy but with fewer side effects. Although this idea is considered exciting, the identification of appropriate multifunctional scaffolds represents the main challenge for researchers engaged in the generation of new MDLs, and the debate among scientists regarding the best strategy to obtain such scaffolds remains heated. Large-scale screening and knowledge-based approaches are considered the most effective strategies for designing and developing multi-target molecules. The first approach relies on the fact that a known drug actually behaves like a multiple ligand.
Therefore, based on this hypothesis, synthesizing new molecules would no longer be required; rather, it would be sufficient to identify new potential ligands in addition to those already known. Conversely, the knowledge-based approach exploits previous information obtained through structure-activity relationship (SAR) analysis, focusing on the mode of interaction between molecules of interest and their biological targets. Once the most appropriate molecules are identified, they are chemically linked or combined together to generate new multi-target drugs that are subsequently tested to evaluate their real effectiveness. Besides those aforementioned, several in-silico methods can be used to select potential pharmacophores useful for assembling new MDLs. The advantage of the computational approach is its ability to easily perform high-throughput analyses, starting from databases containing hundreds to thousands of different compounds. The selected pharmacophores can then be linked together to produce new MDLs. In a further step, the most promising molecules can be modified to improve their affinity towards their targets and their safety profile while reducing their toxicity [7]. New MDLs for Treatment of T2D The emerging interest of scientists and pharmaceutical companies in multiple-ligand drugs is evidenced by the growing number of MDLs produced in recent years for the treatment of T2D. Coskun and co-workers designed and synthesized dual glucose-dependent insulinotropic polypeptide receptor (GIP-R) and glucagon-like peptide-1 receptor (GLP-1R) agonists and demonstrated that treatment with such a molecule stimulated insulin release, leading to a significant reduction of both fasting and postprandial glycaemia [8]. In recent years, convincing evidence has suggested that GLP-1 acts as an anorexigenic peptide, binding to GLP-1R in the hypothalamic region and inducing satiety [9].
Interestingly, it has been demonstrated that, in the brain, GLP-1 acts synergistically with PYY (peptide YY), a peptide that is co-secreted with GLP-1 from enteroendocrine L cells of the intestine and binds NPY2R (neuropeptide Y receptor Y2). This finding stimulated many researchers to evaluate the activity of new GLP-1R/NPY2R agonists as antidiabetic agents. It has been demonstrated that GLP-1R/NPY2R dual agonists exert anorectic effects in vivo along with the ability to reduce blood glucose levels, thereby confirming that they could act as promising anti-obesity and antihyperglycemic agents [10]. Among all biological targets, peroxisome proliferator-activated receptors (PPARs) are considered some of the most effective ones for the treatment of T2D. Different types of PPARs are differentially expressed in human tissues; namely, PPAR-α, δ, and γ have been identified and characterized to date. PPAR-α is highly expressed in liver, kidney, heart muscle, and vascular endothelial cells, where its activation promotes fatty acid oxidation, thereby preventing the accumulation of intracellular lipid depots. Treatment with PPAR-α agonists improves cardiac performance in diabetic patients, reducing the risk of stroke [11]. PPAR-δ is ubiquitously expressed, and its activation leads to different effects. In muscle cells, PPAR-δ activation stimulates fatty acid oxidation while reducing glucose utilization. In adipose cells, PPAR-δ activation increases the expression of genes involved in fatty acid β-oxidation and energy dissipation via uncoupling of fatty acid oxidation and ATP production [12]. Interestingly, it has been demonstrated that the balance of fatty acid oxidation and synthesis can affect inflammatory and immunosuppressive T cells and macrophages. In macrophages, PPAR-δ activation promotes polarization toward an M2-like phenotype with reduced inflammatory potential.
According to this hypothesis, the antidiabetic functions of PPAR-δ have been associated with reduced inflammatory signalling [13]. PPAR-γ is largely expressed in adipose tissue, and its activation promotes the proliferation and differentiation of preadipocytes into adipocytes. Moreover, PPAR-γ agonists stimulate the deposition of fatty acids into adipocytes, lowering fatty acid blood levels, preventing hyperlipidemia, and increasing peripheral insulin sensitivity [14]. Saroglitazar, a PPAR-α/γ dual agonist, has recently been approved in India for the treatment of diabetic dyslipidemia based on the promising results obtained in clinical trials. Diabetic patients treated with Saroglitazar showed no adverse events, an improved lipid profile, and increased insulin sensitivity and β-cell function [15]. Recently, a novel dual peroxisome proliferator-activated receptor alpha/delta (PPAR-α/δ) agonist was synthesized and tested on animal models. Besides protecting the liver from inflammation and fibrosis, the administration of the dual agonist decreased hepatic lipid accumulation, protecting animals from the development of liver steatosis [16,17]. Finally, many efforts have been made to generate pan-PPAR agonists combining the pharmacophore motifs of PPAR-α, β/δ, and γ agonists. Such molecules reduce lipid accumulation in the liver and improve liver damage, inflammation, fibrosis, and insulin resistance [18,19]. Although many new pan-PPAR agonists have demonstrated their efficacy as antidiabetic drugs in the preclinical phase, subsequent clinical studies have shown their limitations and revealed their intrinsic toxicity. For these reasons, studies on these molecules have not progressed further [20]. Very recently, Qi Pan and coworkers evaluated the antidiabetic activity of GLP-1-Fc-FGF21 in diabetic and obese mouse models.
This new dual-targeting agonist, able to act on both the GLP-1 and FGF21 (fibroblast growth factor 21) pathways, showed potent antihyperglycemic activity and caused marked weight loss, suppressing appetite and reducing caloric intake. Together, these results suggest that GLP-1/FGF21 dual agonists possess all the characteristics to become promising new drugs to fight diabetes and obesity [21]. Dual α-Glucosidase/PTP1B Inhibitors: New Drugs against Type 2 Diabetes? Clinical studies have revealed that many people show evident signs of metabolic abnormalities years before the diagnosis of T2D. Insulin resistance (IR) is one of the most common of these abnormalities. IR can affect the liver, skeletal muscle, adipose tissue, pancreas, and hypothalamic region, generating several metabolic dysfunctions and promoting the onset of T2D. Besides genetic factors, a diet rich in simple carbohydrates, sedentary behavior, and obesity are thought to be the main risk factors responsible for the development of insulin resistance [22]. This evidence suggests that all measures that limit glucose absorption and increase insulin sensitivity should be the first-line approaches recommended to reduce the risk of developing T2D. We are convinced that MDLs targeting both PTP1B and α-glucosidases could be used to reach this goal (Figure 4).
The tyrosine phosphatase 1B (PTP1B) acts as a key negative regulator of the insulin receptor, and a plethora of studies confirmed that uncontrolled activity of this enzyme is one of the main causes that lead to IR [23].
According to this hypothesis, it has been demonstrated that the overexpression of PTP1B promotes IR in the liver [24], muscle [25], adipose tissue [26], pancreas [27], and brain [28]. Conversely, many studies confirmed that PTP1B downregulation or inhibition improves insulin sensitivity, normalizes blood glucose levels, and protects from obesity and the onset of T2D [29]. Overall, such evidence suggests that PTP1B targeting could generate a pleiotropic effect, improving the insulin response in liver, muscle, adipose tissue, pancreas, and brain, thereby correcting most of the metabolic abnormalities observed in diabetic patients. Since PTP1B has no role in regulating the intestinal absorption of glucose, however, it is improbable that small molecules designed to target this enzyme could be used to regulate intestinal glucose uptake. Monosaccharides, such as glucose, fructose, and galactose, are the only sugars absorbed by the gut. Oligosaccharides derived from starch digestion are processed by pancreatic α-amylase and intestinal α-glucosidases to produce free glucose, which is then taken up by intestinal cells. Therefore, the rate at which blood glucose rises mainly depends on the gut glucose concentration, which, in turn, is influenced by the activity of the glucosidases present in the gut.
This finding inspired many researchers to explore glucosidase inhibitors as pharmaceutical tools for the treatment of T2D, based on the hypothesis that such molecules could delay the release of glucose from complex carbohydrates, slowing down the rise in blood sugar levels observed after a meal. In the last decades, different kinds of glucosidase inhibitors have been produced and approved as antihyperglycemic drugs [30]. Today, such molecules are used as first-line therapy for T2D patients or administered in combination with other oral antidiabetic drugs when metformin/biguanide mono-drug therapies fail to achieve the glycemic goal [31]. The evidence that glucosidase inhibitors act synergistically with different oral antihyperglycemic drugs suggests that α-glucosidase/PTP1B dual inhibitors could be successfully designed and used as drugs for the treatment of T2D. Moreover, tests carried out on HepG2 cells demonstrated that some of these compounds show good insulin-mimetic activity, enhancing the phosphorylation levels of Akt in the absence of insulin stimulation. We can hypothesize that, in the absence of insulin, PTP1B inhibition results in an enhancement of the insulin receptor phosphorylation level, promoting the activation of the insulin signaling pathway [33]. Finally, in 2020, Malose J. Mphahlele et al. investigated the properties of a series of ortho-hydroxyacetyl-substituted 2-arylbenzofuran derivatives, showing that some of these have IC50 values in the submicromolar and micromolar range for α-glucosidase and PTP1B, respectively [34]. Nature-Inspired Scaffold Molecules for the Synthesis of Dual α-Glucosidase/PTP1B Inhibitors It is well known that natural sources, such as plants, fruits, algae, and microorganisms, are important reservoirs of bioactive molecules that have often been used as lead compounds for the development of new drugs for the treatment of human diseases [35].
Based on this evidence, we analyzed literature data looking for natural molecules showing both α-glucosidase and PTP1B inhibitory activity that could be used as scaffold molecules for the synthesis of new dual-target antidiabetic drugs. Data collection from the literature was performed by querying the PubMed database using specific keywords, such as "PTP1B and alpha-glucosidase inhibitors" (which yielded 79 results), "alpha-glucosidase and PTP1B" (48 results), "Dual targeting PTP1B and glucosidase" (6 results), and "PTP1B and multitarget inhibitors" (6 results). Every single study was downloaded and analyzed in depth to extract the data of interest. The collected compounds were classified into twelve different groups based on their chemical structure. Surprisingly, we found that, in the last twenty years, more than 200 compounds showing dual α-glucosidase/PTP1B inhibitory activity have been discovered and characterized. To make it easier for the reader to analyze the data, we have divided the identified compounds into different classes. Coumarins Sixteen coumarin derivatives were isolated from Angelica decursiva (compounds 1-7, 12, 13, and 17), from Artemisia capillaris (compounds 9-11, 14, and 16), and from Euonymus alatus (Thunb.) Sieb (compound 8) (Table 1). The dihydroxanthyletin-type family includes (+)-trans-decursinol (1), Pd-C-I (2), Pd-C-II (3), Pd-C-III (4), 4′-Hydroxy Pd-C-III (5), and 4′-Methoxy Pd-C-I (7). Among these, (+)-trans-decursinol (1), a molecule bearing two hydroxyl groups on C3 and C4 of the pyran moiety, proved to be the most potent inhibitor of both targets, showing IC50 values of 2.33 and 11.32 µM for PTP1B and α-glucosidase, respectively. Conversely, Decursinol (12), which lacks the OH on C4, showed a lower affinity for both targets. This result suggests the pivotal role of this hydroxyl group in improving the inhibitory activity of natural dihydroxanthyletin-type coumarins.
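The literature-search figures quoted above can be tabulated in a few lines. The sketch below is our illustration (not code from the original study): it simply collects the reported query strings and hit counts and sums them, keeping in mind that the four queries likely overlap, so the total overstates the number of unique records screened.

```python
# Illustrative only: query strings and hit counts are those reported in the text.
# The four queries likely return overlapping record sets, so the sum is an
# upper bound on the number of unique studies screened before de-duplication.
pubmed_queries = {
    "PTP1B and alpha-glucosidase inhibitors": 79,
    "alpha-glucosidase and PTP1B": 48,
    "Dual targeting PTP1B and glucosidase": 6,
    "PTP1B and multitarget inhibitors": 6,
}

total_hits = sum(pubmed_queries.values())
print(f"Queries run: {len(pubmed_queries)}")
print(f"Total hits (with possible overlap between queries): {total_hits}")
```

In practice such queries could be automated (e.g., via NCBI's E-utilities), but the numbers above reproduce only what is stated in the text.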
Interestingly, the replacement of the OH on C4 with a senecioyl group produced Pd-C-II (3), which exhibited a slightly decreased inhibitory activity with respect to (1), indicating that this group only partially succeeds in substituting for the hydroxyl group. Pd-C-I (2), a derivative bearing a senecioyl group on C3, showed IC50 values for PTP1B and α-glucosidase similar to those of (+)-trans-decursinol. On the other hand, 4′-Hydroxy Pd-C-III (5), bearing an angeloyl group on C3, maintained a significant inhibitory activity toward PTP1B but showed a weaker inhibitory activity on α-glucosidase in comparison with (+)-trans-decursinol, suggesting that the nature of the aliphatic chain is crucial to stabilize the α-glucosidase-inhibitor complex. The substitution of the 4′-OH with an acetyl group leads to Pd-C-III (6), which showed a lower affinity for PTP1B but a better inhibitory capacity towards α-glucosidase compared with (5). Conversely, the insertion of a methoxy group on C4 of Pd-C-I leads to 4′-Methoxy Pd-C-I (7), a compound showing an inhibitory activity on PTP1B comparable to that of the parental molecule but a lower affinity for α-glucosidase. Finally, Decursidin (12), bearing two senecioyl groups on C3 and C4, respectively, showed a reduced inhibitory activity for both targets compared with (+)-trans-decursinol, thereby confirming that the OH groups are important to reinforce the binding of dihydroxanthyletin-type coumarins to PTP1B and α-glucosidase. Among the phenyl-coumarins, Selaginolide A (15), a 7-hydroxycoumarin derivative bearing a 4-hydroxy-3,6-dimethoxyphenyl ring linked to C3, proved more potent than Euonymalatus (8), even though the latter has a more complex structure and several OH groups. However, both compounds showed a balanced and potent inhibitory activity for both targets, suggesting that phenolic coumarins are good lead compounds to generate new antidiabetic drugs acting on both enzymes.
Umbelliferone (16), or the 7-hydroxycoumarin moiety, is the prototype of several dual PTP1B and α-glucosidase inhibitors, such as Esculetin (9), Daphnetin (8), Scopoletin (14), Umbelliferone 6-carboxylic acid (17), and 6-Methoxy artemicapin C (10). Daphnetin, bearing an additional OH group on C8 compared with Umbelliferone, showed a slightly increased inhibitory activity for both targets, while Esculetin, a 6,7-dihydroxycoumarin, showed IC50 values for PTP1B and α-glucosidase 27 and 7 times lower, respectively, than those calculated for Umbelliferone. Interestingly, Scopoletin, bearing a methoxy group on C6, showed an affinity for PTP1B similar to that of Umbelliferone but an increased affinity for α-glucosidase. Conversely, Umbelliferone 6-carboxylic acid showed high affinity for PTP1B (IC50 = 7.98 µM) but weak affinity for α-glucosidase (IC50 = 172.10 µM). Finally, the mono-hydroxylated compound 6-Methoxy artemicapin C (10), bearing a methoxy group on C7, had a significant inhibitory activity for PTP1B (IC50 = 27.6 µM) but weak inhibitory power for α-glucosidase (IC50 = 563.7 µM). These data suggest that substituents on C6 can be relevant to potentiate the interaction of the inhibitor with both targets, even if OH and negatively charged groups on C6 favor the interaction with PTP1B, while non-polar groups in this position favor the interaction with α-glucosidase. The lignan glycoside family includes several members whose chemical differences give them peculiar properties. (+)-Pinoresinol 4-O-β-D-glucoside (22) and (+)-syringaresinol 4-O-β-D-glucoside (23) showed high IC50 values for both enzymes, suggesting that pinoresinol scaffolds bearing vanilloyl or syringoyl units do not, per se, guarantee a tight binding to both targets. Interestingly, the introduction of an OH group on C8 (21) or C8′ (20) strongly increased the affinity for PTP1B but only slightly improved the inhibitory power for α-glucosidase.
On the other hand, the addition of a p-hydroxybenzoyl (45) or a vanilloyl group (46) to C6′ slightly increased the affinity for PTP1B without improving the affinity for α-glucosidase. Noticeably, the addition of a glucosyl or galactosyl group on C4 (41) or C2 (42), respectively, only slightly increased the affinity of such molecules for PTP1B, but these compounds exhibited significant inhibitory activity for α-glucosidase (IC50 = 16.7 and 17.1 µM, respectively). A significant affinity increase for both targets (IC50 for PTP1B and α-glucosidase of 25.8 and 16.1 µM, respectively) was obtained by introducing a vanilloyl-4-O-β-glucopyranosyl group on C6′ (43), while Viburmacrosides G and H, bearing, respectively, a syringoyl group and an α-L-rhamnopyranosyl residue (47) or a syringoyl group and an α-D-xylopyranosyl unit (48), both showed a weaker inhibitory activity compared with (43), thereby confirming the key role of the chemical substituents linked to C6′ in determining the inhibitory power of these molecules. Finally, Viburmacroside D (44), bearing an apiofuranosyl group located at C2 of the glucopyranosyl moiety, showed the highest inhibitory activity of this series (IC50 of 8.9 and 9.9 µM for PTP1B and α-glucosidase, respectively). In conclusion, these data suggest that the inhibitory power of Viburmacrosides with respect to α-glucosidase is attributable to the presence of a hydroxyl group at C4 and of two or more monosaccharide units at C4. Meanwhile, the ability of such compounds to inhibit PTP1B seems to be influenced by the presence of hydroxyl groups on C8 or C8′, by a glycosyl-phenolic acyl residue located at C6′, and mainly by an apiofuranosyl group located at C2 of the glucopyranosyl moiety. As far as lignanamide derivatives are concerned, these represent a structurally highly heterogeneous family whose members showed different inhibitory activity for the two targets.
Among these, Cannabisin I (31) proved to be the most active compound, showing IC50 values for PTP1B and α-glucosidase of 2.01 and 1.5 µM, respectively. It is interesting to note that Limoniumin B (34), bearing a methoxy group on C3 instead of one OH, showed an IC50 value for α-glucosidase comparable to that of compound (31) and a three-times-higher IC50 value for PTP1B. However, Limoniumin C (35), bearing a methoxy group on C3′, showed an affinity for PTP1B similar to that of compound (34) but an inhibitory activity for α-glucosidase one order of magnitude lower. Finally, substitution of the OH groups on C3 and C3′ led to Limoniumin D (36), a compound acting as a weaker inhibitor than (35). Intriguingly, Limoniumin E (37), which lacks the dihydroxybenzene group linked to C7′, showed an increased IC50 for α-glucosidase but maintained a significant inhibitory activity for PTP1B. The role of the OH groups in determining the inhibitory activity of lignanamides is evidenced by compounds showing a phenyldihydronaphthalene core, such as Limoniumin H (39), Limoniumin I (40), Cannabisin D (29), Cannabisin B (27), and Cannabisin C (28). Cannabisin B, bearing two OH groups on C3 and C3′, showed a high affinity for both targets (IC50 values for PTP1B and α-glucosidase of 5.89 and 4.56 µM, respectively), whereas compounds (28) and (39), each bearing a methoxy group on C3 or C3′, exhibited higher IC50 values for α-glucosidase. Moreover, the introduction of two methoxy groups (on C3 and C3′) causes a strong loss of inhibitory activity in compound (29). It is interesting to note that the affinity of compound (40) for α-glucosidase is eight times lower than that of (39), suggesting that the position of the OH groups on the naphthalene moiety can influence the interaction of lignanamides with this enzyme. Xanthones Several alkylated xanthones active on both PTP1B and α-glucosidases have been extracted from the root bark of C. cochinchinense [44] (Table 3).
All the isolated compounds behaved as good inhibitors of both targets, showing IC50 values in the 1.7-80 µM range for α-glucosidase and between 2.8 and 52.5 µM for PTP1B. Although the selected xanthones possess different substituents, they all behaved as mixed-type inhibitors of α-glucosidases and competitive inhibitors of PTP1B. This finding suggests that the xanthone moiety has a key role in determining the interaction with the active site of both targets, even if the aliphatic chains and hydroxyl groups linked to the xanthone structure can influence the affinity of each compound. γ-Mangostin (49), bearing two prenyl chains on C2 and C8 and four OH groups, was the most potent inhibitor of this series, showing IC50 values for PTP1B and α-glucosidase of 2.8 and 1.7 µM, respectively. The replacement of the OH group on C7 with a methoxy group (α-Mangostin, 50) only slightly affected the affinity for both targets, while the affinity of Cratoxylone (59), also bearing a hydroxylated prenyl chain on C2, was reduced eight times for PTP1B and eighteen times for α-glucosidase. The displacement of the prenyl chain from C8 to C4 impaired the inhibitory activity of 1,3,7-Trihydroxy-2,4-diisoprenylxanthone (51), while the replacement of the prenyl chain on C4 with a geranyl group (Cochinchinone A, 55) strongly improved the affinity of the molecule for PTP1B. Interestingly, hydroxylation of the prenyl chain on C2 (Cratoxanthone A, 53), but not hydroxylation of the geranyl chain on C4 (Cratoxanthone F, 58), makes compound (53) an inhibitor almost as efficient as compound (49). Pruniflorone S (60) showed IC50 values similar to those of compound (55), suggesting that the substitution of OH groups with an aliphatic chain did not result in a further enhancement of inhibitory power.
7-Geranyloxy-1,3-dihydroxyxanthone (52) and Cochinxanthone A (56) showed weak inhibitory activity toward both targets, indicating that the presence of a single geranyl chain is not sufficient to improve the inhibitory activity of these compounds. Finally, Cochinchinone Q (54), which does not bear aliphatic chains, was the weakest inhibitor of this series toward both targets, thereby confirming the importance of aliphatic chains in stabilizing the xanthone-enzyme complexes. Anthraquinones The anthraquinones reported in Table 7 were obtained from Cassia obtusifolia L., a leguminous annual herb growing in tropical countries of Asia [44]. Screening assays carried out on both PTP1B and α-glucosidase revealed that Alaternin (100) is the most active compound, showing IC50 values in the low micromolar range (IC50 = 1.22 and 0.99 µM for PTP1B and α-glucosidase, respectively). Furthermore, kinetic analyses revealed that Alaternin acts as a competitive inhibitor of PTP1B and a mixed-type inhibitor of α-glucosidase. In addition, docking analyses performed with PTP1B showed that the hydroxyl groups present on C1, C2, and C6, as well as the methyl group present on C3 of Alaternin, have a key role in stabilizing the Alaternin-PTP1B complex. Moreover, although no kinetic or docking data are available for α-glucosidase, it is reasonable to think that the hydroxyl groups contribute to stabilizing the Alaternin/α-glucosidase complex, too. In accordance with this hypothesis, we found that the introduction of a methoxy group on C1 of Alaternin generates 2-Hydroxyemodin 1-methyl ether (99), which showed a reduced activity for both targets. Moreover, the removal of the OH group on C6 leads to Obtusifolin (113), a molecule showing a good affinity for PTP1B but a reduced affinity for α-glucosidase. On the other hand, the removal of the OH group from C2 leads to Emodin (110), a molecule that possesses an α-glucosidase IC50 value unchanged compared with Alaternin but a lower affinity for PTP1B.
In addition, Chrysophanol (107), obtained by removing the OH group from C6 of Emodin, showed a very weak affinity for α-glucosidase (IC50 = 0.99 µM for Alaternin versus 46.81 µM for Chrysophanol) and a moderate decrease in affinity for PTP1B (1.22 µM for Alaternin versus 5.86 µM for Chrysophanol). The replacement of the methyl group on C3 with an alcoholic group leads to Aloe-emodin (101), which showed a reduced affinity for PTP1B but behaved as a potent α-glucosidase inhibitor (IC50 = 1.4 µM). On the other hand, the introduction of a methoxy group on C8 of Emodin leads to Questin (116), which behaved as a weaker α-glucosidase inhibitor than Emodin, whereas the presence of a methoxy group on C3 generates Physcion (115), a molecule that maintained a relevant affinity for both targets. Furthermore, the introduction of a methoxy group on C1 leads to 2-Hydroxyemodin 1-methyl ether (99), a molecule showing higher IC50 values for both targets, while the removal of the OH group from (99) generates Obtusifolin (113), which shows a good affinity for PTP1B but a poor affinity for α-glucosidase (IC50 = 142.12 µM). Finally, the introduction of an additional methoxy group on C7 of (99) leads to Aurantio-obtusin (102), which presented IC50 values slightly increased with respect to (99). It is noteworthy that Obtusin (114) and Chryso-obtusin (105), bearing methoxy groups on C1, C6, and C7 and on C1, C6, C7, and C8, respectively, showed IC50 values for PTP1B and α-glucosidase lower than those of (113), suggesting that a methoxy group can at least partially replace the function of the OH group. Glycosylated derivatives, such as Chryso-obtusin-2-glucoside (106), Chrysophanol tetraglucoside (108), and Chrysophanol triglucoside (109), showed low affinity for both targets regardless of the position and number of glycosyl groups.
Similarly, glycosides bearing a naphthopyrone group, such as Cassiaside (103) and Cassitoroside (104), behaved as weak inhibitors of PTP1B and α-glucosidase, suggesting that the anthraquinone scaffold by itself represents a good starting point for developing new MDLs for the treatment of T2D. Prenylated Phenolic Compounds Morus alba L. (196, 202, 206, 208, 209, 210, 212), Paulownia tomentosa (197-201, 204, 205, 211), and Glycyrrhiza uralensis (licorice) (213, 214) are the natural sources of the prenylated phenolic compounds reported in Table 9. Such molecules are highly heterogeneous and showed variable inhibitory activity. Starting from the smaller molecules, we observed that, despite their similar structure, Morachalcone A (212) is a weaker inhibitor than (E)-4-isopentenyl-3,5,2′,4′-tetrahydroxystilbene (196), suggesting that the position of the dihydroxyphenyl groups influences the activity of such prenylated molecules. Concerning the flavonoid compounds, we observed that Mimulone (211), bearing a geranylated naringenin-based structure, is a potent PTP1B inhibitor (IC50 = 1.9 µM) and a good α-glucosidase inhibitor (IC50 = 30 µM), thereby confirming that flavonoids are good lead molecules for the synthesis of new drugs targeting both enzymes. Modifications of the Mimulone structure have a different impact on the inhibitory power of the chemical derivatives depending on the chemical groups introduced. For instance, the introduction of a methoxy group on C3′ (199) or C4′ (201) of the "B" aromatic ring reduced the affinity for PTP1B but improved that for α-glucosidase, whereas the introduction of a second methoxy group on C5′ (197) strongly reduced the affinity of the molecule for α-glucosidase. Moreover, the introduction of an OH group on C3 of the keto-pyran moiety (198, 200) did not improve the inhibitory power, whereas the presence of two OH groups on C3′ and C5′ together with a methoxy group on C4′ generated two molecules, (204) and (205), showing IC50 values in the low micromolar range toward both targets.
This finding suggests that the OH group on the "B" ring has a key role in enhancing the target-ligand interactions. Prenylated Morin derivatives, such as Albanin A (206), showed a better affinity for α-glucosidase than for PTP1B, while Kuwanon C (208), bearing two prenyl groups, had increased affinity for both enzymes. A stronger inhibitory activity was observed when prenyl chains were linked to C6 and C8 of Genistein (203), suggesting that the position of the "B" ring strongly influences the inhibitory activity of prenylated phenol molecules. Glycyuralin H (207), which has the same 3-hydroxyisoflavanone skeleton as Genistein, behaved as a weaker inhibitor than Kuwanon C (208), suggesting that modification of the "B" ring of Genistein is not a good strategy to improve the inhibitory activity. The presence of a single geranyl chain on C5′ of the "B" ring of Morin markedly increased the inhibitory power of 5′-Geranyl-5,7,2′,4′-tetrahydroxyflavone (202) for PTP1B and α-glucosidase. This result indicates that the length of the aliphatic chain can be modulated to improve the affinity of the inhibitor for both targets. On the other hand, the introduction of additional 2,4-dihydroxyphenyl groups (209) resulted in a strong enhancement of the inhibitory power of Kuwanon G, thereby confirming that OH groups contribute to strengthening the stability of the enzyme-inhibitor complex. Isoprenylated coumarones also proved to be good inhibitors, with 2′-O-demethylbidwillol B (213) showing similar IC50 values for both targets, while Glyurallin A (214) appeared to have a greater affinity for α-glucosidase. Finally, the Diels-Alder adduct Macrourin G (210) behaved as a potent inhibitor of both enzymes. Interestingly, kinetic and docking analyses revealed that Macrourin G binds to the allosteric site of PTP1B identified by Wiesmann [65], while it acts as a competitive inhibitor of α-glucosidase, interacting with residues of the catalytic site of the enzyme [56].
As far as the PTP1B inhibitory activity is concerned, all the dicaffeoylquinic acids reported in Table 10 showed significant inhibitory activity. Among these, 1,5-dicaffeoylquinic acid (215), the only molecule of this series bearing a caffeic acid moiety linked to C1 of quinic acid, showed the lowest activity overall. Taking into account the differences between the IC50 values and the chemical structures of the compounds reported in Table 10, it is reasonable to hypothesize that the presence of a caffeoyl group at position 3, 4, or 5 is related to the higher inhibitory activity of the 3,4- and 3,5-dicaffeoyl derivatives for PTP1B. Although the dicaffeoylquinic acids (215-219) were also active on α-glucosidase, their inhibitory activity is weaker than that observed on PTP1B. The most potent α-glucosidase inhibitor identified among this group of compounds was methyl-3,5-di-O-caffeoylquinic acid (219). Considering the structural differences between (219) and 3,5-dicaffeoylquinic acid (217), it is possible to infer that the methyl ester at the carboxylic acid enhances the inhibitory activity against α-glucosidase.

Alkaloids

Eighteen different alkaloids able to inhibit both PTP1B and α-glucosidase were extracted from Clausena anisum-olens [66] (Table 11). It is interesting to observe that, among all the carbazoles, Clausenanisine A (224) exhibited the lowest IC50 values on both targets, suggesting that a carbazole moiety bearing a five-membered cyclic ether, a methoxy group, and a short aliphatic chain represents a promising lead structure for developing new MDLs active on both PTP1B and α-glucosidase. Taking into account that Clausenanisine B (225) showed inhibitory activity similar to Clausenanisine A (224), we argue that the insertion of a tetrahydro-pyran-4-one group resulted in a slight decrease of the affinity for both targets.
The loss of the OH group from C2 of Clausenanisine B (225), the loss of both the OH and carbonyl groups, or the introduction of a methoxy or hydroxy group on C8 of the carbazole moiety generated Euchrestifoline (236), Dihydromupamine (235), Clauraila B (221), and Kurryame (237), whose affinity for PTP1B and α-glucosidase steadily decreased. However, the most relevant decrease of affinity occurred after the loss of the carbonyl group present on the pyran moiety, suggesting that the keto-carbonyl group on C1 of Clausenanisine B is responsible for the significant inhibitory activity of the molecule toward both enzymes. Clausenanisine F (229), which bears a carboxyl group on C3 and an OH group on C1, showed a lower but still significant inhibitory activity for PTP1B, while it proved to be a very weak inhibitor of α-glucosidase. The addition of a methoxy group on C2 or C6 of (229) leads to Clausenanisine C (226) and Clausenaline E (222), two molecules with different inhibitory activities. The former showed a weak affinity for PTP1B but behaved as a potent α-glucosidase inhibitor. The latter showed a reduced affinity for both PTP1B and α-glucosidase when compared to (229). Finally, Clausenaline F (223), bearing two methoxy groups on C1 and C2, and Clausine B (232), possessing two methoxy groups on C6 and C8, showed a decreased affinity for PTP1B in comparison with (229). However, Clausine B proved to be a better α-glucosidase inhibitor than Clausenaline F. The replacement of the carboxyl group of (229) with a formyl group leads to 3-Formyl-1-hydroxycarbazole (220), which showed an affinity for PTP1B similar to that of (229) but an increased affinity for α-glucosidase. The loss of the OH on C1 of (231) leads to Clausenanisine D (227), which showed a higher IC50 value for PTP1B but a similar affinity for α-glucosidase compared to (220). By changing the position of one or both OH groups present on (220), we obtain Clauszoline N (234) and Clauszoline M (233).
The affinity of these compounds for PTP1B is similar, but that for α-glucosidase differs, with (233) being a very poor inhibitor of this enzyme. The addition of one methoxy group on C6 of 3-Formyl-1-hydroxycarbazole (220) leads to Clausine I (230), which possesses an affinity for PTP1B similar to that of (220) but an enhanced affinity for α-glucosidase. Finally, the introduction of two methoxy groups on C6 and C8 of Clauszoline M (233) leads to Clausine B (232), a molecule that showed an affinity for PTP1B similar to that of (233) but a high affinity for α-glucosidase.

p-Propoxybenzoic acid (244) showed a slight decrease of affinity for both targets (IC50 values for PTP1B and α-glucosidase of 14.8 and 10.5 µM, respectively), while Sargachromenol (245) and Sargaquinoic acid (246) showed a similar affinity for PTP1B but a decreased affinity for α-glucosidase with respect to (244). All the other compounds showed a weak affinity for both targets.

From Natural World to Bench-Side

What emerges from the data reported in this review is that the natural world can provide a large number of chemically different scaffold molecules active on both PTP1B and α-glucosidase. The list includes a large number of good inhibitors, i.e., molecules showing IC50 values in the low micromolar range (0.1-20 µM) for both targets, and other compounds with IC50 values that gradually increase up to 440 µM for PTP1B and 600 µM for α-glucosidase (Figure 5 and Table 13). It is interesting to note that the 250 known MDLs show a greater affinity for PTP1B than for α-glucosidase, as evidenced by the fact that the average IC50 values for the two enzymes are 6.0 ± 4.1 µM and 42.9 ± 52.0 µM, respectively. Although this difference could seem an obstacle to the development of balanced dual PTP1B/α-glucosidase inhibitors, it may not represent a true problem, especially considering the different localization and physiological roles of the two enzymes.
The α-glucosidase localizes in the intestinal lumen, while PTP1B localizes in the cytosol of human cells [69]. This first consideration suggests that α-glucosidase targeting does not require very potent inhibitors, and significant inhibition of this enzyme can easily be achieved by taking an oral drug during or immediately after a meal. Provided they are protected from the acidic pH of the stomach, MDLs could reach the intestine intact and target the enzyme. Furthermore, the lower potency of MDLs for α-glucosidase could easily be compensated for by increasing the drug dosage. As a precautionary measure, it could be of great importance to evaluate the ability of these molecules to inhibit the activity of intestinal glucose transporters, in order to avoid the risk of inducing a hypoglycemic condition [70]. Conversely, targeting PTP1B is a more complex matter. In fact, MDLs designed to act on PTP1B should have good bioavailability to ensure their absorption through the intestinal wall and their rapid uptake by the cells that compose the liver, muscle, adipose tissue, pancreas, and brain, where this enzyme acts as a negative regulator of the insulin receptor. Moreover, considering that mammalian cells express different types of tyrosine phosphatases, each of which carries out important regulatory functions, a high specificity of MDLs for PTP1B is essential to avoid the induction of severe side effects. Unfortunately, in most cases, information about the specificity of such molecules is not yet available. Conversely, relatively more information is available about the mechanism of action of natural MDLs (Table 14). It is interesting to note that several of the most effective MDLs identified act as non-competitive inhibitors of PTP1B, suggesting that they interact with sites different from the active site of the enzyme.
Non-competitive and allosteric PTP1B inhibitors actually represent the most promising molecules for developing new antidiabetic drugs, since they could ensure a more adequate and specific action, thereby reducing the risk of inducing negative side effects [71]. From a careful analysis of the data, it is possible to obtain other important information regarding the properties of the molecules with dual inhibitory activity. The "α-glucosidase IC50/PTP1B IC50" ratio allows us to evaluate the relative potency of MDLs toward both targets. Interestingly, 110 of the compounds show an IC50 ratio between 0.5 and 1, while another 35 compounds show a ratio between 1 and 2 (Table 15). This finding demonstrates that about 59% of the identified molecules can be considered naturally balanced inhibitors, even if each of these molecules differs from the others in inhibitory power. Nevertheless, we also found compounds showing a different balancing ratio, including molecules with a greater affinity for PTP1B than for α-glucosidase and vice versa.
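The ratio-based classification just described can be sketched in a few lines of code. This is only an illustration: the compound names and IC50 values below are hypothetical placeholders, not data from the review, and the 0.5-2 "balanced" window follows the ranges given in the text.

```python
# Sketch: classifying dual PTP1B/alpha-glucosidase inhibitors by their
# alpha-glucosidase IC50 / PTP1B IC50 ratio, as discussed in the text.
# All numeric values below are hypothetical placeholders.

def balance_ratio(ic50_glucosidase_um, ic50_ptp1b_um):
    """Relative potency of an MDL toward both targets (both IC50s in uM)."""
    return ic50_glucosidase_um / ic50_ptp1b_um

def classify(ratio):
    """Bucket used in the text: a ratio between 0.5 and 2 is 'balanced'."""
    if 0.5 <= ratio <= 2.0:
        return "balanced"
    # ratio > 2: much weaker on alpha-glucosidase, i.e. PTP1B-selective
    return "PTP1B-selective" if ratio > 2.0 else "glucosidase-selective"

# hypothetical compounds: (name, alpha-glucosidase IC50, PTP1B IC50) in uM
compounds = [("A", 30.0, 1.9), ("B", 8.0, 10.0), ("C", 1.0, 40.0)]
for name, glu, ptp in compounds:
    r = balance_ratio(glu, ptp)
    print(name, classify(r))
```

Applied to a full table of 250 IC50 pairs, a count of the "balanced" bucket would reproduce the ~59% figure quoted above.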
For this reason, we think that such data could be precious for researchers working to produce new PTP1B/α-glucosidase inhibitors, offering compounds with different kinetic properties and structures to be used as scaffold molecules for the design of new MDLs with antidiabetic activity. Otherwise, some of these molecules could be used to generate semisynthetic derivatives characterized by greater bioavailability or specificity for the selected targets [6]. Considering that the chemical properties can influence both the efficacy and the metabolic fate of each molecule, the choice of the most appropriate scaffolds remains one of the most important steps of the entire design and synthesis process. A seminal study conducted by Lipinski and coworkers led to the formulation of general criteria useful to predict the in-vivo bioavailability and degree of oral absorption of a molecule. The molecules that are most likely to be absorbed are characterized by a molecular weight lower than 500 Da, a miLogP value in the 0-5 range, a number of hydrogen bond acceptors <10, a number of hydrogen bond donors <5, a TPSA (topological polar surface area) value <140 Ų, and a number of rotatable bonds lower than 10-20 [72]. Based on this information, we decided to analyze the properties of the 60 most potent compounds among those reviewed, to predict their bioavailability. Interestingly, we found that, on average, the compounds meet the criteria mentioned before (Table 16). This result reinforces the hypothesis that most of these compounds possess good bioavailability and can be considered promising lead molecules for the development of new MDLs. Besides the influence on bioavailability, we wondered whether the aforementioned chemical-physical parameters could influence the activity of MDLs toward the designated targets. To answer this question, we analyzed the correlation between the IC50 values for both enzymes and each parameter described in Table 16 (Figures 6 and 7).
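The bioavailability criteria listed above lend themselves to a simple automated checklist. The sketch below is an illustration, not the screening code used in the review; the descriptor values are invented, and the strict <10 rotatable-bond cutoff is an assumption taken from the lower end of the range quoted in the text.

```python
# Sketch: screening a compound against the Lipinski-style criteria listed
# in the text (MW < 500 Da, 0 <= miLogP <= 5, HBA < 10, HBD < 5,
# TPSA < 140 A^2, rotatable bonds < 10). Values below are hypothetical.

CRITERIA = {
    "mw":        lambda v: v < 500,      # molecular weight, Da
    "milogp":    lambda v: 0 <= v <= 5,  # lipophilicity
    "hba":       lambda v: v < 10,       # hydrogen-bond acceptors
    "hbd":       lambda v: v < 5,        # hydrogen-bond donors
    "tpsa":      lambda v: v < 140,      # topological polar surface area
    "rot_bonds": lambda v: v < 10,       # rotatable bonds (assumed cutoff)
}

def bioavailability_flags(descriptors):
    """Return the names of the criteria a compound fails (empty = passes all)."""
    return [name for name, ok in CRITERIA.items()
            if not ok(descriptors[name])]

# hypothetical descriptor set for a flavonoid-like compound
compound = {"mw": 420.4, "milogp": 3.1, "hba": 7, "hbd": 4,
            "tpsa": 111.1, "rot_bonds": 4}
print(bioavailability_flags(compound))  # -> [] (all criteria met)
```

In practice, descriptor values of this kind can be retrieved from public databases such as PubChem rather than typed in by hand.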
Interestingly, considering the PTP1B inhibitory activity, we found that the TPSA, the number of hydrogen donors and acceptors, the complexity, and the molecular weight of the molecules are inversely related to the IC50 value, while the miLogP shows a direct correlation with the IC50 value. A weak correlation was observed between the number of rotatable bonds and the IC50 value. This evidence suggests that the extension of the molecules and their ability to form hydrogen bonds with the atoms of the enzyme play an important role in determining their interaction strength with PTP1B, while the hydrophobicity of the molecules represents an unfavorable factor. However, different results were obtained analyzing the IC50 values for α-glucosidase. Parameters such as TPSA, the number of hydrogen donors and acceptors, complexity, and molecular weight have a poor correlation with the IC50 value, while the miLogP seems inversely correlated with the inhibitory power of the molecules. Finally, the number of rotatable bonds appears directly related to the IC50 value for α-glucosidase. These data suggest that this enzyme preferentially interacts with rigid molecules characterized by a hydrophilic profile, even if the interaction of MDLs with the enzyme is not strongly influenced by the formation of hydrogen bonds or by the molecular weight. This is in accordance with the fact that this enzyme works in an aqueous extracellular medium and binds glucose polymers, molecules showing a relatively high rigidity.

Molecules 2021, 26, x

Figure 6. Relationship between the chemical-physical properties and the IC50 values for PTP1B of the best sixty compounds. Chemical-physical data were recovered from the PubChem database and analyzed using OriginPro 2021 software.
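The descriptor-versus-IC50 correlation analysis described above can be reproduced in outline with a plain Pearson coefficient. The TPSA and IC50 series below are invented for illustration only; they merely mimic the inverse TPSA-IC50 trend reported for PTP1B, and are not values from the review.

```python
# Sketch: Pearson correlation between a molecular descriptor and IC50,
# as in the descriptor-vs-activity analysis described in the text.
# The data points are hypothetical, chosen to show an inverse trend.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical TPSA values (A^2) and PTP1B IC50s (uM) for five compounds
tpsa = [60.0, 90.0, 120.0, 150.0, 180.0]
ic50 = [25.0, 18.0, 10.0, 6.0, 2.0]
r = pearson(tpsa, ic50)
print(f"r = {r:.3f}")  # strongly negative: larger TPSA, lower IC50
```

With real descriptor tables, the same computation (or `scipy.stats.pearsonr`, which also reports a p-value) would be run once per descriptor against each enzyme's IC50 column.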
Mechanism of Action and In-Vivo Activity of Some Natural PTP1B/α-Glucosidase Inhibitors

Most of the studies carried out to identify new dual inhibitors of PTP1B/α-glucosidase have focused on the definition of the chemical structure and on the determination of the IC50 values of the molecules. Only rarely have researchers also been able to determine the mechanism of action and the in-vivo effectiveness of the isolated compounds. Below, we have summarized the data obtained from the literature concerning some of the most potent dual inhibitors of PTP1B/α-glucosidase characterized so far.

Phlorofucofuroeckol-A (242), Dieckol (139), and 7-Phloroeckol (122) are phlorotannins extracted from the edible brown algae arame (Ecklonia bicyclis) and turuarame (Ecklonia stolonifera). Kinetic analyses revealed that these compounds acted as non-competitive inhibitors of PTP1B, while only Phlorofucofuroeckol-A and 7-Phloroeckol behaved as non-competitive inhibitors of α-glucosidase. Conversely, Dieckol acted as a competitive inhibitor of the latter. A recent investigation on rat insulinoma cells showed that treatment with Dieckol reduced the oxidative stress and apoptosis caused by exposure of the cells to high glucose levels, suggesting that this compound could protect β-pancreatic cells against the damage induced by hyperglycemia [73].
A recent in-vivo study demonstrated that Phlorofucofuroeckol-A delayed the intestinal absorption of dietary carbohydrates in diabetic mouse models, suggesting that this molecule could be used to generate new drugs able to reduce post-prandial glucose levels in diabetic patients [74]. High blood glucose levels increase the production of advanced glycation end products that, in turn, contribute to the onset of numerous diabetes-related complications, such as nephropathy, retinopathy, atherosclerosis, and neurodegenerative diseases. Recently, Su Hui Seong et al. demonstrated that Phlorofucofuroeckol-A inhibits non-enzymatic insulin glycation and Aβ aggregation. This finding suggests that this molecule could be used to preserve insulin function and prevent aggregation of the Aβ peptide, thus maintaining the vitality of neurons and avoiding Aβ-mediated brain damage [75]. Besides, it has been demonstrated that Phlorofucofuroeckol-A acts as a potent non-competitive inhibitor of human monoamine oxidase-A, confirming that this compound could be useful to prevent neuronal disorders [76]. A further study showed that Phlorofucofuroeckol-A is effective in counteracting the negative effects induced by a high-fat diet as well as in reducing leptin resistance in hypothalamic neurons and microglia. This evidence suggests that such a compound could be used to fight leptin resistance and to control weight in obese subjects [77]. Results of studies conducted on diabetic mouse models confirmed the antidiabetic activity of Dieckol (139) in vivo. Intraperitoneal injections of Dieckol (10-20 mg/kg for 14 days) in C57BL/KsJ-db/db diabetic mice led to a significant reduction of blood glucose and serum insulin levels as well as of body weight.
Moreover, it has been reported that treated mice showed increased phosphorylation levels of AMPK (5′ adenosine monophosphate-activated protein kinase) and Akt, and a concomitant increase of the activity of several antioxidant enzymes [78]. Seung-Hong Lee and co-workers showed that the treatment of type 2 diabetic mice with a Dieckol-rich extract from Ecklonia cava caused a significant reduction of blood glycosylated hemoglobin and plasma insulin levels and an improvement of glucose tolerance. Furthermore, diabetic mice treated with the Dieckol-rich extract showed a reduction of both plasma and hepatic lipid levels compared to the control mice group [79]. Similar results were obtained from tests carried out on eighty pre-diabetic male and female adults. In this case, the administration of the Dieckol-rich extract (1500 mg per day for 12 weeks) caused a significant reduction of postprandial glucose, insulin, and C-peptide levels compared to the control group. Moreover, no significant adverse events or changes of biochemical and hematological parameters were observed during treatment. Taken together, these results confirmed that algae extracts rich in Dieckol and other phlorotannins possess antidiabetic activity, and their consumption may contribute to regularizing glycemia in prediabetic and diabetic subjects [80].
Mulberrofuran G (170), Albanol B (122), and Kuwanon G (209) are flavonoids extracted from the root bark of Morus alba Linn.
Several phytochemicals extracted from the leaves and fruits of different members of the Moraceae family showed anti-obesity and anti-diabetic activity, enhancing the insulin signaling pathway and glucose uptake and inhibiting hepatic gluconeogenesis [81,82]. Paudel P. et al. demonstrated that such compounds acted as potent mixed-type enzyme inhibitors against PTP1B and α-glucosidase [59]. In accordance with the results of kinetic analyses, in-silico docking revealed that these compounds interacted with both the catalytic site and an allosteric site present on the PTP1B surface. Similar results were obtained analyzing the interaction mode of the compounds with the α-glucosidase enzyme, thereby confirming the mixed-type inhibition mode. Studying the effects on liver cells, the authors reported that treatment with these compounds decreased the expression of PTP1B and stimulated glucose uptake, suggesting that they act as insulin-sensitizing agents [59]. Manh Tuan Ha and co-workers demonstrated that these compounds act as non-competitive inhibitors of PTP1B and as competitive inhibitors of α-glucosidase [56]. Besides, in-silico docking analyses revealed that all three compounds dock into the allosteric binding site previously described by Wiesmann et al. in 2004 [65]. Although no data have been produced to confirm the in-vivo activity of such compounds, these results could inspire the synthesis of new MDLs for the treatment of type 2 diabetes [56]. Ugonin J (190), together with other similar derivatives, is a flavonoid with a cyclohexyl motif extracted from the rhizome of Helminthostachys zeylanica.
Abdul Bari Shah and co-workers demonstrated that this compound acts as a competitive inhibitor of PTP1B and a non-competitive inhibitor of α-glucosidase [60]. Interestingly, deeper analyses revealed that Ugonin J behaves as a reversible slow-binding inhibitor of PTP1B. In-silico docking analyses confirmed that Ugonin J interacts with residues located in the active site of PTP1B, thereby confirming its nature as a competitive inhibitor [60]. An in-vivo study conducted on C57BL/6J mice fed a high-fat diet showed that Ugonin J treatment promoted lipid clearance, increasing the phosphorylation levels of both AMPK and ACC (acetyl-CoA carboxylase) and upregulating the expression of CPT-1 (carnitine palmitoyltransferase-1).
Moreover, it has been observed that treatment of liver cells with Ugonin J increased Akt activity and increased insulin secretion from β cells during acute insulin-secretion tests [83].

The xanthones α-Mangostin (50), γ-Mangostin (49), and Cratoxanthone A (56) were extracted from Cratoxylum cochinchinense Lour. Results of kinetic analyses revealed that these compounds behave as mixed-type inhibitors of α-glucosidase but as competitive inhibitors of PTP1B [56].

Alaternin (100) and Emodin (110) are two anthraquinones extracted from Cassia obtusifolia. Alaternin acted as a PTP1B-competitive inhibitor, and its interaction with the active site of the enzyme was also confirmed by in-silico docking analyses. Besides, it has been demonstrated that Alaternin behaved as a mixed-type inhibitor of α-glucosidase. Finally, the authors of the study showed that both compounds are well tolerated by HepG2 cells and enhanced insulin-stimulated glucose uptake [45].

Conclusions

MDLs are considered promising alternatives to traditional drugs for the treatment of multifactorial diseases. However, the production of new MDLs is not a simple matter. The main obstacle that researchers face in the initial phase of the design and synthesis of new drugs is the selection of appropriate scaffold molecules that exhibit at least an initial activity against the selected targets. Sometimes this phase can be very expensive and time consuming, two factors that can lead to the failure of the entire project. In this context, there is a large consensus in considering the natural world an important source of scaffold molecules to be used for the design of new MDLs.
Data reported in this review demonstrate that many natural molecules possess intrinsic dual α-glucosidase/PTP1B inhibitory activity. Although most of these have sub-optimal properties, such as low bioavailability, unfavorable pharmacokinetics, or low specificity, superior semi- or fully synthetic derivatives could readily be obtained starting from them. This approach could allow researchers to move quickly through the first experimental phase and focus on the optimization phase, aimed at balancing the activity of the molecules toward the biological targets and increasing their specificity or bioavailability. The data we reported confirm that many natural molecules, including some lignanamides, xanthones, anthraquinones, and several phenolic compounds, show high and balanced inhibitory activity toward PTP1B and α-glucosidase, making them interesting lead compounds for the synthesis of new MDLs. Most of these compounds are characterized by the presence of several hydroxylated phenolic rings or aliphatic chains. What often emerges from SAR analyses is that the number and position of hydroxyl groups or aliphatic chains are essential to ensure a close interaction between the inhibitors and both enzymes, and that by changing one of these parameters it is possible to influence both inhibitory potency and target specificity. Moreover, we found that complexity and molecular weight are two parameters closely related to the IC50 values of the molecules, confirming that the higher the number of OH groups, the higher the affinity for the targets. However, small molecules such as Alaternin, Ugonin J, and α-Mangostin also showed high affinity for both targets, suggesting that high inhibitory activity can be reached even starting from small scaffold molecules. Another interesting aspect concerns the mechanism of action of these molecules.
Data reported in Table 14 show that smaller molecules mainly behave as competitive inhibitors of PTP1B and larger ones as non-competitive inhibitors. This evidence suggests that small molecules bearing a hydroxylated phenyl ring could mimic the phenolic structure of the natural substrate, phosphotyrosine, and for this reason could interact more easily with the active site of the enzyme. Conversely, larger molecules, because of their steric hindrance, would not be able to access the active site but could bind to allosteric sites on the surface of the enzyme. To the best of our knowledge, few synthetic dual PTP1B/α-glucosidase inhibitors have been produced to date, and none of them has been evaluated in clinical trials, probably because the development of such molecules is still in its infancy. We hope that the information reported in this review will be useful to researchers dedicated to the design and synthesis of novel dual PTP1B/α-glucosidase inhibitors to be used routinely for the treatment of patients affected by T2D, obesity, and metabolic syndrome. Funding: This research was funded by the University of Florence, Italy (Fondo Ateneo ex 60%). Conflicts of Interest: The authors declare no conflict of interest.
The rate of convergence of some asymptotically chi-square distributed statistics by Stein's method We build on recent works on Stein's method for functions of multivariate normal random variables to derive bounds for the rate of convergence of some asymptotically chi-square distributed statistics. We obtain some general bounds and establish some simple sufficient conditions for convergence rates of order $n^{-1}$ for smooth test functions. These general bounds are applied to Friedman's statistic for comparing $r$ treatments across $n$ trials and the family of power divergence statistics for goodness-of-fit across $n$ trials and $r$ classifications, with index parameter $\lambda\in\mathbb{R}$ (Pearson's statistic corresponds to $\lambda=1$). We obtain a $O(n^{-1})$ bound for the rate of convergence of Friedman's statistic for any number of treatments $r\geq2$. We also obtain a $O(n^{-1})$ bound on the rate of convergence of the power divergence statistics for any $r\geq2$ when $\lambda$ is a positive integer or any real number greater than 5. We conjecture that the $O(n^{-1})$ rate holds for any $\lambda\in\mathbb{R}$. Introduction In this paper, we use Stein's method, introduced in 1972 by Stein [28], to obtain bounds on the rate of convergence of some asymptotically chi-square distributed statistics. In particular, we make use of a recent variant of Stein's method, due to [8], that allows one to obtain approximation theorems when the limit distribution can be represented as a function of multivariate normal random variables. In this paper, we achieve two goals. Firstly, we obtain bounds on the rate of convergence of Friedman's statistic and the power divergence family of statistics that improve on those from the existing literature. Secondly, in deriving these bounds we generalise some of the theory developed in the recent work of [8].
We demonstrate that the theory can be applied in situations in which there is dependence amongst the random variables of interest (these being, for example, the rankings of treatments across the trials for Friedman's statistic). We also obtain some simple sufficient conditions for $O(n^{-1})$ convergence rates that weaken those of [8]. The theory developed in this paper allows the distributional approximation of a large class of statistics (which includes the Friedman and Pearson statistics as popular special cases) to be treated within one framework. Chi-square statistics for complete block designs In this paper, we study the rate of convergence of a class of statistics for non-parametric tests for complete block designs, that is, statistics for comparing $r$ treatments or classifications across $n$ independent trials. In Section 2, we develop some general theory, and in Section 3 this theory is applied to several common chi-square statistics, which we now present. Friedman's chi-square statistic Friedman's chi-square test [6] is a non-parametric statistical test that, given $r$ treatments across $n$ independent trials, can be used to test the null hypothesis that there is no treatment effect against the general alternative. Suppose that for the $i$-th trial we have the ranking $\pi_i(1),\ldots,\pi_i(r)$, where $\pi_i(j)\in\{1,\ldots,r\}$, over the $r$ treatments. Under the null hypothesis, the rankings are independent permutations $\pi_1,\ldots,\pi_n$, with each permutation being equally likely. Let $X_{ij}=\frac{\sqrt{12}}{\sqrt{r(r+1)}}\big(\pi_i(j)-\frac{r+1}{2}\big)$ and set $W_j=\frac{1}{\sqrt{n}}\sum_{i=1}^n X_{ij}$. Then the Friedman chi-square statistic, given by $F_r=\sum_{j=1}^r W_j^2$, is asymptotically $\chi^2_{(r-1)}$ distributed under the null hypothesis. Pearson's chi-square and the power divergence family of statistics Another non-parametric test for complete block designs is Pearson's chi-square goodness-of-fit test, introduced in [25]. Consider $n$ independent trials, with each trial leading to a unique classification over $r$ classes. Let $p_1,\ldots,p_r$ represent the non-zero classification probabilities, and let $U_1,\ldots,U_r$ represent the observed numbers arising in each class. Then Pearson's chi-square statistic, given by $\chi^2=\sum_{j=1}^r\frac{(U_j-np_j)^2}{np_j}$, is asymptotically $\chi^2_{(r-1)}$ distributed. Pearson's statistic is a special case ($\lambda=1$) of the power divergence family of statistics introduced by [5]: $$T_\lambda=\frac{2}{\lambda(\lambda+1)}\sum_{j=1}^r U_j\bigg[\bigg(\frac{U_j}{np_j}\bigg)^{\lambda}-1\bigg].\qquad(1.3)$$ The statistic $T_\lambda$ is asymptotically $\chi^2_{(r-1)}$ distributed for all $\lambda\in\mathbb{R}$. When $\lambda=0,-1$, the notation (1.3) should be understood as a result of passage to the limit (see [29], Remark 1). Indeed, the case $\lambda=0$ corresponds to the log-likelihood ratio statistic and the case $\lambda=-1/2$ is the Freeman-Tukey statistic (see [29], Remark 2). Rates of convergence of chi-square statistics In the existing literature, the best bound on the rate of convergence of Friedman's statistic is a Kolmogorov distance bound of [15], in which the (non-explicit) constant $C(r)$ depends only on $r$, where $Y\sim\chi^2_{(r-1)}$ denotes the limit. The rate of convergence of other asymptotically chi-square distributed statistics has also received attention in the literature. For Pearson's statistic over $n$ independent trials with $r$ classifications, it was shown by [30] using Edgeworth expansions that the rate of convergence of Pearson's statistic in the Kolmogorov distance is $O(n^{-(r-1)/r})$ for $r\geq2$, which was improved by [14] to $O(n^{-1})$ for $r\geq6$. Also, [29] and [1] have used Edgeworth expansions to study the rate of convergence of the more general power divergence family of statistics. For $r\geq4$, [29] obtained a $O(n^{-(r-1)/r})$ bound on the rate of convergence in the Kolmogorov distance and, for $r=3$, [1] obtained a $O(n^{-3/4+0.065})$ bound on the rate of convergence in the same metric, with both bounds holding for all $\lambda\in\mathbb{R}$. To date, the application of Stein's method to the problem of determining rates of convergence of asymptotically chi-square distributed statistics has been quite limited. In particular, there has been no application to Friedman's statistic in the literature.
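As a concrete aside (our illustration, not part of the paper), the family (1.3) is easy to evaluate numerically. The sketch below, with made-up counts, checks two classical identities: $\lambda=1$ recovers Pearson's statistic, and the $\lambda\to0$ limit is the log-likelihood ratio statistic.

```python
import math

def power_divergence(U, p, n, lam):
    """Cressie-Read power divergence statistic T_lambda for counts U,
    null probabilities p and sample size n; lam = 0 is taken as the limit
    (the log-likelihood ratio statistic)."""
    if lam == 0:
        return 2.0 * sum(u * math.log(u / (n * pj)) for u, pj in zip(U, p) if u > 0)
    return (2.0 / (lam * (lam + 1.0))) * sum(
        u * ((u / (n * pj)) ** lam - 1.0) for u, pj in zip(U, p))

def pearson(U, p, n):
    """Pearson's chi-square statistic."""
    return sum((u - n * pj) ** 2 / (n * pj) for u, pj in zip(U, p))

# hypothetical data: n = 100 trials over r = 3 classes
U, p, n = [25, 28, 47], [0.2, 0.3, 0.5], 100
t1 = power_divergence(U, p, n, 1.0)        # equals Pearson's statistic
t0 = power_divergence(U, p, n, 0.0)        # log-likelihood ratio statistic
t_small = power_divergence(U, p, n, 1e-6)  # close to the lam -> 0 limit
```

Expanding $U_j[(U_j/np_j)-1]$ shows algebraically why $T_1$ and $\chi^2$ coincide: $\sum_j U_j^2/(np_j)-n$ equals $\sum_j(U_j-np_j)^2/(np_j)$ because $\sum_j U_j=n$.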
Pearson's statistic, however, has received some treatment. An investigation is given in the unpublished papers [20] and [21], with a $O(n^{-1/2})$ Kolmogorov distance bound given for Pearson's statistic with general null distribution. In a recent work [10], a bound of order $n^{-1}$, for smooth test functions, was obtained. This bound is valid under any null distribution provided $r\geq2$, and involves the classification probabilities $p_1,\ldots,p_r$ under the null model correctly, in the sense that the bound goes to zero if and only if $np_*\to\infty$, where $p_*=\min_{1\leq i\leq r}p_i$. In this paper, we obtain bounds for the rate of convergence of a class of statistics for complete block designs that includes the Friedman and Pearson statistics and the power divergence family of statistics, at least for certain values of the index parameter $\lambda$. By building on the proof techniques of [8] and [10], we establish some simple conditions under which the rate of convergence is of order $n^{-1}$ for smooth test functions, and present the general $O(n^{-1})$ bounds in Theorems 2.4, 2.5 and 2.6. In Section 3, we consider the application of these general bounds to particular chi-square statistics. In particular, in Theorem 3.1, we obtain an explicit $O(n^{-1})$ bound on the distributional distance between Friedman's statistic and its limiting chi-square distribution, for smooth test functions. In Theorem 3.3, we also show that the rate of convergence of the power divergence family of statistics is $O(n^{-1})$ in the cases that the index parameter $\lambda$ is either a positive integer or any real number greater than 5. It is conjectured that this rate holds for any $\lambda\in\mathbb{R}$. Elements of Stein's method for functions of multivariate normal random variables To derive our approximation theorems for Friedman's statistic, we employ the powerful probabilistic technique Stein's method.
Originally developed for normal approximation by [28], the method has since been extended to many other distributions, such as the multinomial [17], exponential [3,26], gamma [10,18,23], variance-gamma [7] and multivariate normal [2,13]. For a comprehensive overview of the current literature and an outline of the basic method, see [16]. We now outline how Stein's method can be used to prove approximation theorems when the limit distribution can be represented as a function of multivariate normal random variables (for more details see [8]). We describe the general approach and explain how it can be applied to statistics for complete block designs, such as Friedman's statistic. Let $g:\mathbb{R}^d\to\mathbb{R}$ be continuous and let $\mathbf{Z}$ denote the standard $d$-dimensional multivariate normal distribution. Let $\Sigma$ be non-negative definite, and let $\Sigma^{1/2}$ be the unique non-negative definite square root, so that $\Sigma^{1/2}\mathbf{Z}\sim\mathrm{MVN}(0,\Sigma)$. Suppose that we are interested in bounding the distributional distance between $g(\mathbf{W})$ and $g(\Sigma^{1/2}\mathbf{Z})$. To see, for example, that Friedman's statistic falls into this framework, note that $F_r$ can be written in the form $g(\mathbf{W})$, where $g(\mathbf{w})=\sum_{j=1}^r w_j^2$ and the $W_j$ are asymptotically normally distributed by the central limit theorem. Now, consider the multivariate normal Stein equation (see [12]) with test function $h(g(\cdot))$: $$\nabla^\top\Sigma\nabla f(\mathbf{w})-\mathbf{w}^\top\nabla f(\mathbf{w})=h(g(\mathbf{w}))-\mathbb{E}h(g(\Sigma^{1/2}\mathbf{Z})).\qquad(1.4)$$ We can therefore bound the quantity of interest $|\mathbb{E}h(g(\mathbf{W}))-\mathbb{E}h(g(\Sigma^{1/2}\mathbf{Z}))|$ by solving (1.4) for $f$ and then bounding the expectation $$\big|\mathbb{E}\big[\nabla^\top\Sigma\nabla f(\mathbf{W})-\mathbf{W}^\top\nabla f(\mathbf{W})\big]\big|.\qquad(1.5)$$ A number of coupling techniques have been developed for bounding such expectations (see [4,11,12,22,27]). These papers also give general plug-in bounds for this quantity, although these only hold in the classical case that the derivatives of the test function (here $h(g(\cdot))$) are bounded, in which case standard bounds for the derivatives of the solution to (1.4) can be applied (see [9,12,22]).
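For intuition, the characterising property behind the multivariate normal Stein equation can be checked numerically. Assuming the standard operator $\mathcal{A}f(\mathbf{w})=\nabla^\top\Sigma\nabla f(\mathbf{w})-\mathbf{w}^\top\nabla f(\mathbf{w})$ used in this line of work, its expectation under $\mathrm{MVN}(0,\Sigma)$ vanishes. A minimal Monte Carlo sketch (ours, not the paper's; the quadratic $f$ and the random $\Sigma$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((d, d))
Sigma = A @ A.T                      # an arbitrary positive definite covariance
L = np.linalg.cholesky(Sigma)        # plays the role of Sigma^{1/2} (same law)
W = rng.standard_normal((200_000, d)) @ L.T   # samples from MVN(0, Sigma)

# For f(w) = sum_j w_j^2: grad f = 2w, Hess f = 2I, so
# (A f)(w) = trace(2 * Sigma) - w . (2w) = 2 tr(Sigma) - 2 |w|^2.
vals = 2.0 * np.trace(Sigma) - 2.0 * np.einsum('ij,ij->i', W, W)
mean_val = vals.mean()               # should be close to 0
```

Here the identity holds exactly in expectation because $\mathbb{E}\,|\Sigma^{1/2}\mathbf{Z}|^2=\mathrm{tr}(\Sigma)$; the Monte Carlo average only fluctuates around 0.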
However, in general the derivatives of the test function $h(g(\cdot))$ will be unbounded (this is the case for Friedman's statistic), and therefore the derivatives of the solution will also in general be unbounded. The partial derivatives of the solution (1.6) were bounded by [8] for a large class of functions $g:\mathbb{R}^d\to\mathbb{R}$; in particular, bounds are given for the case that the partial derivatives of $g$ have polynomial growth. These bounds are relevant to our study and are stated in Lemma 2.3. With such bounds on the solution and the coupling strategies developed for multivariate normal approximation, it is in principle possible to bound the expectation (1.5), although we cannot directly apply the existing plug-in bounds. This is the approach we shall take when obtaining our general approximation theorems in Section 2. Outline of the paper In Section 2, we derive general bounds for the distributional distance between statistics $g(\mathbf{W})$ for complete block designs and their limiting distribution $g(\Sigma^{1/2}\mathbf{Z})$. We give two general $O(n^{-1/2})$ bounds, one for the case of non-negative definite covariance matrices (Theorem 2.2) and another for positive definite covariance matrices (Theorem 2.3). When the function $g$ is even ($g(\mathbf{w})=g(-\mathbf{w})$ for all $\mathbf{w}\in\mathbb{R}^d$), the rate of convergence can be improved to $O(n^{-1})$ for smooth test functions (see Theorem 2.4). In Section 2.3, we see that it is possible to obtain $O(n^{-1})$ bounds when the assumption that $g$ is even is relaxed a little (see Theorem 2.6). In Section 3, we consider the application of the general bounds of Section 2 to the Friedman and Pearson statistics, as well as the power divergence statistics. In particular, in Theorem 3.1, we obtain an explicit $O(n^{-1})$ bound for the distributional distance between Friedman's statistic and its limiting chi-square distribution.
In Theorem 3.3, we obtain a $O(n^{-1})$ bound on the rate of convergence for the family of power divergence statistics in the cases that $\lambda$ is a positive integer or any real number greater than 5. We end by conjecturing that this rate holds for all $\lambda\in\mathbb{R}$. 2. General bounds for the distributional distance between $g(\mathbf{W})$ and $g(\Sigma^{1/2}\mathbf{Z})$ Preliminary lemmas Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be random variables which have mean zero, but which are not necessarily independent or identically distributed. Indeed, we shall suppose that $X_{1,j},\ldots,X_{n,j}$ are independent for fixed $j$, but that the random variables $X_{i,1},\ldots,X_{i,d}$ may be dependent for any fixed $i$. For $j=1,\ldots,d$, let $W_j=\frac{1}{\sqrt{n}}\sum_{i=1}^n X_{ij}$ and denote $\mathbf{W}=(W_1,\ldots,W_d)^\top$. To deal with this dependence structure, we introduce the random variables $W_j^{(i)}=W_j-\frac{1}{\sqrt{n}}X_{ij}$, so that $W_j^{(i)}$ and $X_{ij}$ are independent. Suppose that the covariance matrix $\Sigma$ of $\mathbf{W}$ is non-negative definite. Let $\mathbf{Z}$ have the standard $d$-dimensional multivariate normal distribution, so that $\Sigma^{1/2}\mathbf{Z}\sim\mathrm{MVN}(0,\Sigma)$, and let $\sigma_{jk}=(\Sigma)_{jk}$. In this section, we shall obtain bounds on the distributional distance between $g(\mathbf{W})$ and $g(\Sigma^{1/2}\mathbf{Z})$, where $g:\mathbb{R}^d\to\mathbb{R}$ is a sufficiently differentiable function. Note that $g(\mathbf{W})$ takes the form of a statistic for complete block designs. In this subsection, we give two bounds: one for general $g$ and a second for the case that $g$ is an even function ($g(\mathbf{w})=g(-\mathbf{w})$ for all $\mathbf{w}\in\mathbb{R}^d$). In the next subsection, we shall specialise to the case that the partial derivatives of $g$ have polynomial growth. Before presenting our bounds, we introduce some notation. We shall let $C^k(I)$ denote the class of real-valued functions defined on $I\subseteq\mathbb{R}^d$ whose partial derivatives of order $k$ all exist, and let $C_b^k(I)$ denote the class of real-valued functions defined on $I\subseteq\mathbb{R}^d$ whose partial derivatives of order $k$ all exist and are bounded. Lemma 2.1. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as above.
Suppose $h$ and $g$ are such that $f\in C^3(\mathbb{R}^d)$, where $f$ is given by (1.6). Then, if the expectations on the right-hand side of (2.1) exist, the bound (2.1) holds. Proof. We aim to bound $\mathbb{E}h(g(\mathbf{W}))-\mathbb{E}h(g(\Sigma^{1/2}\mathbf{Z}))$, and do so by bounding the quantity (1.5). The proof is complete. In the statement of Lemma 2.1, we did not give precise conditions on $h$ and $g$ such that $f\in C^3(\mathbb{R}^d)$, nor restrictions on the $X_{ij}$ such that the expectations on the right-hand side of (2.1) exist. In applying Lemma 2.1 in practice (see Section 2.2), one would need to check that $h$, $g$ and the $X_{ij}$ are such that these conditions are met. The same comment applies equally to Lemma 2.2. We now obtain an analogue of Lemma 2.1 for the case that $g$ is an even function. The symmetry of the function $g$ allows us to obtain $O(n^{-1})$ convergence rates for smooth test functions $h$. The partial differential equation (2.2) shall appear in our proof. Lemma 2.2. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as they were for Lemma 2.1. Suppose $g:\mathbb{R}^d\to\mathbb{R}$ is an even function. Suppose further that the solution (1.6), denoted by $f$, belongs to the class $C^4(\mathbb{R}^d)$ and that the solution $\psi_{jkl}$ to (2.2) is in the class $C^3(\mathbb{R}^d)$. Then, if the expectations on the right-hand side of (2.3) exist, the bound (2.3) holds. Proof. By a similar argument to the one used in the proof of Lemma 2.1, we can decompose the quantity of interest into terms $N_1$ and $N_2$, which can in turn be expanded and bounded by terms $R_1,\ldots,R_4$ together with terms involving the third-order moments $\mathbb{E}X_{ij}X_{ik}X_{il}$; combining these bounds gives (2.4). To achieve the desired $O(n^{-1})$ bound, we need to show that $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\mathbf{W})$ is of order $n^{-1/2}$. Since $g$ is an even function, the solution $f$, as given by (1.6), is an even function (see [8], Lemma 3.2). Therefore $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\Sigma^{1/2}\mathbf{Z})=0$. We can use Lemma 2.1 to bound the right-hand side of (2.5), which allows us to obtain a $O(n^{-1/2})$ bound for this quantity. All terms have now been bounded to the desired order and the proof is complete.
Approximation theorems for polynomial $P$ Lemmas 2.1 and 2.2 allow one to bound the distributional distance between $g(\mathbf{W})$ and $g(\Sigma^{1/2}\mathbf{Z})$ if bounds are available for the expectations on the right-hand side of (2.1) and (2.3), respectively. In this subsection, we obtain such bounds for the case that the partial derivatives of $g$ have polynomial growth. We begin with Lemma 2.3 (below), in which we state some bounds (see [8], Corollaries 2.2 and 2.3) for the solutions $f$ and $\psi_{jkl}$. In [8], bounds for $f$ and $\psi_{jkl}$ are also available for the case that the partial derivatives of $g$ have exponential growth, although for space reasons we do not include these bounds (polynomial bounds suffice for our applications). We say that the function $g:\mathbb{R}^d\to\mathbb{R}$ belongs to the class $C_P^m(\mathbb{R}^d)$ if all $m$-th order partial derivatives of $g$ exist and there exists a dominating function $P:\mathbb{R}^d\to\mathbb{R}_+$ such that, for all $\mathbf{w}\in\mathbb{R}^d$, the partial derivatives are bounded by $P(\mathbf{w})$. Lemma 2.3. Let $h\in C_b^6(\mathbb{R})$ and $g\in C_P^6(\mathbb{R}^d)$. Then the bounds (2.6)-(2.8) hold for all $\mathbf{w}\in\mathbb{R}^d$. Lemma 2.4. Let $\mathbf{X}_i$ denote the vector $(X_{i,1},\ldots,X_{i,d})^\top$, for $i=1,\ldots,n$. Then, for all $\theta\in(0,1)$, the three inequalities of (2.9) hold, for $g$ in the classes $C_P^k(\mathbb{R}^d)$, $C_P^{k-1}(\mathbb{R}^d)$ and $C_P^6(\mathbb{R}^d)$, respectively. For the second inequality, we must assume that $\Sigma$ is positive definite; for the other inequalities it suffices for $\Sigma$ to be non-negative definite. Proof. Let us prove the first inequality. From inequality (2.7), by using the crude inequality $|a+b|^s\leq 2^s(|a|^s+|b|^s)$, which holds for any $s\geq0$, together with the independence of $X_{ij}$ and $W_j^{(i)}$, the problem reduces to bounding moments of $W_j^{(i)}$. Using that $\mathbb{E}|W_j^{(i)}|^{r_j}\leq\mathbb{E}|W_j|^{r_j}$, which can be seen by using Jensen's inequality, leads to the desired inequality. Thus we obtain the first inequality. The proofs of the other two inequalities are similar; we just use inequalities (2.6) and (2.8) instead of inequality (2.7).
By applying the inequalities of Lemma 2.4 to the bounds of Lemmas 2.1 and 2.2, we can obtain the following four theorems for the distributional distance between $g(\mathbf{W})$ and $g(\Sigma^{1/2}\mathbf{Z})$ when the derivatives of $g$ have polynomial growth. Theorem 2.2 follows from using inequality (2.9) in the bound of Lemma 2.1, and the other theorems are proved similarly. Theorems 2.4 and 2.5 give some simple sufficient conditions under which a $O(n^{-1})$ bound can be obtained for smooth test functions. We could obtain analogues of Theorems 2.4 and 2.5 for the case of a positive definite covariance matrix $\Sigma$ (which would impose weaker conditions on $g$ and $h$) by appealing to results from Section 2 of [8]. However, for space reasons, we do not present them (our applications involve covariance matrices that are only non-negative definite). Theorem 2.2. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as in Lemma 2.1, but with the additional assumption that $\mathbb{E}|X_{ij}|^{r_k+3}<\infty$ for all $i,j$ and $1\leq k\leq d$. Suppose $\Sigma$ is non-negative definite and that $g\in C_P^3(\mathbb{R}^d)$. Then a bound of order $n^{-1/2}$ holds. Theorem 2.3. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as in Lemma 2.1, but with the additional assumption that $\mathbb{E}|X_{ij}|^{r_k+3}<\infty$ for all $i,j$ and $1\leq k\leq d$, and suppose that $\Sigma$ is positive definite. Then a bound of order $n^{-1/2}$ holds. Theorem 2.4. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as in Lemma 2.1, but with the additional assumption that $\mathbb{E}|X_{ij}|^{r_k+4}<\infty$ for all $i,j$ and $1\leq k\leq d$. Suppose $\Sigma$ is non-negative definite and that $g\in C_P^6(\mathbb{R}^d)$ is an even function. Then, for $h\in C_b^6(\mathbb{R})$, the $O(n^{-1})$ bound (2.11) holds. Theorem 2.5. Let $X_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,d$, be defined as in Lemma 2.1, but with the additional assumption that $\mathbb{E}|X_{ij}|^{r_k+4}<\infty$ for all $i,j$ and $1\leq k\leq d$. Suppose $\Sigma$ is non-negative definite and that $\mathbb{E}X_{ij}X_{ik}X_{il}=0$ for all $1\leq i\leq n$ and $1\leq j,k,l\leq d$. Suppose $g\in C_P^4(\mathbb{R}^d)$. Then, for $h\in C_b^4(\mathbb{R})$, the $O(n^{-1})$ bound (2.12) holds. Proof.
Notice that in the proof of Lemma 2.2 the bound (2.4) reduces to $|R_1|+|R_2|+|R_3|+|R_4|$ when $\mathbb{E}X_{ij}X_{ik}X_{il}=0$ for all $1\leq i\leq n$ and $1\leq j,k,l\leq d$. Therefore the terms involving a multiple of $\mathbb{E}X_{ij}X_{ik}X_{il}$ vanish from the bound (2.11), and we no longer require that $g$ is even; we also only need $g$ to belong to the class $C_P^4(\mathbb{R}^d)$ and $h$ to belong to $C_b^4(\mathbb{R})$. Relaxing the condition that $g$ is even For our application to the rate of convergence of the power divergence statistics, we shall need a slight relaxation of the assumption from Theorem 2.4 that $g$ is an even function. Looking back at the proof of Lemma 2.2, we see that a crucial step in obtaining the $O(n^{-1})$ rate of Theorem 2.4 was the result that $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\mathbf{W})$ is of order $n^{-1/2}$ when $g$ is an even function satisfying suitable differentiability and boundedness conditions. We were able to obtain this result by using the fact that $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\Sigma^{1/2}\mathbf{Z})=0$ for such $g$. However, it actually suffices that $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\Sigma^{1/2}\mathbf{Z})=O(n^{-1/2})$ in order to obtain a final bound of order $n^{-1}$. This offers the scope for relaxing the condition that $g$ is an even function. Suppose $g:\mathbb{R}^d\to\mathbb{R}$ is such that $$g(\mathbf{w})=a(\mathbf{w})+\delta b(\mathbf{w}),\qquad(2.13)$$ where $a:\mathbb{R}^d\to\mathbb{R}$ and $b:\mathbb{R}^d\to\mathbb{R}$ satisfy suitable boundedness and differentiability conditions, $a$ is an even function and $0\leq\delta<1$ is a constant that we shall often think of as being 'small'. Through a sequence of lemmas we shall see that, for such a $g$, we have $\mathbb{E}\frac{\partial^3 f}{\partial w_j\partial w_k\partial w_l}(\Sigma^{1/2}\mathbf{Z})=O(\delta)$. This will enable us to obtain an extension of Theorem 2.4 to the case that $g$ is of the form (2.13). This theorem will enable us to obtain $O(n^{-1})$ bounds for the rate of convergence of the power divergence statistics for certain values of the index parameter $\lambda$; see Section 3.3. We begin by obtaining a simple and useful formula for the partial derivatives of the test function $h(g(\cdot))$, where $g$ is of the form (2.13). Before deriving this formula, we state some preliminary results.
The first is a multivariate generalisation of the Faà di Bruno formula for $m$-th order derivatives of composite functions, due to [19]: $$\frac{\partial^m}{\partial w_{j_1}\cdots\partial w_{j_m}}h(g(\mathbf{w}))=\sum_{\pi\in\Pi}h^{(|\pi|)}(g(\mathbf{w}))\prod_{B\in\pi}\frac{\partial^{|B|}g(\mathbf{w})}{\prod_{j\in B}\partial w_j},\qquad(2.14)$$ where $\pi$ runs through the set $\Pi$ of all partitions of the set $\{1,\ldots,m\}$, the product is over all of the parts $B$ of the partition $\pi$, and $|S|$ is the cardinality of the set $S$. It is useful to note that the number of partitions of $\{1,\ldots,m\}$ into $k$ non-empty subsets is given by the Stirling number of the second kind (see [24]). We now introduce a class of functions that will play a similar role to the class of functions $C_P^m(\mathbb{R}^d)$ of Section 2.2. We say that the function $g:\mathbb{R}^d\to\mathbb{R}$ belongs to the class $C_{Q,\delta}^m(\mathbb{R}^d)$ if $g$ can be written in the form (2.13), all $m$-th order partial derivatives of $a$ and $b$ exist, and there exists a dominating function $Q:\mathbb{R}^d\to\mathbb{R}_+$ such that, for all $\mathbf{w}\in\mathbb{R}^d$, the relevant products of partial derivatives of $a$ and $b$ are all bounded by $Q(\mathbf{w})$. If $g\in C_{Q,\delta}^m(\mathbb{R}^d)$, then it is easy to see that, for all $\mathbf{w}\in\mathbb{R}^d$, the corresponding products of partial derivatives of $g$ are all bounded by $Q(\mathbf{w})$. With these inequalities we are able to prove the following lemma. Lemma 2.5. Suppose $h\in C_b^m(\mathbb{R})$ and that $g$, defined by $g(\mathbf{w})=a(\mathbf{w})+\delta b(\mathbf{w})$, is in the class $C_{Q,\delta}^m(\mathbb{R}^d)$. Then, for all $\mathbf{w}\in\mathbb{R}^d$, the $m$-th order partial derivatives of $h(g(\mathbf{w}))$ can be expanded about those of $h(a(\mathbf{w}))$ with a remainder of order $\delta$. Proof. By (2.14), we can expand the derivative with a remainder $r_1$ that satisfies a crude bound for all $\mathbf{w}\in\mathbb{R}^d$, as $\delta<1$. By the mean value theorem, we obtain a further expansion with remainder $r_2$, which is also bounded for all $\mathbf{w}\in\mathbb{R}^d$. Summing up the remainders $r_1(\mathbf{w})+r_2(\mathbf{w})$ completes the proof. So far, we have imposed no conditions on the dominating function $Q$. However, from now on, we shall suppose that $$Q(\mathbf{w})=A+\sum_{j=1}^d B_j|w_j|^{r_j},$$ where $A\geq0$, $B_1,\ldots,B_d\geq0$ and $r_1,\ldots,r_d\geq0$. Thus, $Q$ takes the same form as $P$ did in Section 2.2. We shall restrict our attention to such a dominating function in this paper, but we note that we could obtain an analogue of the following lemma for dominating functions that have exponential growth (see [8], Section 2). However, for our applications, we shall not need such a lemma, so we omit it for space reasons.
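As a sanity check on (2.14) (our illustration; the functions below are arbitrary smooth choices), for $m=2$ the partitions of $\{1,2\}$ are $\{\{1,2\}\}$ and $\{\{1\},\{2\}\}$, so the formula reads $\partial^2 h(g)/\partial w_1\partial w_2 = h'(g)\,\partial^2 g/\partial w_1\partial w_2 + h''(g)\,(\partial g/\partial w_1)(\partial g/\partial w_2)$; this can be compared against a finite-difference evaluation:

```python
import math

# arbitrary smooth choices for the check
def g(w1, w2): return w1 ** 2 * w2 + w2 ** 3
def h(x): return math.sin(x)

w1, w2 = 0.7, -0.4
gv = g(w1, w2)

# right-hand side of (2.14) for m = 2:
#   h'(g) * d2g/dw1dw2  +  h''(g) * (dg/dw1) * (dg/dw2)
rhs = (math.cos(gv) * (2 * w1)
       - math.sin(gv) * (2 * w1 * w2) * (w1 ** 2 + 3 * w2 ** 2))

# left-hand side via a central finite difference of h(g(.))
eps = 1e-4
F = lambda a, b: h(g(a, b))
lhs = (F(w1 + eps, w2 + eps) - F(w1 + eps, w2 - eps)
       - F(w1 - eps, w2 + eps) + F(w1 - eps, w2 - eps)) / (4 * eps * eps)
```

The two evaluations agree to roughly the accuracy of the finite-difference scheme.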
Lemma 2.6. Let $m\geq1$ be odd. Suppose that $h\in C_b^m(\mathbb{R})$ and $g\in C_{Q,\delta}^m(\mathbb{R}^d)$. Let $f$ denote the solution (1.6). Then the bound (2.17) holds. Proof. Let $\Sigma^{1/2}\mathbf{Z}\sim\mathrm{MVN}(0,\Sigma)$. By dominated convergence and Lemma 2.5, we obtain an expansion for the $m$-th order partial derivatives of $f$, where $q$ is defined as per equation (2.15). Evaluating both sides at the random variable $\Sigma^{1/2}\mathbf{Z}$ and taking expectations gives (2.18). Since $a$ is an even function and $m$ is odd, it follows that the first integral on the right-hand side of (2.18) is equal to 0, and thus, by (2.16), we arrive at (2.19). It was shown in [8] (see Lemma 2.3 and Corollary 2.2 of that work) that a bound on the relevant expectation holds for all $\mathbf{w}\in\mathbb{R}^d$. Applying this inequality to (2.19) gives the result, as required. Theorem 2.6. Under the above conditions with $\delta=n^{-1/2}$, the quantity $|\mathbb{E}h(g(\mathbf{W}))-\mathbb{E}h(g(\Sigma^{1/2}\mathbf{Z}))|$ is bounded to order $n^{-1}$, where $M$ is defined as in Theorem 2.4. Proof. Theorem 2.4 was obtained by applying Lemma 2.2 together with the bounds of (2.3). Examining the proof of Lemma 2.2 (particularly equation (2.4)), we see that we can obtain a bound for the quantity $|\mathbb{E}h(g(\mathbf{W}))-\mathbb{E}h(g(\Sigma^{1/2}\mathbf{Z}))|$ that is the bound $M$ of Theorem 2.4 plus the additional term (2.20), which arises because it is no longer assumed that $g$ is an even function. Applying inequality (2.17), with $\delta=n^{-1/2}$, to bound (2.20) yields the desired $O(n^{-1})$ bound. Application to Friedman's chi-square and the power divergence statistics In this section, we consider the application of the approximation theorems of Section 2 to the statistics for complete block designs that were introduced in Section 1.1. Friedman's statistic We begin by observing that Friedman's statistic falls into the class of statistics covered by Theorem 2.5. Recall that $X_{ij}=\frac{\sqrt{12}}{\sqrt{r(r+1)}}\big(\pi_i(j)-\frac{r+1}{2}\big)$, and that under the null hypothesis, for fixed $j$, the random variables $\pi_1(j),\ldots,\pi_n(j)$ are i.i.d. with uniform distribution on $\{1,\ldots,r\}$. Friedman's statistic is given by $F_r=\sum_{j=1}^r W_j^2$, where $W_j=\frac{1}{\sqrt{n}}\sum_{i=1}^n X_{ij}$, and is asymptotically $\chi^2_{(r-1)}$ distributed under the null hypothesis. By the central limit theorem, for any $j$, we have that $W_j$ converges in distribution to a mean-zero normal random variable as $n\to\infty$.
However, the $W_j$ are not independent (see Lemma 3.1 for the (non-negative definite) covariance matrix $\Sigma_W$), and we have that $\mathbf{W}=(W_1,\ldots,W_r)^\top\xrightarrow{D}\mathrm{MVN}(0,\Sigma_W)$ as $n\to\infty$. However, for fixed $j$, the random variables $X_{1,j},\ldots,X_{n,j}$ are independent, because it is assumed that trials are independent. Also, we can write $F_r=g(\mathbf{W})$, where $g(\mathbf{w})=\sum_{j=1}^r w_j^2$. The function $g:\mathbb{R}^r\to\mathbb{R}$ is an even function with partial derivatives of polynomial growth. Finally, since the $X_{ij}$ have finite support, their moments of any order exist. Therefore, Friedman's statistic falls into the framework of Theorem 2.4. However, since the distribution of the random variables $X_{ij}$ is symmetric about 0, it follows that $\mathbb{E}X_{ij}X_{ik}X_{il}=0$ for all $1\leq i\leq n$ and $1\leq j,k,l\leq d$. Thus, we can in fact apply Theorem 2.5 to bound the rate of convergence of Friedman's statistic. We present an explicit bound in Theorem 3.1, but before stating the theorem we note the following lemma. Lemma 3.1. For all $1\leq j\leq r$, $\mathbb{E}W_j^2=\frac{r-1}{r}$, and for $j\neq k$, $\mathbb{E}W_jW_k=-\frac{1}{r}$. Proof. For fixed $j$, $\pi_1(j),\ldots,\pi_n(j)$ are independent $\mathrm{Unif}\{1,\ldots,r\}$ random variables, and therefore $\mathbb{E}W_j^2=\mathrm{Var}(X_{1j})=\frac{12}{r(r+1)}\cdot\frac{r^2-1}{12}=\frac{r-1}{r}$. Suppose now that $j\neq k$. Since $\sum_{j=1}^r W_j=0$, we have $0=\mathbb{E}\big(\sum_{j=1}^r W_j\big)^2=r\,\mathbb{E}W_j^2+r(r-1)\,\mathbb{E}W_jW_k$, where we used that the $W_j$ are identically distributed to obtain the final equality. On rearranging, and using that $\mathbb{E}W_j^2=\frac{r-1}{r}$, we have that $\mathbb{E}W_jW_k=-\frac{1}{r}$. Theorem 3.1 states an explicit bound (3.1), for smooth test functions, on the distributional distance between Friedman's statistic and its limiting chi-square distribution. We make several remarks. 1. The bound (3.1) is of order $n^{-1}$, which is the fastest rate of convergence in the literature for Friedman's statistic. However, the numerical constants and the dependence of the bound on $r$ are far from optimal. This is the price we pay for deriving the bound by applying the more general bound of Theorem 2.5. In proving Theorem 2.5, we used local couplings to deal with the dependence structure, and this approach enabled us to obtain a $O(n^{-1})$ bound for a class of statistics for complete block designs. However, for Friedman's statistic, local couplings are quite crude, and their use leads to a bound with a poor dependence on $r$.
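The moment values used here ($\mathbb{E}W_j^2=\frac{r-1}{r}$, and, after rearranging $0=\mathbb{E}(\sum_j W_j)^2$, the off-diagonal value $-\frac{1}{r}$ for $j\neq k$) reduce to single-trial expectations, since $\mathbb{E}W_jW_k=\mathbb{E}X_{1j}X_{1k}$. They can therefore be confirmed by exact enumeration over all rankings of one trial; a small sketch of ours ($r=4$ is an arbitrary choice):

```python
import itertools
import math

def scores(perm, r):
    """Standardised Friedman scores X_j for one ranking perm of {1,...,r}."""
    c = math.sqrt(12.0) / math.sqrt(r * (r + 1))
    return [c * (p - (r + 1) / 2.0) for p in perm]

r = 4
perms = [scores(p, r) for p in itertools.permutations(range(1, r + 1))]

# exact expectations under a uniformly random ranking
Ex2 = sum(x[0] ** 2 for x in perms) / len(perms)    # E X_j^2      -> (r-1)/r
Exy = sum(x[0] * x[1] for x in perms) / len(perms)  # E X_j X_k    -> -1/r
```

Enumeration is feasible here because there are only $r!$ equally likely rankings per trial.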
A direction for future research is to obtain a $O(n^{-1})$ bound with a better dependence on $r$. 2. As using Theorem 2.5 to obtain a $O(n^{-1})$ bound for Friedman's statistic will lead to one with a far from optimal dependence on $r$ and large numerical constants, we make use of a number of crude inequalities when applying Theorem 2.5 to derive our bound. This simplifies the calculations, but still allows us to obtain the desired $O(n^{-1})$ rate. 3. We could also apply Theorem 2.2 to derive a $O(r^3n^{-1/2})$ bound for Friedman's statistic, which may be preferable when the numerical constants are large compared to $n$. Note that we cannot apply Theorem 2.3, because the covariance matrix $\Sigma_W$ is not positive definite. Proof of Theorem 3.1. As described above, Friedman's statistic falls into the framework of Theorem 2.5, so to derive the bound we need to find a suitable dominating function for $g(\mathbf{w})=\sum_{j=1}^r w_j^2$. Now, let $X$ be a random variable that has the same distribution as the $X_{ij}$. We shall need some formulas for the moments of $X$, which can be computed from sums over the uniform distribution on $\{1,\ldots,r\}$. With the resulting inequalities and Hölder's inequality, we can bound the terms that arise in (2.12) for all $1\leq i\leq n$ and $1\leq j,k,l,m,t\leq d$, using also that $\mathbb{E}X=0$ and $n\geq1$, together with $\mathbb{E}Z_j^4=3$. Plugging these inequalities into the bound (2.12) and simplifying, using that $r,n\geq1$, gives our final bound. Pearson's statistic Here, we consider the application of Theorem 2.4 to Pearson's statistic. Recall that Pearson's statistic is given by $\chi^2=\sum_{j=1}^r\frac{(U_j-np_j)^2}{np_j}$ and is asymptotically $\chi^2_{(r-1)}$ distributed provided $np_*\to\infty$, where $p_*=\min_{1\leq j\leq r}p_j$. The cell counts $U_j\sim\mathrm{Bin}(n,p_j)$, $1\leq j\leq r$, are dependent random variables that satisfy $\sum_{j=1}^r U_j=n$. Now, let $I_{ij}\sim\mathrm{Ber}(p_j)$ denote the indicator that the $i$-th trial falls in the $j$-th cell. Then, letting $X_{ij}=\frac{I_{ij}-p_j}{\sqrt{p_j}}$ and $W_j=\frac{1}{\sqrt{n}}\sum_{i=1}^n X_{ij}$, we can write $\chi^2=g(\mathbf{W})$, where $g(\mathbf{w})=\sum_{j=1}^r w_j^2$.
As was the case for Friedman's statistic, the function $g$ is even and has derivatives of polynomial growth. Also, because trials are assumed to be independent, the random variables $X_{1,j}, \ldots, X_{n,j}$ are independent for fixed $j$. The covariance matrix of $\mathbf{W} = (W_1, \ldots, W_r)^\top$ is non-negative definite, with entries $\sigma_{jj} = 1 - p_j$ and $\sigma_{jk} = -\sqrt{p_j p_k}$, $j \neq k$ (see [10]). It is therefore the case that Pearson's statistic falls within the class of statistics covered by Theorem 2.4. However, unlike Friedman's statistic, we cannot apply Theorem 2.5 to Pearson's statistic. This is because $\mathbb{E} X_{ij} X_{ik} X_{il} \neq 0$ in general. We can apply Theorem 2.4 to Pearson's statistic as follows. As was the case for Friedman's statistic, for all $k = 1, \ldots, d$ we have that $\frac{\partial g}{\partial w_k}(\mathbf{w}) = 2 w_k$ and $\frac{\partial^2 g}{\partial w_k^2}(\mathbf{w}) = 2$, and all other derivatives are equal to 0. However, because we are applying Theorem 2.4 instead of Theorem 2.5, we need a different dominating function. We have that $\big(\frac{\partial g}{\partial w_k}(\mathbf{w})\big)^6 = 64 w_k^6$ and $\big(\frac{\partial^2 g}{\partial w_k^2}(\mathbf{w})\big)^3 = 8$, meaning that we can take $P(\mathbf{w}) = 8 + 64 \sum_{j=1}^{r} w_j^6$ as our dominating function. We can therefore bound the quantity $|\mathbb{E} h(\chi^2) - \chi^2_{(r-1)} h|$ by applying the bound (2.11) with $A = 8$, $B_1 = \cdots = B_r = 64$ and $r_1 = \cdots = r_r = 6$. We do not compute this bound, but note that we can obtain one of the form (3.2), where $C$ is a constant depending on $r$ and $p_1, \ldots, p_r$, but not on $n$. We do not explicitly find such a $C$ because a superior upper bound of the form $K (np_*)^{-1} \sum_{k=1}^{5} \|h^{(k)}\|$ has already been obtained by [10]. This bound holds for a weaker class of test functions than (3.2) and has a much better dependence on $r$ and $p_1, \ldots, p_r$ than a bound that would result from an application of Theorem 2.4. That the bound of [10] outperforms ours is perhaps to be expected, given that their approach was targeted specifically at Pearson's statistic rather than the general class of statistics that are considered in this paper.
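The representation $\chi^2 = g(\mathbf{W})$ and the linear constraint responsible for the dependence of the $W_j$ are straightforward to check numerically; the probabilities and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
p = np.array([0.1, 0.2, 0.3, 0.4])  # cell probabilities, summing to 1
U = rng.multinomial(n, p)           # cell counts, with sum_j U_j = n

# Standard form of Pearson's statistic.
chi2_pearson = np.sum((U - n * p) ** 2 / (n * p))

# Representation chi^2 = g(W), g(w) = sum_j w_j^2, with W_j = (U_j - n p_j)/sqrt(n p_j).
W = (U - n * p) / np.sqrt(n * p)
assert np.isclose(chi2_pearson, np.sum(W ** 2))

# The linear constraint behind the dependence: sum_j sqrt(p_j) W_j = 0,
# since sum_j (U_j - n p_j) = n - n = 0.
assert np.isclose(np.sum(np.sqrt(p) * W), 0.0)
```

The second assertion makes explicit why $\Sigma_W$ is singular in the direction $(\sqrt{p_1}, \ldots, \sqrt{p_r})$.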
The power divergence family of statistics

Recall that the power divergence statistic with index $\lambda \in \mathbb{R}$ is given by
$$(3.3)\qquad T_\lambda = \frac{2}{\lambda(\lambda+1)} \sum_{j=1}^{r} U_j \bigg[ \Big( \frac{U_j}{np_j} \Big)^{\lambda} - 1 \bigg].$$
Letting $\mathbf{W}$ be defined as in Section 3.2, we can write (3.3) as $T_\lambda = g(\mathbf{W})$ (3.4), since $W_j = \frac{U_j - np_j}{\sqrt{np_j}}$ and $\sum_{j=1}^{r} U_j = n$. For general $\lambda$, the function $g$, defined as per equation (3.4), is not an even function; the notable exception being the case $\lambda = 1$, which corresponds to Pearson's statistic. Therefore, for $\lambda \neq 1$, we cannot apply Theorems 2.4 and 2.5 to obtain $O(n^{-1})$ bounds for the rate of convergence of the statistic $T_\lambda$ to its limiting $\chi^2_{(r-1)}$ distribution. However, for certain values of $\lambda$, the statistic $T_\lambda$ falls into the class of statistics covered by Theorem 2.6. Let us now see why this is the case. We can write
$$(3.5)\qquad T_\lambda(\mathbf{w}) = \sum_{j=1}^{r} w_j^2 + R(\mathbf{w}),$$
using that $\sum_{j=1}^{r} p_j = 1$ and $\sum_{j=1}^{r} \sqrt{p_j}\, W_j = \frac{1}{\sqrt{n}} \sum_{j=1}^{r} (U_j - np_j) = 0$. Equation (3.5) tells us that $T_\lambda(\mathbf{w})$ is of the form (2.13), in the sense that it can be expressed as the sum of an even term $\sum_{j=1}^{r} w_j^2$ (which corresponds to Pearson's statistic) and a 'small' term $R(\mathbf{w})$. We can argue informally to see that this term is small. Recall the binomial series formula $(1+x)^\alpha = \sum_{k=0}^{\infty} \frac{(-\alpha)_k}{k!} (-x)^k$, which is valid for $|x| < 1$, where the Pochhammer symbol is defined by $(\beta)_n = \beta(\beta+1)\cdots(\beta+n-1)$. Then, provided $|w_j| < \sqrt{np_j}$ for all $1 \le j \le r$, we can use the binomial series formula to obtain a series expansion (3.6) of $R(\mathbf{w})$, which is $O(n^{-1/2})$ as $n \to \infty$. Whilst the series expansion (3.6) shows that $R(\mathbf{w})$ is $O(n^{-1/2})$, provided $|w_j| < \sqrt{np_j}$ for all $1 \le j \le r$, we still need to argue carefully to apply Theorem 2.6 to obtain a bound for the rate of convergence of $T_\lambda$; in fact, as we shall now see, the theorem cannot be directly applied for all $\lambda \in \mathbb{R}$. We shall now prove the following theorem and end by discussing the corresponding conjecture. In the proof, the relevant partial derivatives of $g$ are computed directly, and all other derivatives are equal to 0. Recall that to apply Theorem 2.6 we take a dominating function of the form (3.8)–(3.9), where $A_\lambda$, $\tilde{A}_\lambda$, $\tilde{B}^\lambda_1, \ldots, \tilde{B}^\lambda_r$ and $B^\lambda_1, \ldots, B^\lambda_r$ are $O(1)$ non-negative constants involving only $\lambda$.
We could directly apply Theorem 2.6 with the dominating function (3.9) to obtain a bound for $|\mathbb{E} h(T_\lambda(\mathbf{W})) - \chi^2_{(r-1)} h|$. Alternatively, we note that we could easily derive a variant of Theorem 2.6 for dominating functions of the form (3.8). This would result in the same bound as that given in Theorem 2.6, but with an additional $O(n^{-3\lambda})$ term. Thus, the leading order term (in $n$) of our bound for $|\mathbb{E} h(T_\lambda(\mathbf{W})) - \chi^2_{(r-1)} h|$ would result from applying Theorem 2.6 with the dominating function $A_\lambda + \sum_{j=1}^{r} B^\lambda_j |w_j|^{12}$. The bound would then be computed by bounding the appropriate expectations involving the $X_{ij}$ and $W_j$. Since $r_1 = \cdots = r_r = 12$, the values of these expectations would not depend on $\lambda$. Therefore the dependence of the bound on $p_1, \ldots, p_r$ would remain the same for any $\lambda \in \mathbb{Z}^+$ or $\lambda \ge 5$.

2. Given that the application of Theorem 2.4 to Pearson's statistic resulted in a bound with a poor dependence on $p_1, \ldots, p_r$ and $r$ compared to the existing bound of [10], we would also expect that the application of Theorem 2.6 to the power divergence statistics would result in a bound with far from optimal dependence on these values. Again, we expect this to be the case because we obtain our bounds by applying a general bound. A possible direction for future research would be to proceed in the spirit of [10] and use an approach specifically targeted at the power divergence statistics to obtain a bound with a better dependence on these values.

Remark 3.5. Unless $\lambda \in \mathbb{Z}^+$ or $\lambda \ge 5$, the presence of a singularity at $w_k = -\sqrt{np_k}$ for at least one of the partial derivatives up to sixth order of $g$ means we cannot apply Theorem 2.6 to $T_\lambda(\mathbf{W})$. We do, however, conjecture that the $O(n^{-1})$ rate holds for smooth test functions for any $\lambda$. This conjecture is based on the fact that the $O(n^{-(r-1)/r})$ Kolmogorov distance bounds of [1] and [29] are valid for all $\lambda \in \mathbb{R}$, and also on the $O(n^{-1/2})$ series expansion (3.6) for $R(\mathbf{w})$.
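Two classical members of the family can serve as a numerical check on (3.3): $\lambda = 1$ recovers Pearson's statistic, and the limit $\lambda \to 0$ recovers the likelihood-ratio statistic $G^2 = 2\sum_j U_j \log(U_j/(np_j))$ (approximated below by a small $\lambda$). The observed counts used here are hypothetical.

```python
import numpy as np

def power_divergence(U, n, p, lam):
    """T_lambda = 2/(lam(lam+1)) * sum_j U_j [ (U_j/(n p_j))^lam - 1 ]."""
    return 2.0 / (lam * (lam + 1)) * np.sum(U * ((U / (n * p)) ** lam - 1.0))

p = np.array([0.1, 0.2, 0.3, 0.4])
n = 500
U = np.array([48, 102, 151, 199])  # hypothetical observed counts, summing to n

# lambda = 1 recovers Pearson's statistic.
pearson = np.sum((U - n * p) ** 2 / (n * p))
assert np.isclose(power_divergence(U, n, p, 1.0), pearson)

# lambda -> 0 recovers the likelihood-ratio statistic G^2 (take lambda small).
G2 = 2.0 * np.sum(U * np.log(U / (n * p)))
assert np.isclose(power_divergence(U, n, p, 1e-6), G2, atol=1e-4)
```

The Cressie-Read choice $\lambda = 2/3$, often recommended in practice, can be evaluated with the same helper, e.g. `power_divergence(U, n, p, 2/3)`.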
Extending the theory of this paper to deal with the whole family of power divergence statistics is an interesting direction for future research.
Homotopical variations and high-dimensional Zariski-van Kampen theorems

In 1933, van Kampen described the fundamental groups of the complements of plane complex projective algebraic curves. Recently, Chéniot and Libgober proved an analogue of this result for higher homotopy groups of the complements of complex projective hypersurfaces with isolated singularities. Their description is in terms of some "homotopical variation operators". We generalize here the notion of "homotopical variation" to (singular) quasi-projective varieties. This is a first step for further generalizations of van Kampen's theorem. A conjecture, with a first approach, is stated in the special case of non-singular quasi-projective varieties.

Introduction

Our topic is best understood in the general frame of Lefschetz type theorems. Let $X := Y \setminus Z$, where $Y$ is an algebraic subset of complex projective space $\mathbb{P}^n$, with $n \ge 2$, and $Z$ an algebraic subset of $Y$ (such an $X$ is called an (embedded) quasi-projective variety). Let $L$ be a projective hyperplane of $\mathbb{P}^n$. The non-singular quasi-projective version of the Lefschetz Hyperplane Section Theorem (proved in [GM1,2]) asserts that if $X$ is non-singular and $L$ generic, then the natural maps (between homology and homotopy groups, respectively) $H_q(L \cap X) \to H_q(X)$ and $\pi_q(L \cap X, *) \to \pi_q(X, *)$ are bijective for $0 \le q \le d-2$ and surjective for $q = d-1$, where $d$ is the least (complex) dimension of the irreducible components of $Y$ not contained in $Z$. In the special case $Y = \mathbb{P}^n$, that is, for the complement $X = \mathbb{P}^n \setminus Z$ of a projective variety, the bounds can be improved by $c - 1$, where $c$ is the least codimension of the irreducible components of $Z$ (cf. [C2]): then the above maps are bijective for $0 \le q \le n+c-3$ and surjective for $q = n+c-2$ (there is no improvement if $c = 1$). The question now arises of determining the kernel of these maps in dimension $d-1$ (resp. $n+c-2$ for a complement).
For this purpose, it is classical (at least when $Z = \emptyset$) to consider $L$ as a member of a pencil $\mathcal{P}$ of hyperplanes of $\mathbb{P}^n$ with axis a generic $(n-2)$-plane $M$. The sections of $X$ by all the hyperplanes of such a pencil are isotopic to the one by $L$, with the exception of the sections by a finite number of exceptional hyperplanes $(L_i)_i$. For each $i$, there are some homomorphisms $\mathrm{var}_{i,q} \colon H_q(L \cap X, M \cap X) \to H_q(L \cap X)$, for all $q$, called "homological variation operators", defined by patching each relative cycle on $L \cap X$ modulo $M \cap X$ with its transform by monodromy around $L_i$ (cf. [C3]). These are defined even if $X$ is singular. In loc. cit., the first author showed that if $X$ is non-singular, then the kernel of the natural epimorphism
$$(E)\qquad H_{d-1}(L \cap X) \to H_{d-1}(X)$$
is equal to the sum of the images of the homological variation operators $(\mathrm{var}_{i,d-1})_i$ associated to the exceptional members $(L_i)_i$ of the pencil. In the case of a complement $X = \mathbb{P}^n \setminus Z$, the same is true for the epimorphism
$$(E')\qquad H_{n+c-2}(L \cap (\mathbb{P}^n \setminus Z)) \to H_{n+c-2}(\mathbb{P}^n \setminus Z)$$
with $n+c-2$ in place of $d-1$. Combined with the Hyperplane Section Theorem quoted above, this gives a natural isomorphism
$$(1)\qquad H_{d-1}(L \cap X) \Big/ \sum_i \operatorname{Im} \mathrm{var}_{i,d-1} \;\cong\; H_{d-1}(X)$$
and, in the case of a complement,
$$(1')\qquad H_{n+c-2}(L \cap (\mathbb{P}^n \setminus Z)) \Big/ \sum_i \operatorname{Im} \mathrm{var}_{i,n+c-2} \;\cong\; H_{n+c-2}(\mathbb{P}^n \setminus Z)$$
(which coincides with a special case of (1) when $c = 1$). This is different from the classical point of view of the Second Lefschetz Theorem (cf. [Lef], [Wa] and [AF]), where the kernel is expressed in terms of "vanishing cycles", that is, $(d-1)$-cycles of $L \cap X$ which vanish when $L$ tends to some $L_i$. But the classical theorem applies only to the case where $X$ is non-singular closed (i.e., $Z = \emptyset$), while formula (1) applies to both the closed and non-closed cases, provided $X$ is non-singular. Thus isomorphisms (1) and (1$'$) are generalizations of the Second Lefschetz Theorem. But isomorphisms (1) and (1$'$) may also be considered as generalized homological forms of the classical Zariski-van Kampen theorem on curves.
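In the notation just introduced, the homological variation operators of [C3] admit the following concise description (this is the formula recalled in Section 2 below); here $h$ denotes a geometric monodromy of $L \cap X$ relative to $M \cap X$ around the exceptional member $L_i$, and $h_q$ the induced chain homomorphism:

```latex
% Defining formula of the homological variation operators of [C3]:
\[
  \operatorname{var}_{i,q}\colon H_q(L\cap X,\, M\cap X)\longrightarrow H_q(L\cap X),
  \qquad
  [z]_{L\cap X,\, M\cap X}\longmapsto [h_q(z) - z]_{L\cap X}.
\]
% Since h fixes M \cap X pointwise, the chain h_q(z) - z associated with a
% relative cycle z is an absolute cycle: its boundary
% h_{q-1}(\partial z) - \partial z vanishes because \partial z lies in M \cap X.
\]
```

This makes precise the "patching" description above: the relative cycle and its monodromy transform share the same boundary, so their difference closes up.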
Recall that this theorem expresses the fundamental group of the complement of an algebraic curve in $\mathbb{P}^2$ by generators and relations. The generators are loops in a generic line, around its intersection points with the curve. The relations are obtained by considering a generic pencil containing this line: each loop must be identified with its transforms by monodromy around the exceptional members of the pencil (cf. [Za], [vK] and [C1]). The first true (homotopical) generalization of van Kampen's theorem to higher dimensions was given by Libgober (cf. [Li]). It applies to the $(n-1)$-st homotopy group of the complement of a hypersurface with isolated singularities in $\mathbb{C}^n$ behaving well at infinity. In this case, if $n \ge 3$, the fundamental group is $\mathbb{Z}$ and the $(n-1)$-st homotopy group is the first higher non-trivial one, as explained in [Li], and at the same time the first not preserved by hyperplane section. Libgober also showed how the projective case can be deduced from the affine case. The projective version of Libgober's theorem can be stated in the previous frame as follows. With the notation above, assume that $Y = \mathbb{P}^n$ with $n \ge 3$, and $Z = H$, where $H$ is an algebraic hypersurface of $\mathbb{P}^n$ having only isolated singularities. Consider a generic pencil of hyperplanes of $\mathbb{P}^n$ as above. Let $* \in M \cap (\mathbb{P}^n \setminus H)$ be a base point in the axis of the pencil. For each $i$, we denote by $h_{i,q} \colon \pi_q(L \cap (\mathbb{P}^n \setminus H), *) \to \pi_q(L \cap (\mathbb{P}^n \setminus H), *)$, for all $q \ge 1$, the isomorphism induced by a geometric monodromy around $L_i$ leaving fixed the points of $M \cap (\mathbb{P}^n \setminus H)$. Then there is a natural isomorphism
$$(2)\qquad \pi_{n-1}(\mathbb{P}^n \setminus H, *) \;\cong\; \pi_{n-1}(L \cap (\mathbb{P}^n \setminus H), *) \Big/ \big( \operatorname{Im}(h_{i,n-1} - \mathrm{id}),\ \operatorname{Im} D_{i,n-1} \,;\; 1 \le i \le N \big),$$
where $N$ is the number of the exceptional hyperplanes and where each $D_{i,n-1}$ is some homomorphism called a "degeneration operator".
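For comparison, in the classical case $n = 2$ the description just recalled takes the familiar form of a presentation. The notation here is ours, introduced only for illustration: $g_1, \ldots, g_k$ are the loops in the generic line $L$ around its $k$ intersection points with the curve $C$, and $h_1, \ldots, h_N$ are the monodromies around the exceptional members of the pencil.

```latex
% Classical Zariski-van Kampen presentation (case n = 2):
\[
  \pi_1(\mathbb{P}^2\setminus C,\, *) \;\cong\;
  \big\langle\, g_1,\dots,g_k \;\big|\;
  g_j = h_i(g_j),\quad 1\le j\le k,\ 1\le i\le N \,\big\rangle.
\]
% Here h_i(g_j) is a word in g_1, ..., g_k; the relations identify each
% generator with its transforms by monodromy, as described in the text.
```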
Later, in [CL], Chéniot and Libgober showed that the combination used in [Li] (ordinary variations by monodromies together with degeneration operators) is related to some homotopical variation operators
$$\mathrm{VAR}_{i,n-1} \colon \pi_{n-1}(L \cap (\mathbb{P}^n \setminus H), M \cap (\mathbb{P}^n \setminus H), *) \to \pi_{n-1}(L \cap (\mathbb{P}^n \setminus H), *)$$
defined from the homological variations $\mathrm{var}_{i,n-1}$ of [C3] acting in a pencil supported by the universal covering of $\mathbb{P}^n \setminus H$. In particular, (2) yields a natural isomorphism
$$(3)\qquad \pi_{n-1}(L \cap (\mathbb{P}^n \setminus H), *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,n-1} \;\cong\; \pi_{n-1}(\mathbb{P}^n \setminus H, *).$$
Isomorphism (3) can also be deduced from (1) applied to the universal covering of $\mathbb{P}^n \setminus H$ (cf. [CL]). Isomorphism (3) provides a homotopy analogue of isomorphisms (1) and (1$'$) in the special case of complements of projective hypersurfaces with isolated singularities. Thus, this high-dimensional Zariski-van Kampen theorem is also a homotopy version of the Second Lefschetz Theorem for this special case. But the definition of homotopical variation operators in [CL] (as well as the definition of degeneration operators in [Li]) relies heavily upon the special topology of complements of hypersurfaces with isolated singularities and does not make sense in a more general setting. The need then arises for an alternative definition which could lead to a more general homotopy analogue of isomorphisms (1) and (1$'$). It must be said that in a more general situation, the considered homotopy group may not be the first higher non-trivial one, but it remains the first not preserved by generic hyperplane section. The aim of this article is to give such an alternative definition and to state a reasonable conjecture generalizing (3) (and (2)), with a first approach toward its proof. We introduce here new homotopical variation operators, which extend those of Chéniot-Libgober and have several advantages with respect to them. Our definition is purely homotopical, that is, it does not go through homology.
This frees it from the special topology of the case considered by Chéniot and Libgober, which was precisely required to express homotopy groups with the help of homology groups. In fact, our definition is valid for any (singular) quasi-projective variety $X := Y \setminus Z$. Our operators coincide with those of Chéniot-Libgober when the latter are defined. More precisely, we show that, when $X := \mathbb{P}^n \setminus H$ and $q = n-1$, with $n \ge 3$, and $H$ is an algebraic hypersurface having only isolated singularities, then our operators $\mathrm{VAR}_{i,n-1}$ coincide with the operators of [CL]. In particular, isomorphism (3) is valid with our operators in place of those of [CL]. We also show that our homotopical operators are linked by Hurewicz homomorphisms with the homological operators $\mathrm{var}_{i,q}$ of [C3] and that they are equivariant under the action of the fundamental group. It is then natural to ask whether isomorphism (1), when $X$ is non-singular, and isomorphism (1$'$) are true for homotopy groups, with our homotopical variation operators instead of the homological ones. We show easily that the kernel of the homotopy analogue of epimorphism (E) (resp. (E$'$)) contains the images of the operators $\mathrm{VAR}_{i,d-1}$ (resp. $\mathrm{VAR}_{i,n+c-2}$). Thus, there are well-defined epimorphisms
$$(4)\qquad \pi_{d-1}(L \cap X, *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,d-1} \to \pi_{d-1}(X, *) \quad (d \ge 3), \qquad \pi_1(L \cap X, *) \Big/ \Big(\textstyle\bigcup_i \operatorname{Im} \mathrm{VAR}_{i,1}\Big)^{\mathrm{nor}} \to \pi_1(X, *) \quad (d = 2),$$
where $\big(\bigcup_i \operatorname{Im} \mathrm{VAR}_{i,1}\big)^{\mathrm{nor}}$ denotes the normal subgroup generated by $\bigcup_i \operatorname{Im} \mathrm{VAR}_{i,1}$, and a well-defined epimorphism
$$(4')\qquad \pi_{n+c-2}(L \cap (\mathbb{P}^n \setminus Z), *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,n+c-2} \to \pi_{n+c-2}(\mathbb{P}^n \setminus Z, *)$$
when $n + c \ge 4$ (if $c = 1$, (4$'$) is only a special case of (4)). The question is whether these are isomorphisms. Notice that when $n \ge 3$ and $X = \mathbb{P}^n \setminus H$ is the complement of a hypersurface $H$ with isolated singularities, (4) and (4$'$) coincide with isomorphism (3), since our operators extend those of Chéniot-Libgober. Also, when $n = 2$ and $X := \mathbb{P}^2 \setminus C$ is the complement of a plane curve, the second row of (4) coincides with the map that van Kampen's theorem asserts to be an isomorphism, as we shall see in Section 4.
We conjecture that the answer to our question is almost always yes: epimorphism (4), where $X$ is non-singular, and epimorphism (4$'$), unless $Z = \emptyset$, are isomorphisms. We are far from having a proof of this conjecture. We shall only give a very small step in its direction in the last section. A detailed exposition, from the origins to nowadays, of the questions mentioned in this introduction, like the Lefschetz Hyperplane Section Theorem, the Second Lefschetz Theorem or the van Kampen Theorem on curves, can be found in [E3]. The content of this article is as follows. Section 1 is devoted to some basic facts on generic pencils and monodromies. In Section 2, we recall the definition of the homological variation operators $\mathrm{var}_{i,q}$ of [C3], which are used to define the homotopical variation operators $\mathrm{VAR}_{i,n-1}$ of [CL]. The latter will be described in Section 3. In Sections 4 and 5, we introduce the generalized homotopical variation operators $\mathrm{VAR}_{i,q}$, we give their elementary properties, and we prove that they coincide with the Chéniot-Libgober operators when the latter are defined. We also prove that there are epimorphisms (4) and (4$'$) as stated above. Finally, in Section 6, we give a first approach to the conjecture we have formulated.

Notation 0.1. — Throughout the paper, homology groups are singular homology groups with integer coefficients, unless there is an explicit statement of the contrary. We shall denote the homology class in a space $A$ of an (absolute) cycle $z$ by $[z]_A$, and the homology class in $A$ modulo a subspace $B$ of a relative cycle $z'$ by $[z']_{A,B}$. If there is no ambiguity, we shall omit the subscripts. If $(A, B)$ is a pointed pair with base point $* \in B$, we shall denote by $F_q(A, B, *)$ the set of relative homotopy $q$-cells of $A$ modulo $B$ based at $*$. These are maps from the $q$-cube $I^q$ to $A$ with the face $x_q = 0$ sent into $B$ and all other faces sent to $*$ (as in [St, §15]).
We designate by $F_q(A, *)$ the set of absolute homotopy $q$-cells of $A$ based at $*$, that is, maps from $I^q$ to $A$ sending the boundary $\dot{I}^q$ of $I^q$ to $*$. Given $f \in F_q(A, B, *)$ (resp. $F_q(A, *)$), the homotopy class of $f$ in $A$ modulo $B$ based at $*$ (resp. in $A$ based at $*$) will be denoted by $\langle f \rangle_{A,B,*}$ (resp. $\langle f \rangle_{A,*}$). Again, if there is no ambiguity, we shall omit the subscripts.

Generic pencils and monodromies

Let $X := Y \setminus Z$, where $Y$ is a non-empty closed algebraic subset of $\mathbb{P}^n$, with $n \ge 2$, and $Z$ a proper closed algebraic subset of $Y$. Take a Whitney stratification $\mathcal{S}$ of $Y$ such that $Z$ is a union of strata (cf. [Wh], [LT]), and consider a projective hyperplane $L$ of $\mathbb{P}^n$ transverse to (the strata of) $\mathcal{S}$ (the choice of such a hyperplane is generic). Denote by $d$ the least dimension of the irreducible components of $Y$ not contained in $Z$. Notice that, when $X$ is non-singular, the application of the Lefschetz Hyperplane Section Theorems mentioned in the introduction is valid for $L$. Indeed, the genericity of the hyperplane required for these theorems can be specified as its transversality to a Whitney stratification of $Z$ (cf. [HL2, Appendix], [GM2, end of the proof of II.5.1], [C2, Corollaire 1.2]). Now, consider a pencil $\mathcal{P}$ of hyperplanes of $\mathbb{P}^n$ having $L$ as a member and the axis $M$ of which is transverse to $\mathcal{S}$ (the choice of such an axis is generic inside $L$). All the members of $\mathcal{P}$ are transverse to $\mathcal{S}$, with the exception of a finite number of them $(L_i)_i$, called exceptional hyperplanes, for which, nevertheless, there are only a finite number of points of non-transversality, all of them situated outside of $M$ (cf. [C2]). If necessary, one may take the liberty of considering some ordinary members of $\mathcal{P}$, different from $L$, as exceptional ones. We parametrize the elements of $\mathcal{P}$ by the complex projective line $\mathbb{P}^1$ as usual. Let $\lambda$ be the parameter of $L$ and, for each $i$, let $\lambda_i$ be the parameter of $L_i$.
For each $i$, take a small closed semi-analytic disk $D_i \subset \mathbb{P}^1$ with centre $\lambda_i$, together with a point $\gamma_i$ on its boundary. Choose the $D_i$ mutually disjoint. Take also the image $\Gamma_i$ of a simple real-analytic arc in $\mathbb{P}^1$ joining $\lambda$ to $\gamma_i$, subject to the natural disjointness conditions (the $\Gamma_i$ meeting each other only at $\lambda$ and meeting the disks only at the corresponding $\gamma_i$), and set $K_i := \Gamma_i \cup D_i$. Finally, consider a loop $\omega_i$ in the boundary $\partial K_i$ of $K_i$ starting from $\lambda$, running along $\Gamma_i$ up to $\gamma_i$, going once counter-clockwise around the boundary of $D_i$, and coming along $\Gamma_i$ back to $\lambda$.

Notation 1.1. — For any subsets $G \subset \mathbb{P}^n$ and $E \subset \mathbb{P}^1$, we note $G_E := G \cap \bigcup_{\mu \in E} P(\mu)$, where $P(\mu)$ is the member of $\mathcal{P}$ with parameter $\mu$.

The monodromies around each $L_i$, more precisely above each $\omega_i$, proceed from the following lemma.

Lemma 1.2. — There is a continuous map $H \colon (L \cap X) \times I \to X$ such that: (i) $H(x, 0) = x$ for every $x \in L \cap X$; (ii) $H(x, t) \in X_{\omega_i(t)}$ for every $x \in L \cap X$ and every $t \in I$; (iii) for every $t \in I$, the map $L \cap X \to X_{\omega_i(t)}$, defined by $x \mapsto H(x, t)$, is a homeomorphism; (iv) $H(x, t) = x$, for every $x \in M \cap X$ and every $t \in I$.

As usual, $I$ is the unit interval $[0, 1]$. The terminal map $x \mapsto H(x, 1)$ is then a homeomorphism $h$ of $L \cap X$ onto itself leaving $M \cap X$ pointwise fixed. Such a homeomorphism $h$ is called a geometric monodromy of $L \cap X$ relative to $M \cap X$ above $\omega_i$. Another choice of loop $\omega_i$ within the same homotopy class $\tilde{\omega}_i$ in $\mathbb{P}^1 \setminus \bigcup_i \{\lambda_i\}$ and another choice of isotopy $H$ above $\omega_i$ as in Lemma 1.2 would give a geometric monodromy isotopic to $h$ within $L \cap X$ by an isotopy leaving $M \cap X$ pointwise fixed. Thus, the isotopy class of $h$ in $L \cap X$ relative to $M \cap X$ is wholly determined by the homotopy class $\tilde{\omega}_i$ of $\omega_i$ in $\mathbb{P}^1 \setminus \bigcup_i \{\lambda_i\}$.

Homological variation operators

Fix an index $i$, and consider a geometric monodromy $h$ of $L \cap X$ relative to $M \cap X$ above $\omega_i$. Denote by $S_q(L \cap X)$ the abelian group of singular $q$-chains of $L \cap X$ with integer coefficients, and by $h_q \colon S_q(L \cap X) \to S_q(L \cap X)$ the chain homomorphism induced by $h$. Since $h$ leaves $M \cap X$ pointwise fixed (cf. Lemma 1.2), it is easy to see that, for every relative $q$-cycle $z$ of $L \cap X$ modulo $M \cap X$, the variation by $h_q$ of $z$, that is, the chain $h_q(z) - z$, is an absolute $q$-cycle of $L \cap X$ (cf. [C3, Lemma 4.6]). Moreover, one has the following lemma.

Lemma 2.1 (cf. [C3, Lemma 4.8]).
— The correspondence $[z]_{L \cap X, M \cap X} \mapsto [h_q(z) - z]_{L \cap X}$ gives a well-defined homomorphism
$$\mathrm{var}_{i,q} \colon H_q(L \cap X, M \cap X) \to H_q(L \cap X)$$
which depends only on the homotopy class $\tilde{\omega}_i$ of $\omega_i$ in $\mathbb{P}^1 \setminus \bigcup_i \{\lambda_i\}$.

This means that another choice of loop $\omega_i$ within the same homotopy class $\tilde{\omega}_i$ in $\mathbb{P}^1 \setminus \bigcup_i \{\lambda_i\}$ and another choice of monodromy $h$ above $\omega_i$ as in Lemma 1.2 would give a homomorphism equal to $\mathrm{var}_{i,q}$. The homomorphism $\mathrm{var}_{i,q}$ is called a homological variation operator associated to $\tilde{\omega}_i$. These operators were used by the first author in [C3] to prove the following result.

Theorem 2.2 (cf. [C3, Proposition 5.1]). — If $X$ is non-singular, then there is a natural isomorphism
$$H_{d-1}(L \cap X) \Big/ \sum_i \operatorname{Im} \mathrm{var}_{i,d-1} \;\cong\; H_{d-1}(X).$$
In the special case of the complement of a projective algebraic set, i.e., if $X := \mathbb{P}^n \setminus Z$, this can be improved into an isomorphism
$$H_{n+c-2}(L \cap (\mathbb{P}^n \setminus Z)) \Big/ \sum_i \operatorname{Im} \mathrm{var}_{i,n+c-2} \;\cong\; H_{n+c-2}(\mathbb{P}^n \setminus Z),$$
where $c$ is the least of the codimensions of the irreducible components of $Z$.

Homotopical variation operators on the complements of projective hypersurfaces with isolated singularities

Throughout this section, we work under the following hypotheses.

Hypotheses 3.1. — We assume that $Y = \mathbb{P}^n$, with $n \ge 3$, and $Z = H$, where $H$ is a (closed) algebraic hypersurface of $\mathbb{P}^n$, with degree $k$, having only isolated singularities. Thus, $X = \mathbb{P}^n \setminus H$. We also assume that $\mathcal{S}$ is the Whitney stratification the strata of which are: $\mathbb{P}^n \setminus H$, the singular part $H_{\mathrm{sing}}$ of $H$, and the non-singular part $H \setminus H_{\mathrm{sing}}$ of $H$. Being transverse to $\mathcal{S}$ then means avoiding the singularities of $H$ and being transverse to the non-singular part of $H$. Observe that $M \cap X \neq \emptyset$. We fix a base point $* \in M \cap X$.

In [CL], a $k$-fold (unramified) holomorphic covering $p \colon X' \to X$ is constructed, where $X' := Y' \setminus Z'$ is a (pathwise) connected quasi-projective variety in $\mathbb{P}^{n+1}$. In fact, $X'$ is the global Milnor fibre of the cone of $\mathbb{C}^{n+1}$ corresponding to $H$. Moreover, it is shown that there is a Whitney stratification $\mathcal{S}'$ of $Y'$ preserving $Z'$ and a pencil $\mathcal{P}'$ in $\mathbb{P}^{n+1}$ with axis $M'$ transverse to $\mathcal{S}'$, such that the member $\mathcal{P}'(\mu)$ of $\mathcal{P}'$ with parameter $\mu$ is transverse to $\mathcal{S}'$ if and only if $P(\mu)$ is transverse to $\mathcal{S}$.
Recall that $L = P(\lambda)$, and put $L' := \mathcal{P}'(\lambda)$. Then, for each $i$, the pencil $\mathcal{P}'$ gives rise to a homological variation operator associated to $\tilde{\omega}_i$, defined as in Section 2. Given an index $i$ and a base point $\bullet \in p^{-1}(*)$, one can then consider the diagram (3.2) formed by these operators, where $\tilde{\pi}$ and $\pi$ are induced by $p$ and where $\chi$ and $\tilde{\chi}$ are Hurewicz homomorphisms. Now, by a general property of covering projections, $\tilde{\pi}$ is an isomorphism (cf. [Sp, Theorem 7.2.8]). Moreover, the homomorphism $\tilde{\chi}$ too is an isomorphism, as a consequence of the special fact that $X$ is the complement of a projective hypersurface $H$ with isolated singularities. Indeed, $L \cap H$ is then a non-singular hypersurface of $L \simeq \mathbb{P}^{n-1}$, so that $\pi_1(L \cap X, *) \simeq \mathbb{Z}/k\mathbb{Z}$ and $\pi_q(L \cap X, *) = 0$ for $2 \le q \le n-2$ (this range may be empty) (cf. [Li, Lemma 1.1]). Knowing that $L' \cap X'$ is pathwise connected (cf. [CL, Lemma 2.9]), these facts imply that $L' \cap X'$ is $(n-2)$-connected, and $\tilde{\chi}$ is then an isomorphism by the Hurewicz Isomorphism Theorem. Thus, for each $i$, and for every $\bullet \in p^{-1}(*)$, there is a homomorphism
$$\mathrm{VAR}_{i,n-1} \colon \pi_{n-1}(L \cap X, M \cap X, *) \to \pi_{n-1}(L \cap X, *)$$
defined by composing, through diagram (3.2), the homomorphisms induced by $p$, the Hurewicz homomorphisms and the homological variation operator of the pencil $\mathcal{P}'$ (cf. [CL, Section 5]). One shows easily that the homomorphism $\mathrm{VAR}_{i,n-1}$ does not in fact depend on the choice of the base point $\bullet \in p^{-1}(*)$. The homomorphism $\mathrm{VAR}_{i,n-1}$ is called a homotopical variation operator associated to $\tilde{\omega}_i$. These operators were used in [CL] to prove the following result.

Theorem 3.3 (cf. [CL]). — Under Hypotheses 3.1, there is a natural isomorphism
$$\pi_{n-1}(L \cap X, *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,n-1} \;\cong\; \pi_{n-1}(X, *).$$

As mentioned in the introduction, Theorem 3.3 is equivalent to the projective version of [Li, Theorem 2.4] and also provides a homotopy version of Theorem 2.2 in the special case of complements of projective hypersurfaces with isolated singularities.

Generalized homotopical variation operators

In this section, $X := Y \setminus Z$ is again a (possibly singular) quasi-projective variety, as in Sections 1 and 2. We assume further that $M \cap X \neq \emptyset$ and we fix a base point $*$ in $M \cap X$. Observe that the condition $M \cap X \neq \emptyset$ is equivalent to $\dim X \ge 2$.
We also fix an index $i$, and consider a geometric monodromy $h$ of $L \cap X$ relative to $M \cap X$ above $\omega_i$ (cf. Section 1). Let $q$ be an integer $\ge 1$. Given $f \in F_q(L \cap X, M \cap X, *)$, since $h$ leaves $M \cap X$ pointwise fixed, the cells $f$ and $h \circ f$ have the same boundary, and one can form the concatenation $\bar{f} \perp (h \circ f)$ of the reverse $\bar{f}$ of $f$ with $h \circ f$. Notice that the reversion of $f$ and its concatenation with $h \circ f$ are performed on the variable transverse to the free face. This would in general not make sense, but here it does because $f$ and $h \circ f$ have the same boundary.

Lemma 4.1. — The correspondence $\langle f \rangle \mapsto \langle \bar{f} \perp (h \circ f) \rangle$ gives a well-defined map
$$\mathrm{VAR}_{i,q} \colon \pi_q(L \cap X, M \cap X, *) \to \pi_q(L \cap X, *)$$
which depends only on the homotopy class $\tilde{\omega}_i$ of $\omega_i$ in $\mathbb{P}^1 \setminus \bigcup_i \{\lambda_i\}$. If $q \ge 2$, it is a homomorphism.

The independence assertion follows from the remark we made just after Lemma 1.2. That the map is well-defined is straightforward, and one checks easily that it is a homomorphism if $q \ge 2$ (the sum of homotopy cells being performed as in [St, §15]). We shall call the map $\mathrm{VAR}_{i,q}$ a generalized homotopical variation operator associated to $\tilde{\omega}_i$. This terminology is justified by Theorem 5.1 below, which asserts that, in the case where the homotopical variation operators of [CL] are defined (cf. Section 3), the latter coincide with our generalized operators. We remark that if our operators are applied to absolute cells of $L \cap X$, or if their result is considered as relative cells of $L \cap X$ modulo $M \cap X$, then they act as what can be called ordinary variations by monodromy. More precisely:

Observation 4.2. — Let $\mathrm{incl}_q \colon \pi_q(L \cap X, *) \to \pi_q(L \cap X, M \cap X, *)$ be the natural map. Then,
(i) $\mathrm{VAR}_{i,q}(\mathrm{incl}_q(x)) = -x + h_q(x)$ for all $x \in \pi_q(L \cap X, *)$;
(ii) if $q \ge 2$, $\mathrm{incl}_q(\mathrm{VAR}_{i,q}(y)) = -y + \bar{h}_q(y)$ for all $y \in \pi_q(L \cap X, M \cap X, *)$;
where $h_q$ and $\bar{h}_q$ are the automorphisms of $\pi_q(L \cap X, *)$ and $\pi_q(L \cap X, M \cap X, *)$, respectively, induced by $h$. The right-hand sides of the equalities are written additively, though the first group is not a priori commutative if $q = 1$, nor is the second one if $q = 2$; the order of operations must then be respected.
Observation 4.2 relies on the same reasons as those which allow the sum of two homotopy cells to be performed indiscriminately on any variable not transverse to the (possible) free face, and which make the sum commutative in high dimension. Its detailed proof is left to the reader. Notice that, when $n = 2$ and $X$ is the complement $\mathbb{P}^2 \setminus C$ of a plane projective curve, $M \cap X$ is reduced to a single point, and the observation above shows that the epimorphism (4), second row, of the introduction (the existence of which will be justified by Lemma 4.8 below) coincides with the map that van Kampen's theorem asserts to be an isomorphism. The operator $\mathrm{VAR}_{i,q}$ is linked to the homological variation operator $\mathrm{var}_{i,q}$ of Section 2 by Hurewicz homomorphisms. This is stated in the next lemma.

Lemma 4.3. — Let $\bar{\chi} \colon \pi_q(L \cap X, M \cap X, *) \to H_q(L \cap X, M \cap X)$ and $\chi \colon \pi_q(L \cap X, *) \to H_q(L \cap X)$ be the Hurewicz homomorphisms. Then $\chi \circ \mathrm{VAR}_{i,q} = \mathrm{var}_{i,q} \circ \bar{\chi}$.

Proof. Since homotopy cells are defined on cubes, it is convenient to use cubical singular homology theory (cf. [HW], [M]), which is equivalent to ordinary (simplicial) singular theory (cf. [HW, Section 8.4]). So, let us first introduce some notation. For any pair of spaces $(U, V)$, with $V \subset U$, we shall denote by $H^c_q(U, V)$ the $q$-th cubical singular relative homology group of $(U, V)$, by $[\cdot]^c_{U,V}$ the homology classes in this group, and by $S^c_q(U, V)$ the group of $q$-dimensional cubical chains of the pair $(U, V)$. Given a (continuous) map $g \colon (U, V) \to (U', V')$, we shall denote by $g_q \colon S^c_q(U, V) \to S^c_q(U', V')$ the induced cubical chain homomorphism. A similar notation is used for the absolute case. Let $f$ be a representative of an element of $\pi_q(L \cap X, M \cap X, *)$. We have $\bar{\chi}(\langle f \rangle) = [f_q(\iota)]^c_{L \cap X, M \cap X}$, where $\iota \colon I^q \to I^q$ is the identity map (cf. [HW, 8.8.4]). Since the expression for $\mathrm{var}_{i,q}$ (given by Lemma 2.1) remains valid in cubical theory (by [HW, 8.4.7 and the paragraph before 8.4.10]), it then suffices to prove that the following equality holds in $H^c_q(L \cap X)$:
$$(4.4)\qquad \big[(\bar{f} \perp (h \circ f))_q(\iota)\big]^c_{L \cap X} \;=\; \big[h_q(f_q(\iota)) - f_q(\iota)\big]^c_{L \cap X}.$$
For this purpose, consider the singular $q$-cubes $\sigma_1, \sigma_2 \colon I^q \to I^q$ in $I^q$ defined by
$$\sigma_1(x_1, \ldots, x_q) := \Big(x_1, \ldots, x_{q-1}, \tfrac{1 - x_q}{2}\Big), \qquad \sigma_2(x_1, \ldots, x_q) := \Big(x_1, \ldots, x_{q-1}, \tfrac{1 + x_q}{2}\Big).$$
Observe that $-\sigma_1 + \sigma_2$ is a relative cycle of $I^q$ modulo $\dot{I}^q$.

Lemma 4.5. — The following equality holds in $H^c_q(I^q, \dot{I}^q)$:
$$[-\sigma_1 + \sigma_2]^c_{I^q, \dot{I}^q} = [\iota]^c_{I^q, \dot{I}^q}.$$

Proof. Let $\sigma \colon I^{q+1} \to I^q$ be a suitable singular $(q+1)$-cube in $I^q$ interpolating between these relative cycles. The boundary operator $\partial \colon S^c_{q+1}(I^q, \dot{I}^q) \to S^c_q(I^q, \dot{I}^q)$, applied to $\sigma$, yields the equality in the statement of Lemma 4.5: indeed, the face of $\sigma$ of index $(q, 0)$ is degenerate and all other non-mentioned faces lie in $\dot{I}^q$.

Now set $F := \bar{f} \perp (h \circ f)$; one sees immediately that $F \circ \sigma_1 = f$ and $F \circ \sigma_2 = h \circ f$. Applying $F_q$ to the equality of Lemma 4.5 completes the proof of (4.4) and, consequently, the proof of Lemma 4.3.

The operator $\mathrm{VAR}_{i,q}$ also satisfies an equivariance property under the action of the fundamental group, as mentioned in the introduction. Finally, we prove a lemma which justifies the existence of the epimorphisms (4) and (4$'$) of the introduction. This lemma is valid for every $q \ge 1$ and even if $X$ is singular.

Lemma 4.8. — The image of the operator $\mathrm{VAR}_{i,q}$ is contained in the kernel of the natural map $\pi_q(L \cap X, *) \to \pi_q(X, *)$.

Proof. A representative of an element of the image of $\mathrm{VAR}_{i,q}$ is of the form $\bar{f} \perp (h \circ f)$ with $f \in F_q(L \cap X, M \cap X, *)$. Let $H$ be an isotopy giving rise to $h$ as in Lemma 1.2. One defines a homotopy $I^q \times I \to X$ from $\bar{f} \perp (h \circ f)$ to the constant map equal to $*$ as follows. By the first half of this homotopy, the lower part of the cell undergoes the monodromy $h$ while remaining attached to the upper part; at the end of this process the two half-cells become opposite, and the second half of the homotopy collapses them together.

Remark. — Since $\operatorname{Im} H \subset X_{\partial K_i}$, the proof above shows in fact that the image of $\mathrm{VAR}_{i,q}$ is contained in the kernel of the natural map $\pi_q(L \cap X, *) \to \pi_q(X_{\partial K_i}, *)$.

Theorem 5.1. — Under Hypotheses 3.1, and for $q = n-1$, the generalized homotopical variation operator $\mathrm{VAR}_{i,n-1}$ of Section 4 coincides with the homotopical variation operator of [CL] recalled in Section 3.

Before proving this theorem, observe that, together with Theorem 3.3, it implies the following result.

Theorem 5.2. — Under Hypotheses 3.1, there is a natural isomorphism
$$\pi_{n-1}(L \cap X, *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,n-1} \;\cong\; \pi_{n-1}(X, *),$$
where the $\mathrm{VAR}_{i,n-1}$ are the generalized operators of Section 4.

Of course, Theorem 5.2 is equivalent to Theorem 3.3 and to the projective version of [Li, Theorem 2.4]. We now turn to the proof of Theorem 5.1.

Proof of Theorem 5.1.
Consider the diagram obtained from diagram (3.2) by completing its lower row with the generalized operator $\mathrm{VAR}_{i,n-1}$ of Section 4, and its middle row with the homomorphism defined from $\omega_i$ as $\mathrm{VAR}_{i,n-1}$, but with the pencil $\mathcal{P}'$ and the point $\bullet$ instead of the pencil $\mathcal{P}$ and the point $*$. We have to show that this new diagram is commutative. But its lower square is indeed commutative since, given a geometric monodromy $h$ of $L \cap X$ relative to $M \cap X$ above $\omega_i$, there exists a geometric monodromy $h'$ of $L' \cap X'$ relative to $M' \cap X'$ above $\omega_i$ such that $p \circ h' = h \circ p$ (cf. [CL, Remark 4.2]). As to the upper square, it commutes thanks to Lemma 4.3.

6. A conjecture generalizing the van Kampen theorem to non-singular quasi-projective varieties

In this section, we come back to the general hypotheses of Section 1, so that $X := Y \setminus Z$ is again a (possibly singular) quasi-projective variety in $\mathbb{P}^n$ with $n \ge 2$, as in Sections 1, 2 and 4. We also assume that $M \cap X \neq \emptyset$; as this condition is equivalent to $\dim X \ge 2$, it will be automatically fulfilled when $d \ge 2$. We fix a base point $*$ in $M \cap X$. The conjecture we made in the introduction can be specified as follows.

Conjecture 6.1. — If $X$ is non-singular and $d \ge 2$, there is a natural isomorphism
$$\pi_{d-1}(L \cap X, *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,d-1} \;\cong\; \pi_{d-1}(X, *),$$
involving the generalized homotopical variation operators $\mathrm{VAR}_{i,q}$ defined in Section 4, where, when $d = 2$, the quotient is by the normal subgroup of $\pi_1(L \cap X, *)$ generated by $\bigcup_i \operatorname{Im} \mathrm{VAR}_{i,1}$. In the special case $Y = \mathbb{P}^n$, so that $X = \mathbb{P}^n \setminus Z$, and provided $Z \neq \emptyset$, there is a natural isomorphism
$$\pi_{n+c-2}(L \cap (\mathbb{P}^n \setminus Z), *) \Big/ \sum_i \operatorname{Im} \mathrm{VAR}_{i,n+c-2} \;\cong\; \pi_{n+c-2}(\mathbb{P}^n \setminus Z, *),$$
where $c$ is the least of the codimensions of the irreducible components of $Z$ (notice that $n+c-2 = d-1$ when $c = 1$).

If proved, this conjecture would extend Theorem 5.2 (and hence Theorem 3.3 and the projective version of [Li, Theorem 2.4], reported in the introduction as isomorphism (2)) and would also extend the classical Zariski-van Kampen theorem on curves, as remarked in Section 4. It would give a complete homotopy analogue of Theorem 2.2, and thus would gather in a generalized form the Zariski-van Kampen theorem with a homotopical version of the Second Lefschetz Theorem.
We now give a first, modest approach to this conjecture. First approach to Conjecture 6.1. By Lemma 4.8, the subgroups by which the quotients are taken are contained in the kernels of the corresponding natural maps (which are epimorphisms by the Lefschetz Hyperplane Section Theorems, as mentioned in the introduction). The reverse inclusion, which would lead to the conclusion, is much more difficult and not proved at the moment. We simply give below, via the following lemma, a first small step in this direction. Lemma 6.2. -If X is non-singular and d ≥ 2, there is a natural epimorphism π d (X K , L ∩ X, * ) → π d (X, L ∩ X, * ), where K is the union of the K i (cf. Section 1). In the special case where Y = P n and Z ≠ ∅, there is a natural epimorphism π n+c−1 (X K , L ∩ X, * ) → π n+c−1 (X, L ∩ X, * ), where c is as in Conjecture 6.1. In the special case where Y = P n and Z ≠ ∅, the kernel of the natural map π n+c−2 (L ∩ X, * ) → π n+c−2 (X K , * ) is contained in ⋃ i Im VAR i,n+c−2 if n + c ≥ 4. To complete this section, it remains to prove Lemma 6.2. Proof of Lemma 6.2. By the homotopy sequence of the triple (X, X K , L ∩ X), it suffices to prove that the pair (X, X K ) is d-connected (resp. (n + c − 1)-connected). We start by noticing that the pair (L ∩ X, M ∩ X) is (d − 2)-connected (resp. (n + c − 3)-connected). This is shown by applying the Lefschetz Hyperplane Section Theorem for non-singular quasi-projective varieties (resp. for complements) to the section of L ∩ X by the hyperplane M of L. To check the required hypotheses and verify that the conclusion is the announced one, we refer to the beginning of the proof of [C3, Corollary 3.4]. We just point out here that the hypothesis Z ≠ ∅ is crucial to ensure that the codimension of L ∩ Z in L is still c. Thus, to show that (X, X K ) is d-connected (resp. (n + c − 1)-connected), it is enough to prove the following result, which in fact holds even if X is singular. Lemma 6.4. -For this lemma, X may be singular.
Let k be an integer ≥ 0. If (L ∩ X, M ∩ X) is k-connected, then (X, X K ) is (k + 2)-connected. This is a weak homotopy analogue of [C3, Lemma 3.9]. In its proof, the homology excision property is replaced by a much more restrictive homotopy excision theorem, and the Eilenberg-Zilber theorem and Künneth formula by a criterion on the connectivity of the product of two pairs of spaces. Proof of Lemma 6.4. Let P̃ n be the blow-up of P n along M, which is defined by P̃ n := {(x, µ) ∈ P n × P 1 | x ∈ P(µ)}. It is a compact analytic submanifold of P n × P 1 of dimension n. The projections of P n × P 1 onto its two factors, when restricted to P̃ n , give two proper analytic morphisms ϕ: P̃ n → P n and π: P̃ n → P 1 . One must not confuse G̃ E with G E ; we have For simplicity, we also set L := L ∩ X and M := M ∩ X. By stratifying Ỹ suitably and then applying the First Isotopy Theorem of Thom-Mather (cf. [Th], [Ma]) one obtains the following lemma. The proof is now along lines very similar to [La, 8.3]. Decompose P 1 into two closed semi-analytic hemispheres D + and D − such that: (i) D + ∩ D − = S 1 , where S 1 is a 1-sphere; (ii) K is contained in D + ; (iii) λ ∈ S 1 ; (iv) D − is contained in some coordinate neighbourhood of the fibre bundle X̃ \ ⋃ i X̃ λ i for a trivialization which preserves the subbundle M × (P 1 \ ⋃ i λ i ). This choice of D − implies that the pairs
On the consistency of (partially-)massless matter couplings in de Sitter space

We study the consistency of the cubic couplings of a (partially-)massless spinning field to two scalars in $\left(d+1\right)$-dimensional de Sitter space. Gauge invariance of observables with external (partially)-massless spinning fields translates into Ward-Takahashi identities on the boundary. Using the Mellin-Barnes representation for boundary correlators in momentum space, we give a systematic study of Ward-Takahashi identities for tree-level 3- and 4-point processes involving a single external (partially-)massless field of arbitrary integer spin-$J$. 3-point Ward-Takahashi identities constrain the mass of the scalar fields to which a (partially-)massless spin-$J$ field can couple. 4-point Ward-Takahashi identities then constrain the corresponding cubic couplings. For massless spinning fields, we show that Weinberg's flat space results carry over to $\left(d+1\right)$-dimensional de Sitter space: For spins $J=1,2$ gauge-invariance implies charge-conservation and the equivalence principle while, assuming locality, higher-spins $J>2$ cannot couple consistently to scalar matter. This result also applies to anti-de Sitter space. For partially-massless fields, restricting for simplicity to those of depth-2, we show that there is no consistent coupling to scalar matter in local theories. Along the way we give a detailed account of how contact amplitudes with and without derivatives are represented in the Mellin-Barnes representation. Various new explicit expressions for 3- and 4-point functions involving (partially-)massless fields and conformally coupled scalars in dS$_4$ are given.

Introduction

In the past decades the Holographic principle has seen a number of key developments in the study of observables in Quantum Gravity, especially in the context of the AdS/CFT correspondence [1].
Scattering processes in (d + 1)-dimensional asymptotically anti-de Sitter (AdS) space can be re-cast as correlation functions of local operators in a d-dimensional Conformal Field Theory (CFT), which are defined non-perturbatively by a combination of conformal symmetry, unitarity and a consistent operator product expansion. These are the three main pillars of the Conformal Bootstrap programme, which aims to carve out the space of consistent CFTs using symmetry and mathematical consistency [2,3] and is a spectacular fully-functioning revival of the Bootstrap philosophy put forward in the early days of S-matrix theory [4,5].

JHEP10(2021)156

Recent years have seen a renewed interest in the challenge to extend the successes of the Bootstrap and the AdS/CFT paradigm to more general backgrounds, in particular those that are closer to the real world. A natural testing ground is provided by de Sitter (dS) space, which shares its isometry group with Euclidean AdS. Indeed, it has long been known that correlators on the boundary of dS are constrained by conformal symmetry [6][7][8][9][10][11][12][13][14][15]. This has since evolved into the Cosmological Bootstrap [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32], which aims to identify the symmetries and consistency criteria that should be satisfied by boundary correlators in dS (which, in contrast, are known for the AdS case above) and apply them to constrain (or even completely determine) the form that such boundary correlators can take. One of the first instances in which symmetry and consistency were used to successfully deduce model-independent properties of quantum theories is Weinberg's seminal 1964 result [33,34] on the couplings of massless particles of arbitrary integer spin to scalar matter in flat space.
Weinberg showed that locality and unitarity constrain the S-matrix for the emission of a single massless spin-J particle with momentum q to take the following form in the soft limit q → 0: where S (p 1 , . . . , p N ) is the S-matrix for the process before the emission of the massless spin-J particle, g a is its coupling to the a-th external particle, and ε is its polarization vector, which is null (ε 2 = 0) and transverse (q · ε = 0). This however gives a redundant description of the massless spin-J particle, and gauge invariance requires that the spurious longitudinal components decouple, viz. and for J > 2 that there is no coupling of a massless higher-spin particle to scalar matter. The beauty of this argument lies in its universality: fundamental features of theories of massless spinning particles, such as charge conservation and the equivalence principle, follow in a model-independent way as a simple consequence of locality and unitarity. It also has the strength to rule out (in local theories) interactions of certain collections of particles altogether. Extending Weinberg's analysis to a curved background has long faced various difficulties, mostly related to the problem of defining an S-matrix on spaces with non-vanishing curvature. Given the recent developments in adapting S-matrix techniques to boundary correlators in (A)dS space however, we are increasingly in a position to start tackling this problem concretely and, in the specific case of couplings to conformally coupled scalars in dS 4 , 2 we have already seen significant progress [24] for massless particles of spins J = 1, 2. The story in dS space is moreover particularly rich from a phenomenological perspective, where unitary irreducible representations of the de Sitter isometry group admit not only massless but also partially-massless spinning particles [37][38][39][40][41][42], whose consistent scattering observables also require the decoupling of (a subset of) the longitudinal components.
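Weinberg's longitudinal-decoupling argument for J = 1 can be made concrete with a few lines of flat-space numerics. The sketch below (the momenta, couplings and mostly-plus signature are illustrative assumptions, not taken from the paper) checks that replacing the polarization vector by q collapses the soft factor to the sum of the couplings, which vanishes precisely when charge is conserved:

```python
import numpy as np

rng = np.random.default_rng(0)

def mdot(p, q):
    # Minkowski dot product in mostly-plus signature (-,+,+,+)
    return -p[0] * q[0] + p[1:] @ q[1:]

def random_null(scale=1.0):
    # a null (massless) four-momentum: p0 = |p_vec|
    v = rng.normal(size=3) * scale
    return np.array([np.linalg.norm(v), *v])

# hard external momenta (taken massless purely for simplicity) and couplings
momenta = [random_null() for _ in range(4)]
charges = np.array([1.0, 2.0, -0.5, -2.5])  # sum to zero: charge conservation

q = random_null(0.01)  # soft photon momentum

def soft_factor(eps):
    # Weinberg's leading soft factor for J = 1: sum_a g_a (eps.p_a)/(q.p_a)
    return sum(g * mdot(eps, p) / mdot(q, p) for g, p in zip(charges, momenta))

# gauge invariance: the spurious longitudinal piece eps -> q must decouple.
# Each term then collapses to g_a, so the factor reduces to sum_a g_a.
long_part = soft_factor(q)
print(long_part)  # compatible with zero precisely because the charges sum to zero
```

For J = 2 the analogous replacement leaves a weighted sum of the momenta themselves, which vanishes for generic kinematics only if all g a are equal, i.e. the equivalence principle (1.5).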
A natural question is then whether the couplings of such partially-massless particles can be similarly constrained in a model-independent way. In this work we demonstrate how the Mellin-Barnes formalism for boundary correlators in momentum space introduced in [20,21] can be used to extend Weinberg's analysis to (A)dS d+1 , including the matter couplings of partially-massless fields peculiar to the dS case. For boundary correlators in (A)dS, the gauge-invariance constraint (1.2) on S-matrix elements is replaced by a Ward-Takahashi identity which relates the longitudinal components to lower-point correlators with the massless external field removed. As we shall see, Ward-Takahashi identities at the level of the Mellin-Barnes representation are encoded in a particular form of polynomial (2.41) in the Mellin variables. Upon computing three- and four-point functions of a (partially)-massless field with scalars (see figures 1 and 2), this feature allows us to systematically study the constraints from the Ward-Takahashi identities. By studying constraints from Ward-Takahashi identities at the three-point level, we recover the results [43,44] that gauge-invariance constrains (2.53) the scaling dimensions of the scalar fields to which a (partially-)massless field can couple. At four-point, we find that the Ward-Takahashi identity is generally violated by terms that are singular in the total energy E T . This observation was also made in [24] where, for couplings to conformally coupled scalars in dS 4 , it was shown that charge conservation (1.4) and the equivalence principle (1.5) ensure that the Ward-Takahashi identity is satisfied. One should also rule out the possibility that the total energy singularities violating the Ward-Takahashi identity can be compensated by adding local quartic contact terms, i.e. those generated by quartic vertices with a finite number of derivatives involving one (partially-)massless spin-J field and the three external scalars.
In flat space this is clear, since local quartic contact amplitudes do not contribute singularities in q and are therefore subleading in the soft limit (1.1). In (A)dS the separation appears less sharp. Contact amplitudes associated to local quartic vertices in (A)dS tend to dominate in the limit E T → 0, with the singularity in E T increasing with the number of derivatives. 3 By translating the problem into the Mellin-Barnes representation, where the local contact terms are associated to polynomials in the Mellin variables of a minimum degree, we are able to establish that there are total energy singularities violating the Ward-Takahashi identity which cannot be compensated by adding local quartic vertices to the theory. In particular, there are singularities in E T that violate the Ward-Takahashi identity that are of too low a degree to be generated by a local quartic vertex. From this, combined with our results for four-point exchanges with a single external (partially)-massless spin-J field, in this paper we establish the following:

• Massless spin-J fields in (A)dS d+1 . Weinberg's conclusions on the couplings of massless spinning fields to scalar matter carry over to (A)dS. In particular, the four-point Ward-Takahashi identities require: charge conservation (1.4) for J = 1, the equivalence principle (1.5) for J = 2 and that, in local theories, there is no coupling of massless higher-spin fields J > 2 to scalar matter (1.6).

• Partially-massless spin-J field of depth-2 in dS d+1 . 4 The four-point Ward-Takahashi identities imply that partially-massless fields of depth-2, which can have spin-3 and higher, cannot couple consistently to scalar matter in local theories.

The paper is organised as follows. In section 2 we introduce the Mellin-Barnes representation of three-point boundary correlators in (A)dS d+1 , 5 focusing on the case of correlators involving two scalars and a spin-J field.
For a (partially-)massless spin-J field we study how gauge-invariance manifests itself in the Mellin-Barnes representation and how it constrains the masses of the scalar fields to which it can couple. We derive three-point Ward-Takahashi identities for massless spinning fields and partially-massless spinning fields of depths 1 and 2. We also provide a double check of these results by deriving the Ward-Takahashi identities in d = 3 for the case that the scalars are conformally coupled, where the Mellin-Barnes integrals in the three-point correlators can be lifted completely. We also give various new explicit expressions for three-point correlators of (partially-)massless fields with conformally coupled scalars, including all lower helicity components. In section 3 we introduce the Mellin-Barnes representation of four-point functions, focusing on tree-level processes, namely four-point exchanges and quartic contact diagrams (including those of derivative interactions). We show how quartic contact diagrams can be packaged as improvement terms to cubic vertices in a four-point exchange and how this is naturally described within the Mellin-Barnes formalism. 3 These have been classified in [18] for quartic contact diagrams involving only (conformally coupled) scalar fields. 4 For simplicity we focused on the scalar matter couplings to depth-2 partially-massless fields, though from considering various examples for other depths we expect that this result holds for all non-zero depths. 5 In the case of dS d+1 , by boundary correlators we mean late-time in-in correlators computed within the in-in/Schwinger-Keldysh formalism [45,46] (for a review see [47]). These should not be confused with wavefunction coefficients, which are sometimes (with an abuse of terminology) referred to in the literature as correlators.
In section 4, focusing on four-point functions involving three scalars and a single (partially-)massless spin-J field in (A)dS d+1 , we explore the constraints coming from gauge-invariance. We show that the Ward-Takahashi identity, for generic cubic couplings, is violated by quartic contact terms and argue that this cannot be restored by the addition of local quartic vertices, thus leading to a constraint on the cubic couplings of (partially-)massless fields to scalars. We verify this explicitly for the case of massless spin-J fields and partially-massless spin-J fields of depth 2, deriving the corresponding constraints on the cubic couplings. As for the three-point functions in section 2, we provide a check of these results in d = 3 in the case that the scalars are conformally coupled, where the Mellin-Barnes integrals can be lifted completely. We give various explicit expressions for four-point exchanges involving conformally coupled scalars and a single external massless spinning field, including all lower helicity components. Various technical details are relegated to the appendices.

Notation and conventions. Throughout we denote scalar fields by the symbol φ and spinning fields by ϕ. A scalar operator on the boundary with scaling dimension ∆ = d/2 + iν is denoted by O ν . If the operator instead has spin-J it is denoted by O ν,J . The d-dimensional spatial vector x parameterises the boundary directions and k denotes the boundary momentum. These have magnitudes denoted by x = |x| and k = |k|. In dS d+1 we work with metric signature (−, +, . . . , +).

Three-point functions

We begin in section 2.1 by reviewing and extending the relevant aspects of the Mellin-Barnes representation for three-point functions in momentum space introduced in [20,21].
In section 2.2 we introduce some useful differential operators which can be used to derive relations between correlators with operator scaling dimensions and spins that differ by integer shifts, as well as correlators generated by derivative interactions. In section 2.3 we consider three-point functions of a (partially)-massless field and two scalars in (A)dS d+1 . We describe how the constraints from gauge-invariance manifest themselves in the Mellin-Barnes representation and derive explicit expressions for the corresponding three-point Ward-Takahashi identities. In section 2.4 we detail how the freedom to add improvement (i.e. on-shell vanishing) terms to cubic vertices can be used to simplify the Mellin-Barnes representation of three-point functions. In section 2.5 we consider the special case in which the two scalar fields are conformally coupled in d = 3. In this case the Mellin-Barnes representation is not required to describe the correlator completely and the integrals can be lifted to give explicit closed form expressions for the three-point function of a (partially-)massless field and two conformally coupled scalars. These results provide a cross-check of the three-point Ward-Takahashi identities we derived in section 2.3. Momentum space three-point functions of (partially)-massless fields in (A)dS have been studied in various works. For a (most-likely) incomplete list see refs. [9,10,15,16,21,24,45,[48][49][50][51][52][53][54][55][56][57]. 
Mellin-Barnes representation

The Mellin-Barnes representation of a generic three-point conformal correlation function in d-dimensional momentum space is defined as where in the usual way the prime denotes the correlator with the momentum-conserving delta function stripped off. The operator O ν j ,J j has spin J j and its scaling dimension ∆ + j is parameterised as ∆ + j = d/2 + iν j , so that the shadow scaling dimension is obtained by sending ν j → −ν j . 6 We refer to the variables s j as Mellin variables, which are assigned to each momentum k j . 7 The Mellin-Barnes representation can be expressed in the form: where the ε j are null auxiliary vectors (ε j · ε j = 0) encoding the tensor structure (as in e.g. [59]): (2.3) The function ρ ν 1 ,ν 2 ,ν 3 (s 1 , s 2 , s 3 ) carries two infinite families of poles for each Mellin variable. These poles are associated to the Mellin-Barnes representation of the corresponding bulk-boundary propagators, which are given by a type of Bessel function [60]. The function is what we refer to throughout as the Mellin-Barnes amplitude (2.5). 6 Note that, throughout, the parameters ν j are not necessarily real. The constraint ν j ∈ R defines Principal series representations and, at the level of the Mellin-Barnes representation, ensures that the integration contours do not get pinched. Other representations can be obtained from the Principal series by analytic continuation and careful treatment of any divergences, for which we refer the reader to [21]. See [58] for a nice overview of unitary irreducible representations in (anti-)de Sitter space. 7 Later on Mellin variables will be divided into external and internal Mellin variables, associated to external and internal momenta respectively. The s j above are therefore external Mellin variables.
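The connection between the pole families and the Bessel functions in the bulk-boundary propagators can be checked directly: the Mellin transform of the modified Bessel function K ν is a product of two Gamma functions, each of which supplies one infinite family of poles. A minimal mpmath check (the parameter values are arbitrary, and a real order is used purely for numerical convenience, whereas principal-series fields have imaginary order):

```python
from mpmath import mp, besselk, gamma, quad, inf

mp.dps = 25

# Mellin transform of the modified Bessel function K_nu:
#   int_0^inf dz z^(s-1) K_nu(z) = 2^(s-2) Gamma((s-nu)/2) Gamma((s+nu)/2),
# valid for Re(s) > |Re(nu)|.  The two Gamma factors carry the two infinite
# families of poles in the Mellin variable; for principal-series fields the
# order is imaginary, nu -> i*nu, giving poles of the type quoted in (2.4).
nu, s = mp.mpf('0.7'), mp.mpf('2.3')

numeric = quad(lambda z: z**(s - 1) * besselk(nu, z), [0, inf])
exact = 2**(s - 2) * gamma((s - nu) / 2) * gamma((s + nu) / 2)
print(numeric, exact)
```

The two infinite families of poles of ρ in each s j are then just the poles of these Gamma functions, shifted by the external parameters.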
where the function C ν 1 ,J 1 ;ν 2 ,J 2 ;ν 3 ,J 3 (s 1 , s 2 , s 3 | ε i · k j , ε i · ε j ) encodes the tensorial structure, which is a polynomial in the contractions (ε i · k j ) and (ε i · ε j ) and a rational function of the Mellin variables s j . We shall give some explicit examples below. In the next section 2.2 it will be shown that this function can always be transformed into a polynomial in the s j through the appropriate change of Mellin integration variables, so that all poles in the s j are encoded in functions (2.4). The Dirac delta function in (2.5) enforces a constraint among the Mellin variables that is analogous to momentum conservation k 1 + k 2 + k 3 = 0. In particular, analogous to how translation invariance implies momentum conservation, the Dilatation Ward identity (see e.g. [49] for its form in momentum space) requires where N is the degree of the polynomial C ν 1 ,J 1 ;ν 2 ,J 2 ;ν 3 ,J 3 . From a holographic perspective, the Dirac delta function can be expressed as an integral over the bulk radial co-ordinate, which, in Poincaré co-ordinates with radial co-ordinate z, reads: 8 Boundary terms are therefore encoded in the Mellin-Barnes amplitude (2.5) by terms that vanish on the constraint (2.6), since: Dirac delta functions (2.8) which are not accompanied by a factor ( x/4 − s 1 − s 2 − s 3 ) as in (2.9) above are therefore the fingerprint of genuine bulk contact interactions, and hence of the presence of a singularity in the total energy variable E T = k 1 + k 2 + k 3 as E T → 0. 9 Boundary terms do not have a singularity in E T . The most general boundary term is given by (2.9) dressed with a polynomial in s 1 , s 2 and s 3 .
This is analogous to the representation of the momentum-conserving delta function as an integral over the boundary co-ordinates x, where boundary terms are encoded in terms that vanish by momentum conservation: For scalar operators, this integral is precisely the integral in the triple-K integral representation [49] for conformal correlation functions of scalar operators in momentum space. In that case the three Mellin variables s j arise from the Mellin-Barnes representation for each K, which is a modified Bessel function of the second kind. 9 Bulk contact terms only have singularities in E T and are characterised by the order of the pole in E T [48,61]. Given the above parallels between the Mellin-Barnes and momentum space representations of conformal correlators it is tempting to regard the Mellin-Barnes representation as an analogue of momentum space for the bulk radial direction. We will often find it useful to work with the Mellin-Barnes amplitude at the level of the integrand in the bulk radial co-ordinate z, which can be immediately read off from (2.8) where we defined The Mellin-Barnes amplitude for the corresponding in-in 3pt function in dS d+1 is obtained from its EAdS d+1 counterpart by multiplying with the following constant sinusoidal factor: 10 (2.14) This factor combines the contributions from the + and − in-in contour branches, which have equal and opposite phases generated by analytic continuation from EAdS d+1 ; for details see [21]. Having outlined the general framework for the Mellin-Barnes representation of conformal 3pt functions above, below we give some examples. Example 1: three scalars. The simplest example is given by boundary three-point correlation functions generated by the simple non-derivative bulk cubic vertex φ 1 φ 2 φ 3 with coupling g. For bulk fields φ i in EAdS d+1 the Mellin-Barnes amplitude (2.5) of the dual operators O ν i ,0 simply reads [21]: where for scalar 3pt contact diagrams x = d.
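The link between bulk contact terms and total-energy singularities (footnotes 8 and 9) can be made fully explicit in a simple triple-K integral. The sketch below picks the illustrative parameter choice α = 3/2 with all Bessel orders 1/2 (an assumption made here only because K 1/2 is elementary, corresponding up to normalization to conformally coupled scalars); the exponentials then collapse and the integral exhibits the advertised 1/E T pole:

```python
from mpmath import mp, besselk, quad, sqrt, pi, inf, mpf

mp.dps = 25

k1, k2, k3 = mpf(1), mpf(2), mpf(3)  # boundary momentum magnitudes
ET = k1 + k2 + k3                    # the "total energy" variable

# triple-K integral with alpha = 3/2 and all orders 1/2; since
# K_{1/2}(x) = sqrt(pi/(2x)) e^{-x}, the integrand reduces to a pure
# exponential e^{-E_T z} times a constant.
numeric = quad(lambda z: z**mpf('1.5') * besselk(0.5, k1 * z)
               * besselk(0.5, k2 * z) * besselk(0.5, k3 * z), [0, inf])

# closed form with the E_T -> 0 singularity of a genuine bulk contact term
closed = (pi / 2)**mpf('1.5') / (sqrt(k1 * k2 * k3) * ET)
print(numeric, closed)
```

Inserting extra powers of z (as derivative vertices would, schematically) produces higher-order poles in E T , matching the statement in footnote 9 that bulk contact terms are characterised by the order of the E T pole.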
Correlators of the shadow operators O −ν j with scaling dimensions ∆ − j = d/2 − iν j are obtained from the above by sending ν j → −ν j . To obtain the corresponding result in dS d+1 one simply multiplies by the factor (2.14). The corresponding correlator (2.1) is, up to normalisation, the unique solution to the Conformal Ward identities for scalar operators, which for the generic scaling dimensions considered above is given by Appell's F 4 function [49,62]. The bulk counterpart of the uniqueness of this conformal structure is that the vertex φ 1 φ 2 φ 3 generating it is unique on-shell. Other cubic vertices involving φ 1 , φ 2 and φ 3 differ from the latter by terms that vanish on-shell, so-called improvement terms, which generate boundary terms (2.9) that give a vanishing contribution to the three-point function. Example 2: two scalars and a spin-J field. The cubic vertex involving a single spin-J field ϕ J and the two scalars φ 1,2 is also unique on-shell, taking the following form up to integration by parts and the free equations of motion: (2.17) For bulk fields in EAdS d+1 the Mellin-Barnes amplitude it generates for the three-point correlation function of the dual spin-J operator O ν 3 ,J with auxiliary vector ε 3 and two scalar operators O ν 1,2 ,0 is (see section 3.2 of [21]): To obtain the corresponding result in dS d+1 one simply multiplies by the factor (2.14). The tensorial structure C ν 1 ,0;ν 2 ,0;ν 3 ,J (s j | ε 3 · k j ) is a degree-J polynomial in the contractions (ε 3 · k j ): where Y (J) is independent of the Mellin variables s j and its explicit form is reviewed in appendix B. Vertices that differ from the canonical choice (2.17) by on-shell vanishing terms generate the same three-point correlation function (2.18) modulo boundary terms (2.9) that give a vanishing contribution to the three-point function.
In the next section we will see how rational functions of the Mellin variables such as (B.3) can be translated into a polynomial via an appropriate change of integration variables. Note that above we have taken the fields participating in the vertex (2.17) to be generic. When the spin-J field is (partially-)massless, gauge invariance constrains the masses of the scalar fields φ 1 and φ 2 to which it can couple (see [43] section 3.2). In section 2.3 we will see how such constraints manifest themselves in the Mellin-Barnes formalism.

Weight shifting operators

The Mellin-Barnes representation has the virtue of making manifest certain useful recursion relations that hold between correlators with operator scaling dimensions and spins that differ by integer shifts, 11 as well as correlators with and without derivative interactions. See section 4.4 of [21], which we review and expand upon in the following. 11 Note that such positive integer shifts of the operator dimensions parameterised by ∆ = d/2 ± iν can be naturally interpreted as shifts in the dimension d of the space-like de Sitter boundary.
The latter is then naturally recast as a differential operator (2.22) in the momentum k j . Likewise, the scaling dimension of the operator O ν j can be raised by one unit upon shifting d → d − 2 in (2.16) and acting with a simple differential operator P k j : which instead raises by one unit the scaling dimension of O ν j while also raising by two units the boundary dimension d. The operators (2.22) and (2.24) can then be used recursively to obtain any integer shift ∆ j → ∆ j ∓ n in the scaling dimensions, which compose simply as More generally, any polynomial in the Mellin variables s j that dresses the Mellin-Barnes representation of the scalar three-point function (2.16) can be translated into the action of a differential operator. This can be achieved by expressing the polynomial as a sum of Pochammer factors s j ± iν j 2 n , which in turn can be absorbed into the action of the following differential operators: (2.27b) As will become clear, the relations (2.27) are particularly useful when dealing with threepoint functions generated by derivative interactions and also operators with spin. In particular, in the previous section we saw that the Mellin-Barnes representation for spinning correlators (2.18) differs from that for the corresponding scalar correlator (2.16) by a rational function (2.19) of the Mellin variables s j that encodes the tensorial structure and a shift in the boundary dimension d → d + 2J. The key point is that the function encoding the tensorial structure can always be transformed into a polynomial in both the Mellin variables s j and the contractions 3 · k j through a change of variables. For example, for the three-point function involving a single spin-J operator (2.18) this is achieved for each term in the finite sum over α by redefining s 3 → s 3 + α, which gives: where we recall that x = d + 2J. 
Using the relations (2.27), the Pochhammer factors on the second line dressing the scalar 3pt conformal structure with x − α boundary dimensions can be absorbed into the action of the differential operators (2.26): This establishes that the Mellin-Barnes representation of three-point functions for spinning operators can be reduced to that (2.16) of the scalar operators with the same scaling dimensions as their spinning counterparts, up to a shift in the boundary dimension. By re-instating the Mellin-Barnes integrals via the definition (2.1), the identity (2.28) above combined with (2.29) furthermore gives a decomposition of the three-point function involving a single spin-J operator into a sum of three-point functions involving only scalar operators which are acted upon by the operators (2.27). Using the Mellin-Barnes representation we can also express the three-point function involving a single spin-J operator as a differential operator acting on a single three-point function of scalar operators in which one of the scalar operators has scaling dimension shifted by the spin-J: where ν̃ 3 = ν 3 + iJ, so that the correlator with a spin-J operator of scaling dimension ∆ 3 = d/2 + iν 3 is obtained by acting with the above differential operator on the correlator where it is replaced by a scalar operator with scaling dimension ∆ 3 − J. Note that τ 3 = ∆ 3 − J is the twist of the spin-J operator, meaning that if the operators in one correlator have the same twist as their counterparts in the other, both correlators are obtained from the same correlation function of scalar operators in this way. (E.g. conserved operators, which are dual to massless spinning fields, all have the same twist τ = d − 2, as do partially-conserved operators of the same depth r, which have twist τ = d − 2 − r.)
The identity (2.30) is straightforward to establish from the Mellin-Barnes representation (2.18) by making the change of variables s 3 → s 3 + J/2 and using (2.26).

Ward-Takahashi identities

Correlation functions involving conserved currents are further constrained by Ward-Takahashi identities. These restrict the scaling dimensions of the operators that can appear in correlators involving conserved currents at the three-point level [43,44]. In this section we detail how these features manifest themselves in the Mellin-Barnes representation, focusing on three-point functions of a (partially)-massless field and two scalars. See figure 1. The spin-J primary operator O ν 3 ,J is a conserved current at the following special values of ν 3 (corresponding to twist τ 3 = d − 2 − r, i.e. ∆ 3 = d − 2 + J − r): where we refer to the parameter r as the depth. 13 For these values of ν 3 the operator satisfies the conservation condition [41,42]: Operators satisfying (2.32) with depth r > 0 are often referred to in the literature as partially-conserved, with the terminology "conserved current" reserved for those with depth r = 0. For J = 2 the latter is familiar as the stress tensor. When inserted into a correlator, the above conservation condition relates the longitudinal components to lower-point correlators of the other operators. For instance, for the three-point function of O ν 3 ,J with two scalar operators O ν 1 ,0 and O ν 2 ,0 , whose Mellin-Barnes amplitude we gave in (2.18), we have where δ 3 denotes the action of the charge associated to the current (2.32).
13 Sometimes in the literature another definition of depth, t, is used, which is related to the r above.

[Figure 1: the cubic vertex of the spin-J (partially-)massless field ϕ_J with two scalars, with coupling g^{(J,r)}.]

The Ward-Takahashi identities are intimately related to invariance under gauge transformations of the corresponding field ϕ_J in the bulk. In particular, the Ward-Takahashi identity (2.33) is equivalent to the gauge invariance condition (2.34) (see e.g. [65]),14 which relates the cubic coupling in the action S^(3) generating the three-point function in (2.33) to the kinetic term S^(2) of the scalar fields φ_{1,2} that are dual to the operators. Here δ^(0)_ξ ϕ_J is the linearised gauge transformation of the spin-J field with gauge parameter ξ, which for depth r takes the form (2.35). The field ϕ_J therefore has helicities ranging over {J − r, J − r + 1, . . . , J}. Since massless spin-J fields have helicity J, fields with depth r > 0 are known as partially-massless.
The δ^(1)_ξ is the transformation of the scalar fields φ_{1,2} induced by the cubic vertex S^(3) and is linear in φ_{1,2}. Since the Ward-Takahashi identities are a constraint on the longitudinal components, it is useful to consider the helicity decomposition (2.36), where the helicity-m component is obtained by acting on the three-point function with the differential operator (A.2).

14 The notation (n) signifies that the corresponding term is of power n in the fields.

The function Υ_{J−m}(ε3, k3) encodes the J − m longitudinal indices and is given by a Gegenbauer polynomial. By definition this is a monomial of degree m in the contraction ζ3 · k12, where ζ3 replaces ε3 as an auxiliary vector which is also transverse, i.e. ζ3 · k3 = 0, in addition to being null. In particular, f is a polynomial in the Mellin variables s_j, and g^{(J,r)}_{12} is the coupling of the spin-J (partially-)massless field of depth r to the scalars φ1 and φ2. The polynomial f becomes more and more involved as the helicity m decreases; in particular, for the helicity m = J and m = J − 1 components we have (2.38). In section 2.4 we will see how the explicit form of these polynomials can be simplified using the freedom to add improvement (on-shell vanishing) terms to the cubic vertices. A useful feature of the functions f is that they depend on the spin J and the boundary dimension d only through the combination x = d + 2J. This implies that to extract the helicity-m component it is sufficient to extract it for spin-(J − m).15

For generic values of ν3, the Dirac delta distribution in each helicity component (2.38) indicates the presence of a bulk contact term, or equivalently a singularity in the total energy variable E_T = k1 + k2 + k3 as E_T → 0. See section 2.1.
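The Gegenbauer polynomials underlying Υ_{J−m} are standard objects; as a quick illustration of the degree counting in J − m (our own example, not code from the paper), sympy can generate them explicitly:

```python
# Minimal sketch (not the paper's code): the longitudinal structure
# Upsilon_{J-m} is built from a Gegenbauer polynomial; sympy provides
# these as gegenbauer(n, a, x), where C_n^(a)(x) has degree n in x.
from sympy import symbols, gegenbauer, degree, expand

a, x = symbols("a x")

J, m = 4, 1          # example spin and helicity (illustrative values)
n = J - m            # degree matching the J - m longitudinal indices
C = expand(gegenbauer(n, a, x))

print(C)             # explicit polynomial in x with coefficients in a
print(degree(C, x))  # degree equals J - m, i.e. 3 for this choice
```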
For the values (2.31) of ν3 corresponding to (partially-)massless fields, consistent coupling to scalar matter requires the bulk contact terms to be absent from the helicity-(J − 1 − r) component of the correlator down to helicity-0. This requires that for m = 0, . . . , J − 1 − r the helicity-m component (2.38) of the Mellin-Barnes amplitude takes the form (2.39) at the (partially-)massless points (2.31), where the function (2.41) is a polynomial in s1, s2 and s3, and the factor that multiplies it coincides with the argument of the Dirac delta function in (2.38). This requirement places constraints on the values of ν1 and ν2 for the scalar fields that admit consistent cubic couplings with (partially-)massless fields. As we saw in section 2.1, the above form corresponds to a local boundary term (2.9) in the Mellin-Barnes representation and therefore does not encode a singularity in the E_T → 0 limit. In particular, inserting the expression (2.8) for the Dirac delta function as an integral over the bulk radial coordinate gives the representation (2.42) of the helicity-m component (2.39). In this form the presence of a bulk contact singularity is indicated by a simple pole. For consistent couplings of the (partially-)massless points (2.31) to scalar matter, this pole is cancelled by the corresponding factor (2.41) in the polynomial f, generating a boundary term. For the case of two scalars and a (partially-)massless field this boundary term is actually non-zero and generates a non-trivial Ward-Takahashi identity, hence the "W-T" in (2.41), which we discuss in the following. Consistent couplings involving (partially-)massless fields come in two types [65]: those which are exactly gauge invariant under the corresponding gauge transformations, and those which are gauge invariant only up to terms proportional to the free equations of motion. In order to satisfy the cubic-order gauge invariance condition (2.34), the latter induce a non-trivial deformation δ^(1)_ξ ≠ 0.
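The radial-coordinate representation of the Dirac delta invoked via (2.8) is, up to conventions, the Mellin/Fourier transform of unity; a sketch in our own normalisation, not necessarily that of (2.8):

```latex
% Substituting z_0 = e^{t} turns the scale integral into a Fourier
% integral, giving the standard delta-function representation:
\int_0^{\infty} \frac{dz_0}{z_0}\; z_0^{\,i\omega}
\;=\; \int_{-\infty}^{\infty} dt\; e^{\,i\omega t}
\;=\; 2\pi\,\delta(\omega) .
```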
Correspondingly, three-point functions generated by exactly gauge invariant cubic couplings are exactly conserved, while those generated by couplings that induce deformations of the gauge transformations give non-trivial Ward-Takahashi identities. Cubic vertices of two scalars and a (partially-)massless field in (A)dS_{d+1} are of the latter type [44,66,67]. At the level of the Mellin-Barnes representation, the latter can only be generated by the residues of the poles (2.4) satisfying the constraint (2.43) with non-zero residue. These give a finite contribution in the limit z0 → 0, since the constraint (2.43) sets to zero the exponent of z0 in (2.42). They are given in (2.46), where n_j = 0, 1, 2, 3, . . . , subject to the condition (2.47). This makes clear that, for a given spin J and depth r, only a finite number of the poles (2.4) contribute to a non-trivial Ward-Takahashi identity, which furthermore only emerges for scaling dimensions ν_{1,2} satisfying (2.47). As we shall see below, this is the case for all values [43,44] of ν_{1,2} for which a consistent cubic coupling to a partially-massless field exists. This is to be expected, since such couplings are only gauge invariant on-shell (2.45) and therefore induce a non-trivial δ^(1). In the following we derive the corresponding three-point Ward-Takahashi identities, considering couplings to massless fields in section 2.3.1 and partially-massless fields in section 2.3.2. This simply consists of extracting the function (2.41) for each helicity component and evaluating the residues (2.46), which can be implemented for a given helicity in Mathematica. Note that at the three-point level the cubic coupling g is not constrained by the Ward-Takahashi identity; only the masses of the scalar fields φ_i and φ_j that can couple to a (partially-)massless field of spin J are.16 These are instead constrained by the four-point Ward-Takahashi identities, which are explored in section 4.

Massless fields

For a massless spin-J field we have depth r = 0.
From the helicity-(J − 1) component of the Mellin-Barnes amplitude (2.40b), it is straightforward to see that a gauge invariant three-point function only exists when the scalars have equal mass: only then does it contain the factor (2.41) required to generate a boundary term. This is consistent with existing results on cubic couplings of massless spinning fields to scalar matter, where it is well known that consistent cubic couplings, both in flat and in (A)dS space, require the scalars to have equal mass [66,67]. In the following we shall therefore take ν1 = ν2 = µ. The corresponding Ward-Takahashi identities are non-trivial and are generated by the residues of the poles (2.46), where the lower the helicity component, the greater the number of poles that contribute. This is consistent with the fact that the corresponding cubic couplings [66,67] are gauge invariant only up to terms proportional to the free equations of motion. For the helicity-(J − 1) component, only the poles with n1 = n2 = n3 = 0 generate the Ward-Takahashi identity, which reads as in (2.50).17 The Ward-Takahashi identities for the lower helicity components (2.39) follow in the same way. Since the number of poles that contribute increases as the helicity decreases, they also become more involved: for the helicity-(J − 2) component we have (2.51). The polynomials f which encode the above Ward-Takahashi identities at the level of the Mellin-Barnes amplitude (2.39) also become increasingly complicated as the helicity decreases; for the helicity-(J − 2) component we have (2.52). In section 2.4 we will see how this can be simplified using the freedom to add improvement terms. As highlighted in the previous section, note that both (2.50) and (2.51) depend on d and J only through the combination x = d + 2J.

17 For the momentum-space two-point function of the scalar operators O_{µ,0} we use the normalisation of [21].
Therefore, once they are known in general d for some spin J, they are known for all spins J; likewise, if they are known for all spins J in some dimension d, then they are known for all d. This is also illustrated by the results (2.54) and (2.56) for partially-massless fields of depth 1 and 2, which are considered in the following section.

Partially-massless fields

The solutions to the gauge invariance condition (2.34) for cubic couplings involving partially-massless fields were constructed and classified in [43,44]. For the cubic coupling of a (partially-)massless field of spin J to scalar fields, it was found that consistent couplings exist only when the relation (2.53) holds between the depth r and the scaling dimensions ∆1 and ∆2 of the scalar fields. In particular:

• Partially-massless fields of odd depth r can only couple to scalars with scaling dimensions ∆1 and ∆2 that differ by odd integers no greater than r.

• For partially-massless fields of even depth r, the above condition tells us they can only couple to scalars with scaling dimensions that differ by even integers no greater than r. This includes scalars of equal mass.

This has interesting implications for the coupling of partially-massless fields to massive scalars, where ∆_{1,2} = d/2 + iν_{1,2} with ν_{1,2} ∈ R:

• Partially-massless fields of odd depth cannot couple to massive scalars, since their scaling dimensions cannot differ by (odd) integers as required by (2.53). Consistent couplings of scalars to odd-depth partially-massless fields can therefore only exist when both scalars belong to the complementary series.

• Partially-massless fields of even depth can only couple to massive scalars if they have equal mass, since the scaling dimensions of principal series representations can only differ by imaginary values.
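The parity pattern in the bullet points above can be phrased as a tiny selection-rule check. The following sketch is our own illustration (the function name is invented, not from the paper), testing whether a depth-r partially-massless field can couple to scalars whose dimensions differ by a given real number:

```python
# Illustrative sketch of the selection rule stated above (function name
# ours): a depth-r partially-massless field couples to two scalars only
# if Delta1 - Delta2 is an integer with the same parity as r and with
# |Delta1 - Delta2| <= r.
def coupling_allowed(r: int, delta_diff) -> bool:
    if delta_diff != int(delta_diff):   # non-integer difference: forbidden
        return False
    n = abs(int(delta_diff))
    return n <= r and (n % 2) == (r % 2)

# depth 1 couples to scalars whose dimensions differ by 1, not 0:
print(coupling_allowed(1, 1))   # True
print(coupling_allowed(1, 0))   # False
# depth 2 couples to equal-mass scalars (difference 0) but not difference 3:
print(coupling_allowed(2, 0))   # True
print(coupling_allowed(2, 3))   # False
```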
The Ward-Takahashi identities associated to (2.53) are non-trivial, since this condition coincides with the condition (2.47) required to generate a finite, non-zero boundary term. This is consistent with the analysis [43,44] of the corresponding partially-massless cubic couplings, which are gauge invariant up to terms proportional to the free equations of motion and hence induce a deformation in the gauge transformation. In the following we give some examples for even and odd depths separately, focusing for simplicity on partially-massless fields with depths r = 1 and r = 2. For r = 1 we take ν1 = µ and ν2 = µ + i, where i is the imaginary unit, so that ∆1 − ∆2 = 1, while for r = 2 we take the scalars to have equal mass, ν1 = ν2 = µ. These choices are the simplest ones consistent with the constraint (2.53).

Partially-massless of depth r = 1. Taking ν1 = µ and ν2 = µ + i, the helicity-(J − 2) component (2.39) of the Mellin-Barnes amplitude for a spin-J partially-massless field of depth 1 is a boundary term: the factor outside of the square brackets ensures that it is of the form (2.41). The corresponding Ward-Takahashi identity is generated by the residues of the poles (2.47).

Partially-massless of depth r = 2. Taking ν1 = ν2 = µ, the helicity-(J − 3) component (2.39) of the Mellin-Barnes amplitude for a spin-J partially-massless field of depth 2 is given by (2.55). This is again a boundary term (2.41), owing to the factor ((x − 12)/4 − (s1 + s2 + s3)). The corresponding Ward-Takahashi identity is likewise generated by the residues of the poles (2.47). In the next section 2.4 we will use the freedom to add improvement terms to reduce (2.55), which is a degree-6 polynomial, to a degree-5 one.

On improvement terms

As discussed in section 2.1, one can consider adding improvement terms to the canonical cubic vertex (2.17) which vanish on-shell. These give vanishing boundary-term contributions to the corresponding three-point function.
In particular, in the helicity-m component (2.39) of the three-point function (2.18), an improvement generates a contribution of a definite polynomial form, which it is useful to expand in the basis (2.58). From section 2.2 we see that such basis elements are in one-to-one correspondence with the differential operators (2.27): the degree of the polynomial in the s_j corresponds to the order of the differential operator that generates it. In other words: the higher the degree of the improvement as a polynomial in the s_j, the higher the derivative of the (on-shell vanishing) cubic vertex it represents.18 The coefficients c_{n1,n2,n3} in (2.58) are constrained to give a vanishing boundary-term contribution. In particular, the requirement is that the polynomial vanishes on the values (2.47) of s1, s2, s3. This does not fix the coefficients c_{n1,n2,n3} completely, and the leftover freedom can be used to simplify the functions f. For example, for massless fields, ν1 = ν2 = µ, the improvements (2.58) of degree 4 that we can add to the helicity-(J − 2) component (2.52) lead to a form which is simplified compared to (2.52).

18 See section 3.2 for more details on this statement.

Special case: conformally coupled scalars

In the previous sections we studied three-point functions of two scalar operators and a spinning operator, in particular the Ward-Takahashi identities that must be satisfied when the spinning operator is (partially-)conserved and how they arise in the Mellin-Barnes formalism. As noted in [21] (section 4.6), for certain scaling dimensions the Mellin-Barnes representation is not needed to capture the full analytic structure of the correlator, and the Mellin-Barnes integrals can be straightforwardly evaluated to give simple closed-form expressions.
This includes correlators involving conformally coupled scalars which, via the weight-shifting operators of section 2.2, can then be used to obtain explicit closed-form expressions for correlators of certain spinning operators, and also of scalar operators whose scaling dimensions differ by integers from those of conformally coupled scalars (which are ∆ = d/2 + iν with ν = ±i/2). In particular, recall that the three-point functions of partially conserved operators can, via the weight-shifting identity (2.30), be generated from those with the partially conserved operator replaced by a scalar operator of scaling dimension ∆̃3 = ∆3 − J. For d odd this differs by an integer from the scaling dimension of conformally coupled scalars, so the three-point function involving the scalar operator of this dimension can, in turn, be obtained from that of a conformally coupled scalar via application of the differential operators (2.21) and (2.23). The action of the differential operators (2.30), (2.21) and (2.23) is straightforward to implement in Mathematica. Below we give some examples of how this can be used to obtain such expressions for correlators of a (partially-)massless field with conformally coupled scalars, focusing on the case d = 3. We will also obtain the corresponding Ward-Takahashi identities, which serves as a consistency check of the more general results obtained at the level of the Mellin-Barnes representation in the previous section.

Massless spinning field and two conformally coupled scalars. Consistent three-point functions of a massless spinning field and two scalar fields necessarily require that the scalars have the same mass (reviewed in the sections above).
For two conformally coupled scalar operators of the same scaling dimension, given by ν1 = ν2 = −i/2, following the discussion above, their three-point function with a spin-J conserved current in d = 3 can be obtained from the scalar three-point function (2.65) with ν̃3 = i/2. This expression can be obtained from the Mellin-Barnes representation (2.16) simply by evaluating the integrals in s1, s2 and s3, see [20,21]. The three-point correlator of the massless spin-J field can then be obtained by acting on the above with the differential operator (2.30) and setting d = 3. Below we give some examples for spins J = 1, 2, 3.19

19 Note that the helicity-J component of the three-point function for a massless spin-J field and two conformally coupled scalars in d = 3 was shown to be given by a Gauss hypergeometric function in (3.46) of [21], while the lower helicity components were left implicit through the action of a differential operator. In view of extracting the corresponding Ward-Takahashi identity, in the following examples we give the lower helicity components explicitly. For massless spin-1 and spin-2, the explicit three-point functions were given in [24,49], which agree with the expressions we obtain in (2.66) and (2.68).

To the best of our knowledge the explicit expression for spin-3 is new.21 Its helicity-2 and helicity-1 components match the Ward-Takahashi identities (2.50) and (2.51) upon setting d = 3, J = 3 and µ = −i/2. What we did not give previously is the helicity-0 component.

Partially-massless spinning field and two conformally coupled scalars. Recall that consistent three-point functions involving a partially conserved operator and two scalar operators only exist when the scaling dimensions of the scalar operators satisfy (2.53). For example, for partially conserved operators of depth 1, according to (2.53) the scaling dimensions of the scalar operators in the three-point function must differ by ±1.
The simplest case is if they both correspond to conformally coupled scalars, one with ν1 = −i/2 and the other with ν2 = i/2. In d = 3 the three-point function of a partially conserved operator of depth 1 is generated via (2.30) from the three-point function of the latter scalars and a third scalar operator with ν̃3 = 3i/2, which can either be obtained by directly evaluating the Mellin-Barnes integrals in (2.16) or by acting on the three-point function (2.65) of conformally coupled scalars with the differential operator (2.21). Below we give some examples, which to the best of our knowledge were not previously given explicitly in the literature.

21 Expressions for any spin J can be obtained similarly, acting with the differential operator (2.30) on (2.65), but due to the increasing complexity of the result we do not give them explicitly here.

3 Four-point functions

In this section we review and extend some relevant aspects of the Mellin-Barnes representations of four-point functions introduced in [20,21]. The main new result is a systematic study of four-point contact diagrams generated by quartic vertices with and without derivatives in the Mellin formalism, which can be found in section 3.2.

Exchanges

We adopt the Mellin-Barnes representation for four-point exchanges introduced in [21], to which we refer the reader for details and technicalities. The only novelty we present here is an improvement in notation and presentation. The Mellin-Barnes representation for four-point exchanges involves poles in the s_j encoded in the extension of (2.4) to four points; these poles are associated to the external legs, and accordingly we refer to the s_j as external Mellin variables. The poles in u and ū are associated to the internal leg of the exchange with momentum k_s = k1 + k2, so we refer to u and ū as internal Mellin variables.
The Mellin-Barnes amplitude for the s-channel exchange is given by a sum of three terms, each of which can be identified with a specific term in the corresponding bulk-bulk propagator.22 Two of these can be expressed as a convolution integral of the constituent three-point Mellin-Barnes amplitudes, as in (3.6),23 involving the parameters (2.6) associated to each three-point function. The ε-prescription ensures that the integration contour passes to the right of the pole at w ∼ 0. Note that the cosine factors arise from combining the contributions from each branch of the in-in contour, which differ by phases (see section 4 of [21]).24 The remaining contribution (3.8) is completely factorised. The expressions for the corresponding t- and u-channel exchanges follow from the s-channel expressions above via the appropriate interchange of the s_i, k_i, ε_i, J_i and ν_i. Notice that the contributions (3.6) factorise on the simple pole at w = 0. This is the on-shell factorisation of the exchange into its constituent three-point Mellin-Barnes amplitudes, which appears in a way reminiscent of the on-shell factorisation of exchanges in flat space: the simple pole in the variable w plays a role analogous to that of the simple pole in the appropriate Mandelstam variable.

22 In particular, the terms with subscript < and > correspond to those generated by terms with a specific ordering of the radial components of the two bulk points. See [21] for the details.

23 The three-point Mellin-Barnes amplitudes in (3.6) are contracted together, which is implemented by the Thomas-D operator D given in (A.1a). Note that, as detailed in [21], these three-point Mellin-Barnes amplitudes are those for EAdS_{d+1}, e.g. (2.16) and (2.18), and the cosine factors in (3.6) and (3.8) account for the analytic continuation to dS_{d+1}. The factor N4 accounts for the change in two-point function normalisation from AdS to dS, see (2.93) of [21].

24 In particular, these cosine factors are absent from the Mellin-Barnes representation of the exchange in Euclidean AdS.

In section 2.4 we saw that improvement terms in three-point functions give (vanishing) boundary-term contributions. This is no longer the case for exchange diagrams, because the internal leg is off-shell. In this case such terms generate bulk contact terms, since the improvement terms, which are proportional to the free equations of motion, cancel against the bulk-bulk propagator. Let us consider the most general improvement term we can add to the three-point Mellin-Barnes amplitude (2.5): it involves a polynomial p_impr.(s1, s2, s3) in the Mellin variables s1, s2 and s3, multiplied by the factor x/4 − s1 − s2 − s3 that generates the boundary term in the way we saw in section 2. In the Mellin-Barnes exchange amplitude (3.5), however, such improvement terms generate terms proportional to w in the contributions (3.6), as in (3.10). These terms then cancel the simple pole of (3.6) at w = 0. The residue of this pole in the contributions (3.6) is therefore universal, i.e. blind to improvements, and, together with the purely factorised contribution (3.8), gives the on-shell exchange. The improvement terms instead correspond to bulk contact terms that can be uplifted to local quartic vertices in a Lagrangian. This is the Mellin-Barnes counterpart of the fact that one gets a bulk contact term when acting on a bulk-bulk propagator with the operator corresponding to the free equations of motion.

On bulk quartic contact terms

At the end of the previous section we argued that improvement terms in cubic vertices contribute bulk quartic contact terms to their corresponding four-point exchange diagrams, as in (3.10). In this section we explore this relation in more detail, focusing for ease of illustration on four-point functions involving only scalar fields, where we can take C_{νi,0}(si) = 1. The discussion for spinning fields follows in the same way using the corresponding expressions (2.5) for the spinning three-point Mellin-Barnes amplitudes.
The total contribution to the exchange generated by, say, the improvement (3.10a) is given in (3.12), which is obtained simply by combining the contributions (3.6) with the replacement (3.10). To make the connection with bulk quartic contact terms more explicit, the integrals in (3.12) need to be evaluated. This simply amounts to evaluating the integral in w, since the integrals in u and ū are eliminated by the presence of the two Dirac delta functions. To evaluate the w integral, the key point is that in the basis (2.58) the improvement p_impr.(s1, s2, u) takes the form (3.13), where the coefficients c_n(s1, s2) are polynomials in s1 and s2 of the form (3.14). The integral in w can then be evaluated using the identity (3.15).25 The result (3.15), inserted in (3.12), gives the Mellin-Barnes representation of a four-point contact diagram. These are polynomials in the four external Mellin variables s_j which multiply a Dirac delta function in their sum s1 + s2 + s3 + s4. The latter encodes the bulk contact singularities for E_T = k1 + k2 + k3 + k4 → 0. To see this, let us first consider improvements with n1 = n2 = 0 and take the external scalars to be conformally coupled, keeping the exchanged scalar generic. In this case it is straightforward to lift the integrals (3.11) in the external Mellin variables s_j.
In particular, each term in the sum over j in (3.15) is equal to the Mellin-Barnes representation of the φ⁴ contact diagram with conformally coupled scalars φ (which have ν_{1,2,3,4} = i/2) and boundary dimension d = (x + x̄)/2 + 2j, with x, x̄ the parameters (2.6) of the two constituent three-point functions. The Mellin-Barnes integrals in s_{1,2,3,4} were evaluated in [20], giving the closed-form expression (3.17) (see also [16]). Combining this with (3.15) gives the bulk contact term (3.18) generated by the improvement (3.13) with n1 = n2 = 0. This is a quartic contact diagram for a local quartic vertex of conformally coupled scalars φ with (n − 1) derivatives, where each term in the sum (i.e. for fixed j) involves no more than j derivatives, which one can read off from the degree of the singularity in E_T [18].26 Notice that for n = 0 the contact term (3.18) is vanishing, meaning that contact terms can only be generated by improvements (3.13) with a non-trivial u-dependence, where the higher the degree of the polynomial in u, the greater the number of derivatives that appear in the contact interaction it generates. For improvements that also depend on s1, s2, i.e. with non-zero n1, n2 in (3.13), the basis (2.58) is extremely useful. As explained in section 2.4, each basis element can be recast as a differential operator (2.59) in the momentum, meaning that contact terms generated by improvements with non-zero n1, n2 can be obtained by acting with the operators O^(n_j)_{k_j,ν_j} on the contact term (3.18) generated by improvements with n1 = n2 = 0. This increases the degree of the singularity in E_T, where O^(n_j)_{k_j,ν_j} adds derivatives to the field described by the external Mellin variable s_j in the quartic vertex. From the above we can make the following observations about improvements p_impr.(s1, s2, u), which we shall make use of later on:

1. Improvements can only generate non-trivial bulk quartic contact terms if they have a non-trivial dependence on u.
2. The higher the degree of the improvement as a polynomial in s1, s2 and u, the higher the derivative of the quartic vertex that it corresponds to.26 If there is also a dependence on s1, s2, the bulk contact term carries an additional polynomial factor c1(s1, s2) in s1 and s2. Through the correspondence (2.59) we can understand this as generated by a quartic vertex φ1φ2φ3φ4 with derivatives acting on φ1 and φ2.

3. Improvements whose u-dependence is given by u² do not generate bulk contact terms: their integral (3.15) is vanishing. This can be understood by writing the n = 2 basis element as in (3.21) and noting that plugging n = 2 into the integral (3.15) gives (1 − iν) times the result for n = 1. The contact term generated by the constant term in (3.21) is vanishing, as established in point 1, hence the contact term generated by the u² term must be vanishing.

4. More generally, an improvement that is degree n in u generates a derivative contact term that is degree (n − 2) in k_s². The reason this is not degree (n − 1), as the formula (3.15) naively seems to indicate, is that for n > 1 the term in the sum with j = n − 1 is vanishing by virtue of the Pochhammer factor (j − (n − 1))_j.

5. The lowest-derivative quartic vertex that generates a contact term proportional to k_s² is given by a specific improvement, which one can confirm by plugging it into (3.15).

26 Recall that when all the fields in the exchange are scalars, which we are considering here, we have (3.19), which is that of a quartic contact diagram given by the vertex φ1φ2φ3φ4 of scalar fields φ_j with no derivatives.

As a final comment we emphasise that for external (partially-)massless fields the improvements (3.13) are further constrained by the requirement that they do not affect the three-point Ward-Takahashi identity; see section 2.4.
As we shall see in section 4, this implies that some of the above possibilities cannot be realised in this case, as it imposes a lower bound on the degree of the polynomial (3.13) and hence, via the analysis above, also on the degree of the singularity in E_T.
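Point 4 above hinges on the vanishing of the Pochhammer factor (j − (n − 1))_j at j = n − 1. This is quick to verify with sympy's rising factorial (an illustration of the identity, not the paper's computation):

```python
# Check that the Pochhammer symbol (j - (n-1))_j vanishes at j = n - 1
# for n > 1, since it becomes (0)_{n-1} = 0. Uses sympy's rising
# factorial rf(a, k) = a (a+1) ... (a+k-1).
from sympy import rf

for n in range(2, 7):
    j = n - 1
    print(n, rf(j - (n - 1), j))   # (0)_{n-1} = 0 for every n > 1
```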
L g 5 i b S s C / K F + n 8 h Y a M w w 9 G 1 n y K B v f n u 5 + J f X T i G o e Z m M k h R E x D 8 X B a n C E O M 8 D 9 y V W n B Q Q 0 s Y 1 9 L e i n m f a c b B p l a 0 I X x 9 i v 8 n 5 9 U K 3 a 9 U T / f K j c N x H A W 0 g T b R N q L o A D X Q M W q i F u L o B t 2 h B / T o 3 D r 3 z p P z / N k 6 4 Y x n 1 t A P O C / v z i y X E g = = < / l a t e x i t > g (J,r) 20 < l a t e x i t s h a 1 _ b a s e 6 4 = " S i I B K G S 4 8 q F 1 J o + 4 E N o F 3 Y L I 6 g g = " > A A A B 8 H i c b V B N S w M x E J 2 t X 7 V + r X r 0 E i y C p 7 J b R D 0 W v Y i n C v Z D 2 q V k 0 2 w b m m S X J F s o S 3 + F F w + K e P X n e P P f m L Z 7 0 N Y H A 4 / 3 Z p i Z F y a c a e N 5 3 0 5 h b X 1 j c 6 u 4 X d r Z 3 d s / c A + P m j p O F a E N E v N Y t U O s K W e S N g w z n L Y T R b E I O W 2 F o 9 u Z 3 x p T p V k s H 8 0 k o Y H A A 8 k i R r C x 0 l N 3 j F U y Z L 3 7 n l v 2 K t 4 c a J X 4 O S l D j n r P / e r 2 Y 5 I K K g 3 h W O u O 7 y U m y L A y j H A 6 L X V T T R N M R n h A O 5 Z K L K g O s v n B U 3 R m l T 6 K Y m V L G j R X f 0 9 k W G g 9 E a H t F N g M 9 b I 3 E / / z O q m J r o O M y S Q 1 V J L F o i j l y M R o 9 j 3 q M 0 W J 4 R N L M F H M 3 o r I E C t M j M 2 o Z E P w l 1 9 e J c 1 q x b + s V B 8 u y r W b P I 4 i n M A p n I M P V 1 C D O 6 h D A w g I e I Z X e H O U 8 + K 8 O x + L 1 o K T z x z D H z i f P 8 3 U k G o = < / l a t e x i t > 'J < l a t e x i t s h a 1 _ b a s e 6 4 = " z P + y f 2 K S V + c w 7 d X b H U l o w f v X I v c = " > A A A B 7 X i c b V B N S w M x E J 3 U r 1 q / q h 6 9 B I v g q e w W U Y 9 F L x 4 r 2 A 9 o l 5 J N s 2 1 s N l m S r F C W / g c v H h T x 6 v / x 5 r 8 x b f e g r Q 8 G H u / N M D M v T A Q 3 1 v O + U W F t f W N z q 7 h d 2 t n d 2 z 8 o H x 6 1 j E o 1 Z U 2 q h N K d k B g m u G R N y 6 1 g n U Q z E o e C t c P x 7 c x v P z F t u J I P d p K w I C Z D y S N O i X V S q 5 e M e L / W L 1 e 8 q j c H X i V + T i q Q o 9 E v f / U G i q Y x k 5 Y K Y k z X 9 x I b Z E R 
b T g W b l n q p Y Q m h Y z J k X U c l i Z k J s v m 1 U 3 z m l A G O l H Y l L Z 6 r v y c y E h s z i U P X G R M 7 M s v e T P z P 6 6 Y 2 u g 4 y L p P U M k k X i 6 J U Y K v w 7 H U 8 4 J p R K y a O E K q 5 u x X T E d G E W h d Q y Y X g L 7 + 8 S l q 1 q n 9 Z r d 1 f V O o 3 e R x F O I F T O A c f r q A O d 9 C A J l B 4 h G d 4 h T e k 0 A t 6 R x + L 1 g L K Z 4 7 h D 9 D n D z 8 U j u s = < / l a t e x i t > 2 < l a t e x i t s h a 1 _ b a s e 6 4 = " u 0 N X d S J o K A g 7 n f Y E C V v i c r m c D Z E = " > A A A B 7 X i c b V B N S w M x E J 3 1 s 9 a v q k c v w S J 4 K r t V 1 G P R i 8 c K 9 g P a p W T T b B u b T U K S F c r S / + D F g y J e / T / e / D e m 7 R 6 0 9 c H A 4 7 0 Z Z u Z F i j N j f f / b W 1 l d W 9 / Y L G w V t 3 d 2 9 / Z L B 4 d N I 1 N N a I N I L n U 7 w o Z y J m j D M s t p W 2 m K k 4 j T V j S 6 n f q t J 6 o N k + L B j h U N E z w Q L G Y E W y c 1 u 2 r I e u e 9 U t m v + D O g Z R L k p A w 5 6 r 3 S V 7 c v S Z p Q Y Q n H x n Q C X 9 k w w 9 o y w u m k 2 E 0 N V Z i M 8 I B 2 H B U 4 o S b M Z t d O 0 K l T + i i W 2 p W w a K b + n s h w Y s w 4 i V x n g u 3 Q L H p T 8 T + v k 9 r 4 O s y Y U K m l g s w X x S l H V q L p 6 6 j P N C W W j x 3 B R D N 3 K y J D r D G x L q C i C y F Y f H m Z N K u V 4 L J S v b 8 o 1 2 7 y O A p w D C d w B g F c Q Q 3 u o A 4 N I P A I z / A K b 5 7 0 X r x 3 7 2 P e u u L l M 0 f w B 9 7 n D 0 C Y j u w = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " l t T K 2 e K W R / 9 R P X f + D 3 v S o w l K P F k = " > A A A C A n i c d V B N S w M x E M 3 6 W e v X q i f x E i x C B S l J F V t v o h f x V M F W o a l m 2 b b Y P a D Z F Y o S / H i X / H i Q R G v / g p v / h u z t Y K K P h h 4 v D f D z D w v V t I A I e / O x O T U 9 M x s b i 4 / v 7 C 4 t O y u r D Z M l G g u 6 j x S k b 7 m B F K h q I O E p S 4 j L V g g a f E h X d 9 n P k X N I b G Y X n M I h F O 2 C 9 U P q S M 7 B S x 1 3 v X a U t J X w o n u 7 o l p a 9 P m w P O + k u G X b 
c A i k R Q i i l O C O s k 8 s O T i o l m k V 8 y y K K A x a h 3 3 r d W N e B K I E L h i x j Q p i a G d M g 2 S K z H M t x I j Y s a v W U 8 L Q 1 Z I E w 7 H b w x F t W 6 W I / r Z C w C P 1 + T K A m M G g W c 7 A w Z 9 8 9 v L x L + 8 Z g J + t Z 3 K M E 5 A h P x z k Z 8 o D B H O 8 s B d q Q U H N b C E c S 3 t r Z j 3 m W Y c b G p 5 G 8 L X p / h / i i X 6 H 6 p f L Z X O D w a x 5 F D G 2 g T F R F F F X S I T l A N 1 R F H t + g e P a I n 5 8 5 5 c J 6 d l 8 / W C W c 8 s 4 Z + w H n 9 A M + y l x M = < / l a t e x i t > g (J,r) 30 < l a t e x i t s h a 1 _ b a s e 6 4 = " S i I B K G S 4 8 q F 1 J o + 4 E N o F 3 Y L I 6 g g = " > A A A B 8 H i c b V B N S w M x E J 2 t X 7 V + r X r 0 E i y C p 7 J b R D 0 W v Y i n C v Z D 2 q V k 0 2 w b m m S X J F s o S 3 + F F w + K e P X n e P P f m L Z 7 0 N Y H A 4 / 3 Z p i Z F y a c a e N 5 3 0 5 h b X 1 j c 6 u 4 X d r Z 3 d s / c A + P m j p O F a E N E v N Y t U O s K W e S N g w z n L Y T R b E I O W 2 F o 9 u Z 3 x p T p V k s H 8 0 k o Y H A A 8 k i R r C x 0 l N 3 j F U y Z L 3 7 n l v 2 K t 4 c a J X 4 O S l D j n r P / e r 2 Y 5 I K K g 3 h W O u O 7 y U m y L A y j H A 6 L X V T T R N M R n h A O 5 Z K L K g O s v n B U 3 R m l T 6 K Y m V L G j R X f 0 9 k W G g 9 E a H t F N g M 9 b I 3 E / / z O q m J r o O M y S Q 1 V J L F o i j l y M R o 9 j 3 q M 0 W J 4 R N L M F H M 3 o r I E C t M j M 2 o Z E P w l 1 9 e J c 1 q x b + s V B 8 u y r W b P I 4 i n M A p n I M P V 1 C D O 6 h D A w g I e I Z X e H O U 8 + K 8 O x + L 1 o K T z x z D H z i f P 8 3 U k G o = < / l a t e x i t > 'J < l a t e x i t s h a 1 _ b a s e 6 4 = " z P + y f 2 K S V + c w 7 d X b H U l o w f v X I v c = " > A A A B 7 X i c b V B N S w M x E J 3 U r 1 q / q h 6 9 B I v g q e w W U Y 9 F L x 4 r 2 A 9 o l 5 J N s 2 1 s N l m S r F C W / g c v H h T x 6 v / x 5 r 8 x b f e g r Q 8 G H u / N M D M v T A Q 3 1 v O + U W F t f W N z q 7 h d 2 t n d 2 z 8 o H x 6 1 j E o 1 Z U 2 q h N K d k B g m u G R N y 6 1 g n U Q z E 
o e C t c P x 7 c x v P z F t u J I P d p K w I C Z D y S N O i X V S q 5 e M e L / W L 1 e 8 q j c H X i V + T i q Q o 9 E v f / U G i q Y x k 5 Y K Y k z X 9 x I b Z E R b T g W b l n q p Y Q m h Y z J k X U c l i Z k J s v m 1 U 3 z m l A G O l H Y l L Z 6 r v y c y E h s z i U P X G R M 7 M s v e T P z P 6 6 Y 2 u g 4 y L p P U M k k X i 6 J U Y K v w 7 H U 8 4 J p R K y a O E K q 5 u x X T E d G E W h d Q y Y X g L 7 + 8 S l q 1 q n 9 Z r d 1 f V O o 3 e R x F O I F T O A c f r q A O d 9 C A J l B 4 h G d 4 h T e k 0 A t 6 R x + L 1 g L K Z 4 7 h D 9 D n D z 8 U j u s = < / l a t e x i t > 2 < l a t e x i t s h a 1 _ b a s e 6 4 = " u 0 N X d S J o K A g 7 n f Y E C V v i c r m c D Z E = " > A A A B 7 X i c b V B N S w M x E J 3 1 s 9 a v q k c v w S J 4 K r t V 1 G P R i 8 c K 9 g P a p W T T b B u b T U K S F c r S / + D F g y J e / T / e / D e m 7 R 6 0 9 c H A 4 7 0 Z Z u Z F i j N j f f / b W 1 l d W 9 / Y L G w V t 3 d 2 9 / Z L B 4 d N I 1 N N a I N I L n U 7 w o Z y J m j D M s t p W 2 m K k 4 j T V j S 6 n f q t J 6 o N k + L B j h U N E z w Q L G Y E W y c 1 u 2 r I e u e 9 U t m v + D O g Z R L k p A w 5 6 r 3 S V 7 c v S Z p Q Y Q n H x n Q C X 9 k w w 9 o y w u m k 2 E 0 N V Z i M 8 I B 2 H B U 4 o S b M Z t d O 0 K l T + i i W 2 p W w a K b + n s h w Y s w 4 i V x n g u 3 Q L H p T 8 T + v k 9 r 4 O s y Y U K m l g s w X x S l H V q L p 6 6 j P N C W W j x 3 B R D N 3 K y J D r D G x L q C i C y F Y f H m Z N K u V 4 L J S v b 8 o 1 2 7 y O A p w D C d w B g F c Q Q 3 u o A 4 N I P A I z / A K b 5 7 0 X r x 3 7 2 P e u u L l M 0 f w B 9 7 n D 0 C Y j u w = < / l a t e x i t > Consistency of (partially)-massless matter couplings In section 2 we saw that at the three-point level the Ward-Takahashi identities constrain the masses of scalars that can interact with a (partially)-massless field. The coupling constant, however, is not constrained by such a three-point analysis and for this one must go to four-points. 
In particular, considering the four-point function of three scalars φ_i with a single (partially-)conserved operator, mediated by the tree-level exchange of a scalar field φ_0, in the following we explore how the Ward-Takahashi identity can be used to constrain the coupling g^{(J,r)}_{i0} of φ_0 and one other scalar φ_i with a spin-J partially massless field of depth r. See figure 2. Previous works on momentum-space four-point functions of (partially)-massless fields in (A)dS include [24, 25, 31, 60, 68-76].

The full four-point function (4.1) is the sum of the s-, t- and u-channel contributions. The helicity decompositions of the t- and u-channel exchanges follow similarly: the t-channel expressions follow from the s-channel ones above through the interchanges s_2 ↔ s_3, k_2 ↔ k_3 and ν_2 ↔ ν_3, and the u-channel expressions from s_2 ↔ s_4, k_2 ↔ k_4 and ν_2 ↔ ν_4.

Let us now suppose that the spin-J operator O_{ν_1,J} is partially conserved (2.32). In order for the four-point function (4.1) to be consistent, as for the three-point functions involving a partially conserved operator considered in section 2.3, the components with helicity m = 0, ..., J − 1 − r must not contain bulk quartic contact terms, meaning that there are no singularities in the four-point total energy variable E_T = k_1 + k_2 + k_3 + k_4 as E_T → 0. Such bulk quartic contact terms are generated when the factor of w cancels the simple poles at w = 0 in (4.3a) and (4.3b); the resulting terms have a Mellin-Barnes representation (cf. (3.11)) which can be evaluated using (3.15). There are then two possibilities for restoring consistency:

1. Adding quartic contact interactions involving the three external scalars and the (partially-)massless spin-J field to the Lagrangian, so as to cancel these terms by hand.

JHEP10(2021)156

2.
Fixing the cubic coupling g^{(J,r)}_{ij} of the (partially-)massless field to scalars φ_i and φ_j so that the offending bulk contact singularities cancel among themselves.

In the language of the Mellin-Barnes representation it is straightforward to see that the first possibility cannot work, at least assuming locality of the quartic contact interactions. In particular, quartic contact terms whose tensorial structure is given by a power of (ζ_1 · k_{2s}) as in (4.9) can be reinterpreted as improvements p_impr.(s_1, s_2, s_3) in the cubic vertices that mediate the scalar exchange in the s-channel (and similarly for the t- and u-channels). [27] These, however, cannot cancel all of the bulk quartic contact terms generated by p_W-T(s_1, s_2, s_3) in (4.7), which encodes the three-point Ward-Takahashi identity; otherwise the identity could be violated by improvements. See the analysis in section 2.4. The four-point Ward-Takahashi identity can therefore only be restored by constraining the cubic coupling g^{(J,r)}_{ij} of the (partially-)massless field with two scalars. This is studied more rigorously in the following sections.

Footnote 27: Contact terms that have a mixed tensorial structure involving, in addition to (ζ_1 · k_{2s}), also (ζ_1 · k_3) and/or (ζ_1 · k_4), cannot be written as improvements in scalar exchanges, only as improvements in exchanges of spinning fields.

Coupling massless spinning fields to scalar matter

At the four-point level, for a massless spin-J field the Ward-Takahashi identity requires the cancellation of the bulk quartic contact terms (4.9) in the helicity m = 0, ..., J − 1 components of the exchange (4.1). This constrains the coupling g^{(J,0)}_{i0} of generic scalar fields φ_0 and φ_i of equal mass to a massless field of spin J in (A)dS_{d+1}. For the helicity-(J − 1) component we have (from (2.40b) with ν_1 = ν_2) an expression which is linear in u. Using (4.9), the corresponding bulk quartic contact term (4.15) comes dressed by the factor (4.14). [28]
Footnote 28: If instead we were considering the same exchange process in AdS_{d+1}, we would obtain the same result for the helicity-(J − 1) component but without the factor (4.14). In [20] it was shown that the bulk contact terms of in-in four-point functions in dS_{d+1} differ from those in AdS_{d+1} precisely by the factor (4.14). The conclusions that we draw therefore hold also for AdS_{d+1}.

As argued at the end of the last section, this bulk contact term cannot be cancelled by adding improvement terms to the cubic vertices that mediate the exchange. In more detail, the Mellin-Barnes representation of the above bulk contact term is proportional to the Dirac delta function (4.16). Such a contact term could only be compensated by an improvement (3.13) that is linear in u, see section 3.2. As discussed in section 2.4, the improvement itself cannot be chosen arbitrarily and is constrained to vanish for the values (2.46) [29] of s_1, s_2 and u, so as to give a vanishing boundary term in the corresponding three-point function. This can be achieved in various ways. In particular, in this case, for the helicity-(J − 1) component we have n_1 = n_2 = n_3 = 0 in (2.46). Therefore, to give a vanishing boundary term, such an improvement is restricted to take one of the forms (4.17), where q^{impr.}_1 and q^{impr.}_2 are polynomials in s_1 and s_2. Each of these forms is linear in u, so that they generate the Dirac delta function (4.16). Neither of them, however, can cancel the offending contact term (4.15): the latter is a constant multiplying the Dirac delta function (4.16), while the above improvements dress the Dirac delta function (4.16) with a polynomial in s_1 and s_2 of at least degree 1. In other words, from the analysis of section 3.2, the improvements (4.17) generate contact terms with a degree of singularity in E_T that is higher than that of (4.15).
The offending contact term (4.15) therefore cannot be cancelled by a finite number of local quartic contact terms and must vanish by itself. In particular, assuming that the factor (4.14) is non-vanishing, which we can do for generic d or generic scaling dimensions ν_i, the bulk quartic contact terms (4.15) can only vanish if the constraint (4.18) is satisfied. We emphasise that this result assumes locality of interactions, as in Weinberg's flat-space analysis [33]. [30] This is complementary to the result of [78], which showed that Ward identities of an underlying global higher-spin symmetry require quartic interactions that are as non-local as exchanges if consistent interactions of higher-spin gauge fields are to exist in AdS_{d+1}. It is clear that, if we allow ourselves to add quartic contact interactions that are as non-local as the exchange, the obstruction (4.18), which is itself generated by the exchange, can in principle be cancelled. For de Sitter space, strictly speaking our analysis does not cover the values of d and scaling dimensions ν_j that give a vanishing sine factor (4.14), in which case the Ward-Takahashi identity is satisfied without any constraint on the couplings g^{(J,0)}_{i0}. This vanishing of the sine factor (4.14) is actually a consequence of unitarity in dS [26]. For completeness it should be clarified whether this could allow for non-trivial couplings of massless higher-spin fields to scalars of certain masses in de Sitter space, though it would be unexpected. We expect this possibility to be ruled out by a similar analysis at the level of the wavefunction, where the exchange can be obtained from the AdS result by a simple Wick rotation [45]; the dS couplings would then be constrained just as in the AdS case, which is covered by our analysis above simply upon dividing by the factor (4.14). The same statements apply to the partially-massless case (4.23) considered in the next section.
Coupling partially-massless spinning fields to scalar matter

In a similar fashion the couplings g^{(J,r)}_{ij} of a spin-J partially massless field of depth r can be constrained by requiring that the bulk contact terms all cancel in the helicity m = 0, ..., J − 1 − r components of the exchange (4.1). In the following we focus on partially massless fields of depth r = 2, since this is the lowest depth at which there exist matter couplings to generic equal-mass scalars, [31] where in the following we take ν_2 = ν_3 = ν_4 = µ. Consistency of depth-2 partially massless couplings requires that there are no bulk contact terms from the helicity-(J − 3) component downwards.

Footnote 30: By now it is well known that this conclusion of Weinberg's result for higher spins J > 2 in flat space does not hold if one allows quartic contact interactions that are as non-local as the exchange amplitude [77]. The same has been shown to be true in AdS_{d+1} [78]. Allowing such non-localities would, however, render the field theory ill-defined in the absence of a guiding principle to replace space-time locality, see e.g. [79].

Footnote 31: The exception is the coupling of a depth-1 partially massless field to conformally coupled scalars.

The three-point functions of depth-2 partially massless fields were studied earlier; in this case the relevant Ward-Takahashi polynomial reads

p_W-T(s_1, s_2, u) = −(i/2)(µ + 2is_2)(µ + 2iu)(µ + 2i(u + 1))(µ + 2i(u + 2)), (4.22)

which is a degree-4 polynomial. Using the analysis of section 3.2, the bulk contact term contribution to the helicity-(J − 3) component of the exchange then has the form (4.23), where g^{(J,2)}_{i0|j}(s_1, s_i), i = 2, 3, 4, are polynomials of at most degree 3 − j in s_1 and s_i. For instance, for the j = 0 contribution, evaluating (4.9) using (3.15) gives (4.24), which is degree 3 in s_1 and s_2. We can then ask whether a contact term of this form can be cancelled by adding improvement terms to the cubic vertex.
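As a quick symbolic cross-check of the degree counting for (4.22), one can verify that the polynomial has total degree 4 in (s_2, u) but only degree 3 in u. The following sympy sketch is our own illustration and not part of the paper:

```python
import sympy as sp

s2, u, mu = sp.symbols('s_2 u mu')

# The polynomial quoted as (4.22) in the text
p = -sp.I / 2 * (mu + 2*sp.I*s2) * (mu + 2*sp.I*u) \
    * (mu + 2*sp.I*(u + 1)) * (mu + 2*sp.I*(u + 2))

total_deg = sp.Poly(sp.expand(p), s2, u).total_degree()  # degree in (s_2, u)
deg_u = sp.degree(sp.expand(p), u)                       # degree in u alone
```

Since only three of the four factors involve u, the degree in u alone is 3, which is why the index j in (4.23) runs over fewer values than the total degree would suggest.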
It is sufficient to focus on those improvements which could cancel the contact term with j = 0, which, as we saw in the previous section, can only be linear in u. [32] In the case of a partially massless field, the improvement p_impr.(s_1, s_2, u) must vanish on more values of s_1, s_2 and u than in the massless case considered above, which restricts it to one of the forms (4.26a)-(4.26c), where q^{impr.}_2 and q^{impr.}_3 are polynomials in s_1 and s_2. Note that improvements of the form (4.26c) are polynomials of at least degree 4 in s_2 and are therefore not useful for cancelling the contact term (4.24), which is degree 3 in s_1 and s_2. The other two possible forms of improvement, (4.26a) and (4.26b), are of at least degree 2 and degree 3 respectively, but both have zeros at s_1 = (x − 4)/4. It is straightforward to check that the contact term (4.24) does not have a zero at s_1 = (x − 4)/4, and therefore no combination of (4.26a) and (4.26b) can be chosen to cancel it. The contact term must therefore vanish by itself, giving the constraint (4.27). The difference between this constraint and the constraint (4.18) for the massless case is that here each tensorial structure is multiplied by a function whose vanishing requires g^{(J,2)}_{i0} = 0 for all spins J, [33] which in turn implies that there is no consistent cubic coupling of a depth-2 partially-massless field to scalar matter. As for the massless case in the previous section, this assumes locality of the quartic interactions. As a further confirmation of this result, in the following we recover it (and those of the previous section) using an alternative approach for conformally coupled scalars in d = 3.

Footnote 32: The reason j runs from 0 to 1 and not from 0 to 2 is that the polynomial p_W-T(s_1, s_2, u), although of total degree 4, is only degree 3 in u.

Special case: conformally coupled scalars

In section 2.5 we saw that three-point functions of conformally coupled scalars have simple explicit expressions that do not involve Mellin-Barnes integrals.
For the four-point exchanges, this implies that the representation given by (4.3a), (4.3b) and (4.4) can be reduced to a simpler form when the scalars are conformally coupled, upon evaluating the integrals in s_{1,2,3,4}, u and ū. If we also replace the spin-J field with a conformally coupled scalar, these read as in section 4.6 of [21]. The first term gives the four-point Ward-Takahashi identity. The term on the second line has a singularity in E_T = k_1 + k_2 + k_3 + k_4 and so is a bulk quartic contact term that violates the Ward-Takahashi identity. This singularity is, however, logarithmic, i.e. proportional to log E_T, which cannot be generated by a local quartic vertex involving a single massless spin-1 field and three conformally coupled scalars. This can be understood from the following simple argument: [34] The singularity of the contact diagram (3.17) generated by the φ^4 interaction, where φ is a conformally coupled scalar, is a simple pole in E_T for d = 3. Derivatives only increase the order of the singularity in E_T. Therefore, for contact diagrams of conformally coupled scalars the lowest-order singularity is a simple pole and so, in particular, they cannot contain log E_T terms. Now, for d = 3, conformally coupled scalars are in the same higher-spin multiplet as massless spinning fields [80], and their contact diagrams are therefore related by higher-spin symmetry. Contact diagrams involving a massless spinning field and three conformally coupled scalars therefore cannot contain singularities in E_T that are of lower order than those of four conformally coupled scalars.
[35] The log E_T singularity in (4.37) must therefore cancel upon summing the s-, t- and u-channel exchanges, giving the charge-conservation constraint.

[Helicity decomposition of the massless spin-2 exchange: terms proportional to (ε_1 · k_2)^2 and (ε_1 · k_2)(ε_1 · k_1), multiplying rational functions of k_s, k_12 and k_34 together with log((k_34 + k_s)/E_T) and 1/E_T singularities.]

The helicity-2 component was given in [24], which matches the first line of the expression above. For the helicity-1 component we have (4.40).

Footnote 34: We have also checked this explicitly by extracting the explicit contact terms generated by the lowest-derivative admissible improvements (4.17), which have q^{impr.}_1 = const., and evaluating the integrals in s_j, confirming that there is no log E_T singularity.

Footnote 35: The higher-spin symmetry transformation can be realised as derivative operators, an example of which is the operator (2.30).

Like for the massless spin-1 case above, the first term in the square brackets is proportional to the three-point function of conformally coupled scalars and so gives the corresponding four-point Ward-Takahashi identity. In addition to this we have singularities in E_T which violate the Ward-Takahashi identity. One of these is a simple pole in E_T, which can in principle be compensated by adding a local quartic contact interaction. The other is log E_T which, as for the massless spin-1 case, cannot be compensated and must therefore vanish upon summing the s-, t- and u-channel exchanges, giving

g^{(2,0)}_{20} (ζ_1 · k_2) + g^{(2,0)}_{30} (ζ_1 · k_3) + g^{(2,0)}_{40} (ζ_1 · k_4) = 0.

Such a contact diagram can be generated by acting J times with the differential operator (2.24) on the four-point contact diagram (3.17) of conformally coupled scalars, with x in (3.17) shifted by x → x − 2J. For d = 3, by carefully expanding (3.17), one obtains an expression which in particular contains the non-analytic term E_T log E_T.
Upon acting J times with the differential operator (2.24), this term is responsible for log E_T singularities in the exchange for external massless spin-J, for all J. To illustrate, for external massless spin-1 the helicity-0 component is obtained by acting once with (2.24) on (4.45); for massless spin-2, acting twice with (2.24) on (4.45) gives the helicity-1 component; and so on for higher spin J, which is obtained by acting J times with the operator (2.24), where each application generates a log E_T singularity as demonstrated above. As we saw in the above examples, these singularities must cancel upon summing the s-, t- and u-channels, giving rise to a constraint for general spin J which once again recovers charge conservation (4.19) and the equivalence principle (4.20) for J = 1 and J = 2, while for J > 2 it implies that there can be no consistent coupling of massless higher-spin fields to scalar matter (in local theories).

For the partially-massless case, the relevant quartic contact diagram [38] does not have a log E_T singularity, and diagrams generated by derivative vertices can only increase the order of the singularity in E_T. We can therefore conclude that the log E_T singularity in the helicity-(J − 3) component of the exchange cannot be compensated by adding a local quartic vertex to the Lagrangian. It must therefore vanish by itself upon summing the s-, t- and u-channels, which gives a constraint recovering the result of section 4.2, i.e. that there can be no consistent coupling of a depth-2 partially-massless field of any spin J to scalar matter (in local theories).
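The content of the charge-conservation constraint (4.19) can also be seen numerically: demanding g_{20}(ζ_1 · k_2) + g_{30}(ζ_1 · k_3) + g_{40}(ζ_1 · k_4) = 0 on momentum-conserving configurations with a transverse polarisation, ζ_1 · k_1 = 0, leaves equal couplings as the only solution. A quick numerical illustration of this linear-algebra fact (ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample momentum configurations k_1..k_4 with k_1+k_2+k_3+k_4 = 0 and a
# polarisation zeta with zeta . k_1 = 0; each configuration gives one
# linear condition on the couplings (g_20, g_30, g_40).
rows = []
for _ in range(50):
    k = rng.normal(size=(4, 3))
    k[0] = -(k[1] + k[2] + k[3])                  # momentum conservation
    zeta = rng.normal(size=3)
    zeta -= (zeta @ k[0]) / (k[0] @ k[0]) * k[0]  # transversality zeta.k_1 = 0
    rows.append([zeta @ k[1], zeta @ k[2], zeta @ k[3]])

A = np.array(rows)
_, s, vt = np.linalg.svd(A)
null_dim = int(np.sum(s < 1e-10 * s[0]))  # dimension of allowed coupling space
g = vt[-1]                                # the surviving coupling direction
# null_dim == 1 and g is proportional to (1, 1, 1): all couplings are equal
```

Since ζ_1 · (k_2 + k_3 + k_4) = −ζ_1 · k_1 = 0, the vector (1, 1, 1) always solves the constraint, and generic kinematics rule out everything else; this is the sense in which the exchange forces universal (charge-conserving) couplings.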
Video Analysis in Golf Coaching: An Insider Perspective

Golf has evolved from a recreational pursuit to a professional sport which attracts huge sponsorships and endorsements. With this development, greater demands have been placed on golf coaches to produce golfers with high levels of skill and competence. The technology used in golf coaching has also evolved, with video analysis becoming an integral part of golf coaching. The purpose of this study was to elicit golf coaches' perspectives on the use of video analysis in golf coaching. A qualitative research approach using semi-structured interviews was used to collect data from a purposive sample of eight (n=8) golf coaches. Interviews were recorded and transcribed verbatim. Arising from an in-depth literature review, an interview schedule was developed to collect data on the following research issues associated with using video analysis in golf coaching: a) positive experiences, b) negative experiences, c) player development, d) attitude towards video analysis, e) barriers and constraints to using video analysis, f) focus of video analysis, and g) alternatives to video analysis. The data were content-analysed to reduce them to meaningful information units. The findings of the study revealed that the positive aspects outnumbered the negative aspects associated with the use of video analysis in golf coaching. Among the positive aspects were improved stance, balance and swing, while the negative aspects were mainly concerned with the high cost of the technology. Given the many positive aspects associated with using video analysis in golf coaching, and the fact that coaches continuously strive to get the best performance from their athletes, the use of video analysis in golf coaching appears to be a positive step towards achieving this goal.
Introduction

The advent of advanced technology has created the opportunity for sport coaches to access accurate information and select appropriate tools or equipment to incorporate in their training programmes. Van Staden (2014) argues that the increased improvement in sport is not a result of human beings evolving faster than other mammals, but of more accurate information being at one's fingertips, allowing better choices. Assessment in sport may be invasive or non-invasive, with the latter increasingly being used due to advanced technology. Invasive assessments are often very expensive and frequently hinder the training programmes of athletes. Technology also allows coaches to make better training choices in terms of techniques, intensity and frequency.

The game of golf has evolved from a recreational pursuit to a professional sport that attracts lucrative endorsement opportunities for promising golfers, and high sponsorship for national and international golf tournaments. Accompanying this evolution were large-scale changes in the approach to golf, the equipment used in golf, and golf coaching. While most individuals still play golf for recreation, the game has produced highly paid professionals such as Tiger Woods, Gary Player, Ernie Els and Jack Nicklaus, to name a few, who made a career of the sport. In the early days, the sport was played with wooden clubs and required little or no coaching. With the passage of time, the attractive and increasing prize money for tournaments began capturing the attention of players of both sexes. With higher participation and spectatorship, competition soared. For players to enter and remain in competitions and tournaments, they needed the requisite skill and competence. Hence, it became necessary for them to receive professional coaching. Golf coaching does not just involve learning technical skills. In addition to technical skills, players need an awareness of their ability to play the sport and a positive mental
approach to the sport (Cromack, 2011). The game was initially played with wooden sticks. However, the introduction of the steel shaft was not only cost-effective but also provided more comfort. Players were able to choose a specific shaft (medium, regular or stiff) to match their swing. Graphite shafts, especially, have become very popular, as players with slow swing speeds could now get more distance from the same swing. Lane (2007) comments that when golf was initially played in the 13th century, it was played by teams of players who would alternate hitting the ball. The only strategy and technique required was devising the most efficient means of hitting the ball into an area from which the opposing team would find it difficult to play. In the next few centuries, the production of golf balls was both laborious and expensive (Lane, 2007). Therefore, the playing of golf was restricted to small areas, and individuals learnt from imitating good players. In the early 1900s, legitimate tournaments such as the British Open and the US Open emerged as attractive tournaments to play in. The difficulty level of the tournaments increased with the introduction of different golf courses and different equipment. The real boom in golf began in the 1950s, when young amateurs forsook their careers to pursue golf because of the high prize money and endorsement income. Golf has since found a home in print and broadcast media, which provided instruction and marketed the sport. Golf schools and offerings of golf courses at institutions of higher education started mushrooming in the 1980s. It was during this time that specialised instruction in the sport emerged. Video analysis of the golf swing was one such development.
Video Analysis in Golf Coaching

Unlike many sports where coaching is a face-to-face interaction, social networks and improved technology have made it possible for golfers to get coaching tips at times that are most convenient to them. For example, social networks and video analysis, as methods of golf coaching used to analyse a golf swing, can operate across continents.

Regardless of the sport, proper feedback from coaches is the most important thing. A golf coach is able to get improved attention, response and performance from students if the students have visual images of what they are doing wrong. Video analysis using V1 Pro, for example, works wonders as a coaching aid, and a much quicker improvement in players can be observed, regardless of their age. V1 Pro is a versatile programme which is able to record videos, has user-friendly tools, can be played back in slow motion, and provides multiple views of students' movements. Video clips of students' swings and stances can be compared with those of professional golfers. An important positive aspect of V1 Pro is that information can be emailed to students, who can download it and use it at their leisure. Another programme that professional golf coaches use is Powerchalk. This is similar to V1 Pro and also provides coaches with accurate feedback, as well as many model swings from professional players. This technology, however, is less user-friendly than V1 Pro. Many of the world's top golf coaches have included video analysis in their coaching; for example, Butch Harmon, nominated by his peers as one of America's greatest golf teachers for 12 straight years (Harmon, 2014), uses advanced video equipment to help his players achieve their potential in the sport. Other renowned coaches, such as Sean Foley (coach of major champion Tiger Woods) and Jim McLean (whose Jim McLean Golf Centre in Texas is regarded as one of the leading golf schools in the United States), also use video analysis extensively to get
the best out of their players (McLean, 2014). Video analysis is a technique used to obtain information about moving objects from video (Wikipedia, 2014). Unlike in the past, when coaches made judgements based on their own experience, knowledge and subjectivity, video analysis accurately measures spine angles, arm flex and general posture positions, thereby removing the guesswork associated with traditional coaching methods (Anon, 2008). Video analysis involves the use of high-speed video cameras, biomechanics vests, force plates, radar tracking units and devices using infra-red and ultrasound technology to accurately monitor and record the movement of players. Instead of providing one-dimensional feedback, the biomechanics included in video analysis provide multiple dimensions of movement. This is done by attaching sensors, usually at different points on a vest or waistcoat, which allow for the detection and analysis of movements. Different video analysis systems may include other components, for example a force plate or balance plate, which provides information on where a player's weight is during a swing sequence. Depending on the coaching needs and the available resources (inter alia financial), video analysis provides many possibilities.
There are many benefits associated with the use of video analysis; among these are reduced injuries (McCarron, 2013). Video analysis makes it possible to detect weaknesses in the golf swing which may lead to future injury; such weaknesses can be corrected, and injury prevented, at an early stage. This type of analysis is hardly possible using traditional coaching methods. Most golf injuries arise through incorrect technique (Johnson, 2011), because amateurs and less competent players try to imitate professionals. Every good golf swing analysis system comes with an extensive library of both male and female professionals to use as swing templates (Anon, 2008). With such information, athletes can discover weaknesses in their own techniques and easily correct them, thereby also improving their performance. Another benefit of video analysis is scouting at sports institutions. Universities and schools have developed scouting into one of the most important aspects of their success. Using video analysis, a coach can analyse an athlete in enough detail to determine whether he/she fits what the coach is looking for, and can present concrete evidence of why the player will be valuable to the team. This prevents expensive recruitment and selection decisions being made on naked-eye observations alone. Another important benefit is that feedback need not necessarily be face-to-face with the athlete. The feedback can be emailed to the athlete, who can download the information and start working on his/her technique at a time most suitable to him/her.
However, there are constraints associated with the use of video analysis in golf coaching; one of the major drawbacks is the high cost. Video equipment is very expensive, and technology is advancing so rapidly that a newly purchased expensive system may become obsolete in a short time, before even realising a fair return on the investment. Video analysis is also a time-consuming process for the coach, because it takes up many hours outside the normal coaching sessions to analyse all the data and provide detailed feedback to the players. Video analysis would be meaningless if the data are not applied correctly to improving the performance of the athlete. Very often, coaches using technology can become too technical and confuse players or inhibit their natural ability. Thus, a coach needs to know when to step away from the technology. If the technology is used incorrectly, it can result in incorrect data being captured. For example, if the video equipment is not directly behind the player, the coach will not see the correct angle and alignment needed to provide proper feedback.

Purpose of the Study

The purpose of the study was to elicit insider perspectives on video analysis in golf coaching. Insiders, in the context of this study, are professional golf coaches who use video analysis in their coaching.
Methodology

A comprehensive literature review of video analysis was conducted. Given the uniqueness of the phenomenon being researched, the authors deemed a qualitative research approach appropriate for the study. Patton (2002) posits that this approach provides the opportunity to explore the real-life world of respondents and obtain an insider perspective of the phenomenon being researched. It evokes meaningful responses that are applicable to the experiences of the respondent. In this approach, data are collected in the respondent's own words, rather than limiting respondents to selecting from the fixed responses required in a quantitative approach (Sooful, Surujlal & Dhurup, 2010). Semi-structured face-to-face interviews were conducted with five professional golf coaches. Furthermore, three additional telephonic interviews were conducted with those professional golf coaches who were not geographically accessible to the researchers. All respondents were recruited through purposive sampling. Dane (1990) argues that purposive sampling allows the researcher to deliberately select people or events which, on good grounds, are believed to be critical for the study. Professional golf coaches who had used video analysis on a regular basis for at least one year to provide feedback to their athletes were approached and requested to participate in the study. The principal researcher, who is himself a professional golf coach affiliated to the Professional Golfers' Association (PGA), accessed the contact details of potential participants from the database of affiliated coaches. Of those coaches who used video analysis in their coaching, eight accepted the request to participate in the study.
Instrument and procedures

The literature review contributed to the development of an interview schedule. Two researchers with experience in qualitative research and knowledge of the game of golf assessed the content of the interview schedule. Based on their recommendations, modifications were made to the interview schedule. A pilot test of the instrument was conducted to determine the most logical order of questions, as well as the approximate duration of the interview. The following research issues were explored in the study: a) positive experiences, b) negative experiences, c) player development, d) attitude towards video analysis, e) barriers and constraints to using video analysis, f) focus of video analysis, and g) alternatives to video analysis. The face-to-face interviews were of approximately 25 minutes duration, while the telephonic interviews were of approximately 15 minutes duration. In both instances, the interviews were recorded and transcribed verbatim. A cellular phone was used to record the telephonic interviews, which were subsequently downloaded onto a computer. Probes were used to follow up on issues that needed clarity. Core ethical issues such as respect, honesty, confidentiality and anonymity were adhered to during the study.

Mediterranean Journal of Social Sciences, ISSN 2039-9340 (print)

Data analysis

The data were content-analysed using an iterative and recursive process (Miles & Huberman, 1994) to reduce them to more relevant and manageable information units (Weber, 1990). This was done by reading and rereading the transcripts in order to become familiar with the data.
Reliability and trustworthiness

According to Patton (2002), the researcher is the instrument for data collection and analysis in qualitative research; therefore, his/her experience is important for the credibility of the data. The primary author, having extensive background experience in coaching golf and good contextual knowledge about the sport and the terminology associated with it, conducted all interviews. This also helped to build rapport with the participants (Eklund, 1993). The credibility of the findings of the study was established in the following ways: first, the responses of the participants were recorded and transcribed verbatim; second, the data were analysed independently by both researchers; and third, an independent researcher examined the findings in conjunction with the recordings and transcripts.

Results and Discussion

All participants were positively inclined towards the use of video analysis in golf coaching. In terms of their usage of video analysis, all participants in the study used video analysis in at least five out of seven coaching sessions.
Positive experiences

Analysis of the data revealed that the positive aspects associated with using video analysis outnumbered the negative aspects. Participants in the study indicated that video analysis enabled them to respond more swiftly to golf players' areas of weakness. This finding echoes Magowen's (2014) assertion that feedback from video analysis enables the golfer to make quicker changes to his/her technique. It also facilitates a clearer understanding of stroke/swing and movement patterns, and allows comparisons with professional players and players of similar ability. According to Magowen (2014), coaches are also able to pinpoint faults more quickly and take appropriate measures to correct faults associated with the sport. Participants also perceived that players had a better and quicker understanding of their golf swing and posture, and of what measures were needed to improve, because they could see themselves on video. One of the most pronounced positive aspects, according to the participants, was that they were able to identify minor details in a player's technique. The benefits of video golf swing analysis alone are substantial and improve a coach's teaching ability. The ability to put precise angle measurements to spine, arm flex and general posture positions eliminates the guesswork from coaching and contributes greatly to improved playing styles (Anon, 2008).
Negative experiences

Although the participants reported fewer negative aspects, it is important to take note of them. A significant negative aspect, according to the participants, was that the inclusion of video technology in their coaching resulted in a tendency to deviate from their normal style of coaching and adopt a very technical approach. Instead of being natural in their coaching, they tended to become over-reliant on the technology, and sound pedagogical principles of coaching were ignored. This sometimes had an influence on their athletes, who tended to lose the natural approach to playing golf, got confused by too much information at their disposal, and began to doubt their own ability because their faults were exposed through this technique. Another important negative experience mentioned by participants was the athletes' desire to emulate other players. Previous research on video analysis (Rothstein, 1976) suggested that video analysis could be an ineffective means of presenting knowledge of performance.

Player development

Participants expressed that video analysis contributed greatly to player development. Players could now rely on both their coaches and video analysis, which could be viewed both during and outside practice sessions. Outside official practice sessions, video analysis assisted players to improve and master techniques in the absence of the coach.

Conclusion

The purpose of the study was to gain an insider perspective of video analysis in golf coaching. The findings indicate that the use of video analysis is popular among professional golf coaches. Given the many positive aspects associated with using video analysis in golf coaching, and the fact that coaches continuously strive to get the best performance from their athletes, the use of video analysis in golf coaching appears to be a positive step towards achieving this goal.
Impact of Internal Branding (IB), Brand Commitment (BC) and Brand Supporting Behavior (BSB) on Organizational Attractiveness (OA) and Firm Performance (FP)

This paper explores the various dimensions of internal branding, namely training, orientation and briefing, and their impact on brand commitment and brand supporting behaviour (brand allegiance, brand endorsement and brand citizenship behaviour). The study shows that the training, orientation and briefing dimensions of internal branding (IB) do impact brand commitment (BC) and brand supporting behaviour (BSB). Internal branding impacts both organizational attractiveness and firm performance, while brand commitment impacts only organizational attractiveness. With respect to brand supporting behaviour, brand citizenship behaviour (BCB) and brand allegiance (BA) impact organizational attractiveness, while brand citizenship behaviour (BCB) and brand endorsement (BE) impact firm performance.

Introduction

Internal branding has recently been identified as an enabler of an organisation's success in delivering the brand promise to fulfil customers' brand expectations shaped by various communication activities (Drake et al., 2005). Many authors (Boone, 2000; Buss, 2002) have studied the steady growth of internal branding's popularity among companies such as Southwest, Sears, BASF, IBM and Ernst and Young. These examples show the power of an informed workforce committed to delivering the brand promise. Most of these studies focused on the perspective of management and consultants, while employees are considered the targeted internal audience of internal branding efforts. While some studies have provided empirical evidence for the association between internal branding and employees' brand commitment (Burmann and Zeplin, 2005), few have focused on the relationship between internal branding and employees' brand loyalty (Papasolomou and Vrontis, 2006a, b).
Being a member of an organisation with a strong employer brand increases employees' self-esteem and creates strong organisational identification (Lievens et al., 2007). Constant delivery of the brand promise creates more trust and loyalty, ensuring a steady supply of applicants (Holliday, 1997), and maintains high commitment and high performance in employees by ensuring the organisation's credibility (Burack et al., 1994). It attracts the right talent with the right culture fit and, at the same time, provides prospective employees with an assurance of the work experience they expect (Bhatnagar and Srivastava, 2008). Further, it helps in building a stronger psychological contract with employees so that they become brand advocates of the company. "On-brand behaviours" among employees should be encouraged by marketing and explaining the brand internally (Mitchell, 2002). There is a need to manage employee behaviour so that it is consistent with the company's desired brand positioning (Henkel et al., 2007). Thus, corporate brand or reputation management has been conceptualised as "living values internally and promoting those same values externally" (Davies et al., 2004). This study aims to understand internal branding from the employees' perspective; it empirically assesses the relationship between internal branding and employees' perceptions of organizational attractiveness and firm performance, as well as the relationships among different brand attitudes (i.e. brand commitment and brand supporting behaviour). To achieve these objectives, a quantitative survey of 350 employees from major IT companies was carried out.

Literature Review

a) Employee Brand and Employee Brand Equity

The employee brand has been defined as "the image presented to an organization's customers and other stakeholders through its employees" (Mangold and Miles, 2007: 77).
This image can be either negative or positive, and depends upon the extent to which employees know and understand the desired brand image and are motivated to project that image to organizational constituents. King and Grace (2009) introduced the idea of employee-based brand equity and postulated that it impacts consumer-based brand equity as well as financial-based brand equity. Two elements are critical if an employee brand is to provide a competitive advantage (Miles and Mangold, 2005). First, employees must know and understand the desired brand image; second, they must be motivated to engage in the behaviours necessary to deliver that image. The extent to which psychological contracts are upheld in employees' minds impacts their desire to deliver the organization's expected brand image (Mangold and Miles, 2007). The extent to which message systems send consistent messages determines the strength and nature of the employee brand (Greene et al., 1994; Mitchell, 2002; Robinson, 1996).

b) Internal Branding

According to Foster et al. (2010), the main focus of internal branding is on how the employees within an organization adopt the brand concept and live up to the promises that the brand should deliver to its external stakeholders. Thus, the aim is to teach and communicate the brand values to employees (Foster et al., 2010; Backhaus & Tikoo, 2004; Punjaisri et al., 2009). Punjaisri et al. (2009) say that internal branding can positively affect how employees identify with the brand. Mosley (2007) suggests that internal branding is about shaping the perceptions that employees have about the brand. It is suggested that there is a clear link between corporate branding and internal branding (Punjaisri & Wilson, 2011; Foster et al., 2010). According to Foster et al. (2010), internal branding can, along with employer branding, be seen as a development or extension of corporate branding. Kotler et al.
(2009), Knox & Freeman (2006) and de Chernatony & McDonald (2003) agree that everyone in an organization needs to "live the brand" to achieve complete success. According to Punjaisri et al. (2009), the quality of a service and the delivery of brand promises are ultimately dependent on the employees who come into direct contact with consumers. Maxwell & Knox (2009) argue that employer branding is an effective way of ensuring that employees' attitudes and behaviour are aligned with the corporate brand. Kimpakorn & Tocquer (2009) further suggest that when brand values are communicated well to employees, they are likely to become committed to the brand and behave in accordance with the organization's values. Thus, according to Foster et al. (2010), employer branding can help organizations attract the right employees, who possess values that match the corporate brand. Wallace and de Chernatony (2009) promote leadership as a condition for employees to live the brand, while Punjaisri et al. (2008) take a more functional approach, promoting internal communication and HR training as key mechanisms in the internal branding process. Burmann et al. (2009) incorporate internal communication, HR practices and leadership as determining factors of employee brand commitment. Internal brand management seeks to internalise the brand so that employees are more prepared to fulfil the explicit and implicit promises inherent in the brand (Berry, 2000; Miles and Mangold, 2004). According to King and Grace (2012), organizational socialization, relationship orientation and receptiveness are three important internal branding factors which affect brand commitment.
Corace (2007) believes that, ultimately, treating employees with respect and dignity is what will lead to distinct behaviours (i.e. brand citizenship behaviour). Strong relationships between the organisation and the employee are believed to be instrumental in increasing employee job motivation (Bell et al., 2004), as the organisation-employee relationship is considered by employees to be an important, if not the most important, aspect of the working environment (Herington et al., 2009).

c) Brand Commitment (BC)

As internal branding aims at forming a shared understanding of a brand across the entire organisation, recent studies have shown its positive influence on employees' brand commitment (Punjaisri and Wilson, 2007). Papasolomou and Vrontis (2006) have shown that internal branding influences employees' brand loyalty, or their willingness to remain with the brand (Reichheld, 1996). Organisational identification (OI) theory focuses on the cognitive approach, while organisational commitment (OC) theory focuses more on emotional connections (Edwards, 2005). Thus, OC is considered as staff's emotional attachment to the organisation (Meyer and Allen, 1991; Meyer et al., 2002). Furthermore, it is noted that brand identification leads to employees' brand commitment (Burmann and Zeplin, 2005; Cheney and Tompkins, 1987), and commitment is a key precursor to loyalty (Brown and Peterson, 1993; Pritchard et al., 1999; Reichers, 1985). If the employee perceives the relationship with the organisation to be positive and worthy of maintaining, then the employee has a high level of commitment to the organisation. Therefore, commitment is considered to be a key variable in determining organisational success (Morgan and Hunt, 1994), as employees' feeling of belonging influences their choice to go above and beyond the job in order to achieve the organisation's goals (Castro et al., 2005). Castro et al.
(2005) believe that commitment results in employees being willing to make extra effort on behalf of the organisation. Thus, Castro et al. (2005) suggest that the performance of employees within their work environment is a significant reason for organisational commitment. Hence, employees who are satisfied with their work environment tend to, or have a desire to, reciprocate (Wayne et al., 1997; Castro et al., 2005). Through their perception of fairness (Deluga, 1994) and support from the organization (Wayne et al., 1997), employees show behaviours that are beyond the formally expected requirements of their job (Deluga, 1994; Beckett-Camarata et al., 1998). Such behaviours, identified as brand citizenship behaviour, are employee behaviours which are non-prescribed or "above and beyond the norm", yet aligned with the brand values of the organisation, thus engendering positive organisational outcomes. Burmann et al. (2009) believe that the key determinants of brand strength as a result of internal brand management practices are brand commitment (BC) and brand citizenship behaviours (BCB). Brand commitment is the psychological attachment or the feeling of belonging an employee has towards an organisation.

d) Brand Supporting Behaviour (BSB)

The aim of internal brand management is to align individual employee behaviour with a desired brand identity (Tosti and Stotz, 2001). In conceptualising employee behavioural loyalty, a one-dimensional approach such as employee satisfaction, employee engagement or employee turnover is believed to lack holistic insight. For this reason, Zeithaml et al. (1996) believe that behavioural loyalty can manifest in many ways (for example, positive word of mouth, repeat patronage, greater spend).

International Journal of Human Resource Studies, ISSN 2162-3058, 2017
According to King and Grace (2012), indicators like employee satisfaction may be linked to behaviour, but they are either historically based or, at best, only a rough indicator of future positive and productive employee behaviour. A measure of employees' future-oriented thinking which shows their relationship with the brand, i.e. brand-related behaviour, is considered to provide a more robust organisational measure. Henkel et al. (2007, p. 311) conceptualise behavioural branding as 'any type of verbal and non-verbal employee behaviour that directly or indirectly determines brand experience and brand value'. Furthermore, Bloemer and Odekerken-Schröder (2006) identify several behavioural attributes that, holistically, provide the conceptual richness of employee loyalty. These include an employee's proactive external communication about the organisation's brand, as well as the employee's positive desire to maintain a working relationship with the brand in the future. In addition to retention and positive word of mouth, Morhart et al. (2009) identify participation and 'in-role', or brand-compliant, behaviour as appropriate measures of employee brand behaviour.

i) Brand Endorsement (BE)

Employee external promotion or communication of the brand to others is considered to be another important aspect of brand supporting behaviour. Brand endorsement can be defined as the extent to which an employee is willing to say positive things about the organization (brand) and to readily recommend the organisation (brand) to others. Shinnar et al. (2004, p. 273) support the idea that employees who hold a favourable image of their organisation are intrinsically motivated to provide positive external communication. Such employee activity does not benefit only the employee.
An employee's personal advocacy leads to positive organisational outcomes such as increased recruitment cost efficiencies (Morehart, 2001), greater employee performance (Kirnan et al., 1989) and greater pre-employment knowledge (Williams et al., 1993), which subsequently impacts organisational socialisation. Brand endorsement thus leads to significant organisational benefits as a result of employees having better brand knowledge.

ii) Brand Allegiance (BA)

Employee brand allegiance (or purchase intentions in a consumer context) is defined as the future intention of employees to continue with the organisation (brand). This intention is considered to be a crucial decision, given the significant economic impact of losing knowledgeable employees (Ramlall, 2004). It also helps in developing crucial human capital, whereby employees are considered to possess skills, experience and knowledge which create economic value for organisations through increased productivity (Snell and Dean, 1992). By retaining productive employees who consistently exhibit brand-related behaviours, service brand success is likely to be enhanced, because the service brand promise is consistently delivered in a cost-effective and efficient manner. According to Punjaisri and Wilson (2007), an employee's intention to stay with the organisation is reflective of their awareness of the need to live up to the brand standards. This future-orientated thinking is reflected in the theory of reasoned action, which suggests that the best predictor of future behaviour is the intention to act (Schiffman et al., 2001).

iii) Brand Citizenship Behaviour (BCB)

Employees who are satisfied with their work environment tend to exhibit behaviours that are beyond the requirements of their job (Beckett-Camarata et al., 1998).
Such behaviours, i.e. brand-consistent behaviour, can be defined as employee behaviour that is often non-prescribed, yet consistent with the brand values of the organisation (Burmann et al., 2009). The significance of brand-supporting behaviour is that it is discretionary (Castro et al., 2005), yet considered to be vital for organisational productivity (Deluga, 1994). Brand-consistent behaviour, or brand citizenship behaviour as coined by Burmann and Zeplin (2005), is considered to be 'the pivotal (behavioural) constituent for successful internal brand management' (Burmann et al., 2009, p. 266). Burmann and Zeplin (2005) believe there to be little difference between brand-related behaviour and organisation-related behaviour. They suggest a modified concept of organisational citizenship behaviour (OCB), namely brand citizenship behaviour (BCB). Burmann and Zeplin (2005) believe such a modification is needed given that OCB is considered to have an internal focus while BCB has an external focus.

e) Organizational Attractiveness

A closely related concept to 'employer branding' is the concept of 'employer attractiveness'. This concept has been widely studied in the areas of vocational behaviour (Soutar & Clarke, 1983), management (Gatewood et al., 1993), applied psychology (Jurgensen, 1978; Collins & Stevens, 2002), and communication and marketing (Ambler & Barrow, 1996; Gilly & Wolfinbarger, 1998; Ambler, 2000; Ewing et al., 2002).
Berthon et al. (2005) defined 'employer attractiveness' as the envisioned benefits that a potential employee sees in working for a specific organisation. Studies that have examined potential applicants' attraction in the initial recruitment stages have confirmed that organizational attraction is affected by applicants' perceptions of job or organizational characteristics such as pay, opportunities for advancement, location, career programmes, or organizational structure (Cable & Graham, 2000; Highhouse et al., 1999; Honeycutt & Rosen, 1997; Lievens, Decaesteker, Coetsier, & Geirnaert, 2001; Lievens & Highhouse, 2003; Turban & Keon, 1993). Many authors have suggested that decisions to apply to an organization often rely heavily on applicants' general impression of the company's overall attractiveness (e.g. Belt & Paolillo, 1982; Fombrun & Shanley, 1990; Rynes, 1991). Recommendation intentions are defined (Van Hoye, 2008) as the extent to which employees intend to recommend their organization as an employer to others. Thus, word of mouth, as a recruitment source, is a significant predictor of organizational attractiveness (e.g., Van Hoye, 2012) and has a stronger impact on post-hire outcomes such as job satisfaction, performance and the likelihood of quitting (Breaugh & Starke, 2000; Zottoli & Wanous, 2000). These positive consequences may be because word of mouth provides realistic and credible information (e.g., Cable & Turban, 2001). Having employees act as ambassadors might thus be valuable for organizations in order to attract, recruit and motivate potential and current employees. While research has studied the consequences of recommendation intentions, what motivates employees to provide favourable word of mouth has also been examined (Shinnar, Young, & Meana, 2004). Van Hoye (2008) has shown that a favourable image of an organization as an employer leads employees to recommend their organization.
Van Hoye (2008) found that enhancing employees' perceptions of employer image is an effective way to increase their likelihood of recommending their employer to others.

f) Firm Performance

According to Fulmer et al.'s (2003) study, the time and money spent to create and support positive employee relations turned out to be a worthwhile investment. As positive reputations tend to be stable and difficult to copy, they provide a unique and sustainable competitive advantage for companies (Robert & Dowling, 2002). Thus, despite the additional cost incurred to provide employee-friendly practices, the benefits more than compensate for the cost (Fulmer et al., 2003). Developing positive employee relations is no easy task, but firms that make continuous efforts and investments are unlikely to regret it in the near future (Romero, 2004). Through a content analysis of the websites of Fortune's 100 best companies, Joyce (2003) argued that "these companies are distinguished by employee development programs, diversity initiatives, and fun work environment" (p. 77). Companies included on Fortune's 100 best companies list have higher market values and better returns than matched firms not included on the list (Ballou et al., 2003; Fulmer et al., 2003). The market values of firms ranked in the top one third of the list were higher than those of firms ranked in the bottom one third (Ballou et al., 2003). Fulmer et al. (2003) showed that positive employee relations are beneficial for companies and may be related to improved performance (as measured by both accounting and market data: ROA and market-to-book value ratios). Being a best employer is a strong marketing or employer branding tool, signalling a favourable work environment publicly, which leads to the attraction and retention of talent (Joyce, 2003).

Research Methodology

The questionnaire was administered to 350 employees of IT companies, of whom 244 completed it.
The response rate was thus 69.71%. The companies were chosen on the basis of the NASSCOM top 20 IT-BPM employers in India 2014-15, so the sample included a mix of companies with very good, medium, and not-so-good reputations. Demographic details of the respondents with respect to age, gender, qualification, and work experience were collected. Questionnaire: Internal Branding (IB) was measured by the Punjaisri et al. (2009) scale, comprising training (4 items), orientation (4 items), briefing (2 items), and brand identification (5 items). Brand Commitment (BC) was measured by 5 items from the King and Grace (2012) scale. Brand Supporting Behaviour (BSB) was measured by King et al.'s (2012) 12-item scale, comprising (i) brand endorsement (BE), (ii) brand consistent behaviour (BCB), and (iii) brand allegiance (BA). Organizational attractiveness (OA) was measured by three items adapted from the measure of perceived organizational attractiveness proposed by Highhouse, Lievens, and Sinar (2003). Respondents rated these items on a 5-point rating scale, ranging from 1 = strongly disagree to 5 = strongly agree. Firm Performance (FP) was measured by a scale adopted from Chun (2001b). Results Demographic details of the respondents show that 42.2% are in the age group 20-25, 32.0% in 26-30, 7.4% in 31-35, 15.2% in 36-40, and 3.3% above 40 years. With respect to gender, 47.5% are female and 52.5% are male. With respect to designation, 35.7% are at the junior level, 46.7% at the middle level, and 17.6% at the senior level. With respect to total years of experience, 19.3% have above 10 years, 28.7% have between 5 and 10 years, 44.3% have between 1 and 5 years, 7.0% have less than 1 year, and 0.8% have no prior experience.
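As a quick arithmetic check, the response rate quoted above follows directly from the counts reported; a trivial sketch using only the figures stated in the text:

```python
# Response-rate arithmetic from the counts reported above:
# 244 completed questionnaires out of 350 administered.
administered = 350
completed = 244
response_rate = completed / administered * 100
print(f"{response_rate:.2f}%")  # 69.71%
```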
When asked how many years they had spent in their current organization, 14.3% had spent above 10 years, 19.7% between 7 and 9 years, 7.8% between 4 and 6 years, 40.6% between 1 and 3 years, and 17.6% less than a year in the organization they presently work for. Table 6 shows the regression analysis with Firm Performance (FP) as the dependent variable and Internal Branding (IB), Brand Commitment (BC), and Brand Supporting Behaviour (BSB) as independent variables. The regression model is significant (p<0.01) with an adjusted R-square value of 0.531. Internal Branding (IB) and Brand Supporting Behaviour (BSB) have a significant (p<0.05) impact on the dependent variable, with standardized beta coefficients of 0.464 and 0.185 respectively. Managerial Implication As this study was conducted within the Asian context, particularly in India, it extends the knowledge beyond western schools of thought, validating the application of the concept within different cultural contexts. Essentially, the current study provides empirical evidence showing the influence of internal branding on employees' brand-supporting behaviour. Internal Branding (IB) efforts play an important role in building Brand Commitment (BC), as shown by many researchers. The study shows that Internal Branding (IB) has a significant (p<0.05) impact on Brand Commitment (BC). As given by Punjaisri et al. (2009), Internal Branding (IB) efforts have dimensions such as training, orientation, and briefing. The present study confirms this, as training, orientation, and briefing all showed a significant association with Brand Commitment (BC). For service firms, not limited to the IT industry, seeking to ensure the delivery of their brand promise, this study reaffirms the literature (Aurand et al., 2005; Burmann and Zeplin, 2005; Machtiger, 2004) that companies should continuously work on internal communication and training programs to inform and educate staff, as well as reinforce the brand values.
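The regression reported in Table 6 (FP regressed on IB, BC, and BSB, with standardized betas) was presumably run in a statistics package; a minimal NumPy sketch of the same procedure is below. The data are synthetic and the fitted coefficients will not reproduce the study's values; only the z-scoring step, which yields standardized betas comparable to those in Table 6, mirrors the reported analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 244  # number of completed questionnaires reported above

# Synthetic predictor scores (illustrative only, not the study's data)
IB = rng.normal(3.5, 0.8, n)    # Internal Branding
BC = rng.normal(3.6, 0.7, n)    # Brand Commitment
BSB = rng.normal(3.4, 0.9, n)   # Brand Supporting Behaviour
FP = 0.46 * IB + 0.19 * BSB + rng.normal(0, 0.6, n)  # Firm Performance

# Z-score all variables so the fitted slopes are standardized betas
def z(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([z(IB), z(BC), z(BSB)])
y = z(FP)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# Adjusted R-squared for k = 3 predictors
resid = y - X @ betas
r2 = 1 - resid.var() / y.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 3 - 1)
print(dict(zip(["IB", "BC", "BSB"], betas.round(3))), round(adj_r2, 3))
```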
This study empirically suggests that internal branding assists management in enhancing employees' brand supporting behaviour (BSB). Furthermore, it provides empirical research that supports the influence of internal branding on the on-brand behaviour of staff (Hankinson, 2002; Thomson et al., 1999). Managers are encouraged to communicate with staff and train them constantly on the unique and distinctive brand values, which should be translated into daily activities such as brand standards. This will help them deliver on the brand promise. Internal Branding (IB) and Brand Commitment (BC) have shown a positive effect on organizational attractiveness (OA). Sound internal branding efforts will lead to greater brand commitment, which in turn leads to a higher perception of the attractiveness of the organization employees work for. Brand Supporting Behaviour (BSB) as a cumulative factor did not affect organizational attractiveness, but Brand Allegiance (BA) and Brand Citizenship Behaviour (BCB) individually showed a positive effect on internal organizational attractiveness (OA). Various factors can influence an employee to leave an organisation, or to remain despite being dissatisfied. Employees who have more opportunity to voice dissatisfaction are less likely to leave (Spencer, 1986). An intention to quit is related to job stress, lack of commitment to the employer, and job dissatisfaction (Mellor et al., 2004). Most labor turnover models include significant affective factors, including organisational commitment, well-being, and job satisfaction (Steel et al., 2002; Steel, 2002). Job satisfaction and organisational commitment are often assumed to influence the decision to leave (Winterton, 2004), but the influence of the corporate brand on this process, which is considered the most significant affective factor in an organisation, is never considered.
Thus brand allegiance (BA) is an important factor influencing organizational attractiveness. Further, managers need to understand that an external reputation as an employer of choice, or a ranking among best-employer studies, can also help sustain superior financial performance over time and create a competitive edge over competitors. In this study too, Internal Branding (IB) and Brand Supporting Behaviour (BSB) have a positive effect on Firm Performance (FP). Further, Brand Endorsement (BE) and Brand Citizenship Behaviour (BCB) have a positive effect on Firm Performance (FP), and managing them does affect perceptions of firm performance. Limitation However, it should be acknowledged that this study focused on the IT industry, which is one among several types of industries in the service sector. Some service industries may have a specific nature not shared by the others, thereby limiting the generalisability of this study to other service industries. We would suggest that replications of the relationships tested in this study in different service industries and cultural contexts would help clarify the conditions for generalisation to theory in other parts of Asia. Moreover, longitudinal data would improve understanding of the mechanisms influencing different employee attitudes and their behaviours in delivering brand performance. Future Research As previously discussed, the findings of this study suggest many more avenues for future research. The HR literature has studied many individual aspects associated with employee behaviour; what is lacking is an understanding of how these individual factors relate to and affect internal marketing communications. Also, just as emotional bonds are developed between consumers and brands, so too must we strive to understand how strong bonds are developed between employees and brands.
Therefore, the study of areas such as personality, values, motivation, emotional intelligence, affective reactions, and behavioural responses to employer brands is important to further enrich the internal brand management literature and understand best practice in the service industry. Conclusion The implication of this study for management is that internal branding is significant and draws on knowledge from both marketing, in terms of internal communication, and human resources, in terms of training and development programs. Management can deploy internal branding to enhance employees' brand attitudes, and its distinctiveness to enhance their pride in the brand and thereby their brand commitment. It is important for management to be aware that training programs to develop and enhance employees' brand-related understanding and skills need to be conducted on a regular basis. Management should use communication, daily briefings, group meetings, notice boards, and corporate magazines to communicate brand messages to staff. The importance of effective brand management in realizing financial benefits for the organisation cannot be overstated. With increased attention being given to financial outcomes, particularly in the services sector, both practitioners and academics advocate the important role played by the employee. Brand-aligned employees, as demonstrated through employee commitment to the brand and the exhibition of brand citizenship behaviour, have been advocated in the literature to be the result of the right internal brand management practices (e.g. internal communication and training). Clearly, managing the employer brand is a complex task, an observation that leads to a final question for both employers and researchers: who should be responsible for managing the employer brand? There is some empirical evidence as to how to promote the employer brand internally (e.g.
Hickerman et al., 2005) and how external promotion such as advertising and sponsorship may also influence employees, but no consensus on the coordination of customer and employer branding. There are various perspectives, including expanding the role of marketing or building a greater understanding of branding issues among HR professionals (e.g. Martin and Beaumont, 2003). Others argue for a new role, that of reputation manager (e.g. Davies et al., 2002), responsible for coordinating internal and external branding across all stakeholders. Certainly there is value in managing the employer brand, and a potential danger if no function accepts or is given responsibility for it.
Deep dissection of stemness-related hierarchies in hepatocellular carcinoma Background Increasing evidence suggests that hepatocellular carcinoma (HCC) stem cells (LCSCs) play an essential part in HCC recurrence, metastasis, and chemotherapy and radiotherapy resistance. Multiple studies have demonstrated that stemness-related genes facilitate the progression of tumors. However, the mechanism by which stemness-related genes contribute to HCC is not well understood. Here, we aim to construct a stemness-related score (SRscores) model for deeper analysis of stemness-related genes, assisting with the prognosis and individualized treatment of HCC patients. Further, we found by immunohistochemistry that the gene LPCAT1 was highly expressed in tumor tissues, and a sphere-forming assay revealed that knockdown of LPCAT1 inhibited the sphere-forming ability of hepatocellular carcinoma cells. Methods We used the TCGA-LIHC dataset to screen stemness-related genes of HCC from the MSigDB database. Prognosis, the tumor microenvironment, immune checkpoints, tumor immune dysfunction and exclusion, treatment sensitivity, and putative biological pathways were examined. A random forest model was used to construct the SRscores. Anti-PD-1/anti-CTLA4 immunotherapy response, tumor mutational burden, drug sensitivity, and the cancer stem cell index were compared between the high- and low-risk score groups. We also examined risk scores for different cell types using single-cell RNA sequencing data and correlated transcription factor activity in cancer stem cells with SRscores genes. Finally, we tested core marker expression and biological functions. Results Patients can be divided into two subtypes (Cluster1 and Cluster2) based on the TCGA-LIHC dataset's identification of 11 stemness-related genes. Additionally, a SRscores was developed based on the subtypes. Cluster2 and the low-SRscores group had superior survival and immunotherapy response compared with Cluster1 and the high-SRscores group.
The high-SRscores group was significantly more enriched in classical tumor pathways than the low-SRscores group. Multiple transcription factors are correlated with SRscores genes. The core gene LPCAT1 is highly expressed in rat liver cancer tissues and promotes tumor cell sphere formation. Conclusion A SRscores model can be utilized to predict the prognosis of HCC patients as well as their response to immunotherapy. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-023-04425-8. Introduction Hepatocellular carcinoma (HCC) is the most predominant type of primary liver malignancy, accounting for approximately 90% of all primary liver cancer cases, and has become a significant public health problem [1]. HCC has the sixth-highest incidence and the fourth-highest mortality rate among cancers, according to global oncology statistics [2]. While morbidity and mortality rates are declining for many tumors, HCC incidence and mortality rates have increased significantly in many parts of the world, with a 43% increase in mortality from HCC in the United States between 2000 and 2016 [3,4]. The majority of patients with HCC are diagnosed and treated at a late stage, with limited treatment options and a median survival of less than one year [5]. Although surgery, radiotherapy, and immunotherapy have helped improve morbidity and mortality in HCC, the overall 5-year survival rate is only 18% [4,6]. Therefore, further research into the molecular mechanisms underlying the occurrence and development of HCC, and the search for specific and sensitive biomarkers, are essential for the early diagnosis, prognostic assessment, and advancement of effective treatment strategies for HCC.
Cancer stem cells (CSCs) are a subpopulation of cells in tumors that are in a stem cell-like state and have a unique ability to self-renew and differentiate [7,8]. There is growing evidence for the presence of tumor cells with stem cell properties (liver cancer stem cells, LCSCs) in HCC. Liver stem/progenitor cells are transformed into LCSCs during long-term inflammatory processes induced by factors such as chronic viral infection or alcohol [9-11]. Studies suggest that CSCs are primarily responsible for recurrence, metastasis, and chemotherapy and radiation resistance in liver cancer [9,12]. Therefore, therapy targeting LCSCs may become a strategy for treating HCC. Recent single-cell transcriptomic analysis has revealed that LCSCs exhibit heterogeneity, and that distinct genes in distinct subpopulations are independently associated with liver cancer prognosis, suggesting that LCSC subpopulations further influence tumor progression and intratumor heterogeneity [13,14]. CSC growth depends on the microenvironment. CSCs can interact with surrounding cells, release substances, and rearrange the nearby microenvironment to establish their own niche [15]. The CSC microenvironment contains cancer-associated fibroblasts (CAFs), endothelial cells, immune cells, mesenchymal stem cells, and their growth factors and cytokines [15]. CSCs thrive in this complicated tumor microenvironment [15,16]. CAFs promote CSC proliferation, invasion, and metastasis by secreting the cytokine CXCL12, vascular endothelial growth factor, and stem cell growth factor [17]. They also support CSCs mechanically by producing fibrillar collagen. CSC microenvironments rely on endothelial cells. Hypoxia and vascular endothelial factors stimulate endothelial cells to form new blood vessels, which feed CSC metabolism for self-renewal, invasion, and metastasis [18]. Endothelial cells release IL-1, IL-3, IL-6, VEGF-A, and other cytokines that support CSC proliferation and tumor growth
[19]. Immune cells help CSCs invade and metastasize, attract regulatory T cells through TGF-β and cytokines, and elude the immune system [20]. Thus, understanding the interactions between stemness-related genes and the TME in HCC helps in developing tailored CSC therapeutics. We examined stemness-related gene expression and prognosis in TCGA-LIHC patients screened for somatic mutations. Unsupervised clustering was used to classify hepatocellular carcinoma patients into two subgroups based on the screened stemness-associated genes. Cox regression and random forest analysis were used to create stemness-related risk scores from subtype-specific differentially expressed genes. Hepatocellular carcinoma subtypes and stemness-related risk scores were correlated with the TME, drug sensitivity, chemotherapy/immunotherapy efficacy, and molecular function. We employed single-cell analysis to measure the expression of stemness-related risk score genes in distinct cells, and we annotated cancer stem cells to see their gene distribution. Intriguingly, we also examined transcription factors with variable expression in cancer stem cells and their correlation with SRscores genes. Finally, LPCAT1, a core gene of the stemness-related risk score, was experimentally confirmed. The stemness-related risk score predicted HCC patients' prognoses and guided treatment.
Based on multi-omics analysis Differential expression of stemness-related genes in tumor and paraneoplastic tissues was analyzed with the DESeq2 package in R, using adjusted p < 0.05 and |log fold change| > 1 as screening thresholds [27]. Univariate Cox regression analysis was performed with the survival package in R for all genes in the TCGA-LIHC cohort to identify genes significantly associated with overall survival (OS). The intersection of differentially expressed stemness-associated genes and prognostic genes was determined using the VennDiagram package, and a Venn diagram was drawn. Mutations in stemness-related genes were analyzed using the maftools package to screen for genes with a mutation percentage greater than 0 [28]. These genes' copy number variation (CNV) status was described using GISTIC 2.0, and chromosomal information and loss status were obtained and visualized in a circos plot [29]. Identification of stemness-related subtypes Unsupervised clustering was performed in the TCGA-LIHC cohort via the ConsensusClusterPlus package based on genes with a proportion of mutations greater than 0 [30]. The K-means (km) clustering method with Euclidean distance was used and repeated 1000 times to ensure reliability. The effect of subtyping was examined by PCA, and Kaplan-Meier survival curves were plotted for the different subtypes using the survival package. Validation was performed in ICGC-LIRI-JP. To further explore the role of the different subtypes in HCC, we used the GSVA package to compare potential action pathways between subtypes. The CIBERSORT algorithm was used to compare differences in immune cell infiltration between subtypes, and the ESTIMATE algorithm to compare their tumor microenvironments (TME). We also analyzed the expression of immune checkpoints between the subtypes.
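The DESeq2 screening step above reduces to a simple filter over the results table (adjusted p < 0.05 and |log fold change| > 1). The original analysis was done in R; the pandas sketch below illustrates the same thresholds on an invented toy table, so the gene names and values are placeholders:

```python
import pandas as pd

# Toy DESeq2-style results table (columns follow DESeq2's output names)
res = pd.DataFrame({
    "gene": ["SOX2", "ALB", "EPCAM", "CYP2E1"],
    "log2FoldChange": [2.3, -0.4, 1.6, -2.1],
    "padj": [0.001, 0.20, 0.03, 0.004],
})

# Screening thresholds from the Methods: adjP < 0.05 and |logFC| > 1
degs = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
print(degs["gene"].tolist())  # ['SOX2', 'EPCAM', 'CYP2E1']
```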
Prediction of chemotherapy sensitivity and response to immunotherapy We assessed the efficacy of multiple drugs in the different stemness-related subtypes. First, the sensitivity of the subtypes to chemotherapeutic drugs (expressed as half-maximal inhibitory concentration, IC50, values) was predicted with the pRRophetic package. In addition, we used the Tumor Immune Dysfunction and Exclusion (TIDE) online algorithm (http://tide.dfci.harvard.edu/) to estimate the immunotherapeutic response of each HCC patient. Second, we downloaded the IPS scores for CTLA4 and PD1 for HCC patients from TCIA (https://tcia.at/home) and compared the effect of immunotherapy across subtypes [31]. Finally, the sensitivity of stemness-related genes to non-immunotherapeutic drugs was assessed using the GSCA database (http://bioinfo.life.hust.edu.cn/GSCA/#/) [32]. Construction and validation of a prognostic stemness-related model To fully explore the stemness-related subtypes, we further analyzed differentially expressed genes (DEGs) between the two subtypes in the TCGA-LIHC cohort. A univariate Cox analysis (p < 0.05) was performed on the DEGs, a random forest model was built using the randomForestSRC package, and the genes ranked in the top 10 by relative importance were combined exhaustively to determine the optimal signature. Messenger RNA expression-based stemness index (mRNAsi) calculation Transcriptomic mRNAsi values (ranging from 0 to 1), strongly correlated with stem cell characteristics and widely used for predicting tumor stemness, were calculated directly for each HCC sample with the TCGAbiolinks package (R version 4.2.0). In the TCGA-LIHC cohort, the prognostic value of the mRNAsi index and its correlation with stemness-related subtypes and SRscores were further analyzed.
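The signature search described above (take the top-10 random-forest genes and evaluate every non-empty combination, 2^10 − 1 = 1023 in total, by a log-rank test, as reported in the Results) can be sketched as follows. The gene names are placeholders and the scoring function is a stand-in for the actual model fit plus log-rank test:

```python
from itertools import combinations

# Placeholder names for the ten top-ranked random-forest genes
genes = [f"G{i}" for i in range(1, 11)]

def neglog_logrank_p(subset):
    """Stand-in for: fit a risk score on `subset`, split patients into
    high/low score groups, run a log-rank test, return -log10(p)."""
    return len(subset)  # dummy value so the sketch runs end to end

# Enumerate all non-empty combinations of the ten genes
candidates = [c for r in range(1, len(genes) + 1)
              for c in combinations(genes, r)]
print(len(candidates))  # 1023 = 2**10 - 1

# Keep the signature with the best (largest) -log10(log-rank p)
best = max(candidates, key=neglog_logrank_p)
```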
Single cell analysis Single-cell data for hepatocellular carcinoma were obtained from the publicly available GEO datasets GSE125499 and GSE151530. Both datasets were sequenced on the 10× Genomics platform. For analysis with the R package Seurat, we filtered out cells with UMI counts less than 200, as well as cells with mitochondrial gene content > 20%. The integrated data were screened for highly variable genes, which were centered and scaled using the ScaleData function. The data were then subjected to PCA and clustering analysis using the FindNeighbors function. Afterward, cell type identification was performed using the SingleR package with the HumanPrimaryCellAtlasData reference. In addition, we looked for markers of hepatocellular carcinoma stem cells (CD44, EPCAM, HAPLN1, HNF1B, IGFBP5, IHH, KRT19, LFNG, LGR5, NANOG, POU5F1, PROM1, SOX2, THY1) via CellMarker 2.0 and annotated the cells accordingly [33]. Stemness-related gene sets were scored in the single-cell dataset using the AddModuleScore function [34]. We analyzed all subpopulations of hepatocellular carcinoma single cells with the R package SCENIC, inferred co-expression modules between transcription factors and candidate target genes based on the GENIE3 algorithm, and performed cis-regulatory motif analysis for each co-expression module using RcisTarget [35]. Transcription factor activity was then obtained by scoring each regulon's activity in each cell using the AUCell algorithm, and regulons were clustered by transcription factor activity [36]. Correlations between transcription factor activities were computed, the Connection Specificity Index (CSI) matrix of regulons was calculated and clustered, and the clustered modules were visualized in a UMAP plot. Finally, the average regulatory activity of each transcription factor was calculated and visualized.
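The quality-control step above (discard cells with fewer than 200 UMIs or with more than 20% mitochondrial reads) is a per-cell boolean mask. The original pipeline used Seurat in R; a pandas sketch over an invented per-cell metrics table (barcodes and column names are illustrative):

```python
import pandas as pd

# Toy per-cell QC metrics table
cells = pd.DataFrame({
    "barcode": ["AAAC", "AAAG", "AACT", "AAGC"],
    "n_umi": [150, 5400, 3200, 900],
    "pct_mito": [5.0, 35.0, 8.2, 12.1],
})

# Filters from the Methods: keep UMI count >= 200 and mito fraction <= 20%
keep = (cells["n_umi"] >= 200) & (cells["pct_mito"] <= 20)
filtered = cells[keep]
print(filtered["barcode"].tolist())  # ['AACT', 'AAGC']
```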
HE and immunohistochemistry Paraffin-embedded tumor tissue was cut into 4 μm sections, dewaxed, and rehydrated. HE and immunohistochemical staining were performed according to standard protocols. HE staining: sections were deparaffinized, stained with hematoxylin-eosin, dehydrated, cleared, and mounted, then photographed under a microscope (Nikon, ECLIPSE Ts2-FL). Immunohistochemical staining: after sectioning, dewaxing and antigen retrieval were performed, and sections were blocked with normal sheep serum working solution. Each tissue was incubated with LPCAT1 polyclonal antibody (Proteintech, 16112-1-AP) overnight at 4 ℃. After washing, the secondary antibody was incubated for 30 min. Color development, counterstaining, dehydration, and mounting were then performed, and sections were photographed using a microscope (Nikon, ECLIPSE Ts2-FL). Statistical analysis All statistics were performed using R versions 3.6.3 and 4.2.0. The Wilcoxon rank sum test was used for comparisons of two sets of continuous variables. The Spearman correlation test was used for correlations between gene expressions. Univariate and multivariate Cox regression was used to define prognostic correlates, ROC curves were calculated using the timeROC package, and the area under the curve was used to assess the prognostic performance of the SRscores. p < 0.05 was used as the criterion for statistical significance. Expression and mutation analysis of stemness-related genes in HCC The data from TCGA-LIHC were analyzed comprehensively. Differential analysis between liver cancer and paracancerous tissues revealed differences in 103 of 345 stemness-related genes, with up- and down-regulated genes as shown in the volcano plot in Fig. 1A. A univariate Cox analysis identified 5163 prognosis-associated genes in TCGA-LIHC. The intersection with the 103 stemness-associated genes yielded 27 stemness-associated genes with both differential expression and prognostic value in HCC (Fig.
1B). In a univariate Cox analysis of these 27 genes, two genes were found to be protective factors with HR < 1, and 22 genes were risk factors with HR > 1 (Additional file 1: Fig. S1A). To investigate the genetic mutation of these genes in HCC, we analyzed the incidence of somatic mutations and found that 11 of the 27 stemness-associated genes had a mutation frequency of > 1% (Fig. 1C). In addition, copy number variants (CNV) were analyzed, and copy number alterations were found to be prevalent in 11 stemness-associated genes (Fig. 1D). Among them, there were extensive CNV gains in ASPM, SLC4A11, SOX11, and OSR1, and CNV losses in ESR1, CITED2, PRDM15, and FOLR1. Figure 1E shows the chromosomal locations of the 11 stemness-associated genes with CNV variants. Figure 1F demonstrates that the expression of these 11 genes at the mRNA level differs substantially between tumor and normal tissue. Consequently, stemness-related genes may play a crucial role in the development of HCC. Identification of subtypes of HCC by stemness-associated genes Clustering analysis of 363 tumor patients from TCGA-LIHC based on the 11 stemness-associated genes using the ConsensusClusterPlus R package revealed two subtypes, with 168 cases in Cluster 1 and 187 cases in Cluster 2 (Fig. 1G). Additional file 1: Fig. S1B depicts the interactions of these 11 stemness-associated genes. PCA analysis of the differences between the two subtypes determined that the two stemness subtypes differed markedly at the transcriptome level (Fig. 1H). Survival analysis was also used to evaluate the prognostic value of the two stemness subtypes in clinical practice. The OS of patients in the two subtypes was significantly different (p = 0.029), with Cluster 1 having a significantly worse prognosis (Fig. 1I). Nine of these 11 stemness-related genes displayed differential expression between the two subtypes, as shown in Fig.
1J. To verify the stability and applicability of these two subtypes, we repeated the unsupervised cluster analysis in ICGC-LIRI-JP. Patients were again well divided into two categories (Fig. 1K), PCA analysis showed significant differences (Fig. 1L), and survival differed markedly between the two subtypes (p = 0.00091). Cluster1 had a significantly poorer prognosis (Fig. 1M), and ten of the eleven stemness-related genes were significantly different between the two subtypes (Fig. 1N). Using the TCGA-LIHC dataset, we analyzed the distribution of somatic mutations in the two subtypes and found that TP53 mutations were predominant in Cluster1 patients, while CTNNB1 predominated among patients in Cluster2 (Additional file 1: Fig. S1C, D). Somatic mutations result from diverse mutational processes such as DNA repair defects and exposure to exogenous or endogenous mutagens. Different mutational processes produce different mutation types, i.e., mutational signatures. Therefore we used NMF to characterize the genomic landscape comprehensively. In Cluster1, MMR and guanine damage predominate, while in Cluster2, MMR and adenine damage predominate (Additional file 1: Fig. S1E, F). Analysis of tumor microenvironment and signaling pathways between two subtypes in HCC By comparing the infiltrating immune cell components in the TME between the two subtypes, we discovered a higher rate of immune cell infiltration in Cluster1, in addition to the immune hyper-infiltration zone depicted in the heat map (Fig.
2A). In the heat map, it was found that in the high immune infiltration area, anti-tumor immune cells such as type 1 T helper cells and activated CD8+ T cells, as well as immunosuppressive cells such as immature dendritic cells, regulatory T cells, and neutrophils, were significantly enriched. Interestingly, most of the samples in Cluster1 showed low immune infiltration, while most of the samples in Cluster2 showed high immune infiltration. This suggests that the TME may promote the recruitment or differentiation of immune cells [37]. To distinguish the specific immune components between the two subtypes in the tumor immune microenvironment, the differences between 28 immune cell types were calculated (Fig. 2B). Given the disparities in immune infiltration between the two subtypes, we further analyzed the differences in immune checkpoints. The results showed considerable immune checkpoint differences between the two subtypes (Fig. 2C). To further explore the possible mechanisms distinguishing the subtypes, we analyzed the differences in enriched pathways based on GSVA. The results showed that Cluster1 was mainly enriched in the cell cycle, DNA replication, homologous recombination, and Notch signaling pathways, while Cluster2 was enriched primarily in glutathione metabolism, α-linolenic acid metabolism, cysteine and methionine metabolism, and other metabolism-related pathways (Fig. 2D). Glutathione promotes T-cell activation, proliferation, and differentiation and is essential for maintaining T cell immunity [38]. α-Linolenic acid plays a protective role in various tumors. MDSCs may limit T cell activation by limiting the availability of cysteine to T cells [39]. Methionine metabolism correlates with the number of CD8+ T cells [40]. These results suggest that these pathways play an essential role in the immune infiltration of tumors, but the specific complex mechanisms need to be explored in more in-depth experiments.
Analysis of chemotherapy sensitivity and immunotherapeutic response between two subtypes in HCC Chemotherapy remains the standard treatment for cancer patients. Using the pRRophetic algorithm, we determined the sensitivity of both subtypes to conventional chemotherapeutic drugs. Cluster1 was found to be more sensitive to Sorafenib, Cytarabine, Cisplatin, Doxorubicin, and other drugs (Fig. 2F). Using the TIDE algorithm, we also assessed the immunotherapeutic response of both subtypes. These results indicate that the stemness subtypes are associated with immunotherapy and chemotherapy sensitivity. Construction and validation of a prognostic model based on the analysis of differences between two subtypes First, we analyzed the DEGs between the two subtypes and obtained 550 genes. Then, random forest analysis filtered out the genes with high relative importance (Fig. 3A). Subsequently, a log-rank test was performed by combining these 10 genes in 1023 (2^10 − 1) permutations, and the prognostic models were evaluated. Figure 3B shows the −log10(log-rank p) values of the top 20 models, from which the highest-ranking signature, consisting of 8 genes (LPCAT1, NDRG1, G6PD, CYP7A1, DNASE1L3, SPP1, SFN, CDCA8), was selected for the construction of the stemness risk model. The SRscores was calculated for each HCC patient with this model. Figure 3C reveals that Cluster1 patients had a substantially higher SRscores (p < 0.001) than Cluster2 patients. Patients with a high SRscores had a substantially worse prognosis than those with a low score (p < 0.001) (Fig. 3D). In TCGA-LIHC, a time-dependent ROC curve analysis determined the sensitivity and specificity of the SRscores, with an AUC of 0.773 (Fig. 3G). The model was then validated in ICGC-LIRI-JP to establish its reliability. Figure 3E shows that Cluster1 patients again had a higher SRscores than Cluster2 patients. In the survival analysis, patients with a high SRscores had a substantially worse prognosis (Fig.
3F), with a time-dependent ROC curve AUC of 0.75 (Fig. 3H). The pie chart demonstrates that patients with higher risk scores have more advanced pathological staging than those with lower risk scores (Fig. 3I). These findings indicate that the risk model is highly adaptable and has an outstanding prognostic effect in HCC patients.

Genomic characterization of high and low SRscores, TMB, tumor microenvironment, and signaling pathway analysis

Using the TCGA-LIHC dataset, we analyzed the distribution of somatic mutations in the high and low SRscores groups. The analysis revealed that CTNNB1 mutations predominated in patients with low risk scores, while TP53 mutations were prevalent among patients with high risk scores (Fig. 4A, B). Additionally, we exhaustively characterized the genomic landscape. Figure 4C, D demonstrates that MMR and guanine damage signatures dominated in the low-risk group, whereas MMR and adenine damage signatures dominated in the high-risk group. Calculating the TMB for each HCC patient, we discovered that TMB was greater in the group with high stemness risk scores (Fig. 4E). By prognostic analysis, we found that the high-TMB group had a poorer prognosis than the low-TMB group (Fig. 4F). More importantly, when TMB grouping and SRscores were combined for survival analysis, patients with low TMB and low SRscores had significantly longer OS than patients with high TMB and high SRscores (Fig. 4G).

We also performed GSVA pathway enrichment analysis between the high and low SRscores groups. The results showed that the cell cycle, mTOR signaling pathway, P53 signaling pathway, NOTCH signaling pathway, and other pathways closely related to tumor progression were significantly enriched in the high stemness risk group. In contrast, in the low SRscores group, metabolism-related pathways such as FATTY_ACID_METABOLISM, RETINOL_METABOLISM, TYROSINE_METABOLISM, and GLYCINE_SERINE_AND_THREONINE_METABOLISM were mainly enriched (Additional file 1: Fig.
S1I). This is consistent with the subtype results. In addition, we calculated immune cell subpopulation infiltration in the high and low SRscores groups and found greater eosinophil and neutrophil infiltration, associated with improved survival, in the low SRscores group compared with the high SRscores group (Additional file 2: Fig. S2A). We further analyzed the differences in immune checkpoints; the results showed considerable immune checkpoint differences between the high and low SRscores groups (Additional file 2: Fig. S2B). We also described the correlations among the eight genes that constitute the stemness risk model and found significant negative correlations between most of them (Additional file 1: Fig. S1G, H). In addition, these genes had substantial correlations with multiple immune cells (Additional file 2: Fig. S2C). These results suggest that these genes may contribute to the biological differences between the high- and low-scoring groups.

Additionally, we examined the expression of these eight genes in cancerous and paraneoplastic tissues using the TCGA-LIHC dataset. The expression of LPCAT1, NDRG1, G6PD, CYP7A1, SPP1, SFN, and CDCA8 was significantly higher in cancer tissues than in paraneoplastic tissues, whereas the expression of DNASE1L3 was significantly lower in cancer tissues (Additional file 1: Fig. S1J). We used the IOBR package to analyze the correlation between the high and low SRscores groups and multiple tumor-related characteristic datasets, and found that the high stemness risk group was significantly enriched in classical tumor pathways such as WNT, TGF-β, JAK-STAT3, and mTOR (Additional file 2: Fig. S2G). We also used the GSCA database to assess the correlation between the SRscores genes and pathways such as apoptosis, cell cycle, and EMT, and found that most of the genes were involved in the activation of tumor progression pathways (Additional file 2: Fig.
S2D). These results suggest that these genes may contribute to tumor progression and poor prognosis.

Analysis of pharmacotherapy for high and low SRscores

Using the GSCA database, we analyzed the sensitivity of the SRscores genes to chemotherapeutic drugs. Additional file 2: Fig. S2E, F demonstrates that SPP1, SFN, NDRG1, and G6PD are sensitive to the majority of drugs. Using the pRophetic package, we also determined the sensitivity of the high and low SRscores groups to chemotherapy drugs. Figure 4H demonstrates that the high SRscores group was more sensitive to Sorafenib, Cytarabine, Cisplatin, and Doxorubicin than the low-risk group. Similar to the subtype analysis, the group with lower risk scores and better prognosis benefited more from immunotherapy (Fig. 4I). We also investigated the predicted response to immunotherapy targeting the immune checkpoints PD-1 and CTLA4 in the high and low stemness risk groups; unfortunately, there was no significant difference between the two groups. These results can assist in selecting chemotherapeutic agents when immunotherapy is combined with chemotherapy in clinical practice.

Analysis of mRNAsi combined with subtypes and SRscores

The new version of the TCGAbiolinks package was used to calculate the mRNAsi of each patient in the TCGA-LIHC dataset. We investigated the relationship between mRNAsi, stemness subtypes, and SRscores. We ranked HCC samples according to their mRNAsi values from low to high and examined their correlation with stemness subtypes, SRscores, and clinical characteristics (Fig. 5A). Cluster1 and the high SRscores group were found to be highly concentrated in the high-mRNAsi region, whereas Cluster2 and the low SRscores group were primarily concentrated in the low-mRNAsi region, as depicted in Fig. 5C, D. mRNAsi and SRscores showed a significant positive correlation (R = 0.34, P < 0.001), as shown in Fig.
5E. In addition, prognostic analysis demonstrated that patients with high mRNAsi had a worse prognosis than those with low mRNAsi (Fig. 5B). A Sankey diagram was used to present the distribution of HCC patients across subtypes, SRscores groups, and mRNAsi groups (Fig. 5F). The results were consistent with those described above: samples from the better-prognosis Cluster2, low SRscores, and low-mRNAsi groups essentially overlapped, as did samples from the poorer-prognosis Cluster1, high SRscores, and high-mRNAsi groups.

Single-cell analysis of SRscores

To further explore the expression profile of the SRscores, we analyzed its distribution and expression in scRNA-seq datasets. First, we downloaded the single-cell datasets (GSE125449, GSE151530) from the GEO database and performed quality control (Additional file 3: Fig. S3A, C). Subsequently, the UMAP and tSNE algorithms were used to cluster all cells, which could be divided into 31 clusters (Additional file 3: Fig. S3B). According to marker genes, these 31 clusters were annotated as B cells, endothelial cells, plasma cells, macrophages, smooth muscle cells, stem cells, T cells, CD8+ T cells, and fibroblasts (Additional file 3: Fig. S3D, E). To further validate the accuracy of the cancer stem cell clustering, we used the CytoTRACE package to calculate a stemness score (the CytoTRACE score) for each cell; scores range from 0 to 1, with higher scores indicating higher stemness (less differentiation) (Fig. 6A). As expected, we found that the clustered stem cells had significantly higher CytoTRACE scores than the other cell populations (Fig.
6C) [41, 42]. In addition, we found that in the tumor tissues of HCC patients, high SRscores were observed in hepatocytes (Fig. 6B, D, Additional file 3: Fig. S3F). Interestingly, we further analyzed the expression of the eight genes of the stemness risk model in these cells and found the highest expression in stem cells (Fig. 6E, Additional file 3: Fig. S3G). Finally, we analyzed the colocalization of LPCAT1 and NDRG1, the two genes with the greatest importance in the random forest analysis, and found strong localization consistency (Additional file 3: Fig. S3H).

It is well known that transcription factors play an important role in tumor development, and they also play an indispensable role in the promotion of tumor development by tumor stem cells [43-46]. To investigate the expression activity of transcription factors in tumor stem cells and their correlation with the SRscores genes, we performed a single-cell clustering analysis. First, we classified the cells within the liver cancer tissue into 11 clusters by transcription factor regulatory activity and found that the stem cells were mainly in the M8 cluster (Fig. 7A-C). After calculating the average regulatory activity of transcription factors in the M8 module, we screened the transcription factors with an average regulatory activity > 2 in stem cells for visualization and correlation analysis with the stemness genes, and found that factors such as TCF3, SMARCA4, TFF3, RFX6, SMARCB1, and HES6 were enriched in stem cells (Fig. 7D, E). Finally, we analyzed the correlation between these transcription factors and the stemness-related genes in hepatocellular carcinoma transcriptome data and found that NDRG1 was more significantly correlated with these transcription factors than LPCAT1 (Fig. 7F).
LPCAT1 regulates the stemness of HCC

Random forest analysis revealed that LPCAT1 is the most important gene in the stemness risk model. Therefore, we validated LPCAT1 expression and its potential function in CSC characteristics at the histological and cellular levels. In our previous work, we established a batch of rat models of hepatocellular carcinoma with preserved paraffin specimens [47]. HE staining confirmed the successful construction of hepatocellular carcinoma (Fig. 8A), and immunohistochemistry showed that the expression of LPCAT1 was significantly higher in HCC than in normal rat liver tissue (Fig. 8B). A previous study reported that, among hepatocellular carcinoma cell lines in vitro, LPCAT1 expression was highest in HCCLM3 [48]. Therefore, we used specific shRNAs to inhibit LPCAT1 expression in HCCLM3 cells (shRNA-1, shRNA-2, shRNA-3; Fig. 8C). We then performed sphere-formation experiments using the more efficient knockdowns, shRNA-2 and shRNA-3, and a control group, and found that the maximum sphere diameter was significantly reduced in the shRNA groups compared with the NC group (Fig. 8D, E).
Discussion

Due to their distinct biological functions in tumors, CSCs have gained the interest of numerous researchers in recent years. They play a vital role in the progression, metastasis, recurrence, and radiotherapy resistance of HCC [9, 12]. Growing evidence suggests that LCSCs, such as liver tumor-initiating cells, are the primary organizers of HCC initiation [49]. Nonetheless, the pathophysiology and mechanisms of LCSCs in hepatocellular carcinoma require further investigation. Here, we used bioinformatics to analyze the role of stemness-related genes in HCC and examine their molecular characteristics, and the key results were validated experimentally. We obtained 11 stemness-associated genes from a multi-omics screen for unsupervised cluster analysis, identified two subtypes, and constructed the SRscores from the subtypes. The differences in survival status, TME, somatic mutations, and drug sensitivity of HCC patients across subtypes and SRscores groups were evaluated. These analyses provide a more accurate prognostic assessment for HCC patients. HCC is difficult to diagnose in its early stages, and patients with advanced disease have a poor prognosis. With the advancement of immunotherapy, oncology is increasingly employing immunotherapeutic agents; in clinical treatment, immune checkpoint inhibitors such as anti-PD1, anti-PDL1, and anti-CTLA4 monoclonal antibodies are widely utilized. The TME provides a suitable environment for protecting and regulating CSCs, supporting their growth and differentiation and thus promoting tumor metastasis [50]. Understanding the characteristics of CSCs and the TME in HCC is therefore crucial. CSCs can secrete immunosuppressive cytokines that enable tumor-associated macrophages to secrete multiple inflammatory cytokines and recruit myeloid-derived suppressor cells (MDSCs), thereby facilitating the formation of a tumor microenvironment conducive to CSC survival [51, 52]. Our findings also confirm that macrophages and MDSCs
were more abundant in the high SRscores group. In addition, biomarkers such as PD-L1, human epidermal growth factor receptor 2 (HER2), vascular endothelial growth factor (VEGF), and TMB are highly predictive of the response to tumor immunotherapy [53, 54]. We analyzed the efficacy of immunotherapy in HCC patients using data from the TIDE and TCIA databases. We observed that patients in Cluster2 and those with low stemness scores were more responsive to immunotherapy, and Cluster2 patients were more amenable to anti-PD-1 and anti-CTLA4 therapies.

Malta et al. developed the OCLR technique to calculate the transcriptional stemness index (mRNAsi) using an 11,774-gene mRNA expression profile [55, 56]. We used the same method to calculate the mRNAsi of HCC patients. Consistent with previous studies, patients with high mRNAsi had a poorer prognosis [57]. Patients in the better-surviving Cluster2 and low SRscores groups had lower mRNAsi, and the SRscores and mRNAsi showed a significant positive correlation. Studies have shown that signaling pathways such as Wnt, nuclear factor-κB (NF-κB), Janus kinase/signal transducers and activators of transcription (JAK-STAT), and phosphatidylinositol-3-kinase (PI3K)/AKT/mammalian target of rapamycin (mTOR) can regulate the growth of CSCs [58]. We found that the high SRscores group was significantly enriched in these pathways. In conclusion, we suggest that the SRscores can reflect the characteristics of CSCs.
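The reported positive association between SRscores and mRNAsi (R = 0.34, P < 0.001) is a Spearman rank correlation. As a minimal, dependency-free sketch of that statistic (not the authors' implementation, which presumably used an R or Python statistics package):

```python
def average_ranks(values):
    """1-based ranks, with ties replaced by their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Applied to per-patient (SRscores, mRNAsi) pairs, this yields the correlation coefficient reported in the text; the permutation or t-approximation p-value step is omitted here.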
We performed both transcriptome and single-cell analyses. Stem cells had considerably higher SRscores, and the model genes were expressed at much higher levels in stem cells than in other cells. Because our single-cell data covered only HCC tumor tissues, these genes may be substantially expressed in CSCs. Several studies indicate that transcription factors influence tumor stem cell biology. Our bioinformatic analysis showed that tumor stem cells activated TCF3, SMARCA4, TFF3, and other factors (Fig. 7D, E), and joint analysis with transcriptome data suggested that these transcription factors regulate the stemness risk-related genes. However, single-cell sequencing of hepatocellular cancer stem cells has not yet evaluated these genes; studying their distribution and expression in CSCs could improve risk-scoring models and CSC-targeted treatment. We also have not investigated the detailed regulatory relationships between the transcription factors and these genes, which would allow us to delineate the specific networks by which stemness genes regulate tumorigenesis and development.

Eight genes were utilized to generate the stemness-related risk score. LPCAT1 (lysophosphatidylcholine acyltransferase 1) encodes a protein that is essential for phospholipid metabolism. LPCAT1 is crucial to the development of diverse tumors and may function as a potential prognostic marker [59-62]. Uehara et al. found that LPCAT1 expression was significantly higher in gastric cancer than in paraneoplastic tissue [63]. It has been shown that LPCAT1 can promote endometrial tumor cell growth and stemness by affecting the TGF-β signaling pathway [61]. In HCC, LPCAT1 is an oncogene that promotes the growth and metastasis of liver cancer cells [64]. Using a rat model of hepatocellular carcinoma, we found that LPCAT1 expression was substantially higher in tumor tissues than in normal tissues. In addition, silencing LPCAT1 inhibited the stemness of hepatocellular carcinoma cells.
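The eight-gene SRscores is a plain linear combination of expression values weighted by the multivariate Cox coefficients reported with the model formula. A minimal sketch (the coefficients are from the text; the input expression vector is a made-up placeholder):

```python
# Cox regression coefficients of the eight signature genes, as reported.
coefs = {
    "LPCAT1":    0.075256372,
    "NDRG1":    -0.004373688,
    "G6PD":      0.068014774,
    "CYP7A1":   -0.035236454,
    "DNASE1L3": -0.022288854,
    "SPP1":      0.051083710,
    "SFN":       0.004196020,
    "CDCA8":     0.264500258,
}

def srscore(expr):
    """SRscores = sum_i Coef_i * x_i over the eight signature genes."""
    return sum(coefs[g] * expr[g] for g in coefs)

# Placeholder input: all eight genes at expression 1.0, so the score
# equals the sum of the coefficients.
toy_patient = {g: 1.0 for g in coefs}
score = srscore(toy_patient)
```

Real input would be the normalized per-patient expression of the eight genes; applying the TCGA-derived coefficients unchanged to the ICGC-LIRI-JP cohort is exactly the validation step described in the text.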
In conclusion, stemness-related subtypes and risk scores were developed, and we systematically described their associations with TME immune cell infiltration, drug sensitivity, and TMB. These results distinguish subtypes and high- and low-risk score groups among HCC patients, and the interactions of these stemness-related genes are vital in tumor development and treatment. The SRscores may provide new ideas for the prognostic assessment and treatment of HCC. Nonetheless, our investigation inevitably has some limitations. First, the data in our study were obtained from public databases; although animal and cellular investigations were conducted for validation, additional clinical validation is required to confirm practical accuracy. Second, there is a paucity of clinical data validation, and large samples from collaborating hospitals will be required in the future to validate the accuracy of the stemness model; the clinical significance of the stemness subtypes and SRscores must be investigated further. In addition, only the role of the core gene LPCAT1 in HCC was validated; the roles and mechanisms of the other stemness-related genes in the model require further functional experiments.

The SRscores was calculated as SRscores = Σ_{i=1}^{n} Coef_i × x_i, where n represents the number of genes in the scoring model, Coef_i is the coefficient of each gene in the multivariate Cox regression, and x_i is the expression of each stemness-related gene. Concretely, SRscores = (0.075256372 × expression of LPCAT1) + (−0.004373688 × expression of NDRG1) + (0.068014774 × expression of G6PD) + (−0.035236454 × expression of CYP7A1) + (−0.022288854 × expression of DNASE1L3) + (0.051083710 × expression of SPP1) + (0.004196020 × expression of SFN) + (0.264500258 × expression of CDCA8). The prognostic value of the SRscores and the accuracy of its survival prediction were then validated, and the regression coefficients from TCGA were applied to the ICGC-LIRI-JP validation cohort to calculate the SRscores.

Fig. 1 Fig.
1 Identification of stemness-related genes and subtypes in HCC. A Volcano map showing 103 of 345 stemness-associated genes with differential mRNA expression in the HCC data. B Venn diagram showing that 27 differentially expressed stemness-related genes have prognostic value. C Waterfall plot showing the mutational landscape of the 27 stemness-associated genes. D Copy number variation (CNV) of the 11 stemness-associated genes; red dots indicate increased CNV and green dots indicate decreased CNV. E Positions of the CNVs of the 11 stemness-associated genes on the chromosomes. F Differences in mRNA expression of the 11 stemness-associated genes between HCC and normal tissues, *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001; ns, no statistical significance. G Identification of two stemness-related subtypes by unsupervised clustering. H PCA showing the difference between the two subtypes. I Kaplan-Meier curves showing survival differences between the two subtypes. J Differential expression of the 11 stemness-associated genes in the two subtypes. K-N Validation in the ICGC-LIRI-JP data

Fig. 2 Tumor microenvironment, signaling pathway analysis, and drug sensitivity analysis between the subtypes. A Heat map showing the differences in immune cell subpopulations between the two subtypes. B Relative abundance of each immune cell subpopulation in the two subtypes. C Analysis of immune checkpoint expression in the two subtypes. D GSVA pathway enrichment analysis between the two subtypes. E TIDE algorithm estimating the immunotherapy response of patients of both subtypes. F Box plot of the estimated IC50 of chemotherapeutic drugs in the two subtypes. G Analysis of immune checkpoint blockade therapy (e.g., anti-PD-1 and anti-CTLA4) between the two subtypes Fig. 3 Fig.
3 Identification of prognostic stemness-related signatures. A Random forest algorithm calculating the relative importance of the top genes. B The top 10 genes were subjected to 1023 (2^10 − 1) combinations with log-rank tests, and the top-ranking 8 genes were selected for model construction. C Analysis of SRscores among the subtypes in the TCGA-LIHC data, ***p < 0.001. D Analysis of survival differences between the high and low SRscores groups in the TCGA-LIHC data. E, F ICGC-LIRI-JP data validating the fitness of the model. G, H Time-dependent ROC analysis showing the specificity of the SRscores in TCGA-LIHC (training set) and ICGC-LIRI-JP (validation set). I Pie charts showing the chi-square test of clinicopathological factors and stemness subgroups for each stemness score group

Fig. 4 Mutation, TMB, and drug sensitivity analysis in the SRscores groups. A, B Waterfall plots showing the distribution of somatic mutations in the genes with the highest mutation frequencies in the SRscores groups. C, D Bayesian NMF identification of mutation signatures in the SRscores groups; the middle and lower plots show the relative proportions of the total number of mutations and mutation types. E Differences in TMB between the high and low SRscores groups. F Prognostic analysis of the high-TMB and low-TMB groups. G Prognostic analysis of the combined SRscores and TMB. H Box plot of the estimated IC50 of chemotherapeutic drugs in the SRscores groups. I TIDE algorithm for the immunotherapy response of the high-SRscores and low-SRscores groups Fig. 5 Fig.
5 mRNAsi analysis and single-cell validation. A Correlation between mRNAsi and the subtypes, SRscores, and clinical features. B Prognostic analysis of the high-mRNAsi and low-mRNAsi groups. C, D Analysis of mRNAsi in the subtypes and SRscores groups, ***p < 0.001. E Spearman correlation analysis of SRscores with mRNAsi. F Sankey plots showing the relationship between the subtypes, high and low SRscores groups, high and low mRNAsi groups, and prognostic outcomes in HCC

Fig. 6 Single-cell annotations and SRscores in single cells. A t-distributed stochastic neighbor embedding (tSNE) plots of malignant cells from HCC. B, D Display of SRscores in each cell subpopulation. C CytoTRACE scores of each cell population. E Expression of the eight genes constituting the SRscores in each cell subpopulation

Fig. 7 Correlation of transcription factors and stemness-related genes confirmed by single-cell analysis. A Cell annotations for the HCC-integrated dataset shown on UMAP plots, with cell types represented by different colors. B Regulatory modules identified from the regulator CSI matrix, with core transcription factors, binding motifs, and corresponding cell types. C Distribution of transcription factors among cell populations in each module. D Heat map showing the expression of the screened transcription factors in each cell population. E Enrichment of the screened transcription factors in each cell population. F Heat map of the correlation between the screened transcription factors and the stemness-related genes
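The exhaustive signature search described in the model-construction section, where all 1023 (2^10 − 1) non-empty combinations of the 10 candidate genes are each tested with a log-rank test, can be sketched as follows. The last two gene names are hypothetical placeholders, since only the eight selected genes are named in the text, and the survival-scoring step is omitted because it needs the cohort data:

```python
from itertools import combinations

# The 10 candidate genes kept by the random forest filter; GENE_9 and
# GENE_10 are illustrative placeholders for the two unnamed candidates.
genes = ["LPCAT1", "NDRG1", "G6PD", "CYP7A1", "DNASE1L3",
         "SPP1", "SFN", "CDCA8", "GENE_9", "GENE_10"]

# Every non-empty subset: sum over r of C(10, r) = 2**10 - 1 = 1023.
subsets = [combo
           for r in range(1, len(genes) + 1)
           for combo in combinations(genes, r)]
# In the described workflow, each subset is then scored with a log-rank
# test and the candidate models are ranked by -log10(p).
```

This brute-force enumeration is feasible here only because 10 genes give ~10^3 subsets; it doubles with every additional candidate gene.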
2023-09-17T13:17:55.466Z
2023-09-16T00:00:00.000
{ "year": 2023, "sha1": "36643a2491af1b3b3734b39bfa9ebba550f46315", "oa_license": "CCBY", "oa_url": "https://translational-medicine.biomedcentral.com/counter/pdf/10.1186/s12967-023-04425-8", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "1dc20d6dc0ef21103218c321e5edcc58fd5db19f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
5353839
pes2o/s2orc
v3-fos-license
Robotic-assisted surgery for excision of an enlarged prostatic utricle

Highlights

• Prostatic utricle is a rare malformation originating from incomplete regression of the Müllerian ducts.
• A minimally invasive approach is considered the gold standard for surgical treatment.
• Few cases of robot-assisted excision have been reported in the literature.
• We report a case treated with robotic-assisted surgery.

Introduction

The prostatic utricle is a diverticulum of the posterior urethra at the summit of the verumontanum, between and above the two ejaculatory ducts. It arises from incomplete regression of the Müllerian ducts or from inadequate androgen stimulation of the urogenital sinus. The incidence of an enlarged prostatic utricle is estimated at between 11 and 14% in association with distal hypospadias or disorders of sexual differentiation (DSD) and at up to 50% in the presence of perineal hypospadias [1]. Diagnosis is easily made, but management can be challenging; in order to prevent infertility and neoplastic degeneration, treatment is generally reserved for symptomatic cases (urinary tract infections, stones in the pouch, dysuria, back-pressure changes, and pseudoincontinence due to secondary trapping of urine in the pouch) [2]. Several surgical and endoscopic techniques have been described, but laparoscopy is considered the gold standard treatment. Robotic-assisted resection of the prostatic utricle has rarely been described to date [3-5]. We report the first successful robotic-assisted redo excision of a non-symptomatic, giant prostatic utricle in a 19-year-old boy with DSD.
Presentation of case

A 19-year-old boy affected by DSD with a chromosomal anomaly (45,X/46,X,dic(Y)(q11.2)) was admitted to our department for treatment of a retrovesical mass. A prostatic utricle cyst had been laparoscopically removed two years earlier in another institution. The patient's clinical history included right inguinal hernia repair at 4 months of age with excision of an ovotestis (intraoperative biopsy of the contralateral gonad detected normal testicular tissue), and urethroplasty for penile hypospadias performed when the child was 4 years old.

Clinical examination and blood analysis at admission were normal, with no history of urinary infections. Ultrasound (US) revealed a 10 × 6 cm anechoic retrovesical cystic lesion. MRI confirmed a fluid-filled, drop-shaped midline cystic mass, tapering to an end behind the hypoplastic prostate and apparently not communicating with the prostatic urethra (Fig. 1). The urethral meatus, on the ventral surface of the glans, was of adequate caliber. Cystoscopy revealed a tortuous distal urethra opening into a utricle at the level of the verumontanum, filled with corpusculated fluid. At the bottom we identified the stitches from the previous incomplete surgical removal.
We approached the lesion by robot-assisted laparoscopy. A urethral catheter was positioned inside the utricle and a Foley catheter was passed into the bladder. The optical port at the umbilicus was advanced into the peritoneum and two 5-mm working ports were placed on the para-rectal lines. The bladder was suspended to the abdominal wall. The prostatic utricle was easily identified and completely excised.

Discussion

Surgical excision of the prostatic utricle is usually reserved for symptomatic cases. We describe redo surgery in an asymptomatic patient with a giant utricle, indicated to prevent the risk of malignancy and to preserve the function and fertility of the residual testis, since almost 12% of enlarged prostatic utricles in adults are associated with subfertility [6]. In 1992, Hendry and Pryor reported 26 cases of male subfertility caused by prostatic utricle, with semen quality improving in 38.5% of patients after surgical treatment [1]. In addition, some reports describe an incidence of malignancy of 3% [7].

Surgical management is challenging due to the rarity of the disorder and the proximity to the ejaculatory ducts, pelvic nerves, rectum, vas deferens, and ureters. Several endoscopic techniques have been reported, such as transurethral cyst catheterization and aspiration, cyst orifice dilatation, utricle incision, transurethral deroofing, and trans-vesical excision, all with high recurrence rates [2, 8]. Open surgery seems to give better results, but the lesion lies too high for a perineal approach and too low for abdominal surgery. Moreover, all the described techniques, although requiring extensive dissection, result in poor exposure and a high risk of injury to adjacent structures [9-12]. The minimally invasive approach has been recommended in several reports [4, 13-15] because of the clearer view of the deep pelvic structures. In our case, the original laparoscopic surgery had not allowed removal of the entire lesion, and we opted for robotic-assisted laparoscopy
in the redo procedure in order to combine the advantages of laparoscopy with improved three-dimensional (3D) visualization and high instrument dexterity. The 15-fold magnification of the surgical field given by the 3D camera has been reported to translate into enhanced intraoperative and postoperative outcomes [16, 17]. Nevertheless, few cases of robot-assisted excision of retrovesical structures have appeared in the literature to date [3-5].

We found that the excellent visualization of the retrovesical structures made this technique safe, lowering the risk of injury to the vas deferens, ureters, rectum, and bladder neck. Together with the excellent magnification, the wristed instruments allow for improved dexterity in the small confines of the pelvis, and the three-dimensional camera partially compensates for the lack of tactile feedback on the retrovesical structures. Optimal port placement with a wide angle avoids collision of the robotic arms and can be instrumental in performing more precise surgery [18-20].

Conclusions

Robotic-assisted surgery allowed complete removal of a large utricle that had already undergone a previous unsuccessful laparoscopic resection. We found the minimally invasive robotic procedure to be advantageous because of its greater magnification, 3D visualization, and highly dexterous wristed instruments. These features allowed us to avoid injury to other structures in the retrovesical space, in spite of the adhesions from the first attempt to treat the utricle. Robot-assisted laparoscopy should be considered in redo utricle procedures, and as a valid alternative to laparoscopy for the primary treatment of the prostatic utricle.
2017-10-02T09:24:43.789Z
2015-03-13T00:00:00.000
{ "year": 2015, "sha1": "649e04d37fce6d3565f8457f9d4a1ddcb406d202", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2015.03.024", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "374fc76dca7bf9f01e2a42cd533cdbd2fe6a493a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211526536
pes2o/s2orc
v3-fos-license
A Novel Method for Reconstructing CT Images in GATE/GEANT4 with Application in Medical Imaging: A Complexity Analysis Approach

Abstract: For reconstructing CT images in the clinical setting, the 'effective energy' is usually used instead of the total X-ray spectrum. This approximation causes a decline in accuracy. We propose quantizing the total X-ray spectrum into irregular intervals to preserve accuracy. A phantom consisting of skull, rib bone, and lung tissues was irradiated in a CT configuration in GATE/GEANT4. We applied the inverse Radon transform to the obtained sinogram to construct a pixel-based attenuation matrix (PAM). The PAM was then used to weight the Hounsfield unit (HU) calculated for each interval's representative energy. Finally, we multiplied the associated normalized photon flux of each interval by the calculated HUs. The performance of the proposed method was evaluated through complexity and visual analyses: entropy measurements, Kolmogorov complexity, and morphological richness were calculated to evaluate complexity, and quantitative visual criteria (i.e., PSNR, FSIM, SSIM, and MSE) were reported to show the effectiveness of the fuzzy C-means approach in the segmentation task.

Introduction

Clinical imaging techniques are essential components of medical diagnostics. Computed tomography (CT) is one of the most widely used medical imaging methods, in which attenuation properties are used to calculate Hounsfield units (HU) and visualize objects. Scanner type, projection system, and reconstruction algorithm all affect a CT scanner's output, but modifying the scanner type or projection system requires substantial investment in physical development [4]. Therefore, many studies have instead contributed improved reconstruction algorithms for better scanning of different phantoms [7].
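Among the complexity measures named in the abstract, Shannon entropy is the simplest to illustrate. A minimal, dependency-free sketch (not the authors' implementation) computing the entropy of an image's grey-level histogram, in bits per pixel:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """H = -sum_k p_k * log2(p_k) over the observed grey levels."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two toy "images" given as flat lists of 8-bit grey levels:
uniform = [128] * 64       # constant image: a single grey level
ramp = list(range(256))    # every 8-bit grey level exactly once
```

A constant image carries 0 bits per pixel, while an image using all 256 levels equally carries the maximum 8 bits; higher-contrast reconstructions therefore tend to score higher on this measure.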
CT imaging is an inverse problem in which analytical and iterative reconstruction methods are used to form images [10]. These reconstruction methods are founded on attenuation coefficients and the 'effective energy' of the total X-ray spectrum. However, using the effective energy instead of the whole range lowers the contrast level and can introduce artifacts. Several image enhancement techniques have been proposed to address these issues. Chen et al. [2] developed a low-rank and sparse decomposition framework to simultaneously reconstruct and segment tissues in dynamic positron emission tomography (PET). Since PET has relatively low spatial resolution and a high noise level, they proposed a mixed CT and PET architecture to characterize tissue elements reliably. Xu et al. [20] proposed an image reconstruction model for limited-angle CT, regularized by edge-preserving diffusion and smoothing. Chen et al. [3] developed a prior contour-based total variation method to enhance edge information in compressed-sensing reconstruction for cone-beam computed tomography (CBCT). Although CBCT has been widely used in radiation therapy for onboard target localization, they showed that standard total variation reconstruction tends to over-smooth edge information. Wang et al. [19] proposed a method for reconstructing CT data in limited-angle CT devices. To solve the ill-posed problem, they used an iterative re-weighted scheme in which the re-weighting is incorporated into the total anisotropic variation, thereby approximating the most direct measure of L0-norm sparsity.
Gholami [9] created an attenuation map by applying the inverse HU to CT images reconstructed at 70 keV. Although quantizing the 'effective energy' could provide a better reconstruction, they neutralized the effect of HU by using its inverse and did not consider the statistical distribution of the source photon flux in the quantization. Following their idea, we propose a novel post-processing algorithm that covers more of the total X-ray spectrum and establishes a trade-off between accuracy and computational cost. We quantized the X-ray spectrum into 13 intervals ranging from 10 to 140 keV. To validate the proposed method, we created a phantom in the GATE/GEANT4 environment consisting of skull, rib bone, and lung tissues surrounded by water. This phantom was irradiated in a double-wedge fashion by a fan-beam X-ray. To calculate the effective energy of each interval, we used the mean energy of the interval and its associated water attenuation coefficient [12]; see Eq. (1), where μ is the attenuation coefficient and μ_w is the water attenuation coefficient. A pixel-based attenuation matrix (PAM) was then created by applying the back-projection method. The PAM was used to weight the HU values and to normalize the photon flux. We observed that the proposed post-processing method increases the contrast of the target tissue in the CT image and subsequently eases the segmentation task. The rest of this paper is organized as follows: Section 2 describes the proposed post-processing method, experimental results are discussed in Section 3, and the conclusion is drawn in Section 4.
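The flux-weighted combination step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact implementation: the function and array names (`weighted_hu`, `mu_by_interval`, `flux`) are hypothetical, each interval's attenuation map stands in for its PAM, and the standard HU definition 1000·(μ − μ_w)/μ_w is assumed.

```python
import numpy as np

def weighted_hu(mu_by_interval, mu_water_by_interval, flux):
    """Combine per-interval HU maps into one post-processed HU image.

    mu_by_interval:       (k, H, W) attenuation maps, one per spectrum interval
    mu_water_by_interval: (k,) water attenuation coefficient per interval
    flux:                 (k,) photon counts per interval (unnormalized)
    """
    mu_by_interval = np.asarray(mu_by_interval, dtype=float)
    flux = np.asarray(flux, dtype=float)
    weights = flux / flux.sum()                   # normalized photon flux
    hu = np.empty_like(mu_by_interval)
    for i, (mu, mu_w) in enumerate(zip(mu_by_interval, mu_water_by_interval)):
        hu[i] = 1000.0 * (mu - mu_w) / mu_w       # HU at this interval's energy
    # flux-weighted sum over the k intervals -> one (H, W) image
    return np.tensordot(weights, hu, axes=1)
```

With two equally weighted intervals, a region whose attenuation is twice that of water in one interval (HU = 1000) and exactly water in the other (HU = 0) comes out at HU = 500.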
Methodology The imaging environment in GATE/GEANT4 is an air cube 50 cm on each side, spanning {(−25, 25, 25), (25, −25, 25), ...}. This cube defines our coordinate system, and the remaining components are placed with respect to it. The source is a fan-beam CT geometry with a size of 0.5×0.5 mm², placed at (0, 0, 150). The scanner consists of 30×16 cubic cell detectors (0.5×0.5×1 mm³) made of lutetium, silicon, and oxygen. The phantom is a cylinder with a radius of 5 mm and a height of 6 mm consisting of skull, rib bone, and lung tissues (1×1×2 mm³) surrounded by water. The densities of the tissues are 0.26, 1.92, and 1.61 g/cm³, respectively. The structure of the phantom and the positions of the tissues are shown in Fig. 1. GATE, the Geant4 Application for Tomographic Emission developed by the international OpenGATE collaboration, is widely used in numerical simulations for medical imaging and radiotherapy, taking advantage of well-validated physics models.

The proposed post-processing method aims to increase the contrast of tissues in the reconstructed image through the steps of Algorithm 1:

Algorithm 1: Proposed post-processing algorithm.
Input: PAX ← projected attenuation X-ray; W ← water attenuation coefficient; F ← photon flux value.
4. Take the Kolmogorov-Smirnov test to find the best distribution that fits X: D_n = sup_x |F_n(x) − F(x)|, where F(x) is the hypothesized distribution, F_n(x) = (1/n) Σ_i I_(−∞,x](X_i) is the empirical cumulative distribution function, and the indicator I_(−∞,x](X_i) equals 1 if X_i ≤ x and 0 otherwise.
8. Calculate the "effective energy" of each interval.
Output: post-processed HU → wHU

Figure 2 and Fig. 3 show the output of the CT scanner before and after applying the post-processing approach, respectively. To visually distinguish the differences between standard and post-processed CT images, we used the HSV color map when illustrating the images reconstructed at 70 keV (see Fig. 2 (g) and Fig. 3 (g)). As is evident, the proposed post-processing method not only reduces the associated artifacts but also makes CT images more amenable to segmentation.

Experimental Results To evaluate the post-processing method, we conducted visual and complexity experiments in which entropy measurements, Kolmogorov complexity, morphological richness, and quantitative visual criteria (i.e., PSNR, FSIM, SSIM, and MSE) were calculated. The phantom constructed in the GATE/GEANT4 environment was made of skull, rib bone, and lung tissues surrounded by water. The radiation range of the fan-beam X-ray was varied from 10 to 140 keV so as to cover the double wedge.

Probability density functions were calculated for both CT and post-processed CT images to compare them quantitatively. Let a simulated image be represented by the histogram of its indexed values. To compare CT and post-processed CT images (obtained as n independent realizations of a bounded probability distribution with smooth density), we combined ranges of indexed values into histogram columns following Scott's normal reference rule [16]. Figure 4 (a) and Fig. 4 (b) show the aligned indexed values of all reconstructed CT images before and after applying the post-processing approach, respectively. The bins in the histogram of the post-processed CT images (Fig. 4 (b)) have relatively less overlap and a wider distribution than those in Fig. 4 (a). Hence, segmenting a CT image by a shape-, clustering-, or entropy-based method becomes more straightforward. Moreover, mounting the proposed post-processing approach into imaging software gives an expert radiologist the flexibility to modify the energy level to reach the best tissue differentiation in the final CT image.
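Scott's normal reference rule can be applied directly with NumPy. The sketch below shows the textbook bin width h = 3.5·s·n^(−1/3) next to NumPy's built-in `bins="scott"` (which uses the equivalent constant (24√π)^(1/3) ≈ 3.49); the simulated `values` array is an illustrative stand-in for an image's indexed values.

```python
import numpy as np

def scott_bin_width(x):
    """Scott's normal reference rule: h = 3.5 * s * n**(-1/3)."""
    x = np.asarray(x, dtype=float)
    return 3.5 * x.std(ddof=1) * len(x) ** (-1.0 / 3.0)

rng = np.random.default_rng(0)
values = rng.normal(size=1000)            # stand-in for an image's indexed values
# NumPy implements Scott's rule directly as a bin estimator
counts, edges = np.histogram(values, bins="scott")
```

The edge spacing returned by `np.histogram` differs slightly from h because the bins are stretched to span the data range evenly.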
Complexity Analysis In this study we used a Kolmogorov estimation, an approximation to algorithmic complexity. Kolmogorov complexity (K) quantifies the randomness content of both CT and post-processed CT images. K(x) is defined as the length (in bits) of the smallest computer program that reproduces the object x when run on a universal Turing machine U. Since K is only semi-computable, compression algorithms can be utilized to approximate it. However, it has been shown [21] that compression algorithms are entropy-rate approximations. Therefore, to capture algorithmic content, we adopted an algorithmic-probability-based approach to estimate K. Algorithmic probability, which is inversely related to K, is the probability that an object x is produced by a universal Turing machine. It can be empirically estimated from the output frequency of small Turing machines using the Coding Theorem Method (CTM) and the Block Decomposition Method (BDM) [17]. To calculate K [15], we considered layers in which images are quantized and binarized into q digital levels. This quantization precedes the aggregation of CTM values, where each layer gets decomposed.

Algorithm 2: Layered Block Decomposition
// CTM is a hash table with binary 2D blocks as keys; the output is the Kolmogorov complexity estimate.
Function LayeredBDM(grayImage, CTMs, blockSize, blockOffset, q):
  // quantize the image into q digital levels and binarize into q digital layers
  grayImage ← quantize(grayImage, q)
  blocksList ← {}
  for i in 1 to q do
    binImage ← binarize(grayImage, i)
    blocks ← partition(binImage, blockSize, blockOffset)

Figure 5 (a) shows the BDM estimates of K for CT and post-processed CT images using the layered BDM with q = 256. Figure 5 (b) shows the K estimate obtained with the lossless compression algorithm Lempel-Ziv-Welch (LZW) for comparison. Both Fig. 5 (a) and Fig. 5 (b) show an almost monotonic increase in complexity as the energy level increases. For energy levels below 65 keV in Fig. 5 (a), a small difference in K between CT and post-processed CT is evident. We performed Spearman's rank correlation test between the K values obtained with the layered BDM and the compression lengths. For the CT data this test gives ρ = 0.96 with p-value = 1.91 × 10⁻⁶, and for the post-processed CT data it gives ρ = 0.97 with p-value = 5.32 × 10⁻⁷. These results indicate that the layered BDM is more sensitive to morphological changes in the images than the estimates obtained from lossless compression.

The benefit of utilizing entropy in the context of complexity is that it considers only the probability of observing a specific event; it does not express any interpretation of the meaning of the events themselves. In this study we calculated approximate, conditional, corrected conditional, sample, and fuzzy entropy measurements to characterize the complexity of the CT (Fig. 6 (a)) and post-processed CT (Fig. 6 (b)) data. Approximate entropy (ApEn) [13] quantifies the amount of regularity and the unpredictability of fluctuations in reconstructed CT images. It modifies an exact regularity statistic, the Kolmogorov-Sinai entropy, to handle system noise. To address the complexity of reconstructed CT images from different perspectives, we also calculated the following entropy measures.

• We measured conditional entropy [5] to quantify the amount of information needed to describe the outcome of CT images at different energy levels.
• We calculated corrected conditional entropy (CCEn) as the minimum over L of the function in Eq. (2),

CCEn(L) = Ê(L/L−1) + perc(L) · Ê(1),   (2)

where Ê(L/L−1) is the estimate of the Shannon entropy in the L/L−1-dimensional phase space, perc(L) is the percentage of single points in the L-dimensional phase space, and Ê(1) is the estimated Shannon entropy for L = 1.
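The compression-length side of this comparison can be sketched compactly. Two assumptions are flagged here: zlib's DEFLATE (an LZ77 variant available in the standard library) stands in for the LZW compressor named in the text, and the two arrays fed to the Spearman test are illustrative stand-ins, not the paper's measured BDM values and compression lengths.

```python
import zlib
import numpy as np
from scipy.stats import spearmanr

def compression_complexity(image):
    """Approximate K(x) by compressed length in bits.

    DEFLATE is used here as a stand-in for LZW; both give
    entropy-rate-style upper bounds on K(x).
    """
    raw = np.ascontiguousarray(image, dtype=np.uint8).tobytes()
    return 8 * len(zlib.compress(raw, level=9))

rng = np.random.default_rng(1)
flat = np.zeros((64, 64), dtype=np.uint8)                     # maximally regular
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # maximally random

# rank agreement between two complexity estimates, as in the paper's
# Spearman test (the arrays below are illustrative stand-ins)
rho, p = spearmanr([1, 2, 3, 4, 5], [10, 22, 31, 44, 58])
```

A flat image compresses to a few bytes while the noisy one barely compresses at all, so the estimate separates regular from random content as expected.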
• Sample entropy (SEn) [14] is a modification of approximate entropy with two advantages over ApEn: independence from data length and a relatively trouble-free implementation. Because self-matching is excluded from sample entropy, an unbiased interpretation of signal irregularity is possible. For a given embedding dimension m, tolerance r, and number of data points N, SEn is calculated by Eq. (3),

SEn(m, r, N) = −ln(A/B),   (3)

where A is the number of template vector pairs of length m + 1 with d[X_{m+1}(i), X_{m+1}(j)] < r, and B is the number of template vector pairs of length m with d[X_m(i), X_m(j)] < r.
• Fuzzy entropy (FEn) estimates short-length data without its validity being restricted by the parameter values. It evaluates global deviations from ordinary sets and is resistant to noise and jamming phenomena (Eq. (4)):

FEn(m, n, r, N) = ln φ^m(n, r) − ln φ^{m+1}(n, r),   (4)

where m and r are the phase-space dimension and similarity tolerance, respectively, n is the gradient of the exponential function, N is the number of data points, and D is the similarity degree.

Lower entropy means that the CT image is more homogeneous. From Fig. 6 (b) we can see that, at the different energy levels (except 70 keV), the entropy of the post-processed CT images stays relatively constant, whereas the entropy in Fig. 6 (a) increases monotonically. Therefore, designing an analytical approach for raw CT images involves a trade-off between the amount of information and the number of components that characterize the image. The sharp entropy change in Fig. 6 (b) echoes the results of previous studies (e.g., Ref. [1]) in which an energy level of 70 keV was used to form the attenuation map and obtain the best reconstruction results. Indeed, at this energy level we have the maximum amount of information, which has been shown to be the appropriate energy level for X-ray-based medical image reconstruction.
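Eq. (3) can be implemented directly. The sketch below is a minimal version under common conventions not spelled out in the text: Chebyshev distance between templates, self-matches excluded, and the tolerance r scaled by the signal's standard deviation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SEn = -ln(A/B) per Eq. (3).

    A counts template pairs of length m+1 within tolerance r,
    B counts pairs of length m; self-matches are excluded.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_pairs(mm):
        # all overlapping templates of length mm
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all *later* templates (no self-matching)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    B = count_pairs(m)
    A = count_pairs(m + 1)
    return -np.log(A / B)
```

A smooth periodic signal should score well below white noise, consistent with "lower entropy means more homogeneous."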
Visual analysis In this section we show the performance of the proposed post-processing approach using morphological richness analysis [18] and fuzzy c-means (FCM) [8] based segmentation.

Morphological richness (MR) is the number of different configurations of 3 × 3 blocks divided by the number of all possible configurations (2⁹). To amplify changes in the restructuring of the reconstructed images, we calculated the power spectrum of the morphological richness using Eq. (5), where F_T(ω) is the Fourier transform of the signal (the vectorized CT image) over period T. The power spectrum is itself the Fourier transform of the autocorrelation function, which captures both the long- and short-term correlation within the signal (Eq. (6)).

The results of our analysis are illustrated in Fig. 7. Differences in amplitude and in the "dominating frequencies" are evident in the processed CT images, implying that analyzing processed CT images yields more information. Solid, slow components in the frequency domain indicate strong correlation between macro-structures, while extreme, fast oscillations indicate correlation in the micro-structures.

Image segmentation plays an essential role in medical image processing [6]. Fuzzy c-means (FCM) is one of the popular clustering algorithms [8] used in medical image segmentation. However, FCM is highly vulnerable to noise, which is an unavoidable element of reconstructed CT images. To show the performance of the proposed post-processing approach, we applied FCM segmentation to both conventional CT and processed CT images. FCM minimizes an objective function by partitioning a finite collection of n elements X = {x_1, ..., x_n} into a collection of c fuzzy clusters with respect to a given criterion. FCM returns a list of c cluster centers C = {c_1, ..., c_c} and a partition matrix W = (w_ij), w_ij ∈ [0, 1], i = 1, ..., n, j = 1, ..., c, where each element w_ij gives the degree to which element x_i belongs to cluster c_j. The objective function can be defined by Eq. (7),

J = Σ_{i=1}^{n} Σ_{j=1}^{c} w_ij^m ||x_i − c_j||².   (7)

To evaluate FCM, we calculated four measurements (Eq. (8)): peak signal-to-noise ratio (PSNR), feature similarity index (FSIM), structural similarity index (SSIM), and mean square error (MSE), given here in their standard forms:

MSE(I_T, I_R) = (1/(mn)) Σ_{x=1}^{m} Σ_{y=1}^{n} [I_T(x, y) − I_R(x, y)]²,
PSNR(I_T, I_R) = 10 log₁₀(MAX²_{I_T} / MSE),
SSIM(I_T, I_R) = (2 μ_{I_T} μ_{I_R} + c_1)(2 σ_{I_T,I_R} + c_2) / [(μ²_{I_T} + μ²_{I_R} + c_1)(σ²_{I_T} + σ²_{I_R} + c_2)],
FSIM = Σ_x S_L(x) PC_m(x) / Σ_x PC_m(x),   (8)

where I_T is the target image of size m × n, PC_m is the weighting factor for S_L(x), which is the overall similarity between I_T and a reference image I_R, and μ is the mean of the image.

Conclusion We presented an algorithmic protocol for increasing tissue discrimination via post-processing of CT images. A phantom consisting of skull, rib bone, and lung tissues was created and irradiated in GATE/GEANT4 to validate the proposed method. By quantizing the total X-ray spectrum into irregular intervals, we obtained different sinograms with different levels of tissue discrimination. In each energy interval, the mean was taken as the representative energy. A Pixel-based Attenuation Matrix (PAM) was then computed for each representative energy by applying the inverse Radon transform to the associated sinogram. We also calculated the normalized photon flux of each interval to use as a weighting factor. When calculating the Hounsfield unit (HU) values at each interval's representative energy, we used the PAM and the normalized photon flux to modify the CT image.
The performance of the proposed method was demonstrated through complexity and visual analysis. Entropy measurements, Kolmogorov complexity, and morphological richness were calculated to evaluate the complexity. Calculating the morphological richness (MR) of the post-processed CT images at different energy levels shows that the proposed post-processing method better uncovers tissue differentiation. Quantitative visual criteria (i.e., PSNR, FSIM, SSIM, and MSE) were reported to show the effectiveness of the fuzzy C-means approach in the segmentation task. These criteria show better segmentation performance on post-processed CT images at the majority of energy levels, indicating that better tissue discrimination is reached as a result of applying the proposed post-processing method. Therefore, using this method in a clinical setup can result in a lower degree of irradiation and less tissue damage.

Fig. 2 CT images (200 × 200) at energy levels of 15-135 keV (a-k). The CT image at the energy level of 70 keV (g) is illustrated in the HSV color map to make differences visually distinguishable.
Fig. 3 Post-processed CT images (200 × 200) at energy levels of 15-135 keV (a-k). The image at the energy level of 70 keV (g) is illustrated in the HSV color map to make differences visually distinguishable.
Fig. 4 Aligned indexed values of all reconstructed CT images on the HU scale (a) before and (b) after applying post-processing.
Fig. 6 Approximate, conditional, corrected conditional, sample, and fuzzy entropy measurements for (a) CT images and (b) post-processed CT images at different energy levels.
Fig. 7 Power spectrum of the entropy of the calculated morphological richness for (a) CT images and (b) processed CT images. Each color is associated with an energy level.
Table 1 Evaluation criteria of FCM applied to both CT and post-processed CT images.
In Eq. (8), σ² is the variance of the image, σ_{I_T,I_R} is the covariance of I_T and I_R, and c_1 and c_2 are two variables that stabilize divisions with weak denominators; in our experiments we set c_1 = (0.01 × 2155)² and c_2 = (0.03 × 2155)². The quantitative measurements, shown in Table 1, help us draw the following concluding remarks. (1) We see sharp changes in the entropy measures of both CT and post-processed CT images in the range of 50-90 keV. The reason for these sharp changes is the slight tissue differentiation between the phantom's components and water. Moreover, this change agrees with the results of previous studies (e.g., Ref. [1]), where the maximum amount of information indicated the appropriateness of this energy level for X-ray-based medical image reconstruction. (2) The quantitative measurements show that the post-processing algorithm improved the quality of the CT images and decreased the noise level. Therefore, better tissue discrimination can be reached with a lower degree of irradiation and less tissue damage. (3) Although CT images are usually reconstructed at an energy level of 70 keV, our experiments show that working with CT images at different energy levels is possible, either by applying the proposed post-processing method or by physical modification. In this way, an expert can reach better tissue discrimination in CT images.
HGF and Direct Mesenchymal Stem Cells Contact Synergize to Inhibit Hepatic Stellate Cells Activation through TLR4/NF-κB Pathway Aims Bone marrow-derived mesenchymal stem cells (BMSCs) can reduce liver fibrosis. Apart from the paracrine mechanism by which the antifibrotic effects of BMSCs inhibit activated hepatic stellate cells (HSCs), the effects of direct interplay and juxtacrine signaling between the two cell types are poorly understood. The purpose of this study was to explore the underlying mechanisms by which BMSCs modulate the function of activated HSCs. Methods We used direct and indirect BMSC-HSC co-culture systems to evaluate the anti-fibrotic effect of BMSCs. Cell proliferation and activation were examined in the presence of BMSCs and HGF. c-met was knocked down in HSCs to evaluate the effect of HGF secreted by BMSCs. TLR4 and myeloid differentiation primary response gene 88 (MyD88) mRNA levels and NF-κB pathway activation were determined by real-time PCR and western blotting analyses. The effect of BMSCs on HSC activation was investigated in vitro with either MyD88 silencing or overexpression in HSCs. Liver fibrosis in rats treated with CCl4, with and without BMSC supplementation, was compared. Histopathological examinations and serum biochemical tests were compared between the two groups. Results BMSCs remarkably inhibited the proliferation and activation of HSCs by interfering with the LPS-TLR4 pathway through a cell-cell contact mode that was partially mediated by HGF secretion. The NF-κB pathway is involved in the inhibition of HSC activation by BMSCs. MyD88 overexpression reduced the BMSC inhibition of NF-κB luciferase activation. BMSCs protected against liver fibrosis in vivo. Conclusion BMSCs modulate HSCs in vitro via the TLR4/MyD88/NF-κB signaling pathway through cell-cell contact and HGF secretion. BMSCs have therapeutic effects in cirrhotic rats. Our results provide new insights into the treatment of hepatic fibrosis with BMSCs.
Introduction Liver fibrosis is the excessive deposition of extracellular matrix and scar formation surrounding damaged liver, and it can be effectively reversed [1,2]. Activated hepatic stellate cells are the principal cells generating extracellular matrix (ECM) during liver fibrosis. Excess ECM production is the primary cause of fibrosis, which eventually leads to cirrhosis. Acquired fibrosis may result from the action of a number of pathogenic factors, toxic exposures, chronic viral hepatitis, or non-alcoholic fatty liver disease. These etiological factors may act separately or in combination to produce cumulative effects [3]. A large number of in vivo experimental and clinical studies have shown that endotoxin levels are significantly increased in patients with liver cirrhosis, and LPS (an endotoxin) can directly activate HSCs in vivo. TLR4 is the primary LPS receptor, and TLR4 polymorphisms are closely related to liver fibrosis. Thus, the LPS-TLR4 pathway plays an important role in fibrosis [4]. TLR4 signals through adaptor proteins, including MyD88, to activate downstream effectors that include NF-κB, mitogen-activated protein kinase (MAPK), and phosphatidylinositol 3-kinase (PI3K). Collectively, these pathways regulate the expression of pro-inflammatory cytokines and of genes that control cell survival and apoptosis. BMSCs are pluripotent stem cells with the potential to differentiate into liver cells. Recent studies have shown that BMSCs play a substantial role in liver fibrosis treatment without allograft rejection. Animal model studies have shown that BMSC infusion ameliorates liver fibrosis and reverses fulminant hepatic failure. A number of clinical trials have also shown that BMSCs can effectively alleviate end-stage liver disease and improve symptoms and liver function, indicating the effectiveness and safety of BMSCs in clinical implantation [5][6][7].
However, it has also been reported that BMSCs have the potential to promote fibrosis [8,9]. Therefore, for therapeutic applications, it is important to understand the potency and possible repair mechanisms of BMSCs to help us understand the nature of hepatic fibrosis. Since both the LPS-TLR4 pathway and BMSCs are involved in liver fibrosis, we hypothesized that BMSCs may interfere with the LPS-TLR4 pathway and inhibit NF-κB activation during fibrosis. To test this hypothesis, we examined the expression of TLR4 in HSCs stimulated with different doses of LPS and investigated the regulatory role of BMSCs in MyD88-mediated, LPS-stimulated TLR4 expression. Ethics Statement Normal liver and bone marrow samples. Normal liver samples were collected from patients undergoing resection of hepatic hemangiomas at the Department of Hepatobiliary Surgery, the Third Affiliated Hospital of Sun Yat-sen University. Bone marrow suspension cells (10 ml) were obtained from healthy donors in the Department of Infectious Diseases, the Third Affiliated Hospital of Sun Yat-sen University. All samples were obtained with written, informed consent in accordance with the Sun Yat-sen University Ethical Committee requirements. Bone Marrow Separation for Primary Human BMSC Cultures BMSC isolation and culture were described in our previous study [10]. Prior to experimental use, third-passage BMSCs were confirmed by flow cytometry to be CD44- and CD90-positive and to lack expression of CD45 and CD34. Human HSC Isolation and Cell Line Culture HSCs were isolated by a two-step collagenase perfusion from surgical specimens of two normal human livers as described previously [11]. All tissues were obtained by qualified medical staff, with donor consent and in accordance with the Sun Yat-sen University Ethical Committee requirements. Cell viability was determined using trypan blue exclusion staining. The hepatic stellate cell line LX2 was obtained from the Cancer Center of Sun Yat-sen University.
Isolated HSCs and LX2 cells were seeded on uncoated plastic tissue culture dishes and cultured in Dulbecco's modified Eagle medium (DMEM, Invitrogen, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS, HyClone, Logan, UT, USA), 2 mM L-glutamine, 100 units/ml penicillin, and 100 μg/ml streptomycin, at 37 °C in a humidified atmosphere containing 5% CO2. Physical Contact Co-culture System and Insert Co-culture System GFP-labeled LX2 cells were mixed and co-cultured with BMSCs in supplemented DMEM containing 10% FBS at a 1:1 LX2-to-BMSC ratio. Since there is no barrier between the two populations, effects arise both from the exchange of soluble factors and from physical contact. As a comparison to the physical contact co-culture system, LX2 cells and BMSCs were also co-cultured in a bicompartmental system using 6.5 mm Transwell® inserts with 1.0 μm pore polycarbonate membranes, purchased from Corning (Corning, NY, USA). In this case, the two cell types shared the culture medium but were not in physical contact. Cells were seeded in DMEM and incubated overnight. Immediately prior to treatment, the medium was changed for fresh DMEM. A total of 2×10⁵ LX2 cells/ml and equivalent volumes of BMSC releasate and pellet fractions were added. Where indicated, cells were treated with 5 or 50 ng/ml of recombinant HGF (R&D Systems). In Vivo Transplantation of BMSCs Forty six-week-old SD rats were purchased from the Institute of Materia Medica (Chinese Academy of Sciences, Beijing, China). This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of Sun Yat-sen University. To induce liver cirrhosis, 6.0 ml/kg body weight of carbon tetrachloride (CCl4) mixed with olive oil (1:1 ratio) was injected intraperitoneally into rats twice a week for up to 7 weeks.
Five rats were randomly sacrificed to evaluate whether the model was successful. The remaining rats were then divided into two groups, a BMSC treatment group (25 rats) and a control group (10 rats): 1×10⁶ BMSCs in 100 μl PBS, or 100 μl PBS as a control, were injected into the tail vein. Three rats were sacrificed at each predetermined time point (12 h, 24 h, 48 h, 72 h, 144 h) after BMSC transplantation to track BMSC localization. CCl4 (3 ml/kg) was injected for another two weeks after cell transplantation to maintain persistent liver damage, and all remaining rats were sacrificed 3 weeks after BMSC transplantation. Liver tissue was collected after perfusion with 4% paraformaldehyde solution and preserved in formalin buffer for histopathological studies. For protein and total RNA isolation, liver tissue was snap-frozen in liquid nitrogen and then stored at −80 °C. Hematoxylin and Eosin (HE) and Masson Staining Rat livers were fixed in formalin for 48 h, paraffin-embedded, and sectioned into 4-μm slices for HE or Masson staining. After sealing the slides containing the tissue sections with neutral gum, the stained sections were examined microscopically at 200× magnification. Subsequently, color images of five randomly chosen microscopic fields per section were captured and analyzed with a medical image software program (Image-Pro Plus, Media Cybernetics, USA) to semi-quantitatively determine the areal density (AD), i.e., collagen fiber area over liver area. Serum Assay Blood was taken from the abdominal aorta, and the following indicators were analyzed: total protein (TP), albumin (ALB), alanine aminotransferase (ALT), and aspartate aminotransferase (AST). Other experimental procedures are available in Materials and Methods S1. Statistical Analysis Data are reported as mean values ± SD. The statistical significance of differences between mean values was determined by Student's t test. A P value of less than 0.05 was considered significant.
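The group comparison just described can be sketched as follows. The two samples below are simulated stand-ins with hypothetical means and SDs (loosely echoing the cytokine concentrations reported later), not the study's measurements, and `ttest_ind` is SciPy's two-sample Student's t test.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
# hypothetical stand-ins for a control group and a treated group (mean, SD, n=6)
control = rng.normal(loc=0.33, scale=0.04, size=6)
treated = rng.normal(loc=0.88, scale=0.03, size=6)

t_stat, p_value = ttest_ind(control, treated)
significant = p_value < 0.05   # the study's significance threshold
```

With group means this far apart relative to their SDs, the test rejects the null hypothesis of equal means.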
Statistical calculations were performed using SPSS v16 software (SPSS Inc., Chicago, Illinois, USA). Bone Marrow-derived Cells Expressed Characteristic Surface Markers of BMSCs BMSCs obtained by bone marrow aspiration and expanded in vitro appeared similar to fibroblasts, with a characteristic spindle-shaped fusiform morphology (Figure 1A). LX2 cells and primary HSCs were also fibroblast-like (Figures 1A and 1B). Among third-passage cultured cells analyzed by flow cytometry, more than 97% of the cells expressed CD44 (97.2 ± 1.35%, Figure 1B) and CD90 (98.0 ± 1.20%, Figure 1C), surface markers characteristic of BMSCs. The absence of contaminating hematopoietic cells in the BMSC population was verified by the lack of surface antigens defining hematopoietic progenitor cells (CD34, 1.1 ± 0.03%) or leukocytes (CD45, 0.68 ± 0.05%). Thus, the bone marrow cells after the third passage were of high purity and expressed CD44⁺, CD90⁺, CD34⁻, and CD45⁻, markers for BMSCs. LPS Induced Activation of LX2 Cells and Upregulation of TLR4 We first evaluated the effect of LPS stimulation on hepatic stellate cells in vitro. LX2 cells were cultured with different concentrations of LPS (0, 50, 100, 125 ng/ml) to evaluate whether there is a dose-dependent effect. IL-8 and TGF-β, which are secreted by activated hepatic stellate cells, were measured by ELISA. The results showed that quiescent LX2 cells secreted only small amounts of IL-8 (0.33 ± 0.04 pg/ml) and TGF-β (0.31 ± 0.05 pg/ml) (Figure 2A). However, IL-8 and TGF-β secretion by LX2 cells increased significantly when LPS was added: after stimulation with 100 ng/ml LPS, IL-8 and TGF-β secretion from LPS-activated LX2 cells was 0.88 ± 0.03 pg/ml and 0.74 ± 0.05 pg/ml, respectively (Figure 2A). Furthermore, IL-8 and TGF-β expression increased in parallel with increasing concentrations of LPS. When the LPS concentration reached 100 ng/ml, IL-8 and TGF-β expression levels reached up to three times those of the untreated cells.
However, when the LPS concentration was increased to 125 ng/ml, a significant number of cells died. Therefore, 100 ng/ml LPS was used in all subsequent experiments. The expression of TLR4 in two human primary HSC lines and in LX2 cells was then assessed by RT-PCR. In quiescent LX2 cells and primary HSCs (no LPS stimulation), only low levels of TLR4 mRNA were detected. However, TLR4 was substantially up-regulated, by more than two-fold, in LPS-activated primary HSCs and LX2 cells in vitro (Figure 2B). These results suggest that LPS stimulates HSC activation and TLR4 gene expression. BMSCs Inhibit the Activation of LX2 Cells by Interfering with the LPS-TLR4 Pathway Since BMSCs benefit liver fibrosis patients, we tested whether BMSCs also inhibit LX2 activation in vitro. We co-cultured LX2 cells with BMSCs separated by a Transwell to test whether BMSCs affect LPS-induced LX2 activation. RT-PCR analysis indicated that LX2 cells and BMSCs without LPS stimulation expressed only small amounts of TLR4, α-SMA, and Col-1. After LPS stimulation, however, TLR4, α-SMA, and Col-1 expression in LX2 cells increased more than two-fold (Figure 2C), and the addition of BMSCs significantly reduced this effect. Furthermore, western blot analysis indicated that LX2 cells without LPS stimulation express only small amounts of α-SMA, Col-1, and TLR4, and that LPS stimulation increased the expression of these proteins, which are known markers of HSC activation. However, when BMSCs or a TLR4 inhibitor was added, the expression of these proteins in LX2 cells decreased significantly (Figure 2D). These data suggest that the protection by BMSCs against LPS-induced LX2 activation is associated with TLR4.
BMSCs Promote Activation of the HGF Pathway during LX2 Activation

To gain better insight into the signaling pathways involved in BMSC-to-LX2 cell communication, we examined the effect of BMSC-secreted cytokines on protection against LPS-induced LX2 activation. Among the many cytokines, we tested HGF, IL-8, IL-2, IL-6, and TNF-α for their ability to affect Col-1 and α-SMA expression in LX2 cells. RT-PCR showed that only HGF decreased Col-1 and α-SMA mRNA expression in LX2 cells, by more than two-fold (Figure S1A). Considering that prolonged exposure to HGF protects LX2 cells against fibrosis, we investigated whether BMSC-derived HGF could inhibit fibrosis in LX2 cells. As shown in Figure S1B-1E, the mRNA and protein levels of Col-1 and α-SMA in LPS-stimulated LX2 cells decreased by approximately 1.5- to 2-fold within 6 to 24 hours after HGF treatment and with increasing doses of HGF from 20 to 50 ng/ml, demonstrating the effect of HGF on LX2 cells. We next investigated whether HGF protection of LX2 cells is dependent on TLR4 signaling. Interestingly, when LX2 cells were treated with different cytokines for 6 h, we found that only HGF induced a decrease in TLR4 mRNA expression (Figure 3A). We also combined HGF with other cytokines to evaluate their combined effects on LX2 cells. Western blot showed that quiescent LX2 cells (without LPS stimulation) expressed only a small amount of TLR4, whereas after LPS stimulation TLR4 expression increased markedly. We then stimulated LX2 cells with IL-8, HGF, IL-2, IL-6, or TNF-α alone, and found that only HGF reduced TLR4 expression. In addition, we stimulated LX2 cells with combinations of two or three cytokines, and western blot showed that TLR4 was down-regulated only when the combination included HGF; combinations without HGF had no effect. Furthermore, no synergistic effect was found when HGF was used together with other cytokines (Figure 3B).
We then knocked down the HGF receptor c-met in LX2 cells to investigate whether the inhibition of LX2 activation by BMSCs acted through HGF. As Figure 3C indicates, two c-met siRNAs knocked down c-met protein in LX2 cells. When BMSCs were used to treat LX2/c-met-RNAi cells, LPS-induced LX2 activation could not be suppressed (Figure 3D). These results confirmed that BMSCs protected LX2 cells from LPS by secreting HGF.

HGF and Direct BMSC-LX2 Cell Contact Synergize to Inhibit LX2 Activation Gene Expression

It is believed that effective treatments aimed at suppressing the activation and proliferation of LX2 cells would reduce their deleterious effects on the progression of fibrosis. The proliferation of GFP-labeled cells leads to a progressive reduction in mean fluorescence intensity (MFI). We therefore calculated the in vitro growth rate of LX2 cells by MFI after interaction with BMSCs. Flow cytometry analyses showed that the MFI of GFP gradually decreased with incubation time during LX2 growth and that directly co-cultured LX2 cells exhibited relatively slow proliferation rates compared to monocultured cells (Figure 4A). When GFP-labeled LX2 cells and BMSCs were co-cultured with direct cell-cell contact, the fluorescence label of LX2 cells was inherited by daughter cells after cell division but was not transferred to adjacent BMSCs in the co-culture population. Fluorescence-activated cell sorting was then performed to separate fluorescence-positive cells (LX2) and fluorescence-negative cells (BMSCs). Co-cultured cells (3×10^6) were sorted, and 0.9-1.2×10^6 of each type were obtained after cell sorting. The relative mRNA expression levels of α-SMA, Col-1, TLR4, and MyD88 in LX2 cells co-cultured with physical contact, in the insert co-culture system (no physical contact), and in LX2 cells alone with HGF were significantly reduced compared to the PBS control group (Figure 4B).
In the group of LX2 cells in physical-contact co-culture with BMSCs, the relative gene expression levels were much lower than in the insert co-culture system and in LX2 cells alone with HGF. Results for the time- and dose-dependent effects of the three kinds of treatments on LX2 cells, assessed by RT-PCR, are also shown (Figure 4C and 4D). Genes down-regulated by at least one-half were identified in the BMSC physical-contact co-culture and HGF groups. The patterns of gene expression for cells exposed to contact co-culture differed from those in co-culture that permitted only soluble interactions (i.e., the insert co-culture system). Western blot also showed the same synergistic effect of HGF and direct BMSC-LX2 cell contact (Figure 4E and 4F). Taken together, these results show that BMSC-derived HGF is necessary to reduce the expression of several fibro-activation genes in vitro, but that, in addition, BMSCs provide other signals through physical contact that synergize with HGF signaling.

The NF-κB Signaling Pathway is Inhibited by BMSCs in a Contact-dependent Manner and Cooperates with HGF Signaling to Inhibit LX2 Activation

We next sought to define the molecular pathway induced upon BMSC contact that cooperates with HGF to inhibit fibro-activation gene expression. Using a set of luciferase reporter assays for several pathways, we screened for their activation in LPS-treated LX2 cells in response to interaction with BMSCs or HGF (Figure 5A). Interestingly, co-incubation with BMSCs or addition of HGF decreased activation of the NF-κB pathway in LX2 cells (Figure 5A). Activation of the NF-κB pathway was decreased when cells were incubated with either BMSCs or the pellet fraction from BMSCs. RT-PCR further showed that the NF-κB downstream genes, including cyclinD1, c-Myc, Mmp9, CXCR4, Cox2 and VEGF, were inhibited by BMSCs, HGF, or HGF+BMSCs (Figure 5B).
NF-κB is composed of the p50 and p65 subunits, and in resting cells the NF-κB complex is sequestered in the cytosol by an inhibitory subunit, IκB. Once stimulated, IκB is phosphorylated and degraded, which allows NF-κB to translocate to the nucleus and induce the expression of its target genes. We therefore performed p65 staining to evaluate the effect of BMSCs on NF-κB signaling. As indicated in Figure 5C, LPS-treated LX2 cells demonstrated primarily nuclear NF-κB staining, whereas exposure to BMSCs or HGF for 1 h resulted in a rapid translocation of NF-κB to the cytoplasm. This effect was reinforced by direct co-culture with BMSCs together with HGF addition. Additionally, when we examined the cellular localization of NF-κB expression within the cell, we noted a decrease in nuclear NF-κB expression in BMSC- or HGF-treated cells, and this effect was likewise reinforced by direct co-culture with BMSCs together with HGF addition (Figure 5D). Western blot further confirmed that BMSCs or HGF protect against LPS-induced LX2 activation through inhibition of TLR4/MyD88/NF-κB signaling (Figure 5E). To determine whether BMSCs could protect against LPS-induced NF-κB DNA binding in LX2 cells, we performed EMSA analysis on nuclear lysates from LX2 cells. As shown in Figure 5F, nuclear NF-κB constitutively bound DNA in LPS-stimulated LX2 cells, whereas BMSCs and HGF repressed DNA binding of NF-κB. This effect was also reinforced by direct co-culture with BMSCs together with HGF addition. When we knocked down c-met in LX2 cells, the protective effects of BMSCs, HGF, or direct BMSC co-culture on LX2 cells were significantly impaired (Figure 5G). These data demonstrate that the ability of BMSCs to modulate LX2 cell activation depends on the synergistic interaction between the NF-κB and HGF signaling pathways, which is triggered by direct BMSC-LX2 cell contact.

BMSC Inhibition of the Activation of LX2 Cells Depends on MyD88

TLR4 acts as a receptor for LPS and mediates its intracellular actions via the adapter molecule MyD88.
Experiments in MyD88-knockout mice demonstrated that MyD88 is required for induction of liver injury in response to hypoxia and LPS [14,15]. We therefore further examined the levels of MyD88 and NF-κB in LPS-activated LX2 cells in response to BMSCs. MyD88 and NF-κB were highly expressed in LPS-activated LX2 cells, as determined by RT-PCR (Figure 4B). When BMSCs or HGF were added, however, the expression of MyD88 and NF-κB in LX2 cells decreased significantly. Since MyD88 is important in the LPS-TLR4 pathway and responded to BMSC treatment, we further investigated whether MyD88 is essential for BMSC protection against LPS-induced activation of LX2 cells. We established MyD88-overexpressing and MyD88-silenced LX2 stable cell lines (Figure 6A). Inhibition of LPS-induced LX2 activation by BMSCs was significantly decreased when MyD88 was overexpressed (LX2/MyD88 cells) (Figure 6B). When MyD88 was knocked down in LX2 cells, LPS-stimulated activation of LX2 was inhibited as if treated by BMSCs alone (Figure 6B). The luciferase activity of NF-κB in activated LX2 cells was also blocked by BMSCs or by MyD88 knockdown, whereas overexpressing MyD88 rescued this inhibitory effect, suggesting that these markers of stellate cell activation were directly regulated by MyD88-dependent TLR4 signaling. Interestingly, the LPS-induced luciferase activity of LX2 cells was inhibited when MyD88 was down-regulated, suggesting the involvement of TLR4 in this process (Figure 6C and 6D).

In Vivo Determination of BMSC-protected HSC Activation and Liver Fibrosis

After 8 weeks of CCl4 injection, liver fibrosis was observed (Figure 7A) and confirmed by HE- and Masson-stained sections. The Scheuer score in the control group was 9±1.25 compared with 3±0.37 in the BMSC transplantation group (P<0.05) (Figure 7B). BMSC transplantation thus produced a 2-fold decrease in fibrotic area and ameliorated the formation of liver fibrosis (Figure 7B).
Rats with fibrosis treated with PBS alone had a 4-fold increase in serum ALT (533.6±45.1 U/dl) (Figure 7C) and a 2.5-fold increase in serum AST (489.7±42.8 U/dl) (Figure 7D) activity relative to the BMSC group (ALT 136.1±38.6 U/dl, P<0.05, and AST 152.3±10.3 U/dl, P<0.01), respectively. The TP of rats treated with PBS alone was 38.2±7.1 μmol/L, versus 50.1±7.8 μmol/L for rats treated with BMSCs (Figure 7E). The ALB level of rats treated with BMSCs was 28.7±3.4 μmol/L compared to 18.5±3.8 μmol/L in PBS-treated rats (Figure 7F). These results suggested that BMSC transplantation ameliorates the CCl4-induced deterioration of liver function. Meanwhile, when frozen tissue sections from these livers were observed under fluorescence microscopy, GFP-labeled BMSCs were strongly and diffusely present near the expanding septa and in the perisinusoidal spaces of the residual hepatic parenchyma from 24 h after BMSC injection, and had proliferated by 144 h (Figure 7G); however, no fluorescent cells were observed in sections from the untreated cirrhotic rats, and few BMSCs remained in the lung and spleen. These results suggest that BMSCs labeled with GFP in vitro are able to survive in the livers of cirrhotic rats without affecting other organs. Thus, BMSC transplantation relieved liver fibrosis safely and effectively.

Discussion

In this study, we found that BMSCs inhibited LX2 cell activation in vitro and relieved liver fibrosis in vivo. Furthermore, we showed that the anti-fibrotic effect of BMSCs acted through induction of HGF and cell-to-cell contact. C-met knockdown or MyD88 overexpression reversed the BMSC-mediated inhibition of NF-κB luciferase activation. These data suggest that BMSCs play an important role in protecting against LPS-induced LX2 activation via inhibition of the LPS-TLR4-MyD88-NF-κB pathway and may represent a novel therapeutic approach for the disease.
After liver injury, LPS levels increase in the portal and systemic circulation, owing to changes in intestinal mucosal permeability and increased bacterial translocation. Animal studies have shown that LPS can increase hepatic fibrosis [13,14]. In patients with liver cirrhosis, LPS levels increase progressively across Child-Pugh classes A, B, and C [15]. TLR4, an important member of the TLR family and a receptor for LPS, has been found in the portal vein of chronic hepatitis patients and is expressed in HSCs and KC cells as well [16]. Although both LPS and TLR4 have important roles in liver cirrhosis, the mechanism is still unclear. A recent report documents the discovery of a single nucleotide polymorphism (T399I) in TLR4 that can significantly reduce the risk of hepatic fibrosis [17]. As such, therapeutic targeting of this pathway may block the formation of fibrosis and has important research value. Our experiments also indicated that LPS activated LX2 cells, significantly increasing TLR4 expression as well as collagen synthesis. TLR activation can stimulate two main pathways: one dependent upon the adapter protein MyD88 (the MyD88-dependent pathway) and one that is not (the MyD88-independent pathway). Distinct TLR ligands mediate differential responses with different effects. Our results showed that the protection of LX2 cells by BMSCs was MyD88-dependent and that at least part of the effects of BMSCs may have resulted from suppression of NF-κB activation, which would normally occur in response to MyD88-dependent, ligand-induced stimulation of TLRs. In the present study, MyD88-overexpressing LX2 cells demonstrated increased NF-κB activity even when co-cultured with BMSCs, and MyD88 knockdown decreased NF-κB activity in LX2 cells even without BMSC protection. These findings confirmed the critical role of TLR4-MyD88 signaling in the regulation of LX2 cell activation and identified novel biological pathways affecting the risk of hepatic fibrosis progression.
In our experiments, LX2 cell activation was inhibited when these cells were co-cultured with BMSCs, and animal experiments have shown that BMSCs can effectively relieve fibrosis, improve liver function, and affect the remodeling of fibrous tissue, which may be related to the integration of bone marrow cells and the expression of matrix metalloproteinase 9 [18-22]. In addition, transplantation of BMSCs can result in a large increase in the number of liver and bile duct cells, subsequently improving liver function. We have previously reported good results with autologous transplantation of BMSCs through the hepatic artery for treatment of patients with liver failure [23]. To evaluate the function of BMSCs on HSCs, we further conclude from in vivo experiments that BMSCs confer liver-protective effects. This suggests that BMSCs can evoke endogenous repair mechanisms in the liver. Indeed, BMSCs are capable of producing a variety of cytokines and hematopoietic growth factors [24-26]. Among this great number of growth factors, liver-protective effects have been most often attributed to HGF. HGF is a potent growth factor involved in liver regeneration that has various effects on epithelial and nonepithelial cells.
HGF is a polypeptide originally characterized as a highly potent hepatocyte mitogen [27,28]. HGF also suppresses the onset of severe hepatic injury and maintains the integrity of hepatocytes in the livers of mice with cholestasis induced by alpha-naphthylisothiocyanate [29-31]. Using a transwell co-culture assay, our study suggested that BMSCs protect against the activation of HSCs. This effect was mediated in part through paracrine signaling by HGF secreted from BMSCs. When we knocked down the HGF receptor c-met, the protective effects of BMSCs and HGF decreased. Our results could therefore indicate that HGF production is upregulated in BMSC-conditioned medium compared to DMEM, inhibiting the NF-κB pathway, which is important in LPS-induced LX2 activation. Importantly, direct co-culture of labeled LX2 cells with BMSCs inhibited LPS-induced LX2 activation much more effectively. These results raise the possibility of direct interactions between activated HSCs and BMSCs through cell-to-cell contact, although we and others have previously investigated paracrine regulation between these two cell types. In this study, we therefore propose a new potential mechanism for the anti-fibrotic effect of BMSCs. The present data indicate that BMSCs in direct cell-cell contact not only produce greater growth inhibition of HSCs compared with the transwell system, but also suppress HSC activation in vitro. This observation provides evidence that cell-cell adhesion is involved in the inhibition of HSC growth by BMSCs. In particular, directly co-cultured BMSCs displayed stronger inhibitory activities compared with the paracrine blockade effect in two-chamber co-culture models. In summary, BMSCs are able to protect against liver fibrosis induced by LPS. Direct co-culture, in contrast to indirect co-culture of BMSCs, provides the most effective treatment to prevent LX2 activation.
These results stem from the fact that BMSCs can secrete HGF, which, in turn, can modulate the host immune response and the balance between the TLR4 and NF-κB pathways. The results of this study provide several new insights into reversing hepatic fibrosis through the anti-fibrotic effects of BMSCs targeting HSCs in vitro. The direct co-culture system we established was applied to explore the effect of direct interplay between BMSCs and HSCs, involving paracrine signaling as well as cell-cell adhesion and juxtacrine signaling. Regardless of the mechanism, our study is the first to demonstrate that inhibition of the TLR4-MyD88-NF-κB signaling pathway can dramatically repress the proliferation of HSCs co-cultured with BMSCs in vitro. However, whether BMSCs inhibit HSC activation in vivo through the TLR4-MyD88-NF-κB pathway remains unclear. Therefore, more studies are needed to clarify the mechanism and to make BMSCs more effective at reducing liver inflammation and scar formation, thus inhibiting liver fibrosis, improving the quality of life of patients, and prolonging their survival.
HIV Prevention in Care and Treatment Settings: Baseline Risk Behaviors among HIV Patients in Kenya, Namibia, and Tanzania

HIV care and treatment settings provide an opportunity to reach people living with HIV/AIDS (PLHIV) with prevention messages and services. Population-based surveys in sub-Saharan Africa have identified HIV risk behaviors among PLHIV, yet data are limited regarding HIV risk behaviors of PLHIV in clinical care. This paper describes the baseline sociodemographic, HIV transmission risk behavior, and clinical data of a study evaluating an HIV prevention intervention package for HIV care and treatment clinics in Africa. The study was a longitudinal group-randomized trial in 9 intervention clinics and 9 comparison clinics in Kenya, Namibia, and Tanzania (N = 3538). Baseline participants were mostly female, married, had less than a primary education, and were relatively recently diagnosed with HIV. Fifty-two percent of participants had a partner of negative or unknown status, 24% were not using condoms consistently, and 11% reported STI symptoms in the last 6 months. There were differences in demographic and HIV transmission risk variables by country, indicating the need to consider local context in designing studies and to use caution when generalizing findings across African countries. Baseline data from this study indicate that participants were often engaging in HIV transmission risk behaviors, which supports the need for prevention with PLHIV (PwP).

Trial Registration: ClinicalTrials.gov NCT01256463

Introduction

As the global HIV epidemic enters its fourth decade, 34 million people are estimated to be living with HIV [1]. Since the beginning of the epidemic, HIV care and treatment efforts have rapidly expanded and 5.3 million people are now receiving antiretroviral treatment (ART) [2].
The expansion of HIV treatment efforts has decreased the morbidity and mortality associated with HIV infection, and people living with HIV (PLHIV) are now living longer [3]. Unfortunately, HIV incidence remains high, with an estimated 2.6 million new infections in 2009 [2]. Traditional HIV prevention efforts have focused on reducing HIV risk among individuals who are HIV-negative or of unknown serostatus. Recently, however, there has been increasing recognition of the importance of addressing the prevention needs of PLHIV as part of a comprehensive and integrated HIV prevention, care, and treatment strategy [4]. In addition, studies have demonstrated the efficacy of providing ART as a mechanism to reduce the likelihood of HIV transmission [5], adding further momentum to the strategy of prevention with PLHIV (PwP). Population-based surveys from a number of countries in sub-Saharan Africa highlight the need for effective prevention services for PLHIV. First, fewer than 40% of those who have HIV are aware of their status [2], and even fewer know their partner's HIV status. Moreover, many PLHIV, especially women, find it difficult to disclose their HIV-positive status to their sexual partners [6,7,8]. However, many PLHIV are in serodiscordant relationships, with estimates from eastern Africa indicating that 40 to 50% of married HIV-positive individuals have an HIV-negative spouse or partner [9,10,11]. Second, low rates of condom use in stable relationships may place the HIV-negative partner in serodiscordant partnerships at high risk of acquiring HIV [12]. Furthermore, alcohol use among some PLHIV has been found to be high and is associated with both increased risky sexual behavior [13,14] and reduced adherence to antiretroviral medications (ARVs) [15]. PLHIV may also be co-infected with other sexually transmitted infections (STIs), which can be more severe and difficult to treat in HIV-positive persons, and incident STIs are an indicator of unprotected sex [16].
Another effective prevention service is preventing mother-to-child transmission (PMTCT) of HIV. Although PMTCT services are improving in sub-Saharan Africa, many HIV-positive women report an unmet need for contraception and safer pregnancy counseling [12,17], and with access to PMTCT services often limited, access to contraception is even more critical. Many PLHIV in these population studies are not aware of their status and may not be enrolled in HIV clinical care [10,12,13]. However, with the expansion of HIV care and treatment services, more PLHIV are now engaged in clinical care and have regular contact with health care providers. Yet data are limited regarding whether PLHIV in care are engaging in behaviors that put them at risk for acquiring other STIs or transmitting HIV to uninfected partners. In addition, studies that have been done among PLHIV in clinical settings have typically been small scale and conducted in a single country. This paper describes the baseline sociodemographic, HIV risk behavior, and clinical data for a clinic-based HIV prevention intervention for PLHIV attending HIV care and treatment clinics in Kenya, Namibia, and Tanzania. Country differences in health and risk behaviors are also examined in order to explore how these factors differ across cultural contexts.

Methods

The protocol for this trial and supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1.

Study design

The study was a longitudinal group-randomized trial, randomized at the clinic level. Six HIV care and treatment clinics in each of three countries (Kenya, Namibia, and Tanzania) participated, for a total of 18 clinics. The sites chosen were six district-level hospitals with HIV outpatient facilities on-site.
Each clinic was matched to another clinic in the same country based on level of care and services provided (e.g., health care provider/patient ratio, number of HIV-positive patients enrolled in care), for a total of three pairs of clinics in each country. Intervention status was randomly assigned (by coin flip) to one clinic in each of the three matched pairs. The other clinic was assigned to a wait-list comparison condition. At the intervention clinics, health care providers (HCPs) and lay counselors (LCs) were trained to provide a package of HIV prevention messages and services (described below) as part of the routine care offered to all HIV-positive patients (not just study participants) during their clinic visits. In the comparison clinics, providers and staff delivered services following their usual standard of care. Following final study data collection, HCPs and LCs in the comparison clinics were trained to deliver the HIV prevention services and messages. Patient participants were assessed at baseline, 6 months, and 12 months post-intervention. To allow sufficient time for patients to be exposed to the intervention during regular clinic visits, the 6-month follow-up data collection began 6 months after clinic staff were trained and the intervention was implemented. To make the intervention and comparison clinic project procedures as similar as possible, the 6- and 12-month follow-up assessments occurred during the same time periods in both the intervention and comparison clinics. HIV Prevention Intervention. The intervention and materials developed for this project were adapted from Partnership for Health (PfH) [18], which is a brief, provider-delivered counseling program for PLHIV. PfH is designed to improve patient-provider communication about safer sex, disclosure of serostatus, and HIV prevention by incorporating counseling about safer sex and disclosure into the routine services provided to all PLHIV by health care providers during each clinic visit.
It is based on a social cognitive model that uses message framing, repetition, and reinforcement to increase the patient's knowledge, skills, and motivations to practice safer sex. The investigators obtained input from health care providers in Kenya, Uganda, and Botswana for the adaptation of the materials for the African context and field tested the interventions in clinics in Kenya. The original intervention was expanded to include additional prevention messages (e.g., partner testing, alcohol reduction) and services (e.g., family planning, STI screening) to more comprehensively address prevention with PLHIV within the context of sub-Saharan Africa. Following completion of the baseline data collection, HCPs (including physicians, clinical officers, nurses, pharmacists) in the intervention clinics were trained to incorporate HIV prevention services and messages into their routine clinical patient assessments and to provide these services and messages to all clinic patients (not just study participants) during each routine clinic visit. Specifically, when HCPs met with a patient individually in the exam room they incorporated HIV prevention services and messages into the routine care provided to the patients. The HIV prevention services included assessing and addressing sexual risk behaviors, partner HIV testing and knowledge of partner's status, medication adherence, alcohol use; identifying and treating STIs; and providing sufficient quantities of condoms. HCPs also addressed family planning needs. For those desiring pregnancy, guidance on safer conception, pregnancy, and delivery was provided to men and women. Participants who did not currently desire pregnancy were encouraged to use condoms plus another form of highly effective contraception. Hormonal contraceptives were provided within the HIV care and treatment clinics and women who were interested in long-acting methods were referred to other clinics to receive those methods. 
Health care providers were also trained to help patients set a prevention goal (e.g., increasing condom use). Trained lay counselors provided group education sessions in clinic waiting areas on basic HIV/AIDS, HIV transmission to partners, mother-to-child transmission, treatment adherence, and healthy living. In addition, they provided individual counseling on the prevention issues raised by HCPs, including disclosure, encouraging HIV testing of sex partner(s) and children, condom use (including condom use demonstrations), alcohol reduction, and medication adherence. Finally, counselors were trained to conduct partner and couples HIV testing and counseling (Namibia and Kenya). In Tanzania, lay counselors only provided pre- and post-test counseling, as they were not permitted by national guidelines to conduct HIV testing. Patients were either referred by HCPs to the LCs for additional counseling or could choose on their own to visit the LCs. Ethics. The study was reviewed and approved by Institutional Review Boards (IRBs) at all the collaborating institutions conducting the research. Eligible participants were read an informed consent form and given the opportunity to ask questions before written consent was obtained. Participants consented to complete three questionnaires during the study period, allow collection of information from their medical charts, and provide contact information for participant tracking during the follow-up period. Participants were offered a drink and/or a small snack during or after the questionnaire administration.

Data collection procedures

Each day, study staff approached every third patient in the clinic waiting area and asked if he/she was interested in participating in a study. Interested patients were taken to a private area, told about the project, and screened for eligibility.
Inclusion criteria required that patient participants were 18 years of age or older, enrolled in the HIV care and treatment clinic (documented HIV infection) and had received care at the study clinic at least twice prior to study enrollment. Participants also had to report being sexually active within the past 3 months, and report planning to attend the clinic for at least 1 year. Women who knew they were pregnant and male partners of pregnant women were ineligible for study enrollment, as family planning counseling and pregnancy were study outcomes. Respondents who were aware of their partners already participating in the study were excluded. All patient interviews were available in English, as well as Kiswahili (in Kenya and Tanzania) and Oshiwambo and Afrikaans (in Namibia). In order to have sufficient representation of patient participants by gender and ART status, each clinic attempted to enroll equal numbers of men and women, as well as equal numbers of ART and pre-ART patients (care only). Treatment status was assessed on the screening form to track the number of participants enrolled in each group. The patient baseline questionnaire took approximately one hour to complete and was administered by a trained project interviewer who was not part of the clinic staff. Data collection start dates were staggered across countries, with each baseline data collection period taking approximately 3 months. Baseline data collection began in October 2009 and was completed in all countries by April 2010. Data presented in this paper were collected from patient medical chart review and patient questionnaires. Medical Chart Review. Data extracted from patients' medical charts included date of HIV diagnosis, dates of HIV clinic visits in the past 6 months, clinical indicators of HIV (e.g., most recent CD4 count, WHO clinical stage), prescribed medications, pregnancy status, contraceptive use, STI symptoms and STI treatment provided, and meetings with lay counselors. 
In addition, data on dates and amount of medication dispensed to patients in the past 6 months were collected from pharmacy records. Participant Questionnaire. The patient questionnaire collected information regarding sociodemographics, HIV testing and care, mental and physical health, social support, alcohol use, HIV medications and adherence, disclosure of serostatus, knowledge of HIV/AIDS, sexual behavior, sexual self-efficacy, history of violence, STIs, fertility desire, family planning, and prevention services received. As much as possible, measures included in the questionnaire had been used previously in studies conducted in Africa. To assess adherence, participants were asked to name their HIV medications, number of pills taken, dose (number of times per day), and whether they had missed any doses in the last 30 days [19,20]. For sexual behavior, participants could report up to five partners with whom they had sex in the past 3 months and identified whether each partner was a spouse, main partner, or non-main partner. Disclosure of the participant's HIV status to each sex partner was assessed. For this analysis, disclosure of HIV status was to their spouse or most recent main partner; if they did not report a spouse or main partner, disclosure to a non-main partner was used. Participants also indicated whether each partner had been tested for HIV and if so, whether the participant knew that partner's HIV status. The 10-item World Health Organization Alcohol Use Disorders Identification Test (AUDIT) [21] was used to measure alcohol use. Based on the AUDIT scoring criteria, participants were categorized as non-drinker (0), non-problem drinker (<8), problem drinker (≥8), and likely dependent on alcohol (≥13 for women, ≥15 for men). STI symptoms were based on patients' report of whether they had experienced discharge from the penis or vagina, sores in the genital area, or (for female patients only) abdominal pain.
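The AUDIT risk categories described above reduce to a small decision rule. The sketch below is illustrative only (the function and label names are not from the study; the cut-offs are the ones quoted in the text):

```python
def audit_category(score: int, female: bool) -> str:
    """Classify an AUDIT score using the cut-offs described in the text:
    non-drinker (0), non-problem drinker (<8), problem drinker (>=8),
    likely dependent (>=13 for women, >=15 for men).
    Illustrative helper; not the study's actual scoring code."""
    if score == 0:
        return "non-drinker"
    # Dependence threshold is sex-specific, so check it before the >=8 band.
    if score >= (13 if female else 15):
        return "likely dependent on alcohol"
    if score >= 8:
        return "problem drinker"
    return "non-problem drinker"
```

Note that the dependence band must be tested before the problem-drinker band, since it is a subset of scores ≥8.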
Data analysis The SAS GLIMMIX procedure was used to test for differences between countries (Table 1) and between males and females for categorical variables (SAS PROC MIXED was used for continuous variables). When testing for country differences, clinic was entered as a random effect to account for the correlation of observations within clinic, country was entered as a categorical independent variable, and the variable of interest was entered as the dependent variable. A multinomial distribution with a generalized logit link was assumed for the dependent variable. For gender differences, gender was the independent variable and the variable of interest was the dependent variable. Clinic was again entered as a random effect and the appropriate distribution for the dependent variable used (binomial, multinomial, or continuous). All data analyses were considered significant at p<0.05. Participants A total of 3,538 eligible patient participants were enrolled and completed the baseline questionnaire between September 2009 and April 2010 (see Figure 1 for patient flow diagram). Despite intensive recruitment efforts, all countries had difficulty enrolling male patients, especially those who were receiving care only and not taking ARVs. Given the challenges enrolling men, the recruitment goals for each clinic were revised from an even 50 participants in the male/female and on ARV/not on ARV groups to 60 males and 60 females on ARVs, 50 females in care only, and 30 men in care only. Even with these revised targets, one clinic had difficulty recruiting men who were in care only. Sociodemographics The majority of participants (58%) were female (Table 1). Nearly half of participants were between 30 and 39 years old; however, women were younger (M = 35, SD = 7) compared to men (M = 41, SD = 9; F = 548, p<.0001). Nearly two-thirds of participants either attended school through the primary grades (54%) or had not attended school (10%).
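The gender comparisons above were fit with SAS PROC GLIMMIX, entering clinic as a random effect. As a rough illustration of the simplest version of such a comparison, the sketch below runs a plain pooled two-proportion z-test in Python. This deliberately ignores the clustering by clinic that the mixed model accounts for, so it is an assumption-laden simplification, not the study's analysis; the counts in the usage note are hypothetical round numbers echoing the reported 55% vs. 37% paid-work rates.

```python
import math

def two_proportion_z(events1: int, n1: int, events2: int, n2: int):
    """Two-sided pooled two-proportion z-test.

    Illustrative only: the study used a generalized linear mixed model
    (clinic as random effect); this test treats observations as independent.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = events1 / n1, events2 / n2
    pooled = (events1 + events2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, with hypothetical counts of 550/1000 men vs. 370/1000 women reporting paid work, the test returns a large positive z and a p-value far below 0.05, consistent in direction with the mixed-model result reported above.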
Sixty-one percent of study participants reported being married or living together as married. More men (69%) than women (56%) reported being married/living together as married (F = 29.55, p<0.0001). Only 5% of married men reported 2 wives or partners they lived with as married, although 13% of married women reported that their husband had more than 1 wife or partner that he lived with as married. Ninety percent of participants had children who were still living and 51% had 2 or fewer living children. There were country differences in sociodemographics, which are shown in Table 1. Because there were no Islamic/Muslim participants in Namibia, one Muslim observation was added per site in Namibia (6 total) to allow the model to converge. There were significant differences in religion (F = 52.03, p<0.0001; Table 1). Tanzania had 75% Islamic/Muslim participants, whereas Kenya was more evenly split among the different religions, with the largest group being Protestant (37%). Namibian participants were mostly Evangelical/Pentecostal (56%). Less than half of participants (44%) had done paid work in the last 6 months. Among those who had worked in the last 6 months, 83% had also worked in the past 7 days. Men were more likely to have reported paid work in the last 6 months (55%) than women (37%; F = 126.24, p<0.0001) and in the last 7 days (male = 86%, female = 81%; F = 5.31, p = 0.02). Monthly household income was converted from local currency to US dollars for comparability across countries, with 68% of participants earning less than $100 per month. Health Status and Medication Adherence About a quarter (26%) of participants were diagnosed with HIV in the previous year, and more than two-thirds (70%) had been diagnosed with HIV within the past 3 years. Nearly all participants (88%) were taking HIV medications (ARVs and cotrimoxazole), and 83% of those taking medications reported that they had not missed any doses in the last 30 days (Table 2).
There was no significant difference in adherence between those taking cotrimoxazole only compared with those on both ARVs and cotrimoxazole. When limited to the 64% of participants taking ARVs, adherence rates were similar, with 81% reporting that they had not missed a dose in the past 30 days. HIV Transmission Risk Behaviors Most participants (82%) reported that they had disclosed their HIV status to their partner (Table 2). Two-thirds of participants reported that they knew their partner's HIV status, with 48% reporting their partner was HIV-positive and 19% reporting their partner was HIV-negative; 34% of participants did not know their partner's HIV status. Self-reported consistent condom use rates were high, with 76% of participants reporting that they had used a condom at every sexual encounter in the past three months (Table 2). Men were more likely to report consistent condom use than women (81% vs. 73%, p<0.0001). There were also country differences (p<0.0001), with more participants in Namibia reporting consistent condom use (86%) compared with Kenya (76%) and Tanzania (69%). Most participants (80%) reported no alcohol use, 15% were non-problem drinkers, 3% harmful drinkers, and 2% scored as likely dependent on alcohol (Table 2). Fewer men reported no alcohol use and they were more likely than women to be classified as non-problem or harmful drinkers. There were also significant country differences (p<0.0001). Over one-third of participants (36%) reported ever experiencing STI symptoms (Table 2), with significant country differences (p = 0.005), but no significant gender differences. Eleven percent of participants reported STI symptoms in the past six months, with more women (14%) reporting symptoms than men (7%; F = 46.52, p<.0001). Female participants were asked whether they desired a pregnancy and male participants were asked whether they desired their spouse or main partner to be pregnant in the next 6 months.
Only 17% of participants desired pregnancy (Table 2), although more men reported desiring their partners to be pregnant (20%) than women reported desiring a pregnancy (14%; p<.0001). There were also country differences, with Tanzanian participants more likely to desire pregnancy (21%), followed by 17% of Namibians and 11% of Kenyan participants (p<.0001). Among female participants who did not desire pregnancy and males who did not desire their partner to be pregnant, only 35% were using a highly effective contraceptive method (e.g., pills, injectable, IUD, implant, male or female sterilization). There were significant country differences in highly effective contraceptive use (p = .04); participants in Tanzania reported lower rates of contraceptive use (25%) than participants in either Kenya (39%) or Namibia (41%). Discussion The group-randomized trial described in this paper is designed to evaluate the effectiveness and feasibility of integrating HIV prevention into the routine care of PLHIV attending care and treatment clinics in Kenya, Namibia, and Tanzania. Baseline data from the study participants provide information about the sociodemographic characteristics, health status, and HIV risk behavior of patients attending care. These data indicate that participants were similar in socioeconomic status (SES) to nationally representative samples in each of the countries [10,22,23], with low education and income levels. The lack of paid work and low income reported by many of the PLHIV in this study may affect their health and clinic attendance. For example, transportation costs are often reported as a reason for poor clinic attendance among PLHIV [24,25,26]. The mean age of participants in this sample was higher than national estimates of ages of PLHIV, especially for women [10,22,23,27]. In addition, the majority of participants in this study were diagnosed with HIV within the past 3 years.
Over half of participants had a most recent CD4 count less than 350 cells/mm3, and participants who had been on ARVs less than one year (20% of those on ARVs) had very low CD4 counts (median = 170). These data suggest that many participants accessed care late in their illness, a finding consistent with other studies from sub-Saharan Africa showing that many patients access care after developing advanced symptomatic disease [28,29,30,31] and are not benefiting from care and treatment interventions to improve morbidity and mortality. In addition, their declining CD4 counts and increased viral load also place their sexual partners at increased risk of HIV acquisition [32,33,34,35,36]. Increased HIV testing and counseling (HTC) efforts with focused linkage to care and treatment services in both facilities (e.g., ANC, TB, STI, and out-patient departments) and communities (e.g., home-based testing and mobile testing) are needed to identify individuals with asymptomatic disease, especially men, and ensure earlier enrollment into appropriate prevention, care, and treatment services. In addition, as one-third of participants did not know their partners' HIV status, more intensive efforts are clearly needed to increase partner and couple testing to identify HIV-positive persons and discordant couples and to link them with prevention, care, and treatment services. Men in HIV care and treatment clinics and in the study sample are under-represented compared to the population of PLHIV. Men also had significantly lower CD4 counts than women: nearly two-thirds of men had CD4 counts less than 350 cells/mm3 compared to less than half of women (64% vs. 46%, p<0.0001). This is consistent with several other studies from sub-Saharan Africa which have found that men are less likely than women to enroll in care at an earlier clinical stage [37,38,39] and to remain in clinical care after enrollment [40,41,42].
These findings highlight the importance of strengthening HIV testing and counseling programs targeting men, such as medical male circumcision programs and partner testing in ANC settings. Efforts to identify HIV-positive men earlier, link them to prevention, care and treatment services, and retain them in care are clearly needed. The risk behavior data from this study identify HIV prevention needs among PLHIV attending HIV clinics. Specifically, nearly one-fifth of the participants were in a known discordant relationship, and over one-third did not know their partner's HIV status. In addition, 18% of the sample had not disclosed their HIV status to their partner and 24% were not consistently using condoms. These data indicate that many participants are in need of prevention services such as support for disclosure of HIV status, partner testing, and reducing unprotected sex. In addition, alcohol use was reported by about 20% of participants. Alcohol use has been associated with poor adherence [43,44,45] and increased disease progression [46,47,48], as well as increased risky sexual behavior among PLHIV [49]. Interventions to address these prevention issues in the context of comprehensive care and treatment services are indicated. There were significant differences between countries in many of the reported HIV risk behaviors. For example, consistent condom use among PLHIV in care was 86% in Namibia, but significantly lower in Tanzania (69%). Similarly, only 60% of participants in Tanzania and 63% in Kenya knew their partner's HIV status, but 73% of Namibian participants knew their partner's status. There were differences in reported alcohol use as well, with Tanzanian and Kenyan participants less likely to drink alcohol than Namibian participants, and Tanzanian participants also less likely to report problem drinking. 
These participant baseline differences between countries highlight the importance of considering the local context when designing multi-country studies and the need to recognize that regional differences could affect generalizability of results from studies conducted in a single region or country. This study has several limitations. Given that this was a clinic-based sample conducted in three countries with generalized HIV epidemics, the generalizability of these findings to PLHIV not enrolled in clinical care and/or to PLHIV in non-generalized epidemics may be limited. In addition, much of the data in this paper are from patient self-report, thus social desirability and recall bias may have affected participants' responses. As a result, some variables may be overestimated (e.g., condom use, disclosure, adherence) and others underestimated (e.g., alcohol use). In summary, the baseline data from this study suggest that increased efforts are needed to identify PLHIV earlier, especially men, and to ensure they access prevention, care, and treatment services following diagnosis. Many PLHIV would also benefit from prevention interventions which address disclosure support, partner testing, alcohol and sexual risk reduction, as well as providing contraceptives and support for safer pregnancy. Integrating these services into the clinical care of HIV-positive persons may increase access to prevention interventions and improve retention in clinical care [4]. Supporting Information Checklist S1 CONSORT Checklist.
Insulin-like peptide 3 (INSL3) in congenital hypogonadotrophic hypogonadism (CHH) in boys with delayed puberty and adult men Background Delayed puberty in males is almost invariably associated with constitutional delay of growth and puberty (CDGP) or congenital hypogonadotrophic hypogonadism (CHH). Establishing the cause at presentation is challenging, with “red flag” features of CHH commonly overlooked. Thus, several markers have been evaluated in both the basal state or after stimulation e.g. with gonadotrophin releasing hormone agonist (GnRHa). Insulin-like peptide 3 (INSL3) is a constitutive secretory product of Leydig cells and thus a possible candidate marker, but there have been limited data examining its role in distinguishing CDGP from CHH. In this manuscript, we assess INSL3 and inhibin B (INB) in two cohorts: 1. Adolescent boys with delayed puberty due to CDGP or CHH and 2. Adult men, both eugonadal and having CHH. Materials and methods Retrospective cohort studies of 60 boys with CDGP or CHH, as well as 44 adult men who were either eugonadal or had CHH, in whom INSL3, INB, testosterone and gonadotrophins were measured. Cohort 1: Boys with delayed puberty aged 13-17 years (51 with CDGP and 9 with CHH) who had GnRHa stimulation (subcutaneous triptorelin 100mcg), previously reported with respect to INB. Cohort 2: Adult cohort of 44 men (22 eugonadal men and 22 men with CHH), previously reported with respect to gonadotrophin responses to kisspeptin-54. Results Median INSL3 was higher in boys with CDGP than CHH (0.35 vs 0.15 ng/ml; p=0.0002). Similarly, in adult men, median INSL3 was higher in eugonadal men than CHH (1.08 vs 0.05 ng/ml; p<0.0001). However, INSL3 more accurately differentiated CHH in adult men than in boys with delayed puberty (auROC with 95% CI in adult men: 100%, 100-100%; boys with delayed puberty: 86.7%, 77.7-95.7%). Median INB was higher in boys with CDGP than CHH (182 vs 59 pg/ml; p<0.0001). 
Likewise, in adult men, median INB was higher in eugonadal men than CHH (170 vs 36.5 pg/ml; p<0.0001). INB performed better than INSL3 in differentiating CHH in boys with delayed puberty (auROC 98.5%, 95.9-100%) than in adult men (auROC 93.9%, 87.2-100%). Conclusion INSL3 better identifies CHH in adult men, whereas INB better identifies CHH in boys with delayed puberty. Introduction Pubertal disorders are common and are associated with considerable psychosocial impact for those affected and their families (1,2). Delayed pubertal onset is defined as the absence of testicular enlargement to a volume of ≥4 ml at an age that is 2-2.5 standard deviations (SD) later than the population mean, typically defined as age ≥14 years in boys (3). The overwhelming majority (95%) of boys with delayed puberty will have inappropriately normal or low levels of gonadotrophins in the context of low sex steroids due to permanent or transient gonadotrophin deficiency (4). Of these, the vast majority of younger adolescents (60-80%) will have constitutional delay of growth and puberty (CDGP) and will spontaneously proceed through normal puberty but later than their peers (5-7). However, around 10% of younger adolescent boys with delayed puberty will have congenital hypogonadotrophic hypogonadism (CHH), rising to over 90% by 18-20 years (8). CHH is a genetic condition caused by impaired migration of hypothalamic gonadotrophin releasing hormone (GnRH) neurons, leading to an absent or deficient GnRH neuron population; by disturbed secretion or action of GnRH; or both (9). A further 10-20% have functional causes of HH, such as acute or chronic illness, insufficient energy availability, or side effects of medications, e.g., glucocorticoids (4). Distinguishing CDGP from CHH poses a major diagnostic challenge in boys with delayed puberty, particularly owing to overlapping clinical features and similar hormonal profiles (10).
With the median age of effective treatment of CHH males remaining stubbornly high at 18-19 years, principally due to confusion with CDGP, it is clear that clinicians find this a difficult distinction. The accurate and timely distinction between CDGP and CHH is important for appropriate counselling and treatment, i.e., conservative management or sex-steroid treatment for psychological distress in CDGP versus a greater consideration towards GnRH pump/gonadotrophin treatment in those with CHH, with a view to addressing sexual function, bone, metabolic, and psychological health (11). Although the majority of CHH males harbor phenotypic "red flags" (anosmia, cryptorchidism, deafness, clefting, or other developmental anomaly), their significance is commonly not appreciated (12). Leydig and Sertoli cell markers, such as insulin-like peptide 3 (INSL3) and inhibin B (INB) respectively, have been evaluated as potential markers to differentiate CDGP and CHH. INB is a member of the transforming growth factor β (TGFβ) superfamily. INB is a glycoprotein heterodimer comprised of an inhibin α-subunit and an inhibin βB-subunit, which is secreted by and is reflective of the number and function of Sertoli cells (13). INB peaks transiently during the mini-puberty at around 2-4 months after birth, and then decreases during childhood prepubertally before increasing again during puberty (14). The INB surge in male puberty may be appreciable as early as 9 years old and thus precedes the appearance of a measurable LH surge (15). In prepubertal boys, INB reflects predominantly Sertoli cell mass and function, while in adulthood, germ cells are major determinants of the α-subunit and INB production. Consequently, in adulthood INB closely reflects testicular mass, including germ cells as well as Sertoli cells (20). INSL3 has been shown to demonstrate age-dependent dynamics from birth to adulthood (21). Fetal INSL3 plays a pivotal role in the initial transabdominal testicular descent (22).
Significantly higher levels of INSL3 were found in serum from 3-month-old boys compared to older prepubertal boys, reflecting the transient postnatal gonadotrophin surge or 'mini-puberty' (21). Serum INSL3 levels increase again during puberty (23) and remain high throughout adulthood before declining with age (24). Pathologies affecting the hypothalamo-pituitary-gonadal (HPG) axis, including CHH, and primary testicular disorders, including Klinefelter syndrome, cryptorchidism and anorchidism, are all associated with lower levels of INSL3 (25). Unlike testosterone, INSL3 is not subject to the episodic fluctuations inherent to the feedback loop in the HPG axis (26), thus making INSL3 an emerging biomarker of disorders affecting the male HPG axis. In this study, we evaluated the performance of the testicular markers INSL3 and INB in two cohorts: first, in boys with delayed puberty due to either CHH or CDGP; and second, in adult men with CHH and eugonadal controls, to determine the role of INSL3 and INB as biomarkers in CHH and delayed puberty. Ethical approval Ethical approval for the study comparing eugonadal men and men with CHH was granted by the West London Research Ethics Committee, London, United Kingdom (UK) (reference: 12/LO/0507). Ethical approval for the study comparing boys with CDGP and boys with CHH was granted by the Ethical Committee of the Medical Faculty of the University of Tübingen, Germany. Written informed consent was provided by all participants. Both studies were conducted in accordance with the Declaration of Helsinki. Participants Boys with delayed puberty Participants with delayed puberty due to either CHH or CDGP were selected from boys who presented with delayed puberty to the Department for Paediatric Endocrinology of the University Children's Hospital Tübingen between the years 2009 and 2013.
A diagnosis of CDGP was made at the 12-18 month review if the testicular volume (TV) reached ≥8 ml; if TV did not reach ≥5 ml, then CHH was assumed at 24 months of follow-up. All the boys included in the analysis had TV of ≥8 ml at 12-18 months (CDGP) or <5 ml at 24 months (CHH). This cohort has been previously reported in a study determining the discriminatory power of INB and luteinising hormone (LH) versus the GnRH agonist (GnRHa) test (27). Participants fulfilled the following criteria: age 13-17 years and a clinical phenotype of delayed puberty at the time of testing, including TV <4 ml. Boys with delayed puberty were either at Tanner stage G1 with TV ≤3 ml or had some evidence of pubertal progress (Tanner stage G2 with TV 3.5-4 ml) (27). Participants with hypopituitarism, functional HH, or already diagnosed with CHH were excluded from this study. All participants underwent detailed medical assessment, including full medical history and physical examination by a paediatric endocrinologist for syndromic features and clinical signs of hypopituitarism or other disorders that can cause functional HH. The Tanner staging system was used for pubertal staging, a Prader orchidometer was used to measure TV, and olfactory function was quantified using Sniffin' Sticks (Burghart Messtechnik GmbH, Wedel, Germany) in participants suspected to have CHH. Bone age was determined from left hand radiographs, which were reviewed by three experienced paediatric endocrinologists using the Greulich and Pyle radiographic atlas. All participants had measurement of basal serum levels of LH, follicle stimulating hormone (FSH), INSL3, testosterone and INB. A single subcutaneous injection of 100 mcg of triptorelin acetate (DECAPEPTYL® IVF 0.1 mg/ml; Ferring Arzneimittel, Kiel, Germany) was administered, followed by serial blood sampling at 4 and 24 hours for serum LH, FSH and testosterone.
Adults with CHH or eugonadal controls Participants in the CHH and eugonadal men cohort were recruited through newspaper advertisements, endocrine clinics, and the CHH online community. In all, 22 adult eugonadal men and 22 men with CHH were recruited. This cohort was previously reported with regard to their responses to kisspeptin-54 (28). All participants underwent detailed medical assessment including full medical history and physical examination. Eugonadal men fulfilled the following criteria: age 18-35 years, BMI 18-30 kg/m2, absence of significant co-morbidities, systemic disease, or regular therapeutic or recreational drug use, and normal clinical and biochemical reproductive function. Participants with CHH were defined as having hypogonadotrophic hypogonadism with a history of incomplete progression through puberty by the age of 18 years. Men with CHH underwent further examination including TV measurement using a Prader orchidometer, evaluation for syndromic features and clinical signs of hypopituitarism, and olfactory function quantified using the 40-item University of Pennsylvania Smell Identification Test (UPSIT). All participants had measurement of basal serum levels of LH, FSH, INSL3, testosterone and INB. CHH men also had genetic testing to identify genes implicated in the aetiology of CHH. Four adult men with CHH had a pathogenic or likely pathogenic variant identified in the genes ANOS1, FGFR1, PROKR2, and SEMA3A, as previously detailed by Abbara et al. (28). Participants with CHH were asked to discontinue testosterone gel for at least 1 week prior to participation in the study. Study visits for those on longer-acting intramuscular testosterone preparations were conducted when the participants' testosterone levels were at trough level prior to the next injection, to minimise disruption to their treatment. Participants with CHH did not receive gonadotrophin therapy for at least 6 weeks prior to participation in the study.
Participants attended two study visits with a washout period of at least 1 week. Kisspeptin-54 (6.4 nmol/kg; Bachem AG, Liverpool, UK) or GnRH (Gonadorelin 100 mcg; Intrapharm Laboratories Ltd., Maidenhead, Berks, UK) was administered intravenously as a single bolus at each study visit in random order, after a 30-minute period of baseline blood sampling. INB was measured at baseline, and serial blood sampling was performed at 15-minute intervals for 6 hours to measure serum LH, FSH and testosterone levels. Statistical methods Statistical analyses were conducted using GraphPad Prism version 9.0. Normality was determined by the D'Agostino-Pearson test. Parametric variables were reported as mean ± SD and compared using the unpaired t-test. Non-parametric variables were reported as median (interquartile range) and compared using the Mann-Whitney U test. Discriminatory potential was assessed by receiver operating characteristic (ROC) analysis. Proportions were compared by linear regression. P values <0.05 were regarded as statistically significant. Baseline characteristics The clinical characteristics of the 51 boys with CDGP and 9 boys with CHH were previously described by Binder et al. (27). In boys with delayed puberty, those with CHH had significantly higher BMI and were taller compared to those with CDGP. TV and bone age were lower in the CHH group compared to those with CDGP. Men with CHH were recruited at any age and were older and had higher BMI than eugonadal men. Basal INB levels significantly correlated with mean TV in men with CHH; this significant association with TV was not seen with INSL3, LH, FSH or testosterone (Figures 2A, B). Associations between INSL3 and INB In boys with CDGP, there was a significant positive association between INSL3 and INB (r2 = 0.15, p=0.01) (Figure 2C). The r2 value is weak and the data should therefore be interpreted in this context.
Similarly, in adults with CHH there was also a significant positive association between INSL3 and INB (r2 = 0.36, p=0.01) (Figure 2D). However, significant associations were not observed in boys with CHH, nor in adult eugonadal men. INSL3 and GnRHa/GnRH stimulated LH, FSH and testosterone In boys with CDGP, INSL3 positively correlated with GnRHa-stimulated LH at 4 hours (r2 = 0.17, p=0.003) and 24 hours (r2 = 0.26, p=0.0001) (see Figure 3A and Table 1). As previously, the corresponding data are shown in Figure 3B and Table 1. In eugonadal adult men, INSL3 was not associated with LH, but was positively related to baseline FSH levels (Figure 3C and Table 1), and to testosterone levels at 4 hours after GnRH (Figure 3D and Table 1). Discussion We found that boys with delayed puberty due to CHH had lower INSL3 levels than boys with CDGP; however, there was some overlap in INSL3 levels between boys with CDGP and CHH (auROC of 86.7%). In comparison, all eugonadal adult men had higher INSL3 levels than adult men with CHH, with no overlap in INSL3 levels and an auROC of 100%. Basal INSL3 was not able to differentiate CHH men according to their olfactory status or the presence of pathogenic or likely pathogenic variants. In contrast, LH responses to kisspeptin-54 were even lower in those with anosmia or with pathogenic variants identified in comparison to other men with CHH (28), suggesting that kisspeptin can provide functional assessment in patients with CHH. Additionally, the LH response to kisspeptin-10 could provide prognostic classification of subsequent pubertal development in boys and girls with delayed puberty (30). Additional prospective studies in larger cohorts are needed to realise this potential for translation to clinical practice. INB was higher in boys with CDGP compared to those with CHH, with an auROC of 98.5%, and likewise higher in eugonadal adult men compared to those with CHH (auROC of 93.9%).
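The auROC values quoted here have a simple rank interpretation: the probability that a randomly chosen case (e.g., a boy with CDGP) has a higher marker value than a randomly chosen control (e.g., a boy with CHH), with ties counted as one half; this is the Mann-Whitney U statistic divided by the number of case-control pairs. A minimal pure-Python sketch of that computation (illustrative only, not the GraphPad Prism routine used in the study):

```python
def auroc(cases, controls):
    """auROC as P(case value > control value), ties scored 0.5.

    Equivalent to the Mann-Whitney U statistic divided by
    len(cases) * len(controls). With complete separation of the two
    groups this returns 1.0 (no overlap); overlapping distributions
    pull it below 1.0. Illustrative sketch, not the study's code.
    """
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))
```

Under this definition, an auROC of 100% (as for INSL3 in adult men) means every case value exceeds every control value, whereas an auROC of 86.7% (as in the boys' cohort) reflects partial overlap between the groups.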
In boys with CDGP, significant positive associations between INSL3, INB, and testosterone were observed, likely reflecting the positive relationship between Leydig and Sertoli cell populations and functionality during pubertal progression. Interestingly, in adult men with CHH, TV positively correlated with INB, but not with INSL3. This stands to reason, as after puberty, INB production is directly proportional to spermatogenic status and germ cell mass in addition to Sertoli cell mass, which together comprise the majority of TV (11). These data are consistent with INSL3 being more reflective of the attainment of complete pubertal development, whereas INB appears to have greater predictive power in the setting of boys with delayed puberty. One reason for this could be that INSL3 changes more slowly than INB. As previously reported in cell culture studies, INSL3 is largely a constitutive secretory product of Leydig cells and is not acutely regulated in the short term (hours) by gonadotrophins (26). In contrast, increments in INSL3 production due to Leydig cell proliferation and differentiation following gonadotrophin exposure take place over several days, as a chronic, differentiation-dependent process (26). This is evidenced by a report in healthy adult men who were acutely exposed to supraphysiological levels of LH-like bioactivity, following which an acute rise was seen in serum testosterone but not in INSL3 (31). In contrast, a chronic gonadotrophin/hCG stimulus over a period of weeks to months did result in a change in INSL3 (25,32). Thus, although INSL3 production and secretion are dependent on LH, INSL3 is a longer-term signal of the trophic effect of LH on Leydig cell structure and function (25). Overall, INSL3 could represent an additional downstream marker of Leydig cell function when assessing the HPG axis (26).
Interestingly, in boys with CDGP there were positive associations between INSL3, LH and testosterone (basal and following GnRHa stimulation); however, such associations were not seen in the eugonadal men or in either CHH cohort. This may reflect the spectrum of maturing Leydig cells present in boys with CDGP, demonstrating a variable incremental LH response to GnRHa as boys traverse the different stages of puberty (23). In contrast, in adult eugonadal men the mature Leydig cell population could already be operating at 'functional capacity', and thus these associations were no longer evident. Another plausible explanation for this difference is that GnRHa was used in the paediatric cohort, whilst GnRH was used in the adult cohort. As GnRHa have higher receptor affinity and longer half-lives than GnRH (33), the resultant supraphysiological and sustained stimulation of the gonadotrophic cells may lead to greater LH release, which could better reveal correlations with INSL3. Boys with CDGP and eugonadal adult men demonstrated negative associations between INB and basal and stimulated LH as well as FSH levels. In contrast, in both boys and men with CHH, positive associations between INB and gonadotrophins were observed. The reason for this discrepancy is intriguing; perhaps some element of endocrine feedback is present in boys with CDGP and eugonadal adult men that is yet to be established in the CHH cohort. To date there are limited studies of INSL3 in the context of CDGP, and no previous studies have compared the performance of INSL3 and INB. This is the first study to explore INSL3 levels in both a paediatric and an adult cohort of CHH, comparing these to CDGP and eugonadal controls, respectively. It confirms previous findings of lower INSL3 levels in a CHH cohort compared to eugonadal adults (25); however, another report showed no significant difference in INSL3 between CHH and prepubertal CDGP cohorts (16).
Limitations of this study are its retrospective analysis of previous cohorts and the small number of patients with CHH, which is a rare condition. The adult cohort with CHH was also significantly older than the adult eugonadal cohort and was examined after a pause in testosterone treatment. Some, but not all, of the men with CHH had received previous gonadotrophin treatments, with duration of treatment ranging from 6 months to 5 years. Although study visits were conducted after a washout period in adult men with CHH, it remains possible that the different treatments received could have affected the measurements taken in the study. The paediatric cohort was followed up to 24 months after assessment. A diagnosis of CDGP was made if the TV reached ≥8 ml within 12-18 months of follow-up, and of CHH if TV did not reach ≥5 ml within 24 months of follow-up. Thus, it is possible that some patients assigned as CDGP could have subsequently had arrested pubertal development following discharge, although none of this cohort re-presented for further assessment. The auROC in this study was calculated using data from the adult and paediatric cohorts included, and there is a need for validation in external cohorts. At present there is no 'gold standard' diagnostic test that unequivocally differentiates CDGP and CHH (34). Previously described diagnostic tests include measurement of unstimulated basal/nocturnal gonadotrophins (35, 36) and stimulated gonadotrophin levels following GnRH, GnRHa, or human chorionic gonadotrophin (hCG) (10). Unfortunately, the limited sensitivity, specificity, complexity, and invasiveness of previously described stimulation protocols have limited their universal adoption for the assessment of delayed puberty (10, 34). A recent study published findings of the GnRH stimulation test and INB in a large cohort of men with CHH (n=127), CDGP (n=74), and healthy controls (n=31) (37).
There were variable LH responses to the GnRH test, with 47% of patients with CHH having peak LH levels that overlapped with those in the CDGP group (37). For INB, 50% of levels overlapped between those with CDGP and those with CHH (37). Consistent with the findings of the present study, a positive correlation between INB and TV was described (37). Binder et al. have also reported the accuracy of a composite outcome based on both the LH response to GnRHa and INB in prepubertal males aged 13.7-17.5 years (27), which provided 100% sensitivity and 98.1% specificity when LH <0.3 IU/L and INB <111 pg/ml cut-offs were applied (27). More recently, in addition to kisspeptin-stimulated LH (30) (discussed above), FSH-stimulated INB (38) and GnRHa-stimulated INB (39) have been proposed as potential diagnostic tests to predict outcomes in delayed puberty. Recently, liquid chromatography-tandem mass spectrometry (LC-MS/MS) has been validated for the quantification of INSL3 (17), and reference ranges have been established and tested in healthy boys progressing through puberty (18) and in males with hypogonadotrophic hypogonadism and Klinefelter syndrome (40). Using this assay, INSL3 concentrations were lower in both untreated boys with Klinefelter syndrome (n=83) and those with hypogonadotrophic hypogonadism (n=103). Unlike serum testosterone, INSL3 was not affected by BMI, body fat percentage or alcohol consumption (41), and was not affected by diurnal variation. In summary, INSL3 appears to be able to accurately identify the transition to the post-pubertal adult male state, whereas INB may have more predictive capability in boys with CDGP. This suggests that INSL3 could be used to monitor response to treatment in boys with delayed puberty as they transition towards adulthood, pending further longitudinal study.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement

The studies involving human participants were reviewed and approved by West London Research Ethics Committee, London, United Kingdom (UK) (reference: 12/LO/0507) and the Ethical Committee of the Medical Faculty of the University of Tübingen, Germany. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Multilabel Prototype Generation for Data Reduction in k-Nearest Neighbour classification

Prototype Generation (PG) methods are typically considered for improving the efficiency of the k-Nearest Neighbour (kNN) classifier when tackling high-size corpora. Such approaches aim at generating a reduced version of the corpus without decreasing the classification performance when compared to the initial set. Despite their large application in multiclass scenarios, very few works have addressed the proposal of PG methods for the multilabel space. In this regard, this work presents the novel adaptation of four multiclass PG strategies to the multilabel case. These proposals are evaluated with three multilabel kNN-based classifiers, 12 corpora comprising a varied range of domains and corpus sizes, and different noise scenarios artificially induced in the data. The results obtained show that the proposed adaptations are capable of significantly improving -- both in terms of efficiency and classification performance -- the only reference multilabel PG work in the literature as well as the case in which no PG method is applied, also presenting a statistically superior robustness in noisy scenarios. Moreover, these novel PG strategies allow prioritising either the efficiency or efficacy criteria through their configuration depending on the target scenario, hence covering a wide area in the solution space not previously filled by other works.

Introduction

The k-Nearest Neighbour (kNN) classifier represents one of the most well-known algorithms for non-parametric supervised classification, mostly due to its conceptual simplicity and good statistical error properties [1]. For a given query, this method hypothesises about its category by querying the k nearest neighbours of a reference corpus, following a specified similarity measure [2].
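The neighbour-voting rule just described can be sketched in a few lines; this is an illustrative implementation with assumed names (Euclidean distance, majority vote), not the authors' code:

```python
from collections import Counter
import numpy as np

def knn_predict(query, X_train, y_train, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    in X_train under the Euclidean distance."""
    dists = np.linalg.norm(X_train - query, axis=1)   # exhaustive search
    nearest = np.argsort(dists)[:k]                   # indices of k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]                 # mode of the labels
```

Note that the exhaustive distance computation over the whole reference corpus is precisely the cost that PG methods aim to reduce.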
In this regard, this classification strategy has been largely considered in a wide range of disparate fields such as diabetes detection [3], musical key estimation [4] or handwritten signature verification [5], among others. However, as a representative case of the lazy learning paradigm, kNN does not derive a model out of the reference corpus [6]. Instead, for every query, this method requires an exhaustive search among the elements of the aforementioned corpus, thus entailing low efficiency in both classification time and memory usage [7]. Note that, while this inefficiency issue may be obviated in scenarios with limited amounts of data, kNN becomes intractable when considering large data collections [8]. Data Reduction (DR) stands as one of the main existing approaches in the related literature for tackling this drawback [9]. This group of methods aims to reduce the size of the reference set so as to improve the efficiency of the scheme while keeping (or even increasing) the classification performance obtained with the original data. Among them, the Prototype Generation (PG) family represents one of the most competitive alternatives due to its remarkable reduction capabilities compared to other DR strategies [10]. In a broad sense, PG derives an alternative reference set for the classifier by performing different selection and merging operations on the elements of the initial corpus following certain heuristics [11]. Due to the relevance of PG for efficient kNN-based classification, a considerable amount of research effort has been invested in proposing novel strategies as well as improving the existing ones [12]. However, such research works have typically addressed multiclass scenarios (classification tasks in which every single query is assigned to one category out of a set of mutually exclusive labels), hence neglecting the more general multilabel scenario, in which an undetermined number of categories is assigned to each query [13].
The work by Ougiaroglou et al. [14] represents one of the few existing proposals of a PG strategy devised to address multilabel scenarios. More precisely, that work adapts the state-of-the-art Reduction through Homogeneous Clustering (RHC) method [15] to the multilabel space, obtaining the so-called Multilabel Reduction through Homogeneous Clustering (MRHC). The authors not only conclude that such an adaptation remarkably improves the efficiency of the kNN classifier in multilabel scenarios but also state the need for contriving multilabel PG strategies due to the shortage of existing alternatives. In this context, the present work further explores the proposal and use of PG methods for improving the efficiency of kNN classification in multilabel scenarios. More precisely, we introduce the novel adaptation of four PG strategies from their original multiclass formulation to the multilabel case. These proposals have been comprehensively evaluated considering several multilabel classification approaches based on kNN with a wide variety of corpora. Additionally, different percentages of label-level noise (artificial alterations of the classes or labels of the data, particularly devised for this multilabel framework) have been induced in the corpora to assess the robustness of the proposals and their capability of dealing with such adverse scenarios. The results obtained report a statistical improvement in terms of both reduction capabilities and classification performance for all scenarios and noise levels contemplated compared to the exhaustive search carried out by the base kNN method and the reference MRHC reduction approach. In this regard, these novel proposals not only fill a gap in the scarce multilabel PG literature but also reportedly outperform the only existing strategy in the field, the commented MRHC algorithm.
The rest of the work is structured as follows: Section 2 provides the theoretical background of the work; Section 3 presents the proposed PG methods; Section 4 introduces the experimental set-up; Section 5 shows and discusses the results; and finally, Section 6 concludes the work and poses future research lines to pursue.

Background

To adequately describe multilabel classification, we initially introduce the multiclass framework, as it conceptually represents a simpler task. Formally, let X ⊆ R^f denote an f-dimensional feature space and Y_mc a set of discrete labels, and let T_mc ⊂ X × Y_mc represent an annotated collection of data where each datum x_i ∈ X is related to label y_i ∈ Y_mc by an underlying function h_mc : X → Y_mc. The goal of multiclass classification is retrieving the most accurate approximation ĥ_mc(·) to that underlying function. Among the different alternatives for performing such an approximation task, the well-known kNN stands as one of the most common choices given its relevance in the Pattern Recognition field [16]. Formally, given a query q ∈ X, this method models ĥ_mc as

ĥ_mc(q) = mode(Y^k_mc),

where k stands for the number of neighbours considered, d : X × X → R⁺₀ is a dissimilarity measure, mode(·) denotes the mode operator, and Y^k_mc is the set of labels retrieved from the k elements of T_mc closest to the query q according to d. As previously introduced, the multilabel paradigm constitutes a generalisation of the multiclass framework in which each individual instance may be associated with more than a single label [17]. Formally, the multilabel set of data is T_ml ⊂ X × Y_ml, where Y_ml denotes a collection of mutually non-exclusive labels [18]. As in the multiclass case, the goal is retrieving the most accurate approximation ĥ_ml(·) to the underlying function h_ml : X → Y_ml. To leverage the advantages of multiclass classifiers in multilabel scenarios, the literature considers two main approaches [19]: problem transformation and algorithm adaptation.
We now describe these paradigms and report some commonly considered methods within them for kNN schemes, as these represent the focus of this work. The problem transformation paradigm disentangles the multilabel task into several single-label problems and then applies a multiclass kNN-based strategy to perform the classification task. Some of the most common alternatives are: the Binary Relevance kNN (BRkNN), which decomposes the task into L independent binary classification problems [20]; the Label Powerset kNN (LP-kNN), which derives an alternative single-label corpus where each labelset is considered a different class [21]; and Random k-Labelsets (RAkEL), which divides the initial set of labels into a number of small random subsets, then performs LP-kNN on each and creates an ensemble-based classifier [22]. In contrast, the algorithm adaptation approach focuses on modifying the base multiclass classifier to fit the multilabel scenario. In this regard, the Multilabel kNN (ML-kNN) proposed by Zhang and Zhou [23] expands the base kNN method by resorting to a maximum-a-posteriori principle to determine the labelset of the query based on its neighbouring instances. Some extensions to this approach are the Dependent ML-kNN [24], which models the different dependencies among the set of labels, the IBLR-ML method [25], which expands the base ML-kNN by combining it with logistic regression, and the combination of ensembles and ML-kNN as in the work by Zhu et al. [26]. Nevertheless, while these transformations and adaptations allow the use of kNN in multilabel classification tasks, the inherent efficiency issue of these classifiers has been neglected in the literature. Note that, while some multilabel schemes such as ML-kNN depict inefficiency figures similar to those of the multiclass kNN formulation, since they explore the entire reference set T_ml, the BRkNN case is of particular relevance as it requires iterating through T_ml L different times.
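The two transformation strategies above can be sketched directly on a binary label matrix; this is an illustrative simplification (names and the tie-breaking behaviour are assumptions), not the exact classifiers evaluated in the paper:

```python
import numpy as np

def br_knn_predict(query, X, Y, k=3):
    """Binary Relevance kNN: decide each of the L labels independently by
    a majority vote among the k nearest neighbours.
    Y is a binary (n, L) label matrix."""
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # A label is predicted when more than half of the neighbours carry it.
    return (Y[nearest].sum(axis=0) > k / 2).astype(int)

def lp_knn_predict(query, X, Y, k=3):
    """Label Powerset kNN: treat every distinct labelset as one class and
    return the most frequent labelset among the k nearest neighbours."""
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labelsets = [tuple(Y[i]) for i in nearest]
    best = max(set(labelsets), key=labelsets.count)
    return np.array(best)
```

Note how BRkNN issues one vote per label (hence the L passes over the reference set mentioned above), whereas LP-kNN issues a single vote over entire labelsets.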
The Prototype Generation (PG) family of methods stands as one of the most successful approaches for efficient kNN classification in multiclass cases [9]. As a representative DR strategy, PG aims to obtain an alternative set R_mc by performing certain combinations and transformations on the elements of T_mc so that |R_mc| < |T_mc| while keeping (or even improving) the classification performance. However, as aforementioned, to the best of our knowledge, there is a remarkable lack of methods for performing PG in multilabel scenarios. The sole exception to this assertion is the work by Ougiaroglou et al. [14], where the state-of-the-art multiclass PG method RHC was adapted to the multilabel space. In that work, the authors experimentally proved the usefulness of their PG proposal for improving the efficiency of multilabel classification and stated the need for devising other alternatives to fill this gap in the literature. In this context, the present work proposes the novel adaptation to the multilabel space of four well-known multiclass PG algorithms. More precisely, we consider the classic Chen reduction algorithm [27] as well as the three versions of the reference Reduction through Space Partitioning (RSP) strategy by Sánchez [28]. For this first-time adaptation of such PG algorithms to the multilabel space, this work proposes several mechanisms for both partitioning and integrating the labels of the multilabel prototypes of the initial corpus to eventually generate the instances of the reduced multilabel set. These novel methods are thoroughly compared, in terms of both performance and efficiency, to the state-of-the-art proposal by Ougiaroglou et al. [14] and to the case in which no reduction is performed, considering different multilabel kNN-based classifiers, corpora, and noise scenarios.
This study should provide insight into whether the proposed multilabel PG methods cope with the commented efficiency issue without decreasing the classification performance, as well as into their robustness and data cleansing capabilities in cases depicting the presence of noise in the data.

Prototype Generation in the multilabel space

This section presents the proposed PG methods for the multilabel space. As commented, we focus on the first-time adaptation of four algorithms originally devised for multiclass cases: the Chen method [27] and the three versions of the Reduction through Space Partitioning (RSP) strategy [28]. In this regard, the first part of the section introduces the original multiclass formulations of these algorithms and the second one presents their respective multilabel adaptations proposed in this work.

Reference multiclass PG

The considered multiclass PG strategies (the Chen method as well as the different RSP versions) constitute representative examples of the so-called space splitting policy [29], which typically follows a two-step approach: a first stage, space partitioning, divides the feature space of the multiclass set T_mc into different regions using certain heuristics; after that, the prototype merging stage computes new prototypes from each region attending to different criteria, producing the reduced set R_mc. The existing PG strategies under this framework, therefore, essentially differ in the particular splitting and prototype generation heuristics considered. In the specific case of the Chen and RSP PG families, the aforementioned heuristics depict some similarities. In this regard, we first present the particular approach followed by the Chen method in Algorithm 1, aided by the graphical illustration in Figure 1, and then comment on the points on which the three RSP strategies differ from it.
As may be checked in the algorithm, the method iteratively divides the feature space of T_mc into n_d (a user parameter) disjoint subsets denoted C_mc(i), with the union of C_mc(1), ..., C_mc(n_d) equal to T_mc. For that, the largest subset in each iteration is divided in two attending to the distance between the two farthest prototypes in it. Eventually, for each of the n_d regions, a new prototype is obtained as the median of the features of the elements in it and labelled after the most common class. Hence, the size of the resulting reduced set equals the number of partitions selected by the user, i.e., |R_mc| = n_d.

[Algorithm 1: Chen algorithm for multiclass PG [27]. The pseudocode repeatedly selects the largest subset B, finds its two farthest prototypes p_1 and p_2, divides B into the instances closest to each of them, and iterates until n_d regions are obtained; each region is then merged into a single median prototype labelled by its majority class.]

The RSP family, as commented, builds upon the Chen proposal by modifying some of the space partitioning and/or prototype merging stages. The first RSP version (RSP1) considers the Chen algorithm prone to discard underrepresented classes due to its prototype merging policy (lines 16-18 in Algorithm 1). Thus, instead of computing a single prototype for each of the n_d regions and labelling them after the most represented class in each partition, RSP1 only merges prototypes sharing the same label. Hence, each region is now represented by more than a single prototype, more precisely, as many as the number of classes it contains. In this case, therefore, the size of the reduced set may not be known in advance but accomplishes |R_mc| ≥ n_d. The second version of RSP (RSP2) expands RSP1 by modifying the space partitioning policy and, more precisely, the criterion for selecting the region to split (lines 12-13 in Algorithm 1).

[Figure 1: illustration of the Chen method, including (c) the space partitioning stage and (d) the prototype merging phase.]

RSP2 considers the overlapping degree
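The Chen-style splitting and merging described above can be sketched as follows. This is an illustrative reconstruction (function names and the size-based selection of the cluster to split are assumptions, and n_d is assumed well below the number of distinct samples), not the paper's implementation:

```python
import numpy as np
from collections import Counter

def chen_reduce(X, y, n_d):
    """Chen-style PG sketch: repeatedly bisect the largest cluster along its
    two farthest members, then replace each of the n_d regions with the
    feature-wise median labelled by the region's majority class."""
    clusters = [np.arange(len(X))]
    while len(clusters) < n_d:
        # Select the cluster with the most members and remove it from the pool.
        j = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(j)
        pts = X[idx]
        # The two farthest prototypes in the cluster define the split.
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        p1, p2 = np.unravel_index(np.argmax(d), d.shape)
        closer_p1 = d[:, p1] <= d[:, p2]
        clusters += [idx[closer_p1], idx[~closer_p1]]
    # Prototype merging: one median prototype per region, majority label.
    protos, labels = [], []
    for idx in clusters:
        protos.append(np.median(X[idx], axis=0))
        labels.append(Counter(y[i] for i in idx).most_common(1)[0][0])
    return np.array(protos), np.array(labels)
```

The resulting reduced set has exactly n_d prototypes, matching |R_mc| = n_d above; RSP1 would instead emit one prototype per class inside each region.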
criterion, which is defined as the ratio of the average distance between instances belonging to different classes and the average distance between instances belonging to the same class. The region with the largest overlapping degree is the one to be divided. The third RSP reduction heuristic (RSP3) is based on the idea that each resulting region should represent a cluster of instances belonging to only one class. Thus, this approach modifies the Chen method so that it iteratively performs the space partitioning stage (line 3 in Algorithm 1) until all resulting sets are homogeneous in terms of class representation, leaving the prototype merging phase of the algorithm unaltered. Hence, unlike the RSP1 and RSP2 strategies, the RSP3 approach does not require the n_d user parameter related to the number of resulting regions, since the method exclusively relies on this class homogeneity criterion to accomplish the space partitioning stage.

Multilabel PG proposals

Having introduced the four reference PG methods in their multiclass formulation, we now present their respective multilabel adaptations proposed in this work. The multilabel space splitting PG framework may be formulated in an analogous manner to that of the multiclass case. Initially, the space partitioning phase divides the multilabel set T_ml ⊂ X × Y_ml into n_d non-overlapping multilabel clusters C_ml(i). After the convergence of this stage, the prototype merging step retrieves the multilabel set of data R_ml generated out of these C_ml clusters by following a certain approach, where |R_ml| ≤ |T_ml|. Within this framework, we introduce the different modifications proposed in this work for accommodating the presented multiclass PG methods to such a scenario. Our first proposal is the adaptation of the Chen algorithm, namely Multilabel Chen or MChen.
Since the space partitioning stage (lines 1-15) computes the set of clusters C_mc relying only on the set of features X, no adaptation is required for its multilabel formulation to obtain the set C_ml. Oppositely, given that the prototype merging stage (lines 16-18) usually requires combining elements from different classes, the question arises about the proper approach to do so in multilabel spaces, since the simple selection of the most common label in the C_ml cluster is not suitable for the considered scenario. In this regard, we resort to the policy devised by Ougiaroglou et al. [14] for the MRHC method, in which the resulting prototype keeps the labels present in at least half of the instances of the cluster. Mathematically, the labelset assigned to the resulting element in cluster C_ml(i) is given by

y = { λ ∈ Y_ml : |C_ml(i)|_λ ≥ |C_ml(i)| / 2 },

where |C_ml(i)|_λ denotes the cardinality of label λ in subset C_ml(i). This expression replaces that in line 18 of Algorithm 1, whereas the policy followed for obtaining the set of features (line 17) is not modified. Figure 2c provides a graphical example of this merging procedure considering the space partitioning result shown in Figure 2b. The second proposal is the Multilabel RSP1 or MRSP1. As aforementioned, RSP1 states that, during the prototype merging stage and for each cluster C_mc(i) of multiclass data, one prototype must be retrieved for each class present in it. MRSP1 adapts this stage by resorting to a labelset approach (lines 16-18), i.e. each labelset is considered a different class and the instances with the same labelset are merged and assigned to it. Mathematically, for each of the k = |{y ∈ C_ml(i)}| distinct labelsets in the i-th cluster C_ml(i), the set R_ml receives one prototype obtained by merging the instances sharing that labelset. The Multilabel RSP2 or MRSP2 proposal generalises the space partitioning approach based on the overlapping degree from the RSP2 method to the multilabel space (lines 12-13).
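The MRHC-style merging rule used by MChen can be sketched on a binary label matrix; an illustrative sketch with assumed names, operating on one cluster at a time:

```python
import numpy as np

def merge_cluster(features, labelsets):
    """Merge one multilabel cluster into a single prototype: feature-wise
    median, keeping every label present in at least half of the cluster's
    instances. `labelsets` is a binary (n, L) matrix."""
    proto = np.median(features, axis=0)
    counts = labelsets.sum(axis=0)                     # |C|_lambda per label
    kept = (counts >= len(labelsets) / 2).astype(int)  # majority-label rule
    return proto, kept
```

Applying this rule to each region produced by the space partitioning stage yields the reduced set R_ml of MChen.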
For that, as in the MRSP1 proposal, we resort to a labelset approach: each labelset is considered a different class, and the overlapping degree Φ_i of the i-th region C_ml(i) is computed as the ratio of the average pairwise distance D_≠ between instances belonging to different labelsets and the average pairwise distance D_= between instances sharing the same labelset. Based on this, the overlapping degree Φ_i for the i-th region is eventually obtained as

Φ_i = D_≠ / D_=.

[Figure 2: graphical example of the multilabel space partitioning and merging stages, starting from B = T_ml in panel (a).]

Experimental set-up

This section presents the experimental scheme designed for comparatively assessing the proposed multilabel PG methods. For an easier description, this procedure is graphically illustrated in Figure 3. During the training phase of the procedure, the set of train data T_ml ⊂ X × Y_ml is altered to induce a certain noise level in the instances, controlled by the user parameter θ ∈ [0, 1], retrieving the set T'_ml. Then, this latter data collection T'_ml is processed by a multilabel PG method to obtain a reduced version of the set, namely R_ml, that is used as the reference set for the multilabel kNN-based classifier. It must be noted that the noise induction process represents an optional stage in the posed pipeline. Hence, as will be shown, the first experimental part does not induce any noise by setting θ = 0, while the second one analyses the robustness and data cleansing capabilities of the reduction methods under the data corruption process by considering θ > 0. During the inference stage, a test set of multilabel data S_ml ⊂ X × Y_ml, drawn from the same distribution as the train data T_ml but disjoint from it, is considered for evaluating the method.
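The overlapping degree of a region can be sketched as below; an illustrative implementation (names assumed, pairwise distances computed naively) of the MRSP2 splitting criterion:

```python
import numpy as np

def overlapping_degree(features, labelsets):
    """Ratio of the average pairwise distance between instances with
    *different* labelsets to the average distance between instances sharing
    the *same* labelset. Higher values indicate a more mixed region."""
    n = len(features)
    same, diff = [], []
    for j in range(n):
        for k in range(j + 1, n):
            d = np.linalg.norm(features[j] - features[k])
            if tuple(labelsets[j]) == tuple(labelsets[k]):
                same.append(d)
            else:
                diff.append(d)
    if not same or not diff:   # degenerate region: overlap not measurable
        return 0.0
    return (sum(diff) / len(diff)) / (sum(same) / len(same))
```

Under this criterion, the region with the largest overlapping degree is the one selected for splitting at each iteration.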
Using the ĥ_ml(·) prediction function from the particular multilabel kNN-based classification strategy at hand, each sample x_i ∈ S_ml is given a labelset that is eventually compared to that in the ground truth based on certain evaluation criteria. The remainder of the section presents the corpora used for assessing the multilabel PG proposals, the noise induction procedure used, the considered kNN-based classification strategies, and the contemplated evaluation protocol.

Corpora

We have considered 12 multilabel corpora from the Mulan repository [30] comprising a varied range of domains, corpus sizes, initial space dimensionalities, and target label spaces. The precise details in terms of size, features and label dimensionality of these sets are provided in Table 1. In addition, the cardinality (average number of labels associated with each instance) and density (ratio of cardinality to label dimensionality) measures are provided for each corpus, as they represent common descriptors in the multilabel classification field. Note that, for the sake of reproducible research, we have used the partitions defined by Szymański and Kajdanowicz for these particular corpora [31].

Noise induction procedure

To examine the actual robustness of both the existing and the proposed multilabel PG methods, we artificially introduce noise in the data. For that, we swap the labelsets of a randomly selected subset of the training prototypes. This procedure is detailed in Algorithm 2, in which the user parameter θ ∈ [0, 1] represents the induced noise rate, i.e., the percentage of prototypes that change their labelset.
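The labelset-swapping noise induction can be sketched as follows; an illustrative reconstruction (the pairing of swapped instances and the seeding are assumptions), not the paper's exact procedure:

```python
import random

def induce_noise(labelsets, theta, seed=0):
    """Pick a fraction `theta` of the instances at random and swap the
    labelset of the i-th picked instance with that of the instance mirrored
    at the other end of the selection; features are left untouched."""
    rng = random.Random(seed)
    noisy = list(labelsets)
    picked = rng.sample(range(len(noisy)), int(theta * len(noisy)))
    for i in range(len(picked) // 2):
        a, b = picked[i], picked[len(picked) - 1 - i]
        noisy[a], noisy[b] = noisy[b], noisy[a]
    return noisy
```

Because labelsets are only swapped (never invented), the overall label distribution of the corpus is preserved while a fraction θ of the instances become mislabelled.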
[Algorithm 2: Noise induction procedure. Given the multilabel train corpus T_ml ⊂ X × Y_ml and the noise level θ, a subset Θ of instances is drawn at random; for each i, the labelset of the i-th element of Θ is swapped with that of the (|Θ| − i)-th element, and the altered instances are reinserted to produce the noisy corpus T'_ml.]

As aforementioned, the particular case of θ = 0 represents that in which no noise is induced in the corpus, hence T'_ml = T_ml.

Classification strategies

We have selected three reference multilabel techniques based on kNN as classification methods: BRkNN and LP-kNN from the transformation paradigm, as well as ML-kNN based on the algorithm adaptation premise. In all cases, the Euclidean distance has been used as the dissimilarity measure. Regarding the parameter k, representing the number of neighbours, we have considered the values k ∈ {1, 3, 5, 7}. Note that this parameter is not optimised by any means during the experimentation, since the aim is to examine its influence on the overall classification performance in relation to the PG mechanisms.

Evaluation metrics

To assess the goodness of the proposals, we consider two criteria: classification performance and efficiency figures. With respect to the former criterion, we resort to the Hamming Loss (HL), as it constitutes a commonly considered approach for measuring the goodness of multilabel classifiers [33]. This metric, defined as the fraction of wrongly predicted labels with respect to the total number of labels, can be mathematically posed as

HL = (1 / |S_ml|) Σ_{i=1}^{|S_ml|} |y_i ∆ ĥ_ml(x_i)| / L,

where S_ml ⊂ X × Y_ml denotes the multilabel set of test data, ∆ is the symmetric difference of the ground-truth set y_i and the predicted set ĥ_ml(x_i), and L is the number of labels. As commonly done in the DR field, efficiency is assessed by comparing the size of the reduced set R_ml normalised by that of the training set T_ml [34].
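The Hamming Loss just defined is straightforward to compute on binary labelsets; a minimal sketch with assumed names:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of wrongly predicted labels over all instances and all L
    labels. Inputs are equal-length sequences of binary tuples; the
    per-instance symmetric difference reduces to counting disagreements."""
    n, L = len(y_true), len(y_true[0])
    wrong = sum(a != b
                for yt, yp in zip(y_true, y_pred)
                for a, b in zip(yt, yp))
    return wrong / (n * L)
```

A perfect classifier yields HL = 0, and lower is better, which is why HL is paired below with the (also to-be-minimised) relative size of the reduced set.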
Computation time was discarded as an evaluation metric due to its variability depending on the load of the computing system. It must be noted that PG methods for kNN seek to simultaneously optimise two contradictory goals, set size reduction and classification performance, so that no single global optimum can be achieved. Hence, as in reference works from the literature [35, 36], we address it as a Multi-objective Optimisation Problem in which the two aforementioned objectives are meant to be optimised. The different solutions under this framework (there may exist more than one) are retrieved by resorting to the concept of non-dominance: one solution is said to dominate another if it is better or equal in each goal function and, at least, strictly better in one of them. Those elements, typically known as non-dominated, constitute the Pareto frontier, in which all elements are deemed optimal solutions without any order among them. The remainder of the section presents two particular experiments: (i) a first part in which the PG methods are comparatively evaluated obviating the noise induction process; and (ii) a second one whose focus is the noise robustness and data cleansing capabilities of these PG schemes.

Results

This section introduces and discusses the results obtained. The implementation of the proposed PG methods and the experimental procedure considered is publicly available at: https://github.com/jose-jvmas/multilabel_PG

Comparative assessment of multilabel PG strategies

In this first experiment, we thoroughly compare the different reduction strategies using the aforementioned multilabel kNN-based classifiers as individual scenarios. In this regard, Table 2 and Figure 4 show the results obtained, in which the performance and reduction figures constitute the average of the individual values obtained for the corpora considered.
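The non-dominance filtering described above can be sketched in a few lines; an illustrative implementation for the two criteria used here, both to be minimised (Hamming Loss and relative resulting-set size):

```python
def dominates(q, p):
    """q dominates p if q is <= p in every objective and < in at least one
    (objectives are minimised)."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated points: these form the Pareto frontier."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

All surviving points are equally valid optima: moving along the frontier trades classification performance for set-size reduction, which is why the comparison below is carried out against the elements of each frontier rather than a single best configuration.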
A first remark that may be observed is that the proposed methods fill a region of the space of possible solutions not previously occupied by existing multilabel PG methods. This is because some of the proposals (MChen, MRSP1, and MRSP2) allow selecting the size of the reduced set through a parameter. Note that, while this may be considered a drawback, such a feature allows prioritising either the reduction rate or the classification performance depending on the particular application considered. It can also be checked that, in all cases, MChen achieves the highest reduction rates, even when other parameter-based multilabel PG proposals consider the same m value. This is somewhat expected since, during the prototype merging stage, MChen only retrieves a single prototype for each region, while MRSP1 and MRSP2 obtain a prototype per labelset in those subsets, hence inherently increasing the size of the resulting set R ml. In terms of non-dominance, it may be noted that the obtained Pareto frontier comprises several of the proposed methods. Since the reduction process is applied before the classification stage, the resulting set sizes are the same for all scenarios, the differences in performance being due only to the particular capabilities of the classification scheme. In this regard, it can be observed that LP-kNN may be deemed the least competitive alternative since, for the same reduction scheme, its HL figures tend to be higher than those of the other alternatives. Oppositely, BRkNN and ML-kNN show similar performance results, since their HL figures do not remarkably differ. Finally, it may also be checked that the classification rates generally improve as the number of neighbours considered (the k parameter of the classifiers) increases. This fact suggests the presence of some noise in the corpora that is somehow palliated by adequately tuning this parameter. Note that, among the different multilabel classifiers studied, LP-kNN is the one that shows the least improvement when increasing this k value.
Statistical significance analysis

A significance analysis has been performed to statistically evaluate the results obtained. For that, we have considered the Wilcoxon signed-rank test [37] to assess whether the classification performance and reduction rate of the proposed PG methods significantly improve those of the baseline strategies. More precisely, for each classification scenario, we compare the results obtained by the elements of the particular Pareto frontier against the best figures obtained by the baseline MRHC and ALL methods. For that, we consider the individual results obtained (either performance or reduction) for each corpus contemplated in the experimentation. Table 3 shows the results of such an analysis when considering a significance threshold of p < 0.05. The symbols ✓, ✗, and = respectively denote that, for each classification scenario, the non-dominated solution in the row significantly improves, worsens, or does not differ from the reference one in the column for the performance (HL) or reduction (Size) criterion. Regarding reduction, the non-dominated solutions significantly improve the references, given that these methods do not remarkably reduce the set size of the reference corpus.

Noise robustness and data cleansing study

In this second experiment, we assess the performance of both the proposed multilabel PG strategies as well as the reference ones in scenarios with noisy data. For that, we consider the labelset swapping procedure introduced in Section 4.2 with θ ∈ {20%, 40%}, as they stand as representative noise rates commonly considered in the related literature [32]. For comparative purposes, the case of θ = 0 is also included to assess the base case in which no noise is induced in the data. The induction of noise in the corpora clearly affects the overall performance since, in general, all studied cases depict lower classification rates as the noise level increases.
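The paired comparison can be illustrated with an exact one-sided Wilcoxon signed-rank test implemented in pure Python for small samples. The HL figures below are invented for the sketch, and the implementation assumes no zero or tied differences; the paper's analysis presumably relies on a standard statistical package.

```python
from itertools import product

def wilcoxon_signed_rank_p(x, y):
    """Exact one-sided p-value (H1: x < y); assumes no zero/tied differences."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # Rank the differences by absolute value (rank 1 = smallest magnitude).
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    # Null distribution of W+: enumerate every assignment of signs to ranks.
    dist = [sum(r for r, s in zip(range(1, n + 1), signs) if s)
            for signs in product([0, 1], repeat=n)]
    return sum(w <= w_plus for w in dist) / len(dist)

# Invented per-corpus HL figures for a proposal vs. a baseline (8 corpora):
# the proposal is consistently lower, so W+ = 0 and p = 1/2^8 ≈ 0.0039 < 0.05.
proposal = [0.020, 0.100, 0.180, 0.060, 0.140, 0.090, 0.030, 0.120]
baseline = [0.021, 0.102, 0.183, 0.064, 0.145, 0.096, 0.037, 0.128]
p = wilcoxon_signed_rank_p(proposal, baseline)
```

The exact enumeration over 2^n sign patterns is only feasible for small n (here, few corpora), which is precisely the regime where the normal approximation of the test is least reliable.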
It must be noted that, while the use of high k classification values (e.g., k = 5 or k = 7) somehow palliates this effect, the best performance achieved in these noisy scenarios is indeed lower than that of the non-induced noise case. Besides, all PG methods show, in general, worse reduction rates as the noise increases, with the MRHC and MRSP3 strategies particularly affected. The sole exception to this assertion is the MChen method, whose reduction capabilities remain stable independently of the noise induced in the data. Overall, it can be noted that MChen can be considered the best noise cleansing strategy, since the classification schemes trained after that stage achieve the best overall HL performance figures. The MRSP proposals, though, prove not to be that robust against this type of noise, since the performance of the classification schemes trained with the sets obtained with those methods degrades as the presence of noise increases. Regarding the reference MRHC method, a performance trend similar to that of the MRSP family may be observed. In terms of non-dominance, it may be observed that the different Pareto frontiers still comprise the proposed methods, even when considering the noisiest scenario of the ones studied in the work. In this regard, it may be concluded that the MChen algorithm proves itself a considerably robust method (both in terms of efficiency and classification performance) against noisy situations, especially when set to high reduction rates (e.g., m = 10% or m = 30%).

Statistical significance analysis

As in the first experiment, we have considered the Wilcoxon signed-rank test to statistically compare the results obtained by the elements of the Pareto frontier against the best results obtained by the baseline MRHC and ALL methods for each noise scenario. Table 5 shows the outcome of such an analysis when considering a significance threshold of p < 0.05.
In relation to the classification rate, it may be noted that all non-dominated proposals either equal or improve the exhaustive search case with a significantly lower number of prototypes. More precisely, our proposals improve on the ALL case when an elevated level of noise is induced in the data, while, when addressing scenarios with low levels of induced noise, the proposed multilabel methods in the Pareto frontier do not significantly differ from the results of the exhaustive search. Regarding the classification performance of the MRHC baseline method, it may be observed that this strategy is remarkably affected by the noise, being significantly outperformed by all the multilabel PG proposals in the non-dominated frontier. The sole exception to this assertion is MChen10 in the Noise 0% scenario, which does not significantly differ from the MRHC case. Overall, this analysis proves the superior robustness and noise cleansing capabilities of the proposed multilabel PG alternatives since, in the worst-case scenario, the classification rate achieved is similar to that of the exhaustive search but with a significantly lower number of samples. Besides, it is also proved that the only existing multilabel PG method in the literature (the MRHC algorithm) is severely affected by these noisy scenarios, both in terms of efficiency and classification rate, being hence outperformed by the novel multilabel PG proposals introduced in this work.

Conclusions and Future Work

Prototype Generation (PG) represents one of the most competitive approaches for improving the efficiency of the k-Nearest Neighbour (kNN) classifier, which typically suffers from low efficiency figures when tackling scenarios with large amounts of data. Nevertheless, while PG methods are commonly considered in multiclass scenarios, very few works have addressed such a task in multilabel frameworks.
This work presents the first-time adaptation of four multiclass PG methods to the multilabel case: the reference Chen method [27] and the three versions of the well-known Reduction through Space Partitioning [28]. For that, we gener-
Cancer Treatment by Caryophyllaceae-Type Cyclopeptides

Cancer is one of the leading diseases, in most cases ending in death, and thus continues to be a major concern for human beings worldwide. The conventional anticancer agents used in the clinic often face resistance in many cancer diseases. Moreover, heavy financial costs preclude patients from continuing treatment. Bioactive peptides, active against several diverse human health problems, such as infection, pain, hypertension, and so on, show the potential to be effective in cancer treatment and may offer promise as better candidates for combating cancer. Cyclopeptides, of natural or synthetic origin, have several advantages over other drug molecules, including low toxicity and low immunogenicity, and they are easily amenable to changes in their sequences. Given their many possible homologues, they have created new hope of discovering better compounds with desired properties in the field of challenging cancer diseases. Caryophyllaceae-type cyclopeptides show several biological activities, including cytotoxicity towards cancer cells. These cyclopeptides have been discovered in several plant families but mainly in the Caryophyllaceae family. In this review, a summary of the biological activities found for these cyclopeptides is given; the focus is on the anticancer findings for these peptides. Among these cyclopeptides, information about Dianthins (including Longicalycinin A), isolated from different species of Caryophyllaceae, as well as their synthetic analogues, is detailed. Finally, by comparing their structures and cytotoxic activities, the common features of these kinds of cyclopeptides, as well as their possible future place in the clinic for cancer treatment, are put forward.

INTRODUCTION

Cancer cells, often presenting as malignant tumors, are described as accumulations of abnormal cells that grow and divide rapidly in an uncontrolled manner (1).
They may migrate and invade every part of the body, a phenomenon named metastasis (1). Cancer cells do not die, so apoptosis, the programmed cell death that occurs in normal cells, does not happen in cancerous cells; as a result, old, damaged, and defective cells survive and accumulate together with newly born unwanted cells (2). The progression of cancer and its metastasis ultimately end with patient death. On the basis of a WHO report, it is estimated that 1 in 6 deaths globally is due to cancer (3). As time passes, the incidence of cancer increases year by year, such that it will involve 29.5 million people in 2040 (4). Cancer treatment with traditional chemotherapeutics may have several drawbacks, including the danger of disease recurrence due to emerging resistance to these agents and the appearance of drug side effects during long-term usage (1). Therefore, cancer treatment requires changing anticancer drug protocols many times, which obviously imposes high costs and a major economic burden on the patients and their relatives (5). The problem is compounded if there are social and emotional pressures on the patients (5). The successful use of peptides for treating various diseases (6,7) has attracted researchers to apply these agents to combating cancer (8,9). That is because peptides have several advantages over other chemotherapeutics. These noticeable characteristics of peptides include good efficacy, high potency, and low immunogenicity (1). Peptides are amenable to many changes in their sequences so as to show selective action on targeted cells while causing low toxicity to normal cells (1). In addition, peptides cause a low incidence of resistance in malignant cells (10). However, peptides also have some properties that are considered disadvantages (1,9). Among these unfavorable characteristics, low stability against lytic enzymes is the most important (1).
To improve this low stability, some strategies have been suggested by researchers (11). These are reordering peptide sequences (12), choosing D-amino acids instead of L-amino acids (13,14), and converting the linear peptide into a cyclic form (15). Cyclization can also make the peptide conformation more suitable for binding to a biologically active site (16). Cyclic peptides (also called cyclopeptides) have many potential therapeutic properties (17) and are suitable for use as drugs in the clinic (18). Plant cyclopeptides show many attractive biological activities. They are divided into eight types (19). Among them, Caryophyllaceae-type cyclopeptides, composed of cyclic di-, penta-, hexa-, hepta-, octa-, nona-, deca-, undeca-, and dodeca-amino acid residues, number more than 200 kinds (20). These cyclopeptides are extracted from several plant families, mainly from Caryophyllaceae. This is a large family of flowering plants that contains approximately 81 genera and 2625 species (21). The discovery of cyclopeptides from Caryophyllaceae dates back to 1959, when the first cyclopeptide, named Cyclolinopeptide A, was extracted from the seeds of Linum usitatissimum (22). It is a potent immunosuppressive agent (23). Since then, Caryophyllaceae-type cyclopeptides, defined as homomonocyclopeptides, have been discovered from higher plants. These peptides show several biological activities, such as antimalarial, antiplatelet, immunomodulating, immunosuppressive, cyclooxygenase inhibitory, tyrosinase and melanogenesis inhibitory, Ca 2+ antagonism, and estrogen-like and cytotoxic activities (19,24). The demonstration of various activities, including an anticancer effect, by Caryophyllaceae-type cyclopeptides encouraged us to review and summarize the latest biological findings on these peptides. In this review, the focus is on discussing the cytotoxic action of Caryophyllaceae-type cyclopeptides, especially the dianthins and their synthetic analogues.
Moreover, in this review an attempt is made to obtain information about the relationship between structure and anticancer activity (SAR) of these cyclic peptides, which is useful for finding good candidates among cyclopeptides as anticancer agents with optimum structures for application in the clinic.

SEGETALINS

The discovery of segetalin cyclopeptides from the seeds of Vaccaria segetalis (Caryophyllaceae) started in 1994, when the first cyclopeptide, called segetalin A, was isolated; its structure was proved by instrumental analyses, i.e., two-dimensional nuclear magnetic resonance (2D NMR) and electrospray ionization mass spectrometry (ESI-MS)/MS, as well as by chemical and enzymatic hydrolysis methods. The sequence of its structure was found to be cyclo(Ala-Gly-Val-Pro-Val-Trp-). It was shown that this cyclopeptide exerts an estrogenic activity on the uterine weight of ovariectomized rats (25). Soon after, the isolation of other segetalins, such as segetalins B, C, and D, was reported from the same plant (26). The estrogenic activity of segetalin B was higher than that of segetalin A, whereas segetalins C and D did not show such activity noticeably (26). The resemblance of part of the peptide sequence between segetalin A and segetalin B (Gly-Val and Trp-Ala) was suggested to be the reason for such similar activity. (The peptide sequences are given in Table 1.) The discovery of the natural segetalins E, F, G, and H from Vaccaria segetalis is documented in several reports (27)(28)(29)(30). Further examining the biological activity, segetalins G and H, in addition to A and B, displayed estrogenic activity in ovariectomized rats (27,31). It was assumed that this property was due to the kind of sequence as well as the conformation of the peptides (31).
The sequences Trp-Ala-Gly-Val or Tyr-Ala-Gly-Val, the occurrence of β-turns (one in segetalin B, between Trp 4 and Ala 5, and two in segetalin A, between Trp 5 and Ala 6 and between Val 2 and Pro 3), as well as the cyclic rather than acyclic shapes of the peptides, are important factors in the estrogenic activity (31). When the biological activity of segetalin E (Figure 1) was examined on cancer cells using an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay, it was found that segetalin E had moderate inhibitory activity against the growth of lymphocytic leukemia P-388 cells (IC50 40 µg/mL ≈ 49.2 µM) (32) and higher inhibitory action against Dalton's lymphoma ascites (DLA) and Ehrlich's ascites carcinoma (EAC) cell lines (with IC50 values of 3.71 and 9.11 µM, respectively) (29) (Chart 1). Segetalin E also showed anthelmintic activity against two earthworms, M. konkanensis and P. corethruses, at a dose of 2 mg/mL. In another study, the structure as well as the absolute stereochemistry of segetalin F were determined by analytical and chemical means (30). In an experiment designed to determine the vasorelaxant activity of segetalins against rat aorta contraction induced by norepinephrine, some segetalins, i.e., segetalins F, G, and H, isolated from Vaccaria segetalis, showed relatively strong relaxant activity, while segetalin B exhibited contractile activity. This interesting finding was interpreted as a result of the basic Lys and Arg residues present in segetalins F, G, and H, whereas in segetalin B there is no basic residue; instead, a Trp residue is present (27,30). As cyclopeptides may be produced biosynthetically by ribosomal- or nonribosomal-dependent peptide synthases, some authors investigated the biosynthesis of Caryophyllaceae-like cyclopeptides in the developing seeds of Saponaria vaccaria.
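The µg/mL ≈ µM equivalence quoted above for segetalin E follows from dividing the mass concentration by the compound's molar mass. A minimal sketch of the conversion follows; the ~813 g/mol molar mass used here is inferred from the quoted 40 µg/mL ≈ 49.2 µM figures, not stated in the source.

```python
def ug_per_ml_to_uM(conc_ug_ml, mw_g_mol):
    """Convert a concentration in µg/mL into µM given the molar mass."""
    # 1 µg/mL equals 1 mg/L; dividing by g/mol gives mmol/L (i.e. mM),
    # and multiplying by 1000 gives µM.
    return conc_ug_ml / mw_g_mol * 1000.0

segetalin_e_mw = 813.0  # g/mol, inferred from the quoted IC50 values
ic50_uM = ug_per_ml_to_uM(40.0, segetalin_e_mw)  # ≈ 49.2 µM
```

The same conversion applies to every mass-based IC50 reported in this review, which is why compounds of different molar mass cannot be compared directly on a µg/mL scale.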
They found that there are genes encoding the precursors of the cyclic peptides, which are ultimately cyclized into the final products. This finding was further confirmed by expressing a complementary DNA (cDNA) in the roots of transformed S. vaccaria to encode a synthetic precursor of segetalin A. During the investigation, the authors predicted the presence of two more segetalins, i.e., segetalins J and K, in S. vaccaria seeds. The prediction by sequence analysis also revealed the existence of genes encoding cyclic peptide precursors in Dianthus caryophyllus and Citrus species (33). The sequences of all segetalins are shown in Table 1.

YUNNANINS

In 1994, the isolation of two cyclic peptides, yunnanins A and B, was reported from the root of Stellaria yunnanensis (Caryophyllaceae) (34). The peptide structures were confirmed by spectroscopic analysis and chemical methods (34,35). Yunnanin A showed cytotoxic activity against P-388 cells (36). Yunnanin C, isolated from the root of the same plant (36), as well as yunnanin A, showed in vitro antiproliferative activity against three cell lines, J774.A1 (murine monocyte/macrophage cell line), WEHI-164 (murine fibrosarcoma cell line), and HEK-293 (human epithelial kidney cell line), after three days of incubation (IC50s ranging from 2.1 to 7.5 µg/mL) (37). Interestingly, the synthetic counterparts of these cyclopeptides did not show the same level of cytotoxicity against the above cell lines (IC50s less than 100 µg/mL). This discrepancy was discussed as possibly resulting from subtle conformational changes of proline units during the synthesis process, which ended with diverse arrangements of proline residues in the synthetic cyclopeptides (37). In another experiment, the synthesized yunnanin A showed weak antimicrobial, anti-inflammatory, and anthelmintic activities (38).
Recently, a cyclic analogue of yunnanin A was synthesized by eliminating the tyrosine residue and introducing a phthalimide structure instead, inside the ring, through a photoinduced single-electron-transfer (SET) reaction (Figure 2). The hydroxyl group attached to the isoindolinone part of the phthalimide structure could play a role similar to that of the hydroxyl group of the eliminated tyrosine residue. It is shown that this analogue exhibits strong toxicity against HepG2 and HeLa cell lines (IC50s 29.25 µg/mL and 65.01 µg/mL, respectively) (Chart 2). In contrast, it had almost no toxic activity against normal cells, the L929 cell line (IC50 203.25 µg/mL) (Chart 2). It is suggested that special intramolecular hydrogen bonding and the γ- and β-turn secondary structures of this cyclic peptide analogue could be possible reasons for such high activity (39). On the other hand, the synthesis of yunnanin C was also revisited, along with the preparation of 9 related mutated analogues, by the method of serine/threonine ligation (STL)-mediated cyclization (40). Three other yunnanins, named yunnanins D, E, and F, were isolated from Stellaria yunnanensis, and their structures were confirmed by spectroscopic and chemical analysis (41). Later, in 2005, the total synthesis of yunnanin F was reported using a disconnection approach (42). In screening for antimicrobial and pharmacological activities, yunnanin F showed moderate-to-good inhibitory activity against the growth of bacterial cells and weak activity against fungal cells (42). Meanwhile, this cyclopeptide demonstrated good anthelmintic activity against earthworms (42). In terms of pharmacological activity, yunnanin F produced moderate anti-inflammatory activity (42). The structures of yunnanins B, C, and D are shown in Figure 3. The peptide sequences and information on the biological activities of all the yunnanin cyclopeptides are given in Table 2.
DICHOTOMINS

Dichotomins A to E were isolated from the roots of Stellaria dichotoma L. var. lanceolata Bge, and their structures were defined by 2D NMR spectroscopy, X-ray, and chemical analysis (43,44). In addition, the biological activities of these cyclopeptides were reported by the same authors. Dichotomins A, B, C, and E demonstrated inhibitory action against the growth of P-388 lymphocytic leukemia cells (IC50s of 2.5, 3.5, 5.0, and 2.0 µg/mL, respectively) (Chart 3). Interestingly, dichotomin D did not show such activity. It is argued that dichotomins A, B, and C, as hexacyclopeptides, have identical sequences except at the sixth residue. To show the similarity of dichotomin E, a pentacyclopeptide, with the three dichotomins A, B, and C, the sequence Tyr-Ala-Phe in dichotomin E was compared with the sequence Phe-Leu-Tyr in these hexacyclopeptides. From this comparison, it was noted that a common feature exists: one aliphatic residue is present between two aromatic residues in the A, B, C, and E cyclopeptides. Dichotomin D, as a hexacyclopeptide, had neither a sequence identical with that of dichotomins A, B, and C nor an aliphatic residue separating two aromatic residues. In fact, the two aromatic residues of dichotomin D were close together. Instead of cytotoxic activity, dichotomin D showed strong cyclooxygenase inhibitory action (about 73% inhibition at 100 µM concentration compared with the control). Dichotomin A did not show activity against cyclooxygenase (44). Later on, dichotomins F and G were isolated from the same plant source, and their structures were confirmed by instrumental and chemical methods (45). Both dichotomins F and G demonstrated inhibitory action on cyclooxygenase. In another report, dichotomins H and I were found in the same plant, and their biological activities were studied (46).
Using an MTT assay, dichotomins H and I were shown to have inhibitory activity against the growth of P-388 cells (IC50s 3.0 and 2.3 µg/mL, respectively) (Chart 3). In a further study, dichotomin J, originating naturally from the roots of Stellaria dichotoma, was synthesized and screened for antibacterial, antifungal, and anthelmintic activities and compared with the appropriate standard drugs ciprofloxacin, griseofulvin, and albendazole, respectively. It is shown that this cyclopeptide has good inhibitory action against bacteria (S. aureus, B. subtilis) and moderate activity against fungi (C. albicans, A. niger) at a concentration of 50 µg/mL, and high killing activity against earthworms (mean death time 27.65 min, comparable with the 22.78 min found for albendazole) (47). The structures of the anticancer dichotomins are shown in Figure 4. The peptide sequences and information on the biological activities of all the dichotomin cyclopeptides are given in Table 3.

CYCLOLEONURIPEPTIDES

Cycloleonuripeptides, as proline-rich cyclopeptides, were discovered from the fruits of Leonurus heterophyllus (Labiatae). Cycloleonuripeptides A, B, and C are nonacyclopeptides (Figure 5). Among them, cycloleonuripeptide B is epimeric with cycloleonuripeptide C. The structures of these cyclopeptides were elucidated by 2D NMR and chemical analysis (48). The presence of proline residues in the peptide backbone could give rise to several possible conformations as a result of cis-trans isomerization of the amide bonds involving proline. Therefore, conformational analysis of the cyclopeptides was carried out by distance geometry calculation and restrained energy minimization using NMR data. Although five proline residues are present in these peptide sequences, a single stable conformer was observed for them. In addition, it was found that the skeleta of cycloleonuripeptides A, B, and C contain two β-turns (49).
Cycloleonuripeptides B and C exhibit inhibitory action on the growth of P-388 lymphocytic leukemia cells (IC50s 6.0 and 3.7 µg/mL, respectively) (Chart 4) (48). Further experiments on the fruit extract of Leonurus heterophyllus resulted in the discovery of cycloleonuripeptides D (50), E, and F (51). Cycloleonuripeptide D shows inhibitory action against a cyclooxygenase enzyme, and cycloleonuripeptides E and F demonstrate moderate activities as vasorelaxant agents in rat aorta (50,51). The peptide sequences and information on the biological activities of all the cycloleonuripeptides are given in Table 4.

CYCLOLINOPEPTIDES

Cyclolinopeptide A (CLA), the first natural cyclopeptide, was isolated from the seeds of Linum usitatissimum (linseed oil) in 1959 (52). Conformational analysis by NMR shows that cyclolinopeptide A occurs as at least four types of conformers in the solution state, none of which contains intramolecular hydrogen bonds. The peptide molecule is therefore highly flexible in solution (53). This peptide is a potent immunosuppressive agent, comparable with cyclosporine A (23). The search for more cyclolinopeptides was followed by the discovery of cyclolinopeptides B-E, whose structures were elucidated by 2D NMR and chemical analysis (54). Studies of the biological activities of these cyclopeptides showed that cyclolinopeptide B possesses inhibitory activity against the concanavalin A-induced mitogenic response of human peripheral blood lymphocytes (IC50 44 ng/mL), comparable with cyclosporine A (55). Cyclolinopeptides A, B, and E also show moderate inhibition of the proliferation of mouse lymphocyte cells induced by concanavalin A (IC50s 2.5, 39, and 43 µg/mL, respectively) (Chart 5) (54). In contrast, cyclolinopeptides C and D do not give such a level of activity (IC50 >100 µg/mL). In another study, four cyclolinopeptides, F-I, were found in the seeds of Linum usitatissimum.
The structures of the isolated cyclopeptides were determined by instrumental and chemical methods. In addition, the immunosuppressive activity of these compounds was evaluated against mouse splenocytes (56). Cyclolinopeptides F-I also do not show immunosuppressive activity (IC50 >100 µg/mL). From these different results between cyclolinopeptides A, B, and E on one hand and cyclolinopeptides C, D, and F-I on the other, it is inferred that the biological activity is very dependent on the sequence as well as the conformation of the peptides. To confirm this hypothesis, the three-dimensional structure of cyclolinopeptide A, studied by X-ray, and its distance geometry calculations were compared with those of cyclolinopeptide B. It was found that the conformation of cyclolinopeptide A in the solid state was similar to that of both cyclopeptides A and B in the solution state (57). The structures of the cytotoxic cyclolinopeptides are shown in Figure 6. The peptide sequences and information on the biological activities of all the cyclolinopeptides are given in Table 5.

CHERIMOLACYCLOPEPTIDES

Cherimolacyclopeptides A and B were isolated from the seeds of Annona cherimola Miller. Tandem mass spectrometry and 2D NMR spectroscopy were used to determine the peptide sequences (Figure 7). The solution-state structure of cherimolacyclopeptide A was also studied. It is shown that this cyclopeptide contains two β-turns and one new type of β-bulge compared with other cyclopeptides. A cytotoxicity study showed that cherimolacyclopeptide A is a potent cytotoxic agent (IC50 0.6 µM), but cherimolacyclopeptide B is a weak cytotoxic agent against KB tumor cells (IC50 45 µM) (Chart 6) (58). The peptide sequences and information on the biological activities of the cherimolacyclopeptides are given in Table 6.

DIANTHINS

Dianthus is a genus of the Caryophyllaceae that includes 300 species (59).
From some Dianthus species, three classes of compounds have been isolated and studied: 1) triterpenoid saponins, called dianosides A and B (60), C-F (61), and G-I (62); 2) dianthin proteins, dianthin-30 and dianthin-32 (63) and dianthin-29 (64); and 3) dianthin cyclic peptides A and B (65), C-F (59), G and H (66), and I (67,68). It should be noted that, because the proteins and cyclopeptides of Dianthus share the common name of dianthin, these two different classes must not be confused. In this review, the isolation, synthesis, and biological activities of the dianthin cyclopeptides are discussed. Discussion of the biological activities is focused on the anticancer activity of these cyclopeptides.

Dianthin Isolation and Synthesis

Isolation

Historically, the plant Dianthus superbus L. has been used in China as a traditional medicine for its several biological activities, including diuretic, anti-inflammatory, urinary anti-infective, and anticancer effects (69). In an initial study, two cyclopeptides were isolated from this plant, and their structures were determined as cyclo(-Ala-Tyr-Asn-Phe-Gly-Leu) (dianthin A) and cyclo(-Ile-Phe-Phe-Pro-Gly-Pro) (dianthin B) using instrumental analysis (65). In a following study, four cyclopeptides, dianthins C, D, E, and F, were isolated from the extract of the same plant, and their structures documented by mass spectrometry, 2D NMR analysis, and some chemical methods (59). In this study, the cytotoxicity of dianthin E is also reported. In another study, two other cyclopeptides were identified from the extract of Dianthus superbus by employing ESI tandem mass fragmentation, 2D NMR analysis, and X-ray diffraction (28). The antiproliferative activities of dianthins G and H are also evaluated in this study. A few years ago, the isolation of a new cyclopeptide named dianthin I was reported from Dianthus chinensis, and its structure was determined (67). From Dianthus superbus var.
longicalycinus, along with six other known compounds, a cyclopeptide called longicalycinin A was isolated and its structure identified by instrumental analysis and reported as cyclo(-Gly-Phe-Tyr-Pro-Phe) (Figure 1). The biological evaluation of this compound shows its cytotoxicity against the HepG2 cancer cell line (70). So far, there are no other reports on the discovery of new longicalycinins, although the total synthesis of this and some other cyclopeptides from Dianthus superbus has been documented in several studies (71)(72)(73)(74)(75). The structures of the dianthins and longicalycinin A are presented in Figures 8 and 9. The peptide sequences and information on the biological activities of the dianthins and longicalycinin A are given in Table 7.

Synthesis of Dianthins

Dianthin A, also called cyclopolypeptide (XIII), was synthesized through several chemical steps using a solution-phase peptide synthesis strategy (71). The synthesis of dianthin I by the solid-phase method was reported by a group of scientists (68). The synthesis of longicalycinin A was also reported through solution-phase (72) as well as solid-phase methods (73,74). In addition, several analogues of longicalycinin A were synthesized in order to obtain information about the relationships between the structure and activity of this cyclopeptide (74). For the synthesis of the longicalycinin A analogues, a two-step synthesis strategy was chosen. In brief, linear peptides were first synthesized on 2-chlorotrityl chloride (2-CTC) resin and then detached from the resin as protected peptides in a partial cleavage step. In the second step, final deprotection was applied in the solution phase in order to obtain fully unprotected linear peptides. For preparing the cyclic peptide analogues, after cleaving the protected linear peptides from the resin, peptide cyclization was carried out in the solution phase, followed by final deprotection. The reaction product was then solidified in cold diethyl ether.
In another effort, linear and cyclic heptapeptide analogues of longicalycinin A were synthesized by incorporating two cysteine residues (75), one at the C-terminus and the other at the N-terminus of the linear longicalycinin A peptide analogues. In this experiment, peptide cyclization was performed under oxidative conditions to form a disulfide bond between the two SH groups of the cysteine residues of the linear peptides.

Biological Activities of Dianthin Cyclopeptides

Various biological activities of dianthins A-I and longicalycinin A have been reported in the literature, including antiprotozoal, antifungal, antidiuretic, anti-inflammatory, and anticancer activities. These studies are surveyed below.

Anticancer Activity of Dianthins

As previously mentioned, Dianthus superbus, a species of the plant family Caryophyllaceae, has been used for its anticancer activity (69, 70). In one study, the antioxidant activity (free radical scavenging ability) and cytotoxic effects of several fractions of an ethanol extract of Dianthus superbus were reported against three human cancer cell lines, HepG2, HeLa, and Bel-7402 (76). Among these fractions, the ethyl acetate fraction, which contained a high content of phenolic compounds with high reducing ability, showed the greatest antioxidant activity. In the MTT assay, the ethyl acetate fraction also showed considerable cytotoxicity (IC50 20-36 µg/mL) against the three cell lines. In another study employing the ethyl acetate fraction, activation of apoptosis was observed in the HepG2 cell line (77): treatment with 80 µg/mL of the fraction for 24 h caused a considerable increase in the percentage of cells in the sub-G1 phase, in which a high number of apoptotic nuclear fragment bodies was seen.
In a further experiment exposing HepG2 cells to the ethyl acetate fraction for 48 h, the expression of Bcl-2 and NF-kB was suppressed, the amount of cytochrome c in the cytosol was increased owing to its release from mitochondria, and caspases-9 and -3 were activated. Together, these data indicate that the ethyl acetate fraction of the ethanol extract of Dianthus superbus induces apoptosis in HepG2 cells via a mitochondria-mediated pathway (77). There are also several reports on the anticancer activities of the dianthin cyclopeptides originally isolated from Dianthus superbus. In one study, the biological activities of synthesized dianthin A, previously isolated from the whole plant of Dianthus superbus, were reported (71). Dianthin A shows noticeable cytotoxic activity against two cancer cell lines, DLA and EAC cells (cytotoxic concentration inhibiting 50% of growth, CTC50, of 15.1 and 18.6 µM, respectively) (Chart 7); 5-fluorouracil (5-FU), as a standard drug, shows CTC50 values of 37.36 and 90.55 µM against the two cell lines, respectively (71). In another study, the MTT cytotoxicity assay was used to examine the anticancer activity of all the compounds isolated from the methanol extract of Dianthus superbus (59). In a recent publication, longicalycinin A and several of its analogues were synthesized, and their cytotoxic activities were examined against HepG2 and HT-29 cells in several experiments, including MTT, flow cytometry, and lysosomal membrane integrity assays. The results show that two cyclopeptide analogues of longicalycinin A, cyclo-(Thr-Val-Pro-Phe-Ala) and cyclo-(Phe-Ser-Pro-Phe-Ala), were effective cytotoxic agents, even better than longicalycinin A itself (74). (Chart 8 compares the cytotoxicity of longicalycinin A on HepG2 cells with dianthin E and doxorubicin as the standard drug.)
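From the CTC50 values quoted above, the relative potency of dianthin A versus the 5-FU standard can be worked out directly. The short script below is illustrative only; the CTC50 values are those given in the text, and the ratio convention (reference CTC50 divided by compound CTC50, so values above 1 mean the compound is more potent) is an assumption of this sketch:

```python
# CTC50 values (µM) quoted in the text for dianthin A and 5-FU
ctc50 = {
    "DLA": {"dianthin A": 15.1, "5-FU": 37.36},
    "EAC": {"dianthin A": 18.6, "5-FU": 90.55},
}

def fold_potency(cell_line, compound, reference, table):
    """Ratio of the reference CTC50 to the compound CTC50 (>1 = more potent)."""
    return table[cell_line][reference] / table[cell_line][compound]

for line in ctc50:
    print(line, round(fold_potency(line, "dianthin A", "5-FU", ctc50), 2))
```

On these numbers, dianthin A comes out roughly 2.5-fold more potent than 5-FU on DLA cells and almost 5-fold more potent on EAC cells.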
The cytotoxic activity of the linear and cyclic heptapeptide analogues of longicalycinin A containing two cysteine residues was also examined on the two cell lines HepG2 and HT-29, using the MTT assay and flow cytometry. Skin fibroblast cells were included in the experiment to evaluate whether these peptides show any toxicity toward normal cells, with 5-FU as the cytotoxic reference (75). The results of the MTT assay show that the cyclic heptapeptide analogue was more toxic to the cancer cell lines than its linear congener. In addition, flow cytometry demonstrated that the cyclic heptapeptide induced apoptosis of the cancer cells at higher percentages than the linear analogue. In fact, the linear peptide showed no harmful effect on the cancer cells or on the skin fibroblast cells, whereas the cyclic heptapeptide imposed an apoptotic event (about 90%) on all the cancerous and fibroblast cells (75). It follows from this experiment that further, better designed heptapeptide analogues of longicalycinin A containing a disulfide bond are needed to differentiate toxicity between cancer cells and normal cells. In a later study, various fractions of a methanol extract of Dianthus superbus, i.e., the ethyl acetate, butanol, and distilled water fractions, as well as their bioactive compound content, were evaluated in vitro for antioxidant, anti-influenza, and cytotoxic activities (78). A cytotoxicity assay against four cell lines, HeLa (cervical cancer), SKOV (ovarian cancer), Caski (cervical cancer), and NCI-H1299 (human lung cancer), showed that the ethyl acetate fraction was the most potent part of the methanol extract (IC50 values of 9.5, 9.6, 13.8, and 69.9 µg/mL against the four cell lines, respectively). Because SKOV cells, among the most resistant ovarian cancer cells, do not respond to common anticancer drugs such as Adriamycin and cisplatin, these results are of particular interest.
The authors attribute this activity to the cyclic peptide content of the ethyl acetate fraction (65, 79). In one report, Dianthus superbus was shown to enhance cognition and improve memory in scopolamine-induced memory-impaired mice; it has therefore been suggested that the plant could be useful in preventing Alzheimer's disease (80). Dianthus superbus has also been considered a source of antioxidants for scavenging reactive oxygen species (ROS) (76, 81), because it contains phenolic compounds with high reducing power. These phenolic compounds correlate mainly with the cyclic peptide content of the plant. In fact, the antimicrobial and cytotoxic activities of Dianthus superbus may parallel the kinds of dianthin cyclopeptides present in the plant (76, 78, 81). On the other hand, antiviral activity of Dianthus superbus against influenza viruses has also been reported, but this activity corresponded to the presence of flavonol glycosides detected in the butanol fraction of the methanolic extract of the plant (78). There are also reports on the use of dianthins as antiparasitic agents (71, 72). Dianthin A shows strong antifungal activity against Candida albicans (MIC 6 µg/mL) compared with griseofulvin. This cyclopeptide also demonstrated moderate anthelmintic activity on earthworms at a concentration of 2 mg/mL, with mebendazole/piperazine citrate as the standard drugs (71). Good anthelmintic activity of longicalycinin A has also been reported in the literature (72); in addition, longicalycinin A showed moderate activity against dermatophytes (72).

CONCLUDING REMARKS

Caryophyllaceae-type cyclopeptides, natural peptides isolated from the ethanol extracts of various higher plants of the Caryophyllaceae family (Figure 10), have shown anticancer activity against several cancer cell lines.
The common feature of these peptides, apart from their cyclic structure, is the hydrophobic character of the whole molecule, due to the high proportion of nonpolar amino acid residues in their structures (≈80%). As can be roughly calculated from Table 8, proline, phenylalanine, and glycine, in that order, make up the highest percentages of the residue content of the cyclopeptide scaffolds. Polar amino acids contribute less (around 20%) to the cyclic structures. From the standpoint of structure-activity relationship (SAR) studies, it may therefore be estimated that these naturally occurring cyclopeptides exert their toxicity on cancer cells with a hydrophobic/hydrophilic balance of roughly 4:1. This estimate is consistent with a report that more hydrophobic peptides penetrate cancer cells more readily through the nonpolar part of the cell membrane, resulting in cell disruption and necrosis (82, 83). Moreover, the presence of proline and glycine in the peptides is important for interaction with the cancer cell membrane (84), and phenylalanine residues can increase the affinity of the peptides for the cancer cell membrane (85). On the other hand, tyrosine, as a polar amino acid, may raise the toxicity of the peptides toward cancer cells (83, 86). In addition, the cyclic form of the peptides, with its fixed conformation and smaller occupied volume, may allow the peptides to face less of a physical barrier to entering cells (87). However, modification of these cyclic peptides by chemical means, either in a hydrophobic direction (e.g., acylation) or in a hydrophilic direction (e.g., phosphorylation to impose negative charge, or replacing/adding lysine or arginine within the cyclic peptide structure to add positive charge), is needed (88).
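As a rough illustration of the residue-composition argument above, the sketch below computes the nonpolar-residue fraction of the three cyclopeptide sequences quoted earlier in this review. The one-letter codes and the polar/nonpolar classification (Gly, Ala, Val, Leu, Ile, Pro, Phe, Met, Trp counted as nonpolar) are common conventions, not taken from the source:

```python
# Common (assumed) classification of nonpolar residues, one-letter codes
NONPOLAR = set("GAVLIPFMW")

def nonpolar_fraction(seq):
    """Fraction of nonpolar residues in a peptide sequence (one-letter codes)."""
    return sum(aa in NONPOLAR for aa in seq) / len(seq)

peptides = {
    "dianthin A":       "AYNFGL",  # cyclo(-Ala-Tyr-Asn-Phe-Gly-Leu)
    "dianthin B":       "IFFPGP",  # cyclo(-Ile-Phe-Phe-Pro-Gly-Pro)
    "longicalycinin A": "GFYPF",   # cyclo(-Gly-Phe-Tyr-Pro-Phe)
}

for name, seq in peptides.items():
    print(f"{name}: {nonpolar_fraction(seq):.0%} nonpolar")
```

For these three sequences the fractions come out at roughly 67%, 100%, and 80%, averaging close to the ≈80% figure cited above.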
With such favorable modifications, these cyclic peptides could meet most of the SAR requirements needed to make them good candidates for cancer treatment in the clinic. In addition, conjugation of these peptides through a tyrosine residue to an anticancer drug may further improve their physicochemical properties. Considered as cell-penetrating/targeting peptides, they could improve the specificity and reduce the side effects of current anticancer drugs. Because medicinal plants containing cyclopeptides, including the Caryophyllaceae-type cyclopeptides, have long been used traditionally, it may be inferred that these cyclopeptides are not toxic to healthy human cells and tissues, which is another reason for considering them safer compounds for cancer therapy than conventional chemotherapy agents. However, modified Caryophyllaceae-type cyclopeptides, like any other newly introduced drug candidates, must pass all the requirements necessary to ensure that they are both safe and biologically active enough to be added to cancer therapy regimens.

AUTHOR CONTRIBUTIONS

MG, AB, and BM carried out the experimental work on longicalycinin A synthesis and its analogues described in part of this manuscript. All authors contributed to the article and approved the submitted version.
Fumonisins affect the intestinal microbial homeostasis in broiler chickens, predisposing to necrotic enteritis

Fumonisins (FBs) are mycotoxins produced by Fusarium fungi. This study aimed to investigate the effect of these feed contaminants on intestinal morphology and microbiota composition, and to evaluate whether FBs predispose broilers to necrotic enteritis. One-day-old broiler chicks were divided into a group fed a control diet and a group fed a FBs-contaminated diet (18.6 mg FB1+FB2/kg feed). A significant increase in the plasma sphinganine/sphingosine ratio in the FBs-treated group (0.21 ± 0.016) compared to the control (0.14 ± 0.014) indicated disturbance of sphingolipid biosynthesis. Furthermore, villus height and crypt depth of the ileum were significantly reduced by FBs. Denaturing gradient gel electrophoresis showed a shift in the microbiota composition in the ileum of the FBs group compared to the control. A reduced presence of low-GC-containing operational taxonomic units was demonstrated in the ileal digesta of birds exposed to FBs, identified as a reduced abundance of Candidatus Savagella and Lactobacillus spp. Quantification of total Clostridium perfringens in these ileal samples prior to experimental infection, by qPCR targeting the cpa gene (alpha toxin), showed an increase in C. perfringens in chickens fed the FBs-contaminated diet compared to the control (7.5 ± 0.30 versus 6.3 ± 0.24 log10 copies/g intestinal content). After C. perfringens challenge, a higher percentage of birds developed subclinical necrotic enteritis in the group fed the FBs-contaminated diet compared to the control (44.9 ± 2.22% versus 29.8 ± 5.46%). Electronic supplementary material: the online version of this article (doi:10.1186/s13567-015-0234-8) contains supplementary material, which is available to authorized users.
Introduction

Mycotoxins are naturally occurring secondary fungal metabolites produced both pre- and post-harvest in crops and other feed and food commodities. Fusarium, Aspergillus, and Penicillium are the most abundant mycotoxin-producing mould genera contaminating feed and feed raw materials [1]. Fumonisins (FBs) are produced by Fusarium verticillioides, F. proliferatum, and other Fusarium species and are among the most widespread mycotoxins [2]. FBs are ubiquitous contaminants of corn and other grain products [3]. A global survey on the occurrence and contamination levels of mycotoxins in feed raw materials and finished feed for livestock showed that 54% of 11 439 tested samples were contaminated with FBs [4]. FBs were most frequently detected in South American (77%), African (72%) and Southern European (70%) samples, and less frequently in Oceania (10%) [4]. The economic impact of FBs in animal feed is difficult to measure because information about subclinical effects on animal health and productivity losses due to chronic low-level exposure is limited. Wu [5] estimated the annual economic losses in the USA due to FBs in animal feed at US$ 1-20 million in a normal year and US$ 30-46 million in an outbreak year of Fusarium ear rot. More than 28 FB homologues have been identified. Fumonisin B1 (FB1) is the most common and the most thoroughly investigated, because of its toxicological importance. FB2 and FB3 are less prevalent and differ structurally from FB1 in the number and position of hydroxyl groups [2]. FBs competitively inhibit ceramide synthase and, as a result, interfere with the biosynthesis of ceramides and sphingolipids of cell membranes [2,6]. Clinical outbreaks have been reported in horses (equine leucoencephalomalacia, ELEM) and pigs (porcine pulmonary edema, PPE); these animal species are regarded as the most susceptible to the effects of FBs [2].
In general, poultry are considered quite resistant to the deleterious effects of FBs. Species differences also occur: laying hens and broilers are less sensitive to FBs than turkeys and ducks [7-11]. In broilers, systemic uptake of FB1 after oral exposure is low, indicating that absorption is negligible [12]. Following the consumption of FBs-contaminated feed, the intestine is the first organ to be exposed to these toxins, and negative effects on intestinal tissues have been reported [13]. The jejunum of broilers exposed to high FB1 concentrations (≥100 mg/kg feed) for 28 days displays a reduced villus height (VH) and villus-to-crypt ratio (V:C) [14]. Besides mild villus atrophy, goblet cell hyperplasia is also observed in broiler chicks exposed to high levels of FB1 (300 mg/kg feed) for 2 weeks [15]. It has been shown in vitro that FB1 has a toxic effect on both undifferentiated and differentiated porcine intestinal epithelial cells (IPEC-1); the effect of FB1 on epithelial cell proliferation correlates with a cell cycle arrest in the G0/G1 phase [16]. A negative effect of FB1 on the expression of the cell junction proteins E-cadherin and occludin, and consequently on intestinal epithelial integrity, has been shown in vivo in pigs [17,18]. Furthermore, FB1 modulates intestinal immunity by decreasing the expression of several cytokines, for example interleukin (IL)-1β, IL-2, IL-8, IL-12p40 and interferon (IFN)-γ, in pigs [19-21]. Exposure of pigs to 0.5 mg FB1/kg bodyweight (BW) for 6 days has been shown to enhance intestinal colonization and translocation of a septicemic Escherichia coli (SEPEC) strain [20]. Feeding a FBs-contaminated diet (11.8 mg FB1+FB2/kg feed) for 9 weeks transiently modified the faecal microbiota composition in pigs, and co-exposure to FBs and Salmonella Typhimurium amplified this phenomenon [22]. At present it is unclear what the consequences of long-term exposure to low levels of FBs may be.
In poultry, necrotic enteritis (NE) is caused by netB-producing Clostridium perfringens strains. C. perfringens is a Gram-positive, spore-forming, anaerobic bacterium that is commonly found in the environment and in the gastro-intestinal tract of animals and humans as a member of the normal microbiota [23-25]. NE in chickens remains an important intestinal disease despite the application of preventive and control methods, including coccidiosis control. The acute form of the disease causes mortality without premonitory symptoms. The more frequently occurring subclinical form is characterized by intestinal mucosal damage without clinical signs or mortality, leading to decreased performance [24,26]. However, healthy birds often carry netB-positive C. perfringens without showing any clinical symptoms of NE [24]. An outbreak of NE is a complex process requiring one or more predisposing factors rather than just the presence of pathogenic C. perfringens [27-29]. Pre-existing mucosal damage caused by coccidiosis, high-protein feed (including fishmeal) and indigestible non-starch polysaccharides are well-known predisposing factors [29]. Recently, it was shown that the mycotoxin deoxynivalenol (DON) is also a predisposing factor for the development of NE, through damage to the epithelial barrier and an increased intestinal nutrient availability for clostridial proliferation [30]. Although FBs are ubiquitous contaminants in poultry feed, information about their impact on the intestinal microbial homeostasis of broiler chickens is lacking. The objective of this study was to evaluate the effect of FBs on the intestinal microbial homeostasis at concentrations approaching the European Union maximum guidance level (20 mg FB1+FB2/kg feed) [31]. Therefore, the influence of FBs on the intestinal microbiota composition and intestinal morphology was investigated.
In addition, an attempt was made to demonstrate the consequences of the effect of FBs on the intestinal microbial homeostasis in a subclinical necrotic enteritis model.

Material and methods

Fumonisins

FBs (8.64 mg FB1+FB2/g culture material) (Biopure - Romer Labs Diagnostic GmbH, Tulln, Austria) were produced in vitro from a culture of F. verticillioides and subsequently crystallized [32]. For the in vitro assessment of the impact of FB1 on the growth and toxin production characteristics of C. perfringens, serial dilutions were prepared in tryptone glucose yeast (TGY) broth medium from a 5000 μg/mL FB1 stock solution (Fermentek, Jerusalem, Israel) that had been prepared in anhydrous methanol and stored at −20°C.

Bacterial strain and growth conditions

C. perfringens strain 56, a netB+ type A strain, was originally isolated from a broiler chicken with NE and has been shown to be virulent in an in vivo infection model [30,33]. The inoculum for the oral infection of chickens and for the in vitro experiments was prepared by culturing C. perfringens anaerobically overnight at 37°C in brain heart infusion broth (BHI, Bio-Rad, Marnes-la-Coquette, France) or TGY broth medium, respectively. The number of colony-forming units of C. perfringens/mL was assessed by plating tenfold dilutions on Columbia agar (Oxoid, Basingstoke, UK) with 5% sheep blood, followed by anaerobic overnight incubation at 37°C.

Animal experiment

Birds and housing

The animal experiment was performed using non-vaccinated Ross 308 broiler chickens, obtained as one-day-old chicks from a commercial hatchery (Vervaeke-Belavi, Tielt, Belgium). Animals of both treatment groups, control diet and FBs-contaminated diet, were housed in the same temperature-controlled room, in pens of 1.44 m2, on wood shavings. Each group consisted of three pens of 34 birds, with approximately equal numbers of males and females.
Animal units were separated by solid walls to prevent direct contact between animals from different pens. All cages were decontaminated with peracetic acid and hydrogen peroxide (Metatectyl HQ, Metatecta, Kontich, Belgium) and a commercial anticoccidial disinfectant (Bi-OO-Cyst Coccidial Disinfectant, Biolink, York, UK) prior to housing the chickens. Water and feed were provided ad libitum. Chickens were not fasted before euthanasia. The animal experiment was approved by the Ethical Committee of the Faculty of Veterinary Medicine and Bioscience Engineering, Ghent University (EC 2012/194).

Feed preparation and experimental diets

All chickens were fed a starter diet during the first eight days of the trial, and subsequently a grower diet. The diet was wheat and rye based, with soybean meal as the main protein source during the first 16 days. From day 17 onwards, the same grower diet was fed except that fishmeal replaced soybean meal as the main protein source [30,33]. FBs-contaminated feed was produced by adding lyophilized FBs culture material to the control diet. Mycotoxin contamination of both the control and the FBs-contaminated diet was analyzed by a validated multi-mycotoxin liquid chromatography-tandem mass spectrometry (LC-MS/MS) method [34]. Three different batches of FBs-contaminated feed were produced: a starter diet, a grower diet with soybean meal, and a grower diet with fishmeal. For each batch, FBs culture material was added to 500 g of the corresponding batch of control diet; this premix was then mixed with 5 kg of control feed to assure homogeneous distribution of the toxin, and finally mixed for 20 min into the total amount of feed needed for that batch. To test mycotoxin contamination, samples were taken at three different locations in each batch, pooled per batch, and analyzed for mycotoxin contamination as described above.
Trace amounts of nivalenol and DON were detected in the control feed (0.059-0.116 and 0.113-0.170 mg/kg feed, respectively). The analyzed mycotoxins and their limits of detection and quantification were as previously described [30,34]. The levels of FBs and all other tested mycotoxins in the different batches of control feed were below the limit of detection. The average levels of FB1, FB2 and FB3 in the different batches of FBs-contaminated feed were 10.4 mg/kg, 8.2 mg/kg and 2.0 mg/kg, respectively (Table 1). The average sum of FB1+FB2, 18.6 mg/kg feed, approached the EU maximum guidance level in feed for poultry of 20 mg FB1+FB2/kg (2006/576/EC) [31].

Evaluation of the impact of FBs on broiler health

The BW of all chickens was measured on day 1 and day 8. On day 15, six birds (3♂/3♀) per pen were euthanized using an overdose of sodium pentobarbital (Natrium Pentobarbital 20%, Kela Veterinaria, Sint-Niklaas, Belgium). A blood sample was collected and subsequently a necropsy was performed. Blood samples were centrifuged (2851 × g, 10 min, 4°C) and plasma was stored at ≤−20°C until sphinganine (Sa) and sphingosine (So) concentrations were analyzed. The Sa/So ratio is suggested to be the most sensitive biomarker of FBs intoxication in many animal species [2]. Plasma Sa and So concentrations were analyzed by a commercial service provider (Biocrates Life Sciences AG, Innsbruck, Austria). Briefly, Sa and So were extracted from plasma and measured in the presence of internal standards using LC-MS/MS with electrospray ionization. The BW and the weights of different organs (liver, spleen, kidneys, proventriculus, ventriculus, bursa of Fabricius, heart and lungs) were recorded. The weight of each organ was expressed as a relative percentage of the BW.
Evaluation of the impact of FBs on intestinal morphology

After measuring the length of the different small intestinal segments, 1 cm samples from the mid-duodenum, mid-jejunum and mid-ileum were collected and fixed in neutral-buffered formalin. The small intestinal segments were defined as follows: the duodenum encompassing the duodenal loop, the jejunum between the end of the duodenal loop and Meckel's diverticulum, and the ileum between Meckel's diverticulum and the ileo-cecal junction. Villus height and crypt depth of the mid-duodenum, mid-jejunum and mid-ileum were measured on hematoxylin and eosin stained histological paraffin sections using light microscopy with Leica LAS software (Leica Microsystems, Diegem, Belgium). The average of ten measurements per segment per animal was calculated.

Assessment of the impact of FBs on the intestinal microbiota

On day 15, intestinal content samples of the second half of the different small intestinal segments of all six birds per pen were collected, snap frozen in liquid nitrogen, and stored at −80°C until DNA extraction. DNA from intestinal content (duodenum, jejunum and ileum) was extracted using a modified QIAamp DNA Stool Mini Kit (Qiagen, Hilden, Germany) protocol: an enzymatic pretreatment with lysozyme and a mechanical disruption step with a bead beater were added to the original protocol. In brief, frozen intestinal content (250 mg) was transferred into a bead-beating tube filled with 0.7 g of glass beads (Ø 100 μm), 0.6 g of ceramic beads (Ø 1.4 mm) and one glass bead (Ø 3.8 mm). Subsequently, 200 μL of TE buffer, pH 8 (10 mM Tris-HCl and 1 mM EDTA (Sigma Aldrich, Steinheim, Germany)), and 125 μL of freshly prepared lysozyme (100 mg/mL, Sigma Aldrich) were added. After homogenizing by vortex mixing (1 min), samples were incubated at 37°C for 45 min at 1000 rpm on a Thermomixer compact shaker incubator (Eppendorf, Hamburg, Germany).
The final volume was adjusted to 2 mL with ASL buffer (Qiagen) and samples were bead beated for 10 s at 6000 rpm on a Precellys 24-Dual homogenizer (Bertin Technologies, Montigny le Bretonneux, France). Further DNA extraction was performed with the QIAamp DNA Stool mini kit (Qiagen) in accordance with the manufacturer's instructions. DNA integrity was evaluated by loading 3 μL of DNA on a 0.8% agarose gel stained with ethidium bromide. The purity and concentration of the extracted DNA were measured using ultraviolet absorption at 260/280 nm and 230/280 nm ratio (NanoDrop 1000 spectrophotometer, Thermo Scientific, Waltham, MA, USA). Denaturing gradient gel electrophoresis (DGGE) separates DNA fragments of the same length but with different base-pair sequences. DNA fragments were generated from the small intestinal content DNA samples applying community PCR with universal bacterial primers targeting the variable V3 region of the 16S ribosomal RNA. The nucleotide sequences of the primers were as follows: forward primer F341 with GC clamp 5'-CGC CCG CCG CGC GCG GCG GGC GG GCG GGG GCA CGG GGGG -CCT ACG GGA GGC AGC AG 3' and reverse primer R518 5' ATT ACC GCG GCT GCT GG-3' [35]. PCR amplification was performed in duplicate using a Mastercycler Gradient (Eppendorf, Hamburg, Germany), and each PCR reaction was done in a 45 μL total reaction mixture using 3 μL of the DNA sample (4 ng/μL), 0.125 μM of each of the primers, 100 μM deoxynucleotide triphosphate (dNTP) (Peqlab Biotechnologie GmbH, Erlangen, Germany), and 0.6 μL of peqGOLD Taq-DNA-Polymerase (5 U/μL) (Peqlab). The PCR conditions used were 1 cycle of 94°C for 5 min, followed by 9 cycles of 94°C for 30 s, 64°C for 40 s (decreased by 0.5°C/cycle) and 72°C for 40 s. Subsequently 19 cycles of 94°C for 30 s, 56°C for 40 s and 72°C for 40 s, followed by one cycle of 72°C for 4 min, were run. DGGE was performed as described by [35] with the INGENYPhorU-2×2 system (Ingeny, Goes, The Netherlands). 
Briefly, amplicons were separated using a 30 to 60% denaturing gradient [35]. 30 μL of the PCR product was loaded and electrophoresis was performed at 100 V for 16 h at 60°C. Each gel included four standard reference lanes containing amplicons of 12 bacterial species for normalization and comparison between gels. DGGE gels were stained with 1× SYBR Green I (Sigma-Aldrich) for 30 min. Fingerprinting profiles were visualized using the Bio Vision Imaging system (Peqlab) and the Vision-Capt software (Vilber Lourmat, Marne-la-Vallée, France). The microbial profiles were processed with GelCompar II v. 6.6 (Applied Maths NV, Sint-Martens-Latem, Belgium). The similarity between DGGE profiles, given as a percentage, was analyzed using the Dice similarity coefficient, derived from the presence or absence of bands. On the basis of the distance matrix generated from the similarity values, dendrograms were constructed using the unweighted pair group method with arithmetic means (UPGMA) as the clustering method. The microbial richness (R) was assessed as the number of bands within a profile. Low-GC-containing operational taxonomic units (OTUs) were selected for identification based on the differences between the DGGE patterns of the control group and the FBs-contaminated group. After extraction of the selected bands, and re-application on DGGE to confirm their positions relative to the original sample, the respective 16S fragments were sequenced (LGC Genomics, Berlin, Germany) and aligned to the NCBI GenBank prokaryotic 16S ribosomal RNA database using the standard nucleotide BLASTN 2.2.30+ (nucleotide basic local alignment search tool) [36].
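The Dice coefficient used for the profile comparison has the closed form 2|A ∩ B| / (|A| + |B|), where A and B are the sets of bands present in two lanes. A minimal sketch on hypothetical band-position sets (not data from this study) is:

```python
def dice_similarity(bands_a, bands_b):
    """Dice similarity (%) between two band-presence profiles."""
    a, b = set(bands_a), set(bands_b)
    if not a and not b:
        return 100.0  # two empty profiles are defined here as identical
    return 200.0 * len(a & b) / (len(a) + len(b))

# Hypothetical band positions (arbitrary gel coordinates) for two lanes
lane1 = {12, 30, 45, 60, 78}
lane2 = {12, 30, 47, 60}
print(f"{dice_similarity(lane1, lane2):.1f} % similarity")
```

Pairwise values computed this way fill the similarity matrix from which the UPGMA dendrogram is built.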
qPCR was performed using SYBR green 2× master mix (Bioline, Brussels, Belgium) in a Bio-Rad CFX-384 system. Each reaction was done in triplicate in a 12 μL total reaction mixture containing 2 μL of the DNA sample and a 0.5 μM final primer concentration (Table 2). The qPCR conditions were 1 cycle of 95°C for 10 min, followed by 40 cycles of 95°C for 30 s and 60°C for 30 s, and a stepwise increase in temperature from 65°C to 95°C (10 s per 0.5°C) for melting curve analysis. Melting curve data were analyzed to confirm the specificity of the reaction. For construction of the standard curve, a PCR product was generated using the standard PCR primers (Table 2) and DNA from C. perfringens strain CP56. After purification (MSB Spin PCRapace, Stratec Molecular, Berlin, Germany) and determination of the DNA concentration with a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA), the linear dsDNA standard was adjusted to concentrations of 1 × 10^8 down to 1 × 10^1 copies per μL in ten-fold steps. The copy numbers of the samples (copies/g intestinal content) were determined by interpolating the sample Ct values on the standard curve.

C. perfringens infection trial

The remaining 28 animals per pen were used in a C. perfringens experimental infection trial as previously described [30]. The BW of all animals was measured on day 16 and day 21. Gumboro vaccine (Nobilis Gumboro D78, MSD Animal Health, Brussels, Belgium) was administered on day 16 in the drinking water of all cages. Both groups were experimentally infected with an oral bolus of 4 × 10^8 cfu of C. perfringens strain 56 on days 17, 18, 19 and 20. On days 21, 22 and 23, one third of each group was euthanized each day by an overdose of sodium pentobarbital and immediately submitted to necropsy.
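The standard-curve read-off described in the qPCR section above amounts to a linear fit of Ct against log10 copies for the 10^1-10^8 standards, inverted for each sample Ct. The sketch below is not the authors' software; the standard Ct values are invented for illustration, and converting the per-reaction result to copies/g intestinal content would additionally require the extraction and dilution factors:

```python
# Invented Ct values for the 10^8 ... 10^1 copies/µL standards
standard_log10_copies = [8, 7, 6, 5, 4, 3, 2, 1]
standard_ct = [10.1, 13.4, 16.8, 20.2, 23.5, 26.9, 30.3, 33.6]

# Least-squares fit: Ct = slope * log10(copies) + intercept
n = len(standard_ct)
sx = sum(standard_log10_copies)
sy = sum(standard_ct)
sxx = sum(x * x for x in standard_log10_copies)
sxy = sum(x * y for x, y in zip(standard_log10_copies, standard_ct))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def log10_copies(ct):
    """Interpolate a sample Ct on the fitted standard curve."""
    return (ct - intercept) / slope

# Amplification efficiency implied by the slope (100% corresponds to -3.32)
efficiency = 10 ** (-1 / slope) - 1
print(f"slope = {slope:.3f}, efficiency = {efficiency:.1%}")
print(f"sample Ct 18.0 -> {log10_copies(18.0):.2f} log10 copies per reaction")
```

A slope near -3.32 (efficiency near 100%) is the usual sanity check before trusting the interpolated copy numbers.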
Macroscopic NE lesion scoring of the small intestines (duodenum, jejunum and ileum) was performed single-blinded as follows: 0, no gross lesions; 1, small focal necrosis or ulceration (one to five foci); 2, focal necrosis or ulceration (six to 15 foci); 3, focal necrosis or ulceration (16 foci or more); 4, patches of necrosis of 2 to 3 cm long; 5, diffuse necrosis typical of field cases; partially adapted from [37]. Chickens with a lesion score of 1 or more were classified as NE positive.

In vitro assessment of the effect of FB1 on C. perfringens growth, and cpa and netB transcription

The following concentrations of FB1 were tested for their effect on C. perfringens growth and toxin production: 0, 0.2, 2 and 20 μg FB1/mL. All tests were performed in triplicate. The C. perfringens inoculum was diluted 1:1000 in TGY medium containing the different concentrations of FB1 and incubated anaerobically at 37°C. A growth curve was produced by bacterial plating of a ten-fold dilution series of the culture at 0, 2, 3, 4, 5, 6, 7, 8 and 24 h after inoculation. Ten-fold dilutions were prepared in phosphate buffered saline (PBS) and subsequently plated on Columbia agar with 5% sheep blood. After anaerobic incubation overnight at 37°C, the number of colony forming units (cfu)/mL was determined. The impact of FB1 on cpa (alpha toxin) and netB (NetB toxin) transcription was tested by qRT-PCR. The C. perfringens inoculum was diluted 1:10 000 in TGY medium containing the different concentrations of FB1 and incubated anaerobically at 37°C until an optical density (OD) of 0.6-1.0 was measured at a wavelength of 600 nm (6 h of incubation). The transcription levels of cpa and netB in the presence of FB1 were compared to non-FB1-contaminated test conditions and normalized to the housekeeping gene rpoA, encoding RNA polymerase subunit A. Total RNA was isolated using the SV total RNA Isolation system (Promega, Leiden, The Netherlands).
RNA was treated with the Turbo DNA-free kit (Ambion, Austin, TX, USA) per the manufacturer's instructions to remove genomic DNA contamination. Subsequently, RNA was converted to cDNA with the iScript cDNA Synthesis Kit (Bio-Rad, Temse, Belgium). qRT-PCR was performed using SYBR-green 2x master mix (Bioline, Brussels, Belgium) in a Bio-Rad CFX-384 system. Each reaction was done in triplicate in a 12 μL total reaction mixture using 2 μL of cDNA sample and a 0.5 μM final qPCR primer concentration (Table 2). The qPCR conditions were as described above for total C. perfringens determination in ileal content samples. For construction of the standard curve, the PCR product was generated using the standard PCR primers (Table 2).

Table 2 (legend). Forward and reverse primer sequences, presented from 5' to 3', for standard PCR and qPCR detection of cpa, netB and rpoA [30,55-57].

Statistical analyses

The statistical program SPSS version 22 was used for data analysis. To compare the number of NE positive birds (lesion score ≥2) between the different groups, binomial logistic regression was used. All other parameters, including BW, relative organ weight, length of the small intestines, Sa/So ratio, villus height/crypt depth measurements, concentration of C. perfringens in ileal digesta, in vitro assessment of clostridial growth, and cpa and netB transcription, were analyzed by an independent Student's t-test, after determination of normality. The significance level was set at 0.05.

FBs negatively affect sphingolipid metabolism

The inhibition of ceramide synthase by FBs causes an intracellular accumulation of sphingoid bases, mainly sphinganine.
An increased Sa/So ratio is suggested to be the most sensitive biomarker of FBs intoxication in many animal species [2]. The plasma Sa/So ratio was 1.5-fold higher in animals fed the FBs contaminated diet compared to the control animals, 0.21 ± 0.016 versus 0.14 ± 0.014, respectively (P < 0.001). No significant differences were observed in BW between the control group and the FBs contaminated group (Table 3). A trend (P = 0.060) was observed towards an increased relative weight of the liver in chickens fed the FBs contaminated diet (3.69 ± 0.134% of BW) compared to the control group (3.39 ± 0.081% of BW). The relative weights of bursa, spleen, proventriculus, ventriculus, kidneys, lungs and heart did not differ between the two experimental groups (data not shown).

FBs reduce total small intestinal length, ileal villus height and crypt depth

The total length of the small intestine was significantly (P = 0.033) decreased in birds of the FBs contaminated group compared to the control group (130.5 ± 2.37 and 139.0 ± 2.95 cm, respectively). No differences were observed in the relative percentage of the length of the different segments of the small intestine (Table 4). Feeding a FBs contaminated diet significantly reduced villus height (P = 0.002) and crypt depth (P = 0.011) in the ileum (Table 5). No effect was observed on the ileal villus to crypt ratio, and no effect was shown in the duodenum and jejunum.

FBs affect the ileal microbiota composition

DGGE fingerprinting of DNA samples of duodenal and jejunal content, applying community PCR with universal bacterial primers targeting the variable V3 region of the 16S ribosomal DNA, did not show a difference in microbiota composition between chickens fed the control diet or the FBs contaminated diet. In the duodenum, three clades were observed related to the diversity of OTUs, independent of the treatment. One clade comprised five samples with a reduced number of bands in the duodenum.
Most samples had an average diversity between 10 and 15 OTUs across the medium GC-range. Some samples consisted of 18 to 31 OTUs of the medium and high GC-range (Additional file 1). No difference in the number of OTUs between the two experimental groups was shown in the jejunum; OTUs were all located in the medium range of GC-content (Additional file 2). Within the ileum content samples, a difference in the DGGE fingerprint according to the treatment was seen. The majority of the ileum samples of the control group contained OTUs in the lower GC-range, which ascribes them to one clade (Figure 1, clade A). Among the FBs group, a clade of clearly reduced diversity was formed by eight samples (Figure 1, clade B). Another group of samples of this treatment group showed a diversity of OTUs of medium GC-content similar to the control group, but the low-GC OTUs were absent (Figure 1, clade C). The dendrogram of DGGE profiles of four chickens (birds 20, 31, 33 and 34) of the FBs group showed a high similarity with the control group (Figure 1, clade D). Subsequently, five low-GC-content OTUs from ileal content samples of the control group, which accounted for the difference in the DGGE patterns compared to the FBs group, were identified at genus level by sequencing. Based on these results, the most affected groups were related to the genera Clostridium and Lactobacillus. OTUs 19 and 20 had 99.09-100% sequence similarity with the type strain of Candidatus Arthromitus, recently renamed Candidatus Savagella [38]. OTUs 4 and 13 had 100% sequence similarity with Lactobacillus johnsonii, and OTU 16 was similar to the sequence of an unknown species of the genus Lactobacillus.

FBs increase the susceptibility for C. perfringens induced necrotic enteritis

Quantification of total C.
perfringens in DNA samples of ileal content (day 15 of the animal trial) by qPCR using the cpa gene (alpha toxin) showed an increased level in chickens fed a FBs contaminated diet compared to a control diet (7.5 ± 0.30 versus 6.3 ± 0.24 log10 copies/g intestinal content) (P = 0.027). The number of chickens with NE increased from 29.8 ± 5.46% of the birds in the control group to 44.9 ± 2.22% of the broilers fed the FBs contaminated diet (P = 0.047). No effect was observed on the mean lesion scores of NE positive broiler chickens (Figure 2). No macroscopic coccidiosis lesions were observed.

Discussion

The ingestion of FBs contaminated feed by broiler chickens, at a level of about 20 mg FB1+FB2/kg feed, affects the intestinal microbial homeostasis. Subsequently, these changes possibly predispose the birds to C. perfringens induced NE. To our knowledge, this is the first time such an effect has been demonstrated. FBs negatively affect broiler health, as demonstrated by the increased plasma Sa/So ratios in broiler chicks fed a FBs contaminated diet. These results suggest that the sphingolipid metabolism was impaired after exposure to levels of FBs approaching the EU maximum guidance levels [31]. FBs inhibit the ceramide synthase enzyme, causing an intracellular accumulation of sphingoid bases, mainly sphinganine. Since the disruption of the sphingolipid metabolism occurs before other indicators of cell injury, the Sa/So ratio is suggested to be the most sensitive biomarker of FB intoxication in many animal species [13,39]. A similar increase in the Sa/So ratio has been demonstrated in serum of broilers fed 80-100 mg FB1/kg feed for 3-4 weeks [14,40,41]. Furthermore, a linear dose-dependent increase in the Sa/So ratio has been observed in the liver of broiler chickens fed 20-80 mg FB1/kg feed for three weeks [40]. In the present study, the relative weight of the liver was increased in broilers fed the FBs contaminated diet.
A similar effect has already been observed when broiler chickens were fed a FBs contaminated diet containing 100 mg FB1 and 20 mg FB2/kg feed for 2-4 weeks [14]. In broilers, this effect was not reported in other studies using low levels of FB1 (<100 mg/kg feed) [8,40]. Dietary exposure to FBs has been associated with histopathological degenerative changes in the hepatocytes, including mild vacuolar degeneration and bile duct hyperplasia [42]. Consumption of a diet contaminated with FBs for 15 days reduced small intestinal length, ileal villus height and crypt depth.

Table 4 (legend). Animals were randomly divided in two experimental groups, each group consisting of three pens. One group was fed a control diet and one was fed a FBs contaminated diet (18.6 mg FB1+FB2/kg feed). Six birds (3♂/3♀) per pen were euthanized on day 15 and the length of the small intestinal segments was recorded. Data presented as mean ± SEM. (a) Total length of the small intestines, including all three segments: duodenum, jejunum and ileum; (b) % of total length = (length segment (cm)/total length small intestines (cm)) × 100; (*) significantly different (P < 0.05)/trend (P < 0.10).

These results could be related to the negative impact of FB1 on epithelial cell proliferation, reducing villus renewal and impairing intestinal absorption of nutrients [13]. This is in accordance with a previous study, where a decreased villus height was observed in the jejunum of broiler chickens fed high concentrations of FBs (>100 mg FB1/kg feed) [14,15]. In pigs exposed to FBs for 9 days (1.5 mg FB1/kg BW), however, ileal villi tended to be longer [43]. It remains to be determined whether this effect on intestinal morphology is induced only by a direct toxic effect of FBs on intestinal epithelial cells, or also indirectly, by the microbiota shift induced by FBs. Longer villi are, for example, observed in the ileum of chickens treated with L.
reuteri, indicating that the composition of the intestinal microbiota may indeed affect intestinal morphology [44]. Since the intestinal mucus layer and microbiota are strongly associated, FBs could also modify the microbiota through modulation of mucus production. Goblet cell hyperplasia was observed in broiler chickens exposed to a very high dietary concentration of FB1 (300 mg/kg feed) for two weeks [15]. Similarly, it was demonstrated that non-cytotoxic concentrations of DON decreased mucin production in human colonic epithelial goblet cells (HT-29 16E) and porcine intestinal explants [45]. The ingestion of FBs contaminated feed by broiler chickens for 15 days resulted in a modified composition of the intestinal microbiota of the ileum. Based on separation of PCR-amplified 16S ribosomal DNA fragments by electrophoresis on polyacrylamide gels containing a linear gradient of DNA denaturants, the DGGE technique provides a genetic fingerprint of a complex microbial community. The PCR product banding pattern is indicative of the number of bacterial species, or assemblages of species, that are present [35]. The results clearly indicate a reduced diversity of the ileal microbiota in broiler chickens exposed to FBs compared to the control group. The difference was mainly due to a reduced presence of low-GC-content OTUs in ileal content samples of FBs exposed animals. Feeding a FBs contaminated diet to broiler chickens was correlated with a decrease in the abundance of Candidatus Arthromitus, recently renamed Candidatus Savagella [38]. These segmented filamentous bacteria (SFB) are a unique group of uncultivated commensal bacteria within the bacterial family Clostridiaceae. SFB are characterized by their attachment to the intestinal epithelium and their important role in modulating the host immune system [38,46]. They induce IgA-secreting cells and influence the development of the T-cell repertoire [47,48]. Stanley et al.
[47] demonstrated that the best-known predisposing factor for necrotic enteritis, coccidiosis, also eliminates or reduces the levels of this immune-modulating bacterium. Since coccidiosis and FBs are both predisposing factors for C. perfringens induced NE in broiler chickens, the role of Candidatus Savagella in the pathogenesis of NE needs to be further investigated. It has been suggested that the colonization of the ileum with SFB is correlated with the population of lactobacilli [49]. Lactobacilli belong to the low-GC Gram-positive group of Lactobacillales, fermenting sugars to lactic acid [50]. In this study, FBs also modulated the presence of Lactobacillaceae in the ileum. L. johnsonii (OTUs 4 and 13, with 100% sequence similarity) was reduced in FBs exposed birds. L. johnsonii has been extensively investigated for its probiotic activities, including pathogen inhibition, epithelial cell attachment, and immunomodulation [50]. Similar to our results, it was recently demonstrated that L. johnsonii was reduced in birds fed fishmeal, with or without C. perfringens challenge [47,51]. A positive association was demonstrated between crude protein derived from fishmeal and the numbers of ileal and caecal C. perfringens [52]. L. johnsonii interferes with the colonization and persistence of C. perfringens in poultry [53], and some lactobacilli can inhibit the growth of C. perfringens [54]. It remains to be investigated whether lactic acid bacteria (LAB) are able to counteract the negative effects of mycotoxins on intestinal health in poultry. In conclusion, feeding a FBs contaminated diet at contamination levels approaching the EU maximum guidance level altered the sphingolipid metabolism in broiler chickens without affecting BW gain. FBs modified the composition of the intestinal microbiota of the ileum.
DGGE analysis demonstrated a reduced presence of low-GC-content OTUs in ileal digesta of birds exposed to FBs, subsequently identified as a reduced abundance of Candidatus Savagella and Lactobacillus spp. such as L. johnsonii. The ileal concentration of total C. perfringens was increased in chickens fed the FBs contaminated diet. Additionally, small intestinal length, ileal villus height, and crypt depth were negatively affected by FBs. The changes in the gut microbiota possibly induced an environment stimulating C. perfringens colonization and predisposing the birds to necrotic enteritis. The impact of different predisposing factors for NE in broilers, among others coccidiosis, fishmeal and FBs, on the intestinal microbiota shows remarkable similarities. The observed predisposing effect is due to the negative impact of FBs on the intestinal microbiota and the animal host, rather than to an effect on the bacterium itself.

Figure 2. NE lesion scores of individual broiler chickens challenged with C. perfringens. Chickens were fed either a control diet or a FBs contaminated diet. Subsequently, birds were orally inoculated with C. perfringens strain 56. Macroscopic intestinal NE lesions in the small intestine (duodenum to ileum) were scored as follows: 0, no gross lesions; 1, small focal necrosis or ulceration (one to five foci); 2, focal necrosis or ulceration (six to 15 foci); 3, focal necrosis or ulceration (16 or more); 4, patches of necrosis of 2 to 3 cm long; 5, diffuse necrosis typical of field cases. Chickens with NE lesion scores of 1 or more were categorized as NE positive. No effect was observed on the mean lesion scores of NE positive chickens.
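The qRT-PCR normalization of cpa and netB transcription to the housekeeping gene rpoA described in the methods can be sketched with the common 2^(-ΔΔCt) model. The quantification model actually used is not stated in the text, and the Ct values below are hypothetical:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene versus a reference gene, 2^(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** -dd_ct

# Hypothetical Ct values: netB vs. rpoA, FB1-exposed vs. unexposed culture.
# dCt(treated) = 4, dCt(control) = 5, ddCt = -1 -> 2-fold up-regulation.
fold = relative_expression(22.0, 18.0, 23.0, 18.0)
```

Normalizing to rpoA cancels differences in the amount of input cDNA between the FB1 and control conditions.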
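The reported group comparisons by independent Student's t-test can be reproduced from the published summary statistics. A sketch for the plasma Sa/So ratios, assuming n = 18 birds per group (six per pen, three pens, an assumption inferred from the sampling scheme) and converting SEM back to SD:

```python
import math
from scipy import stats

# Reported plasma Sa/So ratios (mean +/- SEM); n per group is an assumption.
n = 18
mean_fb, sem_fb = 0.21, 0.016      # FBs contaminated diet
mean_ctrl, sem_ctrl = 0.14, 0.014  # control diet

# SEM = SD / sqrt(n)  =>  SD = SEM * sqrt(n).
sd_fb = sem_fb * math.sqrt(n)
sd_ctrl = sem_ctrl * math.sqrt(n)

# Independent two-sample Student's t-test from summary statistics.
t, p = stats.ttest_ind_from_stats(mean_fb, sd_fb, n,
                                  mean_ctrl, sd_ctrl, n,
                                  equal_var=True)
```

Under this assumed group size the difference is highly significant, consistent with the reported P < 0.001 direction.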
Embeddings of C*-surfaces into weighted projective spaces

Let V be a normal affine surface which admits a C*- and a C+-action. In this note we show that in many cases V can be embedded as a principal Zariski open subset into a hypersurface of a weighted projective space. In particular, we recover a result of D. Daigle and P. Russell.

Introduction

If V = Spec A is a normal affine surface equipped with an effective C*-action, then its coordinate ring A carries a natural structure of a Z-graded ring A = ⊕_{i∈Z} A_i. As was shown in [FlZa 1], such a C*-action on V has a hyperbolic fixed point if and only if C = Spec A_0 is a smooth affine curve and A_{±1} ≠ 0. In this case the structure of the graded ring A can be elegantly described in terms of a pair (D_+, D_−) of Q-divisors on C with D_+ + D_− ≤ 0. More precisely, A is the graded subring

A = ⊕_{i≥0} H^0(C, O_C(⌊iD_+⌋)) u^i ⊕ ⊕_{i<0} H^0(C, O_C(⌊|i|D_−⌋)) u^i ⊆ Frac(A_0)[u, u^{−1}].

This presentation of A (or V) is called in [FlZa 1] the DPD-presentation. Furthermore, two pairs (D_+, D_−) and (D′_+, D′_−) define equivariantly isomorphic surfaces over C if and only if they are equivalent, that is,

D′_+ = D_+ + div(f) and D′_− = D_− − div(f) for some f ∈ Frac(A_0)^*.

In this note we show that if such a surface V admits also a C+-action then it can be C*-equivariantly embedded (up to normalization) into a weighted projective space as a hypersurface minus a hyperplane; see Theorem 2.3 and Corollary 2.5 below. In particular we recover the following result of Daigle and Russell [DR].

Theorem 1.1. Let V be a normal Gizatullin surface with a finite divisor class group. Then V can be embedded into a weighted projective plane P(a, b, c) minus a hypersurface. More precisely:
(a) the toric surface V_{d,e} = A^2/E_d is isomorphic to the complement of the curve {z = 0} of the weighted projective plane P(1, e, d) equipped with homogeneous coordinates (x : y : z) and with the 2-torus action (λ_1, λ_2).(x : y : z) = (λ_1 x : λ_2 y : z);
(b) any other such surface is isomorphic to the complement of a hypersurface in a weighted projective plane P(a, b, c) for some positive integers a, b, c satisfying a + b = cm and gcd(a, b) = 1.

2.
Embeddings of C*-surfaces into weighted projective spaces

According to Proposition 4.8 in [FlZa 1], every normal affine C*-surface V is equivariantly isomorphic to the normalization of a weighted homogeneous surface V′ in A^4. In some cases (described in loc. cit.) V′ can be chosen to be a hypersurface in A^3. Cf. also [Du] for affine embeddings of some other classes of surfaces. In Theorem 2.3 below we show that any normal C*-surface V with a C+-action is the normalization of a principal Zariski open subset of some weighted projective hypersurface. In the proofs we use the following observation from [Fl].

Proposition 2.1. Let R = ⊕_{i≥0} R_i be a graded R_0-algebra of finite type containing the field of rational numbers Q. If z ∈ R_d, d > 0, is an element of positive degree, then the group of dth roots of unity E_d ≅ Z_d acts on R, and then also on R/(z − 1), via ζ.r = ζ^i r for r ∈ R_i, and the quotient Spec(R/(z − 1))/E_d is isomorphic to the complement of the hyperplane {z = 0} in Proj(R).

Let us fix the notations.

2.2. Let V = Spec A be a normal C*-surface with DPD-presentation (1). If V carries a C+-action then, according to [FlZa 2], after interchanging (D_+, D_−) and passing to an equivalent pair, if necessary, we may assume that (2) holds.

Theorem 2.3. Let F be the polynomial as in (3), which is weighted homogeneous of degree k(e_+ + e_−) + d deg Q with respect to the weights (4). Then the surface V as in 2.2 above is equivariantly isomorphic to the normalization of the principal Zariski open subset D_+(z) of the hypersurface V_+(F) in the weighted projective 3-space P as in (5).

Proof. The field L is a cyclic extension of K = Frac(A). Its Galois group is the group of dth roots of unity E_d, acting on L via the identity on K and by ζ.s = ζs on the adjoined element s. The element x = s^{e_+} u ∈ A′_1 is a generator of A′_1 as a C[s]-module. According to Example 4.10 in [FlZa 1], the graded algebra A′ is isomorphic to the normalization of B. The cyclic group E_d acts on A′ with invariant ring A. Clearly this action stabilizes the subring B. Assigning to x, y, z, s the degrees as in (4), F as in (3) is indeed weighted homogeneous.
Remark 2.4. In general not all weights of the weighted projective space P in (5) are positive. Indeed, it can happen that ke_− + d deg Q ≤ 0. In this case we can choose α ∈ N with ke_− + d(deg Q + α) > 0 and consider instead of F a modified polynomial, which is now weighted homogeneous of degree k(e_+ + e_−) + d(deg Q + α) with respect to positive weights. As before, V = Spec A is isomorphic to the normalization of the principal open subset D_+(z) of the hypersurface V_+(F) in the corresponding weighted projective space.

In certain cases it is unnecessary in Theorem 2.3 to pass to the normalization.

Corollary 2.5. Assume that in (2) one of the following conditions is satisfied. (ii) e_+ + e_− = 0, and D_0 is a reduced divisor. Then V = Spec A is equivariantly isomorphic to the principal open subset D_+(z) of the weighted projective hypersurface V_+(F) as in (3) in the weighted projective space P from (5).

Proof. In case (i) the hypersurface in A^3 with the given equation is normal. In other words, the quotient R/(z − 1) of the graded ring R = C[x, y, z, s]/(F) is normal, and so is its ring of E_d-invariants. Since the divisor D_0 is supposed to be reduced and D_0(0) = 0, the polynomials Q(t), and then also Q(s^d), both have simple roots. Hence the hypersurface F(x, y, 1, s) = 0 in A^3 is again normal, and the result follows as before.

Remark 2.6. The surface V as in 2.2 is smooth if and only if the divisor D_0 is reduced and −m_+ m_− (D_+(0) + D_−(0)) = 1, where m_± > 0 is the denominator in the irreducible representation of D_±(0); see Proposition 4.15 in [FlZa 1]. It can happen, however, that V is smooth but the surface V_+(F) ∩ D_+(z) ⊆ P has non-isolated singularities. For instance, if in 2.2 D_0 = 0 (and so Q = 1), then V is an affine toric surface. In fact, every affine toric surface different from (A^1_*)^2 or A^1 × A^1_* appears in this way. In this case the integer k > 0 can be chosen arbitrarily.
For any k > 1, the affine hypersurface V_+(F) ∩ D_+(z) ⊆ P with equation x^k y − s^{k(e_+ + e_−)} = 0 has non-isolated singularities and hence is non-normal. Its normalization V = Spec A can be given as the Zariski open part D_+(z) of the hypersurface V_+(xy′ − s^{e_+ + e_−}) in P′ = P(e_+, e_−, d, 1) (which corresponds to the choice k = 1). Indeed, the element y′ = s^{e_+ + e_−}/x ∈ K with y′^k = y is integral over A. However, cf. Theorem 1.1(a).

Example 2.7. (Danilov-Gizatullin surfaces) We recall that a Danilov-Gizatullin surface V(n) of index n is the complement of a section S in a Hirzebruch surface Σ_d, where S^2 = n > d. By a remarkable result of Danilov and Gizatullin, up to isomorphism such a surface depends only on n, and neither on d nor on the choice of the section S; see e.g. [DaGi, CNR, FKZ 3] for a proof. According to [FKZ 1, §5], up to conjugation V(n) carries exactly n − 1 different C*-actions, and these admit explicit DPD-presentations. Taking here d = 1, it follows that V(n) is isomorphic to the normalization of the hypersurface x^{n−1} y − (s − 1)s^{n−1} = 0 in A^3.

Theorem 2.8. For a smooth affine surface V, the following conditions are equivalent. (vi) V is isomorphic to the Zariski open subset D_+(z) of a weighted projective hypersurface as in Theorem 2.3.

Proof. In view of the references cited above it remains to show the equivalence for the surfaces in question. These surfaces admit as well a constructive description in terms of a blowup process starting from a Hirzebruch surface; see [GMMR, 3.8] and [KK, Example 1]. An affine line Γ ≅ A^1 on V as in (ii) is distinguished because it cannot be a fiber of any A^1-fibration of V. In fact there exists a family of such affine lines on V; see [Za]. Some of the surfaces as in Theorem 2.8 can be properly embedded in A^3 as Bertin surfaces x^e y − x − s^d = 0; see [FlZa 2, Example 5.5] or [Za, Example 1].

3. Gizatullin surfaces with a finite divisor class group

A Gizatullin surface is a normal affine surface completed by a zigzag, i.e., a linear chain of smooth rational curves.
By a theorem of Gizatullin [Gi] such surfaces are characterized by the property that they admit two C+-actions with different general orbits. In this section we give an alternative proof of the Daigle-Russell Theorem 1.1 cited in the Introduction. It will be deduced from the following result proven in [FKZ 2, Corollary 5.16].

Proposition 3.1. Every normal Gizatullin surface with a finite divisor class group is isomorphic to one of the following surfaces. (a) The toric surfaces V_{d,e} = A^2/E_d, where the group E_d ≅ Z_d of dth roots of unity acts on A^2 via ζ.(x, y) = (ζx, ζ^e y). (b) The surfaces with DPD-presentation as in (9), with coprime integers e, m such that 1 ≤ e < m. Conversely, any normal affine C*-surface V as in (a) or (b) is a Gizatullin surface with a finite divisor class group.

Let us now deduce Theorem 1.1.

Proof of Theorem 1.1. To prove (a), we note that according to 2.1 the cyclic group E_d acts on the ring C[x, y, z]/(z − 1) ≅ C[x, y] via ζ.x = ζx, ζ.y = ζ^e y, and ζ.z = z, where deg x = 1, deg y = e, and deg z = d. Hence, by Proposition 2.1, A^2/E_d is isomorphic to the complement of the curve {z = 0} in P(1, e, d), which proves (a). Turning to (b), by definition (1) we have u_+^m = t^{m−e} v_+, u_−^m = t^e v_−, and u_+ u_− = t(t − 1)^c. The algebra A is the integral closure of the subalgebra generated by u_±, v_± and t. Consider now the normalization A′ of A in the field L = Frac(A)[u′_+], where u′_+ is defined by (10). Clearly the elements v_+^{1/m} = t^{(e−m)/m} u_+ and then also t^{(e−m)/m} belong to L. Since e and m are coprime, we can choose α, β ∈ Z with α(e − m) + βm = 1. It follows that the element τ := t^{1/m} = (t^{(e−m)/m})^α t^β is as well in L, whence, being integral over A, we have τ ∈ A′. The element u′_+ as in (10) also belongs to A′, and, taking dth roots, we get for a suitable choice of the root u′_−

(11) u′_+ u′_− = τ^m − 1.

We note that u_±, v_± and t are contained in the subalgebra B = C[u′_+, u′_−, τ] ⊆ A′. The equation (11) defines a smooth surface in A^3. Hence B is normal, and so A′ = B. By Lemma 3.2 below, for a suitable γ ∈ Z the integers a = e − γm and d are coprime.
We may assume as well that 1 ≤ a < d. We let E_d act on A′ via ζ.u′_+ = ζ^a u′_+ and ζ|A = id_A. Since gcd(a, d) = 1, A is the invariant ring of this action. We claim that the action of E_d on (u′_+, u′_−, τ) is given by

(12) ζ.(u′_+, u′_−, τ) = (ζ^a u′_+, ζ^{cm−a} u′_−, ζ^c τ).

Indeed, the equality u′_+^c = t^{(e−m)/m} u_+ = τ^{e−m} u_+ implies that ζ.τ^{e−m} = ζ^{ac} τ^{e−m}. Since τ = (τ^{e−m})^α t^β, the element ζ ∈ E_d acts on τ via ζ.τ = ζ^{αca} τ. In view of the congruence αa ≡ 1 mod m, the last expression equals ζ^c τ. Now the last equality in (12) follows. In the equation u′_+ u′_− = τ^m − 1 the term on the right is invariant under E_d; hence also the term on the left is. This provides the second equality in (12). To complete the proof we still have to show the following elementary lemma.

Lemma 3.2. Assume that e, m ∈ Z are coprime. Then for every c ≥ 2 there exists γ ∈ Z such that γm − e and c are coprime.

Proof. Write c = c′c″ such that c′ and m have no common factor and every prime factor of c″ occurs in m. Then for every γ ∈ Z the integers γm − e and c″ have no common prime factor. Indeed, such a prime must divide m and then also e = γm − (γm − e), contradicting the assumption that e and m are coprime. Hence it is enough to establish the existence of γ ∈ Z such that γm − e and c′ are coprime. However, the latter is evident, since the residue classes of γm, γ ∈ Z, in Z_{c′} cover this group.

2. As follows from Theorem 0.2 in [FKZ 2], the integers c, m in Theorem 1.1(b) are invariants of the isomorphism type of V. Indeed, the fractional parts of both divisors D_± as in (9) being nonzero and concentrated at the same point, there is a unique DPD-presentation for V up to interchanging D_+ and D_−, passing to an equivalent pair and applying an automorphism of the affine line A^1 = Spec C[t]. Furthermore, from the proof of Theorem 1.1 one can easily derive that a ≡ e mod m and b = mc − a ≡ −e mod m.
Therefore also the pair (a, b) is uniquely determined by the isomorphism type of V, up to a transposition and up to replacing (a, b) by (a′, b′) = (a − sm, b + sm) while keeping gcd(a′, b′) = 1.
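The proof of Theorem 1.1(b) leans on Lemma 3.2. Being elementary, the lemma is easy to sanity-check by brute force over small parameters (a numerical illustration only, not part of the proof):

```python
from math import gcd

def find_gamma(e, m, c, bound=100):
    """Return some gamma with gcd(gamma*m - e, c) == 1, as Lemma 3.2 asserts."""
    for gamma in range(-bound, bound + 1):
        if gcd(gamma * m - e, c) == 1:
            return gamma
    return None  # never reached for coprime e, m if the lemma holds

# Exhaustive check over a small parameter range: for each coprime pair (e, m)
# and each c >= 2, a suitable gamma is found.
ok = all(
    find_gamma(e, m, c) is not None
    for m in range(1, 20)
    for e in range(1, 20)
    if gcd(e, m) == 1
    for c in range(2, 20)
)
```

This matches the use of the lemma in the proof, where c = d and a = e − γm is made coprime to d.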
A Codesigned Compact Dual-Band Filtering Antenna with PIN Loaded for WLAN Applications

1 College of Computer and Information Science, Southwest University, Chongqing 400715, China
2 College of Computer Science and Technology, Zhoukou Normal University, Zhoukou 466001, China
3 Department of Information Engineering, Chongqing City Management College, Chongqing 401331, China
4 Electrical and Computer Engineering Department, The University of Texas at San Antonio, San Antonio, TX 78249, USA

Introduction

The wireless local area network (WLAN) has experienced rapid growth in the last decade and has been widely employed in wireless applications such as Wi-Fi, Bluetooth, and GPS. For further portability and a better user experience, size reduction has been a trend in wireless system design. Generally, antennas and filters are considered the critical components in reducing the overall size of radio frequency (RF) front ends. In addition, the filter is usually connected right after the antenna, and a direct connection between them may induce additional mismatch losses and deteriorate the performance of the filter. Therefore, the integration of the antenna and the filter is of great interest, as it can provide both the desired filtering and radiating functions.
Several efforts have been reported in the literature [1-13] to integrate the antenna and the filter into a single module, and the sizes as well as the operating bands are summarized in Table 1 [1-8]. The integration of an open-loop-resonator-loaded filter and a monopole antenna is presented in [3], which reduces the size of the filtering antenna by employing the antenna structure as the last resonator of the filter. The synthesis technique has been comprehensively studied in [2,8] and proved feasible by the example of an integrated 2.45 GHz third-order filtering antenna with good band-edge selectivity and design accuracy. What is more, compared to the traditional way of connecting the antenna and the filter through standard 50 Ω ports, the filtering antenna in [2,8] reduces the transition loss to nearly zero. For further reduction of the footprint of the integrated filtering antenna system, the integration of a slot antenna and a vertical cavity filter has been proposed in [7]. However, most of the reported filtering antennas are for single-band applications, which can hardly meet the demands of current dual-band 802.11a/b/g WLAN applications.
In this paper, a compact dual-band filtering antenna with a printed structure for wireless communication systems is proposed. The filtering antenna integrates a microstrip dual-band band-pass filter and a simple monopole antenna by means of the synthesis approach. Without restricting the impedance between the filter and the antenna to 50 Ω, the performance of the filtering system is promoted, and the total loss of the filtering antenna is almost the same as the filter insertion loss. The presented filtering antenna has an overall size of 27 × 20 mm², which is much smaller than the antenna referred to in [3]. To widen the bandwidth of the filtering antenna, reconfigurable technology [14-18] has been applied, which usually expands the bandwidth by combining different working states within a fixed size. In particular, a PIN diode is incorporated in the filtering antenna system, which covers the dual-band 2.45/5.2 GHz WLAN by combining the two working states. Fabrication of the printed filtering antenna is easy and of low cost, while the integrated filtering antenna has good band-edge selectivity and flat antenna gain. Measured radiation characteristics of the proposed filtering antenna are presented.

Design of the Filtering Antenna

Figure 1 depicts the layout of the proposed integrated filtering antenna, which is printed on a low-cost two-side printed substrate with a relative dielectric constant of 3.5 and a thickness of 0.508 mm. The filtering antenna system includes three resonators of high Q values and one monopole antenna. The dual-band filtering antenna has a total size of 27 × 20 mm², which is realized due to the introduction of the PIN diode and is quite promising for wireless communication systems. Note that the dual-band band-pass filter is directly connected to the antenna, and the mismatch loss is correspondingly reduced. The coupling coefficient k12 depends on the relative distance between resonator 1 and resonator 2.
Synthesis of the Dual-Band Filtering Antenna

The smaller this distance is, the stronger the coupling between the resonators is. The coupling coefficient k is calculated based on the following equation:

k = (f2² − f1²) / (f2² + f1²)

where f1 and f2 represent the eigenfrequencies of resonator 1 and resonator 2, respectively, which can be obtained through optimization of the relative distance between the resonators. The coupling coefficient as a function of the distance between resonator 1 and resonator 2 is depicted in Figure 2. The resonant frequencies of the monopole antenna are mainly determined by its total length, namely, the length (L2 + L3 + L3). The lengths of the long strip (34.6 mm) and the short strip (13.6 mm) are approximately a quarter-wavelength at 2.45 GHz and 5.2 GHz, respectively. The connecting line between the filter and the antenna is optimized, with its dimensions finally identified as 10.6 mm in length and 1.1 mm in width.

Implementation of the Reconfigurable Technology

To achieve larger bandwidth coverage, the PIN diode is incorporated in the filtering antenna, located in the coupling structure of the filter. As a result, the bandwidth of the filtering antenna is expanded within a fixed size. When the PIN diode is OFF (state 1), it is equivalent to a series capacitance with high isolation, and the filtering antenna works in the previously mentioned state, covering 2425-2465 MHz and 4785-5220 MHz. When the PIN diode is ON (state 2), it works as a series resistance; resonator 2 and resonator 3 are then directly connected, so the path of the currents is changed and the resonant frequencies are shifted towards lower frequencies, covering 2340-2430 MHz and 4740-5100 MHz. Therefore, by combining the two working states, the bandwidth of the filtering antenna is increased by about 40% with the addition of the PIN diode, covering the 2.45/5.2 GHz WLAN operation. The prototype is fabricated with consideration of the PIN diode and its DC bias circuit.
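The coupling-coefficient relation and the quarter-wavelength sizing rule can be sketched numerically as follows. This is a minimal illustration, not design code from the paper: the example split frequencies (2.40/2.50 GHz) are hypothetical placeholders for what an eigenmode simulation would report, and the quarter-wavelengths printed are free-space figures, so the actual strips on the εr = 3.5 substrate are shorter by a factor of √εeff.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def coupling_coefficient(f1, f2):
    """k = (f2^2 - f1^2) / (f2^2 + f1^2) for the two split
    eigenfrequencies of a synchronously tuned resonator pair."""
    return (f2**2 - f1**2) / (f2**2 + f1**2)

def quarter_wavelength_mm(f_hz, eps_eff=1.0):
    """Physical quarter-wavelength in millimetres for a guided wave
    slowed by an effective permittivity eps_eff (assumed value)."""
    return C0 / (f_hz * math.sqrt(eps_eff)) / 4.0 * 1e3

# Hypothetical split resonances read off an eigenmode simulation
k = coupling_coefficient(2.40e9, 2.50e9)
print(round(k, 4))  # 0.0408 -> weak coupling at this spacing

# Free-space quarter-wavelengths at the two WLAN bands
print(round(quarter_wavelength_mm(2.45e9), 1))  # 30.6 mm
print(round(quarter_wavelength_mm(5.2e9), 1))   # 14.4 mm
```

As the spacing between the resonators grows, f1 and f2 converge and k falls towards zero, which is why the relative distance is the tuning knob the synthesis procedure optimizes.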
Results and Discussion

The proposed dual-band filtering antenna system was fabricated and tested. Photos of the fabricated filtering antenna system and the PIN diode, as well as its corresponding DC bias circuit, are shown in Figure 4. The simulated results of the integrated filtering antenna are obtained with HFSS 13.0, while the measured results are given by using the Agilent N5247A network analyzer and the Satimo StarLab far-field measurement system. Simulated and measured S11 against frequency for the two working states are shown in Figure 5. The PIN diode is a Philips BAP64-03, while the DC bias current is supplied by two AAA batteries. The filtering antenna successfully changes its working states to widen the bandwidth by nearly 80 MHz (230%) for 2.4 GHz WLAN and 49 MHz for 5.2 GHz WLAN. The measured results show that the presented filtering antenna provides good selectivity and out-of-band rejection.

Measured radiation patterns for the proposed filtering antenna system in the three principal planes are shown in Figure 6, measured at 2400 and 5000 MHz for state 1 as well as 2450 and 5200 MHz for state 2. For all the different frequencies of the two working states, the measured radiation patterns are dipole-like and nearly omnidirectional in the azimuth plane for the two bands.
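The combined coverage of the two diode states can be checked mechanically by taking the union of the pass-bands quoted earlier. The sketch below uses only the band edges stated in the text; the interval-merging routine is a generic helper, not something from the paper.

```python
def merge_bands(bands):
    """Merge overlapping or touching frequency intervals (MHz)."""
    merged = []
    for lo, hi in sorted(bands):
        if merged and lo <= merged[-1][1]:
            # Current band overlaps the previous one: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

# Pass-bands quoted in the text for the two diode states (MHz)
state1 = [(2425, 2465), (4785, 5220)]  # PIN OFF
state2 = [(2340, 2430), (4740, 5100)]  # PIN ON
combined = merge_bands(state1 + state2)
print(combined)  # [(2340, 2465), (4740, 5220)]

# Extra coverage at 2.4 GHz relative to the single OFF state
gain_mhz = (2465 - 2340) - (2465 - 2425)
print(gain_mhz)  # 85
```

The roughly 85 MHz of extra low-band coverage computed here tallies with the "nearly 80 MHz" widening reported from the measurements.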
Conclusion

In this paper, a small-size dual-band filtering antenna incorporating a PIN diode is proposed. The presented filtering antenna system is printed on a low-cost substrate and occupies a total size of 20 × 27 mm². Through the incorporation of the PIN diode, the bandwidth of the filtering antenna is increased by nearly 230% for 2.4 GHz WLAN, covering the dual-band 2.45/5.2 GHz WLAN operation. Performance of the filtering antenna system is promoted through the optimization of the impedance between the filter and the monopole antenna. In addition, by directly connecting the antenna and the filter, the mismatch loss is greatly reduced. According to the measured results, the proposed filtering antenna system has good selectivity and out-of-band rejection, which proves that the filtering antenna is applicable to the RF front end of modern wireless systems.

Table 1: Comparison of the sizes of the filtering antenna.
Lament of a Wounded Priest: The Spiritual Journey of Job

Acknowledging the complex redaction history which produced the Book of Job contained in the Jewish and Christian canonical scriptures, this article offers a spiritual interpretation of the text taking due account of its overall structure and major parts (prologue, main dialogical body and epilogue). With its focus on the formation of personal identity, spiritual theology grants access to a developmental understanding of the biblical narrative and characters. Undergirding this essay is the basic claim that in and with the book and figure of Job are found paradigmatic examples of how to become and remain human and faithful in and despite relentless undeserved suffering. The exploration of Job's life in suffering leads to the discovery that the lament formulated by a faithful heart compellingly summons God to appear and speak, consecrating the human recipient as mediator of divine revelation and sacramental intercessor. Job's wounded body and spirit reflect the spiritual journey he has completed and has been commissioned to invite others to undertake. Undeserved suffering can lead to transformative mystical encounters with God, if and when the human heart dares to believe to the end, giving voice to and challenging God from within relentless unjustifiable pain.

Introduction

The biblical book of Job defies common sense understandings of faith and human existence. For more than two millennia, this fictional rendering of a non-Israelite's life has felt to its readers all too disruptively and painfully real. Thomas Long aptly describes how, in this most famous of wisdom texts, the human experience and reality of suffering breaks open and apart the classical theology of retribution to set before the faithful the mystery that is the absolute freedom of God.

The author of the canonical Book of Job no doubt knew that the original plot of the story - perfect world/disaster/perfect world restored - was logically impossible. . . . Job's "perfect world" was built upon the assumption that God plays by a set of moral rules that are widely publicized and known to humanity. As long as a person, like Job, obeys the rules, or engages in acts of purification when one of those rules may have inadvertently been broken, then God can be trusted to "play fair" and to preserve and protect. The problem was that the destruction and suffering experienced by Job came as the direct result of divine behavior, which, as far as these moral rules go, was definitely in foul territory. Job suffers not because he has violated some holy ordinance, but because God issued a seemingly capricious challenge to an Adversary, made a wager in the heavenly court, and enigmatically turned Job over to the power of a malicious opponent. (Long 1988, pp. 10-11)

In the Book of Job, divine revelation invites further reflection through the figure of a most honest and not so patient (rebelling) servant. On the basis of his suffering existence, Job demands that God reveal and account for Godself. Job has had enough of the unreliability of mediations (creation, human reason, conscience, justice and consolation); the wounded nakedness to which his undeserved suffering has reduced him can only be soothed by complete self-disclosure on the part of God.
Job's faithfulness in response to God's readiness to submit him to the test of extreme suffering challenges us to enter into the deeper mystery of being human before the Lord. What is and can be the significance of a blameless existence in absolute poverty and abjection? How and why would God allow for faithful servants to be subjected to unjustified and unjustifiable suffering? Under the guidance of contemporary biblical exegetes and commentators, in what follows I will explore the challenge that undeserved suffering constitutes to a rational understanding and practice of faith. Drawing attention to selected passages of the biblical text, I will follow theological insights leading away from human explanations and, hopefully, closer to the divine mystery itself. Acknowledging the complex redaction process and history which resulted in and with the production of the Book of Job in the form it takes in the Jewish and Christian canonical scriptures, the present article attempts to provide a spiritual interpretation of the text taking due account of its overall structure and major parts (prologue, main dialogical body and epilogue).[1] With its focus on the formation and evolution of personal identity, spiritual theology grants access to an organic developmental understanding of the biblical text and characters. Sandra Schneiders thus speaks of "biblical spirituality," a phrase which, "first and most fundamentally refers to the spiritualities that come to expression in the Bible and witness to patterns of relationship with God that instruct and encourage our own religious experience" (Schneiders 2002, p.
134). The biblical text was not written and transmitted in history only to convey the contents of divine revelation, but also and, arguably, more importantly in order to induce profound lasting personal transformation in its readers. Scripture serves and fulfills a spiritual vocation, for "transformative engagement with the text is the ultimate raison d'être of biblical study within the ecclesial community" (Schneiders 2002, p. 142).

In particular, spiritual theology enables and accompanies the definition and production of meaningful life narratives from and within the experience of unexplained and unjustifiable suffering. The challenge and purpose is not to justify or explain away such suffering, but rather to acknowledge its reality and find creative and humanizing ways to live in and through it. Undergirding this essay is the basic claim that in and with the book and figure of Job are found paradigmatic examples of how to become and remain human and faithful in and despite relentless undeserved suffering. The exploration of Job's life in suffering should lead to the discovery that the lament formulated by a faithful heart compellingly summons God to appear and speak, consecrating the human recipient as mediator of divine revelation and sacramental intercessor. Job's wounded body and spirit reflect the spiritual journey he has completed and, I dare to suggest, has been commissioned to invite others (his "friends") to undertake. Job's journey in and to faith is one that can be understood only when and as it is lived in the thick and dark of human existence. Let us therefore listen to Job again and anew, in an attempt to further our understanding of how someone who, though entirely blameless "in terms of his moral character and the practice of his faith," ends up "confessing his new and far more profound knowledge of God" (Ortlund 2015, p.
256). What is this knowledge of God which, while emerging in and from faithful compliance with divine law, so transcends human reason and speech that it can only be conveyed by means of lived example and testimony?

[1: In this respect, the approach adopted here is similar to that used by exegetical scholars Carol A. Newsom (see Newsom 2003) and Susannah Ticciati (see Ticciati 2005), who both focus on interpreting the final product of this redaction history in its organic complexity.]

Job, the Wise?

The sacred text opens with a brief description of Job. "There was once a man in the land of Uz whose name was Job. That man was blameless and upright, one who feared God and turned away from evil" (1:1). As the poem or interlude on wisdom (which forms the whole of chapter 28) confirms, this description articulates the substance of authentic wisdom. In a formulation akin to those found in Proverbs, Job 28:28 thus teaches: "Truly, the fear of the Lord, that is wisdom; and to depart from evil is understanding." Job's wisdom is revealed and acknowledged in and by his behavior. Wisdom cannot be reduced to a body of theoretical knowledge; it rather identifies with and finds expression in a definite way of being and doing. Job knows himself to be the recipient of a multitude of privileges (family, servants, cattle and other possessions) for which he pays constant tribute to the Lord. Job is aware of the gracious character of these privileges, never owed to those who enjoy them. Before he is plagued with his infamous sufferings and tribulations, Job lives and practices a faith essentially preventive in nature and character (see Ticciati 2005, p.
71). Job anticipates what could go wrong, prays and offers additional sacrifices to make sure the people he cares about and for remain in good standing with the Lord. Knowing human beings inherently fallible, Job goes out of his way to prevent himself and his dependents from committing any kind of sin. Referring to Job's defense against his friends' accusations, Fredrick Holmgren observes that Job "does not mean that he is absolutely perfect. That belongs to God alone. He affirms only that he has been true to the covenant with God as human beings can be true" (Holmgren 1979, p. 347).

Faithfulness to the covenant is for Job a full-time occupation and task; humans must remain vigilant and submit their lives to relentless scrupulous moral examination. Never permanently acquired and fully developed, human faithfulness and piety are fragile. Stephen Mitchell analyzes the business that covenantal morality has turned into at this point of Job's life.

Job avoids evil because he realizes the penalties. He is a perfect moral businessman: wealth, he knows, comes as a reward for playing by the rules, and goodness is like money in the bank. But, as he suspects, this world is thoroughly unstable. At any moment the currency can change, and the Lord, by handing Job over to the power of evil, can declare him bankrupt. No wonder his mind is so uneasy. He worries about making the slightest mistake; when he has his children come for their annual purification, it is not even because they may have committed any sins, but may have had blasphemous thoughts. (Mitchell 1987, p. ix)

Mitchell goes so far as to claim that the Job of the prologue is not wise, but merely dutifully good; he does not possess the mature personal identity needed to take a free stance toward God (see Mitchell 1987, pp. ix-x). Servile fear (of retribution) is what moves this Job to be and act faithfully.
Mitchell's reading of Job's spiritual condition strongly correlates with that of another character in the story: the Satan (Accuser, Adversary). At first unaware of him, the Satan has his attention drawn to Job by the Lord's clear affirmation and praise of his way of life. The Lord's words attest the conformity of Job's way of life to the definition of wisdom. "Have you considered my servant Job? There is no one like him on the earth, a blameless and upright man who fears God and turns away from evil" (1:8). Shimon Bakon underscores that these words undeniably reveal "full divine trust in the integrity of man" (Bakon 1993, p. 227). A doubting God would never have brought Job under the scrutiny of the heavenly Accuser, whose vocation precisely is to find fault with the moral/spiritual condition of humans. A God desiring to help Job move from fearful self-oriented faithfulness to loving altruistic service might, however, submit him to such a test. The Lord even describes Job as a faithful servant whose life and actions have no equal on earth. Thus, upheld by God's boastful trust and praise, Job becomes the object of the Satan's challenging doubt. As he objects to the Lord's all-affirmative stance toward Job, the Satan accuses both Job and the Lord of being unfaithful. To put Job's piety to the test of authenticity entails subjecting the Lord's blessing to similar testing. What if Job's motives were not so pure and holy? What if Job served the Lord only by virtue of and for the benefits he receives? In Robert O'Rourke's clear words, "the Accuser claims that Job's faithfulness should be tested, for, if all his blessings were to be taken away, he would undoubtedly curse God" (O'Rourke 2006, p. 59).
Consequential Speculations

The very formulation of this challenge reveals that the Satan does not have access to Job's heart. The Satan does not know the true motivations undergirding Job's actions. He speaks as an external observer reduced to making suppositions. As he challenges the Lord's affirmation of Job's exceptional righteousness, the Satan makes himself liable to testing and potential repudiation from both the Lord and Job. Knowing, as Job 32:8 reminds us, that the human spirit is quickened by "the breath of the Almighty,"[4] the Lord cannot but inhabit and know the depths of the human heart. In these conditions, if and when the Lord accepts the Satan's challenge, the Lord does so not to test Job's faith, but rather to lead the Satan to respect human dignity and goodness as they deserve. God boasts of Job's faithfulness because God has sounded his heart. The suffering of the righteous is a test for the hosts of the Lord. In and through the person of tested Job, the Lord is offered by the Satan the opportunity to assert the dignity of the human creature over and beyond that of the divine hosts ("sons of God"). If demonstrated, the faithfulness of Job could be invoked to humble the messengers of God. If Job were to learn to be faithful to God in the context of his undeserved suffering, the Satan might no longer be able to impact Job's existence significantly.
Indeed, the Satan's challenge rests on a definite understanding of faith and piety, deemed genuine only when the person "fear(s) God for nothing" (1:9). True reverence is disinterested, that is, never motivated by self-centered desires. True faith and piety demand that God be loved in and for Godself, independently from all forms of compensation and consolation. The Satan supposes that humans are incapable of such faith and piety, and he wishes that the Lord grant him the authority and power required to demonstrate the truth of his claim. The Satan's lack of faith in humans forms the justification for the exercise of dominion over them. When the Lord accepts the challenge, the Satan is granted the power to move human beings (Sabeans and Chaldeans) and natural elements (fire and wind) so that Job's loved ones (with the exception of his wife) and possessions are killed and taken away from him (1:13-19). Hence, if a human person could meet the Satan's challenge, she would thereby be liberated from his influence and even gain a moral/spiritual authority before God he himself does not enjoy. The many blessings Job enjoys and receives at the beginning and end of the biblical book may then result from and reflect the unique communion and intimacy he shares with the Lord, to which the heavenly hosts themselves do not have access. The book of Job may thus constitute one of the most powerful affirmations of human dignity, and of the dignity of the suffering in particular.
Job's response to this first challenge demonstrates that he cannot be reduced to the "perfect moral businessman" Mitchell wants to perceive in him. "Then Job arose, tore his robe, shaved his head, and fell on the ground and worshiped. He said, 'Naked I came from my mother's womb, and naked shall I return there; the Lord gave, and the Lord has taken away; blessed be the name of the Lord'" (1:20-21). This prayer is the prayer of someone in mourning. Job grieves for the loved ones he has lost. Job bears the destruction of a family, a home, a community and an economy built over a lifetime of faithful service and careful governance with the continuing assistance and blessing of the Lord. This prayer is the prayer of someone who offers unbearable loss to God, acknowledging his own nothingness and utter dependence on God's plenitude. Job's faith does not rest on blessings and rewards (though needed and welcome), but rather on God's presence and involvement in Job's life (to which the blessings and rewards bear witness, though not in necessary and/or sufficient fashion). Job lives from communion in faith with the Lord both in times of prosperity and poverty.

[4: In 27:3-4, Job articulates a very similar position: "As long as my breath is in me and the spirit of God is in my nostrils, my lips will not speak falsehood, and my tongue will not utter deceit."]
The Lord himself attests to this communion by invoking it to challenge the Satan. "He still persists in his integrity, although you incited me against him, to destroy him for no reason" (2:3). The Satan's authority is placed by the Lord under direct threat; the infliction of undeserved suffering upon Job has been shown to be unjustified and, therefore, unjustifiable. The Satan is summoned to provide an account for subjecting Job to such a treatment. An abstract ideal of disinterested faith does not and cannot justify the infliction of suffering for "no reason," especially when such suffering is turned by the undeserving victim into a medium and instrument for the expression of authentic faith. Once again, the Satan is faced with a basic alternative: either he acknowledges and pays respect to Job's dignity and faithfulness (and to the trust the Lord has placed in Job) or he invites the Lord to challenge Job to undergo further spiritual transformation (by inciting the Lord to allow him to inflict further undeserved suffering on Job). To justify hitting Job with more unwarranted pain and turmoil, the Satan invokes the human fear of death as the motivation for Job's persisting piety. In this view, human beings believe in God and respect God's commandments because they do not want to lose their lives (literally or in the form of qualitative diminishment), especially when they have lost everything else.

The Lord agrees to the Satan's request, allowing him to alter Job's health detrimentally without taking his life. Samuel Balentine interestingly suggests that in the Book of Job it is God who is tempted and fails (twice) to trust (see Balentine 2003, p. 357). The prologue raises serious concerns about God's status and purpose, for as he further observes, "the admission that God can be provoked to do something that might not have occurred without some external pressure invites reflection on the nature of God. Can God be coerced, manipulated, perhaps even tricked?" (Balentine 2003, p.
360). Or, following the hypothesis just hinted at, could God be led to inflict undeserved suffering upon pious human beings to hold his heavenly hosts accountable? In both cases, the main concern resides in and identifies with God's unaccountability. Does the covenant tying the Lord and the people of Israel to one another allow for God to act arbitrarily and/or in ways that intentionally jeopardize Israel's perennity and flourishing? While the covenant recognizes the possibility that the human partner may fail to uphold her obligations and includes mechanisms enabling her to make due amends, "there is no prescribed sacrifice that atones for God's malfeasance, no ritual that requires God's repentance. . . . Once it is established that God has afflicted Job both willfully and gratuitously, the cult has nothing to offer him" (Balentine 2003, p. 363). As seen previously, in Job 2:3, the Lord admits he allowed the Satan to inflict undeserved suffering upon Job "for no reason." The epilogue reiterates this thought by having Job's new extended family comfort him for "all the evil that the Lord had brought upon him" (42:11).

Challenged and Challenging Faith

Hence, the Book of Job reveals a theological landscape overshadowed by God's absolute freedom and arbitrariness. In a fundamental sense, then, God's fate, alongside the Satan's, may be shown to lie in the hands and heart of Job. Susannah Ticciati succinctly describes Job's predicament: "Job must live with this arbitrary God - there is no other option and no other God. Is not this the nub of Job's pain . . . that there is nowhere to which he can escape from this God?" (Ticciati 2005, p.
172). Afflicted with an incurable skin disease, living in absolute poverty and shame, Job's condition kindles horror in his wife's heart who, like the Satan (and most readers), does not understand why Job holds on to his faith (which may reflect both her desire to relieve Job from his suffering and the challenge to which her own faith is subjected). In response to her invitation to let go of his piety, Job retorts: "Shall we receive the good at the hand of God, and not receive the bad?" (2:10). As some of his friends join him and, after a week of respectful silence, seemingly take over from the Satan in subjecting his faith to relentless trial, Job speaks in ways indicating that if he faithfully receives "the bad" from God, he does not do so in and with peaceful patience. Job finds himself at the end of his endurance and wits. "What is my strength that I should wait? And what is my end, that I should be patient?" (6:11) As Karla Suomala underscores, the question for Job is: "Should he continue to be the kind of person that he has been - upright, honorable, righteous - even though there is apparently no reason to do so?" (Suomala 2011, p. 397).

The task set before Job is tantalizing: to muster the courage and find a way to re-engage positively with God and human existence after undeservingly having lost everything because of God (see Wettstein 2001, p.
345). The achievement of such a feat will demand no less than undergoing a profound spiritual transformation. The first stage of this transformative journey involves giving voice to his broken heart. Job thus begins by cursing the day of his birth, and unleashes his own challenge to God (as the unfolding narrative later reveals) in the form of a lament grounded in and nourished by his profound sense of being unjustly treated. Henrietta Wiley justifiably speaks of Job's construction and expression of lament as a form of resistance: "His outpouring of grief is a veritable torrent of resistance to divine 'favor' and lament over faith, loss and isolation" (Wiley 2009, p. 123). Like Abraham, Job suffers on account of having been elected by the Lord for a special purpose. Blessed by God with a son (Isaac) who embodies the promise and future of Israel, Abraham is asked to sacrifice this same son to God as proof of his faith and righteousness. Blessed with a large family and material riches by God, Job loses everything and everyone he loves (apart from his wife) and is subjected to undeserved suffering to demonstrate his faith and righteousness. God's personal intimacy defines living conditions and demands excluding the enjoyment of "normal" human existence and relationships. Loss and solitude are the appanage of the prophetic life and mission. Wiley explains: "God's demanding attention crowds out the possibility for these men to preserve bonds of affection and support with their families, while God's limitless power renders any affectionate bond with him inconceivable. This leaves the faithful men who are deemed most blessed by God in a state of deepest isolation and loss" (Wiley 2009, p.
129). Job is left to himself not because he is estranged from God, but rather because God has singled him out. God's nearness to Job is such that it becomes a challenge for God's heavenly council who, through the intervention of the Satan, questions the Lord's bestowal of such a favor upon him. Job suffers because he receives too much attention from God. He wishes for the Lord's overbearing gaze to move away from him for a time. Job's words speak for themselves:

What are human beings, that you make so much of them, that you set your mind on them, visit them every morning, test them every moment? Will you not look away from me for a while, let me alone until I swallow my spittle? If I sin, what do I do to you, you watcher of humanity? Why have you made me your target? Why have I become a burden to you? Why do you not pardon my transgression and take away my iniquity? (7:17-21)

Job basically summons God to retrieve and recover God's authentic self, to resume acting in accordance with God's own infinitely merciful nature and life. Job invites God to find in his own faithful piety an image and likeness of the being and existence God has concealed from him for a time. Job suffers from an overdose of God's judging presence, revealed in and through the relentlessness and gratuity (arbitrariness) of his suffering. Job knows his condition can be cured only by God's healing self-disclosure (experienced as forgiveness and communion). Timothy Polk accurately summarizes: "The man is God-intoxicated, we might say, theocentrically obsessed. Indeed, what he seeks most is to see God and to have God recognize and acknowledge him" (Polk 2011, p.
415). His life and condition have literally become unbearable to him (and for others). In Job's concise formulation: "I am blameless; I do not know myself; I loathe my life" (9:21). Resting on the purity of his faith (which does not suppose perfect self-knowledge), Job speaks his experience of undeserved suffering to hold God accountable. Again, in Job's own words: "My face is red with weeping, and deep darkness is on my eyelids, though there is no violence in my hands, and my prayer is pure" (16:16-17). As he speaks and responds to his friends' counter-arguments, Job really is making his plea to God (cf. 21:4). He wishes to be true to himself before God to offer God the opportunity to repent and make amends by vindicating once and for all his person and actions (cf. 23:7). Following Eliezer Berkovits, Job's contention with God reflects the quality of the trust he places in God (see Berkovits 1973, p. 113). Job has been left in such nakedness and poverty that only God's being and life can heal, clothe and support his own. God's relentless testing has turned Job's body and spirit into throbbing open wounds unable to heal by themselves. By virtue of his personal experience of divinely induced unjustified and unjustifiable suffering, Job has become an unfathomable mystery to himself.
Job's challenging lament is grounded in the urge to speak his truth, the truth of an existence lived in accordance with one's conscience, faithfully listening to and following one's heart, where God dwells and breathes life into the human person."Until I die I will not put away my integrity from me.I hold fast to my righteousness, and will not let it go; my heart does not reproach me for any of my days" (27:5-6).Job gained awareness of his integrity from a divine source of knowledge.He loves and trusts the God who speaks to him from within his own heart, and this God compels him to contend with the God his friends describe to him and in whom he used to believe (just like them).Job feels he is ready to pass the ultimate test of human faithfulness: to be measured against God's fully disclosed being and truth.He is willing to give his life to be able to meet with the God he loves so deeply (cf.13:15-16)."Sure of his innocence," comments Brian Gault, "[Job] wishes to argue his case before God, regardless of the consequences" (Gault 2016, p. 153). 
Hence, for Job the main issue is how to be authentically human before and with God, that is, in such a way that God is summoned to be faithful to God's true self.The experience of undeserved suffering shattered the representations of God Job previously entertained and qualifies any new image of God he may produce.Job is confronted with the task of interacting with and relating to God without, beyond mediating representations.Job wants and needs to know God in and as Godself, in the hope that there is to God more (infinitely more) than what he and his friends can conjure.As Long explains, Because Job suffers so grievously and so irrationally, he is no longer permitted the luxury of an illusion.Every attempt at make-believe falls before the reality of empty places at his family table and the throbbing pain in his body.The only god Job can manufacture from his misery is a monster, and Job must decide whether to flee from this arbitrary and punitive god or to stand up boldly to see if there just might be another-a God not of his own making.(Long 1988, p. 6) Most of all, Job wants to cease having to do all the thinking and talking; he knows he is going nowhere and cannot find a way to cope with his suffering existence.As Dianne Bergant notes, "Job's greatest suffering is not his loss or physical affliction but his inability to understand why God has allowed this to happen to him" (Bergant 2018, p. 81).To bridge the gap separating him from himself, separating the God he loves from the God who makes him suffer undeservingly, lies beyond his ability.He craves for the truth revealed and conveyed in God's own terms.He will fall silent if and when God reveals Godself and himself (Job) to him."Teach me, and I will be silent" (6:24). 
Lament as Spiritual Formation

However, as the biblical text shows, Job's moral/spiritual readiness for personal encounter with the Lord is not innate, but rather acquired over time. Job must be prepared to meet with the Lord, and this preparation consists in grasping and assuming the truth that he is being made into an advocate and mediator who stands for humanity before God. Balentine suggests as much when he asks: "Could it be that God is challenging Job to be for himself and for others the môkîaḥ, the witness, the gō'ēl, that he has longed for but despaired of finding?" (Balentine 2002, p. 511). Job does indeed call for an arbitrator ("umpire," cf. 9:33), a heavenly advocate ("witness," cf. 16:19), and a redeemer ("vindicator," cf. 19:25) to assist him in bringing his case before, negotiating with and surviving the fateful encounter with God. He needs time to process, adapt to and embrace his own prophetic election and mission. The spiritual formation he undergoes profoundly transforms his understanding of the nature and significance of suffering. What begins as a challenge addressed to God "in order to restore his honor" (O'Rourke 2006, p. 68) morphs into a vicarious sacramental vocation. Lament is the medium, instrument and embodiment of Job's articulation of his personal vocation to serve the Lord in a unique way. Job must learn to speak to the Lord with the authority the Lord wishes him to assume. When Job is able to do so, the Lord responds to his invitation and the encounter is life changing. Josh Carney thus highlights the significance of lament as channel and incarnation of spiritual development throughout the Book of Job.
Job's conversation with his friends takes up thirty-six chapters.That is the longest conversation in the Bible.Among other things, the length of this discourse signals both that Job's pain was that real and that the conversation was that important.Lament characterizes the conversation every time Job opens his mouth.For thirty-six long chapters, Job pounds on the ground, or maybe on the door of heaven, or maybe even on God's chest.Finally, in chapter 38, we discover that God has been listening.(Carney 2014, p. 283) Giving voice and ascribing meaning to unjustifiable suffering is the task of a lifetime, for it entails complete reconstruction of self and God.Such wholesome reconstruction precisely is what faith forged in the crucible of undeserved suffering requires.Lament expands understanding of the human condition and God.T. C. Ham describes the effect of Job's complaint on the readers of the biblical text: In Job's lament, our horizon of suffering expands.The reader cannot callously witness the pain of others; the poetry draws us in to feel the pain of Job's loss.In Job's lament, our theology of God expands.The reader cannot comfortably accept the God of retribution theology (as Job's friends do); the poetry compels us to question God's justice.In Job's lament, our understanding of God's relationship to humanity expands.The reader cannot merely assent to simplistic views of God's sovereignty; the poetry begs us to imagine a relational God.(Ham 2016, p. 
243) Job's lament defines and sets the criteria for authentic faith received and lived out in the context of unfathomable pain.Correlating the teaching and witness of Job with the experience and reality of the Holocaust, David Blumenthal spells out the tenets of such a faith: To have faith in a post-holocaust, abuse-sensitive world, we must: (1) Acknowledge the awful truth of God's abusive behavior; (2) adopt a theology of protest and sustained suspicion; (3) develop the religious affections of distrust and unrelenting challenge; (4) engage the process of renewed spiritual healing with all that entails of confrontation, mourning, and empowerment; (5) resist the evil mightily, supporting resistance to abuse wherever it is found; (6) open ourselves to the good side of God, painful though that is; and (7) we must turn to address God, face to Face, presence to Presence.(Blumenthal 1993, p. 259). Job's protesting faith emerges out of a profound experience of revelation and communion occurring within suffering itself.The life shattering character of the suffering to which he is subjected would deny Job the possibility of nurturing genuine faith if it were not for the fact that God empowers him to sustain his trust and hope in the true God.Job knows he has been reduced to nothing and lost everything.His body and spirit diffuse searing pain.Whatever strength he has to hold on, defend himself against false accusations and call God to account comes from God, encountered in pain.Edith Barfoot bears witness to the mysterious empowering character of extreme undeserved suffering.In the heart of human powerlessness, divine empowerment can be found.Job survives and fights because at the deepest level of his being, he knows himself welcomed and accompanied by God. 
If in the course of human suffering the time comes for plunging into the depths of inexpressible pain, when the whole body is racked from head to foot with agony that nothing can alleviate, then the soul itself, the body's faithful ally, is numb because of the physical state which has overwhelmed it, so that it is incapable of conscious prayer for help, when human aid is of no avail, then, more than ever before, down below conscious understanding, there in the unplumbed depth he waits with open arms to receive the tortured being, while he infuses into the soul and mind of her the absolute trust which feeds the inmost consciousness with the knowledge that all is well; and with divine knowledge comes renewed strength to endure and lie still in the everlasting arms.(Barfoot 1977, pp. 7-8) Mystical Encounter Job is the living testimony of the human heart who knows and believes that it is possible to remain true to oneself and to God in and despite suffering.Job sobs and lives in the dark, but his heart remains lit with the fire of love and truth, his "prayer is pure."Only absolute vindication from the transcendent God justifies and empowers to go to the end of suffering, that is, to suffer without end.Job lives in and with the hope that when he will have suffered everything in faith, his body and spirit will become fully transparent to God, who will powerfully manifest Godself in and through his humanity.The suffering human nature and person are here pronounced to be privileged mediations for the divine being and power (Job's undeservingly suffering person reflects the extent of God's freedom, which transcends the law and the distinction of good from evil).To appear and bear true witness to God, the human person must undergo a radical transformation that will profoundly alter her humanity, opening and enlarging it to receive and channel the infinite light and love of God.Job foresees that the suffering he has borne prepares him for personal encounter with the Lord.Job 
thus travels in and from faith toward complete intimacy and communion with God.Starting with a theological stance not unlike that repeatedly articulated and defended by his "friends" throughout the dialogical section of the book (grounded in the principle of retribution), in the crucible of relentless and extreme undeserved suffering Job is led to assume and embrace a much more proactive and challenging perspective.While in the world and to himself and his fellow humans, the suffering Job is and feels broken, shattered and powerless, before and with God he is empowered to invite and withstand direct confrontation.Job is no passive bystander; his heart compels him to cry out and denounce his plight before both the human community and God.Unheard by his friends, his lament authoritatively summons God to appear and speak to him.The paradoxical conjunction of human powerlessness and "divine" authority in the person, words and actions of Job embodies the spiritual transformation and journey he has undergone and completed. 
At last, the Lord speaks to Job "out of the whirlwind," but not in a way that meets Job's "human" expectations.God does not provide the explanation Job feels he deserves to receive by virtue of his unjustifiable suffering.Throughout his exchanges with his so-called "friends," Job adamantly reasserted that his undeserved suffering placed God under the obligation of justifying Godself.The Lord responds with a dual challenge intended to test, respectively: (1) Job's knowledge and power over material creation (cf.38:2-3); (2) the legitimacy of Job's claim to bear moral judgment on God's governance of the created order (cf.40:1-2).The Lord God is never bound or comprehended within the scope of human questions, requests and prayers.The human creature rather is always put into question by the Lord.When the Lord appears and speaks to Job, Job must account for himself, for creatures always stand in need of justification.God, however, has no need for justifications, for everything finds justification in God. Job's answer to the divine challenge is profoundly revelatory of the human condition.Job is not impressed with the content of the Lord's speeches, for in faith, he already knew and believed what the Lord tells him about creation and his involvement with it.Eric Ortlund explains: "What new insight does Job get at the end of the book?Job already knew about Leviathan (3:8), and none of YHWH's questions in chapters 38-39 are especially difficult-even when they have to do with things Job does not understand, the questions themselves are not difficult to answer.So Job does not appear to have received new information about God" (Ortlund 2015, p. 261).Carol Newsom further argues that "the second divine speech [chapters 40-41] says nothing new.Since Job appears to be hard of hearing, God simply repeats the message, louder and more slowly" (Newsom 2003, p. 
248).What does make an impression on him, however, is the Lord's presence.Job did not so much wish to understand everything, to figure it all out, to make sense of God, evil and suffering once and for all.Job wished for God to be God, to reveal Godself in accordance with the legitimate demands and expectations of his righteous heart.Job needed to know that the Lord of creation identifies with the ground of his own inner life, which he experiences to be at once almighty and merciful.The God who lives and supports Job's faith from within Job's heart must be the same as the God who rules over all that is.Meeting with the Lord face to face-more exactly heart to heart or spirit to spirit-fully convinced him that his faith and integrity were well grounded, taking root in and being directed to the Lord's infinite power and love.Job needed to know and feel, in undeniable fashion, that he was loved and known by God (understanding love as basic life-giving relationship which sustains creaturely existence and human existence in particular).Job is humbled by the infinite honor granted in and by God's personal response.As Polk remarks, "Job is shamed by God's honoring him with his appearance.This appearance, more than anything, is what Job had been demanding; not a judicial verdict, but the honor of a response" (Polk 2011, p. 416). 
The humility he displays after God has spoken tells us that this need has been met.First, Job refrains from speaking, adopting a humble, receptive listening attitude.Reverent silence seems to him the most appropriate way to stand in the presence of and engage with God."I am of small account; what shall I answer you?I lay my hand on my mouth.I have spoken once, and I will not answer; twice, but will proceed no further" (40:3-5).When the Lord is done with delivering the second part of his speech, Job takes the further step of confessing his ignorance and professes humble reverence to the Lord, for now he sees."I have uttered what I did not understand, things too wonderful for me, which I did not know. . . .I had heard of you by the hearing of the ear, but now my eye sees you; therefore I despise myself, and repent in dust and ashes" (42:3, 5).Job came to know God as God is in Godself and himself before God, without having to understand everything like God.For human beings, to know God amounts to knowing that they are being known by God in personal fashion.Mystical union with the divine mystery forms the foundation and fulfillment of human existence.Job's confession does not involve repentance for wrongdoing or incurred guilt on Job's part.Job's encounter with God supposes that Job has not lost his faith and integrity, for his own life is the testimony he brings before and against God (see Newsom 2003, p. 185).Job's concern and worry is not human sinning, but rather divine (in)justice.God, not Job, is here put on trial by none other than Job.Job is moreover not attempting to set himself above or apart from God (he is not falling prey to the sin of pride).Job rather wishes to live always in the presence and under the gaze of the awe-filling (untamable) divine countenance. 
Priestly Vocation The epilogue to the Book of Job is most significant, for in this last section the Lord vindicates Job's faith and righteousness.The Lord's words to Eliphaz unequivocally assert: "My wrath is kindled against you and against your two friends; for you have not spoken of me what is right, as my servant Job has" (42:7).In contradistinction with his "friends," Job always spoke the truth.The Lord can handle harsh accusatory language and take responsibility for inflicting undeserved suffering when he is addressed in and with integrity.In the second part of the address to Job, under the guise of a display of power, the Lord even appears to acknowledge limitations to his rule, referring explicitly to the need to curb the pride of many creatures (humans, especially).The Lord demands from Job what the divine governance does not accomplish.Job 40:12-14 reads: "Look on all who are proud, and bring them low; tread down the wicked where they stand.Hide them all in the dust together; bind their faces in the world below.Then I will also acknowledge to you that your own right hand can give you victory." 
Job always remained faithful to and reverent toward God; he never tried to explain or justify God, as if God could not do so by Godself.Job never gave up on God, always trusting that God would hear and respond to his lament.Job's prayer always remained open hearted, fully assuming its creaturely finitude and inadequacy.The Lord responds to Job's plea by appointing him as legitimate intercessor for other human beings.Carney articulates the priestly vocation of the suffering righteous (Job being a paradigmatic example) as follows: When God sends Job's friends to him to pray for them, Job prays on behalf of them.Why?I believe that it is because his suffering has taken Job to a place within the heart of God that most of us don't go.It is not that those secret chambers have "keep out" signs.It is just that most of us would rather not walk down the narrow road to get there.Job becomes one whose job is to communicate something true about God's heart that we do not know. . . .This is the priestly work of the mystic and the transformation that suffering brings.(Carney 2014, p. 285) With respect to his "friends," then, Job is to act as a priest offering on their behalf prayers and sacrifices that the Lord will receive.Job is consecrated sacramental minister (mediator, intercessor) between the friends and the Lord (see Newsom 2003, p. 157)."My servant Job shall pray for you, for I will accept his prayer not to deal with you according to your folly; for you have not spoken of me what is right, as my servant Job has done" (42:8). 
The Lord is true God, the God of truth, to and with whom the human person can and must always speak the truth, even and especially in most difficult times. Job's undeserved suffering proved to be the motivation and medium for the expression of a powerful prayer to God. As Balentine observes, the Lord invites Job to act on behalf of his friends so as to enable Godself to express mercy (instead of judgment) toward them. Job's intercession saves the friends from deserved punishment and allows the Lord to recover and live out God's true identity.

The epilogue suggests that if Job does not pray the friends will be doomed. And if Job does not pray, the epilogue hints that God might exact a judgment that is incongruent with divine justice and mercy. In short, the friends are wrong, but they are worth praying for. God is angry, but God can be persuaded to check the anger, if someone like Job prays. God, too, it seems is worth praying for. . . . We may think of Job's restoration, then, as coinciding with, if not effecting, God's restoration. (Balentine 2002, pp. 503, 515)

Acting as "bidirectional intercessor" (Ticciati 2005, p.
122), Job's priestly ministry restores both the friends and God.Job's intercessory role strongly correlates with that of the prophets, who were tasked with the dual duty of conveying the judging power of God to the Israelite people and advocating for the Israelite people before God.So Yochanan Muffs argues: The prophet is the instrument of divine severity, the attribute of divine justice.But messengership is only part of the whole picture.The prophet has another function: He is also an independent advocate to the heavenly court who attempts to rescind the evil decree by means of the only instruments at his disposal, prayer and intercession.He is first the messenger of the divine court to the defendant, but his mission boomerangs back to the sender.Now, he is no longer the messenger of the court; he becomes the agent of the defendant, attempting to mitigate the severity of the decree.(Muffs 1992, p. 9) As Susannah Ticciati has shown, throughout the dialogical part of the narrative, "what Job longs for, and what Elihu attempts to actualize, is a hearing in which God will argue with Job on Job's own terms, becoming his equal, his opponent at law-a hearing in which he will truly engage with Job" (Ticciati 2005, p. 126).Job wishes for "God to remember his justice towards his servant" (Ticciati 2005, p. 128).Job's experience of undeserved suffering has unveiled the fact that "there is no standard of justice to which God is answerable" (Ticciati 2005, p. 
129) other than God. To hold God accountable, Job must bring God (the acting agent) before God (the norm and goal of all being and action). By learning to live faithfully in and with his undeserved suffering, Job allows for his own person to become the medium in and through which this encounter of God with Godself can take place. Job's witness challenges God to be and act as the worthy Lord of such a creature as the faithful and righteous human person. Job's integrity imposes itself on God insofar as it fulfills humanity's being made in the image and likeness of God. Job's undeserved suffering and corollary spiritual transformation reveal the unfathomable depth of human being and existence, which mysteriously convey and contain God. In Ticciati's compelling words, "Job has plumbed the depths of his integrity and found God there at the heart of it. God is discovered to be, not just the arbitrary face of the law, but even more fundamentally, the face of Job's integrity" (Ticciati 2005, p. 155). Now defined by critical dialogue and personal engagement with God, Job's vocation undoubtedly sets him up for a most challenging existence. As Muffs aptly remarks, the prophetic vocation involves experiencing "the apperception of a divine force that overwhelms the prophet, takes him up by the forelock, and commands him to speak and to stand up. . . . Because the prophet has an intimate relationship with the Holy One, Blessed He Be, he is able to approach the cloud of the Divine Presence audaciously. . . . There is incredible bravery in prophetic prayer" (Muffs 1992, pp. 9-11). Muffs then evokes Abraham's petition on behalf of Sodom as a paradigmatic instance of prophetic intercession. "There is no better example of prayer and petition than that of Abraham in the case of Sodom, which distinguishes itself in its unbridled audacity against heaven: 'Shall the Judge of the world not do justice?'
(Gen 18:25).There is something elemental here.God's hands are tied until Abraham, a human being, makes a request, that is, until a prophet intercedes" (Muffs 1992, p. 11).Considering the fact that "the limit to God's total autonomy [involved and instantiated by prophetic intercession] is self-imposed" (Muffs 1992, p. 11), the prophetic vocation entails constantly putting one's life at risk.Insofar as they demand tremendous resilience in the face of undeserved suffering and daring boldness in advocating for oneself before God, Job's life and witness appear to be eminently prophetic.Newsom explains: "Job negotiates the dangerous terrain of alterity by establishing the common ground upon which the divine and the human can meet-the ground of justice. . . .What Job eventually does is to make use of his extensive conceptualization of divine justice to organize an aspect of experience where it did not traditionally function-the right of a person before God" (Newsom 2003, pp. 150, 153).Job acknowledges the inherently "tragic" character of human existence, which takes the form of a never fully appropriated gift received in and from personal encounter and engagement with God's "wholly irreducible Otherness" (Newsom 2003, pp. 252, 256). 
Transformed in the crucible of extreme and relentless undeserved suffering, Job's being and existence bridge the gap existing between humans and their true selves, the God of justice and the God of mercy, humans and God. Job's restoration does not and cannot bring him back to his initial condition and status. For those who tread the way of suffering, there is no way back. Suffering becomes part of the very fabric of the person's identity and life. The trauma of undeserved suffering never goes away; one must learn to bear with and ascribe meaning and purpose to it. Comparing Job's story to that of the first humans in Eden, Long thus compellingly argues: "Job's world has fractured and cannot be put back together again. Job has been forced out of his perfect and predictable paradise. The gates of the Jobian version of Eden are blocked, and, wherever Job may go, the one way he cannot go is back" (Long 1988, p. 11). In this context, the gift of new children, social relations and possessions is not to be interpreted by Job (and the readers) as a return to normal, pre-suffering existence. The new children do not erase or replace the lost ones, but rather preserve their memory and remind Job of the intrinsic vulnerability of the human nature and condition. In Wiley's sharp words: "The consolation families are meant to replace the irreplaceable. Far from resolving the sense of loss, they serve as a reminder of loss and even the prospect of future losses, not in spite of but because of God's blessing" (Wiley 2009, p.
117). From Job's standpoint, then, what is essential to being human is not something one possesses or is, but the fact of being involved in authentic relationships with other human beings and, most importantly, with God. Human existence, even and especially when it assumes the prophetic vocation, never eludes or escapes the threatening presence of God. The gift and blessing of life and flourishing do not overcome and take away loss and grief, suffering and death, which always remain effective modalities for formative encounter with God.

Conclusions

Here lies Job's drama: Suffering estranges and isolates the suffering person. Job finds himself forced to bear himself all day, only himself, for he has nothing and no one else; even God is and feels absent. During his pre-suffering life and times, Job did not have to spend much time with himself, since he could care for and rejoice in so many loved ones and things he owned, used and did. The suffering Job has nothing to care about other than his suffering self, and that is unbearable. Forced to suffer himself, Job is led to understand that he does not understand his life and the God he believes in. Job feels the need to learn the final truth about himself and God, for without this fundamental knowledge he will never be able to bear his own life.
To learn about himself and God, Job must in turn learn to live with God's silence, that is, the fact that God does not speak directly to the human need for explanation and satisfaction of basic needs, in terms that would suit humans. God answers the human plight in God's own way. Job is thus confronted with a great challenge: that of presenting his plight to and before God in a way that allows God to offer a divine (and therefore definitive) answer. The only way Job has found of doing that is to ask questions of God. Questioning his suffering, questioning God from within his experience of suffering is for Job a primary modality of his faithful relationship with God. Job prays to God by questioning his suffering, by living out his suffering as an honest question and challenge to and for God.

The Lord answers Job's challenging lament by appearing in and speaking out of the "whirlwind." This image is quite telling. God is a storm that brings down everything the human person may believe, think, or build about and by herself. A whirlwind God responds and speaks to the stormy heart of Job. Job has lost his peace and quiet; he is looking for it, and he knows he cannot find it in and by himself. Before and in order to find new peace and quiet in suffering, he must go through another storm: God. Peace will not be found outside of the storm, but within it, in its eye. Job must enter infinite turmoil to find peace and quiet in there, and then follow its movement (for storms are dynamic systems). So what God does is not to provide appeasing answers to Job, but to question and challenge him. Job is forced to acknowledge God's absolutely mysterious character, the unfathomability of God's creation and, in the end, his own. Job comes to see and perceive everything in a new way, and the Lord entrusts him with a sacred vocation. He will act as intercessor for his friends, by offering prayers and sacrifices acceptable to God on their behalf and, also, by making the true and living God accessible to them
in and through his own person. Job is, by the end of the narrative, consecrated to the priestly office. Job was led to find in and embrace "innocent suffering [as] a divine calling" (Balentine 2003, p. 357), opting for a vocation of vicarious representation and sacramental intercession. Suffering can give life and lead to encounter with God, if the human heart dares to believe to the end and assume that everything human beings are, have and enjoy is and comes from divine grace. "Fearing God for nothing" (1:9), the righteous (undeserving) sufferer summons God to be faithful to Godself, for in the "bitterness of her soul" (10:1) she has discovered both herself and God.
Practical Strategies for Stable Operation of HFF-QCM in Continuous Air Flow

Currently there are only a few fields of application using quartz crystal microbalances (QCM). Because of environmental conditions and insufficient resolution of the microbalance, chemical sensing of volatile organic compounds in an open system has so far not been possible. In this study we present strategies for using 195 MHz fundamental quartz resonators in a mobile sensor platform to detect airborne analytes. Commonly, devices with a resonant frequency of about 10 MHz are standard. By increasing the frequency to 195 MHz, the frequency shift increases by a factor of almost 400. Unfortunately, such quartz crystals pose some challenges in obtaining a reasonable signal-to-noise ratio. It was possible to reduce the frequency noise in a continuous air flow of 7.5 m/s to 0.4 Hz [i.e., σ(τ) = 2 × 10^-9] by elucidating the major source of noise. The air flow in the vicinity of the quartz was analyzed to reduce turbulence. Furthermore, we found a dependency between the acceleration sensitivity and mechanical stress induced by an internal thermal gradient. By reducing this gradient, we achieved a reduction of the sensitivity to acceleration by more than one decade. Hence, the resulting sensor is more robust to environmental conditions such as temperature, acceleration and air flow.

Introduction

When G. Sauerbrey developed the "Sauerbrey equation" [Equation (1)] in 1959, he was far from developing sensitive mobile sensors to detect airborne analytes [1]:

Δf = −2 f0² Δm / (A √(ρq µq))    (1)

This relation between the variation of the oscillating mass (Δm) and the shift of resonance frequency (Δf) of fundamental quartz crystal oscillators is also valid for thin films with a uniform mass distribution [2,3]. It depends on the resonance frequency f0, the active quartz crystal area A, the density ρq and the shear modulus µq of the quartz.
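As a rough numerical check on Equation (1), the sketch below (a minimal illustration, not taken from this paper; the material constants for AT-cut quartz are standard textbook values) evaluates the Sauerbrey shift and confirms that moving from a 10 MHz to a 195 MHz fundamental increases the shift for the same areal mass loading by (195/10)² ≈ 380, i.e., the "factor of almost 400" quoted above.

```python
import math

# Standard material constants for AT-cut quartz.
RHO_Q = 2648.0   # density, kg/m^3
MU_Q = 2.947e10  # shear modulus, Pa

def sauerbrey_shift(f0_hz, delta_m_kg, area_m2):
    """Sauerbrey frequency shift (Hz): delta_f = -2 f0^2 dm / (A sqrt(rho_q mu_q))."""
    return -2.0 * f0_hz**2 * delta_m_kg / (area_m2 * math.sqrt(RHO_Q * MU_Q))

# 1 ng of rigid, uniform mass loading spread over 1 cm^2:
dm, A = 1e-12, 1e-4
df_10 = sauerbrey_shift(10e6, dm, A)    # ~ -0.23 Hz on a 10 MHz crystal
df_195 = sauerbrey_shift(195e6, dm, A)  # ~ -86 Hz on a 195 MHz crystal

print(df_10, df_195, df_195 / df_10)    # ratio is (195/10)^2 = 380.25
```

The ratio of the shifts depends only on the frequencies, since A, Δm and the material constants cancel, which is why raising the fundamental is such an effective lever on sensitivity.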
Currently, there are a few fields of application using quartz crystal microbalances (QCM) [4], as well as highly dynamic research [5]. The typical setup of such QCM sensors follows the design principle introduced by King [6], using a chemical sensing layer on top of a QCM. Through various kinds of chemical/physical interaction, the target analytes from the surroundings are bound to the sensitive layer. The resulting mass change causes a variation of the oscillation frequency. Different from other approaches (e.g., [7-9]), we focus on the online detection of analytes in an airstream without using a preconcentrator. As real-time tracing of very dilute specimens is highly desirable, a high sensitivity (in terms of high readout per unit analyte) of the whole sensor is pivotal. This can be achieved by tuning both the chemical and the engineering parts. Our development and selection of appropriate affinity materials for the chemical interaction has been described elsewhere [10,11]. The general setup is depicted in Figure 1. Six differently coated QCMs and an atmosphere containing the analyte are brought into contact in a housing (typically made of aluminum). For continuous measurements, this atmosphere is delivered either by slight pressure to the chamber (outlet open to atmosphere, for calibration of the quartzes; "closed setup") or by suction using a pump at the outlet (for use as a mobile analyzer; "open system"). A typical frequency plot for an airborne analyte shows concentration-dependent frequency shifts, as depicted in Figure 2. In this paper, we report on the various engineering strategies that were followed to improve the sensor resolution, including signal processing and stream optimization.

Increasing the Sensitivity

There are discussions about the correct definition of "sensitivity" when it comes to quartz oscillators. The simplest is the absolute frequency shift in Hz/ppm analyte, assuming an almost linear relationship for sufficiently small concentration ranges.
For comparison of QCM systems operating at different frequencies, the relative frequency shift (Δf/f₀) may be more reasonable. However, as these devices exhibit significant noise, the signal-to-noise ratio should be regarded as well. Following Equation (1), the most practical way to increase the absolute frequency shift of a QCM device is the employment of a quartz crystal oscillator with a higher operational frequency. This can be achieved by higher fundamental frequencies, overtones of common quartz crystals, or completely different acoustic sensor techniques (i.e., SAW, FBAR). As discussed by Vig [12], several factors (aging, Q-factor, accuracy) influence high-frequency quartzes more strongly and diminish the better theoretical absolute frequency shift. Yet, there are today even 8 GHz thin-film oscillators [13] and several other reported systems in the range >100 MHz. In this report, we describe our approaches to build a QCM sensor based on commercially available high-frequency quartz oscillators. As shown in Figure 2, the high frequency of the quartz crystal yields a high sensitivity (in terms of an absolute frequency shift). The specific frequency shift derived from this is approximately 35 Hz/ppm TATP. Special attention was given to meeting the challenges from environmental influences caused by the use of the system in a mobile fashion.

High Frequency Fundamental Quartz

The oscillators for this study (195 MHz fundamental frequency) were acquired from KVG Quartz Crystal Technology GmbH, Neckarbischofsheim, Germany. Due to their "inverted mesa" geometry, these devices exhibit a reasonable mechanical stability (Figure 3). The quality factor (Q-factor) is a measure of the quality of the quartz oscillator and is defined as the ratio of stored energy to dissipated energy. It is inversely proportional to the random fractional frequency fluctuations, or short-term instabilities, of a quartz oscillator.
Therefore, the Q-factor is an important factor for the signal-to-noise ratio (SNR) of a measurement. According to Warner [14], the theoretical maximum attainable Q-factor for AT-cut crystals should be in the range of 1.6 × 10⁶ for a 10 MHz quartz oscillator, whereas the theoretical maximum Q-factor for a 195 MHz quartz oscillator is limited to 8.2 × 10⁴. Due to additional losses, this theoretical value is not reached in practice, especially when the quartz is not hermetically sealed, as in a QCM application. Commonly, the Q-factor varies from 10² to 4 × 10⁵ in air [14,15]. The 195 MHz HFF quartz crystals are specified with a Q-factor of 6 × 10³ to 2.5 × 10⁴ (cf. Figure 3). Under atmospheric conditions we measured a Q-factor in the range of 1 × 10⁴ to 2 × 10⁴. In order to evaluate the effect of coating on the Q-factor of the quartz, the film thickness of a series of quartzes was increased iteratively. Figure 4 depicts the effect of coating on the Q-factor of one quartz crystal [assuming a homogeneous distribution of the affinity material on the oscillator surface, the film thickness is proportional to the mass difference Δm upon coating, which in turn is proportional to the frequency shift by Equation (1); film thickness is therefore given in kHz throughout this report]. As anticipated, coating of the crystal reduces the Q-factor with increasing film thickness. In this work we used a coating of 50 kHz (corresponding to a deposited mass of 10 ng), which leads to a Q-factor of approximately 7–8 × 10³. Compared to standard sealed quartz crystals with a Q-factor in the range of 10³ to 10⁶, this is still an acceptable value [15]. As recommended by the IEEE subcommittee on frequency, we used the Allan deviation σ(τ) with gate time τ as the SNR measure in the time domain [16]. According to Vig et al. [12], a Q-factor of 8 × 10³ would result in a minimum σ(τ) of 1.25 × 10⁻¹⁰. However, typical QCM systems have minimum Allan deviations in the range of 10⁻⁶ to 10⁻⁸ [15].
This is due to limitations of the electronic set-up and environmental conditions such as temperature, acceleration of the handheld device, atmospheric pressure and air flow, which make the theoretical minimum hard to realize. Therefore, the main goal of this work was to reduce the influence of environmental conditions to achieve a high signal-to-noise ratio.

Oscillator and Counter Description

Six oscillator circuits are realized as Colpitts oscillators using the IC MAX2620. This IC shows low phase noise, and integrated buffers avert frequency pulling due to load-impedance changes. Counting the frequency of these six oscillators can be done with a simple "forward" counter. Due to the ±1 count error, the resulting relative frequency error is ±5 × 10⁻⁹ for a 195 MHz oscillator with a fixed gate time of 1 s. Another approach to count the frequency is a reciprocal counter. It enables real-time control of the gate time and can increase the resolution significantly. If the frequency of the reference clock f_ref is much higher than the signal f_meas to be counted, the ±1 counting error for the implemented reciprocal counter can be calculated as:

e(τ) = ±1 / (f_ref · τ)   (2)

To achieve f_meas << f_ref, the frequency of the crystal oscillators has to be converted into a low frequency. This is often done by analog mixing (e.g., [17]). Unfortunately, this requires a high design effort, resulting in larger devices compared to forward counters with a fixed gate time. Another approach is the use of digital mixing, as done by Shankar et al. [18] or Bruckenstein et al. [19]. They used D-flip-flops as digital mixers, built from discrete circuit elements or using a silicon-on-insulator process. We have implemented the digital mixer in an onboard FPGA. This enables a simpler implementation due to simulation and debugging of the programmable hardware. Digital ports of the FPGA are triggered by the rising edge of the sinusoidal analog oscillator signal. Therefore, the signal is completely processed inside the FPGA.
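The ±1 count errors discussed above can be checked with a small sketch. The reference clock value used for the reciprocal counter below is an illustrative assumption, and the error actually achieved with the digital mixer also depends on the mixed-down signal frequency:

```python
# Sketch: +/-1 count errors for the two counter types discussed in the text.
# The 200 MHz reference clock is an illustrative assumption, not a value
# stated in the paper.

def forward_counter_error(f_signal_hz, gate_s):
    """Relative error of a forward counter: +/-1 signal count per gate."""
    return 1.0 / (f_signal_hz * gate_s)

def reciprocal_counter_error(f_ref_hz, gate_s):
    """Relative error of a reciprocal counter: +/-1 reference count per gate."""
    return 1.0 / (f_ref_hz * gate_s)

# Forward counter on the 195 MHz oscillator with a 1 s gate:
e_fwd = forward_counter_error(195e6, 1.0)
print(f"{e_fwd:.1e}")   # matches the +/-5e-9 quoted in the text
```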
The mixer was formed by an XOR gate, which is the ideal digital representation of a Gilbert cell in the analog domain. The output of the subsequent filter has a delay of 90 clock cycles, which is insignificantly small for a counter with a gate time in the range of milliseconds (Figure 5). With the digital mixer implemented in the FPGA, the counting error [following Equation (2)] is reduced from e = ±5 × 10⁻⁹ for the forward counter to e(1 s) = ±1.25 × 10⁻¹⁰.

Signal-to-Noise Behavior without Airflow

Using the described measurement set-up, a handheld measurement platform with a size of just 150 × 55 × 50 mm was realized (Figure 1). In order to evaluate the Allan deviation for this setup, the reciprocal counter was set such that the given number of oscillations is complete after approx. 20 ms. By use of two parallel counters, it was ensured that every clock cycle was considered and the dead time was zero. Frequency values were recorded continuously over a time period of 2,500 s. For the Allan analysis it was assumed that every measurement represents 20 ms; the gate times indicated here therefore represent multiples of 20 ms. Thus, the Allan deviation can be calculated for different gate times under the same environmental conditions. However, averaging the measurements reduces the number of values available for the calculation of the Allan deviation. This is why the error increases with the calculated gate time, as can be seen in Figure 6. For simplicity, the error bars are not depicted in the following Allan plots; nevertheless, they show a similar error to that in Figure 6. The subsequent Allan plots show three quartz crystals: Quartz 1 is an uncoated crystal; Quartz 2 and Quartz 3 are coated with two different compounds (50 kHz "film thickness" each).
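The Allan analysis described here (20 ms base samples averaged into longer gate times) can be sketched as follows; this is a minimal non-overlapping estimator, not the authors' actual analysis code:

```python
# Sketch: non-overlapping Allan deviation from fractional-frequency samples,
# as used for the noise analysis in the text (base sample time 20 ms; longer
# gate times are emulated by averaging m adjacent samples).
import math

def allan_deviation(y, m=1):
    """Allan deviation of fractional-frequency samples y at gate time m*tau0."""
    n = len(y) // m
    # Average m adjacent samples to emulate a longer gate time.
    avg = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(avg[k + 1] - avg[k]) ** 2 for k in range(n - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# A constant frequency offset contributes no Allan deviation:
print(allan_deviation([1e-9] * 100))   # 0.0
```

As in the text, averaging to longer gate times leaves fewer difference terms, so the statistical uncertainty of σ(τ) grows with τ.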
Besides fabrication variations between individual quartzes, the differences between the shown results are attributed to the viscoelastic properties and the mechanical coupling of the different compounds to the quartz surface. As this report does not focus on the variation of quartz behavior upon different coatings, the three different quartzes were investigated simply to expand the sample range and to show that the discussed effects are of a more general nature.

Figure 6. Allan deviation and errors of Quartz 1 just after the startup of the device (90% confidence interval).

As expected, the white noise, which is mainly caused by the ±1 counting error, increases for shorter gate times. With increasing gate time, the random walk, caused chiefly by the warm-up behavior, becomes more significant. With runtime, the device reaches its thermal equilibrium, which is why the random walk is reduced. Figure 7 illustrates the difference between the Allan deviation measured just after the startup of the device and 8 h later. It shows that the Allan deviation of a quartz can vary greatly between measurements for gate times greater than 0.8 s. To obtain a runtime-independent signal-to-noise ratio, a gate time of 0.2 to 0.8 s seems to be the optimal choice for this device. With these gate times, an Allan deviation of 6 × 10⁻¹⁰ to 2 × 10⁻⁹ is achieved in this system (Figure 7). This is remarkably close to the theoretical limit of 1.25 × 10⁻¹⁰ (vide supra). It should be noted that the oscillators are not in a sealed housing and still yield this excellent SNR. These values were measured without airflow, but under atmospheric pressure. The measurement platform is therefore a good compromise between low noise, sensitivity (in terms of a frequency shift in Hz/ppm) and size. However, further design efforts can be made to reduce the counting error in order to reduce the white noise of the measurement.
Influence of Pressure and Turbulence

With these results in hand, the system was pushed further towards the "real world", i.e., into an air stream. The influence of atmospheric pressure on QCM frequency was analyzed in detail in different reports by Kokubun et al. [20]. Atmospheric pressure can produce a deformation of the crystal, which can change the capacitance and cause mechanical stress. However, in an online detector with a stream of air passing the quartzes, short-term pressure fluctuations may have the most relevant influence on signal stability. In our sensor system, the double-headed micro diaphragm pump NMP015 manufactured by KNF Neuberger (Freiburg, Germany), with a maximal air flow of 2,200 sccm (standard cubic centimeters per minute), was used. The array of quartz crystals is located in a measurement chamber with a 13 mm diameter. Figure 8 shows the resulting Allan deviation at different flow rates. For a flow rate of 0.08 m/s (650 sccm), the Allan deviation increases by more than one decade; for a flow rate of 0.13 m/s (1,000 sccm), to more than 10⁻⁷. Turbulence in the airflow might be a source of frequency noise. To assess the influence of this turbulence, the system was modeled and analyzed with SolidWorks® [21]. An accurate 3D model was developed in order to model the vorticity on the quartz surface. The boundary conditions of the simulation are a turbulent inlet at atmospheric pressure and an outlet with a constant volume flow of 750 sccm. Figure 9 displays the result of the numerical simulation. Impressively clearly, the vorticity has its maxima at the inlet and at the location of the quartzes. The turbulence seems to build up a turbulent layer on the wall of the measurement chamber. This layer promotes vortices at the quartz surface, which causes mechanical stress on the quartz surface and most probably leads to frequency noise.

Figure 9. Vorticity of the air flow in the measuring tube with the quartz crystals.
The inlet is defined with a constant atmospheric pressure of 1,024 mbar at 10% turbulence. The boundary condition of the outlet is a constant volume flow of 0.75 L/min.

Laminar Flow Element

Reduction of the turbulent layer on the wall was attempted with a laminar flow element (LFE), which removes turbulence at the inlet. The laminar flow element consists of a set of 1 mm cannulas cut by wire EDM (electrical discharge machining), as depicted in Figure 10. The simulation result for this set-up is shown in Figure 11. The turbulence is located in the area between the inlet and the LFE, but the air flow in the vicinity of the oscillators can be regarded as laminar. A detailed view of the quartz loci in both simulations (Figure 12) illustrates that the vorticity at the quartz surface is tremendously reduced. For the evaluation of this concept, a removable LFE was designed to probe its influence on the measurement. As depicted in Figure 13, the LFE considerably reduces the noise introduced by turbulence. At a flow rate of 1,000 sccm, the Allan deviation is reduced from approximately 3 × 10⁻⁷ to 3 × 10⁻⁹ for a gate time of 200 ms. Interestingly, at a flow rate of 650 sccm the Allan deviation is only halved, a much smaller improvement than at 1,000 sccm (Figure 14).

Figure 11. Vorticity of the air flow in the measurement tube with laminar flow element. Inlet and outlet are defined with the same boundary conditions as in Figure 9.

In particular, the Allan deviation of Quartz 1 with an air flow of 650 sccm (Figure 14, Quartz 1 LFE) indicates additional noise sources besides the 1/f noise, random walk and turbulence. For smaller flow rates, the pump cycles are reduced, which leads to an irregular flow and a non-optimal load of the pump motor. An additional volume can be used to smooth the uneven airflow. Figures 13 and 14 show the resulting Allan deviation with an additional tank of ≈800 mL.
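The flow speeds quoted for the 13 mm measurement chamber follow directly from the pump's volume flow; a quick unit-conversion sketch (our own check, matching the 0.08 and 0.13 m/s figures):

```python
# Sketch: converting the pump's volume flow (sccm) to a mean axial flow
# speed in the 13 mm measurement chamber. Our own sanity check, not code
# from the paper.
import math

def flow_speed_m_per_s(sccm, diameter_m):
    """Mean flow speed for a given standard volume flow and tube diameter."""
    area = math.pi * (diameter_m / 2) ** 2    # cross-section, m^2
    vol_flow = sccm * 1e-6 / 60.0             # sccm -> m^3/s
    return vol_flow / area

print(round(flow_speed_m_per_s(650, 0.013), 2))    # 0.08 m/s
print(round(flow_speed_m_per_s(1000, 0.013), 2))   # 0.13 m/s
```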
The minimum Allan deviation is reduced to 2–3 × 10⁻⁹ for both flow rates. The uneven air flow could thereby be identified as another source of noise. However, the noise could only be reduced slightly, which shows that turbulent airflow is a major noise source. Compared to other existing QCM devices, with Allan deviations in the range of 10⁻⁶ to 10⁻⁸, we measured an excellent minimum Allan deviation of σ(0.2 s) ≈ 2–3 × 10⁻⁹ with a constant air stream of 650–1,000 sccm.

Acceleration and Temperature Effects

The temperature behavior can be divided into static and dynamic temperature effects. The static temperature at the quartz crystal can dramatically affect the resonance frequency [22]. Nevertheless, this kind of temperature effect has only a small impact for QCM applications because no absolute frequency information is required.

Static Temperature Effects

When the resonator is powered, it takes some time before it reaches thermal equilibrium (Figure 15). The length of this warm-up period depends on the electrical circuit, the input power and the thermal properties of the resonator. For example, a 195 MHz quartz crystal with a low Q-factor needs more power to oscillate than another 195 MHz quartz with a higher Q-factor; the self-heating of the crystal will therefore be larger. This is one reason why different quartz oscillators can have different warm-up behavior. However, in the intended sensor application, the typical responses (cf. Figure 2) are on a much shorter time scale than this warming process and can easily be identified by mathematical filters. In a typical environment of a handheld sensor, temperature changes of more than 10 °C are possible. Figure 16 shows the frequency response to a changing environment: the sensor, with an active pump, was moved from room temperature (25 °C) into an oven at 50 °C.
As can be seen, the sensor system with LFE shows only a small change in frequency, with a maximum frequency slope of 1 Hz/s, whereas the system without LFE has a maximum slope of 3 Hz/s. The incoming heat is thus equalized to the sensor temperature by passing through the set of 1 mm cannulas. As a result, altering the absolute temperature has only a small influence on QCM measurements when using heat compensation such as the LFE. Like the warm-up behavior, this kind of frequency shift can easily be identified by mathematical filters.

Dependency of Dynamic Temperature and Acceleration Effects

Aside from the described direct influences of temperature, the sensitivity towards acceleration effects seems to be strongly temperature dependent as well. Acceleration effects on the frequency are a function of the direction and magnitude of the acceleration a⃗. The dependency between acceleration and frequency is typically linear up to an acceleration of 50 g [22] and is given by [23,24]:

Δf(a⃗) = f₀ (Γ⃗ · a⃗)   (4)

where Γ⃗ is the acceleration sensitivity vector. The typical range of the sensitivity to acceleration (g sensitivity) can cover several orders of magnitude, from 10⁻⁷/g for low-cost crystals to 10⁻¹⁰/g for precision stress-compensated (SC) crystals [22]. It is caused by many factors, such as the quartz design, angle of cut, mounting structure, orientation, package type, and others [22-24]. All these effects cause mechanical stress, which affects the resonance frequency of the oscillator. Physically, every displacement of particles results in an elongation u, which causes stress. For an infinitesimally small volume δV with density ρ, the forces on a simplified resonator are given by Newton's second law of motion (F_a) and the mechanical stress per unit area (F_T) [25]:

F_a,i = ρ δV (∂²u_i/∂t²),   F_T,i = (∂T_ij/∂x_j) δV   (5)

The local gradient of the mechanical stress, ∂T_ij/∂x_j, is defined by the force in direction i and the surface with normal j.
Newton's second law of motion can be rearranged to:

F_a,i / δV = ρ (∂²u_i/∂t²)   (6)

Equalizing the two forces results in the following equation for the movement:

ρ (∂²u_i/∂t²) = ∂T_ij/∂x_j   (7)

Consequently, changing the mechanical stress tensor T_ij will also affect the resonant behavior of the quartz and therefore change its frequency. In a static environment, this stress will be constant for a specific quartz oscillator. As Kosinski explains, the acceleration sensitivity causes a deformation of the crystal [26]. Zhou and Tiersten showed that by keeping the stress minimal and symmetric, the sensitivity to acceleration can be reduced considerably [27]. External forces such as gravity, as well as a changing temperature gradient, can change the mechanical stress, which results in a frequency shift. This is why a movement or rotation around an axis of the QCM sensor causes an unwanted frequency shift. In standard quartz applications the quartz oscillators are encapsulated and more robust, so their g sensitivity should be lower than that of the free-standing 195 MHz HFF quartzes. To test the g sensitivity, a special rotatable sensor module (without air flow inside) was rotated with a constant angular speed of 0.6 rpm around its x axis (Figure 17). Compared to gravity, the centrifugal acceleration is negligibly small due to the low angular speed. As displayed in Figure 18, the rotation results in a frequency shift of more than 35 Hz. The same behavior was observed for rotations around the other axes. Taking Equation (4) and a frequency shift of 35 Hz (as shown in Figure 18) into account, the g sensitivity in the direction of the x axis is 9.2 × 10⁻⁹/g. For many applications, this is a reasonable result. However, in this setup, an orientation-dependent instability of 35 Hz reduces the resolution of the QCM dramatically. First indications that temperature effects might be involved in orientation-based signal shifts were observed when repeating the same experiment during the warm-up period of the oscillators (Figure 19).
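The linear acceleration-sensitivity model discussed above (frequency shift proportional to the projection of the acceleration onto a sensitivity vector) can be illustrated with a small sketch; the sensitivity vector used here is a purely hypothetical example, not a measured value from the paper:

```python
# Sketch of the linear acceleration-sensitivity model, delta_f = f0 * (Gamma . a).
# The Gamma vector below is a hypothetical example, not a measured value.

def freq_shift_hz(f0_hz, gamma_per_g, accel_g):
    """Frequency shift for sensitivity vector Gamma (per g) and
    acceleration vector a (in units of g)."""
    return f0_hz * sum(g * a for g, a in zip(gamma_per_g, accel_g))

f0 = 195e6
gamma = (1e-8, 0.0, 0.0)   # sensitivity along x only (illustrative)

# 1 g along the sensitive axis shifts the frequency; a perpendicular
# acceleration does not:
print(freq_shift_hz(f0, gamma, (1.0, 0.0, 0.0)))   # ~1.95 Hz
print(freq_shift_hz(f0, gamma, (0.0, 1.0, 0.0)))   # 0.0
```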
With a warming oscillator, the amplitude of the frequency changes increases as well. The result of a tip-over experiment (a sudden 90° rotation around the x axis, no continuous rotation) can be seen in Figure 20. During the rotation, the thermal gradient on the quartz surface seems to change, which results in an overshoot of the expected frequency shift. The most important heat source in this setup is the quartz oscillator itself. This was demonstrated by raising the supply voltage stepwise and repeating the tip-over experiment (Figure 21). With increasing voltage (and increasing temperature strain in the quartz), the g sensitivity increases as well. All these results imply that temperature gradients inside the oscillator should be reduced in order to diminish the mechanical sensitivity. Therefore, a heat pipe from the oscillators to the housing, with heat-sink paste, was designed (Figure 22). Hence, the temperature around the quartz oscillators can be considered constant. In this way, the asymmetric stress introduced by the temperature gradient of the crystal is reduced, which leads to a reduction of the acceleration sensitivity (cf. [26-28]). With this knowledge, a QCM with a g sensitivity of less than 5 × 10⁻¹⁰/g could be achieved (Figure 23). The sensitivity towards orientation changes is thus reduced by more than one decade, simply by using a heat pipe at the oscillators, bringing the system into close proximity to reference-grade QCMs with an acceleration sensitivity of 10⁻¹⁰/g.

Figure 23. Frequency response of a 90° rotation with heat pipe at the oscillators.

Conclusions

Different design and layout optimizations for a continuously working QCM sensor in flowing air were described. Optimization of the electronic components yielded, for gate times in the range of 200 to 800 ms, an Allan deviation of 6 × 10⁻¹⁰ to 2 × 10⁻⁹.
In order to compensate for "real-world" influences on sensor stability, the introduction of a laminar flow element and a heat pipe reduced the frequency noise for a 195 MHz quartz to just 2–3 × 10⁻⁹ in a mobile, handheld device. This will dramatically enhance the sensitivity for the detection of airborne analytes, e.g., traces of explosives. The application in the chemical sensing of volatile organic compounds will be reported in due course.
Genetic variation supports a causal role for valproate in prevention of ischemic stroke

Valproate is a candidate for ischemic stroke prevention due to its anti-atherosclerotic effects in vivo. Although valproate use is associated with decreased ischemic stroke risk in observational studies, confounding by indication precludes causal conclusions. To overcome this limitation, we applied Mendelian randomization to determine whether genetic variants that influence seizure response among valproate users associate with ischemic stroke. We derived a genetic score for valproate response using genome-wide association data on seizure response after valproate intake from the Epilepsy Pharmacogenomics Consortium. We then tested this score among valproate users of the UK Biobank for association with incident and recurrent ischemic stroke using Cox proportional hazards models. Among 2,150 valproate users (mean age 56 years, 54% female), 82 ischemic strokes occurred over a mean 12-year follow-up. A higher valproate response genetic score was associated with higher serum valproate levels (+5.78 μg/ml per one SD, 95% CI [3.45, 8.11]). After adjusting for age and sex, a higher valproate response genetic score was associated with lower ischemic stroke risk (HR per one SD 0.73, [0.58, 0.91]), with a halving of absolute risk in the highest compared to the lowest score tertile (4.8% vs 2.5%, p-trend=0.027). Among 194 valproate users with prevalent stroke at baseline, a higher valproate response genetic score was associated with lower recurrent ischemic stroke risk (HR per one SD 0.53, [0.32, 0.86]), with reduced absolute risk in the highest compared to the lowest score tertile (3/51, 5.9% vs. 13/71, 18.3%, p-trend=0.026). The valproate response genetic score was not associated with ischemic stroke among the 427,997 valproate non-users (p=0.61), suggesting minimal pleiotropy.
In an independent cohort of 1,241 valproate users of the Mass General Brigham Biobank with 99 ischemic stroke events over 6.5 years of follow-up, we replicated our observed associations between the valproate response genetic score and ischemic stroke (HR per one SD 0.77, 95% CI [0.61, 0.97]). These results demonstrate that a genetically predicted favorable seizure response to valproate is associated with higher serum valproate levels and reduced ischemic stroke risk among valproate users, providing causal support for valproate effectiveness in ischemic stroke prevention. The strongest effect was found for recurrent ischemic stroke, suggesting potential dual-use benefits of valproate for post-stroke epilepsy. Clinical trials will be required in order to identify populations that may benefit most from valproate for stroke prevention.

Introduction

Valproate is a widely used antiepileptic drug that has been associated with decreased risk for ischemic stroke in observational studies. 1-4 Valproate is assumed to exert this preventive effect by inhibiting histone deacetylase 9 (HDAC9), 5,6 which in animal studies has been found to lead to a stabilizing and anti-inflammatory effect on atherosclerotic plaques. 7-9 Although genetic variants in the HDAC9 gene have been repeatedly and robustly associated with large-artery stroke in population-based and case-control genome-wide association studies (GWAS), 10,11 causal evidence supporting a role for valproate in stroke prevention through this mechanism in humans is still missing. One of the reasons for this gap is that observational studies are prone to biases and thus cannot deliver evidence for a causal drug effect. 12 While randomized clinical trials provide the needed evidence, they are often tailored to a specific indication and can be underpowered for secondary endpoints or uncommon side effects, 13 making efficient evaluation of drug repurposing challenging.
Germline genetic variation, constant throughout the lifespan and thus not prone to confounding, can be leveraged to assess whether a drug causally contributes to a specific outcome. If there are genetic variants that are known to influence drug response, the drug users in an observational study can be divided according to the predicted genetic response. Because the prescribing health care providers and the patients are not aware of these genetic variants at the time of prescription, the cohort of drug users can be considered randomized and blinded by genetics. If the genetic variants that predispose to better drug response are associated with the outcome of interest in users of the drug, the results support the hypothesis that the drug causally contributes to the outcome. Pleiotropic effects of the genetic variants that have an independent effect on the outcome can be ruled out if no associations are found among non-drug users. We have recently applied this form of in silico simulation of a randomized controlled trial, a special case of Mendelian randomization, to show that statins causally contribute to intracerebral hemorrhage. 14

[medRxiv preprint, doi: https://doi.org/10.1101/2023.02.14.23285856; this version posted May 30, 2023. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.]

Utilizing this framework, we investigated the causal contribution of valproate use to the risk of incident and recurrent ischemic stroke. To gain insight into clinically relevant effects on atherosclerosis, we additionally investigated the effect on myocardial infarction.
We leveraged data from a previously published GWAS of clinical response to three antiepileptic drugs (valproate, lamotrigine, and levetiracetam) 15 to construct and validate a genetic score for valproate response among valproate users in the UK Biobank (UKB) and to study its association with the selected outcomes (Figure 1A). To rule out the possibility that detected associations are driven by an antiepileptic drug class effect, we also used the same approach to study the effects of lamotrigine and levetiracetam on ischemic stroke. To assess the robustness of our findings, we replicated observed associations in an independent cohort of valproate users of the Mass General Brigham Biobank (MGBB).

Study population

The UKB is an ongoing, population-based prospective cohort study of over 500,000 individuals aged 37-73 years at baseline, recruited from 2006-2010 in 22 assessment centers across the UK. 16 A wide range of baseline data, including phenotyping assessments, biochemical assays, genome-wide genotyping, and primary care data (in a subset of the total cohort), together with ongoing follow-up outcome data, is available. Only individuals with available genetic data were included in the present study. Because we used genetic variants 15 and a linkage disequilibrium reference panel 17 that were both discovered only in individuals of European ancestry (see below), we further restricted our cohort to European ancestry participants. The UKB has institutional review board approval from the Northwest Multi-Center Research Ethics Committee (Manchester, UK). All participants provided written informed consent. We accessed the data following approval of an application by the UKB Ethics and Governance Council (Application No. 36993).
Mendelian randomization approach

Because the genetic variants used for the exposure (in our study, valproate, lamotrigine, and levetiracetam clinical response) are derived from a GWAS including only individuals exposed to those medications, whereas stroke and myocardial infarction outcome GWAS have been performed among drug users and non-users, we used an individual-level approach in the UK Biobank to test for drug-specific effects and to assess pleiotropic effects of our genetic instruments. We constructed a genetic score for response to each drug and tested each score for association with the outcomes of interest among individuals exposed or not exposed to each medication. Because of the random assortment of common alleles in a population, genetically predicted drug response is randomly allocated, and thus an association of the genetic response score with the outcome of interest in those exposed to the drug provides evidence of a causal drug effect. Further, pleiotropic effects of the genetic variants that inadvertently modify the risk for the chosen outcomes independent of the drug can be ruled out if there is no association among individuals not exposed to the drug. This approach has been described by us and others in previously published work. 14,18

Identification of antiepileptic drug users

We identified valproate, lamotrigine, and levetiracetam drug users via the verbal baseline interview and primary care prescription data, allowing us to capture drug prescriptions within a time period from 1978 to 2018. We have previously reported the details of our pipeline to extract medication data from UKB primary care data.
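The user-definition rule of this pipeline — intake reported at any verbal interview, or at least two primary-care prescriptions, with the index date taken as the earliest available date — can be sketched as follows; the data structures are illustrative, not the actual UK Biobank schema:

```python
# Sketch of the drug-user definition: a participant counts as a user given
# intake reported at a verbal interview, or >= 2 primary-care prescriptions;
# the index date is the earliest of first prescription and interview date.
# Illustrative data structures, not the UK Biobank schema.
from datetime import date
from typing import Optional

def classify_user(interview_reported: bool,
                  interview_date: Optional[date],
                  prescription_dates: list) -> Optional[date]:
    """Return the first-use date if the participant qualifies as a user, else None."""
    if not (interview_reported or len(prescription_dates) >= 2):
        return None
    candidates = sorted(prescription_dates)
    if interview_reported and interview_date is not None:
        candidates.append(interview_date)
    if not candidates:
        return None
    return min(candidates)

# One prescription only -> not a user:
print(classify_user(False, None, [date(2010, 1, 1)]))                      # None
# Two prescriptions -> user, indexed at the earliest one:
print(classify_user(False, None, [date(2012, 5, 1), date(2010, 1, 1)]))    # 2010-01-01
```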
14 First, all ever-approved formulations of valproate, lamotrigine, and levetiracetam were gathered using international nonproprietary names (INN), former and current trade names in the UK (via the National Health Service Dictionary of Medicines and Devices [DM+D] browser, https://services.nhsbsa.nhs.uk/dmd-browser/search), and their associated DM+D and British National Formulary (BNF) codes (Supplemental Table S1). Then, all primary care prescription data and the verbal interview data were searched for these formulations. Individuals were considered users of a drug if they reported intake of one of the formulation names containing the drug at any verbal interview (baseline or follow-up, UKB field 20003) or if they had two or more prescriptions of a formulation containing the drug in the primary care data (gp_scripts table). The first drug prescription date for each individual was defined either as the first prescription date from the primary care data or as the verbal interview date, whichever was earlier and available. To assess whether patients were on monotherapy or had other antiepileptic drugs prescribed, we extracted prescriptions for the most common antiepileptic drugs (Supplemental Table S1) between the first prescription of valproate and the end of follow-up.

Construction of the genetic scores for antiepileptic drug response

We used genome-wide association data from the Epilepsy Pharmacogenomics Consortium (EpiPGX) on seizure freedom after antiepileptic drug intake in European ancestry patients with generalized epilepsy.
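The inclusion logic and first-prescription-date rule described above can be sketched as follows. This is a minimal illustration with hypothetical record structures (lists of dates and a boolean flag), not the actual UKB extraction pipeline:

```python
from datetime import date

def first_prescription_date(prescription_dates, interview_dates, reported_at_interview):
    """Return the index date for a drug user, or None for non-users.

    A person counts as a user if they reported the drug at any verbal
    interview, or had >= 2 primary care prescriptions. The index date is
    the earliest available of the first prescription date and the earliest
    interview date at which intake was reported.
    """
    is_user = reported_at_interview or len(prescription_dates) >= 2
    if not is_user:
        return None
    candidates = list(prescription_dates)
    if reported_at_interview:
        candidates += list(interview_dates)
    return min(candidates) if candidates else None
```

In practice the same rule would be applied per drug and per participant over the gp_scripts table and UKB field 20003.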
15 In that study, participants were defined as treatment responders if they were seizure free under continuous treatment for at least one year, and as treatment non-responders if they retained 50% or more of their pretreatment seizure frequency under adequate dosing of the drug according to a specialist. 15 The cohort included patients on valproate (n=565), lamotrigine (n=387), and levetiracetam (n=209). 15 Association tests were performed based on responder vs. non-responder status for each of the drugs. 15 There was no participant overlap between the EpiPGX GWAS and the UKB. Association results were available for single nucleotide polymorphisms (SNPs) associated with drug response at p<0.05 (n=162,242 for valproate, n=162,666 for lamotrigine, and n=162,430 for levetiracetam). To construct the genetic scores to be used as instruments for valproate, lamotrigine, and levetiracetam response, we leveraged PRS-CS (polygenic prediction via Bayesian regression and continuous shrinkage priors), a novel unsupervised polygenic prediction method that uses a high-dimensional Bayesian regression framework to derive a genetic score from GWAS summary statistics without requiring an external validation cohort. 17 PRS-CS takes the linkage disequilibrium of genetic variants into account by using an external linkage disequilibrium reference panel and outperforms traditional clumping and thresholding approaches such as PRSice. 17,19 We used PRS-CS with default parameters to generate SNP weights for response to each drug, which yielded weights for 33,089, 33,300, and 32,736 SNPs for valproate, lamotrigine, and levetiracetam, respectively.
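Applying such SNP weights to an individual reduces, in essence, to a weighted sum of imputed allele dosages, which is then standardized so that effects can be reported per one standard deviation. A minimal sketch (the function names and data layout are illustrative assumptions, not the PRS-CS or PLINK API):

```python
def polygenic_score(dosages, weights):
    """Weighted sum of imputed allele dosages for one individual.

    dosages: SNP id -> expected effect-allele count in [0, 2]
    weights: SNP id -> per-allele weight (e.g. posterior effect from PRS-CS)
    SNPs missing from either mapping are skipped.
    """
    shared = dosages.keys() & weights.keys()
    return sum(dosages[snp] * weights[snp] for snp in shared)


def standardize(scores):
    """Rescale a list of scores to standard-deviation units, matching the
    per-one-SD effect estimates reported later in the text."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((x - mean) ** 2 for x in scores) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in scores]
```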
To test the robustness of the discovered associations, we performed sensitivity analyses with an alternative genetic instrument for valproate response that was derived using a clumping and thresholding approach. Following previously described approaches for drug response Mendelian randomization studies, 14,20 we selected the SNPs from the EpiPGX GWAS results that were associated with valproate response at p<5×10⁻⁵ and clumped them at r²<0.001 based on the 1000 Genomes European reference panel. 20 The alternative genetic score for valproate response consisted of the 20 SNPs that were retained after clumping of 139 SNPs. Finally, using the weights derived from PRS-CS and from clumping and thresholding, we calculated the individual genetic scores for the corresponding drug users in the UKB using imputed genotype data. For assessment of appropriate randomization, Kaplan-Meier curves, and calculation of absolute risk differences, individuals were divided into genetic score tertiles. The SNPs and weights for the genetic scores are provided in Supplemental Tables S2-S5.

Validation of the genetic score on serum valproate response to normalized valproate dosing

We aimed to test the effect of the derived genetic score for valproate response on valproate serum levels to confirm its validity for testing the hypothesis that valproate has an effect on ischemic stroke through serum level-dependent effects. Genetic variants that are associated with seizure response to valproate could be unrelated to the effect of valproate on ischemic stroke if they influence a pathway that is not related to drug metabolism but rather lies further downstream in its effect on seizure prevention. However, if the genetic variants predict valproate response through an effect on valproate serum levels below the threshold of impacting prescriber behavior, they are a proxy for genetically predicted drug exposure and thus can be used as an instrument for randomization in the test for an effect on ischemic
stroke. In this special case of Mendelian randomization, this constitutes an assertion of the relevance assumption for the genetic variants. Valproate serum levels were gathered from the primary care clinical data using the Read codes '44W4.' and 'XE25d'. Values that were 0 (indicating off-valproate situations) and those higher than 200 µg/ml (potentially erroneous or not in µg/ml) were discarded. For each serum level value, the valproate dose taken at the time of measurement was approximated. First, the duration in days between the prescriptions before and after the date of the serum level measurement was calculated. Then, the quantity of tablets was multiplied by the dose per tablet and divided by the duration of the prescription interval, yielding the average daily dose in mg per day. The association of valproate dose with valproate serum levels was tested in a linear regression model with valproate serum level as the dependent variable and average daily valproate dose and the genetic score as independent variables. The model was additionally adjusted for age at the time of serum level measurement and sex. Levels for lamotrigine or levetiracetam were not available in the UK Biobank.

Outcome ascertainment

UKB participants' records have been linked with inpatient hospital codes, primary care data, and the death registry for longitudinal follow-up. Outcome events were gathered from hospital admissions and death registry data using International Classification of Diseases (ICD) 10 codes that were aligned with the diagnostic algorithm in the UKB (https://biobank.ndph.ox.ac.uk/showcase/ukb/docs/alg_outcome_main.pdf).
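The dose approximation and the serum-level regression described above can be sketched as follows. These are pure-Python stand-ins under simplifying assumptions: the actual model was fitted in R and additionally adjusted for age and sex, which are omitted here.

```python
def average_daily_dose(quantity, dose_per_tablet_mg, interval_days):
    """Average daily dose in mg/day, as described above: tablet quantity
    times dose per tablet, divided by the prescription interval in days."""
    return quantity * dose_per_tablet_mg / interval_days


def ols_coefficients(rows, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    rows: list of predictor rows (e.g. [daily_dose, genetic_score]);
    an intercept column is prepended. Returns [intercept, b1, b2, ...].
    """
    X = [[1.0] + list(map(float, r)) for r in rows]
    n, p = len(X), len(X[0])
    # Build the augmented normal-equation matrix [X'X | X'y].
    A = []
    for j in range(p):
        row = [sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
        row.append(sum(X[i][j] * y[i] for i in range(n)))
        A.append(row)
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(p):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[j][p] / A[j][j] for j in range(p)]
```

For example, a 28-day interval covering 56 tablets of 500 mg corresponds to an average daily dose of 1000 mg/day.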
Incident events were defined as events occurring after baseline and after the first drug prescription date. Because only very limited information on ischemic vs. hemorrhagic stroke subtypes before baseline exists in the UKB, recurrent ischemic strokes were defined as ischemic strokes occurring in individuals with a history of any stroke at baseline, as defined in UKB field 42006. The ICD-10 codes used for ascertainment of the outcomes are supplied in Supplemental Table S6. Stroke outcomes in the UKB have been routinely used in genetic association studies, including the most recent GWAS of stroke risk. 11

Association of the genetic scores with outcomes

To explore the effects of valproate on the selected outcomes, cause-specific Cox proportional hazards models censored for death were used with the valproate-specific genetic scores as the independent variable, adjusted for age, sex, principal components (PC) 1-3, and genotyping assay, in the cohort of valproate users. Although ischemic stroke subtypes are not available in the UKB, we sought to investigate the pathophysiological mechanism of valproate's action on stroke prevention. To approximate cardioembolic stroke, ischemic stroke in the setting of atrial fibrillation was analyzed by adding an interaction term of the genetic score with prevalent atrial fibrillation in the model for ischemic stroke and by performing subgroup analyses in individuals with and without a diagnosis of atrial fibrillation before or within six months after ischemic stroke. Because of the small cohort size, the main analyses were performed in all valproate users regardless of potential cryptic relatedness, and sensitivity analyses were performed in a cohort restricted to unrelated individuals (KING kinship coefficient < 0.0884).
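The event definitions above can be expressed as a small classifier. The function and argument names are illustrative assumptions, not the actual UKB pipeline, and events on or before the index dates are simply labeled "prevalent" here:

```python
from datetime import date

def classify_stroke(event_date, baseline_date, first_rx_date, stroke_history_at_baseline):
    """Label an ischemic stroke record per the definitions above.

    'incident' events occur after baseline and after the first drug
    prescription; among individuals with any stroke history at baseline
    (UKB field 42006), such events are additionally 'recurrent'.
    """
    if event_date <= baseline_date or event_date <= first_rx_date:
        return "prevalent"
    return "recurrent" if stroke_history_at_baseline else "incident"
```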
A chi-square test for trend in proportions was used to assess significant differences in absolute stroke risk between genetic score tertiles. To rule out that observed associations between the genetic scores and the outcomes are caused by pleiotropic effects of the genetic score not related to valproate use, we tested the same associations among non-valproate users, thus assessing the independence and exclusion restriction assumptions of Mendelian randomization. To further rule out an antiepileptic drug class effect, all analyses were repeated with the genetic scores for lamotrigine and levetiracetam response among users of the respective drugs. To exclude drug interactions, we also performed a sensitivity analysis among the cohorts of patients on valproate monotherapy.

Replication analyses

We aimed to confirm our discoveries in the MGBB, an ongoing prospective clinical research cohort. Patients are recruited in person at MGH and BWH and online through an electronic patient gateway. The MGBB provides ongoing electronic health record data with ICD codes for outcomes, imaging, and, for a subset of individuals, genetic data. Recruitment has been ongoing since 1998; to date, more than 133,000 patients have been included in the Biobank and over 65,000 have been genotyped. Genotyping has been performed in batches on different arrays, with batches 1-9 performed by MGH and batches 10-13 by the Broad Institute. To minimize batch effects and variation across different genotyping arrays, many of the individuals genotyped in batches 1-9 have been regenotyped in batch 13.
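The chi-square test for trend in proportions used above to compare event risk across tertiles can be sketched as a Cochran-Armitage-style statistic (the same form as R's prop.trend.test with default scores 1, 2, 3). This is an illustrative reimplementation, not the code used in the study:

```python
def trend_chi_square(cases, totals, scores=None):
    """Chi-square statistic for trend in proportions across ordered groups.

    cases[i] / totals[i]: event count and group size in tertile i.
    Compare the result against a chi-square distribution with 1 df.
    """
    k = len(cases)
    if scores is None:
        scores = list(range(1, k + 1))
    N = sum(totals)
    pbar = sum(cases) / N  # pooled event proportion
    # Score-weighted deviation of observed from expected counts.
    T = sum(s * (c - n * pbar) for s, c, n in zip(scores, cases, totals))
    var = pbar * (1 - pbar) * (
        sum(n * s * s for n, s in zip(totals, scores))
        - sum(n * s for n, s in zip(totals, scores)) ** 2 / N
    )
    return T * T / var
```

A monotone increase in proportions across tertiles yields a large statistic, while equal proportions yield a statistic near zero.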
For the current analyses, we considered only individuals genotyped in batches 10-13, which we combined and imputed together on the Michigan Imputation Server using the Haplotype Reference Consortium v.1.1 reference panel. We used exactly the same PRS-CS approach to construct the genetic score in MGBB participants. We queried the MGBB database using the same criteria that we used in the UKB. We identified patients with available genetic data and two or more oral prescriptions of valproate. Most of the prescriptions contained instruction texts from which we could extract the daily dose by multiplying the medication dose by the prescribed number of times taken daily. Outcomes were identified using inpatient ICD-9 and ICD-10 codes (Supplemental Table S6) and were classified as prevalent or incident events according to their occurrence before or after the date of the first valproate prescription. We used the same statistical analysis approaches for the replication analyses in the MGBB as in the UKB. Linear regression models were used to assess the associations of the average valproate dose and the genetic score with valproate serum levels. For the association of the genetic score with outcomes, Cox proportional hazards models were constructed with the time to event defined as the number of days between the date of the first valproate prescription and the incident event date for patients in whom an outcome occurred, and as the number of days between the first valproate prescription and the last encounter for patients in whom no outcome occurred. We detected significant associations between the principal components and age and sex, most likely due to chance imbalance across genotyping batches; people
in later batches were older and included more women (and the larger later batches were better powered), and thus models adjusted for age, sex, and the principal components introduced collinearity into our model. Because we found no associations between the genetic score for valproate response and age or sex, indicating near-perfect randomization, we removed age and sex from our Cox models; the reported effect estimates are from models with the genetic score as the main predictor, adjusted for principal components 1-3, which still contain age and sex information.

Software and statistical methods used

ANOVA and chi-square tests were used for comparison of continuous and discrete baseline variables, respectively. SNP extraction and genetic score calculation were performed with PLINK and bcftools, and relationship inference was performed with KING. [21][22][23][24] Data extraction, curation, preparation, statistical analysis, and figure generation were done with RStudio on Mac OS X. 25

Baseline characteristics

We identified 2,150 valproate users after exclusion of non-European ancestry individuals and those with unavailable genetic data (Figure 1B, Table 1). The date range for the prescriptions was July 1987 to March 2018, and the mean time between first and last prescription was 6.5 ± 6.9 years. Valproate users had a significantly higher rate of vascular risk factors compared to valproate non-users, but no difference in the genetic score (Table 1). The genetic score for valproate response was normally distributed among valproate users (Figure 2A). When comparing valproate users stratified by genetic
score, no differences in baseline characteristics, cardiovascular risk factors, antiplatelet and statin use, or approximated valproate doses (among the 1,387 individuals with available prescription data) were found across genetic score tertiles, indicating appropriate randomization (Table 2). Most patients were on valproate monotherapy, and all tertiles had average valproate serum levels in the therapeutic range (Table 2).

Validation of the genetic score on serum valproate response to normalized valproate dosing

A total of 549 valproate serum level values were available for 202 valproate users after exclusion of 137 values that were zero and 41 values that were higher than 200 µg/ml (Figure 2B). The prescribed valproate dose was associated with valproate serum levels (Figure 2C). We also found a significant association of the genetic score with valproate serum levels, indicating higher average serum levels in participants with higher genetic scores (+5.78 µg/ml per one SD, 95% CI [3.45, 8.11], Figure 2C). These associations were replicated with the alternative genetic score (Supplemental Table S7). Serum levels for lamotrigine or levetiracetam were not available for validation of those genetic scores.

Association of the valproate genetic instrument with incident ischemic stroke

Among the 2,150 valproate users, 82 ischemic strokes occurred over a mean follow-up of 11.6 years. In Cox proportional hazards models, a higher genetic score for valproate response was associated with a lower risk of incident ischemic stroke (HR 0.73, 95% CI [0.58, 0.91] per one SD increase, Figure 3A). Individuals in the lowest tertile of the genetic score had an almost two-fold increased absolute risk of ischemic stroke compared to those in the highest score tertile (4.8% vs 2.5%, p-trend = 0.027, Figure 3B).
Sensitivity analyses confirmed the robustness of the findings among the 1,967 unrelated individuals (Supplemental Table S9). Restricting our cohort to the patients on valproate monotherapy (n=1,675), the associations were replicated but with wider confidence intervals due to lower power. In the cohort of valproate users, 67 individuals had a diagnosis of atrial fibrillation before or within six months after ischemic stroke, and 24 ischemic strokes occurred among them. No interaction between prevalent atrial fibrillation and the genetic score was found (p=0.40). No association between the genetic score and ischemic stroke was found in this subgroup (p=0.51); however, when removing the 67 individuals with atrial fibrillation from the cohort of valproate users, the association of the genetic score with incident ischemic stroke was more pronounced (HR 0.68, 95% CI [0.52, 0.89], Figure 3A), suggesting a better utility of valproate for prevention of ischemic stroke not related to atrial fibrillation.

Association of the genetic score with recurrent ischemic stroke

Among the 194 individuals with prevalent stroke at baseline, 22 recurrent ischemic strokes occurred over a mean follow-up of 11.2 years. In Cox proportional hazards models, a higher genetic score was associated with a decreased risk of recurrent ischemic stroke (HR 0.53, 95% CI [0.32, 0.86] per one SD increase). Although there were only few cases in total, individuals in the lowest tertile of the genetic score had a three-fold higher absolute risk of ischemic stroke compared to those in the highest tertile (13/71, 18.3% vs 3/51, 5.9%, p-trend=0.026, Figure 3C).

Associations of the genetic score with myocardial infarction

No association of the genetic score for valproate response with myocardial infarction (133 events, p=0.93) was found among valproate users.
Also, no associations were found among valproate non-users, providing reassurance that our genetic instruments were not affected by horizontal pleiotropy (p=0.18, Figure 3A).

Replication in the MGB Biobank

We identified 1,241 valproate users with available genetic data in the MGBB, among whom 99 ischemic stroke events and 126 myocardial infarction events occurred over a median follow-up of 6.7 years. In this independent cohort we found 839 patients with 6,353 valproate serum level measurements. The associations between prescribed valproate dose and the genetic score with valproate serum levels replicated with almost identical effect estimates (Supplemental Table S13). We also found a significant association between the genetic score for valproate response and ischemic stroke (HR 0.77, 95% CI [0.61, 0.97]), and no association between the genetic score and myocardial infarction (p=0.68).

Discussion

In this study, we used common genetic variants to stratify 2,150 valproate users in the UKB by their genetically predicted response to valproate and investigated their risk of incident ischemic stroke and myocardial infarction. We leveraged data from a GWAS of seizure control after valproate intake from a cohort of patients with epilepsy and applied it to medication prescription and intake data of 502,000 individuals from a population-based observational cohort study. We found that a higher genetically predicted seizure response to valproate was associated with higher valproate serum levels, indicating that the included genetic variants predispose to greater valproate exposure by affecting valproate metabolism.
In addition, this higher genetically predicted response to valproate was associated with a lower risk of ischemic stroke among valproate users only. There was no such association among individuals who were not taking valproate, and no association among lamotrigine and levetiracetam users, ruling out independent pleiotropic effects of the genetic variants or an antiepileptic drug class effect. The robustness of our results is supported by replication in the MGBB, an independent electronic health record database from a different continent that is representative of a clinical population. Our results support a causal effect of valproate on ischemic stroke risk and demonstrate the utility of leveraging genetic data in observational cohorts to model drug response in silico for drug repurposing. Valproate's anticonvulsant effects were discovered in 1963, 27 and it is today a commonly used drug for seizure prevention and mood stabilization. [28][29][30] Valproate is a nonspecific HDAC inhibitor, 6 and with the detection of HDAC9 as a genetic risk locus for large-artery stroke, 7,10,11 a role of valproate in stroke prevention has been postulated. Various studies have investigated the association of valproate with vascular outcomes in observational settings, with conflicting findings. Some have found a decreased risk of ischemic stroke 1,2 and myocardial infarction, 2,3,31,32 while others have found an increased risk of stroke. 33,34 A recent meta-analysis found an increased overall stroke risk in patients with epilepsy, but a decreased stroke risk in patients taking valproate compared to other antiepileptic drugs.
4 Our study confirmed the presumed effect of valproate on ischemic stroke, but failed to confirm previously hypothesized associations with myocardial infarction despite better statistical power, suggesting confounding by indication or attrition bias in previous findings. To test its clinical value for ischemic stroke prevention, valproate is currently under investigation in the Sodium Valproate to Prevent Stroke (SOLVE) trial for the prevention of atherosclerosis progression in patients with large-artery stroke in the UK (ISRCTN12685153). Our results show a decreased risk of stroke in valproate users. We were unable to investigate valproate's effect on specific stroke subtypes because this information is not available in the UKB, limiting our insight into a specific clinical mechanism. However, we found a stronger association in individuals without a diagnosis of atrial fibrillation, even though this subgroup contributed many events and thus statistical power, suggesting that valproate's overall ischemic stroke prevention effect is not mediated by preventing cardioembolism. The lack of association with myocardial infarction could suggest that valproate's stroke prevention mechanism acts through other known pathways beyond the slowing of atherosclerosis through HDAC inhibition, such as protection of the blood-brain barrier, 35 increased ischemic tolerance through neuroprotective effects as demonstrated in vivo for brain ischemia, spinal cord injury, and traumatic brain injury, [35][36][37] inhibition of platelet aggregation, 38 or an increase in tissue plasminogen activator. 39 In our analyses, we found the strongest effect of valproate on ischemic stroke among individuals with a history of prior stroke, despite a small cohort size. Existing data show that post-stroke epilepsy is
common, with an overall incidence of up to 7% 40 and a high recurrence rate of up to 12% within five years. 41 If future studies confirm our observation that valproate contributes most to the prevention of recurrent ischemic stroke among stroke survivors, future trials could evaluate its utility as a treatment for post-stroke epilepsy as a dual-use secondary prevention agent. Our study provides a compelling example of how genetic data can aid in prioritizing drug repurposing targets when observational studies are limited. The most likely reason for the conflicting evidence on the effect of valproate from observational studies is confounding by indication. As shown in our results, valproate users have a higher number of vascular risk factors compared to the general population (Table 1). Thus, an association analysis of valproate use in solely epidemiological models would yield an increased risk of vascular outcomes in our cohort, leading to a false conclusion from confounding bias. Our study shows that in these cases, Mendelian randomization is a powerful tool for overcoming this challenge, randomizing individuals to similar baseline characteristics (Table 2), thus providing an unbiased approach that is similar to an actual clinical trial. This intriguing concept is enabled by the increasing number of studies providing genetic markers for drug response. 42 Although valproate is a widely used and well-known drug, its adverse and teratogenic effects 43 reduce its attractiveness for drug repurposing in the large population of stroke survivors. However, our findings potentially apply to other HDAC inhibitors that have been tested in vitro and in vivo, 44 providing further justification for stroke prevention trials employing HDAC inhibitors.
Our research illustrates the potential utility and reliability of the under-utilized primary care prescription data within the UKB and demonstrates the capacity to generate robust and replicable findings. Our investigation yielded nearly identical estimates for the associations between the prescribed dosage of valproate and serum valproate levels within both the UKB and the MGBB, despite these data being sourced from distinct healthcare systems on two different continents and utilizing two disparate methodologies for daily intake assessment. Furthermore, we observed congruent effect estimates between the genetic score and valproate serum levels, irrespective of the absence of standardized measurements. These findings suggest that with sufficiently large cohorts, it is possible to discern true biological effects despite the diversity of electronic health records and healthcare systems, along with their inherent weaknesses and limitations. We acknowledge the limitations of our study. First, the SNPs used to construct the genetic score for valproate response were discovered only in individuals of European ancestry 15 and applied to a predominantly European UKB population, raising the question of whether our findings can be generalized to non-European populations. Although the leveraged genetic variants mark valproate response, they act as an instrument for randomization in our study, and thus it is likely that the findings also apply to individuals of non-European racial/ethnic background. Second, although we identified 2,150 valproate users, the absolute number of outcomes was low, restricting the power of our study.
Third, we were not able to investigate valproate's effect on stroke subtypes because detailed stroke phenotyping is unavailable in the UKB. Furthermore, it is possible that some of the ischemic strokes among valproate users might have been misclassified seizures with Todd's paralysis. However, for several reasons we believe it is unlikely that such misclassification has substantially biased our findings: i) to drive the observed associations, the majority of strokes would have to have been misclassified, which would raise serious concerns about UKB phenotyping and all its associated research findings; ii) we did not find an association of the genetic response scores for lamotrigine and levetiracetam with ischemic stroke; and iii) we replicated our findings in the MGBB cohort using inpatient stroke diagnoses. Fourth, since we gathered data on valproate prescriptions, and not on valproate use, we cannot be certain that our whole cohort was in fact using valproate, but we postulate that this might only have diluted our effect estimates towards the null. Finally, our study cannot answer the question of which patient population would benefit most from valproate use; however, with our limited power we found the largest effect for prevention of recurrent ischemic stroke. In conclusion, by using an innovative Mendelian randomization approach leveraging genetic data, our study supports a causal role for valproate in the prevention of ischemic stroke, with the largest effect observed for recurrent ischemic stroke.
Figure 1. Study overview. A: Concept of the in silico trial randomized by genetic variation. A genetic score consisting of genetic variants known to predispose to a higher likelihood of seizure freedom after valproate intake was associated with serum valproate levels and the incidence of ischemic stroke. B: Study flow. After exclusion of non-European individuals and those without genetic data, 2,150 valproate users and 427,997 valproate non-users were identified.
An evaluation of the IL 508 eight-channel blood-chemistry analyser The new IL 508 is an eight-channel discrete, selective analyser. The eight-channel configuration comprises the electrolytes sodium, potassium, chloride and total carbon dioxide, and the chemistry channels for measuring urea, creatinine, glucose and total protein. The instrument is modular in design, with the four electrolyte channels housed on one side of the central visual display unit (VDU) and sampler unit and the four chemistry modules on the other side. The dimensions of the instrument are: height 1.25 m, width 2 m, depth 0.5 m. The instrument weighs 341 kg. The system is intended to be used to analyse serum, heparinized plasma, urine or cerebrospinal fluid at a rate of 100 samples per hour. The actual rate of analysis is 112 samples/h if the standards used for calibration are taken into account. The central area of the instrument comprises the VDU, keyboard, thermal printer, cassette recorder and the sample platform. This platform holds six of the 10 sample racks, each of which has a capacity for 10 cups. These sample racks are magnetically coded for machine recognition. The VDU is an 8 in. diagonal screen which displays commands, results of samples and information related to a fault-finding program. The keyboard consists of four commands: ENTER, START, CLEAR and HALT, the 10 digits 0 to 9, a decimal-point key and an erase key. The keyboard is pressure-sensitive. As this is not an alphanumeric keyboard, there is no facility for the input of any patient identification apart from the laboratory accession number. Reagents The reagents used during the period of evaluation were supplied by Instrumentation Laboratories.
All reagents are delivered in 500 ml bottles which fit in the reagent trays on the instrument. Glucose, urea nitrogen and creatinine reagents need to be reconstituted before use. Details regarding the stability of all reagents are given in the manufacturer's instruction manual. Procedure Evaluation procedures were based on a recommended scheme [8] and the equipment was not modified in any way during the trial period. Patient samples and quality-control material covering a wide range of concentrations were employed for method comparison. Sample size The total sample requirement for all eight channels is 230 μl. This is a convenient volume for the 300 μl cups commonly in use in paediatric biochemistry laboratories. Smaller volumes can, however, be used for selective analyses. Total carry-over Sample interaction was measured by analysis of 10 alternating groups containing three specimens of elevated concentration and three of low concentration, i.e. specimens A1, A2 and A3 contained a high level and were followed by specimens B1, B2 and B3 containing a low level of concentration for all channels. Within-batch precision was measured at high, low and mid-range levels of concentration by analysis of 40 replicates. Samples which were stored deep-frozen were employed for assessment of between-batch precision. These samples were run with each batch of patient samples on 20 separate occasions. Sample interaction Carry-over [9] is low or undetectable on all channels of the IL 508. This is as expected in a discrete analyser. Accuracy of analysis Results for the five quality-control specimens gave good agreement between values determined on the IL 508 and target values; the exceptions were chloride, where the IL results were consistently higher, and total protein, where the values were lower by varying amounts. The total protein results are readily explained in that the quality-control material used was bovine and cannot be reliably assayed by a kinetic method.
This is mentioned by Instrumentation Laboratories in their instruction manual. An explanation for the higher chloride results may be that there is no specimen dialysis on the IL 508. In the analysis of patient samples (table 6), good correlation was found in all channels with the exception of total carbon dioxide, which is probably due to differences in time of analysis between methods and also differences in the comparison methods themselves. However, initial patient comparison runs showed poor correlation in the sodium, chloride and creatinine channels. Poor correlation in the sodium and chloride resulted from evaporation and, in the case of sodium, a faulty spin cup. Attempts to overcome evaporation were initially unsuccessful and an interim solution of loading no more than two racks at any time was adopted. The use of thin polycarbonate film spread over each of the sample racks also helped in avoiding factitious results due to specimen evaporation. The sample probe is robust enough to pierce the polycarbonate immediately prior to sample aspiration. As an alternative to the somewhat cumbersome task of applying polycarbonate film, container cups with a small central hole were subsequently used in conjunction with the tray-cover supplied with the instrument. The effects of evaporation were then successfully eliminated for periods of up to 45 minutes. Samples in unprotected micro-sample cups were earlier found to be subject to evaporation, which produced changes in plasma sodium concentration of about 2 mmol/l after only 20 min. Drift during routine machine use was tested by running 50 samples without any intermediate calibrations. Results showed no appreciable drift on any channel. The creatinine results were initially up to 40% lower than the SMA 6/60 values. An improved creatinine reagent, however, overcame this problem, resulting in the reported correlation (table 6). This new reagent also almost completely eliminated the interference of bilirubin.
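The carry-over and within-batch precision figures reported in this evaluation lend themselves to a compact calculation. The sketch below is an illustration, not the evaluation scheme's exact protocol: it assumes the conventional three-specimen carry-over estimate k = (B1 - B3)/(A3 - B3) x 100, and the readings are invented.

```python
from statistics import mean, stdev

def carryover_percent(high, low):
    """Carry-over estimate (%) from one alternating group of three
    high-concentration specimens (A1, A2, A3) followed by three
    low-concentration specimens (B1, B2, B3):
        k = (B1 - B3) / (A3 - B3) * 100
    B1 is the specimen contaminated by A3; B3 is assumed clean."""
    a1, a2, a3 = high
    b1, b2, b3 = low
    return (b1 - b3) / (a3 - b3) * 100.0

def cv_percent(replicates):
    """Within-batch precision expressed as a coefficient of
    variation (%), e.g. over 40 replicate analyses."""
    return stdev(replicates) / mean(replicates) * 100.0

# Invented example: average carry-over over alternating groups
groups = [((140.0, 140.1, 140.0), (100.4, 100.1, 100.0)),
          ((139.8, 140.0, 140.2), (100.2, 100.1, 100.0))]
k = mean(carryover_percent(h, l) for h, l in groups)
```

In a full evaluation the same calculation would be averaged over all 10 alternating groups and repeated per channel.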
The total protein analysis was examined with a series of albumin-to-globulin ratios, from 5:1 to 1:5, with no variation in the total protein result. It was observed, however, that bovine controls did not react as quickly as human serum, giving results which were on average 15% low. Precision runs were carried out using turbid and clear material, and it was noted that with turbid material the precision was markedly reduced on all channels. In the course of performing the patient comparison study, haemolysed, lipaemic and icteric samples were analysed, and no significant differences in results were observed between the methods. An electrode wash is routinely requested by the machine after a systems shut-down. All the dispensers are then primed with reagent before proceeding to calibrate the machine. Each week the glucose and urea dispensers are cleaned with a cleaning agent and distilled water; the thermal printer is also cleaned weekly. All the other dispensers are cleaned once a month. Instruction manual A comprehensive operator's manual covers all aspects of the instrument. The manual deals adequately with the installation procedure; it has a section covering operation, a section explaining the programming procedure, maintenance and trouble-shooting, and an appendix covering the chemistries. There is also a section on system diagnostics. This explains the diagnostic software which is used as an aid in the detection and isolation of faults occurring in the system. The software is capable of exercising all simple machine functions, but it cannot isolate a problem on its own. The operator must interpret diagnostic results and proceed in a step-by-step manner to isolate a failing mechanism or circuit. Record of machine performance There were very few mechanical problems with the machine during its trial period.
Most of the problems were very minor, such as syringe dispensers sticking--these were easily overcome by pressing the reset button. A problem that was not cured this way was a sticking probe, which resulted in the probe not picking up correct volumes. This in turn led to very low glucose and urea results. The problem was eventually traced using the system's diagnostics. Initially the sodium results were erratic, due to the linkage connecting the drive motor to the sodium spin cup being faulty. This resulted in an incomplete evacuation of the contents of the sodium cup. This fault was rectified by the manufacturer. The roller-covers for the chemistry and electrolyte modules have a tendency on opening and closing to come apart in sections, due to badly designed guide-channels. The IL 508 would be improved if it were fitted with hinged doors for access to the machine, similar to the doors on the sampler module. Some problems were encountered in the software--namely, the clock lost time and the electrode-wash signals stopped the machine. The latter problem was rectified by fitting a new board; the slow clock proved to be a programming error. Maintenance The IL 508 is very easy to maintain. Daily maintenance includes cleaning the sample probes and the cuvettes, and running an electrode wash. The instrument was not operated on a routine basis, but from previous experience a figure of 1.5 staff members would seem to be realistic. General Operation and maintenance of the system are simple and easily learnt by staff with experience of other laboratory equipment. The machine makes fairly economical use of reagents. The saline and buffer diluents have to be replaced twice daily, thus it would be beneficial to increase the size of the reagent bottles from 500 ml to 1 litre. All reagents are supplied in 500 ml bottles, with the exception of glucose and urea, which are two-component reagents: dry powder and 250 ml diluent. These reagents have a three-day life span but are normally used within this period.
It was found necessary to let the reconstituted reagents stand for 30 min before use. Aqueous calibrators are supplied in four separate packages (see table 8). For sodium, potassium, chloride and total carbon dioxide, an initial two-point calibration is undertaken, followed by subsequent two-point calibrations every 60 min (the timing can be altered). Single-point calibration (Autocal 1) is performed every 12 min. For urea, creatinine, glucose and total protein, single-point calibration is used; it is repeated every 24 min. During the evaluation it was found that the total protein calibrator was unsuitable: answers were, on average, 5 g/l low for human material. Monitrol has been substituted and gives satisfactory results. Monitrol is replaced fresh daily; the other calibrators are replaced every third day. The calibrators are protected from evaporation during use by small self-sealing cellophane diaphragms. During our experience with the instrument no blockage ever occurred. An efficient system of laundering the probes inside and out, using flush and vacuum, is applied between samples. However, short sampling may occur without being brought to the operator's notice. Small samples may be analysed for as many parameters as volume allows using the selective program. The 'run list review' program may be used to edit specimen order or selected analyses during the run. If required, the run may be interrupted by the use of a halt button. A separate sliding rack designed to hold one sample is provided, which will magnetically activate a stat sample, the analysis of which is made as soon as the current analysis is complete. The machine will then return to its programmed run. The sample racks are also magnetically coded and may be loaded in any order. Individual cups can only be identified by accession number together with rack number.
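A two-point calibration of the kind described above amounts to fitting a straight line through the two calibrator readings and mapping subsequent signals onto it. The sketch below is illustrative only; the signal and concentration values are invented, not taken from the IL 508 manual.

```python
def two_point_calibration(signal_lo, conc_lo, signal_hi, conc_hi):
    """Return a function mapping an instrument signal to a
    concentration, via the straight line through the two
    calibrator points (signal_lo, conc_lo) and (signal_hi, conc_hi)."""
    slope = (conc_hi - conc_lo) / (signal_hi - signal_lo)
    intercept = conc_lo - slope * signal_lo
    return lambda signal: slope * signal + intercept

# Hypothetical sodium channel: signals 0.10 and 0.90 for
# calibrators of 100 and 160 mmol/l
to_mmol_l = two_point_calibration(0.10, 100.0, 0.90, 160.0)
```

A single-point "Autocal" between full calibrations would correspond to re-estimating only the intercept (or slope) from one calibrator while holding the other parameter fixed.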
A series of flags to indicate drift, noise, out-of-range, out-of-calibration and/or out-of-control-range can be printed against any particular result. A further series of warnings is displayed on the VDU, for example: 'buffer diluent low', 'strip printer paper out'. On current software, 'strip printer paper out' is only a warning; the machine will continue to analyse without recording results. With the basic instrument as supplied, these results cannot be recovered. If, however, a peripheral interface board is fitted, a cassette drive may be used to store such results for subsequent printing. Most error messages will result in a machine halt. After an unacceptable calibration, one sample will already have been analysed before the halt and will display a 'C' for out-of-calibration against the appropriate channel. As initially supplied, the only permanent record of results was via the thermal printer. However, an interface board has recently been delivered, and this allows direct access to an external printer and/or computer. It has been a fairly simple exercise to program a Wang 2200 to capture the data and print the results horizontally in the order required for reporting to the wards. The integral cassette-recorder unit has only just been supplied by the manufacturer. Conclusions The IL 508 can be easily incorporated into a routine analytical laboratory. The instrument is capable of handling a medium to large work-load whilst retaining the ability to analyse emergency samples quickly, either during a run or from standby. The machine has been well designed, giving access to all major components via roll-up covers. Linkage via the interface board to a laboratory computer should render the instrument a very useful and reliable laboratory tool.
Spice: discovery of phenotype-determining component interplays Background A latent behavior of a biological cell is complex. Deriving the underlying simplicity, or the fundamental rules governing this behavior has been the Holy Grail of systems biology. Data-driven prediction of the system components and their component interplays that are responsible for the target system’s phenotype is a key and challenging step in this endeavor. Results The proposed approach, which we call System Phenotype-related Interplaying Components Enumerator (Spice), iteratively enumerates statistically significant system components that are hypothesized (1) to play an important role in defining the specificity of the target system’s phenotype(s); (2) to exhibit a functionally coherent behavior, namely, act in a coordinated manner to perform the phenotype-specific function; and (3) to improve the predictive skill of the system’s phenotype(s) when used collectively in the ensemble of predictive models. Spice can be applied to both instance-based data and network-based data. When validated, Spice effectively identified system components related to three target phenotypes: biohydrogen production, motility, and cancer. Manual results curation agreed with the known phenotype-related system components reported in literature. Additionally, using the identified system components as discriminatory features improved the prediction accuracy by 10% on the phenotype-classification task when compared to a number of state-of-the-art methods applied to eight benchmark microarray data sets. Conclusion We formulate a problem—enumeration of phenotype-determining system component interplays—and propose an effective methodology (Spice) to address this problem. Spice improved identification of cancer-related groups of genes from various microarray data sets and detected groups of genes associated with microbial biohydrogen production and motility, many of which were reported in literature. 
Spice also improved the predictive skill of the system's phenotype determination compared to individual classifiers and/or other ensemble methods, such as bagging, boosting, random forest, nearest shrunken centroid, and the random forest variable selection method. (http://www.biomedcentral.com/1752-0509/6/40) These "components" can then be used to design in silico system models (e.g., positive and negative feedbacks, information processing and signal transduction cascades) to better understand real system behavior. To somewhat simplify this intricate process, data-driven characterization of a complex system behavior often starts with defining a target set of the system's distinct phenotypes of interest, such as thermo-resistance, acid-tolerance, or hydrogen production, and enumerating only those key system components that could be responsible for or contributing to the given phenotype(s). For example, if the target phenotype is ethanol production by microbial cells via biomass degradation, then enumeration of phenotype-related system components would identify all the groups of proteins involved in degradation of cellulose to sugars, transport of these sugars through the membrane, and their fermentation to ethanol. Similarly, enumeration of all the cancer-related cellular components would identify all the genes that are likely related to the expression of the cancerous cellular phenotype. The difficulty in enumerating all the phenotype-related system components lies in dealing with the enormous number of system components (or features), which could easily reach thousands or even hundreds of thousands. Such an enormous feature space can easily lead to the problem coined by Bellman as "the curse of dimensionality" [2]. The problem becomes more complicated if one needs to select all those features that would provide clear differentiation between the true and the merely feasible associations with the target phenotype.
In addition, the hierarchical nature of most biological systems leads to "short- and long-range" interactions between the features, or system components. For example, hydrophobic residue pairs could enhance a propensity for other adjacent hydrophobic pairs ("short-range" feature correlation). On the other hand, highly specific residue interactions may be under selective pressure to fit into an overarching architectural motif (such as the helix-turn-helix motif), thus contributing to "long-range" feature dependencies. Moreover, it is often the case that a coordinated, not independent, action of several system components determines what phenotype(s) a given system will likely express. A system response represents a complex process, involving a series of (frequently induced) interacting events. Such non-linear cooperative or competing interactions between the system components often form hierarchical functional modules (e.g., communities) that act not only on different spatial and temporal scales but also in response to fluctuations induced by endogenous and exogenous factors. Hence, approaches that identify individual components that confer a given system phenotype are likely not optimized to detect groups of such interplays between system components. Instead, there is a need for methods that aim to enumerate all the groups of cross-talking system components that could be associated with the system's phenotypic state. We call this problem the enumeration of system phenotype-determining component interplays. To address this problem, we propose an iterative, classification-driven approach that comprehensively enumerates the set of feature subsets that discriminate between different system phenotypes (or classes). We define a system component (a protein or a group of proteins) as a feature in this paper.
Given a set of observations about system components (features) with the corresponding assignment of the system's phenotype (class), our method measures the importance of feature subsets in discriminating between system phenotypes. Despite the combinatorial complexity of the problem, our method almost exhaustively explores feature subsets based on information-theoretic selection and a dense enriched-subgraph enumeration process. Our method rests on the hypothesis that if a subset of system components discriminates between the system's functional states, then, when considered altogether, these components most likely form a cross-talking phenotype-determining feature subset. It also places the contribution of an entire feature subset at the core of the analysis, as opposed to approaches that first evaluate the importance of individual features and then filter those that are associated with a particular system phenotype. It further retains only those feature subsets that are statistically significant, and are thus assumed to be relevant to the target phenotype(s). Our method can be applied to both instance-based data, such as microarray patient sample data, and network-based data, such as gene networks. The major contributions of this work are as follows: 1. We propose an algorithm, named SPICE, to address the new problem of enumeration of system phenotype-determining component interplays. SPICE iteratively enumerates all the groups of statistically significant cross-talking system components, which, to the best of our knowledge, no existing methodologies are particularly designed for. 2. We evaluate our method on both instance-based data and network-based data to identify system components related to three target phenotypes: biohydrogen production, motility, and cancer. We show that the identified phenotype-related components are biologically relevant and consistent with the results in literature. 3.
Additionally, we apply our method to eight benchmark microarray data sets to show its effectiveness and robustness on the phenotype-classification task. Related work To the best of our knowledge, the proposed problem of enumerating statistically significant component interplays that are key contributors to the system's phenotype has not been addressed in literature. The problem resembles, yet with quite apparent distinctions, the problems of feature selection, phylogenetic profiling, network alignment, and frequent subgraph mining. At a higher level, these problems could be divided into two major categories, depending on whether pairwise relationships between system components are known. If they are defined, then the system can be modeled as a complex network, and multiple network alignment approaches [3,4] that look for subgraphs that co-occur across multiple network instances for the same system phenotype are putative candidates for finding the target component interplays. The key limitation of this strategy is that such approaches aim to identify the component groups that are present in all or most of a given set of network instances and would likely miss those that are only common to a subset of the instances. Likewise, they are not equipped with any means to suggest that these groups are specific to the target system phenotype and not common to multiple system phenotypes. While the former limitation is addressed by approaches based on frequent subgraph mining [5,6], similar comments would still hold for the latter limitation. In addition, the runtime for these approaches grows exponentially; even the most efficient ones, such as MULE [5], which enumerates maximal frequent edge sets, took almost 57 days for a set of 98 network instances (details available upon request). While efficient heuristics have been reported [7], they are tailored for specific network types (e.g., metabolic networks).
For the second category, the system is often represented by its set of components (i.e., features) that are defined over multiple instances (i.e., observations) for each of the finite set of the system's distinct phenotypes. In this case, univariate approaches, such as those that, for a given feature, look for a strong correlation between its profile and the system's phenotype profile across multiple instances, identify a set of putative candidates for component interplays. Different correlation measures, such as Pearson correlation, mutual information, Student's t-test, ANOVA, Wilcoxon rank sum, rank products, and other univariate filter feature selection techniques can provide different candidate sets that could be further assessed with set-theoretical approaches to provide either higher specificity (i.e., intersection of sets) or higher sensitivity (i.e., set union). A particular instance of such a strategy is phylogenetic profiling [8], where different organisms that exhibit various (but finite) phenotypes (e.g., aerobic vs. anaerobic growth) are considered as observations characterized by the presence or absence of particular genes (or components). The underlying hypothesis behind this approach is that candidate genes are more likely to be present in phenotype-expressing organisms than in phenotype-non-expressing organisms, due to an evolutionary pressure to conserve the phenotype-related genes [9]. While simple, fast, and effective [10] in finding individual components that are likely associated with the system's phenotype, such methods are quite limited in discovering the component interplays. Multivariate feature selection approaches could be considered the closest approximation to the proposed problem.
The multivariate feature selection approaches can be broadly divided into the following categories: (1) filter techniques (e.g., the fast correlation-based algorithm [11]); (2) wrapper techniques (e.g., the GA/KNN method, combining a Genetic Algorithm (GA) and the k-Nearest Neighbor (KNN) method [12]); and (3) embedded techniques (e.g., random forest [13]). In filter techniques, the relevance of features is evaluated according to some metric, and the features with the top k ranking are then selected for further analysis. Filter feature selection techniques are simple, fast, and effective, but they often ignore the correlations between different features. In biology, these correlations depict protein interactions and should not be ignored. Wrapper methods take the dependencies between the features into account, but suffer from the overfitting problem. Additionally, they are often computationally expensive. Embedded methods can be far less computationally expensive than wrapper methods, but these approaches are very specific to a given classification algorithm. Our work is also related to network-based identification methods. Network-based identification methods aim to incorporate pathway or gene network information (typically generated from expression datasets) to help identify functional modules or improve prediction. Pathway-based methods [14,15] try to detect network pathways by assuming that the genes inside a module are co-expressed. However, pathway-based methods ignore the detailed network topology, and a small perturbation is likely to affect many "modules" [16]. While integrating gene expression information into the identification of gene modules is biologically meaningful, gene-network-based methods are rarely satisfactory, because they either focus on small networks by using greedy subgraph search algorithms [17,18] or focus on detecting non-overlapping subnetworks [16,19,20].
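To make the contrast with univariate filters concrete, one can score an entire candidate feature subset by its joint discriminative power rather than ranking features one at a time. The toy sketch below uses leave-one-out accuracy of a nearest-centroid classifier restricted to the subset; this is an illustrative stand-in, not the information-theoretic criterion SPICE actually uses, and the data are invented.

```python
def subset_score(X, y, subset):
    """Leave-one-out accuracy of a nearest-centroid classifier
    restricted to the feature columns listed in `subset`.
    X: list of samples (lists of feature values); y: class labels."""
    n = len(X)
    correct = 0
    for i in range(n):
        # Class centroids computed without the held-out sample i
        centroids = {}
        for label in set(y):
            rows = [X[j] for j in range(n) if j != i and y[j] == label]
            centroids[label] = [sum(r[f] for r in rows) / len(rows)
                                for f in subset]
        # Predict the class whose centroid is nearest (squared distance)
        pred = min(centroids, key=lambda c: sum(
            (X[i][f] - m) ** 2 for f, m in zip(subset, centroids[c])))
        correct += (pred == y[i])
    return correct / n

# Invented toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.0, 5.0], [0.1, 6.0], [1.0, 5.0], [0.9, 6.0]]
y = [0, 0, 1, 1]
```

Note that the subset {0} scores perfectly while adding the noisy feature 1 degrades the score, which is exactly the kind of subset-level effect a univariate filter cannot see.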
Results and discussion The nature of the proposed methodology, System Phenotype-related Interplaying Components Enumerator (SPICE) (see Method section), suggests that the detected component interplays (Steps 1-4) (1) could play an important role in defining the specificity of the system's phenotype(s); (2) would likely exhibit stronger inter-component relationships within the same group than between groups and are functionally coherent, likely acting in a coordinated manner to perform the phenotype-specific function; and (3) collectively, could improve the predictive skill of the system's phenotypes (Step 5). Groups of enzymes associated with biohydrogen production Biological hydrogen is a promising renewable energy source [21], which can be generated by utilizing one of three metabolic processes: light fermentation, dark fermentation, or photosynthesis [22]. To date, a number of phylogenetically diverse microorganisms have been identified as hydrogen producing. Such organisms include photosynthetic bacteria, nitrogen-fixers, and heterotrophic microorganisms [23]. In order to generate hydrogen, these organisms may rely upon one or more metabolic routes. As such, the biohydrogen production phenotype provides an opportunity to evaluate the capabilities of SPICE to handle a relatively complex phenotype. Identification of phenotype-related components was based on the assumption that if a component (i.e., a group of enzymes in a metabolic process) is specific to biohydrogen production, then it is likely evolutionarily conserved across H2-producing organisms and absent in most H2-non-producing ones.
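That conservation assumption can be expressed as a simple contrast between presence frequencies in the two phenotype groups. The helper below is a minimal illustration of the idea, assuming binary presence/absence enzyme profiles; it is not the paper's actual scoring.

```python
def conservation_score(profile, phenotype):
    """Difference between the fraction of phenotype-expressing
    organisms carrying the enzyme and the fraction of
    non-expressing organisms carrying it.
    1.0 = perfectly conserved and phenotype-specific;
    0.0 = uninformative; negative = depleted in expressers."""
    expressing = [p for p, ph in zip(profile, phenotype) if ph]
    others = [p for p, ph in zip(profile, phenotype) if not ph]
    return (sum(1 for p in expressing if p > 0) / len(expressing)
            - sum(1 for p in others if p > 0) / len(others))
```

An enzyme present in every H2 producer and absent from every non-producer scores 1.0; an enzyme distributed at random across both groups scores near 0.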
Our first experiment includes data about 17 H2-producing and 11 H2-non-producing microorganisms (see Additional file 1) and compares SPICE's performance against two commonly used statistical methods, mutual information (MI) and Student's t-test, and one multivariate feature selection approach, SVM recursive feature elimination (SVM-RFE). Among the 17 H2-producing microorganisms, four utilize bio-photolysis, five utilize light fermentation, and eight utilize dark fermentation. 11 microorganisms are listed as non-hydrogen-producing because they are not associated with hydrogen production based on literature review, or because they lack hydrogenase [24], one of the key enzymes involved in hydrogen production. All microorganisms used in this experiment were verified as completely sequenced using the NCBI database. The input to SPICE is a matrix, with the enzyme EC numbers along the rows, the 28 organisms (hydrogen producing and non-producing) along the columns, and the entry in each cell (i, j) being the copy number for enzyme i in organism j. The last row of the matrix includes information about the organism's ability to express the hydrogen production phenotype. The mutual information method [25] assesses the correlation between the enzyme's phylogenetic profile and the organism's H2-production profile across multiple organisms. In addition, it reports a significance threshold by shuffling the enzyme profile vectors and calculating the mutual information with the organism's phenotype profile. Only those enzymes whose mutual information values lie above the confidence cutoff are reported. The Student's t-test is another statistical method to identify phenotype-related enzymes, where we utilize the enzyme phylogenetic profiles alone to measure the statistical bias of enzyme copy numbers in one phenotypic group of organisms vs. the other.
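The mutual-information baseline with its shuffled-profile significance cutoff can be sketched in a few lines of pure Python. This is a minimal illustration of the idea described above, assuming discrete (e.g., presence/absence) profiles; the profiles in the test are invented, and the cutoff construction is a generic permutation quantile rather than the cited method's exact recipe.

```python
import math
import random
from collections import Counter

def mutual_information(x, y):
    """Mutual information (bits) between two discrete profiles,
    e.g. an enzyme's phylogenetic profile x and the organisms'
    phenotype profile y, both of length n."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts c, px, py
        mi += (c / n) * math.log2(c * n / (px[a] * py[b]))
    return mi

def mi_cutoff(x, y, n_perm=2000, alpha=0.05, seed=0):
    """Significance cutoff: the (1 - alpha) quantile of MI values
    obtained by repeatedly shuffling the enzyme profile."""
    rng = random.Random(seed)
    xs = list(x)
    null = []
    for _ in range(n_perm):
        rng.shuffle(xs)
        null.append(mutual_information(xs, y))
    null.sort()
    return null[int((1 - alpha) * n_perm) - 1]
```

An enzyme would then be reported only if `mutual_information(profile, phenotype) > mi_cutoff(profile, phenotype)`.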
The test results are filtered so that only enzymes with a p-value less than 0.05 are considered significant. Guyon et al. [26] proposed the SVM-RFE algorithm to rank the features (enzymes) based on the weights of the decision hyperplane given by the SVM. The features with small ranking scores are removed. The top 240 enzymes (out of 1,229 enzymes) are considered significant. Figure 1 and Figure 2 show the pathways and key enzymes for hydrogen production from the fermentation of glucose to acetate (Figure 1) and butyrate (Figure 2) in Clostridium acetobutylicum. Within this process, glucose is broken down through a series of glycolytic enzymes to generate pyruvate. Pyruvate is then converted to acetyl-CoA through the action of pyruvate ferredoxin oxidoreductase. During this step, hydrogen gas is produced when pyruvate is oxidized, resulting in the formation of CO2 plus H2. Production of hydrogen via this route is mediated through two enzymes: pyruvate ferredoxin oxidoreductase and hydrogenase. Acetyl-CoA produced from pyruvate can then enter a number of pathways, including the acetate and butyrate formation pathways. While production of hydrogen occurs predominantly during formation of acetyl-CoA and not in the secondary pathway (e.g., conversion of acetyl-CoA to acetate), the acetate and butyrate fermentation pathways play an important role in the overall yield of hydrogen by microorganisms. In metabolic engineering studies, the goal is to generate the highest theoretical yield of hydrogen through alteration of metabolic routes or key enzymes related to hydrogen production. For enhanced hydrogen production, acetate is the desired end product because of its higher hydrogen yield compared to other by-products, such as butyrate [27,28].
Specific differences in conversion efficiencies can be observed by comparing the two chemical reactions below:

C6H12O6 + 2H2O → 2CH3COOH + 2CO2 + 4H2
C6H12O6 → CH3CH2CH2COOH + 2CO2 + 2H2

The first reaction shows that the maximum theoretical hydrogen yield is 4 mol H2 per mol of glucose when acetate is the end product [29,30], compared to a maximum theoretical yield of 2 mol H2 with butyrate as the end product [27,31,32]. During both acetate and butyrate formation, 2 mol of hydrogen are generated when pyruvate ferredoxin oxidoreductase reduces ferredoxin (Fd) and hydrogenase immediately oxidizes it (Figures 1 and 2). When acetate is the only end product, as depicted in Figure 1, additional hydrogen is produced when 2NAD+ is reduced to form 2NADH + 2H+. An illustration of the two reactions is shown in Figure 1 (acetate) and Figure 2 (butyrate). Due to the importance of acetate and butyrate production in hydrogen generation, we evaluated the ability of SPICE to identify these two pathways. Results show that SPICE identified all of the acetate pathway's constituent enzymes, including acetate kinase (E.C. 2.7.2.1), as significant. In contrast, the Student's t-test and the MI method did not find any of the enzymes, and SVM-RFE detected only acetate kinase. Additionally, all five enzymes active in the butyrate pathway [28] were found by the SPICE method; among these, only three were discovered by SVM-RFE, two were found by the Student's t-test, and none by the MI method. Within facultative anaerobes like Escherichia coli, hydrogen gas may be produced directly through the production of formate. In this pathway, pyruvate is converted to formate and acetyl-CoA by pyruvate formate lyase (E.C. 2.3.1.54) [33]. The formate hydrogen lyase complex, made up of formate dehydrogenase and ferredoxin hydrogenase, breaks down the formate into hydrogen gas and carbon dioxide [28]. In this study, pyruvate formate lyase was found by the SPICE method to be significant.
Table 1 shows that SPICE detected all the enzymes (see Additional file 2) specific to the three pathways in facultative anaerobes, such as Escherichia coli, while mutual information could not discover a single enzyme, Student's t-test could detect only two enzymes, and SVM-RFE could find four of the seven enzymes. Thus, SPICE outperformed, in terms of sensitivity, the existing state-of-the-art methods based on Student's t-test, MI, and SVM-RFE. The enzymes identified by SPICE are described in the context of their corresponding metabolic pathways.

COG modules corresponding to biohydrogen production
To expand our study beyond metabolic subsystems to include possible regulators, transporters, and others, in our next experiment we replace the enzymes in the matrix with clusters of orthologous groups (COGs) [34]. We obtain the COG-organism association information from the STRING database. The new COG-centric matrix for this experiment can be found in Additional file 3. The set of enumerated COG modules with a statistically significant p-value of 0.05 is provided in Additional file 4. SPICE was able to identify COG modules that are known to be associated with hydrogen production based on our literature review and prior knowledge. Next, we briefly summarize some of these modules.

COG modules related to nitrogenase
In addition to the metabolic pathways described above, other key enzymes are known to be associated with hydrogen production in a number of microorganisms [35-37]. Examples of such enzymes include the nitrogenase and hydrogenase enzyme complexes. Hydrogen-producing organisms capable of fixing nitrogen contain enzyme complexes termed nitrogenases. Within nitrogenase complexes, nitrogen gas is converted to ammonia, inadvertently resulting in the production of hydrogen gas as a byproduct [23,36]. Evaluation of the COG modules generated by SPICE indicated the presence of two modules, each containing an essential component of the nitrogenase enzyme complex.
In the first module, two COGs (COG2710 and COG0120) were identified. COG2710 is associated with expression of the molybdenum-iron protein (NifD) [23], and COG0120 is associated with the protein ribose-5-phosphate isomerase (RpiA). The NifD protein is one essential component of nitrogenase, serving as the binding site for substrates during nitrogen fixation [23,38]. RpiA plays a vital role in carbohydrate anabolism and catabolism through its participation in the Pentose Phosphate Pathway (PPP) and the Calvin Cycle [39]. In addition, studies of central metabolism indicate that RpiA is a protein highly conserved across many microorganisms [39]. However, in this study, RpiA was paired with NifD, suggesting that both proteins may be associated with nitrogen fixation, and hence with biological hydrogen production. In terms of hydrogen production, the ability to metabolize specific carbohydrates plays an indirect role in the overproduction of hydrogen. One example is C. butyricum: metabolic studies demonstrate the ability of this bacterium to digest a variety of carbohydrates and to produce hydrogen via their degradation [40]. Another role RpiA may play is the production of the NADPH required for fixing nitrogen [41]. In nitrogen fixers, the oxidative pentose phosphate cycle has been reported as active. During the oxidative PPP, ribulose-5-phosphate is converted to ribose-5-phosphate by Rpi. During this reaction, NADPH is generated, thus allowing for N assimilation, N fixation, and production of hydrogen. The second nitrogenase-related module identified by SPICE contains COG1348 (NifH) and COG3883 (uncharacterized). Similar to NifD, NifH is also considered an essential component of nitrogenase; it is responsible for assisting with the biosynthesis of co-factors for NifD [42]. COG3883 is uncharacterized.
While we cannot predict the role of the protein from this module, its presence suggests that it is associated with either nitrogen fixation or the hydrogen production phenotype.

COG modules corresponding to hydrogenase
Hydrogenase enzyme complexes are key enzymes involved in the uptake and production of biological hydrogen [35]. Analysis of hydrogenase enzymes has identified three different types, each associated with a number of accessory proteins necessary for activation [35,43]: the [NiFe]-hydrogenase, the [FeFe]-hydrogenase, and the non-metal-containing hydrogenase [35]. Due to the importance of hydrogenase in both hydrogen production and hydrogen uptake, several studies have examined the role of hydrogenase enzymes in a number of different hydrogen-producing organisms [44,45]. These studies have found many microorganisms, including Clostridium acetobutylicum, to carry both hydrogen uptake enzymes (e.g., [NiFe]-hydrogenase) and hydrogen evolving enzymes (e.g., [FeFe]-hydrogenase). In this study, SPICE predicted both hydrogen uptake and hydrogen evolving enzymes as related to the hydrogen production phenotype. The categorization of hydrogen uptake hydrogenases may be due to the absence of hydrogenase in some microorganisms present in our data set. In this study, SPICE identified one module containing a hydrogen evolving hydrogenase. Within this module, two COGs, COG4624 (iron-only hydrogenase) and COG3541 (predicted nucleotidyltransferase), were present. The protein ID for COG4624 was not identified in the literature review; however, [Fe]-hydrogenases are responsible for producing hydrogen [46]. Nucleotidyltransferases are proteins involved in a number of biological processes ranging from DNA repair to transcription [47]. Since these proteins are generally involved in DNA- and RNA-related processes, it is unclear why a predicted nucleotidyltransferase was paired with a hydrogenase.
To understand the interaction between these two proteins, experimental molecular analysis is necessary. Another COG module found by SPICE contains COG0068 and COG0025, which are associated with expression of two hydrogenase uptake proteins: the hydrogenase maturation factor (HypF) and the NhaP-type Na+/H+ and K+/H+ antiporter (NhaP). HypF has been found to be a carbamoyl phosphate converting enzyme (or an auxiliary protein) involved in the synthesis of active [NiFe]-hydrogenases in Escherichia coli and other bacteria [48]. NT01CX 0020, an orthologous group of COG0025, is associated with expression of the sodium/hydrogen exchanger protein (NHE3). NHE3 has been found to play an important role in hydrogen production by Acidaminococcus fermentans, Escherichia coli, and bacterial communities within a dark fermentation fluidised-bed bioreactor [49-51]. SPICE also identified three other hydrogenase maturation proteins: HypC, HypD, and HypE. The COGs corresponding to these proteins are COG0298 (HypC), COG0409 (HypD), and COG0309 (HypE). Understanding complexes such as uptake hydrogenase enzymes is important for deciphering the regulatory mechanisms and activity of these key enzymes. For example, in studies evaluating the accessory proteins present in [NiFe]-hydrogenase complexes, the HypCDEF proteins are described as regulators for the maturation of uptake hydrogenase through their participation in the development of the active center [35,52]; if one of the Hyp proteins is missing, the entire complex is inactivated. In H2-producing microorganisms such as Escherichia coli, hydrogenase maturation proteins act as regulators for the maturation of uptake hydrogenase in the development of the active center [35,36]. Regulation is conducted by inserting Fe, Ni, and the diatomic ligands of the HypA-F proteins into the hydrogenase center for activation and maturation [53].
To carry out this process, HypE and HypF are in charge of the synthesis and insertion of the Fe cyanide ligands into the hydrogenase's metal center, and HypC and HypD are responsible for the construction of the cyanide ligands [36,54]. In addition, SPICE identified two hydrogenase proteins associated with anaerobiosis [55]: COG0374 (HyaB) and COG0680 (HyaD). Unlike the Hyp proteins, which are accessory proteins involved in the assembly of the metallocenters, the Hya proteins are responsible for the maturation of hydrogenase-1 [46].
http://www.biomedcentral.com/1752-0509/6/40

Other COG modules related to biohydrogen
Other biohydrogen-production-related COGs shown under the hydrogenase category in the STRING database, such as COG0374, COG0375, COG3261, COG0680, and COG4624, are detected as parts of other modules by SPICE. As mentioned earlier, hydrogenase is one of the key proteins (or enzymes) involved in hydrogen production and uptake [24]. The complete list of all the identified putative biohydrogen-related COG modules is available in Additional file 4.

Motility-related COG modules
For a large-scale experiment, we set up another experiment on a different phenotype: motility. A total of 141 organisms, including 56 non-motile and 85 motile organisms, were chosen from Slonim et al. [8]. For a p-value threshold of 0.01, SPICE detected 96 modules. The input data and results can be found in Additional files 5 and 6, respectively. One of the motility-related COG modules contained COG1338, COG0265, COG1484, and COG3420. Among the four COGs, COG1338, whose function is associated with the expression of the flagellar biosynthetic protein FliP, has a high correlation with the flagellar assembly pathway [56]. The flagellar assembly pathway, which enables the movement of microorganisms, is well known to be important for bacterial motility [56,57]. Proteins associated with the other three COGs include an uncharacterized serine protease (YyxA) and two hypothetical proteins.
YyxA in the motile organism Bacillus amyloliquefaciens has a phylogenetic profile similar to those of chemotaxis-related proteins [58]. The chemotaxis pathway, which is also important for bacterial motility, determines how a microorganism moves in response to its environment [8]. The chemotaxis pathway and the flagellar assembly pathway function together to guide a bacterium's direction of movement [8]. The phylogenetic profiles of the other two hypothetical proteins (associated with COG1484 and COG3420) are shown to be correlated with the pattern of motility across many bacterial genomes [8]. Additionally, SPICE enumerated other COG modules that contained other known flagellar-related COGs, such as COG1516, COG1345, and COG1815, and other known chemotaxis-related COGs, such as COG0840, COG0643, and COG0835, supported by the literature [8,56,57]. Besides flagellar-related and chemotaxis-related COGs, type III secretion system-related COGs, such as COG1766, COG1684, COG1987, and COG1338, were also found in some of our enumerated modules. The type III secretion system is found to be highly correlated with bacterial motility because some of its proteins are very similar in structure, function, and gene sequence to those of the flagellar assembly system [56,59].

Cancer-related genes
Identifying all the genes that could discriminate tumor cells from normal cells in microarray gene expression data is non-trivial [60]. Again, the task is not to find a single "best"-discriminating gene set, but to enumerate as many cancer-related genes and groups of genes as possible, provided they are associated with the cancer expression phenotype; this task is becoming particularly important in the context of personalized medicine. Leukemia cancer data was selected to show the effectiveness of our method in detecting phenotype-related gene modules in biological networks. The Leukemia data can be downloaded from the Broad Institute Cancer Program Data site (http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi).
It contains 72 measurements for the expression of 7,129 genes, corresponding to samples taken from bone marrow and peripheral blood. Of these samples, 47 are classified as ALL (Acute Lymphoblastic Leukemia) and 25 as AML (Acute Myeloid Leukemia). The first 11 genes identified by SPICE were used as the seed set, and a total of 145 phenotype-associated gene functional modules (see Additional file 7) were generated by the DENSE algorithm in the Leukemia network; 5 of the 11 seed genes are filtered out by our method. Table 2 shows the first five models identified by our algorithm. Specifically, the gene KIAA0016 found by our model 1 is highly correlated with anti-cancer agents [61]. KIAA0016 encodes TOMM20, a mitochondrial import receptor [62]. TOMM20 has been shown to interact with a central anti-apoptotic Bcl-2 (B-cell lymphoma 2) gene [63], and the expression of Bcl-2 has been used as a prognostic marker for acute myeloid leukemia [64]. KIAA0035, the cellular nucleic acid binding protein, and KIAA0016 belong to one functional module in the Leukemia network. Our method also detected an overlapping functional module differing from model 1 by only one gene (KIAA0242). Zyxin, found by our model 3, plays a vital role in mitosis [65], and the LIM domain of Zyxin is known to interact with leukemogenic bHLH proteins, such as TAL1, TAL2, and LYL1 [66].

Predictive skill
Data
Eight publicly available multi-phenotype-genotype datasets are used in this study. Table 3 summarizes some characteristics of these datasets, their sources, and the best-to-date performance reported in the literature. For comparison purposes, the last column indicates SPICE's performance.

Evaluation methodology
For two-class datasets, 10-fold cross-validation is employed; Witten and Frank [76] show 10-fold cross-validation to be a reliable way to evaluate the performance of a classifier. In 10-fold cross-validation, the original data is partitioned into 10 different subsets.
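The 10-fold scheme described above can be sketched as follows; this is a minimal sketch, and the sample count is only an illustration using the 72-sample Leukemia data set mentioned earlier.

```python
# Sketch: k-fold cross-validation partitioning. The data is split into
# k disjoint subsets; each subset serves once as the test set while the
# remaining k-1 subsets form the training set.
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Return k disjoint index lists covering range(n_samples)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_splits(n_samples, k=10, seed=0):
    """Yield (train_indices, test_indices) pairs for each fold."""
    folds = k_fold_indices(n_samples, k, seed)
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test

# For 72 samples, each of the 10 folds holds 7 or 8 test samples.
sizes = [len(test) for _, test in cv_splits(72)]
print(sorted(sizes))  # → [7, 7, 7, 7, 7, 7, 7, 7, 8, 8]
```

For multi-class data, one would additionally stratify the folds so every class appears in each subset, as the 3-fold scheme above requires.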
Each of the 10 subsets is used once as the test set, with the remaining nine subsets used as the training set. For multi-class datasets, 3-fold cross-validation is used to ensure that each subset can contain samples from all classes. Bootstrapping validation, via the commonly used bootstrap estimators e0 and .632 [77], is also applied. In the e0 bootstrap, the training data consists of n instances drawn by re-sampling with replacement from the original data of the same size n, and the test data is the set difference between the original data and the training data. Thus, if the training data has j unique instances, the test data will be the other n-j instances of the original data. The error rate on the test data is treated as the e0 estimator. The .632 bootstrap also takes the training error into consideration and uses the linear combination 0.368 * e_r + 0.632 * e0 as the estimated error rate, where e_r is the training (resubstitution) error. For good error estimation, we use ≈ 200 iterations [77] and report the average error rate. Bagging [78], boosting [79], random forest [80], the nearest shrunken centroid method (PAM) [81], and random forest variable selection (varSelRF) [82] ensemble learning techniques are employed as benchmark methods. The ensemble size used for these methods is the same as the one used for SPICE. We utilize different skill metrics, including accuracy, sensitivity, specificity, precision, F1-measure, variance, the Heidke Skill Score (HSS) [83], the Peirce Skill Score (PSS) [83], and the Gerrity Skill Score (GSS) [83]. Accuracy is defined as the ratio of the number of correctly classified data points to the total number of data points in the test set. The HSS measures how well a forecast did relative to a randomly selected forecast. The PSS, also called the "true skill statistic," is another popular skill score, computed as the difference between the hit rate and the false alarm rate.
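The HSS and PSS definitions above can be sketched for the two-class case from a 2x2 contingency table; the formulas below are the standard forecast-verification forms and are not quoted from this paper.

```python
# Sketch: two categorical skill scores from a 2x2 contingency table,
# with a hits (TP), b false alarms (FP), c misses (FN), and
# d correct negatives (TN).
def peirce_skill_score(a, b, c, d):
    """PSS ("true skill statistic"): hit rate minus false alarm rate."""
    return a / (a + c) - b / (b + d)

def heidke_skill_score(a, b, c, d):
    """HSS: accuracy relative to a random reference forecast."""
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

# A perfect classification scores 1 on both; a no-skill one scores 0.
print(peirce_skill_score(25, 0, 0, 47))              # → 1.0
print(heidke_skill_score(25, 0, 0, 47))              # → 1.0
print(peirce_skill_score(10, 10, 10, 10))            # → 0.0
```

Both scores are 0 for a random forecast, which is what makes them preferable to raw accuracy on imbalanced phenotype data.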
The GSS, also known as the "threat" score or critical success index, is a particularly useful measure of skill for situations where occurrences of the event to be forecast are substantially less frequent than non-occurrences [83]. Figure 3 shows the cross-validation accuracy of SPICE compared to the bagging, boosting, random forest, PAM, and varSelRF ensemble methods. We report the results of bagging, boosting, random forest, PAM, and varSelRF using their default parameters. The CART decision tree is used as the base classifier for bagging, boosting, and SPICE. To be consistent, we use 11 iterations as the stopping criterion (or the maximum ensemble size) for all the methods. SPICE outperforms bagging, boosting, random forest, PAM, and varSelRF by up to 33%, 13%, 18%, 10%, and 24%, respectively. Table 4 summarizes SPICE's skill on two-class microarray data using accuracy and its variance, sensitivity, specificity, precision, and F1-measure; it also reports the average number of features per model. Table 5 summarizes SPICE's skill on multi-class microarray data using accuracy and its variance, HSS, PSS, and GSS.

Test of different weighting schemes
One factor that may influence the results of the SPICE method is the weights assigned to the different candidate classifiers in the ensemble for determining the phenotype. Here, we test the three different weighting schemes described in the Step 5: bringing component interplays altogether section: majority voting, training-accuracy-based voting, and internal cross-validation-based voting. The experimental results show that the choice of weighting scheme has no bearing on prediction accuracy for a majority of the microarray datasets, although the training-accuracy-based and internal cross-validation-based voting performed slightly better (3-5%) than the majority voting scheme on a few datasets, such as the B-cell lymphoma dataset.
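The difference between the weighting schemes above comes down to how each classifier's vote is weighted. The following sketch contrasts uniform (majority) voting with score-weighted voting; the vote labels and weights are illustrative stand-ins, not outputs of the paper's models.

```python
# Sketch: combining an ensemble's predicted labels into one label,
# either uniformly (majority voting) or weighted by a per-classifier
# score such as training accuracy or internal CV accuracy.
from collections import defaultdict

def weighted_vote(predictions, weights=None):
    """Return the label with the largest (weighted) vote total."""
    if weights is None:                        # majority voting
        weights = [1.0] * len(predictions)
    tally = defaultdict(float)
    for label, w in zip(predictions, weights):
        tally[label] += w
    return max(tally, key=tally.get)

votes = ["ALL", "AML", "ALL"]
print(weighted_vote(votes))                    # majority: 2 of 3 vote ALL
print(weighted_vote(votes, [0.6, 0.9, 0.2]))   # weighted: AML wins 0.9 vs 0.8
```

With uniform weights the first call returns "ALL"; weighting by per-classifier accuracy can flip the decision, which is the behavior the comparison above probes.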
However, all weighting schemes highly outperformed any single classifier in the ensemble.

Robustness assessment
To assess robustness, we applied bootstrapping using both the e0 and .632 bootstrap estimators with 200 bootstrapping trials. Bootstrapping is applied to all three categories of data sets: the Leukemia data is the original 2-class data without any preprocessing, the CNS data is the discretized data, and the Lymphoma 3-class data is multi-class data with logarithmic transformation and standardization. The results show that SPICE provides bootstrap error rates comparable with the cross-validation results. Figure 4 shows the ensembles built by SPICE on the Leukemia and Lymphoma 3-class data, using 11 or fewer classifier models (Figure 4(a)), with each model including 2-3 features (Figure 4(b)). The fact that the ensemble uses information from multiple diverse models and achieves good accuracy with only a few features per model is a good indicator for our classifier ensemble methodology.

Algorithm efficiency
Figure 5 shows the runtime of SPICE and the benchmark methods on the eight microarray datasets with 30 iterations as the stopping criterion. Our experiments were conducted on a PC with an Intel Core 2 Duo CPU (2.2 GHz) and 6 GB of RAM. All algorithms were implemented in the Matlab programming language. For the eight datasets we tested, SPICE is much faster than bagging and boosting. While SPICE is slower than random forest on some datasets, SPICE achieves better prediction accuracy on those datasets.

Generalization
SPICE can be considered a meta-learning ensemble algorithm [84] because it can employ an arbitrary base classifier. Table 7 shows its effectiveness compared to a single classifier, using different base classifiers, on the Colon cancer dataset with 10-fold cross-validation. SPICE improves the prediction accuracy of a single classifier by about 30%, 14%, and 7% for Naïve Bayes, CART decision tree, and linear SVM, respectively.
Thus, SPICE can be applied to improve base classifiers other than the decision tree, which makes SPICE more broadly useful.

Conclusion
In this paper, we addressed the important and challenging problem of enumerating statistically significant and application-relevant component interplays that are key contributors to a system's phenotype. We presented SPICE, an effective, iterative feature subset enumeration method that discriminates between different systems' phenotypic states on both instance-based and network-based data. SPICE successfully identified cancer-related genes from various microarray data sets and found enzymes or COGs associated with the biohydrogen production and motility phenotypes of microbial organisms. SPICE also improved the predictive skill of the system's phenotype determination by up to 10% relative to individual classifiers and/or other ensemble methods, such as bagging, boosting, random forest, nearest shrunken centroid, and the random forest variable selection method.

Method
The key steps underlying SPICE are shown in Figure 6. At a high level, SPICE first identifies a candidate component (feature) set (Step 1: identifying candidate component interplays section); it then scores the set's phenotype-specificity-determining skill (Step 2: scoring candidate component interplays section) along with a statistical significance assessment (Step 3: assessing statistical significance section). These three steps are repeated in an iterative fashion by "knocking out" the selected candidate component sets until the stopping criterion is met (Step 4: iterative "knock-out" of component interplays section). Finally, an ensemble of classifiers is formed to predict the system's phenotype(s) given the values of all its component-interplay groups (Step 5: bringing component interplays altogether section).
An additional step is added between Step 4 and Step 5 to ensure that the identified system components are more strongly linked to the phenotype through comparative analysis of biological networks (Detecting biologically relevant component interplays through biological networks section). Next, we explain each of these steps in more detail.

Step 1: identifying candidate component interplays
We hypothesize that if a component is key to defining the system's phenotype, then its value distributions will be separable between the observations from different phenotypes. If the separation is strong, then such a component alone is likely able to discriminate system phenotypes, and almost any method, such as an entropy-based one, would likely succeed in detecting it. However, with real data sets such a strong separation is less likely. Hence, one should strive to discover separation signals that, while weaker at the individual component level, can as a group discriminate between system phenotypes. Therefore, an effective analysis should not only include individual components with strong discriminatory signals, but also extend to groups of interplaying components out of a set of thousands of components. This creates a multiplicity of possible combinatorial interplays to search for and precludes brute-force enumeration. Thus, our goal is to provide a framework for automatic exploration of such combinatorial interplays that offers both computational efficiency and application domain relevance. To address this issue, we propose to employ the multilevel paradigm via a divide-and-conquer strategy. The multilevel paradigm is known for its effectiveness when solving very large-scale scientific problems.
In the context of linear systems of equations, for instance, algebraic multi-grid methods have been devised to solve linear systems by essentially resorting to divide-and-conquer strategies that utilize the relationship between the mesh and the eigen-functions of the operator. In the data analysis field, however, methods that take advantage of the multilevel paradigm are less explored; a few recent studies include [85], as well as the top-down divisive clustering and spectral graph partitioning techniques. Specifically, the intuition behind our approach stems from the well-known concept of modularity, introduced by Hartwell et al. [86], as a generic principle of complex systems' organization and function. These functionally associated modules often combine in a hierarchical manner into larger, less cohesive subsystems, revealing yet another essential design principle of system organization and function: hierarchical modularity. Thus, our method first identifies modules of system components with putatively stronger associations within the modules than between the modules. This process divides all system components into modules that likely function together to define which phenotypic state the system is in. The process then conquers each of these modules in order to refine the specificity of the inter-component relationships within the module. Figure 7 shows an illustration of this divide-and-conquer approach to multilevel dimension reduction. The sample artificial input set shown contains two substructures: points from a multivariate Gaussian distribution (grey) and three groups of colored points arranged into nested rings (top). (Note that the color of the points is only there to show how the data groups together before and after the partition followed by dimension reduction.)
The standard PCA result performed on the monolithic set is mediocre, i.e., distinguishing the four different groups is impossible using only linear PCA. After partitioning the set, the "appropriate" technique is applied to each partition (bottom): kernel PCA to the nested ring points (left partition) and linear PCA to the Gaussian cluster (right partition). As a result, not only is the size of the data reduced for each partition, but the four groups also become distinguishable using only the first principal component. Unlike the example in Figure 7, in the context of our problem we deploy a decision tree-based procedure to divide the feature set into non-overlapping partitions and apply the "appropriate" classification technique to each partition. The reason is that, due to the highly underdetermined nature of our problem, subsampling of the input data sample could lead to an unreliable inference methodology. Likewise, due to a possibly non-linear interplay between the system's features, it is more desirable to divide the system components into "blocks" with possibly stronger interconnects within the blocks and weaker interconnects between the blocks. This strategy is inspired by the modularity principle of complex systems. Thus, a higher-level supervised separation of the high-dimensional feature space into rectangular-shaped hyperspaces is achieved via information-theory-driven decision boundaries, with a subsequent refinement of the decision boundaries within the identified subspaces (see Step 2). We propose a decision tree-based methodology for our feature space partitioning. The features in a decision tree are considered as one feature subset, and each feature is a system component.
There are multiple reasons why we choose a decision tree-based methodology, including (a) the efficiency to process many features (unlike BBNs, which are exponential in the number of features), (b) its inherently multi-class nature, and (c) the ability to handle continuous and multi-variate types of features (unlike NNs, for which distance metrics are poorly defined for mixed data types), among others. We use the CART decision tree algorithm [87] to select a set of discriminatory features from the available feature space. Basically, CART builds a decision tree by choosing the locally best discriminatory feature at each split step based on the Gini index impurity function. To avoid overfitting, CART employs backward pruning to build smaller, more general decision trees. CART chooses features in a multivariate fashion, which allows the feature selection process to find a set of discriminatory features instead of considering one feature at a time. More importantly, in the context of underdetermined or unconstrained problems, CART's inherent feature pruning capability often leads to fewer components, or smaller modules. This is a desirable property for building a more robust classifier downstream of our analysis pipeline (Step 2 and Step 5). Also, the decision boundaries themselves could result in rules that are more interpretable and could provide additional insights to domain scientists on the magnitude of the feature attributes that affect a system's phenotype. The reason is that it is important to know not only what group of features is contributing to the system's phenotypic state but also to what extent changes in the feature values could alter that state. For example, if the expression of a particular gene rises above a certain threshold, this may cause a "knock-out" of a particular metabolic pathway. With decision trees, the full feature space gets partitioned into hyper-subspaces by decision rules of the form a_i ≤ f_i ≤ b_i.
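The Gini-based split selection that CART applies at each node can be sketched as follows; the feature values and labels are illustrative, and this is a single-node sketch rather than the full CART algorithm with pruning.

```python
# Sketch: choosing the best threshold on one feature by minimizing the
# weighted Gini impurity of the two children, yielding a decision rule
# of the form f <= threshold.
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Score every candidate threshold between sorted feature values."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (None, gini(labels))               # (threshold, impurity)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                          # no boundary between ties
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (thr, score)
    return best

# A feature that cleanly separates two phenotype classes:
thr, impurity = best_split([1, 2, 3, 9, 10, 11], [0, 0, 0, 1, 1, 1])
print(thr, impurity)  # → 6.0 0.0
```

A perfectly discriminating feature yields zero child impurity; CART repeats this search over all features at each node and picks the locally best one.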
Once these high-level factors contributing to the system's phenotype are learned, more complex (e.g., non-linear or conditional) relationships between the components in a group can be learned by more sophisticated classifiers, such as BBNs or kernel SVMs (see Step 2).

Step 2: scoring candidate component interplays
The candidate system components identified in Step 1 are next assessed in terms of their collective ability to contribute to the system's phenotypes. Basically, the goal is to define a scoring function that measures how well a group of components (features) discriminates between system phenotypic states. On the one hand, mutual information (MI) for an individual component could be used, with a proper generalization to a group of components. However, robust probability estimation, an essential step in defining MI, requires a large sample size, which is often unavailable for underdetermined systems. Moreover, the generalized MI is biased toward the presence of a component in the group with high information content. Due to these limitations, we define a scoring function in terms of the classification accuracy provided by multivariate discriminant methods, such as SVMs, BBNs, neural networks, or decision trees. Specifically, we ask: if only the candidate component set were used to determine the system's phenotypic state, how much predictive skill would this set have? Since individual components within the candidate group could be related to each other in a complex manner, we first let a proper classifier (e.g., kernel SVM or BBN) learn these complex relationships from the entire group of features and choose the accuracy of the best-performing classifier as the scoring measure of the putative components' interplay (see Lines 6-7 in Algorithm 2 of Additional file 8).
Note that different candidate groups may require different classifiers; the best performing classifier model is chosen both for Step 3 and for Step 5. [For our experiments, we use training accuracy.] Step 3: assessing statistical significance Given a candidate feature set (Step 1) and its predictive skill score (Step 2), we next assess the statistical significance of this score, namely, how likely a similar skill score could be observed at random. Specifically, we want to use the confidence level for the classification accuracy to sift phenotype-specificity determining component groups. It is expected that the statistically significant, highly scored component groups are application-significant. For example, a group of candidate genes could be biologically significant for biohydrogen production or cancer phenotype expression (see Phenotype-specificity determining components sections). It is worth observing that, generally, sample instances within the same system phenotype tend to be more similar than those from other phenotypes. Hence, the separation of feature value distributions between the samples from different states will be relatively clearer, and thus the classification accuracy, as a measure of a feature set's discriminatory power, can be biased. This implies that standard statistical testing, such as shuffling the phenotype (class) labels, is not acceptable. Thus, to provide a robust assessment of statistical significance, we measure an empirical p-value of each candidate feature set using the Monte Carlo procedure described in [88]. Specifically, for each feature subset, we randomly sample N feature subsets (N = 1,000) from the entire feature set, of the same size as our candidate set, and compute the corresponding accuracies of the classifiers built from these feature sets.
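The resampling procedure can be sketched in a few lines (illustrative only; the null accuracies below are hypothetical stand-ins for the scores of classifiers built on random feature subsets):

```python
# Empirical p-value from a Monte Carlo null distribution, as in [88]:
# p = (R + 1) / (N + 1), where R counts random subsets whose classifier
# accuracy is at least as high as the candidate subset's accuracy.
import random

def empirical_p_value(target_score, random_scores):
    N = len(random_scores)
    R = sum(1 for s in random_scores if s >= target_score)
    return (R + 1) / (N + 1)

# Hypothetical scores: the candidate set scores 0.92, while same-size
# random subsets score near chance.
random.seed(1)
null_scores = [random.uniform(0.45, 0.65) for _ in range(1000)]
p = empirical_p_value(0.92, null_scores)
print(p)  # 1/1001 ≈ 0.001, well inside the 95% confidence level
```

The +1 terms guarantee the p-value is never exactly zero with a finite number of random samples.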
Then, we estimate an empirical p-value of the target feature subset as p = (R + 1)/(N + 1), where N is the total number of random samples (N ≈ 1,000) and R is the number of these samples that produce a test statistic greater than or equal to the value for the target feature subset. This corresponds to the percentile at which our target score falls within the accuracy distribution for the N samples. In our experiments, the selected p-value meets the 95% confidence level. Please find the detailed pseudo-code for the statistical significance assessment in Additional file 8. Step 4: iterative "knock-out" of component interplays The candidate component-interplay group identified in Steps 1-3 is probably not the only group of system components that is responsible for a system's behavioral phenotypic state. For example, such a group of enzymes could contribute to a direct conversion of a particular type of sugar to ethanol, but there could still be other groups of genes required for ethanol production, such as regulators of these enzymes' expression in the cell, transporters of different sugars from the environment into the cell, or stress response regulators that detect the toxin (i.e., ethanol) concentration level in the cell. In addition, if a subsystem is critical for a specific system function, then it often gets replicated (e.g., multiple gene copy numbers in the genome) in the complex system; this redundancy contributes to the system's robustness. Therefore, our task is not simply to identify a single "best" group but, ideally, to enumerate them all. The combinatorial nature of this task necessitates heuristic approaches. Our strategy is inspired by the way biologists often conduct their mutagenesis studies. Namely, they knock out a group of genes (e.g., via gene deletion) and observe the mutant system's response.
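This knock-out loop can be sketched as a small toy (illustrative, not Algorithm 2 itself; the selector below is a hypothetical stand-in for Steps 1-3 run on the remaining feature pool):

```python
# Iterative feature-set knock-out: select a significant subset, record it,
# remove it from the pool so it cannot be selected again, and repeat until
# the maximum number of iterations is reached or nothing more is selected.
def iterative_knockout(features, select_subset, max_iters=10):
    remaining = set(features)
    modules = []
    for _ in range(max_iters):
        subset = select_subset(remaining)  # stand-in for Steps 1-3
        if not subset:
            break
        modules.append(subset)
        remaining -= subset                # "knock out" the selected set
    return modules

# Toy selector: peel off the two lexicographically smallest features.
selector = lambda pool: set(sorted(pool)[:2]) if len(pool) >= 2 else set()
modules = iterative_knockout(["f1", "f2", "f3", "f4", "f5"], selector,
                             max_iters=3)
print([sorted(m) for m in modules])  # [['f1', 'f2'], ['f3', 'f4']]
```

Capping `max_iters` plays the role of the stopping criterion discussed below, since the score does not decrease monotonically across iterations.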
By analogy, our methodology knocks out the selected candidate feature sets and proceeds with Steps 1-3 on the mutant system in an iterative fashion until some stopping criterion is met (see Line 3 in Algorithm 2 of Additional file 8). Under this approach, each iteration produces a subset of features out of the current feature set (see Line 5 in Algorithm 2 of Additional file 8), then removes these features from the set so that they cannot be selected again (see Line 15 in Algorithm 2 of Additional file 8). There are several different criteria that could be used to decide when to stop the iterative process. Ideally, one would observe a monotonically decreasing scoring value with the number of iterations and would stop once the score falls below a certain threshold. However, no theoretical grounds could be provided for such a monotonic behavior of the scoring function under the scenario of iterative feature set knock-outs. In fact, we empirically observed a fluctuating behavior of the scoring function with the number of iterations. Therefore, due to the inherently high-dimensional data, we set a threshold on the maximum number of iterations as our stopping criterion. Lines 3-17 in Algorithm 2 of Additional file 8 summarize the aforementioned iterative knock-out procedure. Step 5: bringing component interplays altogether While the enumerated set of putative system component interplays is important in its own right (as illustrated in the Results and discussion section), here we combine them altogether by building an ensemble of classifier models from Step 3. Thus, unlike traditional classification methods that aim to find the single subset of features that offers the most optimal classifier performance, our goal is to enumerate suboptimal feature sets that could provide insights on what factors and inter-factor relationships could determine the specificity of the system's phenotype.
We then combine these subsystems through the framework of ensemble methods in order to construct a system-level predictor of the system's behavioral states. In the last step (Step 5 in Figure 6), we need to combine the predictions of all the classifiers that pass the statistical significance criterion (Step 3) to come up with the final prediction value. In order for the ensemble to make a prediction, each classifier is given a weighted vote, and the class with the most votes is the prediction of the ensemble (see Line 18 in Algorithm 2 of Additional file 8). We tested three possible weighting schemes: a simple majority voting scheme, in which every classifier is given equal weight; a training accuracy-based method, in which every classifier is weighted based on its training accuracy; and an internal cross-validation-based voting, in which each classifier is weighted by that model's cross-validation accuracy on the original training data. Two of the key characteristics for building a robust classifier ensemble are (a) the diversity among the classifier models in the ensemble [84] and (b) the reasonably high accuracy of the individual members in the ensemble. In our case, the former is ensured by our feature set knock-out strategy (Step 4), and the latter is guaranteed by a combination of the decision-tree based feature enumeration (Step 1), the scoring function (Step 2), and the statistical significance assessment (Step 3), which, in combination, also reduce possible redundancy among the models and thus reduce possible bias (e.g., due to a significantly large portion of highly similar models). By bringing the enumerated component interplays altogether (Step 5), a good ensemble of classifiers can be achieved (as illustrated in the Results and discussion section). Detecting biologically relevant component interplays through biological networks Thus far, we have presented how to detect component interplays from instance-based data.
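Returning briefly to the voting schemes of Step 5, all three share the same combination rule; a minimal sketch (illustrative only; the class labels and weights are hypothetical):

```python
# Weighted-vote combiner: each classifier votes for a class; votes are
# weighted by the chosen scheme. Equal weights recover simple majority
# voting; accuracy-based schemes just change the weight vector.
from collections import defaultdict

def ensemble_predict(predictions, weights=None):
    """predictions: list of class labels, one per classifier."""
    if weights is None:                    # simple majority voting
        weights = [1.0] * len(predictions)
    tally = defaultdict(float)
    for label, w in zip(predictions, weights):
        tally[label] += w
    return max(tally, key=tally.get)

# Three classifiers weighted by (hypothetical) training accuracies:
print(ensemble_predict(["H2+", "H2-", "H2+"], weights=[0.9, 0.7, 0.8]))  # H2+
print(ensemble_predict(["H2+", "H2-", "H2-"], weights=[0.9, 0.7, 0.8]))  # H2-
```

Swapping the weight vector for cross-validation accuracies gives the third scheme without changing the combination rule.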
It has been shown that the system components enumerated by SPICE often form functional modules or communities. However, an additional step could be added between Step 4 and Step 5 to ensure that the identified system components are more strongly linked to the phenotype through biological networks. The gene functional association networks used in this paper are obtained from the STRING database [89]. The nodes in the networks are genes, and a pair of nodes is connected with an edge if the corresponding genes are considered to be functionally associated by some evidence. The edge weights are assigned by the STRING database based on the evidence that supports the functional association [89]. A threshold above 700 is considered "high confidence" in the STRING database, so we only keep the edges with weights above 700. After the network construction, we employ our Dense and Enriched Subgraph Enumeration (DENSE) algorithm [90] to enumerate "dense and enriched" subgraphs in each network. Intuitively, DENSE works as follows: given an organismal protein (gene) functional association network and a set of proteins (genes) as the query, DENSE enumerates all the dense subgraphs that are enriched by the query proteins. Every subgraph generated by DENSE contains at least γ percent of nodes that are from the query protein set, and each node in the subgraph is adjacent to at least μ percent of the other nodes in the subgraph. In simple terms, the algorithm is able to extract the proteins that are functionally associated with the query proteins (i.e., form functional modules with them). In [90], a biologist's knowledge priors were incorporated into the query set. Here, we use the phenotype-determining components generated by SPICE as the query set for the DENSE algorithm. The default parameter values, μ = 75 and γ = 0.1, are used to find all highly connected (but not fully connected) subgraphs that contain at least one query node.
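The two DENSE membership conditions can be checked with a short sketch (this is not the DENSE implementation, only a test of its two criteria; as an assumption, μ is read here as the fraction 0.75 and γ as 0.1):

```python
# Check whether a candidate node set satisfies the two DENSE conditions:
# (1) at least gamma of its nodes come from the query set, and
# (2) each node is adjacent to at least mu of the other subgraph nodes.
from itertools import combinations

def is_dense_enriched(nodes, edges, query, mu=0.75, gamma=0.1):
    nodes = set(nodes)
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    enriched = len(nodes & set(query)) >= gamma * len(nodes)
    dense = all(len(adj[n]) >= mu * (len(nodes) - 1) for n in nodes)
    return enriched and dense

# Five genes, fully connected except for the (d, e) edge: "highly connected
# but not fully connected", and enriched by a single query gene.
nodes = list("abcde")
edges = [e for e in combinations(nodes, 2) if set(e) != {"d", "e"}]
print(is_dense_enriched(nodes, edges, query=["a"]))  # True
```

Enumerating all maximal node sets that pass both tests is the hard part that DENSE itself solves efficiently.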
[For more details on the DENSE algorithm and the software, please refer to [90].] The "dense enriched" subgraphs generated by DENSE are assumed to be functional modules, because we start with the functional association network and impose the μ parameter to generate the highly connected subgraphs. However, a further functional enrichment analysis is performed on the discovered modules using the GO TERM FINDER tool [91], and the results show that the discovered modules are indeed functionally coherent. [Since our work does not focus on the functional enrichment analysis, the experimental results are available upon request.] While DENSE is an effective and efficient algorithm to identify the functional modules in a biological network, it can only be applied to a single network at a time. However, we would like to identify, using both phenotype-expressing and non-expressing organisms, functional modules that are more biased towards the target phenotype. Thus, in this section, we propose an effective methodology to discover functional modules using DENSE but extending the procedure to utilize both phenotype-expressing and non-expressing organisms. Definition 1 (β-Similar Dense Subgraphs). Given two dense subgraphs generated from two different networks, we call the two subgraphs β-similar dense subgraphs if they share at least β percent of nodes corresponding to homologous genes. For a set of networks corresponding to phenotype-expressing organisms, we hypothesize that the conserved β-similar dense subgraphs (see Definition 1) across the group of networks are the phenotype-associated functional modules. After generating all "dense enriched" subgraphs from each biological network by DENSE, we first detect the β-similar dense subgraphs across two networks based on Definition 1, and then check if the β-similar dense subgraphs detected in the previous two networks are conserved in the third network.
This procedure is continued until all networks in the group are examined. Our algorithm may miss some of the phenotype-related modules if the stringent value of β = 100 is used. Hence, we chose a β value of 75 (the midpoint of 50 and 100) to identify highly conserved (but not identical) subgraphs across all networks as the most probable modules. Detection of the conserved β-similar dense subgraphs in a group of networks can also help us filter out some spurious query nodes (see Cancer-related genes section), which are generated by our Steps 1-4. We can take it one step further and use a group of contrast biological networks (i.e., networks of organisms that do not express the phenotype) to filter and obtain dense subgraphs that are not only identified as conserved in the previous step but are also "biased" towards the target phenotype. Here, by biased, we mean occurring in phenotype-expressing organisms but not occurring in the phenotype non-expressing organisms. To achieve this goal, first, the networks are partitioned into different groups according to the phenotype(s), and then the β-similar dense subgraph detection algorithm is applied to each group of networks. After getting all the conserved β-similar dense subgraphs from all groups, we remove all the common conserved β-similar dense subgraphs appearing in at least two groups of networks. As noted, three parameters, γ, μ and β, are used in our algorithm. The thresholds of the parameters depend on the application, but because the computational time of the DENSE algorithm is relatively small, users can try different thresholds and use their prior knowledge to design the query sets (e.g., pathway-phenotype associations) to validate the results. [The parameter sensitivity analysis is available upon request.] Similar to other comparative analysis methods, our results are sensitive to the phylogenetic diversity of the organisms chosen.
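Returning to Definition 1, the pairwise β-similarity test can be sketched as follows. Two points are assumptions of this sketch rather than the paper's specification: homology is simplified to a lookup table, and the β fraction is normalized by the smaller of the two subgraph sizes.

```python
# Beta-similarity test between two dense subgraphs from different networks.
# `homologs` maps genes of network 1 to their homologs in network 2
# (genes without an entry map to themselves).
def beta_similar(sub1, sub2, homologs, beta=0.75):
    mapped = {g: homologs.get(g, g) for g in sub1}.values()
    shared = len(set(mapped) & set(sub2))
    return shared >= beta * min(len(sub1), len(sub2))

s1 = {"g1", "g2", "g3", "g4"}
s2 = {"h1", "h2", "h3", "x"}
hom = {"g1": "h1", "g2": "h2", "g3": "h3"}
print(beta_similar(s1, s2, hom))  # True: 3 of the 4 nodes have shared homologs
```

Chaining this test pairwise across all networks in a group, as described above, yields the conserved β-similar dense subgraphs.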
A scoring function based on the phylogenetic diversity could be considered as an option to address this problem. Our work differs from other network-based identification methods in a number of ways: (1) we can discover dense, possibly overlapping subgraphs of a single network or groups of networks; and (2) we are able to identify "fuzzy functional modules" that are enriched by some target set of proteins (genes).
Mild Increases in Plasma Creatinine after High-Risk Abdominal Surgery Are Associated with Long-Term Renal Injury: A Retrospective Cohort Study Background: The impact of mild acute kidney injury (AKI) observed in the immediate postoperative period after major surgery on long-term renal function remains poorly defined. According to the "Kidney Disease: Improving Global Outcomes" (KDIGO) classification, a mild injury corresponds to KDIGO stage 1, characterized by an increase in creatinine of at least 0.3 mg/dl within a 48-hour window or 1.5 to 1.9 times the baseline level within the first week post-surgery. We tested the hypothesis that patients who underwent moderate- to high-risk abdominal surgery and developed mild AKI in the following days would be at an increased risk of long-term renal injury compared to patients with no postoperative AKI. Methods: In this single-centre retrospective study, all consecutive adult patients with a plasma creatinine value ≤ 1.5 mg/dl who underwent high-risk abdominal surgery between 2014-2019 and who had at least three recorded creatinine measurements (before surgery, during the first seven postoperative days, and at long-term follow up [6 months-2 years]) were included. AKI was defined using a "modified" (without urine output criteria) KDIGO classification as mild (stage 1, characterised by an increase in creatinine of ≥ 0.3 mg/dl within 48 hours or 1.5-1.9 times baseline) or moderate-to-severe (stage 2-3, characterised by an increase in creatinine to at least 2 times baseline or to ≥ 4.0 mg/dl). Development of long-term renal injury was compared in patients with and without postoperative AKI. Results: Among the 815 patients included, 109 (13.4%) had postoperative AKI (81 mild [KDIGO 1] and 28 moderate-to-severe [KDIGO 2-3]). The median long-term follow-up was 360, 354 and 353 days for the three groups respectively (P=0.190).
Patients who developed mild AKI had a higher risk of long-term renal injury than those who did not (odds ratio 3.1 [95% CI 1.7-5.5]; P<0.001). In multivariable analysis, mild postoperative AKI was independently associated with an increased risk of developing long-term renal injury (adjusted odds ratio 4.5 [95% CI 1.8-11.4]; P=0.002). Conclusions: Mild AKI after high-risk abdominal surgery is associated with a higher risk of long-term renal injury one year after surgery. In these study populations, AKI has consistently been reported to be associated with increased lengths of hospital stay, higher readmission rates and greater healthcare costs [12][13][14][15]. Development of AKI in these patients has also been associated with altered short- and long-term clinical outcomes, including death [10,16]. One of the most common systems used to diagnose AKI is the "Kidney Disease: Improving Global Outcomes" (KDIGO) classification, in which kidney dysfunction is based on changes in serum creatinine and urine output [17]. However, as urine output is rarely accurately measured in the perioperative setting, postoperative AKI is frequently assessed based on an increase in serum creatinine alone. Mild kidney injury is more frequent than moderate or severe injury after a major surgical procedure. Unfortunately, these three stages are generally combined into a global composite of "AKI", ignoring the obvious differences in incidence and severity across the stages. Moreover, many clinicians (surgeons, intensivists and anaesthesiologists) routinely under-recognise the importance of mild AKI [18], as they often consider postoperative AKI to be a transient phenomenon without short- or long-term consequences [19]. However, Turan et al. recently reported that mild postoperative AKI could affect long-term renal function in patients who had had various non-cardiac surgical procedures [20].
Nevertheless, there is limited specific literature regarding the long-term renal consequences of mild AKI after major abdominal surgery [19][20][21]. We therefore conducted a retrospective cohort study to analyse the association between postoperative AKI and long-term renal injury after high-risk abdominal surgery. We hypothesised that patients with a slight increase in their postoperative plasma creatinine, corresponding to mild AKI, would be at higher risk of long-term renal injury compared to patients without a postoperative creatinine increase. Methods The Ethics Committee of Erasme, Brussels, Belgium, approved this single-centre retrospective cohort analysis on February 10th, 2020 (Reference: P2020/031). Data collection was performed by Z.M. in our institution between February 11th and April 1st, 2020. We included all consecutive adult patients (≥ 18 years old) who: 1) had undergone elective high-risk abdominal surgery (including hepatobiliary surgery, pancreatectomy, gastrectomy, oesophagectomy, cancer debulking, and cystectomy) under general anaesthesia between January 1st, 2014, and April 30th, 2019. Patients who had had major vascular surgery were also included if the surgery involved an abdominal incision (e.g., aorto-bifemoral bypass and abdominal aortic aneurysm surgery); 2) had a plasma creatinine value measured before surgery, within 7 days after surgery, and at a later follow-up visit (6 months to 2 years after surgery). Patients who received dialysis in the preoperative period, those with chronic kidney disease (predefined as a baseline creatinine level > 1.5 mg/dl), those who had emergency surgery and patients who had another surgical procedure in the two years following their first surgery (unless it was a redo surgery in the same admission) were excluded. Patients who had suprarenal clamping during their vascular surgery were also excluded, as this clamping phase can seriously impact renal function.
For each eligible patient, we recorded, from our hospital health records, the plasma creatinine concentration prior to surgery (the most recent result available in the three months before surgery), the highest creatinine concentration during the seven postoperative days, and the creatinine concentration at long-term follow up (between 6 months and 2 years; if multiple creatinine values were available, the measurement closest to one year following surgery was always selected). If no long-term creatinine measurement was available in the hospital database, attempts were made to contact the patients and/or their general practitioners to obtain any values that had been measured elsewhere. The change in creatinine concentration between the preoperative and the postoperative period was used to classify patients according to a "modified" KDIGO classification in which the urine output criteria were not considered [17]. Mild AKI (KDIGO stage 1) was characterised by an increase in creatinine of ≥ 0.3 mg/dl within 48 hours or 1.5-1.9 times baseline; moderate AKI (KDIGO stage 2) by an increase in creatinine of 2-2.9 times baseline; and severe AKI (KDIGO stage 3) by an increase in creatinine of 3 times baseline or more, or to ≥ 4.0 mg/dl. To simplify our statistical analysis because of the low occurrence rate, AKI stages 2 and 3 were combined into a single category (2-3), leaving us with three final groups (no AKI vs AKI stage 1 vs AKI stage 2-3). Long-term renal injury was defined using the difference between the preoperative creatinine concentration and the long-term follow-up measurement. We used the same KDIGO classification system to stage long-term renal injury as we used for the immediate postoperative period. Statistical Analysis The distribution of continuous data was analysed using a Kolmogorov-Smirnov test. Normally distributed data are presented as means ± standard deviation and were compared between groups using a one-way analysis of variance.
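The "modified" KDIGO staging described above (creatinine criteria only, no urine output) can be sketched as a small function; this is an illustrative sketch, not the authors' code, and it assumes the peak value was reached within the stage 1 time windows:

```python
# Stage AKI from baseline and peak postoperative creatinine (mg/dl),
# following the study's creatinine-only ("modified" KDIGO) criteria.
def kdigo_stage(baseline, peak):
    ratio = peak / baseline
    if ratio >= 3.0 or peak >= 4.0:
        return 3   # severe: >= 3x baseline or creatinine >= 4.0 mg/dl
    if ratio >= 2.0:
        return 2   # moderate: 2-2.9x baseline
    if ratio >= 1.5 or (peak - baseline) >= 0.3:
        return 1   # mild: >= 0.3 mg/dl rise (assumed within 48 h) or 1.5-1.9x
    return 0       # no AKI

print(kdigo_stage(0.9, 1.1))   # 0: rise of 0.2 mg/dl, ratio 1.22
print(kdigo_stage(0.9, 1.25))  # 1: rise of 0.35 mg/dl
print(kdigo_stage(1.0, 2.2))   # 2: 2.2 times baseline
print(kdigo_stage(1.0, 4.1))   # 3: creatinine >= 4.0 mg/dl
```

For the analysis, stages 2 and 3 were then pooled into the single 2-3 category.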
Non-normally distributed data are presented as medians (interquartile ranges) and were compared using a Kruskal-Wallis test. Dichotomous variables are presented as crude numbers and percentages and were compared between groups using a Chi-square test. Modelling of the risk of long-term renal injury was performed using the same approach as Turan et al. [20], including early AKI and all covariates listed in Tables 1 and 2 in a logistic (binomial) model. The risk of developing long-term renal injury is presented as an odds ratio with 95% confidence intervals. Statistical analyses were done using Minitab 16 (Paris, France), MedCalc Software Ltd (Ostend, Belgium) and R (www.r-project.org). Results Among the 1482 patients who underwent high-risk abdominal surgery between January 1st, 2014 and April 30th, 2019, 815 patients met the inclusion criteria and were thus included in our study. The main reason for exclusion was a lack of postoperative or long-term creatinine values (Fig. 1). Among the 81 patients who developed mild postoperative AKI, 19 patients (23.5%) had persistent mild or moderate-to-severe renal injury one year after surgery, compared to 64 (9.1%) of those who had no postoperative AKI (P < 0.001) (Fig. 2). Among the 28 patients (3.4%) who developed moderate-to-severe AKI postoperatively, 10 (35.7%) had some degree of long-term renal dysfunction. Patients who developed mild AKI after surgery therefore had a threefold higher chance of developing long-term renal injury compared to patients without postoperative AKI (odds ratio [95% CI] of 3.1 [1.7-5.5]; P = 0.0001). In patients with postoperative AKI KDIGO stage 2-3, the odds ratio for development of long-term renal dysfunction was 5.6 (95% CI 2.5-12.6; P < 0.0001).
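The unadjusted odds ratios above follow directly from the counts reported in the text (19/81 mild-AKI and 64/706 no-AKI patients with long-term renal injury; 10/28 in the KDIGO 2-3 group); a sketch of the calculation, not the authors' code:

```python
# Unadjusted odds ratio between an exposed and a reference group,
# from event counts and group totals.
def odds_ratio(events_a, total_a, events_b, total_b):
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

print(round(odds_ratio(19, 81, 64, 706), 1))  # 3.1 (mild AKI vs no AKI)
print(round(odds_ratio(10, 28, 64, 706), 1))  # 5.6 (KDIGO 2-3 vs no AKI)
```

The adjusted odds ratio of 4.5 reported in the abstract additionally conditions on the covariates of the logistic model and cannot be reproduced from these counts alone.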
Occurrence of postoperative AKI was associated with older age, higher baseline creatinine level and the presence of comorbid conditions, notably a history of chronic hypertension, myocardial infarction, atrial fibrillation, or chronic obstructive pulmonary disease (Table 1). Patients who developed postoperative AKI had longer surgery times, received more fluid and had a higher estimated blood loss during surgery compared to patients without postoperative AKI (Table 2). Patients with postoperative AKI also had more postoperative complications and a longer hospital length of stay (Table 3). Importantly, long-term creatinine values were measured at a median of around one year in all groups (Table 3; P = 0.190). Discussion In a cohort of 815 patients who underwent high-risk abdominal surgery, more than one fifth of the patients (21%) who developed mild postoperative AKI had mild renal injury long term, and 2.5% developed moderate to severe long-term renal injury. This observation demonstrates that even a slight increase in postoperative creatinine can be important and should not be neglected. Stated a different way, development of mild postoperative AKI more than tripled the odds of having renal injury one year after surgery compared to patients without postoperative AKI. Our results are in agreement with the only available study that examines the association of mild AKI with long-term renal injury [20]. This study, recently published by Turan et al., utilized a large database from the Cleveland Clinic that included more than 15,000 patients who underwent a variety of non-cardiac surgical procedures ranging from low- to high-risk. Interestingly, postoperative AKI was a complication in only 3% of their study population compared to 13.4% in our study. This is not surprising as we included only patients who had had high-risk abdominal surgery, which carries a greater risk of postoperative renal dysfunction than low-risk abdominal surgery.
Moreover, major surgery is a well-known contributing factor for postoperative AKI [22]. This increased risk is likely due to larger fluid shifts, blood losses and a relatively high incidence of perioperative hypotension in these patients, all of which can compromise renal blood flow [23][24][25][26]. Moreover, these types of surgical procedure are more often performed in elderly patients, who are at a greater risk of having comorbidities that predispose to the development of AKI. This study has some limitations that should be taken into consideration. Firstly, our study was observational, retrospective, and single-centre and included a relatively small sample size, largely because of the high proportion of patients without long-term follow-up creatinine concentrations. Therefore, a causal relation cannot be proven. Secondly, we only included patients who underwent high-risk abdominal surgery [27], so the data cannot be extrapolated to other types of surgery (neurosurgical, cardiac, etc.). Thirdly, as urine output was not used for the AKI classification ("modified" KDIGO classification), this may have led to an underestimation of the incidence of postoperative AKI in our study cohort. Lastly, our logistic regression only took into account perioperative variables, so we did not have information on specific events during the long-term follow-up (oncological evolution or cardiovascular problems). Conclusions Although mild increases in postoperative plasma creatinine concentration are frequently considered to have little long-term clinical significance, we found that patients undergoing high-risk abdominal surgery who developed a mild increase in plasma creatinine concentration had a much higher incidence of long-term renal dysfunction. Clinicians should not neglect "minor" disturbances in renal function after surgery as they may persist or even worsen during long-term follow-up. Jean-Louis Vincent is Editor-in-Chief of Critical Care.
He has no other conflicts related to this article. The other authors have no conflicts of interest related to this article. V: Data analysis and editing the final manuscript. R: Study design, statistical analysis and editing the final manuscript. VdL: Study design and conception, statistical analysis and editing the final manuscript.
Ab initio calculations on nuclear matter properties including the effects of three-nucleon interactions In this thesis, the ground state properties of nuclear matter, namely the energy per particle and the response to weak probes, are computed, studying the effects of three-nucleon interactions. Both the variational approach, based on the formalism of correlated basis functions, and the auxiliary field diffusion Monte Carlo method have been used. A scheme suitable to construct a density-dependent two-nucleon potential in the correlated basis approach is discussed. The density-dependent potential resulting from the UIX three-nucleon force has been employed in auxiliary field diffusion Monte Carlo calculations that turned out to be in very good agreement with correlated basis variational results. Hence, the underbinding of symmetric nuclear matter has to be ascribed to deficiencies of the UIX potential. A comparative analysis of the equations of state of both pure neutron matter and symmetric nuclear matter obtained using a new generation of "chiral inspired" local three-body potentials has been performed. These potentials provide an excellent description of the properties of light nuclei, as well as of the neutron-deuteron doublet scattering length. The weak response of symmetric nuclear matter has been computed at the three-body cluster level. Two-body effective interactions and one-body effective operators have been derived within the formalism of correlated basis functions. The inclusion of the three-body cluster term in the effective interaction allowed for a direct inclusion of the UIX three-nucleon potential. Moreover, the sizable unphysical dependence of the effective weak operator is removed once the three-body cluster term is taken into account. Introduction Ab initio nuclear many-body approaches are based on the premise that the dynamics can be modeled studying exactly solvable few-body systems.
This is a most important feature since, due to the complexity of strong interactions and to the prohibitive difficulties associated with the solution of the quantum mechanical many-body problem, theoretical calculations of nuclear observables generally involve a number of approximations. Hence, models of nuclear dynamics extracted from analyses of the properties of complex nuclei are plagued by the systematic uncertainty associated with the use of a specific approximation scheme. Highly realistic two-nucleon potentials, either purely phenomenological [1,2,3,4] or based on chiral perturbation theory (ChPT) [5,6], have been obtained from accurate fits of the properties of the bound and scattering states of the two-nucleon system [7,8,9,10,11,12,13]. Unfortunately, however, the extension to the case of the three-nucleon potential, the inclusion of which is needed to account for the properties of the three-nucleon systems, is not straightforward. The definition of the potential describing three-nucleon interactions is a central issue of nuclear few- and many-body theory. Three-nucleon forces (TNF) have long been known to provide a sizable contribution to the energies of the ground and low-lying excited states of light nuclei, and play a critical role in determining the equilibrium properties of isospin-symmetric nuclear matter (SNM). In addition, their effect is expected to become large, or even dominant, in high-density neutron matter, the understanding of which is required for the theoretical description of compact stars. Phenomenological models, such as the Urbana IX (UIX) potential, that reproduce the observed binding energy of ³H by construction, fail to explain the measured nd doublet scattering length, ²a_nd [14], as well as the proton analyzing power in p-³He scattering, A_y [15]. The investigation of uniform nuclear matter may shed light on both the nature and the parametrization of the TNF.
The equation of state (EoS) of SNM is constrained by the available empirical information on the saturation density, ρ_0, the binding energy per nucleon at equilibrium, E_0, and the compressibility, K. Furthermore, the recent observation of a neutron star of about two solar masses [16] puts a constraint on the stiffness of the EoS of beta-stable matter, closely related to that of pure neutron matter (PNM). Nuclear matter calculations are carried out using a variety of many-body approaches. The scheme referred to as Fermi-Hyper-Netted-Chain/Single-Operator-Chain (FHNC/SOC), based on correlated basis functions and the cluster expansion technique, has been first

We have found that none of the parametrizations simultaneously reproduces the equilibrium properties of nuclear matter. Nevertheless, one of the TM′ three-body potentials and one of the chiral NNLOL potentials provide values of the SNM saturation density close to the experimental one. This is a remarkable feature of these potentials since, unlike the UIX model, they do not involve any parameter adjusted to reproduce ρ_0. Over the past few years, the CBF approach and the cluster expansion formalism have also been used to develop well-behaved effective interactions, which take into account the main effects of NN correlations and are suitable for use in standard perturbation theory in the Fermi gas basis, thus allowing for a consistent treatment of equilibrium and nonequilibrium properties of nuclear matter [47,37]. In view of the critical role played by interactions involving more than two nucleons, the implementation of the results discussed in this Thesis in the CBF effective interaction should be regarded as one of the most interesting applications of our analysis. As a first step in this direction, we have employed the effective interaction approach to compute the weak response of symmetric nuclear matter including the three-body cluster contribution.
Although the density, transverse and longitudinal responses of nuclear matter at momentum transfer around ∼1 fm⁻¹ have already been computed within CBF using the chain summation scheme [33,34,48], our approach, based on effective weak operators and effective potentials, is more general, as it allows for a consistent description of the nuclear response in the regions of both low and high momentum transfer, where long- and short-range correlations, respectively, are known to be dominant. In Chapter 1, we introduce the concept of nuclear matter, emphasizing its features and its relations with physical systems. The derivation of two- and three-nucleon interactions and their capability of reproducing experimental data are discussed, and the formalism of chiral perturbation theory is outlined. In Chapter 2, after discussing the limitations of the independent particle model, we describe CBF theory and the FHNC/SOC summation scheme, as well as the Monte Carlo many-body formalisms, pointing out that these approaches are able to encompass the correlation structure of nuclear matter, originating from the nuclear interactions. The first part of Chapter 3 is devoted to the derivation of the density-dependent potential, obtained from an average of the UIX three-nucleon force, while in the rest of the Chapter we discuss a comparative analysis of the chiral inspired three-nucleon forces in nuclear matter. Chapter 4 is focused on the inclusion of the effects of three-nucleon interactions in the CBF effective interaction, and on the application of this approach to the calculation of the weak response of nuclear matter.

Bulk properties of nuclear matter

Nuclear matter is a uniform system of nucleons interacting through strong interactions only. While being a theoretical construct, it provides an extremely useful model to investigate the properties of both atomic nuclei and neutron star matter.
Note that in these systems, with the exception of neutron stars in the early stages of their life, the temperature can be safely set to zero, as thermal energies are negligible compared to Fermi energies. Two quantities that characterize nuclear matter are the density ρ and the proton fraction x_p,

\[ \rho = \rho_p + \rho_n \,, \qquad x_p = \frac{\rho_p}{\rho} \,, \tag{1.1} \]

where ρ_p and ρ_n indicate the proton and neutron densities, respectively. PNM is the limiting case in which x_p = 0, while for SNM x_p = 1/2. Strong interactions do not bind PNM, which in neutron stars is packed by gravitational attraction. SNM, on the other hand, is bound, and its equilibrium properties can be deduced from the analysis of nuclear data. The nuclear charge distribution, ρ_ch(r), is almost constant within the nuclear volume and its central value is basically the same for all stable nuclei. It can be parametrized by

\[ \rho_{\rm ch}(r) = \frac{\rho_0}{1 + e^{(r-R)/D}} \,. \tag{1.2} \]

Elastic electron-nucleus scattering experiments have shown that the nuclear charge radius, R, is proportional to A^{1/3},

\[ R = r_0\, A^{1/3} \,, \tag{1.3} \]

implying that the volume increases linearly with the mass number. The parameters r_0 = 1.15 fm and D = 0.54 fm have been extracted from experimental data, see for instance [49]. Equation (1.3) and the nuclear mass formula, to be discussed below, imply the equilibrium density

\[ \rho_0 = \frac{3}{4\pi r_0^3} \simeq 0.16 \pm 0.02 \ {\rm fm}^{-3} \,. \tag{1.4} \]

In addition, one can observe that the central charge density of atomic nuclei, measured by elastic electron-nucleus scattering, does not depend upon A for large A. As shown in Fig. 1.1, the limiting value does not differ from the one resulting from r_0.

Figure 1.1: Saturation of central nuclear densities of medium-heavy nuclei as measured by electron-nucleus scattering, from [50].

The curves of Fig. 1.1, parametrized by Eq. (1.2), show that the charge density drops from 90% to 10% of its central value over a distance R_T ≈ 2.5 fm, independent of A, called the surface thickness.
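As a quick numerical check of the charge-distribution parametrization, the following sketch (assuming only the values r_0 = 1.15 fm and D = 0.54 fm quoted above) evaluates the saturation density of Eq. (1.4) and the 90%-10% surface thickness of the Fermi distribution of Eq. (1.2):

```python
import math

r0 = 1.15  # fm, from Eq. (1.3)
D = 0.54   # fm, diffuseness of the charge distribution, Eq. (1.2)

# Saturation density, Eq. (1.4): rho_0 = 3 / (4 pi r0^3)
rho0 = 3.0 / (4.0 * math.pi * r0**3)

# Radius at which rho_ch(r)/rho_0 = f: solve (r - R)/D = ln(1/f - 1)
def radius_at_fraction(f, R):
    return R + D * math.log(1.0 / f - 1.0)

R = r0 * 56 ** (1.0 / 3.0)  # e.g. a nucleus with A = 56
R_T = radius_at_fraction(0.10, R) - radius_at_fraction(0.90, R)

print(f"rho_0 = {rho0:.3f} fm^-3")  # ~0.157, within 0.16 +/- 0.02
print(f"R_T  = {R_T:.2f} fm")       # ~2.4 fm, close to the quoted 2.5 fm
```

Note that the surface thickness comes out independent of A by construction, since only the diffuseness D enters the difference of the two radii.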
The (positive) binding energy per nucleon is defined as the difference between the mass of the bound nucleus and that of its constituents,

\[ \frac{B(Z,A)}{A} = \frac{1}{A} \left[ Z m_p + (A-Z) m_n - M(Z,A) \right] \,. \tag{1.5} \]

In Table 1.1 the masses and the binding energies of ¹⁶O, ⁵⁶Fe, ⁶²Ni and ¹²⁰Sn are shown. The dependence of the binding energy on the atomic and mass numbers can be parametrized according to the semiempirical mass formula [51,52], based on the liquid drop model and the shell model,

\[ \frac{B(Z,A)}{A} = \frac{1}{A} \left[ a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} - \lambda\, a_P \frac{1}{A^{1/2}} \right] \,. \tag{1.6} \]

The first term in square brackets, proportional to A, is the volume term and describes the bulk energy of nuclear matter. It is due to the strong nuclear interaction, which does not distinguish between neutrons and protons. Because strong interactions are short-ranged, a given nucleon may only interact strongly with its nearest neighbors and next-nearest neighbors, explaining the scaling with A instead of the A(A − 1) characteristic of long-ranged interactions. The term proportional to A^{2/3}, denoted as the surface term, is also due to the strong interactions. It is actually a correction to the volume term, arising from the fact that nucleons close to the surface have fewer neighbors than the inner ones. The third term accounts for the Coulomb repulsion between protons. Since the electrostatic interaction is long-ranged, the scaling is given by Z(Z − 1). The fourth term, proportional to [(A − Z) − Z]², goes under the name of symmetry energy. Its origin can be justified on the basis of the Pauli exclusion principle, explaining the experimental evidence that stable nuclei tend to have the same number of protons and neutrons. The last term, the pairing term, which captures the effect of the spin coupling, can be exhaustively explained in the framework of the shell model. It accounts for the fact that even-even nuclei (i.e. nuclei having even Z and even A − Z) are likely to be more stable than even-odd or odd-odd nuclei. Hence, the value of the constant λ is −1, 0 and +1 for even-even, even-odd and odd-odd nuclei, respectively.
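A minimal sketch of the mass formula of Eq. (1.6) follows. The coefficient values are typical textbook liquid-drop fits, assumed here for illustration only, not the fit of Refs. [51,52]:

```python
import math

# Typical liquid-drop coefficients (MeV); illustrative values, not
# the fit of Refs. [51,52].
aV, aS, aC, aA, aP = 15.7, 17.8, 0.711, 23.7, 11.2

def binding_energy(Z, A):
    """Semiempirical mass formula, Eq. (1.6); lambda = -1, 0, +1 for
    even-even, even-odd and odd-odd nuclei, respectively."""
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        lam = -1
    elif Z % 2 == 1 and N % 2 == 1:
        lam = +1
    else:
        lam = 0
    return (aV * A
            - aS * A ** (2.0 / 3.0)
            - aC * Z * (Z - 1) / A ** (1.0 / 3.0)
            - aA * (A - 2 * Z) ** 2 / A
            - lam * aP / math.sqrt(A))

print(binding_energy(26, 56) / 56)  # ~8.8 MeV per nucleon for 56Fe
```

With these coefficients the formula reproduces the well-known maximum of B/A near the iron region at the level of a percent or so.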
SNM is described by the semiempirical mass formula by putting Z = A/2 and taking the limit A → ∞. Neglecting the Coulomb repulsion, the volume term is the only one surviving in this limit. Therefore, the coefficient a_V can be identified with the binding energy per particle of SNM. Typical fits of Eq. (1.6) give [53]

In the vicinity of the equilibrium density, the energy per particle of SNM can be expanded according to

\[ E(\rho) \simeq E_0 + \frac{K}{18} \left( \frac{\rho - \rho_0}{\rho_0} \right)^2 \,. \tag{1.8} \]

The coefficient K is the (in)compressibility modulus, which can be extracted from isoscalar breathing modes [54,55], and from isotopic differences in the charge densities of large nuclei [56],

\[ K_0 = 220 \pm 30 \ {\rm MeV} \,. \tag{1.10} \]

Nuclear hamiltonian

Although impressive steps have been made in this direction [57,58,59,60], the description of nuclear matter properties at finite density and zero temperature within the framework of quantum chromodynamics (QCD) still seems to be out of reach for present computational techniques. For this reason, in this work we rely on dynamical models in which non-relativistic nucleons interact by means of instantaneous potentials, describing both the short- and the long-range interactions, the latter given by meson (mainly pion) exchange. Within this picture, nuclei can be described in terms of point-like nucleons of mass m, whose dynamics are dictated by the hamiltonian

\[ \hat H = \sum_i \frac{\hat p_i^2}{2m} + \sum_{j>i} \hat v_{ij} + \sum_{k>j>i} \hat V_{ijk} \,, \tag{1.11} \]

where \hat v_{ij} and \hat V_{ijk} are the two- and three-body potentials, respectively. In principle, four- and more-body potentials could be included in the hamiltonian, but there are convincing indications that their contribution is negligibly small.

Two-body interaction

Highly realistic two-nucleon potentials, either purely phenomenological or based on chiral perturbation theory (ChPT), have been obtained from accurate fits of the properties of the bound and scattering states of the two-nucleon system [7,8,9,10,11,12,13].
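A minimal numerical illustration of the expansion around saturation, assuming for definiteness E_0 = −16 MeV, K = 220 MeV and ρ_0 = 0.16 fm⁻³, and the standard parabolic form E(ρ) ≃ E_0 + (K/18)((ρ − ρ_0)/ρ_0)²:

```python
# Parabolic expansion of the SNM energy per particle around saturation.
# E0, K and rho0 are assumed representative values, not fit results.
E0, K, rho0 = -16.0, 220.0, 0.16  # MeV, MeV, fm^-3

def energy_per_particle(rho):
    return E0 + (K / 18.0) * ((rho - rho0) / rho0) ** 2

# Recover K numerically from the curvature: K = 9 rho0^2 E''(rho0)
h = 1e-4
second_derivative = (energy_per_particle(rho0 + h)
                     - 2.0 * energy_per_particle(rho0)
                     + energy_per_particle(rho0 - h)) / h**2
K_check = 9.0 * rho0**2 * second_derivative
print(K_check)  # ~220 MeV, consistent with the K/18 prefactor
```

The check makes explicit why the prefactor in the expansion is K/18: with K defined through the curvature as K = 9ρ_0² E''(ρ_0), the quadratic term is K/(18ρ_0²)(ρ − ρ_0)².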
Besides the technical complexities involved in the fits, the following main features of the NN interaction can be inferred from the analysis of nuclear experimental data.

• The saturation of nuclear densities, see Fig. 1.1, showing that the density in the inner part of nuclei is almost constant and independent of A, indicates that nucleons cannot be packed together too tightly. In a non-relativistic picture, a coordinate-space potential with a repulsive core is able to reproduce this experimental evidence. Denoting by r_12 the interparticle distance, one has

\[ v(r_{12}) > 0 \,, \quad r_{12} < r_c \,. \tag{1.12} \]

It is worth mentioning that the authors of Ref. [42], using the renormalization group equations to smear off the repulsive core, were able to develop a class of low-momentum potentials. In this framework, the saturation properties are explained in terms of many-nucleon interactions.

• The nuclear binding energy per nucleon is almost the same for nuclei with A ≥ 12. Together with the proton-neutron scattering data, this is an indication that the NN interaction is short-ranged,

\[ v(r_{12}) = 0 \,, \quad r_{12} > r_0 \,. \tag{1.13} \]

• Combining the neutron-proton cross section data with the properties of the deuteron (²H), it is found that the singlet state does not have a bound state, as opposed to the triplet state. In particular, the deuteron is the only NN bound state, consisting of a neutron and a proton coupled to total spin S = 1 and isospin T = 0. This is a clear manifestation of the spin dependence of the NN interaction.

• From the observation that the deuteron exhibits a non-vanishing quadrupole moment, it can be deduced that the angular momentum does not commute with the hamiltonian. Consequently, the NN potential cannot be invariant under rotations of the spatial coordinates alone.

• Mirror nuclei are pairs of nuclei such that the proton number of one equals the neutron number of the other and vice versa.
This obviously implies that mirror nuclei have the same mass number but atomic numbers differing by one unit. Examples of mirror nuclei are ¹³₇N and ¹³₆C, or ¹⁵₇N and ¹⁵₈O. The spectra of mirror nuclei show striking similarities: the energies of the levels with the same parity and angular momentum are the same, up to small electromagnetic corrections, showing that the nuclear interactions are charge symmetric. This is a manifestation of a more general symmetry of the underlying theory, isospin invariance. The first attempt at a theoretical description of NN scattering data is due to Yukawa [61]. He made the hypothesis of nucleons interacting through the exchange of a particle of mass µ, related to the range of the interaction, r_0, through (in natural units)

\[ r_0 \sim \frac{1}{\mu} \,. \tag{1.14} \]

For r_0 ∼ 1.0 fm the above equation gives µ ∼ 200 MeV, which is of the same order of magnitude as the pion mass, m_π ≃ 140 MeV. The simplest parity-conserving vertex between the nucleon and the pseudoscalar pion has the form i g γ_5 τ, where g is a coupling constant and τ accounts for the isospin of the nucleons. Hence, the non-relativistic limit of the scattering amplitude described by the Feynman diagram of Fig. 1.2 leads to the definition of a NN potential, whose expression in coordinate space reads

\[ v_\pi(r_{12}) = \frac{1}{3} \frac{g^2}{4\pi} \frac{m_\pi^3}{4 m^2}\, \tau_{12} \left[ T_\pi(r_{12})\, S_{12} + \left( Y_\pi(r_{12}) - \frac{4\pi}{m_\pi^3}\, \delta(r_{12}) \right) \sigma_{12} \right] \,. \tag{1.15} \]

In the above equation, σ_ij = σ_i · σ_j and τ_ij = τ_i · τ_j, where σ_i and τ_i are Pauli matrices acting on the spin or isospin of the i-th nucleon, while

\[ S_{12} = \sum_{\alpha\beta} \sigma_1^\alpha \sigma_2^\beta \left( 3\, \hat r_{12}^\alpha \hat r_{12}^\beta - \delta^{\alpha\beta} \right) \,, \tag{1.16} \]

with α, β = 1, 2, 3, is the tensor operator. The radial functions associated with the spin and tensor components read

\[ Y_\pi(r) = \frac{e^{-x}}{x} \,, \qquad T_\pi(r) = \left( 1 + \frac{3}{x} + \frac{3}{x^2} \right) Y_\pi(r) \,, \qquad x = m_\pi r \,. \tag{1.17} \]

The phase-shift analysis of the high angular momentum neutron-neutron (nn) and neutron-proton (np) scattering states shows that for g²/(4π) ≃ 14 the one-pion exchange potential, v_π, provides an accurate description of the long-range part (r_12 ≥ 1.4 fm) of the NN interaction.
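Yukawa's range-mass relation and the standard one-pion exchange radial functions Y_π(x) = e^{−x}/x and T_π(x) = (1 + 3/x + 3/x²) Y_π(x), with x = m_π r, can be checked with a few lines:

```python
import math

hbar_c = 197.327  # MeV fm

# Yukawa's estimate: a range r0 ~ 1 fm implies an exchanged mass
# mu ~ hbar*c / r0, of the order of the pion mass.
r0 = 1.0  # fm
mu = hbar_c / r0
print(f"mu ~ {mu:.0f} MeV")  # ~197 MeV, same order as m_pi ~ 140 MeV

# Standard OPE radial functions in the dimensionless variable x = m_pi * r
def Y_pi(x):
    return math.exp(-x) / x

def T_pi(x):
    return (1.0 + 3.0 / x + 3.0 / x**2) * Y_pi(x)

# The tensor function dominates at short distance; both decay exponentially
print(Y_pi(1.0), T_pi(1.0))
```

At x = 1 the tensor function is exactly seven times the spin function, which makes the dominance of the tensor component at short range immediately visible.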
High angular momenta in fact give rise to a high centrifugal barrier, preventing the nucleons from coming very close to each other. In order to describe NN interactions at intermediate range, one should consider processes in which two and more pions, possibly interacting among themselves, are exchanged. In the short-range region, the exchange of heavier mesons and more complicated processes, which can be modeled through, e.g., contact interactions, are expected to be dominant.

Chiral perturbation theory (ChPT)

A framework in which the above processes are taken into account in a systematic way is provided by chiral perturbation theory. By analyzing the spectra of hadrons having up and down valence quarks, it can be deduced that the approximate chiral symmetry of QCD is spontaneously broken. The corresponding quasi-Goldstone bosons can be identified with the pions, which in turn are much lighter than all other hadrons. They would be exactly massless if the masses of the u and d quarks were vanishing. Goldstone's theorem states that the interactions of Goldstone bosons become weak for small momenta, of the order of the pion mass. The natural cutoff of ChPT is the rho-meson mass, providing the high-energy scale of the theory, Λ_χ ≃ 800 MeV. Therefore, it is possible to expand the scattering amplitude in powers of the small ratios between either the external momenta or the pion mass and Λ_χ. Pion loops are naturally incorporated, and all corresponding ultraviolet divergences can be absorbed at each fixed order in the chiral expansion by counter terms of the most general Lagrangian, involving pions and nucleons, consistent with spontaneously broken chiral symmetry and the other known symmetries. It is worth noting that the Lagrangian also contains contact interaction terms among nucleons, needed to renormalize loop integrals, make results fairly independent of the regulators, and parametrize the unresolved short-distance dynamics of the nuclear force [62].
In the pion-nucleon sector, ChPT works well, as the interaction vanishes at vanishing external momenta in the chiral limit. In the purely nucleonic sector the situation is more complicated, since the strong interaction does not become perturbative, even in the chiral limit, at vanishing three-momenta of the external nucleons. In his seminal papers [63,64], Weinberg proposed to apply ChPT to the "effective NN potential", defined as the sum of connected diagrams for the scattering matrix, generated by old-fashioned time-ordered perturbation theory. Following this idea, Weinberg was able to demonstrate the validity of the well-established intuitive hierarchy of the few-nucleon forces: two-nucleon interactions are more important than three-nucleon ones, which are in turn more important than four-nucleon interactions, and so on. Within ChPT it is also possible to explain why the first-order diagram of Fig. 1.2 is able to provide an accurate description of the long-range part of the NN potential, although the coupling constant g is much larger than one. The one-pion exchange is in fact the leading contribution in the chiral parameter |q|/Λ_χ, where |q| ∼ m_π is the spatial momentum of the nucleons. The two-body potential at next-to-next-to-next-to-leading order (NNNLO) in the chiral expansion has been derived independently by Entem and Machleidt [5] and by the Jülich group [6]. Both these potentials are able to reproduce the Nijmegen phase shifts with χ² ≃ 1. Unfortunately, a coordinate-space expression for such potentials is not available. Therefore, they can be employed neither in the FHNC/SOC nor in the AFDMC formalism yet. On the other hand, a local version of the three-body potential at NNLO does exist and will be discussed later in this Chapter.

The Argonne potential

Phenomenological NN potentials [1,2,3,4] are generally written as

\[ \hat v = \hat v_\pi(r_{12}) + \hat v_I(r_{12}) + \hat v_S(r_{12}) \,, \tag{1.19} \]

where \hat v_\pi(r_{12}) is given by Eq.
(1.15) stripped of the delta-function contribution, \hat v_I(r_{12}) describes the intermediate-range attraction attributed to two-pion exchange, and \hat v_S(r_{12}) accounts for the short-range repulsion, which may be due to the exchange of heavier mesons and/or to the overlap of the quark distributions of the nucleons. Comparison with ChPT suggests that \hat v_S(r_{12}) is strictly related to the contact terms in the chiral Lagrangian. The highly realistic Argonne v_18 potential (AV18) [3] can be written in the form

\[ \hat v_{18}(r_{12}) = \sum_{p=1}^{18} v^p(r_{12})\, \hat O^p_{12} \,. \tag{1.20} \]

The static part of AV18, given by the first six operators,

\[ \hat O^{p=1,\dots,6}_{12} = \left( 1,\, \sigma_{12},\, S_{12} \right) \otimes \left( 1,\, \tau_{12} \right) \,, \tag{1.21} \]

is sufficient to describe the deuteron properties and the phase shifts corresponding to S and D states. In order to explain the P-wave phase shifts, the spin-orbit terms have to be introduced,

\[ \hat O^{p=7,8}_{12} = \mathbf{L}_{12} \cdot \mathbf{S}_{12} \otimes \left( 1,\, \tau_{12} \right) \,. \tag{1.22} \]

In the above equation, L_ij is the relative angular momentum and S_ij is the total spin of the pair,

\[ \mathbf{L}_{ij} = \frac{1}{2i}\, \mathbf{r}_{ij} \times \left( \boldsymbol{\nabla}_i - \boldsymbol{\nabla}_j \right) \,, \qquad \mathbf{S}_{ij} = \frac{1}{2} \left( \boldsymbol{\sigma}_i + \boldsymbol{\sigma}_j \right) \,. \tag{1.24} \]

The remaining 10 operators are required to achieve the description of the Nijmegen scattering data with χ² ≃ 1. They are given by

\[ \hat O^{p=9,\dots,14}_{12} = \left( L^2,\, L^2 \sigma_{12},\, (\mathbf{L}\cdot\mathbf{S})^2 \right) \otimes \left( 1,\, \tau_{12} \right) \,. \tag{1.25} \]

The last four operators account for the charge symmetry breaking effect, due to the different masses and coupling constants of the charged and neutral pions. Instead of the full AV18, we will be using the so-called Argonne v′_8 and Argonne v′_6 potentials, which are not simple truncations of the original model, but rather "reprojections" [65]. For the purpose of the following discussion, the ³S₁, ³D₁, ³P₀ and ³F₂ phase shifts calculated using the Argonne v_18 potential and its reprojected versions v′_6 and v′_8 are displayed in Fig. 1.3. The Argonne v′_8 potential is obtained by refitting the scattering data in such a way that all S and P partial waves, as well as the ³D₁ wave and its coupling to ³S₁, are reproduced equally well as in Argonne v_18. The differences with the full AV18 start appearing in the phase shifts of higher partial waves, like the ³F₂ plotted in Fig. 1.3.
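The action of the spin-spin operator σ_12 = σ_1 · σ_2 entering the static operator basis can be verified directly: on the four two-nucleon spin states it has eigenvalue −3 on the singlet and +1 on the triplet, which is what allows the potential to distinguish the bound triplet from the unbound singlet channel. A small numpy sketch:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# sigma_12 = sigma_1 . sigma_2 on the two-nucleon spin space,
# built with Kronecker products
sigma12 = sum(np.kron(s, s) for s in (sx, sy, sz))

eigvals = np.sort(np.linalg.eigvalsh(sigma12))
print(eigvals)  # [-3, 1, 1, 1]: singlet (S=0) and triplet (S=1) eigenvalues
```

The same construction with isospin Pauli matrices gives τ_12, so all six static operators of the basis can be assembled from these two building blocks together with the tensor operator.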
In all light nuclei and nuclear matter calculations, the results obtained with v′_8 are very close to those obtained with the full v_18, and the difference v_18 − v′_8 can be safely treated perturbatively. The Argonne v′_6 is not just a truncation of v′_8, as the radial functions associated with the first six operators are adjusted to preserve the deuteron binding energy. Our interest in this potential is mostly due to the fact that AFDMC simulations of nuclei and nuclear matter can be performed most accurately with v_6-type two-body interactions. Work to include the spin-orbit terms in AFDMC calculations is in progress. On the other hand, we need to check the accuracy of our proposed density-dependent reduction with both the FHNC and AFDMC many-body methods before proceeding to the construction of a realistic two-body density-dependent model potential and comparing with experimental data.

Three-body interaction

It is well known that using a nuclear hamiltonian including only two-nucleon interactions leads to underbinding light nuclei and overestimating the equilibrium density of nuclear matter. Hence, the contribution of three-nucleon interactions must necessarily be taken into account. In order for the three-body potential to be symmetric under the exchange of particles 1, 2 and 3 (remember that the sum of Eq. (1.11) is subject to the constraint k > j > i), it has to be written as a cyclic sum. For all the potentials we are considering in this Thesis, it turns out that there are only three independent cyclic permutations,

\[ \hat V_{123} = \hat V(1:23) + \hat V(2:13) + \hat V(3:12) \,, \tag{1.27} \]

with \hat V(i:jk) = \hat V(i:kj).

UIX three-body potential

One of the most widely used three-body potentials is the Urbana IX (UIX) [21], which consists of two terms. The attractive two-pion (2π) exchange interaction V_2π turns out to be helpful in fixing the problem of the underbinding of light nuclei, but makes the nuclear matter energy worse.
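The counting behind the cyclic sum (six orderings of three particles collapsing to three independent terms once V(i : jk) = V(i : kj)) can be made explicit with a short enumeration:

```python
from itertools import permutations

# Each ordering (i, j, k) labels a term V(i : jk); the symmetry
# V(i : jk) = V(i : kj) identifies orderings sharing the first index,
# so the pair (j, k) can be treated as an unordered set.
terms = {(i, frozenset((j, k))) for i, j, k in permutations((1, 2, 3))}

print(len(terms))  # 3 independent terms: V(1:23), V(2:13), V(3:12)
```

For A particles the same counting gives A(A−1)(A−2)/2 independent triplet terms, which is why the triple sum in the hamiltonian is restricted to k > j > i with the cyclic sum taken inside each triplet.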
The purely phenomenological repulsive term V_R prevents nuclear matter from being overbound at large density. The V_2π term was first introduced by Fujita and Miyazawa [67] to describe the process whereby two pions are exchanged among nucleons and a ∆ resonance is excited in the intermediate state, as shown in the Feynman diagram of Fig. 1.4. It can be conveniently written as a sum of an anticommutator and a commutator term. The ξ(x) are short-range cutoff functions defined by

\[ \xi(x) = 1 - e^{-c x^2} \,. \]

In the UIX model, the cutoff parameter is kept fixed at c = 2.1 fm⁻², the same value as in the cutoff functions appearing in the one-pion exchange term of the Argonne v_18 two-body potential. On the other hand, A_2π is varied to fit the observed binding energy of ³H. The three-nucleon interaction depends on the choice of the NN potential; for example, using the Argonne v_18 model one gets A_2π = −0.0293 MeV. The repulsive term V_R is spin-isospin independent and can be written in the simple form

\[ V_R = U_0 \sum_{\rm cyc} T^2(m_\pi r_{12})\, T^2(m_\pi r_{23}) \,. \tag{1.31} \]

Figure 1.5: Energies per particle of ground and low-lying excited states of light nuclei, resulting from GFMC calculations, computed with the AV18 and AV18+UIX interactions, compared to experiment [24]. The Monte Carlo statistical errors are represented by the light shaded region. The red dashed lines indicate the breakup thresholds for each model or experiment.

The two parameters A_2π and U_0 have different values for v′_8 and v′_6. We disregard such small differences in this analysis, which is mostly aimed at testing the quality of the density-dependent reduction of the UIX three-body potential, rather than at reproducing empirical data. As displayed in Fig. 1.5, showing the results of the GFMC calculations of Ref. [24], when the UIX potential is used the binding energy of ³H is exactly reproduced by construction, and that of ⁴He turns out to be very close to the experimental value. A significant improvement is also observed for the binding of the p-shell nuclei.
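A sketch of the cutoff-regularized radial functions, assuming the standard UIX Gaussian cutoff ξ(r) = 1 − e^{−cr²} with c = 2.1 fm⁻² as quoted in the text, applied to the one-pion exchange functions Y and T:

```python
import math

m_pi = 0.7  # fm^-1, pion mass in inverse fm (m_pi c / hbar ~ 140 / 197.3)
c = 2.1     # fm^-2, UIX cutoff parameter

def xi(r):
    """Assumed Gaussian short-range cutoff: 1 - exp(-c r^2)."""
    return 1.0 - math.exp(-c * r**2)

def Y(r):
    x = m_pi * r
    return math.exp(-x) / x * xi(r)

def T(r):
    x = m_pi * r
    return (1.0 + 3.0 / x + 3.0 / x**2) * math.exp(-x) / x * xi(r)

# The cutoff kills the 1/x and 1/x^3 singularities at the origin
# while leaving the long-range one-pion exchange tail untouched.
print(xi(0.1), xi(3.0))  # ~0.02 near the origin, ~1 at large r
```

The Gaussian form makes the regularization act only below roughly 1/√c ≈ 0.7 fm, i.e. inside the repulsive-core region where the bare Yukawa functions would diverge.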
However, the AV18+UIX model provides more and more underbinding as A and A − Z increase. In particular, a problem with the isospin dependence of the interaction model is revealed by the fact that ⁸He is more underbound than ⁸Be. With respect to the pure AV18 case, the relative stability of the lithium nuclei is improved, but the Borromean helium nuclei are still unbound. Additional GFMC calculations of higher-lying excited states, not shown in Fig. 1.5, indicate that the AV18+UIX model underestimates the splittings among spin-orbit partners, such as the 3/2⁻ and 1/2⁻ states in ⁵He. Moreover, the phenomenological UIX model fails to explain the measured nd doublet scattering length, ²a_nd [14], as well as the proton analyzing power in p-³He scattering, A_y [15].

Chiral inspired models of three-nucleon forces

In recent years, the scheme based on ChPT has been extensively employed to obtain three-nucleon potential models [39,40,68,69]. The main advantage of this approach is the possibility of treating the nucleon-nucleon (NN) potential and the TNF in a more consistent fashion, as the parameters c_1, c_3 and c_4, fixed by NN and πN data, are also used in the definition of the TNF. In fact, the next-to-next-to-leading-order (NNLO) three-nucleon interaction only involves two parameters, namely c_D and c_E, that do not appear in the NN potential and have to be determined by fitting low-energy three-nucleon (NNN) observables. Unfortunately, however, πN and NN data still leave some uncertainties on the c_i's, which cannot be completely determined by NNN observables. A comprehensive comparison between purely phenomenological and chiral inspired TNF, which must necessarily involve the analysis of both pure neutron matter and symmetric nuclear matter, is made difficult by the fact that chiral TNF are derived in momentum space, while many theoretical formalisms are based on the coordinate-space representation.
The local, coordinate-space form of the chiral NNLO three-nucleon potential, hereafter referred to as NNLOL, can be found in Ref. [41]. However, establishing a connection between the momentum- and coordinate-space representations involves some subtleties. The authors of Ref. [39] have shown that the NNLO (momentum-space) three-body potential obtained from the chiral Lagrangian, when operating on an antisymmetric wave function, gives rise to contributions that are not all independent of one another. To obtain a local potential in coordinate space, one has to regularize using the momenta transferred among the nucleons. This regularization procedure makes all the terms of the chiral potential independent, so that, in principle, all of them have to be taken into account. The potential would otherwise be somewhat inconsistent, as becomes apparent in nuclear matter calculations, which involve larger momenta. A comparative study of different three-nucleon local interactions (the Urbana IX (UIX), the chiral inspired revision of the Tucson-Melbourne (TM′), and the chiral NNLOL three-body potentials), used in conjunction with the local Argonne v_18 NN potential, has recently been performed [45]. The authors of Ref. [45] used the hyperspherical harmonics formalism to compute the binding energies of ³H and ⁴He, as well as the nd doublet scattering length, and found that the three-body potentials do not simultaneously reproduce these quantities. Selecting different sets of parameters for each TNF, they were able to obtain results compatible with experimental data, although a unique parametrization for each potential has not been found. This problem is a consequence of the fact that the three low-energy observables considered are not enough to completely fix the set of parameters entering the definition of the potentials. In a chiral theory without ∆ degrees of freedom, the first non-vanishing three-nucleon interactions appear at NNLO in the Weinberg power counting scheme [63,64].
The interaction is described by three different physical mechanisms, corresponding to the three different topologies of Feynman diagrams drawn in Fig. 1.6 [39]. The first two diagrams correspond to two-pion exchange (TPE) and one-pion exchange (OPE) with the pion emitted (or absorbed) by a contact NN interaction. The third diagram represents a contact three-nucleon interaction,

\[ \hat V = \hat V_1 + \hat V_3 + \hat V_4 + \hat V_D + \hat V_E \,. \tag{1.32} \]

The first three terms, \hat V_1, \hat V_3 and \hat V_4, come from the TPE diagram and are related to πN scattering. In particular, \hat V_1 describes the S-wave contribution, while \hat V_3 and \hat V_4 are associated with the P-wave. The other terms, \hat V_D and \hat V_E, are the OPE and contact contributions, respectively. Their momentum-space expressions are given in Ref. [39]. The strengths of the TPE, OPE and contact terms, V_0, V_0^D and V_0^E, are expressed in terms of g_A = 1.29, the axial-vector coupling constant, F_π = 92.4 MeV, the weak pion decay constant, and Λ_χ, the chiral symmetry-breaking scale, of the order of the ρ-meson mass. The low-energy constants (LEC) c_1, c_3 and c_4 also appear in the sub-leading two-pion exchange term of the chiral NN potential and are fixed by πN [70,71] and/or NN [5] data. The parameters c_D and c_E are specific to the three-nucleon interaction and have to be fixed using NNN low-energy observables, such as the ³H binding energy and the nd doublet scattering length ²a_nd [39]. The many-body methods employed in our work, namely FHNC/SOC and AFDMC, require a local expression of the three-body potential in coordinate space, which can be obtained performing the Fourier transform [41], where the cutoff functions F_Λ, defined as

\[ F_\Lambda(q^2) = e^{-q^4/\Lambda^4} \,, \tag{1.36} \]

can depend on the momenta transferred among the nucleons, q_i, only. This feature has important consequences for the OPE and contact terms, which will be discussed at a later stage. The cutoff Λ in the previous equation, while not required to be the same as Λ_χ, is of the same order of magnitude. Choosing the fourth power of the momentum in Eq.
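The behaviour of the local regulator, assuming the commonly used q⁴ Gaussian-type form F_Λ(q²) = exp(−q⁴/Λ⁴) (consistent with the "fourth power of the momentum" mentioned for Eq. (1.36)), can be illustrated numerically:

```python
import math

Lambda = 500.0  # MeV, the momentum cutoff quoted in the text

def F(q):
    """Assumed local regulator: exp(-q^4 / Lambda^4)."""
    return math.exp(-((q / Lambda) ** 4))

# Flat at low momenta (the induced corrections are beyond NNLO in the
# chiral expansion) and strongly suppressing the potential for q >~ Lambda.
for q in (100.0, 300.0, 500.0, 800.0):
    print(f"F({q:.0f} MeV) = {F(q):.4f}")
```

Because the argument is quartic rather than quadratic, the corrections induced at low momenta start at order (q/Λ)⁴, which is precisely why they do not contaminate the NNLO terms of the expansion.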
(1.36) is therefore convenient, as the regulator generates powers of q/Λ which are beyond NNLO in the chiral expansion. The Fourier transform can be readily computed, and provides the coordinate-space representation of the chiral three-body potential, in which the strengths W_0, W_0^D and W_0^E are obtained multiplying the corresponding V_0, V_0^D and V_0^E by a factor m_π⁶/(4π)². The radial functions appearing in the resulting expressions are defined in terms of z_n, proportional to the Z_n introduced in Ref. [18], with j_0(x) = sin(x)/x. Note that, due to the form of the cutoff function of Eq. (1.36), the radial functions are not known in analytic form, and must be obtained from a numerical integration. Recently, the authors of Ref. [45] have studied the low-energy NNN observables using the hyperspherical harmonics formalism and a nuclear hamiltonian including the NNLOL potential and the Argonne v_18 [3] two-body interaction. This mixed approach requires a fit of all the LECs appearing in the chiral three-body interaction, not only c_D and c_E. Hence, the consistency in the treatment of two- and three-nucleon interactions that would be achievable using a hamiltonian in which all potentials are derived from chiral effective theory is lost. Nevertheless, it is possible to exploit chiral perturbation theory to assess the importance of the different terms contributing to the TNF. This procedure allows one to select the most relevant spin-isospin structures entering the three-nucleon potential, as well as the shape of the corresponding radial functions. Within the chiral approach, to obtain a potential yielding a fit to the experimental data of accuracy comparable to that achieved by the Argonne v_18 model, one has to include terms up to NNNLO [9,10]. As a consequence, a fully consistent calculation in principle requires a NNNLO three-body interaction, the expression of which has only recently been derived in Refs. [40,68].
It turns out that some of the terms appearing at NNNLO can be taken into account by shifting the constants c_i by about 20-30% with respect to their original values [40]. This procedure has been followed in precision studies of TNF. By fitting all the LECs of the NNLOL interaction, the authors of Ref. [45] have improved upon the NNLO approximation, as they have effectively included the corrections to the c_i appearing at the NNNLO level. The best-fit parameters for the ³H and ⁴He binding energies and for the nd scattering length, ²a_nd, are listed in Table 1.2. For all the different parametrizations, denoted by NNLOL_i, c_1 and Λ_χ have been fixed to their original values, 0.00081 MeV⁻¹ and 700 MeV, respectively [39]. The momentum cutoff of Eq. (1.36) has been set to 500 MeV [45]. As noticed in Ref. [72], despite the different underlying physical mechanisms, both the TM and UIX three-nucleon interactions can be written as a sum of terms of the same form as those appearing in Eq. (1.37). The differences among NNLOL, TM and UIX lie in the constants and in the radial functions. The TM′ potential only involves the V_1, V_3 and V_4 contributions [73]. The cutoff function for this potential is not the same as in Eq. (1.36); its form allows for the analytical integration of Eq. (1.39), yielding the radial functions in closed form. The TM′ potential corresponds to a particular choice of the strength constants (compare to Eq. (1.37)), a, b and c being the parameters entering the definition of the TM′ potential [73]. The authors of Ref. [45] have determined the parameters of the TM′ potential by fitting the same set of low-energy NNN observables employed for the NNLOL potential. In order to get a better description of the experimental data, they introduced a repulsive three-nucleon contact term, similar to the chiral V_E but with τ_12 omitted. The corresponding radial function can be computed analytically from Eq. (1.39). As in the original paper [18], in Ref.
[45] the value of the pion-nucleon coupling constant is set to g 2 = 179.7 MeV, the pion mass is m π = 139.6 MeV and the nucleon mass is defined through the ratio m N /m π = 6.726. The symmetry breaking scale Λ χ of Eq. (1.45) has the same value, 700 MeV, used for the NNLOL potential. The parameters of the TM ′ potentials, TM ′ i , that according to Ref. [45] reproduce the binding energies of 3 H and 4 He and 2 a nd , are listed in Table 1.3. It turns out that V 1 gives a very small contribution to the low energy NNN observables. Therefore, the parameter a has been kept at its original value −0.87 m −1 π . It can be shown that the anticommutator and commutator terms of the UIX potential, displayed in Eq. (1.28), correspond to V 3 and V 4 of Eq. (1.37), provided the following relations between the constants (1.47) and the radial functions Y (r) = y(r) + (r 2 /3) t(r), T (r) = (r 2 /3) t(r) (1.48) are satisfied. On the other hand, the repulsive term of the UIX potential of Eq. (1.31) is equivalent to the V E term appearing in the TM ′ potential and (aside from the τ 12 factor) in the NNLOL chiral potential if the following relations hold In Ref. [45] it has been found that the original parametrization of the UIX potential underestimates 2 a nd and slightly overbinds 4 He. The authors of Ref. [45] have calculated the differential cross section and the vector and tensor analyzing powers of p − d scattering at E lab = 3 MeV for the different parametrizations of the NNLOL and TM ′ potentials. They found that all of them lead to underestimating A y (the so-called A y puzzle remains unsolved) and T 11 , while the central minimum in T 21 is always overestimated. However, the NNLOL model provides a slight improvement with respect to the UIX potential in the description of the polarization observables. On the other hand, no substantial modifications from the UIX results are given by the TM ′ interactions.
Many body description of nuclear matter

Ab initio nuclear many-body approaches are based on the premise that nuclear dynamics can be modeled by studying exactly solvable systems, having mass number A ≤ 3. This is a most important feature since, due to the complexity of strong interactions and to the prohibitive difficulties associated with the solution of the quantum mechanical many-body problem, theoretical calculations of nuclear observables generally involve a number of approximations. Hence, models of nuclear dynamics extracted from analyses of the properties of complex nuclei are plagued by the systematic uncertainty associated with the use of a specific approximation scheme. In ab initio approaches, the hamiltonian Ĥ entering the time-independent many-body Schrödinger equation is the one defined in Section 1.2, without any additional adjustable parameters. In the first Section of this Chapter we discuss the independent particle model, and argue that it is not suitable to describe the correlation structure induced by the nuclear hamiltonian. The following Sections are devoted to more advanced approaches, allowing one to take into account correlation effects. We will focus on the variational method, based on correlated basis function (CBF) theory, and the diffusion Monte Carlo technique.

Mean field approach: the Hartree-Fock method

D. R. Hartree [74], V. A. Fock [75] and J. C. Slater [76] proposed to use as a starting point toward the solution of the many-body Schrödinger equation describing atomic electrons the central field approximation. Within this approximation, based on the independent particle model, each nucleon moves in a single-particle effective potential representing the average effect of the interactions with the other A − 1 nucleons.
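For reference, the central-field scheme just described leads to the standard Hartree-Fock equations; in textbook form (reported here as a sketch, not as the exact expressions of Appendix A):

```latex
\hat h_{\rm HF}\,\psi_{n_i}(x_i) = \epsilon_i\,\psi_{n_i}(x_i)\,, \qquad
\hat h_{\rm HF} = -\frac{\nabla^2}{2m} + \hat v_{\rm HF}\,,
\qquad
(\hat v_{\rm HF}\,\psi_{n_i})(x_i) = \sum_{n_j \le A}\int dx_j\,
\psi^{*}_{n_j}(x_j)\,v(x_i,x_j)\,
\bigl[\psi_{n_j}(x_j)\,\psi_{n_i}(x_i) - \psi_{n_i}(x_j)\,\psi_{n_j}(x_i)\bigr]\,,
```

with the direct and exchange terms appearing in the square bracket; the dependence of v̂_HF on the occupied orbitals is what makes the problem self-consistent.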
Each nucleon is described by its own wave function, ψ n i (x i ), an eigenfunction of the hermitian operator ĥ HF where the generalized coordinate x i = {r i , σ i , τ i } represents both the position and the spin-isospin variables of the i-th nucleon. The operator ĥ HF , called the one-particle Fock hamiltonian, is given by where v̂ HF (r) is the single particle Hartree-Fock potential, built from the states ψ n i (x i ) using a self-consistent iterative procedure, based on the variational principle, described in Appendix A. Within this approximation, the many-particle ground state for a system made of A nucleons is a single Slater determinant Ψ of one-nucleon states where A is the antisymmetrization operator. As shown in Appendix A, the single particle energy ǫ i is given by where dx j stands for integration over the coordinate r j and trace over the spin and isospin variables of the j-th nucleon. The total energy of the system, E[Ψ], is not the sum of the single particle energies, but rather A physical meaning to the single particle energies can be given through Koopmans' theorem. Assuming that the spin orbitals of the A − 1 system are the same as those of the A system, from the previous equation it can be shown that ǫ n i is the separation energy of the nucleon in the state ψ n i As explained before, the self-consistent field method allows for the determination of the spin-orbitals of the A occupied states, {ψ 1 , . . . , ψ A }, with single-particle energies ǫ 1 , . . . , ǫ A , ǫ A being the Fermi energy of the system. The remaining eigenfunctions of ĥ HF , which satisfy Eq. (A.9), are associated with unoccupied (virtual) states having single particle energies larger than the Fermi energy. Unlike {ψ 1 , . . . , ψ A }, they are not determined in a self-consistent fashion, as they do not enter the definition of the Fock hamiltonian.
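The total-energy relation and Koopmans' theorem quoted above take, in standard form (sign conventions may differ from those adopted in Appendix A):

```latex
E[\Psi] = \sum_{i \le A} \epsilon_i
        - \frac{1}{2} \sum_{i,j \le A} \langle ij |\, v \,| ij - ji \rangle \,,
\qquad
\epsilon_{n_i} = E[\Psi_A] - E\bigl[\Psi_{A-1}^{(n_i^{-1})}\bigr]\,,
```

where Ψ_{A−1}^{(n_i^{-1})} denotes the (A − 1)-particle determinant with a hole in the orbital ψ_{n_i}, built from the same spin orbitals as Ψ_A; the double-counting term with the factor 1/2 is what makes E[Ψ] differ from the plain sum of the ǫ_i.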
The key point of the Hartree-Fock approach is that occupied and virtual states provide a natural basis to describe the many-body system [77]. While the many-body ground state is the Slater determinant of occupied single-particle states, Eq. (2.4), excited many-body states are constructed by removing n occupied states from the Slater determinant and replacing them with n virtual states. Such excited states are called n − particle n − hole (np nh) states and are eigenstates of the Hartree-Fock hamiltonian, also known as "Fokian" (2.8) The Hartree-Fock procedure is the basis, for instance, of the nuclear shell model, which has been successfully applied to explain many nuclear properties [78,79,80]. As far as nuclear matter is concerned, the single particle wave functions are known to be plane waves, as dictated by translation invariance. Therefore, a uniform system can be conveniently described within a box of volume V with periodic boundary conditions [52], using the wave functions where η α i ≡ χ σ i χ τ i represents the product of Pauli spinors describing the spin and the isospin of particle i. In order to satisfy the periodic boundary conditions, the wave vector k is discretized; for a cubic box of side L, it turns out that The momentum of the occupied states is smaller than the Fermi momentum k F , which is related to the density of the system, ρ, through k F = (6π 2 ρ/ν) 1/3 , where ν is the spin-isospin degeneracy (ν = 2 for PNM, ν = 4 for SNM). The plane waves of Eq. (2.9) are already solutions of the Hartree-Fock equations; in other words, they are the best single-particle wave functions for uniform systems. A remarkable feature of nuclear matter is that the starting single particle wave functions are known and simple, unlike what happens, for instance, in finite nuclei, where, due to the lack of translation invariance, even generating the single particle wave functions is a difficult task, as it requires the solution of the Hartree-Fock equations [52].
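A quick numerical check of the relation k_F = (6π²ρ/ν)^{1/3}, evaluated at the empirical saturation density ρ ≈ 0.16 fm⁻³ (an assumed reference value used only for illustration):

```python
import math

def fermi_momentum(rho, nu):
    """Fermi momentum k_F = (6 pi^2 rho / nu)^(1/3); rho in fm^-3."""
    return (6.0 * math.pi**2 * rho / nu) ** (1.0 / 3.0)

kF_snm = fermi_momentum(0.16, 4)  # SNM: spin-isospin degeneracy nu = 4
kF_pnm = fermi_momentum(0.16, 2)  # PNM: spin degeneracy nu = 2
# At equal density, k_F(PNM) = 2^(1/3) * k_F(SNM).
```

At this density the SNM Fermi momentum comes out close to 1.33 fm⁻¹, the value usually quoted for nuclear matter at equilibrium.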
The single particle energy of nuclear matter can be easily derived by substituting the wave function of Eq. (2.9) in Eq. (A.13). In the case of SNM (ν = 4), for potentials of the form of Argonne v 18 , carrying out the summation over the occupied states with |k j | ≤ k F yields where the Slater function is given by Summing over the spin-isospin states of Eq. (2.5) amounts to tracing over the spin-isospin variables of the nucleon j. Such a trace is normalized, as it incorporates the factor 1/ν coming from the summation over the momentum k j . The factor A arising from the same sum, divided by the wave function normalization factor V, produces the factor ρ appearing in Eq. (2.11). Standard perturbation theory performed in the basis of the Hartree-Fock solutions cannot cope with the repulsive core of the nuclear force, which causes individual terms of the perturbative expansion to diverge [20]. As an example [81], consider the scalar repulsive potential v(r) = |v 0 | for |r| ≤ r 0 and v(r) = 0 for |r| > r 0 (2.13). The single particle energy computed from Eq. (2.11) using this potential is seen to be of order ρ r 0 3 |v 0 |; if the potential approaches the hard-sphere interaction, similar to the strong repulsive core of the nuclear interaction, the single particle energy grows without bound. In other words, since the eigenfunctions of the Fock hamiltonian are the same as those of the noninteracting Fermi gas, the many-body wave function largely differs from the exact ground state associated with the nuclear hamiltonian. Standard perturbation theory in such a basis cannot be expected to converge, as the matrix elements of the nuclear hamiltonian between np nh states are not perturbative corrections to the ground state expectation value. To circumvent this problem, one can follow two different strategies, leading to either G-matrix or correlated basis function (CBF) perturbation theory.
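The Slater function appearing in the single-particle energy has the standard free-Fermi-gas form ℓ(x) = 3(sin x − x cos x)/x³ (assumed here); a small sketch with a Taylor fallback near x = 0:

```python
import math

def slater(x):
    """Slater function l(x) = 3 (sin x - x cos x) / x^3 of the free
    Fermi gas: l(0) = 1, and l(x) -> 0 for large x."""
    if abs(x) < 1e-4:
        return 1.0 - x * x / 10.0   # Taylor expansion near the origin
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3
```

In the exchange term the argument is k_F r_{ij}, so statistical correlations die out over a distance of order 1/k_F.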
Within the former approach, proposed by Brueckner [82,83,84,85], the bare potential v ij is replaced by a well behaved effective interaction, the G-matrix, which is obtained by summing up the series of particle-particle ladder diagrams. The physical basis of this theory was elucidated by Bethe [86], while Goldstone introduced the linked cluster expansion [87]. For a more recent review of the G-matrix approach, also known as the Brueckner-Bethe-Goldstone expansion, see Refs. [88,89]. In this Thesis we have been using the CBF approach, to which the following Section is devoted.

Correlated basis functions theory

Theories of Fermi liquids based on correlated basis functions are a natural extension of variational approaches, in which the trial ground state wave function is written in the form where F is a suitable many-body correlation operator. The simplest choice suitable for dealing with the strong short-range repulsions is the scalar correlator of the form known as the Jastrow correlator [81]. However, this choice for the correlation operator is only suitable for purely central potentials, such as those describing the interaction between 3 He atoms. For state-dependent potentials, like the Argonne nuclear interaction, spin-isospin dependent correlations, to be introduced at a later stage, are needed. The variational approach consists in the minimization of the expectation value of the hamiltonian, which is an upper bound to the true ground-state energy E 0 . For instance, in the purely central Jastrow case, minimizing E V allows for finding the radial function f (r ij ). Apart from the technical difficulties involved in finding the optimal radial function, it is clear that the resulting correlation function is small within the repulsive region of the NN potential.
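The Jastrow ansatz and the variational bound referred to above read, in their usual form:

```latex
|\Psi_V\rangle = \hat F\,|\Psi_0\rangle\,, \qquad
\hat F = \prod_{j>i} f(r_{ij})\,, \qquad
E_V = \frac{\langle \Psi_V|\hat H|\Psi_V\rangle}{\langle \Psi_V|\Psi_V\rangle}
\;\ge\; E_0\,.
```

The product over pairs builds the short-range suppression into the trial state: f(r) → 0 at small interparticle distance keeps the pair out of the repulsive core, while f(r) → 1 at large distance leaves the mean-field structure of Ψ₀ untouched.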
As noted in the review of Clark [90], historically, the development of the variational approach has been somewhat discouraged not only by the difficulties involved in the calculation of E V , potentially leading to violations of the variational principle, but also by a psychological obstacle: the embarrassing conceptual simplicity of the method, in other words, its lack of "snob appeal". Nevertheless, the variational approach succeeded in treating atomic helium in both the liquid and solid phases [91,92]. Although nowadays the numerical problem of solving the many-body Schrödinger equation for the ground state has been resolved, to a large extent, by the Green's function Monte Carlo method [93], this approach does not provide a quantitative understanding of the ground-state wave function [94]. However, knowledge of the analytic form of the ground-state wave function would be particularly useful to extend the microscopic theory to treat the elementary excitations and finite-temperature properties of helium liquids. A successful approach in this direction has been provided by the variational theory [95], including also the backflow correlation proposed by Feynman and Cohen [96]: a velocity dependent correlation, arising from the flow induced by a moving atom. As far as the nuclear many-body problem is concerned, this method is supported by a variety of experimental evidence [97,98] showing that short-range NN correlations are a fundamental feature of nuclear structure. The description of nuclear dynamics in terms of interactions derived in coordinate space appears to be the most appropriate, for both conceptual and technical reasons. First of all, correlations between nucleons are predominantly of spatial nature, in analogy with what happens in all known strongly correlated systems, like liquid 4 He. In addition, one needs to clearly distinguish the effects due to the short-range repulsion from those due to relativity.
The correlated basis theories of Fermi liquids are a natural extension of the variational approach. A non-orthogonal but complete set of correlated basis states can be defined as [99,100] where |Ψ n is the n − particle n − hole state of Eq. (2.8). The correlation operator F is determined by the variational calculation of the ground state energy. The variational energies E v n , although only E v 0 has been variationally estimated, are given by the diagonal matrix elements of the hamiltonian between correlated states The energies E v n are extensive quantities, as they are of order A, while excitation energies E v n − E v 0 are of order 1. In order to compute the perturbative corrections to the variational energies, the hamiltonian Ĥ is decomposed in two terms where, as will become clear in the following, neither Ĥ 0 nor Ĥ 1 is a hermitian operator. The "unperturbed" hamiltonian Ĥ 0 is defined through the correlated basis states and the variational energies, in such a way that Notice that, since the correlated states are not orthogonal, Ĥ 0 is not diagonal in this basis The metric matrix N nm is defined by where S nm is the overlap matrix, with It is convenient to distinguish the diagonal part from the non-diagonal part of the hamiltonian (2.22). Assuming that the non-diagonal elements of both the metric and the hamiltonian are small, there have been two fundamental ways of treating the problem perturbatively. One way consists in diagonalizing H nm as it is, without bothering to orthogonalize the basis, using nonorthogonal perturbation theory. The other way is to employ some procedure to orthogonalize the basis first, and then apply standard perturbation theory. Within the former approach, the authors of Ref.
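In a common notation (an assumption on the exact conventions of Refs. [99,100]), the correlated states, variational energies, overlap and metric matrices read:

```latex
|n) = \frac{\hat F\,|\Psi_n\rangle}
           {\langle \Psi_n|\hat F^{\dagger}\hat F|\Psi_n\rangle^{1/2}}\,,
\qquad E^{v}_{n} = (n|\hat H|n)\,,
\qquad S_{nm} = (n|m)\,, \qquad N_{nm} = S_{nm} - \delta_{nm}\,.
```

Since the same F correlates every basis state, the states |n) are normalized but not mutually orthogonal, and N_{nm} measures precisely the departure from orthogonality that the perturbative schemes discussed next must handle.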
[101], introducing the so-called diagonal metric, were able to show that the perturbative corrections to the variational energy E v n can be cast in a form that is formally identical to standard perturbation theory in an orthogonal basis where E n is the exact eigenvalue of the full hamiltonian Ĥ, the eigenstates of which are denoted as |Ψ n }: Ĥ |Ψ n } = E n |Ψ n } (2.27). The differences with respect to the orthogonal case are enclosed in the matrix V nm , the perturbative expansion of which reads Replacing V nm in Eq. (2.26) with its expansion leads to Earlier derivations of the latter result, not involving the diagonal metric formalism, can be found in Refs. [102,103]. As in ordinary many-body perturbation theory, each order of the perturbative expansion diverges with the number of particles, A. However, in Ref. [104] it has been shown that divergent terms appearing at different orders cancel each other. A major difference with respect to ordinary many-body perturbation theory is that there is an energy dependence in the matrix element V nm of Eq. (2.26), arising from the non-orthogonality of the CBF states. Another peculiar feature of CBF perturbation theory is the fact that V nm is a many-body operator, as, through F, it incorporates the effect of the correlations among all the particles of the system. In the earlier calculations [105,106], where the correlator was taken to be of the simple Jastrow form of Eq. (2.15), the second order term of the perturbative corrections has been found to be large. The NN potential has indeed a complicated spin-isospin structure, which cannot be accounted for by considering radial correlations only. In particular, since this wave function is spherically symmetric, the expectation value of the tensor component of the NN interaction averages to zero. In the pure Jastrow case, the CBF states are not sufficiently close to the exact eigenstates of the hamiltonian, and more terms in the perturbative series need to be calculated.
In liquid 3 He or in the electron gas, where the potential is purely central, the Jastrow CBF is much better justified [107]. A generalization of the Jastrow correlation operator, whose structure reflects the complexity of the NN interaction, has been proposed in Refs. [108,109,110] Note that the symmetrization operator S is needed to fulfill the requirement of antisymmetrization of the state |Ψ n , since, in general, [Ô p ij , Ô q ik ] ≠ 0. Since the first six operators present in the NN potential form a closed set, this choice for the correlation operator has a tremendous advantage in the analytic manipulations necessary to compute the energy per particle. As will be shown in Section 2.5, the product of any two of the O p , p ≤ 6, can be reduced to a linear combination of elements from this set. In this Thesis we will stick to this choice for F, although in Ref. [111] the correlation operator has been extended to include spin-orbit correlations. In fact, the variational choice of Eq. (2.31) (F 6 model) implies that spin-orbit correlations are neglected. We motivate this choice mainly with the technical difficulties of consistently including spin-orbit correlations; in spite of the calculations performed in Ref. [111], we believe that the contribution of the spin-orbit correlation is still an open problem. In several FHNC/SOC calculations of the binding energy of SNM the spin-orbit terms of the potential have been included only perturbatively. Moreover, in all the FHNC/SOC calculations of the linear response [33,34], optical potential and Green's function [35,36] of SNM, whose results have been used to explain a variety of experimental data, spin-orbit correlations have been neglected. Before moving to the cluster expansion technique, which has been developed to compute the matrix elements of the hamiltonian, it is worth spending a few words on orthogonal CBF theory.
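The closure of the operator algebra can be checked explicitly in the spin sector. The sketch below builds σ₁·σ₂ and the tensor operator S₁₂ (with the quantization axis taken along ẑ, a choice made here purely for illustration) as 4×4 matrices and verifies two standard reduction identities, e.g. S₁₂² = 6·1 + 2 σ₁·σ₂ − 2 S₁₂:

```python
def kron(a, b):
    """Kronecker product of two square matrices (nested lists)."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lincomb(coeffs, mats):
    n = len(mats[0])
    return [[sum(c * m[i][j] for c, m in zip(coeffs, mats))
             for j in range(n)] for i in range(n)]

def close_to(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps
               for i in range(len(a)) for j in range(len(a)))

I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

ONE  = kron(I2, I2)
SDOT = lincomb([1, 1, 1], [kron(SX, SX), kron(SY, SY), kron(SZ, SZ)])
S12  = lincomb([3, -1], [kron(SZ, SZ), SDOT])  # tensor op., axis along z

# (sigma1.sigma2)^2 = 3*1 - 2 sigma1.sigma2,  S12^2 = 6*1 + 2 sigma1.sigma2 - 2 S12
ok1 = close_to(matmul(SDOT, SDOT), lincomb([3, -2], [ONE, SDOT]))
ok2 = close_to(matmul(S12, S12), lincomb([6, 2, -2], [ONE, SDOT, S12]))
```

Products of the operators thus fall back into the linear span of the original set, which is the property exploited in the FHNC/SOC manipulations mentioned above.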
As a matter of fact, a clear analysis of the convergence properties of the non-orthogonal CBF perturbation theory has not been performed yet. For instance, the truncation of the series at some perturbative order leads to spurious non-orthogonality contributions, whose effects may not be negligible. Moreover, the calculation of quantities other than the ground-state energy, like the response function, is made difficult by the fact that properly orthogonalized eigenvectors cannot be easily extracted from the nonorthogonal CBF basis. If one attempts to orthogonalize the CBF states using the standard Löwdin transformation [112], the resulting states are worse than the original ones. For instance, the expectation value of the hamiltonian in the Löwdin orthogonal ground state is larger than E v 0 . To avoid this inconvenience, a two-step orthogonalization procedure, which preserves the variational diagonal matrix elements of the hamiltonian and allows for using ordinary orthogonal perturbation theory in zero temperature calculations, has been developed [113].

Cluster expansion formalism

Both correlation operators of Eqs. (2.15) and (2.31) are defined in such a way as to possess the cluster property. This means that if the system is split in two (or more) subsets of particles that are moved far away from each other, the correlation operator factorizes into a product of two (or more) correlation operators, in such a way that only particles belonging to the same subset are correlated. For instance, consider two subsets, say i 1 , . . . i m and i m+1 , . . . i A ; the cluster property implies (2.32) The above property allows for expanding the matrix elements of the hamiltonian (or of any other many-body operator) between CBF states in a sum of terms involving an increasing number of particles, known as clusters.
Both analytic [90,106] and diagrammatic [114,115,116] cluster expansion formalisms are present in the literature; moreover, different classification schemes have been adopted, corresponding to different choices for the smallness parameters of the perturbative expansion. In the calculation of the expectation value of any many-body operator it is convenient to perform separate cluster expansions for the numerator and the denominator, the latter arising from the normalization of CBF states It is a general property of the cluster expansion, to be discussed in detail below, that divergent terms coming from the expansion of the numerator and of the denominator cancel. The two following subsections will be devoted to the description of the original Fantoni Rosati (FR) diagrammatic cluster expansion formalism [114,115] and to its generalization [117,118], developed to deal with spin-isospin dependent correlation operators.

Fantoni Rosati cluster expansion and FHNC summation scheme

The FR cluster expansion has been obtained through a generalization of the concepts underlying the Mayer expansion scheme, originally developed to describe classical liquids [119], to the case of quantum Bose and Fermi systems. Within the FR approach, both the term F † Ĥ F of the numerator N nm and the term F † F of the denominator D nm associated with the expectation value of the hamiltonian are expanded in terms of Notice that, for the scalar correlation operator of Eq. (2.15) to respect the cluster property, one can impose The variational parameter d c , to be fully explained later on, is the central healing distance, encompassing the fact that when two particles are farther apart than d c they are no longer correlated. Hence, the quantity h(r ij ) can be seen as a smallness parameter for the cluster expansion, as is indeed the case in the "power-series" (PS) expansion scheme.
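The quantities left implicit above are, in the usual FR notation (assumed here), the basic smallness function and the healing condition:

```latex
h(r_{ij}) = f^{2}(r_{ij}) - 1\,, \qquad
f(r \ge d_c) = 1 \;\;\Longrightarrow\;\; h(r \ge d_c) = 0\,,
```

so that h(r) vanishes identically beyond the healing distance, and every power of h attached to a diagram confines the corresponding pair of particles to a finite correlation volume.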
Two-body distribution function

In the calculation of the ground-state expectation value of any two-body scalar operator, it is very useful to employ the scalar two-body distribution function, g c (r 1 , r 2 ), defined as (2.36) In terms of g c (r 1 , r 2 ), the expectation value of the two-body potential reads For the first equality we have exploited the symmetry property of the wave function, which is due to the fact that F is symmetric and Ψ 0 antisymmetric in the generalized particle coordinates. Since nuclear matter is uniform, g c (r 1 , r 2 ) = g c (r 12 ), implying that v diverges with the number of particles. However, the potential energy per particle is finite and reads v /A = (ρ/2) ∫ dr 12 g c (r 12 ) v(r 12 ), which can be easily derived by integrating Eq. (2.36) and using the translation invariance of g c (r 12 ). Note that the latter result is a consequence of the fact that the scalar two-body distribution function can be interpreted as the joint probability of finding two particles with coordinates r 1 and r 2 . Following [100], in the following subsections we will provide a detailed description of the FR cluster expansion of the scalar two-body distribution function.

Cluster decomposition of F † F

In the scalar Jastrow case the product of the correlation operators reduces to (2.40) It is convenient to put aside the correlation between the interacting particles, denoted as "active correlation", while the others are called "passive correlations". Without loss of generality we can write The generic cluster term f 2 (r 12 )X (n) (r 3 , . . . , r n ) correlates the positions of the two interacting particles and of the n − 2 medium particles, and should be considered as an n-body operator. For the sake of clarity we give the explicit expression of the first cluster terms

Expansion of the numerator in cluster diagrams

The cluster expansion of the numerator can be performed by substituting the rhs of Eq. (2.41) in Eq.
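The per-particle formula v/A = (ρ/2) ∫ dr g_c(r) v(r) is easy to evaluate numerically. The sketch below uses a toy Gaussian potential and, as pair functions, either g ≡ 1 or the free Fermi-gas form g(r) = 1 − ℓ²(k_F r)/ν (both illustrative assumptions, not the correlated g_c of the text); the Pauli hole visibly reduces the magnitude of the attraction:

```python
import math

RHO, NU = 0.16, 4                              # density (fm^-3), degeneracy
KF = (6.0 * math.pi**2 * RHO / NU) ** (1.0 / 3.0)

def slater(x):
    return 1.0 - x * x / 10.0 if abs(x) < 1e-4 else \
        3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def v_per_particle(v, g, rho, r_max=15.0, n=30_000):
    """v/A = (rho/2) int d^3r g(r) v(r) = 2 pi rho int dr r^2 g(r) v(r)."""
    dr = r_max / n
    s = sum((i * dr)**2 * g(i * dr) * v(i * dr) for i in range(1, n + 1)) * dr
    return 2.0 * math.pi * rho * s

v_toy   = lambda r: -10.0 * math.exp(-r * r)       # toy attraction (MeV)
g_fermi = lambda r: 1.0 - slater(KF * r)**2 / NU   # free Fermi-gas g(r)

uncorr = v_per_particle(v_toy, lambda r: 1.0, RHO)  # g = 1 limit
pauli  = v_per_particle(v_toy, g_fermi, RHO)        # Pauli-correlated
```

For g ≡ 1 the integral has the closed form (ρ/2) v₀ π^{3/2}, which provides a check of the quadrature; replacing g by the Fermi-gas pair function depletes the short-range region and makes the energy less attractive.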
(2.36) The integration of the cluster term over the squared modulus of the Fermi-gas wave function, which is invariant under the exchange of any two particles, gives rise to a combinatory factor: ∫ dx 3 . . . dx A X (N ) (r 1 , r 2 ; r 3 , r 4 , . . . , r N )|Ψ 0 (X)| 2 . (2.44) Using the above result in Eq. (2.43) leads to where X (2) = 1. Note that we have introduced the mean-field N-body correlation function, defined as (2.46) We now proceed by integrating out the variables x N +1 , . . . , x A from g c M F N by using the orthogonality of the single particle states. As shown in Appendix B, extracting N particles from the Slater determinant of the ground state Ψ 0 yields an expression in which the minor of Ψ 0 appears, describing a system of A − N particles with holes n 1 < . . . < n N . The minors satisfy the following orthonormality condition With the help of the above equation, the mean-field N-body distribution function can be written in the form implying that if the number of particles, N, is larger than the number of quantum states, A, the mean-field N-body distribution function vanishes, i.e. We preliminarily remark that this property is crucial for the exact cancellation of the unlinked diagrams of the numerator with the denominator to take place. The antisymmetrization operator A can be written in the form For a uniform system like nuclear matter the single particle states are normalized plane waves, see Eq. (2.9). Thus, the two-particle exchange operator, defined by the relation can be written as the product of an operator acting on the spin-isospin degrees of freedom of the nucleons' wave function and an operator exchanging the radial coordinates of particles i and j. Because Pauli matrices are traceless, in the pure Jastrow case, when the traces of Eq. (2.49) are carried out, the exchange operator reduces to its central part, and one is left with an expression that can be written in terms of the Slater function, defined by (2.58) In the limit of infinite volume, the sum over the discrete momenta can be replaced by an integral.
Hence it can be easily shown that The first terms of the mean-field N-body distribution function read The factors 1/ν come from the normalization of the exchange operator of Eq. (2.56). In particular, producing a two-particle loop, ℓ 2 (r ij ), requires one exchange operator P ij with the associated factor 1/ν. For a loop involving n > 2 particles and n − 1 exchange operators, the corresponding factor is (1/ν) n−1 . Moreover, there are two possible orderings of the exchange operators producing loops having more than two particles exchanged, bringing an additional factor of 2. We are now ready to give the general structure of the cluster decomposition for the numerator of Eq. (2.45). It is very useful to do this pictorially, introducing the so-called "cluster diagrams". The diagrammatic rules are the following: • The diagrams consist of dots (vertices) connected by different kinds of correlation lines. Open dots represent the active (or interacting) particles (1 and 2), while black dots are associated with passive particles, i.e. those in the medium. Integration over the coordinates of a passive particle leads to the appearance of a factor ρ. • The dashed lines, representing the correlations h(r ij ) and denoted as "correlation lines", cannot be superimposed. • The statistical factor −ℓ(r ij )/ν, coming from the expansion of g M F n (r 1 , . . . , r n ), is represented by an oriented solid "exchange line". The exchange lines must form closed loops and, as can be readily seen from the expansion of A in terms of the exchange operators of Eq. (2.51), different loops cannot have common points. Hence, the total exchange pattern consists of one or more non-touching exchange loops. • Each solid point must be reached by at least one correlation line; in fact, in Eq. (2.45) each integration over r i is associated with a term X (N ) (r 1 , r 2 ; r 3 , . . . , r i , . . . , r N ).
The cluster terms have no specific prefactor, except those coming from the exchange rules and a factor ρ for each integration. One might wonder where the 1/(n − 2)! of Eq. (2.45) ended up. That factor is due to the counting of the permutations of the n − 2 internal points, and it is automatically taken into account by considering only topologically different graphs, or, in other words, by the fact that the labels of the solid points in the cluster diagrams are dummy indices. The only remnant of that factor is the inverse of what is usually called the "symmetry factor", s. This counts the permutations of the solid points' labels that, without renaming the integration variables, leave the cluster term unchanged. For instance, diagrams (a) and (b) of the figure differ only by a relabelling of the internal points; as a consequence we take into account only one of them and no prefactor appears. On the other hand, a prefactor s = 1/2 is associated with diagram (a) of Fig. 2.4, because the exchange of points 3 and 4 leads to an identical expression, even without relabelling the dummy variables. The reason for this lies in the constraint i < j in the expansion of F † F of Eq. (2.40), implying that a diagram analogous to diagram (2.4.a) with the points 3 and 4 exchanged does not appear in the cluster expansion. Cluster diagrams may be reducible or irreducible. The integrals corresponding to irreducible diagrams cannot be factorized. Obviously, an irreducible diagram must be linked, i.e. each pair of points must be connected by a sequence of lines, which can be both correlation and exchange lines. A reducible diagram, instead, factorizes into the cluster term of (2.1.a) multiplied by the cluster term of the added part. Translational invariance gives rise to a factor V for each unlinked part of the diagram except for the one containing the external points 1 and 2. Therefore, the order of magnitude of a cluster diagram is V Nu−1 , where N u is the number of unlinked parts.
For example, the order of magnitude of the linked diagram (2.1.a) is 1, while that of diagram (2.1.b), having two unconnected parts, is V.

Expansion of the denominator in cluster diagrams

The same procedure followed for the expansion of the numerator can be used for the denominator. However, in this case there are no interacting particles; hence F † F can be conveniently expanded as where the cluster term X (N ) correlates N particles. The explicit expressions for N = 2 and N = 3 are Substituting the expansion of F † F of Eq. (2.68) in the denominator of Eq. (2.36) and exploiting the invariance of |Ψ 0 M F | 2 under any two-particle exchange, one finds

Two-body distribution function as a sum of cluster diagrams

The ratio of Eq. (2.36) involves two infinite series of cluster terms, corresponding to the expansion of the numerator and of the denominator. Let us consider a generic n-body linked (reducible or irreducible) cluster diagram, L n , of the numerator [120], where each internal point is connected to the points 1 and 2 by at least one continuous path of correlation and/or exchange lines. Each cluster diagram of the numerator can be built as a product of L n times a factor U q , with q = A − n, representing the sum of all the q-body unlinked diagrams (2.71) An example of the above equation is depicted in Fig. 2.7, where the diagram (2.1.a) belongs to L 3 and the sum of the diagrams enclosed in round parentheses is U A−3 . Considering the expression of the expansion of the denominator, it is readily seen that Hence, one might naively think that the denominator cancels the disconnected diagrams of the numerator only in the thermodynamic limit, A → ∞. However, by using the vanishing of the mean-field N-body distribution function for N > A, discussed above, one proves the fundamental linked cluster property Because of this property, which is a common feature of diagrammatic expansion techniques, the divergent terms of the numerator and of the denominator cancel out, giving rise to finite physical quantities.
It has to be remarked that for Fermi systems the exact cancellation of unlinked diagrams still holds when spin-isospin dependent correlations are considered.

Perturbative schemes

In this subsection we briefly present two possible choices for the smallness parameter of the cluster expansion, namely the expansion in the number of points and the power series (PS) expansion.

Expansion in the number of points

Within this scheme, the order of magnitude of a linked diagram is given by the number of its internal points; expanding the two-body distribution function at order m amounts to retaining the diagrams with n ≤ m points. As a factor ρ^(n−2) is associated with n internal points, the smallness parameter is nothing but the density. This was the first scheme adopted for classifying diagrams of Fermi liquids; however, the series is rapidly convergent only in low-density regimes. This is not the case for nuclear matter, as found by the authors of Ref. [121]. They have shown that at the equilibrium density the 1–5 body cluster contributions to the energy of SNM are 22.1 MeV, 43.7 MeV, 10.8 MeV, 3.4 MeV, and 2.6 MeV, while those with n > 5 give 0.8 MeV. The computational cost of this approach increases exponentially with the number of internal points: actual calculations do not go beyond the three-body cluster contribution in the case of spin-isospin dependent operators. It has to be noted that the expectation value of the hamiltonian at any finite order of the expansion in the number of internal points is unbounded. The main reason for this lies in the normalization of the wave function, which is not properly taken into account. Nevertheless, as will be fully explained at a later stage, a procedure usually employed to obtain the correlation function f^p(r) consists in minimizing the two-body cluster contribution to the energy per particle, imposing constraints on the variational parameters of the correlation functions.
Power series (PS) expansion

Suppose each correlation line h(r_ij) is multiplied by a parameter α. The term of order m in the PS expansion scheme [114] of g_2(r_12) then corresponds to the coefficient of α^m in the sum of Eq. (2.74); of course, the zeroth order corresponds to the mean-field result. A given order of the power series expansion mixes different orders of the expansion in the number of points. For instance, the first order in PS includes all of the 2-body diagrams, ten 3-body diagrams, and nine 4-body diagrams. A remarkable feature of the PS scheme is that the sum rule of Eq. (2.39) for the two-body distribution function is satisfied at any order. A very simple argument to prove this statement, which can be found in [100], is the following. For any choice of α the sum rule reads ρ ∫dr_12 [g_2^α(r_12) − 1] = −1. (2.75) Since the mean-field two-body distribution function satisfies this sum rule, the integral of the remaining series must vanish for any choice of α within the convergence radius; thus the integral of each of the coefficients of α^m has to be zero. The drawback of the PS expansion is that it does not appropriately describe the short-range behavior of the pair function, which is crucial for the calculation of the binding energy. The factor f(r_12), although singled out in Eq. (2.41), does not in fact multiply all terms of the expansion.

Fermi Hyper-Netted Chain (FHNC) and RFHNC schemes

To allow for a good description of both the short- and the long-range behavior of the pair function, a scheme in which infinite sets of cluster diagrams are summed up is required. This goal is achieved by the FHNC summation procedure, two versions of which were originally proposed: one by Fantoni and Rosati (FR) [114,122,115] and the other by Krotscheck and Ristig [123,116].
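The sum-rule argument above can be written compactly. A sketch, assuming the sum rule of Eq. (2.39) takes the standard form ρ∫dr [g − 1] = −1:

```latex
\rho \int d\mathbf{r}_{12}\,\bigl[g_2^{\alpha}(r_{12})-1\bigr] = -1
\quad \forall\,\alpha ,
\qquad
g_2^{\alpha}(r_{12}) = g_2^{MF}(r_{12}) + \sum_{m\geq 1} \alpha^m\, g_2^{(m)}(r_{12})\, .
```

Since the mean-field term alone saturates the sum rule, subtracting it leaves $\sum_{m\geq 1} \alpha^m\, \rho\!\int d\mathbf{r}_{12}\, g_2^{(m)}(r_{12}) = 0$ for every α inside the convergence radius; a power series vanishing on an interval has all coefficients equal to zero, hence $\rho\!\int d\mathbf{r}_{12}\, g_2^{(m)}(r_{12}) = 0$ for each m separately.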
The two schemes are essentially complementary: the one by Krotscheck and Ristig is better suited to the treatment of long-range correlations, while the FR scheme is more convenient for the evaluation of the energy in the presence of strong short-range correlations. Since in this work we deal with the calculation of the expectation value of the potential, we have extensively used the FR approach, which we present for the case of a Fermi liquid and a wave function with Jastrow-type correlations (for which FHNC was originally developed). In what follows we will not assume that the reducible cluster diagrams cancel out, which is strictly true only for a uniform Fermi liquid described by scalar correlations, as rigorously proved in Ref. [114]. In Section 2.5.3 we will show how the cancellation mechanism works in the case of the 3-body cluster contributions. The first detailed classification of the cluster diagrams, limited to the bosonic case, can be found in the fundamental work of J.M.J. van Leeuwen, J. Groeneveld and J. de Boer [124], while the extension to Fermi systems is extensively analyzed in the more recent Refs. [100,120].

Vertex Corrected Irreducible (VIC) diagrams

In this subsection we will show that, when scalar correlations only are considered, reducible diagrams can be included in the calculation of the two-body scalar distribution function through vertex corrections to the irreducible diagrams. When the reducible diagrams were introduced, the correlation line h(r_34) was attached to the point 3 of diagram (2.1.a). It is readily seen that to the point 3 we can add all the possible linked one-body diagrams, or, in other words, all the linked diagrams contributing to the one-body correlation function g(r_3).
The net result is that the sum of all the reducible diagrams having diagram (2.1.a) as the irreducible part can be represented by diagram (2.1.a) with the vertex 3 renormalized, in the sense that it is vertex corrected by ξ_d, which for a translationally invariant system is a constant. A pictorial representation of g_c(r) is given in the corresponding figure. Note that there are no restrictions on the kind of cluster terms in the sum, because in diagram (2.1.a) no exchange lines reach the point 3, which is dubbed a "d" point (the label d comes from "dynamical correlations", the name originally given to the correlation lines). Conversely, if the reducibility point is reached by exchange lines, like, for example, the point 3 of diagram (2.1.b), which is an "e" point, then, to avoid the forbidden superposition of exchange lines, the cluster diagrams of the vertex correction need to be attached to point 3 through a correlation line. It follows that there are two kinds of vertex corrections: ξ_d and ξ_e, for reducibility points of type d and e, respectively. Actually, there is a third type of vertex correction, occurring when the reducibility point is an internal point connected to the rest of the diagram containing the external points through exchange lines only. This correction cannot be the full ξ_e, since any solid point must be reached by at least one correlation line; hence in this case the vertex correction is ξ_c = ξ_e − 1. The above procedure can be applied to the external point of diagram (2.1.a) and to all irreducible diagrams. The conclusion is that the sum of reducible and irreducible diagrams may be seen as a sum of vertex corrected irreducible diagrams, called "VIC diagrams", namely irreducible diagrams whose points carry the vertex corrections ξ_d, ξ_e and ξ_c. Note that, taking into account vertex corrections, diagram (2.2.a) without the two superimposed h(r_34) lines is allowed, because point 3 is vertex corrected by ξ_c.
Consequently, the second diagrammatic rule does not hold for VIC diagrams, which may have internal points reached by exchange lines only. For a uniform system like nuclear matter, the one-body distribution function is equal to unity, g_1(r) = 1. This implies the sum rule ξ_d = 1, which provides a useful check of the accuracy of the vertex-correction calculation. We define a new set of one-body VIC diagrams, U_d(r_1) and U_e(r_1), in terms of the corresponding equations. The external point 1 of the diagrams belonging to U_d(r_1) is not reached by any exchange line; conversely, in those forming U_e(r_1) the point 1 must be reached by a loop of exchange lines. Moreover, diagrams in both U_d(r_1) and U_e(r_1) cannot be built from pieces connected only by means of the point 1. The exponential in these equations is due to the fact that any number of d-structures forming U_d can be attached to the point 1. Since the symmetry factor for n topologically identical structures is 1/n!, the global contribution of U_d is given by Σ_n (1/n!) U_d(r_1)^n. For the sake of illustration, some of the diagrams belonging to U_d(r_1) and to U_e(r_1) are depicted in Fig. 2.9.

Simple and composite diagrams

When in an irreducible diagram it is possible to distinguish two or more pieces that are connected with the rest of the diagram by means of the points i and j only, we denote these parts as subdiagrams [124]. Two or more subdiagrams are said to form parallel connections between the external points 1 and 2 when the whole diagram consists of two or more parts which are connected only by means of points 1 and 2. As no integration is carried out over the two external points, a factorization of the integral associated with the diagram takes place. An irreducible diagram consisting of two or more parallel subdiagrams is called composite, or X-diagram; when such a division into parallel subdiagrams is not possible, the diagram is called simple. An example of a composite diagram is the one in Fig. 2.10.
Note that the set of these diagrams, denoted by X_cc, does not directly contribute to the scalar two-body distribution function.

Nodal diagrams

An important concept concerning the classification of diagrams is the possible occurrence of a "node". A node is an internal point through which all possible paths joining the external points 1 and 2 pass. A diagram with one or more nodes is denoted as a "nodal" diagram. Diagrams (2.1.a), (2.3.a) and (2.3.b) are nodal. A nodal diagram is necessarily a simple diagram, but not all simple diagrams are nodal, as for example diagram (2.4). Non-nodal simple diagrams are called "elementary" diagrams, or E-diagrams. Nodal diagrams can be classified according to the same scheme, based on the kind of external points, adopted for composite diagrams. Examples of diagrams belonging to the sets N_dd, N_ed, N_de, N_ee and N_cc are depicted in Fig. 2.12.

FHNC equations

Consider a nodal diagram contributing to N_xy(r_12) and label 3 the node closest to point 1. All the nodal diagrams can then be found by convoluting the sum of all non-nodal 1–3 subdiagrams, X_xx′(r_13), with the set of 3–2 subdiagrams X_y′y(r_23) + N_y′y(r_23) (with or without nodes). This leads to the integral equation (2.80), with the indices x and y running over the types d and e of the external points. The coefficients ζ_x′y′ account for the vertex corrections and for the proper treatment of the exchange loops. In order to better understand the effect of the long-range part of the correlation function, it is worth rewriting the convolution Eq. (2.80) in momentum space,

Ñ(k) = ρ X̃(k) [X̃(k) + Ñ(k)] , (2.82)

where Ñ(k) and X̃(k) are the Fourier transforms of the nodal and composite functions, respectively. Note that we have omitted the subscripts referring to the kind of vertices, whose presence is irrelevant for the purpose of this discussion. Solving Eq. (2.82) for Ñ(k) yields Ñ(k) = ρ X̃²(k)/[1 − ρ X̃(k)].
As a further simplification, consider the nodal diagram N_n(r) formed by n correlation lines h(r). The analytic expression of Ñ_n(k) can be easily derived by iterating Eq. (2.82), where X̃(k) has to be replaced by its first-order contribution h̃(k), yielding Ñ_n(k) = ρ^(n−1) h̃^n(k). Therefore, the sum of all the N_n(r) is given by the geometric series Σ_n Ñ_n(k) = ρ h̃²(k)/[1 − ρ h̃(k)], which is in turn a particular case of the solution of Eq. (2.82). Consequently, each Ñ_n(k) is more divergent than h̃(k) in the long-wavelength limit, while their sum is well behaved, as it diverges only like h̃(k). From this fact we can gather that the expansion in the number of points diverges at any finite order, while the chain summation leads to a well-behaved long-range limit. The iterative equation for N_cc cannot be written in a form analogous to that of Eq. (2.80). It also differs from the equations given in the literature in which the cancellation of reducible diagrams is assumed from the very beginning, because in the present case the cyclic nodal diagrams do not exhibit all the cancellations occurring under that assumption [125]. We need to distinguish two different kinds of external points: the point x, which is reached by an exchange line and at least one correlation line, and the point p, which is reached by an exchange line only. Hence four types of nodal cyclic functions, namely N^xx_cc, N^xp_cc, N^px_cc and N^pp_cc, and correspondingly three types of composite functions, X^αβ_cc, have to be properly taken into account. For instance, the convolution of N^xx_cc with any other of the cyclic functions brings a vertex correction ξ_e, whereas the convolution of two pp cyclic functions requires a ξ_c vertex correction. The four nodal equations follow accordingly; the total cyclic nodal function is given by their sum, in terms of a quantity P. At this stage, it is worth introducing the expressions for the partial two-body scalar distribution functions, in which E_xy(r_12) represents the sum of the xy elementary diagrams. The explanation for the presence of the exponential in these equations is analogous to the one given for the vertex corrections of Eq. (2.79).
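The geometric character of the chain summation can be checked numerically. A minimal sketch, assuming Ñ_n(k) = ρ^(n−1) h̃^n(k) as obtained by iterating the momentum-space convolution equation; the Gaussian h̃(k) and the density value are purely illustrative:

```python
import numpy as np

rho = 0.16                       # illustrative density
k = np.linspace(0.0, 5.0, 200)   # momentum grid
h = -2.0 * np.exp(-k**2)         # toy h~(k); |rho*h| < 1 guarantees convergence

def chain_term(n):
    # nodal (chain) diagram with n correlation lines: N~_n(k) = rho^(n-1) h~(k)^n
    return rho**(n - 1) * h**n

partial = sum(chain_term(n) for n in range(2, 60))  # partial sum, n = 2 .. 59
closed = rho * h**2 / (1.0 - rho * h)               # closed-form geometric sum

assert np.allclose(partial, closed)
```

Each individual term h̃^n(k) is more singular than h̃(k) at small k when h̃ is long-ranged, yet the resummed expression diverges only like h̃(k) itself, which is the point made in the text.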
Note that, as for the vertex corrections, there are no exponentials associated with the e-structures, as two exchange loops cannot be superimposed. The composite functions, which in turn can be seen as generalized links, are defined by subtracting the nodal contributions. The functions U_{d,e} appearing in Eq. (2.79) and entering the vertex corrections ξ_{d,e} are solutions of integral equations involving E_d and E_e, the one-body elementary diagrams with external point of type d and e, respectively. The FHNC equations are solved numerically by means of an iterative procedure. At the n-th step, the nodal diagrams and the one-body structures resulting from the step n − 1, namely N_xy(n − 1) and U_x(n − 1), are employed to compute the partial scalar two-body distribution functions as well as the composite functions X_xy(n). Hence, the new nodal functions and one-body structures are calculated making use of those quantities. The iterative procedure is stopped when the difference between the values of a test quantity, e.g. the nodal diagrams or the energy, computed in two successive iterations is smaller than a given convergence parameter. In order for the procedure to start, the nodal diagrams are initially set to zero, while for the vertex corrections one sets ξ_d = ξ_e = 1. For dense systems, like liquid helium or nuclear matter, convergence may be difficult to reach, and one may want to smooth out the iterative process. One of the most commonly used techniques consists in multiplying by a "mixing parameter", 0 < α_mix < 1, the nodal diagrams resulting from the step n − 1 of the iterative procedure, and by 1 − α_mix those obtained in the current step n. All the other quantities at iteration n, like, for instance, the composite diagrams, are then obtained using the mixture of Eq. (2.95). In order to close the FHNC scheme, an iterative equation for the elementary diagrams would be required.
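The damped iteration of Eq. (2.95) is an ordinary mixed fixed-point scheme. A schematic sketch, with a toy scalar map standing in for the nodal-function update (the real FHNC kernel is a convolution of composite and nodal functions, not reproduced here):

```python
import numpy as np

def damped_fixed_point(update, x0, alpha_mix=0.5, tol=1e-10, max_iter=1000):
    """Iterate x_n = alpha_mix * x_{n-1} + (1 - alpha_mix) * update(x_{n-1}),
    the same mixing used to stabilize the FHNC iterations for dense systems."""
    x_old = x0
    for _ in range(max_iter):
        x_new = alpha_mix * x_old + (1.0 - alpha_mix) * update(x_old)
        if np.max(np.abs(x_new - x_old)) < tol:
            return x_new
        x_old = x_new
    raise RuntimeError("no convergence within max_iter")

# Toy stand-in for the nodal-function update:
x_star = damped_fixed_point(np.cos, x0=0.0, alpha_mix=0.3)
```

The mixing parameter trades speed for stability: α_mix close to 1 damps the update strongly, which is what one resorts to when the bare iteration oscillates or diverges.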
However, because of their topological structure, a consistent treatment of the elementary diagrams based on two-body kernel equations, like those for the nodal and composite diagrams, is not feasible. The simplest approximation consists in neglecting all the elementary diagrams, setting E_xy = 0 (FHNC/0 approximation). Although elementary diagrams are neglected in this approach, FHNC/0 provides a very good description of the long-range part of the two-body distribution function, and it is also accurate enough at short interparticle distances (although the E_xy(r) are short-range functions). For instance, in nuclear matter calculations the elementary diagrams' contribution is likely to be small, provided that accurate minimization procedures, like the one described in Section 2.5.4, are performed. A brute-force approach based on the direct calculation of the n-body elementary diagrams E_n, like the 4-body diagrams of Fig. 2.13, and the inclusion of their contributions in the FHNC equations (FHNC/n approximation) does not seem to be a promising strategy. In fact, in the cases where the elementary diagrams give appreciable effects, like in liquid helium, the FHNC/n series shows poor convergence. A more reliable procedure for bosonic systems, denoted as the scaling approximation (HNC/s) [94], amounts to approximating the full set of elementary diagrams by αE_4. The scaling parameter α is determined by matching the expectation values of the kinetic energy obtained following the Pandharipande-Bethe and the Jackson-Feenberg prescriptions, to be discussed in the next Section, which in an exact calculation would give the same result. The existence of a scaling property for fermionic systems is not at all clear, as there are different classes of elementary diagrams, depending on the kind of the external points. Nevertheless, the fermionic generalization of the scaling approximation has been used in liquid ³He with some success [126].
Preliminary results obtained using iterative equations with four-body kernels are quite encouraging, as the difference with respect to Variational Monte Carlo results turns out to be much smaller than in the FHNC/0 case [127]. The total scalar two-body distribution function is given by the sum of the four different kinds of partial two-body distribution functions, multiplied by the appropriate vertex corrections. The numerical solution of the coupled FHNC integral equations is not trivial at all. Moreover, the numerical convergence of the solution does not ensure that the solution is acceptable from the physical point of view, as neglecting elementary diagrams could in principle lead to a violation of the variational principle. A useful tool to check both the numerical and the theoretical accuracy of the calculation is the fulfillment of the sum rule of the scalar distribution function, Eq. (2.39). Comparing the values of the kinetic energy expectation value obtained using the Pandharipande-Bethe and the Jackson-Feenberg prescriptions is also an indicator of whether the variational principle is respected. Using Eqs. (2.38) and (2.96) one can evaluate the potential contribution to the energy per particle. In addition, the knowledge of the two-body distribution function allows for the calculation of the kinetic energy per particle, to be discussed in the next Section.

Kinetic energy prescriptions

In this Section we shall perform a number of manipulations on the kinetic energy expectation value. To keep the discussion general, we do not assume that F takes the simple scalar Jastrow form of Eq. (2.15) adopted in the FR approach, but only that it is a real function of the relative particle coordinates, so that the correlator of Eq. (2.30) satisfies F† = F.
The kinetic energy expectation value is given by Eq. (2.97), where den = ∫dx_{1,…,A} Ψ₀* F†F Ψ₀ accounts for the normalization of the CBF wave function. In the first line we have exploited the symmetry properties of the correlated wave function, as in Eq. (2.37). It is worth noting that a crucial role in this context is played by the symmetrization operator S appearing in Eq. (2.30). The most straightforward form for the kinetic energy is obtained by letting the laplacian act to the right. The resulting expression is called the "Pandharipande-Bethe" (PB) form [128], although it was first implemented by Iwamoto and Yamada [129]. The first term in square brackets generates the Fermi gas energy, and the full PB kinetic energy follows. Integrals involving ∇²_i F_ij are included in W_kin and are completely analogous to those arising from the two-body potential. Three-body terms with derivatives acting on the correlations only also appear. In order to simplify the notation, the cancellation among reducible diagrams has been assumed; moreover, in the expression for U_F the Abe diagrams appearing in the three-body distribution function have been neglected. Note that terms containing ∇_i Ψ₀ give zero contribution in direct diagrams after the summation over k_i; the only terms contributing to U_F are exchange diagrams. Integrating by parts the last term in square brackets of Eq. (2.97) and using the appropriate identity yields the "Clark-Westhaus" (CW) form of the kinetic energy; in the case of a purely scalar Jastrow correlation it simplifies further, Eq. (2.105). Integrating once by parts the first line of the kinetic energy expression gives an intermediate form
, while yet another integration produces an expression in which the laplacian acts on the left. The Jackson-Feenberg (JF) form of the kinetic energy is obtained by averaging these contributions. The W_B two-body integral contains the kinetic contributions involving derivatives acting on the correlations only; in the case of central correlations it has the form of a two-body integral for a Bose liquid. The terms W_φ and U_φ have a fermionic origin, as they result from ∇²_i Ψ₀* Ψ₀; for a scalar Jastrow correlation their expressions, Eq. (2.108), involve g_cc(r_23) r̂_12 · r̂_13. Although the three forms of the kinetic energy that we have derived are formally equivalent, each has its own distinctive advantages and disadvantages in actual cluster expansion calculations. The CW form has the remarkable feature of not involving second derivatives, and no additional terms arise when one goes from a Bose system to a Fermi system. Because of the large cancellation between the two-body potential contribution and W_kin + W_F, the PB kinetic energy is rather insensitive to the short-range uncertainties of g_c(r_12). The three-body term U_φ of the JF procedure is smaller than the three-body terms U and U_F of the PB and CW prescriptions, making the JF kinetic energy essentially unaffected by the approximations involved in the three-body distribution function. The drawback of the JF prescription mainly resides in the deficient cancellation occurring between W_B + W_φ and the two-body potential contribution. Hence, the JF kinetic energy is more affected by the poor knowledge of the two-body distribution function at short distances.
Extension to operators: FHNC/SOC

In this Section we summarize the extension of the FR cluster expansion scheme used to deal with spin-isospin dependent correlation operators. The operatorial structure of the correlations and their non-commutativity make the development of a full FHNC summation scheme for diagrams containing spin-dependent correlations prohibitive. Here we will briefly discuss the so-called Single Operator Chain (FHNC/SOC) summation scheme, a detailed description of which can be found in [118,120]. In addition to the function h(r_ij) of Eq. (2.34), one also has to consider the products 2f^c(r_ij)f^p(r_ij) and f^p(r_ij)f^q(r_ij), where the factor 2 of the first quantity accounts for the term in which the central correlation is on the right of Ô^p_12 while the operatorial one is on the left, and for the reversed arrangement. For the calculation of the expectation value of an NN potential depending on spin-isospin operators, like the AV18 of Eq. (1.20), it is worth introducing the two-body state-dependent distribution functions g^p(r_1, r_2), defined analogously to g_c(r_1, r_2) of Eq. (2.36). The expectation value of the two-body potential can be conveniently rewritten in terms of the two-body state-dependent distribution functions. Because of translational invariance, the state-dependent distribution functions, like the scalar one, depend on the magnitude of the relative distance only, g^p(r_1, r_2) ≡ g^p(r_12). Thus, as in the scalar case, the expectation value of the two-body potential diverges with the number of particles, while its value per particle is finite. Since the total spin and the total isospin of SNM both vanish, the following sum rules are satisfied by the g^p(r_12):

ρ ∫dr_12 g^σ(r_12) = −3 , ρ ∫dr_12 g^τ(r_12) = −3 , ρ ∫dr_12 g^στ(r_12) = 9 . (2.114)

As far as the expansion of the numerator is concerned, Eq. (2.41) can be easily generalized. Note that the cluster terms X̂_n are now operators. The numerator of g^p(r_12) can be expanded analogously to Eq.
(2.45), where the operatorial N-body mean-field distribution function, defined in analogy with the scalar case, can be readily shown to satisfy relations analogous to Eqs. (2.46) and (2.49).

Figure 2.14: Operatorial correlation bonds.

The property of Eq. (2.50) holds for ĝ^MF_N, allowing the cancellation between the unlinked diagrams of the numerator and the denominator to take place even in the case of operatorial correlations. Writing the antisymmetrization operator as in Eq. (2.51) and summing over the plane-wave momenta, the first terms of ĝ^MF_N are obtained; the resulting expression is very similar to its scalar counterpart. The symbol "CTr" denotes the normalized trace of the spin-isospin operators, originating from the sum over the spin-isospin states of Eq. (2.120) and the sum over the spin-isospin degrees of freedom, Tr_{1,…,N}, of Eq. (2.117). The factor 1/ν^N accounts for the normalization of the trace, such that CTr(1) = 1.

Diagrammatic rules

The diagrammatic rules given in Section 2.4 need to be extended to account for 2f^c(r_ij)f^{p>1}(r_ij) and f^{p>1}(r_ij)f^{q>1}(r_ij). The former is represented by a single wavy line, the latter by a double wavy line; in both cases a letter indicating the kind of operator involved in the correlation is placed close to the bond itself, see Fig. 2.14. A thick solid line, displayed in Fig. 2.15, has been introduced to represent the interaction term, F̂_12 Ô^p_12 F̂_12, of the operatorial two-body distribution function. Note that the value of a diagram in general depends on the ordering of the operators; hence all permutations need to be considered. The diagrammatic classification is analogous to the one used for the FR cluster expansion technique. As already said, disconnected diagrams of the numerator exactly cancel against the denominator, and only connected diagrams have to be calculated. Unlike the Jastrow case, reducible diagrams do not completely cancel out.
At a later stage in this Thesis we will describe the calculation of the two- and three-body cluster contributions to the energy per particle. In those simple examples we will show how to deal with reducible diagrams.

Traces

As can be realized from Eq. (2.117), the calculation of ⟨v⟩^p/A requires the evaluation of traces of the spin-isospin dependent operators present in both the potential and the correlations. Since these operators are scalar in the Fock space formed by the product of configuration, spin and isospin spaces, the Pauli identity can be written as

(σ_i · a)(σ_i · b) = a · b + i σ_i · (a × b) ,

where a and b are generic vector operators not containing σ_i. This equation, which applies for σ_i → τ_i as well, can be used to express a generic operator product as a term C, not containing any spin-isospin dependent operators, plus terms in which each σ_k and τ_k occurs at most once. Owing to the fact that Pauli matrices are traceless, the only contribution to the normalized trace of Ô_ij is C. In general, C depends on the ordering of the operators appearing in Ô_ij; hence all the possible orderings arising from the symmetrized product of correlation operators need to be properly taken into account. The authors of Ref. [118] distinguished three different operatorial structures of the cluster terms, to the analysis of which we devote the following Sections.

Product of operators acting on the same pair

As a first example, consider a cluster term in which the points i and j are joined by two operators. The normalized trace gives CTr(Ô^p_ij Ô^q_ij) = δ_pq A^p, with A^p = 1, 3, 3, 9, 6, 18 for p = 1, …, 6. The CTr of diagrams in which more than two operators insist on the same pair ij can be easily evaluated with the aid of the K^{pqr} matrices, defined by Ô^p_ij Ô^q_ij = Σ_r K^{pqr} Ô^r_ij. The values of K^{pqr} are given in Table 1 of Ref. [118]. Comparing the last two equations it is readily seen that K^{pq1} = δ_pq A^p. Using Eq.
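The coefficients A^p can be verified by brute force on the 16-dimensional two-particle spin-isospin space. A minimal numerical check, with the tensor operator evaluated for r̂_12 along the z axis (an illustrative choice; any fixed direction gives the same traces):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
pauli = [sx, sy, sz]

def kron4(a, b, c, d):
    # factor ordering: spin_1, spin_2, isospin_1, isospin_2
    return np.kron(np.kron(a, b), np.kron(c, d))

sig = sum(kron4(a, a, I2, I2) for a in pauli)   # sigma_1 . sigma_2
tau = sum(kron4(I2, I2, a, a) for a in pauli)   # tau_1 . tau_2
S12 = 3 * kron4(sz, sz, I2, I2) - sig           # tensor operator, r12 along z

ops = [np.eye(16, dtype=complex),   # p = 1  central
       sig,                         # p = 2  sigma
       tau,                         # p = 3  tau
       sig @ tau,                   # p = 4  sigma-tau
       S12,                         # p = 5  tensor
       S12 @ tau]                   # p = 6  tensor-tau

def ctr(M):
    """Normalized trace over the 16-dim spin-isospin space, CTr(1) = 1."""
    return np.trace(M).real / 16

A = [ctr(O @ O) for O in ops]       # expected: 1, 3, 3, 9, 6, 18
```

The same `ctr` also shows that the cross traces CTr(Ô^p Ô^q) vanish for p ≠ q, consistent with CTr(Ô^p_ij Ô^q_ij) = δ_pq A^p.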
(2.126), it turns out that the trace of a product of more than two operators acting on the same pair can be reduced recursively. Note that, since operators acting on the same pair of points commute, the order of the operators in the previous equation is immaterial; thus K^{pqr} = K^{qpr}.

Single operator rings (SOR)

Single operator rings are diagrams like the one shown in the corresponding figure. Let Ô^p_ij and Ô^q_jk be the only two operators arriving at the point j. Making use of the Pauli identity it is possible to completely eliminate the operatorial dependence on point j. Integrating over the azimuthal angle φ_j and tracing over the spin-isospin degrees of freedom of particle j yields Eq. (2.129). The coefficients ξ^{pqr}_{ijk} depend on the internal angles of the triangle r_ij, r_jk, r_ik. The evaluation of SOR diagrams is rather simple: once the operators with one point in common are placed next to each other, e.g. Ô^p_ij Ô^q_jk Ô^r_kl …, successive contractions over the common points can be made by means of Eq. (2.129). Every contraction gives a ξ factor, until at the end one is left with two operators acting on the same pair, resulting in a factor A^p.

Multiple operator diagrams

Consider the normalized trace of diagram (a) of Fig. 2.17, where more than two operators arrive at both points i and j. In principle, all possible orderings of the operators have to be considered in the evaluation of the normalized trace. However, invariance under cyclic permutations is a general property of traces. As a consequence, there are only two different orderings of the operators: a "successive" order, in which Ô^p_ij and Ô^q_ij can be placed next to each other, and an "alternate" order, in which either Ô^r_ik or Ô^s_jk is placed between them. The successive order can be handled using Eq. (2.126). For the alternate order one has to introduce the matrix L^{pqr}, noting that only two situations can occur, in which the operators to be brought next to each other either commute or anticommute; it can be easily seen that L^{pqr} = K^{pqr} A^r in the former case and L^{pqr} = −K^{pqr} A^r in the latter. Another possibility that needs to be discussed involves two SORs meeting at the point i, as in diagram (b) of Fig. 2.17.
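The K^{pqr} matrices themselves can be generated numerically by projecting the product Ô^p Ô^q on the operator basis, K^{pqr} = CTr(Ô^p Ô^q Ô^r)/A^r, which is valid thanks to the orthogonality CTr(Ô^r Ô^s) = δ_rs A^r. A self-contained sketch (the z-axis tensor direction is again an illustrative choice; the printed values should be compared with Table 1 of Ref. [118]):

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron4(a, b, c, d):
    return np.kron(np.kron(a, b), np.kron(c, d))

sig = sum(kron4(a, a, I2, I2) for a in (sx, sy, sz))
tau = sum(kron4(I2, I2, a, a) for a in (sx, sy, sz))
S12 = 3 * kron4(sz, sz, I2, I2) - sig                  # tensor, r12 along z

ops = [np.eye(16, dtype=complex), sig, tau, sig @ tau, S12, S12 @ tau]
A = [np.trace(O @ O).real / 16 for O in ops]           # 1, 3, 3, 9, 6, 18

# Projection: K^{pqr} = CTr(O^p O^q O^r) / A^r
K = np.zeros((6, 6, 6))
for p, q, r in product(range(6), repeat=3):
    K[p, q, r] = np.trace(ops[p] @ ops[q] @ ops[r]).real / 16 / A[r]

# Closure of the algebra: O^p O^q = sum_r K^{pqr} O^r
for p, q in product(range(6), repeat=2):
    rebuilt = sum(K[p, q, r] * ops[r] for r in range(6))
    assert np.allclose(ops[p] @ ops[q], rebuilt)
```

The closure assertion confirms that the six operators (at fixed pair and fixed tensor direction) form a closed algebra, and `K[:, :, 0]` reproduces the relation K^{pq1} = δ_pq A^p quoted in the text.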
Because of the invariance of the trace under cyclic exchanges, again there are only two distinct cases. When the two operators acting on the pairs ij and ik are contiguous, the trace factorizes, as follows from K^{pq1} = δ_pq A^p. In order to deal with the alternate order, we introduce the matrix D^{rs}; in the case of tensor operators, its definition implies an integration over the azimuthal angle φ. The entries of D^{rs} depend on the kind of operators Ô^r and Ô^s; for instance, D^{στ} = 0, while D^{(στ,tτ)(στ,tτ)} = −8/9. With these ingredients the trace for the alternate order follows.

FHNC/SOC approximation

The summation of linked cluster diagrams containing operatorial correlations is made technically difficult by their non-commutativity, which makes a full FHNC summation prohibitive. Diagrams having one or more passive operatorial bonds are calculated at leading order only. Such an approximation is justified by the observation that operatorial correlations are much weaker than the scalar ones. Based on this feature, one would be tempted to conclude that the leading order amounts to dressing the interaction line with all possible FHNC two-body distribution functions. This is not true: besides the short-range behavior, the intermediate-range behavior of NN correlations also plays an important role that needs to be taken into account. In particular, tensor correlations, and to some extent also exchange correlations, have a much longer range than the central ones. In order to handle this problem, summing the class of chain diagrams turns out to be of great importance, as remarked in Sec. 2.4.2 for the purely central case (see Eq. (2.83) and the subsequent discussion). The above issue is taken care of by summing up the Single Operator Chains (SOC) in the corresponding FHNC/SOC approximation [118,130].
SOC are chain diagrams in which any single passive bond of the chain has a single operator of the type f^c(r_ij)f^p(r_ij)Ô^p_ij or −h(r_ij)ℓ(k_F r_ij) × P_ij, with p ≤ 6, or FHNC-dressed versions of them. Note that if a single bond of the chain is of the scalar type, then the spin trace of the corresponding cluster term vanishes, as the Pauli matrices are traceless. The SOC is therefore the leading order, and at the same time it includes the main features of the long-range behavior of tensor and exchange correlations. The calculation of SOC, as that of FHNC chains, is based upon the convolution integral of the functions corresponding to two consecutive bonds. Unlike FHNC chains, however, the SOC have operatorial bonds. Therefore, the basic algorithm is the convolution of two operatorial correlations having one common point, Eq. (2.129). The ordering of the operators within an SOC is immaterial, because the commutator [Ô_ik, Ô_kj] is linear in σ_k and τ_k, and Pauli matrices are traceless. The only orderings that matter are those of passive bonds connected to the interacting points 1 or 2, discussed in Eqs. (2.131), (2.132), (2.135) and (2.138). A second important contribution included in the FHNC/SOC approximation is the leading order of the vertex corrections. These sum up the contributions of sets of subdiagrams which are joined to the basic diagrammatic structure at a single point, like diagram (b) of Fig. 2.17. A vertex correction thus dresses the vertex with all the possible reducible subdiagrams joined to it. In the FHNC/SOC approximation they are taken into account at the leading order only, i.e. including SOR. Vertex corrections play an important role in the fulfillment of the sum rules. The full FHNC/SOC equations including the SOR vertex corrections can be found in the reference papers [118,130].
For pedagogical purposes, we limit ourselves to the equations for SOC diagrams, as this eliminates the problem of the reducible diagrams: all the SOC diagrams are irreducible [120]. We will make the further approximation of neglecting elementary diagrams. Regarding the notation, the symbols for nodal and composite diagrams carry an additional index specifying the operatorial dependence: N p xy , X p xy . The generalization of Eq. (2.80) accounting for operatorial nodal diagrams reads Since only irreducible diagrams are present, the factor ζ dd only selects contributions respecting the Pauli principle The partial two-body distribution functions, g p xy = N p xy + X p xy , are given by (compare to Eq. (2.91)) where h p (r 12 ) = 2f p (r 12 )f c (r 12 ) + f c (r 12 )² N p dd (r 12 ), h c (r 12 ) = exp[N dd (r 12 )], L(r 12 ) = N cc (r 12 ) − ℓ(r 12 )/ν . (2.142) The composite functions are straightforwardly obtained by subtracting the contribution of the nodal diagrams from the partial two-body distribution functions, X p xy (r 12 ) = g p xy (r 12 ) − N p xy (r 12 ) . (2.143) The total operator distribution function is given by g p (r 12 ) = g p dd (r 12 ) + 2g p de (r 12 ) + g p ee (r 12 ) . The nodal functions involving circular exchanges are built analogously; N r cR (r 12 ) involves a sum over the operator indices p, q = 1, …, 6, and the total is N r cc (r 12 ) = N r cL (r 12 ) + N r cR (r 12 ) . (2.146) Finally, the partial two-body distribution function with circular exchanges is given by g p c (r 12 ) = g c cc (r 12 )∆ p . (2.147) The cc nodal functions enter as closed SOR in the generalized equations for X c ee and N c cc . Moreover, they contribute to the energy expectation value, the full calculation of which will not be reported in this Thesis; the interested reader is again referred to Refs. [118,130]. Nevertheless, in the next Section we do present the calculation of the two- and three-body cluster contributions to both the potential and the kinetic energy per particle.
Two- and three-body cluster contributions

In this Section we analyze in detail the two- and three-body cluster contributions to the energy per particle for the case of a "static" potential, without momentum-dependent terms. This analysis, besides being useful for pedagogical reasons, in particular for the treatment of the reducible diagrams within the FR summation scheme, will be the cornerstone of the effective interaction that will be developed for the calculation of the response. With the notation used in Section 2.5, the two-body contribution of F † v 12 F can be expressed as (2.148) In what follows, the cluster expansion of the denominator will be disregarded, as it is understood that, for fermionic systems, the denominator exactly cancels the disconnected diagrams of the numerator. The two-body contribution of the potential is then Because of the translation invariance of the system, it is possible to integrate out the coordinate of the center of mass R 12 = (r 1 + r 2 )/2, so that the two-body cluster contribution to the potential energy per particle reads The two-body term of the cluster expansion of the kinetic energy, T̂, is where the commutator removes the Fermi gas energy, which is a one-body contribution. Using the symmetry of the wave functions one gets (2.152) In order to remove the term with the product of the gradients acting both on the correlation function F̂ 12 and on the plane wave, it is convenient to integrate the latter expression by parts (see Section 2.4.3), with the result (2.153) Since ∇ 1 F̂ 12 = ∇ 12 F̂ 12 , we can integrate out the coordinate of the center of mass, obtaining the following expression for the kinetic energy per particle For calculating (∇ 1 F̂ 12 )·(∇ 1 F̂ 12 ) one has to account for the fact that the tensor operator depends on r̂ 12 , hence where the following property of the gradient of the tensor operator has been used The first term of Eq.
(2.155) can be conveniently written in terms of the derivative with respect to the magnitude of r 12 . Thanks to the relation The three-body cluster contribution appearing in the expansion of F † v 12 F is given by Within the FR diagrammatic scheme, the three-body cluster contribution to v 12 is not merely the expectation value of the latter result, unlike the two-body case. As a matter of fact, the reducible diagrams arising from the four-body cluster term of F † v 12 F , the detailed calculation of which can be found in Appendix C, need to be taken into account. The direct term of the three-body cluster contribution in the FR expansion scheme is given by [121] from the three-body cluster contribution of F † v 12 F , whereas the reducible four-body diagrams of Fig. 2.19 contribute with the factor It is worth remarking that in the pure central Jastrow case, F̂ ij = f c ij , the four- and three-body reducible diagrams completely cancel, as discussed in Section 2.4.1, and we obtain the well-known irreducible contribution The corresponding four-body diagram producing the term ½ F̂ 12 v̂ 12 F̂ 12 (F̂² 13 + F̂² 23 − 2)P στ 12 is drawn in Fig. 2.20. The diagrams in which particles 1 and 3 are exchanged contribute with v 12 where the term ½ F̂ 12 v̂ 12 F̂ 12 (F̂² 13 − 1)P στ 13 comes from the four-body reducible diagram of Fig. 2.21. Since the potential is invariant under x 1 ↔ x 2 , the diagrams with the exchange between particles 2 and 3 give the same contribution reported in Eq. (2.166). The associated four-body reducible diagram is very similar to the one of Fig. 2.21, but with the loop attached to particle 2 instead of particle 1. Consider now the diagrams with the circular exchange involving particles 1, 2 and 3. In this case there are no reducible four-body diagrams that partly cancel the reducible part of the three-body diagram. In addition, there are no three-body reducible diagrams with circular exchange at all. However, the four-body diagram of Fig.
2.21, with no correlation lines linking particles 1 and 2 to the others, can be reduced to a three-body term, which contributes to the three-body diagrams having a circular exchange between particles 1, 2 and 3. As explained in Section 2.4.3, the three-body cluster contribution to the PB kinetic energy contains terms of the kind ∇²1 (S F̂ 12 F̂ 13 F̂ 23 ). Their explicit expressions can be obtained from the corresponding equations for the two-body potential by substituting the first term of the normalized traces with the corresponding kinetic operator for the second term. Following the notation of Section 2.4.3, terms with (∇²1 F̂ 12 ) are denoted by W kin , while those having (∇ 1 F̂ 12 )·(∇ 1 F̂ 13 ) are included in U. On the other hand, the three-body cluster terms belonging to W F arise from the diagrams where particles 1 and 2 are exchanged. Note that in this case there are no subtraction terms arising from reducible diagrams.

Determination of the correlation functions

An upper bound to the binding energy per particle, E V /A, can be obtained by using the variational method, which amounts to minimizing the energy expectation value ⟨H⟩/A with respect to the variational parameters included in the model. Its cluster expansion is given by where T F is the energy of the non-interacting Fermi gas and (∆E)₂ denotes the contribution of two-nucleon clusters (2.175) Neglecting higher-order cluster contributions, the functional minimization of ⟨H⟩/A leads to a set of six Euler-Lagrange equations, to be solved with proper constraints that force f c and f (p>1) to "heal" at one and zero, respectively. This is most efficiently achieved through the boundary conditions [29,118] Numerical calculations are generally carried out using only two independent "healing distances": d c = d p=1...4 and d t = d 5,6 .
Additional and important variational parameters are the quenching factors α p , whose introduction simulates the modifications of the two-body potentials entering the Euler-Lagrange differential equations, arising from the screening induced by the presence of the nuclear medium. The full potential is, of course, used in the energy expectation value. In addition, the resulting correlation functions f p are often rescaled according to The energy expectation value ⟨H⟩/A, calculated in the full FHNC/SOC approximation, is minimized with respect to variations of d c , d t , β p , and α p . To determine the best values of the variational parameters we have used a version of the "simulated annealing" algorithm [131]. In metallurgy, the annealing procedure consists of heating and then slowly cooling a metal, in order to reduce the defects of its structure. During the heating, the atoms gain kinetic energy and move away from their initial equilibrium positions, passing through states of higher energy. Afterwards, as the metal slowly cools, the atoms may freeze into a configuration different from the initial one, corresponding to a lower value of the energy. In minimization problems, the analog of the positions of the atoms are the values of the parameters to be optimized, in our case d c , d t , β p and α p , while the energy of the system corresponds to the function to be minimized, in our case the variational energy E V . In the simulated annealing procedure, the parameters d c , d t , β p , α p are drawn from the Boltzmann distribution, exp(−E V /T ), where T is just a parameter of the simulated annealing algorithm, having no physical meaning.
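The annealing procedure just described can be sketched on a toy surrogate of the variational energy. In the sketch below, a simple quadratic function stands in for E V (d c , d t ), and the cooling schedule, step sizes and acceptance rule (a Metropolis random walk) are illustrative choices, not those actually used in Ref. [131].

```python
import math, random

def simulated_annealing(energy, s0, steps=20000, t0=2.0, t_min=1e-3, scale=0.1):
    """Metropolis random walk with a slowly decreasing fictitious temperature T."""
    random.seed(0)
    s, e = list(s0), energy(s0)
    best_s, best_e = list(s), e
    for k in range(steps):
        # exponential cooling schedule from t0 down to t_min
        t = t0 * (t_min / t0) ** (k / steps)
        # propose a Gaussian move of one randomly chosen parameter
        trial = list(s)
        i = random.randrange(len(s))
        trial[i] += random.gauss(0.0, scale)
        e_trial = energy(trial)
        # Metropolis acceptance: min(1, exp(-(E' - E)/T))
        if e_trial < e or random.random() < math.exp(-(e_trial - e) / t):
            s, e = trial, e_trial
            if e < best_e:
                best_s, best_e = list(s), e
    return best_s, best_e

# toy surrogate for E_V(d_c, d_t), with a minimum at (1.5, 2.5)
params, e_min = simulated_annealing(
    lambda p: (p[0] - 1.5) ** 2 + (p[1] - 2.5) ** 2, [0.0, 0.0])
```

As the fictitious temperature decreases, uphill moves become rare and the walk freezes near the minimum of the surrogate energy.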
We have used a Metropolis algorithm, with acceptance probability of passing from the state s = {d c , d t , β p , α p } to the proposed state s′ given by min{1, exp[−(E V (s′) − E V (s))/T ]}. By looking at the distribution of the parameters resulting from the Metropolis random walk, it is possible to find the values d̄ c , d̄ t , β̄ p and ᾱ p corresponding to the minimum of E V , i.e. to the maximum of the Boltzmann distribution. As the fictitious temperature T is lowered, the system approaches equilibrium and the values of the parameters get closer and closer to d̄ c , d̄ t , β̄ p , ᾱ p .

Auxiliary field diffusion Monte Carlo

A central issue in many-body physics is the evaluation of multidimensional integrals, like the one of Eq. (2.111). An alternative to the cluster expansion technique is represented by stochastic algorithms that exploit the central limit theorem to compute multidimensional integrals, known as "Monte Carlo methods" [132,133]. Using standard numerical integration methods, like the Simpson rule, the computation of a D-dimensional integral requires an exponentially growing number of operations: to estimate the value of a D-dimensional integral with an accuracy ǫ, the number of operations scales with ǫ −D . The central limit theorem guarantees that Monte Carlo methods scale as ǫ −2 , regardless of the dimensionality.

Variational Monte Carlo

Variational Monte Carlo (VMC) uses stochastic integration to evaluate expectation values for a chosen trial wave function. In the CBF approach, the trial wave function is given by Ψ T (X) ≡ ⟨X|F|Φ 0 ⟩. In order to comply with the standard notation of the Monte Carlo formalism, we do not use the variety of bra and ket symbols introduced for the CBF theory, and |Φ 0 ⟩ ≡ |Ψ 0 ⟩.
The expectation value of any operator Ô on the state Ψ T can be conveniently written by making the spin-isospin sums explicit The VMC algorithm prescribes to sample the configurations R from the probability density and to estimate the integral with the sum where N c is the number of sampled configurations. VMC can be seen as an alternative to the cluster expansion technique, allowing one to control the approximations arising from the neglect of elementary diagrams and from the SOC approximation. The main drawback of VMC, shared with the CBF approach, is that the quality of the result entirely depends on the accuracy of the trial wave function. In actual fact, neutron matter calculations of the energy per particle have been limited to 14 nucleons in a box [134]. This is due to the operator structure of the correlations, which implies a sum over the spin-isospin degrees of freedom of A particles. The possible spin states of A nucleons are 2 A , and since Z of the A nucleons are protons there are A!/[Z!(A − Z)!] isospin states. Hence the total number of spin-isospin states is 2 A A!/[Z!(A − Z)!] . (2.186)

Diffusion Monte Carlo

The diffusion Monte Carlo (DMC) method [135,132,133,136] overcomes the limitation of the variational wave function by using a projection technique to enhance the true ground-state component of a starting trial wave function. The trial wave function can be expanded on the complete set of eigenstates of the full Hamiltonian, introduced in Eq. The energy offset E T is adjusted to be as close as possible to E 0 , with the aim of making the damping exponential factor constant. For the sake of simplicity, at this point we limit ourselves to considering the 3N spatial coordinates only; an entire Section will be dedicated to the inclusion of spin-isospin degrees of freedom through the auxiliary fields. Inserting a completeness relation on the orthonormal basis {|R ′ ⟩} in Eq. (2.189) yields Projecting on the coordinates ⟨R ′ | leads to where G(R ′ , R, τ ) is the Green's function of the operator Ĥ + ∂/∂τ .
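The VMC prescription described above can be illustrated on a minimal one-dimensional example: a harmonic oscillator (ħ = m = ω = 1) with the Gaussian trial function ψ T (x) = exp(−αx²/2), for which the local energy Hψ T /ψ T = α/2 + x²(1 − α²)/2 is averaged over configurations sampled from |ψ T |² by the Metropolis algorithm. This is a toy model, not the nuclear CBF wave function, and all parameter values below are illustrative.

```python
import math, random

def vmc_energy(alpha, n_samples=20000, step=1.0, seed=1):
    """Metropolis estimate of <psi_T|H|psi_T> for psi_T(x) = exp(-alpha x^2/2)."""
    random.seed(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(n_samples):
        x_new = x + random.uniform(-step, step)
        # acceptance probability: |psi_T(x')|^2 / |psi_T(x)|^2
        if random.random() < math.exp(-alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy of the 1D harmonic oscillator for this trial function
        e_sum += 0.5 * alpha + 0.5 * x * x * (1.0 - alpha * alpha)
    return e_sum / n_samples
```

For α = 1 the trial function is the exact ground state: the local energy is constant, so the estimate is 0.5 with zero variance; any other α yields a variational upper bound, illustrating how the quality of the result depends entirely on the trial wave function.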
Notice that no assumptions on the smallness of ∆τ have been made so far. In terms of the wave function, the Schrödinger equation (2.189) reads If we neglect the interaction terms in the Hamiltonian, the associated Green's function is a 3N-dimensional Gaussian for the spatial coordinates, having variance τ in each dimension, which describes the Brownian diffusion of N particles with dynamics governed by random collisions. No DMC algorithm is necessary to solve this problem, as the limit τ → ∞, leading to a uniform wave function, can be taken analytically. However, to explain how the DMC algorithm works, let us represent the distribution Ψ(R, τ ) by a set of discrete Brownian sampling points, or random walkers (2.199) Evolving this discrete distribution for an imaginary time ∆τ by means of Eq. (2.193), we obtain a set of Gaussians centered in the positions R k . In order to get a discrete representation of the positions of the walkers at τ + ∆τ , each Gaussian is sampled by a new delta function. The procedure of propagation and resampling is then iterated until convergence is reached. When the full Hamiltonian, including both the potential and the kinetic term, is considered, Ĥ = T̂ + V̂ , in the limit of small ∆τ the Green's function can be factorized, with the branching factor given by The wave function at τ + ∆τ then reads

• The initial set of walkers is sampled from the distribution Ψ(R ′ , τ = 0) = Ψ T (R ′ ), and the starting trial energy E T is chosen, for instance from a variational calculation.

• The coordinates of the walkers are diffused by means of a Brownian motion where ξ is a stochastic variable distributed according to a Gaussian probability density with variance σ² = ∆τ /m and zero average, so that the walkers are distributed according to G d (R, R ′ , ∆τ ).

• The branching, or birth/death, algorithm is applied: the weight is assigned to each walker, and a number of copies of the walker proportional to w is generated.
Then the convolution theorem governing the composition of random variables guarantees that the distribution of the walkers is Ψ(R, τ + ∆τ ). In practice, the integer number of copies is given by the branching factor where INT denotes the integer part of a real number and η is a random number drawn from the uniform distribution on the interval [0, 1]. The energy offset E T is adjusted to keep the total population of walkers fluctuating around a desired value.

• Once convergence is reached, i.e. for large enough τ , the configurations are distributed with probability density Ψ(R, τ ). Therefore, the ground-state expectation values of observables that commute with the Hamiltonian can be computed as (2.211)

Importance sampling

The basic version of the DMC algorithm described in the previous Section is poorly efficient, as the Brownian diffusive process ignores the shape of the potential. Hence, the weight of Eq. (2.208) suffers from large fluctuations from step to step: for instance, there is nothing that prevents two particles from moving very close to each other, even in the presence of a hard-core repulsive potential. The idea of the importance sampling technique consists in using the knowledge of the trial wave function Ψ T (R) [135,139] to guide the diffusive process. Let us multiply the imaginary-time Schrödinger equation (2.195) by the trial wave function Ψ T (R) and introduce a new distribution f (R, τ ) = Ψ T (R)Ψ(R, τ ). We obtain a non-homogeneous Fokker-Planck equation where is the 3A-dimensional "drift velocity" and is the "local energy". The imaginary-time evolution for f (R, τ ) can be conveniently written in terms of a modified Green's function Comparing with Eq.
(2.193) it is immediately found that It is shown in Appendix D that importance sampling makes the diffusion driven by the drift velocity, which carries the walkers along in the direction of increasing Ψ T (2.217) The branching factor now contains the local energy instead of the potential energy (2.218) If the trial wave function is sufficiently accurate, the local energy remains close to the ground-state energy throughout the imaginary-time evolution. As far as the expectation value of the operator Ô is concerned, from the last term of Eq. (2.210) it follows Using the central limit theorem, ⟨Ô⟩ can be computed by sampling the configurations from f (R, τ ) . (2.220)

Sign problem

In order to project out the ground state of a given Hamiltonian, the DMC algorithm implies a diffusive process whose starting distribution of walkers is given by the trial wave function. Hence, for the diffusion interpretation to be applicable, Ψ T must be positive definite in the whole configuration space. This is the case, for example, of many-boson systems, whose ground-state wave function is positive definite. The ground state of a fermionic system, on the other hand, is described by an antisymmetric wave function, to which a probability distribution interpretation cannot be given. Let us describe this issue in more detail. It can be proven that the ground state Ψ 0 (R) of a regular Hamiltonian Ĥ is nodeless. Hence, from a strictly mathematical point of view, the search for an antisymmetric ground state, Ψ A 0 (R), corresponds to the search for an excited state of the many-body Hamiltonian. In terms of the energy eigenvalues, this corresponds to E A 0 > E 0 , where E 0 and E A 0 are the ground-state energies of the bosonic and the fermionic system described by Ĥ [77].
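The importance-sampled evolution of the previous subsection can be put together in a minimal sketch, again for a 1D harmonic oscillator (ħ = m = ω = 1) guided by ψ T = exp(−αx²/2): the walkers drift along ∇ ln ψ T , the branching factor contains the local energy instead of the bare potential, and the offset E T is steered to keep the population stable. All parameter values and the population-control rule are illustrative choices for this toy problem.

```python
import math, random

def dmc_importance(alpha=0.8, dtau=0.01, n_target=300, n_steps=2000, seed=4):
    """Importance-sampled DMC for V(x) = x^2/2, guided by psi_T = exp(-alpha x^2/2)."""
    random.seed(seed)
    local = lambda x: 0.5 * alpha + 0.5 * x * x * (1.0 - alpha * alpha)
    walkers = [random.gauss(0.0, 1.0) for _ in range(n_target)]
    e_t, e_hist = 0.5, []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            # drift toward increasing psi_T (grad ln psi_T = -alpha x), then diffuse
            x_new = x - alpha * x * dtau + random.gauss(0.0, math.sqrt(dtau))
            # branching factor with the local energy instead of the bare potential
            w = math.exp(-(local(x_new) - e_t) * dtau)
            for _ in range(int(w + random.random())):
                new_walkers.append(x_new)
        walkers = new_walkers or [0.0]
        # steer E_T to keep the walker population near n_target
        e_t += 0.1 * math.log(n_target / len(walkers))
        if step > n_steps // 2:
            e_hist.append(sum(map(local, walkers)) / len(walkers))
    return sum(e_hist) / len(e_hist)
```

Even with an imperfect trial function (α = 0.8) the mixed estimator of the energy converges to the exact ground-state value 1/2, with fluctuations far smaller than those of the unguided algorithm: the closer the local energy stays to the ground-state energy, the tamer the branching weights.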
Expanding the trial antisymmetric wave function in terms of the eigenstates of the Hamiltonian, and choosing the energy offset to be E A 0 , in the limit of large imaginary time we get The sum over n runs over the bosonic eigenfunctions of the Hamiltonian having an energy smaller than E A 0 ; when τ → ∞ these terms diverge. The dots indicate the converging terms, i.e. the eigenfunctions (both bosonic and fermionic) with energies larger than E A 0 , which are exponentially suppressed with respect to Ψ A 0 (R). The exponentially growing component along the symmetric ground state does not affect the expectation value of the Hamiltonian: because of the orthogonality between antisymmetric and symmetric wave functions, it turns out that However, the variance σ² of the DMC estimate for the energy expectation value is exponentially diverging. In particular, the bosonic components dominate the second term of the variance, as the orthogonality which eliminates the symmetric contributions does not apply in the following integral (2.224) We are left with the contradictory statement that the energy converges to the exact eigenvalue with an exponentially growing statistical error: the signal-to-noise ratio exponentially decays. In order for DMC to be used for fermionic systems, it is possible to artificially split the configuration space into regions within which the sign of the trial wave function does not change. The (3A − 1)-dimensional subset of the configuration space where the trial wave function vanishes is denoted as the "nodal surface". In the fixed-node approximation [140], during the diffusion process the walkers crossing the nodal surface are dropped. In other words, the nodal surface defined by the constraint Ψ T (R) = 0 is imposed on the ground state of the system. It can be proven [141] that the energy obtained from a fixed-node DMC simulation obeys a variational principle.
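A minimal illustration of the fixed-node constraint, again for the toy 1D oscillator: choosing the "antisymmetric" trial function ψ T (x) = x puts the nodal surface at x = 0, and dropping the walkers that cross it makes plain DMC converge toward the lowest odd level, E = 3/2 (ħ = m = ω = 1). Parameters are illustrative; the small residual deviation from 3/2 reflects the finite time step of this non-importance-sampled sketch.

```python
import math, random

def fixed_node_energy(dtau=0.01, n_target=300, n_steps=2000, seed=3):
    """Plain DMC for V(x) = x^2/2 with the node of psi_T(x) = x imposed at x = 0."""
    random.seed(seed)
    walkers = [abs(random.gauss(1.0, 0.5)) + 1e-6 for _ in range(n_target)]
    e_t, e_hist = 1.5, []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x_new = x + random.gauss(0.0, math.sqrt(dtau))
            if x_new <= 0.0:
                continue  # walker crossed the nodal surface: drop it
            # branching factor with the bare potential (no importance sampling)
            w = math.exp(-(0.5 * x_new**2 - e_t) * dtau)
            for _ in range(int(w + random.random())):
                new_walkers.append(x_new)
        walkers = new_walkers or [1.0]
        # growth estimator: E_T fluctuates around the fixed-node energy
        e_t += 0.1 * math.log(n_target / len(walkers))
        if step > n_steps // 2:
            e_hist.append(e_t)
    return sum(e_hist) / len(e_hist)
```

Restricted to x > 0 with an absorbing boundary at the node, the simulation projects onto the lowest state vanishing at x = 0, i.e. the first excited (antisymmetric) level, in line with the variational character of the fixed-node energy.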
It is important to note that this variational principle only applies to ground-state calculations, while a much weaker variational principle holds for excited-state calculations [142]. In the case of nuclear Hamiltonians, the overlap between the walkers and the trial wave function is complex, and the sign problem turns into a phase problem. A generalization of the fixed-node approximation, the so-called "constrained-path" approximation [143], was introduced to deal with complex wave functions. It amounts to constraining the walkers to diffuse in regions where the overlap with the trial wave function is positive. To this aim, a suitable choice of the drift term, used in the earlier AFDMC calculations [144], is To avoid the exponential decay of the signal-to-noise ratio, the constrained-path approximation is realized by imposing Thus, the walkers whose overlap with the trial wave function changes sign after a diffusive step are dropped. Another approach followed to keep the sign problem under control is the "fixed-phase" approximation, introduced to deal with Hamiltonians containing a magnetic field [145]. The walkers are forced to have the same phase as the importance function Ψ T . The drift term is given by It can be shown that an additional term in the branching factor has to be considered; in particular, only the real part of the kinetic-energy contribution to the local energy has to be kept. For both the constrained-path and the fixed-phase approximations, an accurate trial wave function is needed. While in GFMC calculations [146] the full operator structure of the CBF wave function is taken into account, the AFDMC trial wave function only includes pure central Jastrow correlations, which are positive definite. Hence, for AFDMC, the nodal structure of the nuclear matter wave function is entirely given by the Slater determinant of plane waves.
It is relevant, for the purpose of the sign problem discussion, to remark that with the constrained-path approximation the DMC algorithm does not necessarily provide an upper bound in the calculation of the energy [147]. Moreover, it has not been proved that the fixed-phase approximation gives an upper bound to the true energy. For further details concerning the constrained-path and fixed-phase approximations, the reader is referred to the original papers and to the exhaustive discussion reported in the PhD Thesis of Paolo Armani [138].

Spin-isospin degrees of freedom and auxiliary fields

The method we have described needs to be generalized to account for the spin-isospin degrees of freedom, which are of major importance in nuclear few- and many-body systems. Let us start from an example: within the GFMC approach, the eight spin configurations of the 3 H nucleus (we neglect for the moment the isospin) are represented by [148] Each coefficient a α represents the amplitude of a given many-particle spin configuration; for instance a ↑↑↓ (R) = ⟨↑↑↓ | 3 H⟩ . (2.230) The many-particle spin configuration space is closed under the action of the operators contained in the Hamiltonian. For example, applying σ 12 = 2P σ 12 − 1 yields Since the total charge is conserved, for the isospin of 3 H we can have pnn, npn, or nnp; thus, the vector describing the whole spin-isospin structure has 24 entries. In the GFMC algorithm, the imaginary-time evolution of Eq. (2.215) needs to be applied to each of the 2 A A!/[Z!(A − Z)!] spin-isospin configurations, and the imaginary-time evolution of Eq. (2.193) generalizes to (2.232) The Green's function depends on the spin-isospin configuration In order to deal with systems having a large number of protons and neutrons, like for example medium-heavy nuclei or nuclear matter, GFMC does not seem to be a feasible approach. The idea of AFDMC consists in using a single-particle wave function, instead of the many-particle wave function of GFMC.
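The bookkeeping behind these numbers is easily made explicit: the GFMC vector length is 2^A · A!/[Z!(A − Z)!], to be compared with the 4A complex amplitudes of the single-particle representation discussed next. The helper below is just this arithmetic.

```python
from math import comb

def gfmc_dim(a, z):
    """Many-particle amplitudes: 2^A spin states times C(A, Z) isospin states."""
    return 2 ** a * comb(a, z)

def afdmc_dim(a):
    """Single-particle representation: 4 complex amplitudes (spin x isospin) per nucleon."""
    return 4 * a

# 3H (A = 3, Z = 1): 24 GFMC amplitudes versus 12 AFDMC amplitudes;
# already for A = 16, Z = 8 the GFMC vector has hundreds of millions of entries.
```

The exponential-versus-linear scaling in A is the quantitative reason why GFMC becomes unfeasible for medium-heavy systems.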
For comparing the two methods, the spin structure of 3 H in the AFDMC approach is where the complex coefficient c α i denotes the amplitude for the i-th particle to be in the spin state α. Taking also the isospin degrees of freedom into account, it can readily be shown that the dimension of the vector describing a system with A nucleons is 4A. Already for such a small nucleus as 3 H, the dimension of the spin-isospin structure of AFDMC (12) is a factor of 2 smaller than that of GFMC (24). The gain in computational time of AFDMC with respect to GFMC becomes enormous for larger systems. However, GFMC is still the best, or at least one of the best, available methods for dealing with hard-core potentials like the Argonne v 18 . The spin-orbit terms and the three-nucleon forces have not been included in the AFDMC algorithm yet, with the notable exception of PNM. The main concern with the single-particle wave function is that it is not closed with respect to the application of a quadratic spin (or isospin) operator. Let us again apply the operator σ 12 , as we did in Eq. (2.231) The resulting sum of two single-particle wave functions cannot be expressed as a single-particle wave function. Therefore, if the standard DMC algorithm were used, the imaginary-time propagator would generate a sum of single-particle wave functions at each time step. This would be catastrophic, as the number of single-particle wave functions would soon become enormous. The idea of AFDMC is to use the Hubbard-Stratonovich transformation to reduce the spin-isospin dependence from quadratic to linear, making the use of single-particle wave functions feasible. In the following we will show how this is done for the Argonne potentials incorporating the first six operators. The inclusion of the spin-orbit term is possible in the case of PNM only, and it has been described at length in Refs. [149] and [138].
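The Hubbard-Stratonovich transformation invoked here is the Gaussian identity exp(λÔ²/2) = (2π)^(−1/2) ∫ dx exp(−x²/2) exp(x√λ Ô), which linearizes the quadratic operator dependence at the price of an integral over an auxiliary field x. The sketch below verifies it numerically for a single operator eigenvalue o, using a simple midpoint quadrature; the parameter values are arbitrary.

```python
import math

def hs_average(o, lam, n=20000, x_max=10.0):
    """Gaussian average of exp(x sqrt(lam) o) over the auxiliary field x (midpoint rule)."""
    dx = 2.0 * x_max / n
    total = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        total += math.exp(-0.5 * x * x + x * math.sqrt(lam) * o) * dx
    return total / math.sqrt(2.0 * math.pi)

# acting on an eigenstate of O with eigenvalue o, the Gaussian average
# over the auxiliary field rebuilds the quadratic exponential exp(lam o^2 / 2)
```

Inside the propagator, each exponential of a quadratic operator is traded in this way for a Gaussian integral over an auxiliary field times an exponential that is only linear in the operator, which is what keeps the single-particle representation closed.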
The first six components of the Argonne potential can be rewritten as where the spin-independent and spin-dependent contributions read The matrices A vanish on the diagonal, as there is no self-interaction in the potential. Moreover, since they are real and symmetric, they have real eigenvalues and orthogonal eigenvectors, given by Σ jβ A σ iα,jβ ψ σ n,jβ = λ σ n ψ σ n,iα , Σ jβ A στ iα,jβ ψ στ n,jβ = λ στ n ψ στ n,iα , Σ j A τ i,j ψ τ n,j = λ τ n ψ τ n,i . In terms of these operators, the spin-dependent part of the potential takes a quadratic form, ½ Σ n λ n (Ô n )². With the symbol Ô n we denote the 3A operators Ô σ n , the 9A operators Ô στ n and the 3A operators Ô τ n . It is now possible to use the Hubbard-Stratonovich transformation, which for a generic operator Ô and a parameter λ is defined by We define a walker as the set of the 3A spatial coordinates and of the νA spinor components c α i . Hence, within the AFDMC approach, the spin-isospin coordinate S has to be added to the spatial coordinate R. The imaginary-time evolution (compare with Eqs. (2.193) and (2.232)) reads where the AFDMC Green's function, which includes the integration over the auxiliary fields, is given by An important point is that the operator Ô n contains a sum over the particle index j, as can be seen from Eq. (2.245). However, these operators commute, and we can conveniently represent the exponential of the sum as a product of exponentials, each rotating only one single-particle state. Therefore, the application of the operator Ô p n to the spin-isospin state |S ′ ⟩ turns into a product of independent rotations. For the spin rotation, generated by Ô σ n , one has Rotating the j-th single-particle state amounts to a change of the coefficients In order to give an explicit expression for c ′ ↑ j and c ′ ↓ j , the following identities have to be used where in our case the vector a is given by (we omit the index σ for brevity) a = √(|λ n |∆τ ) x n ψ n,j . (2.256) When λ n < 0, exploiting Eq. (2.254), we find On the other hand, if λ n > 0, making use of Eq.
(2.255), the transformed coefficients read Note that if the integral over the auxiliary fields were computed using standard methods, like the Simpson rule, we would be left with a sum of rotated spinors, one for each sampled value of the auxiliary field x n . In the first implementations of the AFDMC algorithm, a discrete version of the Hubbard-Stratonovich transformation, due to Koonin, was used. It essentially consists in replacing the Gaussian by a three-point weighted sum; with a probability depending on the weight, only one of these three values is used to rotate the spinors. In more recent works, following the spirit of the Monte Carlo algorithm, the Gaussian has been treated as a probability distribution: one value is sampled directly from the Gaussian and used to rotate the spin-isospin degrees of freedom of the walkers. A physical interpretation can be (and has been) given to the auxiliary fields. As consistently explained in chiral perturbation theory, nuclear interactions can be described in terms of pion exchanges. Within the Born-Oppenheimer approximation, the light pions are the fast degrees of freedom, coupled with the slow and more massive nucleons. If the fast meson field is integrated out to give a potential, keeping the nucleonic coordinates fixed and solving for its ground-state energy, then the meson coordinates correspond to the Hubbard-Stratonovich auxiliary fields. Importance sampling can also be implemented in the integral over the auxiliary fields. The overlap of the walker with Ψ T is not, in general, peaked around x n = 0. Hence, instead of sampling from the Gaussian, it is more efficient to sample values of x n where the trial wave function is thought to be large. One way consists in shifting the Gaussian, introducing a drift term analogous to the one used for the spatial coordinates. For the detailed calculation of the drift term for the Hubbard-Stratonovich variables, the reader is referred to Refs. [25,138,149].
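The single-particle rotations generated by the linearized operators rely on the identity e^(a·σ) = cosh|a| 1 + sinh|a| (a·σ)/|a|, which holds because (a·σ)² = |a|² 1 for a real vector a (for imaginary a the hyperbolic functions become trigonometric, as in Eqs. (2.254) and (2.255)). A minimal sketch of such a rotation of one spinor, with the helper names being ours:

```python
import cmath

# Pauli matrices as 2x2 nested lists
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def exp_a_dot_sigma(a):
    """exp(a . sigma) = cosh|a| * 1 + sinh|a| * (a . sigma)/|a| for a real vector a."""
    mod = (a[0] ** 2 + a[1] ** 2 + a[2] ** 2) ** 0.5
    ch, sh = cmath.cosh(mod), cmath.sinh(mod) / mod
    m = [[0j, 0j], [0j, 0j]]
    for i in range(2):
        for j in range(2):
            m[i][j] = ch * (i == j) + sh * (a[0] * SX[i][j] + a[1] * SY[i][j] + a[2] * SZ[i][j])
    return m

def rotate(m, spinor):
    """New single-particle amplitudes (c_up', c_down') after the rotation."""
    return [m[0][0] * spinor[0] + m[0][1] * spinor[1],
            m[1][0] * spinor[0] + m[1][1] * spinor[1]]
```

Applied walker by walker with a = √(|λ n |∆τ ) x n ψ n,j , such rotations keep the propagated state in single-particle form, which is the whole point of the auxiliary-field construction.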
The agreement between the Green's function Monte Carlo (GFMC) and AFDMC energies of neutron drops, obtained using the Argonne v ′ 8 plus UIX Hamiltonian and discussed in Ref. [150], supports the validity of the PNM calculations carried out within AFDMC with the Argonne v ′ 8 model. The highly accurate GFMC method has been used to study neutron matter properties in both the normal [134] and superfluid [151] phases. Moreover, using a fixed-phase-like approximation, AFDMC also yields results in very good agreement with those obtained from GFMC calculations for light nuclei [22].

Chapter 3

Three-body potential in nuclear matter

UIX potential within the FHNC/SOC approach

Within the CBF approach, the expectation value of a three-body potential, e.g. the UIX model, reads Let us write V̂ 123 as a sum of spin-isospin three-body operators multiplied by scalar functions depending on the relative distances only As for the case of the two-body distribution functions g p 12 , it is useful to define three-body distribution functions g p 123 , reflecting the operatorial structure of V̂ 123 Hence, analogously to Eq. (2.113) for the two-body potential, the expectation value of V̂ 123 can be written as Neglecting the Abe diagrams, as in Eq. (2.102), the three-body distribution function can be approximated by a product of the partial two-body distribution functions of Eqs. (2.141) and (2.147), denoted by Z p xy in Ref. [17]. It can be noted from the previous equation that the FHNC/SOC approximation has been exploited, as at most two operators arrive at a given point. Moreover, for the sake of brevity, vertex corrections arising from separable diagrams have not been reported. The diagrams involved in the FHNC/SOC calculation of the expectation value of the V 2π term of the UIX potential are depicted in Fig. 3.1.
The thick lines represent the potential, while dashed and wavy lines correspond to the partial two-body distribution functions; vertex corrections, although included in the calculations, are not shown. Because of the symmetry properties of the wave function, we can restrict our analysis to the permutation (3 : 12); the other three permutations are accounted for by the symmetry factor appearing in front of each of the following expressions [17]. The contribution of diagram (3.d), involving three non-central correlations, was first taken into account by the authors of Ref. [19].

Density-dependent effective potential

As shown in Fig. 1.5, the inclusion of the UIX three-body potential in the Hamiltonian considerably improves the theoretical estimates of the energies of the ground and low-lying excited states of nuclei with A ≤ 12. However, for nuclei heavier than 3 H some discrepancies with the experimental data persist; moreover, the empirical equilibrium properties of nuclear matter are not correctly reproduced. This problem can be largely ascribed to the uncertainties associated with the description of three-nucleon interactions, whose contribution turns out to be significant.

Derivation of the effective potential

Our work [27] is aimed at obtaining a two-body density-dependent potential v̂ 12 (ρ) that mimics the three-body potential. Hence, our starting point is the requirement that the expectation values of V 123 and of v̂ 12 (ρ) be the same: implying in turn (compare to Eqs. (2.113) and (3.4)) A diagrammatic representation of the above equation, which should be regarded as the definition of v̂ 12 (ρ), is shown in Fig. 3.3. The graph on the left-hand side represents the three-body potential times the three-body correlation function, integrated over the coordinates of particle 3. Correlation and exchange lines are schematically depicted as a line with a bubble in the middle, while the thick solid lines represent the three-body potential.
The diagram on the right-hand side represents the density-dependent two-body potential, dressed with the two-body distribution function. Obviously, v_12(ρ) has to include not only the three-body potential, but also the effects of correlation and exchange lines. In Section 3.1 we have examined the left-hand side of Eq. (3.15), which has been evaluated in Ref. [17] within the FHNC/SOC scheme. Here we discuss the derivation of the explicit expression of the two-body density-dependent potential appearing in the right-hand side of the equation. The procedure consists of three different steps, each corresponding to a different dressing of the diagrams involved in the calculation. For each of these steps the final result is a density-dependent two-body potential of the form v_12(ρ) = Σ_p v^p(ρ, r_12) O^p_12, where, depending on the step, the v^p(ρ, r_12) ≡ v^p_12(ρ) can be expressed in terms of the functions appearing in the definition of the UIX potential, the correlation functions, and the Slater functions.

Step I. Bare approximation

As a first step in the derivation of the density-dependent potential, one integrates the three-body potential over the coordinate of the third particle. Diagrammatically, this implies that neither interaction nor exchange lines linking particle 3 with particles 1 and 2 are included: only the two-body distribution function is taken into account in the calculation of the expectation value of V_123 (Eq. (3.18)). Note that only the scalar repulsive term and one permutation of the anticommutator term of the three-body potential provide non-vanishing contributions, once the trace in the spin-isospin space of the third particle is performed. As shown in Fig. 3.8, the contribution of the density-dependent potential to the energy per particle of SNM and PNM, ⟨v^(I)_12(ρ)⟩/A, is more repulsive than the one obtained from the genuine three-body potential UIX. Thus, the scalar repulsive term is dominant when the three-body potential is integrated over particle 3.

Step II.
Inclusion of statistical correlations

As a second step, we have considered the exchange lines that are present both in g_123 and g_12. Their treatment is somewhat complex, and needs to be analyzed in detail. Consider, for example, the diagram associated with the exchange loop involving particles 1, 2 and 3, depicted in Fig. 3.4. Its inclusion in the calculation of the density-dependent two-body potential would lead to a double counting of the exchange lines connecting particles 1 and 2, due to the presence of the exchange operator P_12 in g_12. This problem can be circumvented by noting that the antisymmetrization operator acting on particles 1, 2 and 3 can be written in the form of Eq. (3.19), in which the exchange operators contributing to the density-dependent potential only appear in the first term of the right-hand side. On the other hand, the second term in the right-hand side of Eq. (3.19) only involves the exchange operator P_12, whose contribution is included in g_12 and must not be taken into account in the calculation of v_12(ρ). Two features of the above procedure need to be clarified. First, it has to be pointed out that it is exact only within the SOC approximation, which allows one to avoid the calculation of commutators between the exchange operators P_13 and P_23 and the correlation operators acting on particles 1 and 2. The second issue is related to the treatment of the radial part of the exchange operators. Although it is certainly true that one can isolate the trace over the spin-isospin degrees of freedom of particle 3, arising from P_13 and P_23, extracting the Slater functions from these operators is only possible in the absence of functions depending on the position of particle 3 [128]. However, this restriction does not apply to the case under consideration, as both the potential and the correlations depend on r_13 and r_23.
As a consequence, retaining only the P_13 and P_23 exchange operators involves an approximation in the treatment of the Slater functions, whose validity has been tested by carrying out a numerical calculation. By singling out the radial dependence of the exchange operators, and by computing the inverse of the operator (1 − P^στ_12), where P^στ_ij denotes the spin-isospin part of P_ij, it is possible to find a "Slater Exact" density-dependent potential v^{S.E.}_12(ρ) whose calculation does not involve any approximations concerning the Slater functions. Note that in deriving this potential we have omitted all correlation functions, whose presence is irrelevant to the purpose of our discussion. The density-dependent potential obtained from Eq. (3.21) must be compared to the one resulting from the approximation discussed above, which (again neglecting correlations) leads to the expression v^{S.A.}_12(ρ) of Eq. (3.22), where "S.A." stands for Slater Approximation. We have computed v^{S.E.}_12(ρ) and v^{S.A.}_12(ρ) for SNM within the FHNC/SOC scheme, for both the scalar and the anticommutator terms of the UIX potential. The results, plotted in Fig. 3.5, clearly show that Eq. (3.22) provides an excellent approximation to the exact treatment of the exchanges of Eq. (3.21). Hence it has been possible to use Eq. (3.22) also to compute the contribution coming from the commutator of the UIX potential, avoiding the difficulties that would have arisen from an exact calculation of the exchanges. The second step in the construction of the density-dependent potential is then the potential v^(II)_12(ρ), which is a generalization of the bare potential of Eq. (3.17). Figure 3.8 shows that taking exchanges into account slightly improves the approximation of the density-dependent potential. However, the differences remain large, because correlations have not been taken into account.

Step III.
Inclusion of dynamical correlations

The third step in the construction of the density-dependent potential amounts to bringing correlations into the game. We have found that the most relevant diagrams are those of Fig. 3.6. Note that, in order to simplify the pictures, all interaction lines are omitted. However, it is understood that the three-body potential is acting on particles 1, 2 and 3. Correlation and exchange lines involving these particles are depicted as if they were passive interaction lines. Moreover, in order to include higher order cluster terms, we have replaced the scalar correlation line (f^c_ij)² with the Next to Leading Order (NLO) approximation to the bosonic two-body correlation function. The full bosonic g_bose(r_ij) or g_dd(r_ij) might be used instead of the NLO approximation. However, including higher order terms would have broken our cluster expansion. The correction to (f^c_ij)² of Eq. (3.24), whose diagrammatic representation is displayed in Fig. 3.7, can indeed be considered to be of the same order as the operatorial correlations. Figure 3.6 shows that the vertices corresponding to particles 1 and 2 are not connected by either correlation or exchange lines. All connections allowed by the diagrammatic rules are taken into account by multiplying the density-dependent potential by the two-body distribution function, according to the definition of Eq. (3.15). We have already discussed the exchange lines issue, coming to the conclusion that only the exchanges P_13 and P_23 have to be taken into account. This is represented by the second diagram, where the factor 2 is due to the symmetry of the three-body potential, which takes into account both P_13 and P_23. Note that, in principle, an additional term involving the anticommutator between the potential and the correlation function should appear in the second line of the above equation.
However, due to the structure of the potential, it turns out that this additional term vanishes. The calculation of the right-hand side of Eq. (3.25) requires the evaluation of the traces of commutators and anticommutators of spin-isospin operators, as well as the use of suitable angular functions needed to carry out the integration over r_3. As for the previous steps, we have computed the contribution of the density-dependent potential v^(III)_12(ρ) to the energy per particle. The results of Fig. 3.8 demonstrate that the density-dependent potential including correlations is able to reproduce the results obtained using the genuine three-body UIX potential to remarkable accuracy. To simplify the notation, at this point it is convenient to identify v_12(ρ) ≡ v^(III)_12(ρ). Note that the above potential exhibits important differences when acting in PNM and in SNM. For example, in SNM v^p(ρ, r_12) ≠ 0 only for p = 1, σ_12 τ_12, S_12 τ_12, while in PNM v^p(ρ, r_12) ≠ 0 only for p = 1, σ_12, S_12, as shown in Fig. 3.9.

Numerical Calculations

A constrained simulated annealing optimization, described in Section 2.5.4, has been performed, imposing the sum rules for the kinetic energy and for the scalar two-body distribution function. In particular, the difference between the Pandharipande-Bethe (PB) and the Jackson-Feenberg (JF) kinetic energies has been forced to be less than 10% of the Fermi energy T_F of Eq. (2.174), while the sum rule (2.39) for g^c(r_12) has been satisfied with a precision of 3%. In our calculations we have optimized the variational parameters for four different Hamiltonians: Argonne v'_8, Argonne v'_6, Argonne v'_8 + UIX, and Argonne v'_6 + UIX. The energies per particle of SNM and PNM computed adding the density-dependent potential of Eq. (3.16) to the two-body potentials Argonne v'_8 and Argonne v'_6 have been compared to the results obtained using a Hamiltonian with the same two-body potentials and the Urbana IX three-body interaction model.
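The constrained optimization described above can be illustrated with a minimal sketch: a Metropolis-style simulated annealing search that minimizes a toy two-parameter "energy" while hard-rejecting moves that leave a constraint band, in the spirit of the PB-JF kinetic-energy condition. The energy and constraint functions below are placeholders, not the actual variational functional.

```python
import math
import random

def energy(x, y):
    # toy stand-in for the variational energy functional
    return (x - 1.0)**2 + (y - 2.0)**2

def constraint(x, y):
    # toy stand-in for the PB-JF kinetic-energy difference
    return x + y - 3.5

def constrained_annealing(tol=0.1, steps=20000, seed=7):
    """Anneal within the band |constraint| <= tol, keeping the best feasible point."""
    rng = random.Random(seed)
    x, y = 1.75, 1.75                       # feasible starting point
    best = (energy(x, y), x, y)
    for k in range(steps):
        T = 0.999 ** k                      # geometric cooling schedule
        xn = x + rng.uniform(-0.1, 0.1)
        yn = y + rng.uniform(-0.1, 0.1)
        if abs(constraint(xn, yn)) > tol:   # hard-reject constraint violations
            continue
        dE = energy(xn, yn) - energy(x, y)
        if dE < 0 or rng.random() < math.exp(-dE / max(T, 1e-12)):
            x, y = xn, yn
            if energy(x, y) < best[0]:
                best = (energy(x, y), x, y)
    return best

e_best, x_best, y_best = constrained_annealing()
```

Because every accepted move is feasible, the returned minimum automatically satisfies the constraint, mimicking how the sum-rule conditions restrict the scan of the variational parameter space.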
In order to show how much the density-dependent potential differs from the original UIX, we compute the expectation values of these potentials with the same correlation functions, i.e. those resulting from the calculation with the genuine three-body potential; no optimization procedure has been performed for the density-dependent potentials. Both calculations have been consistently carried out within the FHNC/SOC scheme. It is worth noting that our constrained simulated annealing optimization allows us to: i) reduce the violation of the variational principle due to the FHNC/SOC approximation; ii) perform an accurate scan of the parameter space. As a consequence, our FHNC/SOC calculations provide results very close to those obtained via Monte Carlo calculations, as shown in Figs. 3.10 and 3.11, to be compared with those of Ref. [22], where the agreement between FHNC and Monte Carlo methods was not nearly as good.

Auxiliary Field Diffusion Monte Carlo (AFDMC) approach

In order to check the validity of our variational FHNC/SOC calculations, we carried out AFDMC simulations for both PNM and SNM. We have computed the equation of state of PNM and SNM using the AFDMC method with the fixed-phase like approximation. We simulated PNM with A = 66 and SNM with A = 28 nucleons in a periodic box, as described in Refs. [152] and [153]. The finite-size errors in PNM simulations have been investigated in Ref. [152] by comparing Twist Averaged Boundary Conditions (TABC) with the Periodic Box Condition (PBC). It is remarkable that the energies of 66 neutrons computed using either twist averaging or periodic boundary conditions turn out to be almost the same. This essentially follows from the fact that the kinetic energy of 66 fermions approaches the thermodynamic limit very well, as can be seen in Table 3.1. The finite-size corrections due to the interaction are correctly estimated by including the contributions given by neighboring cells to the simulation box [25].
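The statement that the kinetic energy of 66 free fermions is already very close to the thermodynamic limit can be checked directly. The sketch below fills closed momentum shells of a cubic periodic box (spin degeneracy 2) and compares the kinetic energy per particle with the infinite-system value (3/5) ħ²k_F²/2m at the same density; for N = 66 the two agree to better than 1%, while the deviation for N = 14 is visibly larger.

```python
import itertools
import math

def closed_shell_sizes(nmax=5):
    """Group the k-points of a cubic periodic box by |n|^2 (shells)."""
    shells = {}
    for nx, ny, nz in itertools.product(range(-nmax, nmax + 1), repeat=3):
        shells.setdefault(nx*nx + ny*ny + nz*nz, []).append((nx, ny, nz))
    return shells

def finite_size_ratio(N):
    """Kinetic energy per particle of N free fermions (2 spin states) in a
    periodic box, divided by the thermodynamic-limit value 3/5 * k_F^2
    (in units hbar^2/2m, at density N/L^3)."""
    shells = closed_shell_sizes()
    filled, sum_n2 = 0, 0.0
    for n2 in sorted(shells):
        npts = len(shells[n2])
        filled += 2 * npts          # each k-point holds 2 spin states
        sum_n2 += n2 * npts
        if filled == N:
            break
    assert filled == N, "N must be a closed-shell number (2, 14, 38, 54, 66, 114, ...)"
    e_box = 2.0 * (2.0 * math.pi)**2 * sum_n2 / N      # in units hbar^2/(2m L^2)
    kf2 = (3.0 * math.pi**2 * N)**(2.0 / 3.0)          # (k_F L)^2 at density N/L^3
    return e_box / (0.6 * kf2)
```

This is only the free-gas part of the finite-size error; the interaction corrections mentioned in the text require the neighboring-cell contributions of Ref. [25].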
From the above results for PNM we can estimate that the finite-size errors in the present AFDMC calculations do not exceed 2% of the asymptotic value of the energy calculated by using TABC. The finite-size effects of SNM calculations can be estimated from the difference between the energies of PNM obtained with 14 neutrons and the TABC asymptotic value, which is of the order of 7%. Although a calculation with 132 nucleons would be more accurate, performing such a heavy simulation does not appear to be justified in the context of a preliminary model of the density-dependent potential, and also in view of the fact that, at present, we are not able to simulate SNM with spin-orbit interactions. The statistical errors, on the other hand, are very small, and in the figures they are always hidden by the squares, the triangles and the circles representing the AFDMC energies.

PNM equations of state

In the PNM case (see Fig. 3.10), the EoS obtained with the three-body potential UIX and using the density-dependent two-body potential are very close to each other. For comparison, in Fig. 3.10 we also report the results of calculations carried out including the two-body potential only. In our approximation, with the exception of the line with diamonds of Fig. 3.6, we have neglected the cluster contributions proportional to ρ². One could then have guessed that the curves corresponding to the UIX and density-dependent potentials would slightly move away from each other at higher densities because, as the density increases, the contributions of higher order diagrams become more important. Probably, in this case a compensation among these second and higher order terms takes place. The density-dependent potential obtained in the FHNC/SOC framework has been also employed in AFDMC calculations. As can be plainly seen in Fig.
3.10, the triangles representing the results of this calculation are very close, when not superimposed, to the circles corresponding to the UIX three-body potential AFDMC results.

SNM equation of state

In the EoS of symmetric nuclear matter, the above compensation does not appear to occur, as can be seen in Fig. 3.11. At densities lower than ρ = 0.32 fm⁻³, the curves resulting from UIX and the density-dependent potential are very close to one another, while for ρ > 0.32 fm⁻³ a gap between them begins to develop. The gap is smaller when the two-body potential Argonne v'_8 is used, but the reason for this is not completely clear. We have computed the saturation density ρ_0, the binding energy per particle E(ρ_0) and the compressibility K = 9ρ_0² (∂²E(ρ)/∂ρ²)|_{ρ=ρ_0} for all the EoS of Fig. 3.11.

Figure 3.10: Energy per particle for PNM, obtained using the density-dependent potential of Eq. (3.17) added to the Argonne v'_8 (a) and to Argonne v'_6 (b) potentials. The energies are compared to those obtained from the genuine three-body potential and from the two-body potentials alone.

The variational FHNC/SOC results are listed in Table 3.2. The saturation densities are quite close to the empirical value ρ_0 = 0.16 fm⁻³. For the genuine three-body potential this is not surprising, since the parameter U_0 is chosen to fit the saturation density, as discussed in Section 1.2.2. On the other hand, the fact that the density-dependent potential also reproduces this value is remarkable and needs to be emphasized. The binding energies obtained with v_12(ρ) are very close to those coming from the UIX potential, but they are larger than the empirical value E_0 = −16 MeV. As for the compressibility, the experimental value K ≈ 240 MeV suffers from sizable uncertainties. However, also in this case the result obtained with the density-dependent potential differs from that obtained with the UIX potential by less than 5%.
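Given an EoS as a set of (ρ, E) points near the minimum, the saturation properties can be extracted by a parabolic fit, with the standard definition K = 9ρ_0² (∂²E/∂ρ²) at ρ_0. The sketch below uses synthetic data built to have ρ_0 = 0.16 fm⁻³, E_0 = −16 MeV and K = 240 MeV, not the actual FHNC/SOC points.

```python
import numpy as np

def saturation_properties(rho, e):
    """Fit E(rho) near its minimum with a parabola and extract the saturation
    density, the binding energy and K = 9 rho0^2 d^2E/drho^2 at rho0."""
    c2, c1, c0 = np.polyfit(rho, e, 2)   # coefficients in decreasing powers
    rho0 = -c1 / (2.0 * c2)
    e0 = c0 + c1 * rho0 + c2 * rho0**2
    K = 9.0 * rho0**2 * (2.0 * c2)
    return rho0, e0, K

# synthetic EoS with rho0 = 0.16 fm^-3, E0 = -16 MeV, K = 240 MeV (assumed data)
a = 240.0 / (9.0 * 0.16**2 * 2.0)
rho = np.linspace(0.08, 0.32, 13)
e = -16.0 + a * (rho - 0.16)**2

rho0, e0, K = saturation_properties(rho, e)
```

In practice one would fit only the points in a window around the minimum, since a quadratic form is a local approximation of the EoS.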
Chiral inspired three-nucleon potentials in nuclear matter

The work described in this Section, based on Ref. [46], is aimed at testing in nuclear matter the different parametrizations of the chiral inspired potentials of Ref. [45], introduced in Section 1.2.2.

Table 3.2: Values for the saturation densities, the binding energy per particle, and the compressibility of SNM obtained from the variational FHNC/SOC EoS of Fig. 3.11.

NNLOL contact term issue

While the NNLOL chiral interactions provide a fully consistent description of the binding energies of ³H and ⁴He, as well as of the scattering length ²a_nd, some ambiguities emerge when these interactions are used to calculate the nuclear matter EoS. For our purposes, it is convenient to rewrite the NNLOL chiral contact term of Eq. (1.37) in the form V^τ_E, where the superscript τ has a meaning that will soon be clarified. The radial function Z_0(r) = m³_π/(4π) z_0(r) approaches the Dirac δ-function in the limit of infinite cutoff. Strictly speaking, the local version of V_E is a genuine "contact term" in this limit only, while for finite values of the cutoff it acquires a finite range. In addition to V^τ_E of Eq. (3.29), the chiral expansion leads to the appearance of six spin-isospin structures in the contact term, one of which is the scalar contribution V^I_E. Within this context, the superscripts τ and I identify the τ_12 and scalar contact terms, respectively. In Ref. [39] it has been shown that, once the sum over all cyclic permutations is performed, all contributions to the product between the potential and the antisymmetrization operator A_123 have the same spin-isospin structure. Therefore it is convenient to take into account just one of the contact terms. This result was obtained in momentum space, without the cutoff functions F_Λ. As a consequence, in coordinate space it only holds true in the limit of infinite cutoff; in particular, V^τ_E(3 : 12) and V^I_E(3 : 12) then give the same expectation value, making these two terms equivalent.
The limit of infinite cutoff is crucial, because the radial part of the exchange operator, when multiplied by the Dirac δ-function, is nothing but the identity: e^{ik_ij·r_ij} δ(r_ij) = δ(r_ij). After the regularization, i.e. with the δ-function replaced by Z_0, the proof is spoiled and the six different structures are no longer equivalent. In PNM, contact terms involving three or more neutrons vanish because of the Pauli principle. On the other hand, the expectation value of the contact terms of the NNLOL potential can be different from zero. Let us assume that reproducing the binding energies of light nuclei and ²a_nd requires a repulsive V_E. Then, one has to choose either c^τ_E < 0 or c^I_E > 0. In PNM, since ⟨τ_12⟩_PNM = 1 (Eq. (3.33)), it turns out that V^τ_E is attractive and V^I_E repulsive. This means that fitting the binding energies and the n−d scattering length with either V^τ_E or V^I_E alone leads to an ambiguity in the expectation value of the potential. By expanding the cutoff function, it becomes apparent that the expectation value of the three-nucleon contact potential, as well as its sign ambiguity, is nothing but a cutoff effect. Hence, it should be regarded as a theoretical uncertainty. Note that, since Λ_χ ≃ Λ, ⟨V_E⟩_PNM is of the same order as the next term in the chiral expansion. To clarify this issue, let us consider a simple system: a Fermi gas of neutrons, in which correlations among particles are not present. In the expectation value of the contact interaction, A is the number of neutrons, and the factor 1/2 includes the 1/3! arising from the unrestricted sum over particle indices 1, 2, 3, multiplied by a factor 3 from the cyclic permutations of the potential, all giving the same contribution. The Slater function ℓ(r_ij) is given in Eq. (2.59).
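The cutoff effect can be made quantitative in the free Fermi gas: replacing the δ-function by a normalized smearing function of width ~1/Λ and folding it with the exchange factor 1 − ℓ(k_F r)² gives an expectation value that shrinks as the cutoff grows, vanishing in the δ-function limit where ℓ(0) = 1. The Gaussian profile below is an assumed stand-in for the actual regularized z_0(r), chosen only to illustrate the trend.

```python
import numpy as np

def slater_l(x):
    """Slater function l(x) = 3 j1(x)/x of a free Fermi gas, with l(0) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    m = x > 1e-6
    out[m] = 3.0 * (np.sin(x[m]) - x[m] * np.cos(x[m])) / x[m]**3
    return out

def smeared_contact(kf, lam, rmax=6.0, n=4000):
    """I(Lambda) = Int d^3r Z(r) [1 - l(kf r)^2], with Z a normalized Gaussian
    of width ~ 1/Lambda standing in for the regularized delta function."""
    r = np.linspace(1e-6, rmax, n)
    Z = (lam / np.sqrt(np.pi))**3 * np.exp(-(lam * r)**2)
    integrand = 4.0 * np.pi * r**2 * Z * (1.0 - slater_l(kf * r)**2)
    return np.sum(integrand) * (r[1] - r[0])

I_soft = smeared_contact(kf=1.7, lam=2.0)   # low cutoff: wide smearing
I_hard = smeared_contact(kf=1.7, lam=6.0)   # high cutoff: narrow smearing
```

Since 1 − ℓ(k_F r)² ≈ (k_F r)²/5 at short distance, I(Λ) falls off like Λ⁻² as the smearing narrows, which is the qualitative behavior of the PNM entries discussed for Table 3.3.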
It can be easily seen that, if V^{I,τ}_E(1 : 23) ∝ δ(r_12)δ(r_13), the expectation value takes a simple closed form. Consider now a Fermi gas with equal numbers of protons and neutrons; in the limit of infinite cutoff the corresponding expressions reduce to the asymptotic values of Eq. (3.41). Table 3.3 shows that for PNM the larger the cutoff, the smaller the expectation value of the three-nucleon contact term. Note that for Λ = 500 MeV, the expectation value is still sizably different from the asymptotic limit. As far as SNM is concerned (see Table 3.4), as the cutoff increases the possible choices of the three-nucleon contact term tend to the asymptotic values of Eq. (3.41). As in the case of PNM, the results corresponding to Λ = 500 MeV are significantly different from the asymptotic values.

Table 3.4: Same as Table 3.3, but for SNM.

We emphasize that the parameter c_E has not been included in this analysis, even though it is itself cutoff dependent. Unfortunately, the authors of Ref. [45] kept Λ fixed at 500 MeV. Had this not been the case, their fit to the experimental data would have resulted in a set of different constants c_E, corresponding to different values of Λ. It would have been interesting to extrapolate the expectation value of V_E to the limit of infinite Λ, where the cutoff effects associated with the regularization procedure are expected to vanish.

FHNC/SOC calculations

Using the relations for the constants and the radial functions given in Eqs. (1.47) and (1.48), the computation of the diagrams of Fig. 3.1 with the V_3 and V_4 terms of both the TM' and NNLOL potentials is the same as that of V_2π reported in Section 3.1 and in Ref. [17]. Thanks to an operator identity, aside from the radial function, V_1 is completely equivalent to V_3, the anticommutator term of the UIX potential. Therefore, we were allowed to use again the results of Section 3.1 and Ref. [17]. Furthermore, exploiting the identities of Eq. (3.46), including V_D amounts to properly adding the corresponding radial functions to those already appearing in V_3.
The V_E term of TM' is completely equivalent to V_R (see Eq. (1.49)). This allowed us to use the results of Section 3.1 and Ref. [17] for the diagrams of Fig. 3.2. The same holds true for the chiral contact term V_E in PNM, as ⟨τ_ij⟩_PNM = 1, while in SNM the calculation of V_E requires the evaluation of the diagrams of Fig. 3.1. For the calculation of the density-dependent potential of Section 3.2, we constrained the difference between the Pandharipande-Bethe (PB) and the Jackson-Feenberg (JF) kinetic energies to be less than 10% of the Fermi energy T_F, and required the sum rule involving the scalar two-body distribution function, g^c(r_12), to be fulfilled with a precision of 3%. For this comparative study of chiral inspired potentials, in the variational calculations of SNM we have imposed the further condition, first considered in Ref. [19], that the sum rule of the isospin component of the two-body distribution function be satisfied to the same accuracy. Using also the sum rules for the spin and spin-isospin two-body distribution functions leads to a sizable increase of the variational energies, which turn out to be much higher than those obtained by relaxing the additional constraints, as well as than the AFDMC results. The same pattern is observed in the results of variational calculations not including TNF. For this reason, we have enforced the fulfillment of the sum rules for g^c(r_12) and g^τ(r_12) only. For potentials other than UIX, it turns out that the variational energies of PNM resulting from our optimization procedure are lower than the AFDMC values at ρ > ρ_0. By carefully analyzing the contributions of the cluster expansion diagrams, we realized that the value of diagram (3.1.a) was unnaturally large. In particular, we have found that a small change in the variational parameters leads to a huge variation of the value of the diagram.
Moreover, the minimum of the energy in parameter space was reached in a region where the kinetic energy difference was very close to the allowed limit. To cure this pathology, we have constrained the PB−JF kinetic energy difference to be less than 1 MeV, regardless of density. The variational energies obtained imposing this new constraint are always larger than the corresponding AFDMC values, and the value of diagram (3.1.a) is brought under control. For the sake of consistency, the same constraint on the kinetic energies has been also applied to SNM. In addition, the variational energy minimum does not correspond to the maximum allowed violation of the constraints. As a consequence, it would be largely unaffected by a slight modification of the constraints.

AFDMC calculations

We have computed the EoS of PNM using the AFDMC approach with the TM' and NNLOL chiral potentials combined with the Argonne v'_8 NN interaction. An efficient procedure to perform AFDMC calculations with three-body potentials is described in Ref. [154]. Since V_3 is equivalent to the anticommutator term of the UIX model (while the commutator, V_4, is zero in PNM), and in PNM the V_E terms of both the TM' and NNLOL potentials do not show any formal difference with respect to the repulsive term of UIX, the inclusion of these terms reduces to a replacement of constants and radial functions. The authors of Ref. [154] also described how to handle V_1 for the TM model, and no further difficulties arise in the case of the NNLOL potential. As the V_D term had never been included in AFDMC before, it is worthwhile showing how the calculation of this term reduces to a matrix multiplication. The expectation value of V_D is symmetric under V_D(i : jk) = V_D(i : kj) (otherwise all six permutations would need to be summed). Thanks to this property, it is possible to write V_D(j : ik) of Eq.
(3.45) in terms of cartesian component operators. The anticommutation relation {σ^α_i, σ^β_i} = 2δ^{αβ} makes the expectation value of V_D a sum of 3N × 3N matrix multiplications,

⟨V_D⟩ = 2 Σ_{i<k,j} (Y_{αi;βj} Z_{βj;δk} + Z_{αi;βj} Y_{βj;δk} + T_{αi;βj} Z_{βj;δk} + Z_{αi;βj} T_{βj;δk}) σ^α_i σ^δ_k,

analogous to those of Ref. [154]. In order to compute the expectation value of V_D, the former expression has been added to the cartesian matrices associated with the two-body potential. For the same reasons adduced for the AFDMC calculations of the density-dependent potential, we simulated PNM with A = 66 neutrons in a periodic box, using the fixed-phase approximation. As the free-gas value obtained with 66 neutrons turns out to be very close to the thermodynamic limit of 73.00 MeV, the finite-size corrections for 66 neutrons tend to be small. Finite-size effects are expected to be larger at higher density, as the dimension of the box decreases. In order to check the validity of our calculations, at ρ = 0.48 fm⁻³ we have repeated the calculation with 114 neutrons. For all the potentials, the energies per particle obtained with 114 neutrons are higher than those obtained with 66 neutrons. The authors of Ref. [152] found a similar behavior for PNM at ρ ≤ 0.32 fm⁻³ in the case of the v'_8 plus UIX Hamiltonian, and ascribed part of this difference to the Fermi gas energy, amounting to 72.63 MeV and 74.15 MeV for 66 and 114 neutrons, respectively. However, the difference of the energy per particle obtained with 66 and 114 neutrons is always within 4 MeV. It is worth noting that once finite-size effects on the Fermi gas energy are accounted for, the residual finite-size effects do not exceed 4% of the energy per particle.

TM' potential

The results of Fig.
3.13, showing the density dependence of the energy per nucleon in PNM, indicate that, once the new constraint on the difference between PB and JF kinetic energies is imposed, the agreement between FHNC/SOC (solid line) and AFDMC (triangles) results is very good. The most striking feature of the results displayed in Fig. 3.14 is that, despite the parameters of the three-body potentials being different, all SNM EoS obtained from the TM' potential turn out to be very close to each other. This is probably due to the fact that these potentials are designed to reproduce not only the binding energies of ³H and ⁴He, but also the n−d doublet scattering length ²a_nd. It is remarkable that although the parameters of the TM' potentials were not adjusted to reproduce nuclear matter properties, the EoS saturate at densities only slightly lower than ρ_0 = 0.16 fm⁻³, and the compressibilities are in agreement with the experimental value K ≈ 240 MeV. On the other hand, the binding energies are larger than the empirical value E_0 = −16 MeV and rather close to the one obtained from the UIX potential, ∼ 10 MeV, shown in Section 3.2. The numerical values of all these quantities are listed in Table 3.5.

Table 3.5: Saturation density, binding energy per particle and compressibility of SNM corresponding to the TM' EoS displayed in Fig. 3.14.

NNLOL chiral potentials

The results displayed in Fig. 3.15 show that, as in the case of the TM' potentials, the EoS of PNM computed within the AFDMC and FHNC/SOC schemes are very close to each other over the entire density range. The EoS of Fig. 3.15 are softer than those obtained from both the TM' (compare to Fig. 3.13) and UIX (see Fig. 3.10) potentials. This is due to the ambiguity in the term V_E, discussed in Section 3.3.1. In the NNLOL_2, NNLOL_3, and NNLOL_4 models the constant c_E is negative. Therefore, the contribution of V_E is attractive, making the EoS very soft. When V_E is repulsive (i.e.
c_E is positive), as in the NNLOL_1 potential, its contribution is very small and the resulting EoS, while being stiffer than those corresponding to the other NNLOL potentials, remains very soft. The recent astrophysical data of Ref. [16] suggest that the EoS of PNM should be at least as stiff as the one obtained with a readjusted version of the effective density-dependent potential of Lagaris and Pandharipande in combination with the Argonne v'_6 two-body interaction [31]. Therefore, the EoS resulting from the chiral NNLOL potentials are not likely to describe a neutron star of mass around 2 M_⊙. The SNM EoS corresponding to the NNLOL potentials are displayed in Fig. 3.16. The fact that the NNLOL_4 potential provides the stiffest EoS in SNM, while in PNM it provided the softest, is not surprising. As discussed in Section 3.3.1, when the contact term is attractive in PNM, it is repulsive in SNM, and vice versa. A large cancellation between the repulsive core of the Argonne v'_8 and the strong attractive contact term contribution of the NNLOL_4 potential is observed. This could influence the variational results, which for this particular three-body force could be less accurate than for the other interactions. As the corresponding AFDMC results do not show a similar behavior, giving a simple physical interpretation to the inflection point at ρ ≃ 0.24 fm⁻³ resulting from the FHNC/SOC calculations turns out to be difficult. The results listed in Table 3.6 show that none of the chiral NNLOL potentials fulfills the empirical constraints on the SNM EoS. All potentials overestimate the saturation density, while the compressibility is compatible with the empirical value only for the NNLOL_2 and NNLOL_3 models. As for the binding energies, they are closer to the experimental value than those obtained using both the UIX and TM' potentials. As a final remark, it has to be noticed that using the scalar repulsive term V^I_E instead of V^τ_E provides more repulsion, resulting in a stiffer EoS.
As stressed in Section 3.3.1, this issue needs to be addressed taking into account all the terms that become equivalent in the limit of infinite cutoff only. Moreover, since the discrepancies among these terms are of the same order as the NNNLO term of the chiral expansion, other contact terms have to be included [155].

Figure 3.16: Same as Fig. 3.14, but for the NNLOL plus v'_8 Hamiltonian.

The understanding of neutrino-nucleon interactions is required both in astrophysics, mainly in the context of supernovae explosions and of the cooling of neutron stars, and in high energy physics, in particular to reduce the systematic uncertainties of the data analysis. In the low-momentum transfer regime (|q| of the order of tens of MeV), the non relativistic limit of the weak current matrix element is expected to be applicable, and the nuclear response to weak probes delivering energy ω and momentum q is evaluated at leading order in |q|/m. In the corresponding expression, A is the particle number and Ô_q the one-body weak operator that induces a transition from the ground state |Ψ_0⟩ to the excited state |Ψ_f⟩, which are eigenstates of the nuclear Hamiltonian with energies E_0 and E_f, respectively (see Eq. (2.27)). The non relativistic Fermi (F) and Gamow-Teller (GT) operators describing low energy weak interactions involve the form factors at zero momentum transfer, g_V = 1.00 and g_A = 1.26, while τ⁺_i is the isospin-raising operator acting on the i-th nucleon. The spin-longitudinal and spin-transverse components of the Gamow-Teller response function are defined as the components parallel and orthogonal to q, respectively. They can differ significantly at larger values of |q| and, in principle, have to be calculated separately. However, when not otherwise specified, with "Gamow-Teller response" we refer to the total response, given by the sum over the cartesian components. The differences between the integrated spin-longitudinal and spin-transverse responses will be discussed in Section 4.5.3.
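A toy matrix model illustrates how a response of this kind is organized: diagonalize a small Hermitian "Hamiltonian", collect the strengths |⟨f|Ô|0⟩|² and excitation energies, and check the non-energy-weighted sum rule Σ_f |⟨f|Ô|0⟩|² = ⟨0|Ô†Ô|0⟩, which follows from completeness of the final states. The dimension and the operators below are random placeholders, not the nuclear matter calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2.0                    # toy Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)       # eigenvalues in ascending order
psi0 = evecs[:, 0]                     # ground state

O = rng.normal(size=(dim, dim))        # toy one-body transition operator

# response strengths |<f|O|0>|^2 and excitation energies omega_f = E_f - E_0
strengths = np.array([abs(evecs[:, f] @ O @ psi0)**2 for f in range(dim)])
omegas = evals - evals[0]

# non-energy-weighted sum rule: sum_f |<f|O|0>|^2 = <0|O^dag O|0>
S0 = strengths.sum()
```

In the thesis this kind of sum-rule check is what allows the contribution of the neglected n ≥ 2 particle-hole states to be estimated at a later stage.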
Calculations of the charged-current weak response have recently been performed in Refs. [47,156]. In the high-energy domain the short-range correlations of the CBF play a major role, while for low neutrino energies, ≤ 10 MeV, the response is mainly affected by long-range correlations that lead to the excitation of collective modes, taken into account within the correlated Tamm-Dancoff approximation (CTDA). Even including the spin-isospin dependent correlations of Eq. (2.30), the CBF states are in general not eigenstates of Ĥ. However, it has been argued by Feenberg that the Hamiltonian matrix has smaller off-diagonal elements in the CBF basis than in the non-interacting FG basis. Hence, perturbative many-body calculations are expected to converge more rapidly in CBF. Following Refs. [47,156], in this Thesis we only consider transitions between the correlated ground state and correlated 1-particle–1-hole (1p − 1h) excited states. The n ≥ 2 particle-hole correlated states give a smaller contribution, mainly at large excitation energy, the size of which will be estimated at a later stage by studying the sum rules of the weak response. Neglecting orthogonality corrections, the CBF matrix element between the ground state and a 1p − 1h excitation reads

⟨p_m h_i|Ô_q|0⟩_CBF = ⟨Φ_{p_m h_i}|F†Ô_q F|Φ_0⟩ / (⟨Φ_{p_m h_i}|F†F|Φ_{p_m h_i}⟩ ⟨Φ_0|F†F|Φ_0⟩)^{1/2} ,

where p_m and h_i denote the whole set of quantum numbers of the single-nucleon states, namely the momentum and the spin and isospin projections along the z-axis. This quantity, entering all our calculations of the response function, will allow us to define the effective weak operators, as discussed in the next Section.

Effective weak operators

We define the effective weak operators Ô^eff_q through the relation

⟨p_m h_i|Ô^eff_q|0⟩_FG = ⟨p_m h_i|Ô_q|0⟩_CBF .

As in the calculation of the hamiltonian expectation value, a cluster expansion of the correlated matrix element of the weak operator can be performed [157]. The smallness parameters in this case are f^c_{ij} − 1 and f^p_{ij}. Following [33], we rewrite the matrix element of Eq.
(4.7) as a ratio of two factors, R_a and R_b. A cluster expansion of both R_a and R_b needs to be performed; the order n of the perturbative series in terms of f^c_{ij} − 1 and f^p_{ij} is given by the sum of the orders of the two expansions. It is convenient to carry out the cluster expansion of R_b first, since it does not involve the transition operator. The product of correlation operators can be written as in Eq. (2.68), associated with the denominator of the two-body distribution function. Because of the symmetry of the ground-state wave function, the sums can be restricted to ordered indices i < j < · · ·, and an analogous relation holds for the 1p − 1h state. Using the results of Appendix B, it is possible to extract N particles from the Slater determinant of the ground state. In order to make the cancellation among disconnected diagrams manifest, it is worth introducing a new index, n̄_i = 1, . . . , h_i − 1, h_i + 1, . . . , A, labeling the A − 1 states of a system lacking both single-particle states p_m and h_i. Thus, adding and removing the contribution of the hole state, the N-body term of the numerator can be rewritten accordingly. Although the sum of the cluster terms precisely gives the denominator of the two-body distribution function, the diagrammatic rules stemming from Eq. (4.13) slightly differ from those of g_p(r_12):
• Wavy lines represent both scalar and operator correlations, f^c_{ij} − 1 and f^p_{ij}, and two kinds of vertices and exchange lines are present.
• The bare vertex, arising from ψ*_{n̄_i}(x_i) ψ_{n̄_i}(x_i), does not involve the hole state; it tends to the "standard" vertex in the thermodynamic limit only. On the other hand, the direct term of the hole state, ψ*_{h_i}(x_i) ψ_{h_i}(x_i), is represented by an h_i-vertex: a closed loop around the internal point i.
• The hole state is not exchanged in the bare exchange lines, but there is an additional h_i-exchange line, which connects points i and j. At most one h_i-vertex or h_i-exchange line can appear in a given diagram.
It is possible to factorize the sum of cluster diagrams into two subsets, as shown in Fig. 4.1.
The first subset, denoted by 1 + C(h_i), contains, in addition to the unity, all the connected diagrams having one h_i-vertex or one h_i-exchange line. All the remaining diagrams, both connected and disconnected, belong to the second subset. The diagrammatic cluster-expansion rules for the denominator can readily be obtained from those of the numerator by replacing the hole state h_i with the particle state p_m. The sum of the denominator's cluster diagrams can be factorized into two subsets. The first, 1 + C(p_m), is made of the unity and of the connected diagrams with one p_m-vertex or one p_m-exchange line, while the second cancels the corresponding subset of the numerator. Therefore, an analytic expression for R_b, Eq. (4.14), is obtained. At first order in f − 1, the explicit expression of Eq. (4.14) is represented diagrammatically in Fig. 4.2; in the corresponding expression |α_i⟩ denotes the spin-isospin state of particle i.

Expansion of R_a

Apart from the square root, the denominator of R_a is identical to the numerator of R_b; hence the same diagrammatic rules apply. In the numerator, on the other hand, the weak transition operator Ô_q appears. Using the symmetry of the 1p − 1h and ground-state wave functions, we can single out Ô_q(1), defined in Eq. (4.3), which acts on particle 1 only, and carry out the cluster expansion of F†Ô_q(1)F. Using again the results of Appendix B, the orthogonality of the Slater minors introduced therein and the properties of the antisymmetrization operator A, one obtains an expression involving the index n̄_i defined just above Eq. (4.13). The diagrammatic rules for R_b need to be modified in order to include the weak transition operator acting on particle 1 and to deal with the p − h exchange line:
• The weak transition operator Ô_q(1), carrying momentum q and a spin-isospin operator, is attached at point 1. It is represented by a dashed line and an arrow indicating the flow of the momentum q.
• All internal points but point 1 must be reached by at least one wavy line.
• As in the diagrammatic expansion of R_b, bare vertices are due to ψ*_{n̄_i}(x_i) ψ_{n̄_i}(x_i) and lack the hole state; the hole state is absent from the bare exchange lines as well. If the diagram is finite in the thermodynamic limit, both vertices and exchange lines tend to the "standard" ones of the FR cluster expansion. There are neither h- nor p-vertices, and neither h- nor p-exchange lines.
• There is a single ph-exchange line connecting points i and j. In order for a given diagram not to vanish, either the ph-exchange line or the ph-vertex needs to be connected with particle 1.
As in the case of R_b, the sum of cluster diagrams can be factorized into two subsets, shown in Fig. 4.3. The first, indicated by C(q, p_m, h_i), consists of the connected diagrams containing particle 1 and one ph-exchange line or one ph-vertex. The second includes both connected and disconnected diagrams without ph-vertices and ph-exchange lines. As in the calculation of R_b, diagrams containing neither the weak transition operator, nor the ph-exchange line, nor the ph-vertex cancel with the corresponding ones arising from the denominator. Hence, an expression for R_a is obtained. The first-order diagrams in f − 1 contributing to R_a are displayed in Fig. 4.4.

Zeroth order

The matrix element of the weak operator at zeroth order in f − 1, displayed in Fig. 4.5, corresponds to the non-interacting Fermi gas result. Using Eq. (4.18), it can be written in a form in which the discretized momentum conservation is expressed by a Kronecker delta. The SNM spin-isospin matrix elements ⟨α_{p_m}|Ô_{στ}(1)|α_{h_i}⟩, along with all the other matrix elements appearing in the rest of this Section, are given in Appendix E for both Fermi and Gamow-Teller transitions.

First order

At first order in f − 1, diagrams arising from both the numerator and the denominator of Eq. (4.20) contribute to the matrix element.
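The momentum-conservation statement above can be checked numerically. The sketch below is our own toy illustration (all names and values are ours, not the thesis'): in one dimension, the matrix element of the vertex e^{iqx} between plane waves on a periodic box reduces to a Kronecker delta enforcing p = h + q.

```python
import numpy as np

L = 10.0                          # box size (arbitrary units)
N = 64                            # lattice points for the numerical integral
x = np.arange(N) * L / N
k = lambda n: 2 * np.pi * n / L   # allowed box momenta

def matrix_element(n_p, n_h, n_q):
    """(1/L) * integral dx  exp(-i p x) exp(i q x) exp(i h x)."""
    integrand = np.exp(1j * (k(n_h) + k(n_q) - k(n_p)) * x)
    return np.sum(integrand) * (L / N) / L

print(abs(matrix_element(3, 1, 2)))  # p = h + q  -> 1.0
print(abs(matrix_element(4, 1, 2)))  # p != h + q -> ~0
```

On the lattice of allowed momenta the cancellation is exact, which is why the discrete Kronecker delta replaces the continuum momentum-conserving delta function.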
In order to include all the first-order terms, three-body cluster contributions have to be taken into account. These contributions, neglected in Refs. [157,47,156], bring about some inconsistencies, to be discussed later. At first order in f − 1, the two-body cluster term of Eq. (4.17) takes a simple analytic form. The analytic expressions of the two-body diagrams of Fig. 4.6, corresponding to the numerator of Eq. (4.20), can be derived by substituting the latter relation in Eq. (4.17). Since all of these diagrams converge in the thermodynamic limit, the index n̄_i can safely be replaced by the standard n_i, including the hole state h_i. Because of the symmetry of X_1(x_1; x_2, x_3) under the exchange x_2 ↔ x_3, when the latter expression for the three-body cluster term is inserted in Eq. (4.18), one finds Eq. (4.32). The first-order three-body cluster diagrams can be grouped in two subgroups; those coming from the first line of Eq. (4.32) are depicted in Fig. 4.8, the others in Fig. 4.9. Diagram O^(1)_{Ng} cancels the contribution of diagram O^(1)_{Na}; this is an indication that three-body diagrams need to be taken into account. The first two diagrams of Fig. 4.9, being disconnected, cancel with the denominator. All the remaining diagrams of Fig. 4.9 vanish. This can be shown by noting that the bare exchange line is represented by the modified Slater function ℓ̄_{ij}, which does not contain the hole state,

∫ dr_{ij} ℓ̄_{ij} e^{i h_i·r_{ij}} = 0 ,   (4.38)

and by using the matrix elements of Appendix E.

Second order

As in the first-order case, diagrams coming from both the numerator and the denominator of Eq. (4.20) have to be taken into account. In order to consistently include all the second-order terms, in principle up to five-body cluster diagrams need to be computed. In our calculations [158], we have considered second-order two-body cluster terms only, as in Refs. [157,47,156]. Second-order three-body cluster diagrams are in fact much smaller than the corresponding two-body ones, as confirmed by a direct computation of some of them.
When the second-order term is included, the two-body cluster of Eq. (4.24) acquires the additional term (f_12 − 1)Ô_q(1)(f_12 − 1), which brings about new diagrams for the numerator of Eq. (4.20), analogous to those of Fig. 4.6. As can be seen in Fig. 4.10, the only difference consists in the replacement of the single wavy lines with doubly wavy lines, accounting for the second-order correlations (f_12 − 1)(f_12 − 1). As far as the denominator of Eq. (4.20) is concerned, the two-body second-order diagrams, shown in Fig. 4.20, can be obtained from those of Fig. 4.7 by again replacing the single wavy lines with doubly wavy lines; once the second-order term is considered, the two-body cluster term of Eq. (4.10) acquires the corresponding contribution. The results for the second-order two-body cluster terms coincide with those obtained by the authors of Refs. [157,47].

Effective interaction

Using the formalism of CBF and the cluster expansion technique, the authors of Refs. [157,47] were able to develop an effective interaction, obtained from the bare Argonne v′8 potential, which incorporates the effects of the short-range correlations. In Ref. [37], the two-body effective interaction of Refs. [157,47] was improved with the inclusion of the purely phenomenological density-dependent potential of Ref. [111], accounting for the effects of interactions involving more than two nucleons. The CBF effective interaction, v^eff_12, suitable for use in Hartree-Fock calculations, is defined through the matrix elements of the hamiltonian in the correlated ground state,

⟨Ψ_0|Ĥ|Ψ_0⟩ = ⟨Φ_0|T̂ + Σ_{i<j} v^eff_{ij}|Φ_0⟩ .   (4.47)

As suggested by the above equation, the effective interaction allows one to calculate any nuclear matter observable using perturbation theory in the orthonormal FG basis.
However, extracting the effective interaction is, in general, a very challenging task, involving difficulties even more severe than those associated with the calculation of the expectation value of the hamiltonian in the correlated ground state. The procedure developed in Ref. [47] consists in carrying out a cluster expansion of the lhs of Eq. (4.47) and keeping only the two-body cluster contribution. The sum of the two-body cluster contributions of the potential and kinetic energies of Eqs. (2.149) and (2.153) is then equated to the expectation value of the effective potential, from which the effective potential at the two-body cluster level follows. The effective potential obtained in this way slightly differs from the one reported in the literature. The authors of Refs. [47,156] have not performed the integration by parts leading to Eq. (2.153); as a consequence, they have neglected the terms in which the gradient operates both on the correlation function and on the plane waves. In our effective potential these terms, although small compared to the other contributions, are fully taken into account. We have improved the effective potential by adding the three-body cluster contributions of Eqs. (2.164)-(2.173). This has allowed us to consistently include the UIX potential, whose leading-order terms emerge at the three-body cluster level. As in the construction of the density-dependent potential from UIX, the issue of the exchange pattern has to be carefully analyzed. The distinctive feature of the present calculation is that v^eff_12 contains the correlation between particles 1 and 2, making it possible to implement the inversion of P^στ_ij of Eq. (3.20) in a straightforward way. To be definite, consider the three-body cluster contribution to the ground-state expectation value of the two-body potential. A similar argument holds for the three-body cluster term of the kinetic energy and for the three-body potential contributions to the effective potential. In Fig.
4.12 the central and tτ components of the effective potentials at the two-body and three-body cluster level are compared. The starting bare NN interaction is the Argonne v′6; for the three-body cluster results the UIX three-body potential has been included in the calculations. Starting from a bare hamiltonian whose only interaction is the Argonne v′6 NN potential, we have computed the EoS of SNM in the low-density regime using both the new three-body cluster effective interaction and the older one limited to the two-body cluster level. In the lower panel of Fig. 4.13, the EoS of SNM are shown for a hamiltonian containing the Argonne v′6 NN potential along with the UIX three-body interaction model. Again, the curve obtained from the three-body cluster effective potential is close to the full calculation and, remarkably, it exhibits saturation.

Correlated Fermi gas and Hartree-Fock approximations

In both the correlated Fermi gas (CFG) and correlated Hartree-Fock (CHF) approximations, the weak response of cold SNM, defined in Eq. (4.1), is given by [47]

S(q, ω) = (1/A) Σ_{p_m, h_i} |⟨p_m h_i|Ô^eff_q|0⟩|² δ(ω + e_{h_i} − e_{p_m}) .   (4.53)

Within the CFG approximation, the single-particle energies are obtained from Eq. (2.5) in the case of a non-interacting hamiltonian,

e_k = k²/(2m) .   (4.54)

The single-particle approximation is retained in the CHF; however, the potential enters the calculation of the single-particle energies. As mentioned in Section 2.1, the Hartree-Fock approximation is not suitable for nuclear potentials having a repulsive core, like the Argonne models, because it does not encompass the correlations between nucleons. We use instead the effective potential described in the previous Section, which is appropriate for mean-field calculations; the single-particle energies of Eq. (2.5) are then computed with v^eff_12.

Correlated Tamm-Dancoff approximation (CTDA)

The Tamm-Dancoff approximation amounts to expanding the final state of Eq.
(4.1) in a series of 1p − 1h excitations. Because the hamiltonian is translationally invariant, the total momentum q of the state is conserved, and the momenta of the particle, p_m, and of the hole, h_i, satisfy the relation p_m − h_i = q. The eigenvalue equation for the effective hamiltonian defines the excitation energy ω_f. Multiplying the previous equation from the left by ⟨Ψ_{p_n;h_j}| and using the orthonormality of the 1p − 1h states yields an eigenvalue problem: the coefficients C^f_{mi} can be interpreted as the eigenvectors of the hamiltonian matrix H_{nj;mi} ≡ ⟨Ψ_{p_n;h_j}|Ĥ|Ψ_{p_m;h_i}⟩, while the excitation energies ω_f are the associated eigenvalues. In Appendix B it is shown that singling out particle 1 from the 1p − 1h wave function Ψ_{p_m h_i} yields a result in which the indices n_i and ñ_i denote the single-particle states of the Fermi gas and of the 1p − 1h excited state, respectively. Note that ñ_i must be regarded as a function of n_i; in particular, ñ_i(n_i) = n_i except for ñ_i(h_i) = p_m. The corresponding minor is obtained from Ψ_{p_m h_i} by removing the column corresponding to the state ñ_1 and the first row, associated with particle 1. Extracting particle 2 from the Slater determinants leads to an analogous expression, in which the next minor also lacks particle 2 and the state ñ_2. Being a one-body operator, the kinetic energy, on account of Eq. (4.59), contributes only to the diagonal part of H_{nj;mi}. The diagonal matrix elements of the effective potential can be computed employing Eq. (4.60). The sum of the first term of Eq. (4.61) and the first term of Eq. (4.62) corresponds to the ground-state energy of the system. By recognizing in the sum of Eq. (4.61) and Eq. (4.62) the single-particle energy of Eq.
(2.5), one obtains the diagonal part of the 1p − 1h matrix elements of the hamiltonian, where ⟨p_n h_i|O_12|h_j p_m⟩ denotes the two-body matrix element of the operator Ô_12. The off-diagonal matrix elements of the hamiltonian come from the two-body potential; making use of Eq. (4.60) again, they can be computed in the same fashion. Collecting Eqs. (4.64) and (4.66), one finds a compact expression [47] for the matrix elements of the hamiltonian. Finally, substitution in the eigenvalue equation (4.58) gives the working form of the CTDA equations. The nuclear hamiltonian commutes with the total isospin T, with the total isospin projection along z, T_z, and with the total spin S, but not with the total spin projection along z, S_z, because of the tensor term of the potential. Solving Eq. (4.58) then leads to coefficients C^f_{mi} such that the final states of Eq. (4.56) are eigenstates of S, T and T_z: |f⟩ ≡ |f_{T T_z S}⟩. The combinations of particle-hole pairs that are eigenstates of the total spin S and of its projection along the z-axis, S_z, which define the particle-hole Clebsch-Gordan coefficients, are shown in Table 4.1. The differences with respect to the total spin states of the particle-particle pairs, also given in Table 4.1, are due to the phase factor appearing in the canonical transformation to particles and holes [52]. For a more detailed discussion, see Appendix F. The treatment of the total isospin can be carried out in complete analogy, replacing the spin-up and spin-down single-particle states with the proton and neutron isospin states, respectively. It is possible to reduce the computational complexity of the eigenvalue equation by carrying out the summation over the spin-isospin projections along the z-axis.

Table 4.1: Spin configurations for a particle-particle pair and a particle-hole pair for spin-1/2 particles.
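The structure of the eigenvalue problem of Eq. (4.58) can be illustrated with a toy numerical sketch (ours; random numbers stand in for the actual matrix elements of the effective hamiltonian): diagonalizing the hermitian matrix H_{nj;mi} yields the excitation energies ω_f and the coefficients C^f_{mi}, and the unitarity of the eigenvector matrix guarantees that the total transition strength is conserved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # toy number of 1p-1h pairs

# Stand-in for H_{nj;mi}: hermitian, with the diagonal playing the role of
# the single-particle energy differences e_p - e_h (all values are toy).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = 0.5 * (A + A.conj().T) + np.diag(np.linspace(10.0, 30.0, n))

omega_f, C = np.linalg.eigh(H)           # excitation energies and C^f_{mi}

O = rng.normal(size=n) + 1j * rng.normal(size=n)   # toy <p_m h_i|O_q|0>
amp = C.conj().T @ O                     # <f|O_q|0> = sum_{mi} C^{f*}_{mi} O_{mi}
strength = np.abs(amp) ** 2              # strengths of the peaks in S(q, w)

# Unitarity of C redistributes the strength over the omega_f but conserves
# the total: sum_f |<f|O|0>|^2 = sum_{mi} |O_{mi}|^2.
print(np.isclose(strength.sum(), (np.abs(O) ** 2).sum()))  # True
```

This strength conservation is the same mechanism exploited later when comparing the CTDA response with the sum rules.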
A further simplification arises by noting that the final states of both the Fermi and the Gamow-Teller transitions are characterized by T = 1 and T_z = 1. To simplify the notation, the isospin indices may then be omitted, it being understood that T = 1 and T_z = 1. Unlike in Eq. (4.56), the sum is now restricted to the momenta of the particle and of the hole and to S_z; furthermore, the coefficients are labeled C^f_{T T_z S S_z}, and k_ij ≡ h_i − h_j. For the sake of simplicity, the superscript "eff" has been omitted where the channels of the effective potential are specified. For the Gamow-Teller transition the final state has S = 1; hence it is necessary to compute the nine matrix elements ⟨p_n h_i|v^eff_12|h_j p_m⟩_{S S′_z S_z} corresponding to S_z, S′_z = −1, 0, 1.

Numerical calculation of the response

We model the infinite system using a cubic box of side L with periodic boundary conditions. Hence, the single-particle wave functions are the plane waves of Eq. (2.9) with the discrete momenta of Eq. (2.10). For zero-temperature SNM, all single-particle states with |k_i| ≤ k_F are occupied in the ground state. The momenta of the 1p − 1h excitations are such that |h_j| ≤ k_F and |p_i| = |h_j + q| > k_F. For the hole and particle momenta to lie on the lattice of allowed momentum states in the box, the momentum transfer must be of the form q = (2π/L)(n_qx, n_qy, n_qz), with n_qi = 0, ±1, ±2, . . .. The size of the basis, which can be increased by increasing n²_qx + n²_qy + n²_qz, has been determined by requiring that the response of a system of non-interacting nucleons computed on the lattice agree with the analytical result of the FG model [47,156]. The FG response is obtained by replacing the effective operator with the bare operator in Eq. (4.53) and using the single-particle energy of Eq. (4.54).
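The counting of basis states described above can be sketched as follows. This is a minimal version of our own (the box side is a toy value, and the spin-isospin degeneracy of each momentum state is ignored): enumerate the lattice momenta, select the holes inside the Fermi sphere, and keep the 1p − 1h pairs with p = h + q outside it.

```python
import numpy as np
from itertools import product

kF = 1.33                          # fm^-1, Fermi momentum of SNM at rho = 0.16 fm^-3
L = 6.0                            # fm, box side (toy value)
unit = 2 * np.pi / L               # lattice spacing in momentum space

# All lattice momenta in a cube large enough to contain the Fermi sphere.
nmax = int(np.ceil(kF / unit)) + 1
lattice = [unit * np.array(n) for n in product(range(-nmax, nmax + 1), repeat=3)]

# Hole states: |h| <= kF (each momentum hosts 4 spin-isospin states in SNM).
holes = [h for h in lattice if np.linalg.norm(h) <= kF]

# 1p-1h pairs for a momentum transfer q on the lattice: p = h + q outside
# the Fermi sphere (Pauli principle).
q = unit * np.array([1, 0, 0])
pairs = [(h, h + q) for h in holes if np.linalg.norm(h + q) > kF]

print(len(holes), len(pairs))      # 7 hole momenta, 5 allowed 1p-1h pairs
```

Increasing L refines the momentum lattice, which is the mechanism used in the text to converge the discretized response towards the analytical FG result.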
The analytical calculation can be performed using a continuum of momentum states [52], while the numerical result consists of a collection of discrete delta functions peaked at the values of the single-particle energies. For a better representation of the results, as well as for fitting purposes, a gaussian representation of the energy-conserving delta function has been adopted,

δ(ω − ω_f) → (1/√(2πσ²)) exp[−(ω − ω_f)²/(2σ²)] .   (4.86)

For sufficiently small values of the gaussian width σ, the results become insensitive to it. All the results shown in this Section refer to SNM at the equilibrium density ρ = 0.16 fm⁻³.

CFG and CHF

The FHNC/SOC calculations and the associated minimization procedure, explained in Section 2.5.4, provide a set of correlation functions corresponding to the minimum of the hamiltonian expectation value. We have found the best correlation functions for the Argonne v′6 and v′8 two-body potentials and, for comparison, we have also considered the correlations of Ref. [20], corresponding to Argonne v18. With these correlations, the Fermi and Gamow-Teller response functions have been evaluated in the CFG and CHF approximations. When only two-body cluster diagrams are considered, as in Refs. [47,156], the CFG response, suppressed by 20-25% with respect to the FG case, exhibits a sizable dependence on the choice of the correlations, as shown in the left panels of Figs. 4.14 and 4.15. These figures refer to a momentum transfer q = |q|(4x̂ + 4ŷ + 4ẑ)/√48 with |q| = 0.3 fm⁻¹. The folding gaussian function has a width of 0.25 MeV. This unphysical effect is removed once the effective weak transition operator is computed at the three-body cluster level: the CFG curves in the right panels of the aforementioned figures are very close to each other, when not superimposed. Our results therefore appear to be more robust than those of Refs. [47,156], as physical quantities should not be sensitive to the details of the short-range behavior of the correlation functions.
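The gaussian folding of Eq. (4.86) can be sketched as follows (peak positions and strengths are toy values of ours): each discrete peak is replaced by a normalized gaussian, and the integrated strength is preserved.

```python
import numpy as np

def folded_response(omega, omega_f, s_f, sigma):
    """Replace each delta peak (position omega_f, strength s_f) by a
    normalized gaussian of width sigma, as in Eq. (4.86)."""
    g = np.exp(-(omega[:, None] - omega_f[None, :]) ** 2 / (2 * sigma ** 2))
    return (g * s_f).sum(axis=1) / np.sqrt(2 * np.pi * sigma ** 2)

omega = np.linspace(0.0, 60.0, 2001)       # energy-transfer grid (MeV)
omega_f = np.array([12.0, 18.0, 25.0])     # toy peak positions (MeV)
s_f = np.array([0.4, 0.3, 0.3])            # toy strengths, summing to 1

S = folded_response(omega, omega_f, s_f, sigma=0.25)

# The folding conserves the integrated strength: rectangle-rule integral.
dw = omega[1] - omega[0]
print(S.sum() * dw)                        # ~ 1.0
```

For σ small compared to the spacing of the ω_f, the folded curve resolves the individual peaks, which is the sense in which the results become insensitive to σ.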
A small dependence on the choice of the correlations is also observed in the CHF calculations, shown in Figs. 4.14 and 4.15 as well. The reason for this lies in the single-particle energies, which depend on the first four components of the NN potential (see Eq. (2.11)). It turns out that the first four components of the v′8 potential are very similar to those of v′6, while the same is not true for the full v18. We observe that the shift of the strength to higher ω is slightly enhanced by the three-body clusters, especially for the Fermi transition.

CTDA results

The nuclear matter response calculated in the CTDA for |q| = 0.3 fm⁻¹ is displayed in Figs. 4.16 and 4.17 for the Fermi and Gamow-Teller transitions, respectively. The peak corresponding to the collective mode is shifted to lower energies when the three-body cluster is included. This effect, ascribed to the change of the single-particle energies, is mitigated when the UIX potential is included in the hamiltonian. The inclusion of the three-body cluster also produces a depletion of the Gamow-Teller resonance at |q| = 0.3 fm⁻¹, particularly apparent when the UIX potential is part of the nuclear hamiltonian.

Sum rules

The set of final states in Eq. (4.1) is not exhausted by the 1p − 1h excitations. In principle, transitions to more complex multi p − h states should also be considered. So far, the contribution of these states has been neglected; however, an estimate of their importance can be obtained by computing the sum rules. The static structure function is defined by

S(q) = ∫ dω S(q, ω) .

While a direct integration of the CTDA response functions allows for the evaluation of S(q), the static structure function can also be computed using the variational ground state (VGS) resulting from the FHNC/SOC calculations. In particular, the knowledge of the two-body operatorial distribution functions of Eq. (2.111) is needed to compute the structure function.
While the VGS calculation includes all the multi p − h excitations in the CBF basis, in the CTDA only the correlated 1p − 1h states are taken into account. Therefore, multi p − h contributions can be estimated from the difference S_VGS(q) − S_CTDA(q). Note, however, that an interplay between many-body correlations and multi p − h excitations could in principle take place. In fact, while the VGS includes many-body correlations through the chain summations, in the CTDA of Ref. [47] only two-body cluster terms have been considered. The static structure functions for the Fermi and Gamow-Teller transitions are displayed in the left and right panels of Fig. 4.19, respectively. The dashed lines refer to the non-interacting FG, while the squares and the stars represent the two-body and three-body cluster results, respectively. The NN potential of the effective hamiltonian is the Argonne v′6, while in the three-body cluster calculation the UIX interaction has been used. Note that, in the sum-rule calculation, the authors of Ref. [47] set the factor g_A of the Gamow-Teller transition to one; for a better comparison with their results, we made the same choice of normalization. As noted in Ref. [47], the variational calculations of Ref. [159], because of the approximations involved in the FHNC/SOC scheme, do not show the correct behavior, S(q = 0) = 0, required by baryon-number conservation. On the other hand, the static structure function obtained within the CTDA does exhibit the appropriate low-momentum limit. As far as the multi p − h excitations are concerned, the two-body cluster results show that their contribution is smaller than the dominant 1p − 1h excitation, but not negligible. When the three-body cluster is accounted for, the VGS and the CTDA curves get closer. The shift turns out to be detectable for the Fermi transition, while for the total Gamow-Teller response it is very small.
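The limiting behavior of the static structure function quoted above can be checked directly for the non-interacting Fermi gas, where S(q) simply counts the fraction of Pauli-unblocked 1p − 1h excitations. The sketch below is a toy single-species lattice version of ours; the closed-form FG result S(q) = 3q/(4k_F) − q³/(16k_F³) for q ≤ 2k_F (and 1 beyond) is the standard textbook expression.

```python
import numpy as np
from itertools import product

# Occupied momenta (in units of 2*pi/L): all integer vectors with |n|^2 <= 9,
# i.e. kF = 3 lattice units; single species, no spin-isospin degeneracy.
nf2 = 9
occ = {n for n in product(range(-3, 4), repeat=3)
       if n[0]**2 + n[1]**2 + n[2]**2 <= nf2}
N = len(occ)

def S_lattice(nq):
    """1 - (fraction of holes h such that h + q is also occupied)."""
    blocked = sum(1 for h in occ
                  if (h[0] + nq[0], h[1] + nq[1], h[2] + nq[2]) in occ)
    return 1.0 - blocked / N

def S_fg(x):
    """Continuum Fermi-gas result, x = q / kF."""
    return 1.0 if x >= 2.0 else 0.75 * x - x**3 / 16.0

kF = np.sqrt(nf2)                          # lattice units
print(S_lattice((0, 0, 0)))                # 0.0 (baryon-number conservation)
print(S_lattice((7, 0, 0)))                # 1.0 (q > 2 kF: no Pauli blocking)
print(abs(S_lattice((2, 0, 0)) - S_fg(2 / kF)) < 0.06)  # True
```

The exact limits S(0) = 0 and S(q ≥ 2k_F) = 1 hold on the lattice as well, while at intermediate q the discretization introduces a small deviation from the continuum curve, mirroring the basis-size convergence discussed for the response.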
This is a clear indication that the difference between the FHNC/SOC and correlated Tamm-Dancoff results has largely to be ascribed to the multi p − h excitations. There are experimental and theoretical indications that the spin-longitudinal and spin-transverse response functions can differ significantly because of tensor forces. Thus, we have studied how the UIX three-body force affects these quantities by computing the spin-longitudinal and spin-transverse static structure functions, S_L(q) and S_T(q). As shown in Fig. 4.20, the inclusion of the UIX potential brings the CTDA curves for S_L(q) slightly closer to those of the VGS across all values of |q|. As far as the transverse static response function is concerned, the position of the maximum of the CTDA calculation including the UIX potential almost coincides with the VGS result. At small momentum transfer, however, the three-body cluster contributions move the S_T(q) obtained from the CTDA away from the variational results. Recently [160], the sum-rule relations (additional sum rules with increasing powers of ω in the integrand can be defined) have been inverted to obtain the spin response function of PNM at zero momentum transfer. The authors of Ref. [160] used the AFDMC formalism to compute the operatorial distribution functions, obtaining very promising results. As a follow-up of the present work, we are planning to compute the response function of PNM and compare it with their findings.

Conclusions

The main body of this Thesis has been devoted to the discussion of a novel approach, developed in Refs. [27,28], allowing one to obtain an effective density-dependent NN potential that takes into account the effects of three- and many-nucleon interactions. The resulting effective potential can easily be used in calculations of nuclear properties within many-body approaches based on phenomenological hamiltonians, including the effects of strong NN correlations, which cannot be treated in standard perturbation theory in the Fermi gas basis.
Moreover, the derivation of the density-dependent NN potential is fully consistent with the treatment of correlations underlying the FHNC and AFDMC approaches. While the reduction of n-body potentials to a two-body density-dependent potential is reminiscent of the approach of Refs. [29,30], our scheme significantly improves upon the TNI model in that (i) it is based on a microscopic model of the three-nucleon interaction providing a quantitative description of the properties of few-nucleon systems, and (ii) it allows for a consistent inclusion of higher-order terms in the density expansion, associated with four- and more-nucleon interactions. As shown in Chapter 3, the results of calculations of the PNM and SNM equations of state carried out using the density-dependent potential turn out to be very close to those obtained with the UIX three-body potential. In this context, a critical role is played by the treatment of both dynamical and statistical correlations, whose inclusion brings the expectation value of the effective potential into agreement with that of the UIX potential (see Fig. 3.8). This is a distinctive feature of our approach, as compared to different reduction schemes based on effective interactions suitable for use in standard perturbation theory [44,161]. Using the density-dependent potential, we have been able to carry out, for the first time, an AFDMC calculation of the equation of state of SNM consistently including the effects of three-nucleon forces. The results of this calculation show that the v′6+UIX hamiltonian, or equivalently the one including the effective potential, fails to reproduce the empirical data. The FHNC results obtained using the v′8 potential indicate that the 5-6 MeV underbinding at equilibrium density cannot be accounted for by replacing the v′6 with a more refined model, such as v18.
Hence, the discrepancy has to be ascribed either to deficiencies of the UIX model or to the effect of interactions involving more than three nucleons. In order to improve on the UIX three-nucleon interaction, we have performed nuclear matter calculations using the new generation of chiral-inspired three-nucleon potentials in coordinate space [46]. We have carried out a comparative analysis of the EoS of PNM and SNM obtained using the different parametrizations of the NNLOL potential, as well as the improved versions of the TM model discussed in Ref. [45]. The calculation of the SNM EoS has been performed within the variational FHNC/SOC approach. In the case of PNM we have also used the AFDMC computational scheme, the results of which turn out to be in close agreement with the variational FHNC/SOC estimates. Our analysis shows that the transformation from momentum to coordinate space brings about a cutoff dependence, leading to sizable effects in nuclear matter. As discussed in Section 3.3.1, the contribution of the contact term, which in PNM would vanish in the Λ → ∞ limit, cannot be fully determined by fitting the low-energy observables. Moreover, the NNN contact terms of the NNLOL2 and NNLOL3 models turn out to be attractive in PNM, leading to a strong softening of the EoS. An illustrative example of the uncertainty associated with the local form of the NNN contact term is provided by the results of Fig. 3.16 and Table 3.6. The NNLOL4 model largely overestimates the empirical value of the compressibility modulus of SNM, thus yielding a stiff EoS. On the other hand, as pointed out in Section 3.4.2, it predicts a soft EoS of PNM. The impact of this ambiguity is large, since the compressibility is one of the most important properties of the EoS. The recent discovery of a ∼2 M⊙ neutron star appears in fact to rule out dynamical models yielding a soft EoS of β-stable matter.
None of the considered three-nucleon potential models simultaneously explains the empirical equilibrium density and binding energy of SNM. However, among the different parametrizations that we have analyzed, the NNLOL4 and TM′3 potentials provide reasonable values of ρ0. It has to be emphasized that this is a remarkable result, as, unlike the UIX model, these potentials do not involve any parameter adjusted to reproduce ρ0. In order to resolve the inconsistencies involved in the contact term, one should include all contributions to this term arising from the chiral expansion at NNLO. Moreover, as pointed out by the authors of Ref. [155], due to the choice of the regulator function (see Eq. (1.36)), a fully consistent treatment should also take into account NNNNLO contact contributions. In future works we are planning to include the NNNLO contributions to the three-nucleon interactions of Refs. [40,68], as well as the three-body NNNNLO contact terms of Ref. [155]. For consistency, at that point the NNNLO long-range terms calculated in Ref. [69] should also be accounted for. However, since we are not using the chiral potential in the two-body sector, we need to deal with the issue of determining the low-energy constants entering the three-body interaction. The last Chapter of the Thesis reports the results of a calculation of the weak response of nuclear matter, carried out using effective operators and effective interactions derived within the framework of the CBF approach. This calculation significantly improves on those of Refs. [47,156], as it explicitly includes the contributions of three-nucleon clusters and three-nucleon forces. As far as the effective weak operators are concerned, we have shown that the sizable dependence of the response function on the choice of the correlation function is in fact unphysical.
When three-body cluster diagrams are considered, the transition matrix elements, in which correlations enter only through the effective weak operators, become nearly independent of the correlation functions. The three-body cluster contributions described in Section 2.5.3 have been consistently included in the construction of the effective interaction. As a first step we have computed the EoS of SNM for a hamiltonian including the two-body potential only. In this case, the three-body cluster effective interaction provides an EoS of SNM much closer to the one resulting from full FHNC/SOC calculations, compared to the one obtained using the two-body cluster effective interaction of Refs. [37,47]. The leading contributions of the UIX potential, emerging at the three-body cluster level, have also been included in the effective potential. As a result, the EoS of SNM exhibits saturation at ρ ≃ 0.18 fm⁻³. Inclusion of the three-nucleon interactions also affects the single-particle spectrum, leading in turn to a shift of the CHF response as a function of the energy transfer. The main effect of the three-nucleon force on the TDA response originates from a change of the off-diagonal elements of the effective interaction. As a result, the collective mode associated with the Fermi transition at |q| = 0.3 MeV turns out to be shifted to lower energy, although its magnitude is nearly unaffected by the three-body cluster contributions. On the other hand, a depletion of the peak is observed for the Gamow-Teller transition. The analysis of the TDA response for different momentum transfers also reveals a sizable effect of the three-nucleon cluster contributions. The sum rules for the Fermi transition come closer to the variational results once the three-body cluster contributions are taken into account, thus confirming the importance of many-body effects, which are included in the variational calculations through the chain summations.
The residual discrepancy cannot be accounted for by the n > 4-body cluster contributions, and is likely to be ascribable to the effect of multi p-h excitations, which are taken into account in variational calculations. This effect appears to be even larger in the structure function obtained from the Gamow-Teller response. In this case the change due to the three-body cluster is in fact very small. Some improvements are observed in the sum rules of the longitudinal and transverse responses. As shown in Fig. 4.20, the results of the three-body cluster calculations of S_L(q) carried out within TDA are slightly closer to the variational ones for all values of |q|, compared to the two-body cluster case. Moreover, the position of the maximum of the TDA calculation of the static transverse response is almost coincident with that obtained from variational calculations. At small momentum transfer, however, the three-body cluster contributions move the TDA S_T(q) away from the variational result.

The spin-orbitals must remain orthogonal. To achieve this, a set of A² Lagrange multipliers ǫ_{n_i n_j} is introduced in the variational equation. Since the energy is a real quantity, the Lagrange multipliers are elements of a Hermitian matrix, ǫ_{n_i n_j} = ǫ*_{n_j n_i}. The matrix of the Lagrange multipliers can therefore be diagonalized by performing a unitary transformation on the spin-orbitals. The new Slater determinant differs by a phase factor from the previous one, Ψ′ = det(U)Ψ, while the functional E[Ψ] is not affected by this unitary transformation. To simplify the notation, instead of working with the primed indices, we assume that the diagonalization has been made from the beginning. Therefore Eq. (A.6) can be written in terms of {ǫ_{n_i}}, the eigenvalues of ǫ_{n_i n_j}. Carrying out the variation δE[Ψ] leads to the canonical set of integro-differential Hartree-Fock equations

\[ \Big[-\frac{\nabla^2}{2m} + \sum_{n_j} \big(\hat{J}_{n_j} - \hat{K}_{n_j}\big)\Big]\, \psi_{n_i}(x_i) = \epsilon_{n_i}\, \psi_{n_i}(x_i) \, . \]
(A.9)

The direct and exchange operators, Ĵ and K̂ respectively, result from the differentiation of the direct and exchange terms defined in Eq. (A.5). The direct operator is local,

\[ \hat{J}_{n_j}\, \psi_{n_i}(x_i) = \Big[\int dx_j\, \psi^{*}_{n_j}(x_j)\, v_{ij}\, \psi_{n_j}(x_j)\Big]\, \psi_{n_i}(x_i) \, , \quad (A.10) \]

since, in order to evaluate its action on a given function at a point of configuration space, we only need to know the value of the function at that same point. On the other hand, the exchange operator,

\[ \hat{K}_{n_j}\, \psi_{n_i}(x_i) = \int dx_j\, \psi^{*}_{n_j}(x_j)\, v_{ij}\, \psi_{n_i}(x_j)\, \psi_{n_j}(x_i) \, , \quad (A.11) \]

is not local, since we need to know the value of the function ψ_{n_i} in all of configuration space. Comparing Eqs. (A.9) and (2.2), it is readily seen that the Hartree-Fock potential is given by

\[ \hat{v}^{HF} = \sum_{n_j} \big(\hat{J}_{n_j} - \hat{K}_{n_j}\big) \, . \quad (A.12) \]

The solutions of the eigenvalue equation enter the definition of the Fock hamiltonian through the direct and exchange operators. For this reason the Hartree-Fock equations of Eq. (A.9) are often referred to as pseudo-eigenvalue equations and are solved iteratively. The starting point is building the Fock hamiltonian with a guess of approximate spin-orbitals. Solving the resulting eigenvalue equation provides new spin-orbitals that can be used to rebuild the Fock hamiltonian. The procedure must be repeated until the solutions are close enough to the spin-orbitals used for the construction of the operator. This method of solution is often called the self-consistent field method. Using the orthonormality condition (A.2), ǫ_{n_i} can be obtained by taking the scalar product of Eq. (A.9) with ψ_{n_i},

\[ \epsilon_{n_i} = I_{n_i} + \sum_{n_j} \big[J_{n_i n_j} - K_{n_i n_j}\big] \, , \quad (A.13) \]

or, in Dirac notation,

\[ \epsilon_{n_i} = \langle n_i | -\tfrac{\nabla^2}{2m} | n_i \rangle + \sum_{n_j} \big( \langle n_i n_j | v_{ij} | n_i n_j \rangle - \langle n_i n_j | v_{ij} | n_j n_i \rangle \big) \, . \quad (A.14) \]

Eq. (B.1), from which the first two rows and the columns n₁ and n₂ have been removed. A more compact expression can be given for the latter result,

\[ \Psi_0 = \frac{1}{\sqrt{A(A-1)}} \sum_{n_1 < n_2} (-1)^{n_1+n_2+1}\, \mathcal{A}\big[\psi_{n_1}(x_1)\, \psi_{n_2}(x_2)\big]\, \Psi_0^{m \neq n_1, n_2}(x_3 \ldots x_A) \, . \]
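The build-diagonalize-repeat loop of the self-consistent field method described above can be sketched schematically. The "Fock" matrix below is a generic toy mean-field construction, not the nuclear Fock operator of Eq. (A.9); only the iterative scheme is the point of the example:

```python
import numpy as np

# Schematic self-consistent field loop for pseudo-eigenvalue equations of
# the Hartree-Fock type.  The Fock matrix is a toy model: a one-body part
# plus direct/exchange-like mean fields built from the occupied orbitals.
# All matrices here are made-up illustrations, not nuclear-physics input.

def build_fock(h, v, density):
    """Toy Fock matrix: one-body h plus direct and exchange-like terms."""
    direct = np.einsum('prqs,rs->pq', v, density)
    exchange = np.einsum('prsq,rs->pq', v, density)
    return h + direct - 0.5 * exchange

def scf(h, v, n_occ, tol=1e-10, max_iter=500):
    """Iterate: build Fock from current density, rediagonalize, repeat."""
    n = h.shape[0]
    density = np.zeros((n, n))
    for _ in range(max_iter):
        eps, c = np.linalg.eigh(build_fock(h, v, density))
        occ = c[:, :n_occ]                    # lowest-energy spin-orbitals
        new_density = occ @ occ.T
        if np.linalg.norm(new_density - density) < tol:
            return eps, c
        density = 0.5 * (density + new_density)  # damping stabilizes the loop
    raise RuntimeError("SCF did not converge")

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 4)); h = 0.5 * (h + h.T)          # symmetric one-body part
v = 0.05 * rng.normal(size=(4, 4, 4, 4))                  # weak two-body "interaction"
v = 0.5 * (v + v.transpose(1, 0, 3, 2))
eps, c = scf(h, v, n_occ=2)
print("converged single-particle energies:", np.round(eps, 4))
```

The loop stops once the density built from the new orbitals reproduces the one used to build the Fock matrix, which is exactly the self-consistency condition stated in the text.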
C.1 Four-body reducible diagrams

In this appendix we report the explicit calculation of the four-body reducible diagrams involved in the calculation of the FR three-body cluster contribution of v₁₂. The analytic expression of diagram (a), drawn in Fig. 2.19, is:

In general, the imaginary time evolution does not preserve the normalization. The branching factor is nothing but the multiplicity of the walkers that are generated at τ + ∆τ starting from R′. Hence an integration over the possible final positions has to be performed. To compute the integral, as before, we perform a Taylor series expansion, where r_{iα} denotes the α-th cartesian component of the coordinates of the i-th particle. The Green's function depends on the integration variable only through the gaussian diffusive term, which is symmetric in the relative coordinate R − R′. Therefore, only even terms survive.
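The branching step invoked here is easy to illustrate. In the sketch below each walker carries a weight w (e.g. exp(−(E_L(R′) − E_T)Δτ)) and is replicated int(w + ξ) times, with ξ uniform in [0, 1), so that the average number of copies equals w. The walker positions and weights are made up for the example; the local energy is not the nuclear hamiltonian:

```python
import random

# Minimal sketch of the branching step of a diffusion Monte Carlo walk,
# illustrating the "multiplicity of the walkers" mentioned in the text.
# Weights and positions below are illustrative placeholders.

def multiplicity(weight, rng):
    """Stochastic integer number of copies with mean equal to `weight`."""
    return int(weight + rng.random())

def branch(walkers, weights, rng):
    """Replicate or kill each walker according to its branching factor."""
    new_walkers = []
    for r, w in zip(walkers, weights):
        new_walkers.extend([r] * multiplicity(w, rng))
    return new_walkers

rng = random.Random(42)
walkers = [0.0, 0.5, 1.0, 1.5]       # 1D stand-ins for configurations R'
weights = [1.2, 0.9, 0.4, 2.3]       # e.g. exp(-(E_L(R') - E_T) * dtau)
population = branch(walkers, weights, rng)
print(len(population), "walkers after branching")
```

Since E[int(w + ξ)] = w, the population size fluctuates but its expectation tracks the total weight, which is why the branching factor compensates for the non-conservation of the norm under imaginary-time evolution.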
Fresh and Rheological Performances of Air-Entrained 3D Printable Mortars

The effect of air-entraining admixture (AEA) on the fresh and rheological behavior of mortars designed to be used in 3D printers was investigated. Blast furnace slag, calcined kaolin clay, polypropylene fiber, and various chemical additives were used in the mortar mixtures produced with Super White Cement (CEM I 52.5 R) and quartz sand. In addition to unit weight, air content, and compressive strength tests, extrudability, buildability, and open time tests were applied in order to determine the stability of 3D printable mortar elements created by extruding layer by layer without any deformation. Fresh and rheological properties of the 3D printable mortars were also determined. It was concluded that the addition of AEA to the mortars decreased the unit weight, viscosity, yield stress, and compressive strength, but increased the air content, spread diameter, initial setting time, and thixotropy of the 3D printable mortar. It is recommended to develop a dedicated chemical admixture for 3D printable mortars, considering the active ingredients of the chemical additives that affect the fresh and rheological performance of mortar, such as superplasticizers, viscosity modifiers, and cement hydration controllers.

Introduction

3D-object production was first developed by Charles Hull in 1984 using numerical information [1]. In this method, a 3D digital model is converted into a stereolithography (STL) format and sent to the 3D printer. Since almost all of the methods are based on building an object layer upon layer, creating a product with a 3D printer was defined in ASTM F2792-12a [2] as Additive Manufacturing (AM). The fourth industrial revolution (Industry 4.0), on the other hand, has been aimed at the digitalization of the most complex industrial products and industrial works.
3D printer technology as an application of digitalization is used in many areas such as industrial manufacturing, medicine and health, aviation and space, architecture and construction, military applications, textiles, food, and education. Although approaches to construction automation and digitalization are still in the innovation or seed phases, AM in the construction industry, with its many advantages, will be applied to large-scale elements in the near future thanks to increasing scientific research data and technological developments [3,4]. Some advantages of the method are faster construction, lower building cost, more geometric freedom, a shorter supply chain, improved productivity, lower energy consumption, less waste, elimination of formwork, safer construction sites, and social benefits such as opportunities for gender equality and for construction workers to acquire new skills, including the use of robotic systems [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Besides all these benefits, the sizes of 3D printers, directional dependency, cybersecurity, interoperability, and the lack of standards and regulations are among the disadvantages of 3D printing in the construction sector [9,11,15,18,[20][21][22][23][24][25][26][27]. Mortar/concrete that can be used in a 3D printer must be extrudable to an acceptable degree so that it can be discharged from the nozzle of the printer, must have sufficient buildability, must be rigid enough to support subsequent layers without collapsing, and finally, must have sufficient time (open time) for maintaining workability [10]. The effort to meet all the mentioned properties at the same time makes 3D printable mortar or concrete mix design very complicated. It is well known from the literature (e.g., [28,29]), on the other hand, that air-entraining additives (AEAs) improve concrete resistance against frost attack.
AEAs consisting of surface-active agents or surfactants reduce the surface tension at the water/air interface and decrease the damaging effect of the hydraulic pressure resulting from freezing-thawing cycles of concrete, due to the intentional creation of tiny air bubbles stabilized by soluble salts, wood resins, stearic acid, and lignosulfonate acid [28]. As stated in [30], AEA can be used to produce a large number of micro-pores (50–1250 µm) with a uniform pore shape and to reduce the liquid-air interfacial tension with an improved hydration shell thickness. For a material with high yield stress such as fresh concrete, extra air-entraining agent can be added to decrease the rheological parameters for better pumpability [31]. In addition, adding air-entraining admixture is one of the effective methods to reduce the density of fresh concrete, and by reducing the concrete density, the layers below will be able to easily carry the layers added one on top of the other [9,32]. Lu et al. [33] designed spray-based 3D printable cementitious materials with fly ash cenospheres (FAC) and air-entraining agents and used these materials for density reduction of concrete. Assaad et al. [34] compared the efficiency of AEA and styrene-butadiene rubber (SBR) latexes in protecting 3D printable mortars against deterioration due to frost attack and determined that the incorporation of SBR was more efficient than AEA in reducing the drop in bond strength due to freeze-thaw cycles. They also indicated that air entrainment would only protect the layer itself against frost, while the interface between successive layers remains vulnerable to frost attack and risks of delamination. Das et al. [35] studied the effect of the different processing steps (pumping, acceleration/mixing, and extrusion) encountered in 3D printing of mortar and emphasized that the effect of the processing conditions on the stable air-void system resulting from the AEA should be taken into account.
Considering the current state of 3D printable materials, it can be seen that there is still not enough focus on material properties. De Schutter et al. [36] reported that currently available high-performance cement-based materials cannot be directly 3D printed due to their inadequate rheological and stiffening properties. Rahul et al. [37] stated that despite its rapid growth, there is only a limited understanding of the material requirements for 3D printability. According to Buswell et al. [38], 3D concrete printing (nowadays often referred to as 3DCP) manufacturing processes, which require expert machine operators and extraordinary care in the preparation and formulation of materials, are currently inconsistent and unreliable. Wangler et al. [39], on the other hand, stated that material challenges are significant to control early age hydration, rheology, and structural and durability performance, and concluded that material selection is the main issue for mechanical design of 3D printable concrete. Marchon et al. [40] also reported that one of the main issues for hydration and rheology control of concrete for digital fabrication is the material properties. For extrusion-based large-scale digital construction, cement-based materials need to exhibit optimum rheological and mechanical properties to comply with often conflicting requirements such as pumpability, extrudability, and buildability [41]. Bos et al. [42] reported that the material characteristics are an important (although not sole) parameter to determine the buildability of 3D printed concrete; however, they stated that it is not yet clear which material properties are the most suitable for 3D printable cementitious mortars, despite some suggestions that have appeared in earlier works. These results expressed by the researchers also reveal the importance of studies on the determination of material properties for three-dimensional-printed mortars or concretes. 
Most of the researchers studying the topic provided valuable suggestions and conclusions about the constituents that make up the binding phase (matrix) of 3D printable mixes. For example, Le et al. [43], Jeon et al. [44], Hambach et al. [45], Rahul et al. [37], Kazemian et al. [46], Panda and Tan [47], and Panda et al. [48] determined that traditional mineral admixtures such as fly ash, silica fume, and ground granulated blast furnace slag increase the performance of 3D printable concretes. Marchon et al. [40] stated that it is essential to take inorganic additives such as fly ash, slag, and silica fume or calcined clay into account in 3D printed concrete mix design to obtain easily extrudable mixtures. Srinivasan et al. [49] and Kuder and Shah [50], on the other hand, indicated that rheological modifiers like calcined clay were found necessary for a successful extrusion. Tregger et al. [51] studied the effect of calcined clay, fly ash, and high-range water-reducing admixture on the green strength of cement paste. Voigt et al. [52] investigated the effect of fly ash and calcined clay on the flowability and shape stability of 3D printed concrete. Panda et al. [53] used high-volume fly ash mixtures with nano-attapulgite clay to improve the printability of 3D concrete. Kondepudi and Subramaniam [54] studied a baseline mixture in which alkali-activated fly ash and slag were modified using dry components such as micro-silica and clay. Other researchers (e.g., [55][56][57]) have also noted that clays can be used as rheological modifiers for cement-based materials, as well-chosen additives such as clay powders and chemical admixtures help to achieve the desired level of thixotropic behavior in 3D printable concrete. On the other hand, the number of studies in the literature investigating the effect of air-entraining admixture on the fresh properties of 3D printable mortar is quite limited, and some of the findings in these studies contradict each other [33].
This article presents the first part of a comprehensive research project [58] whose main purpose was to investigate the effect of AEA on the behavior of 3D printable mortars in the fresh and hardened states. Another aim of this study was to contribute to the formulation of special chemical admixtures to be produced for 3D printable mortar. For this purpose, mortars were produced using all the chemical additives that preliminary experiments had shown to contribute positively to the basic characteristics of 3D printable cement-based materials. It is expected that the results of the study will also contribute to the clarification of the contradicting findings in the literature regarding the addition of AEA to 3D printed concrete.

Materials

Rapid hardening Super White Cement (CEM I 52.5 R in accordance with EN 197-1:2011) and ground granulated blast furnace slag (GGBFS) formed the binder components. Super White Cement was chosen for its superior adhesion strength and high strength [43,59], while GGBFS was selected as a mineral additive both to increase the performance of 3D concrete and to be compatible with white cement [43,60,61]. The fast setting of GGBFS at room temperature, due to its high calcium content, was also considered an advantage. Additionally, high-purity calcined kaolin clay was preferred as a rheology regulator to improve shape stability and printing quality due to its water-retaining property [62][63][64]. Chemical compositions and physical properties of the three components are shown in Table 1. Two fine aggregate classes of silica sand, with sizes of 0-0.5 mm and 0-1 mm and particle densities of 2.44 and 2.49 respectively, were used in the mixtures.
Monofilament synthetic polypropylene (PP) microfibers (commercially available as MasterFiber M 100), ultra-thin polypropylene fibers with high tensile strength and high elasticity modulus designed to disperse quickly and homogeneously throughout the mortar matrix, were added to the mixtures to reduce shrinkage and crack formation. The fibers were 13-19 mm in length and had a specific gravity of 0.91, tensile strength of 480 MPa, and modulus of elasticity of 8.48 GPa. Many preliminary experiments were carried out within the scope of the study, based on evaluations in terms of criteria such as consistency, setting time, 3D concrete characteristics of the mixtures, and flowability from the nozzle of a 3D printer. According to the findings obtained from the preliminary tests given in [58], we decided to add more than one chemical additive to the mixtures. A high-performance viscosity modifying agent (VMA1, commercially available as MasterMatrix® SDC 100) and a superplasticizer (MasterGlenium® T 803) were used in order to provide the extrudability and buildability properties of 3D printed mortar and to regulate the workability. Due to the early setting property of CEM I 52.5 R-type cement and the effect of VMA1, the 3D printable mortars began to harden within minutes, while they were still in a fluid consistency when first poured. Therefore, a non-chloride chemical admixture (MasterRoc® HCA 20) was used to control the dynamics of cement hydration and the workability time of the mortar. Another viscosity modifying agent and strength enhancer (VMA2) (MasterRoc® MS 685) was needed to provide improved cohesion, reduce porosity, and increase the compactness of the mixtures. In order to ensure the buildability of 3D printable mortars, the layers added on top of each other must start to set after a certain period of time in order to carry each other.
A setting accelerator (MasterRoc® SA 194) was added to the mixtures because the cement hydration control admixture extended this period too much. Since the amount of entrained air was chosen as a parameter in the experimental study, an air-entraining admixture (MasterAir MA 1) was also added to the mixtures. Finally, a high-performance plasticizer/set-retarding additive (MasterSet R 2) was also used together with the air-entraining admixture, in accordance with the manufacturer's recommendation, as it improves the flowability and workability of the mixture. Table 2 shows the specifications provided by the manufacturer (Master Builders Solutions Yapı Kimyasalları, İstanbul, Turkey) of all chemical additives used in the mixtures.

Proportions of Mortar Components, Mix Design, and Coding

The rates of air-entraining admixture were selected at 4 different levels: 0, 0.1, 0.15, and 0.2% of the binder amount. The water/binder ratio was kept at 0.35 for all the mixes. The cement dosage was also kept high (680 kg/m³) in order to improve the workability properties and to increase the fluidity from the nozzle of a 3D printer. The percentage of blast furnace slag was determined as 20% of the cement weight and was used by adding it to the cement dosage (i.e., the total binder amount was 828 kg/m³). Based on preliminary studies, the microfiber rate was decided as 0.2% of the whole mixture volume. In the mixtures, the aggregates with a maximum aggregate size (Dmax) of 1 mm were 2 times the aggregates with a Dmax of 0.5 mm by volume, and the total aggregate amount was 1.24 times the amount of binder. The mix design for the four groups produced within the scope of the study is given in Table 3. In the study, the groups containing AEA at the rates of 0, 0.1, 0.15, and 0.20% were coded as A0, A1, A1.5, and A2, respectively. See Table 2 for the types of chemical additives corresponding to the numbers.
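The proportioning rules above translate directly into per-cubic-metre quantities. The sketch below takes the reported binder total of 828 kg/m³ at face value (note that 680 kg/m³ of cement plus 20% slag would give 816 kg/m³; the reported figure is used as-is):

```python
# Per-cubic-metre quantities implied by the proportioning rules in the text.
# The reported total binder of 828 kg/m^3 is used as given; the slag share
# is back-calculated from it rather than from the stated 20 % figure.

cement = 680.0                        # kg/m^3, CEM I 52.5 R
binder = 828.0                        # kg/m^3, total binder as reported
ggbfs = binder - cement               # slag added on top of the cement dosage
water = 0.35 * binder                 # water/binder ratio of 0.35
aggregate_total = 1.24 * binder       # total aggregate = 1.24 x binder

# The 2:1 split between the Dmax = 1 mm and Dmax = 0.5 mm sands is stated
# by volume; applying it directly to mass is approximate, since the two
# sands have slightly different particle densities (2.49 vs 2.44).
agg_1mm = aggregate_total * 2.0 / 3.0
agg_05mm = aggregate_total / 3.0

print(f"water ~ {water:.0f} kg/m^3, total aggregate ~ {aggregate_total:.0f} kg/m^3")
```

Running the numbers gives roughly 290 kg/m³ of water and 1027 kg/m³ of total aggregate, which is a quick consistency check on the mix design of Table 3.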
Test Procedures

Since there are no standardized methods for determining the technical specifications of mortar or concrete mixes for 3D printing, many researchers have proposed their own methods for laboratory testing of printable cementitious materials [65]. The fresh state properties of all mixes were examined both by conventional fresh concrete tests (consistency, unit weight, and air content) and by the interrelated characteristics (extrudability, buildability, and open time) which are necessary for proper extrusion and forming of 3D printable mortar. Tests for determining the rheological behavior of all mixtures and the compressive strength after curing were also made during the experiments. As with all types of concrete, compressive strength tests of the 3D printed mortars were also carried out, as they provide useful information about both the durability and the mechanical properties of the mortars.

Experimental Procedure of Preparation of the Samples

The detailed procedure of preparation of the 3D printed mortars was as follows: firstly, all dry and powder materials (cement, GGBFS, aggregates, clay, and microfiber) were added to the mixture and mixed in a mortar mixer at low speed (62 rpm) for one minute; then half of the water was added to the mixture, which was again mixed at low speed for one minute. After these procedures, the superplasticizer, cement hydration control, viscosity modifier, setting accelerator, viscosity modifying and strength enhancing, and plasticizer/set-retarding admixtures were added separately and in that order to the mixture, together with the remaining water, and mixed at moderate speed (140 rpm) for one minute each. The mixture was stirred for one more minute at high speed (285 rpm) and rested for one minute. Finally, AEA was added to the mixture and mixed at high speed for only one minute, and the pouring process of the mortar was started. In total, the mixing time was approximately 11 min.
Consistency, Unit Weight, and Air Content of Fresh Mortar

To determine the fresh properties of the 3D printable mortar, the following tests were performed on the mixtures. Based on the research results of Lachemi et al. [71] and Ma et al. [72], the flow table test was used to evaluate the viscosity of fresh mortar and its deformation through restricted areas. Since it was determined in previous studies (e.g., [73]) that air-entraining additives reduce the unit weight of concrete, it was necessary to control the unit weights of the mixtures. On the other hand, since the main parameter of this study was the air-entraining admixture, an air content test was required to determine the amount of air obtained by entraining air into the fresh mortar produced in the experiments.

Extrudability

3D printable cement-based materials must be extrudable with structural integrity, without discontinuity and segregation, maintaining consistency throughout the entire casting process [65]. As stated in the studies by Zhang et al. [18], Rahul et al. [37], and Kazemian et al. [46], the fact that the overlapping mortar layers, which can be easily poured from the pump end, have the same thickness and height everywhere during pouring shows that the mortar has adequate extrudability. During the experiments carried out in this study, it was observed that the thicknesses of the layers were the same everywhere in measurements made every 10 cm along a 30 cm line (see Figure 1a).

Buildability

Buildability is an indicator of the feasibility of a fresh mix for additive printing and of its resistance to deformation under the pressure of subsequent layers. This characteristic can be determined by measuring the maximum height to which poured mortar can be built up without crushing and collapsing [74].
In this study, it was determined that crushing started in the lowest layer after reaching approximately 10 layers when pouring a circle of mortar (see Figure 1c), and the thickness of each layer was approx. 2.5 cm (see Figure 1b).

Open Time

Open time, also known as printability, is the period in which the mix maintains proper pumpability; in this period the mix must maintain the desired quality and adhesion in a layer-by-layer structural build-up [65]. The open time of 3D printable cementitious materials can be determined based on the shear stress test, jump table, Vicat apparatus, V-funnel method, penetration tests, and mini cone. In this study, the open times of the mixtures were determined with the Vicat apparatus, considering the suggestions made by the authors of [43,72,75]. In the experiments, the 3D printable mortars were filled into the Vicat mould specified in TS EN 196-3 [76] without rodding; the penetration depths of the needle were then measured at certain time intervals, and the open times of the mortars were interpreted by using these measurements.

Rheological Properties

Rheology is a discipline that studies the deformation and flow properties of a material under stress and provides a better understanding of the properties of fresh cement-based materials. The most suitable model representing the rheological behavior of fresh mortar and concrete is the Bingham model [77], represented by the following equation:

τ = τ0 + µγ

Here τ (Pa) defines the shear stress at shear rate γ (1/s), and τ0 (Pa) and µ (Pa.s) define, respectively, the shear threshold (yield stress) and the plastic viscosity. Based on the model, it can be said that each fresh mortar mix has a shear threshold value and a plastic viscosity. The shear threshold (τ0) is the shear stress required to initiate flow in a material. When the applied shear stress exceeds the shear threshold, the material starts to flow, and the resistance to flow depends on the plastic viscosity.
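Given a measured flow curve, the two Bingham parameters can be extracted by an ordinary least-squares line fit of τ against γ. The data points below are synthetic, made up for the illustration, not measurements from the rheometer used in this study:

```python
# Least-squares fit of the Bingham model tau = tau0 + mu * gamma to a flow
# curve.  The shear-rate/shear-stress pairs are synthetic illustration data.

def bingham_fit(shear_rates, shear_stresses):
    """Ordinary least squares for tau = tau0 + mu * gamma."""
    n = len(shear_rates)
    mean_g = sum(shear_rates) / n
    mean_t = sum(shear_stresses) / n
    sxx = sum((g - mean_g) ** 2 for g in shear_rates)
    sxy = sum((g - mean_g) * (t - mean_t)
              for g, t in zip(shear_rates, shear_stresses))
    mu = sxy / sxx                    # plastic viscosity, Pa.s
    tau0 = mean_t - mu * mean_g       # shear threshold (yield stress), Pa
    return tau0, mu

gamma = [5.0, 10.0, 20.0, 40.0, 80.0]               # shear rates, 1/s
tau = [120.0 + 2.5 * g for g in gamma]              # exact Bingham line
tau0, mu = bingham_fit(gamma, tau)
print(f"tau0 = {tau0:.1f} Pa, mu = {mu:.2f} Pa.s")  # tau0 = 120.0, mu = 2.50
```

Since the synthetic points lie exactly on a Bingham line, the fit recovers the parameters used to generate them; on real rheometer data the same fit yields the reported τ0 and µ together with their scatter.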
Plastic viscosity, on the other hand, refers to the resistance of the material against flowing once the shear threshold is exceeded. Rheological properties such as viscosity and shear threshold can be measured in cement paste, mortar, and concrete with a rheometer using the Bingham model [78][79][80]. In this study, a rotational rheometer (trade name: RheolabQC, a product of Anton Paar GmbH, Graz, Austria) was used to determine the viscosity properties of the 3D printable mortars.

Results and Discussion

The results of all the fresh and hardened 3D printable mortar tests are given in Table 4. Based on the data given in Table 4, the graphs showing the behavior of the fresh mortar are given in Figure 2, and the experimental findings given in the table are evaluated below.

Evaluation of Unit Weight Test Results

The results of the unit weight tests of the 3D printable mortar mixtures are plotted in Figure 2a. As can be seen from the figure, the unit weights of the 3D printable mortars decreased linearly (R² = 0.9606) with increasing dosage of the air-entraining admixture in the mixtures. These decreases brought the unit weight of the 3D printable mortars below that of normal-weight concrete, and even down to the upper limits specified in TS EN 206 [81] for lightweight concretes. While the unit weight of the A0 group without AEA was found to be 2130 kg/m³, the unit weight of A2, the group with the highest rate of AEA, was found to be 1670 kg/m³. Lu et al. [33] also reported that the unit weights of spray-based 3D printable cementitious materials decreased below 2000 kg/m³ with the addition of air-entraining additives. However, the layers in all mortars, including A2, which was the lightest and had the lowest compressive strength, easily carried both their own loads and the loads of the upper layers during production of the full-size samples. Therefore, it should be emphasized that all groups had the buildability properties of 3D printable mortar.
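The linear trend reported above can be reproduced with a simple least-squares fit. Only the end-point unit weights (2130 and 1670 kg/m³) are reported in the text; the two intermediate values below are hypothetical placeholders, so the printed R² will not match the paper's 0.9606:

```python
# Slope, intercept and coefficient of determination of a straight-line fit
# of unit weight against AEA dosage.  The middle two unit weights are
# HYPOTHETICAL stand-ins; only the 0 % and 0.2 % values come from the text.

def linear_r2(x, y):
    """Ordinary least-squares line and its R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

dosage = [0.0, 0.10, 0.15, 0.20]                    # % AEA by binder weight
unit_weight = [2130.0, 1900.0, 1800.0, 1670.0]      # kg/m^3; middle two assumed
slope, intercept, r2 = linear_r2(dosage, unit_weight)
print(f"slope = {slope:.0f} kg/m^3 per % AEA, R^2 = {r2:.4f}")
```

The negative slope quantifies the unit-weight loss per percent of AEA; substituting the paper's actual intermediate measurements would reproduce the reported R² of 0.9606.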
As can be seen from the mix design given in Table 3, the cement dosages of the mixes were very high (680 kg/m³) compared to conventional mortars/concretes. It was thought that the unit weight loss of the mortars produced in the study was caused by the high volume of air created by the air-entraining additive in these mixtures with very high cement dosages.

Evaluation of Air Content Test Results

The findings obtained from the air content tests of the 3D printable mortar are graphed in Figure 2b. In this study, it was determined that the mortars without air-entraining additive also had an air content of 2.5% (see Table 4), due to the combined effect of the fibers and the other chemical additives. As can be seen from Figure 2b, on the other hand, increasing the AEA dosage caused significant increases in the air contents of the mortars. In fact, although the air content of the group without AEA was 2.5%, this ratio reached 6.5% even with the addition of AEA at the minimum dosage (0.1%). However, the increase in the air content of the mixtures containing 0.15 and 0.2% AEA was smaller than that caused by the first 0.1% of AEA. As a matter of fact, the air content of the mortar group containing 0.1% AEA was 160% higher than that of the group without AEA, while the air contents of the groups containing 0.15 and 0.2% AEA were only 15 and 31% higher, respectively, than that of the group with 0.1% AEA. It has been determined in many previous studies that air-entraining admixture increases the air content of mortar or concrete. For example, Şahin et al. [82] used AEA at rates of 0, 0.05, and 0.1% and obtained air contents between 0 and 6% in fresh concrete. In a study conducted by Şahin [83], AEAs with different chemical compositions were added to concrete in different proportions (0, 0.00625, 0.0125, 0.025, 0.05, 0.1, and 0.125%) and air contents varying between 1 and 7.5% were obtained. Similar results were reported by Zhang and Ansari [84].
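The relative increases quoted above follow directly from the measured air contents. The values for the 0.15 and 0.2% groups below are reconstructed from the stated "15 and 31% higher" figures rather than reported directly, so treat them as inferred:

```python
# Check of the relative air-content increases quoted in the text.  The
# 2.5 % and 6.5 % values (0 % and 0.1 % AEA groups) are reported; the
# 7.5 % and 8.5 % entries are RECONSTRUCTED from the quoted percentage
# increases, not read off from the paper's Table 4.

def pct_increase(new, old):
    """Percentage increase of `new` relative to `old`."""
    return 100.0 * (new - old) / old

air = {0.0: 2.5, 0.10: 6.5, 0.15: 7.5, 0.20: 8.5}   # % air content per AEA dosage
print(round(pct_increase(air[0.10], air[0.0])))     # -> 160
print(round(pct_increase(air[0.15], air[0.10])))    # 15.4 -> 15
print(round(pct_increase(air[0.20], air[0.10])))    # 30.8 -> 31
```

The diminishing returns are visible at a glance: the first 0.1% of AEA adds 4 percentage points of air, while doubling the dosage adds only 2 more.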
According to TS EN 206 [81], on the other hand, the recommended air content for producing concrete resistant to freeze-thaw attack (exposure class XF) is at least 4%. From the values given in Figure 2b, it was concluded that the amount of air entrained within the scope of this study was sufficient to produce mortar resistant to freeze-thaw attack. Air-entraining agents, which generally consist of a hydrophilic head attached to a hydrophobic chain, create air voids by adsorbing at either the cement-water or air-water interface [85]. This adsorption greatly reduces the air-water surface tension, so that air-entraining admixtures achieve the formation and stabilization of small bubbles [86]. From a rheological perspective, entrained bubbles may lubricate the cement paste and increase its volume, depending on the specification of the AEA, as a result of which the workability of cement-based materials may increase [87]. In the literature, studies report differing results on the relationship between the air content of mortar/concrete and its rheological behavior. For example, Szwabowski et al. [88] determined that the yield stress and plastic viscosity of self-compacting concrete continued to decrease until its air content reached 5%, while the spread increased and then remained stable. Banfill [78], however, found that increasing the air content of concrete strongly reduced its plastic viscosity but had no significant effect on its yield stress. Barfield and Ghafoori [89] analyzed the performance of concretes made with different AEA types and indicated that fresh concretes with similar air contents may differ considerably in slump. Based on these findings, it can be said that there is not always a definite relationship between the air content and the rheological properties of fresh mortar, and fresh mortar may exhibit different rheological behaviors.
As a matter of fact, as detailed in the following section, although the air content of the mortars produced within the scope of this study increased significantly, the spread diameters of the mortars did not change much.

Evaluation of Flow Table Test Results

The flow table test results obtained from the experiments performed on the fresh mortars are given in Figure 2c. As can be seen from the figure, the groups with AEA had very similar spread diameters (approx. 16 cm), while the group without AEA flowed less (14 cm) than the others. These data indicate that adding AEA to a mortar without AEA increases its fluidity, but that increasing the AEA dosage in mortars already containing AEA has little further effect on their flowability. This result parallels the change in air content caused by the addition of AEA to the mortars. Rahul and Sanatham [90] found the spread diameters of 4 groups of 3D printed mortar to be 18.3-18.7 cm. Lu et al. [33] obtained spread diameters of 3D printable cementitious materials ranging from 15 to 25 cm, depending on the ratio of air-entraining admixture. Tay et al. [91] attempted to determine the printability zone for 3D printable concrete using the flow table test and found that the spread diameters of 16 groups of concretes ranged from 11 to 21 cm. Rubio et al. [92], on the other hand, produced 3D printed concretes with spread diameters ranging from 22 to 28 cm, depending on the ratios of silica fume, fly ash, and polypropylene fiber in the mixtures.

Evaluation of Initial Setting Time Results

As can be seen from Figure 2d, AEA was very effective in extending the initial setting time of the 3D printable mortars. The group without AEA started setting as early as 35 min. By adding AEA to the mixes, on the other hand, the initial setting time was extended linearly (R² = 0.9974) with the dosage of the AEA.
However, although the initial setting times of the Super White Cement (CEM I 52.5 R) and the GGBFS selected for the experiments were 110 and 170 min, respectively (see Table 1), even the initial setting time of the group with the highest AEA ratio (0.2%) did not exceed 90 min. The initial setting times of the mortars with and without AEA were shortened by the chemical admixtures, especially the set accelerator, added to the mixtures to impart 3D printing characteristics. Le et al. [43] measured the setting time of high-performance 3D printable concretes using a Vicat apparatus to determine the open time and stated that determining the initial and final setting times alone is not sufficient to find the printability window of 3D printable mortars. The observations made in this experimental study confirm the results stated in [43]: when they are immobile (i.e., if mixing is not continued), 3D printable mortars may begin to lose their workability and stiffen even though they have not yet started to set. In other words, if the mortars are not mixed, they may lose their extrudability through the nozzle even though they have not yet hardened. This calls into question the accuracy of measurements based only on the penetration depth of the Vicat needle.

Evaluation of Compressive Strength Test Results

The 28-day compressive strengths determined under uniaxial compression on the 5 × 5 × 5 cm³ samples of the 3D printable mortars produced according to the mix design are shown in Figure 3. In the group without AEA, a compressive strength (~55 MPa) corresponding to the lower limit of high-strength concrete defined by ACI PRC-363-10 [93] was obtained. In the study conducted by Özalp et al. [75], White Portland 52.5 R-type cement was used and the compressive strength of the 3D printable concrete produced was close to the strength found in this study (approx. 60 MPa).
However, the compressive strength of the 3D printable mortars decreased dramatically and linearly with the addition of AEA to the mixtures. The reductions compared to the A0 group were 47, 65, and 78% for the A1, A1.5, and A2 groups, respectively. A reduction in the compressive strength of mortar/concrete with the addition of AEA is an expected result in concrete technology because, as stated in Şahin et al. [82], the billions of small, closed, independent air voids entrained into the concrete by the AEA reduce its compressive strength. It was observed during the compressive strength tests that all 3D printable mortars showed plastic deformation indicative of ductile fracture, owing to the polypropylene fibers added to prevent shrinkage of the mortars. On the other hand, in order to evaluate the fresh and hardened mortar properties together, the interrelation of the unit weight, air content, and compressive strength of the 3D printed mortars is shown in Figure 4. From Figure 4a, it can be seen that the changes in the unit weights and the air contents of the mortars with compressive strength were roughly mirror images of each other, with only subtle differences. As can be seen from Figure 4b, there was a clear relationship between the fresh and hardened properties of the mortars: the compressive strength decreased with increasing air content and decreasing unit weight. In other words, in parallel with the literature, the compressive strength of the mixes increased as the unit weight of the mortar increased, whereas it decreased as the air content increased. As a matter of fact, in this study, the control group (A0), with the highest unit weight (2130 kg/m³) but the lowest air content (2.5%), had the highest compressive strength.
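The percentage reductions above imply approximate absolute strengths for the AEA groups. The sketch below back-calculates them from the ~55 MPa A0 baseline; since the baseline is itself approximate, these are rough estimates, not the Figure 3 data.

```python
# Approximate 28-day compressive strengths implied by the reported
# percentage reductions relative to A0 (~55 MPa, itself an approximate value).
f_a0 = 55.0                                    # MPa, group without AEA
reductions = {"A1": 0.47, "A1.5": 0.65, "A2": 0.78}

strengths = {g: f_a0 * (1.0 - r) for g, r in reductions.items()}
for group, f in strengths.items():
    print(f"{group}: ~{f:.1f} MPa")
```

That is, even the lightest mix (A2) retains on the order of 12 MPa, which is consistent with the observation that all groups still carried their own and the upper layers' loads during printing.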
Evaluation of Rheological Properties of 3D Printable Mortar Mixtures

During the experimental studies, the shear stress and viscosity of all mixtures were also measured for use in evaluating the rheological behavior of the 3D printable mortars. The graphs showing the change of shear stress and viscosity of all groups with shear rate are given in Figure 5. As seen in Figure 5, the shear stress and viscosity of the A0 group were higher than those of the other groups. Although the air contents of the other three groups (A1, A1.5, and A2) were different (Table 4), their shear stresses and viscosities were very close to each other. However, while the A2 group with the highest AEA dosage was expected to have the lowest viscosity and shear stress, the A1.5 group actually showed the lowest rheological values. Similar to the trend in Figure 2a, the viscosity of the 3D printable mortars decreased with the addition of AEA compared to the mixture without air-entraining admixture, and the decrease in viscosity continued in parallel with the increase in AEA dosage (see Figure 5b). Since both the open time and the extrudability of 3D printable mortars are directly related to their rheological behavior, the yield stress and viscosity of the mixtures were determined and are given in Table 5. In the table, the parameters determined during the transition of the device from low to high shear rates are labeled "up (or acceleration ramp)", and the parameters determined during the transition from high to low shear rates are labeled "down (or deceleration ramp)". The hysteresis loop method, on the other hand, consists of combining the up and down flow curves [94], and the area between the rising and falling curves is used as the thixotropy index. Repeating this test at various time intervals can serve as an indicator of structural kinetics [95].
The thixotropy values obtained from the areas between the hysteresis loops within the scope of this study are also given in Table 5. Based on these data, the variation of the viscosity, yield stress, and thixotropy of the mortars with AEA dosage is given in Figure 6, as these are considered critical fresh-state properties controlling the printability (a combination of pumpability, extrudability, and constructability) of 3D printable cementitious materials [95]. The yield stress and viscosity values measured within the scope of this study decreased with increasing AEA dosage, as can be seen in Table 5 and Figure 6a,b. In other words, as the amount of AEA increased, the mortar became more fluid. Thixotropic behavior means that the shear resistance of fresh concrete or mortar decreases over time at a constant deformation rate; that is, the concrete maintains its fluidity without stiffening as long as mixing continues. Thixotropy values can also be used to describe the shape stability of the fresh mixture. As can be seen from the results given in Table 5, the thixotropy values of the samples coded A0, A1, and A1.5 were very close to each other, but the mixture coded A2 had a higher thixotropy value than the other groups. On the other hand, it can be seen from the lines of best fit given in Figure 6 that the yield stress and viscosity of the mortars decreased while the thixotropy of the mixes increased with increasing AEA ratio. The viscosity, yield stress, and thixotropy values obtained from the rheological measurements in this study were compatible with the rheology values of highly thixotropic concretes in the literature [43,[95][96][97][98][99].
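The hysteresis-loop thixotropy index described above (the area enclosed between the up and down flow curves) can be sketched numerically with trapezoidal integration. The curves below are assumed example data, not the Table 5 measurements.

```python
import numpy as np

# Illustrative up (acceleration) and down (deceleration) flow curves
# sampled at the same shear rates; values are assumed, not from Table 5.
gamma_dot = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])       # shear rate, 1/s
tau_up = np.array([150.0, 260.0, 360.0, 450.0, 530.0, 600.0])    # Pa, up ramp
tau_down = np.array([100.0, 200.0, 290.0, 370.0, 440.0, 500.0])  # Pa, down ramp

# Thixotropy index: area enclosed between the up and down curves (Pa/s),
# computed with the trapezoid rule on the stress difference.
thixotropy = np.trapz(tau_up - tau_down, gamma_dot)
print(f"thixotropy index = {thixotropy:.0f} Pa/s")
```

A larger enclosed area means more structural breakdown under shear and slower rebuilding, i.e., a more thixotropic mix, which is why this single number is a convenient proxy for shape stability after extrusion.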
Conclusions

The main conclusions drawn from the evaluation of the findings obtained from this experimental study are summarized below:

• It is recommended to develop a dedicated chemical admixture for 3D printable mortars, considering the active ingredients of the chemical admixtures that affect fresh mortar behavior, such as superplasticizers, viscosity-modifying agents, and cement hydration control agents.

• Increasing the amount of entrained air in the fresh 3D printable mortar mixture by adding AEA increased the air content but decreased the unit weight of the mixtures, as in conventional mortar or concrete. However, although the air content of the mixes increased, the spread diameters of the mortars did not change significantly.

• Although the initial setting time of the group without AEA (A0) was very short (35 min), the initial setting times of the 3D printable mortars increased with the addition of AEA, and the group with the highest dosage of AEA (A2) started to set after 90 min. Even 90 min, however, is shorter than the initial setting times of the Super White Cement (CEM I 52.5 R) and the GGBFS selected for the mixtures. The initial setting times of the mixtures were reduced by the combined effect of the many chemical admixtures deliberately chosen to obtain the most appropriate mix design for the interrelated characteristics (extrudability, constructability, and open time) identified for 3D printable mortar in the literature.

• Increasing the dosage of AEA dramatically reduced the 28-day compressive strength of the 3D printable mortars. The reductions in compressive strength were 47.5, 65, and 78% for the A1, A1.5, and A2 groups, respectively, compared to the A0 group. Therefore, care is recommended in the use of air-entraining admixtures in 3D mortar or concrete applications where compressive strength is a priority.
• The addition of AEA to the 3D printable mortars reduced the viscosity and shear stress of the mixtures, with the A1.5 group showing the lowest values. Yield stresses between 50 and 262 Pa were obtained in the study, and these values were found to be sufficient for the printability of the 3D printable mortar mixes. The thixotropy values of the sample without AEA (A0) and those containing AEA at lower dosages (A1 and A1.5) were very close to each other, but the mixture containing the highest dosage of AEA (A2) had a higher thixotropy value than the other groups.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
Eastern Democratic Republic of the Congo on the way to peace? War remembrances, post-conflict discourse construction: a discursive analysis of Djungu-Simba's novellas

Throughout the history of the Democratic Republic of the Congo, various writings – literary, sociological, and political – retrace the challenges that have faced the nation: colonialism, access to independence, postcolonial failures, wars, dictatorships and female exploitation. Some historians, sociologists, literary critics and journalists touted the successes of colonial times and the First Republic, especially those who worked for these governments. Other essential voices remained silent, in order to avoid dictatorial repression and censure, especially under President Mobutu. With the arrival of Laurent Kabila, freedom of speech brought forth discourse that slowly deteriorated into insincere rhetoric and, finally, utter silence. In Les Terrassiers de Bukavu: Nouvelles, Charles Djungu-Simba, the editor of several Bukavu Congolese novella writers, predicts that the socio-political discourse put forth in such narratives will lead to chaos followed by further repression that will continue the cycle presented throughout these metanarratives unless oppression is radically addressed from the grassroots.

Introduction

Violence and armed conflict in the eastern Congo have, for the past seventeen years, endangered people's lives, scattered communities, pitted neighbors and neighboring countries against each other, and caused poverty and chaos. These ills have moved writer Charles Djungu-Simba, a native of the area, to weave together stories by a range of writers, constructing a semiotic discourse situated at the border of memory, creation and destruction. This discursive analysis examines the narratives presented in Les Terrassiers de Bukavu: Nouvelles for a post-conflict discourse construction.
These narratives utilize the same literary and mythological dimensions as oral narratives with regard to production setting, theme and sound repetition, parallelism, digression, imagery, symbolism and their amplification. However, they tend to depict a higher level of violence and negative impact in the Democratic Republic of the Congo, including malfunction, deliquescence and total chaos. Charles Djungu-Simba K. (2009) brings together novellas written by a number of generally unknown writers who provide first-hand testimonies of the wars and conflicts in the eastern Congo. He also includes his own contribution, a novella entitled "Postscriptum". A native of South Kivu in the Democratic Republic of the Congo, his long career includes work as a journalist, political advisor, critic and researcher. He has participated in many literary forums, and is known for his novels, novellas, short stories and cultural programs. He received his PhD from Antwerp University, where he works as a researcher in Francophone literature. This paper addresses the ways in which Djungu-Simba's book contributes to understanding the cycles of violence and the beginnings of new political eras. Specifically, it analyzes the book's ability to articulate a socio-political discourse addressed to the people of Bukavu, the Congolese, and the world, in terms of the current state of the country and the possible steps that could lead to healing the psychological wounds of war and restoring the country's wellbeing. Waugh (1992) states that fiction can serve as a medium to explore apocalyptic narratives and a sense of crisis in relation to postmodernism. As such, fiction stands as a cultural artifact that presents the concerns of the age, exemplifying Rancière's (2004) understanding of literature as reflecting real sociological issues.
Many literary critics contend that literary texts, using various techniques, may in fact be the most useful tool for depicting and studying social change (Foster, 1947; Booth, 1967, 1983; Yasui, 1982; Walsh, 2007; Cuddon, 1998; Abrams and Harpham, 1999; Rancière, 2004). Regarding Djungu-Simba's work, these critics concur on the writer's capacity to depict and manage current social phenomena.

Literary texts that reflect social phenomena

The texts contained in Les Terrassiers de Bukavu provide readers with several opportunities to move between fiction and reality within particular narratives, and thus apply Jacques Rancière's (2004) understanding of fiction to socio-political polemics on the eastern Democratic Republic of the Congo in general, and Bukavu in particular. By combining the communication strategies of oral narratives with the novella as a literary genre, Djungu-Simba and the other novella writers successfully provide readers with a strongly documented record of the last decades in a war-torn and decadent eastern Democratic Republic of the Congo, particularly in the city of Bukavu. Their narratives are based on memories of violence and conflict that the writers have witnessed first-hand or know of through accounts from close relatives. The eastern region of the Democratic Republic of the Congo has indeed been at war since 1997, and is currently known as the world capital of rape (Carly Brown, 2012: p. 2, C. 1). Many waves of people have left the region as IDPs (Internally Displaced People). Many others have found refuge in foreign countries. In Djungu-Simba's collection of novellas, many storylines clearly reflect current events in the Congo, and Djungu-Simba manages to involve the reader throughout. Through the succession of events in Les Terrassiers de Bukavu: Nouvelles, the reader may see a kind of traditional fable, a generalized tale that emerges from the presentation of a cycle of evil from beginning to end.
As such, each novella can be viewed as a section of one narrative. Together, these novellas reflect oral tradition performances that lasted several nights and required the public for various aesthetic contributions, as well as the mnemotechnical strategies that helped performers quickly remember the coherent succession of narrative sequences. The narratives in Les Terrassiers de Bukavu: Nouvelles present increasingly chaotic scenes, following the pattern of an apocalyptic myth in which chaos precedes novelty and recreation, and which leaves open the possibility of the re-construction of a better country. In this vein, Rahman (2009) goes a step further as he looks at the capacity of apocalyptic narratives to situate a period of recuperation following a period of immeasurable violence that is particularly directed at the "other". He contends that this kind of narrative is self-perpetuating, self-fulfilling, and itself remains one of the sources of endless internal violence. Leonard and McClure (2004) and Rahman (2009) contend that political issues often take on religious dimensions and the figure of the other, a situation in which the other, held responsible for all ills, is dehumanized. At the same time, there is in the background a space that offers a chance of redemption even to the worst criminals. Such apocalyptic cycles are reflected in the frame of Les Terrassiers de Bukavu: Nouvelles, in the stories and the sequence in which they are presented, leaving the hope that, after many years of death and desolation, recovery and reconstruction will unfold in the Democratic Republic of the Congo.

A long tradition of discourse on the Democratic Republic of the Congo

This chapter depicts the long-standing tradition of political discourse in the Democratic Republic of the Congo.
While fiction is the primary vehicle of the apocalyptic myth storyline linking the novellas in Djungu-Simba's selection, the narratives also reflect a long tradition of political discourse based on the demagogy of key political actors, conveyed in different writings, both fiction and non-fiction. Conrad and Rubango, documented later in this chapter, illustrate the ways in which these political discourses may be conveyed to the public. Djungu-Simba contributes to this long tradition and finds a specific way of depicting the political discourses that have led to chaos in Bukavu and, more generally, in the Democratic Republic of the Congo. The reader who is aware of the country's political history since independence will have no difficulty noticing the general insecurity that has prevailed in many places during this period and the key role that political leaders have played in that general failure. Increasingly, screened popular theater, music, popular comedies, and even funeral songs contribute to the exposure of the ineffectual discourse of politicians incapable of bringing about peace and welfare. The strategies used to expose these discourses range from theatrical reproduction (words and characters building up politicians' profiles and incompetence) to more elaborate creative writing, including the novellas that Djungu-Simba has selected. Writing strategies on the Congo have traditionally involved the works of individual writers subjectively interpreting different socio-political events. Djungu-Simba inaugurates a new trend that mixes local testimonies collected from oral sources or written (obviously translated) sources with written communication techniques, mostly through unknown writers' presentations of major social issues, in order to convey a socio-political discourse through a recurrent and incessant semiotic chaos.
It is worth mentioning that a particularity of the orality exploited here is its movement from old traditions, where the storyteller, a bard, griot, or reciter, was the entertainer, to the talk of home and neighbors about their fears, suspicions, and virtual hopes, all rendered in tense short narratives calling for a range of emotional reactions. In collecting the novellas of Les Terrassiers de Bukavu: Nouvelles, Djungu-Simba presents a compendium of current events. Like the long tradition of oral storytelling, or the presentation of life in novels such as Joseph Conrad's Heart of Darkness, Djungu-Simba's collection of brief narratives, which center on the recurring theme of war and the depiction of violence and human abuses in the Congo, may ultimately and hopefully contribute to positive social change. Unfortunately, based on the evidence of history, such a positive outcome may only be realized after decades of destruction and chaos under oppressive dictatorship have been inflicted on the country. Early writings on the Democratic Republic of the Congo, such as Heart of Darkness, developed many rhetorical discourses on the discovery of an uncivilized continent, using mythological, unrealistic, even racist terms to depict Africans. Very quickly, however, writers' paralipses took on a new direction due to, on the one hand, absolute censure and lack of freedom, and on the other, the weakness of national education. This new direction expounded on King Leopold II's exploitation of the Congo, first as his own personal property, and later as a colony of the Kingdom of Belgium. These first discourses tended to attack colonialism, and continued their denunciations until the Democratic Republic of the Congo became an independent country. Throughout this process, political critics received more and more support from Belgian polities clearly opposed to the colonization project.
Belgium, however, could not withstand the international pressure: it granted Congo its independence along relatively acceptable official lines, shaken by Prime Minister Lumumba's unscheduled speech, which was quickly followed by his still legally disputed, internationally motivated and planned death (Ludo de Witte, 2002). Independence was indeed granted under much international pressure, as the international community had already backed the independence of a number of African countries. At the same time, Lumumba's unpredictable changes and hatred of the West were a great challenge for the international community and for the building of a new nation. A close look at Lumumba's life reveals a history of personal and identity changes that inevitably led to his leadership. Two of Rubango's texts (1999, 2001) elaborate on Lumumba's political discourse and depict the evolution of his controversial personality, including his name change from "Isaïe" to Patrice. Lumumba quickly replaces his birth names Tasumbu Towasa with the rather more popular name Lumumba, which means "a moving crowd." To clearly demarcate himself from his village and people, he adopts the name of a leader often associated with whiteness or white leaders, without pointing out his intention to be the same kind of leader: he takes the name Osungu, or white man (Rubango, 2000, p. 174). His other name, Emery, is said to be an imitation of a Belgian name, Hemerijckx, that he knew in his childhood, or of one of the first évolués, Emery Pene Senga, from his village (Rubango, 2000, p. 175). These name changes metaphorically represent Lumumba's personal attitude towards politics and power, and the development of this leader can be understood through oral narrative techniques that depict his long transformational process from community member to embodiment of a cultural hero (Biebuyck and Mateene, 1989).
Undoubtedly, Lumumba's lifestyle, also a topic of literary production and of the formulation of political discourse, preludes and duplicates his dual political discourse, which often seeks to please the dominant power while hiding his personal convictions. During his political life, he gives speeches that laud the colonialists' work. Even after his incendiary speech at the independence ceremony, he congratulates the Belgians on their civilized bearing and humanitarian work. One can only wonder whether this contradictory attitude is a discursive strategy to mitigate the pressure and danger he quickly experiences after being labeled a communist. Lumumba's style is stressed here in order to underline how Congolese political leaders are part of a long tradition that does not necessarily adhere to the needs of the people, but rather reflects egotistical views made concrete at the expense of their countrymen's welfare. Whatever the reasons, during the post-independence period, Congolese politicians (and intellectuals) adopt very similar strategies in their socio-political analyses. Their discourses aim at openly pleasing political authority, whereas the truth is found in private talks held behind "curtains", on "radio-trottoir" (sidewalk radio), or very far from home in the Diaspora. In 1975, during President Mobutu's regime, a so-called institute emerged, the "Institut Makanda Kabobi", a national school of ideology that trained demagogues. The aim of the organization was to promulgate a national discourse on power and on the "everlasting, mythical, presidential hero and founder of the nation", in effect the official presentation of the country's leader. Under a total lack of freedom, the national political discourse follows Lumumba's self-flattering pattern in presenting Mobutu as an idol and an icon. At the same time, especially in countries far from home, every opportunity is taken to describe Mobutu's dictatorship in more objective terms.
Nguza Karl-i-Bond (1938-2003) is a very good example of that kind of Congolese politician. Educated in Belgium, he went back to work in the Congo and led a very tumultuous political life, spending much time in exile. Imprisoned, he was presented as the political opponent of President Mobutu. However, it was only when he went to Belgium on an official visit and chose exile that he was able to communicate to the world the details of President Mobutu's dictatorship. Consequently, a few years later, President Mobutu's fall coincided with a virtual return to freedom of speech, celebrated in many newspapers and magazines. Suddenly, the Congolese media found much pleasure in changing any acronym or official name in order to poke fun at the once-feared political authorities and institutions. At the same time, other more academic writings used major national events as a pretext to produce critically adapted texts (Rubango, 2009, pp. 676-678). Such writings cover all aspects of social life, from local student uprisings or massacres to the invasion wars from the East, and highlight the resulting poverty that belies the claimed revolutionary benefits.

Political discourse strategy changes and the current Rwanda crisis

In Les Terrassiers de Bukavu: Nouvelles, Charles Djungu-Simba offers a strategy that lies at the crossroads between reality and fiction. By using history, fantasy, and above all community memory in locally well-documented narratives, his work plays on readers' subjectivities. Without overstressing the details of oral literature, Finnegan (1970, pp. 4-12), Sekoni (1990, pp. 139-141), Okpewho (1992, pp. 30-41) and Ngandu (1984, pp. 16-17) suggest that Djungu-Simba and the other novella writers utilize visual and sound resources and develop narratives that captivate the audience, transfer experience, and together create a work of art.
As such, Djungu-Simba is comparable to a griot leader, the master and leader of an evening's entertainment, making possible the passage from factual events to the world of dreams, moving the audience to a realm where the cultural hero's a-temporal grandeur is in full control. He maneuvers the audience into a co-creation process of the "illo tempore" with the public, through a continuum that reveals and teaches (Biebuyck & Mateene, 1989: 1-48). Unlike the traditional bard, whose collections were gathered from hard-to-identify sources and transmitted from one generation to the next, Djungu-Simba's contemporary novellas, collected from local writers, directly address both the first and the second audiences. Following African oral narrative production strategies, the writers "write" their own stories and then listen to the stories of their peers, contributing to the general aesthetics and social entertainment in which words, music and body steps become one, to construct a common social frame that gives birth to a new meaning, otherwise known as a maieutic process. As pointed out earlier, through amplification and circumambulation, and with their capacity to summarize and repeat the main themes, the novella writers, standing for storytellers, facilitate the sharing of stories already known to their public, as the violence in the eastern Congo is largely documented. They still captivate and hold readers' attention (or that of the participating audience) through details that ensure the amplification is well conveyed. Bullying, criminal behavior, cheating and defrauding, prostitution and HIV-AIDS, the shooting of innocent people, betrayal, earthquake, confusion and desolation, abuses and dehumanization, as reported in the seven novellas, all contribute to the loss of human dignity and to the general suffering amplified through the violence of war.
Charles Djungu-Simba looks at these male and female writers as wise producers, the first consumers of the narratives, and also as first and second audiences of a product finally presented to a wider audience all around the world. As such they seem to follow the narrative production mode that O.R. Dathorne (1976) sees as including an audience that is already aware of social situations and contributes at different performance levels. Blair (1976) insists on the capacity of oral literature to involve the audience through choruses and repetitions, a co-creation process that, through the griot, contributes much to communal aesthetics and comprehension. In this book, the reader witnesses novelty as the concept of the "griot" changes into a literary device for both "mythbreaking and mythmaking". As Charles Djungu-Simba, along with Hale (1997), Murphy (2001) and Nzabatsinda (1997), confirms, this is a role change that signals both continuity with and a break from the communication styles of the (griot) past. In addition, Djungu-Simba puts the selected novellas into a coherent order that confirms a general apocalyptic myth and its subsequent chaos. He subtly encourages the reader to become involved in taking up new responsibilities for the rebirth and, by the same token, happily bridges oral traditions in their forms, content and social responsibilities. The written African griot's aspirations open onto semiosis, metanarratives and twofold stories. Thus, the griot-audience-centered technique supports Guattari's understanding of subjectivities in construction, exposed to shared experiences and analyses revealing different sides and details of the same message. In the same vein, Umberto Eco (1984) provides us with an excellent description for understanding communication processes, message production, code reading and readers' contribution through an open narrative left to their creativity for a natural ending that is different from the chaos the novellas suggest.
Communications, as noted in the above-mentioned examples, turn around daily questions and community preoccupations as violence becomes omnipresent. From Jonathan, who bullies a classmate in the first novella, a succession of violent activities stands for waves that rise up with different strength and height in all the narratives. Charles Djungu-Simba's collection of novellas Les Terrassiers de Bukavu: Nouvelles stands more as a collective work than as the result of a personal presentation. Moreover, the writers of the novellas are not only scholars, but also a sample of the local population that for many years lived through war and violence. As previously indicated, the novellas included in Les Terrassiers de Bukavu: Nouvelles represent different aspects of Bukavu today, a location that also serves as a microcosm of the Democratic Republic of the Congo. The country's multiple facets irremediably emerge and reflect a long chain of events moving back to the illo tempore, the continuum of creation in the long past often seen in mythic connotations. Mircea Eliade (1961, 1978, 1990, 2004, 2005), whose scholarship turned essentially around myths, ancient religions, shamanism, rituals and socio-cultural contexts, describes how social performers lead the way in socio-religious rituals in ancient and modern times in an attempt to recapture the original "Imago Mundi" at the outset of creation, where everything is perfect, without any kind of corruption. The author groups and links these novellas into segments that build up or create a new kind of Congolese literary and socio-political discourse, stimulating a sense of self-respect, progress, solidarity and responsible government. Importantly, this amplifies the commitment of early African writers to literature that serves the welfare of their population.
afrika focus, 2014-06: Eastern Democratic Republic of the Congo on the way to peace?

In this vein, Buckley-Zistel (2008) offers an excellent illustration of the indicators she uses to show how Rwandans have moved from the 1994 genocide environment towards a national identity. The author describes how national politicians lead their countrymen to look differently at each other, and depicts Rwandan unification around a citizenship principle in a post-conflict perspective that puts an end to cleavages that turn around ethnic interests. Charles Djungu-Simba's selection of novellas focuses on each novella's description of the long-lasting war that Prunier (2009) describes in terms of a continental war. The narratives, taken together through an apocalyptic myth, depict a chaos that coincides with a starting point for a better future in a new environment, a new country with people and neighbors who behave differently. However, the long narrative also resembles the phoenix myth, insisting on the complete chaos that buries everything and makes the unknown future seem more threatening. In fact, in the first novella, a long-delayed letter arriving from Rwanda brings bad news to the couple Bonané and Walu (p. 31), and the identification of Rwandans and Ugandans as attackers of Bukavu under the cover of Congolese rebels, the Banyamulenge (pp. 32-34), builds up the main Bukavu victimization pattern, repeated with a few variables. This pattern is also found in the second novella "Les Malheurs de Joséphine", ["Josephine's Misfortunes"], which shows the beautiful Josephine (p. 51) as a victim of the socio-political development that leads to her sad HIV-AIDS-related death. Unfortunately, she leaves a list of one hundred and fifty-six contaminated victims, which means she knew she was contaminating them.
The third novella stresses that the proximity of the Democratic Republic of the Congo and Rwanda is not necessarily a good omen of peaceful coexistence, nor can one assume that a small country like Rwanda is militarily weak and passive. Bombs from Rwanda send Doc Ka, his family and many other people wandering in the wild (pp. 71-72, 74-75). In the fourth novella, "De la Résistance à la Libération", ["From Resistance to Liberation"], Passy, Harry and Nando experience how betrayal, mistreatment and killing are not exclusive to Rwanda; they happen in the Congo as well. Their own government unjustly puts them in security cells and on several occasions attempts to kill them (p. 82). The fifth novella, "Les Coulisses d'une Ville Oubliée", ["The Corridors of a Forgotten City"], shows how nature seemingly reacts to human actions as not worthy of God's love. On a Sunday, during a church service, a strong earthquake shakes Bukavu three times (p. 105). It leaves houses in ruins, and the people of Bukavu homeless and wandering aimlessly. The sixth novella, "Phtiriase sur Bukavu", ["Phthiriasis on Bukavu"], delves into the interpersonal mistreatment and injustice found at the grassroots level (pp. 119, 121, 123). Many people are involved in immoral acts such as collecting money from their family members, neighbors and anonymous others. Increasingly, people are used as a means rather than an end: theft, forgery and unlawful constructions reflect a deep decadence. The seventh novella, added as a post-scriptum, "Rendez-moi ma dignité !", ["Give me back my dignity"], applies the initial pattern within a household context, turning a blind eye to bullying and to the sacredness of marriage. Rebel chiefs raped Cyprien Shamavu's wife, Eulalie Sengera Segheta, for about one month. Later, Cyprien Shamavu "lends" his wife to a UN liaison officer, Brian Fergusson, for money (pp. 146, 149). Sengera Segheta leaves for good with the UN officer and Cyprien goes mad (p. 151).
The original pattern of bullying and of turning away from suffering is repeated through a continuing, apocalyptic myth-like ending.

Charles Djungu-Simba's political discourse presentation and argument

Each novella in Les Terrassiers de Bukavu: Nouvelles presents characters who, in spite of their dreams, ambitions and the opportunities made possible by their educated status, all fall into a vicious cycle that perpetuates starvation, poverty, illiteracy, war, violence, women's victimization and death as daily realities. The reader could imagine that those who are not depicted, still present in the background of the narratives in their wealthy situation, lead a better life. However, chaos engulfs everything, because true prosperity is impossible to achieve when a significant section of the population experiences death on a daily basis and lives in an environment where a total absence of political organization or leadership has led to an abundance of bandits of many kinds, including corrupt political and religious leaders, who operate against the people. The first novella, "Et le ciel s'assombrit", ["And the Sky Became Dark"], constitutes the only contribution to the book by a female author. Clearly, this deliberate choice plays a very important role. As a metaphor, it fulfils a maternal function, giving life to different kinds of children, as illustrated in all the other novellas. The metaphor repeats an African proverb comparing a mother's womb to a field where different species of crops grow, whereas weeds fill most of the space, leaving no chance for good crops to grow to their full size. In fact, Astrid Mujinga, the writer of the first novella, depicts family life in a chaotic environment. Thus, from the beginning of the book, the female and maternal references describe difficult existential conditions and the possibility of salvation and welfare through new generations.
In fact, family situations are described in the first lines: "Waku relut pour la énième fois la lettre…" (Waku reread the letter for the umpteenth time) (p. 31). Reading the letter several times shows the difficulty she has communicating in real time. In addition, the letter comes from neighboring Rwanda and brings bad news about illnesses suffered by fragile members of the family. This news depicts a situation in which death seems inevitable, but at the same time life can be saved with the right decisions and adequate means. It is a binary system, a dual presence of life and death, success and failure within a continuum, a continuous duality that Djungu-Simba's collection segments in the selected novellas, depicting chaos and failure, on the one hand, and hopefulness on the other. Another illustration can be found in the novella "Les Apparences Trompeuses", ["Deceiving Appearances"]. Although Doctor Katunga and an unnamed nurse, under the authority of the occupying forces, are both obliged to work at the hospital, they have opposite attitudes. Doctor Katunga forgets about his own suffering; he wants to save lives by offering his services. The nurse refuses to give out hospital serums; he wants money (pp. 76-77). The maternal imagery or metaphor leaves space for the community's role in the raising of a new generation, or rather in the reconstruction of the social tissue. Much strength and power are needed in order to "terrasser" (p. 9), that is, to plow and dig deep channels, in other words to construct, build or rebuild Bukavu. For the reconstruction of the city, much work is needed in order to obtain effective results. The title of the first novella, "Et le ciel s'assombrit", ["And the Sky Became Dark"], predicts the need for hard work, genius and technology in order to achieve rebirth from the dreadful situation the narrator depicts: a
dirty, muddy ground awash with rainwater and an impossible circulation of goats and people searching for shelter. It is from that unfortunate environment that Bukavu must be born anew, through a "mythic process" that leads back to its original shape, its beauty and its closeness to nature, which made it a rich place that the gods loved. This journey back to the "illo tempore", the ["once upon a time"], traces back to the mountain from which God gave cows to the Bukavu people. In the post-scriptum (p. 153), Rubango explains the local oral narratives turning around cow myths that illustrate the original wealth and beauty of Bukavu and the surrounding area as a gift from the creators, still celebrated in local rituals and oral narratives. In popular myths, Bukavu is practically an altar where creation, reconciliation and meeting rituals celebrate deities and their powers. It is also portrayed as a safe space that reproduces life. In the mythic continuum, ritual celebrations, especially purification rituals, require harmony from the community as a sine qua non condition for progress. This step does not seem simple, given the general results of war, violence, misunderstanding, threats and ethnic divisions. Whereas neighbors and friends can be asked to do their best and pay back debts (Tchim's case), or to share food with visitors, as in the case of Bonané and his wife, it is still very difficult to find agreement with some inhabitants of the same area who are identified as, or suspected of, collaborating with the invaders and killers, and whose presence is seen in terms of dirt (p. 37). The first novella's title, "Et le ciel s'assombrit", ["And the Sky Became Dark"], stands as a forecast of terror and violence, as the sky predicts floods. On identities, whether related to language, to customs or especially to any support given to the invaders, the community is very much divided, as the enemy is perceived differently.
Even children have their own ideas and evaluation criteria, and do not miss an opportunity to side with their parents, or at least to give a contour to their own understanding of identity, adversity, enmity and vengeance, or vendetta, for blood's sake. Children fighting in school justify their identity in their own way and, by the same token, bully the students whose parents, rightly or wrongly, are thought to represent the common enemy that has caused their families' suffering. These children feel the need to take part in the conflict even though nobody has ever advised them to do so (pp. 38-39). This first novella generates the main repeated features that characterize Charles Djungu-Simba's discourse. It deviates from previous narratives related either to the independence question, or to Lumumba's double-edged political discourse, or to speech and freedom of expression as mentioned above. It recuperates historical and emotional testimonies that are organized in such a way as to cohere strongly and move smoothly from one segment to another, captivating the audience while leading it to its own complementary message production through a metanarrative of responsibility. In addition, when writers' minds produce narratives that mix reality and fiction, using their own social indicators, they state logical problems and suggest coherent answers, often freely expressing their emotions. Floods, understood literally and figuratively as presented in this first novella, shed light on human violence and build up the main chaotic imagery carried across all the pages and all the novellas, as the skies are dark, predicting more rain, and the ground is already slippery and full of mud. The second novella, by Sim Kilosho Kambale, "Les malheurs de Joséphine", ["Josephine's Misfortunes"], stresses how the population's misfortune is very much linked to the foreigners present in Bukavu and elsewhere in the country.
It once again pinpoints the poverty, previously depicted through Tchim's debt, and the general environment suffering from rainwater and the threat of flood, as the skies get darker and darker. However, Djungu-Simba's discourse, developed in the succession of selected novellas, offers evidence of how difficult it is to develop effective strategies and workable plans to solve social issues if important sacrifices are not agreed upon. Josephine's misfortune, caused by her frivolity and her use of morally mean strategies, supports the warning to be as careful as possible. Similarly, many Bukavu girls often try to find solutions by selling sex to different people, and especially to United Nations agents. The same girls are often taken hostage and used as sex slaves. Josephine, a pastor's daughter, demonstrates how prostitution has become a means of survival for many girls. However, when she attempts to capitalize on the illegal and immoral financial resources gained through prostitution, she moves into a legal business selling precious stones that, unknown to her, had been fraudulently bought from the Walikale village people. Thus, she falls victim to crooks (pp. 54-58), who lure her and run away with her money, leaving her with valueless pebbles. Nicknamed by her community "J'ose la Belle", or "Daring the Beautiful", her name change indicates her capacity to try any means possible, however unwise, to achieve her ends, including a second failed attempt that leads to HIV-AIDS resulting from unprotected sex. On her way back home, as she is crying and reflecting on the losses accumulated through her life of prostitution, very dark and heavy skies gather above Lake Kivu and the cities of Nyungwe and Cyagungu. Heavy rains start pouring, thus making a quick link to the general idea of flood, apocalypse and chaos as documented by Leonard and McClure (2004). Josephine's traumatic experiences prolong the imagery that characterizes the discourse elaborated in the first novella.
Dark skies, heavy rains and floods are a continuation of a muddy city, slippery spaces, disorder and chaos. Prostitution, whether female, male or political, reinforces personal suffering and leaves the community without hope, as it prevents personal, communal or administrative rebirth. This novella also points out that little can be expected from the foreign forces present in Bukavu and in the country. The United Nations forces are shown in their weaknesses, participating in prostitution and leading amoral lives by using their high salaries to objectify people, particularly women, whom they are supposed to protect against evil forces (p. 55). The presence of United Nations forces is unlikely to lead to a general renewal. The soldiers have forgotten their primary mission, to bring peace and harmony to the people and to promote community life in spite of the diversity of the population. On the contrary, they are shown as victims of the same chaotic influences. In addition, the novella clearly depicts the impotence of the Church through the character of Josephine's father, the pastor. His own daughter falls into prostitution and finally suffers from HIV-AIDS, a disease against which medical institutions in the Democratic Republic of the Congo are inadequately equipped. Political leaders, when presented at all in the novellas, are careless and useless (pp. 83-101). Ethnic influences and considerations have invaded the institutions once believed sacred, politics and religion, as both partake in the general apocalyptic environment, and eventually disintegrate and collapse as victims of the Armageddon (pp. 105-107). In fact, any reorganization or attempt at harmony that is not well monitored, or that does not meet high moral standards, is doomed to failure from the beginning.
The understanding of a general conversion includes the acknowledgement of personal weaknesses and individual responsibility for certain situations before pointing fingers at others, and the capacity to offer sacrifices that can satisfy the gods. Conversion also implies harmonization with everybody, including neighbors, friends, enemies and neighboring countries. The process remains inclusive rather than exclusive, so that "adaptation, alteration, modification, reconstruction, redesign, redevelopment, rehabilitation" lead to true change through rites of passage. Arnold van Gennep (1960) gives an illustration of how the individual participates in community life and growth by partaking in different rites and rituals. Rites fundamentally teach individuals to rid themselves of egotism and to consider themselves as part of the community's energy and life. They also teach resistance in all difficult conditions, whereas rituals stand for the moments when solemn promises are made, demonstrating a deep respect for community interests and values. The third novella starts from this point, stressing an archetypal transformational imagery that is continued in the title "Apparences Trompeuses", ["Deceiving Appearances"], and in the first sentence of the narrative, "Mais ces deux villes s'interpénètrent", or "these two cities penetrate each other" (p. 71). The explanation of the word "penetration" reduces to zero the distance separating the two countries, namely Rwanda and the Democratic Republic of the Congo. They are very close, and the same people may inhabit both sides of the border and may, in particular, share the same culture, or, let us say, in this case precisely have the same facies (see the above example of Jonathan bullying his classmate Anastase Sengiyunva simply because of his facies and his belonging to a given ethnic group). In addition, the reader easily comprehends that in this neighborhood there is a kind of brotherhood based on mutual assistance and security principles.
Moreover, the closeness of both lands penetrating each other refers to a biblical archetypal image of creation in the book of Genesis. This book gives two different accounts of woman's creation. The first account considers a spontaneous creation of both male and female, whereas the second gives a scene where the female is created from man's rib, into which God insufflates a living force. In both cases, the two creatures are very close. There is also the idea of creation ex nihilo that myths, especially in narratives similar to the biblical creation texts, use to show the spontaneity of creation. Whether the reader looks at the penetration image as the male and female coming from the same body, or through other mythological techniques leading to procreation capacities as described by Leonard and McClure (2004), both lands change in the imagery of creation or procreation that supposes closeness and love. All the same, in the scenario presented, there is no chance for love and understanding; war and violence have torn people apart, and rapes have accompanied violence, portraying hatred, disdain and the commoditization of human beings. The discourse, and its metanarratives, insists on the fact that, in the absence of human rights, universal societal conventions and, above all, love, closeness does not prevent people from using each other as means rather than ends. There is an absence, then, of that civic spirit which supposes detachment from personal egotistic benefits for the sake of the Res Publica, the country. To deepen the idea of closeness versus possible violence in the absence of any moral rule and community leadership, the writer of the novella "Les Apparences Trompeuses", ["Deceiving Appearances"], reveals strategies that revolve around how two very close countries attack each other, leading to a general lack of confidence (p. 73). In addition, the total absence of political authority testifies to how uninterested politicians are in their constituencies, building up their power for power's sake.
The politicians' unidentified and unclassified modus operandi sets ethnic groups and countries against each other, often over futile aims. In contrast to this, Doctor Ka, a character in the novella "Apparences Trompeuses", ["Deceiving Appearances"], decides to help people who find themselves in unacceptable situations, in spite of his own family issues and fears. Whereas the nurses used the war as an opportunity to force various fees on people coming to the hospital for health care (p. 77), Dr. Ka chose to look at the hospital as an opportunity to assist suffering people. He exemplifies a new profile of politician. Perceived within the chaos, or the total confusion, the behavior of these nurses is symptomatic of the general chaos, visibly manifest in the novella through the total absence of charity and understanding in the community. The discourse developed in the novella "Apparences Trompeuses", ["Deceiving Appearances"], once again points to the ubiquitous nature of evil. The scene depicting Noe, the nurse (p. 78), shouting at a patient's husband in order to get payment for the blood-testing drop she gave the doctor illustrates the general presence of corruption. Thus, whereas Doctor Ka stands for a local proposal that can launch local community initiatives and charity, the nurse stands for the human failure that perpetuates violence, hatred and dehumanization. Doctor Ka's presence in the hospital attracts attention and respect: "Docteur Anakuya. Here is the Doctor!" (p. 78). He can bring about changes in some behavior, even though he cannot be understood at once. Even the most feared mythic militias respect his work, because only human life, which Dr. Ka works for, makes a project of whatever kind possible. It is with the idea of accepted suffering, service and ownership, on the one hand, and inherent violence and abuses, on the other, that the discourse organizer, i.e., Djungu-Simba, bridges this step to the next one.
The fourth novella, "De la Résistance à la Libération", ["From Resistance to Liberation"], continues the theme of community service in a sector other than health. It highlights politics and public opinion, and shows that people do not blindly accept their suffering. Many expect salvation to come from above, i.e. from God, as their desperation cannot find any relief from human agency: political leaders, ethnic group partners and neighbors. However, it is out of this confusion that a group of young people develops a plan to distribute tracts with the intention of raising spirits, resisting a corrupt power, denouncing dictatorship and bringing about change from the deliquescent and dehumanizing power. The small group of young people, including Passy, Harry, Nando and Timon, has decided to do something in order to change their situation. The eastern Congo was for many years under the occupation of its neighbors, mainly Rwandans. Officially, a national government has taken control of the country, but the suffering is still present. The young people have decided to distribute tracts in order to sensitize as many people as possible, in a bid to attract their leaders' attention. Unfortunately, one of them, Timon, reveals the secret and betrays his friends, and they are caught. This is another example showing that the fruit is rotten on the inside, i.e. the people of Bukavu, or the Congolese generally, have to take care of their own many issues in addition to those their neighbors may be accused of. Even though it is a very meaningful signal that the youth, often referred to as the new generation, get involved in the destiny of their populations, the writer uses techniques that foreground the lack of experience and/or wisdom of youth.
The idea of distributing tracts calling for resistance in order to raise public awareness is wonderful, but somehow weakened when the young people are betrayed and finally imprisoned. Otherwise, the young people's failure can still be viewed as a call to communal work and collective responsibility, without any exclusion of the older generations. The above section of the discourse also repeats a Roman idiom, considered a universal truth today: "Homo homini lupus", [man is a wolf to man]. Politicians, for instance, in Bukavu and certainly elsewhere in the Congo, do not hesitate to get rid of "dangerous" opponents. However, as possible ways out, the text identifies networks that never abandon those exposed to dictatorship or the sufferings of prison. After a denunciation of the abusive use of poison for killing incarcerated people, Mado, the woman in charge of bringing food to the jail, brings solace to the young people and promises to spare their lives (pp. 89-90). They are able to inform their families and local and international human rights associations of their situation thanks to Passy, who is freed as he is not found to be involved in writing the tracts (p. 90). In fact, African governments, for various reasons but mainly because international economic assistance is subject to conditions, are very much concerned with their outer rather than their inner image, and avoid as far as possible their dictatorial practices being aired around the world. It is in fact thanks to a communication strategy that the imprisoned group finally finds freedom. Needless to say, the presence of Mado underlines that future perspectives are not possible without the contribution of women. "De la résistance à la libération", ["From Resistance to Liberation"], still duplicates, amplifies and continues the theme of general chaos and confusion, showing the capacity of evil to contaminate the social environment.
This is taken a step further with the addition of a new discourse segment: the contribution of the security and political institutions to the general anarchy and violence in society. Many citizens are incarcerated and kept away from any possible outside connection, or killed, whereas only a minority are saved, thanks to the communication strategies and networks described above. This part also focuses on public speeches and their impact on the population. The President, clearly identified as Mzee Kabila, i.e., Laurent Désiré Kabila, talks about the arrested people and compares them with the intellectuals who ruined the country under the first dictatorship, i.e., under the rule of President Mobutu. In such circumstances, the reference to the past would normally rally the entire population against the public enemy. However, this works only when the people identify the enemy as the reason for their suffering, which is not the case here. It is through that disorder, with people suffering from their leaders' incapacity and incompetence, that the "heroes" finally reach their community, where everybody is waiting for them (pp. 100-102). The young people stand for potential hope and the capacity to bring about the expected changes as the logical outcome of an apocalyptic myth. To continue with the idea of general degradation, the following novella, "Les Coulisses d'une ville oubliée", ["The Corridors of a Forgotten City"], centres on the architectural presentation of Bukavu. This depiction includes not only the general architecture of the area, but also the way in which the anarchic, mushroom-like constructions raised everywhere denote how superficial many people are, and how some, in spite of the general poverty, wish to create the impression of wealth. The illusion of easily attaining leadership positions, either as politicians or as social organizers, leads many to act in incoherent ways, without taking into account the environment or their responsibility to future generations.
Such attitudes only duplicate the general chaos and reinforce the general suffering. In an illustration of the writer's imagination for a region full of mountains close to and all around Lake Kivu, which offers beautiful scenery of water and panoramic views, the writer places all manner of constructions as a show of the power of new wealth. These rich people often take the properties of the poor by force, but can be punished by nature, which turns everything on its head. Chaos ensues when an earthquake turns everything upside down; unconventional and poorly constructed houses are reduced to piles of debris. However, the owners who ignored the laws are quick to obtain new authorization papers, through corrupt means, to rebuild in the same exiguous, dangerous and inadmissible locations. The earthquake caught people by surprise as they worshipped in Bukavu cathedral on a Sunday. Surprise, fear and bewilderment send people running in different directions, some forgetting their children and personal valuables. Church and religion cannot protect the population from this chaos when the entire community seemingly makes bad judgments and does not differentiate between right and wrong. The upside-down vision that this earthquake reveals (pp. 105-107) also extends to personal justice. Since the general administration does not offer a safe, protective environment against thieves and other bandits, people take matters into their own hands and arrest the thieves themselves. In the anarchy, the population punishes anyone caught stealing or breaking into houses, and selects punishments reserved for criminals that are often disproportionate to the crimes. Meanwhile, some international experts or United Nations agents openly lead amoral lives and are not punished at all for their immorality or for their failure to assist the most fragile of the population in need.
Darius Kitoka, the author of "Les Coulisses d'une Ville Oubliée" ["The Corridors of a Forgotten City"], takes a closer look at people's daily lives and depicts their new habits. Any social event provides an opportunity to strengthen friendships and bring the community together. However, events such as marriages, social invitations and government services are all opportunities for various kinds of corruption. Thus, invitations to feasts and celebrations are seen by "poor" people as a means to obtain money and satisfy their own needs, even though all are facing starvation and financial hardship. In addition, the police show little concern for protecting people, seeing them instead as a source of income through extortion. All of this confirms the general chaos. Djungu-Simba adds his own ideas in the section entitled "Post-Scriptum", which can be considered the conclusion of all the other narratives depicting the general chaos. This part, however, deals exclusively with the presence of United Nations Forces in the country and their participation in the general chaos and growing turmoil. An invisible vicious circle condemns all their actions to return to the same starting point of chaos; these international forces obviously cannot bring about rapid change or progress. The writer calls on the United Nations Forces to help restore dignity and pride since, unfortunately, many United Nations agents are instead seen pursuing their own interests, consorting with prostitutes and, through their general carelessness, aggravating the general climate of insecurity. The example of Cyprien and Sengera is very eloquent: the gentleman offers his wife to a United Nations agent so that she can bring him money to solve their family's problems.
Unfortunately, following the general pattern, this idea leads to the same chaos: his wife falls in love with the United Nations agent, forsakes her marriage and finally leaves for Monrovia with him. The reader rightly asks whether this is the end of everything. That is exactly how the apocalyptic mythic pattern works in order to lead to a rebirth. It is therefore time for another "revolution" or cycle to start and bring new life to Bukavu in particular, and the Congo in general, with the help of the reader.
Conclusion
The novellas grouped in this book certainly do not present the same argumentation of political discourse, or the same sequence of actions leading to the general disorder described. Nevertheless, they all follow the same pattern, depicting general turbulence, chaos and an apocalyptic ending. While those who brought war and destruction came from neighbouring Rwanda, it is clear that elements inside Bukavu collaborated in the general spread of the chaos. The deliquescence of authority touches all social sectors. At the same time, locals pretended to support the war to liberate their country, but quickly shifted to tolerating the perpetrators. In parallel, yet in the background, the book develops a family imagery revolving essentially around parents, children, close family members and friends. As the suffering and chaos impinging on Bukavu's social life go on, we see schoolchildren meting out their own kind of justice by beating a classmate, a boy from the ethnic group suspected of collaboration with the invaders, killers and rapists. These children's reaction may be judged as lacking discernment, doubtless because of their young age. In contrast, the "Baghdadi" boys, as they are called, decide to distribute tracts and denounce the dictatorship, in spite of what could be called a lack of the wisdom of age.
This is also dealt with as a literary device, inviting them to community action. The reader is slowly led to understand that young people are at the center of the future. Finally, the main theme revolves around general chaos on the one hand and hope for a better future on the other, but only if everybody takes the challenges seriously and accepts the difficult therapy: changing oneself, forgiveness, and above all a new kind of leadership. In combination, these novellas offer a comprehensive historical background on Bukavu and the country. All the authors use terms such as turbulence, apocalypse and myth, which provide a new language for considering the tribulations in the Congo, and employ literary devices that largely pave the way for a clear reading and analysis of the novellas. All the writers provide theoretical and scholarly material that permits the deconstruction of the narratives in order to grasp the breadth of the chaos emanating from flood, earthquake, volcanic flows, debris, and human violence, the final end of a cycle or system, as well as the rejuvenating energy that emanates from the young generations and from community conversion leading to renewal. Readers in general, and the Bukavu population in particular, are familiar with biblical texts and imagery. They can easily recall that the biblical books of Genesis and the Apocalypse are both constructed around chaos and logically complete each other. Whereas Genesis shows the chaos that preceded creation, the Apocalypse shows how a general chaos born of a lack of harmony leads to doomsday, a necessary step towards re-creation. Thus, the novellas' mentions of mud, thunder and flooding channels also offer imagery and metaphors of how everything contributes to utter chaos, a state of complete confusion, and a night-like apocalypse.
It is within that confusion that "les héros de la résistance congolaise incarnée dans le jeune trio de Bagdad: Passy, Harry et Nando" ["the heroes of the Congolese resistance, embodied in the young Bagdad trio: Passy, Harry and Nando"], as mentioned in "De la Résistance à la Libération" ["From Resistance to Liberation"], start their action. These young people stand like an artifact that the community artist has carved. They are Bukavu's positive potential. Whatever happens, the motherland will count on its youth to rebuild its most essential elements. This image repeats the pattern already seen in the first novella, where family life is presented as the most important human dynamic, the very fabric that can eventually bring about a new kind of people, strong enough to rebuild Bukavu from its ashes on the basis of strong ethical norms. The writer incidentally uses the Greek mythological image of the phoenix to suggest a rebirth, after the example of the mythical bird that rises from its ashes. This image has become a worldwide metaphor for people's capacity to rebuild life, resistance and humanity out of the most unthinkable situations or crises. The selection and ordering of the segments comprising this collection of novellas offer what should be perceived as Djungu-Simba's political discourse, based on literary communication strategies. As they come from stories shared with family or close friends, they may be seen as socio-political indicators shared with readers on several semiotic levels. First, the people of Bukavu are completely forsaken in their plight of facing the dangerous unknown. They have no leaders; on the contrary, any adventurer from inside or outside can impose on them any kind of suffering.
Second, the novella collection concurs that, seen in the light of the Roman idea of thinking of the people in terms of bread and games ["panem et circenses"], Bukavu is almost dead: it can neither feed its population properly nor allow them to enjoy life through various kinds of distraction and leisure. All the novellas vividly show the total suffering that people face from all sides, without any hope of rapid or timely change. Third, to communicate the omnipresence of violence, suffering and death, the novellas stand as segments of an apocalyptic myth that shows a vicious circle inexorably leading to death. Fourth, a new cycle is likely to arise from the general chaos if each person commits to a new way of life through sacrifice, harmony, and acceptance of a new kind of leadership in a new environment. Under these conditions, Bukavu can enjoy its original miracles of beauty and natural wealth while its citizens benefit from the commonwealth. Fifth, the semiotic dynamic still allows the development of Bukavu to be taken up freely in other metanarratives, achieving greater responsibility at all levels. This new type of discourse does not attack leaders and politicians directly, but presents them with images and concerns that necessarily demand a reaction. Sixth, through his socio-political narratives, collected, edited and sequentially organized, Djungu-Simba finally succeeds, though using mostly other people's narratives, in "myth-breaking" the general atmosphere revolving around chaos. He becomes more than a griot using material inherited from a long past; he becomes a "mythmaker", permitting audiences and readers to dream of a better Bukavu and a better world, through a new cycle that involves personal, community and global respect and responsibility.
In the Post-Scriptum, Djungu-Simba works through the character of Cyprien, who sacrifices his wife and family for money, to demonstrate how human dignity and justice can be corrupted, yet still recuperated through personal and community will, and hard work.