Superior esterolytic activity in environmental Lactococcus lactis strains is linked to the presence of the SGNH hydrolase family of esterases

Desirée Román Naranjo, Michael Callanan, Anne Thierry, and Olivia McAuliffe*
Graphical Abstract Summary
We investigated the esterolytic activity of dairy- and environment-derived Lactococcus lactis strains through a quantitative esterase assay based on hydrolysis of p-nitrophenyl dodecanoate (PNP). In general, environmental L. lactis strains had higher esterolytic activity than dairy strains. Comparative genome analysis revealed the presence of an open reading frame related to esterolytic activity in the environmental strain L. lactis DPC6855 (from corn), encoding the predicted product SGNH/GDSL hydrolase family protein. The 1,287-bp gene encodes a 428-amino acid SGNH/GDSL hydrolase. The presence of this gene in most of the environment-derived strains was established by PCR; the gene was not found in the genome of L. lactis DPC6853 or in genomes of L. lactis strains from dairy sources, suggesting a possible correlation between the SGNH hydrolase family and higher esterolytic activity. This work provides further evidence of more diverse genotypic and phenotypic traits in environmental compared with dairy L. lactis strains.

Abstract: Lactococcus lactis strains are widely used in the dairy industry in fermentation processes for production of cheese and fermented milks. However, the esterolytic activity of L. lactis is not generally considered high. For this reason, purified microbial lipases and esterases are often added in certain dairy processes to generate specific flavors in the final food product. This work demonstrates the superior esterolytic activity of a collection of L. lactis strains isolated from different environmental sources compared with that of dairy-derived strains. It provides further evidence of the more diverse metabolic capabilities displayed by L. lactis strains from environmental sources compared to their domesticated dairy counterparts. Furthermore, the presence of a 1,287-bp gene encoding a 428-amino acid SGNH hydrolase in the high-esterolytic environmental strains suggests a possible link between superior esterolytic activity and the presence of the esterase from the SGNH hydrolase family.
Lipolysis is an important biochemical event for flavor diversification in dairy manufacture. The pathway generates free fatty acids, di- and monoglycerides, and glycerol, which contribute to the flavor profile of fermented dairy products and act as substrates for other highly flavored components (Thierry et al., 2017). Many mold-ripened cheeses, such as blue cheese, undergo significant lipolytic activity through the actions of Penicillium roqueforti, whereby volatile and nonvolatile aroma compounds (mainly methyl ketones) are generated to provide unique flavors (Collins et al., 2003; Martín and Coton, 2017). The key enzymes involved in this lipolytic process are lipases and esterases, which catalyze hydrolysis and synthesis of esters and triglycerides, contributing to flavor development (Broadbent et al., 2005). The free fatty acids released by these enzymes act as precursors for flavor compounds such as esters, methyl ketones, lactones, and secondary alcohols (Thierry et al., 2017; McAuliffe et al., 2019).
Lactic acid bacteria, including Lactococcus lactis, are usually considered to have weak esterolytic activity compared with other bacterial species such as Flavobacterium, Acinetobacter, Propionibacterium, and Pseudomonas (Collins et al., 2003; Thierry et al., 2017). However, L. lactis strains isolated from environmental (or nondairy) niches exhibit much greater diversity in their metabolic capabilities than their dairy counterparts (Alemayehu et al., 2014; Cavanagh et al., 2014). Environmental lactococcal strains exhibit certain adaptation capabilities such as higher tolerance to salt and alkaline conditions, high glutamate dehydrogenase (GDH) activity, and diverse metabolization of carbohydrates, including sugars usually found in plant environments such as arabinose and xylose, which has been demonstrated to affect the production of flavor compounds in certain dairy processes (Alemayehu et al., 2014; Cavanagh et al., 2014, 2015). Although no significant difference was found in lipase production by dairy and nondairy L. lactis in a previous study (Nomura et al., 2006), Kalbaza et al. (2018) demonstrated higher lipolytic activity in nondairy L. lactis than in Lactobacillus strains.
In this study, we investigated the esterolytic activity of a group of 16 dairy and environmental L. lactis strains (Table 1). The dairy isolates were of the subspecies lactis, whereas all nondairy isolates, with the exception of DPC6853, were of the subspecies cremoris. However, we have shown in previous studies that environmental L. lactis strains that are genotypically subspecies cremoris behave phenotypically like dairy subspecies lactis (Cavanagh et al., 2015); therefore, we phenotypically compared the dairy subspecies lactis strains to the environmental subspecies cremoris strains. Cell extracts were prepared according to a method previously described (Stefanovic et al., 2017) from overnight cultures grown in M17 (Oxoid, Basingstoke, UK) supplemented with 5 g/L lactose monohydrate (L-M17; VWR, Leuven, Belgium) for dairy strains or M17 supplemented with 5 g/L d(+)-glucose monohydrate (G-M17; VWR) for environmental strains. A quantitative esterase assay, relying on the principle of hydrolysis of p-nitrophenyl dodecanoate (Sigma-Aldrich, Arklow, Ireland) to dodecanoic acid and p-nitrophenol (PNP), was used (Bertuzzi, 2017). Although both dairy and environmental strains showed esterase activity, there were clear differences between the 2 groups of strains in relation to the levels of esterase activity. The dairy strains showed activities in the range of 10 to 22 μmol of PNP/mg, with a mean of 18.5 μmol PNP/mg (Figure 1). However, environmental strains showed the greatest activity (range of 58-100 μmol of PNP/mg; mean of 78.5 μmol of PNP/mg), except for strain DPC6853, the activity of which was 17.5 μmol of PNP/mg. The environmental strain used as a reference in our study, KF147, shared high esterase activity with the environment-derived group of strains. The means of the 2 groups analyzed (dairy and environmental) differed significantly (P = 0.00003). Our findings further confirm the more variable metabolic activities of environmental L. lactis strains (Alemayehu et al., 2014; Cavanagh et al., 2014, 2015). Interestingly, one environmental strain, DPC6853, did not show this high esterolytic activity. This strain was the only one in our collection isolated from corn, and further investigation is required to determine whether there is a link between the observed activity and specific environmental conditions.
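The group comparison above (dairy mean 18.5 vs. environmental mean 78.5 μmol of PNP/mg, P = 0.00003) can be illustrated with a standard two-sample test. The paper does not state which test was used, so the sketch below applies Welch's t-test to hypothetical activity values consistent with the reported ranges; the numbers themselves are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical activity values (umol PNP/mg); the paper reports only the
# group ranges and means (dairy: 10-22, mean 18.5; environmental: 58-100,
# mean 78.5), so these arrays are illustrative stand-ins.
dairy = np.array([10.0, 15.2, 18.9, 20.1, 21.5, 22.0])
environmental = np.array([58.0, 66.3, 75.1, 82.4, 91.0, 100.0])

# Welch's t-test (unequal variances), one standard way to compare two
# group means; the original test used by the authors is not specified.
t, p = stats.ttest_ind(environmental, dairy, equal_var=False)
print(f"t = {t:.2f}, P = {p:.2g}")
```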
To determine a possible genetic link to the high esterolytic activity observed in the environmental strains, comparative genome analysis was performed on available genome sequences of the environmental strain set (Table 1). The genome data were analyzed using the Artemis 16.0.0 genome browser (Carver et al., 2005) and the BLASTP web server (Madden et al., 1996), using default parameters. Analysis of the draft genome of strain DPC6855, isolated from grass, revealed the presence of one open reading frame (FIB60_02895) related to esterolytic activity, which encodes the predicted product SGNH/GDSL hydrolase family protein (Figure 2A). Subsequent analysis of the genomes available for 3 other environmental strains in our collection also revealed the presence of this 1,287-bp gene encoding the 428-AA SGNH/GDSL hydrolase in DPC6856 and DPC6860. Examination of the literature revealed that the SGNH hydrolase family is a recently classified subgroup of the GDSL group of esterase and lipase enzymes that possess multifunctional properties such as regiospecificity and broad substrate specificity (Akoh et al., 2004). A conserved XynE-like domain is associated with the SGNH hydrolase subfamily and has the consensus AA sequence Ser-Gly-Asn-His (SGNH) found in the active site. This motif provides a catalytic mechanism different from that of the classical GxSxG motif-containing hydrolases, including the lack of a nucleophile elbow and the presence of a flexible active site (Akoh et al., 2004; Reina et al., 2007; Oh et al., 2019). The SGNH hydrolase encoded by FIB60_02895 is related to the putative arylesterase/acylhydrolase encoded by the xynE gene located in a xylanase gene cluster in the rumen microbe Prevotella bryantii (Miyazaki et al., 2003).
Interestingly, this gene was not found in any of the publicly available genomes of L. lactis strains from dairy sources, or indeed, the genome of strain DPC6853 from corn, which displayed lower levels of esterase activity than the other environmental strains. The presence of the gene was confirmed in other environmental strains for which genome sequence information is publicly available, such as L. lactis ssp. lactis NCDO 2118 (isolated from frozen peas; Oliveira et al., 2014) and L. lactis ssp. cremoris KW2 (isolated from fermented corn; Kelly et al., 2013). Indeed, 2 loci encoding SGNH/GDSL hydrolase proteins were identified in strain KF147. The first is LLKF_RS02565, which encodes a GDSL family lipase. This 858-bp gene encodes a 286-AA protein and contains an Ypmr_like conserved domain. The second is a 1,286-bp gene (LLKF_0950) that encodes a 428-AA predicted protein product with 98% similarity to that found in DPC6855, DPC6856, and DPC6860, described as a SGNH hydrolase superfamily protein, also related to esterolytic activity.

Figure 2. (A) The FIB60_02895 open reading frame and its flanking genes [FIB60_02890 (glycosyl hydrolase) and glf (UDP-galactopyranose mutase)] from the draft genome sequence of Lactococcus lactis DPC6855 isolated from grass. Also shown are the locations of the SGNH_Forward and SGNH_Reverse primers used to generate the 994-bp product in the PCR-based assay. The predicted SGNH hydrolase protein is shown, revealing the location of the conserved domain "Xyn_E-like" as well as the catalytic triad site and oxyanion hole found within the enzyme. Graphic was generated using SnapGene software (Insightful Science; snapgene.com). (B) PCR-based detection of the SGNH gene in the dairy and environmental strain sets. A 1% agarose gel was used, and HyperLadder 1 kb (Bioline, London, UK) was used as the molecular weight marker. The dairy strains are represented in lanes 1 to 6: DPC141 (1), DPC155 (2), DPC176 (3), DPC220 (4), DPC266 (5), DPC420 (6), and the environmental strains are represented in lanes 7 to 15: KF147 (7), DPC6853 (8), DPC6854 (9), DPC6855 (10), DPC6856 (11), DPC6857 (12), DPC6858 (13), DPC6859 (14), DPC6860 (15). See Table 1 for sources of strains.
To determine the presence of the gene encoding SGNH hydrolase in the strains used in this study where whole-genome sequences are not yet available, we designed a set of primers, SGNH-F (5′-TGAGTGGTACGGCCTTTCGC-3′) and SGNH-R (5′-GAAAATAATCAATCAAGCACATACAT-3′), to amplify the partial gene sequence. No amplification product was detected in any of the dairy strains tested, whereas the 994-bp product was detected in all environmental strains except for the corn-derived strain with low esterase activity (DPC6853; Figure 2B). Subsequent sequencing of the amplified products revealed 100% identity to the gene found in DPC6855. Thus, except for the corn-derived DPC6853, all 7 environmental isolates in this study possessed the SGNH hydrolase gene, whereas it was not detected in any of the tested dairy strains using these PCR conditions. This correlates with our phenotypic analysis because the dairy strains showed low esterase activity compared with the environmental strains (<22 vs. ~80 μmol of PNP/mg, respectively), confirming a link between the presence of the FIB60_02895 gene and higher esterase activity.
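As a companion to the PCR screen, an in-silico check of primer binding and expected product size can be sketched with Biopython. The FASTA path is hypothetical, and the sketch assumes a single-contig assembly with the gene on the plus strand; a full in-silico PCR would also check both strands and tolerate primer mismatches.

```python
from Bio import SeqIO
from Bio.Seq import Seq

SGNH_F = "TGAGTGGTACGGCCTTTCGC"
SGNH_R = "GAAAATAATCAATCAAGCACATACAT"

# Hypothetical file name; assumes a single-contig FASTA.
genome = SeqIO.read("DPC6855_draft.fasta", "fasta").seq

fwd = genome.find(SGNH_F)
# The reverse primer anneals to the plus strand as its reverse complement
# (assuming the target lies on the plus strand).
rev = genome.find(str(Seq(SGNH_R).reverse_complement()))

if fwd != -1 and rev != -1 and rev > fwd:
    amplicon = rev + len(SGNH_R) - fwd   # expected ~994 bp per the text
    print(f"predicted product: {amplicon} bp")
else:
    print("no product predicted (primer site absent)")
```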
In conclusion, this work provides further evidence of more diverse genotypic and phenotypic traits in L. lactis strains from environmental sources compared with their dairy counterparts. An SGNH hydrolase protein was identified that is potentially related to the higher esterase activity observed in these strains, and work is currently ongoing to associate the role of this gene with the functionality observed. Alternative knockout methods are being tested because the traditional knockout by double recombination has proven ineffective in this case. In addition, the metabolite profiles and the ability of these environmental strains to hydrolyze milk glycerides compared with the less-active dairy strains are being examined. These strains represent potential options for in situ production of lipolytic enzymes in dairy processing.
"year": 2020,
"sha1": "58a7102fde9e8ec1f7b03e52ef8f09f49faa12b5",
"oa_license": "CCBY",
"oa_url": "http://www.jdscommun.org/article/S266691022030034X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d72833eb33dcc1be821c5384b0767fef811cc706",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Contribution of LFP dynamics to single-neuron spiking variability in motor cortex during movement execution
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ, θ, α, β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1 ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters.
Introduction
The variability of neuronal responses at the level of single-neuron spiking is a fundamental problem in neuroscience (Shadlen and Newsome, 1998; Churchland, 2010; Churchland and Abbott, 2012). Neuronal responses in neocortex to repeated stimulus presentations or behavioral tasks show substantial variability. Determining the sources of this variability is particularly important for understanding encoding of stimuli and behavioral parameters in neuronal ensembles. The issue is also critical for the development of reliable brain machine interfaces for the restoration of movement, communication, and sensory function in people with sensorimotor impairments (e.g., Hochberg et al., 2012). Beyond intrinsic stochasticity due to, for example, thermal noise and synaptic release failure (Faisal et al., 2008), variability in cortical neural responses has been proposed to arise from fluctuations in spontaneous, ongoing neural dynamics (Arieli et al., 1996; Wörgötter et al., 1998; Truccolo et al., 2002; Carandini, 2004). Although often neglected, spontaneous and ongoing neural dynamics are likely to affect how neurons respond to sensory inputs or even how they modulate their activity during behavior. In this way, spontaneous neural dynamics can provide a background of contextual effects that may otherwise appear as spiking variability due to noise (Fiser et al., 2004; Hermes et al., 2012; Goris et al., 2014).
Local field potential (LFP) oscillations in different frequency bands result, to a large extent, from ongoing collective dynamics, i.e., modes of coordinated or coherent activity in neuronal populations (Nunez and Srinivasan, 2006; Buzsáki et al., 2012). Previous studies have investigated how features in multiband LFP oscillations relate to sensory stimuli and behavior and how decoding based on LFP features compares to decoding based on spiking activity (Belitski et al., 2008; Bansal et al., 2012). Additionally, some studies have examined how well LFP features predict spiking activity (e.g., Haslinger et al., 2006; Montemurro et al., 2008; Rasch et al., 2008; Kayser et al., 2009; Kelly et al., 2010). However, most of these studies have focused on early sensory cortices, and most have been conducted during anesthesia, a neural state distinct from alert and active behavior. More importantly, none of the above studies have addressed how well ongoing collective dynamics reflected in LFPs account for single-neuron spiking variability that is not explained by behavioral parameters (e.g., movement kinematics). For example, the result in Bansal et al. (2012) showing LFP and spiking activity are redundant with respect to decoding kinematics does not address the issue of excess variability in single-neuron spiking.
Here, we examined and quantified how well features in multiband LFP oscillations account for single-neuron variability not explained by behavioral parameters in a naturalistic 3-D reach and grasp task performed by rhesus macaques (Macaca mulatta). The behavioral task elicited diverse reaching and grasping kinematics, and included reaching to grasp different objects with different styles of grip.
We quantified the amount of variability accounted for by LFP features by fitting point process models (Truccolo et al., 2005) in which the conditional intensity function ("instantaneous" conditional spiking rate) was modeled as a function of covariates, including the ongoing LFP features mentioned above. To assess the amount of spiking variability explained by various models, we compared the relative predictive power of LFP features, reach and grasp kinematics, intrinsic spiking history, and combinations of these covariates, using receiver operating characteristic (ROC) curve analysis.
Methods

Behavioral Task
Data from three male rhesus macaques, C, R, and S, were examined in this study. Data from C have been used previously in Vargas-Irwin et al. (2010). All experimental procedures were conducted as approved by the local Institutional Animal Care and Use Committee (IACUC). This study employed a task termed Free-Reach and Grasp (FRG). In FRG, an experimenter pseudorandomly swings one of various small (3-5 cm) objects of differing shapes, attached to a string, within the monkey's reach, and the monkey is rewarded for grasping the object. The FRG task was designed to elicit naturalistic and continuous three-dimensional reach and grasp behaviors that require online motor control. Reach and grasp actions were organized in blocks within each experimental session. We analyze data from the entire FRG block, and so our data contain a diversity of behavioral conditions, including visually guided reaching (including online corrections, as the objects' movement is unpredictable), the grasping and holding of the object, and the period of juice reward (see Vargas-Irwin et al., 2010).
Kinematic Feature Extraction
Arm and hand kinematics were recorded at 240 frames per second using a Vicon motion capture system (Vicon Motion Systems; Oxford Metrics Group) as detailed in Vargas-Irwin et al. (2010). This system employs infrared-reflective markers to track arm and hand positions in real time, and is capable of inferring missing data from briefly occluded markers. For our analysis, we focus on the 3-D kinematics measured at the wrist, as well as the distance between the thumb and forefinger (grasp aperture), as indicators of the kinematics related to reaching (i.e., hand position in space) and grasping. We analyzed kinematics features similar to those used in Hatsopoulos et al. (2007). These are normalized velocity trajectories of both the wrist endpoint and grip aperture in time, combined with zero-lag position and speed. For comparison, we also analyzed position trajectories of these markers over time. The velocities of motion capture markers were estimated using a Savitzky-Golay filter generated by fitting a 5th-order polynomial to a discrete differentiation operator at the sampling rate of the kinematics. The polynomial extended 25 samples (10 ms) to either side of the current time point. The polynomial order was selected such that frequencies higher than 20 Hz were attenuated, so that the resultant velocity trajectories were sufficiently smooth to down-sample at 40 samples per second. Velocity trajectories were sampled from the smoothed velocity every 25 ms, starting 100 ms before the current time point and extending 300 ms into the future. Similarly, a smoothed position estimate was extracted using Savitzky-Golay filters based on a 5th-order polynomial fit to a discrete impulse. For the normalized velocity feature set, we followed the steps in Hatsopoulos et al. (2007), i.e., velocities were normalized by the L2 norm of the velocity trajectory. 3-D wrist position and grip aperture were normalized separately. The average speed and position over the trajectories were added as additional features. A separate feature set based on position trajectories was used for comparison. Position trajectories were taken from the position estimates, sampled at 40 samples per second, covering the same period (100 ms before to 300 ms after) as the normalized velocity trajectories.

FIGURE 1 | Behavioral and neural signals during the free-reach and grasp task. Kinematic, spiking, and LFP data from a single trial of the free reach to grasp task (monkey S, area M1). (A) Velocity of 3-D wrist endpoint ("X," "Y," "Z") and distance between the thumb and forefinger ("Aperture"). (B) Broadband LFP, low-pass filtered at 500 Hz during two reach and grasp movements. (C) Instantaneous amplitude envelope (bold trace), via Hilbert transform, from the band-pass filtered LFP (gray trace). This particular example is filtered in the 7-15 Hz range. (D) Corresponding instantaneous phase extracted from the same band-pass filtered signal as in (C). (E) Spiking population raster during this trial: spikes from 47 units are plotted along the y axis. Neurons show different task-related modulations.
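The velocity estimation above maps directly onto SciPy's Savitzky-Golay filter. Below is a minimal sketch for the wrist marker only (the text normalizes wrist and aperture separately); the synthetic array and trajectory-assembly details are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 240.0                                            # motion-capture frame rate
wrist = np.cumsum(np.random.randn(10000, 3), axis=0)  # synthetic stand-in for marker data

# Smoothed position: 5th-order polynomial fit, 25 samples on each side.
pos = savgol_filter(wrist, window_length=51, polyorder=5, axis=0)

# Velocity: the same filter applied as a differentiator (deriv=1).
vel = savgol_filter(wrist, window_length=51, polyorder=5, deriv=1,
                    delta=1.0 / fs, axis=0)

# Lagged trajectory features: -100 ms to +300 ms in 25 ms steps (17 lags).
step = int(round(fs * 0.025))      # 6 frames = 25 ms
lags = np.arange(-4, 13) * step    # -24 ... +72 frames

def velocity_trajectory(t):
    """Feature vector at frame t (valid for 24 <= t < len(vel) - 72)."""
    traj = vel[t + lags].ravel()
    feats = traj / np.linalg.norm(traj)               # L2-normalized, as in the text
    avg_speed = np.linalg.norm(vel[t + lags], axis=1).mean()
    return np.concatenate([feats, [avg_speed], pos[t]])  # plus speed and zero-lag position
```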
Neural Recordings
Recordings were made using microelectrode arrays (Blackrock Microsystems), as previously described in Vargas-Irwin et al. (2010). Electrodes were 1.5 mm long, likely targeting layer 5 of motor cortex. Monkey C was implanted with 4 × 4 mm 96-microelectrode arrays in both M1 and PMv. Two monkeys (R, S) were implanted with one 96-microelectrode array in PMv, and a 3.2 × 2.4 mm 48-microelectrode array in each of M1 and PMd. Electric potential signals were recorded broadband (analog band-pass filtered to 0.3 Hz-7.5 kHz; digitized at 16 bits and sampled at 30 kilosamples per second) using a Cerebus Data Acquisition System (Blackrock Microsystems). For spike detection, recorded signals were digitally filtered with a 250 Hz fourth-order high-pass Butterworth filter. For each electrode, candidate spikes (action potentials) were identified online via threshold-crossing detection in the amplitude of the high-pass filtered signal (Cerebus Data Acquisition System). Preliminary spike sorting was performed by a custom automated spike sorter (Vargas-Irwin and Donoghue, 2007), and verified using the commercial Plexon Offline Sorter (Plexon Inc.). Candidate units to be included in the analysis had a minimum signal-to-noise ratio (SNR) of 3.0 (defined as in Vargas-Irwin and Donoghue, 2007). In addition, we required that (a) the inter-spike-interval (ISI) histogram display a clear refractory period, to exclude potential multi-unit clusters; (b) there be at least 500 inter-spike-interval events smaller than 100 ms within the task data, to provide accurate estimates of spike-history filters; and (c) units be clearly separated into different clusters in the PCA feature space. Electrodes exhibiting cross-talk or excessive noise were excluded from analysis. For monkey C, LFPs were extracted during recording sessions from the broadband signal using a causal 500 Hz fourth-order low-pass Butterworth filter, and stored at two kilosamples per second. LFP data from monkeys R and S were filtered offline to match this processing. Channels displaying cross-talk or excessive noise were excluded from the analysis.
LFP Feature Extraction
For analysis of band-limited LFP oscillations, LFP signals were filtered using causal (forward-only) fourth-order Butterworth low-pass and band-pass filters, with cutoff frequencies 0.3-2 Hz (δ), 2-7 Hz (θ), 7-15 Hz (α), 15-30 Hz (β), 30-60 Hz (γ), 60-100 Hz (high γ), 100-200 Hz (MUA1), and 200-400 Hz (MUA2). The 0.3-2 Hz low-pass signal captures slow motor-evoked potentials (MEPs). The two highest-frequency bands are likely to reflect a substantial contribution from multi-unit activity (MUA) to neocortical LFPs, as well as other possible high-frequency source signals. For the narrow delta, theta, alpha, and beta frequency bands, we considered four features: the instantaneous phase and amplitude of the analytic signal, and the real and imaginary components of the analytic signal. The LFP analytic signal was computed from the band-pass filtered LFP using the Hilbert transform. LFP instantaneous phase and amplitude were computed as the complex argument and modulus of the analytic signal, respectively. (The real component of the analytic signal corresponds to the band-pass LFP itself.) For the broader, higher-frequency gamma and multi-unit bands, we used only the analytic signal and the amplitude envelope. Feature extraction was performed on the LFP sampled at two kilosamples per second, and decimated to one kilosample per second for neural point process modeling.
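The band-limited feature extraction can be sketched with SciPy as follows. The LFP trace here is synthetic, the δ band is approximated with a band-pass rather than the low-pass filter described above, and the one-sample shift implements the 1 ms spike-contamination guard discussed in the next subsection.

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert

fs = 1000.0                                  # samples/s after decimation
lfp = np.random.randn(int(60 * fs))          # synthetic stand-in for one channel
bands = {"delta": (0.3, 2), "theta": (2, 7), "alpha": (7, 15), "beta": (15, 30),
         "gamma1": (30, 60), "gamma2": (60, 100),
         "MUA1": (100, 200), "MUA2": (200, 400)}

features = {}
for name, (lo, hi) in bands.items():
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    x = lfilter(b, a, lfp)                   # causal (forward-only) filtering
    z = hilbert(x)                           # analytic signal via Hilbert transform
    z = np.concatenate([z[:1], z[:-1]])      # 1 ms delay as a spike-bleed guard
    features[name] = {"amp": np.abs(z), "phase": np.angle(z),
                      "re": z.real, "im": z.imag}
```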
Spike Contamination
In this analysis, we predict single-unit spiking from features of the LFP recorded on the same electrode as the isolated unit. Because of this, when predicting neuronal spiking from LFP features, it is important to prevent action potentials from contaminating the filtered LFP. We elected not to use existing spike removal procedures (e.g., Zanos et al., 2012) because the broadband LFP data were unavailable for monkey C, and because spike-removal methods make implicit assumptions about which features of the LFP relate to the spike waveform as opposed to collective dynamics locked to spiking. Instead, we employed causal filtering to extract LFP features. Although spike contamination can occur as low as 10 Hz (Waldert et al., 2013), causal filtering restricts this contamination to times following a spike, avoiding the confound of predicting spikes from themselves (i.e., via their contamination of the LFP features). Although the discrete Hilbert transform used to compute phase and amplitude features is non-causal, the effective filters created by the composition of the Hilbert transform with the causal band-pass filters remain predominantly causal. As a further precaution, and to guard against imprecision in localizing spike times, we added a 1 ms delay to LFP features. Under this approach, the non-causal contribution to the imaginary component of the analytic signal was negligible: less than 0.14% of the impulse response (measured as the percentage of the area under the absolute impulse response) was non-causal. Since causal filters can add amplitude and phase distortions, we addressed this concern by comparing the predictive power of causally filtered LFP and that of zero-phase (non-causal) filtered LFP, which contains no delay. We determined that the choice of causal versus zero-phase filtering did not alter the conclusions of this paper for frequencies below 30 Hz. Zero-phase filtering for higher frequencies resulted in higher predictive power in some cases, which was likely the result of action potential contamination, as supported by the recent studies mentioned above.
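The causal-versus-zero-phase comparison mentioned above amounts to swapping a forward-only filter for a forward-backward one; a minimal sketch on a synthetic trace:

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

fs = 1000.0
lfp = np.random.randn(int(60 * fs))                            # synthetic stand-in
b, a = butter(4, [7 / (fs / 2), 15 / (fs / 2)], btype="band")  # alpha band

x_causal = lfilter(b, a, lfp)   # forward-only: contamination confined to t > spike
x_zero = filtfilt(b, a, lfp)    # zero-phase: no group delay, but non-causal
```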
Intrinsic Spiking History
To assess the extent to which a neuron's own spiking history explains spiking variability, and to compare its predictive power to that of kinematic and LFP features, we included features of spiking history in our modeling (Truccolo et al., 2005). In addition to intrinsic biophysical processes (refractory/recovery period, bursting, etc.), spiking history can potentially also reflect indirect neuronal network dynamics effects. For example, spiking history models are capable of capturing spiking rhythmicity that may arise as a result of oscillatory input. We used raised cosine bases in logarithmically scaled time, covering the past 100 ms of spiking activity, to estimate temporal filters (see Pillow et al., 2008 and Truccolo et al., 2010 for more details). The resulting temporal filters were convolved with the past spiking activity to capture history effects on the spiking probability at a given time. Ten basis functions were used. More recent spiking history, typically related to after-spike refractory and recovery periods, and bursting, was modeled with more localized (finer temporal resolution) basis functions. Longer-term history effects can capture intrinsic rhythmicity and also, implicitly, network dynamics.
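A sketch of a log-time raised-cosine history basis in the spirit of Pillow et al. (2008) is given below. The exact warping, offset, and normalization used in the paper are not specified, so the parameter choices here are illustrative.

```python
import numpy as np

def raised_cosine_basis(n_bases=10, t_max=0.1, dt=0.001, offset=0.01):
    # Centers are evenly spaced in warped (log) time, so early bases are
    # narrow (fine resolution near the spike) and late bases are broad.
    t = np.arange(0, t_max, dt)
    warp = lambda x: np.log(x + offset)
    centers = np.linspace(warp(0.0), warp(t_max), n_bases)
    width = centers[1] - centers[0]
    B = np.zeros((t.size, n_bases))
    for j, c in enumerate(centers):
        arg = np.clip((warp(t) - c) * np.pi / (2 * width), -np.pi, np.pi)
        B[:, j] = 0.5 * (1.0 + np.cos(arg))
    return B                                 # (lags, bases)

# History covariates: convolve the spike train with each basis, then shift
# by one bin so only strictly past spikes (up to but excluding time t) enter.
spikes = (np.random.rand(10000) < 0.02).astype(float)   # synthetic 1 ms bins
B = raised_cosine_basis()
H = np.stack([np.convolve(spikes, B[:, j])[: spikes.size]
              for j in range(B.shape[1])], axis=1)
H = np.vstack([np.zeros((1, B.shape[1])), H[:-1]])
```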
Stochastic Neural Point Process Models
We used a generalized linear point process model (Truccolo et al., 2005) to explore the extent to which different covariates explain spiking variability. The probability of a neuron spiking in a sufficiently small time interval, indexed by t, of duration Δ, can be written as

Pr(Y_t = 1 | X_t) ≈ λ_t Δ,  (1)

where Y_t corresponds to the spiking activity at time t (Y_t = 1 for a spike, 0 otherwise), and λ_t is the conditional intensity function ("instantaneous" conditional spiking rate, in spikes per second) of the modeled neuron. The bin size Δ must be chosen small enough that the probability of two spikes occurring within the same time bin is negligible. Here Δ = 1 ms. We used a regularized maximum likelihood approach to model the logarithm of the conditional intensity function as a linear combination of model features:

ln(λ_t) = μ + A·X_t,  (2)

where X_t is the covariate vector at time t, A is a vector of model parameters, and μ is a parameter related to background activity level. X_t can refer to LFP features at time t, past and future kinematics, convolutions of intrinsic spiking history up to but not including time t with temporal filters, or combinations of these covariates. For example, for a given Hilbert transform of an LFP band, z_t, the feature vector

X_t = [|z_t|, Re(z_t), Im(z_t), cos(θ_0 − Arg(z_t))]  (3)

corresponds to a model with cosine tuning to a preferred Hilbert phase θ_0, as well as amplitude envelope and analytic signal features, i.e.,

ln(λ_t) = μ + a_1|z_t| + a_2 Re(z_t) + a_3 Im(z_t) + a_4 cos(θ_0 − Arg(z_t)).  (4)
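Evaluating the single-band model of Equation (4) is direct; in the sketch below, the parameter values and the analytic signal are placeholders.

```python
import numpy as np

# Placeholder parameters and analytic signal for one band.
mu, a1, a2, a3, a4, theta0 = -4.0, 0.10, 0.05, 0.05, 0.30, np.pi / 3
amp = np.abs(np.random.randn(1000))
phs = np.random.uniform(-np.pi, np.pi, 1000)
z = amp * np.exp(1j * phs)

log_lam = (mu + a1 * np.abs(z) + a2 * z.real + a3 * z.imag
           + a4 * np.cos(theta0 - np.angle(z)))
lam = np.exp(log_lam)                      # conditional intensity (spikes/s)
p_spike = np.clip(lam * 0.001, 0.0, 1.0)   # per-bin probability, Delta = 1 ms (Eq. 1)
```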
Model Fitting
Model estimation is solved by finding parameters A and μ that maximize the L2-regularized log-likelihood of the observed spiking activity (Truccolo et al., 2005):

L(A, μ) = (1/T) Σ_t [Y_t ln(λ_t Δ) − λ_t Δ] − α‖A‖²,

where α is a penalty or regularization parameter. The log-likelihood is normalized by the number of samples T so that the strength of regularization does not depend on the amount of data. The parameter μ is not penalized. All features are z-scored prior to model fitting to ensure that all features are zero mean and of comparable scale, which ensures that the L2 penalty is applied equally to all features and improves numerical accuracy.
We used a gradient descent approach for the minimization of the negative log-likelihood under L2 regularization. Models were fit under a two-tier cross-validation scheme. An outer level of 10-fold cross-validation ensures that results are not overfit. An inner level of cross-validation selects the regularization parameter α. Ten values of the regularization parameter α, base-10 logarithmically spaced between 1e-9 and 1e2 inclusive, as well as α = 0, were tested. On each of the 10 outer-level cross-validations, 90% of the data were taken as training data, and 10% were reserved for testing. The training data were split randomly into two equal groups. For each group, models were generated for each value of the regularization parameter α. The value of the regularization parameter that led to the best generalization (in terms of predictive power, see below) in this internal cross-validation step was selected for fitting a model on all of the training data. This model was then validated on the remaining 10% of the data that had been withheld for testing. This two-tier cross-validation procedure was repeated 10 times, such that all of the available data were used for model validation and assessing predictive power. To confirm that L2 regularization sufficiently prevented over-fitting when adding LFP features to the kinematics model, we shuffled LFP features in 100 ms blocks with respect to the spiking activity. We found that adding these non-informative features to the kinematics and kinematics-history combined models reduced the predictive power very little, by at most 0.03, with the population mean decrease ranging from 0.001 to 0.006 across all sessions. This difference is too small to alter the conclusions of our study. In preliminary analysis, we also explored L1 regularization and a mixed L1-L2 regularization (fitted via the elastic net; Friedman et al., 2010). We found that L2 regularization outperformed these alternatives, both in terms of computational time and predictive power under generalization. Additionally, we found that L2-regularized GLM models outperformed simpler approaches, such as naive Bayes and spike-triggered average and covariance analysis.
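A minimal sketch of the penalized fit described above, substituting SciPy's L-BFGS-B optimizer for the paper's custom gradient-descent implementation and omitting the two-tier cross-validation loops:

```python
import numpy as np
from scipy.optimize import minimize

def fit_point_process_glm(X, y, alpha, dt=0.001):
    """Maximize the per-sample-normalized, L2-penalized log-likelihood above.
    X: (T, d) covariates; y: (T,) binary spike indicator. A sketch only."""
    X = (X - X.mean(0)) / X.std(0)            # z-score features, as in the text
    T, d = X.shape

    def neg_penalized_ll(w):
        mu, A = w[0], w[1:]
        eta = mu + X @ A
        lam = np.exp(eta)
        ll = np.mean(y * (eta + np.log(dt)) - lam * dt)
        return -ll + alpha * np.sum(A ** 2)   # mu is not penalized

    def grad(w):
        mu, A = w[0], w[1:]
        resid = y - np.exp(mu + X @ A) * dt
        return np.concatenate([[-resid.mean()],
                               -(X.T @ resid) / T + 2 * alpha * A])

    res = minimize(neg_penalized_ll, np.zeros(d + 1), jac=grad, method="L-BFGS-B")
    return res.x[0], res.x[1:]                # (mu, A)
```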
Assessment of Predictive Power
Model performance was evaluated using the area under the ROC convex hull (AUC) measured on testing data, using the model (Equation 2) to compute the conditional spiking probability, Pr(Y_t = 1 | X_t, A, μ) ≈ λ_t(X_t, A, μ)Δ, from the observed covariates X_t. ROC analysis assesses predictive power in the context of binary (in this case, spike train) sequences (Fawcett, 2006). We report predictive power (PP) as 2 × AUC − 1, which ranges from 0 (no predictive power) to 1 (complete prediction of single-neuron spikes in 1 ms time bins). Note that this predictive power measure is based on both true and false positive rates, since it is derived from the ROC analysis.
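Computing predictive power from held-out data can be sketched as follows; the intensities and spike train are synthetic placeholders. Note that scikit-learn integrates the empirical ROC curve, whereas the paper uses the ROC convex hull, which gives a slightly larger area.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
lam = rng.gamma(2.0, 5.0, 60000)        # model intensities (spikes/s), hypothetical
y = rng.random(60000) < lam * 1e-3      # spikes in 1 ms bins (Bernoulli thinning)

pp = 2.0 * roc_auc_score(y, lam) - 1.0  # predictive power = 2*AUC - 1
print(f"predictive power = {pp:.2f}")
```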
Results
We are interested in how well collective neural dynamics, as reflected by features in ongoing and evoked multi-band LFP signals, can explain motor cortex single-neuron spiking variability not accounted for by motor behavioral covariates such as reach and grasp kinematics. We first demonstrate that the examined LFP features can predict single-neuron spiking in motor cortex, then we assess the extent to which this predictive power compares and is redundant to information available in 3-D kinematics. We also assess the extent to which intrinsic spiking history, i.e., temporal dynamics or correlation in the modeled spiking activity itself, adds predictive power to kinematics, and evaluate whether LFP features remain predictive conditioned on kinematics and intrinsic spiking history.
Datasets from seven experimental sessions were used in these analyses: two each from monkeys C and R, and three from monkey S. Sessions from monkeys R and S were collected within a week of each other. The two sessions from monkey C were collected 3 months apart. Between three and nine reach and grasp blocks, averaging 140 s long, were collected in each session. Each session included 15-42 successful free-reach-to-grasp trials or reaches in each block. This yielded 7-17 min of FRG task data, averaging 10 min of data per session. A detailed statistical description of the kinematics and examples of kinematic trajectories in these experimental blocks can be found in Vargas-Irwin et al. (2010) and Bansal et al. (2012), who reported some of the data from monkey C in this task. An example of kinematics trajectories with the corresponding neuronal ensemble spike raster is shown in Figure 1. For each array in each session, between 19 and 83 well-isolated units were identified for analysis (mean = 52, σ = 16). For a given monkey and area, some of the neurons are thought to be the same across sessions; for this reason, we do not combine sessions when performing statistical significance tests.
Features of LFP Oscillations Predict Single-Neuron Spiking with Substantial Power
First, we evaluated the ability of multiband LFP features to predict single-neuron spiking. We fit a regularized generalized linear model to predict single-unit spiking (1 ms time resolution) from multiband LFP features. Spiking probability at any given 1 ms time interval was modeled as a function (Equations 1-2, Methods) of features of ongoing LFP activity. LFP features included instantaneous phase and amplitude envelope as well as the analytic signal, extracted via a Hilbert transform, from four narrow LFP bands (Methods): δ (0.3-2 Hz, motor-related potentials), θ (2-7 Hz), α (7-15 Hz), and β (15-30 Hz), as well as the amplitude envelope and analytic signal for four broader, higher-frequency bands: γ1 (30-60 Hz), γ2 (60-100 Hz), and two multi-unit activity (MUA) bands, MUA1 (100-200 Hz) and MUA2 (200-400 Hz). Figure 1 illustrates the signal processing steps involved in the computation of the instantaneous phase and magnitude for a single frequency band.
We report the extent to which a model explains spiking variability in terms of "predictive power" (PP). Predictive power is the normalized area under the receiver operating characteristic (ROC) curve, such that 0 corresponds to chance level and 1 to perfect prediction of spike times at 1 ms resolution (see Methods: Model fitting; Methods: Assessment of predictive power). Figure 2 shows PPs obtained from three example neurons from different monkeys and areas. For illustration and comparison, we also show the corresponding PPs based on a model including only kinematics features related to lagged 3-D velocity and position (similar to "pathlets" in Hatsopoulos et al., 2007) and grasp aperture (Methods). The examples show a case (Figure 2, left) in which LFP features and kinematics both explained a substantial fraction of spiking variability (PP = 0.73 and 0.75, respectively) and two other examples in which LFP features did better and worse than kinematics, respectively. Overall, we found that LFP features were typically predictive, and that for some neurons LFP accounted for a substantial (i.e., PP > 0.5) fraction of variability. We performed a permutation test to assess chance-level LFP predictive power by shuffling LFP features in 100 ms blocks relative to spiking, and found that the 95% chance predictive power ranged from 0.03 to 0.06 across sessions. Across sessions and areas, between 85% and 100% of units showed LFP predictive power higher than this chance level. As shown in Figure 3, high predictive power from LFP was consistent across all monkeys, motor cortical areas (PMv, PMd, and M1), and sessions. This finding demonstrates that collective dynamics reflected in ongoing and evoked LFP oscillations can account for a substantial fraction of single-neuron spiking variability.

FIGURE 3 | Features of ongoing and evoked multiband LFP oscillations predict single-neuron spiking: summary across animals and areas. Histogram counts of the predictive power of point-process models based on LFP features, for all isolated units. LFP features include the instantaneous phase, amplitude envelope, and analytic signal in the δ = 0.3-2 Hz, θ = 2-7 Hz, α = 7-15 Hz, and β = 15-30 Hz LFP bands, as well as amplitude envelope and analytic signal in the γ1 = 30-60 Hz, γ2 = 60-100 Hz, MUA1 = 100-200 Hz, and MUA2 = 200-400 Hz bands (see Methods: LFP feature extraction). LFP was consistently predictive of spiking variability for a subset of neurons in all sessions. In the figure legends, "S" indicates the session, μ is the mean of each distribution, m is the median, and σ is the standard deviation. Sessions have different numbers of units, and the differences in bar height also reflect differences in sample size (e.g., monkey C area PMv).
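The block-shuffle control used for the chance-level estimate above can be sketched in a few lines; block length 100 corresponds to 100 ms at the 1 kHz feature sampling rate used here.

```python
import numpy as np

def block_shuffle(X, block=100, rng=None):
    """Permute rows of X in contiguous blocks (100 samples = 100 ms at 1 kHz),
    destroying LFP-spike alignment while keeping within-block structure.
    A sketch of the permutation control described in the text."""
    rng = rng or np.random.default_rng()
    T = (len(X) // block) * block                       # drop any ragged tail
    blocks = X[:T].copy().reshape(-1, block, *X.shape[1:])
    rng.shuffle(blocks, axis=0)                         # permute whole blocks
    return blocks.reshape(-1, *X.shape[1:])
```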
LFP Features Contributing to Prediction of Single-Neuron Spiking
We examined whether some of the multiband LFP features contribute more to prediction of single neuron spiking than others. Analysis based on estimated model coefficients is complicated because of the nonlinear (multiplicative) interactions between different features (amplitude, phase, or analytical signal) in different frequency bands. Instead, we performed an analysis based on fitting a single model for each feature separately and assessing how well each separate model and feature predicted spiking. This allowed easy visualization of the predictive power of each individual LFP feature.
This analysis revealed some trends common to all animals and motor cortex areas, but also some variations (Figure 4). Consistently across animals and areas, low-frequency local field potentials (δ, 0.3-2 Hz) showed predictive power in the time-domain signals and phases, but not in amplitude envelopes. Additionally, the amplitude envelope in the multi-unit (100-200 and 200-400 Hz) bands was predictive, more so than the signal. The low-frequency <2 Hz analytic signal was the most predictive for 49% of units (486 units out of 991), and the instantaneous amplitude envelope and phase the most predictive for 13% and 8% of units (133 and 82 units out of 991), respectively. The amplitude envelope in the 200-400 Hz band was the most predictive for 14% of units (142 units out of 991). Features from intermediate 2-100 Hz bands generally performed poorly, with the exception of beta (15-30 Hz) amplitude, which, although less predictive than the aforementioned features, was still among the top 4 most predictive features for 23% of units (227 units out of 991). The finding that LFP amplitude was predictive for the beta-frequency LFP was strongest in monkey R for areas M1 and PMv. To understand in more detail how the predictive power in beta amplitude varies across monkeys and areas, we examined the distribution of the model parameter weights for beta amplitude (in the case of the amplitude-only model, the parameter matrix A in Equation (2) is simply a single scalar parameter). Model weights for beta amplitude in monkey R areas M1 and PMv were more negative (mean ± 2SD for M1 and PMv were −0.18 ± 0.4 and −0.23 ± 0.4) than those from other monkeys and areas (−0.04 ± 0.16), indicating that a reduction in beta amplitude is typically associated with an increase in firing rate.
Predictive Power of Kinematics During Naturalistic Reach and Grasp Movements
We next quantified the predictive power of motor behavior, specifically kinematic features of the 3-D reach and grasp movements. We found that kinematics trajectories can also predict single-neuron spiking with substantial accuracy, at times achieving predictive power levels around 0.8 (e.g., Figure 5, monkey R area M1). However, similarly to LFP features, there was considerable diversity in the extent to which kinematics predicted spiking, with some units being predicted poorly. Similar results were obtained by using position trajectories (i.e., position at multiple time lags with respect to spiking; Methods). The 95% chance level predictive power for kinematics ranged from 0.03 to 0.08, as assessed by shuffling kinematics features in 100 ms blocks relative to spiking. Across sessions and areas, between 49 and 100% of units showed kinematics predictive power higher than this chance level. These effects were consistent across all animals, sessions, and motor areas (Figure 5), with mean predictive power ranging between 0.16 and 0.36 across sessions and areas. The fact that the task kinematics predict single-unit spiking variability confirms that we are recording from motor cortex populations that exhibit task modulation and tuning to motor output.

FIGURE 5 | Kinematic features' predictive power for single-neuron spiking: summary across animals and areas. Histogram counts of the predictive power of point-process models based on kinematics features, for all isolated units. Kinematics features are normalized velocity trajectories of wrist endpoint and grip aperture, extending from 100 ms in the past to 300 ms in the future, sampled every 25 ms, as well as the average speed and zero-lag position for wrist endpoint and grip aperture (Methods: Kinematic feature extraction). In the figure legends, "S" indicates the session, μ is the mean of each distribution, m is the median, and σ is the standard deviation. Sessions have different numbers of units, and the differences in bar height also reflect differences in sample size (e.g., monkey C area PMv).

Figure 6 directly compares the predictive power of LFP and kinematics features. Overall, the predictive power of LFP features was typically less than that of kinematics during this free reach and grasp task: the difference between the predictive power of models based on kinematics and LFP features ranged from −0.20 to 0.45. With the exception of monkey S area PMv, units for which LFP features explained more variability than kinematics were rare. The mean difference in predictive power within each session ranged from −0.04 to 0.14, and the median difference from −0.02 to 0.12, with all (session, area) pairs except monkey S area PMv session 3 displaying significantly better median predictive power for kinematics (Wilcoxon signed-rank test with p < 0.05, corrected for multiple comparisons using Bonferroni correction for 19 (session, area) pairs). Furthermore, across all monkeys and areas, predictive power of LFP was highly correlated with that of kinematics: the Pearson correlation coefficient between the predictive power of kinematics and LFP features ranged from 0.52 to 0.96, with a mean of 0.86 and a median of 0.88. This raises the possibility that LFP and kinematics shared some common effect, which we address below.

FIGURE 6 | Kinematics and features of ongoing and evoked multiband LFP oscillations achieve similar predictive power on a neuron-by-neuron basis. Scatter plots compare the predictive power of LFP features (x-axis) to that of kinematic features (y-axis). LFP features include the instantaneous phase, amplitude envelope, and analytic signal in the δ = 0.3-2 Hz, θ = 2-7 Hz, α = 7-15 Hz, and β = 15-30 Hz LFP bands, as well as amplitude envelope and analytic signal in the γ1 = 30-60 Hz, γ2 = 60-100 Hz, MUA1 = 100-200 Hz, and MUA2 = 200-400 Hz bands (see Methods: LFP feature extraction). Each data point is a single unit from one session. The dashed line indicates equality. Most units lie above the dashed line, indicating that kinematic features better predict single-unit spiking variability. In the figure legends, "S" indicates the session, ρ is the Pearson correlation coefficient between the predictive power for kinematics and LFP features. Predictive power from both LFP and kinematics are highly correlated, i.e., neurons that are well predicted by LFP features tend also to be well predicted by kinematics.
LFP Features are Mostly Redundant to Kinematics When Explaining Single-neuron Spiking Variability
In order to determine whether LFP features can account for single-neuron spiking variability not explained by kinematics, we asked whether the predictive information carried in the examined LFP features about single-neuron spiking variability was redundant to the predictive information carried in kinematic features. To assess redundancy, we compared the predictive power of point process models based only on kinematics features to models that included both kinematics and LFP features. We used L2 regularization to control for overfitting to the training data due to the larger number of parameters in the models that combined both kinematics and LFP features (Methods). We found that forgoing L2 regularization led to overfitting, in which the larger number of parameters in the combined LFP-kinematics model generalized less well to the evaluation data. Tests using shuffled LFP features confirmed that the L2 regularization approach adequately prevented overfitting (Methods). Figure 7 compares, on a unit-by-unit basis, the relative predictive powers of kinematics and LFP features. The analysis reveals that, with a few exceptions (e.g., some neurons in PMv in monkey S), LFP features added little predictive power to kinematics. This finding suggests that although LFP features were able to account for a substantial fraction of spiking variability, this information was highly redundant to information available in the examined kinematics features. This finding confirms the conjecture raised earlier, based on the high correlation between LFP and kinematics predictive power (Figure 6), that the information available in these two signals was redundant in terms of prediction of single-neuron spiking activity. Nevertheless, for each session, after adding LFP features, the mean change in predictive power was positive, ranging from 0.008 to 0.07, and the median change in predictive power ranged from 0.006 to 0.06. This median increase was statistically significant for all sessions.

FIGURE 7 | LFP features add little predictive power to a kinematics model. Scatter plots comparing, on a unit-by-unit basis, the predictive power of the kinematics model (x-axis) to the predictive power of a model that uses both LFP and kinematics features (y-axis). Although LFP features by themselves can achieve high predictive power for single-neuron spiking (Figure 3), their combination with kinematics typically results in only a small increase in predictive power, suggesting that most predictive information in the examined LFP features is redundant to predictive information in kinematics features. In the figure legends, "S" indicates the session, μ is the mean change in predictive power, and m is the median change in predictive power.
Intrinsic Spiking History Adds Substantial Power to Kinematics in the Prediction of Single-neuron Spiking Variability
The above findings suggest that the examined features of LFP collective dynamics during movement reflect primarily sensorimotor processes related to motor representations and computations associated with the measured reach and grasp kinematics. Nevertheless, these LFP features accounted for a small, but statistically significant, fraction of neural variability not accounted for by the examined kinematics features. To investigate the potential sources of this additional predictive power, we take a detour in this section and consider first the predictive power of a neuron's own spiking history. Here we focused on the preceding 100 ms of spiking history, which can capture fast intrinsic biophysical processes such as refractory and recovery periods after an action potential, and also bursting dynamics, which are common in certain types of motor cortex neurons (Chen and Fetz, 2005). In addition, temporal autocorrelations within a single neuron's spiking activity can be induced, for example, by both intrinsic rhythmicity and rhythmicity due to ongoing neuronal network dynamics affecting spiking. We used temporal filters to capture the effects of intrinsic spiking history. Temporal filters were estimated with semi-parametric models using raised cosine functions (Pillow et al., 2008; Truccolo et al., 2010; Methods). Ten logarithmically spaced raised cosine functions on the past 100 ms were used.

FIGURE 8 | Intrinsic spiking history carries complementary information to kinematics features. Scatter plots showing a unit-by-unit comparison of the predictive power of the kinematics model (x-axis) to that of a model that uses both kinematics features and intrinsic spiking history features (y-axis). Adding 100 ms of intrinsic spiking history information improves prediction substantially for almost all units, consistently across animals and areas. In the figure legends, "S" indicates the session, μ is the mean change in predictive power, and m is the median change in predictive power.
When information about a single neuron's own spiking history is added to the model, there is a significant improvement in predictive power compared to a model including only kinematics features (Figure 8). Within each session and area, the mean increase in predictive power ranged from 0.04 to 0.17. The median increase in predictive power ranged from 0.03 to 0.16, and was statistically significant for all sessions and motor areas (Wilcoxon signed-rank test, p < 0.05 with Bonferroni correction for 19 (session, array) multiple tests).
FIGURE 9 | Conditioned on intrinsic spiking history, the contribution of LFP features to kinematic models is redundant. Scatter plots comparing the predictive power of a model based on kinematics and intrinsic spiking history (x-axis) to that of a model based on kinematics, intrinsic history, and LFP features (y-axis). LFP features add negligible predictive power after accounting for behavioral and intrinsic spiking history effects. In the figure legends, "S" indicates the session, µ is the mean change in predictive power, and m is the median change in predictive power.
This result demonstrates that fast-timescale spiking history can explain variability in single-neuron spiking that is not redundant to the variability explained by the kinematic features in this motor task.
Conditioned on Spiking History, Contribution of LFP Features to Kinematic Models is Further Reduced
Having demonstrated that intrinsic spiking history adds predictive power to kinematic models, we finally assessed whether LFP features can account for variability in single-neuron spiking not accounted for by kinematics and intrinsic history features. Figure 9 shows that, across monkeys and motor areas, adding LFP features to models based on kinematics and intrinsic spiking history leads to no substantial improvement in predictive power. Across sessions and areas, the mean change in predictive power when adding LFP features to a model containing both kinematics features and intrinsic spiking history features ranged from 0.002 to 0.02, and the median change ranged from 0.001 to 0.02. Nevertheless, this median improvement was statistically significant for all but one session (monkey C area PMv session 1) (Wilcoxon signed-rank test, p < 0.05 with Bonferroni correction for 19 (session, array) multiple tests).
This result demonstrates that the LFP predictive power not redundant to kinematics features was primarily redundant to information available in the recent 100 ms spiking history in this motor task. In other words, additional single-neuron variability not explained by kinematics seems to be better explained by fast-timescale features in intrinsic spiking history than by the examined motor cortex LFP features in this reach and grasp task.
Discussion
Neocortical neurons are embedded in large networks possessing highly recurrent connectivity. Recurrent connectivity typically leads to rich spontaneous collective dynamics. The extent to which these spontaneous dynamics contribute to single-neuron variability in awake behaving primates, and how these dynamics interact with sensory inputs and behavioral outputs, is an important open question in neuroscience. Here we examined this problem in the context of collective dynamics reflected in LFP oscillations at multiple frequencies in three different areas of motor cortex in monkeys performing naturalistic 3-D reach and grasp actions. These LFPs are thought to result, to a large extent, from collective modes of activity driving spatially coherent postsynaptic potentials at multiple spatiotemporal scales (Nunez and Srinivasan, 2006; Buzsáki et al., 2012). LFP features (e.g., amplitude envelope, phase, and analytic signal) in eight different frequency bands predicted single-neuron spiking (1 ms time resolution) with significant predictive power for many neurons in all three examined motor cortex areas (PMv, PMd, and M1). Neurons for which LFP predictive power was high tended also to show high kinematics predictive power. In fact, this relationship was close to linear (Pearson correlation coefficient ranging from 0.52 to 0.96 across all the studied areas, monkeys, and sessions). More importantly, predictive information in the examined LFP features was mostly redundant to the predictive information available in kinematics. In other words, models combining both LFP features and kinematics typically improved only marginally over models using only kinematics in the studied 3-D reach and grasp task. These results should not be dismissed as overfitting artifacts, since they were obtained under well-controlled L2 regularization aiming to preserve generalization of models with a larger number of parameters. Furthermore, in the few cases for which LFP features seemed to add predictive information with respect to kinematics, this information turned out to be redundant to the information available in short-term correlations in the intrinsic spiking history. Overall, our findings suggest that multiband LFP oscillations in motor cortex of alert behaving primates, although predictive of single-neuron spiking during movement execution, are primarily related to collective dynamics controlling aspects of motor output (e.g., kinematics) rather than other potential ongoing dynamics not directly related to the task (e.g., arousal levels).
Several previous studies have looked at the relationship between single-neuron spiking and features of LFP oscillations, mostly in sensory cortices and during anesthesia (e.g., Haslinger et al., 2006; Kelly et al., 2010). Recent work by Ecker et al. (2014) has shown that previously reported high correlations between neuronal pairs and strong phase locking to ongoing LFPs in primary visual cortex during stimulation were highly dependent on the anesthesia state, with neuronal ensemble spiking becoming much more asynchronous during awake stimulation tasks. Our analysis goes beyond previous studies by examining motor cortex LFP and spiking in awake behaving non-human primates. Furthermore, to our knowledge, this is the first time that the redundancy between the information available in multiband LFP features and the information available in behavioral output (kinematics) has been systematically assessed in motor cortex. It remains to be seen how much of the residual variability is inherent to stochastic aspects of the biophysics (e.g., noise due to synaptic failure and amplification effects during spike generation; Carandini, 2004), to other motor-related covariates (e.g., torques and muscle activations) not examined in this paper, or to network dynamics not faithfully reflected in LFP features. In the latter case, it is possible that the cortical layer from which the electrode tips recorded (likely layer 5 in our data) may impact LFP predictive power. For example, LFPs recorded from layers 2/3 of motor cortex may potentially exhibit different spike prediction performance and different levels of redundancy with respect to kinematics. In addition, we note that typically recorded LFPs might not be as "localized" as previously thought (Kajikawa and Schroeder, 2011). In particular, rhythmic oscillations in electric potentials recorded intracellularly and on broad extracellular fields may share similar frequencies, and yet show very different phase-locking dynamics with respect to neuronal spiking (e.g., Harvey et al., 2009). Thus, the broader LFP spatial average might result in signals that are less predictive of single-neuron spiking and more related to population activity.
The relationship between single-neuron spiking and ongoing LFP oscillations, in particular the locking of neuronal spiking to the phase of oscillations in specific frequency bands, might be highly dependent on the neuron types (e.g., pyramidal vs. fast spiking interneurons; Buzsáki et al., 2012). Recent work by Vigneswaran et al. (2011) has demonstrated that certain types of pyramidal neurons in primary motor and premotor cortices can show features of action potential waveforms and spiking statistics that are indistinguishable from features in inhibitory interneurons. Therefore, an analysis based on such putative classification would remain highly questionable in our motor cortex data.
In our analysis, low-frequency (0.3-2 Hz) and high-frequency (>100 Hz) LFP bands tended to contribute the most to prediction of neuronal spiking. The former relate to motor evoked potentials, which are known to be highly correlated with population spiking (Bansal et al., 2012), and the latter to multiunit activity, whose movement-related modulation also reflects correlated spiking in neuronal populations. Intermediate frequency bands tended to contribute little during movement execution in this type of task. One could raise the possibility that the relationship between LFP features and single-neuron spiking in these intermediate frequency bands could be much more transient than the relationship between spiking and kinematics during movement execution. For example, beta oscillations, even during movement preparation, typically occur in transient, not sustained, events lasting a few or several cycles. Thus, one would like to build models in which spiking phase-locking is explicitly conditioned on the amplitude of the beta-filtered LFPs, so that these transients can be properly captured. In this regard, we note that the neural point process models used here should capture such dependence on beta amplitude, since the log-additive form of the models allows for (nonlinear) multiplicative effects and interactions among different terms (e.g., beta amplitude and phase) in the models. We also note that, although more complex LFP features and models could potentially improve spike prediction, the same could be said about improving the predictive power of motor behavioral covariates by using more complex or a larger set of kinematic features, including for example kinetics (torques) and muscle activation covariates. We hope to be able to examine more complex LFP and motor behavior-related features in the future.
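To make the log-additive point concrete, here is a minimal sketch of how an amplitude-phase interaction can enter such a point process model; the feature construction and weights are illustrative assumptions, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000                                     # 1 ms bins
amp = rng.gamma(2.0, 1.0, T)                 # transient beta amplitude envelope
phase = rng.uniform(-np.pi, np.pi, T)        # instantaneous beta phase

# Log-additive model: log(rate) = w0 + w_a*amp + phase terms + amp*phase terms.
# Exponentiating makes the effects multiplicative, so the strength of phase
# locking (cos/sin terms) is gated by the concurrent beta amplitude.
X = np.column_stack([np.ones(T), amp,
                     np.cos(phase), np.sin(phase),
                     amp * np.cos(phase), amp * np.sin(phase)])
w = np.array([-4.0, 0.05, 0.0, 0.0, 0.3, 0.2])   # illustrative weights
spikes = rng.poisson(np.exp(X @ w))              # conditional-intensity sampling
```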
The results reported here on the redundancy between motor cortex multiband LFP features and motor behavior are specific to execution of motor tasks in non-human primates who were highly engaged during movement execution. Multiband LFP features also provide reliable biomarkers for broader brain states and their changes. For example, the relationship between single-neuron spiking activity and ongoing LFPs is likely to change substantially depending on anesthesia, drowsiness, resting vs. awake states, attentional and volitional states, as well as stages during motor tasks (e.g., preparation vs. execution). In this broader context, including a larger variety of neural states than examined in this study, we expect multiband LFP features will be an important independent signal to account for neuronal spiking variability not explained by stimuli or behavioral covariates.
Variability in single-neuron spiking activity has often been characterized as of two types: private and shared (e.g., Deweese and Zador, 2004; Churchland and Abbott, 2012; Litwin-Kumar and Doiron, 2012; Goris et al., 2014). Private variability is likely to reflect chaotic nonlinear dynamics in highly recurrent neuronal networks (Litwin-Kumar and Doiron, 2012). Amplification of membrane potential fluctuations by the spiking generation process (Carandini, 2004), in addition to local stochastic factors such as thermal fluctuations and synaptic failure (Faisal et al., 2008), are also important contributors. On the other hand, shared variability in neuronal ensembles is thought to evolve on slower time scales and reflect representational and computational states in neuronal networks (Churchland and Abbott, 2012; Litwin-Kumar and Doiron, 2012). The examined fluctuations in multiband LFP oscillations seem primarily to be related to this shared variability. Multiband oscillatory LFP activity results in large part from coherent or shared dynamics in neuronal networks. In addition, features in these oscillations that are predictive of single-neuron spiking seemed mostly redundant to parameters in motor behavior. Overall, our finding was that information in the examined multiband LFP features directly relates to these shared representational and computational dynamics across neural populations in motor cortex. Single-neuron activity in motor cortex populations has been shown to be dominated by latent low-dimensional collective dynamics. We hope in the future to investigate the relationship between multiband oscillatory LFP activity, in particular slow fluctuations, and latent low-dimensional rhythmic dynamics in motor cortex.
"year": 2015,
"sha1": "c387f5d79bcd4334a73b89e152a901d1bc4afc01",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fnsys.2015.00089",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0016d8ddfe9eadb1ec53242ba5a1ffc339d3c797",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Flexible Polyhedral Surfaces with Two Flat Poses
We present three types of polyhedral surfaces, which are continuously flexible and have not only an initial pose, where all faces are coplanar, but pass during their self-motion through another pose with coplanar faces (“flat pose”). These surfaces are examples of so-called rigid origami, since we only admit exact flexions, i.e., each face remains rigid during the motion; only the dihedral angles vary. We analyze the geometry behind Miura-ori and address Kokotsakis’ example of a flexible tessellation with the particular case of a cyclic quadrangle. Finally, we recall Bricard’s octahedra of Type 3 and their relation to strophoids.
Introduction
A polyhedral surface is called (continuously) flexible, when there is a continuous family of mutually incongruent poses, called flexions, sharing the intrinsic metric. Each face is a planar rigid body; only the dihedral angles can vary between faces sharing an edge.
In discrete differential geometry, but also in architecture, there is an interest in polyhedral structures that can change their shape while all faces remain rigid (see, e.g., [1]). Everybody can simulate this flexibility with real-world models made from cardboard. However, one must be careful: the models of theoretically-rigid polyhedral surfaces can show a flexibility, due to imprecision and slight bending of the material. A famous example (see Figure 1) is described in Schwabe and Wunderlich's article [2] and was even exposed at the science exposition "Phänomena" 1984 in Zürich. At that time, it was falsely stated that this polyhedron is flexible. However, it is only snapping between two planar realizations and one spatial pose. The unfolding shows that all faces are congruent isosceles triangles. Below, we present three examples of flexible polyhedral surfaces that share the property that they have (at least) two flexions where all faces are coplanar. Such poses will be called "flat".
• Firstly, we analyze the geometry behind Miura-ori.
• Secondly, we study Kokotsakis' flexible tessellation of convex quadrangles and a particular case with a second flat pose. These two examples include flexible 3 × 3-complexes of quadrangles, so-called quadrangular Kokotsakis meshes [3].
• Finally, we recall the well-known Bricard octahedra of Type 3. They can be characterized by having no global symmetry, and they admit two flat poses. We will emphasize the role of a particular cubic curve, the strophoid.
As a preparation, we provide a necessary condition for flexible polyhedral surfaces with two flat flexions. This condition refers to vertices V where four faces are meeting. Let α, β, γ and δ be the sequence of interior angles at V. Under the "vertex figure" of V, we understand the intersection of these four faces with a sphere, which is centered at V and whose radius is smaller than the shortest edge with endpoint V. Hence, the vertex figure is a spherical quadrangle with sides of lengths α, β, γ and δ (Figure 2). It has no effect on the flexions of this spherical quadrangle, when we replace any side of length λ by its complementary arc of length 2π − λ. Therefore, we can confine ourselves to the case α, …, δ < π; we call this quadrangle the reduced vertex figure.
Lemma 1. Let a flexible polyhedral surface be given that passes through two flat poses. Then, at each vertex, where four faces are meeting, the reduced vertex figure has either a plane of symmetry, at least after replacing one or two vertices by their antipodes, or the sides coincide in pairs.
Proof. The side lengths of the reduced vertex figure satisfy 0 < α, …, δ < π (see Figure 2). The spherical quadrangle admits two aligned poses if two of the following four equations hold:

(a) α + β + γ + δ = 2π, (b) α + β − γ − δ = 0, (c) α − β + γ − δ = 0, (d) α − β − γ + δ = 0.

It cannot happen that the length of one side, e.g., A0B0, equals the sum of the other three, since this spherical quadrangle admits only the aligned position. In any non-aligned pose of the polygon A0ABB0 with side lengths β, γ and δ, the distance between the initial point and the endpoint would be shorter than α. For a similar reason, a spherical quadrangle with β + γ + δ − α = 2π cannot be continuously flexible. Equations (a) and (b) are equivalent to β = π − α and δ = π − γ. When we replace A and B by their respective antipodes Ā and B̄, we obtain a spherical deltoid A0ĀB̄B0. The same holds in the case where (a) and (d) are satisfied. Replacing one vertex by its antipode means replacing one edge of the pyramid with apex V by its reflection with respect to V (note the extensions P3* and P4* of P3 and P4 in Figure 6). Spherical kites have either an axis of symmetry or the flexion is trivial, since the sides coincide in pairs. Spherical isograms are either self-intersecting with an axis of symmetry or they have a center of symmetry. When in the latter case two neighboring vertices are replaced by their antipodes, then the axial symmetry of the spherical isogram is transformed into a planar symmetry.
Miura-ori
Miura-ori is a Japanese folding technique named after Prof. Koryo Miura, The University of Tokyo. However, already before K. Miura, this technique was known; according to a personal communication with Gy. Darvas, it was kept as a military secret in Russia in the thirties of the last century. There are many applications of Miura-ori. It is, e.g., used for solar panels, because they can be packed in compact form and deployed into their flat shape by pulling on one corner only [4]. It serves as a kernel to stiffen sandwich structures. Furthermore, it can be utilized for the design of cupolas [1].
Let us analyze the process of folding a sheet of paper as depicted in Figure 3 with given valley and mountain folds, thus proving that this quadrangular surface is really continuously flexible. To begin with, we take two coplanar parallelograms with aligned upper and lower sides (Figure 4a). Now, we rotate the parallelogram on the right-hand side against the left one about the common edge: the lower sides span a plane E1, and the upper sides span a plane E2 parallel to E1. We continue and extend the two parallelograms to a zig-zag strip by adding parallelograms, which are translatory congruent alternately to the left-hand or right-hand parallelogram of the initial pair. After this, the complete strip has its lower zig-zag boundary still placed in E1 and the upper one in E2 (see Figure 4c). This remains valid when we fix the plane E1, but vary the bending angle ϕ.
After iterated reflections in planes E2 parallel to E1 (Figure 4d) or after translations orthogonal to E1, the complete Miura-ori flexion is obtained as depicted in Figure 5. When finally the two initial parallelograms become coplanar due to a bending angle of 180°, Miura-ori reaches a totally folded pose, which is again flat. The set of edges of Miura-ori can be subdivided into two families of folds. The "horizontal" folds are copies of the zig-zag line in E1 and placed in horizontal planes. They are aligned in the stretched initial pose as depicted in Figure 3. The other folds are located in mutually parallel vertical planes, since their edges are obtained by iterated reflections in planes parallel to E1. We call these folds "vertical". While the edges of each horizontal fold alternate between valley and mountain fold, all edges of a vertical fold are of the same type, either valley or mountain folds. Now, we are going to analyze the self-motion of Miura-ori. For the sake of simplicity, we assume that all edges have a unit length (obviously, the side lengths of the zig-zag line in E1 can vary without restricting the flexibility; analogously, the distances between the planes through the horizontal folds need not be the same everywhere).
We fix the planes of one horizontal and one vertical fold, as well as the crossing point V of these two folds (Figure 6). Let P1, …, P4 be the four parallelograms meeting at V. We start with the motion of P1: the two edges of P1 with the common endpoint V rotate within the respective fixed planes, such that the included interior angle α < 90° remains constant (compare with Figure 4a). The reflection in the horizontal plane E2 maps P1 onto the second parallelogram P2. It has the same interior angle α at V and moves like P1. Now, we elongate the horizontal edges of the other two parallelograms P3 and P4 (with interior angle 180° − α) beyond the fixed vertical plane by the unit length. This gives two additional parallelograms P3* and P4* with the interior angle α at V. Each shares a "vertical" edge with P1 or P2, respectively. Hence, the reflection in the fixed vertical plane must map P1 onto P3* and P2 onto P4*.
This reveals a hidden local symmetry of Miura-ori: when at each vertex V two adjacent parallelograms P3, P4 with congruent interior angles at V are replaced by their "horizontal elongations" P3* and P4*, respectively, we obtain a pyramid of four congruent parallelograms P1, P2, P3*, P4* with apex V. This pyramid flexes, such that it remains symmetrical with respect to the planes of the horizontal fold and the vertical fold passing through V (compare with Lemma 1).

Figure 6. The two-fold local symmetry at each vertex V becomes visible when the edge between P3 and P4 is extended beyond V.
Next, we study the relations between angles: we use a coordinate frame with origin V, with the x- and y-axis in the horizontal plane E2 and with the [yz]-plane spanned by the vertical fold passing through V (see Figure 6). Let 2ϕ and 2ψ be the bending angles between consecutive segments of the horizontal and vertical folds, respectively. Thus, the sides of P1 have the direction vectors

e1 = (sin ϕ, cos ϕ, 0), e2 = (0, sin ψ, cos ψ),

and we obtain the dot product

cos α = e1 · e2 = cos ϕ sin ψ. (1)

In the stretched position, the parallelogram P1 is located in the vertical [yz]-plane; the corresponding (half) bending angles are ϕ = 0° and ψ = 90° − α. The other limit is the totally folded position with P1 in the [xy]-plane, ϕ = α and ψ = 90°. In order to obtain formulas for the dihedral bending angles 2γ and 2δ along edges of the horizontal and vertical folds (see Figure 6), we need the unit vector perpendicular to the plane of P1:

n = (1/sin α) (cos ϕ cos ψ, −sin ϕ cos ψ, sin ϕ sin ψ).

Its dot products with the unit vectors along the x- and z-axis give

sin δ = cos ϕ cos ψ / sin α, sin γ = sin ϕ sin ψ / sin α. (2)

Theorem 2. During the self-motion of Miura-ori, the horizontal folds remain in mutually-parallel planes. The same holds for vertical folds, and their planes are orthogonal to the planes spanned by the horizontal folds. The dihedral bending angles 2γ along the edges of the horizontal folds are everywhere the same, as well as the angles 2δ for the vertical folds. They satisfy Equation (2), while ϕ and ψ are related by Equation (1).
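The relations in Equations (1) and (2) can be checked numerically; the sketch below assumes the forms displayed above, fixes α, and sweeps the half bending angle ϕ between the two limit poses.

```python
import numpy as np

alpha = np.deg2rad(60.0)                        # interior angle of the parallelogram
for phi in np.deg2rad(np.linspace(1, 59, 5)):   # sweep the half bending angle
    psi = np.arcsin(np.cos(alpha) / np.cos(phi))       # solve Equation (1) for psi
    e1 = np.array([np.sin(phi), np.cos(phi), 0.0])     # horizontal-fold side of P1
    e2 = np.array([0.0, np.sin(psi), np.cos(psi)])     # vertical-fold side of P1
    n = np.cross(e1, e2) / np.sin(alpha)               # unit normal of P1
    assert np.isclose(e1 @ e2, np.cos(alpha))          # interior angle stays alpha
    assert np.isclose(abs(n[2]), np.sin(phi) * np.sin(psi) / np.sin(alpha))  # sin(gamma)
    assert np.isclose(abs(n[0]), np.cos(phi) * np.cos(psi) / np.sin(alpha))  # sin(delta)
```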
The motion depicted in Figures 4 and 5 is not the only self-motion of Miura-ori. The initial pose admits bifurcations. Trivial flexions arise, e.g., when in the stretched position, we use any aligned horizontal fold as an axis of rotation for a folding. The same can be done, independently of each other, with several aligned horizontal folds. Furthermore, after full 180°-rotations, we obtain an n-fold covered strip and can treat it like a single strip. Hence, Miura-ori admits more than two flat poses.
Miura-ori is a very particular case of a Voss surface. This is a polyhedral surface with quadrangular faces, such that each 3 × 3 complex is a flexible Kokotsakis mesh [5] of the isogonal Type 3 (according to the enumeration given in [6]): at each vertex, opposite interior angles are either equal or complementary (Lemma 1), and there is an additional equation to satisfy ([7], p. 12). A long-lasting open problem, the classification of flexible quadrangular Kokotsakis meshes [3], has recently been solved by I. Izmestiev [8]. The classification of flexible Kokotsakis meshes with a central n-gon and n ≥ 5 is still open.
There are several generalizations of Miura-ori, among them the impressive approximations of free-form surfaces presented by T. Tachi in [9]. Some patterns are even examples of rigid origami [10], designed within the family of Voss surfaces.
Kokotsakis' Flexible Tessellation
Another remarkable flexible mesh of quadrangles, which also starts from a flat initial pose, dates back to A. Kokotsakis. In [5] (p. 647), he proved the flexibility of the tessellation displayed in Figure 7. However, he did not present any geometric property of its spatial poses; this will be done below. Choose an arbitrary quadrangle P1. By iterated rotations about the midpoints of the sides, we obtain a well-known regular tessellation of the plane (Figure 7). We can generate the same tessellation also in another way: when we glue together two adjacent quadrangles, we obtain a central-symmetric hexagon (blue shaded in Figure 7). Now, the same tessellation arises when this hexagon undergoes iterated translations, as indicated in Figure 7 by the red arrows.
In the following, we prove for convex quadrangles that this polyhedral structure is flexible: let P1, …, P4 be four pairwise congruent faces with the common vertex V1 (shaded area in Figure 8). They form a four-sided pyramid with apex V1, and the four interior angles at V1 are respectively congruent to the four interior angles of a quadrangle. Our pyramid is flexible, provided the fundamental quadrangle is convex; otherwise, one interior angle at V1 would be greater than the sum of the other three interior angles, so that the only realization is flat. According to the labeling in Figure 8, left, for any pair of adjacent faces, there is a respective 180°-rotation ("half-turn") ρ1, ρ2, ρ3 or ρ4, which exchanges the two faces. Therefore, e.g., P2 = ρ1(P1). The axis of ρ1 is perpendicular to the common edge V1V2 and located in a plane that bisects the dihedral angle between P1 and P2.
After applying consecutively all four half-turns to the quadrangle P1, this is mapped via P2, P3 and P4 onto itself. This implies:

ρ4 • ρ3 • ρ2 • ρ1 = id. (3)

Now, we can extend this flexion of the pyramid step by step to the complete tessellation: the rotation ρ1 exchanges not only P1 with P2, but transforms the pyramid with apex V1 onto a congruent copy with apex V2 sharing two faces with its preimage. This is the area hatched in green in Figure 8, right. Analogously, ρ4 generates a pyramid with apex V4 sharing the faces P1 and P4 with the initial pyramid. Finally, there are two ways to generate a pyramid with apex V3. Either we transform ρ2 by ρ1 and apply ρ1 • ρ2 • ρ1, which exchanges V2 with V3, or we proceed with ρ4 • ρ3 • ρ4, which exchanges V4 with V3. Thus, we obtain two mappings, ρ1 • ρ2 and ρ4 • ρ3, which are equal by virtue of Equation (3). Hence, each flexion of the initial pyramid with apex V1 is compatible with a flexion of the 3 × 3 complex of quadrangles. This can be extended to the complete tessellation (in [6], it is shown that this tessellation is a particular case of the line-symmetric Type 5 of flexible Kokotsakis meshes).
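In the flat initial pose, the half-turns reduce to point reflections about the side midpoints, so Equation (3) can be verified directly in the plane; the sketch below checks this for an arbitrary quadrangle (the coordinates are illustrative).

```python
import numpy as np

def point_reflection(m):
    """Planar half-turn about m: x -> 2m - x."""
    return lambda x: 2.0 * m - x

V = np.array([[0.0, 0.0], [3.0, 0.5], [4.0, 3.0], [0.5, 2.5]])   # V1..V4, arbitrary
mids = [(V[i] + V[(i + 1) % 4]) / 2.0 for i in range(4)]          # side midpoints
rho = [point_reflection(m) for m in mids]                         # rho1..rho4

x = np.array([1.7, -0.3])                         # any test point
y = rho[3](rho[2](rho[1](rho[0](x))))             # rho4 . rho3 . rho2 . rho1
assert np.allclose(x, y)  # composite is the identity: the tessellation closes up
```

The check works because the alternating sum of the four midpoints vanishes for every quadrangle, so the composed translation part is zero.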
The product of two half-turns is a helical motion about the common perpendicular of the axes of the two half-turns. The angle of rotation and the length of translation are twice the angle and distance, respectively, between the two axes. When the pyramid with apex V1 is not flat, then the axes of the 180°-rotations ρ1, …, ρ4 are pairwise skew; the common perpendicular of any two of them is unique. Equation (3) implies that the axes of the four rotations have a common perpendicular a. Hence, the motions ρ1 • ρ2 and ρ3 • ρ2 = ρ4 • ρ1 are helical motions with a common axis a. All vertices of the flexion have the same distance to a; they lie on a cylinder Ψ of revolution (in [11], T. Tarnai addresses another relation between planar tessellations and cylindrical shapes; he shows how semiregular tessellations can be folded into cylindrical shells).
When the quadrangles P3 and P4 are glued together, we obtain a line-symmetric skewed hexagon, one half of our initial pyramid with apex V1; the half-turn ρ3 maps this hexagon onto itself (Figure 8). We can generate the complete flexion by applying iterated helical motions ρ1 • ρ2 and ρ3 • ρ2 on this hexagon.

Theorem 3. Any flexion of the Kokotsakis tessellation is obtained from the line-symmetric hexagon consisting of the two planar quadrangles by applying the discrete group of coaxial helical motions generated by ρ1 • ρ2 and ρ3 • ρ2. In the initial flat pose, these generating motions are the translations applied to a centrally symmetric hexagon, thus generating the regular tessellation in the plane.
The flat initial pose of the pyramid with apex V1 admits a bifurcation between two continuous motions of the pyramid. Hence, the complete quad mesh admits two self-motions (Figure 9). When starting with a trapezoid P1 (compare with Figure 14 in [12]), one of these self-motions is trivial: it results from mutually-independent bending along aligned edges of the initial pose.

Once we have clarified that for each non-planar flexion, all vertices lie on a cylinder Ψ of revolution with axis a, we can give another explanation for the flexibility (see Figure 10): the plane spanned by the quadrangle V1…V4 intersects the circumcylinder Ψ along an ellipse E. Given a convex quadrangle V1…V4, there is a pencil of conics passing through the four vertices. This pencil includes an infinite set of ellipses, which is bounded by parabolas or pairs of parallel lines. Hence, we can specify an ellipse E out of this pencil and choose one of the two right cylinders passing through E. Now, we define the half-turns ρ1 and ρ4; their axes pass through the midpoints of the sides V1V2 and V1V4, respectively, and intersect the cylinder's axis a perpendicularly (see Figure 10). The common normal of the lines V2V3 and a is the axis of ρ1 • ρ2 • ρ1; that between V3V4 and a is the axis of ρ4 • ρ3 • ρ4. Iterations of these half-turns transform our initial face P1 into all faces of the flexion. In [12], cases are studied where a particular flexion of an m × n-complex closes. This gives rise to an infinitesimally rigid tessellation on a right cylinder by plane quadrangles (see Figure 11).

Proof of Theorem 4. In the case of a cyclic quadrangle, the pencil of circumscribed conics includes one circle. There is a unique right cylinder passing through this circle, and the corresponding axes of half-turns ρ1, …, ρ4 are coplanar. The complete flexion is flat; the circumcircles of all faces coincide (compare with Figure 12). Similar to Miura-ori, this tessellation mesh can be "packed" into a very small size, such that it finds a place in the circumcircle of one quadrangle, at least "theoretically", i.e., without paying attention to self-intersections. However, a real-world model consisting of at least 3 × 3 quadrangles cannot approach such a pose, since the faces have to be placed one upon the other. Then, there is a collision between the "edges" spanning over distant levels and the vertices of quadrangles placed between them. In the case of a cyclic quadrangle, opposite angles are complementary. Therefore, this flexible tessellation is an example of a Voss surface, too.
Bricard's Octahedra of Type 3
In this final section, an octahedron is understood as a double-pyramid over a quadrangle, which need not be planar. Hence, the octahedron contains three pairs (A, A′), (B, B′) and (C, C′) of opposite vertices. Each of them can be seen as a pair of apices of a double pyramid; the corresponding bases, called "equators", are the quadrangles BCB′C′, ACA′C′ or ABA′B′, respectively.
According to R. Bricard's fundamental result [13], there are three types of flexible octahedra. Those without any symmetry belong to Type 3 and admit two flat poses. They can be defined as shown in Figure 13a.
Each of the three equators is a quadrangle with sides tangent to a circle with the same center M. Hence, there is a local symmetry at each vertex with respect to an axis, which passes through M (note Lemma 1).
The following theorem relates the given flat pose to a strophoid S, which is defined as a circular curve of degree three with a node N and orthogonal tangents at the node. On S, a symmetric relation can be defined [14]: two points P, P′ of S \ {N} are called associated if the lines NP and NP′ are symmetric with respect to the node tangents t1 and t2. Furthermore, the strophoid S is the locus of points X with the property that one angle bisector of the lines XP and XP′ passes through the node N. This holds simultaneously for all pairs (P, P′) of associated points of S.
As a consequence, at our given flat pose of the octahedron, there must be a strophoid S with the node N = M and associated points (A, A′), such that S passes also through B, B′, C and C′ (Figure 13b).

Proof. In order to obtain the second flat pose, let us keep the face ABC fixed, while the opposite triangle A′B′C′ is replaced by a congruent copy A″B″C″. Both flat poses must show respectively equal distances AB′ = AB″, AC′ = AC″, …, CB′ = CB″. We show that the reflection in the line AB maps C′ onto C″, the reflection in the line AC maps B′ onto B″ and the reflection in the line BC maps A′ onto A″ (see Figure 14). Firstly, these reflections preserve the distances to the points of the fixed triangle. Secondly, since both pairs of lines, (AB, AB′) and (AC, AC′), are symmetric with respect to AM, the angles ∠B′AB″ = 2∠B′AC and ∠C′AC″ = 2∠C′AB are equal. Hence, there is a rotation about A, which sends the side B′C′ onto B″C″ and preserves the length. The same holds for the other sides A′B′ and A′C′. Furthermore, at spatial poses of the flexible octahedron of Type 3, there is a local symmetry at each vertex V (Lemma 1): the six planes of symmetry share a common axis a (see, e.g., [15], Figure 3). All equators are placed on one-sheet hyperboloids of revolution with the axis a. As demonstrated in [16,17], even at each spatial pose, there is a particular spatial cubic passing through the six vertices. Of course, this cubic is the spatial analogue of the strophoid S displayed in Figure 13b.
All flexible octahedra have self-intersections. Real-world models work only when either they are built as a framework or one pair of opposite faces is removed (see, e.g., [7], Figure 5).
There exist also particular flexible octahedra of Type 2, i.e., with an axis of symmetry, which admit two flat poses. They are a sort of limiting case when the point M in Figure 13a tends to infinity (see, e.g., [15], Figure 4). G. Nawratil [18] proved that there is one non-trivial type of a flexible octahedron with some vertices at infinity: it consists of two prisms with a kite as the common basis; the generators of the prisms are orthogonal to the kite's axis of symmetry. These examples can even be free of self-intersections.
The analogue of Type 3 in hyperbolic space is also flexible. Of course, more sub-types can be distinguished, since the vertices can be either finite points, on the absolute conic, or outside (note [19]). Recently, A.A. Gaifullin generalized all of this in [20]: he proved that, in the algebraic sense, Type 3 is the simplest; there exist flexible higher-dimensional analogues, so-called cross-polytopes, in all dimensions, in Euclidean spaces as well as in spherical and hyperbolic spaces.
Figure 1. This polyhedron called "Vierhorn" is locally rigid, but snaps between its spatial pose (b) and the two flat realizations (a,c). Dashes in the unfolding (d) indicate valley folds.

Figure 2. The interior angles at the vertex V serve as side lengths of the spherical four-bar A0ABB0, which controls the dihedral angles ϕ1 and ϕ2 during the flexion.

Figure 5. Snapshots of the folding procedure of Miura-ori.

Figure 8. The flexion of a four-sided pyramid can be extended to a flexion of the complete tessellation.

Figure 10. There is an infinite set of ellipses passing through the vertices of a convex quadrangle.

Figure 11. This is an infinitesimally rigid pose of a flexible tessellation mesh [12].

Theorem 4. When the basic quadrangle of the flexible Kokotsakis' tessellation is cyclic, i.e., has a circumcircle, it admits a second flat pose: all quadrangles are packed into a single circumcircle. However, this fails for real-world models, because of self-intersections.

Figure 12. The tessellation mesh of a cyclic quadrangle admits a second flat pose.

Figure 13. In the flat pose, the pairs of opposite points of Bricard's Type-3 flexible octahedron are associated with a strophoid S.

Figure 14. The two flat poses of a Type-3 flexible octahedron.
This paper focuses on three well-known examples of continuously-flexible polyhedral surfaces and analyzes the geometry behind them. Emphasis is placed on versions with two flat poses. The examples, which are studied in detail, are Miura-ori, A. Kokotsakis' flexible tessellation with congruent convex quadrangles and R. Bricard's Type-3 octahedra. We do not claim that the presented three examples are the only ones with two flat poses.
"year": 2015,
"sha1": "787ad537b3ad251a04afa8bdf33be65fde4075eb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/7/2/774/pdf?version=1432730739",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "787ad537b3ad251a04afa8bdf33be65fde4075eb",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
Identification of CiaR Regulated Genes That Promote Group B Streptococcal Virulence and Interaction with Brain Endothelial Cells
Group B Streptococcus (GBS) is a major causative agent of neonatal meningitis due to its ability to efficiently cross the blood-brain barrier (BBB) and enter the central nervous system (CNS). It has been demonstrated that GBS can invade human brain microvascular endothelial cells (hBMEC), a primary component of the BBB; however, the mechanism of intracellular survival and trafficking is unclear. We previously identified a two component regulatory system, CiaR/H, which promotes GBS intracellular survival in hBMEC. Here we show that a GBS strain deficient in the response regulator, CiaR, localized more frequently with Rab5, Rab7 and LAMP1 positive vesicles. Further, lysosomes isolated from hBMEC contained fewer viable bacteria following initial infection with the ΔciaR mutant compared to the WT strain. To characterize the contribution of CiaR-regulated genes, we constructed isogenic mutant strains lacking the two most down-regulated genes in the CiaR-deficient mutant, SAN_2180 and SAN_0039. These genes contributed to bacterial uptake and intracellular survival. Furthermore, competition experiments in mice showed that WT GBS had a significant survival advantage over the Δ2180 and Δ0039 mutants in the bloodstream and brain.
Introduction
Bacterial pathogens that have the capability of penetrating the central nervous system (CNS), thereby eliciting life-threatening diseases, are a major human health concern. A severe outcome of bacterial infiltration of the CNS is the development of meningitis. One such pathogen, Group B Streptococcus (GBS), is a Gram-positive bacterium that is the leading cause of neonatal meningitis. Although intrapartum chemoprophylaxis is available to pregnant mothers during delivery, GBS infections among both pre-term and term infants still occur [1]. Infants who survive meningitis suffer long-term neurological complications including developmental delays, hydrocephalus, visual impairment, deafness, cerebral palsy, and seizures [2]. GBS-induced meningitis occurs upon blood-brain barrier (BBB) penetration after a prolonged period of bacteremia [3]. Persistent blood-borne bacteria evade a variety of host defenses and have the propensity to cross the BBB, although the exact mechanism(s) of GBS-BBB transit are still being discovered. The majority of the BBB is composed of a specialized single cell layer known as human brain microvascular endothelial cells (hBMEC), which regulates passage of molecules, nutrients, and infectious agents into the brain [4]. Still, a few bacterial pathogens like GBS are able to disrupt this barrier to gain access to the CNS, resulting in inflammation, BBB permeability, and disease progression.
Much research has been devoted toward understanding the key GBS virulence factors that allow for BBB transit and breakdown. It is believed that direct invasion and subsequent transcytosis of brain endothelial cells by GBS is the critical first step for the development of meningitis [5]. Our lab has published several studies implicating multiple bacterial factors that participate in this initial invasion process into brain endothelium, including GBS surface associated proteins such as pili, lipoteichoic acid (LTA), serine rich repeat (Srr) proteins and a fibronectin binding protein, SfbA [6][7][8][9][10]. Following bacterial uptake, electron microscopy (EM) studies have demonstrated the presence of GBS in membrane-bound vesicles within hBMEC [11,12], suggesting the involvement of endocytic pathways; however, little is known about how GBS persists and traffics through the BBB. We have recently demonstrated that autophagy is induced in BBB endothelium during GBS infection, but that this pathway was not effective in completely eliminating intracellular GBS [12]. Thus, an understanding of how GBS resists intracellular host defenses and transits through brain endothelial cells is warranted.
To this end, we have investigated GBS trafficking within brain endothelial cells and the bacterial factors responsible for GBS survival. Endocytic trafficking is initiated upon bacterial invasion of host cells and subsequently, Rab GTPases aide in delivering these invaders to the lysosome for degradation [13]. Numerous bacterial pathogens, such as Legionella pneumophila, Mycobacterium tuberculosis, Pseudomonas aeruginosa, and Salmonella enterica are known to inhibit or disrupt endocytic trafficking to establish an intracellular niche or simply promote survival or growth [14]. To accomplish this, bacteria likely modulate gene expression to adapt to different host cellular environments, often through two component regulatory systems (TCRS). TCRS function through phosphotransfer signals from a membrane-bound sensor histidine kinase, which senses environmental changes, to subsequent activation of a cytoplasmic response regulator, with downstream transcription modulation [15]. GBS genome sequence analysis suggests multiple putative TCRS, but most of these systems are currently not well described [16]. One recent study has found that GBS encodes as many as 21 TCRS [17]. Established GBS TCRS include DltR/S, which maintains constant levels of ᴅ-alanylation in GBS LTA [18]; RgfA/C, which represses the expression of C5a peptidase [19]; CovR/S global regulatory system, which controls the expression of multiple virulence factors [20]; LiaFSR, which regulates cell wall stress and pilus expression [21]; FspS/R, which regulates fructose-6-phosphate metabolism [17]; and CiaR/H, which promotes survival in the host intracellular environment [8].
We have demonstrated previously that the CiaR response regulator promoted GBS intracellular survival in phagocytic cells and brain endothelial cells [8]. Further, a GBS mutant deficient in ciaR exhibited increased susceptibility to killing by antimicrobial peptides, lysozyme, and reactive oxygen species [8]. GBS CiaR also contributed to overall virulence potential in a murine bacterial competition model of infection [8]. Thus, we hypothesize that CiaR regulation may impact GBS intracellular trafficking in BBB endothelium. Our previous studies compared the transcriptional profiles of WT GBS and the isogenic ΔciaR mutant grown to log phase under identical conditions. Only one gene with a predicted function of purine and pyrimidine biosynthesis was upregulated more than twofold, while several genes were more dramatically down-regulated in the ΔciaR mutant [8]. The most highly down-regulated gene, SAN_2180, encodes a conserved hypothetical protein, while the second most down-regulated gene, SAN_0039, belongs to the M23/M37 family of metallopeptidases, which catalyze the hydrolysis of nonterminal peptide linkages in oligopeptides or polypeptides [22]. Here we investigate the role of these CiaR regulated genes to GBS interaction with brain endothelial cells and to virulence potential.
Microscopy
Coverslips with GBS overnight culture were air-dried, heat fixed, and then subjected to a standard Gram stain protocol. Images were taken using a Zeiss upright microscope with an attached Axiocam Icc3 camera. For electron microscopy, 1 mL (10⁷ CFU) of bacterial cells suspended in PBS was fixed in a cocktail of 2% glutaraldehyde and 1% osmium tetroxide in PBS for 10 minutes. The solution was then passed through a 0.4 μm polycarbonate filter to collect bacterial cells and rinsed with 4 mL of water. In order to dry the samples, the filters were taken through a series of increasing concentrations of ethanol (50, 75, 85, 95, 100%) before being placed in a Tousimis SAMDRI-790 critical point drying machine. The dried filters were mounted onto SEM sample stubs with a piece of double-sided carbon tape before applying a 6 nm layer of platinum with a Quorum Q150ts high-resolution coater. Samples were viewed using an FEI FEG450 scanning electron microscope.
In vitro infection assays
To determine the total number of cell surface-adherent or intracellular bacteria, hBMEC monolayers were grown to confluence in growth medium containing 10% FBS, 10% Nuserum, and 1% non-essential amino acids in 24-well tissue culture-treated plates. Bacteria were grown to mid-log phase and used to infect cell monolayers as described previously [23]. Briefly, hBMEC monolayers were incubated with GBS at 37°C with 5% CO2 for 30 min. To assess adherent bacteria, cells were washed five times with phosphate-buffered saline (PBS) to remove non-adherent bacteria, then trypsinized with 0.1 ml of trypsin-EDTA solution and lysed with the addition of 0.4 ml of 0.025% Triton X-100 by vigorous pipetting. To assess intracellular bacteria, GBS were incubated with hBMEC for 2 h, cells were washed three times with PBS, and 1 ml of media containing 100 μg/ml of gentamicin and 5 μg/ml of penicillin was added to each well to kill extracellular bacteria. Lysates were then serially diluted and plated on THB agar to enumerate bacterial colony-forming units (CFU). Bacterial adherence and invasion were calculated as (recovered CFU/original inoculum CFU) × 100%. GBS intracellular survival experiments were performed as described above, except that intracellular bacteria were enumerated at the indicated time points.
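The percentage calculation above is simple enough to script; the following is a minimal sketch in which the colony counts, dilution factor, and volumes are invented examples rather than values from this study.

```python
def percent_recovered(colonies, dilution_factor, plated_volume_ml,
                      lysate_volume_ml, inoculum_cfu):
    """Adherence/invasion as (recovered CFU / original inoculum CFU) x 100%."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml  # titer of the lysate
    recovered_cfu = cfu_per_ml * lysate_volume_ml               # total CFU per well
    return 100.0 * recovered_cfu / inoculum_cfu

# Illustrative numbers only: 85 colonies at a 1:100 dilution, 0.1 mL plated
# from a 0.5 mL lysate, with an inoculum of 1e6 CFU.
print(f"{percent_recovered(85, 100, 0.1, 0.5, 1e6):.2f}% of the inoculum recovered")
```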
Lysosomal isolation
hBMEC were grown in 75 cm² flasks at 37°C with 5% CO2 until confluence was achieved, and subsequently infected with WT and mutant strains of COH1 GBS at an MOI = 10 for 2 hours. After infection, cells were incubated with media containing penicillin (5 μg/mL) and gentamicin (100 μg/mL) for either 1 or 12 hours to eliminate extracellular bacteria. Cells were then washed with DPBS and subjected to lysosomal isolation using the Lysosome Enrichment Kit for Tissue and Cultured Cells according to the manufacturer's instructions (Thermo-Fisher). Briefly, ~200 mg of cells were harvested with trypsin and centrifuged for 2 min at 850 × g. Lysosome enrichment reagent A containing a protease inhibitor cocktail (CalBioChem) was added to pelleted cells, followed by a 2 min incubation on ice. After incubation, cells were then sonicated 15 times to lyse the cells, and Lysosome enrichment reagent B containing a protease inhibitor cocktail was then added to the cells. Cells were then centrifuged for 10 min at 500 × g at 4°C. The supernatant was then collected and the final concentration was adjusted to 15% with OptiPrep Cell Separation Media. The samples were then loaded on discontinuous OptiPrep gradients varying from 30%, 27%, 23%, 20% to 17% in a 13.2 mL ultracentrifugation tube (Beckman-Coulter) and centrifuged in a SW 41 Ti rotor at 145,000 × g for 2 hours at 4°C. After centrifugation, the lysosomal fraction was isolated from the top of the gradient and washed using 2 volumes of DPBS in a microcentrifuge at 17,000 × g for 30 min at 4°C to remove OptiPrep media. Lysosomal pellets were then washed with Gradient Dilution Buffer at 17,000 × g for 30 min at 4°C. Pellets were then re-suspended in 0.1% Triton X-100 and plated on Todd Hewitt agar to enumerate bacterial CFU. For lysotracker staining, pellets were re-suspended in PBS, stained with Lysotracker Red (Life Technologies) for 15 minutes, and imaged using a Zeiss Axiovert 200 inverted fluorescence microscope (Carl Zeiss).
In vivo competition assay
Animal experiments were approved by the Institutional Animal Care and Use Committee at San Diego State University (protocol APF 13-07-011D) and performed using accepted veterinary standards. Animals are housed 4 mice/cage per NIH standard space requirements in Micro-Isolator cages with contact bedding. The light cycle is 12/12 (light from 6 am-6 pm) and cages are changed 3 times/week. Mice are fed a standard rodent diet (Purina) using a free-feed system with fresh food added weekly. During the experiment, animals were monitored visually at least twice a day for signs of disease such as ruffled fur, lethargy or agitation, and moribund appearance. Those animals showing the first signs of disease were monitored a minimum of four times a day for worsening signs. Animals suffering from any of the following symptoms: severe lethargy or agitation, moribund appearance, or failure to right themselves after 5 seconds, were defined as moribund and humanely sacrificed prior to the experimental endpoint. The method of euthanasia used was an overdose of CO2 followed by cervical dislocation. In our studies, no animals died prior to the experimental endpoint. 8-week-old male CD1 mice (Charles River Laboratories) were injected intravenously with 2×10⁸ bacteria at a 1:1 ratio of WT and either one of the isogenic mutant strains. After 72 hours, mice were euthanized and blood and brain were collected to enumerate bacterial CFU. PCR was performed to confirm the presence or absence of targeted genes in recovered CFU. Primers 2180_F: 5'-AGAGCACGTTATCCTTTCGCT-3' and 2180_R: 5'-TCCGCCAAAACGTGCAACAT-3'; and primers 0039_F: 5'-GAGCCAACTTTTCTTGGATGAC-3' and 0039_R: 5'-ACTAGATTGATTCTGTACAGGA-3' were used for screening. Experiments were repeated twice (5 mice/group).
Characterization of CiaR regulated genes
Our data indicate that the response regulator CiaR may play a role in survival and trafficking within brain endothelial cells. Thus, we hypothesize that CiaR-regulated genes may impact GBS intracellular survival and the efficient trafficking of GBS through brain endothelium. Previous microarray analysis of the WT and ΔciaR mutant identified the two most highly regulated genes, SAN_2180 and SAN_0039 [8] (Fig 1A). To characterize the impact of SAN_2180 and SAN_0039 on GBS interaction with human brain microvascular endothelial cells (hBMEC) and virulence, we generated isogenic knockout strains using in-frame allelic substitution of either gene with a chloramphenicol acetyltransferase (cat) resistance cassette, using a method described previously [7] and as described in Materials and Methods. Both constructed mutants exhibited growth rates in THB similar to the WT parental strain (data not shown) and similar morphology as observed by Gram stain and electron microscopy (Fig 1B and 1C), although the ΔciaR mutant appeared to grow in shorter chains.
We have previously shown that the ΔciaR mutant exhibited decreased survival in hBMEC [8]; thus, we characterized the interactions of the Δ2180 and Δ0039 mutants with hBMEC, specifically their adherent and invasive capabilities as well as their ability to survive and persist intracellularly. The Δ2180 mutant exhibited a ~2-fold and significant decrease in hBMEC adherence and invasion compared with the WT parent strain (P < 0.005 and P < 0.00005, respectively) (Fig 2A and 2B). Interestingly, the Δ0039 mutant displayed increased adherence and invasion into hBMEC (Fig 2A and 2B); however, when calculating the percentage of the hBMEC-associated GBS that had invaded the intracellular compartment, both mutants exhibited decreased invasive capability compared to the WT strain (Fig 2C). These data indicate that both SAN_2180 and SAN_0039 contribute to GBS uptake into hBMEC. To examine whether these genes impact intracellular survival, we infected hBMEC with WT and mutant strains for 2 hours, incubated with extracellular antibiotics, and at 2, 6, and 12 hours post antibiotic treatment, the intracellular pool was quantified as described in Materials and Methods. The percent of invasive bacteria recovered over time is shown relative to the first time point for each strain (Fig 2D). We observed a gradual decrease in intracellular WT bacteria over time, as we have demonstrated for GBS in hBMEC previously [7]. However, the level of intracellular organisms over time for each of the mutant strains was significantly lower compared to the WT strain (Fig 2D).
Characterization of GBS intracellular trafficking in brain endothelial cells
Efficient trafficking of bacteria through endothelial barriers is a hallmark of the development of a multitude of vascular diseases. Endocytic trafficking consists of labeling by Rab GTPases, small guanine nucleotide binding proteins responsible for vesicular trafficking of cargo, and selectively transporting intracellular pathogens to the lysosome [13,25]. Rab5 is a monomeric GTPase known to be involved in early endocytic trafficking, while Rab7 acts later in the endocytic pathway to regulate lysosomal fusion [26]. Rab5 and Rab7 specifically label early and late endosomes, respectively, and subsequently traffic cargo or pathogens to the lysosome, which is labeled by lysosomal-associated membrane protein 1 (LAMP1). We investigated the association of WT GBS and the ΔciaR, Δ2180 and Δ0039 mutants with Rab5, Rab7, and LAMP1 labeled compartments. Infection of hBMEC with GFP-GBS strains was carried out for 2 h as described in Materials and Methods; then, at various time points post antibiotic treatment, cells were processed and stained for endosomal and lysosomal markers. Representative images following WT GBS infection 1 h post antibiotic treatment show GBS localizing with each marker during the infection time (Fig 3A). To quantify co-localization over time, for each time point we counted, in triplicate biological samples, at least 100 cells with intracellular bacteria and evaluated Rab5, Rab7, and LAMP1 localization with GBS WT and mutant strains (Fig 3B-3D). At early time points, there were either similar amounts of WT and ΔciaR mutant GBS that localized with endosomal and lysosomal-labeled vesicles, or in some cases there were higher amounts of WT GBS. However, at later time points, significantly more of the ΔciaR mutant was associated with both Rab and LAMP1 markers. These findings demonstrate that GBS associates with vesicles involved in the endocytic pathway, with 25-30% of intracellular bacteria localizing with the lysosome. Generally, the Δ2180 mutant strain exhibited similar localization with Rab5 and Rab7 positive vesicles as the ΔciaR mutant, except at the early time point, where we observed that the Δ2180 mutant co-localized more with Rab5 (Fig 3B and 3C). It is also notable that the WT strain co-localized more than the other strains with Rab7 and LAMP1 positive vesicles at the early time point. However, both Δ2180 and Δ0039 mutants co-localized less with LAMP1-positive vesicles when compared to WT and the ΔciaR mutant strain, particularly at later time points (Fig 3D). Thus, overall, CiaR regulation and specific regulated genes may influence GBS trafficking through endocytic compartments in different ways.
Recovery of GBS from lysosomes isolated from brain endothelial cells
Our results suggest that GBS may regulate genes to prevent or limit endocytic trafficking to the lysosome. Although we observed that there was actually less co-localization of both Δ2180 and Δ0039 mutants with lysosomal markers at later time points, it is possible that these mutants are still readily trafficked to acidic compartments but exhibit increased sensitivity to lysosomal killing. Thus, we investigated whether we could recover viable intracellular GBS from lysosomes isolated from brain endothelial cells. We used a lysosomal enrichment protocol that employs differential centrifugation to enrich for lysosomes based on size and density. Following hBMEC infection with WT and mutant GBS strains for 2 hours, cells were treated with antibiotics, and lysosomes were subsequently isolated at early (1 hour) and late (12 hours) time points. Western blot analysis was performed to confirm that recovered lysosomes were positive for LAMP1 (data not shown). We first used Lysotracker, which stains acidic vesicles, and found that WT GBS could be visualized within acidified lysosomes (Fig 4A). Lysosomes were also lysed in 0.1% Triton X-100 and lysates plated on THB agar to enumerate viable bacteria. Markedly more WT GBS were recovered from the lysosome fraction than any of the mutant strains at the early time point post infection (Fig 4B). Furthermore, fewer viable GBS were recovered from lysosomes at the later time point. These data further indicate that GBS does indeed traffic to the lysosome and that CiaR, SAN_2180, and SAN_0039 contribute to initial GBS trafficking and survival.
CiaR regulated genes contribute to GBS virulence
We have demonstrated previously that CiaR promotes bacterial fitness and overall virulence in a mouse model of GBS infection [8]. To similarly examine whether the CiaR regulated genes contribute to virulence in vivo, we employed the same bacterial competition model as described previously [8,27]. Mice were challenged intravenously with equal amounts (2 × 10⁸ CFU) of WT COH1 and either the Δ2180 or Δ0039 isogenic mutant. At the experimental end point (72 h), mice were euthanized and blood and brains were collected for the enumeration of surviving bacteria. PCR-based screening was used to distinguish between the WT and mutant strains. Data are expressed as the percentage of WT or mutant GBS recovered compared to total recovered CFU. Consistently more WT GBS than the Δ2180 or Δ0039 mutant strains were recovered from the blood and brain (Fig 5A and 5B). This is consistent with previous results with the ΔciaR mutant [8], and suggests that both CiaR regulated genes contribute to bacterial fitness and virulence.
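A sketch of the readout: once colonies are typed as WT or mutant by the PCR screen, the percentage of each genotype among total recovered CFU can be computed as below (the counts are invented for illustration).

```python
def genotype_percentages(wt_cfu, mutant_cfu):
    """Percent of WT and mutant among total recovered CFU for one organ."""
    total = wt_cfu + mutant_cfu
    return 100.0 * wt_cfu / total, 100.0 * mutant_cfu / total

# Illustrative counts from one mouse brain after PCR screening of colonies:
wt_pct, mut_pct = genotype_percentages(wt_cfu=74, mutant_cfu=26)
print(f"WT: {wt_pct:.1f}%  mutant: {mut_pct:.1f}%")
```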
Discussion
Penetration of the BBB likely requires GBS to invade microvascular endothelial cells and transcytose through the cells, exiting basolaterally to breach the CNS. It is known that GBS can persist within brain endothelial cells for up to 24 hours post infection; however, there is no net increase in bacterial replication, and the intracellular pool actually decreases over time [7,11]. This is likely due to the ability of the host cell to limit GBS intracellular growth using various forms of antibacterial defense, including transit to the lysosomal compartment for subsequent degradation. We observed that intracellular WT GBS readily acquired markers of endosomal maturation, as GBS associated with early and late endosomes, and approximately 25-30% of intracellular organisms localized with acidified lysosomal vesicles (Fig 3). Our results presented here suggest that GBS may use the response regulator CiaR to prevent endocytic trafficking to the lysosome. A GBS mutant deficient in CiaR exhibited increased localization with vesicles of the endocytic pathway, including Rab5-, Rab7- and LAMP1-positive vesicles. Rab GTPase modifications have been identified during bacterial infection, and Rab5 modulation by numerous pathogens has proven to be an efficient strategy to promote intracellular replication or persistence [28]. Additionally, bacteria such as Helicobacter pylori and Mycobacterium bovis may modulate Rab7-mediated endosomal maturation in order to establish a protective intracellular niche during infection [29,30]. Interestingly, following infection with the ΔciaR mutant, fewer viable bacteria were recovered from lysosomes isolated from hBMEC. This is likely due to the increased sensitivity of the ΔciaR mutant to the hostile environment of the phagolysosome, namely antimicrobial peptides and reactive oxygen species, which we have demonstrated previously [8]. The reduced recovery of the Δ2180 and Δ0039 mutant strains from lysosomes likely also reflects an increased sensitivity to lysosomal killing, as we observed that these mutants, like ΔciaR, were more sensitive to antimicrobial peptides, lysozyme, and H2O2 (data not shown).

Fig 4. A. Isolated lysosomes infected with WT COH1 GFP-expressing GBS were subjected to staining with Lysotracker Red (0.5 μM) and visualized using fluorescence microscopy. Scale bar, 5 μm. B. hBMEC were infected with WT or mutant GBS strains for 2 hours (MOI = 10) and subjected to lysosomal isolation by differential centrifugation at 1 or 12 hours post antibiotic treatment. Lysosomal pellets were plated to enumerate the amount of recovered viable CFU. Statistical analysis performed was a one-way ANOVA with Tukey's multiple comparisons test, and the data represent mean ± S.D. *p < 0.05, **p < 0.005.
While GBS is not thought of as a classic intracellular pathogen, GBS survival in phagocytic cells has been reported [31,32]. Most research has focused on understanding the virulence factors responsible for GBS persistence in human macrophages and neutrophils. The pore-forming β-hemolysin/cytolysin (β-h/c) encoded by cylE is a major virulence factor contributing to GBS disease progression [3]. Interestingly, cylE deletion results in the loss of both β-h/c activity and the carotenoid pigment. It has been shown that cylE contributes to enhanced GBS survival within phagocytes, which was attributed to the ability of the carotenoid to shield GBS from oxidative damage [33]. However, other reports suggest that the β-h/c did not impact intracellular survival in macrophages (Cumley et al., 2012), or even that the absence of the β-h/c enabled increased GBS survival in professional phagocytes [34]. It is possible that these results reflect differences in GBS strains, β-h/c production, and/or host cells and cell lines, and this requires further investigation. Other GBS cell-associated factors reported to impact survival in phagocytic cells include pili [27] and the capsular polysaccharide [35,36]. An additional TCRS, the CovR/S global regulator, which regulates many genes including cylE, has also been shown to be required for intracellular survival in macrophages [37]. That study suggested that CovR/S mediates a transcriptional response, stimulated by the acidic environment in the phagolysosome, that promotes survival [37]. Less is known about GBS survival in epithelial or endothelial cells. Interestingly, we have previously infected hBMEC with a GBS ΔcovR mutant to assess bacterial uptake and survival, and were not able to recover viable ΔcovR bacteria from the intracellular compartment [20]. However, at this point we cannot conclude whether CovR regulation is required for GBS invasion into brain endothelial cells, or whether it regulates intracellular survival.
Two-component regulatory systems allow bacteria to adapt to changing environmental conditions. CiaR/H is not fully characterized in GBS, but it has been linked to stress tolerance and host defense resistance, similar to the role of CiaR/H in Streptococcus mutans [38] and Streptococcus pneumoniae [39]. Interestingly, the S. pneumoniae CiaR homologue has also been described to be involved in β-lactam resistance and lytic capabilities [40,41]. CiaR-deficient GBS displayed decreased intracellular survival in neutrophils, macrophages, and brain microvascular endothelial cells and was more susceptible to killing by antimicrobial peptides and reactive oxygen species, suggesting that CiaR/H is a vital element for environmental stress tolerance [8]. Previously, our group identified a subset of genes that are down-regulated in a CiaR-deficient mutant. One gene, SAN_0039, encodes a putative metallopeptidase exhibiting a high degree of homology (70% similarity, 56% identity over 91% protein coverage) to a protein called Zoocin A (zooA) [8]. Zoocin A is produced by S. zooepidemicus (Group C Streptococcus) and has a bacteriolytic effect on several other streptococcal species [42]. Zoocin A has two functional domains, an N-terminal catalytic domain and a C-terminal substrate-binding or target recognition domain [43,44]. Zoocin A has been determined to act as a D-alanyl-L-alanine endopeptidase, which hydrolyses the cross bridge of the peptidoglycan of certain Streptococcus species [45]. The utilization of peptidoglycan hydrolases for both peptidoglycan rearrangement and pathogenicity in host cells has been described in several bacteria. S. pneumoniae, Listeria monocytogenes, and Staphylococcus aureus employ differential acetylation strategies to obtain resistance to lysozyme [46][47][48]. Another peptidoglycan hydrolase, known as IspC, has been identified in L. monocytogenes as being essential for virulence in vivo and for crossing the blood-cerebrospinal fluid barrier [49]. The attenuated virulence of an IspC-deficient mutant may be partly due to the reduced surface expression or display of other known or putative virulence factors [49]. We observed that while the GBS Δ0039 mutant exhibited increased adherence to hBMEC, invasion into cells was reduced, suggesting a defect in a surface factor(s) that modulates bacterial uptake. Future studies using proteomic analysis of the GBS WT and Δ0039 mutant strains will be of interest to determine if the GBS peptidoglycan hydrolase contributes indirectly to host cell interaction and virulence by modulating surface targeting mechanisms that affect other GBS factors. We should note that overexpression of the 0039 gene was toxic to GBS, making complementation experiments impossible. This is consistent with what has been observed for Zoocin A (R. Simmonds, personal communication).
Another GBS gene, SAN_2180, was the most highly down-regulated gene in the ΔciaR mutant [8]. Characterization of a Δ2180 mutant demonstrated that this gene contributes to bacterial uptake, a phenotype that was complemented by reintroducing the WT gene back into the Δ2180 mutant (data not shown). The Δ2180 mutant also exhibited decreased survival within brain endothelial cells, as well as decreased virulence potential in vivo. Like the ΔciaR mutant, the Δ2180 mutant more readily localized with endosomal- and lysosomal-marked compartments, but was not readily isolated from lysosomes even at early time points, likely due to an increased sensitivity to antimicrobial factors. Thus, this factor may be the primary CiaR-regulated gene responsible for the observed phenotype of the ΔciaR mutant, although future studies with a double mutant strain would help clarify this. Protein sequence analysis using BLAST predicted that the SAN_2180 protein belongs to a family of proteins of unknown function, DUF1003, but it shares homology (60% similarity, 42% identity over 93% protein coverage) with a protein in Lactococcus lactis involved in acid tolerance and multistress tolerance [8,50]. Additionally, the SAN_2180 protein sequence is homologous to cyclic nucleotide-binding proteins present in other Streptococcus species, such as Streptococcus urinalis (84% identity over 100% protein coverage), Streptococcus parasanguinis (69% identity over 98% protein coverage) and Streptococcus gallolyticus (60% identity over 98% protein coverage). Cyclic nucleotide-binding proteins are important for binding intracellular messengers such as cyclic AMP [51]. Modulation of host cAMP levels has been shown to be a novel bacterial mechanism to engage inflammatory responses and drive disease progression [52]. However, further characterization of the SAN_2180-encoded protein is needed to elucidate the specific mechanisms responsible for its role in GBS invasion and intracellular survival.
In summary, our data suggest that GBS may modulate gene expression through the TCRS CiaR/H to promote intracellular survival. This may impact trafficking to the lysosome, as at later time points we observed only 25% of intracellular WT bacteria localizing with LAMP1-positive vesicles, compared with 45% in the absence of CiaR regulation. Interestingly, we have not observed GBS free in the cytoplasm of hBMEC, even at later time points [12], suggesting that surviving GBS likely traffics through endosomes in brain endothelial cells to promote transcytosis across the BBB. However, further experimentation is required to fully characterize the fate of all GBS-containing endosomes and how a percentage of GBS bacteria may avoid the lysosomal compartment and transit through the brain endothelium in order to breach the CNS. Our data suggest that the specific CiaR-regulated genes, SAN_2180 and SAN_0039, may not independently explain the phenotype of the CiaR-deficient mutant, but they still provide interesting new bacterial targets that may further inform the mechanisms of BBB penetration as well as the development of preventative therapies.
"year": 2016,
"sha1": "5bd046c4a0270f404e4b81a5b8d34d303809b19e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0153891&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5bd046c4a0270f404e4b81a5b8d34d303809b19e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
INCREASED FREQUENCY OF HLA B 17 ANTIGEN IN GIRLS WITH TURNER SYNDROME AND THEIR FATHERS
HLA-A, -B and -DR antigen distribution was studied in 49 girls with Turner Syndrome (TS), in 43 of their parents, as well as in 433 controls. No increased frequency of DR3, DR4 was found in our group. However, an increased frequency of HLA B 17 antigen was disclosed (18.3% in TS versus 6.4% in the controls, p<0.001 and Pc<0.01). Furthermore, the HLA B 17 antigen was of paternal origin in 77.7% of the cases. The interpretation of the present findings is quite difficult. Most likely, the findings are related to the chromosomal abnormality rather than to autoimmunity. It is quite possible that genes within the region of class I genes create unfavorable circumstances leading to the loss of the sex chromosome or, alternatively, genes in this region confer protection and prevent miscarriage of the affected fetus.
INTRODUCTION
Glucose intolerance and autoimmune phenomena are more frequently encountered in girls with Turner Syndrome (TS) than in the general population (Cassidy et al., 1978; Grunciro de Papendik et al., 1987; Larizza et al., 1989). A study was designed to investigate the diabetic tendency in girls with Turner Syndrome by determining, among other parameters, the HLA-A, -B and -DR antigens. An interesting, unexpected finding of increased frequency of HLA B 17 antigen emerged, which is reported herein.*
MATERIALS AND METHODS
HLA-A, -B and -DR antigen distribution was studied in 49 girls with Turner Syndrome and in 43 of their parents. 433 healthy, unrelated individuals served as controls. All subjects tested were of Greek origin. Almost 90% of the cases of TS were examined for shortness of stature. A few girls presented with lymphedema in the neonatal period and one presented with hypertension. Their chromosome constitution is depicted in Table 1 (A). The typing of HLA-A, -B and -DR antigens was performed by the classical two-step microlymphocytotoxicity test. The isolation of the cells was carried out with monoclonal antibodies specific for the CD8 antigen (for HLA-A, -B typing) or a DR β-chain monomorphic epitope (for HLA-DR typing), using immunomagnetic isolation techniques. The control subjects were blood donors and hospital personnel. The results were analysed using the chi-square test.
RESULTS
The HLA-A, -B antigens that were tested, their % frequency in TS girls and controls, as well as the difference in antigen distribution (indicated by p value), are shown in Table 2. The frequency of the HLA-DR3 and -DR4 antigens did not differ in girls with TS in comparison to the controls.
The HLA-B 17 antigen was found in 9 girls with TS (18.3%) and in 28 of the controls (6.4%). This difference was statistically significant (p<0.001, Pc<0.01). The frequency of the B 17 antigen in the fathers tested was 16.3% (p<0.01), while in the mothers it was 4.5%.
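The comparison above can be illustrated from the counts given (9/49 TS girls versus 28/433 controls). The sketch below is purely illustrative: the panel size used for the correction is an assumed placeholder, since the exact number of antigens tested is not restated in this excerpt, and the computed p and Pc may differ from the published values depending on the counts and continuity correction used.

```python
# Chi-square comparison of HLA-B17 carriage in TS girls vs controls, with a
# Bonferroni-style correction for the number of antigens tested.
from scipy.stats import chi2_contingency

table = [[9, 49 - 9],      # TS girls: B17-positive, B17-negative
         [28, 433 - 28]]   # controls
chi2, p, dof, _ = chi2_contingency(table, correction=False)
n_antigens = 40            # hypothetical panel size
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, Pc = {min(1.0, p * n_antigens):.3g}")
```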
Of the 9 girls who possessed the HLA B 17 antigen, 7 inherited the antigen from their father and one from the mother, while in one girl the B 17 antigen was inherited from both parents. Thus, the HLA B 17 antigen was of paternal origin in 77.7% of the girls. The distribution of the HLA-B 17 antigen according to chromosome constitution is depicted in Table 1 (B) and shows that all girls with this antigen had either a 45X or a 45X/46XX mosaic karyotype.
DISCUSSION
No increased frequency of HLA-DR3 and -DR4 antigens was noticed in our group, another indication that the pathogenesis of glucose intolerance in girls with TS is different from that of type I diabetes mellitus. It is quite interesting, however, that the HLA-B 17 antigen was more frequently observed in girls with TS (18.3% versus 6.4% in the controls). The number of subjects studied does not allow an evaluation of the effect of the type of chromosomal aberration (Table 1 B). It may be pertinent, however, that 6 of the 10 cases were 45X and the remaining were 45X/46XX mosaics. Most interesting was the observation that in 77.7% of the cases the HLA B 17 antigen was of paternal origin.

* P<0.01 after correction for the number of antigens tested.

Cassidy et al. (1978), studying 23 girls with TS, found no increased frequency of any of the HLA antigens tested. Their population group, however, was rather small. It should also be mentioned that in a study of 46 girls with TS from Northern Italy (Larizza et al., 1989) aiming at the evaluation of their autoimmune tendency, a high frequency of the HLA-B38 and -A31 antigens was observed. However, the difference from the controls in this sample was not significant after correction for the number of antigens tested. Thus far, the HLA-B 17 antigen has not been associated with any autoimmune disorder, and our findings should not be related to the autoimmune phenomena known to occur more frequently in TS. Hence, another relation of the finding, if any, to the syndrome should be sought. On the other hand, it has been shown that the lost sex chromosome in TS is paternal in about 80% of the cases (Hassold et al., 1991; Loughlin et al., 1991; Cockwell et al., 1991), and it may not be irrelevant that the HLA-B 17 antigen, in our study of girls with TS, was paternal in 77.7% of the cases. Furthermore, Southern blot analysis of human genomic DNA with HLA class I cDNA probes (Trowsdale et al., 1991) supports the idea that the HLA class I gene family includes genes apart from those encoding the HLA-A, -B and -C antigens.
Based on the above, admittedly limited, data, one would like to speculate that gene(s) within the HLA class I region could create unfavorable circumstances leading to the loss of the sex chromosome. Alternatively, gene(s) in these loci perhaps offer protection and prevent miscarriage, which is known to be a very frequent event in XO embryos.
It is obvious that the precise interpretation and significance of the increased frequency of HLA-B 17 antigen in girls with TS, inherited from their fathers, must await further studies.
Table 2: Distribution (%) of HLA-A, -B antigens tested in subjects with Turner Syndrome and in controls.
"year": 1993,
"sha1": "a6e7881f185561853df062455b6b3c9e58a92b85",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/dm/1993/243295.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a6e7881f185561853df062455b6b3c9e58a92b85",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Recent Advances in Diagnosing Chronic Pulmonary Aspergillosis
Purpose: The diagnosis of chronic pulmonary aspergillosis (CPA) is occasionally complicated due to the poor sensitivity of mycological culture and colonization of Aspergillus species in the airway. Several diagnostic methods have been developed for the diagnosis of invasive pulmonary aspergillosis; however, their interpretation and significance differ in CPA. This study aimed to review the recent advances in diagnostic methods and their characteristics in the diagnosis of CPA. Recent findings: Radiological findings of the lung, histopathology, and culture are the gold standard of CPA diagnosis. Serodiagnosis methods involving the use of galactomannan and β-D-glucan have low sensitivity and specificity. An Aspergillus-specific IgG antibody assay showed good performance and had better sensitivity and reproducibility than conventional precipitant antibody assays. Currently, it is the most reliable method for diagnosing CPA caused by Aspergillus fumigatus, but evidence on its effectiveness in diagnosing CPA caused by non-fumigatus Aspergillus is lacking. The newly developed Aspergillus lateral flow device and the detection of volatile organic compounds in breath have potential, but evidence on their effectiveness in diagnosing CPA is lacking. The increasing prevalence of azole-resistant A. fumigatus strains has become a threat to public health. Some of the azole-resistance-related genes can be detected directly from clinical samples using a commercially available kit. However, its clinical efficacy for routine use remains unclear, since resistance-related genes differ greatly among regions and countries. Conclusion: Several issues surrounding the diagnosis of CPA remain unclear. Hence, further investigations and clinical studies are needed to improve the accuracy and efficiency of CPA diagnosis.
INTRODUCTION
Aspergillus species are environmental molds that produce airborne spores, and the average human is estimated to inhale hundreds of Aspergillus conidia daily (Hospenthal et al., 1998). Host immunity and the underlying pulmonary diseases are critical factors in determining the outcome of this daily exposure. Patients with defects in cell-mediated immunity, including those with neutropenia due to cytotoxic chemotherapy or T-cell dysfunction due to corticosteroid or other immunosuppressive therapy, are at risk of developing invasive pulmonary aspergillosis (IPA), characterized by hyphal invasion of lung tissues and dissemination to other organs (Baddley, 2011; Patterson et al., 2016). However, patients with underlying chronic respiratory disorders, such as chronic obstructive pulmonary disease, post-pulmonary tuberculosis, non-tuberculosis mycobacteriosis (NTM), cystic fibrosis (CF), bronchiectasis, or allergic bronchopulmonary aspergillosis, can develop saprophytic Aspergillus colonization and infection, namely chronic pulmonary aspergillosis (CPA) (Saraceno et al., 1997; Takeda et al., 2016; Lowes et al., 2017). CPA is a slowly progressive pulmonary disease caused by Aspergillus spp. (Saraceno et al., 1997), and its prognosis is poor; the 5-year mortality rate of CPA patients is approximately 50-85% (Lowes et al., 2017). CPA is categorized into five disease entities based on the recent guidelines of the European Respiratory Society: Aspergillus nodule, simple pulmonary aspergilloma, chronic cavitary pulmonary aspergillosis (CCPA), chronic fibrosing pulmonary aspergillosis (CFPA), and subacute invasive pulmonary aspergillosis (SAIA).
The diagnosis of CPA is occasionally complicated, as there are several disease entities within CPA, described in the following section, and some patients with underlying pulmonary diseases develop Aspergillus airway colonization. Diagnostic methods used for CPA are similar to those for IPA, but their interpretation and significance are different. Clinicians need various pieces of clinical information, such as the patient's background, radiological images, clinical course, culture results, and other supportive diagnostic methods, to diagnose CPA. The present review describes the currently available diagnostic methods and discusses new approaches for diagnosing CPA and their future directions.
Radiological and Histopathological Findings
Simple pulmonary aspergilloma is defined as a single pulmonary cavity containing a fungal ball in a non-immunocompromised patient with minor or no symptoms and no radiological progression over at least 3 months of observation. An Aspergillus nodule is characterized by the presence of one or more nodules, without cavitation, caused by Aspergillus spp. (Muldoon et al., 2016).
On the contrary, CCPA and SAIA are characterized by one or more cavities, with or without a fungal ball, and by radiological progression such as expanding thick-walled cavities and pericavitary infiltration. The crucial difference between them is that SAIA involves hyphal invasion into the lung parenchyma (Yousem, 1997; Hope et al., 2005); however, it is not always easy or practical to obtain sufficient histopathological samples to confirm the diagnosis. Therefore, clinical information such as the time course of radiological progression (CCPA, >3 months; SAIA, 1-3 months) and the process of cavity formation is indispensable for clinical diagnosis; CCPA usually occurs in pre-existing cavities, whereas in SAIA, cavities can be subsequently formed by the necrotic change of nodules or infiltrative lesions due to Aspergillus infection (Izumikawa et al., 2014). However, it is hard to distinguish them if serial radiography films are not available. In particular, patients with NTM infection are difficult to diagnose because of the similarity in radiological findings, such as nodular shadows and cavity formation (Kobashi et al., 2006). CFPA is defined as severe fibrotic destruction of at least two lung lobes complicating CCPA, leading to a major loss of lung function, and is generally the end result of untreated CCPA (Denning et al., 2003). Thus, these three clinical entities are vague and overlapping in some cases; however, it is essential to distinguish them in order to estimate their prognoses. Although triazole antifungals are recommended in these entities, their efficacy was better in patients with SAIA than in those with CCPA, as reported in a prospective study in France (Cadranel et al., 2012). Recently, a "scab-like sign" observed inside the cavitary lesion on CT was proposed as a high-risk sign of hemoptysis in CPA patients, which could be useful when following CPA patients (Sato et al., 2018).

Abbreviations: CCPA, chronic cavitary pulmonary aspergillosis; CFPA, chronic fibrosing pulmonary aspergillosis; CT, computed tomography; ABPA, allergic bronchopulmonary aspergillosis; CPA, chronic pulmonary aspergillosis; BAL, bronchoalveolar lavage; BDG, β-D-glucan; GAG, galactosaminogalactan; GM, galactomannan; IPA, invasive pulmonary aspergillosis; LFD, lateral flow device; PCR, polymerase chain reaction; RT-PCR, reverse transcription-polymerase chain reaction; SAIA, subacute invasive pulmonary aspergillosis; TR, tandem repeats.
Mycological Culture
Mycological culture is a basic method for diagnosing CPA, although it has several limitations. The culture positivity rates of Aspergillus species from respiratory specimens in CPA vary widely, ranging from 11.8 to 81.0% depending on the report (Kitasato et al., 2009; Kohno et al., 2010; Nam et al., 2010; Shin et al., 2014). Uffredi et al. reported that 48 (63%) of 76 non-granulocytopenic patients whose respiratory specimens yielded Aspergillus fumigatus were colonized rather than infected (Uffredi et al., 2003). In our previous study, only 11 (16.4%) of 67 individuals with cultures positive for A. fumigatus were colonized. By contrast, 58 (65.9%) of 88 individuals whose cultures yielded non-fumigatus Aspergillus strains were colonized (Tashiro et al., 2011). These reports imply that clinicians need to be careful when interpreting the results of fungal cultures from respiratory specimens, as Aspergillus species are ubiquitous organisms present in the air, and some of them are saprophytic fungi that should not be targets of treatment. The most important way to distinguish colonization from infection is to confirm clinical information, such as transitional changes in radiological findings; however, films are not always available. Therefore, we need a biomarker that reflects the invasiveness of Aspergillus infection.
Antigen and Antibody Test
It is not always easy to obtain histopathological specimens, as some patients cannot tolerate invasive diagnostic procedures such as transbronchial lung biopsy due to their general condition; therefore, serodiagnosis is indispensable for the diagnosis of CPA. Galactomannan (GM) antigen assays in serum and bronchoalveolar lavage (BAL) fluid have high sensitivity and specificity for the diagnosis of IPA, with cutoff values of 0.5 and 1.0, respectively (Maertens et al., 2007, 2009). However, the serum GM assay has lower sensitivity and specificity for CPA, with a cutoff value of 0.5 (Kitasato et al., 2009; Shin et al., 2014), than for IPA. GM antigen in BALF showed relatively higher sensitivity (77.2%) and specificity (77.0%), with a cutoff value of 0.4, than that in serum (Izumikawa et al., 2012).
Although the β-D-glucan (BDG) assay has high sensitivity for the screening of a wide range of invasive fungal infections such as candidemia, pneumocystis pneumonia, and IPA, its specificity is limited (Karageorgopoulos et al., 2011; Onishi et al., 2012). Furthermore, its sensitivity is very low (about 20%) in CPA patients (Kitasato et al., 2009; Kohno et al., 2010). Urabe et al. recently reported that the combination of GM and BDG assays in BALF had a higher diagnostic accuracy than single assays or other combinations of diagnostic methods, including PCR (Urabe et al., 2017).
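As a toy illustration of how such cutoff-based performance figures arise, the sketch below computes sensitivity and specificity for GM and BDG individually and for an "either marker positive" combination; all assay values, cutoffs, and case labels are fabricated for demonstration only.

```python
# Sensitivity/specificity of single markers and an either-positive rule.
def sens_spec(calls, truth):
    tp = sum(c and t for c, t in zip(calls, truth))
    tn = sum((not c) and (not t) for c, t in zip(calls, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))

truth = [1, 1, 1, 1, 0, 0, 0, 0]                  # 1 = confirmed CPA
gm    = [0.9, 0.6, 0.3, 1.2, 0.2, 0.7, 0.1, 0.3]  # BALF GM index, cutoff 0.4
bdg   = [15, 35, 40, 8, 5, 9, 30, 6]              # BDG (pg/mL), cutoff 20

gm_pos, bdg_pos = [v >= 0.4 for v in gm], [v >= 20 for v in bdg]
combo = [g or b for g, b in zip(gm_pos, bdg_pos)]
for name, calls in [("GM", gm_pos), ("BDG", bdg_pos), ("GM-or-BDG", combo)]:
    se, sp = sens_spec(calls, truth)
    print(f"{name}: sensitivity {se:.2f}, specificity {sp:.2f}")
```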
Detection of Aspergillus-specific antibodies plays an important role in the diagnosis of CPA and allergic bronchopulmonary aspergillosis, and this method has been widely used. The precipitating Aspergillus IgG antibody has better sensitivity (80-90%) than the GM and BDG assays (Kitasato et al., 2009; Kohno et al., 2010). At the moment, commercial Aspergillus-specific IgG plate ELISA tests are produced by Serion (Germany), IBL (Germany/USA), Dynamiker/Bio-Enoche (China), Bio-Rad (France), Bordier (Switzerland), and Omega/Genesis (UK) (Page et al., 2015). Siemens (Germany) supplies an automated Aspergillus-specific IgG ELISA system (Immunolite), while Thermo Fisher Scientific/Phadia (multinational) supplies an automated Aspergillus-specific IgG fluoroenzyme immunoassay system (ImmunoCAP), which is an ELISA variant (Page et al., 2015). The Phadia ImmunoCAP IgG assay and the Bio-Rad Platelia Aspergillus IgG method have been reported to possess better sensitivity and reproducibility than the conventional precipitant antibody method (Baxter et al., 2013). These detection kits have excellent performance in the diagnosis of CPA and ABPA (Baxter et al., 2013; Dumollard et al., 2016; Fujiuchi et al., 2016; Page et al., 2016, 2018). However, all these tests use antigens purified from culture extracts or recombinant antigens of A. fumigatus, and were originally designed to detect A. fumigatus. As non-fumigatus strains account for 40% (30 of 74) of CPA patients in Japan (Tashiro et al., 2011) and 38% in India (Shahid et al., 2001), these assays might have limitations in diagnosing CPA caused by non-fumigatus strains in some areas.
Polymerase Chain Reaction (PCR)
Polymerase chain reaction (PCR) has been used for the diagnosis of IPA for over two decades, though it is not included in the European Organization for Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG) definitions of invasive fungal disease (White et al., 2015). Aspergillus PCR from blood samples has comparable sensitivity and specificity for the diagnosis of IPA (White et al., 2015), but failed to detect Aspergillus DNA in patients with simple pulmonary aspergilloma and CPA (Imbert et al., 2016); conversely, this implies that PCR could be useful to rule out disseminated infection in CPA. In BALF samples, PCR showed acceptable sensitivity (66.7-86.7%) and specificity (84.2-94.2%) compared with GM or BDG (Urabe et al., 2017). RT-PCR has two advantages: (1) its quantitative nature offers the possibility of establishing precise cutoff values that could distinguish colonization from active infection, and (2) it detects RNA, which is an indicator of living fungal cells.
New Strategies
An Aspergillus-specific lateral flow device (LFD) was newly developed. It uses the mouse monoclonal antibody JF5, which binds to a protein epitope present on an extracellular glycoprotein antigen secreted constitutively during the active growth of A. fumigatus. This method can detect Aspergillus antigens in human serum within 15 min. An early clinical trial showed that the LFD is comparable to serum GM in diagnosing IPA, with a sensitivity and specificity of 81.8 and 98%, respectively (White et al., 2013). In a single-center prospective study, the LFD test using BALF specimens also showed acceptable sensitivity (77%) and specificity (92%) for proven/probable IPA (Prattes et al., 2014). However, another single-center study recently reported that the LFD showed a low sensitivity of 38% for IPA (Castillo et al., 2018). The evidence for the LFD's utility in CPA diagnosis is quite limited to date; clinical studies on the diagnosis of CPA are needed to better understand its clinical use.
Volatile organic compounds (VOCs) can be detected in the breath of an infected individual. Initially, 2-pentylfuran was reported as a potential diagnostic VOC in IPA patients (Syhre et al., 2008; Chambers et al., 2009). A recent proof-of-principle study used electronic noses to detect the characteristic VOC pattern of IPA and showed a high sensitivity of 100% and a specificity of 83.3% (de Heer et al., 2013). Other researchers used thermal desorption-gas chromatography/mass spectrometry to detect the specific VOC pattern of IPA and also showed a high sensitivity of 94% and specificity of 93% (Koo et al., 2014). Moreover, de Heer et al. applied the same methods to detect A. fumigatus colonization in CF patients and showed a sensitivity of 78% and specificity of 94% (de Heer et al., 2016). These methods can be useful screening tests, as they are noninvasive diagnostic procedures; however, there might be an issue in distinguishing CPA patients from Aspergillus-colonized patients.
Galactosaminogalactan (GAG) is a newly discovered extracellular polysaccharide of Aspergillus species, composed of α-1,4-linked galactose and α-1,4-linked N-acetylgalactosamine. It is observed only in the hyphal form (Fontaine et al., 2011). GAG is particularly abundant in A. fumigatus, the most pathogenic species among the hundreds of Aspergillus species (Lee et al., 2015). Furthermore, GAG is required for its virulence (Gravelat et al., 2013). Therefore, this component could be a potential biomarker to estimate the invasiveness of Aspergillus infection.
Diagnosis of Infection With Azole-Resistant A. fumigatus
In recent years, the global increase of azole-resistant A. fumigatus has become an emerging concern for public health; the rates of resistant strains vary among regions, countries, and continents, and are especially high in European countries (van der Linden et al., 2015; Meis et al., 2016; Rivero-Menendez et al., 2016). Azole antifungals are the mainstay of treatment for pulmonary aspergillosis, and mortality rates are higher in IPA patients infected with azole-resistant strains (Lowes et al., 2017). However, it is still unclear how infection with an azole-resistant strain affects the clinical course or mortality in CPA patients, because some azole-resistant strains obtained from aspergillosis patients treated with azoles showed attenuated growth in vitro (Ballard et al., 2018). CPA patients need at least 6 months of oral azole treatment; detecting an azole-resistant strain earlier could benefit them by prompting a change in the treatment regimen. However, it is difficult to diagnose azole-resistant A. fumigatus infection in the clinical setting, as in vitro antifungal susceptibility testing of Aspergillus species is not routinely performed in most clinical laboratories due to cost and technical problems. A screening test using a 4-well agar plate containing azoles (itraconazole, 4 mg/L; voriconazole, 1 mg/L; posaconazole, 0.5 mg/L; and one azole-free well) showed a sensitivity of 99% and a specificity of 99% for detecting azole-resistant mutants (Arendrup et al., 2017); this could be useful and practical as a routine test in clinical laboratories in countries where the azole-resistance rate is high.
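The interpretive logic of such a screening plate is simple enough to express as code; the sketch below is a hypothetical illustration of that logic (well names and the growth records are invented), not part of any commercial kit.

```python
# Interpret a 4-well azole screening plate: growth on any azole-containing
# well flags a presumptive resistant isolate for confirmatory MIC testing.
AZOLE_WELLS = ("itraconazole_4mg/L", "voriconazole_1mg/L", "posaconazole_0.5mg/L")

def screen_isolate(growth):
    """growth: dict mapping well name -> bool (visible growth)."""
    if not growth.get("azole_free", False):
        return "invalid: no growth on the azole-free control well"
    flagged = [w for w in AZOLE_WELLS if growth.get(w, False)]
    return ("presumptive resistant (" + ", ".join(flagged) + ")") if flagged \
        else "screen negative (azole-susceptible)"

print(screen_isolate({"azole_free": True, "voriconazole_1mg/L": True}))
```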
Azole-resistant A. fumigatus strains are mainly categorized into an "environmental route" and a "patient-acquired route" according to the means of resistance acquisition. The former is thought to be generated by the agricultural fungicides used for crop protection and carries tandem repeats (TR) of 34, 46, or 53 base pairs upstream in the promoter region of CYP51A together with a single point mutation in the CYP51A gene. By contrast, the latter is generated by the long-term use of medical azoles and carries various single point mutations in the CYP51A gene (Meis et al., 2016). The environmentally acquired azole-resistant strains appear to have originated in Europe and have already spread into other regions worldwide (Meis et al., 2016).
The most commonly used method is simple PCR amplification of the entire coding and promoter region followed by sequence analysis of the PCR products; however, this method is not practical for clinical use, as it is time consuming. Restriction fragment length polymorphism analysis with AluI is valuable, as it can detect the TR34 and L98H mutations from DNA samples faster than sequencing (Ahmad et al., 2014). The commercially available AsperGenius® kit (PathoNostics) can detect A. fumigatus and the L98H, T289A, Y121F, and TR34 mutations directly from BALF specimens by multiplex real-time PCR. In a multicenter clinical study, it showed good diagnostic performance on BAL and could detect A. fumigatus with resistance-associated mutations, including in culture-negative BALF samples, and detection of mutations was associated with azole treatment failure (Chong et al., 2016). However, the efficacy of this detection kit for CPA patients is unclear, as these mutations are relatively rare among patient-acquired azole-resistant strains obtained worldwide (Meis et al., 2016; Chowdhary et al., 2017); on the other hand, 27 (93.1%) of 29 CPA patients from Europe had an L98H mutation detected in BALF samples and 16 (55.2%) had a TR34 mutation (Denning et al., 2011).
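To make the TR-type promoter alleles concrete, the toy sketch below scans a sequence for a directly repeated 34-bp unit, the pattern that defines a TR34-type allele; the 34-bp unit and flanking bases are invented placeholders, not the real cyp51A promoter sequence.

```python
# Flag a direct tandem duplication of a fixed-length unit in a sequence.
def first_tandem_repeat(seq, unit_len):
    for i in range(len(seq) - 2 * unit_len + 1):
        if seq[i:i + unit_len] == seq[i + unit_len:i + 2 * unit_len]:
            return i          # start of the duplicated unit
    return None

unit = "ATCGGTACCTTAAGCGATCCGGATTACAGTCGAT"   # hypothetical 34-bp unit
wild_type = "GGGGG" + unit + "CCCCC"
tr34_like = "GGGGG" + unit + unit + "CCCCC"    # duplicated unit

for name, seq in [("wild-type", wild_type), ("TR34-like", tr34_like)]:
    print(name, "->", first_tandem_repeat(seq, 34))  # None vs. index 5
```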
CONCLUSION
Needless to say, the gold standard of CPA diagnosis is the radiological findings of the lungs together with histopathology and culture from the focus of infection. Definitive diagnosis by histopathology and culture is not always feasible; thereby, other diagnostic tools are indispensable, and biomarkers that reflect disease status are needed. The diagnostic methods for CPA described in this review are summarized in Table 1. Currently, the Aspergillus-specific IgG antibody is the most promising tool for diagnosing CPA caused by A. fumigatus. We propose an algorithm for the diagnosis and treatment of CPA (Figure 1). When a patient is suspected of chronic Aspergillus infection, it is important to rule out mycobacterial infection first. The indication for bronchoscopy should be considered depending on the result of the Aspergillus IgG antibody test. If it is negative, bronchoscopy is strongly recommended, as a non-fumigatus Aspergillus can be the causative organism. If it is positive, bronchoscopy is, however, optional, to determine which antifungal agent to use or to collect more precise epidemiological information.
Since the emergence of azole-resistant A. fumigatus strains is a serious concern, convenient methods are required to detect resistant strains directly from clinical samples; however, further investigation is required. In addition, we need to investigate how these azole-resistant mutants are produced inside the lungs and how they affect CPA patients, in order to discover methods to decrease their prevalence.
"year": 2018,
"sha1": "a5906e89feee64ed0756989a5e0b4ac2e4862679",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2018.01810/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5906e89feee64ed0756989a5e0b4ac2e4862679",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Loss of the Birt–Hogg–Dubé tumor suppressor results in apoptotic resistance due to aberrant TGFβ-mediated transcription
Birt–Hogg–Dubé (BHD) syndrome is an inherited cancer susceptibility disease characterized by skin and kidney tumors, as well as cystic lung disease, which results from loss-of-function mutations in the BHD gene. BHD is also inactivated in a significant fraction of patients with sporadic renal cancers and idiopathic cystic lung disease, and little is known about its mode of action. To investigate the molecular and cellular basis of BHD tumor suppressor activity, we generated mutant Bhd mice and embryonic stem cell lines. BHD-deficient cells exhibited defects in cell-intrinsic apoptosis that correlated with reduced expression of the BH3-only protein Bim, which was similarly observed in all human and murine BHD-related tumors examined. We further demonstrate that Bim deficiency in Bhd−/− cells is not a consequence of elevated mTOR or ERK activity, but results instead from reduced Bim transcription associated with a general loss of TGFβ-mediated transcription and chromatin modifications. In aggregate, this work identifies a specific tumor suppressive mechanism for BHD in regulating TGFβ-dependent transcription and apoptosis, which has implications for the development of targeted therapies.
Introduction
Birt–Hogg–Dubé (BHD) syndrome is a rare inherited cancer susceptibility disease characterized by benign hair follicle tumors and lung cysts in a majority of patients, and renal cell carcinoma (RCC) in approximately one third of diagnosed BHD cases. Genetic analyses of families affected by BHD syndrome previously identified germline mutations in the BHD gene on chromosome 17p11.2, most of which were predicted to prematurely truncate the encoded protein (Nickerson et al., 2002; Toro et al., 2008). Mutational analyses of BHD-related tumors from patients, as well as those from mouse and rat models of BHD syndrome, identified inactivating 'second hits' in the remaining wild-type BHD allele, formally establishing BHD as a tumor suppressor (Okimoto et al., 2004; Vocke et al., 2005; Hasumi et al., 2009). BHD inactivation has also been described in a subset of von Hippel-Lindau-independent RCC syndromes, in approximately one third of sporadic renal cancers, and in 60% of idiopathic cystic lung disease cases (Khoo et al., 2003; Gunji et al., 2007; Woodward et al., 2008).
While the genetic basis of BHD syndrome is well understood, the cellular and molecular mechanisms involving the protein encoded by BHD, also called Folliculin, remain unclear. Dissecting BHD's molecular functions is particularly challenging since the BHD gene product bears no sequence or functional homology to any known protein. Studies in Schizosaccharomyces pombe have suggested a role for the putative yeast BHD ortholog in amino-acid homeostasis, while siRNA-mediated knockdown of Drosophila BHD in the fly has implicated it in male germline stem cell maintenance through Stat and/or BMP signaling (Singh et al., 2006; van Slegtenhorst et al., 2007). Finally, mammalian cell culture analyses and mouse models have linked BHD and mammalian target of rapamycin (mTOR) signaling, potentially mediated through an interaction with its binding partner, called Folliculin-interacting protein, and AMP-activated kinase (Baba et al., 2006). These studies demonstrate that BHD loss can paradoxically result in either stimulation or inhibition of mTOR depending on the system examined (Baba et al., 2006, 2009; Hudon et al., 2008, 2010). To investigate the cellular and molecular mechanisms by which BHD suppresses tumorigenesis, we established Bhd−/− embryonic stem (ES) cell lines, since the early lethality of Bhd−/− embryos precluded the development of other cellular models. We observed that the major cellular consequence of BHD loss is resistance to apoptosis caused by decreased Bim expression. This phenomenon was independent of mTOR and ERK hyperactivation and was instead associated with a general loss of TGFβ-dependent chromatin modifications and transcription. These data present new insight into how BHD functions as a tumor suppressor and suggest novel targeted therapeutics for BHD syndrome, sporadic RCC, and cystic lung disease.
Results
Homozygous mutant Bhd embryos die during early embryogenesis

To define the basic molecular functions of BHD, we initially attempted to establish mouse embryonic fibroblasts from a gene-trap mouse model. In order to fully characterize this mutant Bhd allele (denoted as m), we mapped the insertional position of the trap cassette to intron 8 of the murine Bhd gene, and established Southern- and PCR-based genotyping assays to discriminate the wild-type from the m allele (Figures 1a-c). Using multiple quantitative real-time (qRT)-PCR probes, we observed a 40-50% reduction in Bhd mRNA expression in Bhd+/m ES cells compared with Bhd+/+ controls by qRT-PCR (Figure 1d), and confirmed expression of the BHD-βgeo fusion product by staining the ES cells with X-Gal (Figure 1e).
Bhd+/m intercrosses yielded no viable Bhd m/m embryos after e7.5; although we were unable to genotype e6.5 embryos, we observed that ~25% of these were severely abnormal or partially resorbed upon dissection, exhibiting grossly atrophied and disorganized egg cylinders, distorted visceral endoderm and lack of cavitation (Figures 1f and g). This phenotype is consistent with a recently reported embryonic lethal phenotype for an independent mutant Bhd allele (Hasumi et al., 2009).

Loss of BHD does not affect proliferation, but results in decreased cellular size and resistance to cell-intrinsic apoptosis

Since the early lethality of the Bhd m/m embryos precluded the development of differentiated cell types, we generated an ES cell model for in vitro studies. To do this, we used an independent mutant Bhd allele (denoted as '−') with a gene trap in Bhd intron 1, which ablated expression of the entire Bhd open reading frame. The Bhd+/− ES cells were characterized in the same manner as the Bhd+/m cells to validate the efficacy of the trap cassette (Supplementary Figures 1a-e).
Bhd−/− ES cells were generated from the parental Bhd+/− cells as shown by PCR, and western blot confirmed complete loss of BHD protein expression (Figures 2a and b). To verify that BHD loss did not affect the undifferentiated state of Bhd−/− ES cells, we analyzed protein levels of Oct-4, Nanog, Sox2 and phospho-Stat3, and alkaline phosphatase activity in Bhd+/+, Bhd+/−, and Bhd−/− cells (Supplementary Figures 2a-c). We observed no difference in all parameters of ES cell pluripotency, allowing for valid comparison in in vitro studies across genotypes.
Finally, we evaluated the response of Bhd−/− cells to cell-intrinsic apoptosis by starving cells of serum, glucose, or amino acids for 24 h. Bhd−/− cells clearly demonstrated resistance to all three apoptosis-inducing stresses, as revealed by light microscopy, decreased accumulation of sub-G1 populations, and reduced Caspase-3 and Parp cleavage. This effect was specific to the intrinsic apoptotic pathway, as Bhd−/− cells showed no differential response to the death-receptor ligand Tumor necrosis factor α (Figure 2g). Thus, BHD deficiency does not confer an obvious increase in cellular proliferation or growth, but does reduce cell-intrinsic apoptosis, thereby identifying a plausible cellular role for BHD as a tumor suppressor.
Bim expression is lost in Bhd−/− cells and BHD-related tumors and contributes to apoptotic resistance

The cell-intrinsic apoptotic resistance of Bhd−/− cells could either be due to increased intracellular nutrient stores, an effect previously observed in a mutant yeast model of BHD, or result from a defect in the cell-intrinsic death machinery (van Slegtenhorst et al., 2007). To assess the former possibility, we measured levels of 13 different intracellular amino acids by high-performance liquid chromatography and observed no differences between Bhd+/+ and Bhd−/− ES cells, suggesting that nutrient homeostasis was not playing a role in the apoptotic-resistance phenotype (Figure 3a).
To determine if the cell-intrinsic death machinery might be impaired in Bhd−/− ES cells, we examined protein levels of several Bcl2 family members and found many pro-apoptotic BH3-only proteins to be misregulated in Bhd−/− cells. Both the extra-long (EL) and long (L) isoforms of Bim, as well as Pumaα and Bad, were significantly downregulated in Bhd−/− ES cells, while Bid and BMF levels were upregulated (Figure 3b). Bik, Noxa, and Pumaβ, as well as the multi-domain Bcl2 family members BclXL and Bak, were unchanged. Similar results were obtained using three independent BHD-deficient ES clones homozygous for the Bhd m allele (Supplementary Figures 3a and b). We focused primarily on Bim regulation in subsequent experiments, given its dominant role among BH3-only proteins in mediating the cell-intrinsic death response and its frequent loss in RCC (Kim et al., 2006; Zantl et al., 2007; Guo et al., 2009).
To test whether the apoptotic resistance of Bhd−/− cells was Bim dependent, we restored either BimEL or Bhd expression by retroviral transduction in Bhd−/− cells. Restoration of BimEL or Bhd expression, or treatment of cells with the chemical BH3 mimetic ABT-737, rescued the apoptotic response of Bhd−/− ES cells to amino-acid deprivation, as assessed by FACS analysis (Figure 3c, upper panel) and Parp and Caspase-3 cleavage (Figure 3c, lower panel). Importantly, Bhd reconstitution was sufficient to restore Bim protein levels. Interestingly, the autophagy inhibitors 3-methyladenine and chloroquine rescued the apoptotic response in Bhd−/− cells (Figure 3c), similar to apoptosis-deficient Bax−/−/Bak−/− double knockout cells (Lum et al., 2005).
To determine whether loss of BHD results in Bim deficiency in vivo, we performed immunohistochemistry on BHD-related tumors from both human BHD patients and aged Bhd+/m mice. 3/3 human-derived tumors (one fibrofolliculoma, one chromophobe-clear cell hybrid RCC, and one chromophobe RCC) and 5/5 solid murine renal tumors (four hybrid oncocytic tumors and one papillary mass projecting from a cyst) exhibited absent or significantly reduced Bim expression, confirming that loss of Bim expression is a common event in BHD-related tumors of diverse histologies (Figure 3d).
Bim is transcriptionally downregulated in Bhd−/− cells independent of mTORC1, mTORC2, ERK, or GCN2-eIF2α pathway aberrations

Bim is regulated transcriptionally by several well-characterized signal transduction pathways, translationally by miRNAs, and at the level of protein stability (Dijkers et al., 2000; Dehan et al., 2009; Su et al., 2009). We initially assessed Bim mRNA and found it to be decreased by ~80% in Bhd−/− cells compared with Bhd+/+ and Bhd+/− cells (Figure 4a). To investigate a possible additional role for enhanced proteolytic degradation of Bim, we treated Bhd−/− cells with the proteasome inhibitor MG-132, but observed no increase in Bim protein (Supplementary Figure 4a) (Dehan et al., 2009). We also examined levels of miR-19 and miR-92, miRNAs previously shown to regulate Bim translation in ES cells (Su et al., 2009), but detected no differences between Bhd−/− and control cells (Supplementary Figure 4b).
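The "~80%" figure is the kind of number that drops out of the standard relative-quantification arithmetic for qRT-PCR; a worked sketch of the 2^-ΔΔCt method follows, using invented Ct values for Bim and a housekeeping reference rather than the study's actual data.

```python
# Relative Bim expression by the 2^-ddCt method (Ct values are hypothetical).
bim_ct = {"Bhd+/+": 24.0, "Bhd-/-": 26.4}
ref_ct = {"Bhd+/+": 18.0, "Bhd-/-": 18.1}   # e.g., a beta-actin reference

dct  = {g: bim_ct[g] - ref_ct[g] for g in bim_ct}  # normalize to reference
ddct = dct["Bhd-/-"] - dct["Bhd+/+"]
rel  = 2 ** -ddct
print(f"Bim in Bhd-/-: {rel:.2f}x wild-type ({(1 - rel) * 100:.0f}% reduction)")
```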
To determine which transcriptional networks and upstream signaling pathways were involved in Bim misregulation, we investigated the mTOR and ERK pathways, which have previously been linked to Bim regulation (Dijkers et al., 2000; Dehan et al., 2009) and are hyperactivated in late-stage cysts and tumors derived from a mutant Bhd mouse model (Baba et al., 2008). Western blot analysis of Bhd+/+, Bhd+/−, and Bhd−/− ES cells revealed that Bhd−/− ES cells exhibited hyperphosphorylation of S6K1 and 4E-BP1 (indicative of increased mTOR complex 1 (mTORC1) activity), hyperphosphorylation of Akt and the FoxO family of proteins (indicative of mTOR complex 2 (mTORC2) hyperactivation), and hyperactivation of the MEK-ERK-p90RSK cascade (Figure 4b).

Figure 2. Bhd−/− ES cells do not have a proliferative or growth advantage, but are resistant to apoptosis. (a) PCR genotypes verifying loss of the wild-type Bhd allele in three independent ES clones, designated '1', '2', and '3', after step-up selection. WT = wild-type, Mut = mutant alleles. (b) Western blot demonstrating loss of BHD protein expression in three independent Bhd−/− ES clones, with a non-specific (NS) band shown as a loading control. (c) Growth curve showing Bhd−/− ES cells proliferate at similar rates to Bhd+/+ and Bhd+/− counterparts, but have increased saturation density at day 4. (d) Coulter counter size measurements showing Bhd−/− cells are significantly smaller than Bhd+/+ and Bhd+/− ES cells (*P = 0.000623). Bhd−/− ES cells are resistant to stress-induced apoptosis as shown by (e) brightfield images of Bhd+/+ and Bhd−/− ES cells starved of amino acids for 1 day, or (g) FACS analysis of cells containing sub-G1 DNA content after 1 day of serum, glucose and amino-acid (AA) deprivation compared with Bhd+/+ and Bhd+/− cells (*P<0.0000345), but not after Tumor necrosis factor α treatment. (f) Representative FACS plot showing Bhd−/− ES cells have significantly less sub-G1 DNA content following 1 day of amino-acid starvation. (h) Western blot of Caspase and Parp cleavage with and without amino-acid starvation in Bhd+/+ and Bhd−/− ES cells. All bar graphs represent averages of three independent experiments with error bars showing standard error of the mean (s.e.m.).
To determine if these pathways were playing a role in the decreased Bim levels in Bhd−/− cells, we treated cells with inhibitors of each at doses that reduced signaling in Bhd−/− cells to the levels observed in Bhd+/+ cells. We found that pharmacologic inhibition of PI3K-dependent mTORC2, MEK, or mTORC1 activities, either singly or in combination, was insufficient to restore Bim protein levels in Bhd−/− cells (Figure 4c). Treatment of Bhd−/− cells with the same panel of inhibitors was similarly unable to rescue the death-resistance phenotype, as assessed by sub-G1 population quantification (Figure 4d, upper panel) or Caspase and Parp cleavage following 2-day amino-acid starvation (Figure 4d, lower panel).
Bim expression is also known to be regulated by Chop, a transcription factor activated downstream of eukaryotic initiation factor 2α (eIF2α) kinases as part of the unfolded protein response and as an adaptation to nutrient withdrawal (Puthalakath et al., 2007). Interestingly, Bhd−/− cells exhibited decreased basal and amino-acid deprivation-induced p-eIF2α levels, but not ER stress-induced p-eIF2α after thapsigargin treatment (Supplementary Figures 5a-c). However, induction of the downstream effectors ATF4 and Chop was only slightly attenuated in response to amino-acid deprivation (Supplementary Figure 5b). More importantly, reintroduction of Chop into Bhd−/− cells failed to rescue Bim expression (Supplementary Figure 5d). Collectively, these data suggest that the downregulation of Bim and the apoptotic resistance observed in Bhd−/− cells are not due to signaling aberrations in the mTORC1, mTORC2, ERK, and eIF2α signaling pathways.

Figure 3. Death resistance of Bhd−/− cells is not due to increased intracellular amino-acid levels, but instead results from decreased Bim protein levels. (a) Intracellular amino-acid levels normalized to total protein in Bhd+/+ versus Bhd−/− ES cells as assessed by HPLC.
Bhd−/− ES cells exhibit many phenotypic and molecular defects characteristic of TGFβ-pathway mutants

We next considered the TGFβ pathway, since many reports had previously shown it to regulate Bim transcription in diverse cell types (Wildey et al., 2003; Ramjaun et al., 2007; Ramesh et al., 2008; Yu et al., 2008; Houde et al., 2009). We first verified that the TGFβ ligand family regulates Bim mRNA in ES cells by treating them with Activin, a TGFβ family ligand to which ES cells are responsive. Bim mRNA was increased within 2 h of Activin treatment in Bhd+/+ ES cells, which did not occur in Bhd−/− cells (Figure 5a). To ascertain whether this effect was specific to Bim, we analyzed the mRNA levels of several other BH3-only family members in response to Activin. Contrary to Bim, the Bid, BMF, and Bad mRNAs were not induced by Activin in wild-type cells, and actually appeared elevated in Bhd−/− cells (Supplementary Figure 6a). The latter finding was consistent with the increased Bid and BMF protein levels in Bhd−/− cells, but appeared opposite to that of Bad protein levels (Figure 3b). While Pumaα mRNA was induced by Activin, this was not BHD dependent, as it occurred in both Bhd+/+ and Bhd−/− cells (Supplementary Figure 6a). Taken together, these data identify Bim as a specific BH3-only mRNA regulated by TGFβ in a BHD-dependent manner.

Figure 4. Signaling pathways are hyperactivated in Bhd−/− ES cells. (c) Inhibition of hyperactivated mTORC2 by 5 μM LY294002 (LY) or of ERK by 10 μM PD98059 (PD), or a combination of both, or inhibition of mTORC1 by 20 nM rapamycin (rapa) for 24 h does not restore Bim protein expression levels in Bhd−/− cells by western blot. Drug efficacy was shown by decreased p-FOXO levels with LY treatment and decreased p-ERK levels with PD treatment in Bhd−/− cells. Decreased mobility shifts in S6K demonstrate the effectiveness of both LY and rapa treatments. Doses were chosen based on their ability to bring activation levels of signaling components in Bhd−/− cells to those observed in Bhd+/+ cells. (d) Bhd−/− ES cells were pre-treated for 6 h with the same panel of inhibitors as in (c), then starved of amino acids for 2 days, with inhibitors re-added after 1 day of starvation to maintain concentrations. Death was assessed by quantification of sub-G1 populations by flow cytometry (upper panel, *P<0.0345) and Caspase and Parp cleavage by western blot (lower panel). All bar graphs in this figure represent averages of three independent experiments with error bars reflecting s.e.m.
To determine if TGFb-mediated transcriptional defects were a general phenomenon in Bhd À/À cells, we Figure 5 Bhd À/À ES cells exhibit phenotypes and transcriptional defects characteristic of TGFb-signaling components. (a) Bhd þ / þ and Bhd À/À ES cells were cultured for 24 h in N2B27 media, then stimulated with 10 ng/ml of Activin. Bim mRNA levels were significantly induced in Bhd þ / þ cells after stimulation, but not in Bhd À/À ES cells (*P ¼ 0.00864). (b) qRT-PCR analysis showing canonical TGFb target genes PAI-1, p15, Lefty1, and Pitx2 are significantly downregulated in multiple Bhd À/À ES cell clones labeled 1, 2 and 3 (*Po0.04). (c) X-gal stained 70 Â image of (a) whole mount or 200 Â image of (b) sectioned yolk sac taken from a e10.5 Bhd þ /m embryo exhibiting high Bhd expression in visceral endoderm of the yolk sac. (d) 200 Â image of day 12 EBs showing Bhd þ / þ EBs form expanded cystic structures reminiscent of yolk sacs while Bhd À/À EBs fail to do so. (e) qRT-PCR analysis on RNA from EBs, showing two independent Bhd À/À EBs (clones 1 and 2) fail to express maximal levels of mature yolk sac markers a-feto-protein (Afp) and Trithioredoxin (Ttr). Averages represent maximal expression levels for each mRNA, which occurred between days 6 and 9 of EB development (*Po1.0 Â 10 À5 for both mRNAs). (f) Quantification of benzediene-positive EBs that express hemoglobin (Hb) at day 10 in methylcellulose cultures demonstrating Bhd À/À EBs fail to form erythroid lineages (*P ¼ 2.32 Â 10 À6 ). Data represent the average of three independent experiments with error bars showing standard deviation. (g) qRT-PCR analysis of Gata-1 and CD34 mRNA expression in day 10 EBs in methylcellulose cultures, which is significantly reduced in Bhd À/À EBs (Po4.49 Â 10 À6 ). examined the basal mRNA levels of several established TGFb-transcriptional targets involved in diverse biological processes. mRNA levels for the cell cycle regulator p15, pro-apoptotic plasminogen inhibitor activator type I (PAI-1) and Lefty1 and Pitx2, which have developmental roles were all lower in Bhd À/À cells compared with Bhd þ / þ cells (Figure 5b). Other canonical TGFb mRNA targets such as SnoN, p21, Col1a2, and Lefty2 were similarly downregulated in Bhd À/À cells (Supplementary Figure 6b). Other TGFb targets previously implicated in cell cycle regulation and apoptosis were unaffected in Bhd À/À cells, including Gadd45b and p57, while the death-associated protein kinase was actually upregulated (Supplementary Figure 6c). Overall, these data support a positive role for BHD in TGFb-mediated transcription that governs biological processes implicated in cancer such as apoptosis, cell cycle, and differentiation.
TGFβ ligands and receptors are required for normal yolk sac vasculogenesis and embryonic hematopoiesis in vivo, and in ES cell-derived cystic embryoid body (EB) cultures in vitro (Dickson et al., 1995; Oshima et al., 1996; Goumans et al., 1999; Larsson et al., 2001). Interestingly, strong BHD expression was observed in yolk sac visceral endoderm, as revealed by X-gal staining of E10.5 embryos heterozygous for the Bhd^m allele (Figure 5c). To investigate whether BHD is required for yolk sac development, we generated day 12 EBs from Bhd+/+ and Bhd-/- ES cells. Bhd-/- ES cells failed to form expanded cystic EBs, which are reminiscent of mature yolk sacs in vivo (Figure 5d), or to induce the mature yolk sac markers α-fetoprotein (Afp) and transthyretin (Ttr) (Doetschman et al., 1985) (Figure 5e). To further evaluate the role of BHD in embryonic hematopoietic stem cell development, we generated Bhd+/+ and Bhd-/- EBs in methylcellulose cultures, which promote the differentiation of hematopoietic lineages. Bhd-/- EBs failed to form hemoglobinized colonies at day 10 in culture (Figure 5f; Supplementary Figure 6b) and exhibited severely reduced mRNA levels of the erythroid gene Gata-1 and of CD34 (a marker of both hematopoietic and endothelial lineages) (Figure 5g). Interestingly, we observed a delayed induction of the mesodermal marker Brachyury, a super-induction of Fgf-5, and attenuated maintenance of Hnf4 at later stages (Supplementary Figure 7). However, these minor effects are unlikely to explain the dramatic hematopoietic and yolk sac defects of Bhd-/- EBs, as maximal induction of these target genes was still robust in Bhd-/- EBs, increasing by orders of magnitude.
Bhd-/- cells exhibit hypo-acetylation of TGFβ target gene promoters, resulting in their death-resistance phenotype

Because BHD deficiency resulted in molecular and phenotypic defects similar to TGFβ receptor and ligand mutants, we next investigated whether the transcriptional defects occurred at the level of receptor-mediated phosphorylation and nuclear translocation of Smad transcriptional regulators. Therefore, we examined nuclear levels of phosphorylated and total receptor-regulated Smad2 and the common partner Smad4, which form a complex and translocate to the nucleus upon cellular engagement of TGFβ ligands (Zhang et al., 1996). Western blot analysis did not reveal any differences in basal or Activin-induced nuclear accumulation of phospho-Smad2, total Smad2, or total Smad4 in Bhd+/+ versus Bhd-/- ES cells (Figures 6a and b).
Since Smads have previously been shown to activate transcription via acetylation of Histone H3 at target promoters, we performed chromatin immunoprecipitations for acetylated Histone H3 at the well-characterized Lefty1 promoter (Ross et al., 2006). Whereas Bhd+/+ ES cells demonstrated a significant increase in acetyl-H3 levels at the Lefty1 promoter following Activin treatment, Bhd-/- cells exhibited no response (Figure 6c). A similar effect was observed at the promoter of PAI-1, which has been implicated in the apoptotic response (Kortlever et al., 2006; Lademann and Romer, 2008) (Supplementary Figure 8a). This difference was not due to a global reduction of acetyl-H3 levels, which appeared unaffected based on western blot assays (Figure 6d). To determine whether restoration of acetylated Histone levels rescues Bhd-/- cell defects, Bhd-/- cells were treated with the histone deacetylase (HDAC) inhibitor Trichostatin A (TSA), which restored mRNA levels of multiple TGFβ target genes (PAI-1, p15, and Bim), Bim protein levels, and cell death (Figures 6e and f). TSA appeared to have variable effects on other misregulated BH3-only proteins in Bhd-/- cells under nutrient deprivation (Supplementary Figure 8b). First, nutrient deprivation itself changed protein levels of some BH3-only proteins in Bhd-/- cells, with BMF actually decreasing and Bad equilibrating to wild-type levels, in contrast to nutrient-replete conditions (Figure 3a). TSA appeared to have no effect on Puma and Bid abundance under nutrient limitation, while it slightly elevated Bad and BMF levels. It seems unlikely, however, that Bad and BMF induction can explain the effects of TSA, given the ability of Bim re-expression alone to completely reverse the phenotype of Bhd-/- cells. In summary, BHD loss appears to contribute to apoptotic resistance and TGFβ-dependent transcriptional defects via aberrations in chromatin modifications at target gene promoters, including Bim (Figure 7).
Discussion
To date, the cellular and molecular mechanisms of tumor suppression by BHD have remained elusive. Uncovering BHD protein functions is particularly challenging, given its lack of sequence homology to any known protein. In this study, we initially observed that the major cellular consequence of BHD loss is apoptotic resistance due to loss of Bim expression, a phenomenon that can be observed in BHD-related renal tumors. These findings prompted us to ask what signaling pathways and/or transcriptional networks were implicated in this phenotype. We initially examined the mTOR pathway, as multiple published reports have shown correlations between BHD loss and mTOR activity (Hartman et al., 2009; Hasumi et al., 2009; Hudon et al., 2010). While we did observe aberrant mTOR signaling in Bhd-/- cells, this was surprisingly unrelated to the apoptotic defects.
We turned our attention to the TGFβ pathway, given its known role in regulating Bim transcription and tumor suppression. For instance, in humans, Smad4 germline mutations are associated with juvenile polyposis syndrome, which predisposes individuals to gastrointestinal cancers, and TGFβRI, TGFβRII, and activin receptor II are mutated or downregulated in a broad spectrum of sporadic cancers (Levy and Hill, 2006). Epigenetic silencing of genes involved in TGFβ signaling has also been specifically observed in RCCs (McRonald et al., 2009). Genetic ablation of these signaling components in mice has confirmed the tumor suppressive function of this pathway in models of colon, mammary, and pancreatic cancer (Massague, 2008). Our data demonstrate that BHD loss results in yolk sac and hematopoietic defects reminiscent of TGFβ receptor and ligand mutants. These phenotypes correlate with global aberrations in TGFβ-mediated transcription, which appear to occur downstream of nuclear accumulation of phosphorylated Smads. Our data indicate this defect is most likely due to HDAC-mediated effects on chromatin at Smad-dependent promoters, given the ability of TSA to reverse the biological and molecular defects of Bhd-/- cells (Figure 7). Further work will be needed to elucidate the nature of these effects.

Figure 6. Loss of BHD results in HDAC-mediated silencing of TGFβ-transcriptional targets. Western blot of nuclear extracts showing (a) basal or (b) Activin-induced levels of nuclear phospho-Smad2, Smad2, and Smad4 are unaffected in Bhd-/- ES cells. A nonspecific (NS) band is shown as a loading control. (c) qRT-PCR using primers located in the Lefty1 promoter on genomic DNA immunoprecipitated with acetyl-histone H3 antibody from ES cells stimulated with Activin for 1 h following 24 h culture in N2B27. (d) Western blot showing total levels of acetyl-histone H3 are unaffected in Bhd+/+ versus Bhd-/- cells with and without Activin treatment. (e) Treatment of Bhd-/- cells with TSA restores mRNA levels of PAI-1, p15, and Bim, as shown by qRT-PCR analysis of mRNA expression levels (*P < 0.0362). (f) Treatment of Bhd-/- ES cells with TSA rescues the Bhd-/- cellular death-resistance phenotype and Bim expression, as shown by analysis of sub-G1 populations (above, *P = 0.000145) and western blot for Bim and for Caspase and Parp cleavage (below).
Previous studies have suggested a contradictory role for mTOR in BHD-related tumorigenesis, showing that BHD loss can result in either upregulation or downregulation of mTOR signaling. These studies have been largely correlative and have not demonstrated a causative relationship between the signaling aberrations and biological outcomes (Baba et al., 2006; Hartman et al., 2009; Hasumi et al., 2009; Hudon et al., 2010). While we observed increased mTOR signaling in Bhd-/- ES cells, as reported for late-stage kidney tumors taken from a BHD mouse model, this phenomenon appeared unrelated to the observed apoptotic resistance. We speculate that effects of BHD loss on mTOR signaling likely result from compensation and/or crosstalk with more proximal defects in TGFβ signaling.
Collectively, the data presented here establish a novel model whereby BHD exerts its major tumor suppressive effect through TGFβ-mediated transcriptional output to regulate cell-intrinsic apoptosis. A recent publication has further supported this model, suggesting TGFβ-dependent transcriptional defects in a human BHD-derived tumor cell line (Hong et al., 2010). This study demonstrated that BHD deficiency results in decreased levels of TGFβ pathway signaling components themselves, including the ligands TGFβ-2 and Inhibin βA and Smad3. This contrasts with our data, which demonstrate an effect of BHD downstream of signaling, on canonical Smad target gene promoters. Further, the report by Hong et al. suggested that TGFβ-2 stimulates cell growth and survival in the absence of BHD, while the TGFβ ligand Activin does the opposite. The latter is surprising in light of our findings, given the failure of BHD null ES cells to mount any transcriptional response to Activin treatment. This discrepancy likely derives from differential cell type, dosage and/or length of treatment, which has been well documented for TGFβ ligands (Massague, 2008). In any case, these molecular defects likely extend to other tumor-promoting effects such as cell-type specific proliferative advantages, due to p15 loss, and stromal alterations in BHD-related tumors in vivo (Massague, 2008). Although our in vitro study did not uncover a role for BHD in proliferation, this is likely due to the unusual cell cycle machinery of ES cells, which is refractory to cyclin-dependent kinase inhibitors (Savatier et al., 1996; Faast et al., 2004). It is probable that BHD loss and consequent defective TGFβ-mediated signaling in vivo results in a proliferative advantage. Collectively, these findings suggest several targeted therapeutics for the treatment of BHD-related tumors, as well as sporadic cases of cystic lung disease and RCC, including BH3-only mimetics, autophagy inhibitors, and HDAC inhibitors, which are all currently in clinical use for the treatment of other types of cancer.
Materials and methods
Sub-G1 quantification

Cells were fixed and stained as previously described, then fluorescence was measured by flow cytometry (Riccardi and Nicoletti, 2006). At least 10 000 cells were counted for each experiment and data were analyzed using FlowJo software.
X-Gal staining of tissues and embryos

X-gal staining was carried out as previously described for both cells and whole mount tissues fixed in 4% paraformaldehyde, according to the Sanger Institute gene-trap protocol. X-gal stained yolk sac sections came from tissue snap-frozen in OCT.
Alkaline phosphatase staining

Cells were stained using the Alkaline Phosphatase Detection Kit from Millipore (Billerica, MA, USA) according to the manufacturer's instructions.
RNA isolation and qRT-PCR

RNA was extracted using Qiagen (Valencia, CA, USA) RNeasy columns with DNase treatment, and cDNA was generated from approximately 1 µg of RNA per reaction using the ABI (Foster City, CA, USA) one-step RT-PCR Master Mix. qRT-PCR reactions were run in triplicate on an ABI 7900HT machine for each experiment, utilizing either SYBR green or Taqman primer probe sets as noted in Supplementary Materials. All target mRNA levels were normalized to β-actin or 18S expression levels. All qRT-PCR data reflect average mRNA levels from three independent RNA extractions and reverse transcription reactions with error bars showing standard error of the mean.
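The normalization described above (target mRNA relative to a reference gene such as β-actin or 18S) is commonly computed with the delta-delta-Ct method; the following is a minimal sketch of that calculation, not the authors' actual analysis code, and the Ct values shown are hypothetical.

```python
# Minimal sketch of relative mRNA quantification by the delta-delta-Ct
# method, as commonly used to normalize qRT-PCR targets to a reference
# gene (e.g., beta-actin or 18S). All Ct values below are hypothetical.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return the fold change of target mRNA in sample vs. control."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)                          # assumes ~100% PCR efficiency

# Example: a target induced by treatment, with beta-actin as reference.
print(fold_change(24.1, 17.3, 26.0, 17.2))  # ~4-fold induction (illustrative)
```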
Protein extraction and western blotting
Whole-cell lysates were prepared using a buffer containing 120 mM NaCl, 1 mM EDTA, 10 mM pyrophosphate, 10 mM glycerophosphate, 50 mM NaF, 1% Triton, and freshly added protease inhibitor cocktail tablets from Roche (Basel, Switzerland) and 2 mM orthovanadate. Nuclear extracts were prepared using the Active Motif Nuclear Extraction kit, with 500 mM NaCl added after nuclease digestion followed by sonication of nuclear extracts. Lysates were run on polyacrylamide gels, transferred to nitrocellulose, blotted using standard protocols, and visualized on film using HRP-conjugated secondary antibodies and ECL reagent.
Cell size measurements
Cell size was measured using a Z2 Coulter particle size analyzer. At least 10 000 cells were measured for each experiment.
Characterization of gene-trapped ES cells and mice
Bhd+/m ES cells were obtained from Bay Genomics and Bhd+/- ES cells were obtained from the German Gene Trap Consortium, with isogenic parental Bhd+/+ lines being obtained from each respective source. Gene-trap insertions were mapped by first PCR amplifying the boundary of the fusion transcript, using primers within the 3′ end of the trap cassette and within the sequence tag, on cDNA generated from heterozygous ES cells. The precise genomic location of the cassette was mapped by direct PCR amplification of the respective intronic-trap cassette boundary, using primers in the exon just upstream from the insertion and within the 5′ end of the trap cassette. Southern blots were carried out on PvuII- or BamHI-digested DNA from Bhd+/m or Bhd+/- ES cells, respectively, with 5′ end-directed probes being generated by PCR amplification from genomic DNA and subsequent TOPO cloning. Bhd+/m mice were generated and intercrossed, and embryo collection was performed as previously described (Covello et al., 2006; Hartman et al., 2009).

Proliferation assay

In all, 5 × 10^4 cells were plated in a gelatinized 12-well plate and counted using the Countess automated counter from Invitrogen (Carlsbad, CA, USA) each day after plating.
Immunohistochemistry
Paraffin-embedded tissue was immunostained using a rabbit polyclonal anti-Bim antibody from Novus following antigen retrieval in sodium citrate solution and developed using diaminobenzidine.
Embryoid bodies
For suspension cultures, 8 × 10^6 ES cells were plated in 10 cm bacterial dishes in DMEM containing 10% FBS and β-mercaptoethanol. Media was changed every other day, with EBs being split 1:2 on day 3. Methylcellulose cultures and benzidine staining were performed as previously described (Cooper et al., 1974; Adelman et al., 1999).
Northern blotting
Northern blots for miRNAs were done as previously described (Gruber et al., 2009).
Chromatin immunoprecipitation
In all, 1 × 10^6 ES cells were fixed in 1% formaldehyde for 10 min at room temperature. Crosslinking was quenched with 125 mM glycine for 5 min; cells were then lysed and sonicated to shear DNA for 3 min with on/off pulses using a Fisher Sonic Dismembranator Model 500 at 20% power. Chromatin immunoprecipitation was carried out using the Imgenex QuikChIP kit according to the manufacturer's instructions, using a pan acetyl-H3 antibody from Millipore. Eluted protein/DNA complexes were purified by phenol/chloroform extraction and ethanol precipitation.
Statistics
All P-values were generated using a two-tailed Student's t-test.
"year": 2011,
"sha1": "3771de549d2db13414b06511dc702ddb1c97f0da",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/onc2010628.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "30f85c4d6af5803c0253d098e8efc3a055014b55",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
GLOBAL DYNAMICS OF A MODEL OF JOINT HORMONE TREATMENT WITH DENDRITIC CELL VACCINE FOR PROSTATE CANCER
Abstract. Advanced prostate cancer is often treated by androgen deprivation therapy, which is initially effective but gives rise to fatal treatment-resistant cancer. Intermittent androgen deprivation therapy improves the quality of life of patients and may delay resistance towards treatment. Immunotherapy alters the body's immune system to help fight cancer and has proven effective in certain types of cancer. We propose a model incorporating androgen deprivation therapy (intermittent and continual) in conjunction with dendritic cell vaccine immunotherapy. Simulations are run to determine the sensitivity of cancer growth to the dendritic cell vaccine administration schedule. We consider the limiting case where dendritic cells are administered continuously, and perform analysis on the full model and the limiting cases of the model to determine necessary conditions for global stability of cancer eradication.
1. Introduction. Prostate cancer is one of the most ubiquitous cancers in males in the United States, with an expected one in six men diagnosed in their lifetime [23]. The prostate requires androgens, especially testosterone and 5α-dihydrotestosterone (DHT), to function and continue proliferation. The current treatment protocol is to suppress androgen, which should lead to inhibited growth of cancer cells as well. However, although the initial response rates to therapy are excellent, eventually androgen independent prostate cancer arises and is most often fatal [33].
Recent research and clinical trials have begun to question whether the efficacy and comfort of this treatment could be increased by intermittent androgen deprivation (IAD) therapy [6,22,28,37]. Intermittent androgen deprivation therapy works by administering androgen deprivation therapy until the patient reaches a certain threshold of prostate specific antigen (PSA), which is a biomarker of the disease. Upon reaching this threshold, the patient is removed from therapy until their PSA levels rise above a second threshold, when they are once again put on androgen deprivation therapy. This treatment still results in overall androgen suppression and may improve the quality of life of the individual patient, lessening the unpleasant side effects of androgen deprivation therapy [5,18]. Although it has often been proposed that intermittent androgen deprivation therapy may increase time to androgen independent relapse, this statement remains unproven in clinical trials, including meta-studies [2,5,39,43]. Of course, there are studies which suggest that only certain patient groups may benefit from intermittent androgen deprivation therapy, but determining those groups remains a work in progress [25,44]. Adding to the confusion is the fact that there is no consensus in the medical community on the duration or intervals of the treatment [10]. Some have concluded that the findings determining that intermittent androgen deprivation therapy is non-inferior to continual androgen deprivation therapy are inconclusive due to flawed or inconsistent studies [17,34].
Immunotherapy treatments use the body's immune system to fight against cancer by enhancing or repressing an immune response in the patient. Dendritic cells are the strongest of the antigen-presenting cells, meaning they ingest antigens and present the antigen material to naive and memory T cells in the system. These T cells then target the specific antigen for removal. Dendritic cell vaccines are created by extracting dendritic cells from the patient, loading the cells with antigens and re-injecting the dendritic cells into the patient. The target antigen in this case is PAP (Prostatic acid phosphatase), which has been used in clinical trials [3,9,40]. The dendritic cells then serve to activate the T cells into an immune response against PAP. Dendritic cell vaccines have been suggested as a method to improve the efficacy of hormone therapy treatment of advanced prostate cancer. In fact, Provenge is an FDA-approved dendritic cell vaccine for advanced prostate cancer which has been shown to extend the life of patients [4,11].
An emerging field within mathematical oncology is determining optimal dosing strategies in order to manage cancer. Metronomic therapy, or the use of much lower dosages of medication more frequently, has been mathematically investigated as a possible alternative to large dosage strategies [29,30]. There are many advantages to this type of approach: lowering cytotoxicity, lowering costs, and increasing time until treatment resistance. This strategy aims to manage cancer, rather than eradicate it.
This research examines the effect of varying the frequency with which the patient receives dendritic cell vaccines on the long term behavior of prostate cancer. We examine not only the case of discrete injections, but also consider a continuous injection, as through an intravenous (IV) therapy. We perform analysis on the model to determine biologically realistic parameter values which could result in a stable disease state or the elimination of the prostate cancer, and analytical results are corroborated with simulations. The parameters which do not have existing literature values, or are patient-specific, are thoroughly investigated with simulations to determine what changes to dendritic cell vaccine therapy must be implemented to delay the onset of androgen independent prostate cancer. Additionally, we examine the quasi-steady state system and perform a full analysis of the steady states, determining what conditions are necessary to generate global stability.
2. Model formulation. There have been many mathematical models that discuss the evolution and treatment of prostate cancer using androgen deprivation therapy. In 2004, Jackson [20] formulated a partial differential equation model which featured both androgen independent and androgen dependent cancer cells. This model exhibited the effects of androgen independent relapse consistent with experimental data [20]. Ideta et al. [19] formulated a mathematical model comprised of ordinary differential equations designed to determine prostate cancer growth while on intermittent androgen therapy. Their model featured androgen concentration, androgen independent, and androgen dependent cancer cells, with a term to model the mutation rate from androgen dependent to androgen independent in the absence of androgen.
Many have extended upon the model proposed by Ideta et al. Hirata et al. [15] extended their model to include two sub-classes of androgen independent cancer cells: a subpopulation whose mutation to androgen independence was reversible, and a subpopulation whose mutation to androgen independence was irreversible. From this model, they have investigated how to optimize patient treatment protocols [14,16]. Additionally, they were able to use the model to classify patients and determine whether intermittent androgen deprivation therapy would be more or less effective for each type of patient [13]. Most recently, they have analyzed their model to find conditions for the existence of non-trivial periodic orbits for one of the types of patients [12].
Tanaka et al. [41] extended the Ideta et al. model by incorporating stochasticity, for more realistic PSA level data. Portz et al. [37] introduced cell quotas to model how dependent the androgen dependent and androgen independent cancer cells are on androgen concentration. This model has been furthered by other researchers and compared to previous models [8,32].
Immunotherapy has also been modeled by mathematicians. Kirschner and Panetta [24] established a model which quantified the anti-tumor immune response using populations of T cells, IL-2, and tumor cells. Their model also allowed for the incorporation of therapy. In the context of prostate cancer, Kronik et al. [26] formulated a mathematical model which investigated the response of prostate cancer to dendritic cell vaccines, corroborated with patient data. However, they only considered one type of tumor cell population and did not account for hormonal therapy.
There has been little work investigating the combination of dendritic cell vaccines with any type of androgen deprivation therapy. Portz and Kuang [27,36] examined immunotherapy in conjunction with IAD for prostate cancer, combining the Ideta et al. model [19] with the Kirschner and Panetta [24] model. Their system of 6 equations included androgen independent cells, androgen dependent cells, androgen, cytokines (IL-2), activated T cells, and dendritic cells. Recently, Peng et al. [35] used a 10-dimensional ordinary differential equation model to investigate how androgen deprivation therapy combines with immunotherapy, including dendritic cell vaccines. They were able to determine how to synergistically combine ADT with immunotherapy and fit their model with mouse data. However, their model did not take into account intermittent androgen deprivation therapy.
Our model is based on the Portz and Kuang [27,36] model described above with some appropriate modifications. Their results showed that adding dendritic cell vaccines resulted in an increase in time to androgen independent relapse. However, the study did not consider how the dynamics of dosage amounts and frequencies for the dendritic cell vaccines might influence the outcome of the treatment. It has been hypothesized in a mathematical model that changing the dosages and frequency of administration of the dendritic cell vaccine may drastically alter the time to androgen independent relapse [26].
Our mathematical model is a population-style model of the interaction between androgen dependent cancer cells (X1), androgen independent cancer cells (X2), activated T cells (T), the concentration of the cytokine IL-2 (IL), the concentration of androgen (A), and the number of dendritic cells (D).
The androgen dependent cancer cells (AD) are governed by their proliferation and death (given by r1(A, X1, X2)), their mutation to androgen independent cancer cells (AI), the mutation from androgen independent cells, and the number killed by T cells. The AI cancer cells are also governed by proliferation and death, independent of androgen, their mutation from AD cells, their mutation to AD cells, and the number killed by the T cells. The T cell counts are determined by the number activated by the dendritic cells, their natural death, and clonal expansion. The concentration of cytokines is determined by their production by stimulated T cells and a clearance rate. The concentration of androgen in the blood is described by a homeostasis term and a deprivation therapy term. The dendritic cells are governed by their death rate.
The modeling of the intermittent androgen deprivation therapy is governed by u(t). Note that if u(t) = 1, we are modeling the 'on-treatment' portion of the therapy, and when u(t) = 0, we model the 'off-treatment' portion. The structure of the governing equations is sketched below. In this setting, y(t) represents the serum PSA level. When the PSA level decreases below a certain threshold, L0, the androgen deprivation therapy is suspended. When the PSA level increases above another threshold level, L1, the 'on-treatment' therapy starts. Note that L0 < L1.
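The following schematic assembles the six governing equations from the verbal description above; it is a sketch consistent with that description rather than a verbatim reproduction of the authors' equations, and the symbols γ (androgen regulation rate) and ω (cytokine clearance rate) are placeholder names.

```latex
% Schematic form of the model, assembled from the description above.
% Functional forms r_1, f_1,...,f_4 and mutation rates m_1(A), m_2(A)
% are specified in the text below; \gamma and \omega are placeholder
% symbols for the androgen regulation and cytokine clearance rates.
\begin{align*}
\frac{dX_1}{dt} &= r_1(A,X_1,X_2)\,X_1 - m_1(A)X_1 + m_2(A)X_2 - f_1(X_1,X_2,T)\,X_1,\\
\frac{dX_2}{dt} &= r_2 X_2\Bigl(1-\frac{X_2}{K}\Bigr) + m_1(A)X_1 - m_2(A)X_2 - f_2(X_1,X_2,T)\,X_2,\\
\frac{dT}{dt}   &= \frac{eD}{g+D} - \mu T + f_3(I_L,T)\,T,\\
\frac{dI_L}{dt} &= f_4(X_1,X_2,T) - \omega I_L,\\
\frac{dA}{dt}   &= \gamma\,(a_0-A) - \gamma\,a_0\,u(t),\\
\frac{dD}{dt}   &= -c\,D .
\end{align*}
```

The dendritic cell vaccine then enters as impulsive additions to D at the injection times (or, in Section 3.2, as a constant infusion rate v).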
Although our model is similar to the Portz and Kuang model, we include three major changes which allow for a more realistic model. Firstly, we change the growth and death functions. The original model incorporated an exponential growth rate for the androgen independent cell population, which is very unrealistic. We modified this growth function to be logistic. Similarly, we change the growth and death for the androgen dependent cell population. We assume that the lack of androgen affects the androgen dependent cell population in two ways: lack of androgen lowers the growth rate, and lack of androgen actively kills the cell population. This is a realistic assumption which has been used in other prostate cancer models with great success [7,21]. We note that when androgen is at its homeostatic level, a0, the growth rate of androgen dependent cells is at its highest. As the levels of androgen decrease, the growth rate of androgen dependent cells also decreases, and the death due to lack of androgen increases. When androgen is at its lowest value, 0, there is no growth of androgen dependent cells, and the death rate due to lack of androgen is at its highest.
The second major change considers the mutation functions. The original model assumed that AD cells mutated to AI cells in an androgen-depleted environment. However, they did not consider the option that AI cells might mutate back to AD cells in an androgen-rich environment. We include this second mutation term in our model. If A = a0, we note that there would be no mutation from the AD to the AI cell population, as we are at the homeostatic androgen level. As we decrease the level of androgen, we increase the rate at which AD to AI mutations occur, until we reach A = 0, which gives the largest mutation rate m1. For mutation from AI to AD, we assume a similar, but opposite, function. When we have low levels of androgen, we assume that there is no mutation from AI to AD. However, as we increase through androgen levels, the mutation rate from AI to AD increases accordingly. The mutation rate functions are listed below.
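A natural choice satisfying these endpoint conditions, zero AD-to-AI mutation at A = a0 rising to the maximal rate m1 at A = 0, and the reverse for AI-to-AD mutation with maximal rate m2, is linear in the androgen level. The forms below are a plausible reconstruction consistent with that description, not a quotation of the authors' functions.

```latex
% Plausible mutation-rate functions meeting the stated endpoint
% conditions; m_1, m_2 denote the maximal mutation rates.
m_1(A) = m_1\Bigl(1-\frac{A}{a_0}\Bigr), \qquad
m_2(A) = m_2\,\frac{A}{a_0}, \qquad 0 \le A \le a_0 .
```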
The third major change we consider is using a generic class of functions for the four immune system interactions. As the immune system is extremely complicated with many interacting cells, there is not much data to help form a hypothesis on how these various components of the immune system interact together. In order to combat this, we consider functions that are generic, but we do want some basic properties for these functions. Specifically, we assume the following.
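The properties actually invoked in the analysis of Sections 4 and 5 can be collected as follows; this list is inferred from how the f_i are used there, and is therefore an inference rather than the authors' exact list.

```latex
% Properties of the interaction functions f_i inferred from their use
% in Sections 4 and 5 (an inference, not the authors' exact list).
\begin{itemize}
  \item $f_1, f_2 \ge 0$ with $f_i(X_1,X_2,0) = 0$: no activated
        T cells, no killing of cancer cells;
  \item $f_1, f_2$ are monotonically increasing in $T$ and
        monotonically decreasing in $X_1$ and $X_2$ (the per-capita
        killing rate saturates with tumor burden);
  \item $f_3$ is nonnegative and bounded, so that $\mu - f_3(I_L,T)$
        can remain positive, as assumed in the boundedness arguments.
\end{itemize}
```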
3. Simulations and observations. There are many potential functions which could be excellent candidates for our f_i, but a logical choice would be a Holling Type II function. It is reasonable to assume that after a large enough presence of cancer cells, the death rate would approach a maximum death rate. It is not sensible to use a linear function, which would assume the death rate is proportional to the number of cancer cells. The functions that we propose and use for simulations are of this saturating type; a sketch is given below. Explanations for the various parameters and their chosen values, as well as their respective sources, are displayed in Table 1.
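One Holling Type II reconstruction of the killing terms that is consistent with the critical-dose formula obtained in Section 4 is the following; these exact forms are an assumption on our part, with e_i the maximal kill rates and g_i half-saturation constants.

```latex
% Holling Type II killing terms consistent with the Section 4 formula
% v > c g g_2 r_2 \mu / (e_2 e - g_2 r_2 \mu); a reconstruction, with
% e_i maximal kill rates and g_i half-saturation constants.
f_1(X_1,X_2,T) = \frac{e_1\,T}{g_1 + X_1 + X_2}, \qquad
f_2(X_1,X_2,T) = \frac{e_2\,T}{g_2 + X_1 + X_2}.
```

The immune-stimulation terms f_3 and f_4 would then take analogous saturating (Michaelis-Menten) forms in IL and in tumor burden, in the spirit of Kirschner and Panetta [24].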
As we were unable to find a literature estimate on what the carrying capacity of the tumor cells would be, we decided to estimate with a brief calculation. The mean weight of a male prostate is 11 grams (7-16 grams) and the average human cell has a mass of 1 ng. Thus, we can conjecture that the carrying capacity is on the order of 10 billion cells.
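Written out, the estimate is simply:

```latex
K \approx \frac{11\ \text{g (mean prostate mass)}}{1\ \text{ng per cell}}
  = 1.1\times10^{10}\ \text{cells} \approx 10\ \text{billion cells}.
```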
In previous iterations of the immunotherapy mathematical model, hypothetical vaccines were administered every 30 days consisting of 0.3 million dendritic cells. In order to preserve the correct dosage, when the length between vaccinations was altered, the dosage was also altered accordingly. For the discrete case, variances from daily vaccinations to vaccinations separated by 120 days were considered. Simulations are extended to 4000 days to be able to determine long-term behavior of the prostate cancer. A continuous dose of dendritic cell vaccines was also considered, as if constantly administered through an IV. All simulations have the same initial conditions: an initial AD count of 15 million cells, an AI count of 0.1 million cells, 0 activated T cells, 0 concentration of cytokines (ng/mL), 30 nmol/mL concentration of androgen, and 0 dendritic cells. We have set the threshold to turn off treatment, L0, at 5 ng/mL and the threshold to begin treatment, L1, at 15 ng/mL. Once graphical results are obtained, it is advantageous to determine numerically and analytically whether the more successful dendritic cell vaccination timings have an overall effect.
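The on/off logic just described is a simple hysteresis switch on the PSA level. Below is a minimal sketch of that switching rule with the thresholds from the text; the names (u, psa, L0, L1) follow the paper's notation, and this is an outline of the rule only, not the authors' simulation code.

```python
# Minimal sketch of the intermittent androgen deprivation (IAD) switching
# rule: treatment turns on when PSA rises above L1 and off when it falls
# below L0 (hysteresis). Thresholds follow the text: L0 = 5, L1 = 15 ng/mL.

L0, L1 = 5.0, 15.0  # ng/mL

def update_treatment(u, psa):
    """Return the new treatment state given the current state and PSA level."""
    if u == 0 and psa >= L1:   # PSA climbed above the upper threshold: start therapy
        return 1
    if u == 1 and psa <= L0:   # PSA dropped below the lower threshold: stop therapy
        return 0
    return u                   # otherwise keep the current state

# Example: a PSA trace crossing both thresholds.
u = 0
for psa in [8.0, 12.0, 16.0, 10.0, 6.0, 4.5, 7.0]:
    u = update_treatment(u, psa)
    print(psa, u)   # u switches to 1 at 16.0 and back to 0 at 4.5
```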
3.1. Discrete case. For all numerics, we consider that e1 = e2, which means that T cells are able to kill AD and AI cancer cells equivalently. We begin by keeping e1 = e2, the effectiveness with which the T cells eliminate cancer cells, steady at a value of 0.75, which is reasonable within the range given in Table 1. We run a series of simulations, out to 4000 days, varying only the dosage level and frequency of the dendritic cell vaccine. The vaccination frequencies are varied from daily vaccinations to 120 days between each injection, and the dosages are varied from 0.01 to 1.20 million cells, respectively. Figure 1 displays the PSA levels and the androgen dependent (X1) and androgen independent (X2) cell densities versus time in days. We can see from the graph that the androgen suppression is triggered when the level reaches about 15 ng/mL, and is discontinued when the level reaches about 5 ng/mL. As can be seen, the PSA levels eventually skyrocket, indicating the rise of androgen independent cancer. When the PSA levels grow drastically, it is clear that androgen suppression therapy is no longer effective. The effect is clear in the corresponding AD and AI graphs: the AD cancer cells are eliminated in the infrequent injections, which gives rise to the more fatal androgen independent cancer. This is exhibited through the AI count, which skyrockets in the infrequent injection case. The rapid increase of AI cells indicates that the more fatal androgen resistant prostate cancer has begun. From these graphs, it is apparent that more frequent injections of the dendritic cell vaccine can help the effectiveness of the intermittent hormone therapy, delaying the rise of androgen independent cancer. We can see that by increasing the frequency of the injections, but keeping the total dosage identical, there are vast improvements in the survival time of the patient. In order to quantify this relationship, we perform numerical analysis to determine the lowest value of e1 required for the solution to produce a limit cycle; such a limit cycle corresponds to a stably managed disease state. The biological range, as stated in Table 1, is 0-1 cancer cells killed per day. A summary of the limit cyclical behavior is charted in Figure 2. We immediately notice that as vaccine timing is shortened, the minimal value of e1 necessary to exhibit a stable disease state also decreases. We recall that e1 could be a patient-specific parameter, as it is the maximum rate that T cells kill cancer cells per day. Thus, for those with weaker immune systems, more frequent injections could be much more effective. We note that for cases where the vaccine timing is greater than 30 days, even with the maximal value e1 = 1, there is no stable disease state. Therefore, infrequent large doses of vaccine are much less effective at stabilizing the disease. Additionally, we can examine the shape of the limit cycles: as we increase the frequency of the dosage, we notice that the limit cycles are much smoother.
Figure 2. Limit cycle solutions for androgen dependent (X1) and androgen independent (X2) cancer cells. The minimal value of e1 required to produce limit cycle behavior is noted above each solution. As vaccine timing decreases, the minimal e1 necessary to have a stable disease state decreases.

3.2. Continuous case. We have shown that, even for the total amount of vaccine being constant, more frequent dendritic cell vaccine administrations are more effective than infrequent administrations. If we consider the limiting case of this behavior, we arrive at the case of a continual injection, as if the patient is always connected to the vaccine through an IV system. In order to accommodate the continual injection, we slightly modify the equation (6) representing the dendritic cell numbers to dD/dt = v − cD, where v is the continual injection rate. The simulation begins in this case with an initial injection of 0.04 billion dendritic cells, which is then kept constant throughout the duration of the simulation. For this continuous case, a variety of values of e1 were considered, to determine if continuous vaccinations may help eliminate cancer. Figure 3 displays the PSA concentration for the different cases. We can see that, contrary to the discrete case, even at the lower value of e1 = 0.25 the cancer is manageable. In fact, we even see elimination of cancer for very large values of e1, for e1 > 0.75. In order to clearly see if the cancer is manageable for lower values of e1, we turn to the cell counts for AD and AI cancer cells. Figure 3 displays these counts for the continual dendritic cell vaccinations. It is apparent that, for the duration of the simulation, AI cells only become dominant for the e1 values of 0 and 0.25. This implies that androgen independent cancer is avoidable for a larger range of e1 values than in the discrete case. This means that despite a weaker auto-immune response, it may be possible to suppress the growth of androgen independent cancer cells. Additionally, the treatment cycle lengths are very different depending on the e1 value: as we increase through our e1 value, we see that the length of 'off-treatment' is much longer, resulting in more comfort for the patient.
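A quick explicit solution shows where the continual infusion settles; this is elementary and uses only equation (6) as modified above.

```latex
% Explicit solution of dD/dt = v - cD with D(0) = D_0:
D(t) = \frac{v}{c} + \Bigl(D_0 - \frac{v}{c}\Bigr)e^{-ct}
\;\longrightarrow\; D^{\ast} = \frac{v}{c}
\quad \text{as } t \to \infty .
```

This limiting level D* = v/c is exactly the dendritic-cell component of the equilibria found in Section 4.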
Since we see a full range of behavior as we increase through e 1 , we take a closer look at a bifurcation diagram for the parameter. The resulting bifurcation diagram is shown in Figure 4. We can see that during most biologically relevant parameter values of e 1 , which is a range of 0-1, we have cyclical behavior. At the smallest range of e 1 = 0, we have stability of carrying capacity equilibrium. However, once we increase past e 1 = 0, we can see that we have stable cyclical behavior. As observed in Figure 3, as we increase through e 1 we continue observing stable disease cycles. Finally, for higher values of e 1 , we notice that the cycles collapse into a single steady state solution, which is stable. As we continue increasing e 1 past this point, the steady state approaches zero, which represents the eradication of the disease. We note that in examining the solutions for small values of e 1 , the Figure 3. PSA serum level, AD cell density, and AI cell density for continual dendritic cell vaccinations, with an injection rate 0.04 billion cells for various values of e 1 . It is apparent that in this continuous case, a wider range of e 1 is able to suppress the growth of cancer and elongate the cycles of IAD. graph appears to be zigzagging. We believe this may be due to the model dynamics having a long transition time before approaching the limiting periodic state. We also have interesting dynamics at e 1 = 0.24, where we seem to approach a steady state, before the solution switches back into cyclical behavior. 4. Basic properties of the system. Although numerically studying the behavior of equations is useful, further information can be gathered by analysis of the equations themselves. It is also a metric to ensure that the behavior of the solutions have biological meaning. In order to analyze our equations, we assume that we are in fact continuously suppressing the androgen, instead of intermittent suppression. This assumption will lead to the following change of equation governing the concentration of androgen: Proof. Solutions for D and A are explicitly solvable, whose solutions are positive. If solutions do not remain positive there must be some time t 1 > 0 such that , or more specifically, We now examine the equations in detail to determine equilibrium points and their respective stabilities. We hope that this will give us as sense of biological meaning. What values must biological parameters exhibit in order for prostate cancer to be eliminated?
Proposition. The system always has the disease-free equilibrium E*0; an additional androgen independent (endemic) equilibrium E*1 exists when r2 > f2(0, 0, T*0).

Proof. All variables except X2 are easily solved and have only one steady state. For X2, we can either have X*2 = 0, which always exists, or an X2 that solves g(X2) := r2 − r2X2/K − f2(0, X2, T*) = 0. We examine this quantity in more detail by calculating its sign at X2 = 0: g(0) = r2 − f2(0, 0, T*0), where T*0 = ev/(µ(cg+v)) is the corresponding T* value when X*2 = 0. Next we calculate the sign as X2 → K, its maximum possible value: g(K) = −f2(0, K, T*) ≤ 0. If r2 − f2(0, 0, T*0) < 0, there is no biologically relevant X2 value that solves g(X*2) = 0, since the function f2 is monotonically decreasing in X2. Therefore, there is only the trivial, disease-free equilibrium E*0 = (0, 0, ev/(µ(cg+v)), 0, 0, v/c) in our domain.
On the other hand, if r2 − f2(0, 0, T*0) > 0 then, by the Intermediate Value Theorem, there must be some X*2 ∈ (0, K) which solves g(X*2) = 0, giving us an endemic equilibrium E*1 = (0, X*2, T*1, I*L1, 0, v/c), representing androgen independent relapse. Now that the equilibria have been found, we turn to the eigenvalues of the Jacobian to determine stability. We examine the disease-free equilibrium E*0 and evaluate the Jacobian of the system there. Since we know that all parameters are non-negative, and assuming that all parameters are in fact positive, we can see that all of the resulting eigenvalues except one are guaranteed to be negative. Only the eigenvalue r2 − f2(0, 0, T*0) has the possibility of being negative, positive, or zero. If r2 − f2(0, 0, T*0) < 0, the cancer-free equilibrium is in fact stable. Otherwise, if r2 > f2(0, 0, T*0), the cancer-free equilibrium is unstable. We would like to understand biologically what this means. We write our stability condition as r2 < f2(0, 0, T*0): this means the maximal growth rate of androgen independent cells is less than the death rate of androgen independent cells due to T cells. We observe that since these functions are monotone (in the case of strictly monotone functions), we can invert f2 to find an equivalent condition that involves T*0. Recalling that T*0 = ev/(µ(cg+v)), this means we can solve for an explicit condition involving v, our critical dosage parameter. Theoretically, if the remaining parameters could be measured for a patient, a critical dosage could be calculated. We note that given patient specific parameters, such as their individual T cell efficiencies, we are able to calculate a necessary dose level to stabilize the disease-free equilibrium. In the context of our proposed simulation functions f_i, given in (11), our condition for stability becomes v > cgg2r2µ/(e2e − g2r2µ), which is indeed a critical dosage value.
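As a concrete illustration, the critical continual dose can be computed directly from that condition; the parameter values in the example call below are placeholders, not fitted patient data.

```python
# Critical continual dendritic-cell dose from the stability condition
#   v > c*g*g2*r2*mu / (e2*e - g2*r2*mu),
# valid when the denominator is positive. Parameter values in the
# example are illustrative placeholders, not fitted patient data.

def critical_dose(c, g, g2, r2, mu, e2, e):
    denom = e2 * e - g2 * r2 * mu
    if denom <= 0:
        return float("inf")  # no finite dose stabilizes the disease-free state
    return c * g * g2 * r2 * mu / denom

# Illustrative call with placeholder parameter values:
print(critical_dose(c=0.1, g=1e7, g2=1e8, r2=0.025, mu=0.03, e2=0.75, e=2e7))
```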
5. Global analysis of the limiting system. As the full system is difficult to perform analysis on, we consider the limiting systems obtained by performing a quasi-steady state approximation. This can give insight into the biology. We begin by assuming quasi-steady states for androgen, cytokines, and dendritic cells. This is a reasonable assumption, since the time scales at which these processes occur are much shorter than that at which the populations of cancer cells grow. Then, we use the results of Thieme [42] to examine the asymptotic behavior of the limiting system. We consider the system when androgen deprivation therapy is continual (A = 0). We examine thoroughly the case where androgen deprivation therapy is continually on, and determine necessary conditions on f_i(X1, X2, T) to obtain global stability of the eradication of prostate cancer, as well as conditions for global stability of the diseased steady state.
We examine the case where androgen deprivation therapy is turned on (A = 0). We let IL, D, and A go to quasi-steady state, ending up with the reduced set of equations (18)-(23). Within this reduced system, it is apparent that lim_{t→∞} X1(t) = 0. Thus we can reduce the system further to a planar system (25) in X2 and T, which is defined on the subdomain Ω = {(X2, T) : X2 ≥ 0, T ≥ 0} and is the limiting system of (18)-(23). Before we begin the analysis of the system, we would like to ensure that the behavior of the quasi-steady state system is analogous to the full system. We must quantify what dynamics are preserved and eliminated by simplifying the system. We run the system with the same parameters for the full system and for the reduced system. Figure 5 shows the comparison. We can see that by examining the quasi-steady state system we do lose some of the dynamics in certain cases, as when e1 = 0.15. For all other values of e1, however, we note that the quasi-steady state system very closely resembles the full system, meaning that our results for the quasi-steady state system may be extended to the full system. We also note that our expected outcome, that X1 goes to zero, is exhibited in all of the figures.
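For orientation, a plausible form of the planar limiting system (25), assembled from the quasi-steady state level D* = v/c of the dendritic cells and the T-equation as used in the proofs below (and therefore a reconstruction rather than a quotation), is:

```latex
% Plausible form of the limiting planar system (25); D sits at its
% quasi-steady state D^* = v/c, and f_3 denotes the clonal-expansion
% term after eliminating I_L. A reconstruction, not a quotation.
\begin{align*}
\frac{dX_2}{dt} &= r_2 X_2\Bigl(1-\frac{X_2}{K}\Bigr) - f_2(0,X_2,T)\,X_2,\\
\frac{dT}{dt}   &= \frac{ev}{cg+v} - \mu T + f_3(X_2,T)\,T .
\end{align*}
```

Note that ev/(cg+v) is just eD*/(g+D*) with D* = v/c, matching the equilibrium value T*0 = ev/(µ(cg+v)) used throughout.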
Theorem 5.1. The disease-free steady state of (18)-(23) is globally asymptotically stable under the conditions detailed below, chief among them r2 < f2(0, 0, T*0) and µ > f3(IL, T). To prove this theorem, we break down the result into several propositions for simplicity. We will begin with positivity and boundedness, move to local asymptotic stability, and end with conditions necessary for global asymptotic stability. Proof. We begin with the proof of positivity, examining T first. We note that since we are assuming that T(t0) ≥ 0, in order for T(t) < 0 for some t, we would require that dT/dt < 0 when T = 0. However, dT/dt|_{T=0} = eD/(g+D) > 0, since all parameters are positive. For X2, since we are assuming that X2(t0) ≥ 0, in order for X2(t) < 0 for some t, we would require that dX2/dt < 0 when X2 = 0. However, dX2/dt|_{X2=0} = m1X1 > 0, since we have already proved that X1 stays positive and all parameters are positive.
We have already proven that solutions remain nonnegative, so we now look to boundedness. We begin with T: dT/dt = eD/(g+D) − µT + f3(IL, T)T ≤ e − ḡT, where ḡ = min[µ − f3(IL, T)] > 0 by assumption. This implies that T is bounded. We look towards the boundedness of X2 now. We immediately see that X2 is bounded above by K.
We are now ready to prove the main result for global asymptotic stability.
Proof of Theorem 5.1. Since solutions to the system are positive and bounded, we immediately see that there can be no limit cycles around our non-negative equilibrium, since our only equilibrium is on the boundary. By the Poincaré-Bendixson theorem, all solutions tend towards the disease-free steady state, so E*0 is globally asymptotically stable.
We can interpret two of these conditions for global stability in terms of the biology. We can see that the condition r2 < f2(0, 0, T*0) can be interpreted as follows: the maximal intrinsic growth rate of the cancer cells must be less than the death rate of the cancer cells due to the T cells. Recall that this was the condition for local stability of the full system, so we are unsurprised to see the same condition again. Similarly, µ > f3(IL, T) means that we want the death rate of the T cells to be greater than the production of T cells due to IL. Now that we have examined the conditions for global stability of the disease-free equilibrium, we also want to explore the dynamics for the equilibrium that is not disease-free.
Theorem 5.2. The diseased steady state of (18)-(23) is globally asymptotically stable under three conditions, i)-iii), invoked in the propositions below: condition i) is the boundedness assumption µ > f3(IL, T), condition ii) is r2 > f2(0, T*0), and condition iii) ensures −µ + T ∂f3/∂T < 0 at the equilibria. To prove this theorem, we break down the result into several propositions for simplicity. We will begin with positivity and boundedness, move to local asymptotic stability, followed by eliminating limit cycles, and end with conditions necessary for global asymptotic stability. Proof. Positivity and boundedness have already been proven in Proposition 3, and those results hold under the current conditions, as long as we assume condition i).
Proposition 6. The limiting system (25) contains two equilibria: the disease-free equilibrium, E*0, and a secondary equilibrium, E*1. The secondary equilibrium is positive (assuming condition ii)). The disease-free equilibrium is a saddle point (under conditions ii) and iii)).
Proof. The existence of the equilibria follows from Proposition 2, so we know that under condition ii), r2 > f2(0, T*0), we will have two biologically relevant equilibria: E*0 = (0, T*0) = (0, ev/(µ(cg+v))) and E*1 = (X*2, T*1). The local stability of the disease-free steady state E*0 is exhibited in the Jacobian evaluated there, whose eigenvalues are λ1 = r2 − f2(0, T*0) > 0, by condition ii), and λ2 = −µ + T*0 (∂/∂T)f3(0, T*0) < 0, by condition iii). Thus, the disease-free equilibrium is a saddle point. Now we examine the Jacobian at the diseased equilibrium, with trace τ and determinant ∆ (sketched below for the reconstructed planar system). In order for E*1 to be stable we require τ < 0 and ∆ > 0. Notice that τ < 0 follows from assuming condition iii). However, we have no information about the sign of ∆. Given that τ < 0, we know that E*1 is either a saddle point or a stable node/spiral. By the Poincaré-Bendixson theorem, the only option remaining is that all solutions of (25) converge to E*1. Thus, E*1 is globally asymptotically stable.
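For the planar form sketched in Section 5, the trace and determinant at E*1 work out as below; this is a sketch under the reconstructed system, not the authors' displayed expressions.

```latex
% Sketch of the trace and determinant at E_1^* = (X_2^*, T_1^*) for the
% reconstructed planar system; all partial derivatives are evaluated at
% (X_2^*, T_1^*), and the equilibrium relation
% r_2(1 - X_2^*/K) = f_2(0, X_2^*, T_1^*) has been used to simplify.
\begin{align*}
\tau   &= \Bigl(-\tfrac{r_2 X_2^*}{K} - X_2^*\,\partial_{X_2} f_2\Bigr)
        + \bigl(-\mu + f_3 + T_1^*\,\partial_{T} f_3\bigr),\\
\Delta &= \Bigl(-\tfrac{r_2 X_2^*}{K} - X_2^*\,\partial_{X_2} f_2\Bigr)
          \bigl(-\mu + f_3 + T_1^*\,\partial_{T} f_3\bigr)
        + X_2^*\,(\partial_{T} f_2)\,T_1^*\,(\partial_{X_2} f_3).
\end{align*}
```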
6. Conclusion. Current treatment options for late-stage prostate cancer are suboptimal in terms of survival and quality of life. By examining a model of intermittent hormone therapy coupled with dendritic cell vaccines, we are able to prolong both the life and quality of life of the patient. We found that, keeping total yearly dosages the same, more frequent injections are conducive to managing prostate cancer for a longer period of time. We extrapolate this idea to the extreme by modifying the model to include a 'continual' dosage, as if administered through an intravenous fluid. In our model, there are several parameters which are patient specific, or do not have accepted literature values. We examined the effect of varying values of e1, the killing T-cell efficiency. Predictably, increasing e1 led from androgen independent relapse, to stable limit cyclical behavior, and, when increased enough, to total eradication of the disease. This parameter measures how effective dendritic cell vaccine therapy will be: if e1 is small, the therapy will be negligible. e1 also acts to elongate the cycle times for a stable cyclical disease, which means increased time for the period when the patient is not undergoing androgen deprivation therapy. We performed bifurcation analysis on the parameter e1 to examine the various behaviors that exist in the system. As our simulations only considered the case where e1 = e2, we could also investigate how the dynamics change when we allow these values to be different.
Additionally, for mathematical analysis, we simplify the model to have continuous androgen suppression, in order to determine the effect of continual dosages. We notice there are two possible equilibria: a cancer-free equilibrium and an androgen independent (fatal) cancer equilibrium. If our continual dosage is less than a determined critical value, the cancer-free equilibrium exists, but it is unstable. If our continual dosage is higher than that critical value, the cancer-free equilibrium is stable and a cancerous equilibrium is born (stability unknown). These findings have biological significance. In previous papers it has been determined that, for a 30-day vaccine, large values of e1 are necessary to ensure cancer-free progression [36]. In this analysis, we could have smaller values of e1 that still result in a stable disease-free equilibrium. For patients with less effective immune systems (lower e1 values), it is possible to eradicate cancer with a higher presence of dendritic cells. As dendritic cell vaccines are not known to have an adverse effect on the human body, it is possible that for patients with weakened immune systems, a tailored dose could be administered.
We further examined the limiting cases of behavior for this system, by allowing several parameters to go to quasi-steady state. We were able to determine requirements for global stability, that is, the guarantee of elimination of prostate cancer in some cases. Additionally, we were able to determine further conditions for the global stability of the endemic equilibrium.
Despite the insights that this model has afforded us, there is still much to be done. Future work may include comparing the model with available patient data. This will allow us to determine the efficacy of this model at predicting behavior of the prostate cancer. We will also be able to explore the patient-specific parameters and the effect these have on the final outcome. Additionally, we would like to be able to compare our model with other patient-data validated models. Mathematically, the local stability of the endemic equilibrium of the full system should also be studied in detail.
"year": 2017,
"sha1": "ef6d801cd01abfb3a884847a8024659e29898ad8",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=86aea472-0f54-4956-987a-39cb94f49efd",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9a51b9a8ec9951e7073224c8667b208bd1321cc8",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
BRAINSTEM ACETYLCHOLINE SENSITIVE NEURONS ACTIVATED BY CUTANEOUS IMPULSES IN CATS
In order to determine the cholinoceptive mechanism associated with cutaneous inhibition of jaw-closing and lumbar motoneurons, the area related to the inhibition produced by stimulation of the superficial radial nerve was identified by a lesion within the pontomedullary reticular formation, and effects of drugs upon neurons were studied within this area. The cutaneous inhibition, as tested by the inhibition of monosynaptic reflex activity of jaw-closing and that of hindlimb spinal motoneurons, was completely abolished by lesion of the medial portion of the pontomedullary reticular formation, but was little affected by lesions of the lateral portion. Intravenously administered physostigmine (0.15-0.30 mg/kg) excited 11 of 21 neurons, whereas electrophoretic ACh (90 nA) excited 26 and inhibited 4 of 96 brainstem neurons located in this area. Eight of 11 physostigmine-excited, and 4 of 26 ACh-excited neurons were reticulospinal neurons with axonal conduction velocities of 20-40 m/sec. From the results presented here, together with those reported previously, these physostigmine sensitive and ACh excited brainstem neurons, reticulospinal and non-reticulospinal, could be cholinoceptive interneurons of the polysynaptic inhibitory pathway from the superficial radial nerve to jaw-closing and hindlimb spinal motoneurons.

A muscarinic cholinergic mechanism has been suggested to be involved in the postsynaptic inhibition of jaw-closing motoneurons of the trigeminal nucleus (1) and of hindlimb spinal motoneurons (2) by forearm cutaneous impulses (cutaneous inhibition). Previous observations following lesioning indicated that the pontomedullary reticular formation was essential for the inhibition, not only of the jaw-closing but also hindlimb spinal motoneurons, produced by stimulation of the superficial radial (SR) nerve (2, 3). It has been reported that spontaneously active, acetylcholine (ACh) sensitive neurons are located in the pontomedullary reticular formation (4-8) and also that cutaneously excited neurons are found in this area (9). All these data suggest that ACh sensitive neurons in the brainstem are involved in the polysynaptic inhibitory circuitry of the cutaneous inhibitions. In the present experiments, a study was made of the effects of intravenously administered physostigmine and electrophoretically administered ACh on neurons responsive to cutaneous impulses in this area. The objective was to demonstrate the possibility that ACh functions as a transmitter in the pathway of cutaneous inhibitions of these motoneurons.

MATERIALS AND METHODS

The experiments were performed on 53 adult cats which were either unanesthetized, after precollicular decerebration during ether anesthesia, or anesthetized with pentobarbitone sodium (35 mg/kg, i.p. initially, additional doses when required). Effects of lesions of the pontomedullary reticular formation on the cutaneous inhibition of the hindlimb spinal motoneurons were studied only in decerebrate and cerebellectomized animals. After all surgical operations were completed, gallamine triethiodide was administered and the animals artificially respired (30-40/min). The body was warmed by a pad placed under the animal and the rectal temperature was maintained around 38°C. The blood pressure was monitored continuously from the left femoral artery.

Two pairs of collar type electrodes were implanted bilaterally on the SR nerves. The distal pair was tied to the crushed portion of the nerves.
A pair of silver wire electrodes was used for stimulation of the spinal cord. The ventral side of the spinal cord between C., and C3 was exposed, and a silver wire electrode (cathode) was inserted into the spinal cord near the midline. The other silver wire electrode (anode) was positioned so as to be along the entire lateral aspect of the spinal cord. These electrodes were fixed on the vertebral column with dental cement. The duration of the stimulation pulse was 0.1 msec.

Monosynaptic reflexes (MSRs) were recorded from the masseter nerve following stimulation of the mesencephalic trigeminal tract (MTT), as described in detail in a previous paper (1). The spinal MSR was recorded from the L7 ventral root following stimulation of the medial gastrocnemius-soleus (mG-S) nerve, as described in detail in a previous paper (2).

When a lesion was made in the brainstem, care was taken by using fine scissors, and a number of small lesions was made in order to avoid as much as possible abrupt blood pressure changes. The blood pressure did, however, increase during and immediately after the lesioning. In this phase, the amplitude of the MSR fluctuated. Gradually over a 15-30 min period, the blood pressure recovered and remained constant. The MSR also maintained a consistent amplitude. Therefore, effects of lesioning of the brainstem on the cutaneous inhibitions were always studied after the blood pressure and the amplitude of the MSRs had recovered to normal levels and remained constant.

In a series of experiments to determine the effects of intravenously administered physostigmine on the brainstem neurons, a glass recording microelectrode filled with 1 M K-acetate saturated with methylblue (resistance approximately 10 MΩ) was inserted into the brainstem by a pharyngeal approach through a hole made in the basioccipital bone of the supine-positioned animal (10). Potentials recorded from the masseter nerve, L7 ventral root, and through the microelectrode were amplified, displayed on a cathode-ray oscilloscope and photographed.

Reticulospinal (RS) neurons were identified by antidromic stimulation of their axons in the spinal cord. Three criteria have been described by Wolstencroft (11) to distinguish antidromic excitation of RS neurons, and two were used in this study. The first was a response with short and constant latency following stimulation of the spinal cord. The second was the ability of a neuron to respond to each stimulus at rates of 100-300 Hz.

Physostigmine salicylate and atropine sulphate were dissolved in physiological saline at 0.1 and 1% concentrations, respectively, and were administered i.v. into the femoral vein relatively slowly so as to avoid an abrupt fall in blood pressure. The average number of SR-evoked spike discharges or the frequency of spontaneous firing of the brainstem neurons was calculated from 5 traces superimposed immediately before and at an appropriate time after the drug administration.

In a series of experiments to study effects of ACh on brainstem neurons, a double-barrel electrode was used. A pair of glass micropipettes was mounted on a small manipulator (HMD-2, Narishige). One, filled with 1 M K-acetate saturated with methylblue, was used for recording (resistance approximately 10 MΩ). The other (tip diameter, 1-3 µm) was used to administer ACh electrophoretically, and was filled with 1.0 M ACh chloride dissolved in distilled water (pH 3.0).
These two micropipettes were fixed to each other with acrylic resin (Aron Alpha, Sankyo) and dental cement while viewed under a microscope, in order to set the gap between the tips at 30-50 μm in the transverse direction. Acetylcholine was administered to all neurons responding to stimulation of the SR nerve of either side, by passing DC current or long-duration pulses of 300 msec at 1 Hz for 20-90 sec through the pipette. In order to obtain a constant current for iontophoretic application of ACh, a 1000 MΩ resistor was connected in series between the positive pole of a battery (90 V) and the ACh-containing electrode. The average number of SR-evoked spike discharges or the frequency of spontaneous firing of the brainstem neurons and their standard errors (S.E.) were calculated from 5 successive traces before, during, and after administration of ACh. A significant increase in the number of spikes or in the spontaneous firing rate of the neurons after drug administration was determined statistically (t-test). When the number of discharges of a brainstem neuron was increased by physostigmine (systemic) or ACh, DC current (10 μA) was passed through the recording electrode for 5 min to release methylblue. All animals were sacrificed immediately after each experiment, and the brain was removed for histological examination.
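As a consistency check on this constant-current arrangement, the large series resistor dominates the circuit impedance, so the ejection current is simply I = V/R = 90 V / 1000 MΩ = 90 nA, which matches the 90 nA electrophoretic ACh current quoted in the summary above.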
RESULTS
Results will be presented in three sections: first, the effects of localized lesions of the pontomedullary reticular formation upon the cutaneous inhibition of jaw-closing and hindlimb spinal motoneurons; secondly, the effects of intravenously administered physostigmine on spike activity evoked from the SR nerve; and finally, the effects of electrophoretic ACh on these activities. These results indicate, firstly, that the medial, and not the lateral, portion of the pontomedullary reticular formation is essential for cutaneous inhibition of jaw-closing motoneurons.
Secondly, unilateral section of the medial reticular formation reduced the cutaneous inhibition produced by stimulation of either the ipsilateral or the contralateral SR nerve; and lastly, almost the entire medial portion, and not particularly the dorsal or ventral areas, of the reticular formation is required for cutaneous inhibition. Since the medial portion of the pontomedullary reticular formation appears to be essential for the cutaneous inhibition, it was assumed that interneurons for the cutaneous inhibition may be located within this region. A search was thus made for neurons responding to physostigmine and/or ACh which were activated from the SR nerve and had axons descending to the spinal cord (RS neurons).
As is already known (9, 11), single-pulse stimulation of the SR nerve excites brainstem neurons.
About 80% of the brainstem neurons tested discharged repetitively in response to a single stimulus at a supramaximal intensity for Aα afferent fibers. As illustrated in Fig. 3A, the latency of the first action potential ranged from 4.5 msec to 15 msec following stimulation of the SR nerve.
One example of the effects of physostigmine on these brainstem neurons (15 RS and 6 non-RS neurons) is illustrated in Fig. 4. The antidromic action potential of the RS neuron had a constant latency of 1.8 msec following stimulation of the spinal cord at C2-3, and firing followed stimulation at 100 Hz. This neuron responded with an orthodromic latency of 7 msec following stimulation of the SR nerve, and the discharges lasted for about 30 msec (Fig. 4A). On average, each stimulus produced 4.8 spikes. This neuron also fired spontaneously at a mean frequency of 12 Hz, as shown in Fig. 4D. Physostigmine was given (i.v.) twice at intervals of 10 min. The first injection of physostigmine (0.15 mg/kg) increased the number of SR-evoked action potentials slightly, and the number of discharges increased further 5 min after the second injection of the same dose of physostigmine (Fig. 4B, G).
The spontaneous firing rate also increased to a maximum of about 24 Hz (Fig. 4E, G). In Fig. 4, the horizontal solid and dotted lines indicate, respectively, the average number of SR-evoked spikes and the average frequency of spontaneous firing in the control condition.
Since it is impossible to discriminate the SR-evoked spikes from spontaneously firing ones, the latter were counted together with the former. However, in the 100 msec of sweep time before the drug administration, the average numbers of SR-evoked spikes (including spontaneously firing ones, Fig. 4A) and of spontaneously firing spikes alone (Fig. 4D) were 4.8 and 1.2, respectively. Therefore, in the control condition, the pure SR-evoked spikes can be estimated as 3.6 in number by subtracting the latter from the former. At 5 min after the second administration of physostigmine, the number of SR-evoked spike discharges including spontaneously firing ones (Fig. 4B, G) was 9.6, and the number of spontaneously firing spikes alone (Fig. 4E) had also increased. Hence physostigmine apparently increases not only the rate of spontaneous firing but also the SR-evoked spike discharges in brainstem neurons. These changes in discharge rate were reversible, and the rate returned to the control level within 3 min after the administration of atropine (1 mg/kg, i.v.; Fig. 4C, F). This antagonism by atropine was confirmed with 4 RS neurons. The time course of the physostigmine effect and its antagonism by atropine is illustrated in Fig. 4G.
FIG. 5. Localization of brainstem neurons excited by physostigmine (open circles) and ACh (filled circles) responsive to SR nerve stimulation.
As shown by the solid and shaded columns in Fig. 3, physostigmine (0.15-0.30 mg/kg) increased the number of spike discharges in 11 of 21 brainstem neurons and did not affect the others. Eight of the eleven excited neurons were RS (black columns of Fig. 3A and B), the average latency of the initial action potentials being 10.5 msec (range 5-20 msec) after SR nerve stimulation (Fig. 3A). The average antidromic latency of these RS neurons was 1.3 msec (range 0.9-1.8 msec, Fig. 3B).
Histological examination showed that these brainstem neurons sensitive to physostigmine were located in the nucl. reticularis gigantocellularis (3), nucl. reticularis parvocellularis (3), and formatio reticularis (5). These anatomical sites are illustrated in Fig. 5 by open circles at three different levels (P = 6.5, 7.5, and 8.0, according to the atlas of Snider and Niemer (13)). One example of the facilitatory effect of ACh is illustrated in Fig. 6. This neuron responded to a single stimulation of the SR nerve with 3-5 discharges (4.1 ± 0.7, mean ± S.E., n = 6) (Fig. 6A) and also fired spontaneously at a frequency of 10-20 Hz (Fig. 6B). Histological examination indicated that neurons excited by ACh were also located, as were neurons affected by physostigmine, in the nucl. reticularis gigantocellularis (5), nucl.
DISCUSSION
The present experiments demonstrated that the medial, not the lateral, portion of the pontomedullary reticular formation is essential for the cutaneous inhibition of jaw-closing as well as hindlimb spinal motoneurons. Both of these cutaneous inhibitions were completely abolished by a complete transection of the medial reticular formation. Since a small lesion of the medial reticular formation only slightly reduced the cutaneous inhibition of jaw-closing motoneurons, the inhibitory pathways, including inhibitory interneurons, from the SR nerve to the jaw-closing motoneurons may be located diffusely in the medial reticular area.
The inhibition of jaw-closing and of hindlimb spinal motoneurons evoked by impulses in Aα afferent fibers of the SR nerve may involve a muscarinic cholinergic mechanism (1, 2). In the present experiments, intravenously administered physostigmine (0.15-0.30 mg/kg) excited 11 of 21, and electrophoretic ACh excited 26 of 96, brainstem neurons which responded to stimulation of the SR nerve of either side at a supramaximal intensity for Aα afferent fibers and which were located in the medial reticular area. This location is in fairly good agreement with that of ACh-sensitive neurons determined by their spontaneous activity (5, 6) and with that of neurons responding to SR nerve stimulation (9).
The effect of physostigmine was completely antagonized by atropine in all neurons tested.
This could be relevant to a cholinergic synaptic mechanism influencing these particular neurons. Atropine antagonism, however, was tested on only 4 of the 11 physostigmine-sensitive neurons, and as this effect of atropine was long lasting, the effect of atropine alone was not studied. According to Bradley et al. (7), electrophoretic physostigmine potentiated the action of ACh but, in addition to this action, had a strong excitatory effect on many brainstem neurons. Hence it is possible that some of the physostigmine-sensitive neurons on which atropine antagonism was not tested might have been excited by physostigmine independently of the effects of ACh. Electrophoretic ACh, however, excited 26 of 96 brainstem neurons which were located in the medial portion of the pontomedullary reticular formation found to be essential for the cutaneous inhibitions studied, and which responded to the same cutaneous impulses required to produce inhibition of masseter and lumbar motoneurons. Sixty-six neurons were unaffected by ACh. These may have been non-cholinoceptive brainstem neurons. Alternatively, the dose of iontophoretically applied ACh might have been insufficient for some of the brainstem neurons to produce a response, partly because of the long distance between the recording and ACh-containing electrodes; the sensitivity of the cells could also have been reduced by pentobarbitone sodium.
The latency of the first SR-evoked action potential of brainstem neurons sensitive to either physostigmine or ACh ranged from 6 to 60 msec, mostly from 6 to 24 msec. These values are in good agreement with those reported by others, although the effects of either physostigmine or ACh on these neurons were not studied (9, 11, 14). The duration of the discharges ranged between 5 and 30 msec. The latency and duration of discharges of physostigmine- and/or ACh-excited neurons (RS and non-RS neurons) roughly coincide with those of IPSPs recorded from jaw-closing motoneurons following stimulation of the same cutaneous nerve (1, 3, 12). Whether or not physostigmine- or ACh-excited RS neurons have ascending axons was not studied, as the distance between jaw-closing motoneurons and these RS neurons is quite short. Some RS neurons, however, are known to have ascending axons (14, 15). Although we did not test whether IPSPs could be recorded from jaw-closing and lumbar motoneurons following stimulation of the areas where these physostigmine- and/or ACh-excited neurons were located, these brainstem neurons may be cholinoceptive interneurons of the polysynaptic inhibitory pathway from the SR nerve to jaw-closing and hindlimb spinal motoneurons.
Terahertz strong-field physics in light-emitting diodes for terahertz detection and imaging
Intense terahertz (THz) electromagnetic fields have been utilized to reveal a variety of extremely nonlinear optical effects in many materials through nonperturbative driving of elementary and collective excitations. However, such nonlinear photoresponses have not yet been observed in light-emitting diodes (LEDs), let alone employed in fast, cost-effective, compact, room-temperature-operating THz detectors and cameras. Here, we report ubiquitously available LEDs exhibiting photovoltaic signals of ~0.8 V and ~2 ns response time with signal-to-noise ratios of ~1300 when illuminated by THz field strengths of ~240 kV/cm. We also demonstrate THz-LED detector and camera prototypes. These unorthodox THz detectors exhibited high responsivities (>1 kV/W) with response times four orders of magnitude shorter than those of pyroelectric detectors. The mechanism is attributed to THz-field-induced impact ionization and the Schottky contact. These findings not only help deepen our understanding of strong THz field-matter interactions but also contribute to the applications of strong-field THz diagnosis. Interest in the exploration of non-perturbative nonlinear optical phenomena driven by intense terahertz fields has seen a leap forward with the recent development of femtosecond laser-based table-top sources for strong THz radiation. The authors present intense THz-field-induced effects in ubiquitously available LEDs illuminated by strong THz pulses, paving the way to their use in detecting and imaging intense THz radiation.
Currently, there is much interest in exploring nonperturbative nonlinear optical phenomena in various types of matter driven by intense terahertz (THz) electric and magnetic fields [1][2][3][4][5][6][7][8][9][10] . This is fueled by the impressive advancements made during the last two decades in the development of femtosecond laser-based table-top sources for strong THz radiation [11][12][13] . These intense THz electromagnetic fields have been utilized [14][15][16][17][18] to efficiently accelerate and manipulate electrons to extreme degrees, advancing the frontiers of both the technology and the science of nonlinear THz-matter interactions. For example, the acceleration of relativistic electron bunches in free space is promising for the realization of next-generation table-top particle accelerators and compact X-ray sources. In solids, electrons have been excited and accelerated by strong THz fields, leading to an ultrafast phase transition 19 , interband tunneling and impact ionization 20 , exciton generation, and fluorescence 21 .
These groundbreaking studies have stimulated much excitement in diverse fields of photonics, materials science, and condensed matter physics, requiring further sophistication and advancement of intense-field THz technologies. However, what is strikingly absent in this context is a convenient detector for intense THz radiation. Currently available detectors are bulky and/or expensive, often requiring liquid nitrogen or liquid helium, and are usually very slow in response. Fast, sensitive, small, inexpensive, and room-temperature-operating detectors and cameras for intense THz radiation are being sought.
Here, we report that one of the most commonly available semiconductor devices around us, light-emitting diodes (LEDs), serves this purpose. That is, these inexpensive, ordinary devices work very well for detecting and imaging THz radiation. We observed a gigantic photovoltaic signal when such LED devices were illuminated by strong-field THz pulses. This observation presents a rare opportunity to explore the fundamental science of the ultrafast response of electron-hole pairs in a semiconductor device induced by strong-field THz pulses. More importantly, this work demonstrates that these room-temperature-operating, inexpensive LEDs are promising for developing next-generation THz detectors, cameras, and functional devices.
Results and discussion
THz-field-induced photovoltaic signal in LEDs. Intense THz pulses were generated from lithium niobate (LN) crystals via optical rectification 13,22 , and a schematic diagram of the experimental setup is shown in Fig. 1a. The generated THz radiation was vertically polarized, and its strength was adjusted by a pair of THz polarizers. We focused the THz beam to achieve a fluence of, at best, ~4.1 μJ/mm²; the LED detection area was only ~200 × 200 μm², and the focused THz beam diameter was ~3.3 mm in full width at half maximum (FWHM). See more details in Supplementary Fig. 1f. Under these excitation conditions, we were readily able to obtain the polarization-dependent responses by rotating the azimuthal angle of the printed circuit board (PCB) (see Fig. 1b) and to monitor the photosignals of all LEDs simultaneously on an oscilloscope (Fig. 1c). Typical THz temporal waveforms (see, e.g., Fig. 1d) extracted by a single-shot spectrum-encoding method 23 indicated that the generated THz pulses were near single-cycle with a frequency bandwidth of ~0.8 THz (see Supplementary Fig. 1c). The LEDs measured in our experiment had different colors (red, yellow, green, blue, and white; Fig. 1e), corresponding to bandgaps (central emission wavelengths) of 1.9, 2.1, 2.3, and 2.8 eV (except for the white LEDs, which combine three types of LEDs: red, yellow, and blue). More detailed information on the LEDs can be found in "Methods".
THz field strength and polarization dependence. To understand the origin of the photovoltaic effect in the LEDs, we examined the THz pump fluence and polarization dependence of the response of the blue LED. In the pump-fluence-dependent experiments, we used six different peak fields: 81.4, 119, 155, 190, 223, and 241 kV/cm (see Fig. 2a; the quoted THz field strengths are the maximum peak electric field over the whole temporal duration, and details of the field-strength calculation can be found in Supplementary Note 4). The observed time-dependent photoresponse curves are summarized in Fig. 2b. From this figure, we can see that all these signals are negative, i.e., the THz-pulse-induced current flowed in the reverse direction, from the n-side to the p-side of the device. When the THz electric field was 81.4 kV/cm, the peak-to-peak photoresponse signal was ~41 mV, while it increased to ~600 mV when the THz field strength was increased to 241 kV/cm. The minimum detectable signal was obtained at an incident field strength of ~50 kV/cm. Figure 2c plots the photovoltaic signals of the LED when it was illuminated by THz radiation with three different pulse energies: 80.0, 52.8, and 24.7 μJ. The response curves exhibit two-fold symmetry with a period of 180°, similar to previous work on the nonlinear photoresponse of type-II Weyl semimetals 24 , manifesting an anisotropic photoresponse related to crystallographic symmetry. We interpret this polarization-dependent photoresponse as a result of the different threshold energies arising from the orientation-dependent electron effective mass of GaN (see "Methods"). Figure 2d shows two photoresponse curves as a function of THz pump fluence for specific THz polarizations (marked (1) and (2) in Fig. 2c). For pump fluences lower than 1.2 μJ/mm², the response exhibited a quasi-quadratic behavior, while it increased linearly when the fluence was larger than 1.2 μJ/mm². As an example, a response signal of a blue LED is illustrated in Fig. 2e. The response time of the detected photovoltaic signal was four orders of magnitude shorter than that of a commercial pyroelectric detector (SDX-1152, Gentec, sensor diameter = 8.0 mm, see Fig. 2e). Furthermore, the responsivity of these LEDs was as high as ~10 times that of the pyroelectric detector without an amplifier, reaching 750 V/W.
THz field induced impact ionization. When a strong THz field interacts with a semiconductor, various nonperturbative nonlinear optical phenomena can occur, including high-harmonic and sideband generation 25 , the dynamic Franz-Keldysh effect 26,27 , Zener tunneling 28 , metallization 29 , and impact ionization 6,30 . Some of these processes can generate carriers in unconventional ways, even though the THz photon energies used are much smaller than the bandgap. For example, Zener-tunneling-induced photocarrier generation has been demonstrated, although the density achieved remained relatively low 28 ; metallization can also instantaneously produce carriers but requires much higher (>100 MV/cm) field strengths. Hence, to explain the huge photovoltaic signals we observed in the LEDs, we propose impact ionization as the dominant mechanism. A simplified Monte Carlo simulation based on this mechanism was performed, which reproduced all salient features of our experimental results, as detailed below.
Impact ionization describes a three-particle down-conversion process in which a high-energy (hot) conduction band electron (or valence band hole) interacts with a valence band electron and excites that electron across the bandgap, leaving behind a hole 31 . Over the entire interaction process, the hot carrier is driven by the THz electric field according to ħ dk(t)/dt = qE(t, N) + γ_tot (q = e for holes, and q = −e for electrons). Here k(t) denotes the wavevector of the carrier, ħ is the reduced Planck constant, e is the elementary charge, and E(t, N) is the screened THz electric field, which depends on the carrier density N. γ_tot is the total scattering rate, which includes impact ionization (see "Methods"). Making use of the impact ionization transition rate of GaN from first-principles calculations 32 , we found that the hole-initiated impact ionization transition is much more efficient than that of electrons (see "Methods"), since the incident THz pulse reaches the p-doped region before the n-doped region in the blue LED (see Fig. 3a). Accordingly, we only consider hole-initiated events in our modeling.
From the elaboration above, the carrier multiplication process occurring in this system can be described by Q = Q₀ × MF, where Q₀ is the initial quantity of charge and MF denotes the multiplication factor. Q₀ is the initial carrier density multiplied by the elementary charge e and by the volume around the positive electrode (see "Methods"). Q is found by integrating the photocurrent over the whole temporal response profile obtained from the oscilloscope. Then, MF under different field strengths can be deduced (hollow red circles in Fig. 3b). On the other hand, MF can be evaluated by Monte Carlo simulation for different carrier relaxation times τ. Screening due to the multiplied carriers must be considered, for it changes the dielectric properties of the material and affects the THz field transmitted through the p-doped region (see "Methods"). Figure 3c shows THz pulses with an initial field strength of 200 kV/cm inside the material with (red curve) and without (gray curve) the screening effect. Apparently, the generated carriers greatly weakened the THz pulse, particularly at its tail.
We found that the MF obtained from our experiment was sandwiched between the two MF curves obtained by Monte Carlo simulation with τ = 75 fs and τ = 100 fs (green and blue dotted lines, respectively, in Fig. 3b), and we believe this is a reasonable result, consistent with the experimentally determined hole relaxation time of ~100 fs 33 .
So far, it can be concluded that the simplified Monte Carlo model provides a consistent physical interpretation and is in good semiquantitative agreement with the experimental results. To present the simulation more vividly, we depict the carrier-density multiplication dynamics during the THz-driven acceleration at peak fields of 100 kV/cm (blue curve in Fig. 3d) and 200 kV/cm (red curve in Fig. 3d), together with the corresponding energy evolutions of the carriers (Fig. 3e). By comparison, it is evident that one impact ionization event occurred under the 100 kV/cm incident THz field, whereas four occurred under 200 kV/cm. This model further tells us that this linear trend will persist until the built-in field is offset by the potential difference induced by the THz-excited photocarriers, such that the saturation voltage will exceed 3 V based on the built-in potential inside the blue LED. In short, we fortunately observed such gigantic photovoltaic signals in blue LEDs that much interest was stimulated in investigating the impact ionization process. Although this process is common in some typical devices such as photomultipliers, large and effective photocarrier multiplication is often obscured by other effects, such as phonon absorption 34 , valley scattering 35 , and exciton dissociation 36 . The huge photovoltaic signals observed here show that the carrier multiplication process induced by intense THz pulses is efficient and not obscured by other effects. We believe that studying this field-induced, highly efficient photocarrier multiplication will be an important driving force for future material physics and device optimization in the THz frequency range.
THz photovoltaic response in different color LEDs. We next studied four other LEDs of different colors (green, white, yellow, and red; see Fig. 4a-d) and observed negative signals in all of them. Data were taken at θ = 0° (see Supplementary Fig. 4) with THz field strengths of 160, 126, and 54 kV/cm. The optimal responsivity of the green LED reached 1250 V/W. Since the materials of all these LEDs belong to the III-V group (GaN-based for the blue and green LEDs, GaP-based for the yellow and red LEDs), the carrier lifetimes are roughly identical. Therefore, the decay times of all examined LEDs were on a similar time scale (~1 ns), as extracted in the experiments. Regarding the comparison of amplitudes among the different LEDs, early work on nanosecond far-infrared laser-pumped photodiodes showed a larger photoresponse for materials with a smaller bandgap 37 . Nonetheless, we did not observe such a simple trend in the current system, where many ingredients that may profoundly affect the performance of the LEDs need to be taken into account, such as the junction depth and the doping level 38 . Thus, the complexity of the LED detection system truly enriches the physical significance in the THz strong-field region.
The most striking aspect is that the sign of the THz photoresponse of all these LEDs is opposite (see Fig. 2b) to that of conventional photovoltaic signals. To explain this, we propose a Schottky-contact-based mechanism, as shown in Fig. 4e. Under a simple approximation, there will be a single abrupt junction between the n⁺-like metal and the p-type semiconductor (Schottky contact), which leads to the generation of an electric field E1 with direction opposite to the electric field E2 in the p-n junction. THz-induced photocarriers move towards the n region, and positive signals are also detectable; see Fig. 4c, d. Since the THz pulses first interact with the surface region, the induced photocarriers have a much higher density in the p region than in the n region; thus, the Schottky-based negative photovoltaic signals must be stronger than the positive signals flowing towards the n region. Moreover, because of the parasitic capacitance, the negative signals are obviously faster than the positive signals, further implying the existence of a Schottky contact, which fortunately improves the rise time to a few hundred picoseconds. Finally, we even observed a small positive signal in red LEDs that preceded the two signals mentioned above. Among these LEDs, because the barrier height of the Schottky junction is relatively low in the red LED, hot holes within the valence band are more easily emitted from the boundary of the p side, resulting in a small positive signal. Such small positive signals were not observed in the other LEDs because their Schottky barriers are higher and hot holes cannot be emitted. Since this positive signal is not used in intense THz detection, it does not affect the performance of the LED detectors. More experiments are needed to fully understand the intense THz-matter interactions in such devices.
Fig. 2 (caption): b Ultrafast photovoltaic signals monitored on the oscilloscope (load = 50 Ω); the maximum photovoltaic signal is ~600 mV. c Anisotropic photovoltaic signals obtained under THz pulse energies of 80.0, 52.8, and 24.7 μJ; the photovoltaic response correlates strongly with the crystal symmetry but not with the THz energy. d Photovoltaic signals for the two polarizations labeled Perpendicular (1) and Parallel (2) in c as a function of THz pump fluence: below 1.2 μJ/mm² both follow a quasi-quadratic dependence, while linear behavior dominates at higher fluence. e Photoresponse comparison between the blue LED and the commercial pyroelectric detector (SDX-1152, Gentec): the LED signal is 11 times larger and its response four orders of magnitude faster. Error bars are standard deviations calculated automatically by the oscilloscope after 32 samplings.
According to the proposed model in Fig. 4e, it should be possible to observe a positive photovoltaic signal from LEDs whose structures do not contain Schottky junctions. We observed such a phenomenon in LEDs produced by the same company with 850 nm and 940 nm emission wavelengths. As shown in Supplementary Note 8, appreciable positive photovoltaic signals were observed in the 850 nm LED, which can also be well understood by the proposed model; in this case, there is no Schottky junction, it being replaced by an ohmic contact. We also observed saturation behavior in the 940 nm LED, which has a lower saturation voltage than the 850 nm LED, as expected, because the saturation value is determined by the built-in potential, which is proportional to the bandgap.
As a detector working in the strong-field region, LEDs have some advantages over other commercially available THz detectors. Both Golay cells and pyroelectric detectors suffer from slow response times of several hundred microseconds, while bolometers usually work under low-temperature conditions. Schottky barrier diodes (SBDs), which are widely used in the radio- and microwave-frequency ranges, are high-speed devices (<100 ps) but require advanced material growth and device fabrication techniques. Field-effect transistors usually need a direct-current bias between the gate and source. Thus, LEDs have the advantages of high speed, broadband response, small size, low cost, no bias requirement, easy fabrication, and room-temperature operation. With regard to the Schottky contact discussed in the LEDs, it mainly affects the polarity of the photovoltaic signal and the response time. We therefore cannot call the device a Schottky barrier diode, because SBDs usually respond to a sub-cycle electric field component with a different detection mechanism. Moreover, the surface contact does not affect the THz-LED detection mechanism, because we observe a positive photovoltaic signal in GaAs LEDs (see Supplementary Fig. 7). THz-LED camera prototypes. Since LEDs can be used for THz detection, one can straightforwardly consider fabricating a THz-LED camera. We demonstrated two prototypes: a scanning THz-LED camera and a one-dimensional single-shot THz-LED array. In the scanning prototype, a blue LED was mounted onto an automatically controlled three-dimensional translation stage with 25 μm spatial resolution. We used this prototype to scan the LED across the focal plane, obtaining the profile of a THz beam, as shown in Fig. 5a; a minimal sketch of such an acquisition loop is given below. The obtained circular beam profile has a diameter of ~2 mm (FWHM), which agrees well with that imaged by the commercial camera (Pyrocam IV, Spiricon) (see Fig. 5b). LED displays and various large-screen devices are widely available at low cost, indicating the feasibility of a large-area, high-resolution, real-time THz-LED camera. Furthermore, we extended the THz-LED camera to a one-dimensional 1 × 6 array. With this LED prototype, we achieved a real-time THz camera, and the recorded THz beam profile is depicted in Fig. 5c. In this first-generation device, owing to the large size of commercially available LEDs, we could not directly image a focused THz beam. However, we could measure the THz beam profile in a non-focal plane, which in turn proved the high responsivity of the THz-LED detectors. Photographs of these prototypes are shown in Fig. 5d, e, respectively. With these results, we believe that LED-based THz detectors and cameras may have valuable applications in strong-field THz science and technology.
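The acquisition loop of the scanning prototype can be sketched as follows. This is an illustrative reconstruction, not the authors' control software: the `stage` and `scope` drivers and their methods (`move_to`, `peak_to_peak`) are hypothetical placeholders for whatever motion-control and oscilloscope interfaces were actually used.

```python
import numpy as np

def scan_beam_profile(stage, scope, x_range, y_range, step=0.1):
    """Raster-scan a single LED detector across the THz focal plane.

    stage: hypothetical 2-axis translation-stage driver (positions in mm)
    scope: hypothetical oscilloscope driver returning the peak-to-peak
           photovoltaic signal (V) of the LED for one THz shot
    Returns a 2D array of signal amplitudes, i.e., the beam profile.
    """
    xs = np.arange(*x_range, step)
    ys = np.arange(*y_range, step)
    profile = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            stage.move_to(x, y)                   # position the LED
            profile[i, j] = scope.peak_to_peak()  # record THz-induced signal
    return profile
```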
Conclusions. In summary, we showed that gigantic photovoltaic signals were produced in commercially available LEDs through ultrafast detection of picosecond THz transients with moderate electric field strengths. Compared with conventional photovoltaic signals, an unexpected, abnormal negative signal was observed for all the LED colors examined in our experiments. We systematically tested several kinds of LEDs in THz-polarization- and pump-fluence-dependent studies. The experimental results can be well understood by a simplified Monte Carlo model. Based on their high responsivity, ultrafast response time, low cost, and easy integration, we further demonstrated scanning and integrated THz-LED camera prototypes. LED-based THz detection not only helps deepen our understanding of intense THz-matter interaction physics but also enables other applications such as real-time THz video imaging.
Methods
THz generation and characterization. In our experiment, a commercial Ti:sapphire laser amplifier system (Pulsar 20, Amplitude Technology, Institute of Physics, Chinese Academy of Sciences) was employed to produce strong-field THz pulses in a LN crystal using the tilted-pulse-front technique 39 . This laser delivers a maximum single-pulse energy of ~500 mJ at a central wavelength of 800 nm, with a pulse duration of 30 fs at a 10 Hz repetition rate. In order to obtain a high optical-to-THz efficiency, we optimized the pulse duration by altering the group velocity dispersion with an acousto-optic programmable dispersive filter 13 (AOPDF, Dazzler, Fastlite). In the tilted-pulse-front setup, a grating with a groove density of 1480 lines/mm and dimensions of 140 × 140 × 20 mm³ (length × width × height) was used to tilt the intensity front of the pump pulses. A half-wave plate and a plano-convex lens with f = 85 mm focal length were inserted between the grating and the LN crystal. The LN was a z-cut congruent crystal doped with 6.0 mol% MgO to improve its damage threshold. The crystal was a triangular prism with dimensions of 68.1 × 68.1 × 64 mm in the x-y plane and a height of 40 mm along the z-axis. For the LED detection experiment, the maximum pump energy used was ~60 mJ and the LN crystal was not cryogenically cooled, resulting in a maximum THz electric field of ~240 kV/cm with a spectrum from 0.1 to 0.8 THz. The radiated THz signal was first collimated by a 90° off-axis parabolic mirror (OAP1) and then passed through two THz polarizers (Tydex), which were used to tune the THz pump energy. After the second parabolic mirror (OAP2), the THz pulses were focused onto the LED, which was connected to an oscilloscope (DPO 4104, Tektronix) on which the photovoltaic signal was recorded.
For THz temporal waveform and spectrum characterization, we used a single-shot spectral-encoding method. As illustrated in Fig. 1a, when there was no LED in the setup, the THz waves were collimated again by OAP3 and focused, together with the probe beam derived from the zero-order reflection of the grating, into a 1 mm thick ZnTe detector. The probe beam was first stretched by a grating pair (groove density 1200 lines/mm, 50 × 50 × 10 mm³) and then propagated through a delay line. With this method, the THz temporal electric field was encoded onto the chirped spectrum of the probe beam in the ZnTe crystal through the electro-optic effect. The modulated probe spectrum was recorded by an analysis system comprising a lens, a BBO crystal, a Glan prism (GP), a spectrometer, and a CCD camera, from which the THz temporal waveform was decoded. More details can be found in Supplementary Notes 1-4. The THz single-pulse energy was measured by a pyroelectric detector (SDX-1152, Gentec), and the beam profile was recorded by a commercial THz camera (Pyrocam IV, Spiricon). Thus, we can estimate the error in the THz peak field strength from the measured THz energy, beam size, and temporal duration (see Supplementary Note 4). All measurements were conducted at room temperature, and the THz path was not purged.
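For illustration, the generic small-signal retrieval step of such a spectral-encoding measurement can be written in a few lines. This sketch is not the authors' analysis code (their optical layout with BBO and Glan prism is not modeled), and the pixel-to-time calibration `t_of_pixel` is assumed to have been obtained separately from the known chirp of the stretched probe.

```python
import numpy as np

def decode_single_shot(I_mod, I_ref, t_of_pixel):
    """Retrieve a THz waveform from single-shot spectral-encoding spectra.

    I_mod:      probe spectrum with THz present (CCD counts per pixel)
    I_ref:      unmodulated reference spectrum (THz blocked)
    t_of_pixel: calibrated map from CCD pixel to delay time (s)
    Returns (t, E) with E proportional to the THz field in the
    small-signal, linear electro-optic limit.
    """
    dI = (I_mod - I_ref) / I_ref    # relative electro-optic modulation
    order = np.argsort(t_of_pixel)  # sort pixels into increasing time
    return t_of_pixel[order], dI[order]
```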
Screening effect. For a given carrier density, the dielectric permittivity of a conductive medium in the Drude model is given by

ε(ω) = ε_∞ [1 − ω_p² / (ω² + iω/τ)],   (1)

where ω_p = (Ne²/ε₀ε_∞m*)^(1/2) is the plasma frequency, τ is the carrier momentum relaxation time, N is the density of free carriers, ε₀ is the vacuum dielectric constant, ε_∞ is the high-frequency limit of the dielectric permittivity, and m* is the carrier effective mass. The square root of the dielectric permittivity is the complex refractive index. Thus, the carrier density in the medium determines its refractive index. From Eq. (1), it is easy to see that the complex dielectric permittivity is frequency dependent, and lower frequency components affect the dielectric response more strongly than higher frequency components. Thus, the variation of the frequency-dependent refractive index due to impact ionization is crucial to include in the modeling of the ultrafast carrier generation.
By introducing the screening effect into the GaN-based system, we set the parameters m* = 0.2m (considering the light hole of GaN in reference 40 ), where m is the electron rest mass, ε_∞ = 5.35 (reference 41 ), and N = 5 × 10¹⁶ cm⁻³, a typical hole density in the p-doped GaN of a blue LED 42 . The initial refractive index n₀ ≈ 3 was taken from the experimental results in reference 43 , so that the initially transmitted THz waveform is E(t) = 2E_inc(t)/(1 + n₀), where E_inc(t) is the incident THz pulse and E(t) is the transmitted THz pulse at the interface between GaN and air. Furthermore, to obtain the full frequency-dependent refractive index, we applied the fast Fourier transform (FFT) to the THz temporal profile and recalculated the frequency-dependent refractive index every time the carrier density changed. After calculating the frequency-dependent Fresnel transmission Ẽ(ω) = 2Ẽ_inc(ω)/(1 + ñ(ω)), an inverse FFT was used, in turn, to obtain the subsequent THz temporal profile, as exemplified in Fig. 3c. Eventually, only one free parameter, the carrier momentum relaxation time τ, was left to reproduce the experimental results.
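The frequency-domain update described above lends itself to a compact numerical sketch. The code below is our illustrative reconstruction, not the authors' code: it assumes the parameter values quoted in the text, SI units, and a sampled incident waveform `E_inc` on a time grid `t`; the handling of the singular DC bin is our own choice, and in the full simulation the update is re-evaluated every time the carrier density N changes.

```python
import numpy as np

# GaN parameters quoted in the text; tau is the free fit parameter
e_ch    = 1.602e-19        # elementary charge (C)
eps0    = 8.854e-12        # vacuum permittivity (F/m)
m_star  = 0.2 * 9.109e-31  # light-hole effective mass of GaN (kg)
eps_inf = 5.35             # high-frequency dielectric permittivity
tau     = 100e-15          # carrier momentum relaxation time (s)

def drude_index(freq, N):
    """Complex refractive index n(omega) for free-carrier density N (m^-3)."""
    w = 2.0 * np.pi * freq
    wp2 = N * e_ch**2 / (eps0 * eps_inf * m_star)        # plasma frequency squared
    eps = eps_inf * (1.0 - wp2 / (w**2 + 1j * w / tau))  # Drude permittivity, Eq. (1)
    return np.sqrt(eps)

def screened_field(t, E_inc, N):
    """FFT -> Fresnel transmission 2E/(1+n) -> inverse FFT, as in the text."""
    freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    freq[0] = freq[1]                            # avoid the singular DC bin
    E_w = np.fft.rfft(E_inc)
    t_w = 2.0 / (1.0 + drude_index(freq, N))     # frequency-dependent transmission
    return np.fft.irfft(E_w * t_w, n=t.size)
```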
Monte Carlo simulation. To simulate the physical process of carrier multiplication, a Monte Carlo simulation was established. In the simulation, a carrier is propagated in a linearly polarized THz electric field with screening (see "Screening effect" in "Methods"). The electric field strength is adjusted, and the generated carrier density and the number of impact events under different incident fields are recorded to obtain the multiplication factor. A carrier is accelerated in the THz field, and the evolution of its energy ε_ca(t) is calculated under the parabolic approximation ε_ca(t) = ħ²k²(t)/(2m*) in steps of 20 fs. At every time step, the scattering rates of the different scattering mechanisms are calculated, taking into account the energy of the free carriers, with values taken from the literature. The total scattering rate can be written as γ_tot(ε_ca) = γ_ph(ε_ca) + γ_imi(ε_ca), where γ_ph(ε_ca) is the energy-dependent phonon scattering rate 33 and γ_imi(ε_ca) is the energy-dependent impact ionization rate 32 . From the first-principles calculation in reference 32 , the impact ionization rate of holes in the GaN-based system is about two orders of magnitude larger than that of electrons, so we only consider the hole-initiated impact ionization rate.
The scattering rates of the above two channels are used to determine the scattering probabilities at every step, and in turn to determine which scattering event the hole undergoes. If the hole scatters with an optical phonon, it loses 100 meV of energy 33 , and its wavevector changes accordingly under the parabolic approximation. But, as seen from the energy evolution in Fig. 3e, phonons do not reduce the carrier energy significantly. If impact ionization occurs, the number of holes doubles, as a new electron and hole are generated in the process, and the initiating carrier loses its energy. The carrier density is dynamically updated through the simulation by taking Auger recombination and impact ionization into account according to

N(t + Δt) = ξ N(t) + γ_Auger(N) Δt,

where ξ is a stochastic number determined by whether impact ionization happens or not: ξ takes the value 2 if impact ionization happens and 1 otherwise. The Auger recombination rate is given by γ_Auger(N) = −CN³, where C is the Auger coefficient 44 . In the Monte Carlo simulation, 1000 trials are run at every given THz field strength, and the average MF is taken as the final output. The carrier momentum relaxation time τ is tuned as a parameter to reproduce the experimental results.
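For concreteness, a skeleton of one Monte Carlo trial is sketched below, under the parabolic-band and hole-only assumptions stated above. This is our illustrative reconstruction, not the authors' code: the rate functions `gamma_ph` and `gamma_imi` stand in for the literature tables (refs 32, 33) that are not reproduced here, and for brevity the sketch omits the Auger term and the dynamic screening of `E_field`, which the full model applies through the update in the preceding section.

```python
import numpy as np

hbar   = 1.055e-34        # reduced Planck constant (J s)
e_ch   = 1.602e-19        # elementary charge (C)
m_star = 0.2 * 9.109e-31  # hole effective mass (kg)
dt     = 20e-15           # time step used in the text (20 fs)
E_LO   = 0.100 * e_ch     # optical-phonon energy loss, 100 meV (J)

def run_trial(E_field, gamma_ph, gamma_imi, rng):
    """Propagate one hole through the sampled THz field E_field (V/m);
    return its multiplication factor MF (number of holes at the end).
    gamma_ph / gamma_imi: energy-dependent rates (1/s), e.g. interpolations
    of the literature tables; a toy form would be a step at a threshold."""
    k, mf = 0.0, 1
    for E in E_field:
        k += e_ch * E * dt / hbar                  # hbar dk/dt = qE, q = +e for holes
        energy = (hbar * k) ** 2 / (2 * m_star)    # parabolic-band energy
        if rng.random() < 1 - np.exp(-gamma_imi(energy) * dt):
            mf *= 2                                # impact ionization doubles the holes
            k = 0.0                                # initiating carrier loses its energy
        elif rng.random() < 1 - np.exp(-gamma_ph(energy) * dt):
            energy = max(energy - E_LO, 0.0)       # phonon emission costs 100 meV
            k = np.copysign(np.sqrt(2 * m_star * energy) / hbar, k)
    return mf

def multiplication_factor(E_field, gamma_ph, gamma_imi, n_trials=1000, seed=0):
    """Average MF over n_trials stochastic histories, as in the text."""
    rng = np.random.default_rng(seed)
    return np.mean([run_trial(E_field, gamma_ph, gamma_imi, rng)
                    for _ in range(n_trials)])
```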
Initial quantity of charge estimation. What we actually measured was the photovoltaic signal across an external load of R = 50 Ω. In order to correlate the simulation and experimental results, the quantity of charge was calculated by dividing the photovoltaic signal by the load and integrating over the whole temporal response profile, Q = ∫S(t)/R dt, where S(t) is the photovoltaic signal. Regarding the device geometry, because of the negative-signal feature, the detected carriers should be close to the surface at the positive electrode. We believe the carriers near the negative electrode can be ignored because of carrier recombination and the longer diffusion distance; it was also verified experimentally that no positive signals were observed in the blue LED. Thus, we can roughly estimate the initially generated quantity of charge near the positive electrode by choosing the electrode size to be ~50 μm, the doping concentration 5 × 10¹⁶ cm⁻³, the diffusion length ~1 μm 45 , and the thickness of the p-doped region 0.2 μm (Fig. 3a). Combining these ingredients, we obtained a quantity of charge of 8 × 10⁻¹⁴ to 10⁻¹³ C. According to the experimental integration, the detected quantity of charge at a field strength of 95 kV/cm was 4.6 × 10⁻¹³ C, which approaches the estimated value most closely. Therefore, we chose the integrated value at a 95 kV/cm field as the initial quantity of charge, corresponding to MF = 1 in Fig. 3b.
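The experimental side of this comparison reduces to a one-line numerical integration. The sketch below is illustrative; it assumes the oscilloscope trace is supplied as NumPy arrays and uses the Q₀ ≈ 4.6 × 10⁻¹³ C reference value quoted above.

```python
import numpy as np

def multiplication_factor_from_trace(t, signal, Q0=4.6e-13, R=50.0):
    """MF = Q/Q0 with Q = integral of S(t)/R over the whole response.

    t: time axis (s); signal: photovoltaic trace S(t) (V); R: load (ohm);
    Q0: initial charge near the positive electrode (C), from the text."""
    Q = abs(np.trapz(signal / R, t))  # detected charge, sign-insensitive
    return Q / Q0
```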
Applicability of the simplified Monte Carlo model. Inspection of the anisotropic curves in Fig. 2c shows behavior consistent with Baraff's prediction that the impact ionization process is mainly due to lucky carriers at low applied electric fields 46 . In other words, these carriers reach threshold more easily in certain directions than in others. This phenomenon thus reflects that the LED indeed operated in the low-field region. Only Shockley's "lucky carriers" 47 can be accelerated to energies above threshold and initiate impact ionization; the others remain below threshold. Nonetheless, how many carriers are lucky enough to reach this goal is unknown; this question is difficult to solve and lies beyond the model in this work.
In terms of other factors that may affect the applicability of our model, further discussion is needed. The assumptions of this model are based on a uniform system that does not include information about defects or other channels, such as an impact ionization event occurring between a free carrier and a carrier confined in an impurity state, an impurity band, or a confined quantum-well state. Regarding defects, we measured three regular and symmetric two-fold anisotropic photoresponse curves, plotted in Fig. 2c. Defects can prevent carriers from attaining high multiplication 48 and render the photoresponse irregular, whereas the anisotropic photoresponse of the blue LEDs is relatively regular. Therefore, we can conclude that defects have little effect, which attests to the applicability of the model in this sense.
With regard to the second factor, taking the confined quantum-well state as an example, impact ionization between free carriers and confined carriers would result in space-charge neutrality no longer being maintained 49 . This would alter the electric field profile within the device and consequently the device performance. However, the volt-ampere characteristics measured before and after illumination with THz pulses exhibit no hysteresis (see Supplementary Note 6), implying that confined quantum-well states also have little effect on this model. This is a reasonable result because the activation layer (the MQW part) is much thinner than the other layers; the confined states are very few, so this second effect plays a small role in the total carrier multiplication process. In addition to the influence of the confined states, we also eliminated the possibility of resonance excitations, as detailed in Supplementary Note 7.
Recovery process in blue and green LEDs. After the carrier multiplication process, combining the previous theory with the large negative photovoltaic signal we observed, we can infer the carrier transport process in the blue and green LEDs. Because the strong-field THz pulse excited carriers within ~15 ps, which is quite short compared with the carrier lifetime of a few nanoseconds, the carrier multiplication can be regarded as an instantaneous process. In the blue and green LEDs, since there is no positive signal at the tail of the photovoltaic signal, it can be deduced that most of the carriers inside the device were generated in the p-type region. Due to the presence of the surface electric field (Fig. 4e), the generated electrons drifted outwards, while the holes diffused into the bulk. Electrons accumulated near the surface on the positive-electrode side, while holes accumulated in the middle of the p-type region. Subsequently, the accumulated electrons and holes formed an electric field opposite to the surface electric field direction.
These fields partially canceled out the original surface electric field. In the external circuit, the electrons at the surface reduced the potential at the positive electrode, forming a large negative signal. The internal carrier recombination process gradually reduced the surface potential, which finally decayed to zero once all excess carriers had recombined.
LED material and structure. In our experiment, the LEDs examined were the most common LED lamps on the market, used in children's toys, TV sets, monitors, telephones, computers, and circuit warning lights. We selected several kinds of LEDs of different colors (blue, green, white, yellow, and red) for systematic investigation. The materials of the blue and green LED lamps are InGaN in a 3 mm round package (4204-10SUBC/C470/S400-X9-L, 04-10SUGC/S400-A5, EVERLIGHT), while the white, orange, and red ones are AlGaInP (204-15/FNC2-2TVA, 204-10UYOC/S530-A3, 4204-10SURC/S530-A3, EVERLIGHT). The blue LED is taken as an example, and its parameters were taken from the manufacturer (https://www.powerwaywafer.com/epitaxial-wafer.html?gclid=EAIaIQobChMIr8qD2ZGd4wIVGIfVCh0wYwA0EAAYASAAEgJi-vD_BwE) in Fig. 3a. From top to bottom, the structure is composed of layers of various thicknesses: 0.2 μm p-GaN, 0.03 μm p-AlGaN, a 0.2 μm InGaN/GaN activation layer, 2.5 μm n-GaN, 3.5 μm u-GaN, and a 430 μm Al₂O₃ substrate. This LED emits blue light when a voltage of 3 V is applied. When no voltage was applied and a THz pulse illuminated the LED, a photovoltaic signal could be probed. We saw similar phenomena in LEDs with other colors, but the responses differed.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The code that supports the findings of this study is available from the corresponding author upon reasonable request.
Exploring Carbohydrates for Therapeutics: A Review on Future Directions
Carbohydrates are important components of foods and essential biomolecules performing various biological functions in living systems. A variety of biological activities besides providing fuel have been explored and reported for carbohydrates. Some carbohydrates have been approved for the treatment of various diseases; however, carbohydrate-containing drugs represent only a small portion of all of the drugs on the market. This review summarizes several potential development directions of carbohydrate-containing therapeutics, with the hope of promoting the application of carbohydrates in drug development.
INTRODUCTION
Carbohydrates are ubiquitously present in a wide range of plants, animals, and microorganisms. Their irreplaceable biological roles have been well established. To date, a large number of carbohydrate-containing drugs have been approved worldwide (Jiang et al., 2021). However, the development of carbohydrate-containing drugs seems to have slowed down in recent years. Of the more than 200 drugs that were approved during 2015-2020, only nine are small-molecule carbohydrate-containing drugs (Bhutani et al., 2021). This mini-review provides a summary and our opinion on the future of carbohydrate-containing drugs. Carbohydrates have three typical characteristics: a high density of functional groups (e.g., hydroxyl), a diversity of structures based on different configurations, and ideal biocompatibility, as they are ubiquitous in the body. It is crucial to harness the intrinsic properties of carbohydrates in order to develop carbohydrate-containing therapeutics. Overall, five potential directions need to be focused on, namely, pure carbohydrate drugs, carbohydrate conjugates, carbohydrate scaffolds, carbohydrate vaccines, and glyconanomaterials (Figure 1).
Pure Carbohydrate Drugs
Carbohydrates present many advantages in drug screening, such as low cost, abundance, a high density of functional groups, and diversity of molecular structures. However, in clinical practice, it is rare to use carbohydrates directly as drugs or as the main body of drugs. Monosaccharides are ubiquitous in our body, and thus they are difficult to use directly as drugs. However, some decorated monosaccharides have been approved for the treatment of specific diseases by mimicking the functions of monosaccharides. ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) injection is a typical example (Figure 2). ¹⁸F-FDG is radioactive 2-deoxy-2-[¹⁸F]fluoro-D-glucose that has been used for the diagnosis of cancer in conjunction with positron emission tomography (Ben-Haim and Ell, 2009). Because cancerous tissues take up glucose at a higher rate than most normal tissues, ¹⁸F-FDG is preferentially taken up by tumor cells, thus allowing clinicians to identify sites of tumors and metastases, as well as to stage cancer and monitor the response to treatment.
Oligosaccharides and polysaccharides show low lipophilicity due to the presence of multiple hydroxyl groups. Typically, the less lipophilic a drug, the worse its absorption following oral administration (Witczak, 2006). Therefore, they are usually designed for the treatment of gastrointestinal tract diseases, where low absorption is acceptable or necessary. Lactulose (Figure 3), a disaccharide, is a typical example: it undergoes minimal gastrointestinal absorption and is broken down into organic acids by saccharolytic bacteria, which enhances intraluminal gas formation and facilitates bowel movements. Therefore, it has been used for the treatment of chronic constipation (Bae and Kim, 2020). In addition, intravenous administration is also an option for this type of highly polar therapeutic agent. Heparin (Figure 3) is a sulfated polysaccharide isolated from animal organs, and it has been used clinically as an intravenously injected antithrombotic agent for decades (Qiu et al., 2021). However, it is a highly heterogeneous mixture of polysaccharides and is associated with severe side effects. After the structure-function relationship of heparin was established using synthetic oligosaccharides, some antithrombotic agents with definite single structures were developed, such as fondaparinux and idraparinux (Figure 3) (Dey et al., 2019; Bugg et al., 2011). Idraparinux is a fully synthetic analog of the pentasaccharidic domain of heparin, which contains O-sulfation and O-methyl functionalities instead of N-sulfation and free hydroxyl groups. It has a chemically defined structure, and it is also an analog of fondaparinux. Compared with fondaparinux, idraparinux shows higher anti-Xa activity and a longer half-life, and it has been in a phase III clinical trial for the treatment of patients with atrial fibrillation and venous thromboembolic events. Recently, a seaweed-derived oligosaccharide, GV-971, was approved in China for the treatment of Alzheimer disease (Wang et al., 2019). GV-971 is a heterogeneous mixture of acidic linear oligosaccharides ranging from dimers to decamers.
Carbohydrate Conjugates
Carbohydrate conjugates refer to drugs in which a carbohydrate is attached to another molecule. In this case, the carbohydrate itself is not the main body of the drug; rather, it is a functional group used to increase bioactivity, improve physical and chemical properties, or achieve targeting. The vast majority of carbohydrate-containing drugs already on the market fall into this category. This is not surprising, given that the high polarity and multifunctional properties of carbohydrates make them ideal additions for improving drug properties. However, it is noteworthy that these approved drugs either originated from natural products containing carbohydrate moieties (e.g., antibiotics such as vancomycin; Figure 4A) or were designed based on key carbohydrate-containing components of the body (e.g., nucleoside analogs such as cytarabine; Figure 4B) (Jordheim et al., 2013; Yilmaz and Ozcengiz, 2017). Apart from these two aspects, carbohydrates have much more potential for exploitation in drug design. For example, a number of carbohydrate-containing probes with potential diagnostic applications have been reported (e.g., KSL11; Figure 4C); they can be used to detect ions, small molecules, and enzymes (Yang et al., 2010; Li et al., 2011; Longxia Li, 2015; Li et al., 2020). Furthermore, carbohydrates (glucose or other glucose transporter substrates) have been conjugated to cytotoxins or anticancer therapeutics for the specific targeting and treatment of cancer (Calvaresi and Hergenrother, 2013). The rationale behind this strategy is that glucose transporters and glycolytic enzymes are widely overexpressed in cancer tissues, which correlates strongly with poor cancer prognosis, making them attractive targets for anticancer drug targeting (Treekoon et al., 2021). The wide application of 18F-FDG injection in the diagnosis and staging of many types of cancer provides the strongest support for this theory. Cancers clinically staged using 18F-FDG imaging may be good candidates for glycoconjugate targeting. According to Bensinger et al., these include lung, breast, colorectal, and endometrial carcinomas, as well as bone and soft tissue sarcomas and Hodgkin and non-Hodgkin lymphomas (Bensinger and Christofk, 2012). Indeed, carbohydrate-conjugated anticancer molecules for targeted therapy have attracted great interest, and the field has grown markedly in recent years. Several clinical trials in several countries have been conducted over the past decades on glufosfamide, a trailblazer among glycoconjugated anticancer agents (Briasoulis et al., 2003; Ciuleanu et al., 2009). In addition, some glycoconjugates have shown improved activity and selectivity compared with the aglycone in vitro and in vivo (e.g., a glucose-platinum conjugate; Figure 4D) (Mikuni et al., 2008; Miot-Noirault et al., 2011; Patra et al., 2016; Wang et al., 2016). However, this drug design strategy has not yet been proven in clinical practice, which somewhat undermines the confidence of researchers in the field. Open questions remain as to how to choose the proper coupling position on the carbohydrate, what the exact mechanism of glycoconjugate entry into cells is, and how glycoconjugates work in vivo.
Carbohydrates as Scaffolds
Carbohydrates have a high functional group density and diverse functional group orientations, which make them excellent scaffolds for designing bioactive compounds by appending desired substituents at selected positions around the sugar ring. This strategy has promising prospects in drug development, and it has been widely used in the design of peptidomimetics. It is known that the instability of the amide backbone is an important limiting factor in the development of peptide drugs. In addition, the amide backbone makes peptides less membrane-permeable, which leads to lower bioavailability. Accordingly, using carbohydrates to mimic the backbone of peptides is a promising strategy for increasing the druggability of peptides. Since Hirschmann et al. first reported that a peptidomimetic based on a β-D-glucoside scaffold (Figure 5A) could target the somatostatin receptor, several research groups have reported various applications of peptidomimetics based on carbohydrate scaffolds in different biological fields (Hirschmann et al., 1992; Hirschmann et al., 2009). Relevant studies of peptidomimetics have been reviewed extensively elsewhere (Cipolla et al., 2005; Meutermans et al., 2006; Velter et al., 2006; Cipolla et al., 2010; Tian et al., 2015; Lenci and Trabocchi, 2020). However, the existing examples have also demonstrated the difficulty of designing single-carbohydrate-scaffold mimetics that maintain the level of bioactivity (and/or selectivity) of their counterparts. This is because it is difficult to ensure that the positions and orientations of the functional groups of the mimetics are exactly the same as those of the original ligands. A possible solution is to use carbohydrates as scaffolds to build a diverse compound library and to screen the library for ideal molecules (Hollingsworth and Wang, 2000; Le et al., 2003; Lenci et al., 2016). This strategy has been used in our group to screen inhibitors of protein tyrosine phosphatase 1B (PTP1B), with ribose and xylose used as scaffolds (Figure 5B). The successful identification of a potent and selective PTP1B inhibitor preliminarily proved the feasibility of this strategy. However, in that work, the carbohydrate scaffolds were modified with the same pharmacophore within one molecule, which reduced the difficulty of the reaction but also dramatically reduced the diversity of the compounds. Presenting different pharmacophores at different substitution sites of the carbohydrate scaffold can greatly increase compound diversity. For this purpose, efficient organic synthesis methods that allow the individual, sequential protection and deprotection of single hydroxyl groups on sugar rings are essential. Advances in organic synthesis in the field of carbohydrate chemistry have accelerated and extended the application of this strategy (Busto, 2016; Ghosh and Kulkarni, 2020). It is expected that, as the power of organic synthesis increases, utilizing carbohydrates as scaffolds to generate highly functional and structurally diverse compound libraries, and then screening them for bioactive molecules, will become an alternative route for drug development.
Carbohydrates Used for Vaccines
Carbohydrates have also been used for the development of vaccines, such as carbohydrate-based antimicrobial vaccines and anticancer vaccines (Mettu et al., 2020). The star drug Prevnar 13 is a typical polysaccharide-protein conjugate vaccine; it was approved by the US Food and Drug Administration in 2010 (Gruber et al., 2012). The rationale behind carbohydrate-based vaccines is that carbohydrates can act as antigenic determinants and be specifically recognized by the immune system (Temme et al., 2021). Capsular polysaccharides and lipopolysaccharides are major constituents of microbial cell surfaces and can be specifically recognized by the host immune system; thus, they can be exploited as the basis for the design of antibacterial vaccines. The expression of carbohydrates on the cancer cell surface differs from that on the normal cell surface, and abnormal glycosylation in primary tumors correlates closely with the survival rate of cancer patients. These abnormal oligosaccharides or polysaccharides, usually referred to as tumor-associated carbohydrate antigens, are strictly related to the metastasis of tumor cells. Based on the immunogenicity of carbohydrates, the development of carbohydrate-based vaccines provides an attractive option for the treatment of infections and cancers. The main difficulty in carbohydrate vaccine development is poor immunogenicity due to the inherent T-cell-independent nature of carbohydrate antigens. In the response to this class of antigens, no immunological memory is established and no T-cell responses are induced, which is markedly different from responses to proteins and peptides. One method of overcoming this problem is to conjugate the corresponding carbohydrates to a carrier protein (Galili, 2020). However, the first problem to solve is how to obtain carbohydrate antigens in sufficient quantity, with high purity and structural integrity. Natural polysaccharides obtained by isolation are usually highly heterogeneous and easily contaminated. Several studies have demonstrated that synthetic conjugates show activity comparable to that of native polysaccharides linked to the same carrier protein, suggesting that chemically well-defined synthetic oligosaccharides are a safer alternative to their natural polysaccharide counterparts. Moreover, fully synthetic oligosaccharide conjugate vaccines have additional advantages, because they can be designed to incorporate only the elements required for a desired immune response, and they can be produced as chemically well-defined compounds in a reproducible fashion. Therefore, efficient synthetic methods are critical for the development of carbohydrate-based vaccines. Although the construction of oligosaccharides remains a challenging task, due to the combined demands of elaborate procedures for glycosyl donor and acceptor preparation and the requirements of regio- and stereoselectivity in glycosidic bond formation, considerable improvements have been made in this field (Zhu and Schmidt, 2009; Ghosh and Kulkarni, 2020; He et al., 2020). It is expected that new and efficient synthetic methods will be developed in the near future, giving access to a wide range of oligosaccharides and glycoconjugates for vaccine development.
Glyconanomaterials
Carbohydrates have been conjugated to nanomaterials for biomedical imaging, diagnostics, and therapeutics (Crucho and Barros, 2019). In addition, some carbohydrate-containing drugs have been used as cargos in nanodelivery systems to enhance drug efficacy, reduce nonspecific toxicity, or improve targeting (Senanayake et al., 2011; Delorme et al., 2020). The combination of chemical glycobiology and nanotechnology has provided promising new tools that can be used in the imaging of cancer cells, photodynamic therapy, biosensors, and drug targeting (Reichardt et al., 2016). In this section, the application of carbohydrates in the modification of nanomaterials is highlighted. Considering their ubiquitous distribution in tissues and important functions at the cellular level, carbohydrates have been widely used in the functionalization of nanomaterials and have shown unique advantages in the development of nanomedicines. Strategies for the design, preparation, and application of glyconanomaterials have been summarized in many recent reviews (Gorityala et al., 2010; Marradi et al., 2013; Hao et al., 2016; Zhang et al., 2018; Khan et al., 2019). Generally, carbohydrates offer three typical advantages in the development of nanomedicines: in addition to increasing water solubility, they improve the biocompatibility of nanomaterials and increase the affinity for receptors. In terms of biocompatibility, carbohydrates are an ideal choice for overcoming nanomaterial immunogenicity, which has limited the use of nanomaterials in vivo to a large extent. The rationale behind this advantage is that host-like glycan structures, in a form of molecular mimicry, can assist some pathogens in evading recognition by the host immune system (Severi et al., 2007). Thus, it is not surprising that carbohydrate-decorated nanoparticles show lower immunogenicity compared with unmodified nanomaterials. As a major nonimmunogenic carbohydrate component, glucose was used in the design of glyconanoparticles in the study by Frigell et al. (2013). In terms of increasing affinity, the synergy between nanomaterials and carbohydrates shows huge potential advantages. Nanomaterials provide a formidable platform that makes the presentation of multiple carbohydrate ligands possible via their large surface-to-volume ratio, thereby greatly increasing the affinity of carbohydrates as biofunctional ligands for specific glycan-binding proteins. The study by Reynolds et al. showed that a gold nanoparticle platform displaying carbohydrate ligands produced a significant cluster glycoside effect, with an almost 3,000-fold increase in binding compared with a monovalent reference probe in free solution (Reynolds et al., 2012). The targeting and aggregation capacity of carbohydrates has been further demonstrated in studies of nanocarrier-mediated drug delivery into the brain (Anraku et al., 2017). According to Anraku et al., a nanocarrier with a surface featuring many glucose molecules has the potential to deliver various drugs directly into the brain by crossing the blood-brain barrier (BBB), taking advantage of a multivalent interaction between the multiple glucose molecules and glucose transporters (Figure 6) (Anraku et al., 2017). Furthermore, carbohydrate-decorated nanoparticles can be used as active therapeutic entities to inhibit pathogen adhesion, the first step in initiating infection.
A study showed that a tridecafullerene functionalized with 120 monosaccharides exhibited a potent inhibitory effect against cell infection by an artificial Ebola virus, with half-maximum inhibitory concentrations in the subnanomolar range (Muñoz et al., 2016). More recently, Bhatia et al. reported adaptive, flexible sialylated nanogels that deform and adapt onto the influenza A virus surface via multivalent binding of sialic acid residues to the hemagglutinin spike proteins on the virus surface. Based on this multivalent binding strategy, the sialylated nanogels efficiently block virus adhesion to cells and inhibit infection at low picomolar concentrations (Bhatia et al., 2020). For carbohydrate-decorated nanomaterials, the key aspects of performance include the proper display of the carbohydrate ligands, the type and length of the spacer linkage, and the ligand density. Receptors bind specific carbohydrate molecules with different affinities; for example, glucose transporter-1 shows higher affinity for glucose than for other monosaccharides. Therefore, the selection of the carbohydrate molecule is crucial in the functionalization of nanomaterials.
FIGURE 6 | A glucosylated nanocarrier used to deliver drugs able to cross the BBB and reach the brain tissue. Reprinted with permission from Anraku et al. (2017).
In addition, the effect of linkers on the binding affinity of glyconanoparticles has been investigated in recent studies (Richards et al., 2012; Simpson et al., 2016; Malakootikhah et al., 2017). The results showed that the binding affinity increased with spacer length: a longer and more flexible spacer may provide additional spatial freedom and less steric hindrance, allowing the attached ligands to associate more efficiently with their binding partners. Regarding the effect of carbohydrate ligand density, carbohydrate molecules recognize receptors principally through weak interactions such as hydrogen bonding; thus, increasing the ligand density may introduce cluster or multivalency effects, which can significantly enhance the binding affinity. The most representative example is that oligosaccharides usually exhibit higher binding affinity than monosaccharides toward the same lectin receptor. Although the binding affinity can be roughly quantified with available techniques, such as surface plasmon resonance or isothermal titration calorimetry, it is in general difficult to control the number of carbohydrate ligands conjugated to a nanomaterial. Such imprecise preparation methods may result in ambiguities in composition and structure and in batch-to-batch variation of the prepared glyconanomaterials, which is one of the important factors limiting their clinical development.
CONCLUSION
Overall, the present review summarized the possible directions for carbohydrate-containing drugs based on the intrinsic characteristics of carbohydrates. As the biological functions of carbohydrates continue to be explored, and as more novel carbohydrate-containing molecules are designed or obtained from natural products, it is expected that carbohydrates, as a treasure house of medicine, will bring more surprises in the near future.
AUTHOR CONTRIBUTIONS
JW, YZ, and RZ conceived and designed the framework of this article. RZ wrote it. QL and DX were responsible for revising it. | 2021-11-16T14:28:04.020Z | 2021-11-16T00:00:00.000 | {
"year": 2021,
"sha1": "a5cd209038e946f62f8859b98f69ea68726b7f1b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.756724/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5cd209038e946f62f8859b98f69ea68726b7f1b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44040346 | pes2o/s2orc | v3-fos-license | Genetic analysis of measles virus nucleocapsid gene identifies measles virus isolate of close similarity to Clade A viruses from Nigeria
Previous studies on the molecular epidemiology of measles virus in Nigeria show that genotype B3 clusters 1 and 2 are the circulating measles genotypes. We report the isolation of a measles virus strain of close similarity to the reference genotype A measles virus strain from Ibadan, Nigeria. Molecular characterization of a measles virus isolate from a child presenting with fever and rash at a hospital in Ibadan, Nigeria, was performed by measles virus isolation in the Vero/SLAM cell line, RT-PCR, and direct sequencing of the COOH-terminal region of the nucleoprotein gene of the isolate. Phylogenetic analysis of the sequence shows that isolate MViIbadan, NIE/11.01 clustered with the reference strains of Clade A. Our current report shows evidence of a circulating MV genotype different from the previously reported genotype B3 in Nigeria. We advocate for expanded national molecular surveillance of measles virus, as this will aid in achieving the country's goal of controlling the disease.
The virus is relatively antigenically stable and monotypic but has genetic variation in the hemagglutinin (H) and nucleoprotein (N) genes 2. Measles is one of the leading causes of death among young children despite the availability of a safe and cost-effective vaccine for over 40 years 3. It accounted for over 145,700 deaths globally in 2013, 51% of which were from the WHO-defined African region 4. Nigeria still ranks high among countries reporting measles infection yearly, with over 7,000 suspected and over 3,000 confirmed cases reported in 2014 4.
The 450 nucleotides that code for the COOH-terminus of the N protein are the single most variable part of the measles genome, and analysis of this region in MV isolates or clinical specimens has helped classify measles into eight clades (Clades A-H) and 23 genotypes 5. Different measles genotypes and strains circulate in specific geographical regions 5,6. Measles virus genotypes B3, D4, D5, D8, and H1 were isolated in the WHO Americas region between 2007 and 2009; genotypes D6 and D9, in addition to those found in the Americas, are also found in the European region; and genotype B3 was the major circulating strain in the African region in this period 6.
Previous reports on circulating measles strains from Nigeria have indicated the circulation of only genotype B3 clusters 1 and 2. However, due to the absence of integrated molecular surveillance in measles elimination programs, only a few sequence data of Nigerian measles isolates are available, especially in recent times 7,8. Molecular surveillance is essential in order to observe changes in viral genotypes over time in a particular region. It is also an important tool in assessing the effectiveness of vaccination programs 9. This study reports the molecular characterization of a previously unreported genotype of measles virus from Nigeria.
Sample collection
As part of the standard care routine, a nasopharyngeal swab was collected from a 3-year-old child with suspected measles infection presenting with fever, maculopapular rash, cough, and conjunctivitis at the Oni Memorial Children's Hospital in Ibadan, Oyo State, Nigeria, in November 2010. The sample was transported to the lab in virus transport medium under reverse cold chain. The swab was inoculated into a tissue culture flask of the Vero/SLAM cell line and observed for cytopathic signs for seven days.
A nasopharyngeal swab was used as the sample type because it is the sampling site recommended by the United States Centers for Disease Control and Prevention. In addition, measles virus is shed up to 10 days after the onset of rash and after the resolution of viraemia, increasing the chances of viral nucleic acid detection up to 14 days after the onset of rash.
RNA extraction and RT-PCR
Viral RNA was extracted from the collected throat swab and from the cell culture supernatant using the QIAamp® Viral RNA kit (QIAGEN, Valencia, USA), according to the manufacturer's instructions.
Extracted RNA was reverse transcribed to cDNA using a commercial kit (SCRIPT cDNA synthesis kit, Jena Bioscience® GmbH, Germany). Nested PCR was performed using two sets of primers: first round, fwd MN5 5′-GCCATGGGAGTAGGATGGAAC-3′ and rev MN6 5′-CTGGCGGGCTGTGTGTGGACCTG-3′; nested inner primers, Nfla 5′-CGGGCAAGAGATGGTAAGGAGGTCAG-3′ and Nr7a 5′-AGGGTAGGCGGATGTTGGTTCTGG-3′, as previously described by Kremer et al. 8, using an Applied Biosystems GeneAmp PCR System 9700 thermal cycler. Cycling conditions for both the first and second reactions were as follows: 94°C for 5 minutes, followed by 35 cycles of 94°C for 30 seconds, 55°C for 1 minute, and 72°C for 1 minute, with a final elongation at 72°C for 5 minutes 8.
Sequencing
Purified amplicons were sequenced at the Jena Bioscience laboratory (Germany) by the Sanger sequencing method using the second-round primers. The sequence data obtained were edited and assembled with BioEdit software version 7.0.5, and sequence similarity was determined by the Basic Local Alignment Search Tool (BLAST).
The query sequence was aligned with reference sequences downloaded from GenBank with the help of the Measles Nucleotide Surveillance (MeaNS) database. Table 1 shows the names and GenBank accession numbers of the reference sequences used for the analysis. Sequences were selected from characterized isolates reported in publications from the WHO European region laboratory 7,8, and alignment was performed using the CLUSTALW software. Phylogenetic trees were constructed in MEGA version 6.06 software using the maximum likelihood and neighbor-joining methods with the p-distance model and 1,000 bootstrap replicates.
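As an aside, the distance-based step can be reproduced outside MEGA; the following hedged Biopython sketch uses the built-in "identity" distance (close in spirit to the p-distance model) on a placeholder alignment file and omits bootstrapping for brevity.

```python
# Neighbor-joining tree from an aligned N-450 dataset (sketch, not the
# original MEGA 6.06 analysis); the alignment file name is a placeholder.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("measles_N450_aligned.fasta", "fasta")
dm = DistanceCalculator("identity").get_distance(alignment)  # ~p-distance
tree = DistanceTreeConstructor().nj(dm)                      # neighbor joining
Phylo.draw_ascii(tree)
```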
Results and discussion
Both the swab and tissue culture samples showed the expected 560-base-pair product after nested RT-PCR. Phylogenetic analysis of the measles sample MViIbadan/NIE/11.10 sequence (accession no. LN876569.1) gave a distinct BLAST search result (Supplementary File 1). From the search, it was observed that the sequence with accession no. JF727650.1 (Leningrad) is the virus sequence most closely related to our Ibadan strain. The phylogenetic tree revealed that the Ibadan prototype Clade A strain co-clustered with sequences close to the vaccine Edmonston-Zagreb strain with a 98% bootstrap value (Figure 1), suggesting the possibility of a common ancestral origin. Previous molecular epidemiology of measles virus in Nigeria classified isolates into two distinct groups within genotype B3, B3 cluster 1 and B3 cluster 2 7,8. The B3 genotype was assigned after exhaustive genetic analysis of MV strains from Lagos and Ibadan in 1999 8.
Our current study describes the isolation of a measles strain different from the previously described genotypes (MViIbadan/NIE/11.10). This virus was recovered from the throat swab of a 3-year-old child with no documented history of vaccination at the Oni Memorial Children's Hospital in Ibadan, Nigeria, in November 2010. Although we did not independently investigate the immediate or remote factors that could have led to the introduction or emergence of this virus genotype in Nigeria, we postulate that the virus could have been imported from another continent or could have been a previously undetected circulating strain. This is because active molecular surveillance is not in place in Nigeria, and the last report of MV molecular epidemiology was in 2005 8. This leaves a huge gap in research information, and multiple introductions of diverse measles genotypes might have taken place undetected over time. We therefore advocate the establishment of a national surveillance program that includes molecular epidemiology, in collaboration with national and global authorities such as the WHO and the Federal Ministry of Health, Nigeria, to help achieve measles elimination goals in the country and globally.
Figure 1.
Figure 1. Phylogenetic tree of the partial nucleoprotein gene sequence of the measles virus. The study isolate is shown in bold red font; clinical strains from Nigeria are shown in blue font. Clades are indicated beside the black horizontal lines. The GenBank accession numbers are indicated first in the sequence labels; bootstrap values are indicated if ≥ 50%. The phylogenetic tree was constructed using the neighbor-joining algorithm in MEGA 6.0 with 1,000 bootstrap replicates.
Introduction
Measles is an acute viral illness characterised by a prodromal illness of fever, coryza, cough, conjunctivitis, and the presence of Koplik spots, followed by the appearance of a generalised maculopapular rash. The illness is either mild or severe depending on the immune and nutritional status of the infected person. It is caused by measles virus (MV), a negative-sense single-stranded RNA virus of the family Paramyxoviridae, genus Morbillivirus 1. | 2018-02-11T09:06:35.948Z | 2018-02-07T00:00:00.000 | {
"year": 2018,
"sha1": "85731359b3d92e295894c6e6d5df7b8d89c9f18a",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/7-155/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "898ac2c237f22573ffd8c841d88eae8affe0f23a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
256584770 | pes2o/s2orc | v3-fos-license | Global analysis for pD-variation NMR titration of organic electrolyte complexes
ABSTRACT This is an addendum to our preceding article, 'NMR pD-variation method to determine the proper stabilities of organic electrolyte complexes: case of histidine complexes with a cyclophane acid' Supramolecular Chemistry https://doi.org/10.1080/10610278.2022.2134017, which proposes that the species distribution of complexes between organic electrolytes is determined by observing the pD dependence of NMR signals for a mixture of the reactants; the species distribution can be replotted in a pH scale. This addendum demonstrates that the process of data analysis is simplified, and made more reliable, by employing a global fitting method in place of the local, successive method used previously; a comparative evaluation was made using the appended Excel® spreadsheets. The straightforward global analysis will facilitate the practice of pD-variation NMR titration. The application range is potentially extendable by the use of 13C NMR, which overcomes some weak points of the 1H NMR method. Graphical abstract
Introduction
Our preceding paper has shown that pD-variation NMR (nuclear magnetic resonance) titration makes it possible to determine the species distribution of organic electrolyte complexes as a function of pD and then replot the distribution in the pH scale [1]. When two species are formed in different pD ranges, the true species are identified by examining the δ versus pD plots of selected NMR signals, and the proper stability constants are determined by successively fitting the relevant titration curves. This rather complicated process of data analysis is expected to be simplified, in an even more reliable manner, by the so-called global analysis, in which selected pairs of titration curves are fitted simultaneously [2]. This addendum explains the use of the global analysis in pD-variation NMR titration using the appended Excel® spreadsheets, as compared with the local, successive fitting method employed in the preceding paper. The two methods have been examined in a comparative manner for the typical 1:1 complexes of a monobasic acid AH and a monoacidic base B; the tested titration curves were generated by simulation. The global method has then been extended to experimental data reported for histidine complexes with a cyclophane acid [1]. These examples demonstrate that the global analysis is much more convenient for the practice of pD-variation NMR titration.
Case of simple acid-base complexation
To help the comparative examination of the successive and global fitting methods, the titration method is summarised here along with the definitions of the variables in the least-squares calculations [1]. The chemical shifts δ of the organic electrolytes AH_j and BH_k depend on pD, respectively, as follows.
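A plausible form of the omitted equations, assuming the standard mole-fraction-weighted fast-exchange average with β_DA0 = β_DB0 = 1 (the deprotonated forms A and B as reference species), is:

\[
\delta^{(A)} = \frac{\sum_{j}\beta_{\mathrm{DA}j}\,[\mathrm{D}^{+}]^{j}\,\delta_{\mathrm{AH}_{j}}}{\sum_{j}\beta_{\mathrm{DA}j}\,[\mathrm{D}^{+}]^{j}}, \qquad
\delta^{(B)} = \frac{\sum_{k}\beta_{\mathrm{DB}k}\,[\mathrm{D}^{+}]^{k}\,\delta_{\mathrm{BH}_{k}}}{\sum_{k}\beta_{\mathrm{DB}k}\,[\mathrm{D}^{+}]^{k}}
\]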
Here, β_DAj and β_DBk are the overall protonation constants of A and B in D2O, respectively; δ_AHj and δ_BHk are the intrinsic shifts of the monitored proton signals of AH_j and BH_k. Plots of δ(A) and δ(B) versus pD are shown in Figure 1 (dotted lines) for a monobasic acid AH (log K_D = 6.0) and a monoacidic base B (log K_D = 9.0); the species distributions are presented in the top panel of the figure. For a 1:1 complex formed between the acid and base, the stability constant is defined by the following equation.
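The defining equation is omitted here; for the 1:1 complex AH_j·BH_k it is presumably the usual association constant:

\[
K_{jk} = \frac{[\mathrm{AH}_{j}\cdot\mathrm{BH}_{k}]}{[\mathrm{AH}_{j}]\,[\mathrm{BH}_{k}]}
\]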
The chemical shifts of AH_j and BH_k in the rapid-exchange case are derived as follows.
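One schematic form consistent with the description that follows (a protonation term plus a displacement term scaled by the complexed fraction) is given below; this is an assumption for illustration, not necessarily the authors' exact expression:

\[
\delta^{(A)} = \delta^{(A)}_{\mathrm{prot}}(\mathrm{pD}) + \frac{[\mathrm{AH}_{j}\cdot\mathrm{BH}_{k}]}{C_{A}}\left(\delta^{(A)}_{jk}-\delta_{\mathrm{AH}_{j}}\right), \qquad
\delta^{(B)} = \delta^{(B)}_{\mathrm{prot}}(\mathrm{pD}) + \frac{[\mathrm{AH}_{j}\cdot\mathrm{BH}_{k}]}{C_{B}}\left(\delta^{(B)}_{jk}-\delta_{\mathrm{BH}_{k}}\right)
\]

where δ_prot denotes the protonation-only curve defined above.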
Here, δ_jk(A) and δ_jk(B) are the intrinsic chemical shifts of the monitored signals of AH_j and BH_k in the complex; C_A and C_B are the total concentrations of the reactants. In each equation, the first term presents the δ versus pD curve of protonation on each electrolyte, and the second term displaces the curve from the corresponding curve in the non-complexation case, responding to the type of complex and the stability constant K_jk. The displacements have the following characteristic features: (1) the formation of A·BH displaces the two titration curves in opposite directions; (2) the formation of AH·BH or A·B displaces one titration curve but not the other; (3) when species A·BH is accompanied by a by-product A·B, for example, the net displacement of B01 is reduced due to the secondary species, whereas A01 is not affected; (4) AH·B is not detectable because of the low concentrations of the components in the pD range of complexation.
Figure 1. NMR chemical shift δ versus pD plots simulated for components A and B in 1:1 complexes AH_j·BH_k (j, k = 0 or 1): the plots are labelled Ajk and Bjk, respectively; C_A·K_jk = 10.
On the basis of these features of curve displacements, the formed species AH_j·BH_k are identified, and the proper stability constants are determined by least-squares fitting of suitably selected curves; the experimental requirement is that the titration curves of each reactant and of a mixture should be obtained under the same conditions, including the pH-pD conversion.
In the present examination of the methods of data analysis, titration curves were simulated for the formation of A·BH and A·B with a K_01:K_00 ratio of 10:1 and the 'true values' of the variables in Table 1. Figure 2 shows the tested titration plots of δ(A) and δ(B), which were generated by adding random noise of ±0.005 to δ and ±0.04 to pD (these high noise levels were intentionally chosen to examine worst-case scenarios). The stability constants are calculated step by step by reference to the above-described features: (1) the opposite displacements of δ(A) and δ(B) identify A·BH as the major species; (2) the stability constant K_01 of the complex is equated to the value determined by local curve fitting of δ(A) rather than that of δ(B), because the former value is larger than the latter, and the minor species is identified as A·B by the same reasoning; (3) the stability constant K_00 of the minor species is determined by successive curve fitting of δ(B) under the constraint that K_01 takes the value determined from δ(A). The parameter values obtained in each step are shown in Table 1.
In the global analysis, the process of steps 2 and 3 is performed more definitely in a single step. The simultaneous curve fitting of δ(A) and δ(B) for the formation of A·BH and A·B gave reasonable results compared with the true values (cf. global fit 1 in Table 1). By contrast, another curve fitting assuming A·BH and AH·BH yielded illogical results, such as a negative value of log K_jk (global fit 2), ruling out the formation of AH·BH. Thus, the global method makes the data analysis straightforward. In addition, the fit of curve δ(A) is improved over the entire pD range, including the high pD range in which the minor species is formed, resulting in a slightly smaller standard deviation for the log K_01 of the major species and a larger deviation for the log K_00 of the minor species; the reliabilities of the constants may be more fairly evaluated in the global analysis.
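As an illustration of the simultaneous fit, the following minimal Python sketch concatenates the residuals of the δ(A) and δ(B) curves so that a single least-squares run refines the shared constants. It is not the authors' Excel® implementation: the parameter values are assumptions taken from the text, the speciation is solved by a simple damped iteration, and a single complexed shift per reactant (dAc, dBc) is assumed for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

KD_A, KD_B = 10.0**6.0, 10.0**9.0  # protonation constants of A and B (assumed)
CA = CB = 1.0e-3                   # total concentrations in M (assumed)

def speciation(pD, K01, K00):
    """Free [A] and [B] from the two mass balances by damped fixed-point iteration."""
    D = 10.0**(-np.asarray(pD, dtype=float))
    a, b = np.full_like(D, CA), np.full_like(D, CB)
    for _ in range(1000):
        q = K01 * KD_B * D + K00                      # lumps A·BH and A·B formation
        a = 0.5 * a + 0.5 * CA / (1.0 + KD_A * D + q * b)
        b = 0.5 * b + 0.5 * CB / (1.0 + KD_B * D + q * a)
    return a, b, D

def shifts(pD, logK01, logK00, dA, dAH, dAc, dB, dBH, dBc):
    """Population-weighted shifts of the A and B signals under fast exchange."""
    K01, K00 = 10.0**logK01, 10.0**logK00
    a, b, D = speciation(pD, K01, K00)
    c = (K01 * KD_B * D + K00) * a * b                # [A·BH] + [A·B]
    dA_obs = (a * dA + KD_A * D * a * dAH + c * dAc) / CA
    dB_obs = (b * dB + KD_B * D * b * dBH + c * dBc) / CB
    return dA_obs, dB_obs

def residuals(theta, pD, yA, yB):
    dA_obs, dB_obs = shifts(pD, *theta)
    return np.concatenate([dA_obs - yA, dB_obs - yB])  # global fit: both curves at once

# usage: theta = [logK01, logK00, dA, dAH, dAc, dB, dBH, dBc]
# fit = least_squares(residuals, theta0, args=(pD_data, yA_data, yB_data))
```

In the local, successive method, each residual vector would instead be minimised on its own, with K_01 fixed between the two steps; concatenating the residuals is what shares the constants across both curves.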
One of the disadvantages of the pD-variation 1H NMR method is that the formation of the A·B complex, for example, displaces only the titration curve of reactant B; if this reactant does not carry a proton sensitive to protonation, the titration method is useless. Exceptionally, if the complexation causes a significant δ change for A, the stability constant K_00 can be determined: Figure 3 shows example titration curves of A in A·B complexation accompanied by selected δ changes, Δ_00 = δ_00(A) − δ_A, at a noise level of ±0.002 for δ and ±0.04 for pD. The stability constant was correctly determined with acceptable standard deviations, as shown in the inset of the figure; obviously, a larger δ change results in higher reliability. In this practice, a low degree of data scattering is required: in the present simulation at a noise level of ±0.005, the K_00/M−1 obtained for Δ_00 = 0.02 was 1720, which differed largely from the true value of 1000 (with an unacceptable standard deviation of 1820), in contrast to the value K_00 = 1000 (576) at the ±0.002 noise level; for Δ_00 = 0.06, K_00 = 650 (220) at the ±0.005 noise level, while K_00 = 1052 (190) at the ±0.002 level. These weak points of the 1H NMR method may be overcome by 13C NMR. The potential advantage of 13C NMR is that it may show additional signals sensitive to protonation and complexation, e.g., the signals of carbon atoms in carboxylate groups and substituted heterocycles; hence, 13C NMR has a better chance of presenting multiple signals suitable for monitoring, even in the A·B complexation case. Another possible advantage is that H-decoupled sharp signals are generally observed with large chemical shifts, so small relative errors and low degrees of data scattering can be expected. Moreover, a larger number of signals may be useful for a global analysis. Potentially, therefore, 13C NMR can extend the application range of the pD-variation method, with ready use of the global analysis, despite the longer time necessary for experiments.
Figure 2. δ versus pD plots simulated using the parameter values in Table 1 and by adding random noise of ±0.005 to δ and ±0.04 to pD: the solid lines present least-squares fits based on the simultaneous method (global analysis); the local, successive method gave practically the same goodness of fit; the obtained parameter values are shown in Table 1; the dotted lines are δ versus pD plots in the non-complexation case.
Case of histidine complexes
The NMR signal of the ring proton CH(2) in histidine responds mainly to protonation at the ring nitrogen, and that of the aliphatic proton CH(α) responds to protonation at the side-arm nitrogen; Figure 4 presents the pD dependence of the proton signals and the distribution of protonation species. The δ versus pD plots of the proton signals are displaced in the titration of a mixture with a cyclophane acid (cf. Scheme 1); the protonation status of the cyclophane, the experimental process (including pD determination), and the observed spectral changes were described in the preceding paper [1]. The formed complexes were identified as hdH·cyH and hd·cyH, and the stability constants were determined by local, successive curve fitting of the titration curves of the CH(2) and CH(α) protons, as shown in Table 2 [1]. The simultaneous fitting of the two curves gives consistent results in a simpler calculation process. The standard deviations suggest that the global analysis evaluates the reliabilities of the stability constants more reasonably.
Figure caption (fragment): the obtained parameter values are shown in Table 2; the dotted lines are δ versus pD plots in the non-complexation case.
Scheme 1.
Structures of histidine and cyclophane at pH ≈ 7 and the abbreviations.
Conclusion
The pD-variation NMR titration is advantageous for studies of organic electrolyte complexes. When two species are formed with different protonation statuses, they are identified by comparing the titration curves of suitably selected proton signals, and the proper stability constants are then determined by fitting the relevant titration curves. These data analysis processes are made straightforward and more reliable by employing the global analysis, which may facilitate the practice of the titration method.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by División de Ingeniería de la Universidad de Sonora (Grant no. USO316007870). | 2023-02-05T16:13:23.584Z | 2022-01-02T00:00:00.000 | {
"year": 2022,
"sha1": "ee2635dc8b404131a9bacc85dd83e7716707d7c9",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Global_analysis_for_pD-variation_NMR_titration_of_organic_electrolyte_complexes/22003862/1/files/39054605.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "039c53d911e2bdb7f9636de919212b7aafc44b11",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
51848861 | pes2o/s2orc | v3-fos-license | Detection of signal recognition particle (SRP) RNAs in the nuclear ribosomal internal transcribed spacer 1 (ITS1) of three lineages of ectomycorrhizal fungi (Agaricomycetes, Basidiomycota)
During a routine scan for Signal Recognition Particle (SRP) RNAs in eukaryotic sequences, we surprisingly found in silico evidence in GenBank for a 265-base-long SRP RNA sequence in the ITS1 region of a total of 11 fully identified species in three ectomycorrhizal genera of the Basidiomycota (Fungi): Astraeus, Russula, and Lactarius. To rule out sequence artifacts, one specimen from a species indicated to have the SRP RNA-containing ITS region was ordered and re-sequenced for each of these genera. Sequences identical to the corresponding GenBank entries were recovered or, in the case of a non-original but conspecific specimen, differed by three bases, showing that these species indeed have an SRP RNA sequence incorporated into their ITS1 region. Other than the ribosomal genes, this is the first known case of non-coding RNAs in the eukaryotic ITS region, and it may assist in the examination of other types of insertions in the fungal ITS region.
Introduction
The nuclear ribosomal internal transcribed spacer (ITS) region is part of the ribosomal DNA cistron. The ITS region is transcribed together with the 18S, 5.8S, and 28S genes but removed in the post-transcriptional processing of the rRNA. The ITS region has three separate subregions: the ITS1, the 5.8S gene, and the ITS2. The ITS1 is situated between the 18S and the 5.8S genes, and the ITS2 is situated between the 5.8S and the 28S genes. The ITS region varies significantly in length among fungal species (Taylor and McCormick 2008, Tedersoo et al. 2015), and both ITS1 and ITS2 form secondary structures with stems, bulges, and loops (Freire et al. 2012, Rampersad 2014). The secondary structure is important for correct processing of the rRNA; although the ITS1 and ITS2 regions are not expressed in the ribosome, there are constraints on the evolution of the ITS region, and it has both fast-evolving and more conserved regions (Nazar 2004, Mullineaux and Hausner 2009). However, there is still much to learn about the function of the ITS region, especially ITS1 (Rampersad 2014, Coleman 2015). The fast-evolving regions of the ITS have made it a cornerstone of species/genus-level phylogenetic inference in fungi and other organisms for more than 20 years, and it is the formal fungal barcode used for molecular species identification (Álvarez and Wendel 2003, Schoch et al. 2012).
One element that has never been implicated in the context of the eukaryotic rDNA cluster and ITS evolution is the existence of non-coding RNAs (ncRNAs) other than the 18S, 5.8S, and 28S rRNAs. Based on the recent identification of a ubiquitous eukaryotic ncRNA in the fungal phylum Basidiomycota, viz. the Signal Recognition Particle RNA (SRP RNA; Dumesic et al. 2015), we discovered a ~265-base-long homologue of this gene in a set of fungal ITS1 sequences (Fig. 1). The SRP RNA is an essential component of the SRP, a ribonucleoprotein particle that co-translationally directs proteins to the endoplasmic reticulum (ER) membrane. The SRP RNA acts both as a scaffold for the SRP proteins and as a regulator of the SRP by mediating a global reorganization of the SRP in response to cargo binding (Rosenblad et al. 2009, Akopian et al. 2013).
A more thorough in silico analysis verified the presence of SRP RNAs in the ITS1 region of a total of 11 fully identified fungal species (separate Latin binomials) distributed over three lineages of ectomycorrhizal basidiomycetes (Boletales: Astraeus (1 species: A. sirindhorniae); Russulales: Russula (1 species: R. olivacea); and Russulales: Lactarius (9 species: L. argillaceifolius, L. aspideus, L. brunneoviolaceus, L. luridus, L. nanus, L. pallescens, L. pseudouvidus, L. uvidus, and L. violascens)). The notion of an additional ncRNA element in the ITS1 region is novel and would seem, at least at first glance, to compromise the function of the ITS1. Hypothetically, contamination, chimeric unions, or other laboratory or data analysis artifacts could explain this finding. In this study, we apply DNA sequencing and bioinformatics to verify the presence of SRP RNA sequences in the ITS1 region of representatives of these fungi.
SRP RNA bioinformatics
The bioinformatic analysis of non-coding RNAs such as the SRP RNA is not trivial, as the primary sequence may vary substantially as long as the secondary structure is preserved. To enable searches for SRP RNAs without requiring exact sequence matches across the full length of the SRP RNA, we used a secondary-structure covariance model constructed from the full set of available ascomycete SRP RNAs, with the basidiomycete SRP RNAs from Dumesic et al. (2015) added, as well as a second dataset containing all covariance models from Rfam (Nawrocki et al. 2014). These models were used in an INFERNAL v1.1 cmsearch (Nawrocki and Eddy 2013) run against flatfiles of the International Nucleotide Sequence Database Collaboration (INSDC; Nakamura et al. 2013; February 2015 release). After observing several highly significant matches to what seemed to be the fungal ITS1 region, we re-ran the search on the flatfiles from the manually curated fungal ITS database UNITE (Abarenkov et al. 2010; release 2015-08-01), using ITSx 1.0.11 (Bengtsson-Palme et al. 2013) to identify the ITS1 region. A total of 63 matches and 11 fully identified species from three ectomycorrhizal basidiomycete lineages were recovered (Suppl. material 1). All matches were examined manually to verify that they displayed all the universally conserved motifs and nucleotides. The sequences were found to stem from more than 20 different published and unpublished studies. The fact that these sequences had been found multiple times independently is highly suggestive of technically sound, authentic sequence data (Nilsson et al. 2012), but to further confirm the authenticity of the sequences we re-sequenced one herbarium specimen from each of these lineages, either from the original material or from other conspecific specimens.
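The two command-line steps could be scripted along the following lines; the file names and the E-value threshold are placeholders, only documented ITSx/INFERNAL options are used, and the output file name in step 2 assumes ITSx's default behaviour of writing the extracted ITS1 subregions to a file named after the -o prefix.

```python
# Hypothetical scripting of the ITSx extraction and covariance-model search.
import subprocess

# 1) Extract the ITS1 subregion from the input sequences with ITSx.
subprocess.run(["ITSx", "-i", "unite_release.fasta", "-o", "unite_itsx",
                "--cpu", "4"], check=True)

# 2) Scan the extracted ITS1 sequences against the SRP RNA covariance model.
subprocess.run(["cmsearch", "--tblout", "srp_hits.tbl", "-E", "1e-5",
                "fungal_srp.cm", "unite_itsx.ITS1.fasta"], check=True)
```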
PCR and sequencing
To rule out systematic PCR artifacts as sources of false positives in the bioinformatics analyses, we retrieved the original, or conspecific, specimens underlying one representative from each of the genera (Table 1): the conspecific collection Astraeus sirindhorniae MA-Fungi 47735 (collected in the Philippines; herbarium MA), the authentic collection Lactarius luridus TU118993 (collected in Estonia; herbarium TU), and the authentic collection Russula olivacea TU101845 (collected in Estonia; herbarium TU). We specifically sought to use a different primer pair combination and different PCR conditions than the original sequence authors did, in a further attempt at generalizing our findings. For A. sirindhorniae, DNA extraction, PCR reactions, and sequencing were performed as described in Martín and Winka (2000), however using the DNeasy Plant Mini Kit (Qiagen) with overnight incubation for DNA extraction. The primers used for amplification were ITS5/ITS4 (White et al. 1990; Suppl. material 1). For the specimens of Lactarius and Russula, the DNA was extracted and amplified using the primers ITS1f and ITS4b, following Anslan and Tedersoo (2015). Sequences were edited and assembled using Sequencher 4.2 (Gene Codes, Ann Arbor). All sequences were examined for sequence quality following Nilsson et al. (2012). Chimera detection was undertaken using UCHIME (Edgar et al. 2011) and the UNITE chimera reference dataset (Nilsson et al. 2015; release 2015-03-11).
Results
The ITS sequences recovered from the sequencing round passed all quality-control measures we exercised. In addition, no sequence was found to have the multiple DNA ambiguity symbols suggestive of the presence of several information-wise distinct ITS copies in the individuals at hand (Hyde et al. 2013). The resulting sequences manifested the SRP RNA sequence in the ITS1 region of all three re-sequenced lineages. The two authentic specimens of Russula and Lactarius produced ITS sequences identical to those already extant. The sequence from the conspecific Astraeus sirindhorniae specimen differed by three bases from the extant sequence, which is well within the expected intraspecific variation when conspecific isolates are compared across geographical distances (Thailand and the Philippines in this case). The sequences were deposited in the INSDC as accessions KU356730-KU356732.
The SRP RNA-containing sequences of Russula and Lactarius were found to have an average length of some 890 bases; the corresponding average length for the SRP RNA-containing Astraeus sequences was 840 bases. When using BLAST to find the most similar sequences of Russula, Lactarius, and Astraeus that did not contain the SRP RNA, we found that their ITS regions averaged 616 bases (Russula), 644 bases (Lactarius), and 620 bases (Astraeus). This corresponds well to the length of the SRP RNA (~265 bases) for all of Astraeus, Russula, and Lactarius, allowing for a few bases of divergence considering the cross-species comparison. The distances between the SRP RNA and the surrounding 18S and 5.8S genes were almost the same within each lineage but differed somewhat among the three lineages: 80 and 174 bases (Lactarius), 150 and 80 bases (Russula), and 55 and 132 bases (Astraeus).
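A quick check of the length arithmetic implied above, using only the averages reported in this paragraph:

```python
# Differences between SRP-containing and SRP-free average ITS lengths,
# to be compared with the ~265-base SRP RNA (numbers taken from the text).
avg_with = {"Russula": 890, "Lactarius": 890, "Astraeus": 840}
avg_without = {"Russula": 616, "Lactarius": 644, "Astraeus": 620}
for genus in avg_with:
    print(genus, avg_with[genus] - avg_without[genus])
# -> Russula 274, Lactarius 246, Astraeus 220
```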
Discussion
The finding of ncRNAs located in tandem is not novel. Apart from the highly conserved nuclear rDNA cluster, some ncRNAs have been found to cluster in several protist species, e.g., the SRP RNA together with U6 snRNA, 5S rRNA, SL RNA, and tRNAs in dinoflagellates (Zhang et al. 2013). Regarding the transcription of the SRP RNA in fungi, the transcriptional promoters of the SRP RNA in Saccharomyces cerevisiae (the TFIIIC-binding A- and B-box) are internal, and the SRP RNAs have a polypyrimidine termination sequence similar to other RNA polymerase III genes. Although the yeast SRP RNA also has an upstream TATA box - a motif we could not clearly identify in our alignments - inactivation of this region does not result in a significant effect on transcription (Dieci et al. 2002). Therefore, it is possible that our identified SRP RNAs could be transcribed independently of the transcription of the rDNA cluster. If so, the SRP RNAs found in the ITS1 region are most probably fully functional, but there may be another copy in the genome that constitutes the major transcript. It seems less likely that the SRP RNA is a product of rDNA processing. Although the A3 cleavage site closest to the 5.8S should be downstream of the SRP RNA 3' end, since the distance between the SRP RNA and the 5.8S is at least 80 nucleotides in the 63 sequences, the SRP RNA needs correct 5' and 3' ends to fold into the proper secondary structure. Therefore, processing of the tricistronic ribosomal transcript would most probably lead to a non-functional SRP RNA. The surprising lack of mutations compared with other identified basidiomycete SRP RNAs (Suppl. material 4) could be explained not only by the insertion being a recent event, but also by the need to preserve the secondary structure of the SRP RNA region and thus the remaining parts of the ITS1.
The three species from which the SRP RNA was recovered are all ectomycorrhizal basidiomycetes and come from two different orders and two different families. Two of these lineages are closely related (Russula and Lactarius, both in Russulaceae (Russulales)); the third one, Astraeus sirindhorniae (Boletales), comes from the same subphylum (Agaricomycotina) as the former two. Even so, the Russulales and the Boletales are separate orders, such that the presence of SRP RNAs in these fungi must be considered independent gains. In the case of Russula and Lactarius, two very speciose genera, the vast majority of the known species do not have the SRP RNA in their ITS1. Similarly, none of the other species of Astraeus treated by Phosri et al. (2014) were found to have the SRP RNA. It appears far more realistic to view these SRP RNAs as independent insertion events than as a plesiomorphic ITS state in which the ancestor contained the SRP RNA but all species except the few considered here lost it (Miller et al. 2006; Suppl. materials 1-4). Indeed, we found no evidence for the SRP RNA in the ITS region of any other fungus.
Table 1.
Data on the underlying specimens and PCR primers. The already sequenced specimens of Russula and Lactarius were re-sequenced with a different primer pair compared to the extant sequences. Our Philippines specimen of Astraeus sirindhorniae had never been sequenced before, but we used a different primer pair compared to the Astraeus sirindhorniae sequence generated by Phosri et al. (2014) from a Thailand collection. Primer sequences are available in Suppl. material 1B.
The three previously identified SRP RNAs in the Russulales are not located in or close to the rDNA cluster (Dumesic et al. 2015), and we argue that the SRP RNA must be considered an independently inserted element in these ITS1 sequences. Although this does not cause any problems in terms of molecular identification of these species, it does present a potential difficulty for the uncritical use of ITS sequences in phylogenetic inference in these fungal lineages. Under the assumption that the SRP RNA is found in the exact same position in the ITS1 region among species, the sequences can still be aligned jointly as long as the SRP RNA part is kept as a separate element in the multiple sequence alignment. Any failure to realize that the SRP RNA should be treated as a separate element, to be scored as gaps in species that do not have the SRP RNA, is certain to give rise to very noisy multiple sequence alignments and skewed inferences of phylogeny. In other words, there is a risk that alignment tools will try to align other parts of the ITS1 region onto the SRP RNA part in large alignments, which would violate homology assumptions. We briefly examined whether several of the most commonly used alignment programs were able to recognize the unique nature of the SRP RNA and not try to stack other parts of the ITS1 onto the SRP RNA. The results were generally encouraging as long as the number of non-SRP-RNA-containing species was kept reasonably low, with only minor manual adjustments needed a posteriori. In the worse situation where the SRP RNA insertion is not found in the exact same position across species, it will not be possible to maintain position homology in the multiple sequence alignment. In that scenario, the sequences containing the SRP RNA must be excluded from the alignment process, or the SRP RNA element must be removed. Interestingly, Eberhardt (2002) reported an unexpected 250-base insertion in the ITS1 of Russula olivacea - one of the species examined in the present study - and chose to exclude it from her multiple sequence alignment due to alignment difficulties. In hindsight it seems probable that this 250-base region indeed represents the SRP RNA. As demonstrated by the Eberhardt (2002) example, there is widespread (although not necessarily universal) awareness of the importance of examining multiple sequence alignments manually before they are put to scientific use. However, the increasing use of fully automated solutions to data harvesting and phylogenetic inference may present a concern here (cf. Antonelli et al. 2014).
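When the coordinates of the insertion are known from the covariance model search, excising the SRP RNA element before alignment is straightforward; the following Biopython sketch uses invented coordinates and file names purely for illustration.

```python
# Remove an identified SRP RNA insert from ITS1 sequences before alignment.
from Bio import SeqIO

hits = {"KU356731": (81, 345)}  # seq id -> 1-based (start, end); invented values

records = []
for rec in SeqIO.parse("its1_sequences.fasta", "fasta"):
    if rec.id in hits:
        start, end = hits[rec.id]
        rec.seq = rec.seq[: start - 1] + rec.seq[end:]  # excise the insert
    records.append(rec)
SeqIO.write(records, "its1_srp_excised.fasta", "fasta")
```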
Our findings are not without potential shortcomings, though. The number of ITS copies per fungal cell can approach 200 or more (Bellemain et al. 2010; Black et al. 2013). Whereas the process of concerted evolution is thought to homogenize the array of ITS copies (Álvarez and Wendel 2003), it is not uncommon to find evidence of two or more distinct ITS copies during sequencing work (Hyde et al. 2013). A recent pyrosequencing-based study found evidence for multiple, information-wise distinct ITS copies in 3-5% of the 99 examined species of Ascomycota and Basidiomycota (Lindner et al. 2013). The extent to which the three species examined here contain multiple distinct ITS copies is unknown but may well be low, given that we obtained single, clean PCR products and sequence chromatograms for all three species. Even so, it is conceivable that we, much like the original sequence authors, in fact amplified a rare and perhaps non-functional ITS copy. Although this would not disqualify our finding of an SRP RNA in the fungal ITS region, it would raise questions as to whether the inclusion of the SRP RNA ruined the function of this particular ITS copy, essentially rendering the corresponding rRNA non-functional. In either case, we view this multiple-copy scenario as unlikely, given our consistent recovery, and the more than 20 independent recoveries, of the SRP RNA-containing ITS1 sequences.
Unfortunately, none of the species of the present study have a complete genome published, so a detailed analysis of the SRP RNA in the context of the genomes of these and closely related fungi will have to wait. That said, the trend that published fungal genomes tend to come without the ribosomal operon for reasons of convenience is most unfortunate (Schoch et al. 2014). We join the barcoding community in extending a plea that whenever a genome is sequenced and assembled, the ribosomal operon should be assembled into the genome as a part of that process. If this undertaking proves to be too complex, then at least the full ribosomal operon should be bundled with the genome, even if its assembly into the genome cannot be accomplished.
Conclusions
We found evidence of Signal Recognition Particle (SRP) RNAs in the ITS1 region of a total of 11 fully identified species in three ectomycorrhizal genera: Astraeus, Russula, and Lactarius. Other than the ribosomal genes, this is the first known case of non-coding RNAs in the fungal ITS region. Our finding is a small step towards explaining the many insertions found throughout fungal genomes, and it adds a new element to the field of fungal ITS evolution.
Figure 1. Schematic illustration of the fungal ITS region and neighboring rDNA genes. The subregions ITS1, 5.8S, and ITS2 of the ITS region are indicated along with the SRP RNA in the first part of the ITS1. The absolute positions of the subregions and the SRP RNA are provided in Suppl. material 2.
"year": 2016,
"sha1": "cd38da8994e9b414d845c41b5e78566219383a70",
"oa_license": "CCBY",
"oa_url": "https://mycokeys.pensoft.net/article/8579/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a1929e5632b82909efd34f3ba57674887050f899",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Expert System Based on Integrated Fuzzy AHP for Automatic Cutting Tool Selection
Cutting tool selection plays an important role in achieving reliable quality and high-productivity work, and in controlling the total cost of manufacturing. However, it is complicated for process planners to choose the optimal cutting tool when faced with the choice of multiple cutting tools, multiple conflicting criteria, and uncertain information. This paper presents an effective method for automatically selecting a cutting tool based on the machining feature characteristics. The optimal cutting tool type is first selected using a proposed multicriteria decision-making method with integrated fuzzy analytical hierarchy process (AHP). The inputs of this process are the feature dimensions, workpiece stability, feature quality, specific machining type, and tool access direction, which determine the cutting tool type priority after evaluating many criteria, such as the material removal capacity, tool cost, power requirement, and flexibility. Expert judgments on the criteria or attributes are collected to determine their weights. The cutting tool types are ranked in order of priority. Then, the rule-based method is applied to determine other specific characteristics of the cutting tool. Cutting tool data are collected from the world-leading cutting tool manufacturer Sandvik, among others. An expert system is established, and an example is given to describe the method and its effectiveness.
Introduction
Computer-aided process planning (CAPP) is an important link between computer-aided design (CAD) and computer-aided manufacturing (CAM). The amount of design information in CAD is insufficient for process planning and manufacturing, while CAM only helps engineers generate a toolpath according to the manual selection of cutting tools and cutting data for each machining feature. Selection of the optimal cutting tool that can meet multiple criteria, such as productivity, quality, and cost, is a significant step in process planning. There are many kinds of cutting tools, and their applications vary depending on their features, such as their geometric characteristics, stability, and the quality of the machining feature. The decision makers need to consider many factors and objectives, including conflicting and uncertain criteria, from a large amount of cutting tool data at one time. It is very difficult to choose the appropriate cutting tools for a machining feature. Moreover, manual selection is a time-consuming and difficult process requiring a lot of process planning experience. Thus, a computer-aided cutting tool selection system based on an available cutting tool library is essential in modern manufacturing, especially for CAPP. Oral et al. developed a rule-based automatic tool selection system for the turning process [1]. With this method, a selection is made from an appropriate cutting tool library according to feature information, but multicriteria evaluation is ignored.
Elmesbahi developed a method for automatically choosing a cutting tool for the milling process [2]. A collection of all suitable cutters for each machining feature was constructed. The searching process was performed over the whole Sandvik library to find all available cutting tools for each machining feature, without carrying out a specific evaluation. The cutter classification is suitable only for the Sandvik library and does not cover other cutting tool manufacturers. Vukelica constructed a rule-based system that allows the proper cutting tool to be selected according to selected criteria [3]. The huge number of rules was manually constructed, and the quality of the rule structures depends highly on the expertise of the developer. Zubair et al. proposed an algorithm to optimize machining parameters, such as cutting data, for cutting tool selection, while other cutting tool characteristics were selected by a rule-based method according to Sandvik's technical guide [4]. The main limitation of these studies is that the selection processes used were mainly based on the geometric characteristics of each feature along with complex rules, while ignoring many other important evaluation criteria, such as stability, feature quality, and material removal capacity, as well as tool cost. Carpenter developed a flexible tool selection decision support system for milling operations that considers some important criteria in cutting tool selection [5]. The knowledge-based method considers many criteria, such as the material removal rate, tool life, cost, and time, but it ignores workpiece stability and flexibility when selecting suitable cutting tools. The procedure can cover all steps in cutting tool selection, from the cutter, holder, tool, and insert to the cutting data. However, the cutting tool type selection is not described clearly. Determining the optimal cutting tool type in the first step plays a significant role in selecting the cutting tool in a specific cutting tool library, as it can significantly reduce the time required for the searching process in the huge cutting tool database. Amaitik et al. successfully developed a neural network model for cutting tool type selection [6,7]. In that work, only shape and quality characteristics are considered in the selection procedure, while the effects of other important factors in the selection process are ignored. The determination of the cutting tool type in that work is only a general selection process, without any evaluation; however, selection should be a multicriteria decision-making problem that includes the evaluation of many attributes, including uncertain criteria. Moreover, another limitation of the method is that the cutting tool type selection result only produces one candidate instead of a list of priorities. In practical manufacturing, the search process is executed in a specific cutting tool library to find the best available option. The lack of cutting tool type priority makes the search process difficult, because if one candidate is absent, there is no substitution in specific cases. The aims of this work are to develop a method for evaluating multicriteria in cutting tool type selection and to generate a list of possible options in priority order. Finding the optimal cutting tool type in the first step makes the whole selection process more efficient and less time consuming. The selection procedure should be suitable for a variety of cutting tool manufacturers.
For the past three decades, the analytical hierarchy process (AHP) has been known as a multicriteria decision-making approach for selecting an optimal alternative. Because some of the criteria can be conflicting and uncertain, it is not easy for the decision maker to choose the best, most suitable option. However, it is easier and more accurate for experts to evaluate the priority levels between two alternatives than to simultaneously evaluate all alternatives. Moreover, assessments are performed with respect to only one specific criterion, which is easier than evaluating multiple criteria at the same time. The AHP method can combine the separate evaluation of each criterion to obtain the final priority order in consideration of multiple criteria [8]. Based on the decision maker's pairwise comparisons of the criteria, the AHP method generates a weight for each criterion. The pairwise comparisons of the alternatives or options according to specific criteria are also determined by the experts. The total weight of each option is determined. The higher the weight is, the better the performance of the option. The option with the highest weight is the optimal one. The AHP is a very powerful tool because the final ranking can be easily obtained from the individual evaluations. Since its first introduction by Saaty in the 1970s, AHP has been widely used in many applications to help decision makers solve many complex problems with multiple conflicting and subjective criteria [8]. This method has been successfully applied for the remedial modeling of steel bridges through consideration of multiple criteria [9]. The AHP method has been used widely to solve many complex problems in mechanical engineering, such as cutting tool material selection and nontraditional machining process selection [10,11].
The main problem with the AHP method is that decision makers have difficulty choosing exact numerical values for pairwise comparison judgments. Thus, the fuzzy AHP method was developed, using the concepts of fuzzy set theory to change fixed-value judgments into interval judgments. The fuzzy numbers used in judgments allow decision makers to perform better evaluations. In recent years, the fuzzy AHP method has found a huge number of applications, primarily in the manufacturing, industrial, and government sectors [12]. Jaganathan et al. used fuzzy AHP as an integrated tool to select and evaluate new manufacturing technologies [13]. Dao et al. successfully applied fuzzy AHP to assess environmental conflicts [14]. The fuzzy AHP method was proposed as a way to select the best machine tool from the growing number of alternatives on the market [15,16]. With the same purpose of selecting the optimal machining center, an expert system based on the fuzzy AHP method was effectively developed by Tansel İç et al. for the evaluation of productivity and flexibility [17]. To select the optimal cutting tool type, the fuzzy AHP method is first applied in our system to solve this problem. However, conventional fuzzy AHP is only suitable for a specific condition. It is not appropriate for multiple-case selection processes that include a significant amount of input information. The important issue addressed in this work is the development of an integrated fuzzy AHP that combines multiple cases of initial conditions into one model. The fuzzy AHP model integrates the selection and evaluation criteria systems to solve many complex and diverse problems, such as cutting tool selection, reducing the number of models and calculations required. The details of the proposed methodology are described in Section 2. A case study for cutting tool type selection is presented in Section 3. An expert system based on fuzzy AHP for automatic cutting tool selection and related discussions are presented in Section 4.
Cutting Tool Selection Procedure
The proposed cutting tool selection process is carried out in five main steps based on Sandvik's selection guidance, as described in Figure 1.
Step 1: The first step is to determine the optimal cutting tool type using an integrated fuzzy AHP method. The selection is based on the machining features, including the related machining operations, basic dimensions, quality, and stability, in the evaluation of the material removal capacity, tool cost, power requirement, and flexibility. The alternatives are all possible cutting tool types for machining this kind of feature. Figure 2 shows the most popular cutting tool types used for many types of milling operations, according to Sandvik's catalogue [18]. This classification can be applied to other cutting tool manufacturers. Tools are not divided, as is usual, into general types, such as face mill, end mill, and side mill. Instead, other characteristics are considered, such as whether it is a solid mill or an insert mill, the entering angle of the cutter, and the holder type. Thus, the process is more detailed and specific, which makes the next selection step easier and more accurate. A hierarchy tree of multicriteria and alternatives is also constructed. The experts' experience is utilized to judge the importance of criteria and the priority levels of alternatives through pairwise comparisons. Then, the final priorities are calculated for each of the decision alternatives to determine the ranking of M cutting tool types, where M is the number of cutting tool types that have high priority in the ranking list. If the priority values of two consecutive alternatives are quite different, the lower-ranked alternative should not be added to the ranking list. The rules for determining the cutting tool type ranking list according to all given initial conditions are automatically generated and stored in the database as default data for the next step.
Step 2: In the second step, some main dimensions for each machining feature are preliminarily determined through a calculation model. The input variables correspond to the main dimensions of specific machining features, such as the minimum width, maximum width, length, and depth, so as to determine the cutting tool's diameter range by considering the toolpath number.

Step 3: Based on the cutting tool type, the specific cutter is determined from the cutter database.
Step 4: The search process is performed in a cutting tool database to determine the specific cutting tool following the preliminary cutting diameter range.
Figure 1. Flowchart of the proposed five-step cutting tool selection procedure for each machining feature, from Step 0 (pre-determining the operation type and machine tool) through the cutting tool, insert/grade, and cutting data databases to Step 5b (obtaining the cutting parameters).
Step 5: This step deals with the selection of other characteristics, such as the insert size, edge geometry type, and insert grade. The rule-based method is applied to choose these characteristics based on the recommendations collected from different cutting tool manufacturers' technical guides, machining handbooks, and research results [19,20]. Cutting data are also collected from the manufacturer's catalogue, together with correction factors. The loop continues with the next cutting tool type until the loop index i is larger than the number of cutting tool types M, at which point the loop is paused and a warning is given, either to stop the selection process or to suggest a proper cutting tool of the optimal type for purchase.
In this procedure, cutting tool type selection is the first step and is very important, as the results of this step affect the subsequent selection steps. For each type of machining feature, there are different types of tools that can be used; without the prioritization of these cutting tool types, it is difficult to bound the time and quality of the results in the next steps. In the next section, we primarily describe the proposed method for cutting tool type selection as a key factor of the whole procedure.
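To make the control flow of this procedure concrete, the sketch below outlines the five-step loop in Python under stated assumptions: the ranking function stands in for the fuzzy AHP of Step 1, the diameter-range heuristic of Step 2 is an assumed simplification, and all database lookups are hypothetical placeholders rather than the actual system's API.

```python
# Minimal sketch of the five-step selection loop (cf. Figure 1).
# All helper callables are hypothetical stand-ins for the fuzzy AHP
# ranking, the diameter-range calculation, and the database searches.

def diameter_range(min_width, max_width, depth, max_passes=3):
    # Step 2 (assumed heuristic): the cutter must fit the narrowest part
    # of the feature and should not require more than max_passes toolpaths.
    return max_width / max_passes, min_width

def select_cutting_tool(feature, rank_tool_types, find_cutter, find_tool,
                        pick_insert_and_grade):
    ranked_types = rank_tool_types(feature)          # Step 1: fuzzy AHP
    d_min, d_max = diameter_range(feature["min_width"],
                                  feature["max_width"],
                                  feature["depth"])  # Step 2
    for tool_type in ranked_types:                   # try the M types in order
        cutter = find_cutter(tool_type)              # Step 3
        tool = find_tool(cutter, d_min, d_max)       # Step 4
        if tool is None:
            continue                                 # next type in the ranking
        insert, grade = pick_insert_and_grade(tool, feature)  # Step 5
        return tool, insert, grade
    raise LookupError("No suitable tool in the library; consider purchasing "
                      "a tool of the top-ranked type.")
```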
Proposed Fuzzy AHP Method for the Multicase Selection Process
The main idea of the proposed fuzzy AHP is to combine multiple selection cases in only one model. The conventional fuzzy AHP method has only one output for each case, while the proposed fuzzy AHP has many inputs, and the output data also differ depending on the values of the input data. The basic process of the proposed fuzzy AHP calculation is the same as the conventional one, but the hierarchy structure is different. The detailed process is described in the following steps.
• Step 1: Constructing the hierarchical structure for the problem

The hierarchical structure, as shown in Figure 3, has several levels, including the goal, selection criteria, evaluation criteria, and alternatives. The evaluation criteria have the same meaning as the criteria in the conventional fuzzy AHP, where all the criteria and sub-criteria are fixed. However, in the integrated fuzzy AHP, each selection criterion has different choices, known as the sub-criteria or "sub-sub-criteria" of the main selection criteria. They are also the input information for the selection process and are the most important criteria. The conventional fuzzy AHP is suitable for making one decision but is not useful for an integrated problem where there are multiple different cases of input information, such as the cutting tool selection problem. The next three steps are similar to those used in the conventional fuzzy AHP [13] and are described in detail below.
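Written down as data, the integrated structure might look as follows; the encoding and level names are illustrative (they follow the shoulder-milling example developed later), not the system's actual schema.

```python
# Hypothetical encoding of the integrated hierarchy (cf. Figures 3 and 5):
# selection criteria carry case-dependent sub-criteria (the inputs), while
# the evaluation criteria and alternatives are fixed across all cases.

hierarchy = {
    "goal": "select cutting tool type for shoulder milling",
    "selection_criteria": {
        "shape_characteristics": {
            "aspect_ratio": ["low", "medium", "high"],
            "width": ["small", "medium_large", "large"],
            "side": ["top", "back"],
        },
        "workpiece_stability": ["low", "medium", "high"],
        "feature_quality": ["rough", "semi_finish", "finish"],
    },
    "evaluation_criteria": ["material_removal_capacity",
                            "power_requirement", "tool_cost", "flexibility"],
    "alternatives": ["SFM", "IEM", "LEM", "SEM", "SAFM"],
}

# A concrete case is then just one choice per selection input:
case = {"aspect_ratio": "low", "width": "small", "side": "top",
        "workpiece_stability": "low", "feature_quality": "rough"}
```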
• Step 2: Generating the comparison pairwise matrix based on experts' judgments.
The decision maker compares the criteria or alternatives using linguistic terms. The triangular fuzzy membership function in Figure 4 is defined by a lower limit $a_{ij}$, an upper limit $b_{ij}$, and a value $m_{ij}$, where $a_{ij} < m_{ij} < b_{ij}$, as shown in Equation (1):

$$\mu_{\tilde{s}_{ij}}(x) = \begin{cases} (x - a_{ij})/(m_{ij} - a_{ij}), & a_{ij} \le x \le m_{ij} \\ (b_{ij} - x)/(b_{ij} - m_{ij}), & m_{ij} \le x \le b_{ij} \\ 0, & \text{otherwise} \end{cases} \quad (1)$$

Table 1 gives the importance scales for both the AHP and the triangular fuzzy AHP. The δ and δ' values depend on the fuzzy scale; the most popular values of δ and δ' are 1 and 0, respectively. The comparison matrix $\tilde{S} = [\tilde{s}_{ij}]_{n \times n}$, with $\tilde{s}_{ij} = (a_{ij}, m_{ij}, b_{ij})$, $\tilde{s}_{ji} = (1/b_{ij}, 1/m_{ij}, 1/a_{ij})$ and $\tilde{s}_{ii} = (1, 1, 1)$, is presented as in Equation (2).
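A minimal sketch of these fuzzy judgments follows, assuming the common δ = 1 fuzzification of crisp Saaty scores and the standard reciprocal rule for the lower triangle; the scale handling is illustrative rather than the paper's exact Table 1.

```python
# Triangular fuzzy number (a, m, b) with a < m < b, and the reciprocal
# comparison matrix of Equation (2). Scale values are illustrative.

def tfn(m, delta=1):
    """Fuzzify an integer Saaty judgment m into (m - delta, m, m + delta)."""
    return (max(m - delta, 1), m, m + delta)

def reciprocal(t):
    a, m, b = t
    return (1 / b, 1 / m, 1 / a)

def fuzzy_matrix(upper):
    """Build a full fuzzy comparison matrix from upper-triangle judgments.
    upper[(i, j)] is the expert's crisp judgment that item i is preferred
    over item j (for i < j); the diagonal is (1, 1, 1)."""
    n = max(max(i, j) for i, j in upper) + 1
    S = [[(1, 1, 1)] * n for _ in range(n)]
    for (i, j), m in upper.items():
        S[i][j] = tfn(m)
        S[j][i] = reciprocal(S[i][j])
    return S

# Toy example with three criteria:
S = fuzzy_matrix({(0, 1): 3, (0, 2): 2, (1, 2): 1})
```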
• Step 3: Checking the consistency of each comparison matrix.
The consistency ratio (CR) is calculated and used to check the consistency of the average comparison matrix S, based on the average fuzzy value, as shown in Equation (4):

$$CR = CI/RI \quad (4)$$

If the CR is smaller than 0.1, the matrix is accepted as meeting the consistency requirement.
The consistency index (CI) is calculated using the following equation:

$$CI = (\lambda_{max} - n)/(n - 1) \quad (5)$$

where $\lambda_{max}$ is the principal eigenvalue and n is the size of the single pairwise average comparison matrix. RI is the random index determined in the conventional AHP model. If CR > 0.1, it is necessary to adjust the values in the comparison matrix.
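The consistency check can be sketched as below. The random index (RI) values are Saaty's standard table, the defuzzification by averaging (a + m + b)/3 is one common convention, and the λmax estimate uses the usual column-normalization approximation; none of these details are specified exactly in the text.

```python
# Consistency check (Equations (4)-(5)) on the crisp matrix obtained by
# averaging each triangular fuzzy entry: s_ij = (a + m + b) / 3.
# Meaningful for n >= 3 (RI is zero for smaller matrices).

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
      8: 1.41, 9: 1.45, 10: 1.49}  # Saaty's random index values

def defuzzify(S):
    return [[(a + m + b) / 3 for (a, m, b) in row] for row in S]

def lambda_max(A):
    """Approximate principal eigenvalue: average of (A w)_i / w_i, where
    w is the normalized-column-mean priority vector."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    B = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    w = [sum(B[i]) / n for i in range(n)]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return sum(Aw[i] / w[i] for i in range(n)) / n

def consistency_ratio(S):
    A = defuzzify(S)
    n = len(A)
    ci = (lambda_max(A) - n) / (n - 1)   # Equation (5)
    return ci / RI[n]                    # Equation (4)

# The matrix is accepted when consistency_ratio(S) < 0.1.
```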
• Step 4: Determining the component and total weights

The geometric means and fuzzy weights can then be calculated. The geometric mean of each row, $r_i$, is determined as shown in Equation (6), and the fuzzy weight $fw_i$ is given in Equation (7):

$$r_i = (\tilde{s}_{i1} \otimes \tilde{s}_{i2} \otimes \cdots \otimes \tilde{s}_{in})^{1/n} \quad (6)$$

$$fw_i = r_i \otimes (r_1 \oplus r_2 \oplus \cdots \oplus r_n)^{-1} \quad (7)$$

Based on the fuzzy weight $fw_i$, the non-fuzzy weight of each criterion, $M_i$, is calculated by averaging the components of the fuzzy weight $fw_i$ for each criterion. The normalized weight of each criterion, $N_i$, is then determined by normalizing the values $M_i$.
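A sketch of this geometric-mean weighting (Buckley's method, which matches Equations (6) and (7)) is given below; the component-wise operations on triangular numbers are the usual approximation.

```python
# Fuzzy weights by the geometric-mean method: r_i = (prod_j s_ij)^(1/n),
# fw_i = r_i (x) (r_1 (+) ... (+) r_n)^(-1), then defuzzify and normalize.

from math import prod

def geometric_mean(row):
    n = len(row)
    return tuple(prod(t[k] for t in row) ** (1 / n) for k in range(3))

def fuzzy_weights(S):
    r = [geometric_mean(row) for row in S]
    total = tuple(sum(ri[k] for ri in r) for k in range(3))
    # The inverse of a triangular number reverses the (a, m, b) order.
    inv = (1 / total[2], 1 / total[1], 1 / total[0])
    fw = [tuple(ri[k] * inv[k] for k in range(3)) for ri in r]
    M = [sum(f) / 3 for f in fw]          # non-fuzzy weight M_i
    s = sum(M)
    return [m / s for m in M]             # normalized weight N_i
```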
Tool Type Selection Model
There are a lot of milling tool types, and the characteristics of machining operations also differ. In the AHP or fuzzy AHP methods, the number of alternatives is normally smaller than or equal to ten [8]; the larger the number of alternatives, the more complex the calculations and judgments become. Thus, it is convenient to construct a separate fuzzy AHP model for each specific milling operation. The number of alternatives is reduced, and the criteria are more suitable for each operation. The selection model can be presented as shown in Equation (9).
In this model, the cutting tool type selection is a function ($F_{ToolType}$) of the initial selections, namely the shape characteristics (SC), workpiece stability (WS), feature quality (FQ) and, where applicable, the specific type (TE), used to evaluate four criteria, including the material removal capacity (MRC), power requirement (PR), tool cost (TC), and flexibility (FE):

$$ToolType = F_{ToolType}(SC, WS, FQ, TE;\ MRC, PR, TC, FE) \quad (9)$$

Table 2 shows the selection and evaluation criteria and alternatives for the most common milling operations. There are only three selection criteria for the first four milling operations, but one more criterion is added for pocket and profile machining. Figure 5 shows the developed hierarchical structure for the shoulder milling feature as a case study. The goal is to select the most suitable cutting tool type from a collection of five alternatives, which are the available cutting tool types for this kind of machining feature. The first three selection criteria can be chosen by users. Regarding the shape characteristics, there are three sub-criteria, the aspect ratio (AR), width (WI), and side (SI), as shown in Equation (10), where $SC_{Shoulder}$ represents the relationship among the shape characteristics for shoulder milling:

$$SC_{Shoulder} = f(AR, WI, SI) \quad (10)$$

The rule-based method is used to classify each sub-criterion into sub-sub-criteria. AR is the ratio between the depth (DE) and the width, and has three levels, named low (LO), medium (ME), and high (HI), corresponding to DE/WI ratios in the ranges of (0-1) and (1-2) and of more than 2, respectively. There are also three levels of the width criterion, namely small (SM), medium-large (ML), and large (LA), which correspond to WI in the ranges of (0-15 mm) and (15-50 mm) and of more than 50 mm, respectively. Depending on the tool's access direction, the side has two options: top (TO) or back (BA). Similar rules are applied when classifying the feature quality into rough (RO), semi-finish (SF), or finish (FI).
To define the feature stability, three characteristics are used, namely the wall thickness, the ratio between the depth and the wall thickness, and the workpiece weight. For example, if the wall thickness is less than 10 mm and the depth-to-wall-thickness ratio is between 5 and 10, the stability is classified as low for any workpiece weight. The material removal capacity is defined as the ability to remove material from the workpiece. This depends on the cutting speed, cutting depth, and cutting width, and is thus quite different among the cutting tool types. The power requirement is related to the amount of machine power necessary for machining. The tool cost is the general purchasing cost of the cutting tool. Flexibility means the possibility of reusing the cutting tool for further applications. For other machining operations, the same criteria structure is developed, while the first criteria have different sub-criteria and sub-sub-criteria, depending on the characteristics of each machining operation. This structure can be defined by experts to achieve suitable objectives regarding criteria, sub-criteria, and so on, using the proposed expert system.
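These threshold rules translate directly into code. The sketch below encodes only the rules quoted above for shoulder milling; the full stability rule table is not given in the text, so the function covers just the stated example.

```python
# Rule-based classification of the selection inputs for shoulder milling,
# using the thresholds quoted in the text. The stability rule reproduces
# only the stated example (wall < 10 mm and depth/wall ratio 5-10 -> low).

def aspect_ratio_level(depth, width):
    ar = depth / width
    if ar <= 1:
        return "low"
    return "medium" if ar <= 2 else "high"

def width_level(width):
    if width <= 15:
        return "small"
    return "medium_large" if width <= 50 else "large"

def stability_level(wall_thickness, depth):
    ratio = depth / wall_thickness
    if wall_thickness < 10 and 5 <= ratio <= 10:
        return "low"
    return "unspecified"  # remaining rules are not quoted in the text

# Case-study feature: 12.3 mm wide, 10.2 mm deep, 2 mm wall thickness.
print(aspect_ratio_level(10.2, 12.3),   # -> low
      width_level(12.3),                # -> small
      stability_level(2, 10.2))         # -> low
```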
Rules for the Insert and Grade Selection
According to Sandvik's machining technical guide, there are three types of cutting edge geometry for inserts: light, medium, and heavy [20]. The classification is based on two important angles on the insert: the rake angle (γ) and the cutting edge angle (β). The working condition is one of the key factors in choosing the insert edge geometry. For example, the light geometry has a large γ and a small β, which means a more positive but weaker edge; thus, the light geometry is suitable for unstable conditions. Cutting tool manufacturers always use their own code names for the insert grade. However, grades are also classified according to the International Organization for Standardization (ISO) grade range; for example, from P10 to P50 for steel workpiece materials, and from K10 to K40 for cast iron materials. For rough operations, the grade type P50 is recommended, and for finishing operations, the grade type P10 is recommended [19]. Based on these characteristics, basic rules are established to determine the suitable insert and grade in a specific cutting tool database.
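A minimal sketch of such rules follows, assuming the stated mappings (light geometry for unstable setups; P50 toward roughing and P10 toward finishing for steel). The intermediate grades P30 and K25 are assumed midpoints for illustration, not catalogue recommendations.

```python
# Rule-based insert geometry and ISO grade selection, following the
# quoted recommendations. This is a simplification; real catalogues
# refine these mappings considerably.

def edge_geometry(stability):
    # Light geometry (large rake, small edge angle) suits unstable setups.
    return {"low": "light", "medium": "medium", "high": "heavy"}[stability]

def iso_grade(operation, material="steel"):
    if material == "steel":            # P10 (finish) ... P50 (rough)
        return {"rough": "P50", "semi_finish": "P30",   # P30 assumed midpoint
                "finish": "P10"}[operation]
    if material == "cast_iron":        # K10 ... K40 per the text
        return {"rough": "K40", "semi_finish": "K25",   # K25 assumed midpoint
                "finish": "K10"}[operation]
    raise ValueError("material group not covered in this sketch")

print(edge_geometry("low"), iso_grade("rough"))   # -> light P50
```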
Case Study on Cutting Tool Type Selection
For a clear explanation, one case study for shoulder milling is given. Suppose that rough milling of a step 120 mm long, 12.3 mm wide, and 10.2 mm deep under the condition of a thin wall (2 mm wall thickness) is required. The initial selection is a low aspect ratio, small width, top side, light stability, and rough quality. The hierarchy tree is shown in Figure 5. The pairwise comparison matrix for the main criteria is shown in Table 3. It is obvious that the three selection criteria, namely the shape characteristics, workpiece stability, and feature quality, are key factors in optimal cutting tool type selection; thus, they have higher importance than the others. The workpiece stability and feature quality have the same, and the highest, weights in comparison with the other criteria. The consistency check is done using Equations (4) and (5); the CR value is equal to 0.07, which means the comparison matrix can be considered consistent. Equations (6) and (7) are applied to determine the geometric means, fuzzy weights, and the normalized weights or priorities for the main criteria, as shown in Table 4. According to the table, the feature quality has the highest weight among the three selection criteria, and the power requirement has the lowest weight among all evaluation criteria. The process used to obtain the results is shown in Tables 3 and 4. All weights of the selection criteria, evaluation criteria, and alternatives of the developed hierarchy are shown in Table 5. WS1, WS2, and WS3 represent the weights for the main selection criteria, sub-criteria, and sub-sub-criteria, respectively. For example, in the first row, the last five columns represent the weights of the five alternatives (SFM, IEM, LEM, SEM, and SAFM) with respect to the shape characteristics (presented in WS1) for the aspect ratio (presented in WS2) at a low level (presented in WS3). The underlined numbers indicate the initial selection, and the bold numbers represent the highest weight for a specific criterion in the case study. Although the solid end mill has the highest weight for five criteria and sub-sub-criteria, the indexable end mill is the best choice for three sub-sub-criteria. The indexable end mill, which has the highest total weight, is the optimal alternative, while the second tool type choice is the solid flat end mill, as shown in Table 6.
Expert System and Discussion
The expert system based on the proposed fuzzy AHP method for cutting tool selection was implemented using Microsoft Visual Studio C# 2012, and an initial selection interface describing the cutting tool type selection process is shown in Figure 6. There are two methods that are used to determine the initial selection. The first is to enter the real values for all necessary information, such as the average width, depth, wall thickness, weight, and accuracy level for shoulder milling.
The expert system directly converts these values into selection types by level (e.g., low, medium, and high) based on rules. In the second method, the user directly selects the selection type for each criterion. The next step in the process is the collection of alternatives, as shown in Figure A1. After determining the alternatives, the criteria, sub-criteria, and so on are defined. The hierarchy tree view of this selection problem is presented in Figure 7, with the initial selection underlined, showing the comparison matrix definition for each criterion. The experts can evaluate the importance of the criteria and alternatives, based on their own experience with respect to specific ones, through this interface. The consistency of the comparison matrix and the priority levels of the criteria or alternatives are automatically calculated. Based on the ranked scores of the alternative cutting tool types, the optimal cutting tool type is selected. Figure 8 shows the final evaluation result for cutting tool type selection in the case study. The results for all cases are automatically saved in the database for further use.
After determining the appropriate cutting tool type, it is searched for in the available cutting tool library to select the cutter concept. Then, other characteristics, such as the cutting tool, insert, grade, and cutting parameters, can be determined using the rule-based method. The database for the cutting tool selection system, based on Sandvik's classification, is created and managed using Microsoft SQL Server Express 2012. The relationships among the tables in the cutting tool database are shown in Figure A2. This cutting tool database is suitable for many cutting tool libraries, including many cutting tool manufacturers and tool types. When a cutting tool library is selected, all searching processes are executed in that specific tool library. The cutting tool library used for the case study was Sandvik, which contains over 15 milling tool types, 800 cutting tools, and 350 inserts in the database. The final cutting tool selection result is shown in Figure A3. The following were selected for shoulder milling in the case study in less than 1 s: cutting tool R390-016A16-11L, and insert R390-11T-304E-PL with edge geometry type L (light) and grade GC1025.
This searching result is suitable for the specific conditions of the case study and is compatible with expert recommendations. This concept was also applied in our CAPP system and successfully tested by the Mekamic joint stock company in Vietnam, which has diverse cutting tool manufacturers, including YG-1, CMTéc, and Sumitomo. Thus, the expert system can successfully convert expert knowledge into a computational method. The main advantage of the expert system is the quick searching ability in a huge and diverse cutting tool library, which is very difficult to achieve with a process planner.
Compared with the neural-network-based method for cutting tool type selection mentioned in the literature [6,7], the integrated fuzzy AHP method has great merit in terms of its simplicity and applicability. The proposed method can be used to obtain a list of candidates in priority order based on pairwise comparison matrices, instead of just providing a single option from a huge amount of neural network training data. Moreover, all alternatives in the proposed hierarchy structure and database are constructed in association with the cutting tool classifications of manufacturers in the market; thus, it is suitable for application in practical manufacturing. In the introduced procedure, the specific cutting tool types (e.g., the solid end mill, long edge end mill, and indexable end mill), as substitutes for general types (e.g., end mill and face mill), are determined in the first step, which can restrict the searching domain in the database, as well as the processing time. In addition, the proposed expert system can automatically generate rules for cutting tool type selection depending on the given conditions, in contrast to the manually generated rules used in previous research [1][2][3][4]; it can significantly reduce the errors that arise while manually constructing a variety of complex rules. Finally, some new attributes, including stability and flexibility, are also considered.
Considering the structure of the fuzzy AHP and the number of calculations, the proposed fuzzy AHP exhibits outstanding benefits. If the conventional fuzzy AHP were used, where each specific case has one hierarchy structure [15][16][17], 162 hierarchy structures would need to be established for all cases of initial conditions in shoulder milling (i.e., the 3 × 3 × 2 × 3 × 3 combinations of the aspect-ratio, width, side, stability, and quality levels). The number of calculations increases because of the many duplicate calculations from duplicate pairwise comparison matrices, whereas with the integrated structure, all cases are presented in a single model and each matrix appears and is calculated once, so the number of calculations is significantly reduced. This proposed approach can expand the applicability of the fuzzy AHP to solve many other complex problems that have multiple given conditions. The developed expert system can determine the best available cutting tool in a huge and diverse cutting tool library, according to a variety of given conditions, with only a few mouse clicks, without any expert manufacturing knowledge regarding the applicability of the cutting tools. Therefore, errors do not appear during the selection process. Furthermore, the expert system can help process planners fabricate their own selection models suitable for the specific conditions in their companies. It can also be used as a selection suggestion system for customers of cutting tool manufacturers.
Conclusions
In this paper, an expert system based on integrated fuzzy AHP was proposed to select cutting tools with respect to multiple conflicting criteria. A case study was done on a huge database from Sandvik to illustrate the applicability of the approach, and this yielded acceptable results. The main contributions of this study include the following: (1) the hierarchy structure of the fuzzy AHP was modified by adding selection criteria beside the evaluation criteria, which can expand the applicability of the fuzzy AHP to complex selection problems with many initial conditions; (2) the alternatives of the fuzzy AHP, established from the popular cutting tool type classifications in the market, make the system more realistic for application; (3) the specific cutting tool type is evaluated and determined in the first step, which can significantly reduce the searching range in a huge and diverse database. Thus, this system also reduces the time and effort requirements of the process planner. It can be integrated with other modules for application in CAPP. In future research, we will focus on the following points: (1) optimizing cutting data selection based on cutting tool manufacturers' recommendations, and (2) combining our method with metaheuristic optimization and other artificial intelligence methods to enhance the efficiency and adaptability of other selection processes.
"year": 2019,
"sha1": "8589816d9e706982cdf5185bc42bd508728b6d2d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/9/20/4308/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3338dd0101a2c95cb2ebf4650eac02a9eba1bae9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Targets downstream of Cdk8 in Dictyostelium development
Background Cdk8 is a component of the mediator complex, which facilitates transcription by RNA polymerase II and has been shown to play an important role in development of Dictyostelium discoideum. This eukaryote feeds as single cells, but starvation triggers the formation of a multicellular organism in response to extracellular pulses of cAMP and the eventual generation of spores. Strains in which the gene encoding Cdk8 has been disrupted fail to form multicellular aggregates unless supplied with exogenous pulses of cAMP and, later in development, cdk8- cells show a defect in spore production. Results Microarray analysis revealed that the cdk8- strain previously described (cdk8-HL) contained genome duplications. Regeneration of the strain in a background lacking detectable gene duplication generated strains (cdk8-2) with identical defects in growth and early development, but a milder defect in spore generation, suggesting that the severity of this defect depends on the genetic background. The failure of cdk8- cells to aggregate unless rescued by exogenous pulses of cAMP is consistent with a failure to express the catalytic subunit of protein kinase A. However, overexpression of the gene encoding this protein was not sufficient to rescue the defect, suggesting that this is not the only important target for Cdk8 at this stage of development. Proteomic analysis revealed two potential targets for Cdk8 regulation, one regulated post-transcriptionally (4-hydroxyphenylpyruvate dioxygenase (HPD)) and one transcriptionally (short chain dehydrogenase/reductase (SDR1)). Conclusions This analysis has confirmed the importance of Cdk8 at multiple stages of Dictyostelium development, although the severity of the defect in spore production depends on the genetic background. Potential targets of Cdk8-mediated gene regulation have been identified in Dictyostelium, which will allow the mechanism of Cdk8 action and its role in development to be determined.
Background
The serine/threonine kinase Cdk8 is a regulator of transcription through its association with the mediator complex [1]. This complex was originally identified as an activity required to allow RNA polymerase II to perform regulated transcription in vitro. Purification has revealed it to be a large multi-protein complex with varying composition. The presence of a submodule containing Cdk8 and its protein partner cyclin C has been proposed to be a mechanism to regulate mediator activity, responsible for both activation and inhibition of transcription through phosphorylation of either the C-terminal domain of RNA polymerase II (CTD) or gene-specific transcription factors. The yeast orthologues of Cdk8 (Srb10) and cyclin C (Srb11) were identified genetically as suppressors of defects caused by truncation of the CTD, consistent with a role in regulation of transcription. Microarray analysis suggests that yeast lacking Srb10 show altered expression of around 3% of genes [2]. Orthologues of Cdk8 are apparent in all eukaryotes, but the mechanisms of regulation are not well defined. In S. cerevisiae, proteolysis of the cyclin C orthologue has been proposed in response to stresses such as oxidative stress [3], but this has not been reported in other systems.
In mammalian cells, Cdk8 activity has been implicated in regulation of growth through its overexpression in tumour cells [4], as well as in development. Cdk8 plays an important role in the Notch signalling pathway, as recruitment of Cdk8 to the promoter of a developmentally regulated gene (HES1) causes hyperphosphorylation of the intracellular domain of the Notch transcription factor, promoting its degradation and leading to subsequent downregulation of transcription of HES1 [5]. Cdk8-dependent regulation of transcription factor stability has also been reported in S. cerevisiae, where phosphorylation of Ste12 and GCN4 by Srb10 promotes their degradation [6,7]. Consistent with a role in development, Cdk8 shows tissue-specific expression during zebrafish development, and disruption of genes encoding components of the mediator sub-complex containing Cdk8 causes developmental defects in C. elegans, Drosophila and Arabidopsis [8][9][10].
Previous reports have implicated Cdk8 activity in development of Dictyostelium amoebae [11,12], which feed and proliferate as single cells but, upon starvation, undergo a developmental life cycle [13]. Starving cells start to secrete pulses of cAMP that act as a chemoattractant for the surrounding cells, which aggregate to form a multicellular organism. This aggregate undergoes a series of morphogenetic changes leading to the generation of a fruiting body, in which around 80% of the cells differentiate into spore cells while the remaining cells differentiate into stalk cells to support the spore head above the substratum. Dictyostelium strains in which the gene encoding Cdk8 has been disrupted grow poorly in shaking suspension and fail to aggregate unless supplied with exogenous pulses of cAMP [11,12]. The cells also fail to express a number of genes associated with early development, including the gene encoding the adenylyl cyclase (aca) responsible for cAMP generation during aggregation, which could explain the failure to aggregate. We further reported that, for the cdk8-null strain generated in our lab (henceforth called cdk8-HL), pulsing with exogenous cAMP could rescue the early aggregation defect. If these cells were then plated on a surface, they went on to form aberrant structures with a defect in the ability to generate mature spore cells. This defect could be rescued by expression of Cdk8 but not by a kinase-dead version, implicating the kinase activity in this failure of development [11]. However, although early aggregation defects were identical, cdk8- cells generated in a different genetic background did give rise to viable spores [12].
Here we report that microarray analysis revealed genome duplications in the cdk8-HL cells, which may explain the phenotypic differences in different genetic backgrounds. Subsequent regeneration of the strain in a parent lacking detectable genomic duplications confirms the defects in aggregation and an important role for Cdk8 in spore cell differentiation, although one milder than initially reported. Comparison of the protein expression profiles of the parental cells and the new strain of cdk8- cells (cdk8-2) revealed potential targets for direct regulation by Cdk8, which will facilitate analysis of the targets of Cdk8 in Dictyostelium that lead to defects in growth and development.
Results
Microarray analysis of cdk8-HL cells

In order to investigate differences in the reported phenotypes of strains containing a disruption of the cdk8 gene in Dictyostelium, and in light of the recent realisation that many laboratory strains of Dictyostelium contain genomic duplications [14], we investigated whether the cdk8- cells previously isolated in our laboratory, cdk8-HL, and their parent, Ax2P, contained genomic duplications. This analysis was carried out by microarray to compare the relative copy number of genes in genomic DNA isolated from these strains and from the strain Ax2K, which is free of detectable duplications [14]. Comparison of both cdk8-HL and its parental Ax2 line (Ax2P) with Ax2K [14] revealed a large duplication on chromosome 2 in both cdk8-HL and Ax2P (Figure 1 and data not shown). This analysis also revealed a further duplication on chromosome 5 within cdk8-HL cells, which was not present in the parent Ax2P. A duplication on chromosome 2 in the region duplicated in Ax2P and its derivative has previously been reported for another strain of Dictyostelium, Ax4 [15]. Comparison of the parental Ax2P gDNA with gDNA extracted from strain Ax4 showed that the novel chromosome 2 feature in the former strain spans the previously described 750 kb duplication in Ax4, and extends approximately a further 400 kb on one side of it. In view of the chromosomal duplications identified in the original cdk8-HL strain, independent null strains were generated in an Ax2 background, using the Ax2K cells with no detectable duplications [14]. Three independent strains were identified in which the cdk8 gene had been disrupted, and all showed identical phenotypes to each other. In order to determine whether the phenotypes described for the original cdk8-HL strain were dependent on the genome duplications, we carried out a phenotypic analysis of the newly derived cdk8-2 cells. They showed identical defects in growth in shaking suspension to the original strain (data not shown). They also failed to form aggregates on starvation, but this defect could be rescued by exogenous pulses of cAMP, as reported for the original strain, and they showed similar defects in transcription of early developmental genes, such as a failure to switch off expression of cprD and a failure to induce expression of early developmental genes such as pkaC, aca and carA on starvation (data not shown) [11,12].
These results suggested that the early developmental phenotype was not influenced by the genome alterations in the original cdk8-HL strain.
Late developmental phenotype of the cdk8-2 strain

In order to examine the newly generated cdk8-2 strain for the presence or absence of the late developmental phenotype, cells were suspended in KK2 buffer and pulsed with 50 nM cAMP every 5 mins for 6 hrs, before being harvested, washed and plated on KK2 agar. Unlike the cdk8-HL strain, the cdk8-2 cells formed phenotypically normal fruiting bodies, although culmination occurred 3-4 hrs later than in the Ax2 bsR control strain (Figure 2A). Ax2 bsR was created by random insertion of the vector designed for disruption of the cdk8 gene into the genome of Ax2K.

Figure 1. Microarray comparisons of genomic DNA from Ax2, Ax4 and cdk8-HL strains. The log(2) ratio of the abundance of each gene in each comparison is plotted against the position of that gene on the chromosome. Ax2P is the parental strain used to generate cdk8-HL, and Ax2K is the Ax2 strain with minimal genomic duplications [14]. Shown are comparisons of genes on chromosome 2 between Ax2K and cdk8-HL, and between Ax2P (the parent of cdk8-HL) and Ax4, and a comparison of genes on chromosome 5 for Ax2K and cdk8-HL. The approximate positions of the putative cdk8-HL duplications relative to Ax2K are marked with grey horizontal bars. Log ratios of approximately 1 or -1 indicate 2-fold changes in copy number (duplications); log ratios around zero indicate equal copy number. The known Ax4 duplication is marked with a red horizontal bar; log ratios in this region average approximately zero in this comparison, showing that both strains have the same copy number here. Ax2P has a longer duplication than that apparent in Ax4, though in the same region, as represented by the grey bar in the middle panel. The cdk8 gene is found on chromosome 1, and so its loss is not apparent in the comparisons shown here.
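As the legend notes, a log2 ratio near ±1 corresponds to a roughly two-fold copy-number change. The following is a minimal sketch of how such calls could be made from per-gene ratios, with an arbitrary illustrative threshold:

```python
# Flag putative duplications from per-gene log2(test/reference) ratios.
# The 0.5 threshold is arbitrary for illustration; a log2 ratio near +1
# indicates roughly two-fold higher copy number in the test strain.

from math import log2

def copy_number_call(test_signal, ref_signal, threshold=0.5):
    ratio = log2(test_signal / ref_signal)
    if ratio > threshold:
        return "duplicated"
    if ratio < -threshold:
        return "reduced"
    return "normal"

print(copy_number_call(2.1, 1.0))  # -> duplicated (log2 ratio ~ 1.07)
```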
Figure 2. Late developmental phenotype of cdk8-2 cells. (A) Cells were developed in KK2 buffer at 1 x 10^7 cells/ml with or without cAMP pulsing (50 nM cAMP added to the suspension every 5 minutes). Each strain was shaken for 6 hrs at 150 rpm at 22°C before being spread onto filters at a density of 3 x 10^6 cells/cm^2. Photographs were taken after 28 hrs. The scale bar in the right-hand panel represents 200 μm. (B) Expression of spiA in cdk8-2 cells. Fruiting bodies formed after cAMP pulsing were harvested after 24 hrs or 28 hrs. RNA was extracted from these samples and resolved on a 1% formaldehyde gel, transferred to a nylon membrane and probed with a 32P-labelled fragment of the spiA gene. The blot was reprobed with a 32P-labelled fragment of the IG7 gene to control for loading. (C) Viability of cdk8-2 spores. Fruiting bodies were formed by developing cells in shaken suspension with cAMP pulsing prior to plating. Equal numbers of spores from each strain were treated with heat and detergent and spread onto a bacterial lawn. Colonies resulting from hatching of viable spores were counted after 4-5 days. Each bar represents the mean (± standard deviation) of three independent experiments. Results showing a statistically significant difference from Ax2 bsR are marked with a * (p < 0.05 by Student's t-test).
Upon culmination, the spiA gene (a gene induced late in spore differentiation and found to be expressed at reduced levels in the original cdk8-HL cells) was expressed at similar levels in both the cdk8-2 and Ax2 bsR strains (Figure 2B).
Viability of cdk8-2 spores
In order to investigate whether the spores generated by the cdk8-2 strain were viable, their ability to germinate after heat and detergent treatment was tested. Fruiting bodies formed by the mutant and control strains were harvested and disaggregated. An equal number of spores from each strain was exposed to detergent and heat treatment in order to destroy any non-viable spores and undifferentiated amoebae. The number of surviving spores from each strain that were capable of germination was determined by spreading dilutions onto bacterial lawns and counting the resultant colonies.
This analysis revealed that cdk8-2 spores were less viable than those of the Ax2 bsR strain. Whilst 98% of spores from the control strain were able to germinate after heat and detergent treatment, this was true for only 40% of cdk8-2 spores (Figure 2C). This defect in spore formation is much milder than that of the previous cdk8-HL strain, in which virtually no resistant spore cells were formed [11]. These data suggest that, although the phenotype was exaggerated in the duplication-containing background, there is a consistent defect in spore cell formation in the absence of cdk8.
Requirement for kinase activity of Cdk8
The cdk8-2 strain was transformed with an extrachromosomal vector to drive expression of an epitope-tagged version of Cdk8 (myc-Cdk8) from a semi-constitutive actin15 promoter (pDXA[act15::myc-cdk8]) and a version containing a point mutation (pDXA[act15::myc-cdk8kd]) to create a kinase-deficient form of the same protein (myc-Cdk8kd) [11]. The expression of each Cdk8 protein was confirmed by western blot (Figure 3A). Expression of the myc-Cdk8 protein in cdk8-2 cells resulted in rescue of all the observed growth and developmental phenotypes. In contrast, the cdk8-2[myc-cdk8kd] strain grew at a rate comparable to the cdk8-2 strain and did not form aggregates upon starvation (Figures 3B and 3C). The cdk8-2[myc-cdk8] spores exhibited a similar viability (89%) to those of the Ax2 bsR strain (104%), implying that expression of the myc-Cdk8 protein complemented the late developmental phenotype (Figure 3D). In contrast, the cdk8-2[myc-cdk8kd] spores exhibited a similar viability (43%) to those of the cdk8-2 strain (39%). These observations implied that the phenotypes observed in the cdk8-2 strain were directly attributable to a loss of Cdk8 kinase activity.
Overexpression of pkaC in cdk8-2 cells
The early developmental phenotype of the cdk8-2 cells suggested a potential gene target for regulation by Cdk8 activity. The cAMP-dependent protein kinase (PKA) enzyme is vital for both aggregation and the formation of spore cells (reviewed in [16]). Many aggregation-deficient strains which can be rescued by exogenous pulses of cAMP can also be rescued by restoring expression of the catalytic subunit of PKA, pkaC. As no expression of pkaC could be detected in cdk8- cells ([11] and data not shown), it was hypothesised that this may be responsible for the defect in aggregation. In order to address this question, the pDXA[act15::FLAG-pkaC] plasmid, which expresses the PKA-C protein with an N-terminal FLAG tag, was transformed into the cdk8-2 and Ax2-bsR strains to create the cdk8-2[FLAG-pkaC] and Ax2-bsR[FLAG-pkaC] strains. Cell extracts from these strains were analysed by western blot to confirm expression of the 75 kDa FLAG-PKA-C protein in both strains (Figure 4A).
Both strains were developed alongside the parental Ax2-bsR and cdk8-2 strains. It has previously been shown that overexpression of PKA-C results in more rapid completion of the developmental cycle [17]. As the Ax2-bsR[FLAG-pkaC] strain was observed to achieve culmination more rapidly than the parental Ax2-bsR strain (data not shown), it was concluded that a functional PKA-C protein was being expressed. However, expression of this protein in the cdk8-2[FLAG-pkaC] strain did not result in a rescue of the cdk8-2 aggregation defect (Figure 4B), suggesting that low levels of pkaC expression were not solely responsible for this phenotype.
Identification of proteins with altered expression levels in cdk8- cells
In order to identify targets regulated by the Cdk8 protein, whole-cell extracts from vegetatively growing cdk8-2 and Ax2-bsR strains were analysed by two-dimensional gel electrophoresis and the gels were stained with colloidal blue. Subsequent analysis of the protein spots found that the vast majority of protein features showed equal staining intensities in both strains (Figure 5A). This characteristic was used to normalise the staining intensity of each gel against a standard. More detailed analysis found that two protein features were reproducibly altered in the cdk8-2 strain in comparison with the Ax2-bsR strain (Figures 5B and 5C). A protein approximately 42 kDa (p42) in size was present at higher levels in the cdk8-2 strain, whilst a smaller protein of 32 kDa (p32) was consistently expressed at lower levels in the mutant strain (Figure 5C).
These proteins were both possible targets of Cdk8-dependent regulation, so each was excised from the gel and sequenced by mass spectrometry. This approach identified p42 as 4-hydroxyphenylpyruvate dioxygenase (HPD, dictyBase ID number: DDB_G0277511). This metabolic enzyme is involved in catalyzing the conversion of 4-hydroxyphenylpyruvate to homogentisate in the tyrosine catabolic pathway [18][19][20]. The p32 protein was identified as the protein product of gene DDB_G0290659. BLAST searches against the NCBI database revealed that this protein shares a large amount of similarity with short-chain dehydrogenase/reductases from a number of different species. It was thus named SDR1 (short-chain dehydrogenase/reductase 1), while the gene encoding it was named sdrA.
Expression of the hpd and sdrA genes
As Cdk8 is a known transcriptional regulator, it was investigated whether the altered abundances of the HPD and SDR1 proteins were mirrored by alterations in the expression of the hpd and sdrA genes. RNA was extracted from Ax2-bsR and cdk8-2 cells that were growing vegetatively or had been developed on KK2 agar for 3 hrs. Northern blot analysis of these samples indicated that hpd mRNA was present at equal levels in vegetatively growing cdk8-2 and Ax2-bsR strains and could not be detected in either strain after three hours of development (Figure 6). Examination of sdrA mRNA levels revealed that this transcript was present at higher levels in the Ax2-bsR cells than in cdk8-2 cells during vegetative growth (Figure 6). As with hpd mRNA, no transcript could be detected after 3 hrs of development in either strain. These observations implied that the reduced abundance of the SDR1 protein in the cdk8-2 strain may be the result of reduced expression or stability of the sdrA transcript, while the altered level of HPD was not due to a direct effect on mRNA levels.
Discussion
The original cdk8-HL strain exhibited a complex phenotype which included defects in growth and aggregation. When developed under permissive conditions, the block upon aggregation was relieved, but the resultant fruiting bodies were highly defective, with no mature spores [11]. All of these defects could be complemented by expressing wild-type Cdk8, but not a kinase-dead version, from its endogenous promoter, and so were dependent on Cdk8 activity. As a consequence of these observations, it was proposed that Cdk8 may function as a regulator of cell differentiation. A potential role for Cdk8 later in development was also suggested by a reduction in the proportion of Cdk8 associated with a high molecular weight complex late in development in response to an increase in extracellular cAMP [21].
Microarray analysis of genomic DNA revealed duplications in the genome of the cdk8-HL strain. The analysis strongly suggests that the cdk8-HL strain contains a large duplication of approximately 1.2 Mb on chromosome 2 and another, smaller duplication of approximately 300 kb on chromosome 5. Chromosome 2 is the largest of the Dictyostelium chromosomes, and a number of previous reports have suggested that it may be intrinsically unstable: in the NC4 strain originally isolated by John Bonner and maintained in vegetative growth for many decades, the chromosome is maintained as two smaller fragments [22]. Similarly, the Ax4 strain, but not the Ax2 strain, possesses a perfect inverted repeat of 1.5 Mb on chromosome 2 that has been proposed to be the result of a 'breakage-fusion-bridge' cycle [23]. The putative chromosome 2 duplication detected in the cdk8-HL strain is of a similar size and occurs in a similar region of the chromosome as the Ax4 duplication. A number of other reports of chromosomal aberrations in this region of the genome imply that it may be a hotspot for rearrangements [14]. The frequent occurrence of duplications in many strain backgrounds suggests that the segmental duplication on chromosome 5 in the cdk8-HL strain was not caused by genome instability resulting from loss of Cdk8 function.
Due to the cdk8-HL chromosomal abnormalities, it was decided to create a new cdk8 disruptant. Characterisation of the newly generated mutant strain indicated that it had similar growth and aggregation defects to those previously observed in the cdk8-HL strain [11,12]. Expression of myc-Cdk8, but not myc-Cdk8kd, rescued the growth and early development defects of the cdk8-2 strain, indicating that these phenotypes were a direct consequence of an absence of Cdk8 kinase activity. PKA is a central regulator of Dictyostelium development and its activity has been found to be essential to the expression of a number of pre-aggregative genes, including aca and carA [24]. As expression of pkaC, aca and carA could not be detected in cdk8- cells [11], it was suggested that Cdk8 may exert its function upstream of PKA, so a FLAG-tagged PKA-C protein was over-expressed in both the cdk8-2 and Ax2-bsR strains. This approach results in constitutively active PKA and has been used to rescue aggregation defects in a number of strains, including those deficient in ACA, Erk2 and AmiB [25][26][27]. FLAG-PKA-C protein expression in the Ax2-bsR strain resulted in rapid development, a phenotype reported to be a consequence of high PKA activity [17], consistent with the production of active protein. However, although expressing similar levels of FLAG-PKA-C as detected by western blot, the cdk8-2[act15/FLAG-pkaC] strain exhibited defects similar to those of the cdk8-2 strain, implying that the defects in aggregation were not merely a consequence of defects in PKA-regulated gene expression. It is possible that the levels of PKA-C expression were not high enough to rescue the defect, despite its expression from a multicopy extrachromosomal vector using a strong promoter.
The cdk8-2 aggregation deficiency is comparable to that observed in a strain deficient in AmiB [27]. These phenotypic similarities might be expected, as AmiB has been identified as a Dictyostelium homologue of Med13, a component of the same Mediator module as Cdk8 [8]. However, the observation that constitutive PKA activity rescues the amiB- [27] but not the cdk8-2 aggregation defect is interesting, as it implies that AmiB and Cdk8 operate in different signalling pathways. Such an observation would not be unprecedented, as the Med13 and Cdk8 proteins have been shown to play subtly different roles during Drosophila development [8]. However, we cannot rule out that the precise level of overexpression of PKA-C is important in rescuing the phenotype.
Although the phenotypes of the cdk8-2 and cdk8-HL cells appeared identical with regard to growth and early development, this was not the case during later development. It was previously observed that the cdk8-HL strain formed aberrant fruiting bodies which contained virtually no viable spores. This late developmental defect was accompanied by a decrease in the expression level of the spore-specific transcript spiA [11]. In contrast, the newly generated cdk8-2 strain produced phenotypically normal fruiting bodies in which the spiA transcript was expressed at normal levels. Microscopic examination of disaggregated cdk8- fruiting bodies revealed that the strain was capable of producing morphologically normal spores (data not shown). However, these spores exhibited only 40% viability when compared to those formed by the Ax2-bsR control strain. Expression of a myc-Cdk8 protein rescued this defect in spore viability, indicating that it was a consequence of loss of Cdk8 function. These data suggested that Cdk8 is involved in Dictyostelium spore cell differentiation in cells that are presumed to lack the genomic duplications present in the cdk8-HL strain. However, the role is more subtle than when these duplications, and perhaps other unknown smaller-scale mutations, were present.

Figure 6: Expression of hpd and sdrA in cdk8-2 cells. Cells were harvested after vegetative growth (0 hrs) or development on KK2 agar for 3 hrs at 3 × 10^6 cells/cm^2. RNA was extracted from these samples and resolved on a 1% formaldehyde gel, transferred to a nylon membrane and probed with 32P-labelled fragments of the hpd and sdrA genes. The blot was reprobed with a 32P-labelled fragment of the IG7 gene to control for loading.
A 2D-PAGE comparison of the cdk8-2 and Ax2-bsR strains was unable to detect any differences in the abundance of the vast majority of proteins. This is consistent with microarray analysis in which it was observed that Srb10 was involved in the regulation of only 3% of S. cerevisiae genes [2]. Despite the similarities between the cdk8-2 and Ax2-bsR proteomes, two proteins were consistently observed to be present at different levels in the cdk8-2 strain. One of these proteins was abnormally abundant in cdk8-2 cells and was identified as the Dictyostelium homologue of 4-hydroxyphenylpyruvate dioxygenase (HPD). The other protein, named SDR1, exhibited homology to the short-chain reductase proteins and was found to be present at unusually low levels in the cdk8-2 strain. Short-chain dehydrogenases are a large family of NAD- or NADP-dependent oxidoreductases working on a variety of substrates with short carbon chains. For example, alcohol dehydrogenases break down alcohols, which can be toxic, and are also involved in generating aldehydes and ketones during various biosynthetic processes. In bacteria and yeast this enzyme can be used to synthesize alcohol under anaerobic conditions. It is not possible to predict the substrate of the Dictyostelium SDR1 from sequence homology. The SDR1 protein has been found to become associated with phagosomes during maturation [28], and expression of the sdrA gene was also found to be upregulated as part of the Dictyostelium response to Legionella infection [29]. HPD is a metabolic enzyme involved in catalyzing the conversion of 4-hydroxyphenylpyruvate to 2,5-dihydroxyphenylacetic acid (homogentisate or melanic acid) as part of the pathway to degrade tyrosine and phenylalanine [18][19][20]. In Dictyostelium, the HPD protein has been found to associate with the centrosome [30] and has been implicated in phagosomal maturation [28]. Microarray analysis revealed that the hpd gene is upregulated in the environmentally resistant aspidocyte cell type that is formed by Dictyostelium in response to stress [31]. Both of these proteins are catabolic enzymes found associated with phagosomes and implicated in stress responses. This would fit with a general role for Cdk8 in regulating stress responses, consistent with data from S. cerevisiae showing regulation of cyclin C in response to environmental stresses [3]. Microarray analysis of S. cerevisiae deficient in srb10 (the cdk8 orthologue) showed altered expression of around 3% of yeast genes, and half of these are genes derepressed during nutrient starvation [2], again consistent with a general role for Cdk8 in mediating metabolic responses.
Despite the difference in protein levels, the hpd mRNA was present at similar levels in the two strains. However, the sdrA transcript was found to be significantly less abundant in cdk8-2 cells than in the Ax2-bsR strain. This suggested that the lower level of the SDR1 protein in the mutant cell line may be due to defects at the transcriptional level, whereas the effect on HPD levels is post-transcriptional. The established role for Cdk8 in transcriptional regulation via the Mediator complex would suggest that the sdrA gene is a candidate for direct transcriptional regulation by Cdk8 in Dictyostelium.
Conclusions
This analysis has confirmed the importance of Cdk8 at multiple stages of Dictyostelium development, although the severity of the defect in spore production depends on the genetic background. Potential targets of Cdk8-mediated gene regulation have been identified in Dictyostelium which will allow the mechanism of Cdk8 action and its role in development to be determined.
Growth, development and strain generation of Dictyostelium
Dictyostelium cells were grown axenically in HL5 medium at 22°C in shaking suspension. For development in shaking suspension, exponentially growing cells were resuspended in KK2 (16.5 mM KH2PO4, 3.8 mM K2HPO4) at 2 × 10^7 cells/ml and shaken at 120 rpm and 22°C for 5 hours, pulsed with 50 nM cAMP every 5 minutes. For filter development, exponentially growing axenic cells were washed in KK2 and resuspended in LPS (40 mM KH2PO4, 20 mM KCl, 680 μM dihydrostreptomycin sulphate [pH 7.2]), pulsed as above and plated at 3.5 × 10^6 cells/cm^2 on Millipore filters on an LPS-soaked pad. The filters were incubated at 22°C in the dark.
The entire pkaC coding sequence was amplified by PCR using primers to introduce a FLAG tag sequence at its 5' terminus. This fragment was inserted into the BamHI and XhoI sites of the pDXA-3C vector [32] to generate the pDXA[act15/FLAG-pkaC] plasmid. The cdk8KO-pLPBLP vector was constructed to allow disruption of the cdk8 locus in Ax2K by the insertion of a blasticidin resistance (bsR) cassette. This vector contained the same regions of cdk8 homology as the pCDK8-KOI vector that had been used to create the original cdk8- strain [11], but used the pLPBLP vector [33] as its backbone. Constructs were introduced into Dictyostelium Ax2 cells by electroporation and transformants were selected by growth in the presence of G418 (10 μg/ml) or blasticidin (5 μg/ml), as appropriate. All strains were generated with the approval of the Biochemistry Department Genetic Modification Safety Committee, University of Oxford. The strain used for the phenotypic analysis in this manuscript will be lodged as a communal resource in the Dictyostelium Stock Center (dictybase.org).
Northern analysis
Total RNA was extracted from approximately 1 × 10^7 cells using the TRIzol RNA extraction kit (Sigma) according to the manufacturer's protocol. Samples (10 μg) of total RNA were separated on a 1% formaldehyde-containing gel, blotted and probed by standard methods.
Analysis of genomic DNA by microarray
A protocol similar to that used to analyse RNA was used to compare genomic DNA (gDNA) by microarray. 5 μg of genomic DNA in T0.1E buffer (10 mM Tris-HCl, 0.1 mM EDTA, pH 8) was sonicated. 20 μl of 2.5X random primer mix (Bioprime kit, Invitrogen) was added and the mixture heated for 5 mins at 95°C before being placed on ice. 5 μl of 10x dNTP mix (1.2 mM dATP, dTTP and dGTP, and 0.6 mM dCTP, in T0.1E), 3 μl of Cy5-dCTP or Cy3-dCTP (final concentration 60 μM) and 1 μl of Klenow fragment (40 U/μl, Invitrogen) were then added and the reaction mixture incubated away from light at 37°C. After 2 hrs the reaction was stopped by the addition of 5 μl of 0.5 M EDTA (pH 8). The labelled gDNA was separated from the other reaction components by passing the reaction mixture through a G-50 column. This gDNA was then precipitated and hybridised to the microarray slide overnight at 42°C [34]. Arrays were scanned using an Axon Instruments GenePix 4000B scanner and fluorescence was quantified using the GenePix 3.0 software. Subsequent data processing steps were carried out with the limma package, part of the Bioconductor project, in the R statistical environment [35][36][37]. Background fluorescence was subtracted using the method of Kooperberg et al. [38] and the data were then normalised using the print-tip loess algorithm. Single hybridisations were sufficient to identify large segmental duplications. The array data have been deposited in ArrayExpress under accession E-TABM-1013. The array design, also available from ArrayExpress, has the accession A-SGRP-3.
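The published preprocessing was carried out in R with limma; purely as an illustration of the two steps named above (background subtraction followed by an intensity-dependent loess fit to the log-ratios), a simplified Python sketch is given below. It omits Kooperberg's model-based background correction and the per-print-tip grouping used in the real pipeline, so it is a conceptual outline rather than the analysis actually performed.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def normalise_two_colour(red_fg, red_bg, green_fg, green_bg):
    """Simplified two-colour array normalisation (illustrative only).

    Crudely subtracts background from each channel, then removes the
    intensity-dependent dye bias by fitting a lowess curve to the MA
    representation (M = log-ratio, A = mean log-intensity) and
    subtracting the fitted trend from M.
    """
    r = np.maximum(red_fg - red_bg, 1.0)      # Cy5 channel, floored at 1
    g = np.maximum(green_fg - green_bg, 1.0)  # Cy3 channel, floored at 1
    m = np.log2(r / g)                        # log-ratio per spot
    a = 0.5 * np.log2(r * g)                  # average log-intensity per spot
    trend = lowess(m, a, frac=0.3, return_sorted=False)
    return m - trend                          # normalised log-ratios
```

For copy-number comparisons such as this one, normalised log2 ratios sitting consistently above zero across a contiguous chromosomal window would then point to a segmental duplication.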
Protein identification
Protein spots were cut from the gel, digested with trypsin and peptide masses were identified by tandem mass spectrometry (MS/MS) [39]. Peptide masses were compared to protein databases (dictyBase, NCBI, SwissProt) using BLAST for protein identification. | 2017-06-19T18:40:17.538Z | 2011-01-21T00:00:00.000 | {
"year": 2011,
"sha1": "b9e8845623dc838f9935b235811cda095e515b6f",
"oa_license": "CCBY",
"oa_url": "https://bmcdevbiol.biomedcentral.com/track/pdf/10.1186/1471-213X-11-2",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9e8845623dc838f9935b235811cda095e515b6f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55659538 | pes2o/s2orc | v3-fos-license | „SILENT HUNGER” IN THE CONTEXT OF SOME CHEMICAL PRODUCTS OF INDUSTRY
Currently, more and more people are suffering from lifestyle diseases and various nutritional intolerances. Living in constant haste and stress often causes them to pay little attention to rational nutrition, which results in shortages of necessary nutrients. Plants and, more specifically, cereals are the main source of food in the world. The purpose of this paper is to demonstrate that mineral fertilisers and food additives are necessary and have a positive impact on the quality of selected crops used as the basic raw material in the production of consumer goods. Chemical industry products such as mineral fertilisers and food additives, when used in the appropriate doses, in the correct situations and at the right time, constitute a prerequisite for correct agricultural production, as they shape the appropriate standard of cereal raw materials despite rather sparse and constantly exploited agricultural sites. Qualitative malnutrition is described using the term "silent hunger"; Ziegler prefers to call it "invisible hunger", as it is not easy to detect at first sight by a lay person and a medical professional alike. The effects of qualitative malnutrition are not easily noticeable. People affected by it may have a normal body weight and still suffer from its effects, which can lead to serious health issues.
THE ISSUE OF QUALITATIVE HUNGER
Every human being's right to food is specified in Article 11 of the International Covenant on Economic, Social and Cultural Rights [4]. The right to food is undoubtedly one of the most frequently and commonly violated human rights. Hunger may even be classified as the result of organised crime. According to FAO [5] estimates, the number of people suffering from chronic and severe malnutrition in 2010 reached 925 million (for comparison, in 2009 it was 1023 million). Almost one billion people (of the world population of seven billion) suffer from chronic hunger [6].
Food of either plant or animal origin (sometimes also mineral) is consumed in order to provide the body with energy and nutrients. The basic unit of energy understood in this sense is the kilocalorie. Such a measurement system makes it possible to estimate the amount of energy needed by the human body to regenerate itself. An insufficient amount of energy and a low intake of calories lead to hunger and subsequently to death. The daily number of calories required by a person depends on age, sex, body weight, type of work performed and climate [7]. The World Health Organisation (WHO) has established that an adult person requires a minimum of 2200 calories per day to survive. A calorie intake below this threshold does not allow the human body to sufficiently regenerate itself. Malnutrition often leads to the development of so-called hunger diseases. Furthermore, hunger dangerously compromises the immune system of the people afflicted by it [8].
The areas affected by hunger are unevenly distributed across the globe [9]. Nearly three-quarters of starving people live in Asia, the Pacific Region and Africa. In comparison to the 1969-1971 period, the estimated percentage of malnourished people in the world went down to 13% in the years 2005-2007, due to abundant crops [10]. Regardless of the fact that the majority of starving people live in developing countries, the industrialised countries are not entirely free of this problem. In May 2012, UNICEF published a report on child malnutrition in Spain: 2.2 million Spanish children under ten are chronically malnourished. The political situation and crises in the Eastern European and former Soviet Union countries do not help either. As early as February 2011, FAO announced that 80 countries are threatened by food shortage. According to the statistics, one in seven inhabitants of our planet is affected by hunger [11].
Aside from people suffering from the devastating effects of malnutrition and hunger, there is also a third category: people suffering from qualitative malnutrition. The FAO is also concerned with those people; however, they are treated as a separate group from the former two. The term quantitative malnutrition refers to an insufficient intake of calories, while qualitative malnutrition means a deficiency of micronutrients, vitamins and mineral salts. Due to acute and severe qualitative malnutrition, millions of children under the age of ten die every year [12].
The effects of qualitative malnutrition are not easily noticeable. People affected by it may have a normal body weight and still suffer from its effects. Vitamin and mineral salt deficiencies can lead to serious health issues, such as significantly greater susceptibility to infectious diseases, loss of vision, anaemia, coma, reduced knowledge acquisition skills, intellectual development disorders, various forms of physical deformity, and finally death. The most common deficiencies involve the following three nutrients: vitamin A, iron and iodine [13].
The UN [3] describes qualitative malnutrition using the term "silent hunger". Ziegler [14] prefers to call it "invisible hunger", as it is not easy to detect at first sight by a lay person and a medical professional alike. This condition can even lead to death, in the same way as calorie deficiency. However, deaths caused by it are not taken into account for FAO statistics purposes, as the statistics mainly concern calorie intake. Since 2004, the United Nations Children's Fund (UNICEF) and the Micronutrient Initiative, a non-profit organisation dealing with the problem of micronutrient deficiency, have been conducting regular research studies, whose results are published in the reports entitled "Vitamin and mineral deficiencies. The global situation" [15].
Anaemia caused by iron deficiency is one of the most common negative effects of qualitative malnutrition. The disease's symptoms include a reduction in blood haemoglobin level and a weakening of the immune system. This condition is especially dangerous for children under five, as iron deficiency in children leads to irreversible damage in the form of intellectual disorders. Every four minutes a person in the world loses his or her eyesight, and in most cases the loss is connected with malnutrition. Vitamin A deficiency affects 40 million children in the world, and 13 million of them lose their eyesight. A long-term vitamin B deficiency, on the other hand, causes beriberi disease, which devastates the human nervous system. Vitamin C deficiency causes scurvy and, in the case of small children, rickets. Folic acid, for instance, is necessary during pregnancy. According to World Health Organisation (WHO) estimates, every year 200,000 intellectually challenged children are born due to the lack of this nutrient in the diet of their mothers during pregnancy. Iodine is an essential element needed for the proper functioning of the body. Nonetheless, a billion people suffer from its deficiency. It cannot be naturally acquired in mountainous regions and inundation areas, in which the soil is subject to water erosion, or in the southern hemisphere. If left unsupplemented, iodine deficiency leads to thyroid disease (goiter), stunted growth, and mental retardation (cretinism). Iodine deficiency in pregnant women, and by association the foetus, can also have serious and far-reaching consequences. According to The Economist [2011], zinc deficiency causes about 400,000 deaths a year. Its deficiency in small children causes severe diarrhoea, often resulting in death. Zinc deficiency also impairs motor skills and mental abilities.
It is important to know that half the people suffering from micronutrient deficiency are usually afflicted by cumulative deficiencies of elements, i.e. they are simultaneously lacking several vitamins and minerals in their diet. Qualitative malnutrition is a direct or indirect cause of half of deaths among children under five in the world. The great majority of its victims live in South Asia and Sub-Saharan Africa. In a document of 2008, published by an organisation called Action contre la Faim, the following information can be found: "The issue of qualitative malnutrition in children is not hard to solve". It only needs to be made a priority. Unfortunately, many countries of the world are lacking in goodwill [16]. It is also important to bear in mind that qualitative malnutrition devastates not only the body but also the human psyche. Macro- and micronutrient deficiencies cause various diseases, which in turn lead to fear, humiliation, nervous breakdown and apprehension about the future. A family without secure access to adequate supplies of good food is a broken family. As noted by Ziegler [17], this fact is sadly illustrated by the situation in India, where thousands of farmers have committed suicide in recent years.
In the light of this chapter devoted to the issue of qualitative hunger, the need for new food sources rich in macro- and micronutrients and vitamins becomes more understandable.
SOME ASPECTS OF AGRICULTURE
Agriculture is an area of the economy that greatly utilises mineral and organic fertilisers. The quality and quantity of cereal yields are determined by many factors, especially fertilisation. Mineral fertilisers, also referred to as artificial fertilisers, are substances extracted from the ground and subsequently processed, or manufactured chemically. Their aim is to enrich the soil with the minerals necessary for plants to develop, improve the soil structure and alter its acidity.
The main physicochemical soil properties that influence the quality and quantity of cereal yields are the granulometric composition and its variability in the soil profile, the hydrographic conditions, the pH value and the soil bacteria level (fertility). Over the last 20 years, the area allocated to crops cultivated for grain, including wheat, triticale and maize, has increased; however, the area allocated to the cultivation of rye, barley, oat, buckwheat and millet, and also of leguminous plants for grain, potatoes, economic plants, fodder plants and vegetables, has decreased considerably [18].
For many years, cereals have played a vital role in the food economy of every country. Their seeds are characterised by a chemical composition and nutritional value that are valuable to both humans and animals. They have a positive ratio of carbohydrate and fat content to protein, along with a high starch content, a low fat content, and a high content of fibre and of many minerals, vitamins and other biologically active compounds. Correct human nutrition consists of providing the body with all the nutrients necessary for its normal development. Crops resulting from vegetable production, once they meet specific qualitative criteria, can be used in the production of specific products. The modern manufacture of raw materials must meet a range of criteria. As indicated by numerous studies, the quality of raw materials is determined by the whole process of vegetation in the field. The appropriate quality depends on the growth conditions available in the production field, including the use of mineral fertilisers.
Mineral fertilisers and selected crop-quality indicators
Fertilisation is a factor that has a particularly strong impact on the yield and quality of grain crops. It refers to the supply of minerals to the plants that feed on them, either via the soil or as leaf sprays, in the form of chemical agents (mineral fertilisers) or organic substances (natural and organic fertilisers). Fertilisation aims not only to achieve optimal cereal yields, but also to improve specific qualitative characteristics of the seeds. In Poland, mineral fertilisers (nitrogen, phosphorus, potassium, magnesium, calcium, mixed and micro fertilisers) are used, along with natural and organic fertilisers (manure, fermented and unfermented liquid manure, straw, compost, green manure and crop residue). Nevertheless, mineral fertilisation, especially with nitrogen, is of the greatest importance when it comes to crop yielding [19].
It is claimed that, for proper plant growth and yielding and the preservation of the "plant-soil" balance, smaller doses of mineral fertilisers should be used under better growing conditions (beneficial nutrient content in the soil, warm weather with moderate precipitation), and bigger doses otherwise. Both nitrogen excess and deficiency have an unfavourable impact on yields and their quality. Nitrogen deficiency causes the yellowing and drying of the oldest leaves, a decrease in stomatal conductance, and the retarded growth and development of plants [20].
The utilisation of high doses of nitrogen facilitates rich cereal yields, but it does not always have a positive impact on their quality. Excessive nitrogen content in the soil is conducive to delayed sprouting and early entry into the developmental phases of cereals, and leads to a higher number of flowers per plant at the expense of seed/grain filling, particularly in buckwheat. By increasing nitrogen fertiliser doses (up to 30 kg N ha^-1), we can observe a higher yield of buckwheat hulled grain and an increase in protein content, especially in defatted buckwheat hulled grain and pericarps. Higher nitrogen doses (60 kg N ha^-1) are conducive to a decrease in both the crop yield of this plant and the content of valuable flavonoids (rutin, isoorientin, orientin, isoquestin) in pericarps. Buckwheat grain yields at the dose of 30 kg N ha^-1 are considerably lower, and this plant yields better when exposed to higher nitrogen doses (60 and 90 kg N ha^-1) and lower nitrogen-phosphorus fertilisation, which is proved by the highest vegetative mass and full hulled grain mass. Higher nitrogen fertilisation causes an increase in MTN (thousand-seed mass), in the raw protein content in grain and in the crop yield per hectare [21].
High nitrogen doses in the case of malting barley (up to 60 kg N ha^-1) result in an increase in yields and in the content of protein in seeds, soluble proteins and free amino nitrogen, as well as in the activity of β- and α-amylase and the diastatic power, at the expense of lower grain plumpness and malt extractivity, which in consequence leads to worse grain quality for brewing purposes. In some research into spring barley, it was observed that the most beneficial yields of good fodder and consumption quality can be obtained when nitrogen fertilisation in an amount up to 60 kg N ha^-1 is carried out in the period from full tillering to the beginning of earing. Higher amounts of this constituent (up to 120 kg N ha^-1) do not influence crop growth, but cause changes to the amino-acid makeup of the protein (a lower content of exogenous amino acids, especially lysine, methionine and isoleucine), which leads to a reduced use value of the plant's seeds [22].
For malting barley, nitrogen doses of 40 kg N ha^-1 should only be used before sowing, which will ensure the optimal nutrition of the plant with nitrogen, as assessed according to the brewing standard of protein content in grain (10.5-11.5%). Nitrogen fertilisation coupled with overhead irrigation has a positive impact on the yield and quality of both malting and forage barley. The seeds of overhead-irrigated and abundantly fertilised malting barley meet the requirements of suitability for brewing purposes, and the seeds of forage barley originating from overhead-irrigated fields fertilised with high doses of NPK exhibit good qualitative parameters. Thanks to complementary overhead irrigation coupled with high doses of mineral fertiliser, it is possible to obtain a high barley yield and increase soil productivity [23].
Fertilising with other macroelements (P, K, Mg, Ca) is of lesser significance to crop quality than nitrogen fertilisation. However, the presence of these elements, and especially of Mg in small quantities, is crucial for the correct growth, development and yield of plants, buckwheat in particular. The influence of potassium and phosphorus fertilisation on buckwheat yielding is insignificant, and excessive doses can even cause a decrease in yield. The elements in question must be supplied to the plant in small quantities, since potassium and phosphorus increase the content of monosaccharides in flower nectar, which causes increased foraging by pollinators and can indirectly contribute to better seed setting. Organic and natural fertilisers have a lower impact on the quantity and quality of cereal yields than mineral fertilisers, as crops for consumer purposes are sown in the second year after the use of manure or another organic fertiliser in the field. This facilitates a good supply of nutrients and prevents the emergence of many crop diseases [24]. The content of macro- and microelements in the soil influences the qualitative parameters of wheat grain. As regards macroelements, the most important is an appropriate supply of nitrogen. This facilitates an increased content of protein and gluten, a higher sedimentation index and better rheological properties of dough. Phosphorus and potassium, as well as microelements (copper, manganese and zinc), contribute to obtaining grains with beneficial qualitative properties. Microelement content in the soil and its availability for plants depend on an array of factors. In some regions of the country we can observe microelements appearing in excess, which has a negative impact on the development and yield of plants. Too high a copper content in wheat grain decreases the baking value of flour, while a deficiency of this element leads to hampered growth and development of the main shoot and inhibited development of the generative organs, which in consequence substantially decreases the yield. Manganese deficit impairs the metabolic functions of plants and decreases the sowing value of seeds. Supplementing winter and spring wheat with microelements has a positive impact on qualitative properties of the grain such as gluten content and sedimentation index [25].
Barley and buckwheat yield quality indicators sometimes depend more on the content of available forms of P, K, Ca and N, and especially Mg, than on other properties of the soil. Higher cereal and protein yields of spring barley and buckwheat are observed in soils with Mg content exceeding 60 mg kg^-1 of soil, P content exceeding 48 mg kg^-1 of soil and K content exceeding 130 mg kg^-1 of soil. A positive impact of an increased content of these elements on specific yield structure elements and on the above-ground parts of crops was also recorded. On the other hand, an increased protein content was observed in grains originating from soils with a lower content of minerals (Mg below 2 mg 100 g^-1 of soil). The impact of Mg on buckwheat and barley yielding was greater under Mg deficiency conditions in plants and was conditioned by the specific requirements of every crop. Barley yields harvested from soils richer in nutrients are usually characterised by better brewing quality.

Footnotes: … sativum L.) Against a Background of Unconventional Binding Agents, Polish Journal of Environmental Studies, 2002, vol. 11, no. 2, 109-116; Dietrych-Szóstak D., Podolska G., Wpływ nawożenia azotem na plon oraz zawartość białka i flawonoidów w orzeszkach gryki, Fragm. Agronom., 2008, 1, 101-109; Pecio A., Kubsik K., Zróżnicowanie plonu…, op. cit.; Hasim M.A., Mukhopahyyay S., Sahu J.N., Sengupta B., Remediation technologies for heavy metal contaminated ground water, J. Environ. Managem., 2011, 92, 2355-2388.
Additives as permitted products of the chemical industry
From the technological and health-related perspective, food additives are important factors [27]. These additives include: substances that prevent spoilage (preservatives, acids, acidity regulators, antioxidants, chelating agents, stabilisers and gases), substances that shape the sensory properties of the product (food colouring, sweetening agents and flavour enhancers), substances that give products their texture (emulsifiers, anti-caking agents, modified starches, raising agents, stabilisers, thickeners, mass-increasing agents, humectants and gelling agents), and processing aids (enzymes, pressurised gases, flour treatment agents, foaming agents, antifoam agents, solvents and glazing agents). The main purpose of adding these substances during the production process and during the processing of vegetable raw materials and their products is, among other things, to streamline the course of these processes, increase product durability, and provide the product with desirable sensory, organoleptic and functional properties [28]. Additives can be used in food production only when: they do not pose a threat to the health of consumers at the proposed use level, based on available scientific evidence; there is a justified technological requirement which cannot be met in any other way that would be acceptable from the economic and technological point of view; and the use of a given substance does not mislead consumers as regards the health-related value of foodstuffs. Additives cannot be used to conceal defects in foodstuffs resulting, for instance, from poor quality, incorrect production processes or unhygienic production conditions, or to make the product similar to other (better or more nutritious) products. The conditions and doses of additives in the food industry are specified by legal regulations, in particular by the Directives on food additives other than colours and sweeteners [29]. The aforementioned Directives specify the conditions regarding the use of additives and the foodstuffs in which they can be used. In Poland, the currently binding document is the Regulation of the Minister of Health of 18 September 2008 on permitted additives, which is based on EU regulations [30]. The list of permitted food additives in Poland has increased from 154 to 284.
SUMMARY
A compromise between civilisation and ecology unites technological advancement and attempts at making our lives easier with health safety and care for the natural environment. Food of either plant or animal (sometimes also mineral) origin is consumed in order to provide the body with energy and nutrients, and an insufficient amount of energy and a low intake of calories lead to hunger and subsequently to death.
As noted above, half the people suffering from micronutrient deficiency are afflicted by cumulative deficiencies, and qualitative malnutrition is a direct or indirect cause of half of deaths among children under five in the world. The search for new sources of plant protein to enrich the diet is therefore extremely important in the light of increasing animal protein deficits and the increasing number of consumers preferring vegetarian food. Plant proteins are important due to their diversity and the accessibility of the resources needed to obtain them.
On the other hand, people living in the modern and busy world want to satisfy their nutritional requirements by spending as little time and money as possible. This is facilitated by modern agricultural science and processing, which make food more available and durable, as well as easier to prepare and consume. However, producing such foodstuffs would not be possible without a wide selection of chemical substances, including plant protection products, mineral fertilisers and food additives used in the food industry (preservatives, antioxidants, stabilisers and flavourings). On the other hand, the "march of chemistry" through fields and tables causes risks to both humans and the environment.
Therefore, rational nutrition means reaching a compromise between consumer convenience and health security, as well as between food production intensification and natural environment protection. As highlighted by "longevity researchers", our eating habits are of crucial importance to our health and to the changes taking place in the human body. Poland's presence in EU structures imposes an obligation on food producers to ensure that human health and consumers' interests are protected through the implementation of food safety strategies. Such developments favour functional consumption; however, this would not be possible without the rational use of fertilisers, plant protection products, additives and preservatives. A compromise between civilisational development and ecology requires technology that makes life easier and improves the safety of food and of the environment. Food, whether of plant or of animal origin, is meant to serve the improvement of human health and well-being.
"year": 2016,
"sha1": "7080c58bfbe29bd0f1e9fbde6466753c7f4f23b7",
"oa_license": "CCBYNC",
"oa_url": "http://doi.prz.edu.pl/pl/pdf/einh/268",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7080c58bfbe29bd0f1e9fbde6466753c7f4f23b7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
52877776 | pes2o/s2orc | v3-fos-license | Short-Term PM2.5 Forecasting Using Exponential Smoothing Method: A Comparative Analysis
Air pollution is a global problem and can be perceived as a modern-day curse. One way of dealing with it is by finding economical ways to monitor and forecast air quality. Accurately monitoring and forecasting fine particulate matter (PM2.5) concentrations is a challenging prediction task, but the Internet of Things (IoT) can help in developing economical and agile ways to design such systems. In this paper, we use a historical-data-based approach to perform PM2.5 forecasting. A forecasting method is developed which uses exponential smoothing with drift. Experiments and evaluation were performed using the real-time PM2.5 data obtained from a large-scale deployment of IoT devices in the Taichung region in Taiwan. We used the data from 132 monitoring stations to evaluate our model's performance. A comparison of prediction accuracy and computation time between the proposed model and three widely used forecasting models was carried out. The results suggest that our method can perform PM2.5 forecasts for 132 monitoring stations with an error as low as 0.16 μg/m3 and with an acceptable computation time of 30 s. Further evaluation was done by forecasting PM2.5 for the next 3 h. The results show that 90% of the monitoring stations have an error under 1.5 μg/m3, which is significantly low.
Introduction
With rapid urbanization and industrial growth, concern about deteriorating air quality is also increasing. Deteriorating air quality has adversely influenced the quality of life and has even affected economic growth. While there is still a lot to do to solve this problem, IoT technology has come as a glimmer of hope. The idea is to use IoT devices and cognitive computing to generate large amounts of data which can be further used to enhance air quality management systems and forecasting. A typical case would include the collection and storage of data obtained from the sensors, data analytics, prediction, visualization and an alert message service in the case of unusual behavior in the air quality. When talking about the Smart City Initiative, an important part of it includes developing a system not only to monitor the air quality but also to provide a future forecast. Among all the pollutants, PM2.5 (fine particulate matter with a diameter of less than 2.5 micrometers) is considered to be very harmful to humans. These particles are responsible for causing serious respiratory diseases, asthma, and lung cancer [1], as they can penetrate into the alveoli (the regions for gas exchange in the lungs).
Some previous research works have already shown how participatory sensing can be utilized for event detection [2]. In addition, wireless sensor networks can be efficiently implemented for real-time monitoring of the environment [3]. However, accurately predicting air quality is a very challenging task.
• A univariate time-series prediction model is developed that performs forecasts using exponential smoothing with drift.
• The proposed model was used for hourly PM2.5 prediction using real-world data obtained from monitoring devices deployed in Taiwan.
• We evaluated our model's performance by comparing the results with three baseline models using data from the monitoring stations. The evaluation was based on accuracy and computation time. The model was further tested for forecasting PM2.5 for the next three hours.
• The scalability of the model is tested by performing forecasts for 132 air quality monitoring nodes deployed in the Taichung region in Taiwan.
The rest of the paper is organized as follows. In Section 2, we discuss the related works and our motivation behind this study. In Section 3, we describe the system overview, which includes the proposed architecture, the Airbox Project and the deployment. We also discuss the Airbox data and the visualization platforms. In Section 4, we explain the proposed model in detail and also discuss the other baseline models. In Section 5, we implement the model on real-time data and observe the results. In Section 6, we evaluate our model by comparing the results of the proposed model with the other baseline models. Section 7 concludes the paper and gives a short description of possible future works.
Related Work and Motivation
Much effort has been made to utilize modern-day technology to develop systems which can provide real-time information and services to users. There have been several works related to air quality monitoring and providing services to the users. Grover et al. [10] proposed a Deep Hybrid Model for weather forecasting; it does not forecast PM2.5, but it predicts variables such as temperature, wind and dew point. The linear regression technique has been widely used for forecasting and analysis, including for data pertaining to the fast-changing financial domain [11] and for meteorological and environmental data [12]. However, due to the complex nature of air quality data, it is not extensively used in this domain. Zheng et al. [4] performed forecasting over the next 48 h using a data-driven approach; they used a predictive model based on a linear regression model and a neural network. Kitchin [13] proposed that big data can enable real-time analysis of cities and urban governance and can be used as an effective tool to develop smart and sustainable cities. Time-series data can be noisy in many cases, and it is not easy to perform forecasts with non-stationary data. Ghazali et al. [14] used a Dynamic Ridge Polynomial Neural Network for financial time-series prediction using higher-order and recurrent neural networks. Hsieh et al. [15] focused on issues such as air quality forecasting for an area using data from sparse monitoring stations. Khan et al. [16] investigated the use of a cloud platform for big data analytics; they demonstrated how the smart city initiative can be realized by using real-time big data stored in cloud platforms. However, the loophole of these techniques is that they simply rely on feeding a variety of features to the model, and those features belong to one particular location while a similar model is used for all the locations. The problem is that every location has different emission sources, pollutants and concentrations; therefore, one model cannot perform accurate forecasting for all locations. Donnelly et al. [17] proposed a linear-regression-based method for real-time air quality forecasting with high accuracy; the model was then tested for three urban sites and one rural site. In our work, we built a scalable model and, to test the scalability of our system, we applied it to 132 monitoring stations. Zheng et al. [18] implemented machine learning techniques and big data to perform urban computing. To avoid overly complicated computing, Zhu et al. [19] categorized air pollution before doing prediction; however, this may lose the essence of time-series prediction. Although machine learning models have been proposed widely, when it comes to PM2.5 prediction for small time intervals they have not been exploited much, and machine learning techniques sometimes have computational efficiency issues. To tackle this problem, Shi et al. [20] and Chen et al. [21] used XGBoost, which implements read-ahead caching and utilizes parallel computing to reduce the execution time; however, this might lead to a decrease in the accuracy of the prediction model.
Humans are always surrounded by sensing devices which create a kind of fusion between the physical world and the virtual world. In another way, we can term this the Internet of Humans (IoH), which is a combination of the IoT and Human-Centered Computing (HCC). Regular air quality monitoring and analysis can help in improving community awareness of environmental issues. In this work, we combine networked sensing and crowd-sourcing techniques to collect streams of sensing data about the surroundings and provide an insightful information service at the personal, societal and urban levels. Our motivation behind this work was to integrate IoT and machine learning techniques and develop a system that not only provides real-time air quality information to the users but also creates awareness among people about issues related to poor air quality. In this work, we deal with a large amount of PM2.5 data obtained from the IoT devices deployed around Taiwan. Thus, it becomes very challenging to make sure that the data are of the highest quality and that any anomaly in the data is detected. In addition, it becomes important to achieve high prediction accuracy and also to make sure that the computation time is low, so that the model can be used to create a real-time application.
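Since both prediction accuracy and runtime matter for a real-time service, a small harness such as the sketch below can report the mean absolute error (in μg/m3) and the wall-clock time of any one-step forecaster over a historical series. The rolling-origin scheme and the warm-up length are our illustrative assumptions, not a procedure taken from this paper.

```python
import time
import numpy as np

def evaluate(forecast_fn, series, warmup=24):
    """Rolling one-step-ahead evaluation of a forecaster.

    forecast_fn: callable taking the history (1-D array) and returning
                 the predicted next value.
    Returns (mean absolute error, elapsed wall-clock seconds).
    """
    errors = []
    start = time.perf_counter()
    for t in range(warmup, len(series)):
        pred = forecast_fn(series[:t])        # forecast hour t from hours < t
        errors.append(abs(pred - series[t]))
    return float(np.mean(errors)), time.perf_counter() - start
```

Running the same harness over each station's series yields the kind of accuracy versus computation-time comparison carried out in the evaluation.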
Proposed System Architecture
The proposed system is shown in Figure 1. It follows a three-way approach which includes data sensing, data mining and providing services. The data were obtained from the PM2.5 sensing devices deployed all over Taiwan. The IoT devices provide real-time PM2.5 data, temperature and humidity levels for different regions. The data collection was a continuous process, and any anomaly in that process was directly reported to the administrator [22]. The collected data were stored in the database and are easily accessible. Sometimes some of the monitoring stations do not report any readings; the data monitoring component takes care of this by filtering such data and filling in the missing values based on spatially and temporally neighboring devices. The forecast model uses exponential smoothing with a drift to predict hourly PM2.5. The forecast data can be stored and provided as a web service alongside the visualization of the data.
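The paper does not spell out the exact imputation rule, so the sketch below shows one plausible two-stage version of the idea: interpolate short gaps within a station's own hourly series first, then fall back to an average of the other stations. The gap limit and the unweighted spatial mean are illustrative choices only.

```python
import pandas as pd

def fill_missing(df, max_gap_hours=2):
    """Fill gaps in hourly PM2.5 readings.

    df: DataFrame indexed by hourly timestamps, one column per station.
    Short gaps are filled by temporal interpolation; whatever remains is
    filled with the row-wise mean of the other stations (a crude spatial
    fallback -- a real system would weight neighbors by distance).
    """
    out = df.interpolate(method="time", limit=max_gap_hours)  # temporal step
    spatial_mean = out.mean(axis=1)                           # across stations
    return out.apply(lambda col: col.fillna(spatial_mean))    # spatial step
```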
Airbox Project
The Airbox Project comprises a pilot deployment of IoT systems for PM2.5 monitoring all over Taiwan. The main aim of this project is to encourage and motivate people to participate in PM2.5 sensing. The main inspiration behind this project is the LASS (Location Aware Sensing System) community. This community engages people to participate in PM2.5 sensing and also encourages them to try to develop sensing devices by themselves. The project facilitates PM2.5 monitoring at a finer spatiotemporal granularity and enriches data analysis by making sure that all the measurement data are available freely to everyone [9]. The devices are installed in buildings with a reliable Internet connection and power source. In addition, the data (https://pm25.lass-net.org/en/) are easily accessible, which makes data analysis easy. The sensing devices in the Airbox Project are designed and developed by professional manufacturers. The industrial-product-level devices are made in close cooperation with Edimax Inc. and Realtek Inc. in Taiwan. The devices are based on the Realtek Ameba development board; each device contains a PMS5003 PM2.5 sensor and an HTS221 temperature/humidity sensor. Another version of the deployed device is called MAPS (Micro Air Pollution Sensing System), developed by the Network Research Lab at the Institute of Information Science, Academia Sinica, Taiwan. It is based on the MediaTek LinkIt Smart 7688 Duo development board and has a PMS5003 sensor for PM2.5 and a BME280 sensor for temperature/humidity. The data sensing part of the framework is shown in Figure 2. There are three major components of the sensing system, listed below.
1. Data Producers comprise the sensors which provide the sensed data. The hardware and the source code are open source so that people can create such devices themselves.
2. Transit Centers act as data brokers for the data sent from the data producers to the data users. Multiple data brokers can be used to achieve scalability and fault tolerance.
3. Data Users are those who use the sensed data, analyze it and create different types of applications.
For data communication, the sensing framework uses the Message Queuing Telemetry Transport (MQTT) protocol [23]. MQTT is used because of its low communication overhead, simple design and flexibility to adjust to different formats of messages. The data sampling frequency for the Airbox devices is estimated to be every 5 min. However, it was observed that the inter-sample time was 6 min for almost 80% of the devices, and for the remaining devices it was around 12 min. There is a standby time of 5 min between sample collections of an Airbox device, and it takes about 1 min to do the sampling; this makes the inter-sampling time 6 min. The first data measurement fails in the case of an error, and in such cases the inter-sampling time increases to 12 min. For this research, we converted the data into hourly data (see the sketch below). The PM2.5 data were checked for missing values before proceeding with the experiments. Figure 3 shows the PM2.5 variations based on data for the month of November 2016. It can be observed in Figure 3 that sometimes there is a trend in the PM2.5 variations. For example, during the weekends it can be assumed that most people go out, which means more traffic and more pollutants; thus, higher PM2.5 would be observed during the weekends rather than the weekdays. Similarly, during the morning and late evening, PM2.5 would be higher as people would be commuting. Such trends are easy to observe. In Figure 4, it can be observed that hourly PM2.5 concentrations show different levels for different stations, and variations can be seen for different stations at different time periods. The peaks in the plot can be referred to as inflection points, i.e., the points at which the PM2.5 concentration level changes sharply. Inflection points can be considered sudden increases in the PM2.5 values which might be caused by environmental factors or human activities. These variations do not represent the regular air quality pattern at a particular location but rather incidents which might happen because of thunderstorms or strong winds. As such incidents are rare, it is very difficult to model them using a conventional forecast model.
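A minimal sketch of the hourly aggregation step is shown below, assuming the raw records arrive as (timestamp, pm25) rows roughly every 6-12 min; the column names are placeholders, not the actual schema.

```python
import pandas as pd

def to_hourly(records):
    """Aggregate irregular ~6-12 min samples into an hourly PM2.5 series.

    records: DataFrame with 'timestamp' and 'pm25' columns.
    """
    df = records.set_index(pd.to_datetime(records["timestamp"]))
    return df["pm25"].resample("1H").mean()   # mean of all samples per hour
```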
Data Archive and Open Data API
A data archive service has been set up that stores and provides all the records from the monitoring devices. Such a service ensures that all PM2.5 observations from our deployment can be easily accessed and traced. In addition, it helps us to maintain a long-lasting data archive of verified data for further analysis and modeling. Another important feature of this system is that it can import PM2.5 measurement data from other open local data sources in Taiwan, which improves the coverage area and enlarges the data pool. Through the open data API (in the JSON data format), people can access the latest measurement data of a particular AirBox device, amounting to thousands of records for any device on any particular date.
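Because the archive exposes measurements as JSON over HTTP, retrieving a device's records is a one-call affair. The sketch below is illustrative only: the endpoint path and query parameter are assumptions, since only the base site is given above.

```python
# Hedged sketch of pulling a device's records from the open data API.
# The endpoint path and parameter name are assumptions for illustration;
# consult the service documentation for the real routes.
import requests


def fetch_device_history(device_id: str) -> list:
    # Hypothetical route under the documented base site.
    url = f"https://pm25.lass-net.org/data/history?device_id={device_id}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()           # fail loudly on HTTP errors
    return resp.json()                # records arrive as JSON
```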
Visualization Platforms for Airbox Data
Visualization platforms have been developed to visualize the Airbox data. One of them is a visualization system that gives information about every Airbox device. These services help in understanding the impact of spatiotemporal factors on PM2.5 measurement. A dashboard has been set up that helps in visualizing the device data over a period of time. The dashboard shows PM2.5, temperature, and relative humidity, as shown in Figure 5a. A Voronoi diagram and a real-time PM2.5 monitoring visualization have also been developed, updated every 5 min, as shown in Figure 5b,c. A Voronoi diagram partitions the plane into regions based on the distance to a specific subset of points; in our case, the subset is the sensor locations. An animation application has been developed which shows the air quality for the last 24 h in the form of an IDW (Inverse Distance Weighting) animation. The animation is available for the whole of Taiwan as well as for all major regions, including Taipei, Taoyuan, Taichung, and Tainan, and it is updated every hour. There have been many cases in which a sudden change in air quality was noticed, and such animations help in understanding the trend followed by the air pollution. For major air pollution incidents, the results are regularly shared online and can also be used by policy-making agencies to analyze the trend. Figure 6 gives an example of how the IDW animation can help in understanding the trend in air pollution dispersion. It can be observed that, in the initial frame, the PM2.5 levels are normal. The air quality starts deteriorating in the northern part of Taiwan, as seen in the second frame. Soon the pollution covers the whole northern region and starts dispersing towards northwestern Taiwan. It further disperses towards central and southern Taiwan, and it can be observed that the pollutant intensity decreases over time.
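IDW estimates the value at an unsampled location as a distance-weighted average of nearby sensor readings, with closer sensors weighted more heavily. The sketch below shows the core computation; the power parameter p = 2 and the dense-matrix formulation are our illustrative choices, not necessarily the platform's.

```python
# Minimal inverse-distance-weighting (IDW) sketch of the kind used to
# render the animations. p and the grid handling are arbitrary choices.
import numpy as np


def idw(xy_known: np.ndarray, z_known: np.ndarray,
        xy_query: np.ndarray, p: float = 2.0) -> np.ndarray:
    """Interpolate z at query points from scattered sensor readings."""
    # Pairwise distances between query points (Q,2) and sensors (K,2).
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at sensor sites
    w = 1.0 / d ** p                  # closer sensors get larger weights
    return (w @ z_known) / w.sum(axis=1)
```

Evaluating this function over a regular grid of query points, once per hourly frame, yields exactly the kind of smoothly varying surface shown in the animation.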
Methodology
In this section, we discuss in detail the framework of the prediction model, along with the three baseline models that were used to perform the comparative analysis. The baseline models are among the most widely used time-series forecasting models: the Autoregressive Integrated Moving Average (ARIMA) model, the Neural Network Autoregression (NNAR) model, and a Hybrid model [24]. Figure 7 shows the proposed ESD modeling and forecasting framework.
Forecasting Method Using Exponential Smoothing with Drift (ESD) Model
The proposed method is based on the Theta method [25]. The idea behind using this method is that, for short-term PM2.5 forecasting, we assume there is no seasonality or trend. The method uses a weighted moving average of the past data as the basis for the forecast. The theta line can be described as

$$\nabla^2 Z_t(\theta) = \theta \, \nabla^2 Y_t, \quad t = 3, \dots, n, \tag{1}$$

where $Y_1, \dots, Y_n$ represents the non-seasonal original time-series and $\nabla$ represents the difference operator. The initial values $Z_1$ and $Z_2$ can be obtained by minimizing $\sum_{t=1}^{n} [Y_t - Z_t(\theta)]^2$. An analytical solution was proposed in [26]. It is given by

$$Z_t(\theta) = \theta Y_t + (1 - \theta)(A_n + B_n t). \tag{2}$$

In Equation (2), $A_n$ and $B_n$ represent the minimum square coefficients of a linear regression over $Y_1, Y_2, \dots, Y_n$ against $1, \dots, n$. The linear regression is denoted by

$$\hat{Y}_t = A_n + B_n t. \tag{3}$$

Based on these, it can be inferred that theta lines can be considered functions of a linear regression model applied directly to the data.
The Theta method can be simplified as a case of simple exponential smoothing with a drift term which is equal to half the slope of a straight line fitted to the data [26]. In simple form, the ESD model can be written as

$$Y_t = l_{t-1} + b + \varepsilon_t, \tag{4}$$
$$l_t = l_{t-1} + b + \alpha \varepsilon_t, \tag{5}$$
$$\hat{X}_t(h) = l_t + b\,h. \tag{6}$$

In the above equations, $l$ denotes the level and $b$ stands for the drift. The $h$-step forecast is denoted by $\hat{X}_t(h)$. $\alpha$ is the smoothing parameter, and its value is always between 0 and 1. Weighted averages are used to calculate forecasts; the weights decrease exponentially, and this is controlled by the parameter $\alpha$. $\varepsilon_t$ represents the one-step within-sample forecast error at time $t$.
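A compact way to read Equations (4)–(6) is as a recursion over the observed series followed by a linear extrapolation. The sketch below implements that recursion; the initialization of the level and the default value of alpha are illustrative assumptions, not choices stated in the paper.

```python
# Sketch of the ESD recursion: simple exponential smoothing plus a
# drift equal to half the slope of a straight line fitted to the data.
# Level initialization and the default alpha are assumptions.
import numpy as np


def esd_forecast(y: np.ndarray, h: int, alpha: float = 0.5) -> np.ndarray:
    t = np.arange(1, len(y) + 1)
    slope = np.polyfit(t, y, 1)[0]    # B_n from the fitted straight line
    b = slope / 2.0                   # drift term: half the slope
    level = y[0]                      # simple initialization of the level
    for obs in y[1:]:
        eps = obs - (level + b)       # one-step within-sample error (Eq. 4)
        level = level + b + alpha * eps   # level update (Eq. 5)
    return level + b * np.arange(1, h + 1)   # h-step forecasts (Eq. 6)
```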
Baseline Models for Comparison
In this section, we discuss the models that were used to perform the comparative analysis.
Autoregressive Integrated Moving Average (ARIMA) Model
An ARIMA model is considered a robust model [27] when it comes to time-series forecasting. During the forecasting process, the model is first identified, its parameters are estimated, and a diagnostic check is then performed. An ARIMA(p,d,q) model is specified by the integers p, d, and q, which are greater than or equal to zero and give the orders of the autoregressive (AR), integrated (I), and moving average (MA) components of the model [27]. Let us consider a time-series $Z_t$, where $t$ is an integer and $Z_t$ denotes the real value at time $t$. An ARIMA(p,d,q) model can be denoted by the following equation:

$$\left(1 - \sum_{l=1}^{p} \varphi_l B^l\right)(1 - B)^d Z_t = \left(1 + \sum_{l=1}^{q} \theta_l B^l\right)\varepsilon_t. \tag{7}$$
In Equation (7), $B$ denotes the backward shift operator, $\varphi_l$ and $\theta_l$ are the autoregressive and moving average parameters, and $\varepsilon_t$ is the error term. If $d = 0$, the model reduces to an ARMA model [28].
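For reference, fitting such a model takes only a few lines with an off-the-shelf library. The sketch below uses statsmodels, which is our choice of tooling rather than the paper's, with the ARIMA(3,1,1) order that is reported later in this section.

```python
# Sketch of fitting the ARIMA(3,1,1) baseline with statsmodels.
# The library is our choice; the order matches the paper's configuration.
from statsmodels.tsa.arima.model import ARIMA


def arima_forecast(y, h: int):
    model = ARIMA(y, order=(3, 1, 1))   # p=3 AR lags, d=1 difference, q=1 MA lag
    result = model.fit()
    return result.forecast(steps=h)     # h-step-ahead point forecasts
```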
Neural Network Autoregression (NNAR) Model
Lately, Artificial Neural Networks (ANNs) have been used extensively for time-series forecasting. ANNs can be used to model the complicated relationships between input and output variables. In an NNAR model, the input is a lagged time-series and the output is the predicted value of the time-series; the model is written as NNAR(p,P,k)$_m$, where p and P indicate the numbers of non-seasonal and seasonal lagged inputs, k denotes the number of hidden-layer nodes, and m indicates the seasonality. The model has two functions: a linear combination function and an activation function. The linear combination function is

$$z_t = b + \sum_{i} w_{i,t} \, y_{t-i}, \tag{8}$$

where $w_{i,t}$ represents the weights, $b$ represents the bias, and $y_{t-i}$ are the lagged time-series values. The weights are randomly selected initially and later they can be updated using a "learning algorithm" [29] that minimizes the cost function. A typical activation function is the logistic sigmoid

$$s(z_t) = \frac{1}{1 + e^{-z_t}}. \tag{9}$$

In this work, we considered a feed-forward neural network which is based on the Nonlinear Autoregressive Model for time series forecasting.
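A hedged sketch of an NNAR-style model follows: a feed-forward network trained on lagged values, here via scikit-learn's MLPRegressor (our library choice). The nine lags and five hidden nodes match the NNAR(9,5,1) configuration reported below; the remaining settings are assumptions. The logistic activation mirrors the sigmoid of Equation (9).

```python
# Sketch of an NNAR-style model: a feed-forward net on p lagged values.
# Library choice and training settings are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor


def fit_nnar(y: np.ndarray, p: int = 9, k: int = 5) -> MLPRegressor:
    # Build the lag matrix: row j holds y[j], ..., y[j+p-1],
    # with target y[j+p] (i.e., the p previous values predict the next).
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    target = y[p:]
    net = MLPRegressor(hidden_layer_sizes=(k,), activation="logistic",
                       max_iter=2000, random_state=0)
    return net.fit(X, target)
```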
Hybrid Model
A time-series can be divided into linear and non-linear components. The ARIMA model provides good forecasts but cannot capture the non-linear components, which makes it important to have a technique that can capture the non-linear components as well. This is where ANNs come in: they can take care of the non-linear components of the data. Figure 8 shows the flowchart for the model. We used a Hybrid model [24] represented by Equation (10),

$$Z_t = X_t + Y_t, \tag{10}$$

where $X_t$ represents the linear components and $Y_t$ represents the non-linear components.
In the initial step, these two components have to be estimated from the data. The next stage is the application of the ARIMA model, which takes care of the linear components; the residuals it leaves behind carry the non-linear components. Let us assume that $R_t$ are the residuals generated at time $t$ from the linear model. Then

$$R_t = Z_t - F_t, \tag{11}$$

where $F_t$ is the forecast value for time $t$. These residuals are then modeled using neural networks. If we assume that there are $n$ input nodes, the neural network model for the residuals can be given as

$$R_t = f(R_{t-1}, R_{t-2}, \dots, R_{t-n}) + e_t, \tag{12}$$

where the neural network defines the non-linear function $f$ and $e_t$ is the random error. Finally, the forecast from the neural network is generated and Equation (10) is used to get the final output. We used an ARIMA(3,1,1) model, where 3, 1, and 1 are the values of p, d, and q, respectively. For the neural network, we used an NNAR(9,5,1) model with nine lagged inputs and five nodes in the hidden layer. The parameters of the ARIMA and NNAR models were chosen by testing different combinations and selecting the one that gave the best output. For the Hybrid model, we assigned equal weights to the prediction results from the ARIMA and NNAR models.
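To make the pipeline concrete, the sketch below chains the two stages: ARIMA(3,1,1) for the linear part and a small net with nine lags and five hidden nodes for the residuals, with the two forecasts summed per Equation (10). The orders and network size follow the paper; the libraries and the iterative roll-forward scheme are our assumptions.

```python
# Hedged sketch of the hybrid pipeline: ARIMA models the linear part,
# a feed-forward net models the residuals (Eqs. 11-12), and the two
# forecasts are combined as in Eq. (10). Libraries are our choice.
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA


def hybrid_forecast(y: np.ndarray, h: int, p: int = 9, k: int = 5):
    linear = ARIMA(y, order=(3, 1, 1)).fit()
    resid = y - linear.predict(start=0, end=len(y) - 1)   # R_t = Z_t - F_t
    # Fit the net on p lagged residuals -> next residual.
    X = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
    net = MLPRegressor(hidden_layer_sizes=(k,), max_iter=2000,
                       random_state=0).fit(X, resid[p:])
    # Roll the residual forecast forward h steps.
    window, resid_fc = list(resid[-p:]), []
    for _ in range(h):
        nxt = float(net.predict(np.array(window[-p:]).reshape(1, -1))[0])
        resid_fc.append(nxt)
        window.append(nxt)
    return linear.forecast(steps=h) + np.array(resid_fc)
```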
Results
For this study, the measurement data were collected from the Airbox devices installed in the Taichung area of Taiwan for the period between 18 January 2017 and 12 February 2017. Most of the Airbox devices are installed in elementary schools around the region, with regular power and Internet connections, which makes the data reliable and of better quality. Thus, to make the forecasting accurate, we only considered the stations with reliable data. To test the model, we considered the hourly PM2.5 data obtained from the monitoring stations deployed in Taiwan. Eighty percent of the data were used for training and 20% for testing the model. Mean Absolute Error (MAE) and computation time were used as the parameters to analyze the results.
If $y_m$ is the actual value and $\hat{y}_m$ is the predicted value over $M$ test samples, the MAE can be denoted as

$$\mathrm{MAE} = \frac{1}{M} \sum_{m=1}^{M} \left| y_m - \hat{y}_m \right|. \tag{13}$$

To check how our model performs, we produced an hourly forecast for the 132 monitoring nodes. The geographical locations of the monitoring nodes are shown in Figure 9. Figure 10 shows a comparison plot between the observed and the predicted PM2.5 values. It can be observed that the predicted results are very close to the observed results. This gives an idea of the scalability of the model when implemented on a large scale.
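The evaluation protocol is easily reproduced: a chronological 80/20 split followed by the MAE of Equation (13) on the held-out portion. The sketch below assumes per-station series are handled one at a time.

```python
# Sketch of the evaluation protocol: chronological 80/20 split and the
# MAE of Equation (13) computed on the held-out test portion.
import numpy as np


def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_true - y_pred)))


def train_test_split_series(y: np.ndarray, train_frac: float = 0.8):
    cut = int(len(y) * train_frac)
    return y[:cut], y[cut:]           # keep time order: train first, test last
```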
Evaluation
To evaluate, we followed a two-step process. In the first step, we compared our model with the baseline models, using mean error and computation time as the evaluation parameters. In the second step, we performed a PM2.5 forecast for the next 3 h to see how the model deals with short-term PM2.5 forecasting.
Evaluation by Performing a Comparative Analysis with the Baseline Models
In this part, we compared the proposed model's results with three baseline models: the ARIMA, NNAR, and Hybrid models, all of which are well-known models for forecasting time-series data. From the comparative analysis of all four models, shown in Table 1, it can be observed that the ESD model outperforms the other three. The mean error obtained is 0.16 µg/m³, which is significantly lower than that of the other baseline models. Here, we also want to highlight the time-accuracy trade-off, which can be observed in Figure 11. For real-time applications, we focus not only on low computation time but also on high accuracy, and the ESD model satisfies both conditions. Figure 12 shows a cumulative distribution function (CDF) plot for all four models. It can be observed from the plot that the ESD model outperforms the other three models when performing prediction for the next hour: around 95% of the monitoring stations show a forecasting error below 1 µg/m³. Only the NNAR and Hybrid models come close to the ESD model's performance. However, it has to be taken into consideration that the maximum error for the ESD model is around 4 µg/m³, whereas it is around 10 µg/m³ for the Hybrid model and around 19 µg/m³ for the NNAR model.
Next 3 h PM2.5 Forecast Using the ESD Model
Based on the results for the one-hour forecast, we saw that the system works well even when implemented on a large number of monitoring stations. To further evaluate our model, we performed a PM2.5 forecast for the next three hours for all the monitoring nodes. From the CDF plot shown in Figure 13, it can be observed that the forecasting error for most stations is significantly low: almost 90% of the stations have an error under 1.5 µg/m³ in all cases. With these results, we demonstrate that the proposed model can perform short-term PM2.5 prediction with high accuracy.
Conclusions and Future Works
As air pollution continues to affect quality of life, there is a need for a framework that not only monitors air quality but also performs data analysis and air quality forecasting and provides visualization services. To make sure that people know about future air quality well in advance, it is important to build an accurate forecast system. In this paper, we integrated IoT technology and artificial intelligence to build a PM2.5 forecast system. We designed a forecasting model using exponential smoothing that performs hourly PM2.5 forecasts based on real-time data obtained from the IoT devices deployed all over Taiwan. Parameters such as mean error, accuracy, and computation time were used to analyze the results. For evaluation, we tested the model on 132 monitoring stations and compared the ESD model's results with three baseline models. With the ESD model, we were able to obtain a mean error as low as 0.16 µg/m³, compared with 1.19 µg/m³ for the NNAR model, 11.47 µg/m³ for the ARIMA model, and 0.70 µg/m³ for the Hybrid model. In addition, we obtained an acceptable trade-off between accuracy and computation time: the computation time for the ESD model was 30 s, which is significantly lower than for the other models. Our model is easy to implement and can be applied to other cities as well. The results can also be used by environmental protection agencies for policy management.
Although we have been able to achieve significant results, there are still some points we would like to address in future work. One task is to include other weather features, such as wind speed and wind direction, to further improve the forecast accuracy. In addition, we would like to extend the model to forecast over longer horizons, e.g., 12, 24, and 48 h. The final task is to use the forecasting model to develop a real-time forecasting service in Taiwan.
Acknowledgments:
The authors wish to thank the Taipei City Government, Edimax Inc., RealTek Inc., the gov.tw community, and the LASS community for their support, technical advice and administrative assistance.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:

PM2.5: Particulate matter with an aerodynamic diameter below 2.5 µm
IoT: Internet of Things
LASS: Location Aware Sensing System
MAPS: Micro Air Pollution Sensing System
MQTT: Message Queuing Telemetry Transport
ESD: Exponential Smoothing with Drift
ARIMA: Autoregressive Integrated Moving Average
NNAR: Neural Network Autoregression
ANN: Artificial Neural Network
MAE: Mean Absolute Error
CDF: Cumulative Distribution Function
IDW: Inverse Distance Weighting
"year": 2018,
"sha1": "9f25ad7f902769581d1120a982d155de94e882ec",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/10/3223/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f25ad7f902769581d1120a982d155de94e882ec",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
The Cultural Divide and Changing Beliefs about Gender in the United States, 1974–2010
The present paper examines claims of a growing cultural divide in the United States. We analyze social change in beliefs about gender over a period of 36 years (from 1974 to 2010) in the United States using data from the nationally representative General Social Survey (GSS). We find evidence of growing gender egalitarianism until the mid-1990s, with a reversal between 1996 and 2000, and a decline in state differences in beliefs about gender over time in our decomposition analysis and multilevel models. Although we find significant differences in gender beliefs among states in the 1970s based on their voting record on the Equal Rights Amendment and based on patterns of family formation and family life associated with the Second Demographic Transition, these differences among states decreased or disappeared entirely by the early years of the twenty-first century. We highlight the implications of our findings for the ongoing public and academic debate surrounding growing cultural differences among states.
increasing social conflict between competing systems of moral understanding appears not to be supported.
At the same time, and somewhat surprisingly given the cited literatures, several scholars have argued for what we here call regional polarization-the tendency for vast differences to exist in the nature of family life and in beliefs about family issues across different regions of the country. Lesthaeghe and Neidert (2006), for example, found that gender-related indicators of the "second demographic transition" (hereafter SDT) (e.g., postponement of marriage, high rates of abortion, low levels of fertility) are unequally distributed spatially in the United States, with the so-called "Blue" (Democratic party) states being more "advanced" with respect to such indicators and the "Red" (Republican party) states being more resistant. Others have looked more specifically at differences in beliefs and attitudes about gender by region, and at least with respect to beliefs about women and politics and about the employment of women, respondents in the South report more conservative gender beliefs than do their counterparts in the North (Burris 1983; Campbell and Marsden 2012; Carter and Borch 2005; Hurlbert 1989; Rice and Coates 1995). There is some debate in the literature as to whether the regional differences observed represent the effect of a southern sub-culture on beliefs and attitudes (Hurlbert 1989; Ellison and Musick 1993) or can be explained by demographic characteristics (in particular, the percentage of religious fundamentalists in the population) (Moore and Vanneman 2003). Scholars further argue that differences in the beliefs of residents of the south and the non-south have decreased over time, once income, rurality, and religiosity are statistically controlled (Campbell and Marsden 2012; Carter and Borch 2005).
In this regard, one more recent book, titled Red Families v. Blue Families (Cahn and Carbone 2010), makes what is perhaps the strongest argument about regional heterogeneity. They argue that it is not just the demographic character of regions per se that matters, but also cultural views, harkening back to what Hunter (1991) meant by "culture war." According to Cahn and Carbone (2010), the failure of outlooks on family issues to overlap across the spatial divide involves legal norms as well. Geared to the post-industrial economy, the Blue regions have developed norms that encourage an egalitarian family, women's labor force participation, and delays in marriage and fertility, whereas the Red regions of American society reject these new norms and favor a more traditional emphasis on "the unity of sex, procreation, and childrearing" (p. 128), in which early marriage is encouraged, child-bearing is restricted to marital unions, and women are encouraged to adhere to gender commitments that focus on home and family rather than on work (Cahn and Carbone 2010; see also Moore and Ovadia 2006; Moore and Vanneman 2003). Although there is a plausibility to these arguments, especially given the historic link of beliefs about women's equality to geographical location evidenced by the debates surrounding the ratification of the Equal Rights Amendment (ERA) (Soule and King 2006), the nature and extent of these potential geographic differences in beliefs about gender and the family have not been systematically examined using extant data.
In the present paper, we investigate these claims of growing differences in gender beliefs among states in the United States in order to better understand the problem of what England (2010, p. 163) referred to as the "stalling of change" towards gender egalitarianism. Using data from 23 surveys of the General Social Survey (GSS), spanning 36 years from 1974 through 2010, we seek to better understand the uneven nature of progress toward gender egalitarianism and the extent to which entrenched beliefs in particular state contexts or periods of slowing progress toward gender egalitarianism have contributed to the stalled revolution. Our analysis has three primary aims: (a) we document the nature of social change in gender beliefs in recent periods of U.S. history, examining the extent to which the data from the GSS demonstrate the "feminist backlash" evident in other narratives about the early 1990s onward; (b) we investigate the extent of variation across state contexts in gender beliefs across these same periods; and (c) we examine whether there are differences in the nature of secular changes in beliefs by state context (i.e. whether there are interactions between variables representing state context and those representing period). In other words, we analyze how the specific characteristics of different states are associated with changing beliefs (or the lack thereof) over this 36-year period, specifically whether there are differences among states in progress toward gender egalitarianism.
How and Why Have Beliefs about Gender Changed over Time?
Research demonstrating rising levels of pro-feminist beliefs about gender in the United States and Western Europe was commonplace throughout the early 1970s and early 1980s (e.g., Thornton and Freedman 1979; Cherlin and Walters 1981; Mason and Bumpass 1975). Lesthaeghe and colleagues (e.g., Lesthaeghe 1983, 1995; Lesthaeghe and Surkyn 1988) argue that cultural changes rooted in the Protestant reformation-namely increasing individualism, materialism, and secularism-have motivated changes in both the family and in beliefs about gender across the twentieth century. In Western Europe and North America in particular, fertility has declined (Morgan and Hagewen 2005), premarital cohabitation has increased, and rates of divorce and re-partnering have also increased (Cherlin 2009). The SDT refers to these changes in patterns of fertility, marriage, and divorce that have changed the nature of family life and have been associated with the increased labor force participation of women (Lesthaeghe 1983; Lesthaeghe and Meekers 1986; Lesthaeghe and Surkyn 1988; Surkyn and Lesthaeghe 2004). In Western Europe and North America, there has been a steadily increasing trend toward paid employment by married women, especially among those with young children.
These changes in the family and non-family roles of women are hypothesized to increase women's interest in adopting egalitarian gender beliefs (Bongaarts 2002;Morgan and Hagewen 2005). As family and non-family roles change and women begin working outside the home in increasing numbers, more women seek to achieve consistency between their beliefs and behaviors. By adopting more egalitarian beliefs about gender, women reduce the cognitive dissonance that results from a mismatch between their new roles and more conservative beliefs about the appropriate roles for women.
Also associated with these changing family and non-family roles for women is an increase in educational attainment for women. In fact, women in the United States are currently more likely to complete a 4-year college degree than are men (Buchmann and DiPrete 2006). Overall, educational attainment for both women and men increased over the second half of the twentieth century. According to exposure-based models of attitude change, when individuals are exposed to more egalitarian thinking through enrollment in higher education, they are more likely to adopt more egalitarian beliefs themselves (Bolzendahl and Myers 2004;Klein 1984;Cassidy and Warren 1996). With the changes in family life associated with the SDT, increasing numbers of individuals have gained both an interest in adopting egalitarian beliefs as well as an exposure to egalitarian thinking, resulting in the widespread liberalization of gender beliefs (Bolzendahl and Myers 2004).
Although overall trends for women in the second half of the twentieth century are toward greater labor force participation and higher educational attainment, important inequalities in the population translate into different exposure and interest effects on gender beliefs for different groups. Historically, there has been less separation of domestic and public spheres in working class and African American families which have been more dependent on the earnings of women (Coontz 1992;Damaske 2011). These differences by race and class in the household division of labor have important implications for beliefs about gender (Blee and Tickamyer 1995). Political generation also influences the impact of historical context on an individual (Brown and Rohlinger 2016). Cohorts that reached young adulthood during the debate over the ERA were likely impacted differently by this historical moment than were those who were already established in their families of procreation or were children at the time. Other scholars have demonstrated how the impact of the gender revolution on beliefs and attitudes about gender and family varies across generations in the same family (Moen et al. 1997;Gerson 2011). These are just some of the many factors that likely impact exposure to, and interest in, changing beliefs about gender.
In some ways, the passage of the ERA in 1972 reflected a culmination of changes in those early decades in the direction of relatively rapid changes in declining conservative beliefs about appropriate roles for women in society and the incompatibility of occupational and maternal roles (Self 2012). However, with the ultimate failure to achieve ratification of the ERA by the states in the early 1980s, researchers pointed to a slowing of changes in beliefs about gender. By the early 1980s, there was an already nascent growing opposition, or backlash, to the feminist gains of the 1960s and 1970s and the anticipation of a "stalled revolution" (see Ehrenreich 1983; Faludi 1991). More recent work has confirmed that the trend toward increasing gender egalitarianism observed in the 1970s and 1980s plateaued in the mid-1990s (Cotter et al. 2011; Thornton and Young-DeMarco 2001).
The Mechanisms of Social Change
In addition to documenting the existence of changes in beliefs and attitudes over time, scholars have investigated how changes in beliefs and attitudes have occurred. Theories of social change highlight the intersection of history and biography (Mills 1959) in conceptualizing social change as motivated by both the changes undergone by individuals (in response to aging or period effects) and by the succession of cohorts or generations distinguished by the historical period in which they were raised and reached young adulthood (Mannheim 1952;Ryder 1965). Members of the same cohort or political generation share common experiences and are exposed to the same cultural and historical events in early adulthood that influence beliefs, attitudes, and political activism later in life (Brown and Rohlinger 2016;Schultz 2002). The cohort succession mechanism of social change is based on the assumption that beliefs are formed in youth and remain relatively stable thereafter (Alwin and McCammon 2003). Intra-cohort change, on the other hand, captures individuals' attempts to adapt their beliefs in response to their changing sociohistorical context.
Methods of decomposing social change into intra-cohort change and cohort succession, linking processes of individual and social change, have been used in previous research to examine changes in attitudes and beliefs toward women's work and family roles in the United States and internationally (Alwin and Scott 1996;Brewster and Padavic 2000;Brooks and Bolzendahl 2004;Firebaugh 1992;Mason and Lu 1988;Neve 1995). We are interested in synthesizing the literatures addressing the SDT, the mechanisms of social change in beliefs about gender, and regional variation in social change.
The Present Analysis
In a departure from the majority of research analyzing the mechanisms of social change (e.g., Cotter et al. 2011; Pampel 2011), we are interested in measuring the effects of state context on the nature and pace of social change in beliefs about gender. Rather than simply looking at how state of residence shapes beliefs, we investigate what state-level characteristics matter in shaping trends in gender beliefs over time. We draw on Cahn and Carbone's (2010) conceptualization of Red states and Blue states as characterized by different cultures of family life. The Red states "marry and have children at younger ages and are most likely to see the embrace of traditional values as critical to community well-being" whereas the Blue states "have the highest average ages of family formation and demonstrate the greatest support for the mechanisms that effectively deter teen births" (p. 10). Instead of dichotomizing states as Red and Blue, however, we include three measures of state-context differences: the state's voting record on the ERA and the state's advancement with respect to the SDT, as well as a measure of the economic conditions in each state.
We have a particular interest in the state's stance on the ERA as an indicator of the state's gender climate at the start of our analysis in the 1970s. In 1972, the ERA was approved by Congress and was sent to the states for ratification within the next 7 years. An extension on the ratification deadline was issued by the Congress in 1978 but, by 1982, not enough states had ratified the amendment for it to pass. In the end, 30 states ratified the ERA, five additional states initially ratified and later rescinded their ratification, and 15 states never ratified the amendment. (A list of these states is available in an online supplement.) We are interested in looking at how the state's ERA ratification status is related to individuals' gender beliefs in the state at the beginning of the period of analysis in the 1970s and whether or not such between-state patterns of gender beliefs persist into the twenty-first century. Our first hypothesis is that individuals in states that ratified the ERA will report more egalitarian beliefs on average (Hypothesis 1). We hypothesize that this positive effect weakens over time (Hypothesis 1a) and that the effect is stronger for women than for men (because of the particular significance of the vote for women's lives) (Hypothesis 1b).
Another state-level contextual effect in which we are interested is the state's advancement on indicators of the SDT. We examine how a state's proportion of never married women in their mid-20s to mid-30s, the abortion rate, and the rate of non-marital cohabitation, for example, are associated with the gender beliefs reported in that state. Based on the supposition that changes in beliefs about gender are connected to the SDT, our second hypothesis is that individuals in states that are further along on this measure of SDT will also report more egalitarian beliefs about gender on average (Hypothesis 2). We hypothesize that the greatest impact of the SDT is in the early periods of analysis (when such changes in fertility and marriage were most revolutionary, representing an important break from the family behaviors of the past) (Hypothesis 2a) and for women (because the changes in family behavior had the greatest impact on women's lives) (Hypothesis 2b).
In addition to the gender climate and advancement of the SDT, we are interested in examining the association between the economic climate in a state and reported gender beliefs. Evidence is mixed regarding the effects of economic climate on beliefs about gender. Whereas some scholars have found that a strong economy is associated with greater gender egalitarianism (Olson et al. 2007), others have found that men in particular have adopted more egalitarian gender beliefs during times of economic recession (Lee et al. 2010). The present research examines a time-varying indicator of economic conditions in the state-the unemployment rate. We expect men's beliefs to be more strongly related to economic period effects (Hypothesis 3). Because different relationships between the economy and men's beliefs have been found in different cultural contexts, there are competing hypotheses for the direction of the effect of this contextual factor.
In the end, our analysis will document the extent of social change in gender beliefs between 1974 and 2010 and test whether or not beliefs about gender have diverged over time in different states with varying gender climates and levels of advancement of the SDT. Has there been a divergence in the beliefs of people living in different state contexts since the 1970s? Or have cultural differences between states declined as we have moved away from the contentious debates of the 1970s and early 1980s over the ratification of the ERA? These are the questions that guide our analysis.
Method
Data from the U.S. General Social Survey (GSS) for 1974-2010, including restricted-access state identifiers, were analyzed (Smith 1972-2010; NORC 2008). These state-identifier codes allowed us to examine state-context effects on individuals' beliefs about gender. Beliefs about women's social and political roles were measured by eight questions from the GSS tapping beliefs about the appropriateness of women in politics and their involvement in employment outside the home. These eight indicators are listed in Table 1.
In a confirmatory factor analysis of these eight measures, the parameters of a single common factor were estimated and a factor score based on these measures was constructed (Muthén and Muthén 1998-2012). Because of the way in which the GSS measured these variables, FEPRES, FEPOL, FEHOME, and FEWORK were treated as two-category ordinal measures of the latent continuous variable, and FECHLD, FEPRESCH, FEFAM, and FEHELP were treated as ordinal measures of this latent continuous variable with four ordered categories. The estimated standardized loadings in the model ranged from .47 (for FEWORK) to .87 (for FEHOME), suggesting that there are moderate to strong associations between the underlying factor and the indicators of beliefs about women's roles that we used.
The estimation method (FIML) for categorical indicators in Mplus does not provide the usually reported goodness-of-fit statistics for the overall fit of the model. An exploratory factor analysis using a different estimation method for categorical outcomes (WLSMV) provides goodness-of-fit statistics, with the downside of handling missing data using pairwise deletion. The analysis suggested that a two-factor solution with FECHLD and FEPRESCH loading on a second factor has a better fit to the data (χ² = 1345.212, df = 13, p < .001, RMSEA = .057, CFI = .988, SRMR = .036) compared to the one-factor solution (χ² = 6226.459, df = 20, p < .001, RMSEA = .099, CFI = .943, SRMR = .088). We also explored alternative confirmatory factor analysis specifications using three factors and cross-loading indicators. Although the alternative specifications using two or three factors provided a somewhat better fit to the data, the correlations between factors are strong enough (between .51 and .82) to justify collapsing the multiple factors into a single factor. In addition, given the fact that the loadings in the one-factor solution are consistently high and the substantive results in our preliminary analyses from the alternative specifications were similar, for the sake of simplicity we report the results from the one-factor model.
The resulting factor score from our analysis is an unstandardized variable with a scale that is not easily interpretable. In order to make the metric of this factor score more comprehensible and the interpretation of effects in our models more meaningful, we rescaled the factor score to take on values from a minimum of 0 to a maximum of 10. Missing data due to nonresponse and to the absence of some of these indicators in certain survey years was handled using the full information maximum likelihood (FIML) estimation method. The estimation method can take into account the binary and ordinal nature of the observed indicators by calculating the relationships between indicators and the factor using logistic regressions (Muthén and Muthén 1998-2012). The factor score reflecting this latent variable will be used in subsequent analyses as our primary measure of egalitarian gender beliefs.
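For concreteness, the 0-10 rescaling amounts to a min-max transformation of the estimated scores. A minimal sketch follows, assuming the rescaling uses the observed sample minimum and maximum (the paper does not state the exact anchors).

```python
# Sketch of the min-max rescaling of the factor score to a 0-10 range,
# assuming the observed sample minimum and maximum serve as anchors.
import numpy as np


def rescale_0_10(score: np.ndarray) -> np.ndarray:
    lo, hi = score.min(), score.max()
    return 10.0 * (score - lo) / (hi - lo)   # 0 = least, 10 = most egalitarian
```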
Independent Variables
Several individual-level controls were included in the models estimated. In addition to year of survey and birth year, measures of the respondent's employment status (full time, part time, or not working), schooling (less than high school, high school, or more than high school), household income (per capita, in thousands of dollars), urban residence (counties with towns or cities with at least 10,000 people), and religion (White Conservative Protestant, White Non-conservative Protestant, Black Protestant, Catholic, Jewish, other, no religion) (Steensland et al. 2000) were included as control variables. Our measure of religion combines both race and religion, and therefore we cannot include a separate measure of race, due to multicollinearity. It is important for us to include these control variables and to estimate separate models by gender because (a) an individual's social location is associated with their beliefs about gender and their interest in, and exposure to, gender egalitarianism and (b) over the course of the study period, the GSS sample became more female and less White. State-level variables were intended to capture contextual effects on social change in gender beliefs. A set of dummy variables indicating whether the state ratified, ratified then rescinded, or did not ratify the ERA (Soule and Olzak 2004) was included as a measure of the state's gender climate in the 1970s and early 1980s. A time-varying, state-level measure of unemployment (Bureau of Labor Statistics, 2013) indicates the economic climate in the state. A factor score indicating the advancement of the SDT in a state was constructed as well. This indicator is time-varying and is based on the factor constructed by Lesthaeghe and Neidert (2006). One difference is that Lesthaeghe and Neidert examine both state-level and county-level measures, whereas our indicators are strictly at the state level. Measures of the SDT factor in 1970, 1980, 1990, and 2000 were included in the analysis, allowing us to analyze the relationship between lagged measures of SDT and subsequent gender beliefs in a state. We used data from the following IPUMS files: the 1970 Form 1 State Sample and the 1980, 1990, and 2000 5% samples (Ruggles et al. 2009), as well as state-level abortion tables (Henshaw and Kost 2008) and the Natality Detail Files (U.S. Dept. of Health and Human Services, NCHS 1970, 1974, 1980, 1990, 2000). In our analysis, the SDT factor was constructed from state-level measures of the percentage of non-Hispanic White women (age 25-29) without their own children in the household, the percentage of non-Hispanic White women (age 25-29 and 30-34) never married, the percentage of non-Hispanic White ever-married women age 25-29 without own children in the household, the number of legal abortions per 1000 live births, the legal abortion rate per 1000 women (age 15-44), the fertility postponement ratio, the number of same-sex households per 1000 households, the total fertility rate, the fertility rate (age 15-19), the percentage of households that are "families," and the percentage of households with cohabiters of the same or different sex. The R² values from the regressions of SDT in various years on ERA dummy variables range from .143 to .312, suggesting that although the measures are associated, they tap different dimensions.
Details regarding the construction of the time-varying, state-level SDT factor and descriptive statistics for the variables included in the analysis are available in an online supplement.
Analysis Methods
We decomposed the social change in measures of gender beliefs from 1974 to 2010 into that part resulting from changing beliefs within cohorts and that due to cohort differences reflected in the succession of cohorts. All analyses were conducted separately by gender based on the results of previous research showing important differences in the beliefs of men and women and in the rate of change in such beliefs (Alwin and Scott 1996;Lee et al. 2010;Mason and Lu 1988). The linear decomposition technique (Firebaugh 1989) uses the specification of a linear regression model for the variable of interest. The original model proposed by Firebaugh (1989) used an OLS regression equation with the dependent variable being the variable of interest for which change over time is analyzed and two predictors: survey year and birth year. The regression coefficients from this model may be used to compute intra-cohort change (the amount of change due to changes within cohorts) and cohort replacement (the amount of change due to cohort succession).
In our analysis, we used clustered data (individuals within states) and therefore estimated the model in a hierarchical linear modeling (HLM) framework, in order to take into account this type of clustering in the data (Raudenbush and Bryk 2002). Our basic model is a two-level HLM model with two level 1 predictors and a random intercept. In equation format, the combined model is

$$Y_{ij} = \gamma_{00} + \gamma_{10} x_{1ij} + \gamma_{20} x_{2ij} + u_{0j} + r_{ij},$$

where $x_{1ij}$ and $x_{2ij}$ are Level-1 predictors measuring survey year and birth year, respectively, $r_{ij}$ is a Level-1 random effect, $\gamma_{00}$, $\gamma_{10}$, and $\gamma_{20}$ are Level-2 coefficients, and $u_{0j}$ is a Level-2 random effect. We used HLM 7.01 to estimate all HLM models (Raudenbush et al. 2011). There may be bias in the estimation of state-level effects in a multilevel model on samples of individuals that are representative at the national level but not at the state level, as is the case with the GSS data (Lucas 2013). Other authors point out that, in practice, multilevel models may be estimated on nonprobability samples within level 2 units (Hox 2010). More research is needed to determine whether substantive conclusions regarding level 2 effects would be affected in a multilevel analysis of a sample, such as the GSS, that is not representative at the state level. In our multilevel models, most effects are modeled as level 1 effects, and the models should produce unbiased estimates. Technically, the only level 2 effects in our models are those involving the ERA, and we proceed with caution in the interpretation of these effects.
This equation can also contain other predictor variables in order to estimate components of change net of other factors. After first estimating this model with just survey year and birth year as level 1 predictors, we then added the individual-level demographic controls as well as the contextual variables described earlier (i.e., ERA ratification status, a time-varying, state-level measure of unemployment, and a time-varying, state-level SDT factor score). The HLM model we used is a multi-level model for change over time. The model can incorporate level 2 predictors that are constant over time (such as ERA ratification status) or level 2 predictors that are time-varying (e.g., unemployment rates and state-level SDT scores). However, incorporating effects of time-varying level 2 predictors in an HLM model with a level 1 / level 2 specification is somewhat problematic. The practical solution is to include predictors that vary over time, even if they are level 2 predictors, in the level 1 equations, because these are able to capture variation over time (Singer and Willett 2003). In our models, ERA ratification status was added as a level 2 variable in HLM, but the measures of the unemployment rate and state-level SDT were added at level 1 because they are time-varying.
The level 2 coefficients from this model, $\gamma_{10}$ and $\gamma_{20}$, were used, along with other information, to compute the intracohort change and cohort replacement components of secular change, defined as

$$IC = \gamma_{10}\,(SY_{t_f} - SY_{t_0}), \qquad CR = \gamma_{20}\,(\overline{BY}_{t_f} - \overline{BY}_{t_0}),$$

where $SY_{t_f} - SY_{t_0}$ represents the amount of historical time elapsed between the first survey year ($t_0$) and the last survey year ($t_f$), $\overline{BY}_{t_0}$ is the mean birth year at the first survey year, and $\overline{BY}_{t_f}$ is the mean birth year at the last survey year. Note that the slopes for period and cohort were fixed in the HLM models we used in our decomposition analysis; only the intercept was treated as random. In this type of model, also known as a "random intercepts" or an "intercepts-as-outcomes" model (Luke 2004), different states may have different average gender beliefs scores (reflected in the random intercepts), but the effects of survey year and birth year on gender beliefs are the same across states (reflected in the fixed slopes). Social change (SC) is computed as the difference between the average value of the dependent variable (the gender beliefs factor score) at the end of the period under study and at the beginning of the period under study:

$$SC = \bar{Y}_{t_f} - \bar{Y}_{t_0},$$

where $\bar{Y}_{t_f} - \bar{Y}_{t_0}$ represents the amount of change in mean gender beliefs scores between the first survey year ($t_0$) and the last survey year ($t_f$). Social change, computed in this way, is the sum of the amount of intracohort change and change due to cohort replacement, plus an additional residual quantity. A small residual quantity in the linear decomposition model is to be expected and may arise as a result of nonlinearity and interaction effects (Firebaugh 1989, p. 253). After estimating the different components of social change (IC and CR), we returned to an HLM analysis to investigate the period effects driving social change in beliefs about gender. This HLM analysis is a different approach than the decomposition analysis to understanding how state-context effects are associated with gender beliefs in different periods. We changed the measurement of survey year to a series of period dummy variables (survey year 1974-1983 [reference category], 1985-1994, 1996-2000, and 2002-2010). We also included cohort as a control variable and changed the measurement of birth year to a series of cohort dummy variables (birth year before 1944 [reference category], 1944-1954, 1955-1965, and born after 1965). The cohort groups were chosen in reference to the initial passing of the ERA by Congress in 1972 as a way of capturing political generation membership (Brown and Rohlinger 2016). Those who were 29 or older in 1972 (and therefore likely to have already established their families of procreation) are in the first birth cohort group, those aged 18-28 in 1972 are in the second cohort group, those who were minors in 1972 (ages 7-17) are in the third group, and those not yet born or too young to be influenced by the media discourse surrounding the ERA in the early 1970s are in the fourth group.
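The decomposition itself is simple arithmetic once the slopes are estimated. A minimal sketch follows, in Python (our choice of language; the paper itself uses HLM 7.01 and Mplus), assuming the fixed slopes and the survey-year and mean-birth-year endpoints are already in hand.

```python
# Sketch of the decomposition arithmetic: given the fixed slopes for
# survey year and birth year, intracohort change (IC) and cohort
# replacement (CR) follow directly from the formulas above.
def decompose(gamma_10: float, gamma_20: float,
              sy_first: float, sy_last: float,
              mean_by_first: float, mean_by_last: float):
    ic = gamma_10 * (sy_last - sy_first)            # intracohort change
    cr = gamma_20 * (mean_by_last - mean_by_first)  # cohort replacement
    return ic, cr

# e.g., decompose(g10, g20, 1974, 2010, mean_birthyear_1974, mean_birthyear_2010)
```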
We added interactions between the period dummies and the contextual state effects variables (i.e., state ERA ratification status, unemployment rate, and SDT factor). These interactions allow us to investigate the specific state-level contextual effects on period effects. In these additional HLM models, we allowed the slopes of the indicators of period, as well as the model intercept, to vary randomly across states. In an attempt to separate out the beliefs of long-term residents of a state from those who moved to the state after reaching adulthood, we tested our hypotheses using both the full sample and also a sample restricted to just those respondents who had not moved out of the state in which they were living since they were 16 years old. The results for both samples were largely identical, and therefore we chose to present the results from the unrestricted sample.
One limitation to our analysis is our inability to establish the direction of causality between state-context variables and gender beliefs. Although we recognize that gender beliefs in a state are not only influenced by the state ERA voting record and advancement with respect to the SDT but also in part influence these state-context factors, we cannot adjudicate between these different causal explanations. Our measures of state-context variables are lagged, however, which gives us some confidence in the interpretation of our findings as representing the effect of past state-context factors on gender beliefs.
Decomposition Analysis
The first step in our analysis was to conduct a decomposition analysis of social change in beliefs about gender from 1974 to 2010 based on an HLM with cohort and period as the only level 1 variables and with no level 2 variables included in the model. The results from this analysis are in Table 2 in the column labeled "no controls." Results are presented separately for women (Table 2a) and men (Table 2b). From this initial model, we see that women experienced greater social change in beliefs about gender over the entire period than did men. When the period of analysis is broken into the shorter time spans described earlier, it becomes clear that the pace of social change was inconsistent over the span from 1974 to 2010.
Because the time periods used in our analysis are different lengths, we expected some differences by period in the amount of social change. To account for these differences in period length, we present not only the total social change per period but also the amount of social change per year in each period.
For both women and men, the greatest social change in beliefs about gender occurred in the second period (from 1985 to 1994); this was followed by a period in which beliefs became less egalitarian from 1996 to 2000, and finally a period of renewed change in beliefs about gender from 2002 to 2010 in the direction of increasing egalitarianism. These differences in the rate of change across periods can also be seen by looking at the social change per year in each period, for both women and men, wherein the greatest social change per year occurred in period 2 (1985-1994).
The decomposition of overall change into that driven by intra-cohort change and that caused by cohort replacement provides insight into why such period-to-period fluctuations exist. In period 3 (1996-2000), for example, when beliefs about gender actually became less gender-egalitarian in the population, all of the negative social change was motivated by intra-cohort change in the direction of decreasingly egalitarian beliefs (although cohort replacement also slowed during this period). The next decomposition model results presented in Table 2 are based on a model with demographic controls (in addition to the indicators of survey year and birth year), and the final column is based on a model with both demographic controls and state-level contextual controls. This final decomposition model, therefore, includes context indicators of the state's ERA ratification status, of the advancement of SDT in the state, and of the unemployment rate in the state. Looking at the decomposition for the entire period across the three models reported in Table 2, we can see that, for women, adding demographic controls explains away part of the intra-cohort change in the first model and that adding contextual variables explains away the remainder of the intra-cohort change reported in the model with no controls. This suggests that when we take the period from 1974 to 2010 as a whole, we can account for all of the change that occurred for women within cohorts through a combination of changing individual characteristics such as employment status and income and changing state-level characteristics. (Men did not experience a statistically significant level of intra-cohort change in the overall decomposition analysis.) In the follow-up multilevel analysis, we further investigate how different state-contextual factors are each related to period effects. With respect to cohort replacement, demographic controls explain part of the change attributable to cohort replacement, but contextual factors have no effect on the level of cohort replacement reported. When the period of analysis is broken into shorter durations, we see that demographic and contextual controls play different roles in different time periods. For women and men, in periods 1 (1974-1983) and 2 (1985-1994), the addition of demographic controls decreases the amount of intra-cohort change. Surprisingly, for period 3 (1996-2000), when individual and contextual variables are added to the models without controls for women and men, the amount of change attributable to intra-cohort change actually increases. Most of this increase is attributable to the addition of contextual controls and can be explained by the fact that the unemployment rate is negatively related to survey year during this period but slightly positively related to the gender beliefs factor.
Therefore, when the unemployment rate was not included in the model, the relationship between survey year and gender beliefs was suppressed. In the fourth period (2002-2010), the amount of change attributable to intra-cohort change increases with the addition of individual controls. This is likely because indicators of the respondent's full-time employment status and of household income are negatively associated with survey year but positively associated with the gender beliefs factor for both women and men. For women in the fourth period, an indicator of being a White non-conservative Protestant also shows a similar pattern of association and could be partly responsible for the increase in intra-cohort change observed across models.
HLM Investigation of Period Effects
In Tables 3 (for women) and 4 (for men), the unstandardized coefficients from the HLM models that further explore period effects on social change in gender beliefs are presented. When comparisons of the size of coefficients within a model are made, standardized coefficients are used (results not shown). In Model 1, the results from an HLM model without period interactions are presented. In support of Hypothesis 1 and Hypothesis 2, the level of advancement of the SDT in a state and a state's record of ratifying the ERA are both positively associated with liberal gender beliefs (but for men, ERA ratification status is not significant at conventional levels, in support of Hypothesis 1b). For both men and women, the unemployment rate is positively related to gender beliefs. From a comparison of standardized regression coefficients (results in Appendix Tables 3 and 4 in an online supplement), we find that the slope for the variable indicating advancement of the SDT in a state is greater in magnitude, for both women and men, than the slopes for state ERA status and unemployment in predicting beliefs about gender.
In Models 2-4, we examine how these state-contextual factors shaped the nature and pace of social change through the inclusion of interaction terms between the contextual factors and dummy variables for period. We started the analysis with empirical predictions regarding the effect of SDT and ERA status on period effects but not regarding the effect of the unemployment rate on period effects (other than the prediction that unemployment rates would be more salient for men's beliefs than women's, Hypothesis 3). Unemployment rate interactions are included so that we may test the effect of state cultural context on period effects, net of the effects of economic conditions. We predicted that differences between state contexts would decline across time (Hypotheses 1a and 2a).
We expected that a state's history of ratification of the ERA would have a positive effect on beliefs about gender (Hypothesis 1), especially for women (Hypothesis 1b), and that the effect would weaken over time (Hypothesis 1a). This is what we found for women: a negative effect in period 1 (1974-1983) of living in a state that did not ratify the ERA, which is cancelled out by a positive interaction with period (positive, statistically significant, and greater in magnitude than the direct effect of ERA status) in the fourth period (Model 2) (see Table 3). This interaction effect is depicted graphically in Fig. 1. We used a slightly modified Model 2 in order to graph the interaction effect in Fig. 1, with time measured linearly by survey year. Following the method outlined by Bauer and Curran (2005), we calculated the regions of significance for this interaction term and found that from 1995 to 2010, no statistically significant difference was found between the gender beliefs reported by residents of states that ratified the ERA and states that did not ratify, supporting Hypothesis 1a.
No statistically significant interaction with period was found for men; when interactions with period were included in the model, ERA status was not a statistically significant predictor of gender beliefs for men in any period (see Table 4). This is an important distinction between the models for women and for men and supports the predictions of Hypothesis 1b.
Discussion
Our analysis shows that there has been a slowdown in the liberalization of beliefs about gender in the United States starting in the mid-1990s, consistent with previous research (Cotter et al. 2011; Thornton and Young-DeMarco 2001). Our decomposition analysis sheds light on the mechanisms of change responsible for the reversal in trends toward gender egalitarianism. We determined that although cohort replacement slowed in period 3 (1996-2000), the actual reversal in trends toward increasingly liberal gender beliefs was motivated by within-cohort processes. This means that the change is not attributable to the more conservative beliefs of recent cohort groups but rather to the adoption of increasingly conservative gender beliefs within cohort groups. From the decomposition analysis, it seems that the state-context variables influenced social change in beliefs about gender as period effects and not in any consistent way as cohort effects. We further investigated the association between state context and period in the HLM analysis to see if state-context variables explained any of the variation in beliefs about gender beyond the individual-level predictors.
The HLM analysis sheds light on how state-context effects influenced social change as period effects. We found that the effects of the context variables varied by period, perhaps contributing to the fluctuation in social change observed across periods. Although living in a state that had not ratified the ERA and that was resistant to the changes in family life associated with the SDT was associated with less egalitarian beliefs about gender in the 1970s, this was no longer the case in the early years of the twenty-first century.
Our paper contributes to the literature on the slowdown in the liberalization of beliefs about gender that occurred in the mid-1990s in the United States by examining the roles of state-context factors in influencing the mechanisms of social change from 1974 to 2010. Because this reversal in trends toward greater gender egalitarianism occurred in the 1990s, as opposed to the 1980s as predicted by feminist theorists (e.g., Faludi 1991), Cotter et al. (2011) attribute this to egalitarian essentialism: a cultural frame based on the feminist call for gender equality coupled with social support for women who choose to focus on childrearing instead of paid employment. In this way, this trend does not represent a return to traditionalism so much as growing support for women's choice to work for pay or specialize in domestic activities. We would add to this explanation by speculating that shifting national political agendas and media reports on the hazards of being a career woman have also contributed to the observed reversal in the liberalization of beliefs about women's political and social roles in U.S. society in the 1990s and the slowing of social change in gender beliefs in the new millennium (compared to the mid-1980s to early 1990s) (Self 2012).
Our analysis shows that individual-level explanations of changing beliefs that point to changing exposure to, or interest in, liberal gender ideology cannot fully explain the pattern of change in gender beliefs observed from 1974 to 2010. Instead, we must include contextual factors in our theories of changing beliefs. In periods 1 and 2, the addition of state-context variables explained away the intra-cohort change observed in these periods. However, individual demographic and state-level contextual controls did not explain away the intra-cohort change observed during periods 3 and 4 (1996-2010). Even controlling for changing educational attainment, religion, and income in the population, as well as our measures of state context, individuals adopted more conservative beliefs about gender from 1996 to 2000 (see also Cotter et al. 2011).
[Table 3 notes: Level 2 n = 48; level 1 n = 15,029; SDT and unemployment rate entered at level 1 as predictors; ICC = intraclass correlation coefficient; a variable centered around grand mean.]
Building on Lesthaeghe and Neidert's (2006) research, we found respondents in those states that were further along with respect to the SDT and that ratified the ERA were also more liberal in their beliefs about gender in the earliest time period, supporting Hypotheses 1 and 2. The results suggest, however, that cultural differences between states declined in importance (with respect to beliefs about gender) after the early 1970s (see Models 2-4 in Tables 3 and 4). The declining effect of advancement with respect to the SDT appears to be part of the story of stalled social change in beliefs in period 3 (1996-2000). We found in our HLM analysis that this was the first period in which the positive association between a state's advancement on the SDT and egalitarian gender beliefs disappeared. By the twenty-first century, beliefs about gender were nearly as egalitarian, or as egalitarian, across states with different ERA voting records and stages of advancement of the SDT. These results support Hypotheses 1a and 2a.
We also found that economic differences between states (as measured by the unemployment rate) continue to influence the beliefs about gender held by the state's residents, at least for men (consistent with Hypothesis 3). Men reported more egalitarian gender beliefs in states with higher unemployment rates, consistent with our past research which showed that men adopt more egalitarian beliefs during economic hard times (Lee et al. 2010). Overall, our findings confirmed our prediction regarding the importance of the ERA for women (Hypothesis 1b) but challenged our predictions regarding the importance of the SDT for women versus men (Hypothesis 2b) and confirmed our prediction regarding the importance of unemployment rates for men versus women (Hypothesis 3).
Limitations and Future Research Directions
Future research should attempt to establish the direction of causality in the association between state-context factors and individuals' beliefs and attitudes. Although we tried to address the issue of reverse causality by using lagged measures of state context in predicting beliefs about gender, we are unable to fully investigate questions of causal direction using cross-sectional data. Analysis of panel data would also allow for an investigation of within-person changes in attitudes and beliefs over this time period and would be a contribution to the literature. Ideally, future research on state-context effects should also analyze data that are representative at the state level. Such an analysis would further our understanding of the different patterns of changing gender beliefs with age for different political generations in different state contexts following the failed ratification of the ERA.
[Table 4 notes: Level 2 n = 48; level 1 n = 12,060; SDT and unemployment rate entered at level 1 as predictors; ICC = intraclass correlation coefficient; a variable centered around grand mean; * p < .05, ** p < .01, *** p < .001.]
Fig. 1 Relationship between the ERA ratification status of state of residence and gender beliefs factor score over the period of the analysis, 1974-2010, for women. Shaded area indicates the period (starting in 1995) in which there is no statistically significant difference between states that ratified and that did not ratify the ERA. (Axes: survey year, 1974-2010, against predicted gender beliefs factor score; lines compare states that ratified with those that did not.)
Practice Implications
The news media have chronicled the increased partisanship in Washington (Hulse and Herszenhorn 2009; Krugman 2002) and scholars have investigated changing regional divides among Americans, particularly in terms of how such differences translate into voting patterns (Cahn and Carbone 2010; Lesthaeghe and Neidert 2006; Monson and Mertens 2011). Our research weighs in on this national discussion about growing cultural divides in the United States and provides some evidence to challenge this claim. With respect to gender beliefs, cultural divides that existed in the 1970s have diminished. More specifically, historical cultural differences that culminated in the heterogeneous voting record on the ERA and differences across states in family formation and dissolution patterns were associated with large differences in gender beliefs in the 1970s, but these differences have diminished over time. When we analyze cultural differences among states in terms of gender beliefs, the story that emerges is one of declining, not growing, difference since the 1970s. Despite claims of growing regional divides in the media and in some scholarly work, our results challenge the claim of growing cultural differences at the state level. If such empirical evidence of declining cultural differences among states were more widely disseminated, it could temper claims that the United States is a nation increasingly divided.
Conclusion
Our research advances our understanding of how social changes in beliefs about gender have varied from 1974 to 2010, for men and for women, and it provides evidence of the importance of U.S. state-context variables in explaining patterns of change. We have shed light on the characteristics of different states that are associated with the uneven patterns of change in beliefs by geographic region observed by others and have helped explain the reversal in trends toward gender egalitarianism starting in the 1990s. Overall, our findings challenge the claim of growing cultural differences between states. Rather than finding evidence of growing differences between those states that are more and less advanced with respect to the second demographic transition (SDT), we found shrinking differences and, with respect to our measure of the gender context of the state (i.e., ERA voting record), we found differences in gender beliefs that existed in the 1970s, 1980s and even early 1990s but that were no longer present by 2010. These findings all challenge the argument of a growing cultural divide among states, at least with regard to attitudes about gender egalitarianism. | 2022-12-24T15:14:55.869Z | 2017-12-18T00:00:00.000 | {
"year": 2017,
"sha1": "d024e9a2ca5d71190e8cf1b6694ebb841d71c762",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/2280740/files/2017%20Lee%20Tufis%20Alwin%20GR.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "d024e9a2ca5d71190e8cf1b6694ebb841d71c762",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": []
} |
169457553 | pes2o/s2orc | v3-fos-license | DEPLOYMENT OF LOW INTERACTION HONEYPOT IN A PRIVATE NETWORK
A honeypot is a security system whose value lies in being probed and attacked. A honeypot is deliberately exposed to attack in order to gather information about unauthorized activity. This paper provides a brief explanation of such a system and demonstrates how it can be implemented to improve the security of an organization and of critical systems and networks. In the experiment performed in this paper, such a trap is laid in the form of a low-interaction honeypot deployed using the Pentbox 1.8 framework in a private network. The results of the deployment are presented.
INTRODUCTION
As organizations become more and more dependent upon their network infrastructures, those infrastructures are becoming increasingly complicated in order to provide the necessary services [1]. Because of this added complexity, introduced so that networks can seamlessly automate organizations' day-to-day routine work, conventional network security devices are failing to provide the level of assurance that network administrators require. Hence there is a need for a context-sensitive approach/technique for preventing, detecting and responding to the attacks performed on these complex networks. A honeypot is deliberately exposed to attack in order to gather information about unauthorized activity (Kinsella, 2005). An intrusion detection system that generates no alerts may be an indicator of normal network activity; a honeypot that does not get attacked, however, is worthless. In this paper we carry out such a study by deploying a low-interaction honeypot using the Pentbox framework in a private network. The results are presented along with the benefits and issues involved in such deployments.
WORKING OF HONEYPOTS
The concept of a honeypot is quite simple. It acts as a resource that has no productive value; it works by deceiving intruders into believing it to be a genuine system with genuine data, so that they attack the system without knowing that they are being observed completely. [3] Honeypots are, in their most basic form, fake information servers strategically positioned in a test network, which are fed with false information disguised as files of a classified nature. When any external system tries to connect to the honeypot, all of its system-related information, such as the IP address of the attacker, operating system, port accessed, browser, version etc., will be collected. The most important part of a honeypot system is data capture: the ability to log, alert on, and capture the packets, payloads and everything else the attacker is doing.
Features of a honeypot system as suggested by Bouget & Holz
a. To divert the attention of the attacker from the real network, making sure that the main server is not compromised.
b. To identify the most commonly preferred attack methods and build profiles of attackers, much as law enforcement agencies profile criminals.
c. To capture new attack entities for future study.
d. To identify new vulnerabilities and risks of various operating systems, environments and programs which are not thoroughly identified at the moment [4].
In a more advanced context, a group of honeypots becomes a honeynet. It acts as a tool that monitors a wide group of possible threats, which gives a systems administrator more information for study. It also makes the attack more fascinating for the attacker, because honeypots can increase the possibilities, targets and methods of attack [5].
HONEYPOT DESIGN AND ITS DEPLOYMENT ISSUES.
On the basis of design and deployment, honeypots can be divided into production and research honeypots [6].
A. Production Honeypot
These honeypots are used to protect organizations in real production operating environments [7], where they face constant attacks 24/7. They are constantly gaining importance because of the detection capability they provide and because of the way they can complement network and host protection. This type of honeypot emulates specific services and sometimes even operating systems to lure attackers. A honeypot cannot prevent an unpredictable attack, but it can detect it. One case in which it does prevent the attacker is when he directly attacks the server: it protects a production system by making the hacker waste his time on a decoy, non-productive target [8].
Fig. 1: Honeypot inside the network
B. Research Honeypot
Research honeypots collect important information about the nature of attacks, the attackers' motives, and the methods and tools they use: how an attack happens, which tools are employed, and so on. They are used to research the threats organizations face and to learn how to provide better protection against those threats. According to the level of interaction [9], honeypots can be divided into low-, medium- and high-interaction honeypots.
C. Low Interaction Honeypot
Low-interaction honeypots are systems that emulate services; they are easy to configure, deploy and set up, and offer a low level of interaction. They help to detect known vulnerabilities and measure how often attackers attack.
D. Medium Interaction Honeypot
This type of honeypot deceives the attacker by presenting fake interfaces and then stores all the activities the attacker performs against different services. A medium-interaction honeypot is somewhat more advanced than a low-interaction honeypot. These can range from a simple port listener to a complete host just sitting on a network waiting to be attacked.
E. High Interaction Honeypot
These systems provide real services to intruders and give them full OS access. As the attacker can gain root access, the risk associated with these honeypots is the highest.
PENTBOX 1.8 FRAMEWORK
Pentbox is a framework consisting of security- and stability-testing tools commonly used in networking. It is developed in Ruby and oriented toward GNU/Linux systems, but it is compatible with any system on which Ruby runs. Among the tools in Pentbox 1.8, its web category (III) includes:
• HTTP directory brute force
• HTTP common files brute force
HONEYPOT ARCHITECTURE AND CONFIGURATION
A monitoring infrastructure needs to be deployed in a well-controlled lab environment consisting of a sensor, a honeypot server and a logging server. An additional sensor, Wireshark, works with the honeypot on the server.
Fig. 2: Architecture of honeypot in a network
Wireshark, a network analysis tool formerly known as Ethereal, captures packets in real time and displays them in human-readable format. Wireshark helps us understand what is happening in our network at a microscopic level.
Configuration of honeypot.
We used the Ubuntu operating system to set up the server. After firing up the terminal, we start the Pentbox 1.8 framework; the Pentbox honeypot will only work if we give it sudo privileges.
First, go to the Pentbox directory and run its Ruby module. Then select option 2, Manual Configuration [Advanced Users, more options]. Enter the port, the message, and where the log file will be stored.
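To show concretely what this low-interaction trap amounts to, the sketch below is a minimal Python analogue of the configuration just described: it binds the chosen port, logs every connection attempt with a timestamp and source address, and replies with a deceptive message. It is an illustrative stand-in, not Pentbox's actual Ruby implementation, and the port, message and log path simply mirror the options entered above.

```python
# Minimal sketch of a low-interaction honeypot analogous to the Pentbox
# setup described above: listen on a port, log each connection attempt,
# and answer with a fake banner. Illustrative only -- not Pentbox's code.
import socket
import datetime

PORT = 80                         # port chosen during manual configuration
MESSAGE = b"HTTP/1.1 403 Forbidden\r\n\r\nAccess denied.\r\n"
LOG_FILE = "honeypot.log"         # hypothetical log-file location

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))   # binding to port 80 requires sudo/root
    srv.listen(5)
    while True:
        conn, (ip, port) = srv.accept()
        stamp = datetime.datetime.now().isoformat()
        with open(LOG_FILE, "a") as log:
            log.write(f"{stamp} intrusion attempt from {ip}:{port}\n")
        conn.sendall(MESSAGE)     # deceive the client with a fake response
        conn.close()

if __name__ == "__main__":
    serve()
```

Any connection to this listener is suspicious by definition, since the machine offers no legitimate service; that is exactly the property that makes honeypot logs easy to interpret.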
Start Wireshark and then click on the network interface you want to use to capture the data; on a wired network, it will likely be eth0. On the honeypot we use Wireshark to monitor the network traffic. We activated the honeypot on port 80, the port number assigned to the most commonly used internet communication protocol, Hypertext Transfer Protocol (HTTP): it is the port on which a computer sends and receives web client-based communication and messages from a web server, and it is used to send and receive HTML pages or data.
Fig. 3: Real-time capture of TCP packets using Wireshark (screenshot).
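As a programmatic counterpart to the Wireshark capture, the following sketch uses the Scapy library (an assumption on our part; the experiment itself relied on Wireshark) to watch for TCP SYN packets aimed at the honeypot's port 80 and report the attacking source addresses.

```python
# Hedged sketch: sniff TCP SYN packets destined for the honeypot's port 80
# and print the source addresses, mirroring the Wireshark filtering
# described below. Requires Scapy and root privileges to capture traffic.
from scapy.all import sniff, IP, TCP

def report_syn(pkt):
    # Wireshark's "tcp.flags==2" means only the SYN bit (value 2) is set.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and int(pkt[TCP].flags) == 2:
        print(f"SYN from {pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport}")

# BPF capture filter: TCP flags byte (offset 13) equals 2, destination port 80.
sniff(filter="tcp[13] == 2 and dst port 80", prn=report_syn, store=False)
```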
The server is usually the IP address the TCP SYN packets are sent to, while the source of the SYN packets is the client. So you can filter on the SYN packets using "tcp.flags==2" and see which IPs are targeted. One way is to click Statistics > Conversations; this opens a new window in which you can click the IPv4 or TCP option to check the destination IP, source IP, source port and destination port.
Fig. 4: Wireshark statistical analysis for source and destination addresses (screenshot). | 2019-05-30T23:45:09.370Z | 2017-08-31T00:00:00.000 | {
"year": 2017,
"sha1": "1e7642c95a99e9c664f2097ae3dfa72fba165fcc",
"oa_license": "CCBY",
"oa_url": "http://ijarcs.info/index.php/Ijarcs/article/download/4583/4111",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5a2d7f6c50d9dc225c6ebcfe8a623e09d64379b9",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
220459013 | pes2o/s2orc | v3-fos-license | Intersecting ethnic and native–migrant inequalities in the economic impact of the COVID-19 pandemic in the UK
Analyzing new nationwide data from the Understanding Society COVID-19 survey (N = 10,336), this research examines intersecting ethnic and native–migrant inequalities in the impact of COVID-19 on people’s economic well-being in the UK. The results show that compared with UK-born white British, black, Asian and minority ethnic (BAME) migrants in the UK are more likely to experience job loss during the COVID-19 lockdown, while BAME natives are less likely to enjoy employment protection such as furloughing. Although UK-born white British are more likely to reduce their work hours during the COVID-19 pandemic than BAME migrants, they are less likely to experience income loss and face increased financial hardship during the pandemic than BAME migrants. The findings show that the pandemic exacerbates entrenched socio-economic inequalities along intersecting ethnic and native–migrant lines. They urge governments and policy makers to place racial justice at the center of policy developments in response to the pandemic.
Introduction
This research addresses two social developments that have swept the world in 2020. First, the COVID-19 pandemic has had an unprecedented impact on the global economy as well as individuals' economic well-being (Ahmed, Ahmed, Pissarides, & Stiglitz, 2020). Second, the global rise of racism and anti-racism movements, often related to COVID-19 (Coates, 2020), has brought to the fore longstanding, entrenched ethnic inequalities (Li & Heath, 2016). Ethnic disparities in the health impact of COVID-19 are well documented across many countries (Bhala, Curry, Martineau, Agyemang, & Bhopal, 2020); most notably, COVID-19 infection and mortality rates are much higher among people from black, Asian and minority ethnic (BAME) groups than their white counterparts. Yet insufficient attention has been paid to ethnic inequalities, or their intersections with native-migrant inequalities, in the economic impact of COVID-19 (Hooper, Nápoles, & Pérez-Stable, 2020;Laurencin & McClinton, 2020). To fill this gap, I analyze new nationwide data collected both before and after the pandemic in the UK. I ask how, if at all, the impact of COVID-19 on people's economic well-being differs with their intersecting ethnic and migrant status. I take advantage of the longitudinal design of the dataset to capture the economic impact of the pandemic by tracing changes in people's economic well-being before and during the pandemic.
Data and sample
I analyzed data from the Understanding Society (USOC) COVID-19 survey and preceding waves of the survey. Initiated in 2009, USOC is a nationally representative longitudinal panel survey, which has oversampled BAME and migrant groups (McFall, 2013). In April 2020, the first wave of the USOC COVID-19 survey collected data from 17,452 respondents during the UK's national lockdown. While the regular USOC waves collect data from face-to-face interviews, complemented by mixed-mode techniques, the COVID-19 survey was administered through a self-completed questionnaire on the internet. Therefore, a sampling weight was provided by the USOC team to adjust for potential sample selection bias, which was used in all of my analyses.
To construct the analytical sample, I first eliminated respondents who did not have a valid record in Wave 9 of USOC, because I used data from the preceding wave to obtain key demographic information that was not collected in the COVID-19 survey. As I analyzed changes in people's employment status, I limited the sample to respondents aged 20-65. Last, I deleted 1,377 cases with missing information on the variables used in the analysis. The final analytical sample contained 10,336 UK residents ("Full Sample"), of whom 8,281 were either self-employed or working as an employee in January-February 2020, before the COVID-19 outbreak in the UK ("Worker Sample"). See Online Supplements for detailed information on sample construction.
Economic well-being indicators
To provide relatively comprehensive coverage of the impact of COVID-19 on people's economic well-being, I focused on five indicators. The descriptive statistics are presented in Appendix A and detailed information on measurement construction can be found in the Online Supplements.
Change in employment status
Based on people's employment and furlough status in January-February and April 2020, I created a categorical variable to capture changes and continuity in people's employment: "no change" (78 %), "lost job" (4%), and "furloughed" (18 %).
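A hedged sketch of how such a change variable can be derived from the two time points is shown below; the column names are illustrative stand-ins, not the actual USOC variable names.

```python
# Illustrative derivation of the employment-change categorical from
# pre-pandemic (Jan-Feb 2020) and lockdown (April 2020) indicators.
# Columns are assumed booleans; names are hypothetical, not USOC's.
import numpy as np
import pandas as pd

def employment_change(df: pd.DataFrame) -> pd.Series:
    conditions = [
        df["employed_feb"] & ~df["employed_apr"] & ~df["furloughed_apr"],
        df["employed_feb"] & df["furloughed_apr"],
    ]
    labels = ["lost job", "furloughed"]
    # Everyone else (including the continuously employed) is "no change".
    return pd.Series(np.select(conditions, labels, default="no change"),
                     index=df.index)
```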
Change in working hours
Based on people's working hours in January-February and April 2020, I created a categorical variable to capture changes and continuity in the respondents' working time: "increased or no change" (53 %), "(partial) reduction in time" (16 %), and "total time loss" (31 %).
Household income loss
The survey asked the respondents to report whether their household had taken any measures to deal with income loss due to the pandemic. I created a dummy variable to distinguish whether a respondent took any action in response to household income loss (yes = 41 %).
Difficulty keeping up to date with bills
In Wave 9 (2017-2019) and the COVID-19 wave (April 2020) of USOC, the respondents reported whether they were up to date with various bills. The response categories were "up to date," "behind with some bills," and "behind with all bills." Due to cell size consideration, I combined the latter two categories into "behind with bills" (7%). I used a dummy variable to capture whether people had found it more difficult to keep up to date with their bills during than before the pandemic.
Perceived financial hardship
In Wave 9 and the COVID-19 wave of USOC, the respondents were asked to describe their financial situation, which ranged from "living comfortably" through "doing alright," "just about getting by" and "finding it quite difficult" to "finding it very difficult." Due to cell size consideration, I combined the last two categories. I then created a dummy variable to capture whether a respondent found their financial situation more difficult during the pandemic than before (21 %).
Ethnic and migrant status
Based on whether one self-identified as a member of a BAME group and whether one was born in the UK, I distinguished the respondents' intersectional ethnic-migrant status: "white, native" (88 %), "white, migrant" (5%), "BAME, native" (3%), and "BAME, migrant" (4%). Due to small sample sizes (see Online Supplements), I was not able to further distinguish specific ethnic groups.
Control variables
I controlled for a series of variables: age (and its quadratic term), gender, education, mode of employment before the pandemic (self-employment, zero hours contract, etc.), household composition, self-reported health, urban residency, long-term household income, occupational class (National Statistics Socio-economic Classification) and COVID-19 risk level; whether the respondents were key workers; whether they currently have or had ever reported COVID-19 symptoms or been tested for COVID-19; and whether they had received social benefits in January-February 2020. Marital status and region of residence were not included, as they were not statistically significantly associated with the outcome variables and their inclusion neither affected the key predictors nor helped to improve the overall model fit.
Analytical strategy
I fitted a series of binary, ordered and multinomial logit regression models for the distinct outcome indicators. Analysis of the first two outcome indicators was based on the Worker Sample and that of the other outcome indicators was based on the Full Sample. I estimated robust standard errors clustered at the household level to account for intra-household correlation. I graph predictive margins to present the findings, and the full regression results are presented in the Online Supplements.
Results
The results show the intersectional disadvantages faced by BAME migrants, who were 3.1 times more likely to lose their jobs during the COVID-19 lockdown than UK-born white British (10.1 % vs. 3.3 %, F [between-group difference] = 9.09, p < 0.01). Compared with BAME natives, UK-born white British were 1.7 times more likely to be furloughed (18.9 % vs. 11.4 %, F = 9.12, p < 0.01). While white non-migrant British were 5.7 times more likely to experience furlough than job loss (18.9 % vs. 3.3 %), the rate was as low as 1.4 times for BAME migrants (16.3 % vs. 11.4 %). These results, along with the results I report below, are after controlling for the fact that BAME groups are more likely to be self-employed and the self-employed tend to be more economically susceptible to the COVID-19 lockdown (Platt & Warwick, 2020). Fig. 2 presents the probabilities of a partial reduction in work hours (Panel A) and total work time loss (Panel B) during the lockdown for those who were in work in January and February 2020. Compared with UK-born white British (16.7 %), BAME migrants were less likely to experience a reduction in their work hours during the lockdown (10.7 %, F = 6.36, p < 0.05). Moreover, BAME natives were less likely to experience total work time loss than their white non-migrant counterparts (23.8 % vs. 30.1 %, F = 5.08, p < 0.05). Fig. 3 presents the probability of household income loss during the pandemic. The results show that compared with UK-born white British (39.6 %), all BAME and migrant groups were more likely to experience household income loss during the pandemic, with income loss being 1.3 times (F = 16.48, p < 0.001), 1.2 times (F = 7.34, p < 0.01) and 1.2 times (F = 4.71, p < 0.05) more likely for white migrants (51.4 %), BAME natives (49.3 %) and BAME migrants (48.0 %), respectively.
Fig. 4 presents the probabilities of falling behind with bills (Panel A) and an increase in the difficulty of keeping up to date with bills during the COVID-19 lockdown (Panel B). The results in Panel A show that BAME migrants were 2.2 times (14.4 % vs. 6.5 %, F = 12.00, p < 0.001) more likely to report being behind with their bills than their white non-migrant counterparts during the COVID-19 lockdown. A similar pattern was observed for an increase in the difficulty of keeping up to date with bills during the lockdown compared with before, as shown in Panel B. Compared with UK-born white British (4.6 %), BAME migrants (10.8 %, F = 7.29, p < 0.01) were 2.3 times more likely to experience an increase in the level of difficulty of keeping up to date with their bills during the pandemic. The results also show that compared with UK-born white British, BAME migrants were less likely to report living comfortably but more likely to report experiencing financial difficulty. Specifically, UK-born white British (28.8 %) were 1.4 times more likely than BAME migrants (20.9 %) to report leading a financially comfortable life during the pandemic (F = 19.37, p < 0.001). In contrast, BAME migrants (11.1 %) were 1.5 times more likely than their white non-migrant counterparts (7.2 %) to report experiencing financial difficulty during the pandemic (F = 12.34, p < 0.001). As shown in Panel E, BAME migrants (26.6 %) were 1.3 times more likely than their white non-migrant counterparts (20.2 %) to experience an increase in their perceived level of financial hardship during the COVID-19 lockdown (F = 3.90, p < 0.05).
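To make the modelling behind these estimates concrete, the sketch below fits one binary logit with household-clustered robust standard errors and computes predictive margins by ethnic-migrant group. It is an illustrative reconstruction with hypothetical variable names and an abbreviated control set, not the study's actual code.

```python
# Hedged sketch: binary logit for household income loss with standard
# errors clustered at the household level, then predictive margins.
# Data-frame and column names are hypothetical stand-ins.
import statsmodels.formula.api as smf

model = smf.logit(
    "income_loss ~ C(ethnic_migrant) + age + I(age**2) + female"
    " + education + key_worker + urban + hh_income",
    data=df,
)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["household_id"]})

# Predictive margins: average predicted probability when every respondent
# is assigned to each ethnic-migrant group in turn, other covariates as observed.
for group in df["ethnic_migrant"].unique():
    counterfactual = df.assign(ethnic_migrant=group)
    print(group, round(fit.predict(counterfactual).mean(), 3))
```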
Conclusions
As we enter the third decade of the 21st century, the COVID-19 pandemic and the global rise of racism and anti-racism movements are two of the most prominent developments to define people's lives around the world. These two developments are inextricably entangled (Bhala et al., 2020). In 2018, compared with their white colleagues doing the same work, BAME employees suffered a wage shortfall of £3.2 billion in the UK (Topham, 2018). My findings uncover intersecting ethnic and native-migrant inequalities in the impact of COVID-19 on people's economic well-being, which exacerbate entrenched socio-economic disadvantages faced by BAME migrants in the UK (Li & Heath, 2016). These inequalities are evident in the negative impact of COVID-19 on people's employment status, maintenance of income, ability to keep up to date with bills, and self-perceived financial situation in the UK. Taken together, my findings underline the importance of considering social groups living at the intersection of multiple margins of society (Collins & Bilge, 2020), as the pandemic and associated lockdown have had a particularly severe impact on the economic well-being of BAME migrants in the UK. My findings not only illustrate the much more severe economic adversity facing BAME migrants than UK-born white British during the pandemic, but also indicate that BAME natives seem to enjoy a lower level of employment protection, such as furloughing, than their white non-migrant counterparts.
In future research, it will be important to trace whether ethnic and native-migrant inequalities in the impact of COVID-19 on people's economic well-being worsen as the pandemic develops. As many countries start to ease and lift lockdown measures, it will also be crucial to examine intersectional inequalities in people's long-term trajectory of (economic) recovery. Furthermore, this research urges policy makers and practitioners to develop initiatives not only to protect members of BAME and migrant groups from the adverse economic impact of the pandemic, but also to ensure racial justice as well as broader social justice (Kristal & Yaish, 2020;Qian & Fan, 2020) in the design and delivery of social protection and welfare provision during these challenging times.
Acknowledgements
I gratefully acknowledge the helpful comments from the anonymous reviewer. The data used in this research were made available through the UK Data Archive. The United Kingdom Household Longitudinal Survey is an initiative funded by the ESRC and various Government Departments, and the Understanding Society COVID-19 survey is funded by the UKRI, with scientific leadership by the ISER, University of Essex, and survey delivery by NatCen Social Research and Kantar Public. Neither the original collectors of the data nor the Archive bear any responsibility for the analyses or interpretations presented here.
[Appendix A table notes: BAME = black, Asian and minority ethnic; GCSE = General Certificate of Secondary Education; key worker = critical workers such as medical staff; weighted statistics; see Online Supplements for detailed measurement information. a April 2020. b Reported in April 2020, referring to January-February 2020. c Reported in previous waves.] | 2020-07-11T13:02:12.014Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "5b1e2087f6042a8a956fc8bbdc9377471376170e",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.rssm.2020.100528",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "7de39e4e65cee8ba56d694b3a7dcb2a591ab80d0",
"s2fieldsofstudy": [
"Sociology",
"Economics"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
53033763 | pes2o/s2orc | v3-fos-license | A systematic review of outcome and outcome-measure reporting in randomised trials evaluating surgical interventions for anterior-compartment vaginal prolapse: a call to action to develop a core outcome set
Introduction We assessed outcome and outcome-measure reporting in randomised controlled trials evaluating surgical interventions for anterior-compartment vaginal prolapse and explored the relationships between outcome reporting quality and journal impact factor, year of publication, and methodological quality. Methods We searched the bibliographical databases from inception to October 2017. Two researchers independently selected studies and assessed study characteristics, methodological quality (Jadad criteria; range 1–5), and outcome reporting quality [Management of Otitis Media with Effusion in Cleft Palate (MOMENT) criteria; range 1–6], and extracted relevant data. We used a multivariate linear regression to assess associations between outcome reporting quality and other variables. Results Eighty publications reporting data from 10,924 participants were included. Seventeen different surgical interventions were evaluated. One hundred different outcomes and 112 outcome measures were reported. Outcomes were inconsistently reported across trials; for example, 43 trials reported anatomical treatment success rates (12 outcome measures), 25 trials reported quality of life (15 outcome measures) and eight trials reported postoperative pain (seven outcome measures). Multivariate linear regression demonstrated a relationship between outcome reporting quality and methodological quality (β = 0.412; P = 0.018). No relationship was demonstrated between outcome reporting quality and impact factor (β = 0.078; P = 0.306), year of publication (β = 0.149; P = 0.295), study size (β = 0.008; P = 0.961) or commercial funding (β = −0.013; P = 0.918). Conclusions Anterior-compartment vaginal prolapse trials report many different outcomes and outcome measures and often neglect to report important safety outcomes. Developing, disseminating and implementing a core outcome set will help address these issues.
Introduction
The most common type of pelvic organ prolapse (POP) is anterior-compartment prolapse. Hendrix et al. demonstrated in a group of 16,616 postmenopausal women a prevalence of anterior-compartment prolapse of 34%, which was much higher than the rates of apical- or posterior-compartment prolapse [1]. The aetiology of POP is complex and associated with various factors such as age, menopausal status and childbirth-related pelvic floor trauma [2,3]. Possible surgical interventions include biological-graft, mesh and native tissue repair [4,5]. The development of new surgical interventions is urgently required, and potential surgical interventions require robust evaluation. Selecting appropriate efficacy and safety outcomes is a crucial step in designing randomised trials. Outcomes collected and reported in randomised trials should be relevant to a broad range of stakeholders, including women with anterior-compartment prolapse, healthcare professionals and researchers. For example, resolution of bladder symptoms is an important outcome for all stakeholders; however, it is not commonly reported across trials. Even when outcomes have been consistently reported, secondary research methods, including pair-wise meta-analysis, may be limited by the use of different definitions and measurement instruments [6,7]. A core outcome set should help address these issues. The first stage in core outcome-set development is to evaluate outcome and outcome-measure reporting across published trials. Therefore, we systematically evaluated outcome and outcome-measure reporting in published randomised trials evaluating surgical interventions for anterior-compartment prolapse. In addition, we assessed the relationships of outcome reporting quality with other important variables, including year of publication, impact factor and methodological quality.
Materials and methods
This systematic review is part of a wider project of the International Collaboration for Harmonising Outcomes, Research and Standards in Urogynaecology and Women's Health (CHORUS) (i-chorus.org) and was registered with the Core Outcome Measures in Effectiveness Trials (COMET) initiative database, registration number 981, and with the International Prospective Register of Systematic Reviews (PROSPERO), registration identification CRD42017062456. We searched bibliographical databases comprising the Cochrane Central Register of Controlled Trials (CENTRAL), EMBASE and MEDLINE from inception to September 2017. The search strategy used several MeSH terms, including bladder prolapse, cystocele and POP. Randomised trials evaluating surgical interventions for anterior-compartment prolapse were eligible. We included trials evaluating the surgical management of anterior prolapse as a unicompartmental prolapse procedure, as well as trials in which anterior repair was undertaken in addition to other surgical interventions. Non-randomised studies, observational studies and case reports were excluded.
Two researchers (CD and AE) independently screened the titles and abstracts of electronically retrieved articles. The articles potentially eligible for inclusion were retrieved in full text to assess eligibility, and reference lists were independently reviewed. Any discrepancies between the researchers were resolved by review by a third senior researcher (SKD). Two researchers (CD and AE) independently extracted the study characteristics, including year of publication, journal topicality (subspecialist, general obstetrics and gynaecology or general medicine), journal impact factor and commercial funding (yes/no). The journal's impact factor was determined using InCites Journal Citation Reports (Clarivate Analytics, Thomson Reuters, New York, NY, USA). Funding status was identified by reviewing the article text and included funding of the original study or a secondary analysis, and donation of equipment or other resources. Two researchers (CD and AE) independently assessed the methodological quality of included randomised trials using the modified Jadad criteria (score range 1-5) [8]; studies were assessed as high quality when they achieved a score >4. Outcome reporting quality was assessed using the Management of Otitis Media with Effusion in Cleft Palate (MOMENT) criteria (score range 1-5) [9]; studies were assessed as high quality when they achieved a score >4. The non-parametric Spearman's rank correlation coefficient (Spearman's rho) was used to explore univariate associations between outcome reporting quality and impact factor during the year of publication, year of publication, and methodological quality. Multivariate linear regression analysis using the Enter model was also undertaken to assess the combined association of journal type, impact factor during the year of publication, year of publication and methodological quality (independent variables) with outcome reporting quality (dependent variable). All tests were two-tailed. Statistical significance was set at 0.05, and analyses were conducted using SPSS statistical software (IBM Corp. Released 2013. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY, USA).
This study was reported with reference to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [6].
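A hedged sketch of the univariate and multivariate steps just described is given below ("Enter" simply means that all predictors are entered into the regression simultaneously). The data frame and column names are illustrative, and the original analysis was run in SPSS rather than Python.

```python
# Hedged sketch of the statistics described above: Spearman's rho for a
# univariate association and a simultaneous-entry ("Enter") linear
# regression of outcome reporting quality on study characteristics.
from scipy.stats import spearmanr
import statsmodels.formula.api as smf

# `trials` is a hypothetical DataFrame with one row per included trial.
rho, p = spearmanr(trials["moment_score"], trials["jadad_score"])
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")

fit = smf.ols(
    "moment_score ~ impact_factor + pub_year + jadad_score"
    " + sample_size + commercial_funding",
    data=trials,
).fit()
print(fit.summary())  # standardised betas require standardising inputs first
```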
Summary of main findings
This study demonstrated considerable variation in outcome and outcome-measure reporting across published trials evaluating surgical interventions for anterior-compartment prolapse. Commonly reported outcomes included normalised anatomy, QoL and pain. Patient-reported outcomes were infrequently reported, and a minority of trials reported on patient satisfaction. Mesh-related complications, including erosion, shrinkage and morbidity, were rarely reported. Forty-five different questionnaires were used as measurement instruments; most were validated. Only a few trials considered cost effectiveness.
Strengths and limitations
Strengths of our systematic review include originality, a rigorous search strategy and methodological robustness. To our knowledge, this systematic review is the first to evaluate outcomes and outcome measures in anterior-compartment prolapse trials. Study screening and selection and data extraction and assessment were conducted independently by two researchers to avoid bias. Our findings were based on outcome reporting in published randomised trials. The exclusion of observational studies may have potentially missed outcomes related to harm [89,90], and selecting only trials reported in English may have introduced selection bias. The variation of interventions for correcting anterior prolapse may have caused variation in outcome and outcome-measure reporting.
Interpretation
Randomised trials require a substantial investment of resources. Variation in outcomes and outcome measures limits the ability of trials to be combined in meta-analyses, which contributes to inevitable research waste, as identified in various areas of women's health, including childbirth trauma, endometriosis and pre-eclampsia [91][92][93][94]. This systematic review is the first step in the development of a minimum data set, which will be known as a core outcome set. It will be developed with reference to methods described by the COMET initiative, the Core Outcomes in Women's and Newborn Health (CROWN) initiative and other core-outcome-set development studies, including those on endometriosis, pre-eclampsia, termination of pregnancy, Twin-Twin Transfusion Syndrome and neonatal medicine [95][96][97][98][99]. CHORUS is aiming to work towards a standardisation of outcomes and outcome measures and subsequently establish minimum standards in research and clinical practice. CHORUS working groups are currently evaluating reported outcomes in all areas of urogynaecology and have been registered with the COMET (registration number 981, http://www.comet-initiative.org/studies/details/981) and CROWN initiatives. Each working group has carefully considered the scope of its work [100], and CHORUS will replicate the success of other international initiatives that have standardised outcome selection, collection and reporting across preterm birth research [101].
In the absence of a core outcome set, we recommend that QoL (incorporating sexual function), postoperative complications, patient and physician satisfaction, and postoperative prolapse, bladder and bowel symptoms be collected across all anterior prolapse trials.
Conclusion
Anterior-compartment prolapse trials report many different outcomes and outcome measures and often neglect to report important safety outcomes. Developing, disseminating and implementing a core outcome set will help address these issues.
Compliance with ethical standards
Conflicts of interest The authors report that they have no conflicts of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-11-10T06:17:23.851Z | 2018-10-22T00:00:00.000 | {
"year": 2018,
"sha1": "f94c76e068383c615ee1c6de2be52b57d9d57605",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00192-018-3781-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f94c76e068383c615ee1c6de2be52b57d9d57605",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245078296 | pes2o/s2orc | v3-fos-license | Financial literacy and the use of financial services in Serbia
The financial services industry has always drawn a lot of attention from potential investors, those who need financing, the government and the general public. Globally, financial opportunities are becoming more attractive, but also more complex. The goal of this study is to analyze the use of financial services in Serbia. We argue that financial education and literacy are preconditions for the use of financial opportunities. Research has shown that people in Serbia are not well informed about how to make sound financial decisions. The reasons why people in Serbia do not use financial products and services to a greater extent require special attention. In order to test the differences between people in terms of how well informed they are and which services they use and why, we conducted a survey. Our results show that people with salaries higher than 100,000 RSD are well informed but not motivated to invest. Individuals with middle income do not have enough trust and think that they are not well informed about different opportunities. Additionally, we found that men are better informed than women. This paper aims to provide an overview of the use of financial services in Serbia in order to improve financial decision-making processes and the understanding of different financial opportunities.
Introduction
The financial services industry is vast and highly diverse. Its scope comprises broad areas such as bank management (both commercial and investment banking), investment management (different forms of investment and pension funds), insurance companies, and securities investment and brokerage companies. The industry is equally important to companies and individuals. Bearing all these areas in mind, it is obvious why the financial services industry is said to be one of the main drivers of a country's economy. It serves as an arena for payments, intermediation between savers and borrowers, risk transfer, liquidity and so on, to mention just a few of its important roles. With those important roles comes very strict and detailed legislative and regulatory practice for all areas of the financial services industry.
Financial services are also an arena in which tremendous changes happen almost continually. The need for innovation is a must for all industries today, and financial services are no exception; on the contrary. Due to constant change and innovation, the financial services industry is reshaping the way it offers and distributes its services to customers. These activities are not a one-time event but a constant drive for innovation "that will shape customer behaviors, business models, and the long-term structure of the financial services industry" (World Economic Forum, 2015).
When we talk about innovation in the financial services industry, we first think of fintech innovation. Fintech (financial technology) innovation refers to the application of technology in the financial services industry. Fintech thus combines financial services with technological development to help both individuals and all kinds of businesses, from start-ups to big companies, and it changes the way they use financial services. Thanks to mobile devices, the Internet, software, cloud services and so on, we no longer use financial services in the traditional way. Fintech has brought us financial services in a faster, more convenient and improved manner.
Considering financial services in Serbia, all types of financial institutions offering different financial services are present. However, their development is not equal, in the sense that some financial institutions, especially banks, are far bigger than others. If we consider the total financial assets that financial institutions in Serbia manage, the situation is as follows: banks hold 89.2% of total assets, insurance companies 6.6%, financial leasing 2.3%, voluntary pension funds 1% and investment funds 0.9% (National Bank of Serbia, 2019; Republic of Serbia Securities Commission, 2019). It is obvious from these figures that Serbia belongs to the countries that are very bank-oriented. We can conclude that traditional financial services offered by banks still dominate in Serbia.
Further in the paper we will present findings from our case study, considering financial services performance in Serbia.
Literature review
One of the goals of this study is to analyze how well people in Serbia are informed about opportunities in the market and how they make financial decisions. More precisely, we focused on financial literacy. Financial literacy can be defined as people's ability to process and make informed decisions about financial planning, wealth management, investments, pension plans and loans. Remund (2010) argues that "financial literacy is a measure of the degree to which one understands key financial concepts and possesses the ability and confidence to manage personal finances through appropriate short-term decision-making and sound, long-range financial planning." Moreover, financial literacy measures how well people understand and use personal finance-related information (Huston, 2010).
Although financial literacy is one of the crucial elements of responsible and controlled financial behavior, it does not guarantee appropriate and profitable financial decisions. The success of financial decisions depends not only on financial literacy but also on impulsiveness, behavioral biases, unusual preferences and numerous external circumstances (Huston, 2010).
The conventional economic approach suggests that people are fully rational and well informed and that they will always make optimal decisions. This means that people will be able to save money and take responsibility for their investments. In truth, this approach is far from reality. Generally speaking, people lack the basic financial knowledge that is very important for making complex financial decisions.
Nowadays financial decisions are highly personalized, and people should be very careful in order to organize their finances in the best possible way. By contrast, in the past, financial decisions (e.g. pension plans, savings, investments) were initiated and managed by governments and companies. Employees devoted little attention to their financial decisions, especially to pension plans (Fernandes et al., 2014). The reason for this was a low level of education regarding financial opportunities and optimal investment or savings plans. There is an ongoing discussion on how to accumulate financial knowledge early in life even if there is no opportunity for complex financial decisions until much later.
On the one hand, financial literacy has many advantages. For instance, people with strong financial skills have better results in job planning and savings (Lusardi & Mitchell, 2014). Risk awareness is one of the characteristics of financially savvy investors who always try to invest in several ventures and diversify financial risks (Abreu & Mendes, 2010). On the other hand, financial ignorance comes with significant costs. People who do not understand basic financial concepts usually have problems with higher interest rates and transaction fees (Lusardi & Tufano, 2015). Additionally, they borrow more and save less money (Stango & Zinman, 2009). Finally, Huston (2010) summarizes: "people with low financial literacy are more likely to have problems with debt (Lusardi & Tufano 2009), less likely to participate in the stock market (van Rooij et al., 2011), less likely to choose mutual funds with lower fees (Hastings & Tejeda-Ashton, 2008), less likely to accumulate wealth and manage wealth effectively (Hilgert et al., 2003) and less likely to plan for retirement (Lusardi & Mitchell, 2006)". Different research studies on financial literacy have shown that numerous people all around the world are financially illiterate (Hilgert et al., 2003). The study shows that one out of three adults is financially literate. Women, the poor and lower educated respondents show the lowest level of financial literacy, and the results remain the same regardless of the development of the country. People who use different financial services have better financial skills. Therefore, financial knowledge is related to financial services in two directions: higher financial literacy lead to the usage of financial services, and different financial services, such as a bank account or loan, motivate people to improve their financial knowledge. Nevertheless, there is a view that financial literacy is not linked to simple decisions such as having a bank account but rather to complex portfolio decisions which come with high levels of risk (Christelis et al., 2010). On the other hand, Hilgert et al. (2003) point out that there is a strong correlation between financial literacy and day-today financial management skills. Results from different studies show that people who are better at solving mathematical problems and who are financially literate are also more likely to participate in financial markets and invest in different financial instruments (Christelis et al., 2010;van Rooij et al., 2011). In terms of complex financial decisions, what matters most are analytical skills and the capacity to do calculations (Hilgert et al., 2003).
In terms of financial education programs there are two streams of research. Some researchers suggest that costs of financial education programs are higher than the potential benefits (Mandell & Klein, 2009). On the other hand, there is evidence that education programs are highly effective because they lead to optimal financial plans and, therefore, to positive financial outcomes (Fox et al., 2005).
Numerous studies suggest a positive relationship between the level of education and financial literacy and emphasize differences in financial knowledge across educational levels. People with college degrees, in comparison to those who did not continue formal education after high school, are more likely to use financial instruments in order to make better financial decisions (Lusardi & Mitchell, 2007). In addition, it has been shown that people's financial literacy depends on the financial education of their parents; it might be beneficial for children to be aware of their parents' savings and investments (Lusardi et al., 2010).
In the wake of the global financial crisis, policy makers and the financial industry have shown deep concern about financial literacy, especially among youth. Younger generations face many more opportunities, such as exotic mortgage forms, expanded and new borrowing options, and investments in innovative solutions tightly linked to technological development. Moreover, new generations have a very strong desire to be successful and to have freedom and flexibility. They have high self-esteem and expectations, which act as the main drivers of change. The value of money, risk averseness, and motivation for success influence the use of financial services, especially among youth. Nga et al. (2010) find that limited awareness of financial services and products is present within new generations, whereas in the past people used to be savvier, better educated, and prepared to take higher risk.
Not many studies analyze the awareness of financial products and services, especially in emerging countries such as Serbia. The complexity of transactions in underdeveloped financial markets and a lack of transparency lead to lower levels of trust and financial awareness. Boyd et al. (1994) argue that low-income families do not have enough experience or knowledge to choose the best possible financial options. Their results show that low-income families rely solely on word of mouth when they have to make any financial decision. In contrast, high-income families focus on interest rates and savings accounts and emphasize the importance of a friendly approach by financial experts.
The study implies that there is room for improvement in terms of financial awareness in Serbia. Financial institutions should play a proactive and socially responsible role. Boyd et al. (1994) argue that the reputation of financial institutions, clear and direct communication about interest rates on loans and savings, and the availability of information matter more than other criteria. Financial literacy is important not only for individuals but also for small and medium enterprises and the corporate world (Drexler et al., 2014).
Sample and procedure
This survey tests financial service awareness in Serbia. A questionnaire was developed and pilot tested before the formal data collection. The questionnaire comprises two parts with 10 questions in total. The first part covers the demographic profile of the respondents: gender, age, marital status, housing situation, and education. The second part encompasses questions regarding respondents' awareness of financial services in Serbia. After a question on employment in a private or public institution, the remaining questions in this part cover income, types of financial services currently or previously used, awareness of different types of financial services in Serbia, and perceived reasons for low or no use of financial services in Serbia. Whenever possible, measurement items were adapted from existing scales in the literature, with some modifications made to align the scales with the Serbian context. The questionnaire was distributed via Google Forms; the response rate was 62%, corresponding to a sample of 217 cases out of the 350 questionnaires sent out.
Results
The financial services awareness scale was created using a five-point Likert-type scale ranging from 1 (totally unaware) to 5 (totally aware). The question components for the construct of the scale, as well as its descriptive statistics, are listed in Table 1. The Cronbach alpha coefficient is 0.916, which points to excellent internal consistency of the scale. Table 2 gives the results of Kendall's tau correlation test, which is used to test correlations among ordinal scaled data. Table 3 gives the results of Mann-Whitney and Kruskal-Wallis tests for the Financial services awareness scale. Nonparametric tests were used since the variable is not normally distributed.
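As an illustration of how such scale statistics can be reproduced, the sketch below computes Cronbach's alpha and Kendall's tau with NumPy and SciPy. It is a minimal example on synthetic data: the respondent matrix, item count, and income variable are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) Likert matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical awareness scale: 217 respondents, 5 items scored 1-5.
rng = np.random.default_rng(0)
scale = rng.integers(1, 6, size=(217, 5))
print(f"Cronbach's alpha: {cronbach_alpha(scale):.3f}")

# Kendall's tau between two ordinal variables, e.g., income band vs. awareness.
income_band = rng.integers(1, 4, size=217)   # 1 = low, 2 = middle, 3 = high
awareness = scale.sum(axis=1)
tau, p = kendalltau(income_band, awareness)
print(f"Kendall's tau = {tau:.3f}, p = {p:.3f}")
```

On purely random data the alpha will be near zero; the reported 0.916 reflects strongly correlated items in the actual scale.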
According to the results given in Table 3, there is a significant difference between men and women regarding financial services awareness in Serbia (MW, p<0.001). Men (Me=15) are generally more aware of and more familiar with financial services than women (Me=10). When it comes to salaries, people who earn less than 55,000 RSD a month are the least aware of the possibilities of financial services in Serbia (Me=10). People who earn 55,000 to 100,000 RSD are somewhat more familiar with financial services (Me=13), while people who earn more than 100,000 RSD are the most familiar with financial services in Serbia (Me=15). This difference is significant at the 0.01 level (KW, p=0.003). A possible explanation is that people who earn the most are more frequently contacted by bank operators and informed about the possibilities of these services. Ironically, they most commonly do not use these services because of a lack of interest (Table 4). According to the crosstabs results, there is a concordance between the amount of money that people earn and the reasons for not using financial services in Serbia. The value of the crosstabs likelihood ratio test is 39.551, which is significant at the 0.001 level (p=0.001). It can be noted from Table 4 that people who earn the least mostly do not use financial services because of a lack of finances. People who earn medium amounts of money in Serbia mostly do not use financial services because of distrust in the financial system, while people who earn the most are simply indifferent and not interested in these services.
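The group comparisons and the crosstab likelihood ratio test reported above map directly onto standard SciPy routines, as in the sketch below; the sample sizes, score ranges, and crosstab counts are invented for illustration and do not correspond to the survey data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal, chi2_contingency

rng = np.random.default_rng(1)

# Hypothetical awareness scores (scale sums, 5-25) split by gender.
men = rng.integers(5, 26, size=100)
women = rng.integers(5, 26, size=117)
u, p_mw = mannwhitneyu(men, women, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_mw:.4f}")

# Hypothetical awareness scores split by three income bands.
low, mid, high = (rng.integers(5, 26, size=n) for n in (70, 80, 67))
h, p_kw = kruskal(low, mid, high)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.4f}")

# Likelihood-ratio (G) test on an income-band x reason-for-non-use crosstab,
# obtained via the lambda_ option of chi2_contingency.
crosstab = np.array([[30, 20, 20],    # low income: lack of funds dominates
                     [15, 40, 25],    # middle income: distrust dominates
                     [10, 12, 45]])   # high income: lack of interest dominates
g, p_g, dof, _ = chi2_contingency(crosstab, lambda_="log-likelihood")
print(f"Likelihood ratio = {g:.2f}, dof = {dof}, p = {p_g:.4f}")
```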
Conclusion
The financial services industry is perceived as one of the main drivers of a country's economy. Being equally important to companies and individuals, the financial services industry has always drawn a lot of attention from potential investors, those who need financing, government, and the general public. Globally, financial opportunities are becoming more diverse and attractive but, on the other hand, also more complex.
In this paper we start from the hypothesis that financial education and literacy are preconditions for the use of financial opportunities. Generally speaking, financial literacy leads to optimal decision making about borrowing money, investments, pension plans, participation in the financial markets etc. This study aims to provide an overview of the use of financial services in Serbia in order to improve financial decision-making processes and understand the different financial opportunities.
Not many studies analyze the awareness of financial products and services, especially in emerging countries such as Serbia. Underdeveloped financial markets and lack of transparency lead to lower levels of trust and financial awareness in markets like Serbia.
Survey results show that the amount of money people earn and the reasons for not using financial services in Serbia are closely related. Our results show that people with salaries higher than 100,000 RSD (approx. €850) are well informed but not motivated to invest. Individuals with middle incomes do not have enough trust and think that they are not well informed about different opportunities. People who earn the least mostly do not use financial services because of a lack of finances. Additionally, we found that men are more informed than women. | 2021-12-12T16:33:09.358Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f032a7a5545bf1612e60fc6b20b2f36b21cc9153",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-2120/2021/0350-21202146105B.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "aa5d57869f101a599c71b253353c4b32b8d866c7",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
259190625 | pes2o/s2orc | v3-fos-license | Cancer mortality distribution in South Africa, 1997–2016
Introduction The mortality data in South Africa (SA) have not been widely used to estimate the patterns of deaths attributed to cancer over a spectrum of relevant subgroups. There is no research in SA providing patterns and atlases of cancer deaths in age and sex groups per district per year. This study presents age-sex-specific geographical patterns of cancer mortality at the district level in SA and their temporal evolutions from 1997 to 2016. Methods Individual mortality level data provided by Statistics South Africa were grouped by three age groups (0–14, 15–64, and 65+), sex (male and female), and aggregated at each of the 52 districts. The proportionate mortality ratios (PMRs) for cancer were calculated per 100 residents. The atlases showing the distribution of cancer mortality were plotted using ArcGIS. Spatial analyses were conducted through Moran's I test. Results There was an increase in PMRs for cancer in the age groups 15–64 and 65+ years from 2006 to 2016. Ranges were 2.83 (95% CI: 2.77–2.89) to 4.16 (95% CI: 4.08–4.24) among men aged 15–64 years and 2.99 (95% CI: 2.93–3.06) to 5.19 (95% CI: 5.09–5.28) among women in this age group. The PMRs in men and women aged 65+ years were 2.47 (95% CI: 2.42–2.53) to 4.06 (95% CI: 3.98–4.14) and 2.33 (95% CI: 2.27–2.38) to 4.19 (95% CI: 4.11–4.28). There were considerable geographical variations and similarities in the patterns of cancer mortality. For the age group 15–64 years, the ranges were 1.18 (95% CI: 0.78–1.71) to 8.71 (95% CI: 7.18–10.47), p < 0.0001 in men and 1.35 (95% CI: 0.92–1.92) to 10.83 (95% CI: 8.84–13.14), p < 0.0001 in women in 2016. There were higher PMRs among women in the Western Cape, Northern Cape, North West, and Gauteng compared to other areas. Similar patterns were also observed among men in these provinces, except in North West and Gauteng. Conclusion The identification of geographical and temporal distributions of cancer mortality provided evidence of periods and districts with similar and divergent patterns. This will contribute to understanding the past, present, and future trends and to formulating interventions at a local level.
Introduction
High-quality mortality statistics are crucial for optimal health planning, decision-making, program evaluation, progress monitoring, and resource allocation (1,2). However, high-quality statistics are often reported at high administrative divisions but not at lower administrative levels for local public health decision-making. There is evidence of disparities in mortality risks at subnational levels driven by age, gender, and socio-economic differences (3)(4)(5). South Africa (SA) is undergoing an upsurge in non-communicable diseases, such as cancer (6,7), mainly due to lifestyle-related factors such as obesity, smoking, and alcohol consumption (8). This upsurge is concerning because of its negative impact on economically productive adults (9), who contribute to the economy and on whom younger and older age groups depend for survival (10). Thus, adult mortality in SA presents a changing set of dynamics challenging the limited healthcare resources of the region (11), because of its impact on the availability and productivity of working adults (12, 13). Mortality atlases often describe the geographical distribution of cancer mortality rates by combining years to make a single period. However, the burden of diseases, demographics, and many determinants associated with the well-being of populations are dynamic and change over time (14).
Cancer is a rare disease; thus, some geographic areas may present few cases, leading to unreliable and unstable results, as most atlases use age- and gender-adjusted rates or standardized mortality rates (15). In order to provide and understand the patterns of deaths attributed to cancer over a spectrum of relevant subgroups, time trends and disease mapping for cancer mortality (CM) should be approached in a manner that accounts for age- and sex-group differences at the district level in different years. The best method to understand trends in health indicators is to evaluate the outcomes of health policies implemented in the past and to ascertain the current health status of the population in order to make well-informed public health policy in the future (16). Studies in SA investigating time trends and spatial distribution of diseases have focused mainly on all causes of mortality combined, not cancer (17,18). There are studies in SA providing the patterns of cancer incidence, but reported studies on CM are limited. Furthermore, studies on CM that encompass all cancers and show the burden of CM attributed to various cancers are lacking in SA. To the best of our knowledge, there is no research in SA providing patterns and atlases of the deaths attributable to cancer in age and sex groups per district per year. This study presents age-sex-specific geographical patterns of CM at the district level in SA and their temporal evolutions from 1997 to 2016. This may be crucial for effectively monitoring and evaluating public health policies and programs targeting CM reduction across time and sub-populations.
Study population
The mortality data provided by Statistics South Africa (Stats SA) were analyzed using Stata version 17. The registration of deaths in SA is done within three days of the date of the event. A death certificate is issued to the informant after the medical practitioner has certified the cause of death. All death notification forms are collected by Stats SA from the Department of Home Affairs in SA. Stats SA is responsible for compiling and processing the death record forms into mortality reports (19). There are many sources of mortality data in SA (20), but the vital death registration data provide the best and most reliable mortality data due to very high levels of completeness. We considered cancer deaths and all-cause mortality stratified by age (0-14 (adolescence), 15-64 (working-age population), and 65+ years (elderly population)) and sex (male and female). We used the 10th revision of the International Classification of Diseases codes (ICD-10) to classify the causes of death. Cancer deaths (ICD-10 codes C00-C96) corresponding to various anatomical locations were combined based on six age-sex groupings for each of the 52 districts. The top five leading causes of cancer-related deaths were estimated by adding the absolute numbers of deaths from individual cancers over the twenty-year study period. Finally, the five leading causes of cancer deaths were ranked in descending order.
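A sketch of this grouping logic in pandas is shown below. The record layout, field names, and example ICD-10 codes are hypothetical stand-ins; the real Stats SA extract has a different structure.

```python
import pandas as pd

# Hypothetical record-level mortality extract.
deaths = pd.DataFrame({
    "age":      [8, 45, 70, 52, 67],
    "sex":      ["M", "F", "M", "F", "M"],
    "district": ["DC1", "DC1", "DC2", "DC2", "DC2"],
    "icd10":    ["J18", "C53", "C61", "C50", "C34"],
    "year":     [2016] * 5,
})

# Age bands used in the study: 0-14, 15-64, 65+.
deaths["age_group"] = pd.cut(deaths["age"], bins=[0, 14, 64, 200],
                             labels=["0-14", "15-64", "65+"],
                             include_lowest=True)

# Flag cancer deaths (ICD-10 codes C00-C96).
deaths["is_cancer"] = deaths["icd10"].str.match(r"C[0-8]\d|C9[0-6]")

# Cancer deaths per district, year, age group, and sex.
cancer_counts = (deaths[deaths["is_cancer"]]
                 .groupby(["district", "year", "age_group", "sex"],
                          observed=True)
                 .size())
print(cancer_counts)

# Rank leading cancer sites by absolute death counts over the period.
top5 = (deaths[deaths["is_cancer"]]
        .groupby("icd10").size()
        .sort_values(ascending=False)
        .head(5))
print(top5)
```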
Statistical analysis
The analyses were performed in two phases. The first phase quantified cancer deaths, computed proportionate mortality ratios (PMRs), and generated corresponding atlases. The second phase focused on identifying the most common cancers and plotting the graphs for PMRs. The PMRs for cancer across the districts for each year were calculated using deaths from cancer and all causes.
PMR = (number of cancer deaths in each age–sex grouping for each year ÷ number of deaths due to all causes in all age groups in the same sex for each year) × 100

The 95% confidence intervals (95% CIs) were calculated based on the Poisson distribution. For the spatial distribution of CM, the data were aggregated at the district level for SA in 1997, 2004, and 2016. The PMRs for cancer per district for the three selected years were computed using the cancer deaths in each age-sex grouping per district per year as the numerator, while the number of deaths due to all causes in all age groups in the same sex per district per year served as the denominator. These three years were selected in order to compare the spatial distribution of CM in the eras before and after antiretroviral therapy (ART); ART was initiated in SA around 2004 (21). The shapefiles were joined to their district-specific corresponding aggregated dataset in ArcGIS (22), and the atlases showing the distribution of CM were plotted. Spatial analyses were conducted through Moran's I test. To compute the number of deaths for the most common cancers, the corresponding total number of deaths related to the five major cancers in each age-sex category in that particular year served as the numerator, whereas the number of deaths due to all cancers at all ages in the same sex in that particular year was used as the denominator. These were reported per 100 residents.
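The PMR and its Poisson-based interval can be computed as in the sketch below. The paper does not state which Poisson interval construction was used; this version assumes the exact (Garwood) interval on the numerator count, and the district-year counts are hypothetical.

```python
from scipy.stats import chi2

def pmr_with_ci(cancer_deaths: int, all_deaths: int, alpha: float = 0.05):
    """PMR per 100 deaths, with an exact Poisson CI on the numerator count."""
    pmr = 100.0 * cancer_deaths / all_deaths
    lo = 0.0 if cancer_deaths == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * cancer_deaths)
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (cancer_deaths + 1))
    return pmr, 100.0 * lo / all_deaths, 100.0 * hi / all_deaths

# Hypothetical counts for one district-year-age-sex cell.
pmr, lo, hi = pmr_with_ci(cancer_deaths=120, all_deaths=4000)
print(f"PMR = {pmr:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```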
Number of deaths
In males, the numbers of cancer deaths were lowest in the 0-14-year-olds and highest in the 15-64-year-olds in 1997 and in the other years (Table 1). This pattern was similar among females.
All-cancer mortality patterns
Disparities in the proportion of cancer deaths were observed, with PMRs being lowest in the 0-14 age group, followed by the 65+ age group, and highest in the 15-64 age group.
Study area
SA has nine provinces, which are further subdivided into 52 districts; the present study was conducted at the district level, with the nine provinces coded on the map to ensure readability. There are 44 district municipalities and 8 metropolitan districts in SA. Some provinces have many districts, while others have few. The province with the most districts is KwaZulu-Natal, which has 11 districts, followed by Eastern Cape with 8 districts and Western Cape with 6 districts. There are 5 districts each in Free State, Gauteng, Limpopo, and Northern Cape, whereas North West has 4 districts and Mpumalanga has 3. Of the eight metropolitan districts, three (City of Ekurhuleni Metropolitan, City of Johannesburg Metropolitan, and City of Tshwane Metropolitan Municipality) are located in Gauteng, two (Buffalo City Metropolitan and Nelson Mandela Bay Metropolitan Municipality) in Eastern Cape, one (City of Cape Town Metropolitan Municipality) in Western Cape, one (eThekwini Metropolitan Municipality) in KwaZulu-Natal, and one (Mangaung Metropolitan Municipality) in Free State (23). Gauteng comprises the largest share of the population in SA (26.6%), followed by KwaZulu-Natal (19.0%), while Northern Cape contributes the smallest share of the population (2.2%). In terms of age structure, the populations aged younger than 15 years and 60 years or older are about 28.1% and 9.2%, respectively (23) (Figure 2).
Geographical distribution of all-cancer mortality
Cancer mortality is high among older individuals. Furthermore, the five leading causes of cancer death are not childhood cancers. This was reflected by the small number of cancer deaths among 0-14-year-olds in our study; it is within this context that the maps showing the distribution of CM in this age category were not plotted. There were considerable geographical variations and similarities in the patterns of CM. High cancer mortality occurred in the Northern Cape (Figures 3A, B).
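The spatial clustering visible in such maps is what the Moran's I test from the methods quantifies. Below is a minimal, self-contained implementation; the district values and contiguity weights are toy inputs, and significance testing (usually done by permutation, e.g., with PySAL/esda) is omitted.

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I for area values under a spatial weights matrix W."""
    z = values - values.mean()          # deviations from the mean
    n, w_sum = len(values), weights.sum()
    return (n / w_sum) * (z @ weights @ z) / (z @ z)

# Toy example: four districts in a row, binary contiguity weights.
pmrs = np.array([2.1, 2.4, 4.0, 4.3])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(f"Moran's I = {morans_i(pmrs, w):.3f}")  # positive => neighbors are alike
```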
The distribution of CM among women aged 15-64 was more concentrated in the Northern Cape, Western Cape, Eastern Cape, Free State, and Gauteng provinces in 1997. In elderly women, clusters of high CM were located in these provinces, with the exception of the Free State. In 1997, the reported PMRs among women aged 15-64 and 65+ years ranged upward from 1.66 (95% CI: 1.00-2.59).

The mortality for the top five major cancers was higher in men aged 15-64 years than in elderly men from 1997 to 2003 (Figure 6A). Ranges were 27.32%-29.50% among men aged 15-64 years. As time progressed, the elderly population had higher CM than the working-age population from 2005 to 2016. The deaths related to the five common cancers in the elderly population were 26.46%-28.94%. The five leading causes of cancer deaths accounted for more than 50% of cancer deaths in both age groups over the entire study period. Ranges were 50.90%-57.60%, with the highest CM in 1997. For the working-age and elderly populations combined among men, the number of deaths attributed to the top five cancers declined from 1997 to 2016. In women, the working-age population had higher CM than the elderly population from 1997 to 2016. The deaths related to the five major cancers in the working-age and elderly populations were 29.25%-31.24% and 20.52%-22.78%, respectively. In females, there was an increase in mortality for the top five major cancers from 1997 to 2016 in both age groups. The observed ranges were 50.08%-54.01%, with the highest mortality in 2016 (Figure 6B).
Discussion
This study was undertaken to provide patterns and spatial distributions of CM to better understand the evolution of age-sex PMRs for cancer over time and space in SA. The vital death statistics data have been of high coverage, especially at the level needed for assessing patterns and mapping. The study findings will contribute towards a reflection on the past, present, and future trends of CM in SA. We are aware that this study did not include the calculation of CM rates, which may estimate the risks of this disease. Most mid-year population estimates in SA are at the regional level and are not often given by age-sex groups at the district level. The adopted approach has allowed for the computation of PMRs for cancer by six age-sex groups and the identification of districts presenting high and low CM. The numbers of cancer deaths increased from 1997 to 2004 in all age and sex groups. The increase may be indicative of population growth and better reporting coverage. The numbers of notified and registered cancer deaths continued to increase even from 2004 to 2016, although they declined slightly during specific periods. This may be due to variations in cancer deaths recorded across the districts in SA, which may suggest variation in the quality and completeness of mortality data. We found disparities in all-cancer mortality across the age-sex groups, with the lowest proportions in children, followed by the elderly population, and the highest in the working-age population. The low CM in the 0-14-year-olds could be explained by the fact that pediatric cancer is underreported in SA; about 50% of childhood cancer cases are not diagnosed in SA (24). The stability of the trend among children in SA is probably due to the lower incidence rate of childhood cancers, even far lower than in developed countries such as the United States. A study done in SA reported an age-standardized incidence rate (ASR) of 45.7 per million children (25), whereas the ASR in high-income countries was 180 per million (26). A study done in Nigeria reported that hematological malignancies accounted for most cancer deaths among children (27). This present study did not assess the pattern of deaths due to hematological malignancies because of the criteria used to select the five leading types of cancer death. However, hematological malignancies were included when the pattern of deaths attributable to all cancers was assessed.
Age is positively correlated with cancer (28). This could explain why cancer contributed to more deaths in the working-age and elderly populations than in children. The improvement of life expectancy in SA after the ART era could be a contributory factor to the high CM in these age categories. Of notable importance, cancer contributed to more deaths among individuals aged 15-64 years than among those aged 65+ years. The predominance of cancer deaths among working-age individuals may be due to either the age profile of the population or actual disparities in CM rates in SA. The lower CM among the elderly population than among working-age individuals suggests that patients do not live long after being diagnosed with cancer or that cancer cases were successfully treated. Furthermore, there is a long waiting period from screening to initiation of treatment amongst cancer patients in SA. The waiting times for radiation treatment of gynecological and prostate cancers in SA run to many weeks, and delaying cancer treatment for more than 12 weeks may be harmful. This delay is caused by a lack of access to care linked with socio-economic status, race, insurance status, and urban-rural location, which significantly affects vulnerable groups, namely black Africans, the poor, the uninsured, and rural residents (19). Another possible explanation is that patients might have started to seek care from traditional healers before going to cancer centers due to stigma and lack of awareness. This results in the presentation of cancer cases at an advanced stage, and it is often difficult to cure cancer at an advanced stage. More worrisome is the observed increase of CM from 2006 to 2016 in the population aged 15-64 and 65+ years, which calls for the implementation of necessary interventions. The lack of awareness of cancer among the general public and healthcare professionals negatively impacts the number of patients diagnosed and those referred to appropriate services; hence, patients may die from this disease. The avoidance of these cancer deaths may include strengthening awareness, diagnostic capacities, and early treatment (29). The spatial distributions of CM showed differences between districts, indicating disparities in access to health care quality, age distribution, and the burden of cancer disease. Cancer services such as diagnostic, curative, rehabilitative, psycho-social, or palliative services in SA are more concentrated in two provinces, namely Western Cape and Gauteng. Despite the availability of cancer services in the Western Cape and Gauteng provinces, deaths due to cancer predominated in those provinces in 1997. This may be explained by the movement of patients diagnosed with cancer from their under-resourced provinces to the Western Cape and Gauteng to seek care. This increases the number of cancer patients in the well-resourced provinces, leading to inadequate staffing ratios, low survival rates, and poor quality of survivorship. This study found the lowest PMR among elderly women in 2006. This suggests that women live longer than men, whereas men are more likely to engage in risky behavioral patterns such as heavy drinking (30), which may contribute to cancer morbidity and mortality. However, women in this age group may die from other conditions that are linked with aging. SA is experiencing demographic, epidemiological, and nutritional transitions contributing to lifestyle-related diseases and mortality, such as cancer.
Men and women aged 15-64 and 65+ years are crucial in studying CM and other non-communicable diseases, considering that these age groups are more likely to develop and succumb to these diseases. In addition to the health impact, the findings from this study have some important economic implications. The highest mortality was observed in the 15-64-year-olds, the economically active group contributing to the country's economy. In SA, economically productive adults also support the welfare of younger and older age groups, so the high CM in males and females aged 15-64 years may negatively impact the economy. Cancer constituted the majority of deaths in the Western Cape, Northern Cape, North West, and Gauteng in women aged 15-64 years. CM also predominated among men in these provinces, except in North West and Gauteng. The CM preponderance in the Western Cape and Gauteng provinces could be due to the high rate of inter-provincial migration among potential job-seekers and employees. Gauteng and the Western Cape dominate the economy in SA; hence these provinces are zones of attraction. In-migrants modify their lifestyles as they move from their places of origin to urbanized places and adapt to the urbanization associated with cancer (31). Therefore, patients diagnosed with cancer may die from this disease if untreated. Given the increased unemployment rate in SA, many families depend on old-age pensions for survival; hence the high CM among the 65+ age group may increase poverty in many families. Over three-quarters of African adults age-eligible for a pension stay with at least one individual younger than 21 years, and the majority live in households containing three or more generations (32).
Thus, the findings presented in this study will assist policymakers by providing empirical evidence to help them formulate and target policies to narrow the differences in access to quality healthcare services across districts. Some such initiatives have already been implemented, as detailed within the National Cancer Strategic Framework (33). These initiatives aimed to reduce and combat communicable and non-communicable diseases, increase primary health access for families and communities, and achieve universal health care coverage. Overall, the five leading causes of cancer death were similar among the working-age and elderly populations, with some differences in ranking and by sex. Similar cancer profiles with some insignificant variations in their relative rankings have been reported in SA (7,34,35). Lung and esophageal cancers were among the five leading causes of cancer death in both sexes. The high prevalence of smoking in SA could be responsible for the lung and esophageal CM patterns in the populace, since smoking is a common risk factor for lung and esophageal cancers (36). The estimated prevalence of smoking in adult men and women in SA is 26.5% and 5.5%, respectively (37). Among men, the major contributors to CM in the working-age category were lung and esophageal malignancies, whereas lung and prostate cancers were the leading causes of cancer death in the 65+ age group. Advanced age is associated with prostate cancer (38); this explains the pattern of high CM observed in the 65+ age group. Worldwide, lung and prostate cancers are the two most common cancers that affect men, whereas lung and liver malignancies cause more deaths among males (34). Cancers of the cervix and breast caused the greatest mortality in women aged 15-64 years, while the majority of deaths in elderly women were due to breast and lung malignancies. Interestingly, deaths due to cervical cancer among females aged 15-64 declined sharply from 1997 to 2003 and increased from 2004 to 2016. This suggests that HIV-positive women might have died before developing cervical cancer, whereas the introduction of ART in SA from 2004 to 2016 increased life expectancy enough to enable the development of cervical cancer. There are projections that the incidence of cervical cancer will sharply increase in SA (39). There is high accessibility, affordability, and availability of ultra-processed food in rural and urban areas in SA, and consumption of ultra-processed food is associated with breast cancer (40). Among females worldwide, breast and colon cancers are the top two cancers diagnosed, whereas breast and lung cancers are the leading causes of cancer death (34). The high incidence of this disease may increase the number of deaths related to the five major cancers. Indeed, the five leading causes of cancer death in our study were among the top ten cancers diagnosed in South African men and women in 2019 (41). Compared to children, the risk factors associated with many cancers are more prevalent in the working-age and elderly populations. Therefore, the high prevalence of cancer risk factors among the age groups studied could lead to high CM.
We found that the five leading causes of cancer deaths contributed more than 50% of cancer deaths in both working-age and elderly populations, regardless of sex. The National Department of Health in SA has prioritized cancers of the lung, colon, cervix, prostate, and breast based on the high incidences of these cancers (33). High incidences of these major cancers have also been reported in sub-Saharan Africa (42). Based on the findings of our study, it is suggested that all of the top five cancers in each sex should be given priority when formulating interventions to fight cancer. Furthermore, the implementation of interventions should target both working-age and elderly populations. In SA, given that cancer treatment is expensive, there should be more financial investment in cancer care and educational campaigns at a local level to avoid the diagnosis of this disease at an advanced stage. Investments in cancer research that can produce a return on investment could help drive policy (47). Addressing the inequalities within provinces can yield positive results in the fight against cancer. This can reduce the pressure caused by in-migrants seeking care in the cancer centers of other provinces. A comprehensive cancer surveillance system may be crucial for reducing cancer morbidity and mortality in SA (48). It is important to emphasize that the PMRs were used to estimate whether the proportion of deaths due to cancer in residents of specific districts increased or decreased across time. This study aims not to draw conclusions about districts but rather to identify whether these districts serve as risk markers for CM. This information could necessitate further studies to investigate the factors contributing to geographic variation in CM. This study has some strengths and limitations. One of its strengths is the utilization of an underutilized database to compute and compare PMRs by six age-sex groups per district per year in SA. The number of cancer deaths alone may not indicate the seriousness of this disease; hence this study used the totals of deaths from all causes as the denominator. An advantage of PMRs is that they do not require the use of mid-year population estimates as a denominator. Therefore, there was no interpolation of population totals, which could have resulted in inaccurate mid-year population totals at the district level. Furthermore, interpolation of population totals could have affected the computation and interpretation of some PMRs, especially for small and sparser districts. An aspect of the study that might be considered a limitation is the lack of finer age group categories to evaluate cancer mortality, especially within the 40-64 age range. Another limitation is that PMRs are sensitive to changes in detection or diagnostics and may not accurately reflect cancer risks.
Conclusions
This study has shown the patterns and geographical distribution of CM for different sex and age groups at the district level. This adopted approach has helped identify common and different patterns in geographical and temporal distributions of PMRs for cancer in SA. This may give evidence of periods and areas presenting high or low CM and, therefore, will significantly contribute to understanding the past, present, and future trends of CM in different areas to formulate a more focused policy at a local level. The identification of districts presenting high or low CM would be an interesting area for further research to elucidate the factors driving these patterns.
Data availability statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Human Research Ethics Committee (Medical) of the University of the Witwatersrand (M190546). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
MLN conceptualized the study, designed the study, analyzed the data, compiled the results, interpreted the results, and prepared the first draft of the manuscript. IE and EM contributed to concept development and revisions to the manuscript. All authors contributed to the article and approved the submitted version. | 2023-06-19T13:07:52.071Z | 2023-06-19T00:00:00.000 | {
"year": 2023,
"sha1": "83514b43246fe7323b8716590bdbf7cd56c9b427",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "83514b43246fe7323b8716590bdbf7cd56c9b427",
"s2fieldsofstudy": [
"Medicine",
"Geography"
],
"extfieldsofstudy": [
"Medicine"
]
} |
32706074 | pes2o/s2orc | v3-fos-license | Biomechanical comparison using finite element analysis of different screw configurations in the fixation of femoral neck fractures
In this research, the biomechanical behaviors of five different configurations of screws used for stabilization of femoral neck fracture under axial loading have been examined, and the best configuration has been investigated. A point cloud was obtained after scanning a human femoral model with a three-dimensional (3-D) scanner, and this point cloud was converted to a 3-D femoral model in Geomagic Studio software. The femoral neck fracture was modeled in SolidWorks software for five different configurations: dual parallel, triple parallel, triangle, inverted triangle, and square, and computer-aided numerical analyses of the different configurations were carried out with the ANSYS Workbench finite element analysis (FEA) software. For each configuration, the mesh process, loading status (axial), boundary conditions, and material model were applied in the finite element analysis software. Von Mises stress values at the upper and lower proximity of the femur and in the screws were calculated. According to the FEA results, it was particularly advantageous to use fixation in the triangle configuration. The lowest values were found to be 223.32 MPa at the lower proximity, 63.34 MPa at the upper proximity, and 493.24 MPa in the screws for the triangle configuration. This showed that this configuration creates minimum stress at the upper and lower proximity of the fracture line. Clinically, we believe that the lowest stress values, which are created by the triangle configuration, represent the most advantageous method. In clinical practice, it is believed that using more than three screws does not provide any benefit. Furthermore, the highest stresses were as follows: at the upper proximity, 394.79 MPa in the triple parallel configuration; at the lower proximity, 651.2 MPa in the square configuration; and in the screws, 2459 MPa in the inverted triangle configuration.
Introduction
In our daily life, people can be faced with undesired traumas. As a result of these traumas, femoral neck fractures may occur in the skeletal system. Femoral neck fractures are serious traumas that can lead to pneumonia, pulmonary embolism, or death. Therefore, accurate and stable fixation of these fractures is necessary. Different configurations of screw fixation are used for stabilization of femoral neck fractures. The question of the best fixation type in surgical treatment of femoral neck fractures is still a subject of debate today (Basso et al., 2012; Deneka et al., 1997; Filipov, 2011; Martens et al., 1979; Zdero et al., 2010). In general, although screw fixations of these fractures have been described as appropriate, there are only a few studies that contain biomechanical evidence regarding which configuration or how many screws result in better stabilization (Audekercke et al., 1979; Cody et al., 2000; Martens et al., 1979; Springer et al., 1991; Swiontkowski et al., 1987). In clinical practice, morbidity related to the fixation of a femoral neck fracture might be due to the configuration of screws that is used; a superior fixation will certainly create less morbidity (PJ, 1996). Today, biomechanical analyses are performed using finite element analysis (FEA) in many areas of medicine. In this study, we aimed to investigate the most stable fixation practice by applying five different screw configuration types to a femoral neck fracture model with the FEA method.
Computer-aided design (CAD) and finite element analysis
The human femoral model used in this study was scanned using a 3-D scanner, and a point cloud was obtained. This process is called reverse engineering (RE). The RE process is used to copy complex shapes and designs with special software and hardware. This method shortens the design process of a product; in particular, it is very important when the CAD models of products have been lost. In the traditional production sequence, reverse engineering typically starts with measuring an existing object so that a solid model can be deduced in order to make use of the advantages of CAD/CAM/CAE technologies (Várady et al., 1997). A 3-D model of the femur was then created from the point cloud data with the Geomagic Studio 10 program. The scanned data were taken as point cloud data in STL format and converted to Parasolid format using the Rapidform program. The Parasolid file was opened in the SolidWorks program, and a femoral neck fracture was created on the 3-D femur model as shown in Fig. 1. In order to stabilize the femoral neck fracture, 6.5 mm cancellous bone screws (6.5 mm diameter with 16 mm thread) were used in this study. The screws were scanned (Fig. 2) using the 3-D scanner and modeled in the SolidWorks program. Finally, five different configurations - dual parallel, triple parallel, triangle, inverted triangle, and square (Fig. 3) - were created using this femoral neck fracture model, as shown in Fig. 4.
Computer-aided numerical analysis of the five fixation configurations was performed using ANSYS Workbench software. The 3-D CAD models of the five configurations (Figs. 3 and 4) were imported into the ANSYS Workbench software to prepare the FEA. Loads, boundary conditions, and material models were defined in ANSYS Workbench.
Loading and boundary conditions
The mesh process was performed using hex-dominant finite elements for the FEA modeling after importing the five different 3-D configuration models into the ANSYS Workbench software. The FEA model has 92 552 nodes and 33 117 elements. The mesh density for the femur was set to 4 mm and for each screw to 1 mm, as shown in Fig. 5a. A load of 350 newtons (N) in the axial direction was applied to the femoral head, and the model was fixed at the distal condylar articular face, as shown in Fig. 5b. Contact between the bone-bone and screw-bone interfaces was defined as frictional. Friction coefficients were taken as 0.46 for the bone-bone interaction and 0.42 for the screw-bone interaction (Goffin et al., 2013). Finally, convergence analysis was conducted as shown in Fig. 6. Force convergence is commonly used in non-linear analyses; if the solution does not converge, there are one or more problems, and the purple line on the convergence graph should follow the cyan line in order to obtain a good solution. This convergence behavior depends on the model setup. The material properties used are given in Table 1. Titanium was selected for the screws, and its mechanical properties were taken from the ANSYS Workbench Material Library. A linear isotropic material model was used for the mechanical behaviors of bone and screws, and the human femur was modeled as cortical bone. Müller et al. (1991) mentioned that the anisotropy of bone, i.e., its different mechanical properties along different axes, does not play a major role in internal fixation and will therefore be neglected here.
Results
After inputting the loading and boundary conditions, the FEA was solved. According to the FEA results, the maximum stress values at the upper and lower proximity of the femoral bone and in the screws are given in Table 2. These stress values have been evaluated according to the von Mises criterion. According to this criterion, the von Mises stress is an equivalent or effective stress at which yielding is predicted to occur in ductile materials; if the equivalent stress exceeds the yield stress of the material, yielding occurs at that point. In most of the literature, this stress is derived using principal axes in terms of the principal stresses (Fig. 7) as in Eq. (1). In later editions, the von Mises stress with respect to multi-axial stresses (Fig. 8) can also be expressed as in Eq. (2) (Jong and Springer, 2009).

σ_v = sqrt{ [(σ₁ − σ₂)² + (σ₂ − σ₃)² + (σ₃ − σ₁)²] / 2 }  (1)

σ_v = sqrt{ [(σ_xx − σ_yy)² + (σ_yy − σ_zz)² + (σ_zz − σ_xx)² + 6(τ_xy² + τ_yz² + τ_zx²)] / 2 }  (2)

As shown in Table 2, the results under axial loading are notable. The lowest stress values in the bone and screws occurred in the triangle configuration, as follows: 63.34 MPa at the upper proximity of the femur, 223.32 MPa at the lower proximity of the femur, and 493.24 MPa in the screws. The highest values occurred as follows: 394.79 MPa at the upper proximity of the femur with the triple parallel configuration, 651.2 MPa at the lower proximity of the femur with the square configuration, and 2459 MPa in the screws with the inverted triangle configuration. Figure 9 shows the stress values under axial loading at the upper proximity of the bone for the different configurations, and Fig. 10 shows the values for the lower proximity of the bone. Lastly, the stress distributions occurring in the screws are shown in Fig. 11.
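For readers who want to verify the equivalent-stress computation outside the FEA package, the sketch below evaluates Eq. (2) from a full 3x3 stress tensor in Python/NumPy. The example stress state is hypothetical and is not taken from the paper's model.

```python
import numpy as np

def von_mises(s: np.ndarray) -> float:
    """Von Mises stress from a symmetric 3x3 Cauchy stress tensor (MPa)."""
    sxx, syy, szz = s[0, 0], s[1, 1], s[2, 2]
    txy, tyz, tzx = s[0, 1], s[1, 2], s[2, 0]
    return float(np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2
                                + (szz - sxx) ** 2)
                         + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)))

# Hypothetical stress state at a node near the fracture line (MPa).
sigma = np.array([[180.0, 40.0, 10.0],
                  [ 40.0, 90.0, 25.0],
                  [ 10.0, 25.0, 60.0]])
print(f"von Mises stress = {von_mises(sigma):.1f} MPa")
```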
Discussion
Comparisons of different configurations of the screws used for stabilization of femoral neck fracture have been studied biomechanically (Baitner et al., 1999; Holmes et al., 1993; Husby et al., 1987, 1989; Selvan et al., 2004). Reviewing the literature, several studies report that the ideal configuration of screws is two proximal screws and one distal screw in the inverted triangle configuration (Kyle, 2009; Ly and Swiontkowski, 2009; Melvin and Happenstall, 2009; Schmidt et al., 2005). On the contrary, another study reports that the triangle configuration is superior to other configurations (Selvan et al., 2004). It has also been stated that using three or four screws provides similar stability in the femoral head (Kyle, 2009; Lavelle, 2003; Ly and Swiontkowski, 2009; Schmidt et al., 2005; Swiontkowski, 2003). In a review article published by Wu (2010), it is mentioned that the inverted triangle is commonly recommended. Our results, however, indicate that the triangle configuration provides better stabilization, since the distribution of loads occurs first on the upper screw and then transfers to the two lower screws in a controlled manner in the triangle configuration. This situation can be explained biomechanically by the pyramid structure.
It is thought that as long as the forces are transferred down in a controlled manner in the pyramid, the material is not deformed. One of the most important points is the type of triangle - whether it is isosceles or equilateral. The stress values with this kind of configuration are equally distributed through the system. In line with this, the triangle configuration created minimum stress values in our study. In clinical practice, low stress means a safe fracture line and fixation technique. Stress values were found to be high on the fracture line and screws in the dual parallel, triple parallel, inverted triangle, and square configurations. We think that the reason for this is the asymmetrical and unbalanced distribution of the stress. We believe that the triangle configuration is superior for the stabilization of femoral neck fractures, and the biomechanical evidence for this can be explained by the pyramid theory. In our opinion, using more than three screws would be harmful for stabilization in clinical practice.
Conclusions
In this research, the biomechanical behaviors of five different screw configurations (dual parallel, triple parallel, triangle, inverted triangle, and square) applied for the stabilization of femoral neck fracture were investigated. The stress values at the upper and lower proximity of the femur and in the screws under axial loading were analyzed, and the most advantageous configuration was determined. According to the FEA results, fixation in the triangle configuration is more advantageous. Additionally, in clinical practice, it is thought that the use of more than three screws for stabilization will not be beneficial. We think that further biomechanical studies are needed to improve the safety of stabilization of femoral neck fractures treated with different screw configurations.
Figure 2. The scanning of the screw using a 3-D scanner.
Figure 3. Schematic view of five different configurations for femoral neck fracture.
Figure 4. 3-D view of five different configurations for femoral neck fracture.
Figure 9. Stress distribution occurring at the upper proximity of femoral bone under axial loading: (a) dual parallel, (b) triple parallel, (c) triangle, (d) inverted triangle, and (e) square.
Figure 10. Stress distribution occurring at the lower proximity of femoral bone under axial loading: (a) dual parallel, (b) triple parallel, (c) triangle, (d) inverted triangle, and (e) square.
Table 1. Mechanical properties of bone and screw used in the FEA (Yuan-Kun et al., 2009).
Table 2. The stress values in bone and screws. | 2017-11-07T00:00:42.845Z | 2015-09-09T00:00:00.000 | {
"year": 2015,
"sha1": "58909fd820dc53692760bfb0d92bb600c42365b7",
"oa_license": "CCBY",
"oa_url": "https://www.mech-sci.net/6/173/2015/ms-6-173-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "58909fd820dc53692760bfb0d92bb600c42365b7",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Engineering"
]
} |
55660640 | pes2o/s2orc | v3-fos-license | The Rise Of BIM in Malaysia And Its Impact Towards Quantity Surveying Practices
Building Information Modeling (BIM) is a new buzzword that is gaining momentum within the construction industry in Malaysia and worldwide. BIM is a way of working that involves the automation of the entire project team using a 4D model that can perform many of the traditional functions of a Quantity Surveyor (QS). As BIM removes more of the traditional work of the QS, there are rising fears that it could threaten the viability of the QS profession, especially for small firms that still rely on producing bills of quantities. This paper aims to review the challenges and opportunities arising from BIM advancement in the QS profession, especially in Malaysia. Despite serious cost and commercial challenges of BIM implementation, BIM will work wonders as a tool that cuts the amount of time QS have to spend on analysis and provides opportunities to direct QS talents towards greater efficiency. This results in better project coordination with far less conflict and a more sustainable building, both in the final instance and throughout the process, as less time is spent on each stage.
Introduction
The construction sector accounts for 12.2% of world Gross Domestic Product (GDP), and the Global Construction 2025 report projects that 70% more construction work will be going on around the world (Branson, 2013). In Malaysia, the construction sector grew by 8 to 10%, with a 3 to 5% share of local GDP (Bank Negara Malaysia, 2015), indicating its significant role in growth both globally and locally. Thus, it is necessary for the construction industry to undergo evolution in the era of globalization with the aim of achieving international competitiveness. Owing to its complex nature and the involvement of multidisciplinary parties, the construction industry faces persistent cost, time, and quality issues. Nevertheless, Building Information Modeling (BIM) technology has emerged as one of the tools for meeting the objective of escalating productivity in construction projects.
BIM gained its popularity due to its efficiency benefits in terms of time and cost savings, and it increases coordination in information exchange. It is a new approach being adopted in managing construction project life cycle activities, from project design and construction to facilities management (Eastman et al., 2011). BIM is defined as a digital representation of the physical and functional characteristics of a facility through a shared knowledge resource for information, forming a reliable basis for decisions during its life cycle (Smith, 2014). Starting with the use of BIM for architectural design, the conceptual underpinnings of the BIM technology go back to the earliest days of computing, as early as 1962 (Rodrigues, 2015). Today, BIM has become ubiquitous in the design and construction fields. In developed regions, the percentage of companies using BIM jumped from 28% in 2007 to 49% in 2009 and to 71% in 2012 (McGraw-Hill Construction, 2012).
As architectural, engineering, and surveying practices are the main players in the BIM working environment, their slow adoption can be a significant indicator of the slow uptake of BIM in the Malaysian construction industry as a whole.
In any project, cost and program are key performance indicators, in addition to quality. With the advent of parametric modeling, quantity surveyors (QS) play a significant role in contributing these two fundamental parameters to the modeling process from the outset and can add the most value from the earliest stage. With BIM, the QS can provide detailed and accurate estimates, automate measurement, speed up the traditional estimating process, and better capture, manage, and deliver project information.
Despite its considerable benefits, BIM uptake among QS is still slow in Malaysia. There are also fear of and resistance to the changes brought by BIM within the surveying practice. QS resist accepting the changes because they suppose that BIM can perform most of the traditional QS job scope and will eventually replace the QS (Matthews, 2011; RICS, 2011; NBS, 2012), creating a situation in which the QS is no longer needed in the construction market. This situation motivates the aim of this study, which is to provide a perspective on the future of BIM in quantity surveying practice by reviewing the concerns of Malaysian QS and the opportunities that await.
Brief History of BIM in Malaysia
The adoption of BIM in Malaysia was driven primarily by the private sector since the early 2000s. However, BIM only became a buzzword when the Jabatan Kerja Raya (Public Works Department, PWD) introduced BIM in construction project planning for public works in 2007 through their BIM Standard Manual and Guideline (Latiffi et al., 2013). In an effort to foster a BIM environment in the Malaysian construction industry, the Construction Industry Development Board (CIDB) has formed the BIM Steering Committee, BIM Roadmap, and BIM Portal, involving policy makers, practitioners, and academicians (CIDB, 2016). In order to help and promote the potential and benefits of BIM, PWD, CIDB, and Multimedia Super Corridor Malaysia also offer initiatives, namely roadshows, seminars, and workshops, on top of affordable training programs (Latiffi et al., 2014). The Construction Research Institute of Malaysia (CREAM, 2012) reported on the earliest BIM projects in Malaysia. The National Cancer Institute of Malaysia in Sepang was the first project to involve the application of BIM in Malaysia, followed by other projects under the BIM Pilot program such as the Healthcare Centre Type 5 Pahang and the Administration Complex of Suruhanjaya Pencegah Rasuah Shah Alam. In these pilot projects, BIM was utilized for site modeling, visualization, design review, clash analysis, 4D schedule and simulation, and record modeling (Latiffi et al., 2014). Recently, BIM in Malaysia is more likely to be used for complex and high-risk projects (PWD, 2011).
Concept and the use of BIM
BIM is defined as a modelling technology and an associated set of processes to produce, communicate, and analyze digital information for construction purposes (Abdullah et al., 2014). Fundamentally, BIM is not simply a type of software but an integrated approach combining human activity and relevant software to produce a digital representation of the physical and functional characteristics of a building.
BIM applications span the entire life cycle of a facility. BIM is applied in the project planning, design, pre-construction, construction, and post-construction (operations and maintenance) stages. In the planning stage, the project team can utilize BIM in analyzing space and understanding the complexity of land regulations and space standards, saving time and allowing the team to create more value-added activities (Azhar, Khalfan, & Maqsood, 2012). In this phase, advanced 3D laser scanning equipment is used to scan the site to determine the position of existing utilities accurately and then integrate them into the BIM model. On top of the intensive use of BIM in design, the user can perform quantity surveys and obtain accurate detailed estimates measured directly from the 3D models. The user can plan site logistics and identify potential hazards on site, thus helping in the preparation of the safety plan.
Project managers can benefit from the scheduling and monitoring functions with the help of 4D and 5D models. In the post-construction stage, BIM is used for transferring the building data (spaces, systems, and components) into facility management operations, keeping track of building assets, enabling scheduled maintenance, and providing information on building maintenance history. All the information on the functional and physical characteristics of a building and the life cycle of the project is included in BIM as a series of smart objects, providing better decision support in the process of project development (Azhar et al., 2008). Besides, BIM allows fast and accurate updating of changes, whereas in conventional CAD, the plans, sections, and elevations are used to describe the building as independent views. Furthermore, the man-hours needed to establish reliable space programs in BIM are reduced compared to the conventional technology. BIM also enhances effective communication between the parties involved in the project development. For example, information such as the details of procurement, drawings, submittal processes, and other specifications can be linked together in BIM, whereas the conventional approach relies on paper-based modes of communication. BIM increases confidence in the completeness of the scope. This can be done through a clash analysis to detect conflicts, collisions, and interference, since the model is formed at scale in 3D space, so the user can visually check the interference in all the systems.
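As a rough illustration of the idea behind clash detection, the sketch below runs the coarse first pass that BIM tools typically apply before exact geometry checks: testing whether the axis-aligned bounding boxes of two elements overlap. The element names and coordinates are invented, and production tools work on full model geometry rather than simple boxes.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a building element (min/max corners, metres)."""
    name: str
    mn: tuple[float, float, float]
    mx: tuple[float, float, float]

def clashes(a: Box, b: Box) -> bool:
    """Two boxes interfere if their extents overlap on every axis."""
    return all(a.mn[i] < b.mx[i] and b.mn[i] < a.mx[i] for i in range(3))

duct = Box("HVAC duct", (0.0, 0.0, 2.6), (6.0, 0.4, 3.0))
beam = Box("Steel beam", (2.0, -0.2, 2.8), (2.3, 0.6, 3.2))
if clashes(duct, beam):
    print(f"Clash detected: {duct.name} vs {beam.name}")
```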
BIM in Quantity Surveying Practice
Quantity surveying is vital in the construction field for controlling and managing cost throughout the project life cycle. Traditionally, taking off and producing bills of quantities demanded long working hours and effort, with a high risk of human error. As building works become more and more complex and BIM develops into the mainstream in the construction industries of developed countries, BIM is expected to be embraced by QS firms to boost effectiveness in terms of the cost and value of construction processes (BCIS, 2011). Quantity surveying requires immense knowledge and the skillful interpretation and practical use of that knowledge (Vineeth & Jenifer, 2014). Implementation of BIM brings opportunity to the quantity surveying field in its cost management functions.
The Bill of Quantities (BQ) is an important tool for carrying out cost management in construction projects. BIM technology provides a fifth dimension (5D) which can automatically produce the BQ. The collaboration and integration features of BIM in the automatic production of the BQ are an enhancement to the BIM technology. This is effective in eliminating the lengthy and tedious traditional taking off as well as reducing human error. BIM is able to extract exact quantities and areas that are used to estimate cost at any time during the project design stage. In addition, the QS is able to recognize and relate how the total project cost is made up by each element or space of the building, as BIM eases the identification of the relationships between quantities, locations, and costs. This enhances the competency of the QS through comprehensive knowledge of cost determinants, which in turn generates cost estimates with high accuracy and reliability. Any changes in the design are automatically reflected in the quantities drawn from the modified model; the QS does not need to redo the taking off as in the traditional method (see the sketch below). Integrating cost estimating with the BIM design engine allows the project team to execute value management efficiently during the design phase. BIM is capable of providing more intensive and detailed drawings compared to traditional 2D drawings, where misunderstandings and wrong assumptions may be made. More comprehensive construction information and a more precise BQ can diminish the gaps among the project team members. By understanding how to extract quantities from the building model, bidders are able to bid with intensely competitive prices.
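The following minimal sketch illustrates the idea in the paragraph above using toy data structures (the wall records and item names are illustrative assumptions, not taken from any real BIM tool or from this paper): quantities are derived from model elements, so a design change followed by a re-run of the take-off updates the BQ with no manual re-measurement.

```python
# Minimal sketch with toy data (illustrative only, not from any real BIM tool):
# BQ quantities are derived from model elements, so editing the model and
# re-running the take-off updates the BQ without manual re-measurement.
walls = [
    {"type": "Brick wall 115 mm", "length_m": 12.0, "height_m": 3.0},
    {"type": "Brick wall 115 mm", "length_m": 8.5, "height_m": 3.0},
    {"type": "Block wall 200 mm", "length_m": 20.0, "height_m": 3.0},
]

def take_off(elements):
    """Group element areas into bill-of-quantities items (m2 per wall type)."""
    bq = {}
    for e in elements:
        area = e["length_m"] * e["height_m"]           # measured from the model
        bq[e["type"]] = bq.get(e["type"], 0.0) + area  # accumulated per BQ item
    return bq

print(take_off(walls))       # initial bill of quantities
walls[0]["length_m"] = 15.0  # a design change in the model...
print(take_off(walls))       # ...re-running the take-off updates the BQ
```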
Through the 5D feature of integrating cost estimation with the building model, the estimating job of a QS has become simple and straightforward compared to working from conventional paper drawings. With BIM, the construction field is heading toward paperless construction.
Methodology
This study was conducted during September to December 2015 to retrieve articles related to BIM and QS issues in the literature. No specific keywords were required as inclusion criteria, since a relatively small number of studies exist on the topic. Articles were retrieved from diverse platforms, including journals, technical reports, and news. The reference lists of each article were reviewed in detail to find additional articles. A group of reviewers (n=46) independently read the articles in full text and recorded the main findings on the challenges and opportunities of BIM in quantity surveying in Malaysia, which were subsequently confirmed by certified Malaysian QSs (n=3).
Challenges of BIM for Quantity Surveying Practice
Given the current rate of BIM adoption among QS firms, there are clear challenges in adopting BIM, as it is relatively new to surveying practice. They are spelled out as follows:
Teamwork and collaboration
While BIM offers new ways for the project team to cooperate, some problems may arise among them. For instance, an issue may occur when the method for sharing model information among team members is being determined. If traditional drawings on paper have been used by the architect, then a third party will have to construct a model to be used for the estimating and planning of the construction project.
In addition, if the team members use a wide variety of modelling tools, then the project may require further tools to move the models between different environments or to combine them, which may add complications or introduce errors into the project. To overcome this issue, the Industry Foundation Classes (IFC) standard can be adopted when data is exchanged throughout the project life cycle (a minimal illustration follows below). Alternatively, team members can use a model server that enables the exchange of information between BIM applications through the IFC standard.
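As a minimal illustration of the IFC idea, the sketch below uses the open-source ifcopenshell Python library and a hypothetical file name; neither is mentioned in the source. The point is that, because IFC is a vendor-neutral schema, any IFC-aware tool can read the same model elements regardless of which authoring tool exported them.

```python
# Minimal sketch (assumptions: the open-source ifcopenshell library and a
# local file "project.ifc"; neither comes from the source text).
import ifcopenshell

model = ifcopenshell.open("project.ifc")
for wall in model.by_type("IfcWall"):
    # GlobalId and Name are standard attributes of rooted IFC entities,
    # so they survive the round trip between different BIM tools.
    print(wall.GlobalId, wall.Name)
```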
Legal issue
Challenges in BIM implementation may arise when questions of ownership and responsibility for the various designs, analyses, and datasets are raised among the team members. It is a challenge to determine who is to pay for the model and who will be in charge of its precision and accuracy. Experts raise these questions whenever BIM is used in their projects. Building owners, once they educate themselves about the benefits of BIM, will require a building model to enhance maintenance, renovations, and operations.
Use of information and changes in practice
Firms that can coordinate the design phases and contribute in-depth knowledge about construction will gain the greatest advantage, as BIM implementation in a project stimulates the assimilation of construction-related information and knowledge from the beginning of the design phase. Furthermore, contracting arrangements that prioritize excellent cooperation will benefit the owners more when BIM is applied in a project. In addition, when BIM is applied in a project, a building model is constructed and shared among the project team as the basis of every work process in the project. Knowledge and time are two key components needed to facilitate this change faced by companies, as they are required for every important technological transformation.
Implementation issue
To effectively start using BIM, firms need to upgrade their hardware, obtain the necessary software, and require their employees to undergo the correct BIM training in order to transform a 3D environment into a complete BIM system. The firm has to comprehend every detail and plan the implementation carefully before any transformation can take place. In addition, the firm can appoint managers to head a team responsible for planning the BIM implementation, prioritizing budget, cost, and time as their guidance. An adoption plan could also be developed to predict how the future implementation changes will impact the firm's partners and clients, and how they will affect its internal departments.
Financial and time considerations
To implement BIM successfully, a firm needs to periodically upgrade its hardware in order to run the processing software. The requirement for a large technology component to implement BIM has clearly become a challenge for the surveying industry in Malaysia. However, if all companies procure the same software from similar vendors, the difficulty of operating disparate hardware and software need no longer trouble industry players. Fixing any technical issues around BIM adoption will therefore also have a financial impact on the firm.
According to Autodesk (2013), the price of Building Design Suite Premium, an entry-level software suite for BIM, is US$6,825. With the exchange rate at about US$1.00 to RM4.21, that puts the price in Malaysia at around RM28,749.00. Furthermore, this figure covers only the purchase of the most basic BIM software. The firm also has to consider the cost of training its employees, and even of hiring new workers equipped with BIM knowledge and skills, as it moves toward internalizing a new working environment. It follows that adopting BIM requires the firm to make a large financial investment, such that only large organizations can afford the costly technology.
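As a quick arithmetic check of this conversion (a worked calculation only; the exact rate the authors applied is not stated):

$$\$6{,}825 \times 4.21\ \mathrm{RM/\$} \approx \mathrm{RM}\,28{,}733,$$

which is close to the RM28,749.00 quoted above; the small gap suggests a slightly higher rate, roughly RM4.212 per US dollar, was used before rounding.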
Transforming a company into one that accepts a new technology such as BIM consumes a great deal of time, which is a challenge because construction projects are highly time-sensitive. In Malaysia's construction industry, BIM experts are still very few in number. Hence, firms need to allocate time and money to find the best experts to assist them in BIM implementation. In addition, firms that decide to send their workers for training in BIM technology must also consider the additional time and cost needed for them to learn and master the processes entirely.
Process change
For an organisation to adopt BIM fully, there will be essential, fundamental changes to its operational processes. Once a company successfully adopts BIM, the design process is brought forward to the start of the project. It will also be completed sooner, and to a higher level of completion, than under traditional processes. Furthermore, BIM experts consider Autodesk, Bentley, and Nemetschek to be among the interoperable technology platforms, even though they do not run entirely smoothly together and this view has met some objections.
Human resource and organisational issue
For any organisation to fully adopt BIM in its projects, its human resources will need to undergo a complementary change in skills alongside the essential process change within the organisation. There will also be a simultaneous increase of competency across the project's supply chain, including the contractors, designers, and developers.
The people working in the organisation can pose another challenge. When key people in the firm are reluctant to accept new tools and technology, it becomes much harder to change their behaviour toward liking and accepting them. When staff resist and show reluctance toward BIM, it becomes a major challenge for top management to persuade them to accept the change. Their reluctance to learn something new or to try new software and technology will be the hardest obstacle that top management must overcome for BIM to be implemented successfully.
The role of top management is the most crucial, as they have to decide the direction and plans of action for the organisation to adopt the new technology while reducing employee resistance to change. Employees will be motivated when they witness the organisation's dedication to fulfilling its responsibilities seriously. Motivating employees in this way helps decrease their reluctance to accept the change and encourages them to be more confident in trying out the IT applications themselves.
In addition, some organisations are afraid to take the risk of changing their business processes by adopting BIM. They worry about the great uncertainty that may affect their established firms, as well as the large cost they have to bear. Managing a major technological change can also pose a challenge for managers. Staff may feel threatened or anxious about the new technology that comes with BIM, fearing that their roles and jobs in the organisation will be completely taken over by the software and hardware that BIM requires in order to function. Hence, managers must know how to handle the technological change in the organisation.
Furthermore, another challenge faced by organisations is the belief that their well-recognized firms will be affected by the new technology that comes with BIM implementation. When a new technology is adopted, the business processes of a firm are also affected, which can impact the firm's rate of output. This, in turn, increases the risk around producing the outcomes of a project and meeting the client's preset goals.
Professional support
In Malaysia, the Government constrains and mandates the limits of Engineering Consulting Services (ECS).
Even though BIM is known to provide boundary-free exchange of electronic data, the management of professions, trades, and industries remains fixed within cultural and geographical boundaries, as well as political identities. There have been some movements in the region to break down these barriers within the ASEAN community. However, apart from complex projects, most local companies are still operated by local staff following local guidelines and mandates. Hence, potential barriers to adopting BIM in Malaysia's ECS have been identified in various countries, as well as in Malaysia itself. For BIM implementation to be successful, there must therefore be consultants or professionals equipped with the skills and expertise to support it.
Technical challenges
There are obvious reasons why BIM adoption in Malaysia has not been as widespread as anticipated. Adopting new information technology (IT) in the industry is hard for technical reasons as much as for social ones. Problems such as the lack of a support system, the difficulty of understanding complex software, and the low number of skilled technical experts are among the reasons.
Furthermore, BIM may have been accepted at a low rate because of the need for detailed and precise models to fix the issue of interoperability. In addition, for important information to be communicated and integrated successfully among the components, excellent practical strategies need to be developed, which could be another technical reason why BIM has not been the choice for some firms. Firms also have to ensure that every data item developed is computable, as this is part of BIM's technical requirements.
Management
Currently, there are no specific directions or orders on ways to use or implement BIM, and no clear instructions on how to apply BIM in construction practices. Even though a few software companies have developed products to help with BIM implementation, they have only managed to cover aspects of quantity rather than implementing it as a whole. Therefore, for BIM to be adopted by firms easily, clear and specific guidelines must be provided, and the steps to implement BIM must be standardized across firms. Another management challenge is the question raised among stakeholders of who shall be responsible for developing the models and how the costs will be distributed. To resolve these issues, solutions have to be created jointly by researchers, vendors, and professional organizations so that the use of BIM will increase in the Malaysian industry.
Before BIM, the facilities manager had very limited influence on the planning process of a building and could only introduce maintenance strategies after the owner had taken possession. With the implementation of BIM, facilities managers can be included in the design and construction stages. With BIM's visual capability, stakeholders such as tenants, maintenance staff, and service agents can obtain any information they might need even before the completion date. The biggest challenge falls to the owners, who have to plan wisely for the right time to bring all the different parties into the early stages of the building.
Opportunities for Quantity Surveyors
The competitiveness of the QS is greatly improved. As cost estimation can be performed through quantity extraction at any stage of the design (Boon, 2009), the efficiency of a QS is greatly enhanced. For example, in the conventional way, a QS may need a couple of days to finish the take-off for complicated earthworks, whereas extracting the quantities from BIM takes less than an hour. When variations arise, the QS can re-extract the quantities from the modified model and avoid tedious manual measurement. A competent, highly efficient QS stands a better chance in the surveying field. Moreover, the improved quality of estimation produces quality tenders, enabling the QS to help the company open up more business opportunities and win repeat business by winning tenders (Autodesk, 2003).
Transformation in job scope
Through the benefits BIM brings to the surveying field in terms of ease of quantity take-off (Davidson, 2008), QSs in Malaysia no longer need to spend time measuring and working out dimensions from 2D drawings. Unnecessary assumptions and human errors can also be avoided. Now, with a highly accurate BQ produced in a very short time, the QS is freed up to do more cost management rather than manual taking off and adding up. There is no doubt that the QS still needs basic measurement skills to make use of BIM technology; however, QSs now save more time, are no longer wetware calculators, and can think more about the client's needs and wants. There are many aspects that a QS needs to cover.
QSs not only provide advice on construction methods and materials; they also handle legal issues. They must also focus on feasibility studies, value management, cost planning, tendering, and dispute resolution (Gee, 2011). This opens the gate for Malaysian QSs to be involved in construction projects more comprehensively, rather than focusing mainly on measurement as in current industry practice. This points toward the QS role found in the construction industries of developed countries, the United Kingdom being the best example.
Re-branding of Quantity Surveyors
With BIM, it is an era for Malaysian QSs to move away from just the BQ: their job is being re-branded from a profession that only prepares BQs to that of higher-end consultants. In future, BIM will turn QSs into a profession doing cost management, in terms of cost modelling, and acting as general advisors in the pre-contract stages. The QS is no longer branded merely as the "Construction Industry Accountant", but also as the construction industry's legal specialist, handling legal issues in the quantity surveying field such as dispute resolution and giving legal advice on construction contract matters: the procedures and situations in which the contractor is entitled to a variation claim, the liability and risk allocation of each party in the contract, and so on. QSs' proactive, knowledge-based advice, taking all the aspects and situations of a project into consideration, is important to the client (Verster, 2006). Beyond that, by eliminating the lengthy quantity take-off through BIM, QSs can be more engaged in construction management, especially value management and risk management. QSs can also be involved in giving advice on safety and health as well as quality control (Cunningham, 2014). Hence, BIM provides the opportunity for QSs in Malaysian surveying to be re-branded from the root, from a measuring profession into one that adds value to a project. BIM helps Malaysian surveyors handle numbers, measurements, and the rigid measurement rules with ease, and frees them to think more about strategy using critical thinking.
Globalization
BIM opens the gate for QSs in Malaysia to share a common communication platform. In the conventional way of current Malaysian surveying practice, QSs use the Standard Method of Measurement of Building Works, Second Edition (SMMS), as their reference for measurement rules. Different standard methods of measurement are used in different places, creating a communication barrier between QSs across regions. According to Heiskanen (2014), the first international survey, carried out in 2013, revealed that consultants that did not adopt BIM struggled to secure international projects. With BIM software easing their work regardless of background, Malaysian QSs do not face difficulties when engaging in international projects with surveyors from different backgrounds who have learnt different standard methods of measurement. This is one of the contributing factors to the success of local contractors in international projects, as coordination and communication are greatly improved through BIM (Broquetas, 2010).
Beyond limits
The Malaysian surveying industry produces QSs who are experts in measurement under local standard rules of measurement. When they engage in international projects or are posted to an overseas branch of the company, they require intensive training to learn the QS practices of other countries. Hence, the possibility of a QS venturing into the international market is limited, owing to the resistance to acceptance by the international surveying market, as different places have different practices.
BIM is seen as a solution to this problem. It is easier for Malaysian surveyors to venture into the international market because BIM is linked to software that enables quantity extraction. When surveyors know how to handle the software, it takes very little time for a QS to catch up with the job scope, especially in producing an accurate BQ. The other aspects, such as the legal aspect and risk and value management, involve broad knowledge with no regional boundary, as they are critical and analytical and require QSs to think outside the box.
Therefore, BIM is a tool that enables Malaysian surveyors to go beyond these limits; they are no longer bound to just one country. They can move between countries and still serve as QSs with ease; BIM frees QSs from being rooted in a single country and unable to go far. Seen from the other perspective, surveyors and experts from other places can also be brought in, so that their strengths can be adopted in the Malaysian surveying field. The relationship works both ways.
Conclusion
Given the continuous advancement of technology, the construction industry cannot deny change but has to adapt to it. BIM, as the latest trend in the construction industry, can improve the construction process in many ways, such as providing rapid visualization, better decision support, rapid and accurate updating of changes, and better and more efficient communication. All these aspects are meant to improve and secure the overall quality of the project, making local construction projects more competitive in the international construction industry. BIM does not emphasize only the design stages; it is a comprehensive technology covering the pre-construction, construction, and post-construction stages.
The challenges that come with BIM implementation affect every part of an organisation that wishes to adopt it. There are challenges in teaming and collaboration, as well as the questions of documentation ownership and production. Furthermore, implementation issues, the use of information, changes in process, and financial and time considerations are other challenges organisations must face. As people are one of the most tangible factors, the challenges will greatly affect the human resource and organisational issues of firms. The challenges an organisation may face can also be categorized into management and technical challenges. It is clear that BIM carries many issues and challenges with its implementation. However, if an organisation can manage smartly and take the right steps in overcoming them, it will get to experience the convenience and powerful technology that BIM can offer. Looking further ahead, there is a bigger picture of opportunities for Malaysian surveying practice with BIM implementation. BIM aids quantity surveying practice in preparing BQs, cost estimation, cost updating, and even the tendering process. With all this, BIM is ultimately a way for the QS to move away from the conventional, very tedious method of quantity take-off, which in turn will transform the job scope of surveyors, re-branding them as higher-end consultants. This is not the end of the opportunity, as BIM provides a platform for local surveyors to integrate internationally with other surveyors. Hence, local surveyors are not nationally restricted; in fact, they can go beyond the limit and work in internationally established companies or projects with ease. This is also a golden opportunity for local surveying practice to absorb new strengths and knowledge from other places, as BIM eases effective communication. | 2018-12-06T05:14:17.682Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "68df99148b4951c8d33eaaed8cbe7fd5eba74e56",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/29/matecconf_ibcc2016_00060.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68df99148b4951c8d33eaaed8cbe7fd5eba74e56",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
267332580 | pes2o/s2orc | v3-fos-license | Probiotics and Paraprobiotics: Effects on Microbiota-Gut-Brain Axis and Their Consequent Potential in Neuropsychiatric Therapy
Neuropsychiatric disorders are clinical conditions that affect cognitive function and emotional stability, often resulting from damage or disease in the central nervous system (CNS). These disorders are a worldwide concern, impacting approximately 12.5% of the global population. The gut microbiota has been linked to neurological development and function, implicating its involvement in neuropsychiatric conditions. Due to their interaction with gut microbial communities, probiotics offer a natural alternative to traditional treatments such as therapeutic drugs and interventions for alleviating neuropsychiatric symptoms. Introduced by Metchnikoff in the early 1900s, probiotics are live microorganisms that provide various health benefits, including improved digestion, enhanced sleep quality, and reduced mental problems. However, concerns about their safety, particularly in immunocompromised patients, warrant further investigation; this has led to the concept of “paraprobiotics”, inactivated forms of beneficial microorganisms that offer a safer alternative. This review begins by exploring different methods of inactivation, each targeting specific cellular components like DNA or proteins. The choice of inactivation method is crucial, as the health benefits may vary depending on the conditions employed for inactivation. The subsequent sections focus on the potential mechanisms of action and specific applications of probiotics and paraprobiotics in neuropsychiatric therapy. Probiotics and paraprobiotics interact with gut microbes, modulating the gut microbial composition and alleviating gut dysbiosis. The resulting neuropsychiatric benefits primarily stem from the gut-brain axis, a bidirectional communication channel involving various pathways discussed in the review. While further research is needed, probiotics and paraprobiotics are promising therapeutic agents for the management of neuropsychiatric disorders.
Introduction
Neuropsychiatry refers to the scientific and medical approach to conditions that include both neurological and psychological manifestations. It seeks to integrate neuroscience and psychiatry in the assessment and subsequent treatment of such conditions [1]. Neuropsychiatric disorders encompass a range of diseases that affect both the brain and mental health. They are highly prevalent globally, impacting individuals of all ages and backgrounds. As per reports from the World Health Organization (WHO), an estimated 12.5% of the global population is afflicted by neuropsychiatric disorders such as anxiety, bipolar disorder, major depressive disorder (MDD), attention-deficit/hyperactivity disorder (ADHD), and autism spectrum disorder (ASD). These disorders, especially those associated with erratic mood changes, have been identified as a significant cause of suicide [2]. It is evident that neuropsychiatric disorders are a prevalent and significant concern in the global health arena, and they demand attention and allocation of resources for their mitigation. Studies have linked the gut microbiota to neurological development and brain function through the gut-brain axis (GBA), suggesting that changes in the gut microbial populations could influence the development of neuropsychiatric symptoms [3]. Given this connection, beneficial microbes offer a natural therapeutic alternative to conventional drugs for treating and alleviating the symptoms of neuropsychiatric conditions. The term "bacteria" or "microbes" is often associated with the stigma that they are inherently harmful. However, this perception is not entirely accurate, as they can be both beneficial and detrimental depending on the context. While some microbes can cause infections and diseases, others play essential roles in maintaining human health and the environment. Probiotics represent these beneficial viable microbes that provide health benefits when consumed in appropriate amounts [4, 5].
Ilya Metchnikoff, a scientist from Russia, is credited with putting forth the idea of using live microorganisms to enhance human well-being in the early 1900s. He noted that individuals from Bulgaria who consumed fermented dairy products lived longer and experienced fewer illnesses, and he attributed these positive outcomes to microbes present in their fermented dairy. This concept laid the groundwork for the modern understanding of probiotics and the crucial role of the gut microbiome in health. Today, several microbes, mainly bacteria and yeasts, are considered probiotics, such as Lactobacillus reuteri, Lactobacillus rhamnosus, Lactobacillus casei, Bifidobacterium bifidum, and Saccharomyces boulardii.
However, the health benefits they provide are still being investigated by ongoing scientific studies [6, 7]. Furthermore, the exact means through which they optimize the gut microbiome and provide health benefits are still not completely understood, but evidence suggests the involvement of multiple factors such as amensalism via the release of antimicrobials, stimulating the secretion of mucus to protect the gut lining, contending with invading pathogenic microbes for adhesion sites, stabilizing gut barrier function, and inducing immunological responses, while the psycho-emotional benefits are mediated by the gut-brain axis (GBA) [8, 9].
Probiotics exist in a variety of fermented, dairy, and non-dairy food products such as yogurt, kefir, sauerkraut, kimchi, and miso, which can be included in correct portions in the daily diet; alternatively, they can also be taken separately as supplements in the form of tablets or capsules in appropriate doses. The global probiotics market is anticipated to compound at a rate of 7.5% annually until 2030, starting with a value of 58.17 billion US dollars in 2021 [10].
Despite the potential benefits, several concerns have been associated with probiotic use. Due to the widespread and inappropriate use of antibiotics, several probiotics have been shown to exhibit resistance to widely used antibiotics. Thus, the possibility of horizontal transfer of these resistance genes to pathogens, or the transfer of virulence genes from pathogens to probiotics, is a pressing issue [11, 12]. Furthermore, there have been reports of infections such as bacteremia, fungemia, sepsis, and endocarditis in immunodeficient patients and patients with underlying disorders such as AIDS and ulcerative colitis [13-15]. Moreover, metabolic effects such as D-lactic acidosis have also been observed [16]. Hence, safety assessments of probiotics, particularly concerning specific population subsets such as immunocompromised patients, are required before they can be commercially produced.
A new concept of paraprobiotics, also known as non-viable, dead, inactivated, or ghost probiotics, has been introduced to represent inactivated microbes that provide various health benefits, including immunomodulatory, anti-inflammatory, antioxidant, and anti-hypertensive effects. Paraprobiotics can be defined as non-viable microbial cells, as well as cellular components, microbial fractions, and bacterial lysates, that improve the health of the host through interactions with the gut microbiome when administered in sufficient quantity. They generally comprise peptidoglycans, polysaccharides from the microbial cell walls, cell surface proteins, and teichoic acid [17]. Substantial evidence, both in vitro and in vivo, has shown that the beneficial effects of paraprobiotics are comparable to those exhibited by probiotics, despite the former consisting of only dead or inactivated cells that lack the ability to proliferate and colonize the gut, unlike live probiotics, which can interact with the host's gut microbes and thereby regulate the composition of the gut microbiota. This seemingly counterintuitive phenomenon has been termed the "probiotic paradox" [18]. While additional research is required to elucidate the mechanism of action of paraprobiotics, studies have reported certain means by which they exert health benefits, such as host interactions mediated by dead cell components leading to an immune response, and the production of beneficial microbial metabolites including peptides and short-chain fatty acids (SCFAs) [18, 19].
The health advantages associated with both probiotics and paraprobiotics are multifaceted. These benefits include improved sleep quality, alleviation of symptoms associated with irritable bowel syndrome (IBS), and potential anti-cancerous effects, as illustrated in Fig. 1. However, this review primarily focuses on the neuropsychiatric benefits of probiotics and paraprobiotics. Paraprobiotics present several compelling advantages when compared to probiotics, particularly regarding their safety and potential to positively impact host health. As previously explained, probiotics, which contain live bacteria, could be problematic for patients with underlying disorders and carry several safety issues. In contrast, paraprobiotics do not carry such risks, since they are rendered inviable prior to administration. This characteristic enhances their likelihood of receiving regulatory approval for use as supplements or functional ingredients in food products. Another noteworthy benefit of paraprobiotics is their extended shelf life and lack of interaction with the other constituents of food products. They also maintain structural and functional stability across a wide temperature range. This simplifies their management and transportation within the food industry. Furthermore, paraprobiotics display bioactivity even when incorporated into non-dairy food matrices. This is significant because non-dairy substrates are typically challenging environments for the survival of traditional probiotics. Moreover, the capacity of paraprobiotics to remain functional in such matrices has the potential to diversify the functional food market by allowing their integration into a broader spectrum of products [20].
This review introduces the potential therapeutic effects of probiotics and paraprobiotics in the management of neuropsychiatric disorders like anxiety and MDD in light of their effect on the gut microbiota, which, in turn, affects neural functioning through the GBA. The main objectives of this review are to explain the concept of probiotics and paraprobiotics, to describe the microbiota-gut-brain axis and its significance, and finally, to highlight the potential of probiotics and paraprobiotics in treating neuropsychiatric problems. The first section of the review focuses on the different inactivation methods employed for the conversion of probiotics into paraprobiotics, namely, thermal treatment, supercritical carbon dioxide (SC-CO2) technology, high pressure, ohmic heating, sonication, ionizing radiation (IR), pulsed electric field (PEF), and ultraviolet (UV) rays. The process involved in each inactivation method is briefly explained, followed by details of its mechanism of action and examples of its application. In the subsequent section, the focus shifts to the GBA and its connection to neuropsychiatric disorders. The signaling pathways and regulatory constituents involved in the GBA, viz., the vagus nerve, immunological activity, the inflammatory reflex and neuroinflammation, microbial metabolites, and neuroactive compounds, are explored. The influence of each of these elements on the development of neuropsychiatric disorders like ASD is also discussed in this section. Lastly, the final section delves into the applications of probiotics and paraprobiotics in alleviating the symptoms of neuropsychiatric problems. The results of preclinical studies and clinical trials are delineated in order to provide the readers with an overview of the current status of ongoing research on probiotics and paraprobiotics in neuropsychiatric therapy.
Inactivation Methods
Paraprobiotics can be obtained from live probiotics through chemical means using acids and formalin, or through physical processes such as thermal treatment, ohmic heating, and ionizing radiation (IR), as highlighted in Fig. 2, with each method having a different mechanism that alters their structural properties and may have contrasting health effects. Hence, monitoring the impact of the various inactivation methods on the structural properties of the bacteria, as well as the quantitative and qualitative maintenance of probiotic properties by each of these methods, is extremely crucial [20, 22].
Thermal Treatment
Thermal treatment is highly conventional and the most widely used process in the production of paraprobiotics, since it is well developed and requires low investment costs. It involves the exposure of probiotic species to high temperatures, thereby inducing cellular damage, as explained in Table 1, consequently making them non-viable.
Thermal treatments can be classified into two categories based on the extent of heat applied and the goal: pasteurization and sterilization. Pasteurization, named after Louis Pasteur, involves the use of mild temperatures (< 100 °C). It may involve inactivation by heating to higher temperatures (72 °C) for around 15 s, or to about 63 °C for 30 min; the former is an example of high-temperature short-time (HTST) pasteurization, while the latter uses the low-temperature long-time (LTLT) method. Though pasteurization may not be suitable for the inactivation of spore-forming probiotics such as Bacillus coagulans, these spore-forming microbes can be inactivated by other methods discussed subsequently. Sterilization involves the use of higher temperatures (> 100 °C in most cases) for a short time to render the probiotics non-viable or inactive [19, 23-25].
The thermal resistance of probiotic species is one of the parameters that need to be considered during this treatment. The D-value and Z-value are employed to measure how resistant microbial species are to heat or thermal treatments. The D-value, also known as the decimal reduction time, is defined as the time required at a specific temperature to reduce the cell viability to 10%, while the Z-value is the temperature increase necessary to bring about a tenfold reduction in the D-value [19, 25].
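As a minimal sketch of how these two values are used together (the parameter values below are hypothetical, not taken from the source), the standard first-order model log10(N/N0) = -t/D(T), with D(T) = D_ref * 10^((T_ref - T)/Z), can be coded as:

```python
def d_value(temp_c, d_ref_min=2.0, t_ref_c=60.0, z_c=5.0):
    """D-value (min) at temp_c, extrapolated from a reference D-value:
    every z_c rise in temperature cuts D tenfold (hypothetical values)."""
    return d_ref_min * 10 ** ((t_ref_c - temp_c) / z_c)

def survivor_fraction(temp_c, hold_min, **kw):
    """First-order thermal death: log10(N/N0) = -hold_min / D(T)."""
    return 10 ** (-hold_min / d_value(temp_c, **kw))

# HTST-like hold: 15 s (0.25 min) at 72 degC
print(f"{survivor_fraction(72, 0.25):.1e}")
# Hold time for a 6-log reduction at 63 degC:
print(f"{6 * d_value(63):.1f} min")
```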
Microorganisms display diverse responses to temperature changes, and these responses are shaped by the differences in their mechanisms for resisting heat. The ability to withstand heat is a critical aspect for microorganisms, as it determines their ability to survive and thrive in varying temperature environments. Certain strains of probiotics, such as L. plantarum, can accumulate osmoprotectants like glycine betaine or trehalose. These substances serve a protective role by stabilizing proteins and cellular structures, safeguarding cells from damage caused by elevated temperatures, ensuring the maintenance of proper protein folding, preserving cellular membranes, and preventing water loss, thereby enhancing the resilience of probiotic cells in the face of heat stress [26, 27]. Others produce heat shock proteins (HSPs) such as GroEL and GroES when subjected to high temperatures [28, 29]. Thus, heat resistance mechanisms differ based on the specific strains employed due to differences in genetic constitution, physiology, and form (i.e., vegetative or sporous), as well as environmental factors like growth medium, temperature, and pH, among others, which need to be examined individually for assessing the response of probiotics to heat treatments and thereby optimizing the inactivation protocol. Therefore, understanding the temperature ranges is crucial for the development of efficient thermal treatment protocols, and the dosage may vary accordingly. Standardization is achieved by subjecting microorganisms to diverse temperatures over varying time periods and evaluating their viability. The resulting data aids in the determination of heat resistance characteristics specific to the microorganisms under study, forming the basis for their heat resistance profile. Other methods to determine thermal inactivation include the thermal death time (TDT) tube method and the submerged-coil heating apparatus [24]. Studies comparing the dosage and duration of the inactivation treatment against the health benefits of the inactivated paraprobiotics are necessary to develop an appropriate inactivation protocol.

Fig. 2 Common methods for the production of paraprobiotics [19, 21]

Table 1 Sites of damage due to exposure to heat

| Site | Damage | References |
|---|---|---|
| Cell wall/outer membrane | The outer membrane (OM) of Gram-negative cells is one of the structures affected by heat. Loss of lipopolysaccharide and vesicles. Morphological and structural changes or blebbing have also been reported. | [23, 30] |
| Peptidoglycan wall | The chelation of magnesium ions in the cell wall had an impact on vital metabolic processes within the cell. Exposing Lactobacillus bulgaricus cells to 64 °C caused damage to their cell walls, since it increased their susceptibility to penicillin. | [23, 30, 31] |
| Cytoplasmic/inner membrane | Cells become leaky, which eventually leads to cell death. Loss of respiration activity, osmotic homeostasis, and pH homeostasis. The loss of ions, UV-absorbing substances, and other cytoplasmic materials has also been reported in various species. | [31-35] |
| Ribosomes and RNA | RNA degradation. Ribosome degradation was also observed, which depended on the concentration of magnesium ions in the medium, which increases ribosomal resistance toward heat. | [23, 30] |
| DNA | Single-strand breaks (SSB) or double-strand breaks (DSB) are introduced. | [23, 30, 36] |
Supercritical Carbon Dioxide (SC-CO2) Technology
The principle behind utilizing supercritical carbon dioxide (SC-CO2) technology to inactivate microbes lies in leveraging the unique properties of supercritical CO2. Carbon dioxide (CO2) has a critical point of 31 °C and 7.38 MPa. In this state, CO2 exhibits both gas- and liquid-like properties, enabling enhanced solvating power and diffusion capabilities along with low viscosity [37, 38]. Optimizing key parameters such as temperature, pressure, and exposure time is necessary to achieve effective microbial inactivation. SC-CO2 can penetrate the cell membrane of microorganisms due to its low viscosity and high diffusivity. This penetration disrupts membrane integrity, resulting in increased permeability and leakage of intracellular components. When SC-CO2 dissolves in water or aqueous solutions, it forms carbonic acid, which leads to a decrease in pH. The acidic environment can disrupt microbial cellular processes and compromise their survival. Furthermore, SC-CO2 can extract lipids and other essential cellular components from microbial cells. The supercritical fluid's solvating power allows it to dissolve and remove lipids, proteins, and other crucial components necessary for microbial viability [39, 40].
High Pressure
The use of high pressure to inactivate microbes is another emergent technique that finds application in paraprobiotic production. In this technique, microorganisms are suspended in a fluid medium like water that allows for pressure transmission and are then subjected to high hydrostatic pressure, ranging from 30 to 350 MPa, in a high-pressure homogenizer [41].
The range of pressure employed, as well as the treatment duration, are both very critical in this technique. As demonstrated in Fig. 3, the effect on the microbe varies across pressure intervals. At a pressure of 50 MPa, protein synthesis is inhibited along with a decrease in ribosome number, and upon a twofold increase in pressure to around 100 MPa, vacuoles are compressed and proteins are reversibly denatured. A further increase in pressure to 200 MPa crosses the threshold of lethality, leading to cell membrane damage and subsequent leakage of cell contents. Finally, at around 300 MPa, proteins undergo irreversible denaturation accompanied by complete leakage of cell contents [42]. A lethal impact on the cell is seen upon irreversible disturbance of the membrane-coupled transport system at pressures greater than 400 MPa. As a result, depending on the treatment, its intensity, and the process duration, varying impacts on cell integrity will be produced [43].
Fig. 3 Cellular alterations caused by high pressure [42]

Microbes that have been inactivated by high pressure do not lose their probiotic properties, since specific disrupted cell fractions are recognized by members of the immune system, thus eliciting a proinflammatory response. Additionally, certain probiotics continue to secrete microbial metabolites even after the pressure exceeds the threshold of lethality [18]. Though it is apparent that the increase in pressure is directly proportional to the rate of inactivation, equipment design limitations and the required extent of cell inactivation must also be considered. In order to decrease the duration and pressure, this technique can be used in combination with other inactivation methods such as thermal treatment [44].
Ohmic Heating
Inactivation via ohmic heating is achieved by passing an electrical current through the target microorganisms, causing them to heat up due to the resistance encountered by the current. This process, also known as electrical resistance heating or Joule heating, leads to a rapid and even rise in the temperature of the substance and can potentially be used as an alternative to traditional heating methods for inactivating the desired microbes in paraprobiotic production. The inactivation mechanism of ohmic heating is the same as that of traditional thermal treatments, along with non-thermal damage due to electroporation; however, ohmic heating has several advantages over conventional heating methods, including reduced processing time, enhanced quality retention of the food matrix, and improved energy efficiency [20, 45]. Electron micrographs of Lactobacillus casei subsp. paracasei after conventional and ohmic heating are shown in Fig. 4.
During ohmic heating, the internal generation of heat ensures that substantial temperature gradients do not occur in the sample, thus reducing the risk of overheating. This stands in stark contrast to conventional methods, where uneven heat distribution within the sample is common. Nevertheless, this technique still faces challenges related to overheating or underheating under certain conditions. As conductivity increases with temperature, controlling the endpoint temperature becomes difficult. Furthermore, irregular treatment chamber shapes can lead to local temperature variations, especially when dealing with diverse probiotic sample properties. Electrode geometries are customized to enhance uniform heating. Issues also arise with the survival of microorganisms in crevices and with temperature differences in mixtures of probiotic samples. Strategies such as using a heating medium of equal conductivity, combining ohmic heating with other methods, and developing specific models based on experimental data have been explored to address these challenges and ensure precise temperature control during the preparation of paraprobiotics [46].
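The difficulty of endpoint control described above follows directly from the physics of Joule heating: the volumetric power is sigma * E^2, and sigma itself rises with temperature. The sketch below (all values are illustrative assumptions, not from the source) integrates this feedback numerically and shows the heating rate accelerating over time.

```python
# Minimal sketch (illustrative values, not from the source): Joule heating of
# a probiotic suspension. Volumetric power is q = sigma * E^2; conductivity
# is assumed to rise linearly with temperature, so heating accelerates and
# endpoint temperature control becomes harder.
RHO = 1000.0  # density, kg/m^3 (water-like medium, assumed)
CP = 4186.0   # specific heat, J/(kg*K) (assumed)
E = 2000.0    # electric field strength, V/m (assumed)

def simulate(t_end_s, dt=0.1, temp=25.0, sigma25=0.5, k=0.02):
    """Euler integration of dT/dt = sigma(T) * E^2 / (rho * cp),
    with sigma(T) = sigma25 * (1 + k * (T - 25))."""
    t = 0.0
    while t < t_end_s:
        sigma = sigma25 * (1 + k * (temp - 25.0))
        temp += sigma * E ** 2 / (RHO * CP) * dt
        t += dt
    return temp

for seconds in (30, 60, 90):
    print(f"after {seconds:3d} s: {simulate(seconds):5.1f} degC")
```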
Sonication
Sonication is an alternative non-thermal method that employs ultrasound, i.e., sound waves with frequencies above 20 kHz. The desired effects of sonication are caused by cavitation. The introduction of high-intensity ultrasound into a medium containing probiotics causes a cycle of alternating compression and rarefaction. This leads to the formation of bubbles or "voids". These bubbles expand and collapse violently, generating high temperatures and pressures that can reach 5500 °C and 50,000 kPa, physically damaging the probiotic cells. The mechanical stresses and shock waves produced during cavitation can rupture cell membranes, disrupt cellular structures, and impair vital metabolic functions [47, 48]. Furthermore, sonication can be performed in combination with other treatments, such as UV and heat [24, 49]. The efficacy of ultrasonic inactivation is influenced by several factors, such as the ultrasound frequency and intensity, duration of exposure, target organism, and the properties of the liquid medium. SEM images of L. delbrueckii ssp. bulgaricus after sonication treatment are provided in Fig. 5.
Different frequencies of ultrasound waves can be used to disrupt the cell wall and cell integrity of probiotics, thus facilitating the release of several biomolecules, like proteins and DNA, that have beneficial properties. Since the optimal frequencies depend on the strain used and the intended application, standardization is a critical step prior to inactivation in order to maximize the release of the necessary cellular component for the desired purpose. A study conducted on Saccharomyces cerevisiae showed the effect of different frequencies on the structure and growth of the microbe, with 20 kHz causing significant external damage to the cell morphology and structural integrity [50]. For probiotics, on the other hand, lower frequencies are used in order to maintain cell viability by reducing the shear stress on the cells. Controlled sonication at low frequencies of up to 100 kHz enhances microbial growth and productivity; however, in certain cases, it can increase cell permeability. Further, studies have shown that the effect of sonication on microbes is also influenced by the amplitude and duration of exposure. Another study, conducted on Lactobacillus brevis, demonstrated that sonication at 23 kHz with an amplitude of 10 µm enhanced the cell count and proliferation rate, while the same frequency caused cell death and hindered metabolic activity when the amplitude was increased to 15 µm [51]. Thus, probiotic-specific studies examining the effect of sonication at different intensities for different time periods are of utmost importance in fine-tuning the sonication parameters for the required cell viability, metabolite production, and biological function, depending on the intended application.

Fig. 5 Scanning electron microscope (SEM) images of L. delbrueckii ssp. bulgaricus 11842 (a) before sonication treatment and (b) after sonication for 6 min [52]
Ionizing Radiation (IR)
Other non-thermal methods can also be employed for the inactivation of bacterial species. For instance, the irradiation method utilizes ionizing radiation (IR) like X-rays, electron beams, and gamma rays for microbial inactivation [53, 54]. There are no significant differences in the inactivation effectiveness of the three radiation sources [54, 55]. The primary sources of gamma rays are radioisotopes, such as cobalt-60, which has a half-life of 5.27 years, and cesium-137, which has a half-life of 30.17 years. The dose of irradiation applied, measured in grays (Gy), is an essential factor to be considered in the irradiation process. The radioresistance of the bacterial species must also be taken into account when opting for this process. In a study conducted by Sychev et al. [56], Bifidobacterium bifidum (5 × 10^8 CFU/ml in powder form) was dissolved in water and irradiated with 60Co gamma rays, with doses in the range of 1 to 20 Gy, using an Issledovatel IN-1 irradiator. The optimal dose of gamma rays of 12 to 14 Gy was determined by measuring the concentration of superoxide dismutase, an antioxidant enzyme, in the medium. DNA is the principal cellular target that governs the loss of viability in this process; double-strand breaks are induced in the DNA due to the action of ionizing radiation. These radiations also generate reactive oxygen species (ROS), which further damage the DNA, eventually leading to cell death [56].
Apart from the radioresistance of the target organism, the dose of irradiation also depends on several other factors, including the source of radiation, the penetration depth, the approved safety limits of irradiation, the cell components that need to be preserved, and the intended use of the paraprobiotic. An important parameter when deciding the dose is the decimal reduction dose, or D10 value, which is the irradiation dose that reduces total cell viability to 10% of the original value when all other conditions, including time, temperature, type of radiation, and microbial strain, are kept constant [57]. Gamma radiation is generally preferred for inactivation due to its established use in the preservation of food products such as vegetables, pulses, and fruits for human consumption [58]. A recent study published in 2022 by Porfiri et al. [57], using different strains of lactic acid bacteria, demonstrated strain-specific differences in the probiotic responses to gamma irradiation. Strains showing higher resistance to gamma irradiation were seen to better preserve the beneficial properties of the live probiotics. The presence of a surface layer (S-layer) in the microbial cell wall of certain strains could potentially play a role in absorbing the radiation, thereby preserving the immunomodulatory cell wall components [57].
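By analogy with the thermal D-value, the dose required for a target log reduction follows directly from the D10 value under first-order inactivation kinetics. The sketch below uses a hypothetical D10 (not taken from the source) purely to show the arithmetic:

```python
def dose_for_log_reduction(d10_gy, log_cycles):
    """Total dose (Gy) for log_cycles decimal reductions, assuming
    first-order inactivation: dose = log_cycles * D10."""
    return log_cycles * d10_gy

# Hypothetical strain with D10 = 1.2 kGy; dose to drop 10^8 -> 10^2 CFU/ml:
print(dose_for_log_reduction(d10_gy=1200.0, log_cycles=6), "Gy")  # 7200.0 Gy
```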
The effect of irradiation on paraprobiotics is examined post-treatment using different techniques. Cell viability is generally assessed through colorimetric assays like MTT and spectrofluorometric analyses [59]. The reproductive capability is evaluated via plate counting, colony-forming unit (CFU) assays, and growth curve analyses [60, 61]. Confocal laser scanning microscopy (CLSM) is generally employed to observe cell growth and assess structural integrity. Several approaches, such as comet assays as well as polymerase chain reaction (PCR) methods like quantitative PCR and enterobacterial repetitive intergenic consensus PCR (ERIC-PCR), are employed to analyze DNA damage in inactive microbes [62]. Flow cytometry is an especially viable technique for the post-inactivation examination of paraprobiotics, since several parameters, including DNA content, cell size, and mitochondrial function, can be measured in parallel by using specific fluorescent dyes. Table 2 contains a list of commonly used fluorescent dyes and the specific cell functions that they are used to assess [19].

Table 2 (excerpt): SYTO-9, used to assess membrane integrity [69]
Pulsed Electric Field (PEF)
The pulsed electric field (PEF) is a non-thermal technology that can be used for the purpose of inactivating microorganisms. It is achieved by placing the target microbes between two electrodes that constitute a treatment chamber gap, followed by the application of short-duration pulses with high-voltage electric fields in the range of 5 to 90 kV/cm [70, 71]. The precise mechanisms underlying PEF-induced microbial inactivation are not fully elucidated, but it has been shown that permeabilization of microbial membranes, i.e., electroporation, occurs due to the application of PEF. Under minimal pulsation conditions, the membrane damage is reversible, whereas more severe conditions lead to irreversible damage, ultimately resulting in cell death [70, 72, 73].
In order to optimize PEF parameters such as pulse duration, treatment duration, and electric field strength for achieving the necessary inactivation levels, the resistance mechanisms of the target microorganism must be studied, and the kinetics of PEF-induced inactivation must be understood. These kinetics can be assessed by measuring microbial cell viability at varying electric field strengths with a uniform increase in the treatment duration, which is the product of the pulse width in µs and the total number of pulses. However, pulse application generates a Joule heating effect which, in turn, increases the electrical conductivity, thus altering the pulse width and electric field strength [74]. Therefore, Heinz et al. [75] proposed determining optimal PEF parameters based on the total specific energy, which is influenced by the treatment duration, the electric field strength, and the treatment chamber's electrical resistance properties. In the case of Listeria monocytogenes, the total specific energy and the treatment duration required for a specific inactivation level were seen to decrease upon increasing the electric field strength [74].
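A minimal sketch of these relationships follows, using a simplified square-pulse approximation (the conductivity, density, and pulse settings are illustrative assumptions, not from the source): the treatment duration is the pulse width times the pulse count, and the specific energy input follows from the Joule dissipation sigma * E^2 integrated over that time, per unit mass.

```python
# Minimal sketch (simplified square-pulse approximation; all values are
# illustrative, not from the source).
def pef_specific_energy(e_kv_per_cm, pulse_width_us, n_pulses,
                        sigma_s_per_m=0.4, rho_kg_per_m3=1000.0):
    e = e_kv_per_cm * 1e5                  # kV/cm -> V/m
    t = pulse_width_us * 1e-6 * n_pulses   # total treatment time, s
    return sigma_s_per_m * e ** 2 * t / rho_kg_per_m3  # J/kg

# e.g. 25 kV/cm, 3-us pulses, 50 pulses:
w = pef_specific_energy(25, 3, 50)
print(f"specific energy: {w / 1000:.1f} kJ/kg")
```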
Ultraviolet (UV) Rays
Ultraviolet (UV) rays are electromagnetic rays with wavelengths between 200 and 400 nm. They are non-ionizing rays that can be used for sterilization purposes. Damage to the DNA is considered the key lethal effect of UV on bacterial strains: the inactivation of probiotics results from the formation of pyrimidine dimers in DNA and RNA, which leads to microbial death due to impaired metabolic functions or mutations in key genes [19,76]. Other lethal or sub-lethal damages include the formation of ROS, which react with DNA and cellular proteins [77]. Membrane permeability and molecular transport can also be affected by damage to the cell membrane, which may lead to cell inactivation [78].
Generally, the process of UV inactivation involves exposing the live probiotics, suspended in culture media, to a UV source emitting light of a particular wavelength for a specified time period. The effectiveness of the treatment is then tested by plating the culture on an agar plate and checking for cell growth as well as colony formation after 72 h of incubation. The absence of detectable cell growth indicates that the inactivation procedure was successful. Currently, flow cytometry is a more commonly used technique to check for cell viability. In one study, Lactobacillus rhamnosus GG was inactivated by subjecting the cell suspension to UV radiation from a 39-W germicidal UV lamp placed at a distance of 10 cm for a duration of 5 min. Though the treated culture did not show any growth on the agar medium, it was able to reduce IL-8 levels in Caco-2 cells as effectively as live Lactobacillus rhamnosus GG probiotics [79].
UV treatment is very effective at inactivating a variety of microorganisms while preserving their beneficial immunomodulatory properties. Studies have demonstrated negligible differences in the beneficial effects of certain microbes when administered in their live form, as probiotics, and in their UV-inactivated form, as paraprobiotics [22]. Being a non-thermal method, it does not damage cell wall components like peptidoglycans and lipoteichoic acid, which interact with immune cells to regulate the immune system [80]. Since UV treatment does not use any toxic chemicals or reagents, there is no risk of harmful residues being left behind in the paraprobiotics. Additionally, UV treatment is currently used in several industries to treat products for human consumption, like food and pharmaceuticals, making it a reliable and generally accepted method of inactivation. One common example is the UV-based sterilization of drinking water before its consumption and use for cooking purposes [81]. Furthermore, as thermal methods such as pasteurization may not kill spore-forming species, irradiation methods using UV and gamma rays provide a viable alternative. However, the effect of UV radiation on specific probiotic organisms remains unclear due to the influence of external parameters such as the germicidal wavelength, the duration of exposure, the target microorganism, and the specific strain used [78].
Techniques such as heat, ionizing radiation, and ultrasound can be combined with UV to produce a synergistic effect that increases the efficiency of inactivation. In order to efficiently combine different treatments, the inactivation mechanisms involved in each technique must be understood [49,82,83].
Other inactivation methods include lyophilization, spray drying, and pH modification [19]. Lyophilization is achieved by freezing the target microorganisms, applying a high vacuum, and finally warming the microbes under vacuum, thus leading to the sublimation of water. In spray drying, the feed solution, containing probiotic cells and dissolved or suspended protectant solids, undergoes atomization to form small droplets that are rapidly dried in a hot airflow to convert them into particles [84]. The pH sensitivity of microorganisms can also be exploited to make them nonviable. As previously mentioned, the inactivation method used, its intensity and conditions, as well as the strain used may affect the properties of paraprobiotics. For instance, a study conducted by Ouwehand et al. [85] evaluated the inactivation of nine probiotic strains to assess the effect of heat, UV, and gamma irradiation on the adhesive property by which probiotics attach to immobilized intestinal mucus. Inactivation by heat and gamma rays decreased the adhesive ability of the probiotics, with the exception of Propionibacterium freudenreichii and Lactobacillus casei Shirota, respectively, where the opposite effect was observed. Inactivation by UV did not cause any difference in the adhesion properties.
Various inactivation parameters, including intensity, duration, and frequency, need to be tuned individually for specific applications. These parameters differ according to the strain used, microbial resistance patterns, the initial microbial population, and environmental factors such as growth media. Hence, dose-response studies are necessary to optimize the treatment parameters and to choose a dosage that provides specific advantages.
Gut-Brain Axis (GBA)
The gut microbiota has an essential role in human health and physiology, and gut dysbiosis, i.e., an imbalance in the gut microbial populations, can have severe neuropsychiatric repercussions. Symptoms of anxiety and depression are explicitly correlated to alterations in the microbiota. Gut dysbiosis can occur for various reasons such as eating habits, sleep patterns, diseased states, and medications such as antibiotics. Probiotics and paraprobiotics offer a viable solution for the restoration of a healthy, balanced gut microbiome [86,87].
This relationship between the gut microbes and the brain is mediated by a bidirectional communication termed the "Gut-Brain axis", which occurs through multiple pathways that are regulated by various components, as illustrated in Fig. 6.
Vagus Nerve
The vagus nerve, the tenth cranial nerve in the autonomic nervous system, plays a critical role in gut-brain signaling [88]. It is a major constituent of the parasympathetic division and coordinates several physiological functions such as respiration, heart rate, and gut motility [89]. As demonstrated in Fig. 7, the sensory components of the vagus nerve transmit various sensory gut signals, including nutrient availability, microbial activity, and gut health, to the brain, thus affecting the host response [88]. This is primarily accomplished through vagal terminals formed along the gut epithelium by the vagal nerves, which form two main detectors of mechanical signals: intra-ganglionic laminar endings (IGLEs), in combination with enteric neurons, and intramuscular array endings (IMAs) in the muscle layers [90,91]. The vagal afferent nerves that pass through the intestinal and antral glands, as well as the taste buds in the proximal esophageal tract, express specific receptors to detect serotonin, gastro-intestinal neurohormones, peptides, and other specific signaling molecules released by enteroendocrine cells in response to intestinal nutrient levels, for the regulation of nutrient absorption and digestion. Special enteroendocrine cells that form synapses with vagal terminals are termed neuropods [88]. Enteroendocrine cells generally express receptors to detect fatty acids, glucose, and amino acids in the gut. Several microbial metabolites produced by the gut microbiota, like lipopolysaccharides (LPS), are also detected by the enteroendocrine cells through toll-like receptors (TLRs) located on their surface [92]. Pathogenic microbes directly communicate with the vagus nerve through leaky gut barriers, while beneficial microbes generally affect enteroendocrine signaling, potentially through a serotonin-dependent mechanism. These messages regarding food intake, digestion, gut microbial activity, nutrient absorption, and ultimately, gut health, are transmitted to specific regions of the brain, including the hypothalamus, the locus coeruleus, and the hippocampus, through the nucleus tractus solitarius (NTS), which is the main processor of sensory signals from the gut [88]. Vagal efferent nerve fibers originate from the dorsal motor nucleus of the vagus in the brainstem and transmit motor signals from the brain to the enteric nervous system (Fig. 7: Diverse gut-brain connections through the vagus nerve [88]). The enteric neurons are triggered by vagal efferent nerves through the release of acetylcholine, thus regulating the smooth muscle contractions that control peristalsis and food propulsion, as well as the secretion of gastric acid, digestive enzymes, alkalis, mucus, and other substances necessary for digestion and nutrient absorption [93].
However, since neurotransmitters like serotonin, acetylcholine, and gamma-aminobutyric acid (GABA), as well as neuropeptides such as substance P, neuropeptide Y, and oxytocin, are produced by the enteric nervous system, the vagal gut-brain signaling pathway also mediates stress response, mood, cognition, and overall brain function [94]. The excitation of the vagal afferent nerves in the gut has been associated with neuropsychiatric disorders like depression and anxiety [95]. In an experimental study, vagotomized mice were seen to be insensitive to probiotic treatment for anxiety-like behaviors, implying the imperative role of the vagus nerve in anxiolytic effects and gut-brain communication [96,97]. Further, vagus nerve stimulation (VNS) therapy using electrical impulses was successful in reducing the symptoms of treatment-resistant depression (TRD). More specifically, the administration of the probiotic Lactobacillus rhamnosus (JB-1) led to increased levels of the neurotransmitter GABA in mice, which has implications for the treatment of anxiety and MDD [98].
Immunological Activity
The immune system plays a crucial role in regulating the bidirectional interaction between the gut microbiota and the CNS, thus influencing the pathophysiology of neuropsychiatric disorders. Cytokines, produced by immune cells, are important messengers in gut-immune-brain communication. They have two main types: proinflammatory cytokines that increase inflammation and anti-inflammatory cytokines that decrease inflammation. Gut bacteria can interfere with the balance between the two types of cytokines by upregulating the expression of one type while downregulating the other. Increases in the expression of proinflammatory cytokines, such as IFNγ and TNFα, have been shown to cause central neuroinflammation, ultimately leading to psychological stress and depressive symptoms [99]. For instance, a study conducted on mouse models showed that the oral administration of heat-killed wild-type Lactobacillus casei as a paraprobiotic caused the suppression of IFNγ and TNFα. This immunomodulatory property can potentially be used in ameliorating the symptoms of neuropsychiatric disorders [100].
Interleukins are a subset of cytokines, and specific interleukins are involved in the regulation of inflammation. During certain neuropsychiatric disorders like MDD, the NLRP3 inflammasome stimulates proinflammatory pathways by activating IL-1β signaling, which is critical in the gut-immune-brain association [101]. Other proinflammatory interleukins include IL-2, IL-6, IL-8, IL-12p70, and IL-17A. As mentioned previously, it has been experimentally shown that UV-inactivated Lactobacillus rhamnosus GG (LGG) suppresses IL-8 production to the same extent as live LGG administered as a probiotic [79]. This introduces the possibility of using UV-inactivated LGG as an immunomodulatory paraprobiotic for certain psychological imbalances.
Microbial-associated molecular patterns (MAMPs), like flagellin and LPS, are conserved, class-specific molecular structures present on the surface of microbes. They are identified by pattern recognition receptors (PRRs), including Toll-like receptors (TLRs) and NOD-like receptors (NLRs), which are present on the surface of host cells and bind to specific MAMPs. However, since these MAMPs are produced by both pathogenic invaders and commensal gut microbes, several mechanisms are employed to prevent the destruction of the gut microbiota by the intestinal epithelial cells while also maintaining immune homeostasis. One particular mechanism is stratification, i.e., minimization of contact between gut microbes and intestinal epithelia, which is accomplished by the mucus layers coating the inner wall of the intestine. Another strategy is compartmentalization, in which gut microbiota are contained within specific zones to reduce their exposure to the systemic immune response. This is achieved by the epithelial production of antibacterial molecules like RegIIIγ, which prevent gut microbes from reaching the small intestine, as well as by reduced gut microbial colonization in the stomach and duodenum due to their highly acidic environments [102,103].
Inflammatory Reflex and Neuroinflammation
The inflammatory reflex is a neurophysiological mechanism by which the vagus nerve mediates cytokine production, inflammation, and overall immune function. The afferent vagus nerve fibers regulate immune-to-brain communication by detecting peripheral inflammatory molecules and subsequently sending signals to specific regions of the brain, including the hypothalamus. The efferent vagus nerve is involved in cholinergic signaling from the brain to the immune system, which suppresses cytokine production. Inflammatory mediators, including bacterial peptides, lipopolysaccharides, and cytokines, activate the afferent vagal nerves, which ultimately leads to an increased production of adrenocorticotropic hormone (ACTH), which has an anti-inflammatory effect. Afferent signals also hinder cytokine production by increasing glucocorticoid levels and stimulating the release of melanocyte-stimulating hormone (MSH), an anti-inflammatory protein. Once these signals are transmitted to the brain, the efferent motor signals stimulate the release of acetylcholine (ACh), a neurotransmitter that binds to α7 nicotinic acetylcholine receptors (α7nAChR) on the surface of immune cells like macrophages, thus suppressing cytokine production [104-106]. A preclinical study observed increased inflammation due to high levels of TNF-α and IL-1β cytokines, as well as depressive symptoms, possibly due to BDNF-TrkB signaling by brain-derived neurotrophic factor (BDNF) and its receptor, tropomyosin receptor kinase B (TrkB), specifically in the nucleus accumbens region of the brain [107]. This is supported by experimental data correlating inflammation-induced BDNF activity with the onset of depression in rat models [108].
Neuropsychiatric disorders like depression are known to be triggered by poor lifestyle and unhealthy diet, which cause gut dysbiosis, as well as by stress, which interferes with the hypothalamic-pituitary-adrenal (HPA) axis, thus eliciting a proinflammatory immune response [109]. The stress-induced immune response, coupled with the over-production of proinflammatory cytokines due to gut dysbiosis, can lead to neuroinflammation, i.e., inflammation in the central nervous system (CNS), through the gut-brain axis, which in turn dysregulates neurotransmitter metabolism, neural synaptic plasticity, and neuroendocrine function, ultimately affecting emotional regulation and overall behavior [110]. This is further supported by a clinical trial in which the use of 2 × 10⁹ CFU/g each of the probiotics Bifidobacterium bifidum, Lactobacillus acidophilus, and Lactobacillus casei was shown to have anti-depressive effects in MDD patients [111]. Microglia are immune effector cells that control neuroinflammation by releasing immune mediators and maintaining neuronal connectivity in the CNS. However, microglial activation by bacterial LPS and IFNγ could lead to the release of proinflammatory factors like IL-6, thus causing neurotoxicity due to increased neuroinflammation. Dysfunction of microglia has been associated with the onset of depression due to neuroinflammation, in coordination with disruption of the HPA axis [112,113]. Studies have shown that specific microglial activation in certain parts of the brain, like the prefrontal cortex, plays a major role in MDD. A direct correlation has also been established between the severity of depression and the level of microglial activation in the anterior cingulate [114].
Microbial Metabolites
Chemical compounds produced during microbe-mediated metabolic reactions are known as microbial metabolites. They include short-chain fatty acids (SCFAs), bile acids, phenols, thiols, and amino acids. Gut microbial metabolites affect the central nervous system, either directly or indirectly, due to which gut dysbiosis can critically impact the pathophysiology of various CNS disorders, such as ASD. It has been observed that microbial metabolites can regulate ASD-like symptoms both positively and negatively. A decrease in levels of SCFAs like butyrate has been noted in fecal samples obtained from ASD patients [115]. It has been experimentally shown using a mouse model that the administration of C4 fatty acids like sodium butyrate can attenuate ASD symptoms and improve social behavior by regulating the expression of ASD-related genes [116].
SCFAs are organic compounds formed as end-products of microbial fermentation, especially in the caecum and proximal colon of the gastrointestinal tract. The major SCFAs in the gut include propionate, acetate, and butyrate. SCFAs help in the maintenance of gut barrier integrity via the regulation of tight-junction proteins and the stimulation of increased production of mucin, a constituent of the mucus lining, by the gut epithelium [117]. In vitro studies have demonstrated the ability of SCFAs to improve intestinal permeability and epithelial barrier function in a concentration-dependent manner [118,119]. SCFAs mediate immune homeostasis in the gut by regulating inflammation. Preclinical trials have shown the ability of SCFAs to upregulate the production of IL-10-producing regulatory T-cells, thus increasing the levels of IL-10, an anti-inflammatory cytokine [120]. They also interact with the gut-brain axis by binding to G-protein coupled receptors (GPCRs) on the surface of enteroendocrine cells and triggering the production of neuropeptides like glucagon-like peptide 1 (GLP-1), ghrelin, and peptide YY [121]. The role of SCFAs in energy metabolism is primarily through butyrate oxidation, which accounts for up to 70% of the energy supply to colonocytes [122]. SCFAs play a role in regulating the metabolism of glucose and lipids by binding to specific GPCRs. They activate the oxidation of fatty acids while inhibiting fatty acid synthesis and lipolysis through an AMPK-mediated pathway [123]. Moreover, SCFAs also influence the production of neurotransmitters in the brain. Acetate has been shown to increase the hypothalamic levels of GABA and lactate in vivo, possibly via anorectic signaling in the arcuate nucleus [124]. In vitro studies have demonstrated that SCFAs like butyrate and acetate modulate serotonin levels by regulating the expression of tph1, which encodes tryptophan 5-hydroxylase 1, the enzyme necessary for serotonin production [125]. Similarly, propionic acid and butyric acid regulate the expression of tyrosine hydroxylase, which is required for the production of major neurotransmitters such as epinephrine, norepinephrine, and dopamine [121,126]. Thus, the use of microbial metabolites, especially particular types of SCFAs, as paraprobiotics in disorders like ASD is currently being explored, but their clinical application remains challenging due to the heterogeneity of results.
Neuroactive Compounds
Neuroactive compounds include a variety of substances, like neurohormones, neuropeptides, neuromodulators, mediators, and metabolites, from various sources that influence the neural system and thereby affect brain function as well as other CNS-related activities. Since they play a vital role in mediating and regulating the microbiota-gut-brain signaling pathways, abnormal levels of these compounds are associated with the pathogenesis of neuropsychiatric disorders such as ASD and MDD [127].
Tryptophan metabolism is an important component of the GBA. The metabolism of tryptophan, an indole-containing amino acid, to kynurenine can be either upregulated or downregulated depending on the gut microbes involved. Kynurenine can be metabolized either into kynurenic acid, which is a neuroprotective metabolite, or into quinolinic acid. It has been experimentally observed that MDD patients have lower kynurenic acid levels even though tryptophan metabolism occurs at a fast rate, possibly due to the preferential conversion of tryptophan into quinolinic acid [128]. This reduction in kynurenic acid levels disrupts the neuroprotective and neurodegenerative cascades, thus contributing to the development of depressive symptoms. Therefore, in order to manage the pathophysiology of MDD, kynurenine levels must be controlled. Experimental evidence indicates that treatment with Bifidobacterium spp., in combination with an improved diet, can reduce kynurenine levels and improve depressive symptoms, making it a promising probiotic for managing symptoms in MDD patients [129].
Additionally, several neuroactive indole derivatives, like serotonin, indole-3-propionic acid (IPA), and melatonin, are produced by the gut microbiota during tryptophan metabolism. The overproduction of indole has been linked to symptoms of anxiety and depression in preclinical trials, implicating individuals whose gut microbiota produce higher levels of indole in the development of neuropsychiatric problems [130]. Indole alkaloids influence neural transmission by interacting with GABA receptors and serotonin receptors, thus exhibiting an anti-depressive effect that can potentially be used in neuropsychiatric therapy [131]. Further, indole derivatives like IPA and melatonin have been shown to act as neuroprotectants via their antioxidant and anti-inflammatory activities, thus regulating neural signaling [132]. Indole derivatives exhibit these neuroprotective effects both in vitro and in vivo by increasing dopamine levels and enhancing dopamine uptake, scavenging ROS to alleviate oxidative stress, and modulating cytokine production, including the downregulation of TNF-α, IL-1β, and IL-6, in order to control neuroinflammation [133].
Applications
Several studies have established the viability of probiotics and paraprobiotics in the management of neuropsychiatric disorders including MDD, ASD, and anxiety.
Clinical and preclinical studies indicating the health benefits of probiotics are summarized in Table 3.
Though research is still at a nascent stage compared to probiotics, several preclinical and clinical studies have demonstrated the favorable neuropsychiatric effects of different paraprobiotics as well. These studies, along with their results, are summarized in Table 4. Unlike probiotics, paraprobiotics cannot multiply, making the dose size an important consideration. In one particular study, Murata et al. [151] employed two different doses, 1 × 10¹⁰ (10 LP) and 3 × 10¹⁰ (30 LP), in human subjects. The study found that the supplementation had a greater positive impact on both the common cold and mood in the 10 LP group compared to the 30 LP group. The researchers believe that this may be due to the dose-independent nature of the treatment, since probiotics generally exert immunomodulatory effects in a dose-dependent manner [152]. However, this difference in positive impact may also be due to the fact that the 30 LP group had a smaller sample size, as more participants dropped out before the intervention in this group. In contrast, another study testing the efficacy of a Bacillus sp. NP5 paraprobiotic at different density levels on the resistance of Nile tilapia fish to Streptococcus agalactiae infection showed that the maximum immune response and disease resistance were observed at the highest dose of 10¹⁰ CFU/mL [153]. The dependency of health benefits on dosage size is yet to be conclusively determined, since many studies assessing the effect of paraprobiotics on neuropsychiatric symptoms have not considered possible changes in the observed effects upon altering the dosage while keeping all other factors constant. Additionally, comparative trials between probiotics and paraprobiotics at different concentrations in the same subjects help to elucidate the required change in dosage upon inactivation. For instance, a recent study conducted by Elham et al. [154] revealed that the cytotoxic effect of the live L. casei probiotic on Caco-2 cells was much higher than that of the heat-inactivated paraprobiotic from the same microbial strain at each concentration tested, indicating that the inactivated paraprobiotic must be administered in higher doses to elicit an effect similar to the viable probiotic. Further, both treatments showed a dose-dependent increase in their cytotoxic potential. It is important to note, however, that these results are specific to the microbial species and cell line in question and cannot be generalized to all probiotics and paraprobiotics. Research into this particular aspect in the context of neuropsychiatric disorders is critical to determine whether a similar increase in dosage will be required when inactivating probiotics that have the ability to alleviate neuropsychiatric symptoms.
Conclusion
There exists a bidirectional relationship between neuropsychiatric disorders and gut dysbiosis. The GBA mediates a two-way communication between the gut microbiota and the brain, thus facilitating the involvement of the gut microbes in the development and regulation of the nervous system. Thus, gut dysbiosis plays a major role in the pathogenesis of neuropsychiatric disorders such as ASD and MDD. Evidence suggests that both probiotics and paraprobiotics possess the ability to ameliorate neuropsychiatric symptoms and disorders through the GBA, due to their potential to optimize the gut microbiome through interactions with the gut microbes. However, probiotics pose several safety concerns, including the susceptibility of immunocompromised patients and the development of antibiotic resistance. Despite the advantages of paraprobiotics over probiotics in terms of safety and shelf life, they are not without limitations. The main disadvantage of paraprobiotics is the lack of standardized inactivation protocols, which are necessary for their commercial production and quality control. Further research is required to compare and establish standard parameters for paraprobiotic production, since the inactivation of probiotics by different methods may lead to variations in the properties of the paraprobiotics obtained and their conferred health benefits. Moreover, the mechanism of action of paraprobiotics and the role of specific cellular components must be elucidated. Research into this aspect is hindered by the difficulty of accurately determining the cell viability of probiotics, resulting in the erroneous attribution of the beneficial effects of paraprobiotics to their viable counterparts. Additionally, specific criteria must be established for choosing target microorganisms, and definitive tests are required for determining the biological activity of inactivated microbes. Most importantly, the efficacy and sustainability of paraprobiotics in the gut must be assessed, given their inability to reproduce. Reports suggest that gut mucosal colonization resistance to probiotics may nullify their potential health benefits [161]. Therefore, time-point assays need to be performed to determine the optimal treatment duration and the prolongation of health benefits after the treatment period. In order to translate the potential advantages of paraprobiotics into health benefits, the optimal dosage, frequency, and total duration of consumption need to be evaluated [162]. Unless paraprobiotics are able to induce lasting changes in the gut microbiota, their health benefits will be limited to the period of administration, unlike probiotics, which proliferate and colonize the gut, leading to long-term beneficial effects. While paraprobiotics may be able to rectify gut dysbiosis by regulating the growth of different microbial populations, thereby producing long-term health benefits, it is difficult to reach a definitive conclusion due to the lack of sufficient experimental evidence, highlighting the exigent need for further research. In conclusion, though further work is necessary to optimally channel their beneficial properties, probiotics and paraprobiotics are highly viable natural therapeutic agents for the management of neuropsychiatric disorders.
Table 2
Commonly used fluorescent dyes in flow cytometry and their functions. Reproduced with permission [19]
Table 3
Effects of probiotics on neuropsychiatric symptoms and disorders
Table 4
Inactivation methods, dosage, duration, and neuropsychiatric effects of paraprobiotics
"year": 2024,
"sha1": "57abd1384a1f573d7718a86868b32f7f0f1e436b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12602-024-10214-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a82e80fa23b0f56c746f30c53bcbcf30627a44f",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18F-FDG PET/CT in Patients with Nodular Pulmonary Amyloidosis: Case Report and Literature Review
A 62-year-old woman was found to have multiple bilateral pulmonary nodules showing different 18F-fluorodeoxyglucose (FDG) uptakes on positron-emission tomography/computed tomography (PET/CT). Only the largest nodule in the left lower lobe showed an increased 18F-FDG uptake on PET/CT. Three nodules were surgically resected from different lobes of the left lung. Two nodules were benign and showed amyloid deposition. The largest nodule in the left lower lobe showed adenocarcinoma and a heavy amyloid deposition. Pulmonary amyloidosis should be added to the differential diagnosis for cases with multiple pulmonary nodules that show different 18F-FDG uptakes on PET/CT. To the best of our knowledge, this is the second reported case of a lung nodule consisting of adenocarcinoma and amyloid deposition.
Introduction
Amyloidosis is a rare disorder in which insoluble fibrillar proteins are deposited in extracellular tissue [1]. Pulmonary involvement of amyloids may be localized or systemic, primary or secondary, hereditary or acquired [2]. Nodular pulmonary amyloidosis may manifest as single or multiple nodules, which are able to calcify or cavitate. It is usually misconstrued as primary lung carcinoma or metastatic tumor.
Positron-emission tomography/computed tomography (PET/CT) with 18F-fluorodeoxyglucose (FDG) is used to evaluate patients with possible cancers [3]. In the case of pulmonary nodules, 18F-FDG PET has been demonstrated to have a high sensitivity and specificity for malignancy [4,5]. In the present study, we report an unusual case of multiple pulmonary amyloid nodules in a patient with different 18F-FDG uptakes on PET/CT. We also conducted a review of the literature in PubMed, EmBase and the ISI Web of Science looking for cases with histologically proven pulmonary amyloidosis who had undergone 18F-FDG PET/CT.
Case Report
A 62-year-old nonsmoking female presented with a 2-month history of cough with white phlegm and occasionally blood-tinged sputum. The patient did not have any other significant medical condition. She denied having any other symptoms, including chest pain, dyspnea, weight loss, fevers, chills, and night sweats. The findings of her physical examination were unremarkable.
The tumor marker NSE was mildly elevated to 17.63 μg/l (normal range 0-16.30). The other tumor markers, including CYFRA19, CEA and SCC, were within normal limits. The complete blood count, serum electrolytes, renal and liver function, and comprehensive metabolic profile findings were normal. Sputum smears and cultures were negative for acid-fast bacilli, fungi or other microorganisms. A pulmonary function test indicated increased airway resistance (2.05 cm H2O/l/s, 139% of predicted). An arterial blood gas analysis obtained while breathing room air revealed a PO2 of 92.6 mm Hg, a PCO2 of 45.5 mm Hg and a pH of 7.407. The electrocardiogram was normal and the echocardiogram revealed normal cardiac function. No echocardiographic signs of restrictive cardiomyopathy or cardiac amyloidosis were found.
High-resolution CT of the chest revealed multiple bilateral pulmonary nodules varying in size up to 3.5 cm, with no evidence of lymphadenopathy (fig. 1). The largest nodule, measuring 3.5 × 2.5 cm, was noticed in the posterior segment of the left lower lobe (fig. 1b). The mass and a few nodules showed focal, punctate calcifications. Calcification in the nodules was apparent in the mediastinal windows (fig. 1, right). Mediastinal lymphadenopathy was not present. The multiple lung nodules were suspicious for metastatic lesions from a hidden malignancy. To rule out a malignancy of the nodules, the patient underwent an 18F-FDG PET/CT scan (fig. 2). The PET/CT scan indicated intense 18F-FDG activity in the left lower lobe (standard uptake value = 6.1), corresponding to the largest pulmonary nodule on the CT image. The degree of activity was highly suspicious for malignancy. The rest of the nodules in the lung fields did not show any uptake on the PET/CT scan. There were no metastases to other organs or bone lesions anywhere.
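For context, the standardized uptake value reported above is conventionally computed as the decay-corrected tissue activity concentration normalized to the injected activity per unit body weight (a standard definition, not spelled out in this report):

\mathrm{SUV} = \frac{C_{\mathrm{tissue}}\ (\mathrm{kBq/mL})}{\mathrm{injected\ activity}\ (\mathrm{kBq}) \,/\, \mathrm{body\ weight}\ (\mathrm{g})}

An SUV of 6.1 therefore indicates roughly sixfold concentration of tracer in the lesion relative to a uniform whole-body distribution, which is well above the 2.5 threshold often cited for suspected malignancy.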
The patient underwent open lung biopsy to investigate the possibility of malignancy. Surgical exploration revealed widespread palpable nodules present on the surface of the left lung. Two small nodules from the apical and anterior segment of the left upper lobe, and the lingular segment of the left lower lobe, were wedged out and sent for frozen section procedure. The histopathologic findings of the two nodules were benign (fig. 3a). Then, the largest mass in the posterior segment of the left lower lobe was wedged out.
Histologically, all the resected nodules contained massive deposition of homogeneous eosinophilic amorphous material with focal calcification. The eosinophilic material stained positive with Congo red (fig. 3c) and showed apple-green birefringence under polarizing microscopy, features pathognomonic of amyloidosis (fig. 3d). In addition to the amyloid material, minimally invasive adenocarcinoma (mainly) and a papillary predominant (focally) growth pattern were also found in the largest mass from the left lower lobe, which was the nodule with intense 18F-FDG uptake. Afterwards, the patient was investigated for evidence of myeloma or plasma cell dyscrasias. All subsequent investigations, including serum and urine protein electrophoresis and immunofixation, bone marrow biopsy and immunohistochemistry, were normal. Until now, the patient has been followed up regularly with serial CT scans for 9 months. Her condition has remained unchanged, without significant clinical, physiological or radiological deterioration or evidence of systemic amyloidosis or recurrence of the adenocarcinoma.
Literature Search
We searched for previous cases of patients with histologically proven pulmonary amyloidosis who had undergone 18F-FDG PET/CT in the following databases: PubMed, EmBase and the ISI Web of Science. The search was limited to the period from the year 2000 to October 2014, and to human studies and English-language publications. In the PubMed database, the search words were 'amyloidosis', 'pulmonary', 'lung', 'PET/CT', and '18F-FDG'. Corresponding words were used in the EmBase database and the ISI Web of Science.
Radiologically, the nodular parenchymal pattern appeared as solitary (36%) or multinodular (64%) infiltrates in any lobe. Nodules ranged in diameter from satellite nodularity to 5.5 cm. Thirty-three patients had a positive 18F-FDG uptake on PET (table 1), whereas the maximum standard uptake value ranged from 1.2 to 15 (with a mean of 4.7, n = 21), and in 71% of cases it was >2.5. Histopathologically, pulmonary involvement in amyloidosis can be associated with mucosa-associated lymphoid tissue lymphoma, plasma cells, giant cells, and other immunoreactive cells (macrophages, monocytes and lymphocytes).
Discussion
Amyloidosis is a group of disorders characterized by extracellular deposition of proteins in a β-pleated sheet fibrillar form. The most common presentations are nephrotic syndrome, idiopathic peripheral neuropathy, cardiomyopathy, and unexplained hepatomegaly [1]. Pulmonary involvement rarely causes symptoms unless gas exchange in alveolar structures is severely affected by amyloid deposits [2,18,24]. Histologically, amyloid deposits are identified on the basis of eosinophilic amorphous deposits which take up Congo red stain and typically exhibit apple-green birefringence when examined under polarized light [1,7,22,24,25].
Pulmonary nodules are seen in various pulmonary diseases, including tumor, tuberculosis and infection, but amyloidosis is infrequently considered as a candidate in the differential diagnosis of such lesions. The incidence of pulmonary amyloidosis is unclear because many cases are diagnosed incidentally during open lung biopsy or at autopsy. Quaia et al. [12] reported 76 patients with pulmonary tumor(s) suspected for malignancy between 2004 and 2006, and only 1 case was identified with amyloidosis.
For pulmonary amyloidosis, there are three types of location: parenchymal nodules (nodular parenchymal form), diffuse interstitial deposits (diffuse alveolar septal form) or submucosal deposits in the airways (tracheobronchial form) [2]. Calcification is common [15,19,20,23], and a chest CT scan may show calcified deposits in over one third of patients with pulmonary amyloidosis [26]. In this study, the patient presented with multiple amyloid nodules with partial calcification in both lung fields on CT scan (fig. 1).
Nodular pulmonary amyloidosis is characterized by single or multiple parenchymal nodules or masses. The size of the nodules varies from a few millimeters to several centimeters. It is usually a silent disease, found incidentally on chest radiographs in asymptomatic, older individuals [17-19, 24, 25]. It may show a slow progression of increased size or number of nodules but does not always reveal a restrictive pattern of lung function or impairment of gas exchange. The natural history of nodular pulmonary amyloidosis is associated with a relatively benign prognosis [2,18,19,21,25].
Because of its nodular appearance, nodular pulmonary amyloidosis is usually misconstrued as a neoplasm. The differential diagnoses of multiple nodules include a broad spectrum of etiologies: infections, pneumoconiosis, tumor, sarcoidosis, rheumatoid arthritis, and other uncommon illnesses such as amyloidosis or pulmonary alveolar microlithiasis [2,27].
Several case reports mention patients with nodular pulmonary amyloidosis who underwent 18F-FDG PET/CT, most of them showing an increased FDG uptake (table 1), but absent FDG uptake has also been mentioned (table 2). A recently published study by Glaudemans et al. [22] evaluated the role of 18F-FDG PET/CT in a group of patients with both systemic and localized amyloidosis; 18F-FDG uptake was seen in all patients with localized amyloidosis, but in none of the patients with systemic amyloidosis. The giant cells in localized amyloidosis may participate in the transformation of the soluble full-length light chains into insoluble fibrils [31]. The high amounts of serum amyloid P make the amyloid deposits unavailable for inflammatory phagocytic cells in systemic amyloidosis [32]. Two studies reported by Baqir et al. [20,24] showed that mucosa-associated lymphoid tissue lymphoma was associated with pulmonary amyloid and 18F-FDG uptake, which might be due to plasma cell differentiation [33].
One case report by Miyazaki et al. [34] demonstrated the association of pulmonary amyloidosis with adenocarcinoma, but without manifestation on PET scan. In the current case, our patient had a cough of 2 months' duration with no other symptoms. Being a nonsmoker, as well as the normal physical examination and negative routine workup results, suggested a nonmalignant condition. However, due to the multiple bilateral pulmonary nodules found on chest CT and the need to exclude a neoplastic process, a PET/CT scan was initially performed. Intense focal 18F-FDG uptake of the largest nodule in the left lower lobe on the PET/CT scan indicated the possibility of malignancy. However, the other nodules did not show an increased 18F-FDG uptake on PET/CT imaging and were thought to be benign. Later, an open-lung biopsy was performed. Although the histopathologic findings of two nodules without increased 18F-FDG uptake suggested a benign lesion, a subsequent biopsy of the nodule with increased 18F-FDG uptake was still clinically required; it was later shown to be composed of adenocarcinoma and amyloid deposition. The nodules with a normal 18F-FDG uptake on PET/CT showed amyloid deposition. Thus, for multiple pulmonary nodules, the differential diagnosis should include malignant neoplasm, and histological confirmation is mandatory. Our study emphasizes the importance of screening multiple pulmonary amyloid nodules with PET/CT in the case of suspected malignancy before a biopsy is undertaken.
A nodular pattern of amyloid deposition surrounding the adenocarcinoma in the lung is occasionally seen [34]. Little is known about the association between amyloidosis and the neoplastic condition. Intratumoral amyloid deposition may contribute to the pathogenesis of the neoplastic condition [35]. Carcinoma-associated antigens might induce the deposition of the amyloidogenic immunoglobulin light chain and nodular lesions [36]. Considering that the lung cancer in our patient was present in only one of the multiple pulmonary amyloidosis nodules, the amyloid deposition has probably developed before the neoplastic condition.
In conclusion, our case of pulmonary amyloidosis presenting with multiple nodules showed that PET/CT can be useful in disease management and decision-making before a biopsy is undertaken. Pulmonary amyloidosis needs to be added to the differential diagnosis when multiple pulmonary nodules show different 18F-FDG uptakes on PET/CT.
[Fig. 3 caption, in part: The largest nodule in the left lower lobe consists of minimally invasive adenocarcinoma (mainly) and a papillary predominant growth pattern (focally), as well as massive interstitial deposition of homogeneous eosinophilic amorphous material. (c) Congo red staining of the lung lesion; the amorphous eosinophilic material is colored pink or red with the use of the Congo red stain. (d) The eosinophilic material showed apple-green birefringence under polarizing microscopy when stained with Congo red, consistent with amyloidosis.]
"year": 2014,
"sha1": "e6dee0bca880ec062ff5d4ac62b68eb54a842eff",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/369112",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e6e722235142dbe314d574d6aa12107ae56b89e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Morphometric analysis of chronicity on kidney biopsy: a useful prognostic exercise
ABSTRACT
Chronic changes on kidney biopsy specimens include increasing amounts of arteriosclerosis, glomerulosclerosis, and interstitial fibrosis and tubular atrophy, as well as enlarged nephron size and reduced nephron number. These chronic changes are difficult to assess accurately by visual inspection but are reasonably quantified using morphometry. This review describes the various patient populations that have undergone morphometric analysis of kidney biopsies. The common approaches to morphometric analysis are described. The chronic kidney disease outcomes associated with various chronic changes by morphometry are also summarized. Morphometry enriches the characterization of chronicity on a kidney biopsy, and this can supplement the pathologist's diagnosis. Artificial intelligence image processing tools are needed to automate the annotations needed for practical morphometric analysis of kidney biopsy specimens in routine clinical care.
INTRODUCTION
Morphometry is a technique utilized for the analysis of the spatial distribution and size of tissue structures. In practice, it is accomplished using quantitative image analysis by first annotating, with a computer program, the different microstructures on whole slide images of stained histological sections of tissue biopsies magnified with light microscopy [1]. The annotations are typically outlines of the individual microstructures seen for a particular class (e.g. glomerular tufts, proximal tubules, or arteries) on whole slide images of tissue biopsies. Annotations can also be applied to the ultrastructures on tissue biopsies magnified with electron microscopy images. Computational pathology uses these computer-assisted electronic annotations to determine counts and areas for each microstructure or ultrastructure. The counts and areas are then used to estimate quantitative measures such as the size or density of different structures on the tissue biopsy. As these annotations are typically two-dimensional, stereological models are often used to estimate the three-dimensional properties of the microstructures from two-dimensional annotations. A common problem is the variable orientation of tubular structures on two-dimensional sections. For example, the minor axis of the tubule profile can be used to approximate the true diameter of a tubular structure (e.g. proximal tubule diameter) [2]. Another approach is to average across multiple structures if the orientations are reasonably 'random', such that the area of these individual structures is reflective of the average orientation (e.g. average cross-sectional tubular area) [3].
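As a concrete sketch of the minor-axis approach just described, the snippet below estimates mean tubular diameter from annotated profiles. It is a minimal illustration rather than the pipeline of any cited study; the input format (a list of minor-axis lengths extracted from elliptical profile annotations) is an assumption.

import statistics

def estimate_tubule_diameter_um(minor_axes_um):
    """Estimate the mean true tubular diameter from 2-D profile annotations.

    A tubule modeled as a cylinder appears elliptical when cut obliquely:
    the section stretches the profile along one axis only, so the minor
    axis of each profile approximates the true diameter. Averaging over
    many profiles smooths out irregular, non-cylindrical shapes.
    """
    if not minor_axes_um:
        raise ValueError("no tubular profiles annotated")
    return statistics.mean(minor_axes_um)

# Example: minor axes (micrometers) measured on four proximal tubule profiles
print(estimate_tubule_diameter_um([52.1, 48.7, 55.0, 50.3]))  # 51.525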
Kidney histological sections have gained a particular interest in the application of morphometric analyses due to the characteristic organization of nephrons, the spatial distribution of different microstructures, and the biological importance of the size and density of microstructures [4].
[Fig. 1 caption: (A) An example of the annotations needed to estimate % globally sclerotic glomeruli (GSG), with GSG traced in red and non-sclerotic glomeruli (NSG) in cyan; the %GSG is calculated as the number of GSG divided by the total number of glomeruli. (B) An example of the annotations needed to estimate % interstitial fibrosis and tubular atrophy (IFTA), with IFTA traced in black and cortex traced in green; the %IFTA is calculated as the area of all IFTA foci divided by the cortex area, and the IFTA foci density as the count of all IFTA foci divided by the cortex area (per cm²). (C) An example of the annotation needed to estimate %luminal stenosis, with lumen traced in yellow and intimal thickening traced in red; arteriosclerosis is assessed by %luminal stenosis from intimal thickening, calculated as the area of intima divided by the combined areas of intima and lumen.]
Morphometry has been widely applied in kidney tissue to quantify chronic changes in the glomerular, tubulointerstitial, and vascular compartments and to monitor progression in patients with repeat biopsies [4]. Figure 1 is an example of the annotations needed to estimate % globally sclerotic glomeruli (GSG), % interstitial fibrosis and tubular atrophy (IFTA), and % artery luminal stenosis from arteriosclerosis (intimal thickening). An important advantage of applying morphometry to detect chronic changes is standardization. In particular, the common scoring of chronic changes on kidney biopsy is often inaccurate, and there is limited agreement between different pathologists scoring chronic changes [5-7]. Morphometry can also provide a continuous score that detects subtle variation in mild chronic changes missed by visual inspection. For example, %IFTA of 2% versus 8% is often grouped together as <10% by visual inspection scoring [8]. This is further complicated by the need for thresholds that increase with age for chronic changes that distinguish abnormal from normal [9].
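The indices in Fig. 1 reduce to simple ratios of annotated counts and areas. The sketch below illustrates the arithmetic only; it assumes counts and areas (here in mm²) have already been extracted from the whole slide image annotations, and the function names are hypothetical.

def percent_gsg(n_gsg, n_nsg):
    """%GSG = number of GSG / total number of glomeruli."""
    return 100.0 * n_gsg / (n_gsg + n_nsg)

def percent_ifta(ifta_area_mm2, cortex_area_mm2):
    """%IFTA = summed area of IFTA foci / annotated cortex area."""
    return 100.0 * ifta_area_mm2 / cortex_area_mm2

def ifta_foci_density_per_cm2(n_foci, cortex_area_mm2):
    """IFTA foci per cm^2 of cortex (100 mm^2 = 1 cm^2)."""
    return n_foci / (cortex_area_mm2 / 100.0)

def percent_luminal_stenosis(intima_area_mm2, lumen_area_mm2):
    """%stenosis = intimal area / (intimal + luminal) area."""
    return 100.0 * intima_area_mm2 / (intima_area_mm2 + lumen_area_mm2)

# Example biopsy: 4 GSG among 20 glomeruli; 1.2 mm^2 of IFTA across 6 foci
# within 15 mm^2 of cortex; an artery with 0.05 mm^2 intima, 0.03 mm^2 lumen.
print(percent_gsg(4, 16))                    # 20.0 %
print(percent_ifta(1.2, 15.0))               # 8.0 %
print(ifta_foci_density_per_cm2(6, 15.0))    # 40.0 foci/cm^2
print(percent_luminal_stenosis(0.05, 0.03))  # 62.5 %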
While morphometry is used as a research tool, it has not been routinely used in clinical workflows to evaluate kidney biopsies, as it is tedious and time consuming. Morphometry may also oversimplify kidney structures into a set of quantitative measures that do not account for all important pathological findings of the structures. For example, an estimate of percentage globally sclerotic glomeruli (GSG) does not account for whether the GSG have a solidification or ischemic subtype [10]. This is clinically important, as solidification is always due to disease, whereas ischemic GSG also occur in healthy aging [9]. Morphometry can also be inaccurate due to biopsy quality, over- or under-staining, or an inadequate tissue sample to make precise measures. However, these same factors can also affect kidney biopsy assessments by visual inspection. This review focuses on the application of morphometry to clinical biopsies of the kidney and its prognostic significance. The review was based on both our knowledge of this field and a literature search using PubMed, Ovid MEDLINE, and Google Scholar. The following search terms were used: ('Renal Biopsy') AND ('morphometry') AND ('prognosis' OR 'chronicity' OR 'diagnosis' OR 'management' OR 'treatment').
KIDNEY BIOPSY MORPHOMETRY STUDY TYPES
Kidney biopsy morphometry has been applied to study different specific patient populations (Table 1). Morphometry has also been used to study microstructures and pathology that are prognostic for kidney failure or related outcomes (Table 2). Reviewing published studies, Periodic acid-Schiff (PAS) stained biopsy images appear to be the most commonly used in analyses, followed by Masson's trichrome stained biopsy images. Staining with PAS has been generally preferred by most studies, as stain quality is more uniform across different histopathologic laboratories. While more labor intensive and requiring experienced laboratory technicians, Jones' silver stain is preferred by some pathologists for %IFTA [11,12]. Of particular interest is whether morphometric measures are prognostic for outcomes independent of concurrent clinical assessment of kidney function (particularly GFR and proteinuria) and CKD risk factors (particularly hypertension, diabetes, and obesity). Such analyses are helpful for determining whether microstructural pathology is prognostic for outcomes along pathways not well detected by current clinical assessments of kidney health. These analyses clarify the added value of kidney biopsy in the prognostic assessment of various patient populations. The unselected general population would be of particular interest with respect to prognosis with various kidney microstructural morphometry measures on kidney biopsy. However, due to the invasive nature of kidney biopsies, such a study is not feasible. All human kidney biopsy studies require some clinical justification for obtaining a kidney biopsy. This inherently leads to selection bias, as abnormal kidney function (particularly proteinuria) influences the selection of patients that undergo a kidney biopsy. Living kidney donation and radical nephrectomy for kidney tumor are the unique settings in medicine where a kidney biopsy can be obtained intraoperatively (low risk of bleeding complication) and obtained in patients not selected on abnormal kidney function. As both these populations undergo a nephrectomy, repeat kidney function assessment over time after the nephrectomy often occurs as part of follow-up care.
Living kidney donors are subjected to a thorough predonation examination of their overall and kidney health status before the actual donation. Given the requirement of normal kidney function and the presence of relatively low chronic kidney disease (CKD) risk factor burden prior to kidney donation, donors provide a particularly useful setting for understanding the normal age-related changes that occur in healthy kidneys. Kidney tumor patients that undergo a radical nephrectomy have more CKD risk factors and abnormal kidney function as a population than living kidney donors. Similar to donors, they are also not selected on abnormal kidney function to justify a kidney biopsy. Large wedge sections of the non-tumor parenchyma from radical nephrectomy specimens allow unique study of kidney tissue specimens with 20-fold more cortex than the typical needle core biopsies. Large wedge sections also allow for morphometric study of microstructural patterns that may vary by depth, a factor that is difficult to discern from needle core biopsies.
NEPHRON SIZE AND NUMBER
Several studies have utilized morphometric evaluation of both kidney biopsy images and computed tomography or magnetic resonance kidney images to calculate the nephron number from the density of glomeruli in the cortex multiplied by the total volume of cortex per kidney [13-17]. Low nephron number predicts adverse kidney function outcomes in both living kidney donors and tumor patients [16,18]. Nephron number and nephron size have a reciprocal relationship due to compensatory enlargement of nephrons with low nephron endowment or nephron loss due to aging and disease. Because radiographic kidney imaging that can accurately delineate cortical volume is often not available, nephron number is not available in most patients that undergo kidney biopsy. While renal pathology reports will comment on glomerulomegaly when severe, more subtle manifestations of nephron enlargement may go unnoticed by visual inspection alone. Morphometry can estimate nephron size in a more standardized and quantitative manner. In particular, nephron size can be assessed by measures of non-sclerosed glomerular volume or by cortex volume per non-sclerosed glomerulus (the reciprocal of glomerular density). Morphometric assessment of glomerular volume and glomerular density can be performed from the areas of glomerular profiles and cortex using the Weibel-Gomez stereological model, though this approach effectively assumes the sizes of different glomeruli are relatively similar [19]. An explanation of the underlying math used to calculate the mean volume of glomeruli on a kidney biopsy section is shown in Fig. 2 [17].
[Fig. 2 caption: The yellow box shows the derivation of 1.382, the coefficient for spheres; if glomeruli are modeled as spheres, the mean glomerular profile area is used as the mean slice area. The darker gray shaded box shows how glomerular volume is calculated. The formula also uses a coefficient of 1.01 to account for an estimated coefficient of variation of 10% for glomerular diameters across multiple glomeruli in a patient [18].]
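Restating the Fig. 2 calculation in equation form (the 1.382 coefficient follows from standard sphere geometry, and the 1.01 correction reflects the stated 10% coefficient of variation for glomerular diameters [17,18]): for a sphere of radius r, the mean area of random parallel slices is

\bar{A} = \tfrac{2}{3}\pi r^{2} \;\Rightarrow\; V = \tfrac{4}{3}\pi r^{3} = \tfrac{4}{3}\pi \left( \frac{3\bar{A}}{2\pi} \right)^{3/2} \approx 1.382\, \bar{A}^{3/2}

and hence the biopsy-level estimate is

\bar{V}_{\mathrm{glom}} \approx 1.382 \times \bar{A}_{\mathrm{profile}}^{\,3/2} \times 1.01

where \bar{A}_{\mathrm{profile}} is the mean glomerular profile area measured on the section.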
There are several clinical characteristics associated with nephron number and nephron size. Low birth weight is associated with low nephron endowment and enlarged nephrons (larger glomerular volume and lower glomerular density) [20-23]. In a large study of living kidney donors and kidney tumor patients, the clinical characteristics that independently associated with larger glomerular volume were family history of end-stage kidney disease (ESKD), male sex, tall stature, obesity, diabetes, and proteinuria [24]. Larger glomeruli were also associated with more globally sclerotic glomeruli and with modest increases in interstitial fibrosis, consistent with compensatory enlargement of remaining nephrons with nephrosclerosis [24]. An autopsy study also found diabetes and hypertension independently associated with larger glomerular volume [25]. In biopsy-proven hypertensive nephropathy, low glomerular density correlated with overt proteinuria [26]. The association of enlarged nephrons (larger glomerular volume or lower glomerular density) with higher BMI or obesity has been well described in many studies and patient populations [27-29]. An increase in single nephron GFR is also associated with enlarged glomeruli [30].
Glomerular volume and its clinical associations can also vary by cortical depth. A study of large wedge sections from kidney tumor patients, spanning from the capsule to the corticomedullary junction, found that glomerular volume was largest in the mid cortical region and smallest in the superficial region [31]. Taller stature, obesity, hypertension, diabetes, proteinuria, and current smoking are associated with larger glomerular volume at all cortical depths, but obesity is more strongly associated with glomerular volume in the superficial cortex [31]. Glomerular volume by depth was somewhat different in an autopsy kidney study, where the largest glomerular volume was in the deep cortex [32]. However, among patients with preserved kidney function and nephron number, glomerular volume was larger in both the middle and deep cortex compared to the superficial cortex [32]. There is evidence that enlarged nephrons and low nephron number are associated with a worse kidney prognosis in a variety of patient populations. Larger glomerular volume and low nephron number predicted a low GFR early and long-term after kidney donation, and graft failure in the recipient [16,18,33,34]. Larger glomerular volume also predicted outcomes such as progressive CKD in several studies with different populations [3,5,35]. Table 3 summarizes morphometric measures of chronic changes and their prediction of adverse kidney outcomes in living kidney donors, kidney transplant recipients, kidney tumor patients, and native kidney disease patients.
Glomerular volume can be assessed at the tuft or the capsule level. The Bowman's (urinary) space between the capsule and tuft can also be studied morphometrically. One study in living kidney donors measured the cross-sectional areas and volumes of glomeruli at both the capsule and tuft levels and defined a ratio between the two (glomerular tuft volume divided by Bowman's capsule volume, or G/B) [36]. While the G/B ratio did not associate with the risk factors linked to large glomerular volume (e.g., obesity) [36], patients with nephrosclerosis and a low G/B ratio were at increased risk for progressive CKD [37]. A low G/B ratio (i.e., a relatively increased Bowman's space) may reflect glomerular hyperfiltration beyond that detected by enlarged glomerular volume alone.
PODOCYTE MORPHOMETRY
Podocyte morphometry usually involves podocyte count per glomerular tuft, podocyte density, and podocyte volume. With aging, most podocyte loss is due to loss of whole glomeruli rather than a decrease in podocyte counts among the remaining glomeruli [38]. In living kidney donors, hypertension and aging were associated with lower podocyte count; however, hypertension alone associated with lower podocyte density and larger podocyte volume, independent of age [38]. Among normal kidneys at autopsy, those with more nephrons had more podocytes per glomerulus as well as higher podocyte density. While podocyte counts did not differ by cortical depth, podocyte density was higher in superficial glomeruli because both the glomeruli and the podocytes were smaller there [32]. Older age was associated with lower podocyte counts, particularly in superficial glomeruli [32], a finding that parallels the higher frequency of glomerulosclerosis with older age among superficial, but not deep, glomeruli [31]. Further studies are needed to understand the prognostic implications of podocyte morphometry.
GLOMERULOSCLEROSIS
The %GSG is perhaps the one morphometric measure that is routinely reported in clinical practice. Glomerular counts can be reported per section or as a total count across serial sections, but undercounting of glomeruli by visual inspection alone is a common problem. A morphometric approach ensures standardized counting of glomeruli and inclusion of partial counts for glomeruli bisected by the biopsy needle [16]. Global glomerulosclerosis is evident in both aging and kidney disease. Among kidney donors, age associated much more strongly with %GSG than did hypertension [39]. It is perhaps not well appreciated that a smaller amount of cortex tissue on a needle core biopsy is itself associated with more glomerulosclerosis on the biopsy [40]. This occurs because loss of nephrons itself leads to smaller cortical tissue samples, though clinical skill and chance are often thought to be the only reasons for inadequate cortex on a needle core biopsy. The upper reference limit (defined as the 95th percentile) for the number of globally sclerosed glomeruli (GSG) increases with older age, as determined from normotensive living kidney donor biopsies [41]. These thresholds can help distinguish patients who have glomerulosclerosis due to CKD rather than aging alone. With age, glomerulosclerosis occurs more in the superficial cortex and is accompanied by non-sclerosed glomeruli with an ischemic appearance (capillary wrinkling and capsule thickening) [31]. Low eGFR, hypertension, and interstitial fibrosis associate with glomerulosclerosis at all cortical depths, whereas diabetes associates more strongly with glomerulosclerosis in the deeper cortex [31]. Another study of autopsy kidneys found that diabetics without hypertension had more glomerulosclerosis in the deep cortex [25].
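A minimal sketch of standardized %GSG counting follows; the half-weight credited to glomeruli bisected by the biopsy needle is one plausible convention, not a published standard, and the counts are hypothetical.

```python
def percent_gsg(whole_gsg, whole_nsg, partial_gsg=0, partial_nsg=0, partial_weight=0.5):
    """Percent globally sclerotic glomeruli, crediting glomeruli bisected by the
    biopsy needle with a fractional count (0.5 here is an assumed convention)."""
    gsg = whole_gsg + partial_weight * partial_gsg
    total = gsg + whole_nsg + partial_weight * partial_nsg
    return 100.0 * gsg / total

# Hypothetical biopsy: 4 whole GSG, 18 whole NSG, plus 2 bisected NSG profiles
pct = percent_gsg(whole_gsg=4, whole_nsg=18, partial_nsg=2)
print(f"%GSG = {pct:.1f}%")  # compare against an age-specific 95th-percentile limit
```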
Numerous studies have linked higher %GSG to a higher risk of adverse kidney outcomes in a variety of patient populations [21, 22, 42, 43]. In patients whose kidney biopsy diagnosis was 'benign nephrosclerosis', %GSG and proteinuria were the most significant predictors of a 30% decline in eGFR from baseline [44]. Among nephrotic syndrome patients, an increased risk of progressive CKD was only evident when the %GSG exceeded age-based thresholds for GSG [41, 45]. Among kidney allografts at a 5-year surveillance biopsy, both a higher %GSG and a higher percentage of ischemic-appearing glomeruli were predictive of subsequent allograft loss [46].
INTERSTITIAL FIBROSIS AND TUBULAR ATROPHY
The severity of IFTA is an important prognostic indicator of chronic changes on kidney biopsy, yet it is often inaccurately and imprecisely scored by visual inspection [5]. IFTA occurs on a continuum, from mild forms with basement membrane thickening, little atrophy, and minimal surrounding interstitial fibrosis, to mature, severe tubular atrophy with basement membrane disruption and substantial surrounding interstitial fibrosis. Annotation of IFTA is tedious; one approach is to identify and annotate clusters of IFTA foci, where atrophic tubules are bunched together and surrounded and connected by interstitial fibrosis. Among the standard stains obtained on clinical biopsies, trichrome-stained sections are often used to morphometrically assess the severity of IFTA. One study found collagen III staining optimal for morphometry, with better interobserver reproducibility than scoring by visual inspection [7]. The %IFTA (percentage of the cortex area that is IFTA) is often viewed as the best biopsy assessment of CKD; it is notable that it often correlates only modestly with eGFR [47, 48]. When needle core kidney biopsies are mostly or only medulla, it is worth noting that morphometric assessments of %IFTA in cortex and in medulla are correlated, particularly on PAS staining (r = 0.85) [49].
While morphometric assessment of IFTA has largely focused on %IFTA (area of IFTA divided by area of cortex), the IFTA foci count density (number of IFTA foci divided by area of cortex) appears to be just as important for assessing chronic changes in the kidney and their prognosis. One study morphometrically assessed different patterns of IFTA and inflammation on wedge sections from kidney tumor patients, including %IFTA, IFTA foci density, %striped IFTA, %subcapsular IFTA, %inflammation, and %subcapsular inflammation [50]. After adjusting for %IFTA, inflammation outside of IFTA predicted a higher risk of CKD progression and non-cancer mortality, while subcapsular inflammation predicted a higher risk of non-cancer mortality [50]. However, neither inflammation outside of IFTA nor subcapsular inflammation predicted outcomes after further adjustment for kidney function and CKD risk factors [50]. In renal allografts, inflammation within IFTA is currently regarded as a component of chronic active T cell-mediated rejection [51]. In kidney tumor patients, inflammation within IFTA did not predict outcomes independent of %IFTA [50]. A striped pattern of IFTA may reflect chronic ischemia from calcineurin inhibitor toxicity [52]. In kidney tumor patients, striped IFTA likewise did not predict outcomes independent of %IFTA [50].
After adjusting for %IFTA and clinical characteristics, only increased IFTA foci density predicted progressive CKD [50]; in other words, at the same %IFTA severity, patients with more numerous small, scattered IFTA foci had a greater risk of progressive CKD than those with fewer, larger foci. The IFTA foci density also correlated more strongly with older age, with lower cortical thickness on biopsy, and with lower cortical volume on CT or MRI imaging. Loss of nephrons is a dynamic process, and foci of IFTA progressively atrophy, leading to contraction of the kidney cortex. Because progressive atrophy of IFTA foci both decreases %IFTA and increases IFTA foci density, the two measures are complementary rather than redundant in assessing IFTA severity. This was further confirmed in a morphometric study of chronic changes on native kidney biopsies [5]. Table 4 summarizes studies of IFTA via manual morphometry.
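The complementarity of %IFTA and IFTA foci density can be made concrete with a small sketch; the foci areas below are hypothetical.

```python
def ifta_metrics(foci_areas_mm2, cortex_area_mm2):
    """Return (%IFTA, IFTA foci count density per cm^2 of cortex)."""
    pct_ifta = 100.0 * sum(foci_areas_mm2) / cortex_area_mm2
    foci_density = len(foci_areas_mm2) / (cortex_area_mm2 / 100.0)  # 100 mm^2 = 1 cm^2
    return pct_ifta, foci_density

# Same %IFTA, different prognosis: many small scattered foci vs. a few large ones
small_scattered = [0.05] * 40   # 40 foci of 0.05 mm^2 each (total 2.0 mm^2)
few_large = [1.0, 0.6, 0.4]     # 3 foci totalling the same 2.0 mm^2
for foci in (small_scattered, few_large):
    pct, dens = ifta_metrics(foci, cortex_area_mm2=20.0)
    print(f"%IFTA = {pct:.0f}%, foci density = {dens:.0f} per cm^2")
```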
ARTERIOSCLEROSIS
Arteriosclerosis (due to fibrointimal thickening) and arteriolar hyalinosis lead to nephron ischemia and nephrosclerosis. Morphometric assessment of arteriosclerosis can be based on the severity of luminal stenosis from intimal thickening. One approach is to divide the cross-sectional area of the intima by the combined area of intima and lumen to obtain the % luminal stenosis from intimal thickening [53]. When multiple arteries are present, the severity of arteriosclerosis can be averaged across arteries, or the artery with the most severe arteriosclerosis can be used for analyses. More work is needed to determine which approach to quantifying arteriosclerosis is the most prognostic. In other approaches, intimal thickening is calculated by comparing the thickness of the intima to that of the media in the same segment of the vessel [26]. Importantly, not all needle core kidney biopsies will contain a medium-to-large artery from which luminal stenosis can be assessed by morphometry. The orientation of artery profiles and partial arteries (bisected by the biopsy needle) can also bias the assessment of arteriosclerosis. One approach to tangential sectioning of blood vessels is not to correct for vessel orientation in the calculation but to use the available arteries most orthogonal to the plane of the biopsy. This approach has been used successfully to predict outcomes [53].
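A minimal sketch of the % luminal stenosis calculation described above, with hypothetical intima and lumen areas, reporting both the mean and the most severely affected artery:

```python
def percent_luminal_stenosis(intima_area, lumen_area):
    """% luminal stenosis from intimal thickening = intima / (intima + lumen)."""
    return 100.0 * intima_area / (intima_area + lumen_area)

# Hypothetical artery profiles (areas in um^2) as (intima, lumen) pairs
arteries = [(5200.0, 7800.0), (3100.0, 9400.0)]
stenoses = [percent_luminal_stenosis(i, l) for i, l in arteries]
print(f"mean = {sum(stenoses) / len(stenoses):.0f}%, worst = {max(stenoses):.0f}%")
```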
Morphometric assessment of arteriolar hyalinosis is even less developed. The Banff criteria classify arteriolar hyalinosis into descriptive categories: no hyalinosis; mild-to-moderate hyaline thickening in at least one arteriole; moderate-to-severe PAS-positive hyaline thickening in more than one arteriole; or severe PAS-positive hyaline thickening in many arterioles [54]. This approach effectively combines the number of involved arterioles with the severity of hyalinosis among them. The non-specific nature of arteriolar hyalinosis in kidney allografts and the possibility of chronic calcineurin inhibitor toxicity complicate its prognostic interpretation [55]. Other approaches consider only the proportion of arterioles exhibiting any hyalinosis, categorized as <5%, 5%-25%, and >25% [26]. Arteriolar hyalinosis can be further subclassified into concentric or focal (partial) lesions [5]. In living kidney donors, higher levels of arteriosclerosis on implantation biopsies, as measured by morphometric luminal stenosis from intimal thickening, were associated with hypertension at a short-term follow-up visit [18]. However, in another study, the long-term risk of CKD and incident hypertension were predicted by neither morphometric luminal stenosis nor arteriolar hyalinosis [34]. Among kidney tumor patients, after accounting for clinical characteristics (particularly age), morphometric artery luminal stenosis did not predict progressive CKD [3]. However, kidney allograft loss in kidney recipients was predicted by increased morphometric luminal stenosis of arteries [54]. Morphometric measures of arteriolar hyalinosis are better predictors of progressive CKD in native kidney disease patients than is artery luminal stenosis from intimal thickening [5]. Kidney cortex thinning from arteriosclerosis can also affect the quality of a kidney biopsy; in particular, the presence and severity of arteriosclerosis by morphometric luminal stenosis are increased when there is less cortex on a needle core biopsy [40].
CONCLUSION
Morphometry has proved to be a useful tool for quantifying chronic changes (essentially CKD) on kidney biopsy specimens that are prognostic for adverse kidney events such as kidney failure or a progressive decline in eGFR. Advantages of morphometry are better accuracy and reproducibility than visual assessment, and a saved, auditable record of the annotations used to quantify the severity of structural pathology. The optimal combination of morphometric measures for assessing CKD prognosis appears to include %glomerulosclerosis (GSG, ischemic glomeruli, and segmental sclerosis), %IFTA, IFTA foci density, and the presence of any arteriolar hyalinosis. This combination of measures is superior to commonly used visual-inspection chronicity scores in predicting progressive CKD and ESKD [5]. A major limitation is that morphometry is tedious and time-consuming.
Implementation in clinical practice will likely require automation. In particular, artificial intelligence (AI) models are needed to automate the annotation of microstructures, followed by computer programs that perform the morphometric and stereological calculations. Current limitations of such approaches include small, single-institution datasets, which can limit generalizability. Collaboration between institutions to share data can help improve the algorithms, leading to better-performing models. In addition, quality control of the AI-generated annotations by pathologists is needed, given the wide spectrum of pathologies and artifacts that occur on whole-slide images of kidney biopsy sections. Future studies are needed to determine whether the added clinical and prognostic information from morphometric analysis of kidney biopsy images is of sufficient value to justify the computational costs and quality-control efforts of applying AI models within a clinical practice workflow.
Figure 1: Example of morphometry to assess nephrosclerosis. (A) An example of the annotations needed to estimate % globally sclerotic glomeruli (GSG), with GSG traced in red and non-sclerotic glomeruli (NSG) in cyan. The %GSG is calculated as the number of GSG divided by the total number of glomeruli. (B) An example of the annotations needed to estimate % interstitial fibrosis and tubular atrophy (IFTA), with IFTA traced in black and cortex traced in green. The %IFTA is calculated as the area of all IFTA foci divided by the cortex area. The IFTA foci density is calculated as the count of all IFTA foci divided by the cortex area (per cm²). (C) An example of the annotations needed to estimate % luminal stenosis, with lumen traced in yellow and intimal thickening traced in red. Arteriosclerosis is assessed by % luminal stenosis from intimal thickening, calculated as the area of intima divided by the combined areas of intima and lumen.
Figure 2: Calculation of glomerular volume. A hypothetical example of a biopsy with five glomerular profiles is shown. The light gray shaded box shows the Weibel and Gomez stereology model for random slices of spheres [17]. The yellow box shows the derivation of 1.382, the coefficient for spheres. If glomeruli are modeled as spheres, the mean glomerular profile area is used as the mean slice area. The darker gray shaded box shows how glomerular volume is calculated. This formula also uses a coefficient of 1.01 to account for an estimated coefficient of variation of 10% in glomerular diameters across the glomeruli of a patient [18].
Table 3: Risk of CKD outcomes with morphometric measures of chronic changes across different populations.
Lepton parameters in the see-saw model extended by one extra Higgs doublet
We investigate the radiative generation of lepton masses and mixing angles in the Standard Model extended by one right-handed neutrino and one extra Higgs doublet. We assume approximate rank-1 Yukawa couplings at a high energy scale and we calculate the one loop corrected charged lepton and neutrino mass matrices at the low energy scale. We find that quantum effects generate, for typical high energy parameters, a hierarchy between the muon and the tau mass, a hierarchy between the solar and the atmospheric mass splittings, and a pattern of leptonic mixing angles in qualitative agreement with experiments.
Introduction
The origin of the fermion mass hierarchies and mixing angles remains one of the biggest mysteries in Particle Physics. Moreover, the progress in the determination of the neutrino mass splittings and mixing angles over the last decade, far from illuminating the mystery, has drawn a picture showing striking differences between the neutrino parameters and the quark or charged lepton parameters [1,2]. The puzzle is three-fold: i) neutrino masses are much smaller than the quark and charged lepton masses; ii) the mass hierarchy in the neutrino sector is milder than in the quark and charged lepton sectors; iii) the entries of the leptonic mixing matrix are all $O(0.1)$, while the entries of the quark mixing matrix display the strong hierarchy $|V_{ub}|, |V_{cb}| \ll |V_{us}|$.
A plausible explanation for the smallness of the neutrino masses compared to the quark or charged lepton masses consists in introducing heavy Majorana right-handed neutrinos, much heavier than the electroweak symmetry breaking scale [3]. This is the renowned see-saw mechanism. In this framework, the overall neutrino mass scale depends parametrically on the square of the neutrino Yukawa coupling and on the inverse of the right-handed neutrino mass. Therefore, the see-saw mechanism makes the generic prediction that neutrino masses should be much smaller than the quark or charged lepton masses, although the precise value cannot be predicted. Furthermore, due to the high scale of lepton flavour violation, this framework predicts tiny rates for the rare lepton decays [4], in agreement with the stringent experimental bounds $\mathrm{BR}(\mu \to e\gamma) \le 5.7 \times 10^{-13}$ [5], $\mathrm{BR}(\tau \to \mu\gamma) \le 4.4 \times 10^{-8}$ [6], $\mathrm{BR}(\tau \to e\gamma) \le 3.3 \times 10^{-8}$ [6]. Whereas the see-saw model is a compelling framework to explain the smallness of neutrino masses, it tends to generate too large neutrino mass hierarchies [7]. This drawback of the see-saw mechanism is automatically cured by introducing a second Higgs doublet. As discussed in [8,9], even if the tree-level neutrino masses display very large hierarchies, quantum effects induced by the second Higgs doublet generate a mild hierarchy between the heaviest and next-to-heaviest neutrino masses which is, for well motivated choices of the high energy parameters, in qualitative agreement with the value inferred from oscillation experiments.
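To illustrate the parametric see-saw scaling just described, a minimal numerical sketch follows; the factor of 2 is a convention-dependent O(1) choice, and the Yukawa coupling and heavy mass are hypothetical inputs, not values from this paper.

```python
# Parametric see-saw estimate: m_nu ~ y_nu^2 v^2 / (2 M), up to O(1) convention factors.
v = 246.0      # electroweak vev in GeV
y_nu = 0.5     # hypothetical neutrino Yukawa coupling
M = 1.0e14     # hypothetical right-handed neutrino mass in GeV

m_nu_GeV = y_nu**2 * v**2 / (2.0 * M)
print(f"m_nu ~ {m_nu_GeV * 1e9:.3f} eV")  # 1 GeV = 1e9 eV; gives ~0.08 eV here
```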
The generation of the mass hierarchies of the quark sector in the two Higgs doublet model was investigated in [10]. Assuming tree-level Yukawa couplings with very hierarchical eigenvalues (such that they can in practice be considered rank-1), quantum effects generate, for generic high energy parameters, a mass hierarchy between the second and third generations of $O(0.01)$, in qualitative agreement with the data. Moreover, assuming aligned tree-level quark mass matrices, quantum effects generate $|V_{us}| = O(0.1)$ and $|V_{ub}|, |V_{cb}| \ll |V_{us}|$, also in qualitative agreement with the measured values.
It was also pointed out in [10] that a similar mechanism would radiatively generate a muon mass in the presence of right-handed neutrinos (see also [11]). In this paper we carefully explore the generation of charged lepton and neutrino masses, as well as the leptonic mixing angles, in a simple scenario where the Standard Model is extended by right-handed neutrinos and one extra Higgs doublet. In section 2 we present the model and calculate the lepton masses and mixings in a simplified scenario with just one right-handed neutrino, in which all Yukawa couplings are rank-1 at the cut-off scale and hence only one generation has non-vanishing tree-level masses. In section 3 we discuss the impact of quantum effects on the leptonic parameters, finding rank-2 charged lepton and neutrino mass matrices. In section 4 we briefly address the generation of leptonic rare decays in this framework and, finally, in section 5 we present our conclusions.

The two Higgs doublet model extended with right-handed neutrinos: Tree level results

We consider in this paper an extension of the Standard Model consisting in adding one extra Higgs doublet and at least one right-handed neutrino, singlet under the Standard Model gauge group. We do not impose any global or discrete symmetry on the model. The flavour-dependent part of the leptonic Lagrangian then contains the Yukawa couplings $Y^{(a)}_e$ and $Y^{(a)}_\nu$ of the lepton doublets to the two Higgs doublets, together with a Majorana mass term for the right-handed neutrinos, where $i, j = 1, 2, 3$ are flavour indices, $a = 1, 2$ is a Higgs index and $\tilde\Phi_a = i\tau_2 \Phi^*_a$. It is convenient to work in the Higgs basis where one of the Higgs fields, say $\Phi_2$, does not acquire a vacuum expectation value, so that $\langle\Phi^0_1\rangle = v/\sqrt{2}$, with $v = 246$ GeV, and $\langle\Phi^0_2\rangle = 0$. In this basis, the charged lepton and Dirac neutrino masses are proportional to the Yukawa couplings $Y^{(1)}_e$ and $Y^{(1)}_\nu$; we also choose a basis where the right-handed neutrino mass matrix is diagonal and real. We assume that the mass scale of the right-handed neutrinos is much larger than both the electroweak symmetry breaking scale and the masses of all the extra Higgs states $H^0$, $A^0$, $H^\pm$, which we denote collectively by $m_H$. Hence, at the energies relevant to current experiments, the right-handed neutrinos are decoupled and the theory can be conveniently described by an effective Lagrangian containing dimension-5 operators with coefficients $\kappa^{(ab)}$, which at the scale of the lightest right-handed neutrino mass take the standard see-saw form, $\kappa^{(ab)} \propto Y^{(a)}_\nu M_{\rm maj}^{-1} Y^{(b)\,T}_\nu$. With our choice of basis for the leptonic and Higgs fields, the neutrino mass matrix at the scale of the lightest right-handed neutrino mass thus depends just on the coupling $\kappa^{(11)}$, and is diagonalized in the standard way.

We consider in what follows a scenario with one right-handed neutrino of mass $M_{\rm maj}$, in which all the Yukawa couplings are rank-1 at the cut-off scale $\Lambda$, with $\Lambda > M_{\rm maj}$. By an appropriate choice of basis for the leptonic fields, the rank-1 Yukawa couplings to the Higgs $\Phi_1$ can be written such that their non-trivial directions are misaligned by a single angle $\alpha$. On the other hand, the charged lepton Yukawa coupling to the Higgs $\Phi_2$ must take the most general form of a rank-1 matrix, with elements $Y^{(2)}_{e,ij} = y^{(2)}_e (E_L)^*_{3i} (E_R)_{3j}$, where $E_{L,R}$ are $3\times 3$ unitary matrices; hence only the last row of each unitary matrix is relevant, which we parametrize by the angles $\theta_{e_L}$, $\omega_{e_L}$ (and similarly for $E_R$). Besides, the neutrino Yukawa coupling to the Higgs $\Phi_2$ must take the most general form of a column matrix, parametrized by $y^{(2)}_\nu$ and the angles $\theta_\nu$, $\omega_\nu$. Analogous parametrizations for the quark Yukawa couplings can be found in [10]. In what follows we neglect all phases for simplicity.
With this parametrization of the couplings at the cut-off scale $\Lambda$, it is straightforward to calculate the tree-level charged lepton and neutrino masses: only the third-generation charged lepton and one neutrino acquire non-vanishing masses, as expected from the rank-1 structure. The leptonic mixing matrix, on the other hand, is not unambiguously defined, since an internal rotation between the first two generations of charged lepton or neutrino fields leaves the Lagrangian invariant. As a result, only the 33 element of the mixing matrix is defined, which reads $U_{33}|_{\rm tree} = \cos\alpha$.
Quantum effects on charged lepton and neutrino parameters
We consider now the impact of quantum effects on the scenario presented in the previous section, in which the Standard Model is extended with one extra Higgs doublet and one right-handed neutrino, under the assumption that all Yukawa couplings are rank-1 at a cut-off scale $\Lambda \gg M_{\rm maj}$. The one-loop corrected Yukawa couplings at the scale where the right-handed neutrino decouples are obtained from the tree-level couplings plus logarithmic corrections proportional to the beta functions given in Appendix A. As discussed in [10], the one-loop corrected Yukawa coupling $Y^{(1)}_e(M_{\rm maj})$ is rank-2, hence radiatively generating a non-vanishing muon mass. Besides, as a consequence of the breaking of the mass degeneracy between the first and second generations of leptons, the matrices that diagonalize the Yukawa coupling become unambiguously fixed. More specifically, casting $Y^{(1)}_e = U_{e_L}\,{\rm diag}(y_e, y_\mu, y_\tau)\, U^\dagger_{e_R}$, one obtains the one-loop Yukawa eigenvalues and the unitary matrices $U_{e_L}$ and $U_{e_R}$.

At the scale $M_{\rm maj}$ we redefine the leptonic fields to make the Yukawa coupling to the Higgs $\Phi_1$ diagonal; we denote the couplings in this basis by $\tilde Y^{(a)}$. The decoupling of the heavy right-handed neutrino generates dimension-5 operators, which can be calculated by matching the full theory to the effective theory at the scale $M_{\rm maj}$. The resulting coefficient is rank-1 and therefore has only one non-vanishing eigenvalue. The Yukawa couplings $\tilde Y^{(a)}$ are then run down to the scale $m_H$, while the unitary matrices $U_{e_L}$, $U_{e_R}$ at the scale $m_H$ are approximately equal to the corresponding matrices at the scale $M_{\rm maj}$. On the other hand, the coupling $\kappa^{(11)}(M_{\rm maj})$ has two degenerate (vanishing) eigenvalues, and quantum effects can significantly alter the structure of the mass matrix. Indeed, as discussed in [8,9], quantum effects in the two Higgs doublet model increase the rank of this coupling, thus breaking the degeneracy among the eigenvalues and unambiguously fixing the angles of the mixing matrix. Using the results of [8], and again neglecting subleading effects, one finds that the radiatively generated eigenvalue is proportional to $\lambda_5$, the coupling constant of the potential term $V \supset \frac{1}{2}\lambda_5 (\Phi^\dagger_1 \Phi_2)^2 + {\rm h.c.}$, while the columns of the leptonic mixing matrix are determined by the neutrino Yukawa couplings $\tilde Y^{(a)}_\nu$, all evaluated at the scale $M_{\rm maj}$.

It is now straightforward to calculate the leading contributions to the charged lepton and neutrino masses, as well as to the leptonic mixing matrix, in terms of the parameters defining the Yukawa couplings at the cut-off scale, $Y^{(a)}_x(\Lambda)$, $x = e, \nu, u, d$. The ratio between the muon and tau masses is given in terms of two functions $P$ and $Q$, linear combinations of functions $p_x$, $q_x$ of the angles that diagonalize the Yukawa matrices $Y_x(\Lambda)$, which are explicitly given in Appendix B. The ratio between the masses of the heaviest and next-to-heaviest neutrinos is controlled by $\lambda_5$ and by $\cos\psi$, which measures the misalignment between the vectors $Y^{(2)}_\nu$ and $Y^{(1)}_\nu$ and is given in terms of the high energy parameters by $\cos\psi \equiv \cos\alpha\cos\theta_\nu + \sin\alpha\sin\theta_\nu\cos\omega_\nu$. Finally, we find for the mixing angles:
$$\tan\theta_{12} \simeq \frac{\sin\zeta_{e_L}\cos\alpha\,(\cos\alpha\sin\theta_\nu\cos\omega_\nu - \sin\alpha\cos\theta_\nu) - \cos\zeta_{e_L}\sin\theta_\nu\sin\omega_\nu}{\cos\alpha\sin\theta_\nu\,(\cos\zeta_{e_L}\cos\omega_\nu + \sin\zeta_{e_L}\sin\omega_\nu) - \cos\zeta_{e_L}\sin\alpha\cos\theta_\nu}\,,$$
$$\sin\theta_{13} \simeq -\sin\zeta_{e_L}\sin\alpha\,, \qquad \tan\theta_{23} \simeq \cos\zeta_{e_L}\tan\alpha\,,$$
where $\tan\zeta_{e_L} = P/Q$. In the case $\alpha = 0$ one recovers the zeroth-order structure of the CKM matrix derived in [10].
Therefore, in this framework the different pattern of mixing angles in the quark and lepton sectors is related to the amount of misalignment of the tree-level (rank-1) Yukawa couplings to the Higgs $\Phi_1$, parametrized by a single angle $\alpha$.
The ratio between the muon and the tau masses depends on various high-energy parameters; nonetheless, it is possible to estimate its typical size under reasonable assumptions about the model parameters. For generic values of the angles that diagonalize the Yukawa couplings $Y^{(a)}_x$, one finds $p_x, q_x = O(0.1)$ for all $x = e, \nu, u, d$. Moreover, assuming $y^{(2)}_x \sim y^{(1)}_x$, we expect $P$ and $Q$ to be dominated by the up-type Yukawa couplings or, possibly, by the neutrino Yukawa coupling if the right-handed neutrino mass is sufficiently large; note that in the former case the mass ratio is enhanced by a larger logarithm. Setting for concreteness $\log(\Lambda/M_{\rm maj}) \sim 10$ and $\log(\Lambda/m_H) \sim 30$, we find a ratio between the muon and the tau mass in qualitative agreement with the observed value. We show in Fig. 1, left plot, the probability distributions of $m_\mu/m_\tau$ in logarithmic binning, obtained from a random scan of the angles (with flat distributions between 0 and $2\pi$) and fixing for concreteness the Yukawa couplings, the right-handed neutrino mass, and $m_H = 10^4$ GeV. It was also assumed that the muon-mass generation is dominated either by quantum effects induced by the right-handed neutrino (blue line) or by the top quark (red line), with the corresponding couplings fixed in each case. The probability distributions for other values of the Yukawa couplings can be straightforwardly determined by an appropriate rescaling of the horizontal axis of the figure. The large measured hierarchy between the muon and tau masses suggests that the radiative generation of the muon mass is dominated by the right-handed neutrino in the loop, due to the shorter renormalization group running; nevertheless, the muon mass could also be generated by the top loop for appropriate choices of the Yukawa couplings. Assuming again $y^{(2)}_\nu \sim y^{(1)}_\nu$ and $\lambda_5 = O(1)$, we find $m_2/m_3 = O(0.1)$, also in qualitative agreement with experimental data. This is confirmed by our numerical analysis, shown in Fig. 1, right plot, where we present the probability distribution of $m_2/m_3$ in logarithmic binning for fixed values of the remaining parameters. Lastly, concerning the expected values of the leptonic mixing angles in this framework, we first note that the angle $\zeta_{e_L}$ is generically neither maximal nor small, as follows from the relation $\tan\zeta_{e_L} = P/Q$, thus leading to a sizable charged-lepton contribution to the neutrino mixing. Then, assuming $\sin\alpha = O(0.1)$, all three mixing angles are expected to be neither maximal nor small, also in qualitative agreement with global fits to neutrino oscillation experiments, which give as central values $\sin\theta_{12} \simeq 0.55$, $\sin\theta_{13} \simeq 0.15$ and $\sin\theta_{23} \simeq 0.64$ (assuming normal mass hierarchy) [2].
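A sketch of the random-scan machinery behind such probability distributions is shown below, applied only to the mixing-angle relations quoted above ($\sin\theta_{13} \simeq -\sin\zeta_{e_L}\sin\alpha$, $\tan\theta_{23} \simeq \cos\zeta_{e_L}\tan\alpha$); the "neither maximal nor small" windows are our illustrative choices, not the paper's.

```python
import math
import random

random.seed(1)
samples = []
for _ in range(100_000):
    alpha = random.uniform(0.0, 2.0 * math.pi)   # flat scan of high-energy angles
    zeta = random.uniform(0.0, 2.0 * math.pi)
    s13 = abs(math.sin(zeta) * math.sin(alpha))  # |sin(theta_13)|
    t23 = abs(math.cos(zeta) * math.tan(alpha))  # |tan(theta_23)|
    samples.append((s13, t23))

# Illustrative windows for "neither maximal nor small" mixing
frac = sum(1 for s13, t23 in samples if 0.05 < s13 < 0.5 and 0.2 < t23 < 5.0) / len(samples)
print(f"fraction of scan points with sizable but non-maximal mixing: {frac:.2f}")
```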
The model contains a large number of high-energy parameters; nevertheless, it is possible to determine the parameters $\alpha$ and $\zeta_{e_L}$ unambiguously in terms of the low-energy observables $\sin\theta_{13}$ and $\tan\theta_{23}$. From the expressions for the mixing angles above it follows that
$$\sin^2\zeta_{e_L} \simeq \frac{\sin^2\theta_{13}\,(1 + \tan^2\theta_{23})}{\sin^2\theta_{13} + \tan^2\theta_{23}}\,.$$
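The reconstructed formula can be checked numerically against the mixing-angle relations above; the minimal sketch below confirms that the identity holds exactly for arbitrary $\alpha$ and $\zeta_{e_L}$.

```python
import math

def check(alpha, zeta, tol=1e-12):
    # Mixing-angle relations from the text
    s13 = -math.sin(zeta) * math.sin(alpha)
    t23 = math.cos(zeta) * math.tan(alpha)
    # Inversion formula: sin^2(zeta) from sin(theta_13) and tan(theta_23)
    lhs = math.sin(zeta) ** 2
    rhs = s13**2 * (1.0 + t23**2) / (s13**2 + t23**2)
    return abs(lhs - rhs) < tol

print(all(check(a, z) for a in (0.1, 0.4, 1.0) for z in (0.3, 0.7, 1.2)))  # True
```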
The measured leptonic mixing angles can then be easily accommodated in this framework for generic high energy Yukawa structures, namely Yukawa matrices with mixing angles which are neither maximal nor small.
Lepton Flavour Violation
Any model of neutrino masses predicts a non-vanishing rate for the charged lepton flavour violating decays. In the Standard Model extended by right-handed neutrinos and one extra Higgs doublet, and working as usual in the basis where the charged lepton and right-handed neutrino mass matrices are flavour diagonal, the Yukawa couplings to the second Higgs doublet and the heavy-neutrino interactions break lepton flavour and induce the rare decays $\ell_i \to \ell_j \gamma$ via quantum effects, suppressed by powers of the heavy Higgs mass or the heavy neutrino mass, respectively. In the framework of fermion mass generation discussed in this paper $M_{\rm maj} \gg m_H$, therefore we expect the leptonic rare decays to be dominated by diagrams involving the coupling $\tilde Y^{(2)}_e$. The value of this coupling at low energies approximately reads $\tilde Y^{(2)}_e \simeq U^\dagger_{e_L} Y^{(2)}_e U_{e_R}$, with $U_{e_L}$, $U_{e_R}$ the unitary matrices determined in the previous section; an explicit calculation yields $\tan\zeta_{e_R} = \tan\omega_{e_R}$. For generic values of the high-energy mixing angles, the elements of the second and third columns of this matrix are comparable. We therefore consider for concreteness the process $\mu \to e\gamma$, which is the most strongly constrained by experiments. For a wide range of parameters, the largest contributions to this process are two-loop Barr-Zee diagrams [12,13] with a top quark in the loop. Assuming $m_H \gg v$, the branching ratio depends on the top quark mass $m_t$, on the Yukawa couplings $Y^{(a)}_{u33}$ of the top quark to the two Higgs doublets (expressed in the basis where $Y^{(1)}_u$ is diagonal), on the coupling constant $\lambda_6$ of the potential term $V \supset \lambda_6 (\Phi^\dagger_1\Phi_1)(\Phi^\dagger_1\Phi_2)$, and on a loop function $f(z)$ defined in [13], which evaluates to $f(2) \approx 1$. Assuming again $y^{(2)}_{e,u} \sim y^{(1)}_{e,u}$, the non-observation of the process $\mu \to e\gamma$ typically requires the particles of the extended Higgs sector to have masses larger than $\sim 10$ TeV, unless the couplings $y^{(2)}_{e,u}$ and $\lambda_6$ are suppressed. A large mass scale for the exotic Higgs states, however, barely affects the leptonic mass ratios and mixing angles calculated in section 3, which depend at most logarithmically on $m_H$.
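For intuition about the $1/m_H^4$ scaling of the branching ratio (see also the conclusions), one can scale a reference rate against the MEG bound. The reference normalization below is a hypothetical placeholder, not the value of the paper's explicit formula, which was not reproduced here.

```python
# Scaling BR(mu -> e gamma) ~ m_H^-4 against the experimental bound 5.7e-13.
BR_ref = 1.0e-9      # hypothetical branching ratio at the reference mass below
m_ref_TeV = 1.0      # hypothetical reference heavy-Higgs mass in TeV

bound = 5.7e-13
m_required_TeV = m_ref_TeV * (BR_ref / bound) ** 0.25
print(f"m_H >~ {m_required_TeV:.1f} TeV")  # ~6.5 TeV with these placeholder inputs
```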
Conclusions
We have studied the impact of quantum effects on the leptonic parameters in the Standard Model extended by one right-handed neutrino and one extra Higgs doublet. No additional discrete or global symmetry was assumed. We have shown that, starting with rank-1 Yukawa matrices at a cut-off scale, quantum effects generate masses for the muon and for the next-to-heaviest neutrino which are in qualitative agreement with the experimental data. The radiatively induced mass hierarchies depend mostly on the eigenvalues and flavour structure of the tree-level Yukawa matrices and, in the case of the neutrino mass hierarchy, also on the value of the quartic coupling $\lambda_5$. The dependence on the mass scales of the model is, on the other hand, only logarithmic and hence fairly mild. Furthermore, in this framework a leptonic mixing matrix with all entries $O(0.1)$ is generically expected, namely with mixing angles that are neither maximal nor small, also in qualitative agreement with neutrino oscillation experiments. Similar conclusions are expected in models with more right-handed neutrinos and/or rank-3 Yukawa matrices, provided the tree-level mass hierarchies are much larger than the radiatively generated ones. This framework contains new sources of lepton flavour violation that induce rare leptonic decays, such as $\mu \to e\gamma$, with a rate that depends on the inverse of the extra Higgs mass to the fourth power. On the other hand, the fermionic mass ratios depend only logarithmically on this mass; therefore, by sufficiently increasing the mass of the exotic scalar states, it is possible to suppress the rates for the rare decays without significantly affecting the mechanism of generation of the fermionic mass hierarchies and mixing angles discussed in this paper.
Appendix A: Beta functions
The one-loop beta functions of the Yukawa couplings $Y^{(a)}_{e,\nu}$ are defined from the renormalization group equations $\mu\,\mathrm{d}Y^{(a)}_{e,\nu}/\mathrm{d}\mu = \beta_{Y^{(a)}_{e,\nu}}$ (up to the conventional loop factor), where $\mu$ is the renormalization scale. They were calculated for the multi-Higgs doublet model in [14]; summation over repeated indices is understood, and $\Theta$ denotes the Heaviside function, which takes into account the fact that for $\mu < M_{\rm maj}$ the right-handed neutrinos are decoupled. For energy scales below the right-handed neutrino Majorana mass, the theory is described by the charged lepton Yukawa couplings $Y^{(a)}_e$ and by the dimension-5 operators $\kappa^{(ab)}$, whose renormalization group equations and beta functions were also calculated in [14]; these involve the quartic couplings $\lambda$ of the Higgs potential.
Appendix B: Functions P and Q
The functions $P$ and $Q$, defined in section 3, determine the ratio between the muon and the tau mass, as well as the contribution to the leptonic mixing from the charged lepton sector through $\tan\zeta_{e_L} = P/Q$. They are linear combinations of the functions $p_x$ and $q_x$, $x = e, \nu, u, d$, which are explicitly given by:
$$p_e = \tfrac{1}{4}\sin 2\theta_{e_L}\sin 2\theta_{e_R}\sin\omega_{e_L}\,, \qquad q_e = \tfrac{1}{4}\sin 2\theta_{e_L}\sin 2\theta_{e_R}\cos\omega_{e_L}\,,$$
$$p_\nu = -2\cos\theta_{e_L}\sin\theta_{e_R}\cos\alpha\sin\theta_\nu\sin\omega_\nu + \sin\theta_{e_L}\sin\theta_{e_R}\left[\cos\alpha\sin\omega_{e_L}\cos\theta_\nu + \sin\alpha\sin\theta_\nu\left(\sin\omega_{e_L}\cos\omega_\nu - 2\cos\omega_{e_L}\sin\omega_\nu\right)\right]\,,$$
$$q_\nu = -2\cos\theta_{e_L}\sin\theta_{e_R}\cos\alpha\sin\theta_\nu\cos\omega_\nu + \sin\theta_{e_L}\sin\theta_{e_R}\cos\omega_{e_L}\left(\cos\alpha\cos\theta_\nu - \sin\alpha\sin\theta_\nu\cos\omega_\nu\right)\,,$$
$$p_u = 3\cos\theta_{u_L}\cos\theta_{u_R}\sin\theta_{e_L}\sin\theta_{e_R}\sin\omega_{e_L}\,, \qquad q_u = 3\cos\theta_{u_L}\cos\theta_{u_R}\sin\theta_{e_L}\sin\theta_{e_R}\cos\omega_{e_L}\,,$$
$$p_d = 3\cos\theta_{d_L}\cos\theta_{d_R}\sin\theta_{e_L}\sin\theta_{e_R}\sin\omega_{e_L}\,, \qquad q_d = 3\cos\theta_{d_L}\cos\theta_{d_R}\sin\theta_{e_L}\sin\theta_{e_R}\cos\omega_{e_L}\,.$$
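For numerical scans, the Appendix B expressions can be transcribed directly; the function below is a minimal sketch (the argument names are ours, and phases are neglected as in the text).

```python
import math

def p_q_functions(th_eL, th_eR, om_eL, alpha, th_nu, om_nu, th_uL, th_uR, th_dL, th_dR):
    """Appendix B functions p_x, q_x for x = e, nu, u, d (all phases neglected)."""
    s, c = math.sin, math.cos
    p_e = 0.25 * s(2 * th_eL) * s(2 * th_eR) * s(om_eL)
    q_e = 0.25 * s(2 * th_eL) * s(2 * th_eR) * c(om_eL)
    p_nu = (-2 * c(th_eL) * s(th_eR) * c(alpha) * s(th_nu) * s(om_nu)
            + s(th_eL) * s(th_eR) * (c(alpha) * s(om_eL) * c(th_nu)
              + s(alpha) * s(th_nu) * (s(om_eL) * c(om_nu) - 2 * c(om_eL) * s(om_nu))))
    q_nu = (-2 * c(th_eL) * s(th_eR) * c(alpha) * s(th_nu) * c(om_nu)
            + s(th_eL) * s(th_eR) * c(om_eL) * (c(alpha) * c(th_nu)
              - s(alpha) * s(th_nu) * c(om_nu)))
    p_u = 3 * c(th_uL) * c(th_uR) * s(th_eL) * s(th_eR) * s(om_eL)
    q_u = 3 * c(th_uL) * c(th_uR) * s(th_eL) * s(th_eR) * c(om_eL)
    p_d = 3 * c(th_dL) * c(th_dR) * s(th_eL) * s(th_eR) * s(om_eL)
    q_d = 3 * c(th_dL) * c(th_dR) * s(th_eL) * s(th_eR) * c(om_eL)
    return {"p_e": p_e, "q_e": q_e, "p_nu": p_nu, "q_nu": q_nu,
            "p_u": p_u, "q_u": q_u, "p_d": p_d, "q_d": q_d}

# Generic O(1) angles typically give p_x, q_x = O(0.1), as stated in the text
print(p_q_functions(0.5, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7, 0.8, 0.9, 1.0))
```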
"year": 2014,
"sha1": "1bebcb398409c5785678381caa4ba2991d47db62",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2014)089.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "1bebcb398409c5785678381caa4ba2991d47db62",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Developing a New Generation of Therapeutic Dental Polymers to Inhibit Oral Biofilms and Protect Teeth
Polymeric tooth-colored restorations are increasingly popular in dentistry. However, restoration failures remain a major challenge, and more than 50% of all operative work is devoted to removing and replacing failed restorations. This is a heavy burden, with the expense for restoring dental cavities in the U.S. exceeding $46 billion annually. In addition, the need is increasing dramatically as the population ages and seniors retain more of their natural teeth. Traditional materials for cavity restorations are usually bioinert and merely replace the decayed tooth volume. This article reviews cutting-edge research on the synthesis and evaluation of a new generation of bioactive dental polymers that not only restore the decayed tooth structures but also have therapeutic functions. These materials include polymeric composites and bonding agents for tooth cavity restorations that inhibit saliva-based microcosm biofilms, bioactive resins for treating tooth root caries, polymers that can suppress periodontal pathogens, and root canal sealers that can kill endodontic biofilms. These novel compositions substantially inhibit biofilm growth, greatly reduce acid production and polysaccharide synthesis by biofilms, and reduce biofilm colony-forming units by three to four orders of magnitude. This new class of bioactive and therapeutic polymeric materials is promising for inhibiting tooth decay, suppressing recurrent caries, controlling oral biofilms and acid production, protecting the periodontium, and healing endodontic infections.
Introduction
Tooth caries is a widespread problem in the world. More than half of all dental restorations fail within 10 years, and recurrent (secondary) caries is a main reason for the failures [1-3]. Replacing failed restorations accounts for 50-70% of all tooth cavity restorations performed [4]. This represents a large economic burden; for example, the annual expense for restoring tooth cavities in the U.S. was $46 billion in 2005 [5]. In addition, the expense is rapidly climbing because of an aging population with longer life expectancies, as seniors retain more and more of their natural teeth [6]. Tooth-colored polymeric composites and bonding agents are the primary materials for restoring tooth cavities [3,7-12], because advances in polymer chemistry and filler particle compositions have enhanced the properties of composite restorations [13-18]. However, one key disadvantage is that polymeric composite materials tend to accumulate more oral biofilms than other dental materials such as metals and ceramics [19]. Oral biofilms ferment carbohydrates and produce acids that can lead to dental caries [20,21]. Therefore, researchers have devoted effort to synthesizing new antibacterial polymers for dental applications [22-27]. In general, antibacterial dental resins and composites can be divided into two classes. Class 1 uses polymerizable quaternary ammonium methacrylates (QAMs), in which the antibacterial agent is covalently bonded into the polymer network. Class 2 incorporates filler particles with antibacterial activity into the polymer matrix. For Class 1, QAMs were developed and incorporated into dental polymeric materials [22,23]. The first such material, 12-methacryloyloxydodecyl-pyridinium bromide (MDPB), was copolymerized with dental polymers and provided potent anti-biofilm effects [22,23]. Since then, other antibacterial resins have also been synthesized and shown to hinder bacterial growth and biofilm formation [25,26,28-36]. For Class 2, antibacterial fillers such as silver, zinc oxide and bioglass particles were mixed into polymer matrices, in which the antibacterial effect is achieved by release of the agents [37-43]. While some studies reported sustained long-term release of ions exerting antibacterial effects [44], other studies showed that the release, and hence the antibacterial efficacy, decreased over time [42,43]. Controlled long-term release of antibacterial agents has great potential in dental applications to combat caries and oral pathogens, especially via nanotechnology and recharge and re-release mechanisms. This article focuses on Class 1 and reviews innovative developments in QAM-containing dental polymers and their exciting potential in restorative, preventive, root caries, periodontal, and endodontic applications.
Antibacterial Polymeric Dental Composites
Novel antibacterial polymeric composites have been synthesized to reduce oral biofilm acids and dental caries formation [22,23]. The antibacterial monomer MDPB was copolymerized into a resin composite, which substantially reduced glucan synthesis by Streptococcus mutans (S. mutans), a major cariogenic species, on the composite surface [45,46]. This was achieved without negatively influencing the composite's mechanical properties or degree of polymerization conversion. A separate study synthesized polymeric composites with antibacterial and fluoride-releasing properties, which caused a large decrease in S. mutans biofilm formation [26]. Another study synthesized novel nanoparticles of quaternary ammonium polyethylenimine (QPEI) and incorporated them into a polymeric composite [47]. The QPEI composite produced strong anti-biofilm activity against oral salivary bacteria in human participants in vivo [47]. In another study, researchers developed a furanone-containing composite with antibacterial functions, achieving a 16%-68% decrease in the viability of S. mutans grown on the composite surface [48].
Recently, a new class of QAMs with alkyl chain lengths (CL) from 3 to 18 was developed and mixed into dental polymers to produce composites [36]. The QAMs were synthesized using a Menschutkin method, in which a tertiary amine reacts with an organo-halide [25,49]. Five QAMs with CL values from 3 to 18 were produced. To fabricate a composite, the model polymer matrix was made of bisphenol A glycidyl dimethacrylate (BisGMA) and triethylene glycol dimethacrylate (TEGDMA) (Esstech, Essington, PA, USA) mixed at 1:1 by weight, although the method is also applicable to other polymer matrices. To render the BisGMA-TEGDMA resin light-curable, camphorquinone (0.2%) and ethyl 4-N,N-dimethylaminobenzoate (0.8%) were added. This polymer matrix was denoted BT. To develop the composite, a filler level of 50% mass fraction of silanated glass filler particles (barium boroaluminosilicate glass, median particle size = 1.4 µm, Caulk/Dentsply, Milford, DE, USA) was incorporated to improve the mechanical properties and enable use of the composite in load-bearing restorations [36]. In addition, nanoparticles of amorphous calcium phosphate (NACP) were mixed into the composite at a 20% mass fraction for the release of calcium and phosphate ions and for remineralization. Each QAM was incorporated into the composite at 3% by weight [36]. The flexural strength and elastic modulus of the composite indicated that adding 3% QAM did not compromise the mechanical properties (Figure 1A). All the QAM composites possessed mechanical properties similar to those of the composite without QAM and of a commercial control composite without antibacterial properties [36].
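A small bookkeeping sketch of the formulation just described follows; treating the BT resin as the remainder of the mass balance, split 1:1 between BisGMA and TEGDMA, is our assumption rather than a stated detail of the study.

```python
def composite_formulation(glass=0.50, nacp=0.20, qam=0.03):
    """Mass fractions from the text; the BisGMA-TEGDMA (BT) resin is assumed
    to make up the remainder, split 1:1 between BisGMA and TEGDMA."""
    resin = 1.0 - (glass + nacp + qam)
    assert resin > 0, "filler loading exceeds 100%"
    return {"glass filler": glass, "NACP": nacp, "QAM": qam,
            "BisGMA": resin / 2, "TEGDMA": resin / 2}

for component, frac in composite_formulation().items():
    print(f"{component:>12}: {100 * frac:.1f} wt%")
```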
This acid reduction could contribute to reducing tooth mineral dissolution and caries occurrence [36].
Regarding the antibacterial mechanism, the QAM-incorporated polymer composite had quaternary amine N + with positive charges which could interact with the cell membrane of the bacteria having negative charges. This could disrupt the membrane and cause cytoplasmic leakage, leading to bacterial death [30]. Other possible antibacterial mechanisms include preventing material transports across the bacterial cell membrane, interfering with signaling pathways or adhesive molecules at the bacterial wall, etc. It was suggested that quaternary ammonium materials with relatively long chains would be particularly effective with insertion into the bacterial membrane, thus inducing physical disruption to compromise the bacteria [22,23,30]. Indeed, a previous report To test the antibacterial properties, saliva from human donors was used as an inoculum to obtain oral biofilms consisting of organisms from the mouth. This enabled the use of a dental plaque microcosm biofilm model [36]. Live/dead staining assay of two-day biofilms grown on the composite surface showed that increasing the CL of the QAM in the polymeric composite strengthened the antibacterial potency, which was the greatest at CL16. Raising the CL further to 18 reduced the antibacterial activity, compared to that of CL16. This was consistent with the lactic acid results from the biofilms on the surfaces of the composites (Figure 2) [36]. The two-day microcosm biofilms grown on the two control composite surfaces yielded the greatest amounts of lactic acid. Raising the CL from 3 to 16 substantially reduced the lactic acid production, reaching the minimum acid at CL16. Therefore, CL16 appeared to possess the strongest antibacterial activity among the groups tested. For the composite with CL16, the acid production of the adherent biofilms was reduced by an order of magnitude when compared with control composites. This acid reduction could contribute to reducing tooth mineral dissolution and caries occurrence [36].
Regarding the antibacterial mechanism, the positively charged quaternary amine N⁺ of the QAM-incorporated polymer composite can interact with the negatively charged bacterial cell membrane. This can disrupt the membrane and cause cytoplasmic leakage, leading to bacterial death [30]. Other possible antibacterial mechanisms include preventing material transport across the bacterial cell membrane and interfering with signaling pathways or adhesive molecules at the bacterial wall. It was suggested that quaternary ammonium materials with relatively long chains would be particularly effective at inserting into the bacterial membrane, thereby inducing physical disruption that compromises the bacteria [22,23,30]. Indeed, a previous report
Meanwhile, tailoring and tuning of the polymeric compositions are needed to optimize the anti-biofilm, acid reduction, and mechanical and physical properties of dental composites.
Figure 2.
Lactic acid production by two-day dental plaque microcosm biofilms on the composites vs. QAM amine alkyl chain length (CL) (mean ± SD; n = 6). The polymeric composite using CL16 had the strongest anti-biofilm activity. Values indicated by dissimilar letters are statistically significantly different from each other (p < 0.05). Adapted from [36], with permission from © 2015 Springer Nature.
Antibacterial Dental Bonding Agents
Bonding agents are used clinically to adhere the restoration to enamel and dentin, enabling the restoration to sustain repeated chewing forces in the oral environment without detachment. However, the weakest link of the restoration is the bonded composite-tooth interface, and its failure is the primary reason for the failure of the entire restoration. Therefore, extensive efforts were made to enhance the dentin bond strength and investigate the mechanisms of the tooth-restoration bond [7,52]. Studies indicated that it would be advantageous for the bonding agent to be antibacterial in order to suppress biofilm acids and avoid caries formation at the tooth-composite margins [22,23,28,29,31]. Studies suggested that antibacterial adhesives could help eradicate the residual bacteria inside the tooth cavity, as well kill the invading bacteria due to marginal leakage, which otherwise would allow the oral bacteria to invade into the tooth-restoration margins [28,29]. Indeed, previous studies demonstrated that dental adhesives with MDPB incorporation were able to kill S. mutans growth [23,53]. In addition, methacryloxyl ethyl cetyl dimethyl ammonium chloride (DMAE-CB) was synthesized and incorporated into adhesive to inhibit bacterial growth [54]. Furthermore, antibacterial primer containing MDPB was also developed, which demonstrated Lactic acid production by two-day dental plaque microcosm biofilms on the composites vs. QAM amine alkyl chain length (CL) (mean ± SD; n = 6). The polymeric composite using CL16 had the strongest anti-biofilm activity. Values indicated by dissimilar letters are statistically significantly different from each other (p < 0.05). Adapted from [36], with permission from © 2015 Springer Nature.
Antibacterial Dental Bonding Agents
Bonding agents are used clinically to adhere the restoration to enamel and dentin, enabling the restoration to sustain repeated chewing forces in the oral environment without detachment. However, the weakest link of the restoration is the bonded composite-tooth interface, and its failure is the primary reason for the failure of the entire restoration. Therefore, extensive efforts were made to enhance the dentin bond strength and investigate the mechanisms of the tooth-restoration bond [7,52]. Studies indicated that it would be advantageous for the bonding agent to be antibacterial in order to suppress biofilm acids and avoid caries formation at the tooth-composite margins [22,23,28,29,31]. Studies suggested that antibacterial adhesives could help eradicate the residual bacteria inside the tooth cavity, as well kill the invading bacteria due to marginal leakage, which otherwise would allow the oral bacteria to invade into the tooth-restoration margins [28,29]. Indeed, previous studies demonstrated that dental adhesives with MDPB incorporation were able to kill S. mutans growth [23,53]. In addition, methacryloxyl ethyl cetyl dimethyl ammonium chloride (DMAE-CB) was synthesized and incorporated into adhesive to inhibit bacterial growth [54]. Furthermore, antibacterial primer containing MDPB was also developed, which demonstrated strong antibacterial functions [53].
In addition to MDPB, chlorhexidine (CHX) particles were mixed into a primer to obtain antibacterial properties [55]. Besides modifying commercial bonding agents with antibacterial agents, novel experimental bonding agents with antibacterial functions were also developed.
In addition to MDPB, chlorhexidine (CHX) particles were mixed into a primer to obtain antibacterial properties [55]. Besides modifying commercial bonding agents with antibacterial agents, novel experimental bonding agents with antibacterial functions have also been developed.
Figure 3. Water-aging for six months caused a decrease of 35% in dentin bond strength for the commercial control bonding agent. In sharp contrast, the novel bioactive bonding agents containing DMADDM, NAg, and NACP showed no decrease in bond strength from one day to six months of water-aging. Values indicated by dissimilar letters are significantly different from each other (p < 0.05). Adapted from [56], with permission from © 2013 Elsevier.
In addition, the bonding agent with DMADDM, NAg, and NACP incorporation possessed a strong antibacterial function with no decrease in antibacterial potency from one day to six months of water-aging (Figure 4) [56]. This is consistent with the antibacterial agent being co-polymerized and covalently bonded with the polymer network. This long-term antibacterial activity is beneficial considering that recurrent caries at the tooth-restoration margins is the primary cause of failures. By suppressing biofilm growth and reducing acids and enzymes, the antibacterial bonding agent could help suppress secondary dental caries. Furthermore, when clinical requirements prevent the complete removal of carious tissues, such as to avoid perforation of the pulp, as well as in minimal intervention dentistry [71], greater amounts of carious tissues are left in the tooth cavity. These carious tissues contain numerous residual bacteria inside the dentinal tubules of the prepared tooth cavity. The unpolymerized primer with the DMADDM anti-biofilm monomer, once applied to the tooth cavity, would have direct contact with the tooth structure when flowing into the dentinal tubules, thereby eradicating the residual bacteria in the tubules. Then, upon polymerization, the adhesive resin at the margin would be in contact with newly invading bacteria, thus inhibiting their growth into the microgaps at the tooth-restoration interfaces [72]. While the six-month water-aging study indicated that the DMADDM co-polymerization and covalent bonding with the polymer network enabled long-term antibacterial activity, further longer-term studies lasting more than two years are needed on dentin bond strength, biofilm response, and caries prevention at the margins.
Antibacterial Composite for Tooth Root Caries Treatments
Senior people generally show greater risk of developing tooth root caries due to gingival recession and reduced saliva flow [73]. Periodontitis can lead to gingival recession, which in turn exposes more and more root surfaces to the oral environment. Reduced saliva leads to more plaque buildup and less remineralization by saliva. These factors contribute to an increased risk of root caries. Root caries can be treated with Class V restorations. However, these restorations often have subgingival margins, which can provide pockets for bacterial growth that are difficult to clean, gradually leading to the loss of the periodontal attachment of the tooth. Indeed, it is well established that microbial biofilms are the primary etiological factor causing periodontitis [74]. Three species are most often found in subgingival plaque from periodontitis and peri-implantitis areas: Porphyromonas gingivalis (P. gingivalis), Prevotella intermedia (P. intermedia), and Aggregatibacter actinomycetemcomitans (A. actinomycetemcomitans) [75]. In the periodontal pockets, these bacteria can generate virulence factors that lead to the gradual loss of the alveolar bone and the bone in periapical regions [75]. In areas with progressing periodontitis, P. gingivalis can serve as a keystone pathogen and as a member of the climax community in periodontal biofilms [76]. Being able to use estrogen and progesterone as an essential source of nutrients instead of vitamin K, P. intermedia is connected with pregnancy gingivitis and periodontitis [77]. The third species, A. actinomycetemcomitans, is related to localized aggressive periodontitis. In addition to these three species, a fourth species, Prevotella nigrescens (P. nigrescens), is associated with both healthy and diseased periodontium and is biochemically comparable to P. intermedia [78]. A fifth species, Fusobacterium nucleatum (F. nucleatum), is linked to greater probing depth and periodontal ligament reduction [79]. Moreover, F. nucleatum can also promote the invasion of P. gingivalis into gingival epithelial and aortic endothelial cells [80]. Last, a sixth species, Enterococcus faecalis (E. faecalis), is mainly considered an endodontic pathogen; however, it has also been found in biofilms in these regions and in the saliva of patients with chronic forms of periodontal infection [81].
Therefore, these six species were selected in a recent study [82]. That study reported a novel polymeric composite for Class-V tooth cavity restorations with therapeutic functions to combat the six types of pathogens related to the onset and exacerbation of periodontitis [82]. The polymer matrix consisted of ethoxylated bisphenol A dimethacrylate (EBPADMA) and pyromellitic glycerol dimethacrylate (PMGDM) at a 1:1 mass ratio (referred to as EBPM). Dimethylaminohexadecyl methacrylate (DMAHDM) was added at a 3% mass fraction into the composite. Disks of the polymeric composite were transferred to a 24-well plate. Each type of bacteria was inoculated in 1.5 mL of medium at a concentration of 10⁷ CFU/mL in each well and cultured for 24 h. Then, the biofilm-disk constructs were transferred to new 24-well plates. New medium was added and the samples were cultured for another 24 h, totaling two days of culture to grow biofilms on the polymer surface [82]. Figure 5 shows the biofilm biomass after the two days of culture, measured using absorbance values at OD600 [82]. The commercial control composite and the EBPM composite with 0% DMAHDM had similar biomass values. The EBPM composite with 3% DMAHDM had much less biofilm biomass. Therefore, the DMAHDM composite diminished the biomass of the biofilms for all six types of periodontitis-related pathogens [82].
In another study, the protein-repellent agent 2-methacryloyloxyethyl phosphorylcholine (MPC) and the antibacterial agent DMAHDM were combined in the polymeric composite to inhibit periodontal pathogens [83]. Figure 6 shows the polysaccharide amounts produced by the biofilms on the composites with (A) P. gingivalis, (B) P. intermedia, (C) A. actinomycetemcomitans, and (D) F. nucleatum [83]. Biofilms on the commercial control composite and the EBPM control composite produced similar quantities of polysaccharide. In contrast, biofilms on the EBPM + 3DMAHDM + 3MPC composite produced much less polysaccharide. Hence, the EBPM + 3DMAHDM + 3MPC composite could suppress periodontal pathogens and their production of extracellular matrix [83]. Furthermore, the addition of MPC and DMAHDM into the polymeric composite did not adversely affect the mechanical properties. In addition, the use of the dual agents MPC + DMAHDM exerted substantially more potent anti-biofilm activity against periodontal pathogens than MPC or DMAHDM alone [83]. Therefore, the polymeric composite containing 3% DMAHDM and 3% MPC appeared to be the optimal composition. It showed high potential for applications in Class-V tooth cavity restorations to inhibit periodontal biofilms, reducing biofilm CFU by four orders of magnitude for all the periodontitis-related pathogens examined in that study [83].
Antibacterial Bonding Agents Inhibiting Periodontal Pathogens
Three bioactive agents (NACP for remineralization, MPC for protein-repellency, and DMAHDM for anti-biofilm activity) were combined into a polymeric bonding agent to suppress periodontal pathogens [84]. The adhesive contained PMGDM, EBPADMA, 2-hydroxyethyl methacrylate (HEMA), and BisGMA at a 45/40/10/5 mass ratio (referred to as PEHB). The dentin shear bond strength results showed that adding 30% NACP into the adhesive did not compromise the dentin bond strength, compared to the control without NACP. In addition, incorporation of 5% DMAHDM + 5% MPC into both the primer and the adhesive did not negatively influence the dentin bond strength, compared to the PEHB-NACP group without DMAHDM and MPC [84]. However, the incorporation of 5% DMAHDM + 7.5% MPC did lower the bond strengths. Therefore, a mass fraction of 30% NACP was incorporated into the adhesive, and mass fractions of 5% DMAHDM + 5% MPC were incorporated into both the primer and the adhesive [84]. Figure 7 shows the (A) metabolic activity, (B) polysaccharide, and (C) biofilm colony-forming units (CFU) for the multispecies periodontal biofilms [84]. The commercial bonding agent control and the PEHB-NACP without DMAHDM and MPC had similar CFU counts, indicating that NACP had little anti-biofilm activity. In contrast, DMAHDM or MPC alone each substantially decreased the biofilm CFU compared to the controls. Furthermore, the incorporation of 5% DMAHDM + 5% MPC resulted in the lowest metabolic activity, polysaccharide production, and biofilm CFU counts. The CFU of the periodontal biofilm grown on the PEHB + 5DMAHDM + 5MPC polymer was three orders of magnitude less than that grown on the PEHB control polymer [84].
On a clean polymer surface in the oral environment, saliva-derived proteins deposit on the polymer first, and then bacteria start to attach to the polymer. Salivary protein adsorption on the surface is a prerequisite for oral bacteria to adhere to the polymer surface [85]. This mechanism indicates that developing a protein-repellent polymer can greatly reduce biofilm growth on the polymer restoration in the oral environment. MPC is a methacrylate with a phospholipid polar group in the side chain, with the capability to reduce protein adsorption and bacterial adhesion [34]. The protein-repellency was attributed to the large amount of free water that exists around the phosphorylcholine groups in the hydrated MPC polymer, which could detach proteins [86]. Adding 5% MPC into the bonding agent decreased the amount of protein adsorption by more than an order of magnitude [84]. In addition, combining MPC with DMAHDM incorporation produced the strongest suppression of periodontal biofilms. The periodontal multi-species biofilm CFU was approximately 10⁹ counts on the control adhesive polymer. The CFU was decreased to 10⁸ counts by the use of MPC, and lowered to 10⁷ counts with the use of DMAHDM. In contrast, the CFU was reduced to only 10⁶ counts when both MPC and DMAHDM were used together in the bonding agent [84]. This synergistic reduction in biofilm growth on polymer surfaces was related to the mode of action, which was contact-inhibition [22,23]. When the negatively charged cell membrane of a bacterium contacts the positively charged quaternary amine N⁺ on the polymer, the membrane can be disrupted, causing cytoplasmic leakage [30,47].
This mechanism of contact-inhibition implied that, when the polymer surface was covered by salivary protein pellicles, the polymer surface was separated from the overlaying biofilm. This reduced the extent of contact, and hence decreased the contact-inhibition efficacy. Because of its protein-repellency, MPC helped diminish protein coverage on the polymer surface, exposing more polymer surface with quaternary amine N⁺ sites and thereby promoting the contact-inhibition ability. Therefore, the dual use of DMAHDM and MPC in the dental polymer could work synergistically to maximize the periodontal bacteria inhibition capability [84].
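To make the scale of these reductions concrete, the short sketch below (our illustration, not code from the cited studies) converts the approximate CFU counts quoted above into log₁₀ reductions; the counts are the round figures stated in the text.

```python
# Arithmetic illustration of the reported biofilm CFU reductions; the
# approximate counts below are the round figures quoted in the text.
import math

cfu = {
    "control adhesive": 1e9,
    "with MPC":         1e8,
    "with DMAHDM":      1e7,
    "MPC + DMAHDM":     1e6,
}
for name, count in cfu.items():
    reduction = math.log10(cfu["control adhesive"] / count)
    print(f"{name:17s}: {reduction:.0f} log10 reduction vs control")
# The dual agents yield a 3-log10 (1000-fold) reduction, larger than the
# 1-2 log10 achieved by either agent alone: the synergy described above.
```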
Antibacterial Polymeric Endodontic Sealers
Endodontic treatment is needed to eradicate bacterial infection in the tooth root canal, to prevent the microorganisms from impairing periapical healing and causing apical lesions [87]. Clinically, the anatomic complexity of the tooth root canal renders complete debridement of bacteria practically impossible [88]. Such persistence of bacteria in the tooth root canal often results in post-treatment disease [89]. One promising approach to address this challenge is the development of antibacterial root canal sealers with the capability to kill endodontic pathogens.
A recent study developed a bioactive endodontic sealer with good sealing ability in bonding to root dentin, indicated by a push-out strength similar to that of a commercial control without bioactive properties [90]. The push-out bond strength results for root wall dentin are shown in Figure 8A. The addition of 5% DMAHDM and 3% MPC into both the primer and the sealer paste did not adversely influence the dentin bond strength. However, when 5% DMAHDM and 4.5% MPC were incorporated together into the sealer, the push-out strength decreased. Hence, the composition of 5% DMAHDM and 3% MPC was determined to be optimal and was employed for the endodontic sealer and the primer [90]. Figure 8B shows the CFU results of the multispecies endodontic biofilms grown for 14 days on polymer samples. The commercial control group and the PEHB control polymer had similar CFU results. The addition of either DMAHDM or MPC alone reduced the endodontic biofilm CFU. The bioactive endodontic sealer containing 5% DMAHDM and 3% MPC had the lowest biofilm CFU. The 14-day endodontic biofilm CFU on the PEHB + NACP + 5DMAHDM + 3MPC polymer samples was three orders of magnitude less than that on the PEHB + NACP control polymer samples [90].
In a study on three-dimensional (3D) biofilms grown on dental polymer surfaces, the percentage of live bacteria was determined as a function of the location of the 2D cross-section inside the 3D biofilm at various distances from the polymer surface [91]. Near the surface of the polymer that contained DMAHDM, there were more dead bacteria in the biofilm. In the 3D biofilm away from the polymer surface, the percentage of live bacteria increased, likely due to a decrease in the contact-inhibition efficacy [91]. These results are consistent with the results of the DMAHDM-containing endodontic sealer, which achieved a greater reduction in the biofilm CFU at 3 days compared to the reduction at 14 days [90]. The greater killing effect at 3 days and the lesser killing effect at 14 days were likely related to the contact-killing mechanism. The compromised bacteria on the polymer surface acted as a bridge for the further adherence and growth of bacteria, and the next layer of bacteria was away from the polymer surface, with a reduced extent of contact-inhibition [92]. Therefore, the contact-inhibition mode of action would predict that the antibacterial activity against the 14-day biofilms would be decreased because of a lack of direct contact when the microbes live in the 3D biofilm structure away from the polymer surface. The 14-day biofilm model therefore represented a rigorous test of antibacterial activity. The fact that the PEHB + NACP + 5DMAHDM + 3MPC polymer was able to reduce the 14-day biofilm CFU by three orders of magnitude (Figure 8) indicates a novel bioactive endodontic sealer with an extremely potent anti-biofilm function [90]. Novel dental biomaterial development has the potential to bring tremendous benefits to treatment efficacy and quality of life [7−17,93,94]. Further investigation is needed to achieve long-lasting biofilm eradication, therapeutic effects, and tooth protection via the new bioactive dental polymeric materials, using clinically relevant experiments in the oral environment of human participants.
Figure 8. (A) The push-out bond strength values to tooth root dentin (mean ± SD; n = 6). All the groups had similar strengths, except PEHB + NACP + 5DMAHDM + 4.5MPC, which had a lower strength (p < 0.05). (B) The CFU of endodontic biofilms grown for 14 days on the endodontic sealer (mean ± SD; n = 6). In each plot, values with dissimilar letters are significantly different from each other (p < 0.05). Adapted from [90], with permission from © 2017 Elsevier.
Conclusions
Currently available dental polymeric composites and bonding agents for tooth cavity restorations are usually bioinert. Since oral bacteria and biofilms play an important role in dental caries and oral infections, a new generation of dental polymeric materials is being developed that is bioactive and possesses therapeutic effects, including antibacterial, acid-reduction, protein-repellent, and remineralization capabilities. This article reviewed cutting-edge research on the development and properties of novel antibacterial dental polymeric composites, antibacterial bonding agents, bioactive root caries composites for senior patients, adhesives that can suppress periodontal pathogens, and antibacterial and protein-repellent endodontic sealers that can kill endodontic pathogens. Substantial reductions in oral biofilm metabolic activity, acid production, biomass, and polysaccharide synthesis were achieved with the tailored polymeric compositions. Biofilm CFU counts were reduced by three to four orders of magnitude. One advantage of QAM-containing polymers is that the antibacterial agent is co-polymerized and covalently bonded with the polymer, and hence provides a long-term antibacterial function that is not leached out and lost over time. The disadvantage is that QAM polymers rely on the contact-inhibition mechanism, with reduced antibacterial efficacy when the polymer surface is covered by a layer of salivary proteins. As alluded to in the Introduction, one potential future development would be to combine strategies from Class 1 and Class 2 so that the dental polymer would possess long-term contact-inhibition as well as the release of antibacterial agents to inhibit bacteria away from the polymer surface throughout the three-dimensional biofilm. The advances in the anti-biofilm properties and therapeutic capabilities of the new generation of dental polymeric materials are expected to bring significant benefits to a wide range of restorative and preventive dental applications.
"year": 2018,
"sha1": "0293134677ebdee96a98d972b17571c91133bee8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/11/9/1747/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0293134677ebdee96a98d972b17571c91133bee8",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Enhancing the Stability of Nicotine via Crystallization Using Enantiopure Tartaric Acid Salt Formers
Crystallization of nicotine, an oil prone to degradation at room temperature, has been demonstrated to be an effective means of creating nicotine-based materials with tunable thermal properties and improved resistance to photo-induced degradation. Herein, we show that both isomers of enantiomerically pure tartaric acid are highly effective salt formers when combined with nicotine. Both salts exhibit enhanced photostability, and with a melting point of 143.1 °C, the salt prepared using d-(−)-tartaric acid possesses one of the highest melting points for a crystalline nicotine solid reported to date.
■ INTRODUCTION
Solid-state forms of active pharmaceutical ingredients (APIs) often possess enhanced stability relative to the liquid state. 1,2 The decreased stability of the liquid state can lead to shorter shelf lives and difficulties associated with safe handling and storage. For example, propofol is a liquid anesthetic on the World Health Organization's (WHO's) list of essential medicines that is known to oxidize upon oxygen exposure. 3 As such, solid-state formulations utilizing crystal engineering have been explored as a means of enhancing stability and prolonging shelf life. 4 Furthermore, Aakeröy et al. demonstrated that halogenated liquid chemicals can be stabilized in the solid state with improved properties using co-crystallization. 2 Additionally, solid-state formulations, especially crystalline forms, provide a means of elucidating important noncovalent interactions in these materials through structure determination, which can guide the design of future materials.
Nicotine, a widely consumed pharmacologically active substance, exhibits photo-induced degradation as well as degradation from ambient atmospheric air. 5 Under prolonged exposure to air or ultraviolet (UV) irradiation, pure nicotine decomposes into the primary components oxynicotine, nicotinic acid, and methylamine. 6 Conversion of the pure phase oil to the crystalline state provides a means of both tuning the thermal properties of the solid and enhancing the resistance to degradation.
Capucci et al. utilized crystal engineering principles to isolate and characterize three novel nicotine co-crystals. 7 This work demonstrated that nicotine can be isolated in the solid state with tunable properties such as melting point. While this was an important step forward in understanding and studying nicotine in solid forms, the synthesized nicotine co-crystals were created with halogenated and unsafe coformers, thus making them unsuitable for human consumption. As such, it is necessary to utilize compounds that are suitable for human consumption to create nicotine solids through crystal engineering. Previous work has been done utilizing the family of malic acid compounds, orotic acid, and gentisic acid, since these substances are listed as generally recognized as safe (GRAS) compounds by the U.S. Food and Drug Administration (FDA). 8−11 This work demonstrated a general approach for the creation of safer nicotine salts using GRAS-listed compounds.
Nicotinium bitartrate dihydrate is a commercially available nicotine salt used in many nicotine lozenges and pouches. This commercial material is produced using L-tartaric acid. Despite widespread use, its crystal structure and properties have not been reported. With the broader usage of these materials on the rise, herein we report the synthesis and structural characterization of nicotine crystallized with D-(−)- and L-(+)-tartaric acid (Scheme 1). Thermal characterization and photostability measurements on the nicotinium tartrate salts were performed to assess the performance of tartaric acid as a salt-forming agent relative to previously reported salt formers.
■ EXPERIMENTAL SECTION
Materials. (S)-Nicotine (>95%) was acquired from TCI. L-(+)-Tartaric acid (99%) and D-(−)-tartaric acid (99%) were each purchased from Alfa Aesar. Ethanol (190 proof, USP-grade) and n-heptane were purchased from Decon Laboratories, Inc. and Fisher Scientific, respectively. Dimethyl sulfoxide-d6 (D, 99.9%) and methanol-d4 (D, 99.8%) were purchased from Cambridge Isotope Laboratories, Inc. Deionized (DI) water was obtained through the use of an in-house seven-stage reverse osmosis system.
Salt Synthesis. (S)-Nicotinium bis-L-(+)-tartrate dihydrate was synthesized using a modified slow evaporation setup. L-(+)-Tartaric acid (600.4 mg, 4.0 mmol) was added to a 20 mL scintillation vial with 10 mL of DI water and 10 mL of ethanol (190 proof). (S)-Nicotine (0.32 mL, 2.0 mmol) was added. The resulting solution was vortexed for 30 s at 3000 rpm on a VWR Mini Vortexer MV I. The solution was then stored in the dark, uncapped, to allow for crystal formation while the solvent slowly evaporated. Once the solvent was about 90% evaporated, the crystalline product was collected via vacuum filtration and washed with n-heptane (3 × 5 mL) (694.7 mg, 69.69%). The yield was computed based on the formula weight of the L-tartrate salt (F.W. 498.4 g mol⁻¹).
(S)-Nicotinium bis-D-(−)-tartrate was synthesized using a modified slow evaporation setup. D-(−)-Tartaric acid (600.3 mg, 4.0 mmol) was added to a 200 mL tall beaker with 50 mL of ethanol (190 proof). (S)-Nicotine (0.32 mL, 2.0 mmol) was added. The resulting solution was vigorously stirred for 2 min. The solution was then stored in the dark, uncapped, to allow for crystal formation while the solvent slowly evaporated. Once the solvent was about 90% evaporated, the crystalline product was collected via vacuum filtration and washed with n-heptane (3 × 5 mL) (827.2 mg, 89.45%). The yield was computed based on the formula weight of the D-tartrate salt (F.W. 462.4 g mol⁻¹).
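As a quick arithmetic check of the two percent yields quoted above, the short sketch below (our illustration, not code from the study) recomputes them from the stated masses, formula weights, and the 2.0 mmol nicotine scale; since g mol⁻¹ × mmol = mg, no unit conversion is needed.

```python
# Recomputing the reported percent yields from the stated quantities.
def percent_yield(mass_mg: float, fw_g_per_mol: float, mmol: float) -> float:
    theoretical_mg = fw_g_per_mol * mmol  # g/mol * mmol = mg
    return 100.0 * mass_mg / theoretical_mg

print(f"{percent_yield(694.7, 498.4, 2.0):.2f}%")  # L-tartrate salt: 69.69%
print(f"{percent_yield(827.2, 462.4, 2.0):.2f}%")  # D-tartrate salt: 89.45%
```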
Single-Crystal X-ray Diffraction (SC-XRD). A Bruker SMART APEX-II CCD diffractometer installed at a rotating anode source (Mo Kα radiation, λ = 0.71073 Å) and equipped with an Oxford Cryosystems (Cryostream 700) nitrogen gas-flow apparatus was used to collect single-crystal X-ray diffraction data. Five sets of data (360 frames each) were collected by the rotation method with 0.5° frame-width (ω scans), with 2.0 s exposure times for the single-crystalline sample of (S)-nicotinium bis-L-(+)-tartrate dihydrate and 10.0 s exposure times for the single-crystalline sample of the (S)-nicotinium bis-D-(−)-tartrate salt. Diffraction patterns obtained at low temperatures were of poor quality, potentially due to a low-temperature phase transition; thus, the data collections were performed at room temperature. Using Olex2, the structures were solved with intrinsic phasing via the ShelXT structure solution program and refined with the ShelXL software suite using least-squares minimization. 12−14 Images of the structures were created using Olex2 and the Mercury 4.0 (2021.2.0) crystal structure visualization and analysis software suite. 15 In the L-tartrate salt, atomic coordinates of H atoms attached to heteroatoms were freely refined, with the exception of H11, H18, and H2A, which were placed geometrically (O−H = 0.82 Å; N−H = 0.98 Å). Difference contour mapping revealed density between O12 and O9 corresponding to a proton (H12A). This proton is likely shared between the two heteroatoms and was modeled as such, resulting in the long-bond level B alert in the checkCIF/PLATON report. Thermal parameters for H atoms attached to heteroatoms were constrained to be Uiso(H) = 1.5Ueq(N) or 1.5Ueq(O). H atoms connected to carbon atoms were placed geometrically (C−H = 0.95 Å) and refined with thermal parameters constrained to be Uiso(H) = 1.2Ueq(C). DFIX and DANG restraints were utilized in the L-tartrate salt to ensure chemically reasonable hydrogen-bonding geometries. Absolute configuration was assigned based upon the stereochemistry of the (S)-nicotine API. The Flack parameter was omitted in accordance with IUCr standards for reporting a structure when the absolute configuration is assigned based upon a known reference molecule.
In the D-tartrate salt, atomic coordinates of H atoms attached to heteroatoms were freely refined. Two q-peaks (0.37 and 0.26 e⁻ Å⁻³) were located between O12 and O8, respectively. The peak located 0.896 Å from O12 was assigned as a proton and subsequently split, with the second peak residing 0.933 Å from O8. This resulted in partial protonation of O12 [62(15)%] and O8 [38(15)%]. Thermal parameters for H atoms attached to heteroatoms were constrained to be Uiso(H) = 1.5Ueq(N) or 1.5Ueq(O). H atoms connected to carbon atoms were placed geometrically (C−H = 0.95 Å) and refined with thermal parameters constrained to be Uiso(H) = 1.2Ueq(C). Absolute configuration was assigned based on the stereochemistry of the API (S)-nicotine. The Flack parameter was removed in accordance with IUCr standards for reporting a structure in which the absolute configuration is assigned based on a known reference molecule, which in this structure is (S)-nicotine.
Crystal Melting Points. A Stuart SMP10 melting point apparatus was utilized to measure the melting point of the synthesized compounds. Four replicates were run for each salt.
Differential Scanning Calorimetry (DSC). A differential scanning calorimeter, model DSC Q200 (TA Instruments), was used to measure the thermal transitions of the samples. Approximately 10 mg of the synthesized compound was placed into an aluminum pan and sealed. The salt was scanned from 0 °C to above the melting point observed on the Stuart SMP10 at 20 °C min⁻¹ under argon and nitrogen flow (10 mL min⁻¹ each) for two full cycles. The enthalpy of fusion was computed from the integrated area under the curve, and the entropy of fusion was computed from the Gibbs free energy relation, using the fact that the solid and liquid states are in equilibrium (ΔG = 0) at the fusion point.
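The relation used for the entropy of fusion follows directly from this equilibrium condition; a one-line derivation:

```latex
% At the melting point T_m, solid and liquid are in equilibrium, so
% \Delta G_{\mathrm{fus}} = 0:
\Delta G_{\mathrm{fus}} = \Delta H_{\mathrm{fus}} - T_m \,\Delta S_{\mathrm{fus}} = 0
\quad \Longrightarrow \quad
\Delta S_{\mathrm{fus}} = \frac{\Delta H_{\mathrm{fus}}}{T_m}.
```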
UV Photodegradation. NMR analysis was performed on a representative sample of the salt formers L-(+)-tartaric acid and D-(−)-tartaric acid, of (S)-nicotine, and of the synthesized (S)-nicotinium tartrate salts. Each sample was then irradiated with ultraviolet (UV) light in a home-built vented box with air flow for 24 h using four Southern New England Ultraviolet Company RPR-3000A UV bulbs (λ = 300 nm). NMR analysis was then carried out on each sample to screen for any UV photodegradation products.
Nuclear Magnetic Resonance (NMR). NMR analysis was carried out using a Bruker NEO 400 MHz NMR spectrometer equipped with an iProbe and AutoTune assembly with variable temperature control, as well as a SampleCase autosampling unit. 80 transients were run for each sample. The appropriate deuterated solvents are labeled in each spectrum. Spectra were normalized to an intensity of 100.
Scheme 1. Chemical Structures of the API Nicotine (Left) and the Salt Formers L-Tartaric Acid (Middle) and D-Tartaric Acid (Right).
Hirshfeld Surface Analysis. The Hirshfeld surface of each nicotinium tartrate salt was generated using CrystalExplorer 17.5. 16 For all salts, the d_norm surface was mapped using the color scale with the range −0.050 au (red) to 0.600 au (blue). In addition, two-dimensional (2D) fingerprint plots were generated as the distance to the nearest nucleus outside the surface (d_e) versus inside the surface (d_i), using an expanded interaction distance ranging from 0.6 to 2.8 Å.
Infrared (IR) Spectroscopy. Infrared spectral analysis was carried out on a PerkinElmer Spectrum Two FTIR spectrometer equipped with an attenuated total reflectance (ATR) module. Eight scans were averaged for each spectrum.
Nicotinium Tartrate Salts Structural Characterization.
Two nicotinium tartrate salts were synthesized: one utilizing D-(−)-tartaric acid and one utilizing L-(+)-tartaric acid. Single crystals of (S)-nicotinium bis-L-(+)-tartrate dihydrate (henceforth referred to as the L-tartrate salt) suitable for X-ray diffraction were obtained from slow evaporation of a water solution containing nicotine and L-tartaric acid in a 1:2 ratio, respectively. Similarly, single crystals of (S)-nicotinium bis-D-(−)-tartrate (henceforth referred to as the D-tartrate salt) were synthesized by slow evaporation from an ethanol solution containing a 1:2 ratio of nicotine to the salt former. The asymmetric unit of the L-tartrate salt contains two nicotinium molecules, four tartrates, and four waters (Figure 1). Two doubly protonated nicotinium dications were charge-balanced with four L-tartrate monoanions. Protonated pyridinium groups interacted with an adjacent tartrate, while the major disordered species of the protonated methyl-pyrrolidinium group exhibited hydrogen-bonding interactions with neighboring water molecules. The minor component of the disordered methyl-pyrrolidinium moiety exhibits hydrogen bonding to a neighboring tartrate. Numerous discrete or D-type hydrogen-bonding interactions were observed throughout the asymmetric unit. Notable motifs were also observed, including multiple S(5) intramolecular interactions within tartrate molecules and R₁²(5) interaction motifs between the disordered nicotinium and the nearest adjacent tartrate. It is of note that this ring-type motif is only observable when the disordered methyl-pyrrolidinium resides in the more abundant conformation.
The L-tartrate salt bulk structure packs with asymmetric units stacking along [010] to form columns. These columns then pack along c*, with each adjacent column being rotated 180° about [010] (Figure 2). At the interface of the columns, an array of discrete hydrogen-bonding interactions intercalates to bind the columns together.
The nicotinium moieties of the D-tartrate salt were also observed to be doubly protonated. The D-tartrate salt has two discrete or D-type hydrogen-bonding interactions in the asymmetric unit. These interactions are between each of the nitrogens on the nicotinium and the tartrates. It was noted that the D-tartrate molecules each formed infinite C₁¹(7) chains with adjacent tartrate molecules. One series of chains was observed running parallel to [100], while the other series of chains ran parallel to [010] (Figure 2). The nicotinium molecules act as bridges between the chains along [100] and the chains along [010].
Hirshfeld Interaction Analysis and Comparison. Hirshfeld surface analysis provides a method for quantifying a variety of intermolecular interaction properties, such as the electrostatic potential or d_norm, of a crystalline system. 17−19 Insight into these interactions and their relevance to the physical properties of the crystals provides a basis for the design of future systems.
The interaction environment of the API (S)-nicotinium was analyzed for the L-tartrate salt and the D-tartrate salt utilizing d_norm surface mapping. The fingerprint plots are provided in the Supporting Information. The interactions were then compared to the (S)-nicotinium interaction environments of the previously reported malate and orotate salts. For the L-tartrate salt, a surface was generated for the non-disordered nicotinium, as well as one for each of the possible occupancies of the disordered nicotinium molecule present in the asymmetric unit (three in total), with the resulting interaction environments summarized in Table 1. The nicotinium interactions found in the tartrate salts were then compared to the API interactions previously reported for the nicotinium malate salts, the nicotinium gentisate salt, and the nicotinium orotate salt (Figure 3). Interestingly, the tartrate salts described herein had significantly more H−O/O−H interactions than any of the malate salts or the gentisate and orotate salts, likely due in part to the additional hydroxyl groups present in the tartaric acid salt former in contrast to the orotic acid and malic acid salt formers used in the orotate and malate salts, respectively. The tartrates also had nearly half the percentage of H−C/C−H interactions in comparison to the previously reported salts, and about one-tenth of the H−N/N−H interactions in comparison to the malate salts. Like the gentisate and malate salts, the tartrate salts exhibited no N−N-type interactions, leaving the orotate salt as the only one possessing this unique interaction type. The percentage of O−C/C−O interactions present in the L-tartrate surfaces was about 3 times higher than in the malate, gentisate, and orotate salts, with the exception of the D-malate salt, which possessed no interactions of this type. This demonstrates that, while the O−C/C−O interactions are a relatively small percentage across all systems, they contribute slightly more in each tartrate salt relative to the other solids compared herein. Relatively small percentages of C−C interactions were present across all computed tartrate surfaces, similar to the gentisate and orotate salts, which possessed C−C interactions at a slightly higher abundance.
Thermal Characterization. Understanding the thermal behavior of crystalline solids can prove useful in the design and engineering of future materials. 20−22 Salt melting points, initially determined utilizing a Stuart SMP10 digital melting point apparatus, were 90−93 °C for the L-tartrate salt and 139−144 °C for the D-tartrate salt. Differential scanning calorimetry (DSC) informed by these melting points was performed on a sample of each tartrate salt. Each sample was cycled twice, with the heating cycles of the first scan of each tartrate salt shown in Figure 4 and the associated thermal properties listed in Table 2. No evidence of a recrystallization event was observed for either salt during the second scan, indicating that the salts melted into an amorphous material, as observed for other nicotine salts. 8−10 The full two-cycle DSC scans for each tartrate salt, with exothermic transitions having a positive heat flow, are presented in Figures S5 and S6.
The L-tartrate salt exhibited a sharp endothermic peak associated with melting at 93.7 °C. This thermal event was accompanied by an enthalpy of fusion (ΔH°fus) of 50.89 kJ mol⁻¹ and an entropy of fusion (ΔS°fus) of 13.899 × 10⁻² kJ mol⁻¹ K⁻¹. Likewise, the D-tartrate salt exhibited a sharp endothermic peak associated with melting at 143.1 °C. This melting event was accompanied by a ΔH°fus value of 38.55 kJ mol⁻¹ and a ΔS°fus value of 9.261 × 10⁻² kJ mol⁻¹ K⁻¹.
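Applying ΔS°fus = ΔH°fus/Tm (with Tm in kelvin) to the values above reproduces the reported entropies to within rounding; a minimal check (our illustration, not the authors' code):

```python
# Verifying the reported entropies of fusion via dS = dH / T_m.
def entropy_of_fusion(dH_kJ_per_mol: float, Tm_celsius: float) -> float:
    return dH_kJ_per_mol / (Tm_celsius + 273.15)  # kJ mol^-1 K^-1

print(entropy_of_fusion(50.89, 93.7))   # ~0.1387, i.e. 13.9e-2 (L-tartrate)
print(entropy_of_fusion(38.55, 143.1))  # ~0.0926, i.e.  9.26e-2 (D-tartrate)
```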
Compared to the other nicotine salts that have been reported, the L-tartrate salt represents the second-lowest-melting-point nicotine salt, with the nicotinium D-malate salt possessing a 0.3 °C lower melting point as indicated via DSC. However, the L-tartrate salt exhibits the largest ΔH°fus and ΔS°fus values of any reported nicotine-containing solid. The D-tartrate salt represents the second-highest-melting nicotine-containing solid that has been reported. The previously reported nicotinium orotate hemihydrate possesses a melting point of 135.5 °C, while the D-tartrate salt melts 7.6 °C higher at 143.1 °C, thus making it the highest-melting nicotine salt reported in the literature. Both of the nicotinium tartrate salts offer enhanced thermal stability and higher melting points over any of the reported nicotine co-crystals, much like all other nicotine salts that have been reported.
The salt melting points were compared to the tartaric acid melting point (Tm), the difference being ΔTm. This measure of performance was reported previously for the nicotinium malate and orotate salts and for the halogenated nicotine co-crystals by Capucci et al., the latter reaching a ΔTm value of 215.4 °C. Thus, the D-tartrate salt ΔTm value resides within the range of the malate salts, while the L-tartrate salt ΔTm value resides within the ΔTm range of the halogenated nicotine co-crystal values.
UV Photodegradation Testing. Pure nicotine is known to exhibit a range of sensitivities, including to air, moisture, and light, which contribute to the degradation of the API. 5,6 Although nicotine degradation pathways are not well studied, it is known that degradation products such as oxidized nicotine, methylamine, and nicotinic acid may form as a result of exposure to ultraviolet (UV) irradiation. 24 Crystalline forms of nicotine may offer the ability to curb degradation by restricting the quantity of molecular oxygen that can diffuse through the material. Thus, crystal engineering provides a means of stabilizing nicotine and effectively limiting UV-induced degradation of the API. By improving the stability of nicotine under ambient conditions, the shelf life and storage conditions of this API may be improved.
The photostability of each of the synthesized nicotinium tartrate salts was analyzed by irradiating a sample of each in a UV photoreactor for 24 h. After the irradiation period, a representative portion of the sample (10 mg) was analyzed via ¹H NMR spectroscopy. The extent of UV-irradiation-induced damage was assessed by comparing the spectrum of each sample prior to irradiation to the spectrum acquired after UV irradiation. In addition to the salts, the pure API as well as each salt former were analyzed before and after 24 h of irradiation as controls for this series of experiments.
Each of the salt formers, L-tartaric acid and D-tartaric acid, showed no detectable degradation in the post-UV-irradiation spectra compared to the spectra acquired prior to irradiation. In contrast, the pure API (S)-nicotine exhibited numerous new peaks in the spectrum after irradiation, thereby confirming that the neat API does undergo UV-light-induced degradation. Like the salt formers, both the nicotinium L-tartrate salt (Figure 5) and the nicotinium D-tartrate salt (Figure S15) exhibited no degradation in the post-irradiation spectra. This is much the same as the previously reported nicotinium orotate and nicotinium malate salts, further substantiating that crystalline forms of nicotine offer an approach to isolating photostable nicotine materials. 8,10
■ CONCLUSIONS
Enantiomerically pure tartaric acid has been demonstrated to be a highly effective salt former when combined with nicotine. Unsurprisingly, both salts exhibit extensive hydrogen bonding within their respective lattices, undoubtedly due to the large number of donors and acceptors found in both components.
Despite the presence of similar interaction environments determined through analysis of the Hirshfeld surfaces, the salts possessed remarkably different thermal properties. While the L-tartrate salt melts at a modest 93.7 °C, the D-tartrate salt melts at 143.1 °C, one of the highest melting points for a crystalline nicotine solid reported to date.
Similar to other reported nicotine salts, neither nicotinium tartrate salt reported herein exhibited any detectable photodegradation upon prolonged exposure to UV irradiation, which further supports the claim that the crystallization of nicotine is an effective means of creating nicotine-based materials with improved resistance to photo-induced degradation. Further, the ability to synthesize these materials utilizing GRAS compounds demonstrates the feasibility of this approach to the design of safer nicotine-based materials through crystal engineering.
"year": 2023,
"sha1": "efce21609ee91d6d981b6f78de3e275d725f96dc",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1021/acsomega.3c00849",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "604bdc761768d0bb4963547572b3d1ef35f03a7c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Locally Differentially Private Bayesian Inference
In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy. LDP provides client-side privacy by adding noise at the user's end. Thus, clients need not rely on the trustworthiness of the aggregator. In this work, we provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP, conditioned on locally perturbed observations. Stronger privacy protection (compared to the central model) provided by LDP protocols comes at a much harsher privacy-utility trade-off. Our framework tackles several computational and statistical challenges posed by LDP for accurate uncertainty quantification under Bayesian settings. We demonstrate the efficacy of our framework in parameter estimation for univariate and multi-variate distributions as well as logistic and linear regression.
Introduction
In many practical settings, researchers only have access to small and sensitive data sets. In order to guard against overconfident, incorrect findings in these settings, properly assessing the uncertainty due to the finiteness of the sample becomes a crucial component of the inference. Besides uncertainty estimation, for which there is a wide range of tools in the Bayesian literature, the privacy of the data subjects should also be preserved.
In recent years, local differential privacy (LDP) (Evfimievski et al., 2003; Kasiviswanathan et al., 2008) has become the gold standard for inference under untrusted aggregation scenarios, enabling strong client-side privacy protection. Despite a rich history of investigation and large-scale deployments, LDP has rarely been paired with Bayesian inference.
In this work, we initiate the study of developing methods for Bayesian inference under LDP constraints, and quantify the uncertainty as a function of the scale of the noise injected. We focus on the non-interactive setting, where each client participates only in a single round of data collection.
A generic solution for performing Bayesian inference under LDP would be to try enforcing LDP in a DP variant of a general-purpose Markov chain Monte Carlo (MCMC) algorithm, e.g., DP-SGLD (Wang et al., 2015; Li et al., 2019). This is not an attractive solution for the following reasons: the iterative nature of these sampling algorithms demands multiple rounds of LDP-compliant data collection. Next, the cost of a client-side implementation of a new computationally demanding sampling algorithm could be hard to justify when it is possible to collect sufficiently accurate information for other methods with a lightweight implementation. Additionally, the privacy cost scales proportionally to the number of iterations, which in the case of SGLD is equivalent to the number of posterior samples.
In contrast to the centralized DP setting, under LDP each input or sufficient statistic is perturbed at client side, and the aggregator is forced to infer the parameters only with access to the privatized inputs. These constraints suggest that for a non-interactive setting, similarly to the frequentist counterparts, Bayesian inference should be decoupled from data collection and conditioned directly on the privatized inputs.
Bernstein and Sheldon (2018) proposed a sufficient statistics based approach to correctly quantify the posterior uncertainty by making the model noise-aware, i.e. by including the (centralized) DP noise mechanism into the probabilistic model.
Our solution extends the applicability of noise-aware models to LDP. The sheer magnitude of noise due to LDP (Ω(√N) for a sample size of N, compared to O(1) in central DP) makes our LDP inference problem much more challenging than in Bernstein and Sheldon (2018)'s centralized solution. Under central DP, the earlier works are based on a perturbed sum of sufficient statistics, and therefore the latent data appear in the model as an aggregate over the individuals. In contrast, under LDP the number of latent variables (the true unperturbed data) grows linearly with the number of samples, since the aggregator can only observe perturbed data.
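A toy simulation (our addition, not from the paper, with arbitrary ε and N) illustrates the gap: when estimating a sum of N bounded values, the central model adds a single noise draw, whereas the local model accumulates N of them.

```python
# Toy comparison of central vs. local DP noise in a sum of N values in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
eps, N = 1.0, 10_000              # illustrative privacy budget and sample size
x = rng.random(N)                 # clients' private values

# Central model: a trusted curator adds one Laplace draw (sensitivity 1).
central = x.sum() + rng.laplace(0.0, 1.0 / eps)

# Local model: every client perturbs before sending; noise accumulates.
local = (x + rng.laplace(0.0, 1.0 / eps, size=N)).sum()

print(abs(central - x.sum()))     # O(1/eps) error
print(abs(local - x.sum()))       # Theta(sqrt(N)/eps) error, ~100x larger here
```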
Related work
Following Williams and McSherry (2010)'s pioneering work, several approaches that combine differential privacy (DP) and Bayesian inference (Wang et al., 2015;Foulds et al., 2016;Jälkö et al., 2017;Honkela et al., 2018;Heikkilä et al., 2019;Park et al., 2020) have been proposed. These works demonstrate that it is possible to adapt Bayesian learning techniques to obey the privacy constraints of DP, but they do not attempt to address the additional uncertainty the DP mechanism induces to the inference. Bernstein and Sheldon (2018) showed how to make a model noise-aware by including the DP noise mechanism as a part of the probabilistic model describing the data generating process in a statistical inference task. Later, these techniques were applied for linear regression models by Bernstein and Sheldon (2019) and for generalized linear models by Kulkarni et al. (2021). These works are based on centralized DP which assumes the presence of a trusted curator to perturb and release the sensitive computation.
The work closest to ours is from Schein et al. (2019). They perform Bayesian inference for Poisson factorization models under a weaker variant of LDP. While there is some overlap in the way we handle latent variables, we consider a completely different set of problems, i.e. parameter estimation of univariate and multivariate distributions and linear and logistic regression, which have not been studied beyond point estimators, despite the great progress in LDP.
Contributions
Our work makes the following contributions.
We propose a probabilistic framework for Bayesian inference under LDP, which captures the entire data generating process, including the noise addition for LDP. Use of our methods does not require any changes to existing LDP data collection protocols, since these run as a downstream task at the aggregator's end.

The classic centralized model of differential privacy presupposes the presence of a trusted aggregator that processes the private information of individuals and releases a noisy version of the computation. The local model instead safeguards users' inputs by considering the setting where the aggregator may be untrustworthy. Then, each user needs to add noise locally before sharing with the aggregator. Consider two users, each holding an abstract data point, x and x′, from a domain D.
Definition 2.1. For ε ≥ 0, δ ≥ 0, a randomized mechanism M satisfies (ε, δ)-local differential privacy (Kasiviswanathan et al., 2008) if for any two points x, x′ ∈ D, and for all outputs Z ⊆ Range(M), the following constraint holds: Pr[M(x) ∈ Z] ≤ e^ε Pr[M(x′) ∈ Z] + δ. Lower values of ε and δ provide a stronger protection of privacy. When δ = 0, M is said to satisfy pure LDP or ε-LDP.
Among many desirable properties of a privacy definition, (L)DP degrades gracefully under repeated use and is immune to post-processing. The latter means that the privacy loss of M cannot be increased by applying any randomized function independent of the data to M's output.
For practical implementations of differential privacy, we need to quantify the worst-case impact of an individual's record on the output of a function. This quantity is referred to as sensitivity; the L2 sensitivity of a function f is defined as ∆2(f) = max_{x, x′ ∈ D} ||f(x) − f(x′)||2. We now review some basic LDP perturbation primitives that will be referred to in the paper.
Analytic Gaussian Mechanism (Balle and Wang, 2018). The classical Gaussian mechanism releases f(x) + N(0, σ²I) with σ = ∆2(f)√(2 log(1.25/δ))/ε and satisfies (ε, δ)-DP when ε ∈ (0, 1). Much higher ε values are commonly used in practice in LDP scenarios. Balle and Wang (2018) proposed an algorithmic noise calibration strategy based on the Gaussian cumulative density function (CDF) to obtain a mechanism that adds the least amount of Gaussian noise needed for (ε, δ)-DP; a sketch of this calibration is given below.
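The following is a minimal sketch of this calibration idea, assuming the tightness condition δ(σ) = Φ(∆/(2σ) − εσ/∆) − e^ε Φ(−∆/(2σ) − εσ/∆) of Balle and Wang (2018); the function names and the simple bisection routine are our own illustration, not code from any released implementation.

```python
# Sketch of analytic Gaussian noise calibration; delta_for_sigma follows the
# Balle & Wang (2018) tightness condition, names and routine are ours.
import math
from scipy.stats import norm

def delta_for_sigma(sigma, eps, sens):
    # delta achieved by N(0, sigma^2) noise on a query with L2 sensitivity
    # `sens`; this quantity is decreasing in sigma
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return norm.cdf(a - b) - math.exp(eps) * norm.cdf(-a - b)

def calibrate_sigma(eps, delta, sens, lo=1e-6, hi=1e6, iters=100):
    # bisection for (approximately) the smallest sigma meeting the target delta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if delta_for_sigma(mid, eps, sens) > delta:
            lo = mid   # too little noise
        else:
            hi = mid   # feasible; try less noise
    return hi

print(calibrate_sigma(eps=4.0, delta=1e-5, sens=1.0))
```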
We now recall two ε-DP primitives for releasing univariate inputs. Randomized Response (RR) (Warner, 1965). The 1-bit randomized response is a utility-wise optimal (Chan et al., 2012; Kairouz et al., 2016) way of releasing the sum of bits under LDP. When x ∈ {0, 1}, a user satisfies ε-DP by reporting the perturbed input z = x with probability e^ε/(1 + e^ε) and z = 1 − x otherwise. The unbiased point estimate for the mean of a sample of size N can be recovered by μ̂ = ((1/N) Σᵢ zᵢ · (e^ε + 1) − 1)/(e^ε − 1).
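A small numpy sketch of this primitive and its de-biasing step (our own illustration):

```python
import numpy as np

def rr_perturb(x, eps, rng):
    # keep the true bit with probability e^eps / (1 + e^eps), flip otherwise
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(x.shape) >= p_keep
    return np.where(flip, 1 - x, x)

def rr_mean(z, eps):
    # unbiased estimate of mean(x) from the perturbed bits z
    return (z.mean() * (np.exp(eps) + 1.0) - 1.0) / (np.exp(eps) - 1.0)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10_000)
z = rr_perturb(x, eps=1.0, rng=rng)
print(x.mean(), rr_mean(z, eps=1.0))   # the two values should be close
```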
Our setting and goal
In this work, we assume the non-interactive untrusted aggregation setting, commonly considered in LDP works. We have N non-colluding users, each user i ∈ [N] holding a private data point xᵢ. We model the data x₁, …, x_N as an independent and identically distributed (iid) sample from a probability distribution with density function f parameterized by θ.
Each user i ∈ [N] locally perturbs xᵢ to obtain its privatized version zᵢ using an LDP-compliant mechanism M and shares it with the aggregator. Using Z = {z₁, …, z_N} and the knowledge of M, the aggregator intends to recover the posterior distribution Pr[θ | Z].
We emphasize that from the DP perspective the posterior inference is a downstream task and can be performed with any off-the-shelf sampling algorithm. We pay a privacy cost only once, for perturbing the inputs, and obtain the posterior distribution without further privacy cost as a consequence of the post-processing property of (L)DP.
In the next two sections, we propose two simple ways of modeling single-round LDP-compliant data collection protocols. We first consider models built on perturbed sufficient statistics s(x), where θ are the model parameters.
Linear regression
For linear regression, the sufficient statistics s ∈ R^C, with C = (d+2 choose d), for each input {x, y} are the second-order terms s = [t₂(x), y·t₁(x), y²] (the functions t₁ and t₂ are defined below). We assume that each individual perturbs these sufficient statistics locally using the Gaussian mechanism. Next, we split the sum of perturbed sufficient statistics Z as the sum of the true sufficient statistics S and the sum of the local perturbations (Σᵢ ζᵢ). Conveniently for Gaussian noise, the sum of the perturbations is also Gaussian, with the variance scaled up by N. Note that for other noise mechanisms that are not closed under summation (e.g. the Laplace mechanism), we could still approximate the sum of noise as a Gaussian using the central limit theorem.
Using a normal approximation to model the latent sum of sufficient statistics S and the Gaussian perturbation as the noise mechanism, we can marginalize out the latent S and obtain the following posterior for the model parameters (full derivation in Section 13.1 in the supplementary article): Pr[θ | Z] ∝ Pr[θ] N(Z; μ_s, Σ_s + N Σ*), where μ_s and Σ_s are the mean and covariance of S and Σ* is the covariance of the per-user Gaussian noise. This model is similar to Bernstein and Sheldon (2019)'s model that considered this problem in the centralized setting, and we use the closed-form expressions for μ_s and Σ_s from their work.
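The client-side computation can be sketched as follows; this is a toy illustration with isotropic Gaussian noise rather than the sensitivity-calibrated covariance discussed below:

```python
import numpy as np

def suff_stats(x, y):
    # s = [t2(x), y * t1(x), y^2]: upper-triangular second-order monomials
    # of x, the first-order terms scaled by y, and y^2
    iu = np.triu_indices(len(x))
    return np.concatenate([np.outer(x, x)[iu], y * x, [y * y]])

def aggregate_perturbed(X, y, sigma, rng):
    # each user releases s_i + N(0, sigma^2 I); the aggregator sees only the
    # sum, so Z | theta is approx. N(mu_s, Sigma_s + N * sigma^2 I)
    Z = np.zeros_like(suff_stats(X[0], y[0]))
    for xi, yi in zip(X, y):
        Z += suff_stats(xi, yi) + rng.normal(0.0, sigma, Z.shape)
    return Z
```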
Our contribution. Bernstein and Sheldon (2019)'s solution for linear regression satisfies ε-DP, but with a sensitivity expression depending quadratically on d, because they assume bounds on the individual components of x. Moreover, they analyze the sensitivities of the three components of s separately. Adding noise with a scale proportional to d is likely to result in so high a level of noise under LDP that any signal in the data becomes indistinguishable. Our main contribution for this model is in the form of a new bound for ∆2(t) for linear regression with no dependence on d. We show in Lemma 3.1 that analyzing all three components of s together, instead of treating them separately as in earlier works, leads to a tighter bound on ∆2(t). Towards this goal, we define the following convenience functions: t₁(x) = x, the vector of first-order terms, and t₂(x), the vector of second-order monomials xⱼxₖ, j ≤ k. Using the above functions, t(x, y) = [t₂(x), y·t₁(x), y²].
Lemma 3.1. Consider the Gaussian mechanism M(x, y) = t(x, y) + ζ, ζ ~ N(0, σ₁²I). The tight (ε, δ)-DP for M is obtained by considering a Gaussian mechanism with noise variance σ₁² and a sensitivity bound ∆2(t) that depends only on the norm bounds of x and y, not on d.

Proof. The proof can be found in Section 9.1 in the Supplement.
Logistic regression
Analogously, we can derive an approximate sufficient statistics based logistic regression model. We use the privacy results and the calculations for µ s , Σ s from Kulkarni et al. (2021). The privacy results are summarized in Section 13.2 in the Supplement.
Modeling with perturbed inputs
To generalize our framework beyond models with sufficient statistics, we consider the probabilistic model Pr[θ, x, z] = Pr[θ] Pr[x | θ] Pr[z | x], where z denotes the perturbed observation of the latent input x. This formulation allows us to work with arbitrary input distributions Pr[x | θ] and noise mechanisms Pr[z | x]. However, introducing N latent variables xᵢ, one for each input, quickly makes the inference computationally infeasible as N increases. To overcome this, we marginalize out the latent x: Pr[z | θ] = ∫ Pr[z | x] Pr[x | θ] dx. As another benefit of working with the perturbed inputs, we do not need to presuppose any downstream usage of the data, contrary to the sufficient statistics based models. This means that the privacy cost is paid once in the data collection, and the aggregator can then use the data for arbitrary inference tasks.
In the remainder of this Section, we exemplify how to marginalize the likelihood for different inference problems. We will highlight different types of subproblems within each example (e.g. modeling clipping and enforcing the parameter constraints in the model), and demonstrate how to solve them. Note that these solutions extend to a broad class of inference tasks, much beyond the scope of these examples. All derivations are in the Supplement.
Unidimensional parameter estimations
We first demonstrate the marginalization in a typical statistical inference task, where we seek to find the parameters of a 1-dimensional generative process (e.g. the parameters of the Gaussian, Poisson, geometric, and exponential distributions).
Many such distributions have unbounded support. The work of Bernstein and Sheldon (2018) suggests specifying a bound [a, b] on the data domain and discarding all points outside it. This not only hurts the accuracy but also burdens the LDP protocol to spend budget to mask non-participation. Instead, we have each user map their input falling outside this bound to the bound before privatizing. For example, for the Gaussian distribution, if x ∉ [a, b], the user clips their input to a if x ≤ a or to b if x ≥ b. We adhere to our probabilistic treatment of data, and model the clipped perturbed observations using a rectified probability density function Pr[x̃ | θ] = Φ_θ(a) δ(x̃ − a) + Pr[x̃ | θ] 1{a < x̃ < b} + (1 − Φ_θ(b)) δ(x̃ − b), where Pr[x | θ] denotes the pdf of x prior to clipping, Φ_θ its cumulative density function, and δ the Dirac delta function. As a consequence of clipping, we observe peaks at a and b in the rectified density function.
In the Supplementary material (see Sections 10.1 and 10.2), we show the marginalization for the parameter estimation task of Gaussian and exponential observation models. In both cases we have used Laplace noise to satisfy pure ε-DP.
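As a concrete illustration of the rectified model, the sketch below evaluates the marginal likelihood of one perturbed observation by numerical quadrature, assuming a Gaussian data model clipped to [a, b] and Laplace perturbation; function and variable names are our own.

```python
import numpy as np
from scipy.stats import norm, laplace
from scipy.integrate import quad

def marginal_likelihood(z, mu, sigma, scale, a, b):
    # Pr[z | mu, sigma] when x ~ N(mu, sigma^2) is clipped to [a, b]
    # and then z = x + Laplace(scale) noise is released
    noise = lambda x: laplace.pdf(z, loc=x, scale=scale)   # Pr[z | x]
    interior, _ = quad(lambda x: noise(x) * norm.pdf(x, mu, sigma), a, b)
    return (norm.cdf(a, mu, sigma) * noise(a)              # mass clipped to a
            + (1.0 - norm.cdf(b, mu, sigma)) * noise(b)    # mass clipped to b
            + interior)

print(marginal_likelihood(z=0.3, mu=0.0, sigma=1.0, scale=0.5, a=-2.0, b=2.0))
```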
Sufficient condition for marginalization. Consider that we observe z by perturbing x using Gaussian perturbation and that x is assumed to be bounded within the interval (a, b). In general, we would like to evaluate the integral ∫ Pr[z | x] Pr[x | θ] dx to marginalize out the latent x. However, this integral is often intractable. The next result presents a general class of models for x that allow the marginalization in tractable form.
Lemma 3.2. Assume the probabilistic model Pr[x | θ] = C g(x) exp(h(x)), z | x ~ N(x, σ²), where C is a normalization constant, g and h are polynomials, and h is at most a second-order polynomial. Then the integral marginalizing x out of Pr[z, x] becomes tractable.
Proof. Proof can be found in Section 9.2 in the Supplement.
Histogram aggregation
We now focus on the problem of histogram aggregation, which has enjoyed significant attention under LDP. We assume each user i holds an item xᵢ ∈ [d]. The aggregator's aim is to estimate the normalized frequency of each item k ∈ [d]. Among the several approaches proposed, we pick Wang et al. (2017)'s 1-bit randomized response method, Optimal Unary Encoding, to specify the likelihood. This method satisfies ε-DP while also achieving theoretically optimal reconstruction error.
Wang et al. (2017) represent each input x = k as a one-hot encoded vector e ∈ {0, 1}^d with e_k = 1 and e_l = 0, ∀l ≠ k. All users then apply 1-bit RR using probabilities p and q at each location j ∈ [d] of e and obtain z_j as follows: Pr[z_j = 1] = p = 1/2 if e_j = 1, and Pr[z_j = 1] = q = 1/(e^ε + 1) if e_j = 0. Upon collecting all perturbed bit vectors, the aggregator can form the de-biased frequency estimates f̂_k = ((Σᵢ z_{i,k})/N − q)/(p − q). In a strong privacy setting, or when the true counts are low, the f̂_k's can be negative or may not sum to 1. Our approach. We model the data generation process as a multinomial distribution parametrized by θ. Constraining θ's simplexity in a model that itself describes the data generation may improve accuracy, and eliminates the need for any further post-processing. The calculations for the likelihood Pr[z, x = k | θ] can be found in Section 11 in the Supplement.
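A minimal sketch of the encode/perturb/estimate pipeline, as our own illustration of Wang et al. (2017)'s method:

```python
import numpy as np

def oue_perturb(x, d, eps, rng):
    # one-hot encode x, then report bit j as 1 w.p. p = 1/2 if e_j = 1
    # and w.p. q = 1/(e^eps + 1) if e_j = 0
    p, q = 0.5, 1.0 / (np.exp(eps) + 1.0)
    e = np.zeros(d)
    e[x] = 1.0
    return (rng.random(d) < np.where(e == 1.0, p, q)).astype(int)

def oue_frequencies(Z, eps):
    # de-biased frequency estimates from stacked reports Z (shape N x d);
    # these may be negative or fail to sum to 1, as noted above
    p, q = 0.5, 1.0 / (np.exp(eps) + 1.0)
    return (Z.mean(axis=0) - q) / (p - q)
```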
Linear models
Next, consider a linear model where an outcome y depends on the variables x through g(xᵀθ). In Section 12.1 of the Supplement, we show how to marginalize the inputs (x, y) when g(x) = x (linear regression) and both X and y are perturbed using the Gaussian mechanism, and in Section 12.2 for g(x) = 1/(1 + exp(−x)) (logistic regression), where X is perturbed with the Gaussian mechanism and y using the 1-bit RR.
Public data sets
For histogram-related experiments on public data, we use the Kosarak (Benson et al., 2018) data set, which consists of 990K click streams over 41K different pages. Our data set consists of 11K records of 10 items randomly sampled from the top-10K most frequent items.
For logistic regression, we use the Adult (Blake and Merz, 1998) data set (with over 48K records) from the UCI repository to predict whether a person's income exceeds $50K.
Finally, we train a linear regression model using the Wine (Cortez et al., 2009) data set, which comprises roughly 6.5K samples with 12 features. We set R_y = 2. An 80/20% train/test split was used.
Private baseline: LDP-SGD
We compare our regression models with a local version of DP-SGD adapted from Wang et al. (2019b) to our case study. They make the privacy cost invariant to the number of training iterations by having each user participate only once in the training. They partition the users into groups of size G = Ω(d log(d)/ε²). However, unlike in standard DP-SGD, the aggregator makes model updates for group i only after receiving the noisy gradients from group i − 1. The gradients are perturbed using Gaussian noise (with ∆2 = 2) after clipping their L2 norm to 1. In total, 2N − G messages are exchanged in the protocol, which is roughly 2N for large enough ε's.
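A simplified simulation of this one-pass protocol for logistic loss is sketched below; this is our own illustration, and in the real protocol the current model is shipped to each group before it responds.

```python
import numpy as np

def ldp_sgd(X, y, G, sigma, lr, rng):
    # grouped single-participation LDP-SGD sketch (after Wang et al., 2019b):
    # one clipped, noised gradient per user, one model update per group
    theta = np.zeros(X.shape[1])
    for start in range(0, len(y), G):
        grads = []
        for i in range(start, min(start + G, len(y))):
            p = 1.0 / (1.0 + np.exp(-X[i] @ theta))
            g = (p - y[i]) * X[i]                  # logistic-loss gradient
            g /= max(1.0, np.linalg.norm(g))       # clip L2 norm to 1
            grads.append(g + rng.normal(0.0, sigma, g.shape))
        theta -= lr * np.mean(grads, axis=0)
    return theta
```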
Pre-processing for regression tasks. To make the training time more manageable, we use principal component analysis (PCA) to reduce the original dimensionality of our data sets. Additionally, we center our data sets to zero mean.
Default settings and implementation
Both the noise-aware and the non-private baseline models are implemented either in Stan (Carpenter et al., 2017) or NumPyro (Bingham et al., 2018; Phan et al., 2019). We use the No-U-Turn sampler (Hoffman and Gelman, 2014), a variant of Hamiltonian Monte Carlo. We run 4 Markov chains in parallel and discard the first 50% as warm-up samples. We ensure that the Gelman-Rubin convergence statistic (Brooks and Gelman, 1998) for all models remains consistently below 1.1 in all experiments. For the regression experiments, we fix R = 1 to use Corollaries 3.1 and 13.1.
Prior selection. The priors for model parameters are specified in Table 8, and in Section 8 in the Supplement. The arguments to these priors are the hyperparameters for the model.
Uncertainty calibration
For the deployment of the learned models, it is crucial that the uncertainty estimates of the models are calibrated. Overestimation of the variance would lead to uninformative posteriors, whereas underestimation can lead to overconfident estimates. To assess this, we use synthetic data sampled from the prior predictive distribution and learn the posterior distributions for the model parameters. We repeat this multiple times, and count the fraction of runs in which several central quantiles of the posterior samples included the data-generating parameters. Figure 1 shows the posterior calibration for the Gaussian parameter estimation model as a function of ε. We can see that our posteriors are well calibrated even for the strictest privacy constraints. The plots for the histogram and exponential distribution models can be found in Figures 10 and 11 in the supplement.
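The procedure can be sketched on a conjugate toy model where the exact posterior is available in closed form; this is our own illustration, in which clipping is ignored so that each perturbed observation is simply z_i ~ N(μ, σ² + s²).

```python
import numpy as np
from scipy.stats import norm

def coverage(level, runs=500, N=1000, tau=1.0, sig=1.0, s=2.0, seed=0):
    # fraction of runs whose central `level` posterior interval contains the
    # data-generating mean mu, drawn from its prior N(0, tau^2)
    rng = np.random.default_rng(seed)
    v = sig**2 + s**2                    # marginal variance of a perturbed z_i
    hits = 0
    for _ in range(runs):
        mu = rng.normal(0.0, tau)
        z = rng.normal(mu, np.sqrt(v), N)
        post_var = 1.0 / (1.0 / tau**2 + N / v)   # conjugate Gaussian posterior
        post_mean = post_var * z.sum() / v
        half = np.sqrt(post_var) * norm.ppf(0.5 + level / 2.0)
        hits += abs(mu - post_mean) <= half
    return hits / runs

print(coverage(0.9))   # should land near 0.9 if the posteriors are calibrated
```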
Accuracy of private posteriors
We can also test the accuracy of our model by directly comparing the private and non-private posteriors. Figure 2 compares the mean absolute deviation, over 50 repeats, of the empirical cumulative density function (ECDF) of the private posteriors of the multinomial probabilities from the non-private ones for the Kosarak data set. We verify that our private posteriors, despite adhering to LDP guarantees, are quite close to the non-private posteriors.
Additionally, to motivate the benefit of including the whole data generation process in a single model, we compare the posterior means of our histogram aggregation model to the private point estimates. As mentioned in Section 3.2.2, the denoised point estimates may not satisfy simplexity, especially for strong privacy parameters and/or low sample sizes. We can enforce simplexity in these estimates with additional post-processing, e.g. by computing their least squares approximations (Hay et al., 2010). While the least squares solution is well principled, it is disconnected from the main LDP protocol. By modeling θ as a multinomial distribution, we automatically satisfy θ's simplexity in the model itself. This leads to better utility in some cases. Figure 3 compares the mean squared error in reconstruction for a three-dimensional histogram with true weights [0.7, 0.2, 0.1] for both methods for ε = 0.5. We observe that the error in our model is lower by 10-30%, especially in low sample size regimes. The comparison for more ε values can be found in Figure 9 in the Supplement.
Accuracy of inference
Besides being able to correctly estimate the uncertainty of model parameters, we want our inference to also be accurate. Towards this goal, we compare the performance metrics of our regression models to LDP-SGD (Section 4.2).
Linear regression. We trained the linear regression model using the Wine (Cortez et al., 2009) data set. Figure 4 shows that the sufficient statistics based Bayesian model performs well under strict privacy guarantees. The comparison method (LDP-SGD) starts to perform better when we increase the privacy budget. As the sufficient statistic based model is trained on clipped inputs, it may underestimate the test targets which are unclipped, thus hurting the RMSE.
Logistic regression. We run the sufficient statistics based model for all 48K records of the Adult data set. However, for the input based model (Section 3.2.3), we use 10K randomly sampled records out of the 48K to reduce the training time. We choose c = 0.5 because this split yielded the best utility in our internal experiments (not included). For a fair comparison, we also run LDP-SGD with the same 10K records. Figure 5 compares the mean AUC of LDP-SGD and both models for various privacy parameters. We verify that for ε ≤ 0.8, the sufficient statistics based Bayesian model outperforms LDP-SGD. For large enough ε's, the sufficient statistics based Bayesian model achieves nearly the same level of performance as LDP-SGD without additional rounds of messages from the aggregator. The input based model cannot, however, perform at a similar level. This is possibly caused by the large amount of noise (due to the budget split), which suppresses the covariance information necessary for accurate inference. In contrast, the covariance structure is perturbed relatively cheaply in the sufficient statistics based solution due to tight privacy accounting. We exclude the input based solution from our next plot.
Discussion and concluding remarks
In this work we have initiated the study of designing noise-aware models for performing Bayesian inference under LDP. Our models are well calibrated and outperform the point estimates in small privacy budget/sample size regimes.
With hierarchical modeling, these approaches easily extend to other interesting scenarios for pooled analysis between different sub-populations, such as personalized privacy budgets. Feature sub-sampling is routinely used in LDP protocols to amplify privacy and reduce communication. The current framework does not efficiently model protocols involving sub-sampling, because the unsampled features in each input become latent variables of the model, thus exploding the model complexity. The question of modeling the uncertainty due to sub-sampling is left as a future exercise.
Acknowledgement
This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI) Grants 325572, 325573, the Strategic Research Council at the Academy of Finland (Grant 336032), as well as a UKRI Turing AI World-Leading Researcher Fellowship, EP/W002973/1. We are grateful to the Aalto Science-IT project for their computational resources.
Chebyshev approximation
The polynomial expansion can be used to approximate a non-linear function by expressing it as a sum of monomials. Among several choices, Chebyshev polynomials (Mason and Handscomb, 2002) are often chosen because the quality of approximation is uniform over any finite interval [−R, R], R > 0. The Chebyshev polynomial expansion of degree M for the sigmoid of xᵀθ is sigmoid(xᵀθ) ≈ Σ_{m=0}^{M} b_m (xᵀθ)^m = Σ_{m=0}^{M} b_m Σ_{|e|=m} m_e (x)^e (θ)^e.
In the above expression, (x)^e = Π_{j∈[d]} x_j^{e_j}, x ∈ R^d, e ∈ N^d, and m_e = m!/(Π_{j=1}^{d} e_j!) are the multinomial coefficients. The constants b_0, b_1, …, b_M are the Chebyshev coefficients computed over an arbitrary finite interval. In general, an M-th order approximation contains (d+M choose d) summands. Figure 7 shows a 2nd order expansion of the sigmoid function.
Quality of approximation. We would like to empirically understand the quality of the approximation of the expectation of sigmoid(xᵀθ), for a fixed θ ∈ R^d and Σ ∈ R^{d×d}.

Figure 6: We plot the mean absolute difference between the empirical expectation of sigmoid(xᵀθ) and its 2nd order Chebyshev approximation with b_0, b_1, b_2 ∈ [−3, 3]. The mean absolute deviation is computed over 50 independent runs. In each run, we sample a new θ from the standard normal distribution, and the empirical expectation is taken over 1000 samples drawn from a zero-centered multivariate Gaussian with a random full-rank covariance matrix. The covariance matrix remains fixed for a given dimension for all runs. We also restrict the L2 norm of these samples to 1. We can observe that the approximation is moderately accurate.
For the first-order terms, an analogous bound holds; similarly, (y² − y′²)² ≤ R_y⁴. Bounding the second-order terms in the same way and combining the three bounds yields the claimed sensitivity, where c₁ denotes the resulting constant.

9.2 Proof of Lemma 3.2

Proof. Let us write g(x) = Σ_{j=0}^{M} g_j x^j and h(x) = Σ_{i=0}^{2} h_i x^i. Now the integral in (7) becomes ∫ g(x) exp(h(x)) N(z; x, σ²) dx. We can clearly see that the exponential term inside the integral is yet another Gaussian kernel of x. Next we write the polynomial inside the exponential in a quadratic form. Denoting s² = σ²/(1 − h₂) and m = (2z + h₁)/(2 − 2h₂), we get g(x) N(x; m, s²) inside the integral up to constant factors, where N(·; m, s²) denotes the probability density function of a Gaussian with mean m and variance s². Now, after removing the constant factors out of the integral, we are left with an integral of a polynomial against a Gaussian truncated to (a, b), whose normalization is expressed through Φ, the cumulative density function of a standard normal distribution. In order to conclude the proof, it suffices to show that the truncated normal distribution has its non-central moments in a tractable form, which has been shown in the past, for example, by Flecher et al. (2010).
We denote by Φ_θ(a) = Pr[x ≤ a] the cumulative density function corresponding to the given density function.
Gaussian distribution
We assume the distribution is rectified from both sides at (a, b) and the data points are perturbed with ε-DP Laplace noise. We intend to learn both the mean μ and the variance σ². Let θ = [μ, σ].
Now we focus on the integral in step (26). Since x ∈ [a, b] and z ∈ R, the sign of the term z − x inside |z − x| depends on z. To simplify, we set the limit l = max(a, min(b, z)).
Putting this all together yields the marginalized likelihood Pr[z | θ].
Linear regression
Consider the following model, where we observe the data through Gaussian perturbation:

x ~ N(0, Σ), z_x | x ~ N(x, Σ*), y | x ~ N(θᵀx, σ²), z_y | y ~ N(y, σ²*). (34)

Next we show how to marginalize the latent inputs out of the model. We start by integrating the latent x and y out of the joint density. Now note that since y depends on x only through θᵀx, the innermost integral can be written in terms of θᵀx, which in turn follows a Gaussian distribution N(θᵀh_x, θᵀAθ), where h_x and A denote the conditional mean and covariance of x given z_x. Finally, we substitute the above into (37) and recover

Pr[z_x, z_y] = N(z_x; 0, Σ + Σ*) ∫ Pr[z_y | y] N(y; θᵀh_x, θᵀAθ + σ²) dy (40)
= N(z_x; 0, Σ + Σ*) N(z_y; θᵀh_x, θᵀAθ + σ² + σ²*).
Logistic regression
Consider the following model where we observe data through Gaussian and randomized response perturbation.
In the above expressions, b₀, b₁, b₂ ∈ R are the Chebyshev coefficients. We can easily solve the expectations in (44) as below.
13 Calculations for linear/logistic regression with sufficient statistics
Sufficient statistics based posterior inference under LDP
Assume that we are directly modelling the sum of locally perturbed sufficient statistics Z = S + H as a sum of two sums, S = Σᵢ sᵢ and H = Σᵢ ζᵢ, where ζᵢ ~ N(0, Σ*) for i ∈ [N]. Let θ, Σ be the parameters of the regression task at hand and μ_s, Σ_s be the moments of the normal approximation of S. Σ* is the diagonal covariance matrix of the Gaussian noise.

Figure 10: We repeat the inference 100 times for N = 1000 and d = 6. For each run, we add up the number of true θᵢ, i ∈ [d], falling in the 50%, 70%, 90%, and 95% posterior mass, and divide the final count across executions by 100d.

Figure 11: Exponential parameter discovery: for N = 1000, we repeat the inference 80 times for exponentially distributed data (clipped to 5) perturbed with Laplace noise for various ε values. In each independent run, we draw θ from the prior. We compute the fraction of runs that included the true θ in the 50%, 70%, 90%, and 95% posterior mass. The legend shows the quantile intervals that capture the 50%, 70%, 90%, and 95% posterior mass. | 2021-10-28T01:16:43.499Z | 2021-10-27T00:00:00.000 | {
"year": 2021,
"sha1": "39f676d1d548de20371a3819029d0bcde36f15fd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "39f676d1d548de20371a3819029d0bcde36f15fd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
231680338 | pes2o/s2orc | v3-fos-license | Solvent-Free Mechanochemical Synthesis of ZnO Nanoparticles by High-Energy Ball Milling of ε-Zn(OH)2 Crystals
A detailed investigation is presented for the solvent-free mechanochemical synthesis of zinc oxide nanoparticles from ε-Zn(OH)2 crystals by high-energy ball milling. Only a few works have ever explored the dry synthetic route from ε-Zn(OH)2 to ZnO. The milling process of ε-Zn(OH)2 was done in ambient conditions with a 1:100 powder/ball mass ratio, and it produced uniform ZnO nanoparticles with sizes of 10–30 nm, based on the milling duration. The process was carefully monitored and the effect of the milling duration on the powder composition, nanoparticle size and strain, optical properties, aggregate size, and material activity was examined using XRD, TEM, DLS, UV-Vis, and FTIR. The mechanism for the transformation of ε-Zn(OH)2 to ZnO was studied by TGA and XPS analysis. The study gave proof for a reaction mechanism starting with a phase transition of crystalline ε-Zn(OH)2 to amorphous Zn(OH)2, followed by decomposition to ZnO and water. To the best of our knowledge, this mechanochemical approach for synthesizing ZnO from ε-Zn(OH)2 is completely novel. ε-Zn(OH)2 crystals are very easy to obtain, and the milling process is done in ambient conditions; therefore, this work provides a simple, cheap, and solvent-free way to produce ZnO nanoparticles in dry conditions. We believe that this study could help to shed some light on the solvent-free transition from ε-Zn(OH)2 to ZnO and that it could offer a new synthetic route for synthesizing ZnO nanoparticles.
Characterization methods
High-resolution scanning electron microscopy (HR-SEM) images of the ε-Zn(OH)2 crystals were taken using a field-emission FEI Helios 600 HR-SEM. Samples were sputtered with a 3 nm layer of iridium in order to reduce charging effects. Reflectance spectra of the powders were measured using a Cary 500 Scan UV-Vis spectrophotometer equipped with a diffuse reflectance UV (DRUV) solid sample holder. Dynamic light scattering (DLS) measurements were carried out using a Malvern Zetasizer 3000 H AS. All powder samples were suspended in double-distilled water (0.1 mg/mL), bath sonicated for 10 min, and then measured ten times. FTIR spectra were collected using a Thermo Scientific Nicolet iS10 FTIR spectrometer equipped with a Smart iTR attenuated total reflectance (ATR) sampler containing a single-bounce diamond crystal. Data were collected and analyzed using OMNIC software. Spectra were collected in the 650-4000 cm−1 range at a spectral resolution of 4 cm−1. Aggregate size was measured by DLS; powders were suspended in DDW and measured 10 times. The average particle size in solution (nm) and the relative abundance of each size are presented in Figure S8. The DLS graph of 2 Cycles (red line) presents one large peak centered at around 1,000 nm and a much smaller peak centered around 250 nm; the large peak corresponds to residues of ε-Zn(OH)2 in the sample, as confirmed by XRD and TEM, and the small peak, with a much lower abundance in the sample, fits a small ZnO nanocrystalline aggregate, which is smaller than all other products due to the smaller number of ZnO nanocrystals compared to the other samples.
5 Cycles also presents two peaks in the DLS graph, one centered at around 1,300 nm and the other centered around 490 nm; in this case, the large aggregates do not correspond to residues of ε-Zn(OH)2, as no diffraction peaks that fit ε-Zn(OH)2 were detected by XRD. However, large quantities of a-Zn(OH)2 were detected by TGA analysis; therefore, we believe that the large peak is due to the presence of large ZnO/a-Zn(OH)2 aggregates that later break down with the milling process. As for the 8 and 10 Cycle samples, we only find one peak in DLS, meaning that all of the aggregates in the sample are of relatively the same size; however, the 10 Cycles sample presents a much sharper peak in comparison to 8 Cycles, which means that it has a much narrower size distribution and a higher homogeneity. The 8 Cycles sample also contains a certain amount of a-Zn(OH)2, so the presence of a small amount of ZnO/a-Zn(OH)2 aggregates could explain the wider distribution of sizes. Altogether, from the DLS results we learn that throughout the transition from ε-Zn(OH)2 to a-Zn(OH)2, and finally to ZnO nanoparticles, the aggregate size decreases until it collapses to an average size of 500 ± 100 nm with a narrow size distribution.
The optical band gap was evaluated from the diffuse reflectance data using the Kubelka-Munk (KM) function, F(R∞) = (1 − R∞)²/(2R∞) = K/S (1), where F(R∞) is the KM function assuming the measured reflectance of an ideal, infinitely thick specimen, K is the absorption coefficient, and S is the scattering coefficient of the sample. The KM function can be used to calculate the band gap of the semiconductor by placing F(R∞) in place of α in the direct-gap Tauc equation, (αhν)² = A(hν − E_g) (2), and plotting (αhν)² versus energy (eV). The region of the graph that shows a steep, linear increase of light absorption with increasing energy is characteristic of semiconductor materials, and the x-axis intersection point of the linear fit of the Tauc plot gives an estimate of the band gap energy.
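The extraction procedure can be sketched as follows; the fit window for the linear region is chosen by inspection, and the function and variable names are illustrative rather than from any published script.

```python
import numpy as np

def band_gap_ev(wavelength_nm, R, fit_lo_ev, fit_hi_ev):
    # Kubelka-Munk + direct-gap Tauc analysis of diffuse reflectance data:
    # fit (F(R) * hv)^2 vs hv in the linear region; the x-intercept estimates E_g
    hv = 1239.84 / np.asarray(wavelength_nm)      # photon energy in eV
    F = (1.0 - np.asarray(R)) ** 2 / (2.0 * np.asarray(R))   # KM function
    tauc = (F * hv) ** 2
    m = (hv >= fit_lo_ev) & (hv <= fit_hi_ev)     # the steep linear region
    slope, intercept = np.polyfit(hv[m], tauc[m], 1)
    return -intercept / slope                     # band gap estimate in eV
```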
For all powders, the band gap was found to be around 3.22 eV, regardless of the number of milling cycles, probably because the crystallite size in all of the samples is of the same order; therefore, the minor changes in size did not have an effect on the electrical properties. The results are presented in Figure S5. The structure of the milled powders was also investigated by Fourier transform infrared (FTIR) spectroscopy. All spectra, shown in Figure S9, show a wide peak at 3352 cm−1 in the characteristic region of OH stretching vibrations (3000-3500 cm−1). We believe that this peak can be attributed to a-Zn(OH)2, which is known from the TGA analysis to exist in all samples (Figure 4). In the 10 Cycles sample, the peak intensity decreased to a large extent, and the peak shifted to 3395 cm−1, which is characteristic of water/ethanol molecules adsorbed onto ZnO [3]. In addition to the peaks in the 3000-3500 cm−1 region, a very small peak at 2975 cm−1 corresponds to C-H stretching of an alkane, and two additional peaks at 1508 and 1382 cm−1 correspond to C-O stretching and O-H bending [4,5]. These data are consistent with ethanol molecules adsorbed in the milled powders. | 2021-01-23T06:16:24.757Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "53ced4c7af63fa58cb51a330d57494297f4b450f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/1/238/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd9fba1c959bd658766e3c30a7cbdf7a4bff73b7",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244101767 | pes2o/s2orc | v3-fos-license | Speed Compatible Green Wave Corridor with The Internet of Things
Today, Intelligent Transportation Systems are becoming more and more common. Green wave systems within Intelligent Transportation Systems are typically operated with preset, fixed phase durations and an unchanging total cycle time. These systems need to join the intelligent transportation class and adapt to their environment. In this study, the green wave system was realized as an Intelligent Transportation System on a corridor with varying phase durations. Data collection points on the green wave corridor, speed detection points, and junction control devices were made to communicate; the durations of the junctions were changed according to the traffic density, the waiting times of the other vehicles and pedestrians in the corridor were reduced, and traffic safety was increased.
Introduction
Systems that use information and communication technologies for infrastructure, vehicles, traffic management, and mobility management in road transportation, as well as in other modes of transportation, are called Intelligent Transportation Systems [1]. Intelligent transportation systems are increasing day by day. Smart junctions, green wave corridors, variable message systems, speed detection systems, and similar systems aim to relieve traffic, make human life easier, and reduce traffic accidents and loss of life. Figure 1 shows the smart transportation system infrastructure [2].
Roads in residential areas where traffic is heavily used are called arteries. Green wave corridors aim to let a vehicle proceed through multiple arteries without stopping. A fixed speed plan is implemented on such roads. For example, in an artery with a 30-second green wave period for cars, the aim is to give way to pedestrians at regular intervals. In this process, traffic intensifies, and heavy traffic builds up with vehicles that want to enter the artery from secondary roads. As Internet of Things (IoT) technology spreads in intelligent transportation systems, a traffic system can be created that manages itself and works with variable times and variable speeds. Traffic lights are aimed at traffic safety rather than at speeding up traffic. With Internet of Things technology, intelligent transportation systems bring speed as well as safety.
Figure 1. Intelligent Transportation System
Metropolitan cities with heavy traffic problems, in particular, use the Simulation of Urban MObility (SUMO) program with the help of developing technology [3]. Because traffic does not have a precise flow model, due to its high complexity and chaotic organization, researchers often try to predict traffic using simulation programs. Many simulation packages exist in this area; they follow different software architecture paradigms and differ in the models that define traffic itself. Unlike most other simulation software packages, SUMO is an open-source program that a researcher can adapt to their needs [4]. It supports the design of traffic light controllers based on fuzzy logic control and Proportional Integral (PI) type control, and the results can be compared with traditional fixed phase times, fuzzy logic-based control, and PI-based control [5].
Intelligent Traffic System (ITS) is one of the newest research topics on the Internet of Things. Due to the increasing number of vehicles, traffic jams can occur in settlements. Most traffic light control systems still use stand-alone setups where each traffic light junction is manually configured by officers. This prevents the traffic lights from adapting to traffic density, which often results in cars accumulating and traffic jams [6]. To reduce traffic jams, it is necessary to provide an open road to vehicles in the urban area. Considering that transportation consists of traffic and connection points (junctions, terminals, stations, etc.), the creation, renewal, and management of these systems are of great importance for solving the problems that occur.
In this study, the simulation results in the SUMO software were analyzed for a fixed-time conventional green wave corridor and for a variable-time green wave corridor that uses the information obtained from IoT-supported devices, vehicle counts, and speed determinations.
Internet Of Things
The Internet of Things (IoT) concept was first described in 1999 in a presentation by Kevin Ashton, covering the application of Radio Frequency Identification (RFID) technology in a firm's supply chain and its use and benefits [7]. IoT can be defined as a network structure where machines exchange data among themselves without the need for human intervention and data entry; the devices collect information and make decisions on their own with the information collected. In other words, IoT can also be defined as the ability of addressable objects in a network to communicate among themselves with a specific protocol [8].
While there were 4.9 billion devices on the internet in 2015, a report published by Gartner stated that the number of Internet-connected objects increased by 16% in 2017 compared to 2016, and that 8.4 billion objects would be used worldwide [9]. Today, around 30 billion devices are in use, and this is expected to reach 75 billion devices by 2025 [10]. As these data show, the IoT market is growing and becoming more common in many parts of our daily life, starting with the adoption of smartphones and computers, and including smartwatches and wristbands, smart televisions, smart home appliances, and even smart cars. Figure 2 shows the many areas where IoT is used [11]. Studies exist in the fields of Smart Home, E-Health, Smart Environment, Smart Water, Smart Agriculture, Smart Livestock, Smart Energy, Smart Cities, Smart Measurement, Industrial Control, Security and Emergency Situations, Shopping, and Logistics [12]. In the near future, as a result of the data exchange between the internet and objects, your refrigerator will be able to order eggs from your market when they run out. Similarly, your car, detecting a traffic jam while you travel, can inform your family or the people you are meeting that you may be delayed, and a portable medical device that continuously measures your blood pressure can tell your doctor by SMS that your blood pressure is changing, rising, or falling [13].
Speed-Warning Green Wave
The green wave system is a signaling system created to avoid successive stops at red lights at traffic signals located at short distances along the main arteries. The system, which makes it possible to pass all traffic lights on green when traveling at a certain speed on the selected routes, thus enables time and fuel savings. Figure 3 shows a road using the green wave.
Figure 3. The road and sign indicating that the green wave is used
In addition to standard green wave systems, the aim is to determine the speed of vehicles with radar systems so that drivers maintain their speed. Figure 4 shows a digital tracking system (VTS, Vessel Traffic System) display indicating the green wave system. The goal is to ensure that the vehicles complete the corridor at average speeds, with time changes applied at the traffic lights, and that pedestrians cross the road safely when the underground vehicle counters register few or no vehicles.
There are two important items at the heart of this idea:
• Speed detection with radars.
• A vehicle counting system.
Adding Radar On the Corridor
Radar is based on the principle that radio waves, transmitted as a very narrow and short-duration beam, reflect back to the transmitter after hitting an obstacle. The word Radar was created by shortening the initials of the words Radio Detecting and Ranging, which means detecting and locating with radio. The radar device sends radio waves towards an approaching car and measures the vehicle's speed from the reflected waves (in practice, via the Doppler shift of the reflected signal or successive round-trip time measurements). By measuring the average speeds of the vehicles passing through the corridor, the phase durations of the junctions in the corridor are determined. These phase durations give us the elapsed time between two junctions. A new generation radar is shown in Figure 5.
Figure 5. A radar system implemented in Uşak
The IoT device, which controls the radar system, records the average speed of the vehicles and the speed of the first vehicle. By sending this information to the junction controller, it provides the speed parameters to the system.
Counting vehicles with Bluetooth, RFID, and Loop Detector
The Internet of Things was first used in 1999 by Kevin Ashton in a presentation on the benefits of Radio Frequency Identification (RFID) technology for P&G [13]. With an IoT-based smart traffic information system, data were obtained with RFID tags and detection devices. These data were used with traffic simulation modeling in the NetLogo program, aiming to reduce density and environmental pollution in traffic [7]. Today, RFID cards are found in many vehicles, used for the Fast Pass System (HGS), the Automatic Pass System (OGS), secure site and car park entrances, and similar applications. The realistic travel time between two locations can be calculated by tracking these cards.
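A hypothetical sketch of this matching step is given below; the tag IDs, timestamps, and segment length are illustrative placeholders.

```python
def segment_speeds(reads_entry, reads_exit, segment_km):
    # reads_*: dict mapping tag_id -> read time in seconds since midnight;
    # matching the same tag at entry and exit gives per-vehicle travel time
    speeds_kmh = {}
    for tag, t_in in reads_entry.items():
        t_out = reads_exit.get(tag)
        if t_out is not None and t_out > t_in:
            speeds_kmh[tag] = segment_km / ((t_out - t_in) / 3600.0)
    return speeds_kmh

print(segment_speeds({"34AB123": 36000.0}, {"34AB123": 36600.0}, segment_km=15.0))
# -> {'34AB123': 90.0}: 15 km covered in 600 s equals 90 km/h
```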
An example of an RFID tag reading process is given in Figure 6. With this example, the entry time of a vehicle with an HGS label is determined. Since the same system is also at the highway exit, the time taken by a vehicle between two points can be calculated. Bluetooth, in turn, is a technology that communicates with mobile phones and other mobile devices wirelessly using short-range radio frequency (RF) signals [14].
It is used for 2 main purposes.
• To pair the devices with each other and enable them to communicate with each other.
• Enabling wireless file exchange.
For these two purposes, while the Bluetooth modules are communicating, one takes the master position and the other the slave position. The module in the master position organizes the communication and is the side that sends the data. The device in the slave state is the address to which the data is sent in this communication. After the master and slave positions are set by Bluetooth technology, data transfer starts between the two electronic devices. The devices used can be fixed or portable. Thanks to Bluetooth, a connection is created between devices. Within this network, many transactions such as data exchange, information transfer, using a printer, and sending e-mail are performed. Different devices in the same production environment can be monitored and kept under control.
Loop detectors create an electromagnetic field by means of a copper cable laid under the road surface. Loop detectors are also called metal mass detectors. They are mostly used in parking systems, for opening and closing barriers automatically, and during the automatic issuing of tickets. They are also used to detect whether a vehicle is on the asphalt. By connecting a few turns of cable in the ground to the loop detector card, an electromagnetic field is created on the asphalt; the detection of metal mass is then carried out through changes in this electromagnetic field. When any metal passes over this field, the desired commands are executed according to the settings on the device. Thus, vehicles can be detected, the traffic load at the crossroads can be measured, and unnecessary waiting is prevented [15].
Devices such as the BeagleBoard BeagleBone and Raspberry Pi are used where microprocessors are insufficient and a computer is required; these devices are defined as computers that can perform more operations, are cost-effective, and contain only the necessary hardware [16]. Figure 7 shows the Raspberry Pi and BeagleBone Black devices [17].
The IoT devices BeagleBone and Raspberry Pi can be integrated with loop detectors, RFID readers, and Bluetooth detectors. Thanks to the features of these devices, the number of vehicles passing the points where the devices are located is determined, and the information obtained is sent to the junction control devices [18].
Figure 7. Raspberry Pi (on the left) and BeagleBone Black
After the vehicle count information is received from the loop detector, RFID reader, and Bluetooth devices, the values are entered into the SUMO program; we can then calculate the travel times on the corridor and adjust the status of the next traffic light. At the same time, knowing whether a vehicle is present or not, we can give way to the pedestrians or to vehicles coming from the secondary road. Extra functions can be gained with scripts integrated into this program, and various traffic algorithms can be created with these functions. Vehicle detection and speed detection scripts integrated into the SUMO program were implemented with an algorithm that treats them like IoT devices.
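A minimal sketch of reading such detector data inside SUMO through its TraCI Python API follows; the detector ID "loop_in" and the config file "corridor.sumocfg" are placeholders for those defined in the scenario.

```python
import traci

traci.start(["sumo", "-c", "corridor.sumocfg"])   # assumed scenario config
counts, speeds = [], []
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # vehicles that crossed the induction loop during the last step
    counts.append(traci.inductionloop.getLastStepVehicleNumber("loop_in"))
    # mean speed over the loop, usable for green-wave phase timing
    speeds.append(traci.inductionloop.getLastStepMeanSpeed("loop_in"))
traci.close()
```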
As a result of these studies, by monitoring vehicle counts and speeds on routes with green waves, we can determine the speed of that artery, increase life safety, and track the average speeds and density in the artery. We can also inform other devices in the system.
The principle of "continue at the same speed and do not wait at the traffic lights", which is used in the green wave system, can be applied not only to the vehicles on this road but also to the pedestrians and the vehicles on the secondary road.
In Figure 8, a green wave corridor with three junctions is drawn in the SUMO application. The counting of the vehicles is done using loop detectors.
Algorithm and Flow Diagram
The green wave means that, at successive signalized junctions, drivers progress steadily at a certain speed without getting caught at a red light. Figure 9 shows the traditional green wave algorithm. In this algorithm, the phase durations of the junctions that form the corridor are fixed at the beginning; accordingly, the algorithm only checks whether the phase change period has come. Adaptive green waves, in contrast, can adjust signal times in real time according to demand and system capacity. Such a system generates signal times from the occupancy data and the number of vehicles recorded by sensors located at a certain distance from the entry and exit of each junction arm. When the traffic volume is larger than normal, the detector records this information and sends it to the control unit. The controller turns the red light green in turn and changes the total phase length to keep the traffic volume in balance. Adaptive systems are the most ideal systems for minimizing total delays, as they provide the right of way according to instantaneous demand. The closing steps of the algorithm in Figure 9 read: if the phase change period has not come, go to step 10; 9. change the phase; 10. go to step 5 (system and lamp control); 11. end.
Figure 9. Traditional Green Wave Algorithm
Unlike traditional green wave systems, the IoT-supported system receives more field data as input and processes these data with an improved algorithm, applying the resulting outputs to the traffic lights on the road. In an IoT-supported green wave corridor, green wave coordination is achieved by making phase changes of the lamp groups with the help of the values obtained from the IoT devices. Figure 10 describes the IoT-supported green wave coordination algorithm. In this algorithm, the start times are fixed, the data from the IoT devices are analyzed every second, and the phase changes of the groups are made accordingly. When data cannot be received from the IoT devices, the algorithm falls back to the traditional green wave mode.
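One way to realize this per-second control loop with TraCI is sketched below; the IDs, the extension cap, and the demand response factor are illustrative assumptions, and the except branch mirrors the algorithm's fallback to the traditional mode.

```python
import traci

def adaptive_step(tls_id, loop_id, base_green_s, max_extra_s=20.0):
    # one control tick: extend the current green in proportion to measured
    # demand; if the IoT data source fails, revert to the fixed plan
    try:
        n = traci.inductionloop.getLastStepVehicleNumber(loop_id)
    except traci.TraCIException:
        # traditional green wave mode: keep the preset phase duration
        traci.trafficlight.setPhaseDuration(tls_id, base_green_s)
        return
    extra = min(max_extra_s, 2.0 * n)   # simple, capped linear demand response
    traci.trafficlight.setPhaseDuration(tls_id, base_green_s + extra)
```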
Conclusions and Recommendations
Today, traffic lights are used to ensure traffic safety, which leads to an increase in traffic density. Attempting to solve this problem by applying green waves only at the junctions along the straight artery benefits only the traffic following that route. As a result of this research, the speed of the artery, the increase in life safety, the average speeds, and the density of vehicles in the artery were determined by monitoring vehicle counts and speeds on the routes with green waves. In addition, information was sent to the other devices in the system.
The traditional green wave is coordinated traffic signaling, created so that drivers progress regularly at a certain speed without catching a red light, especially at successive signalized junctions. Adaptive systems, on the other hand, are the most ideal systems for minimizing total delays, because they grant the right of way according to the instantaneous values of traffic densities.
A road simulation with 10,000 vehicles was created using the SUMO traffic simulation software. As a result of the simulation, the values in Table 1 were obtained.
For vehicles moving on the road, the reduction in waiting times resulted in higher speeds, less fuel consumption, and less time loss. A 20% reduction in waiting time reduces environmental pollution and saves fuel.
Although the average speed was expected to match the speed set for the green wave corridor, the deficiencies of the traditional system prevented it from reaching this value. When the time is calculated according to the number of vehicles, we see that the average speed increases by 20%. | 2021-10-18T17:51:55.484Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "ccd8ed2e2dc0c8cab91b0771e29a326f8d6e7014",
"oa_license": "CCBYNC",
"oa_url": "https://dergipark.org.tr/en/download/article-file/2000795",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "84a3f8589a5676622f240dd98251fe0dbd074780",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
74637485 | pes2o/s2orc | v3-fos-license | Participants’ Perspectives of a Culturally Competent Diabetes Education for Hispanic/Latinos
Background: ¡Si, Yo Puedo Controlar Mi Diabetes! (Sí, Yo Puedo) is a culturally appropriate diabetes self-management education program targeting underserved Hispanic/Latinos. Purpose: The purpose of this article is to report on our post-test focus group observations that elaborate upon quantitative evaluation results that are published elsewhere. Methods: Following a seven-week intervention, we conducted seven focus groups to capture participants’ perspectives about managing their diabetes before and after participating in classes. These sessions were held during a one-month post-intervention (reunion) session. Results: Participants were mostly female (77%; N=34) with a mean age of 58.8 years. Perceived improvements in eating habits, blood glucose testing, and physical activity were among the positive outcomes of attending the program. Barriers to diabetes self-management included struggles changing lifestyle habits, accepting disease diagnosis, and financial issues. Despite these concerns, participants found Sí, Yo Puedo to be beneficial, especially with psychosocial support. “Not feeling alone” was a prevailing sentiment expressed by participants. Conclusions: Overall, participants indicated the program was relevant to their needs. This study suggests that Sí, Yo Puedo is an effective program to reach Hispanic/Latinos and improve their health outcomes. © 2014 Californian Journal of Health Promotion. All rights reserved.
Introduction
Hispanic/Latinos are at an increased risk for developing diabetes and suffering diseaserelated complications. Diabetes rates are appreciably greater for Hispanic/Latinos (11.8%) than non-Hispanic whites (7.1%) (Centers for Disease Control and Prevention [CDC], 2011). This group also has higher rates of end-stage renal disease and are less likely to engage in diabetes self-care practices, such as taking a diabetes self-management education (DSME) class or monitoring their blood glucose levels (Thackeray, Merrill, & Neiger, 2004;CDC, 2009;Campbell, Walker, Smalls, & Egede, 2012). As these data suggest, poorly controlled diabetes is a burden experienced by many Hispanic/Latinos.
Barriers to Diabetes Care
Diabetes self-management education is vital to improving health outcomes. Yet, for disadvantaged Hispanic/Latinos, barriers to diabetes care are a concern. Recent reports show that 34% of the 44 million U.S. Latinos aged 65 years or younger do not have medical insurance, as opposed to only 12% of whites not having coverage (Perez-Escamilla, 2010). Numerous factors have been associated with their poor diabetes outcomes including limited access to health care, language barriers, cultural factors, lack of cultural competence among health care providers, living in poverty, limited social support, low health literacy, and poor coping with medical co-morbidities (Cersosimo & Musi, 2011;Brown & Hanis, 2014;Brown, García, Winter, Silva, Brown, & Hanis, 2011;Scollan-Koliopoulos, Schechter, Caban, & Walker, 2012). Overcoming these obstacles requires health care system changes regarding policies to increase health coverage and access to care, and at the provider and patient level to improve communication through culturally appropriate educational materials and programs (Cersosimo & Musi, 2011).
Culturally Competent Care
Culturally competent care is a solution to improve the accessibility and effectiveness of health care services for minority groups (Truong, Paradies, Priest, 2014;Funnell et al., 2011). For example, tailoring DSME interventions for ethnic minority populations compared to conventional DSME interventions reportedly improves participants' diabetes knowledge and healthy lifestyle behaviors (Hawthorne, Robles, Cannings-John, Edwards, 2010). Other data support that culturally appropriate programs are more effective than generic quality improvement programs for reducing disparities in intermediate effects of diabetes such as hemoglobin A1c, a measure of good blood glucose control (National Institute of Health [NIH], 2011. As this evidence reveals, customizing DSME programs yields benefits that can ameliorate diabetes disparities for Hispanic/Latinos.
Diabetes Education for Hispanic/Latinos
Reaching underserved Hispanic/Latinos with customized DSME interventions delivered in community settings is a strategy that has shown promise (Liebman, Heffernan, & Sarvela, 2007;Babamoto et al., 2009;Philis-Tsmikas, Fortmann, Lleva-Ocana, Walker, & Gallo, 2011). Evidence is mounting in favor of community-based DSME programs for Hispanic/Latinos. Positive gains in glycemic control (HbA1c < 7), diabetes self-care behaviors, self-efficacy, and health status were among the benefits cited in recent literature (Rosal et al., 2005;Sixta & Ostwald, 2008;Brown et al., 2011;Philis-Tsmikas et al., 2011;Peña-Purcell, Boggess, & Jimenez, 2011). Conducting classes in the location where participants reside will also have greater appeal because it is a familiar and comfortable place. This was the case in a 20-year research study, where Hispanic/Latinos preferred their DSME classes held close to home, such as at the local church or school (Brown & Hanis, 2014).
Bringing programs to the community, particularly group classes, offers many other advantages. These benefits include providing an informal atmosphere for open discussions and questions, facilitating support from family and friends, allowing convenience in scheduling and location for the patient, promoting cultural relevance and appropriateness of instructional techniques, stimulating collective learning, and encouraging social interactions (Brown, Garcia, & Winchell, 2002;Ingram, Gallegos, & Elenes, 2005;Norris et al., 2002). With the numbers of Hispanic/Latino population expected to rise, DSME programs catered to this group will be a critical need. ¡Si, Yo Puedo Controlar Mi Diabetes! (Si, Yo Puedo) was developed by Texas A&M AgriLife Extension Service as a statewide initiative to meet the demand for a DSME program tailored for its Hispanic/Latino clientele, especially in rural areas. Its approach included delivering classes in Spanish, incorporating their dietary preferences in the nutritional content, emphasizing socialization and family participation, and integrating cultural beliefs and values in the curriculum. The strategies employed in Si, Yo Puedo parallel best practices used in other DSME programs targeted for Hispanic/Latinos (Sixta & Ostwald, 2008;Philis-Tsmikas et al., 2011;Vincent, Pasvogel & Barerra, 2007). In the pilot study, we were able to achieve clinical and behavioral improvements. Specifically, participants lowered their A1c and increased their self-efficacy, diabetes knowledge, and self-care behaviors (Peña-Purcell, Boggess, & Jimenez, 2011). This study reports on focus groups that were conducted as a post-test qualitative evaluation to contextualize the impact of Si, Yo Puedo on participants. The purpose of this article is to report on our focus group observations to elaborate upon our published study.
Study Design
At post-test, we conducted seven focus groups to capture participants' experiences managing their diabetes after participating in the classes. These sessions were held one month post-intervention during a reunion session. Texas A&M University Institutional Review Board approval was obtained to conduct the study.
Sample and Site Characteristics
Thirty-four Hispanic/Latinos with type 2 diabetes agreed to participate in the study. All participants completed a five-week Si, Yo Puedo class series in Starr County, Texas. Participants were purposively recruited because of their participation in Si, Yo Puedo, which allowed them to provide feedback specific to the effectiveness of the course. Si, Yo Puedo is a group-based intervention designed to equip participants with knowledge and lifestyle skills in diabetes self-management. The curriculum content aligns with the American Diabetes Association's standards for DSME (Funnell et al., 2011). Table 1 details the Si, Yo Puedo intervention strategies and their respective targets for change in knowledge, attitudes, and behaviors. Diabetes self-care behaviors include diet, physical activity, self-monitoring blood glucose, and medication adherence.
Table 1. Si, Yo Puedo Intervention Strategies (intervention strategies mapped to their targets for change in knowledge, attitudes, and behaviors, across self-care areas beginning with diet)
Seven focus groups were formed from four Si, Yo Puedo classes held in one community center, two churches, and a public library. We planned two focus groups per class to facilitate better group dynamics and sharing. While eight to twelve participants is considered optimal for focus groups, our group sizes varied from two to seven due to low attendance (Krueger & Casey, 2009).
Focus Group Protocol
At the beginning of each focus group, informed consent was administered orally in participants' preferred language (Spanish or English). Demographic data were obtained from our baseline assessment (Peña-Purcell et al., 2011). Si, Yo Puedo class facilitators, trained by the lead author to conduct focus groups, moderated the sessions. A semi-structured interview process guided the focus group discussions.
Eleven open-ended questions were designed to elicit information about participants' self-care habits before and after attending Si, Yo Puedo (Table 2). Focus group items addressed the seven essential diabetes self-care behaviors taught in class: 1) eating healthy, 2) monitoring blood glucose, 3) engaging in physical activity, 4) visiting the doctor routinely, 5) checking eyes, teeth, and feet, 6) taking medications, and 7) managing stress. Focus group discussions were digitally recorded and lasted approximately 60 to 90 minutes. Spanish was the preferred language for six of the seven focus groups, with one conducted in English. In addition to conducting focus groups, we obtained participants' self-reported gender, date of birth, literacy, education, income, and insurance.
Table 2
Focus Group Questions

Reflecting on your experience BEFORE (or AFTER) taking the class, the following are questions about your health habits managing diabetes:
1. How were your eating habits BEFORE taking the class? How are your eating habits AFTER taking the class?
2. How often did you check your blood sugar BEFORE taking the class? How often do you check your blood sugar AFTER taking the class?
3. How physically active were you BEFORE taking the class? How physically active are you AFTER taking the class?
4. How often did you see your doctor BEFORE taking the class? How often do you see your doctor AFTER taking the class?
5. Were you taking your medicines as prescribed by your doctor BEFORE taking the class? Are you taking your medicines as prescribed by your doctor AFTER taking the class?
6. How often did you check your eyes, teeth, and feet BEFORE taking the class? How often do you check your eyes, teeth, and feet AFTER taking the class?
7. How well were you managing your stress BEFORE taking the class? How well are you managing your stress AFTER taking the class?

The following are general questions about managing your diabetes:
8. What is most difficult in managing your diabetes? Explain.
9. What is easy in managing your diabetes? Explain.
10. Is there something that we didn't ask that you think is important to add to this interview? Feel free to share any additional thoughts you have.
Data Management and Analysis
A team of trained bilingual (Spanish/English) graduate students transcribed the audiotaped focus group discussions. To maintain the integrity of participants' responses, transcribed focus groups were written in the original language (Spanish or English). Participant identifiers were removed to protect confidentiality.
A content analysis and constant comparative method were used to analyze the data (Creswell, 2002). Because our research team members are bilingual, we analyzed the Spanish transcriptions in their original language. Through content analysis, our team subjectively analyzed the text data to code the information and identify themes (Hsieh & Shannon, 2005). Data units were identified; using the constant comparative method, codes were assigned to this information, and categories were formed. In the first step, the transcription was "segmented" into meaningful data units (Merriam, 1998). During this process, our research team read the text repeatedly to identify units of meaning. Open coding, the second step, assigned labels or codes to data units, giving interpretation to these text segments. Finally, we categorized common elements of the codes, which are abstractions derived from the data that not only describe the data but also interpret them (Merriam, 1998). The research team placed the individual units of transcribed data on note cards. Using a process of consensus, the team coded data units to illuminate the meaning of the transcription. The note cards, particularly for the small-group analysis, provided visual manageability for the emergent categories and promoted group participation in reviewing each unit of data. Given that context plays an important role in understanding a culture, and similarly language in context, we coded and categorized the text in Spanish.
Participant Characteristics
Participants were predominately female (77%; N = 34), with a mean age of 58.8 years; 40% had less than a high school diploma, 57% had a yearly income of less than $20,000, 48.7% were uninsured, and most had a median literacy level (score of 50%). Demographic characteristics of participants are described in Table 3.
Themes of Focus Groups
From the seven focus groups, 765 units of data were extracted. Each of these significant statements was numbered sequentially from the first conducted session to the last. From the content analysis, three themes emerged: diabetes self-care behavior improvements, challenges to managing diabetes, and program benefits. Given the importance of their cultural foods, one participant summed up a collective concern about restricting consumption of these foods: "…staying away from traditional foods that we are accustomed to eating. We still munch here and there, and we just can't seem to drop that off." Restricting consumption of their favorite foods was a hurdle to overcome, as indicated in the following statement: "For me also the food, the rice, tortillas, and beans…they were my favorite foods, and now it has to be less…that is the hardest part." For some participants (8/34), not being able to eat what they were used to eating caused them to experience negative emotions: "…Yes, it depressed me at the beginning because I was going to let go of many things that I truly liked…" When discussing physical activity, barriers to exercising were reported (6/34): "…I only don't walk when it is very cold or I can't." Another commented, "…I have many problems with one leg. And my back has been operated on, and I have lots of problems with this. I know it would help, but I can't walk a lot. Because of this, I don't exercise."
Acceptance of Diabetes Diagnosis.
A final challenge to diabetes management was accepting the diabetes diagnosis (6/34). The following statement captures this psychological impediment as a hurdle to overcome: "…well it was also difficult when they diagnosed the diabetes because maybe I did not have a lot of information, but today after the classes, I see things in another way; I mean, I know how to live with diabetes."

Theme 3. Benefits from the Program.

Several benefits of the program were reported in the focus groups (16/34): a better understanding of their disease, enhanced physical appearance, and social support. Most participants cited that an increase in diabetes knowledge helped them with many aspects of managing their diabetes, including eating healthier and preventing complications. The encouragement and empathy that participants received from each other was a valuable outcome of Si, Yo Puedo. The following statement illustrates this finding: "The support, the support… with people that have the same condition like you. You learn to overcome it better." Another participant commented, "We don't want to feel that we are alone going through this. And we learn from each other…what we go through on a daily basis. And we can get our strength by knowing somebody else is also going through what we are going through and that support."
Discussion
This qualitative study concurs with our published study reporting improvements in diabetes self-care management (Peña-Purcell et al., 2011). Reducing portion sizes, making healthier food choices, engaging in physical activity, and improving blood glucose monitoring were the areas where participants experienced the most notable positive changes. These observations also agree with research testing DSME for Hispanic/Latinos that showed improvements in dietary habits and self-care behaviors (Castillo et al., 2010; Peña-Purcell et al., 2011; Philis-Tsimikas et al., 2004).
Food Choices
Mirroring our published study, where participants progressed from three to six days per week in following a healthful eating plan, our cohort agreed that Si, Yo Puedo helped them learn how to make better food choices. Nonetheless, a common issue among participants was struggling to give up the foods they enjoyed most. One major issue was that their food preferences are part of their cultural identity. Confirming findings by Rosal, Goins, Carbone, and Cortes (2004), participants endorsed eating their usual traditional foods but in smaller amounts. Participants in our study did not mention modifying traditional meals in a healthier manner, but we speculate Si, Yo Puedo raised their awareness of how to make better choices. Based on their responses, eating less of these foods, especially monitoring carbohydrate intake, is a step in the right direction toward achieving glycemic control (American Diabetes Association, 2007). The emphasis in Si, Yo Puedo was to provide options to modify, not eliminate, their cultural foods. To this end, incorporating healthy food choices and recipes that are typical for Hispanic/Latinos was a strategy employed in the course.
Physical Activity
Confirming our published study, where participants made significant strides in engaging in at least 30 minutes of daily physical activity, increasing from two to six days per week, participants cited that they were increasing their level of physical activity (Peña-Purcell et al., 2011). This also lends support to other studies reporting similar improvements in physical activity among Hispanic/Latinos attending culturally relevant DSME (Castillo et al., 2010; Rosal et al., 2005). Our findings are encouraging given that previous research has found Latinos to be less physically active than other populations (Ahluwalia, Mack, Murphy, Mokdad, & Bales, 2003). Supporting other studies, we also observed that exercising might be problematic for people with diabetes due to physical limitations and environmental factors, such as a lack of green space and unsafe places (Castillo et al., 2010; Lopez-Class & Jurkowski, 2010).
Blood Glucose Testing
While it was evident that participants were making progress in monitoring their blood glucose, infrequent testing was an area of concern. Our findings conflict with the pilot demonstration, where the largest gains were in testing the recommended number of times per day, increasing from two days to seven days (Peña-Purcell et al., 2011). Because the focus groups were conducted five weeks post-intervention, we speculate some participants may have lost motivation to maintain the healthful habits learned in class. Another possible explanation is fear or dislike of finger pricking, as expressed by one participant. Pain associated with blood glucose testing is one possible reason why people may refrain from regular testing (Heinemann, 2008). Rosal et al. (2004) cited that Hispanic diabetes patients feared using a glucose meter because they disliked the pain from finger pricking. This concern underscores the importance of DSME in helping patients develop problem-solving skills to tackle issues that hinder them from routinely checking their blood glucose.
Acceptance of Diabetes Diagnosis
Several participants expressed difficulty accepting their diabetes diagnosis. Refusing to accept, or denial, is a common response among chronic disease patients, and can potentially lead to poor metabolic control (Taylor, 2009). In a study of New Mexico Hispanics, participants denied their diabetes diagnosis for years before coming to acceptance (McCloskey & Flenniken, 2010). As noted by the authors, denial impacted participants' readiness and ability to take steps to manage their diabetes (McCloskey & Flenniken, 2010). Readiness to change in diabetes education programming is a phenomenon that necessitates further study. Elucidating this occurrence can bring a new dimension to DSME specific to the psychological concerns affecting participants' ability to control their disease.
Our study agrees with prior research that social support has a positive effect on diabetes self-management (Rosal et al., 2005; McCloskey & Flenniken, 2010). Most notably, group-based interventions provide an outlet to share with others experiencing the same concerns (Baig et al., 2012). People living with chronic disease often struggle with feelings of isolation due to their illness. Group-based DSME programs offer participants an opportunity to connect with others who share similar health problems. Through peer support, participants can express their feelings and anxieties with others who have the same disease and often the same frustrations (Funnell, 2010). This support may be particularly helpful when family or friends are not available. In many instances, peer support may be the only outlet for expressing emotions with others who can relate to their experiences.
As a final observation, we cannot overlook the cultural implications of the large representation of women in our investigation. The Hispanic value of familia influences the decisions Latinas make about their health. In the traditional maternal role of Latinas, putting family needs first may create conflict between adhering to cultural norms and following diabetes management recommendations (Mauldon, Melkus, & Cagganello, 2006). Preparing healthy meals may be problematic for Latinas, who are customarily the primary caregivers in the home (Padilla & Villalobos, 2007). Latinas may feel it is selfish, or a burden to the family, to buy special foods for a diabetic diet, which are often costly for a low-income household (Oomen, Owen, & Suggs, 1999). The following sentiment reflects this problem: "Well, I had to cook for other people, and I had to eat that food because I don't want to cook for myself. And that's why it's very hard…" Because self-denial and "going without" are considered honorable characteristics for Latinas, adherence to a diabetes self-care regimen may be a struggle (Thackeray et al., 2004). Carbone, Rosal, Torres, Goins, and Bermudez (2007) suggest that traditional gender roles can potentially constrain patients' ability to make healthful changes. Overcoming this problem is complex; however, including family members in DSME classes may help educate them about diabetes management. In doing so, they will likely gain a better understanding of how to best support their family members' efforts in diabetes self-care and learn how healthy lifestyle habits can serve as a diabetes prevention measure.
Limitations
First, improvements were based on self-reported qualitative data. Second, participants were predominately Mexican-American, and the findings may not be applicable to other Hispanic/Latino subgroups. Because Si, Yo Puedo was primarily tested with Mexican-Americans in Texas border communities, our next steps are to evaluate its effectiveness with other subgroups of Hispanic/Latinos. We expect the core theoretical principles of Si, Yo Puedo (e.g., self-efficacy, goal setting, and social support) to translate well to other similarly underserved populations. Finally, there were greater numbers of women compared to men, which may limit the transferability of the findings to males. Future studies involving a more heterogeneous sample are recommended.
Implications
Culturally competent care is a professional standard for those delivering health promotion programs. Si, Yo Puedo advances this initiative and furthers our understanding of the merits of a DSME program designed for Hispanic/Latinos. Importantly, Si, Yo Puedo exemplifies a culturally sensitive approach to reducing diabetes disparities that can favorably influence health outcomes for this population. | 2019-03-12T13:05:07.124Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "cfe75271cfd97b115e0ceddce87f9984fbc0afe2",
"oa_license": "CCBY",
"oa_url": "https://journals.calstate.edu/cjhp/article/download/2145/1962",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "76d406e642c70a04bc59e08af30eda58cde37d91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55564400 | pes2o/s2orc | v3-fos-license | The Methods of Terminal Capacity Analysis
The capacity is a critical issue at many airports today. Tools exist for examining and managing capacity, but terminal operations are comprehensive and complex, and some methods are not suitable. This paper deals with two basic approaches among the methods for analyzing terminal capacity. Keywords: Capacity, Kendall notation, Queuing theory
I. INTRODUCTION
The continuous growth of air traffic leads to problematic issues in the field of air transport capacity. Insufficient capacity in previous periods limits the development of current activities. Limitations arise on the passenger's arrival at the airport, at check-in desks, security controls, waiting rooms, and so on. These issues affect passengers directly, which is why passengers perceive them more than other constraints. Passenger satisfaction and airport operational efficiency also have an impact on the level of service, demand calculation [1], the airport's earnings, and so on. It is necessary to understand the issues of capacity and examine them in greater detail.
II. CAPACITY - A KEY ISSUE
Capacity expresses the ability to absorb something. Generally, we distinguish between static and dynamic capacity. Static capacity is the maximum number of people or objects in the monitored area.
For example, it is appropriate to monitor the capacity of parking spaces, passengers' waiting rooms, or airspace. Static capacity gives information about the state of terminal capacity as well as airspace capacity. Such information can illustrate the state of terminal capacity (e.g., 2,000 passengers of a total capacity of 3,500, i.e., 57% occupancy).
Dynamic capacity is also an important parameter to be considered. It deals with questions such as how many vehicles can arrive at the airport per hour, how many passengers can be checked in within ten minutes, and how many planes can land during the day. Dynamic capacity is calculated as the number of passengers (or airplanes) for which the monitored phenomenon occurs per unit of time.
Only an adequate combination of static and dynamic capacity leads to smooth operation of air traffic. Reaching the maximum planned capacity in individual areas, or even exceeding it, has a very negative impact on customer satisfaction and on aviation safety.
The following figure (Figure 1) shows the interrelation between static and dynamic capacity. As an illustrative example: five trains arrive at the airport, one every fifteen minutes. Each train brings 300 passengers, whereas one check-in desk handles 40 passengers per hour. The figure shows the variation of the average number of waiting passengers depending on the number of check-in counters.
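To make the interplay of static and dynamic capacity concrete, the deterministic fluid model below replays this example in Python under one possible reading of it (one train of 300 passengers every fifteen minutes, five trains in total); the 75-minute horizon and one-minute time step are our own assumptions, so the sketch illustrates the mechanism rather than reproducing the exact figures cited in the text.

def avg_waiting(counters, horizon_min=75):
    """Average queue length for the five-train example (deterministic fluid model)."""
    rate_out = counters * 40 / 60.0           # passengers handled per minute
    queue, queue_minutes = 0.0, 0.0
    for minute in range(horizon_min):
        if minute % 15 == 0 and minute < 75:  # train arrivals at t = 0, 15, ..., 60
            queue += 300
        queue = max(0.0, queue - rate_out)    # counters drain the backlog
        queue_minutes += queue
    return queue_minutes / horizon_min

for n in (10, 20, 30):
    print(f"{n} counters -> on average {avg_waiting(n):.0f} passengers waiting")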
In this example, the required minimum static terminal capacity varies in the range from 500 to 1,500 passengers, depending on the dynamic capacity. For optimal analysis and assessment of such situations in aviation, it is suitable to use queuing theory.
A. What is queuing theory about?
Queuing theory is a part of applied mathematics. It studies the operation of systems in which a sequence of operations recurs. The times at which such operations arise and occur are usually random. The aim is to specify dependencies between the character of the entry requirements, line productivity, and operational efficiency. For this purpose, it is appropriate to define the basic terms relevant to this area; these can be found in the standard books written on this topic, see e.g. [2,3]. Kendall notation characterizes a range of queuing models. Kendall proposed describing queuing models using three factors in 1953 [4]. Nowadays, after its enlargement, five symbols are used for classification: A/B/C/D/E. The letter A specifies the arrival process (the distribution of times between arrivals of new requirements to the queue) and B describes the service time distribution. The letter C specifies the number of servers, D is the capacity of the queue, and E is the queuing discipline.
For A or B, the letter G is used for a general distribution, M for the exponential (Markovian) distribution, D for deterministic times, etc.
C. Queuing theory in passenger terminal operations
The capacity is a widely discussed issue, see e.g. [5,6,7]. The following example illustrates the usage of queuing theory in passenger terminal operations. Let us consider an airplane with a capacity of 140 passengers. There are 2 check-in counters in the terminal. Each counter is able to handle 60 PAX per hour on average, with an exponential service time distribution. In this case, the check-in process (2 hours) is divided into six sections, each lasting twenty minutes. The passengers arrive according to a Poisson stream, and the distribution of the number of arriving passengers over the time sections is: (1) 4.3%, (2) 14.3%, (3) 27.1%, (4) 25.7%, (5) 21.4%, and (6) 7.2%. The system is stable for each section. The Kendall notation for this instance is M/M/2/∞/FIFO. In order to calculate the average queue length E[F], the standard steady-state M/M/c formulas are used: with offered load a = λ/μ and utilization ρ = a/c,

P0 = [ Σ_{n=0}^{c−1} a^n/n! + a^c/(c!(1−ρ)) ]^(−1)  and  E[F] = P0 · a^c · ρ / (c!(1−ρ)^2).
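As a cross-check of these formulas, the short Python sketch below applies them section by section; treating each twenty-minute section as if it were in steady state is an approximation (the true system is transient), but it reproduces the roughly 18 waiting passengers reported for the third period.

import math

def mmc_queue_length(lam, mu, c):
    """Average number of waiting customers, E[F], in a steady-state M/M/c queue."""
    a = lam / mu                     # offered load in Erlangs
    rho = a / c                      # utilization; must be < 1 for stability
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1.0 - rho)))
    return p0 * a**c * rho / (math.factorial(c) * (1.0 - rho)**2)

# 140 passengers split over six 20-minute sections, two counters at 60 PAX/h
shares = [0.043, 0.143, 0.271, 0.257, 0.214, 0.072]
for i, share in enumerate(shares, start=1):
    lam = 140 * share * 3            # section arrivals rescaled to a rate per hour
    print(f"section {i}: lambda = {lam:6.1f}/h, E[F] = {mmc_queue_length(lam, 60, 2):5.1f}")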
The figure (Figure 3) shows the results of the calculation. The average number of customers in the queue is highest in the third period, reaching nearly 18 waiting passengers. To deal with the problems of queuing theory, there are two basic methods: the analytic method (formulas) and simulation.
The previous section showed an example of solving the problem by the analytical method. The main advantages of this method are fast results and insight. The disadvantage is the considerable limitation on when this method can be used; it is often inaccurate or inapplicable. If the processes do not meet the primary assumptions, simulation methods have to be used to obtain results.
In simulation, the system is replaced by a model with the same characteristics. Simulation methods are known for their broad applicability and accurate, realistic modeling. Their advantage lies in the possibility of studying the dynamics of the entire system and the ability to model non-standard situations. The disadvantage is the need for a huge amount of input data; consequently, the process of creating the simulation model may be slow and expensive. The appropriateness of using simulation models for passenger terminal operations, however, is indisputable.
The following figure (Figure 4) shows a basic check-in simulation model for two counters. The model is based on Petri nets, and although its configuration is simple, it allows the first information to be obtained. Petri nets can be used not only for creating simulations, but also for easily describing the system. Results can be deterministic, which corresponds to a mathematical equation, or stochastic. To obtain stochastic results, the system must be described probabilistically.
After extending the model and determining all important characteristics of the system, the number of checked-in passengers can be determined. In comparison with the analytical method, the stochastic simulation always provides a different result (Figure 5). For examining the entire process of handling passengers, it is appropriate to create a complex simulation model using specialized software designed for this purpose. Such software works on the basis of the Monte Carlo method, which is useful when it is difficult or impossible to use other mathematical methods.
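As a rough illustration of this stochastic approach, the sketch below replaces the Petri-net model with a plain event-driven Monte Carlo simulation of the same two-counter check-in. The six-section arrival distribution and service rate are taken from the example above; placing each passenger uniformly within a randomly drawn section, and the number of replications, are our own assumptions.

import random

def simulate_checkin(n_pax=140, mu=60.0, counters=2, seed=None):
    """One stochastic run of the two-counter check-in.
    Returns the mean waiting time in hours; service times are exponential
    with rate mu per hour per counter."""
    rng = random.Random(seed)
    shares = [0.043, 0.143, 0.271, 0.257, 0.214, 0.072]
    # each passenger arrives uniformly within a randomly drawn 20-minute section
    arrivals = sorted(
        (rng.choices(range(6), weights=shares)[0] + rng.random()) / 3.0
        for _ in range(n_pax)
    )
    free_at = [0.0] * counters                # time at which each counter frees up
    total_wait = 0.0
    for t in arrivals:
        i = min(range(counters), key=free_at.__getitem__)
        start = max(t, free_at[i])            # wait if both counters are busy
        total_wait += start - t
        free_at[i] = start + rng.expovariate(mu)
    return total_wait / n_pax

# unlike the analytic method, every run gives a (slightly) different result
for run in range(3):
    print(f"run {run}: mean wait = {60 * simulate_checkin(seed=run):.1f} min")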
IV. CONCLUSION
As described in this article, aviation systems are considerably complex. It is often not convenient to use analytical methods for solving capacity-related problems; in contrast, simulation methods are a suitable tool for this purpose. Simulation methods enable analysis of the current system state as well as effective forecasting. The authors plan to continue addressing these issues by creating simulation models.
Figure 1. The dependence of the number of waiting passengers on the number of check-in counters

B. Basic terms

The basic queuing model (Figure 2) is characterized by the following terms. The arrival process of customers describes the laws governing the origin and arrival of new requirements. The arrival rate (λ) is equal to the mean number of customers entering per unit of time. Inter-arrival times are usually independent and have a common distribution (for example, customers arrive according to a Poisson stream). The service times are usually independent and identically distributed, commonly deterministic or exponentially distributed. The service rate (μ) equals the mean number of customers served by one line per unit of time.
Figure 2. The basic queuing theory model

The usage may be shown at check-in counters. For example, there are 640 incoming passengers per hour, so the arrival rate (λ) is 640 PAX per hour. The average number of passengers handled at one check-in counter is 40 per hour, and there are 18 check-in counters, so the service rate (μ) is 720 PAX per hour. Another important value is the utilization rate (ρ), calculated by dividing the arrival rate by the service rate. For this example, the utilization rate is approximately 0.89 (i.e., 89%). Queue discipline defines the order in which requests are delivered to the server. Common orders are FIFO (First In - First Out, i.e., in order of arrival), LIFO (Last In - First Out), P-FIFO (FIFO with priorities), and SIRO (Service In Random Order).
Figure 3. The average number of customers in the queue

Figure 4. The basic check-in simulation model

Figure 5. The number of checked-in passengers | 2018-12-05T03:57:53.020Z | 2016-01-15T00:00:00.000 | {
"year": 2016,
"sha1": "c32944ff67a54477b03c4618f4687918eaf9afbf",
"oa_license": "CCBY",
"oa_url": "https://ojs.cvut.cz/ojs/index.php/mad/article/download/3584/3511",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c32944ff67a54477b03c4618f4687918eaf9afbf",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14901934 | pes2o/s2orc | v3-fos-license | Structure Based Discovery of Small Molecules to Regulate the Activity of Human Insulin Degrading Enzyme
Background: Insulin-degrading enzyme (IDE) is an allosteric Zn²⁺ metalloprotease involved in the degradation of many peptides, including amyloid-β and insulin, which play key roles in Alzheimer's disease (AD) and type 2 diabetes mellitus (T2DM), respectively. Therefore, the use of therapeutic agents that regulate the activity of IDE would be a viable approach towards generating pharmaceutical treatments for these diseases. The crystal structure of IDE revealed that the N-terminal half has an exosite, located ∼30 Å away from the catalytic region, which serves as a regulatory site by orienting the substrates of IDE towards the catalytic site. It is possible to find small molecules that bind to the exosite of IDE and enhance its proteolytic activity towards different substrates. Methodology/Principal Findings: In this study, we applied a structure-based drug design method combined with experimental methods to discover four novel molecules that enhance the activity of human IDE. The novel compounds, designated D3, D4, D6, and D10, enhanced IDE-mediated proteolysis of substrate V, insulin, and amyloid-β, while enhanced degradation profiles were obtained towards substrate V and insulin in the presence of D10 only. Conclusion/Significance: This paper describes the first examples of a computer-aided discovery of IDE regulators, showing that in vitro and in vivo activation of this important enzyme with small molecules is possible.
Introduction
Insulin-degrading enzyme (IDE), a 113 kDa cytosolic metallopeptidase found in bacteria, fungi, plants, and animals, is involved in the clearance of a variety of peptides including insulin, amyloid-β, glucagon, amylin, β-endorphin, insulin-like growth factor II (IGF-2), atrial natriuretic peptide (ANP), and transforming growth factor α (TGF-α) [1,2,3]. IDE consists of two 56 kDa catalytic N- and C-terminal halves comprising four structurally homologous αβ-roll domains [4]. These two halves, connected by a loop of 28 amino acid residues, constitute a large catalytic chamber in which peptides smaller than 70 residues can fit. IDE can be in an open or a closed state, based on the substrate-bound and substrate-free crystal structures of IDE [5,6]. In the open state, substrates can diffuse in and out of the chamber, while substrates are degraded in the closed state. A unique feature of substrate recognition by IDE is that the N-termini of IDE substrates make contact with a highly conserved region, named the exosite, located ~30 Å away from the catalytic center prior to cleavage [5]. This site allows proper positioning of the C-terminal ends of peptides, which are subsequently directed towards the catalytic site.
There are several proposed mechanisms for the regulation of IDE activity [7,8,9]. One other possibility for regulating IDE activity is through small molecules: it may be possible to find small molecules that bind IDE and regulate its activity by stabilizing particular conformations or by disrupting hydrogen bonding between the N- and C-terminal halves. The first known small molecules that bind and activate IDE are ATP and other nucleotide polyphosphates [10,11]. ATP has been reported as an inhibitor of IDE [11]; interestingly, however, it has also been found to activate the degradation of shorter substrates by up to ~70-fold compared with the protein alone [12,13].
The regulation of IDE function with small organic compounds is becoming a very attractive strategy for the treatment of AD and T2DM. Cabrol et al. introduced two novel compounds, identified using high-throughput screening, that stimulated IDE proteolysis of short peptides synergistically with ATP [12]. To our knowledge, these two molecules are the only small molecules reported in the literature that enhance the catalytic activity of IDE. In another study, a set of IDE peptide inhibitors was designed to regulate the catabolism and activity of insulin. The resulting compounds were about 10^6 times more potent than existing inhibitors, and crystallographic analysis revealed that the inhibitors stabilized IDE's "closed," inactive state [13]. In a recent study by Tang and colleagues, the short peptide substrate bradykinin was discovered to increase the activity of IDE through selective binding to the exosite [6]. However, this peptide was found to have low affinity and high Km due to its failure to bind the exosite and catalytic site simultaneously. The discovery of bradykinin as a low-affinity IDE activator provided the knowledge base for the design of molecules that may modulate the function of IDE. Consistent with this finding, we hypothesized that, similar to bradykinin, binding of small molecules to the IDE exosite could play a regulatory role in substrate binding and subsequent IDE proteolysis.
In this study, we utilized a structure-based virtual screening approach to identify novel compounds that bind the IDE exosite and enhance degradation of specific substrates by IDE. We performed virtual screening of over one million small-molecular-weight compounds using the docking software AutoDock 3.0.5. We identified 10 hit compounds with high binding energies at the exosite of IDE, and further tested the effects of these compounds in fluorogenic assays using substrate V, insulin, and amyloid-β. Three of these 10 compounds, D3, D4, and D6, were found to enhance the activity of IDE towards insulin, substrate V, and amyloid-β degradation, respectively, while the fourth compound, D10, enhanced the proteolysis of both substrate V and insulin by IDE. In addition, we show that degradation of insulin by IDE can be enhanced in a cellular setting, increasing the proteolysis of internalized insulin. In order to demonstrate further that these compounds indeed bind to the exosite of IDE, we used site-directed mutagenesis to synthesize IDE with mutations at the exosite, and tested whether such mutants altered the effectiveness of our compounds. Our experimental analysis demonstrated that mutation of the IDE exosite residues Gln363 and His332 to alanine resulted in lower initial and maximum rates of proteolysis of insulin by IDE. The novel compounds proposed in this study could serve as starting points for lead optimization studies, which could ultimately lead to the design of compounds for the treatment of hyperinsulinemia and Alzheimer's disease.
Analysis of Molecular Dynamics Simulation
We applied energy minimization and a molecular dynamics simulation to refine the atomic coordinates of IDE (PDB id: 2G47), as described previously [14]. This procedure is required to obtain a 3D structure resembling IDE under physiological conditions before docking calculations, so that binding energies are computed under realistic conditions. The simulation showed a slight increase in RMSD after minimization up to approximately 0.4 ns, with no significant deviations for the rest of the simulation, demonstrating the convergence of the RMSD and the stability of the structure within the 1.2 ns time period. The final structure at the end of the MD simulation was employed for the molecular docking calculations.
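For reference, the sketch below shows how a backbone RMSD trace of the kind used here to judge convergence is typically computed from trajectory coordinates; it assumes each frame has already been superposed onto the reference structure, a step an MD analysis tool would normally perform first.

import numpy as np

def rmsd(frame, reference):
    """RMSD between two (N, 3) coordinate arrays of the same atoms,
    assuming the frame is already aligned to the reference."""
    return float(np.sqrt(np.mean(np.sum((frame - reference) ** 2, axis=1))))

def rmsd_trace(frames, reference):
    """RMSD of every trajectory frame against the reference structure;
    a flat tail in this trace is the usual sign of convergence."""
    return np.array([rmsd(f, reference) for f in frames])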
Virtual screening and detailed docking
The database of over 1,000,000 commercially available compounds was used for virtual screening as described previously [14], and the experimentally determined inhibition constant (IC50) values were shown to support computational predictions about the activity of the compounds found [15,16]. The study of Malito et al. confirmed that binding of a short peptide substrate, bradykinin, to the exosite region of IDE enhanced the degradation of short peptide substrates [6]. Therefore, the exosite region was selected as the target region for computer-aided drug design. The free binding and docking energies of the selected candidates were in the range of −9 to −15 kcal/mol. The top-ranking compounds from the virtual screening were chosen for further detailed docking simulations.
Following the virtual screening, the selected compounds were subjected to detailed docking calculations at the exosite with increased grid resolution (from 0.375 Å to 0.180 Å). The compounds with minimum binding energies were selected and then analyzed according to important characteristics such as docking positions, hydrogen-bonding interactions with the exosite residues (336-342 and 359-363), close contacts, van der Waals interactions, exposed surface area, and pocket occupancy. Consideration of the Lipinski rules [17] and visual analysis of compound conformations yielded a final set of 10 compounds. The top 10 compounds, with estimated binding and docking energies from both virtual screening and detailed docking, as well as their interactions with residues at the exosite, are listed in Table 1 (see Figure S8 for the molecular structures of the compounds).
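A minimal sketch of this kind of post-docking triage is shown below, assuming that molecular descriptors (weight, logP, hydrogen-bond donors and acceptors) have already been computed by a chemistry toolkit; the field names and the one-violation tolerance are illustrative choices, not the authors' exact protocol.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    binding_energy: float   # kcal/mol from detailed docking (more negative = better)
    mol_weight: float       # Da
    logp: float
    h_donors: int
    h_acceptors: int

def passes_lipinski(c):
    """Lipinski's rule of five; one violation is commonly tolerated."""
    violations = sum([
        c.mol_weight > 500,
        c.logp > 5,
        c.h_donors > 5,
        c.h_acceptors > 10,
    ])
    return violations <= 1

def shortlist(candidates, top_n=10):
    """Rank drug-like docking hits by binding energy, best first."""
    druglike = [c for c in candidates if passes_lipinski(c)]
    return sorted(druglike, key=lambda c: c.binding_energy)[:top_n]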
Computational analysis of the novel compounds
The structures and conformations of compounds D3, D4, D6, and D10 at the IDE exosite, their interactions with IDE residues, and the distances between interacting atoms are shown in Figure 1. The threshold value for a hydrogen-bonding interaction is taken as 3.5 Å; that is, a hydrogen-bonding interaction is established if the distance between a hydrogen-donor atom and a hydrogen-acceptor atom is smaller than this threshold and the angle between these atoms is favorable. As shown in Figure 1A, D3 interacts with the exosite residues His336 and Gly361, and additional contacts were observed with Glu453 and Tyr609. D4 interacts with three exosite residues, Gly339, Gly361, and Gln363 (Fig. 1B), and additional interactions were observed with His335 and Tyr609. As can be observed in Figure 1C, D6 forms interactions with Gly361 and Gln363 at the exosite, and an additional interaction with His332 is also observed. Finally, D10 interacts with the Gln363 residue at the exosite and with His332, as shown in Figure 1D.
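The geometric hydrogen-bond test described above can be written down directly, as in the sketch below; the 3.5 Å distance cutoff comes from the text, while the 120° donor-hydrogen-acceptor angle threshold is an assumption added for completeness, since the text mentions the angle criterion without giving a value.

import math

def is_hydrogen_bond(donor, hydrogen, acceptor, max_dist=3.5, min_angle_deg=120.0):
    """Distance-and-angle test for a hydrogen bond between 3D points."""
    def angle_deg(a, b, c):                   # angle at vertex b, in degrees
        v1 = [a[i] - b[i] for i in range(3)]
        v2 = [c[i] - b[i] for i in range(3)]
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1))
        n2 = math.sqrt(sum(x * x for x in v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return (math.dist(donor, acceptor) <= max_dist
            and angle_deg(donor, hydrogen, acceptor) >= min_angle_deg)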
It can be postulated that binding of bradykinin at the exosite stimulates the switch from the closed to the open state; however, the stability and duration of the open state could be increased using more efficient compounds. A conformational change of IDE is required to enhance its catalytic activity, owing to its complex allosteric nature. It can be speculated that our lead compounds can enter a partially opened chamber due to their small size, consequently interacting with exosite residues by a mechanism similar to that of bradykinin. This may result in a conformational change of IDE that reduces the size of its catalytic chamber, and this shift may yield the open state, which subsequently allows substrates to interact with the catalytic site [6].
Effect of novel compounds on the enzymatic activity of IDE
The 10 compounds listed in Table 1 were subjected to further characterization, and their effect on the proteolytic activity of IDE was measured using different substrates. Initially, activity assays were carried out at a fixed 20 µM concentration for all molecules to identify their effect on IDE activity. IDE activity was quantified by monitoring the change in fluorescence upon proteolysis of fluorogenic substrate V, FITC-insulin bound to antibody, and FAβB by recombinant human IDE purified from E. coli BL21 [6,18,19]. Analysis of the effect of the compounds on IDE activity showed that D3 increased IDE-mediated insulin degradation by 72% (Fig. S1B), while D4 enhanced IDE-mediated substrate V and insulin degradation by 36% and 60%, respectively (Fig. S1A, B). D6 increased amyloid-β degradation by 20% (Fig. S1C). Further, D10 enhanced IDE-mediated FITC-insulin-mAb and substrate V degradation by 290% and 65%, respectively (Fig. S1A, B). Next, the EC50 value of each molecule that enhanced IDE activity was determined with a dose-response assay, in which the specific activity of IDE was measured at different concentrations of the lead compounds and the data were fitted to a sigmoidal four-parameter Hill equation. The calculated EC50 values for D4 were 7 µM and 13 µM for substrate V and insulin degradation, respectively (Fig. S2A, B). The EC50 value for D3 on insulin degradation was approximately 20 µM (Fig. S2C), whereas the EC50 value for D6 on amyloid-β degradation was 2 µM (Fig. S2D). Furthermore, the EC50 values for D10 were 13 µM and 12 µM for substrate V and insulin degradation, respectively (Fig. S2E, F). Among these four compounds, D10 appears to be the most potent activator of IDE when proteolysis of insulin is considered.
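A sketch of the sigmoidal four-parameter (Hill) fit named above is given below, using SciPy's curve_fit; the dose-response numbers are invented for illustration and are not the paper's data.

import numpy as np
from scipy.optimize import curve_fit

def hill4(x, bottom, top, ec50, n):
    """Four-parameter sigmoidal (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** n)

# invented illustrative data: compound concentration (uM) vs. IDE activity (%)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
activity = np.array([4.0, 9.0, 18.0, 40.0, 61.0, 78.0, 88.0, 92.0])

params, _ = curve_fit(hill4, conc, activity, p0=[0.0, 100.0, 10.0, 1.0])
print(f"EC50 = {params[2]:.1f} uM, Hill slope = {params[3]:.2f}")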
The effect of ATP on insulin degradation was further tested with compounds D3, D4, and D10; we observed that IDE activity did not change significantly when proteolysis was carried out in the presence of 0.1 mM ATP (Fig. S3). Because Cabrol et al. found a synergistic effect of ATP with potential compounds, we tested the effect of ATP on IDE activity in the presence of the activators by performing a fluorogenic assay with different concentrations of the novel compounds [12]. The novel compound D10 enhanced substrate V degradation by 60%; however, this compound was more potent on insulin degradation, with an increase in IDE activity of 290% (Fig. S3C). Finally, ATP had no effect on amyloid-β degradation in the presence of D6 (data not shown).
Effect of Compounds on Initial Rate of Proteolysis by IDE
In order to explore the effect of the lead compounds on the initial activity of IDE, we carried out time-series experiments both in the presence and absence of the lead compounds. Among the lead compounds, only D10 increased the initial rate of substrate V degradation, by about 2-fold (Fig. S4). On the other hand, D3, D4, and D10 significantly increased the initial rate of IDE proteolysis of FITC-insulin bound with antibody, by about 7.5-, 12-, and 49-fold, respectively (Fig. 2A). D3, D4, and D10 did not have any effect on the proteolytic activity of IDE towards amyloid-β (Fig. S1C). Furthermore, our results indicated that D6 accelerated the initial rate of amyloid-β proteolysis by IDE by a factor of 1.3 (Fig. S4B). We observed that the initial activation of the enzyme increased even at very low concentrations of the lead compounds (Fig. 2A). Interaction of these compounds with exosite residues, similar to bradykinin, might stabilize the enzyme conformation in its active state. As a result, interaction of the lead compounds with IDE may change the conformation of IDE by reducing the size of the catalytic chamber, and this shift may promote the open state of the enzyme.
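Initial rates of the kind compared above are conventionally estimated from the early, linear portion of a fluorescence progress curve. The sketch below shows one way to do that; the data and the choice of the first five time points as the linear window are invented for illustration.

import numpy as np

def initial_rate(t_min, signal, n_early=5):
    """Slope (a.u./min) of a straight-line fit to the early time points."""
    slope, _intercept = np.polyfit(t_min[:n_early], signal[:n_early], 1)
    return slope

# invented progress curves: fluorescence with and without an activator
t = np.arange(0.0, 10.0)                                   # minutes
without = np.array([0, 2, 4, 6, 8, 10, 11, 12, 13, 14], dtype=float)
with_cmp = np.array([0, 9, 18, 27, 35, 42, 47, 51, 54, 56], dtype=float)

fold = initial_rate(t, with_cmp) / initial_rate(t, without)
print(f"fold activation of the initial rate: {fold:.1f}x")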
Verification of the exosite as the binding site of the lead compounds via site-directed mutagenesis
Since D10 was found to be the strongest activator of IDE towards the proteolysis of insulin, we analyzed the binding site of compound D10 further. According to the computational results, D10 binds to Gln363 at the IDE exosite and to His332 (Fig. 1D). We carried out site-directed mutagenesis, replacing these amino acids with Ala, in order to verify that D10 indeed binds to this region via these amino acids. We first mutated Gln363 to Ala and then expressed and purified the mutant protein on nickel resin. The effect of D10 on the proteolytic activity of mutant IDE (Gln363Ala) was measured towards FITC-insulin-mAb degradation. The in vitro enzymatic assay showed that the enhanced activation in the presence of D10 was reduced 10-fold with mutant IDE (Gln363Ala), whereas the activation was about 49-fold in the presence of D10 with wild-type IDE (Fig. 2B). Although the mutant IDE was less responsive to D10 than wild-type IDE, D10 appeared to still interact with IDE through His332 and could enhance the activity of the mutant IDE. Therefore, we generated a double-mutant IDE (Gln363Ala, His332Ala) in order to eliminate binding of D10 to the exosite region completely. Analysis of the double mutant demonstrated that D10 could not bind to the IDE exosite (Fig. 2B); this was verified by an enzymatic assay, which showed similar initial rates of proteolysis for the double mutant and wild-type IDE (Fig. 2B). If Gln363 were in fact one of the amino acids that interacts with D10, one would expect the amino acids adjacent to Gln363 (Gly362 and Lys364) not to contribute to the binding of D10. Therefore, we mutated the adjacent amino acids (Gly362 and Lys364) to Ala and showed that D10 could in fact still interact with Gln363: the initial rate of proteolytic activity of this adjacent-residue mutant was comparable to that of wild-type IDE, with D10 increasing the activity over 49-fold (Fig. 2B). These results suggest that D10 binds to IDE through Gln363 at the exosite and His332, and in turn can increase the catalytic activity in a specific manner. Furthermore, we carried out a competition assay with bradykinin in the presence of D10 in order to demonstrate further that D10 binds to the exosite region of IDE. The assay was performed with three different concentrations of bradykinin (2.1, 4.2, and 8.4 µM) and with varying concentrations of D10. We chose 2.1 µM and 8.4 µM as the minimum and maximum amounts of bradykinin in this experiment, as the Km value for bradykinin was measured as 4.2 µM in a previous study (Fig. S6) [6]. This experiment demonstrated that degradation of insulin in the presence of D10 decreased by about 50% when bradykinin was present in the proteolytic medium, even at high concentrations of bradykinin (8.4 µM) (Fig. S6). This can be attributed to competition between D10 and bradykinin for access to the exosite residues His332 and Gln363 (Table 1).
Cell Viability Test
We characterized the activity of the novel compounds in a cellular assay to establish whether there were any cytotoxic effects. We used the established MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, or thiazolyl blue) method with HeLa cells to determine whether the small molecules affect cell viability [20]. In this method, a purple formazan dye forms as a result of the cleavage of the yellow tetrazolium MTT within metabolically active cells. The resulting intracellular formazan precipitate can be dissolved in a detergent solution and quantified spectrophotometrically at 595 nm. We measured cytotoxicity by incubating HeLa cells in the presence of the novel compounds at concentrations ranging from 1 to 100 µM for 24 h. Cell viability was measured using the MTT method after 12 h of culture. The extent of cell death was expressed relative to a control containing DMSO. We found that D4 did not significantly affect viability from a concentration of 2 µM up to a concentration as high as 100 µM, at which cells remained 60% viable (Fig. S5B). We also observed that approximately 85% of cells remained viable at a concentration of 7.5 µM, which is close to the EC50 value of D4. The other novel compounds, D3 and D10, did not demonstrate any toxicity at high concentrations (Fig. S5A, D). We observed that D6 showed toxicity at higher concentrations (Fig. S5C), while D6 is not toxic at its EC50 value of 2 µM. These results indicate that the lead compounds are not toxic to mammalian cells. Therefore, as a next step, we tested the effects of the lead compounds on insulin catabolism using HeLa cells.
Cellular-based degradation of insulin
Biochemical analysis indicated that the lead compounds increased the activity of the IDE enzyme towards different substrates, predominantly towards insulin. In order to demonstrate further that these molecules are active under in vivo conditions, we examined the effects of the lead compounds on insulin catabolism in cultured cells. It has been shown that HeLa cells express the insulin receptor and thus take up FITC-labeled insulin and degrade it intracellularly [13,21,22]. First, FITC-insulin was added to cultured HeLa cells and incubated for 2 hours to allow sufficient uptake of FITC-insulin. In order to ensure that FITC-labeled insulin translocates into the cells through the insulin uptake receptor and that cytoplasmic IDE can degrade FITC-insulin, we conducted live-cell imaging of HeLa cells and recorded fluorescence. At various time points the compounds D3, D6, and D10 were added; the cells were washed with Hank's balanced salt solution (HBSS), then collected and lysed with an appropriate buffer. After centrifugation, the cell-free extracts were used to measure the change in fluorescence with a luminometer. The cells treated with D10 and D3 displayed increased fluorescence intensity. Consistently, we observed a higher fluorescence signal from D10- and D3-treated cells, suggesting higher FITC-insulin degradation by cytosolic IDE (Fig. 3A). Compound D10 increased in vivo proteolysis of FITC-insulin by about 32-fold in HeLa cells compared to the control group (Fig. 3A). However, for compound D6, which was used as a control, fluorescence intensity did not change significantly compared to the samples treated with D10 and D3. These findings are consistent with the data obtained by the in vitro assay, showing enhanced degradation of insulin by D10 and D3 (Fig. 3A). Notably, D6 had no effect on insulin catabolism, as expected (Fig. 3A). Figure 3B shows fluorescence microscopy images of HeLa cells ten minutes after the addition of compounds D3, D6, and D10, as well as images of HeLa cells treated with only DMSO or without FITC-insulin as controls. This figure further verifies the significant increase in fluorescence observed in the presence of compound D10, consistent with our in vitro assay (Fig. 3B).
Discussion
Virtual screening has become one of the major components of the drug discovery approach within the last few years [23]. When the 3D structure of the protein is known, structure-based screening is preferred, whereas ligand-based screening is utilized when experimental inhibition/activation data are available in the literature [24]. The availability of the crystal structure of IDE makes structure-based drug design a viable approach for finding novel molecules that could regulate the overall activity of IDE. Here, we employed structure-based virtual screening, enzymological characterization, and a cell-based assay to discover potent compounds that regulate IDE activity. The hit rate in experimental high-throughput screening (HTS) efforts with drug-like compounds varies from 0.01 to 1% [25]. Cerchietti et al. achieved a 10% hit rate using computer-aided drug design to identify BCL6 inhibitors [26]. In another study, novel inhibitors of protein tyrosine phosphatase-1B were identified with a hit rate of 34.8% (127 potent compounds out of 365 selected molecules) [27]. Here, we achieved a 40% hit rate (4 out of 10 tested) using a structure-based drug discovery approach, which is a faster, more economical, and more efficient strategy compared to other HTS or experimental screening techniques. We utilized the structure-based virtual screening approach to identify four compounds that bind to the IDE exosite and have the potential to enhance cleavage of specific substrates by IDE. Three compounds, D3, D4, and D6, were found to enhance the activity of IDE towards insulin, substrate V, and amyloid-β degradation, respectively, while compound D10 enhanced the proteolysis of both substrate V and insulin by IDE. A recent study by Cabrol et al. showed enhanced degradation of amyloid-β by two compounds, Ia1 and Ia2, in the presence of short peptide substrates and ATP, while in another study by Leissring et al. a rational design approach based on analysis of combinatorial peptide mixtures was used to develop IDE inhibitors [12,13]. To our knowledge, the compounds discovered in this study are the first potential lead compounds that enhance IDE activity independently of ATP or small IDE substrates. Optimization of the structures of the small organic compounds identified in this paper, together with further in vivo characterization, may make it possible to design molecules with therapeutic utility for hyperinsulinemia, type 2 diabetes, and Alzheimer's disease.
The most significant novelty of these compounds is that they are the only known selective and independent activators of IDE. Specifically, to our knowledge, compound D10 is the only potent small molecule discovered to increase proteolysis of insulin by IDE. The distinctive structure of IDE has made it difficult to discover compounds with therapeutic value, as IDE has two distinct domains in its C- and N-terminal halves and can only degrade its substrates when the protease is in the closed configuration. A previous study by Malito et al. highlighted the complexity involved in the substrate binding and recognition mechanism of IDE and demonstrated that IDE uses its exosite to anchor the N-terminus of its substrates, which are then cleaved through a stochastic process at the catalytic site [6]. Previous crystal structures of bradykinin with IDE also showed that bradykinin can bind only to the exosite and cannot reach the catalytic site. This observation suggested that bradykinin could play a regulatory role in enhancing substrate binding and degradation by reducing the entropy of short peptides in the chamber. The lead compounds presented in this paper may enhance binding of different substrates to the exosite and then direct the substrates to the catalytic site for further proteolysis. Our site-directed mutagenesis and competition assay results at the IDE exosite with these novel compounds also suggest that binding of the compounds to the exosite indeed occurs and that binding to the exosite is critical for enhancing substrate-specific degradation by IDE. It is established that the catalytic rate of IDE is affected by the flexibility of substrates as well as by the ability of substrates to gain access to the catalytic chamber. Furthermore, we observed that bradykinin competed with compound D10 for binding to the IDE exosite. These results suggest that the compounds enter the partially opened catalytic chamber during the IDE conformational switch and direct substrates towards the catalytic site, resulting in enhanced catalysis of specific substrates.
Various studies have shown that ATP enhances the IDE-mediated degradation of some substrates and that it sometimes plays a synergistic role with drug-like compounds [12,13]. We observed that the activity of IDE towards insulin degradation did not change in the presence of ATP. The concentration-dependent experiments with D3, D4, and D10 were repeated in the absence and presence of ATP, and the activity of IDE showed the same trends under both conditions. Song et al. showed that nucleotides have no effect on the hydrolysis of the physiological substrates insulin and amyloid-β peptide [10]. We also observed that ATP has no effect on insulin catabolism in the presence of D3, D4, and D10 in our experiments (see Fig. S3).
The most significant impact of the compounds reported in this paper, from a biomedical point of view, is their potential medicinal value for insulin signaling. IDE activators with desirable pharmacokinetic properties may be significant for diabetes, as it is questionable to combat diabetes or hyperinsulinemia through genetic mutation of the IDE gene [28]. For example, IDE-knockout mice have been shown to develop a diabetic phenotype, whereas overexpression of IDE in Alzheimer's disease mouse models has been shown to completely eliminate amyloid plaque formation and downstream cytopathology [29,30]. In contrast to permanent lifelong activation of IDE, drugs that only transiently enhance IDE activity for specific substrates may be expected to improve glycemic control. As a result, the goal of enhancing amyloid-β or insulin degradation with pharmacological compounds has been regarded as a promising but challenging objective due to the unusual properties of this enzyme.
Finally, the activators introduced in this study may function as important tools to manipulate IDE experimentally. The use of activators might also be important in experimental and clinical settings, and these compounds would be useful for addressing various questions related to the chemical biology of IDE. Our cell-based assay with compounds D3, D6, and D10 demonstrated that the insulin-degradation-enhancing compounds are active not only in vitro but also in vivo, and that the compounds are permeable through the cell membrane, which may allow evaluation of their roles in animal models of diseases such as hyperinsulinemia, diabetes, and Alzheimer's disease.
Molecular dynamics simulation
The coordinates of the initial structure were obtained from the Protein Data Bank (PDB id: 2G47) [4]. The reported structure was a catalytically inactive IDE mutant, IDE-E111Q, in complex with amyloid-β peptide (Aβ 1-40) at 2.1 Å resolution. MD simulations were carried out using the NAMD software, version 2.6, with the PARAM22 version of the CHARMM force field [31,32]. The protein was solvated in a rectangular box of TIP3P water molecules and counter-ions. First, the system was minimized over 10,000 steps with the backbone kept fixed, and then the backbone atoms were relaxed over 10,000 steps. Next, the system was heated to 310 K in 10 K increments (10 ps of simulation at each increment).
After equilibration of the system, a molecular dynamics simulation was carried out with constant temperature (310 K) and pressure control using the Langevin piston method. The time step of the simulation was set to 2 fs, and the bonded interactions, the van der Waals interactions (12 Å cutoff), and the long-range electrostatic interactions computed with particle-mesh Ewald (PME) were included in the calculation of the forces acting on the system. The damping coefficient of the Langevin dynamics used for pressure control was set to 5 ps^-1. The simulation was carried out for 1.2 ns, and the final stable structure was used later in the docking calculations.
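To make the protocol concrete, the sketch below generates a minimal NAMD configuration with the parameters stated above (2 fs timestep, 12 Å cutoff, PME, Langevin temperature and piston control at 310 K, damping 5 ps^-1, 1.2 ns of production). All file names are hypothetical placeholders, and the staged fixed-backbone minimization and 10 K heating increments are omitted for brevity; this is an illustrative sketch, not the authors' actual input file.

```python
# Sketch: write a minimal NAMD configuration mirroring the protocol above.
# File names (ide_solvated.psf/pdb, par_all22_prot.inp) are placeholders.
namd_conf = """
structure            ide_solvated.psf
coordinates          ide_solvated.pdb
paraTypeCharmm       on
parameters           par_all22_prot.inp   ;# CHARMM PARAM22 force field

timestep             2.0                  ;# 2 fs
cutoff               12.0                 ;# 12 A van der Waals cutoff
switching            on
switchdist           10.0
PME                  yes                  ;# particle-mesh Ewald electrostatics
PMEGridSpacing       1.0

langevin             on                   ;# constant T = 310 K
langevinTemp         310
langevinDamping      5                    ;# 5 ps^-1 damping coefficient
langevinPiston       on                   ;# constant-p (Langevin piston)
langevinPistonTarget 1.01325
langevinPistonPeriod 100
langevinPistonDecay  50
langevinPistonTemp   310

outputName           ide_md
minimize             10000                ;# energy minimization
run                  600000               ;# 1.2 ns at 2 fs/step
"""

with open("ide_md.conf", "w") as f:
    f.write(namd_conf)
# Run with, e.g.:  namd2 ide_md.conf > ide_md.log
```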
Molecular Docking
For the small-molecule docking calculations, the publicly available docking software AutoDock 3.05 was employed [33]. This version of AutoDock predicts the optimal conformations of the receptor-ligand complex and reports binding affinity scores, assuming a model with a rigid receptor (protein) and a flexible ligand. The scoring function of AutoDock consists of electrostatic, Lennard-Jones, hydrogen bond, solvation, and torsional entropy terms. The Lamarckian genetic algorithm (LGA) was employed for the ligand conformational search. Residues 336-342 and 359-363 of IDE were taken as the targets for docking, since this region has been defined as the exosite of IDE [6].
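As a rough sketch of that workflow, the snippet below drives an AutoDock 3-style run from Python. The file names are hypothetical, and the grid and docking parameter files (.gpf/.dpf, including the Lamarckian GA settings) would be prepared beforehand following the AutoDock 3 documentation, with the grid box centered on the exosite residues listed above.

```python
import subprocess

# Sketch of an AutoDock 3-era docking run; all file names are placeholders.
def run_docking(gpf="ide_exosite.gpf", dpf="d10_ide.dpf"):
    # Precompute the affinity grids for the rigid receptor.
    subprocess.run(["autogrid3", "-p", gpf, "-l", "ide_exosite.glg"], check=True)
    # Dock the flexible ligand; the LGA search is configured inside the .dpf
    # (ga_pop_size, ga_num_evals, set_ga, ga_run entries).
    subprocess.run(["autodock3", "-p", dpf, "-l", "d10_ide.dlg"], check=True)

run_docking()
# Ranked binding energies and clustered poses are then read from the .dlg log.
```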
Construction of expression plasmid for IDE protein
The cDNA of hIDE was PCR amplified with primers containing unique restriction enzyme sites (XhoI and NotI) and a hexa-histidine tag (6×His). The primers used for amplification of the hIDE cDNA were forward primer 5'-cttgcggccgcaatgcggtaccggtaccggctagcgtg-3' and reverse primer 5'-gtgctcgaggagttttgcagccatgaa-3'. The PCR reaction was performed in a total volume of 50 µl containing 100 ng of plasmid, 40 pmol of each primer, 0.2 mM dNTPs, and 2 units of Taq DNA polymerase. Thirty-three cycles of amplification were performed with the conditions 95°C for 30 s, 55°C for 30 s and 72°C for 3 min. The reaction mixture was kept at 95°C for 4 min before the first cycle, and after the 33rd cycle an additional extension period of 10 min at 72°C was applied. The PCR products were then purified by gel extraction and digested with XhoI and NotI enzymes along with the pET21-b bacterial expression vector. The hIDE cDNA was then subcloned into pET21-b according to Maniatis et al., and the construct was named pET21b-hIDE [34].
Site Directed Mutagenesis
Site-directed mutations of the specified hot spot residues were introduced into hIDE by PCR, as previously described, using the appropriate plasmid [35]. The presence of the specific mutations was verified by DNA sequencing at Burc Laboratory (Istanbul, Turkey).
Expression and purification of IDE protein
pET21b-hIDE was transferred into E. coli BL21 cells for expression of recombinant hIDE. Once the cells reached an OD600 of 0.6-0.8, IPTG was added to a final concentration of 400 µM and hIDE was expressed at 37°C for 3 h. The cells were then harvested and sonicated in a lysis buffer composed of 50 mM NaH2PO4, 300 mM NaCl, 10 mM imidazole, 100 µM PMSF, 50 µM protease inhibitor cocktail and 0.1 mg/ml lysozyme at pH 8.0. The soluble protein fraction was separated by centrifugation at 9,000× g for 30 min and passed through a Ni-NTA column. After rinsing with 20 mM imidazole, the histidine-tagged enzyme was eluted with a buffer containing NaCl and 250 mM imidazole. The purified IDE was concentrated using dialysis tubing cellulose membrane (MWCO 12,000 Da) and stored at −80°C in a storage buffer as described previously [36]. The concentration of the IDE protein was measured as 0.02 mg/ml. The integrity of hIDE was characterized by electrophoretic separation followed by Western blot; proteins were probed with anti-His IgG and the membrane was visualized with the ECL Plus system.
Degradation of Substrate V by IDE
To assess the utility of these novel compounds as chemical regulators of IDE activity, we used a fluorescence assay with the fluorogenic substrate V (7-methoxycoumarin-4-yl-acetyl-NPPGFSAFK-2,4-dinitrophenyl). The reaction was carried out in a total volume of 105 µl, containing 10 µl of 40 µM substrate V and 10 µl of 0.02 mg/ml IDE in 83 µl of 30 mM KH2PO4 at pH 7.3, with various concentrations (0-35 µM) of the compounds and over various time intervals (0-4 hours). The reaction mixture was incubated at 37°C for 2.5 h. The hydrolysis of substrate V was monitored using a Tecan Safire2 microplate reader with excitation and emission wavelengths set at 300 and 395 nm, respectively.
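Since the assay is assembled from small aliquots, the final concentrations in the well follow from simple dilution (C_final = C_stock × V_added / V_total). A quick check, assuming the ~105 µl total volume stated above:

```python
def final_conc(stock, v_added_ul, v_total_ul=105.0):
    """Final concentration after dilution (same units as the stock)."""
    return stock * v_added_ul / v_total_ul

# 10 ul of 40 uM substrate V into ~105 ul -> ~3.8 uM final
print(f"substrate V: {final_conc(40.0, 10.0):.2f} uM")
# 10 ul of 0.02 mg/ml IDE into ~105 ul -> ~1.9 ug/ml (~0.2 ug enzyme total)
print(f"IDE: {final_conc(0.02, 10.0) * 1000:.2f} ug/ml")
```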
Binding of monoclonal antibody (mAb) to FITC-insulin
In order to measure degradation of insulin by IDE, FITC-insulin was bound to a monoclonal mouse anti-human-insulin antibody, so that rotation of free FITC and the resulting changes in fluorescence polarization were prevented. We performed the enzymatic assays using the FITC-insulin-bound antibody alone as the background fluorescence, and checked the effects of the compounds on the cleavage of FITC-insulin-mAb. FITC-insulin was bound to the mouse anti-human insulin antibody in PBS at 37°C for 1 hour, which is longer than the time required to obtain maximum binding of the insulin-FITC conjugate and mAb [37]. We observed that 400 nM mAb completely binds 50 nM FITC-insulin (Fig. S7). Since the molecular weights of FITC-insulin and the mAb are about 5.8 kDa and 60 kDa respectively, we observed only a slight shift in the gel, as expected. In Fig. S7, lane 1 represents 400 nM antibody, lane 2 shows 100 nM insulin, lane 3 shows the binding between 400 nM antibody and 100 nM insulin, and lane 4 shows that 50 nM (100 ng) insulin is completely bound to 400 nM mAb. The binding between insulin and mAb was demonstrated by native polyacrylamide gel electrophoresis (5% and 15%).
Degradation of FITC-insulin-mAb by IDE
The utility of these novel compounds was also tested on the degradation of insulin by IDE. First, insulin was labeled with FITC as described previously [37]. Briefly, 500 µg insulin was dissolved in 1.5 ml of 0.1 M Na-carbonate, pH 7, with 0.2 mM EDTA. Then, 4.5 mg FITC dissolved in 200 µl acetone was added dropwise to the previous solution at room temperature over 20 h ([FITC]:[insulin], 3:1). Finally, unreacted FITC was removed by dialysis and the FITC-insulin was stored at −20°C. The proteolysis of insulin by IDE was monitored by measuring the change in fluorescence; the reaction was carried out with 100 nM FITC-insulin-mAb and IDE (80 ng) in 100 µl KH2PO4 buffer at pH 7.3. The reaction was carried out at 37°C for different time intervals (0-4 hours). The increase in fluorescence was monitored using a Fluoroskan Ascent FL microplate reader (Thermo).
FAβB degradation assay
This assay was performed as follows: 10 µl of 50 nM FAβB was dissolved in a buffer consisting of 50 mM HEPES, 100 mM NaCl, and 0.05% (v/v) bovine serum albumin, at pH 7.4. The reaction was set up in a total volume of 100 µl with 20 µM of the compounds and initiated by adding 20 µl of 0.02 mg/ml IDE, then run for 2.5 h at 37°C. The reaction was stopped by adding 2 µl of 100 mM protease inhibitor (1,10-phenanthroline). Uncleaved FAβB was precipitated with Neutravidin-coated agarose beads by gentle rocking for 30 min and centrifugation for 10 min at 14,000× g. The supernatant was transferred to black 96-well plates and the increase in fluorescence (excitation 488 nm, emission 525 nm) was measured with a multi-label plate reader.
Enzymatic competition assay
The effect of bradykinin on the clearance of insulin was measured in the presence of compound D10. Competition reactions were carried out at 37°C by mixing 90 µl of 10 nM FITC-insulin in 50 mM potassium phosphate buffer (pH 7.3) with different concentrations of bradykinin (2.1 µM, 4.2 µM and 8.4 µM). Various concentrations of D10 (0-50 µM) were added and the reaction was initiated with the addition of 80 ng IDE. The fluorescence increase was monitored using a Fluoroskan Ascent FL microplate reader (Thermo).
Kinetic analysis
The effect of the lead compounds on the initial rate of proteolysis by IDE was determined by measuring fluorescence over time, both in the presence and absence of the lead compounds. The kinetic data for IDE-mediated insulin, substrate V and amyloid-β degradation were plotted and fitted using SigmaPlot software (Sigmoidal, Hill, 4 Parameter). For the initial rate calculations, kinetic data were fitted using SigmaPlot software (Hyperbola; Single Rectangular I, 3 Parameter).
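The SigmaPlot equations named above have simple closed forms, so the same fits can be reproduced with SciPy, as in the hedged sketch below; the data points are synthetic placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# SigmaPlot "Sigmoidal, Hill, 4 Parameter": y = y0 + a*x^b / (c^b + x^b)
def hill4(x, y0, a, b, c):
    return y0 + a * x**b / (c**b + x**b)

# SigmaPlot "Hyperbola; Single Rectangular I, 3 Parameter": y = y0 + a*x/(b + x)
def hyperbola3(x, y0, a, b):
    return y0 + a * x / (b + x)

# Placeholder dose-response data: compound concentration (uM) vs. fold activity.
conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0, 35.0])
fold = np.array([1.00, 1.10, 1.35, 1.80, 2.40, 2.85, 3.00])

popt, _ = curve_fit(hill4, conc, fold, p0=[1.0, 2.0, 1.5, 5.0])
y0, a, b, c = popt
print(f"Hill fit: baseline={y0:.2f}, amplitude={a:.2f}, n={b:.2f}, EC50~{c:.1f} uM")
# Initial-rate data would be fitted analogously with hyperbola3.
```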
Cell-based degradation of insulin
Cells were cultured at 37°C under a humidified atmosphere of 5% CO2 in medium supplemented with 10% fetal bovine serum. FITC-insulin was then added to the cell medium and incubated for 2 h at 37°C. Next, the compounds were loaded at different time intervals separately. Cells were washed with Hank's balanced salt solution (HBSS), then disrupted with 1× passive lysis buffer. Fluorescence was measured with a luminescence multi-label plate reader, and changes in fluorescence were imaged by fluorescence microscopy at Burc Laboratory (Istanbul, Turkey).
MTT assay for evaluating cell viability
The cytotoxicity assay was performed using human cervical cancer HeLa cells. Cells were cultured at 37°C under a humidified atmosphere of 5% CO2 in medium supplemented with 10% fetal bovine serum and dispensed into replicate 96-well plates at 2.5×10^4 cells/well. The compounds were then added. After 24 hours of exposure to the chemical compounds, cell viability was determined by the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) assay [38]. The optical density (OD) of the wells was determined using an ELISA plate reader at a test wavelength of 600 nm and a reference wavelength of 630 nm. Each measurement was carried out in triplicate.
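For dual-wavelength MTT readings like these, viability is conventionally computed from reference-corrected optical densities; the sketch below shows one common calculation (the well layout and the numbers are assumptions, not taken from the paper).

```python
import numpy as np

def percent_viability(od600, od630, ctrl600, ctrl630):
    """Viability (%) from dual-wavelength MTT readings: subtract the 630 nm
    reference from the 600 nm test reading, then normalize to untreated controls."""
    treated = np.asarray(od600) - np.asarray(od630)
    control = np.asarray(ctrl600) - np.asarray(ctrl630)
    return 100.0 * treated / control.mean()

# Hypothetical triplicate wells for one compound concentration.
print(percent_viability([0.82, 0.79, 0.85], [0.07, 0.06, 0.08],
                        [0.95, 0.97, 0.93], [0.07, 0.08, 0.06]))
```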
Figure 1. The interactions between compound D3 and the exosite of IDE; the compound is shown as purple sticks and the exosite residue as red sticks (Gly361) (A). The interactions between compound D4 and the exosite of IDE; the compound is shown as green sticks and exosite residues as red sticks (Gly339, Val360, Gly361) (B). The interactions between compound D6 and the exosite of IDE; the compound is shown as purple sticks and exosite residues as red sticks (Gly363, His332, Gly361) (C). The interactions between compound D10 and the exosite of IDE; the compound is shown as purple sticks and exosite residues as red sticks (Gly363, His332) (D). doi:10.1371/journal.pone.0031787.g001
Figure 2. Measurement of initial insulin binding rates of IDE. Effects of different drugs (D3, D4, and D10) on the initial binding rate of insulin (A). Initial insulin binding rates of mutant IDE in the presence of 20 µM D10 (B). Data are mean ± SEM for 3 independent experiments (p < 0.0001). doi:10.1371/journal.pone.0031787.g002
Figure 3. Effects of IDE activators on insulin catabolism in HeLa cells. Compounds D10 and D3 show ~240% and ~65% increases, respectively, in the magnitude of IDE activity for insulin degradation in HeLa cells (A). D6 does not change insulin degradation, as expected (A). Representative images of live HeLa cells pre-loaded with FITC-insulin and imaged in the presence of the various drugs (20 µM) and in the absence of drugs (B). Data are mean ± SEM for 3 independent experiments (p < 0.0002). doi:10.1371/journal.pone.0031787.g003 | 2018-04-03T04:14:40.272Z | 2012-02-15T00:00:00.000 | {
"year": 2012,
"sha1": "7249ba496fd30164a5975280253206382f975998",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0031787&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7249ba496fd30164a5975280253206382f975998",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15317880 | pes2o/s2orc | v3-fos-license | Treatment of large proximal ureteral stones: extracorporeal shock wave lithotripsy versus semi-rigid ureteroscope with lithoclast
Purpose Assessment of the safety and efficacy of extracorporeal shockwave lithotripsy versus semi-rigid ureteroscope with lithoclast for treatment of large proximal ureteral stones. Materials and methods The study included 147 patients with large upper ureteral stones. SWL and ureteroscopy were performed in 71 and 76 patients, respectively. Patients in the SWL group were treated with a Siemens Modularis Litho Vario lithotripter under intravenous sedation on an outpatient basis. Patients in the ureteroscopy group were treated with a (7.5 Fr) semi-rigid ureteroscope and lithoclast under spinal anesthesia on a day-care basis. Results The stone-free rate for in situ SWL was 58% (41 of 71 patients). For the semi-rigid ureteroscope, accessibility of the stones was 94% (72 of 76) and the stone-free rate was 92% (70 of 76). No major complications were encountered in either group. Mean stone size was 1.34 ± 0.03 cm in the SWL group and 1.51 ± 0.04 cm in the ureteroscopy group. Conclusions Our study demonstrates that ureteroscopy with lithoclast is an acceptable treatment modality for large proximal ureteral calculi and can be considered first-line treatment for large proximal ureteral stones.
Introduction
Most ureteral stones pass spontaneously. Those that do not can be removed by either shock wave lithotripsy or ureteroscopy. Open surgery is appropriate as a salvage procedure or in certain unusual circumstances. SWL has been recommended as first-line treatment for proximal ureteral calculi less than 1 cm; its role for large proximal ureteral calculi remains to be defined [1].
Stone size is an important variable in determining the outcome of SWL, but little information is available on the influence that stone size has on the treatment of proximal ureteral stones. Advances during the last 2 decades, with the advent of small-diameter ureteroscopes and intracorporeal lithotripsy such as ultrasound, electrohydraulic devices, the lithoclast and, more recently, the Holmium:YAG laser, have allowed more successful and safer endoscopic removal of upper ureteral calculi [2-5]. In this study of the treatment of large proximal ureteral stones, we compared treatment outcomes in patients undergoing semi-rigid ureteroscopy with lithoclast versus in situ SWL.
Materials and methods
This study included 147 patients with large upper ureteral stones (more than 1 cm) treated at the urology department of El Minia University Hospital in the period from June 2001 to November 2007.
Patients were informed about SWL and ureteroscopy as the two treatment modalities, and the advantages, disadvantages and side effects of both techniques were explained. According to patient choice, SWL was performed in 71 patients and ureteroscopy in 76 patients.
Preoperatively, patients were clinically evaluated by plain X-ray of the kidney, ureter and bladder, ultrasound and/or excretory urography to confirm stone size, location and degree of hydronephrosis. The upper ureter was defined as the segment between the ureteropelvic junction and the upper border of the sacroiliac joint. The inclusion criteria were proximal ureteral stones of more than 1 cm that failed to pass spontaneously and caused recurrent renal colic and/or obstructive uropathy. Patients with active urinary tract infection, congenital anomalies, or previous SWL, stent placement or open surgery of the ureter were excluded.
Ureteroscopy was performed using a long 7.5 Fr semi-rigid ureteroscope. A preoperative antibiotic was administered, and spinal anesthesia was used in most of the patients. Cystoscopy was performed with a retrograde pyelogram, then a guide wire (GW) was introduced past the stone; a glide wire was used when required. In case of difficulty passing the GW, it was introduced under vision through the ureteroscope, and balloon dilation was used. The lithoclast was used to disintegrate the stone using a 2-3 Fr probe in single or multiple modes; the number of shocks could be adjusted to avoid stone migration. A stone cone or nitinol tipless dormia basket was used to guard against stone migration when this was expected. Significant gravel was retrieved using a dormia basket. A double-J stent, 5-6 Fr, was placed at the end of the procedure in all except 3 patients. The stent was left for 2-3 weeks, based on the degree of impaction of the stone and the manipulations performed, and was removed on an outpatient basis. All patients were treated on a day-care basis.
Patients in the in situ SWL group were treated using a Siemens Modularis Litho Vario lithotripter under intravenous sedation (pethidine). The voltage used ranged from 12 to 17 kV. The maximum number of shocks was 3,000. At the start, the rate was adjusted to 60 shock waves/minute for the first 500 shock waves and then increased to 90 shock waves/minute. Patients were treated on an outpatient basis. A post-treatment abdominal X-ray was obtained 2-3 weeks after SWL. Patient age, sex and stone size were determined for each group. Stone analysis was performed using crystallography when possible.
Postoperative evaluation included KUB and ultrasound for all patients, and occasionally excretory urography or non-contrast helical CT, until the patient was stone-free. Treatment outcome was assessed as being stone-free on KUB 1 month after treatment. Re-treatments and additional procedures were documented. Statistical comparison between the two groups used the two-sided Fisher exact test.
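As an illustration, the sketch below applies such a test to the stone-free counts reported in the Results (70/76 for ureteroscopy versus 41/71 for SWL); note that a p-value recomputed this way from the raw counts will not necessarily match the published p = 0.003.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = treatment group, columns = (stone-free, residual stone).
table = [[70, 76 - 70],   # ureteroscopy with lithoclast
         [41, 71 - 41]]   # in situ SWL
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.1e}")
```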
Results
Ureteroscopy was performed in 76 patients; in 72 patients the stones were accessible, while in 4 patients angulation/tightness of the ureter made it difficult to reach the stone. The initial stone-free rate of ureteroscopy using the lithoclast was 92%. The procedure failed in 2 patients due to edema and angulation at the site of the stone; both were treated by open surgery.
Double-J stents were inserted in all successfully treated patients because of the large stones, to avoid postoperative obstruction and to aid stone passage after removal of the stents. Balloon dilation was used in most patients to facilitate stone retrieval. Transureteroscopic balloon dilation just distal to the stone after disimpaction was done in 3 patients with stricture and edema below the stone.
A stone cone was placed under vision to avoid proximal stone migration in most of the patients after stone disimpaction. In patients with very hard stones, a nitinol tipless dormia basket with a detachable handle was used to catch the stone before disintegration, to achieve good contact of the probe with the stone. The mean operative time was 52 minutes (range 38-98). Ureteral stents were left for 2-3 weeks. Patient age, sex and stone characteristics were comparable between the 2 groups of patients (Table 1).
SWL was performed in 71 patients. The initial stone-free rate for in situ SWL was 58% (41 of 71 patients). The mean treatment time was 68 minutes (range 59-78). In 13 patients with failed SWL, a second SWL session was performed, which succeeded in 2 patients. Ureteroscopy was done in 14 patients with failed SWL, of whom 12 (86%) became stone-free. Percutaneous stone management was performed successfully in one patient. The remaining patients preferred open surgery. Stone analysis was performed for the 23 patients in whom stone fragments were available for analysis (Table 2).
The results of our study clearly demonstrate that the ureteroscopy group obtained better results than the SWL group (p = 0.003). There were no major complications in either group. In the SWL group there were recurrent attacks of renal colic requiring emergency ureteroscopy in 1 case, as well as hematuria and flank soreness. Most of the complaints after ureteroscopy were related to the stents.
Discussion
Shock wave lithotripsy (SWL) is the least invasive treatment for upper urinary tract calculi and is recommended as first-line therapy [1]. Stone clearance after SWL is variable and influenced by stone size, location and composition. The results of treatment for proximal ureteral calculi, either in situ or after stent placement, range from 57 to 96%, with a high re-treatment rate of 5 to 60% [1,6-9].
The success rate of repeat SWL after failed initial SWL treatment is relatively low [10]. SWL has a success rate above 80% for small upper ureteral stones. However, the success rate for large impacted upper ureteral calculi is low, with the highest reported rates around 60% [11-15].
Shock wave lithotripsy does not assure complete relief of obstruction and is associated with prolonged attacks of pain during stone passage.
Our success rate for SWL in this study after a single session was 58%, which is comparable to other studies [8,14,15]. This relatively low success rate could be attributed to the limited number of shock waves per session and the large size of the stones, which require a higher power index [7,8,14]. It is also important to mention that all cases in our study were treated in situ.
Complications in the SWL group included postoperative pain (colic) requiring emergency ureteroscopy in 1 patient, haematuria, flank soreness and urosepsis. Re-treatment with SWL in 18 patients succeeded in only 3, confirming the low success rate of repeat SWL [10]. The ability to predict the response of a stone to shock wave lithotripsy would optimize ureteral stone management [1].
The recent development of small-diameter semi-rigid and flexible ureteroscopes, together with the availability of the Holmium:YAG laser, has markedly improved the success rate of treating proximal ureteral stones. A success rate of around 50% for proximal ureteral calculi using large-diameter rigid ureteroscopes improved to greater than 90% using small-diameter ureteroscopes [14-17].
Most of the studies dealing with ureteroscopy for proximal ureteral stones use the Holmium:YAG laser for disintegration [14,15,17], as it is able to destroy all types of stones using small-diameter quartz fibers; large calculi can be fragmented into dust-like particles, decreasing the need for fragment retrieval, and it can be used through both rigid and flexible ureteroscopes [15-17]. The only disadvantage of the Holmium:YAG laser is its cost. In this study we used pneumatic lithotripsy (lithoclast) for disintegration, as it is cost-effective, available and effective, comes with small-diameter probes, and can be used through durable semi-rigid ureteroscopes.
Balloon dilation was frequently used because stone fragments are larger with the lithoclast than with the Holmium:YAG laser. Significant fragments were retrieved using the nitinol tipless dormia basket. Stents were more frequently used for the same reasons: large stone size, mucosal edema and polyps.
Our initial stone-free rate of ureteroscopic lithoclast lithotripsy for proximal ureteral calculi was 92%, which is lower than, but still close to, other reported series using the Holmium:YAG laser [14,15].
The main difficulty in the ureteroscopy group was failure to approach the stone because of a tortuous ureter, or angulation and edema at the site of the stone, which impede exposure and disintegration of the calculus. In four patients in our study we were not able to reach the stone due to angulation of the ureter, and in 2 more patients, even after reaching the stone, it was difficult to fragment it due to marked edema around the stone. It was helpful to have adequate irrigation and to negotiate the stone with a second guide wire (under vision) or glide wire, to obtain good exposure of the stone in impacted cases prior to disintegration.
Proximal migration of the stone is a potential limitation of the lithoclast. Different methods have been used to avoid proximal stone migration, including the combined lithoclast and lithovac, the Dretler stone cone, and the antegrade occlusion balloon catheter. In this study, a stone cone was placed under vision to avoid proximal stone migration; it also helped to sweep small stone fragments during its removal. For hard stones, a nitinol tipless dormia basket with a detachable handle was used to entrap the stone prior to disintegration. As in other studies using the stone cone, stone migration was avoided [18-20].
Several studies, as well as ours, have shown that the treatment outcome of ureteroscopy is not influenced by stone burden or composition, contrary to SWL results, which are influenced by both factors [13-15].
Although ureteroscopy is more invasive than ESWL, complications after ureteroscopy were limited in our study, as in most recent studies, owing to the use of a small-diameter 7 Fr ureteroscope, effective pneumatic lithotripsy and fine retrieval devices. Most of the complications in our study were related to the use of stents [14-18,21]. Considering the four methods available for large proximal ureteral calculi according to the guidelines of the American Urological Association (open surgery, PCN, ureteroscopy and ESWL), our study supports the use of ureteroscopy: it is effective irrespective of stone size or composition, allows immediate relief of obstruction, and carries minimal morbidity, unaffected by obesity, bleeding diathesis or previous open surgery. In addition to safety, the economic value of using the durable semi-rigid ureteroscope is attractive.
Conclusions
In experienced hands, the use of a small-diameter semi-rigid ureteroscope and lithoclast, with the availability of fine retrieval devices and the stone cone, allows safe and effective treatment of large proximal ureteral stones. In comparison to SWL, it achieves a higher stone-free rate with comparable complications and ensures immediate relief of obstruction. | 2014-10-01T00:00:00.000Z | 2010-01-28T00:00:00.000 | {
"year": 2010,
"sha1": "99e6cf5c1f9e093aa90dff3d656348fe2375c803",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/1755-7682-3-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99e6cf5c1f9e093aa90dff3d656348fe2375c803",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118536798 | pes2o/s2orc | v3-fos-license | Valence Instability of YbCu$_2$Si$_2$ through its quantum critical point
We report resonant inelastic x-ray scattering (RIXS) measurements on YbCu$_2$Si$_2$ at the Yb L$_{3}$ edge under high pressure (up to 22 GPa) and at low temperatures (down to 7 K), with emphasis on the vicinity of the transition to a magnetically ordered state. We find a continuous valence change towards the trivalent state with increasing pressure, but with a pronounced change of slope close to the critical pressure. Even at 22 GPa the Yb$^{+3}$ state is not fully achieved. The pressure where this feature is observed decreases as the temperature is reduced, down to 9 GPa at 7 K, a value close to the critical pressure ($p_c \approx 7.5$ GPa) where magnetic order occurs. The decrease in the valence with decreasing temperature previously reported at ambient pressure is confirmed and is found to be enhanced at higher pressures. We also compare the f electron occupancy between YbCu$_2$Si$_2$ and its Ce-counterpart, CeCu$_2$Si$_2$.
One of the major issues not fully resolved in the study of rare-earth (RE) intermediate-valence (IV) compounds is the interplay between magnetic and valence instabilities, especially when the system is driven by an external parameter such as pressure towards a magnetic quantum critical point (QCP), where strong spin and/or valence fluctuations are expected to arise. Especially noteworthy are the IV REM2X2 (where RE=Ce,Yb, M=transition metal, X=Si,Ge) compounds, widely studied during the past decades owing to the interesting physical properties they exhibit, such as heavy fermion behavior, different types of magnetic order, superconductivity and non-Fermi liquid behavior. All these phenomena are based on the non-integer occupancy of the 4f orbital and on the competition among the significant energy scales of these systems, i.e. the crystal electric field effect (CEF), the Kondo effect and the Ruderman-Kittel-Kasuya-Yoshida (RKKY) interaction. One important aspect in reaching a deeper understanding of these phenomena is a systematic comparison between cerium and ytterbium systems. Yb is often considered to be the hole equivalent of cerium. Pressure tends to drive Yb from its nonmagnetic Yb2+ (4f14) state to a magnetic Yb3+ (4f13) state. As pointed out by Flouquet and Harima [1], there are however significant differences between the two families. In Yb the deeper localization of the 4f electrons and the larger spin-orbit coupling lead to a different hierarchy of the significant energy scales. One consequence is that whereas in Ce the valence change will be quite restricted, in Yb applying pressure is expected to induce larger changes of the valence. YbCu2Si2 is the ideal prototype Yb system, apparently behaving as expected. The application of pressure produces a mirror image of a typical cerium phase diagram, with decreasing Kondo temperature, increasing magnetic fluctuation contributions to the resistivity and, for pressures above 8 GPa, magnetic order. Of additional interest is the recent discovery of ferromagnetism [2]. Furthermore, a recent study has shown that for its Ce counterpart CeCu2Si2 [3], a significant valence crossover is induced with pressure, which might support recent theoretical developments suggesting that valence fluctuations considerably enhance superconductivity. It is therefore of particular interest to follow how the Yb valence changes as pressure allows one to scan the full phase diagram, from deep in the paramagnetic state through the critical pressure and into magnetic order. As the energy scales and the magnetic ordering temperature are low, it is important to combine the extreme conditions of high pressure and low temperature. The recent development of truly bulk-sensitive and resonant spectroscopic techniques has significantly improved the accuracy of studies of the electronic structure under high-pressure conditions [4-6]. The measurement of the RE valence and of how it evolves under high pressure and at low temperatures turns out to be a relevant tool because it quantifies the hybridization of the f electrons with the conduction band [6-8]. However, few experimental results have been reported to date under such extreme conditions [3]. Experiments performed at ambient pressure and low temperatures, or at room temperature and high pressures, are more accessible.
Some results have been reported using different spectroscopic techniques, such as x-ray absorption (XAS), partial-fluorescence-yield mode (PFY) and resonant x-ray emission (RXES), for different REM2X2 systems, namely YbCu2Si2 [6,9], YbNi2Ge2 and YbPd2Si2 [10]. At ambient pressure, all of them are in a mixed-valent state with valences v = 2.88, 2.91 and 2.94, respectively. The valence-temperature dependence down to 20 K shows a continuous decrease for the three compounds of Δv = 0.06, 0.1 and 0.06, respectively. When pressure is applied (at 300 K), the Yb ion is driven towards its trivalent configuration, even though it does not fully achieve it, by an amount of Δv = 0.07 for YbNi2Ge2 and Δv = 0.05 for YbPd2Si2. This trend was already proposed by theory on the basis of thermodynamic arguments, which pointed out a higher stability of the trivalent state at sufficiently high pressures [11]. All these measurements were taken too far from a critical region, either in pressure or temperature, to give accurate details of the strong correlations among the low-energy interactions in these compounds. In this paper we report a detailed study of the Yb valence from RIXS measurements in the moderate heavy fermion YbCu2Si2 (γ ≈ 135 mJ mol^-1 K^-2) [12] at several points of the p-T phase diagram (p_max = 22 GPa, T_min = 7 K) and, for the first time, very close to the ferromagnetically ordered state [2], as summarized in Fig. 1. Red stars [13] and black stars [14] in Fig. 1 correspond to the temperature at the maximum of the resistivity curve (T_max), which is considered to be of the same order of magnitude as the Kondo temperature, T_K, and which strongly decreases with pressure up to 15 GPa.

Figure 1. (p, T) phase diagram of YbCu2Si2 (for p > 13 GPa, from Ref. [14]). T_M and p_c are the ferromagnetic ordering temperature and the critical pressure, respectively. Filled red points correspond to the positions where the valence was measured; points with a black stroke correspond to the valence-temperature dependence at 7 and 15 GPa. The valence-pressure dependence was measured at 300, 15 and 7 K. T_max (red stars [13] and black stars [14]) is related to the Kondo temperature (see details in the text).

We have used high-quality single crystals grown
by the in-flux method (using MgO crucibles), as described in detail elsewhere [13]. Measurements were performed at beamline ID16 of the European Synchrotron Radiation Facility (ESRF, Grenoble). The undulator beam was monochromatized with a Si(111) monochromator and focused to a size of 40 µm (vertical) × 100 µm (horizontal) at the sample position. The scattered x-rays were analyzed by a Rowland-circle spectrometer equipped with a spherically bent Si(620) crystal. The energy resolution was about 1.5 eV. A sample ∼20 µm thick was loaded in a membrane-driven diamond anvil cell (DAC) with silicone oil as the pressure-transmitting medium. A helium circulation cryostat was used to reach low temperatures down to 7 K. The pressure in the DAC was estimated from the fluorescence of a ruby chip placed in the pressure chamber.
The main advantage of working in the resonant regime (RIXS) is the possibility to selectively enhance the Yb2+ (2p^6 4f^14 5d^0) or Yb3+ (2p^6 4f^13 5d^1) component by an appropriate choice of the incident photon energy hν_in, in contrast to PFY-XAS experiments, which are also core-hole and "high resolution" spectroscopy measurements but in which a specific fluorescence channel is recorded. Smaller relative changes between the intensities of the two components can be better distinguished in the RIXS spectra, and therefore a higher accuracy in the estimation of the valence is obtained [6]. The RIXS spectra of YbCu2Si2 are summarized in Fig. 2. The two main features, at energy transfers E_t = hν_in − hν_out of 1525.5 and 1530.5 eV, correspond to transitions from the Yb2+ and Yb3+ components of the initial mixed-valent ground state (|g⟩ = a|4f13⟩ + b|4f14⟩). They are separated by ≈ 5 eV, which corresponds to the Coulomb repulsion between the 4f hole and the 3d core hole in the final excited states (3d^9 4f^14)εd and (3d^9 4f^13)εd (εd denotes an electron added to a valence band of d character). We measured at hν_in = 8.9404 keV to enhance the Yb2+ component. This value was chosen after PFY-XAS spectra were acquired by monitoring the Yb Lα1 (2p → 3d) line while hν_in was swept across the Yb L3 (2p → 5d) absorption edge, so that we could see for which hν_in the intensity of the Yb2+ component was maximized. For convenience, all spectra are normalized to the maximum of their respective Yb2+ signal and plotted versus E_t. There is a third weak low-energy feature that shows up only at high pressures (p > 7 GPa), around E_t = 1520 eV. It corresponds to a quadrupole-allowed (E2) 2p^6 4f^13 → 2p^5 4f^14 transition of Yb3+, better distinguished with PFY-XAS [6]. Fig. 2(a) shows the spectra measured at 7 K, the lowest temperature that could be reached due to technical limitations of the membrane-driven pressure cells and of the cooling power of the cryostat. A clear increase of the Yb3+ intensity relative to Yb2+ is observed under pressure, while the weak E2 feature stays mostly unchanged. At 15 K, in Fig. 2(b), and at 300 K (not shown here), a similar trend under pressure is noticed. This spectral weight transfer from Yb2+ towards Yb3+ is in accordance with the delocalization of a 4f electron under pressure. It has been reported [6] that the valence of YbCu2Si2 at ambient pressure decreases when the system is cooled. This tendency is also observed in our results under pressure. As can be clearly seen in Fig. 2(c) and (d), at 7 GPa and 15 GPa respectively, the intensity of the Yb3+ peak decreases with temperature, i.e., the system is driven towards the divalent state. A quantitative estimation of the Yb valence was obtained from the spectral data using the expression v = 2 + I^{3+}/(I^{2+} + I^{3+}).
The integrated I^{2+} and I^{3+} intensities were evaluated by fitting a superposition of two Gaussian functions to our data, one for each spectral contribution, together with an arctangent function for the background. The weak E2 transition was not included in this analysis. The resulting values are reported in Fig. 3.

The valence-pressure dependence in Fig. 3(a) clearly shows how the trivalent state is favoured under pressure and gives a precise picture of how the Yb valence changes at temperatures and pressures close to the critical region. The valence increases monotonically with pressure at all temperatures we measured (300 K, 15 K and 7 K), with Δv (up to 15 GPa) of 0.12, 0.15 and 0.18, respectively. The main result of this work is a distinctive change of slope, or kink, occurring at a pressure p_k, above which the rate ∂v/∂p decreases significantly. An estimation of p_k (indicated by an arrow in the inset of Fig. 3(a) for the values at 7 K) reveals that it decreases from p_k ≈ 11 GPa (300 K) to ≈ 9 GPa (7 K), approaching p_c. Above p_k the valence keeps increasing slowly under pressure and does not reach the fully trivalent state even at 22 GPa, the highest pressure achieved. Because the lowest temperature possible was about 7 K, we could not measure the valence in the magnetically ordered state; however, for p > 12 GPa the system was very close to it (see Fig. 1), only about 1 K above T_M. Although we cannot exclude that the valence jumps to 3 at the onset of magnetic order, this seems unlikely. An apparent value of less than 3 could also arise from inhomogeneity: there is evidence that the onset of magnetic order occurs as a first-order transition, and a previous Mössbauer study found a magnetic and a non-magnetic component just above p_c. However, this would imply that such phase separation persists even at 22 GPa, which also seems extremely unlikely. We therefore conclude that in this high-pressure region of the phase diagram magnetism sets in at a valence value of less than 3, that is, in the mixed-valency state. Fig. 3(b) shows the valence-temperature dependence at three different pressures: ambient pressure [6], 7 ± 1 GPa and 15 ± 1 GPa, i.e., very close to and well above p_c, respectively. The values at 300 K are estimated from Fig. 3(a). The valence-temperature behavior is, at first sight, roughly the same at all pressures. At ambient pressure, only slight valence changes are detected down to 200 K, but below this temperature the slope |∂v/∂T| is considerably enhanced. The decrease of the valence is much more pronounced at high pressure, but the temperature below which this decrease occurs does not seem to change under pressure; no significant differences can be discerned between 7 GPa and 15 GPa. This implies that the decrease is not related directly to the formation of the heavy fermion state: although at ambient pressure the Kondo temperature is estimated to be of the same order as the temperature where this feature is found, at higher pressures it decreases strongly (see Fig. 1). Recent valence measurements carried out on CeCu2Si2 under similarly extreme conditions [3] offer the valuable and rare opportunity to compare the 4f occupancy (n_f) under pressure and at low temperatures between a Ce-based compound and its Yb counterpart.
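Before turning to that comparison, the line-shape analysis described above can be sketched concretely: two Gaussians (one per valence component) on an arctangent step background, with the valence computed from the fitted areas. The peak positions, widths and the data below are placeholders, not the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, center, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def rixs_model(x, a2, a3, c2, c3, s2, s3, step, edge, width):
    """Two Gaussians (Yb2+ and Yb3+ lines) plus an arctangent background."""
    bg = step * (0.5 + np.arctan((x - edge) / width) / np.pi)
    return gauss(x, a2, c2, s2) + gauss(x, a3, c3, s3) + bg

# Synthetic spectrum standing in for real data (energy transfer in eV).
et = np.linspace(1515, 1540, 300)
rng = np.random.default_rng(0)
counts = rixs_model(et, 1.0, 0.8, 1525.5, 1530.5, 1.0, 1.0, 0.1, 1528.0, 2.0)
counts += 0.01 * rng.normal(size=et.size)

p0 = [1.0, 1.0, 1525.5, 1530.5, 1.0, 1.0, 0.1, 1528.0, 2.0]
popt, _ = curve_fit(rixs_model, et, counts, p0=p0)
i2, i3 = popt[0], popt[1]            # integrated intensities
valence = 2 + i3 / (i2 + i3)         # v = 2 + I(3+)/(I(2+) + I(3+))
print(f"Yb valence ~ {valence:.3f}")
```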
CeCu2Si2 is very sensitive to small differences in stoichiometry, which at ambient pressure can lead to samples with an antiferromagnetic (AF) transition around 0.7 K, superconductivity below 0.65 K, or a combination of the two [15-18]. However, it is generally accepted that at ambient pressure CeCu2Si2 is very close to an AF QCP. Under pressure, superconductivity is enhanced and shows a two-dome shape (the first dome is located at p_c and the second one at p_v ≈ 4.5 GPa). The strong magnetic fluctuations that arise near p_c are held responsible for the electron pairing near the magnetic QCP, whereas for the second dome at p_v the f-electron occupancy and the associated valence fluctuations might play an important role in superconductivity [3,19]. As YbCu2Si2 (bulk modulus B_0 = 168 GPa) [20] is harder than CeCu2Si2 (B_0 = 112 GPa) [21], it is more appropriate to compare their 4f electron (Ce) and hole (Yb) occupancies, n_f, with respect to their molar volume change, using the p-V relation V = V_0 exp(−κΔp), rather than with respect to the applied external pressure. The molar volume at ambient pressure and low temperatures, V_0, was calculated using the a and c values from Ref. [22]. The molar volume change is normalized to the critical molar volume of each compound, V_c = 146.012 Å^3 and 165.324 Å^3 for YbCu2Si2 and CeCu2Si2, respectively. The upper axis has been shifted in order to align the critical molar volumes of both compounds, as indicated by the vertical dotted line; the blue and red arrows indicate the direction of increasing pressure. For Δp ≈ 8 GPa, the variation of the molar volume is about 6.7% in CeCu2Si2 and about 4.5% in YbCu2Si2, while the corresponding changes in n_f are about 13.5% and 15.4%, respectively. This result bears out what was suggested theoretically [1]: the differences between Yb- and Ce-based systems, which lead to a different hierarchy of the main energy scales (T_K, CEF), might allow a wider scan of the valence between the divalent and trivalent states in the former compounds. Further studies on other compounds would be of great interest in order to clarify whether this is a general tendency of Ce and Yb systems or a particular trait of the RECu2Si2 family (with RE=Ce, Yb). The effect would be even more dramatic if we compared Δn_f for the same ΔV/V_c. From these results it is tempting to conclude that in YbCu2Si2 valence fluctuations will play a significant role, and that superconductivity should be sought not at the critical pressure but somewhat below it. This is, however, an extreme simplification, and other differences should be taken into account, not least the ferromagnetic nature of the magnetic order and the possible first-order type of the critical point in YbCu2Si2.
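As a quick numerical check of the p-V relation used above, taking κ ≈ 1/B_0 (a constant-compressibility approximation) reproduces the quoted volume reductions to within a few tenths of a percent:

```python
import math

def volume_change(dp_gpa, b0_gpa):
    """Fractional volume reduction from V = V0 * exp(-kappa * dp), kappa = 1/B0."""
    return 1.0 - math.exp(-dp_gpa / b0_gpa)

for name, b0 in [("YbCu2Si2", 168.0), ("CeCu2Si2", 112.0)]:
    print(f"{name}: dV/V0 = {100 * volume_change(8.0, b0):.1f}% at dp = 8 GPa")
# -> ~4.7% and ~6.9%, versus the quoted ~4.5% and ~6.7%.
```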
In conclusion, we have studied by resonant x-ray spectroscopy at the Yb L3 edge the valence properties of YbCu2Si2 under high pressure and at low temperatures, near the critical region and the magnetically ordered phase. A significant and continuous change of the f electron occupancy under pressure is observed, with a distinctive change of slope close to the critical pressure where magnetic order occurs. However, the fully trivalent state is not yet achieved at the highest pressure of this study, which implies that YbCu2Si2 remains in a mixed-valency state over a wide range of its phase diagram, including the magnetically ordered phase. The results also show that a larger valence change under pressure is achieved in the Yb-based compound compared with its Ce counterpart, but further studies are needed to verify whether this is a general trend between Ce and Yb systems. We thank J. Flouquet, H. Harima and S. Burdin for fruitful discussions. This work was supported by the French ANR agency within the project Blanc PRINCESS | 2012-03-15T21:15:07.000Z | 2012-03-14T00:00:00.000 | {
"year": 2012,
"sha1": "d77f2873efd96b39b047d3a5e64ba47637a6476a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1203.3567",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d77f2873efd96b39b047d3a5e64ba47637a6476a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
252804783 | pes2o/s2orc | v3-fos-license | Tele-Rehabilitation Service from the Patient's Perspective: A Cross-Sectional Study
This study aimed to describe patients' perceptions of tele-rehabilitation (TR) and to investigate the association between TR-related factors and both patients' age and the type of rehabilitation services. A cross-sectional survey was conducted to obtain data about patients' demographic and medical information and technological familiarity, as well as patients' experience and opinions about TR. The 227 patients completing the survey had a mean ± SD age of 40.7 ± 13.9 years, with musculoskeletal disorders the most common condition treated by TR. The majority of patients expressed satisfaction and confidence in their therapists' ability to assess and treat their problems using TR. Approximately 75.3% of participants stated that therapists demonstrated a strong understanding of their health conditions, while 82% reported that TR was a convenient service during COVID-19. The study found associations between age and patients' ability and confidence to use technology, as well as a relationship between the type of treatment received and participants' overall opinions. Patients demonstrated acceptance, confidence, and satisfaction with TR during COVID-19. Patient age and treatment type play a major role in patients' perceptions of TR.
Introduction
Tele-rehabilitation (TR) is a rehabilitation service provided at a distance through audio and/or video digital media. It is commonly used to address patient needs (1-3) through diagnostic, therapeutic, preventative, counseling, and consultative services via interactive communication technology (4). TR offers several advantages, such as reduced treatment expenses, time savings, improved access, and continued care for clients with infectious diseases.
Physical therapists (PTs), as a profession, adopt a mixture of hands-on and hands-off skills to treat patients, which most often requires considerable physical contact between patient and therapist, making virtual treatment sometimes difficult and possibly unacceptable to patients and/or caregivers. TR services still require improvements, especially for treating home-based patients with a trained team and a technological model of care. In particular, PTs are primary practitioners with experience in managing injuries, functional recovery, movement, and pain. Prior research (5) reported that, in comparison with office-based PT, TR programs demonstrate similar levels of functional improvement among patients with knee osteoarthritis (OA), and recommended TR for older populations in remote areas. A systematic review revealed strong concurrent validity (6) for PT assessments of range of motion, muscle strength, gait, balance, pain, swelling, and body functions.
Previous studies (7,8) investigating TR have limitations, including small sample sizes and the lack of a control group. Moreover, these studies failed to investigate important aspects of TR, including patients' perceptions of remote care. Few studies have focused on patient satisfaction, with only two investigations examining chronic pain sufferers' perspectives on TR (9,10). While a previous study found that people with knee and/or hip OA report positive perceptions of TR for providing physical therapist-prescribed exercise services (11), few works investigate whether the same perceptions exist among patients with other health conditions assessed and treated by different rehabilitation team members. This gap highlights the need to study patients' experience with TR and the technologies used to deliver rehabilitation services. Therefore, this study aims to describe patients' experience with TR by exploring patients' familiarity, experience, and opinions of TR and its technology in Saudi Arabia. Moreover, this investigation considers patients' age and type of rehabilitation services, along with the associations between those and other factors.
Participants
All participants/caregivers were recruited from the Medical Rehabilitation Department (MRD) at King Saud University Medical City (KSUMC). Eligible patients were those who received remote treatment through the TR program in the period from June 1 to September 30, 2020. Patients who had been treated using the TR program were called to complete a semi-structured survey designed and validated by the research team. Patients of any age or diagnosis referred to the MRD were eligible to participate in the study.
TR Program
The TR program embraces the concept of providing interactive remote consultations and treatment sessions, on a virtual platform, for patients who require rehabilitation services. The program was designed to facilitate rehabilitation services regardless of patient location, using audio/video conference calls on a dedicated virtual clinic platform and system developed by the Information and Technology department at KSUMC using Microsoft webinar software and the internal eSihi (electronic medical record system) software.
The process of recruiting patients to the program featured three steps.
1. The triage coordinator reviewed patients' electronic referrals and medical files, considering diagnoses and complaints, before evaluating the feasibility and risks of serving the patient through the program. The triage coordinator is a senior physical therapist with 17 years of experience who had served as triage coordinator for about 5 years before the beginning of the study. An unpublished triage system, built by expert therapists and the research team within the department, has been used at the Medical Rehabilitation Department since 2015.

2. All applicable patients were referred to a specialized therapist. During the initial appointment, the therapist assessed the patient's condition and scheduled him/her through the virtual clinic. During the consultation, the therapist performed the usual evaluation, provided a comprehensive treatment plan, sent the exercise program to the patient using the Physiotools program, and scheduled him/her for follow-up appointments. This initial assessment took place via videoconference.
If the patient's case was complex and the therapist determined that a videoconference could not be used to make decisions and treat the patient, a face-to-face assessment was considered crucial, and the therapist scheduled an appointment for the patient to be seen physically at the MRD; where possible, the patient then moved back to virtual follow-up. The treatment sessions could include thermotherapy (application of heat or cold), therapeutic exercises, active and assisted limb mobilization, flexibility training, aerobic exercises, balance, and gait training. All participants or caregivers were provided with handouts of the treatment procedures (including appropriate time and number of repetitions) to ensure that no part of the treatment was missed.
The TR program was delivered as complete treatment sessions. During each session, the therapist fully assessed and treated the patient. The patient was asked to perform the required treatment while the therapist watched to make sure the patient could perform it appropriately. The number and duration of sessions varied according to the case and the needs of the patient. The therapist-patient meeting took place virtually. If any home exercises were needed, the therapist explained them and made sure the patient could perform them correctly.
Survey Instrument
This study examined patients' experiences and opinions about TR using a semi-structured survey, developed through a structured design process. The first step was to identify the best method and type of survey to be used, the communication method, and the length of the survey. The second step was to determine the target population and to gather all required information and necessary questions, drawn from a literature review in the areas of TR, telehealth, technology adoption, and other related fields, as well as from feedback from senior physical therapists through a focus group session. The survey was then tested on 15 patients with characteristics similar to the targeted sample. This pilot aimed to verify the survey structure, the absence of ambiguous or leading questions, the clarity of wording, the inclusion of important issues, and the time required to complete the survey. Based on this testing, the questionnaire was revised and the final version created. Appendix 1 shows the survey used.
All patients who received virtual rehabilitation services were invited to complete the survey. Once a patient was discharged from the TR program, s/he was listed to be called by the investigators. Before completing the survey, participants received a clear introduction to the study purpose. The survey covered four major areas: demographic and medical information, and patients' familiarity with, experience of, and opinions about TR. After obtaining participant consent, a data collection team administered the survey in a 10-min phone call, using department phones, at the end of the intervention period.
Data Analysis
All statistical analyses employed SPSS Statistics 25 (IBM Inc., Chicago, IL, USA). Descriptive statistics for all participants were expressed as frequencies and percentages for all categories. Associations were investigated using a chi-square test, with significance set at P ≤ 0.05.
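For any one survey item cross-tabulated against age group, this reduces to a contingency-table test; the sketch below shows the computation in SciPy on hypothetical counts (the actual cross-tabulations are in Table 4 and are not reproduced here).

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = age groups, columns = (confident, not confident)
# in using electronic devices; real counts would come from the survey data.
table = [[46,  4],    # 18-24
         [52,  3],    # 25-39
         [95, 10],    # 40-60
         [12,  5]]    # >60
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```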
Results
Researchers contacted 275 participants to complete the survey, 227 of whom responded, yielding an 82.5% response rate; the mean age was 40.7 ± 13.9 years. The majority of participants fell in the 40-60- and 25-39-year-old categories, representing 46% and 24%, respectively. Most participants were female (73%), unemployed (61%), and held a bachelor's degree (35%) or a high school diploma (25%). The most common treatments in this sample were musculoskeletal (MSK) and orthopedic PT. Nearly 57% of the sample had never received any prior physical therapy treatment in the department. Further details of the personal information appear in Table 1.
Most patients felt very (59%) or moderately (25%) confident that their therapists successfully assessed and treated their problems using TR. Most respondents would recommend this service to others, especially during the COVID-19 pandemic. However, more than 60% of the sample believed that TR was not as effective as traditional face-to-face assessment (Table 2). More than half of the sample (55%) experienced improvements in their condition after TR treatment. Most respondents reported a high level of satisfaction with TR. Finally, many patients (83%) did not face any technical problems during TR, which, along with other details, appears in Table 2.
Approximately 22% of patients believed that therapists experienced challenges during the assessment. These patients were questioned further, using qualitative analysis, to understand the kinds of challenges they meant. Among this group, more than half (54%) believed that the therapist struggled to reach the correct diagnosis, and 15% described problems with the technologies. The remainder reported that the therapist's assessment challenges related to other issues, such as commitment to the treatment program (11.5%), inability to attend in person (11.5%), and uselessness of the software (8%).
Approximately 17.2% of participants reported technical problems during TR. Among those patients, 38% reported difficulties with software, 28% encountered impaired internet signals or connection interruptions, 24% encountered audio and/or visual problems, and the remaining 10% reported communication difficulties with the therapist.
Most participants (98.7%) accessed the internet through smartphones, while the rest used other devices, such as tablets, laptops, and desktop computers; more than 90% of respondents were confident in using electronic devices. In addition, most participants (86%) had been very familiar with using the internet during the past 5 years, with 96% of patients using it daily. Table 3 and Figure 1 show patients' opinions about their TR experience. About 75.3% of respondents agreed that the therapist possessed a strong understanding of their health conditions. Most participants (96%) did not perceive any breach of privacy during the TR process. While 36% of respondents believed that phone calls provided benefits equal to face-to-face treatment, 48% felt otherwise (16% of patients were neutral). Similarly, 43% reported that video conferencing was equal to face-to-face conversations, while 35% disagreed with this opinion (22% of patients were neutral).
Most participants (74%) preferred not to attend their appointment physically during the pandemic, and 82% of the participants stated that TR constituted a convenient form of healthcare during the COVID-19 pandemic. Most respondents agreed that TR was an appropriate (70%), useful (64%), effective (57%), affordable (89%), and safe (94%) treatment method during COVID-19. While 58% of the participants agreed that a TR-based treatment plan would improve their present health conditions, 20% disagreed.
Results concerning the association between age and other factors appear in Table 4. Age demonstrated a significant association with the presence of technical difficulties and/or problems, confidence in using electronic devices, and belief in the therapist's ability to completely monitor the patients' health via TR.
Significant associations between treatments provided and other factors appear in Table 4. The current treatment provided was significantly associated with the opinions about therapists' understanding of patients' health through TR, and belief in therapists' ability to completely monitor patients' health conditions via TR.
Discussion
The current study aimed to describe patients' experiences and opinions regarding TR. Most patients expressed moderate-to-high confidence that therapists can assess and treat health conditions when using TR. Furthermore, the majority of patients indicated satisfaction when using TR and believed that therapists strongly understood their health conditions. However, most respondents concurred that face-to-face assessment is more effective than TR. Patients were divided about their confidence in using audio or video conferences to discuss health conditions. Moreover, the study found that patients' age and type of rehabilitation service were associated with other factors pertaining to TR. These findings support previous studies, one of which reported high satisfaction levels with TR services and with the therapist-patient relationship (12). In addition, another study from Sweden found that patients treated via TR after shoulder joint replacement reported that therapists treated their pain and guided them during home exercise programs (13). Another paper from Canada revealed that high rates of satisfaction among patients and therapists held regardless of patient population and setting (14). The greatest beneficiaries of TR were patients in the 40-60-year-old category, followed by patients aged 25-39 years; patients over 60 comprised only 7.5% of the participants. Moreover, most respondents held educational degrees of high school or above, suggesting familiarity with technology. MSK PT represented the most common type of rehabilitation service among participants. Prior research reported that TR was highly effective among patients with MSK conditions, even in older demographics (15). This finding, combined with the present study, indicates the need to continue evaluating the effectiveness of TR for MSK disorders and all other health conditions.
Participants expressed confidence in therapists' knowledge and abilities to assess and treat health conditions via TR. Most respondents (95%) reported that TR services protected privacy, which related to moderate-to-high levels of satisfaction during TR assessment and treatment. These findings support previous studies about the convenience, ease of use, and privacy of TR (11). In addition to safety, privacy is an issue emphasized both in the American Telemedicine Association's (ATA) A Blueprint for Telerehabilitation Guidelines of 2010 (16) and in Saudi culture.
Most participants deemed face-to-face assessment more effective than TR. This finding may result from the presence of chronic pain, which, according to previous evidence from the Netherlands, reduced patients' confidence in performing exercises via TR (9). Moreover, prior research reported that physical contact constitutes an important factor for building trust between therapists and patients (17). However, 78% of participants in the present study reported that therapists correctly assessed their health conditions via TR, while 22% felt concerned about the diagnoses. This study highlights the need for investigations that determine whether exercise performance outcomes differ based on physical touch. Previous evidence found that the absence of physical contact impedes the formation of an emotional bond (9). Conversely, previous research from Canada reported that patients found TR "at least as effective as traditional face-to-face physical therapy following primary total joint arthroplasty" (18). Most participants experienced improvements after TR treatment. This finding concurs with previous literature in which patients reported that therapists customized exercise treatments to their ability, pain, and fatigue (19), which studies of patients with OA supported over the short term (20,21). Moreover, previous evidence found that only 23% of patients preferred the lack of physical contact during TR (11). Furthermore, the type of medical condition may play a role in TR. These collective findings raise several issues regarding the relationship between therapists and patients, the quality of health information, and organizational challenges (22). Therapists can address concerns about the therapeutic relationship by scheduling at least one in-person session with the patient.
Participants in Australia have indicated a preference for audio calls during TR, which concurs with a study of clients who had undergone arthroplasty and reported relatively equivalent outcomes between a six-week in-person rehabilitation program and internet-based TR (23). Similarly, a UK-based phone service revealed similar outcomes when comparing TR assessments with regular physical therapy for patients with MSK conditions (24). Another study reported that a four-week TR program yielded significant improvements in several outcomes among patients with low back pain, such as pain, muscular strength, and functional capacity (25). These results extend beyond rehabilitation patients, as phone-based telecare reduced hospital admissions among patients with heart failure (26). This finding may result from patients' improved access to specialists and the lower costs of TR. However, a minority of patients expressed confidence that TR provides an equitable platform for discussing their conditions in comparison to face-to-face conversations. This result indicates the need to explore the reasons that people prefer physical contact for rehabilitation. Moreover, the impact of education and/or experience on changing this belief remains unknown. A previous study found that one-third of patients expressed uncertainty about the benefits of therapeutic exercises for OA despite their prominence in OA treatment (27). The present study found that TR may represent the most convenient method of rehabilitation, especially during COVID-19, as most participants appreciated the ability to remain at home. Most participants deemed TR an appropriate, useful, effective, affordable, and safe treatment method during COVID-19. Most respondents also believed that treatment prescribed via TR improved their health conditions. Finally, most patients expressed their desire to recommend TR to their friends, indicating high satisfaction levels with TR during COVID-19.
Participants' positive opinions and experiences may partially result from the high percentage of patients (83%) who did not experience any technical problems during TR. This result may reflect the accelerating digital transformation in Saudi Arabia (28). In contrast, studies from other countries (12,19) reported that patients express concerns about technology. To address this issue, therapists can potentially combine TR with in-person services.
This study found that most patients in Saudi Arabia confidently used and accessed the internet via smartphones, concurring with previous research in which all participants had a mobile phone (11). In the present study, among the 17.2% of the participants who reported technical problems during TR, the majority experienced difficulties with software (38%), followed by impaired internet signals or connection interruptions (28%), audio and/or visual problems (24%), and communication difficulties with therapists (10%). Such findings may result from patients' cognitive, motor, or social impairments, as individuals with neurological issues may struggle to use technology and communication devices. However, TR services still require more research with clinical outcome measurements using longitudinal observational studies and randomized clinical trials.
This current study constitutes the first investigation to examine the association between age and other factors related to TR. Among all cohorts, the 0-14 age group demonstrated the greatest likelihood of facing technical problems, which may relate to difficulties that caregivers/parents face while managing children and dealing with technologies, indicating the need for improved understanding of therapists' instructions. Finally, confidence in using electronic devices was not an issue in any age category except patients aged 61 years or older, who appeared to be less confident. This may be explained by the accelerating digital transformation in Saudi Arabia (28), as mentioned earlier in this section.
Among all age groups, parents/caregivers of patients aged 0-14 years most often expressed their belief that therapists could monitor health conditions over phone and/or video conferencing. This finding may encourage professionals to use TR for patients aged 14 years and younger. Conversely, patients aged 15-24 years had the greatest likelihood of disagreeing with this statement, which may indicate the need to provide education about TR and its impact on this age group.
This study constituted the first investigation to examine associations between treatment type and other factors related to TR. Among all therapy types, patients receiving neurological (Adult / Paediatric) rehabilitation had a greater likelihood of agreeing that therapists possessed a strong understanding of health conditions through TR and could remotely monitor health conditions. Although such cases may suffer from complicated cognitive and motor disorders, this finding provides promise for the possibility of using TR in such cases; however, additional studies with larger sample sizes are needed to confirm such findings.
Future Recommendations and Limitations
This study used a non-probability sampling technique; drawing a random probability-based sample of the population was not feasible due to the pandemic and to time and cost considerations. However, researchers reduced selection bias by contacting all patients treated via TR during the study period. Another limitation concerns the small sample size, which may explain the high proportion of females and patients with MSK conditions; future studies should therefore target more males and non-MSK patients. Nevertheless, the female-to-male patient ratio at the Medical Rehabilitation Department is 65%-35%, which might also account for the high proportion of female participation in this study.
Conclusion
This study revealed high levels of acceptance and satisfaction with TR services, including confidence in therapists' understanding of patients' health conditions as well as ability to assess and treat. The age of patients and type of treatment play a prominent role in patients' perceptions toward using TR. Lastly, patients indicated their preference for TR during COVID-19. Additional studies should confirm findings with complicated neurological cases and include more rehabilitation departments throughout Saudi Arabia.
"year": 2022,
"sha1": "0b99c9d8bae9c2ba08532947bfaabc3112ecbeee",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6ac33d52e8e24e1bc5c655b2aaf8fca8af673b0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sola Fides at the Core of Varieties: Luther as Religious Genius in William James's Thought
Introduction
Classical American pragmatism is an estimable source to understand religion as an inescapable human endeavor. Despite logical divergences among the approaches of classical figures, they share a primeval interest in religion. Charles Sanders Peirce, for example, gave religion a central role in his philosophy, especially through his conception of evolutionary love; Josiah Royce linked religion and morality in an attempt to assert the vitality of the idealist tradition. Finally, John Dewey fought "militant atheism" through a vision that could be called religious naturalism. As part of the core of this movement, William James's philosophy has religion as a fundamental issue too.
Numerous books and articles have already been written on James's views on religious experiences and religion. Thus, it is not my purpose to repeat or rephrase either James or the literature, but to explore a couple of neglected connections: the line that can be drawn between Luther and James, on the one hand; and on the other hand, the role that Hegel's conception of religion could play in relation to Luther and James. Accordingly, I will support the view that inwardness is one of the foremost ways to understand the links between these thinkers.
Given this general framework, one of the main issues relates to institutional creeds, particularly to the role of the Protestant spirit and denominations in classical American philosophy. As regards James, he tends to oscillate between identifying his writings as belonging to the general spirit of Protestantism, on the one hand, and as referring to religious experiences beyond the Protestant framework, on the other.2 The first tendency clearly appears in several parts of James's work.3 The literature has also reflected this dualistic viewpoint. David Hollinger, for example, points out that James unequivocally belongs to the Protestant tradition.4 Meanwhile, the second tendency can be found in James's definition of religion at the beginning of The Varieties of Religious Experience: "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine" (34; emphasis in original). Some outstanding interpreters of James's work, such as Ralph Barton Perry and Henry Levinson, emphasize that these kinds of statements grasp the core of the Jamesian point of view on religion.5 Consequently, in the circumscription of this topic, James would ground his conception of religion in the psychology of religious experiences and not in a particular theology, doctrine, or sectarian view.
My position in this paper, despite this Jamesian circumscription of the topic, is that there are several Protestant issues within James's works, and especially in Varieties.6 Those aspects seem to refer mainly to Calvinism (in their different versions), which appears to be the skeleton of Varieties (see Hollinger; Levinson 15). Besides, Calvinism could be interpreted as a biographical source of James's religious tendencies. As Gerald Myers writes: "Biographers have conjectured about the source of James's [religious] need. It may have resulted from the strict Presbyterianism of his powerful and wealthy grandfather, the elder William James, or from his father's rebellious, freethinking bouts with Calvinism, other influences may have been the piety of his mother, his early reading of Jonathan Edwards, and his study of the Great Awakening in American history" (256). From Myers's words, we could infer that, despite the numerous sources used in Varieties, only the Protestant ones are essential to understand James's religious outlook. Meanwhile, within the literature, some authors relate James to the two fundamental branches of Protestantism: Calvinism and Lutheranism. Amanda Porterfield, for instance, refers to the dual Protestant character in James's viewpoint: Luther's conception of faith as an essential feature of religion, on the one hand, and Calvin's idea of moral strenuousness empowering religious believers, on the other (147).
Besides Porterfield, three authors-to the best of my knowledge-have stressed the relevance of Luther's influence in James's religious thought. One of them was David Zehnder, who relates James to Lutheran Reformation (301). He points out one aspect of James's conception of religion (i.e., self-surrender), which will be widely referred to in the corpus of this paper.7 Meanwhile, William Spohn wrote an insightful interpretation of James's religious thought: "A particular narrative of Christianity had already determined the shape of redemption that he assumed to be nonsectarian. He [James] extrapolated a common pattern from his sources, but they were mostly Protestant and almost exclusively Christian. In the Varieties he relied heavily on the conversions studied by Edwin Diller Starbuck. They were taken primarily from Western sources and interpreted through conventional theological frameworks, particularly the paradigm of justification through faith articulated by Reformation Christianity. Martin Luther defined Christian conversion according to Augustine's reading of Paul's conversion experience. Romans and Galatians in effect are interpreted through the story of Augustine's Confessions" (32).8 Spohn clearly remarks upon something that is essential to the interpretation I present in this article: at the core of Varieties, there is a particularly Protestant-more specifically, Lutheran-way to understand religious experience. The third author who relates James with Luther was Paul Hinlicky. He offers a sharp interpretation of Luther's dual influence on James's thought: Luther's conception of religion should be considered, on the one hand, as belonging to the healthy mentality, especially through his doctrine of justification by faith; and on the other hand, it should also be interpreted as exemplifying the religion of sick souls. But how can this duality be explained? Hinlicky points to Luther's view of "enthusiasm" as the most relevant element in religion (Hinlicky 25), as the key to the understanding of this duality. Beyond creeds and rites, there exists an inescapable fundamental closeness in the relationship between the individual and God. In my interpretation of Hinlicky, Luther's conception of religion would be for James either the relationship of enthusiastic hope of the healthy mentality (the ecstatic self-surrender) or the trembling despair of sick souls.9 I think that this is one of the reasons why Luther is the paradigmatic model of religious genius in Varieties. I will return to this issue in the corpus of the article. Summing up, there seem to be at least four approaches to understanding James's thought on religion: in the first place, interpreters who point out the nonsectarian and non-institutional character of James's religious conception as its more important aspect (e.g., James's circumscription of the topic and conclusion in Varieties; Perry; Levinson; see also note 5); secondly, authors who think that James's conception of religion should be understood as a form of inherited and diluted Calvinism, which belongs to the liberal Protestant tradition and emphasizes the moral strenuousness that religion gives us (especially Porterfield; Hollinger); thirdly, those who appeal to the influence of Swedenborg's heterodox theology through the writings and influence of Henry James, Sr.
(Croce; Habegger); and finally, authors who think that one of the most important sources to understand James's religious thought is Martin Luther's Protestantism (Zehnder; Spohn). My article clearly belongs to the fourth interpretative approach. I attempt to focus on one aspect of James's thought, namely its link with Luther and Lutheranism, which is examined from both a narrow and a comprehensive perspective. The former refers to the role played by Luther's conceptions of religion within James's work-specifically in Varieties. The latter deals with Luther's thought and the development of Lutheranism from its origin, and the role that James's conception could play in connection with it.
Grounded on those perspectives, I defend three hypotheses in this work: first, that James's conception of religion is markedly influenced by Luther's thought in central parts of Varieties; second, that the Lutheran core of James's view lies in the split he makes between religion and morality, on the one hand, and in the distinction between the content of faith and having faith, on the other hand; third, that James's conception of religion is one of the most radical representatives of a philosophical way to interpret Luther's thought: the primacy of Luther's conception of faith against Luther's idea of Church and sola scriptura-a comprehensive perspective integrating James's Varieties into the philosophical reception of Lutheranism.10
I. Luther and Lutheranism
It is well known that at the turn of the sixteenth century, Martin Luther triggered a schismatic religious revolution that undermined Catholic hegemony, changing religion in the world forever. I will briefly examine some aspects of this complex phenomenon, that is, Reformation in its Lutheran vein, which are relevant to dealing with James's conception of religion: (1) Luther's idea of grace, (2) Luther's conceptions of justification by faith alone/justification by works, (3) Luther's idea of salvation, (4) Luther's idea of sola scriptura, and (5) Luther's concept of church. From my perspective, these notions represent the core of Luther's religious thought. The first three conceptions are closely connected. Luther's idea of grace turns around God's mercy as a gift conferred on human beings independently of their works. Luther paradigmatically sustains this position in the following paragraph of The Good Works: "Look here! This is how you must cultivate Christ in yourself, and see how in him God holds before you his mercy and offers it to you without any prior merits of your own. It is from such a view of his grace that you must draw faith and confidence in the forgiveness of all your sins. Faith, therefore, does not originate in works; neither do works create faith, but faith must spring up and flow from the blood and wounds and death of Christ" (38; emphasis added). Furthermore, Luther maintains that we do not deserve this gift from God. So, as Alister McGrath writes: "[G]race is the 'undeserved and unmerited divine favour towards humanity'" (88). This is, as McGrath argues, a fundamental issue: "It is the idea of grace as the unmerited favour of God which underlies the doctrine of justification by faith, generally and rightly regarded as underlying the origins of the Lutheran Reformation in Germany" (89). It is well known that the target of Luther's attack was the sale of indulgences. An indulgence was a document that assured its possessor a remission of sins for himself or for a soul in Purgatory. First, Luther rebelled against the excess of indulgence sellers (mainly in 95 Theses). In subsequent books, however, Luther opposes the very idea of indulgences and questions the authority of the Catholic Church to sell or give indulgences (particularly in The Babylonian Captivity of the Church). By attacking indulgences and the authority of the Catholic Church, Luther insists on the idea that to be justified before God does not entail any good work on our part but only our belief in Him and His word. Thus, what matters is not our merits before a righteous God, but our faith.
Grace and faith are related issues in Luther's thought. While grace is a divine gift for humanity, it needs to be accepted by human beings. Such acceptance is what we call faith, which entails self-negation to give place to Christ: "Now if God is to live and to work in him, all this vice and wickedness must be choked and uprooted, so that in this event there is rest from all our works, thoughts and life, so that henceforth (as St. Paul says in Galatians 2:20) it is no longer we who live, but Christ who lives, works and speaks in us" (Luther, Good Works). This is a crucial issue regarding James's conception of religion since it entails "relaxation" or "quietism" and not "action" as a central common topic between James and Luther. Then, it is from the combination of grace and faith that the idea of salvation emerges, that is, our salvation-the third aspect of Luther's thought that I mention-depends on both the undeserved grace that God confers on us and our acceptance of such grace, that is, our faith in God and His word. The fourth and fifth issues focus on objective aspects of Luther's thought. The former relates to the doctrine of sola scriptura. As is well known, Luther attacked the authority of the Catholic Church and its system based on seven sacraments by simplifying it. For him, there were only two sacraments: baptism and communion. Such a simplification was based on the idea that the Holy Bible is the only authoritative source and that rituals added through different temporal processes are not important. Thesis 62 is an example of this conception: "The true treasure of the church is the most holy gospel of the glory and the grace of God" (Luther, 95 Theses 13).
The last issue that I will refer to is Luther's idea of Church, which is one of the most complex concepts within his thought. The question that arises is what the nature of a church is. Luther faced two options: either to sustain that there exists a visible church connected with the apostolic past-which poses the problem of differentiating such a church from the Catholic Church-on the one hand, or to hold that the actual church is only in Heaven, on the other hand-which brings the problem of differentiating between such a church and the radical wing of Protestantism. Underlying the problem of the nature of the church is the relationship between the church and the Bible. Put differently, why do we need a church if we have the Bible? I will refer to these features below.
It has often been said that Luther's thought entails paradoxical claims. Features (1)-(3) mentioned above, for example, mainly refer to one aspect of Luther's thought, namely the foundation of religion in individual consciousness through the appeal to grace and faith. On the other hand, features (4)-(5) make reference to another aspect of Luther's thought, which is its foundation in the objective source of the Bible and the church. Put differently, (1)-(3) refer mainly to inward conviction as the source par excellence of religion, while (4)-(5) focus on sources that are independent of individuals. How can Luther's thought be understood in this context?
One option is to distinguish between the young and the mature Luther, as Gerhard Ebeling and Ginzo Fernández, among others, did.11 Despite the medieval framework of Luther's thought, he has written paragraphs like this, where a new idea of liberty is at stake. This is the kind of source that philosophical receptions of Luther-particularly Hegel-refer to. In other words, the philosophical course of Lutheranism relates mainly to subjectivism as its central feature, which considers our experience of God to be the fundamental aspect of religion to the extent that we are guided by the spirit.12 Meanwhile, an example of the second position can be found in Luther's Catechism. There, he insists that plain people should learn not through scriptures but through some simple means such as the Creed: "In the first place let the preacher above all be careful to avoid many kinds of or various texts and forms of the Ten Commandments, the Lord's Prayer, the Creed, the Sacraments, etc., but choose one form to which he adheres, and which he inculcates all the time, year after year. For [I give this advice, however, because I know that] young and simple people must be taught by uniform, settled texts and forms, otherwise they easily become confused when the teacher to-day teaches them thus, and in a year some other way, as if he wished to make improvements, and thus all effort and labor [which has been expended in teaching] is lost.
Also our blessed fathers understood this well; for they all used the same form of the Lord's Prayer, the Creed, and the Ten Commandments. Therefore we, too, should [imitate their diligence and be at pains to] teach the young and simple people these parts in such a way as not to change a syllable, or set them forth and repeat them one year differently than in another [no matter how often we teach the Catechism]" (Luther, Small Catechism 313). To put it differently, Luther's conception of religion can be interpreted as being divided into two veins: subjectivist and objectivist. In the former, the sola fides principle tends to prevail, while in the latter, the sola scriptura principle-or, paradoxically, the authority of the Lutheran Church-is the prevailing one. While philosophical views tend to emphasize the subjectivist aspect of Luther's thought, theological orthodoxy relies on the principle of sola scriptura, minimizing the scope of the principle of sola fides. Thus, Luther's duality explains the diverse receptions of his work: from the Biblicism of the theological orthodoxy of the sixteenth century through the seventeenth century to the philosophical recreations of Schleiermacher, Hegel, and Schopenhauer. This is a fundamental issue regarding the reception of Luther's thought. For philosophical receptions of Luther (particularly Hegel's), the fundamental aspect is represented by the principle of sola fides (and the liberty for individuals that this entails), but they have the problem of integrating this principle with the "objectivist" dimension of Luther's thought: the conceptions of sola scriptura and of church. Meanwhile, the theological receptions of Luther have the opposite problem, that is, how to coherently relate the sola scriptura principle with the spiritualist domain of the principle of sola fides.
Within this framework, I will mainly refer to the principle of sola fides, since it is fundamental to the various philosophical receptions of Luther's thought. Regarding James, I take two essential ideas: self-surrender (one of the expressions more frequently used in Varieties), and the idea of the split between moral and religious experiences. In Luther's thought, the latter idea is clearly expressed in his distinction between justification through works and justification through faith. In other words, for Luther, doing a morally right act means nothing without faith. Faith comes first as God's gift, and as a consequence of having faith, we act correctly. Thus, there is, in my view, a clear distinction between religious experience and moral experience, since the latter presupposes the former.
One aspect of the named tension, which should be highlighted, relates to the modernity of Luther's thought: Does he mark the outset of the modern era? Does his conception of religion belong to an authoritarian and ecclesiastical culture? (Troeltsch, Protestantism and Progress 10). The point I would like to emphasize is that there are grains of Luther's thought that could be interpreted as the seeds of modern and contemporary ideas. In the frame of this article, and having in mind James's conception of religion, I follow Troeltsch's insight of Protestantism and Progress, specifically his notion of Luther's "miracle of the idea": "He needed for the personal life something purely personal. The means was therefore faith, sola fides, the affirmation, by the complete surrender of the soul to it, of that thought of God which has been made clear and intelligible to us in Christ" (192). Troeltsch's conception is relevant to my paper for two reasons: first, he highlights as its most relevant feature the subjectivist element of Luther's thought; second, this subjectivist element can conduct us from Luther to James, that is, one can appreciate James's role in the reception of Luther's thought, as I will show below.
II. Luther at the Core of Varieties
James's conception of religion has several characteristics: first, its defense of the individual right to believe despite living in a time of secularization and growing atheism, particularly in academic circles; second, a pluralistic view on religion which holds that the divine we believe in is based on our inclinations and preferences; third, like his contemporary Durkheim, James has drawn a kind of distinction between the sacred and the profane.13 Two aspects regarding what I call James's Lutheranism are of the utmost significance: first, the relevance of linking institutional and conceptual Lutheranism within James's thought; second, the direct influence of Luther's work on James's conception of religion. The first one will be considered in the next section. Thus, I will refer now to the influence of Luther on James's thought, specifically in Varieties.14 To put it directly, my hypothesis is that James's Lutheranism has two aspects: first, his clear distinction between morality and religion; and second, his distinction between faith and the content of faith.15 Regarding the first aspect, it can be unambiguously found in some passages of Varieties, for example, when James writes: "And here religion comes to our rescue and takes our fate into her hands. There is a state of mind, known to religious men, but to no others, in which the will to assert ourselves and hold our own has been displaced by a willingness to close our mouths and be as nothing in the floods and waterspouts of God. In this state of mind, what we most dreaded has become the habitation of our safety, and the hour of our moral death has turned into our spiritual birthday" (Varieties 46). In this paragraph, James sharply distinguishes between religion and morals. But there is yet another element, namely that James's conception focuses not only on the idea that religion shows us a dimension that morality cannot reach, but also on religious experiences as being sometimes the result of an anti-moralistic method: "Under these circumstances the way to success, as vouched for by innumerable authentic personal narrations, is by an anti-moralistic method, by the 'surrender' of which I spoke in my second lecture. Passivity, not activity; relaxation, not intentness, should be now the rule" (Varieties 95; emphasis added). Two Jamesian elements can be pointed out in the previous paragraph: first, that morality never cures as religion does. In other words: even the best suffer for their weakness, and their vitality is like a shadow destined to die; second, that sometimes passivity-and not activity-is the key to solving our spiritual problems.
For my purposes, one point to highlight is that this passivity or quietism is explicitly connected with Luther's view on religion: "On the whole, one is struck by a psychological similarity between the mind-cure movement and the Lutheran and Wesleyan movements. To the believer in moralism and works, with his anxious query, 'What shall I do to be saved?' Luther and Wesley replied: 'You are saved now, if you would but believe it'" (Varieties 94). The scope of quietism has often been discussed in the literature about Luther and Lutheranism.16 In other words, has Luther promulgated a religion based only on an individual that "surrenders"? Regarding the individual's role, is it active only to the extent that he affirms or negates the call of grace? Different answers can be given to those questions. Usually, some courses of Lutheranism attempt to overcome the interpretation that attributes quietism to Luther's conception of religion, whose central exponent is Ernst Troeltsch. In terms of the German author: "[I]t does not become really intelligible until we see that his basic idea is that of grace as the gift of God, which objectively precedes and implies everything else, and that this divine grace is only obscured by human effort. Thus this stress upon free grace and human impotence leads Luther into an emphasis upon spiritual freedom and abandonment, which merges almost imperceptibly in a kind of Quietism" (Social Teaching 497). Like Troeltsch, I think that Luther's conception of grace leads us to quietism. For my purposes, however, this is a marginal issue. The relevant focuses are two: first, that James interprets Luther à la Troeltsch, that is, attributing quietism to his conception of religion; second, that James uses this interpretation to ground his own view on religion. To understand both focuses, it is necessary to introduce some core issues of Varieties.
It is well known that three types of religious mentalities are presented in Varieties: the healthy, the sick soul, and the twice-born. Though radically different, these mentalities share a feature: they are intelligible only in terms of faith (or through justification by faith) and not of justification by works (in Lutheran terms).
For James, healthy souls are those that cannot perceive evil in the world. Moreover, in extreme cases, "in some individuals optimism may become quasi-pathological" (Varieties 75). What relationship does this bear with Lutheranism? They are not linked at a doctrinal level but through justification by faith, the mechanism proposed by Luther as fundamental. Put differently, the core of this mechanism is not action but our acceptance of God's grace. In James's terms: "It is but giving your little private convulsive self a rest, and finding that a greater Self is there" (Varieties 96). This giving up is overtly associated with Lutheran theology: "Give up the feeling of responsibility, let go your hold, resign the care of your destiny to higher powers, be genuinely indifferent as to what becomes of it all, and you will find not only that you gain a perfect inward relief, but often also, in addition, the particular goods you sincerely thought you were renouncing. This is the salvation through self-despair, the dying to be truly born, of Lutheran theology, the passage into nothing of which Jacob Behmen writes" (Varieties 96). Despite James's criticism of the shortcoming of healthy souls-which I will refer to below-they show us central features of human nature, that is, that one needs at least a dose of healthy character for survival. Moreover, in their religious aspect, these souls are fundamental in order to help us grasp the essence of religion, namely the faith-state, in James's words. He describes these features as follows: "The systematic cultivation of healthy-mindedness as a religious attitude is therefore consonant with important currents in human nature, and is anything but absurd. In fact, we all do cultivate it more or less, even when our professed theology should in consistency forbid it. We divert our attention from disease and death as much as we can; and the slaughter-houses and indecencies without end on which our life is founded are huddled out of sight and never mentioned, so that the world we recognize officially in literature and in society is a poetic fiction far handsomer and cleaner and better than the world that really is" (Varieties 80). This mentality is of fundamental importance for my argument, as it shows us one crucial issue: what is at stake is not the content of religious doctrines but the mechanism that allows religious experience to rise. This mechanism is based on the anti-moralistic, Lutheran method of the surrender of the self, which is exemplified in the following paragraph: "Martin Luther by no means belonged to the healthy-minded type in the radical sense in which we have discussed it, and he repudiated priestly absolution for sin. Yet in this matter of repentance he had some very healthy-minded ideas, due in the main to the largeness of his conception of God" (Varieties 110). In other words, Luther's healthy mind does not relate to the doctrinaire level, but to his idea as to how the relationship between the individual and God works, that is, through the method of self-surrender. This necessarily entails the split between religious and moral experience, since the latter is grounded on the former.
Sick souls, on the other hand, perceive only evil in the world, a perception which is acutely felt, as pointed out by James: "Not the conception or intellectual perception of evil, but the grisly blood-freezing heart-palsying sensation of it close upon one, and no other conception or sensation able to live for a moment in its presence" (Varieties 135). James sustains that this variety of religious experience is richer than the first one, in the sense that it allows us to comprehend more features of the world. Not only is this mentality presented as compatible with Luther's thought, but it is described by James by quoting Luther. Regarding the actuality of pessimism, the essential mark of this mentality, James writes: "The completest religions would therefore seem to be those in which the pessimistic elements are best developed. Buddhism, of course, and Christianity are the best known to us of these. They are essentially religions of deliverance: the man must die to an unreal life before he can be born into the real life" (Varieties 138). But more specifically, James maintains that Protestant theology perfectly fits this kind of religious temperament: "no works it can accomplish will avail. Redemption from such subjective conditions must be a free gift or nothing, and grace through Christ's accomplished sacrifice is such a gift" (Varieties 198-99). After this statement relating Protestant theology to sick souls, James quotes a long paragraph from Luther's Commentary on the Galatians, and he concludes with the following words: "That is, the more literally lost you are, the more literally you are the very being whom Christ's sacrifice has already saved. Nothing in Catholic theology, I imagine, has ever spoken to sick souls as straight as this message from Luther's personal experience" (Varieties 199-200). From a theological perspective, the message in this paragraph is of remarkable importance. However, what I want to stress is not a theological but a methodological issue, namely that James shows us once more that religious experiences are grounded on an anti-moralistic method, that is, giving up of the self and trust in the divinity.
The last mentality presented by James in the Varieties is the twice-born. It is the most complete of all religious souls because they have seen both good and evil, that is, both sides of the abysm, and they have recovered through a process that James calls redemption: "The process is one of redemption, not of mere reversion to natural health, and the sufferer, when saved, is saved by what seems to him a second birth, a deeper kind of conscious being than he could enjoy before" (Varieties 131). When James deals with these souls, he does not recur to Luther's works. However, from a logical point of view, he uses the same anti-moralistic Lutheran method to explain the way these souls should be conceived of: "It may come gradually, or it may occur abruptly; it may come through altered feelings or through altered powers of action; or it may come through new intellectual insights, or through experiences which we shall later have to designate as 'mystical.' However it comes, it brings a characteristic sort of relief; and never such extreme relief as when it is cast into the religious mould. Happiness! Happiness! Religion is only one of the ways in which men gain that gift. Easily, permanently, and successfully, it often transforms the most intolerable misery into the profoundest and most enduring happiness" (Varieties 146). Thus, for twice-born as well as for healthy and sick souls, the most important feature is their ability to find a vital sense putting themselves aside and resting on a greater Self. I have just depicted the main features of James's religious mentalities and have shown how they share a crucial aspect, namely the split between morality and religion. But these mentalities share another essential feature: passivity. How does this explain the strenuousness that James sometimes attributes to moral agents and religious geniuses? I think that the analogy with Luther's view is enlightening. I have said above that, for Luther, moral experiences depend on basic and previous religious experiences. It is the same for James. There exists a religious passivity that could give way to strenuous moral actions. But passivity emerges first and, in James, by the anti-moralistic method of self-surrender. In other words: it is not through good works that one reaches God (or salvation) but through faith, and this can be the origin of a strenuous and fertile moral life.17 All religious experience is therefore radically different from moral experience. I have called this "James's Lutheranism" because the American philosopher grounded this conception on Luther's thought, particularly in a methodological way: the method of the surrender of the self.
Regarding belief and faith, James takes another fundamental step in Varieties, that is, he openly separates the intellectual content of faith from having faith. Unsurprisingly, James recurs to Luther to ground his position: "Faith that Christ has genuinely done his work was part of what Luther meant by faith, which so far is faith in a fact intellectually conceived of. But this is only a part of Luther's faith, the other part being far more vital. This other part is something not intellectual but immediate and intuitive, the assurance, namely that I, this individual I, just as I stand, without one plea, etc., am saved now and forever" (Varieties 200). Faith has two connotations within James's work. In his melioristic account of religion, he conceives an active faith, that is, an energetic faith that helps to produce the desired object. This is crystal clear in The Will to Believe. However, in the mystical approach that I am taking in this paper, faith means something different, that is, to put aside ourselves and our supposed good and natural self-sufficiency to give place to the divinity. In other words, this is a faith related to the idea of self-surrender.
Meanwhile, Luther's conception of religion focuses-through the principle of sola fides-on individual experience as a primordial source of religion, but it conserves a very restrictive way to conceive the content of this experience through the principle of sola scriptura and the authority of the church. On the other hand, James distinguishes between faith and the content of faith. This step is central since it allows setting the core of religious experience within the individual. James's Lutheranism consists in taking one
III. In the Philosophical Path of Lutheranism: James on Religion
Lutheranism unambiguously marked the development of Western philosophy, particularly in Germany. From Leibniz, Kant, and Schleiermacher to Feuerbach, Marx, and Nietzsche, all German philosophy reacted to the work of the great reformer. As philosophical courses of Lutheranism, German philosophers refer mainly to the Lutheran principle of sola fides. One point to stress is how they reacted to Lutheranism, either by professing or rejecting it. I will leave aside those philosophers who rejected Lutheranism-the most radical being Marx and Nietzsche-and will briefly examine the philosopher who paradigmatically took the project of Lutheranism as his own: Hegel. One of the essential features of Hegelian philosophy is its emphasis on inwardness as a constitutive element of religion. For Hegel, Luther as well as Germany accurately understood the essence of the message, and therefore they considered inwardness to be the core of Christianity. In Hegel's words: "The time-honored and cherished sincerity of the German people is destined to effect this revolution out the honest truth and simplicity of its heart. While the rest of the world are urging their way to India, to America . . . we find a simple Monk [Luther] looking for that specific embodiment of Deity which Christendom had formerly sought in an earthly sepulcher of stone, rather in the deeper abyss of the Absolute Ideality of all that is sensuous and external-in the Spirit and the Heart-the heart" (Philosophy of History 518-19).18 Consequently, a central feature of Hegel's appropriation of Luther's thought is his remark on the pre-eminence of inwardness and the principle of sola fides. But Hegel was a Lutheran who, in spite of firmly opposing the principle of sola scriptura and orthodox theology, defended the authority of the Lutheran Church. In his Philosophy of History, for example, he sustains that the Kingdom of the Spirit starts with Reformation and, more relevant to my purposes, that the Lutheran Church has the mission to instill universality and freedom into the world. Moreover, Hegel maintains that liberty and the Lutheran Church are two sides of the same coin: "Truth; and this subjectivity is the common property of all mankind. Each has to accomplish the work of reconciliation in his own soul. . . . Thus that absolute inwardness of soul which pertains to religion itself, and Freedom in the Church are both secured. Subjectivity therefore makes the objective purport of Christianity, i.e.
the doctrine of the Church, its own" (Philosophy of History 520). The colossal magnitude of Hegel's philosophical project can be interpreted as an attempt to close the cycle that Luther had opened when undermining the authority of the Catholic Church. However, the vertiginous course of post-Hegelian European philosophy illustrates the great difficulties that such a program had: Lutheranism started to disintegrate, among other reasons, between Nietzsche's sharp darts and Kierkegaard's existentialism. Particularly Kierkegaard prefigures the destiny of Lutheranism in James's conception of religion.19 Thus, Hegel's endeavor to reconcile inwardness and the sola fides principle with the necessary universal authority of the Lutheran Church fails. However, his unrivaled portrait of Luther's profound transformation of religion remains, that is, the rejection of external authority and the subsequent primacy of individuals through the principle of sola fides and inwardness. Then, Hegel's best legacy is his conception of Luther as a religious genius.
Meanwhile, in America, the relationship between philosophy and religion followed a very different trail. Protestantism had been essentially Calvinist since Colonial times. Within this context, American Lutheranism had an unimportant philosophical impact and produced very limited theological developments, as sustained by the contemporary Lutheran theologian Martin Marty.20 In America, then, philosophy arises mainly from the reception of commonsense Scottish realism (Emerson being perhaps the most notable exception). Scottish realism was also changing its face to the extent that science (especially Darwinism) was undermining the theoretical framework of a biblical culture. It was within this context that James wrote on religion. In America and Germany, therefore, very different paths were followed that resulted in the marriage between Protestantism and philosophy: while the former consisted of a philosophical realism tied to Calvinism, the latter was an idealistic project grounded on the universal mission that Lutheranism had.21 Accordingly, within the American cultural scene, Scottish realism, Darwinism, and liberal theology formed the context where James's philosophy developed.22 Some interpreters infer that James's philosophical (and religious) views were shaped by them. However, to understand James's conception of religion as endorsing a kind of liberal theology is, in my view, deeply mistaken.23 One of the main targets of James's attack was the so-called liberal Christianity or liberal Protestantism: "The advance of liberalism, so-called, in Christianity, during the past fifty years, may fairly be called a victory of healthy-mindedness within the church over the morbidness with which the old hell-fire theology was more harmoniously related. We have now whole congregations whose preachers, far from magnifying our consciousness of sin, seem devoted rather to making little of it. They ignore, or even deny, eternal punishment, and insist on the dignity rather than on the depravity of man. They look at the continual preoccupation of the old-fashioned Christian with the salvation of his soul something sickly and reprehensible rather than admirable; and a sanguine and 'muscular' attitude, which to our forefathers would have seemed purely heathen, has become in their eyes an ideal element of Christian character" (Varieties 91). Although, in the previous paragraph, James is not judging but only describing the situation of Protestantism, he is very critical of the idea of religion being absorbed into morality, that is, he does not agree with the Pelagian ethos that Protestantism had within the second part of the nineteenth century because it does not take account of the morbid side of religious experiences.
Meanwhile, Darwinism-which is another source usually mentioned by James-is useless within this context because science and religion address dissimilar human needs (see note 13). Finally, Scottish realism (the last constituent of the cultural scene in nineteenth-century America that I will refer to) is worthless to understand sick souls, and it was early rejected by James because of its atomism.24 Within this common framework, I think there is an insightful way to grasp James's conception of religion from a different perspective. In other words, despite the enduring attacks carried out by James against Hegel (and Hegelianism), they share a fundamental view: Luther is a religious genius who has transformed our conception of what a religious experience is, on the one hand; and this transformation is grounded on the idea of inwardness and the principle of sola fides, on the other hand.
Consequently, what I call Hegel's best legacy-to conceive Lutheran religion as accurately finding the essence of Christianity in interiority-finds its highest radicalism in James's view on religion, since both depict Luther as the most important modern religious genius. James is as explicit as Hegel concerning this issue: "Just as romantic love seems a comparatively recent literary invention, so these experiences seem to have played no great part before Luther's time; and the best way to indicate their character will possibly be to draw a contrast between the inner life of ourselves and of the ancient Greeks and Romans" ("Reason and Faith" 200; emphasis added).25 From the essential words of this paragraph, that is, the inner life of ourselves, it can be inferred that James thinks of modern individuals-like himself-as following the path opened by Luther. Within this course, Hegel was halfway between sola fides and the authority of the Lutheran Church. James, on the other hand, unfolds the decisive consummation of Luther's individualism: "The pivot round which the religious life, as we have traced it, revolves, is the interest of the individual in his private personal destiny. Religion, in short, is a monumental chapter in the history of human egotism. The gods believed in-whether by crude savages or by men disciplined intellectually-agree with each other in recognizing personal calls. Religious thought is carried on in terms of personality, this being, in the world of religion, the one fundamental fact. Today, quite as much as at any previous age, the religious individual tells you that the divine meets him on the basis of his personal concerns" (Varieties 491). However, Luther's religious genius consists not only in giving rise to the individualization of religion but-as insightfully pointed out by James-in describing a very particular kind of religious experience: "Luther was the first moralist who broke with any effectiveness through the crust of all this naturalistic self-sufficiency, thinking (and possibly he was right) that Saint Paul had done it already. Religious experience of the Lutheran type brings all our naturalistic standards to bankruptcy. You are strong only by being weak, it shows" (Pluralistic Universe 137). Looking at Varieties through this paragraph of A Pluralistic Universe, one can grasp Luther's awesome relevance to focus on a fundamental current nucleus of religion: the morbid side of existence that James calls "the sick souls." This invention-in terms of James-not only makes Luther our contemporary, but it allows us to infer that he is not only a common religious genius, that is, a religious genius such as George Fox, St. Teresa, or St.
Ignatius Loyola.In other words, Luther is a religious genius who originated the contemporary way to conceive religion, for two reasons: on the one hand, he was the first to buck the naturalistic trend in religion; on the other hand, since __s __n __l lc Pluralist 10_1 text.indd97 11/4/14 3:00 PM the pluralist 10 : 1 2015 Luther's life and work, we have had a radically new approach to religious experiences, that is, we have to take sick souls seriously.Summing up, religious inwardness shows us a philosophical course that can be drawn from Luther to Hegel and James.Hegel, however, is still trapped in the Lutheran-universalist-conception of church, while James puts aside rituals and doctrines as central elements of religion and focuses on feelings, acts, and experiences.This Jamesian move let us conceive contemporary creeds as contingent expressions of religious experiences and, for this reason, more suitable for pluralistic societies.
Thus, if we conceive the sola fides principle as the fundamental legacy of Lutheranism, James could be interpreted as a Lutheran who gave the coup de grâce to the last objective remains of orthodox Lutheranism: the authority of the Church. This could be seen as a decisive step in the history of philosophical receptions of Lutheranism. Consequently, James asserts a distilled Lutheranism coherently grounded only on the principle of sola fides, without any connection with the objective aspect of Luther's thought; that is, by taking seriously the role of the individual before the divinity, James becomes one of the most radical representatives of the Lutheran principle of sola fides. If one examines philosophical receptions of Luther, the Jamesian is one of the first to really leave the individual alone before the divinity, that is, to explicitly avoid any kind of necessary mediation. In James's words: [T]he individual transacts the business by himself alone, and the ecclesiastical organization, with its priests and sacraments and other go-betweens, sinks to an altogether secondary place. The relation goes direct from heart to heart, from soul to soul, between man and his maker. (Varieties 29)
Conclusion
Not only did Luther's conception of religion change religious thought forever, but it also had an enormous impact on Western philosophy.26 For philosophy, this Lutheran impact is centrally owed to the principle of sola fides, as I have attempted to show. German philosophy, particularly, is impossible to understand without a Lutheran background. However, I think it was not in Germany but in America where the philosophical course of Lutheranism reached its maximal radicalism and coherence. It was William James who, putting aside doctrines and rites as the elementary forms of religion, established the central importance of the principle of sola fides.
In the present article, I have defended three theses about James's conception of religion: first, that it was markedly influenced by Luther's thought; second, that it presents a dual Lutheranism (the split between religion and morality grounded on the method of self-surrender/the distinction between the content of faith and having faith); and third, that it represents one of the most radical interpretations of Luther's thought, that is, the primacy of Luther's conception of faith against his idea of church and sola scriptura. These theses have a different scope. If I am right about the first one, we should consider what consequences James's dual Lutheranism implies: the split between religion and morality is nothing but to give a new form to the principle of the great reformer, and only faith and grace capture the essence of religion. Meanwhile, the distinction between faith and the content of faith gives James's Lutheranism its particularity, namely the radicalization of Luther's best intuition that only faith captures the essence of religion, while disregarding its articulation in creeds as only a contingent aspect. This is the cardinal importance of James if we consider his work as a philosophical course of Lutheranism. In other words, the distinction between faith and the content of faith implies the definitive victory of the principle of sola fides against the principle of sola scriptura. Thus, through James's work, the nails of Wittenberg reverberate in the streets of New England.
Notes
I finished this paper during my visit to the Department of Philosophy at Texas A&M University (College Station) (December 2013-March 2014), where I was supported with a fellowship from CONICET (Argentina). I thank my host, Gregory Pappas, and especially Professor John McDermott for useful discussion on the draft of this article. The language was revised by Rita Karina Plasencia.
1. Quoted in Hockenbery Dragseth v.
2. I do not attempt to deal here with James's own religious experiences and needs. My purpose is to examine James's Varieties from a conceptual point of view and to establish its connections with Luther and Lutheranism.
3. James explicitly identifies himself with Protestant traditions in several works. In The Will to Believe (16), for example, he says: It is evident that unless there be some pre-existing tendency to believe in masses and holy water, the opinion offered to the will by Pascal is not a living option. Certainly, no Turk ever took to masses and holy water on its account; and even to us Protestants these means of salvation seem such foregone impossibilities that Pascal's logic, invoked for them specifically, leaves us unmoved.
See also Varieties 265.
4. In Hollinger's words: Such cases "in the annals of Catholic saintship," he [James] says explicitly, "make us rub our Protestant eyes." There is no question who the "we" or the "us" is whenever James invokes these portentous pronouns. What casts James's treatment of Fox and other canonical Protestants into bold relief, then, is not so much his fleeting use of Muslim and Jewish cases but the sustained treatment he gives to Catholics. One can easily get the impression that Varieties is a non-individual harvest of the most intense spiritual moments of all the major religions, especially of Christianity, embracing Catholic as well as Protestant variations. But no. Varieties is constructed to foreground certain religious sensibilities and not others, and to present the core of religion in general as having been most attractively manifest in exactly the cultural tradition to which James's listeners and readers were directly heir. (13)
5. Ralph Barton Perry (257), for example, writes: "But while James identifies religion with certain specific experiences revealed, he did not identify religion with any particular creed." A similar interpretation can be found in Levinson (21).
6. This is the interpretation of some important scholars, too. Hollinger (11), for example, writes that: James's ostensibly species-wide account of religious experience is deeply Protestant in structure, tone, and implicit theology. Even the categories of religious experience around which Varieties is organized, and the order in which James describes them, have this quality. As theologian Reinhold Niebuhr and others have pointed out, James, by moving from "healthy-mindedness" to the "sick soul" to the "divided self" to "conversion" and then to "saintliness," follows the prescribed sequence of the evangelical Protestant conversion narratives.
Meanwhile, Julius Rubin (18) points out the Protestant features of James's religious thought: "Although ostensibly a general work on religion, James's essays actually explored Protestantism, in particular, the forms of religious experience found within early nineteenth-century New Englander spiritual biographies."
7. Also Gale (256) and Myers (472) sustain that the core of Varieties rests on the notion of self-surrender.
8. Other paragraphs of Spohn's article show his deep understanding of James's conception of religion: "James interpreted his experience of depression and deliverance in this pattern, even though it evoked a faith that was not necessarily Christian. He acknowledged that Protestant theology, particularly Luther's, has 'admirable congruity' with his account of the transformation of 'the sick soul'" (37). Moreover, Spohn stresses (33) the centrality of Christianity for James's thought: "The central symbol of the Christian tradition, the death and resurrection of Jesus Christ, as interpreted through Evangelical Christianity, lurks behind the language of a theory that claims to transcend traditional symbols and doctrines." Finally, Spohn identifies (33) a key element underlying James's approach to religion: "However anonymously, James stands within the tradition of Augustinian piety that dominates American religion."
9. Hinlicky (25) writes: Another, more specific clue to the theological dead-end to which the method of "enthusiasm" comes is the schizophrenic treatment accorded to Luther in Varieties. Luther appears in these pages as an example of healthy "mind cure" (Varieties 107-08). But this supposed mile marker on modern culture's road to religious inwardness could also be classified with the sick soul's crisis of "self-surrender" (Varieties 211), as James describes the sick old man Luther who "looked back on life as if it were an absolute failure" (Varieties 137). Although I disagree with Hinlicky's view that James's dealing with [. . .] on James's thought. I really appreciate the anonymous reviewers' observation that I should include Hinlicky's book in my article.
10. I am fully aware that there are several possible perspectives on James's view of religion and that they depend on what one considers to be the core of his thought. Those forms have been labeled Prometheanism (McDermott, Introduction xix) and mysticism. The first is clearly exemplified in Richard Gale's interpretation: When James was in his healthy Promethean frame of mind he tingled all over at the thought that we are engaged in a Texas Death Match with evil, without any assurance of possibility of victory. This possibility forms the basis of his religion of meliorism. But there is a morbid side to James's nature, a really morbid side, that "can't get no satisfaction" from the sort of religion that his pragmatism legitimates. In order to "help him make it through the night" he needs a mystically based religion, which gives him a sense of absolute safety and peace that comes through union with an encompassing spiritual reality. (16) An opposing way to understand James's conception of religion can be seen in Michael Slater ("Metaphysical Intimacy" 122): The third and final criterion, moral helpfulness, denotes the values of a belief or experience for the moral life, and in the context of James's studies of religious experience this may very well be the most important of the three. In particular, James thinks that it is the good dispositions (VRE 24-26) produced by religious and mystical experience which provide us with the best indicators of their truth, though such dispositions are valuable in their own right on moral grounds. Also, James's Pragmatism (144) helps to support Slater's interpretation. Two different conceptions of faith are at stake here: James's mystical aspect is grounded on a passive concept of faith; meanwhile, in his Promethean facet, faith is an active, or strenuous, human endeavor linked with morals. What role does Varieties play within James's thought? I think there is a tension in the book between the melioristic and the mystical sides. The latter, essentially through the idea of self-surrender, is fundamental to what I conceive as the core of the book, that is, the description of the three religious mentalities. Meanwhile, the former side plays a central role in the conclusion through Leuba's notion of use (Varieties 399). It could therefore be thought that in Varieties, the tension between the melioristic and mystical sides of James's philosophy of religion finds its utmost expression. I ground my interpretation on the mystical side of James's conception of religion.
11. Gerhard Ebeling (21) has clearly drawn this distinction between theological orthodoxy and philosophical re-creations of Luther's thought: The period of Protestant orthodoxy modified the view of him [Luther] as a prophet, emphasizing more his restoration of pure doctrine. A section de vocatione Lutheri actually occurs as a part of dogmatic theology in some works of early Protestant orthodoxy. Pietism made a similar appeal to Luther, but it distinguished between Luther in his earlier and later years. A contrast was drawn between the period when he was supposed to have become embittered by his opposition to the enthusiastic sects and by disputes within Protestantism, if not actually inclined to a certain extent to return to Catholicism, and the early period when the gospel of repentance and grace, still pure and close to the devotional literature of German mysticism which was so highly regarded by pietism, remained clearly visible. The Enlightenment [. . .] as its pattern and example, emphasizing the freedom of conscience which he had made possible. . . . [T]he dominant view, inspired by denominational and idealistic motives, regarded Luther as the forerunner of the modern period.
12. Several philosophers have emphasized that the sola fides principle should prevail over the sola scriptura principle (Leibniz, Lessing, and Kant, among others). A detailed examination of this issue can be seen in Ginzo Fernández.
13. I have written an article relating James and Durkheim: see Viale, "Sacred/Profane."
14. Although references to Luther are not important within The Correspondence of William James, and they are made mainly in an informal and not conceptual way, there is a letter to Leuba dated 1904 (396), where James writes the following: Religious men largely agree that this sense has been that of their "best" moments, best not only when passing, but when looked back upon. The notion of it has leaked into mankind from their authority, the rest of us are imitative, just as we are of scientific men's opinions. We live by Luther's just as we do by Newton's suggestions, and find ourselves helped by both. I will argue, therefore, that James conceives of Luther as one of the most paradigmatic models of religious geniuses.
15. Although I ground my argument on Varieties, it is important to highlight that as early as 1884, James entertains a clear distinction between morality and religion. Analyzing, for example, The Literary Remains by Henry James, Sr., he endorses the following paragraph: "It shows the depth of Mr. James's religious insight that he first and last and always made moralism the target of his hottest attack, and pitted religion and it against each other as enemies, of whom one must die utterly, if the other is to live in genuine form. The accord of moralism and religion is superficial, their discord radical" (James, Essays in Religion and Morality 62-63; emphasis added). James Pawelski (22) stresses this difference too: James's distinction between moralism and religion can be succinctly described as referring to two different ways individuals can accept the universe. James argues that, although both moralists and religionists have a finite place in the order of things, they respond to their finitude in different ways. Moralists accept it only because they must. Their muscles are tensed in struggle against the order of things. Religionists, on the other hand, accept, welcome, and even love the universe and their finite place in it. Moralists respond willfully; religionists, passionately. Paradoxically, however, the result of these responses is that moralists are on the defensive and find it harder to act in the world, whereas religionists are free to be aggressive in their active relations with the world.
16. Reinhold Niebuhr (187-88), for example, has accused Luther of quietism: "Sometimes he lapses into mystical doctrines of passivity or combines quietism with a legalistic conception or the imputation of righteousness. 'Without works' degenerates into 'without action' in some of his strictures against the 'righteousness of works.'" A critical analysis of this position can be seen in Brent W. Sockness.
17. A paragraph by Douglas Anderson (9) could be interpreted in a similar way: The self-surrender or submission, ironically, leads to a rebirth. The converted soul feels empowered in ways that she or he had not previously experienced. . . . [T]he difference from the free power and agency felt by the healthy-minded is that the personal power now has a clearly self-transcendent source: a living ideal or power. The converted soul finds herself or himself living in a wider life where things can be seen more clearly.
18. Hegel (519) writes as follows: "Luther's simple doctrine is that the specific embodiment of Deity-infinite subjectivity, that is true spirituality, Christ-is in no way present and actual in an outward form, but as essentially spiritual is obtained only in being reconciled to God-in faith and spiritual enjoyment."
19. I owe clarification of this point to Professor John McDermott.
20. Marty (79) clearly states that action was more necessary than theoretical development: The Evangelical empire and the people in it needed guidance and interpretation. To think of this activity as "doing theology" seems confusing to many people. In their model, theology demands great minds. It is difficult to make a case for many religious geniuses on the American scene between Jonathan Edwards and Josiah Royce, William James, or Walter Rauschenbusch over a century later. Among the churchmen, perhaps Horace Bushnell alone is much read a century later. At the fringes of religious life an occasional spirit like Ralph Waldo Emerson attracted notice.
21. Hegel's reception in America (particularly among the St. Louis Hegelians) pointed out the relevance of religion, but generally the Lutheran framework of the German philosopher was disregarded. Concerning this issue, Henry Pochmann (26), for example, writes the following: When the St. Louis Movement began, the difference between what Parker called the transient and the permanent in religion had not been so sharply drawn nor were they so clearly understood as they are today. The drawing of this distinction between religion itself on the one hand and the expression of religion in doctrines and rites, or the application of religion through institutions, on the other, is one of the great achievements of the nineteenth century.
22. Bruce Kuklick (10) points out this link: If we consider all American religious movements of the nineteenth century, the views of the numerically unimportant Unitarians were such that many Christians would not have considered them co-religionists. Nonetheless, both the Unitarians and the other denominations that spoke for the socioeconomic elites in various parts of the country found a theoretical rationale in the same philosophy: Scottish realism. This school of thought held great interest for those in American Protestantism who were concerned with philosophical problems of religion.
23. This point was very clearly made by Barton Perry (259): "From James's emphasis upon uniqueness of the religious experience, and its specific factual implications, arose his paradoxical relation to contemporary Christianity. Religion of the sort in which he was interested was closer to the simple piety of the evangelical sects than to that of modern religious liberalism." However, to understand James's conception of religion as belonging to liberal Protestantism is a very common mistake within the literature. Examples of this misunderstanding can be found in the works of Hollinger; Porterfield (140-41); and Dorrien (224-25), among others. It is needless to remind you once more of the admirable congruity of Protestant theology with the structure of the mind as shown in such experiences. In the extreme of melancholy the self that consciously is can do absolutely nothing. It is completely bankrupt and without resource [. . .] [b]ut the heart, the emotional part of man's spiritual nature, is recognized as that which can and ought to come into possession of [. . .] basic insights, namely the fact of believing that one is saved as a fundamental ground of religion, putting aside doctrines and rites.
24. I also owe clarification of this point to Professor McDermott.
25. This article was based on a William James conference: "In 1906, while lecturing at Leland Stanford, William James accepted an invitation to contribute to a discussion | 2019-05-05T13:05:26.000Z | 2015-02-05T00:00:00.000 | {
"year": 2015,
"sha1": "0641a0374ecb5b9270cd48527307ffe272dc4a0d",
"oa_license": "CCBYNCSA",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/105202/2/CONICET_Digital_Nro.aaaa9dcd-1ffe-484f-80e4-527c102c5bd1_A.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6860b28c2998d820d764e6c41956002956d6bd05",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
232381094 | pes2o/s2orc | v3-fos-license | Profiles of Peripheral Immune Cells of Uncomplicated COVID-19 Cases with Distinct Viral RNA Shedding Periods
The heterogeneity of the immune response to COVID-19 has been reported to correlate with disease severity and prognosis. Even so, how the immune response progresses along the period of viral RNA shedding (VRS), which determines the infectiousness of the disease, is yet to be elucidated. We aim to exhaustively evaluate the peripheral immune cells to expose the interplay of the immune system in uncomplicated COVID-19 cases with different VRS periods, and the dynamic changes of the immune cell profile in the prolonged cases. We prospectively recruited four uncomplicated COVID-19 patients and four healthy controls (HCs) and evaluated the immune cell profile throughout the disease course. Peripheral blood mononuclear cells (PBMCs) were collected and submitted to a multi-panel flow cytometric assay. CD19+ B cells were upregulated, while CD4, CD8, and NK cells were downregulated, in prolonged VRS patients. Additionally, the pro-inflammatory Th1 population showed downregulation, followed by improvement along the disease course, while the immunoregulatory cells showed upregulation with a subsequent decline. COVID-19 patients with longer VRS expressed an immune profile comparable to those with severe disease, although they remained clinically stable. Further studies of the immune signature in a larger cohort are warranted.
Introduction
The coronavirus disease 2019 (COVID-19) is an ongoing disaster causing a catastrophic loss in lives and socioeconomic well-being worldwide. The clinical presentation varies widely from asymptomatic cases, mild respiratory symptoms, and fever to severe organ failure, septic shock, and death. Critical inflammatory response and acute lung injury, in addition to lymphopenia and cytokine release syndrome, have been reported as critical clinical features of COVID-19 patients, especially among those with comorbidities [1][2][3]. Current policy in several countries involves antibody testing in determining immunity to the SARS-CoV-2 infection, and a study has indicated that 90% of severe COVID-19 patients develop IgG antibodies within the first 2 weeks of symptomatic infection, which coincides with the disappearance of the virus [4]. However, a key question concerns antibodies in asymptomatic and mild disease individuals, as this population may present with low virus-binding antibody titers [5,6].
COVID-19 patients present with heterogeneity of immune response [7]. Understanding the actual physiological and immunological processes along the disease course is crucial to identifying and rationalizing effective treatments and policymaking. While discordance between the severity of clinical presentations and the infectiousness has been brought to public attention, little is known about how the immune cells evolved in mild COVID-19 cases with a prolonged SARS-CoV-2 viral RNA shedding (VRS) period. We identified uncomplicated COVID-19 patients with distinct viral RNA conversion time and evaluated the peripheral immune cells to expose the immune system's interplay. Moreover, the immune cell profile's dynamic changes in the prolonged cases were also evaluated along the disease course.
Subject Characteristics
Four adult patients admitted to a tertiary medical center with a confirmed COVID-19 diagnosis, according to World Health Organization interim guidance and a positive real-time reverse transcription polymerase chain reaction (PCR) examination of throat/nasal swab samples, were included in the study. Patients were discharged in the absence of fever or dyspnea for at least 3 days, improvement in both lungs on radiography, if any, and three consecutive nasal-swab samples plus a sputum sample negative for viral RNA obtained at least 24 h apart. Concurrently, four adults (aged 22, 45, 52, and 56 years, one male) with no travel or contact history and no presenting symptoms were recruited as healthy controls (HCs). Subject demographics and clinical information are shown in Table 1 and Figure 1. All patients presented with no symptoms requiring supplemental oxygenation or critical care, consistent with uncomplicated cases.
Isolation of Peripheral Blood Mononuclear Cells (PBMC)
Venous blood was drawn from subjects to collect mononuclear cells from blood buffy coats through separation using SepMate tubes (STEMCELL Technologies, Vancouver, BC, Canada) according to the manufacturer's instructions. In brief, heparinized blood was diluted two-fold with phosphate-buffered saline (PBS), layered on top of Lymphoprep, and centrifuged at 1200× g for 10 min with the brake on. PBMCs were collected by pouring the supernatant into a 50 mL polypropylene tube, washed twice with PBS, and counted using a hemacytometer with trypan blue (Lonza, NH, USA) to determine cell viability.
Staining and Flow Cytometry Analysis
PBMCs were submitted to a flow cytometry immunofluorescence assay using an Attune NxT Flow Cytometer (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's recommendation. After washing with PBS, Fc receptors were initially blocked using Human FcR blocking reagent (Miltenyi Biotec, Bergisch Gladbach, Germany) for 30 min at 4 °C, followed by cell surface labeling with specific primary antibodies. The following antibodies were adapted into multiple panels: CD4, CD8, CD14, CD11c, CD16, CD19, CD25, CD62L, HLA-DR, CD56, CD45RA, CD45RO, CCR3, CCR5, CCR6, CCR10, CXCR3, and CXCR5 (Thermo Fisher Scientific, Waltham, MA, USA). Results were illustrated as the percentage of positive cells or as the ratio of the mean fluorescence intensity (MFI) of the antibody of interest to that of the isotype control antibody.
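As a concrete illustration of the two read-outs just described (percentage of positive cells and the MFI ratio to the isotype control), a minimal sketch in Python is given below; the simulated intensities and the 99th-percentile gating rule are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def summarize_marker(signal, isotype, gate):
    """Percentage of positive cells and MFI ratio for one marker."""
    pct_positive = 100.0 * np.mean(signal > gate)   # % of events above the gate
    mfi_ratio = signal.mean() / isotype.mean()      # MFI relative to isotype control
    return pct_positive, mfi_ratio

# Hypothetical per-cell intensities for 5,000 events (marker vs. isotype control)
rng = np.random.default_rng(0)
marker = rng.lognormal(mean=2.0, sigma=0.8, size=5000)
isotype = rng.lognormal(mean=1.0, sigma=0.5, size=5000)
gate = np.percentile(isotype, 99)                   # gate set from the isotype control
print(summarize_marker(marker, isotype, gate))
```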
Statistical Analysis
Data from flow cytometry were analyzed using GraphPad Prism (La Jolla, CA, USA). Mean values with standard deviations are presented in the data graphs, and Student's t-test or one-way analysis of variance (ANOVA) followed by the Newman-Keuls post hoc test was performed for statistical analysis. Results with p-values < 0.05 were considered significant.
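For orientation, the following is a minimal sketch of the two omnibus tests named above, using SciPy in place of GraphPad Prism; the group values are hypothetical, and the Newman-Keuls post hoc step (available in Prism but not in SciPy) is left out.

```python
from scipy import stats

# Hypothetical marker frequencies (% of PBMCs) for three groups
hc        = [4.1, 3.8, 4.4, 4.0]
short_vrs = [5.2, 5.9, 5.5, 5.7]
long_vrs  = [7.8, 8.4, 8.1, 7.6]

# Two groups: Student's t-test (equal variances assumed)
t, p = stats.ttest_ind(hc, long_vrs)
print(f"t = {t:.2f}, p = {p:.4f}")

# Three or more groups: one-way ANOVA; pairwise post hoc testing follows in Prism
f, p = stats.f_oneway(hc, short_vrs, long_vrs)
print(f"F = {f:.2f}, p = {p:.4f}")
```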
Results
We first scrutinized the frequency of peripheral immune cells from the four patients as a ratio to the mean of HCs. Peripheral blood mononuclear cells (PBMCs) from patients 01 and 02 were collected on the day of discharge, thus in the recovery (negative VRS) state. In contrast, for patients 03 and 04, PBMCs were isolated on days 14 and 23 of hospitalization, respectively, during the active disease (positive VRS) state. For further analysis of the kinetics of immune cell expression throughout the disease course, two consecutive blood drawings following the first were obtained from patients 03 and 04. PBMCs were collected from patient 03 on days 21 and 34, and from patient 04 on days 29 and 57, of hospital stay, respectively; all were during the active disease state (Figure 1).
An overall profile of the immune cell compartment is presented in Figure 2. We discovered a lower T cell population and a higher B cell population in COVID-19 patients than in HCs. Patient 03 showed a remarkably lower, while patient 04 showed a higher, myeloid cell compartment than HCs and the shorter VRS cases. To further understand the in-depth expression of immune cell subsets, we applied a multi-parameter staining strategy to differentiate at least 35 distinct immune cell subsets (Figure 3). Although our cohort presented with normal lymphocyte counts, CD19+ B cells were notably upregulated in both prolonged VRS cases, while CD3+CD56−CD4+ (CD4) cells and CD3+CD56−CD8+ (CD8) cells were downregulated. Along the extended disease course, evident recovery of T lymphocytes was observed, while B lymphocytes remained higher compared to both the shorter VRS patients and HCs (Figure 4). CD62L+ cells, comprising both naïve and central memory T cells, were downregulated in both the CD4 and CD8 populations of all confirmed patients. Further, CD45RA+-expressing CD4 cells showed brief upregulation and downregulation in shorter and prolonged VRS cases compared to HCs, respectively. The effector CD4 cells, represented by CD62L−HLA-DR−, and the memory CD4 cells, represented by CD45RO+, were prominently upregulated relative to HCs, as expected in the course of infection [9]. However, no noticeable difference was observed in the CD62L−HLA-DR− CD8 cells.
The Th1 subset population, represented by CXCR3+CD3+CD4+ markers, showed unique downregulation in the prolonged cases, followed by improvement along the disease course to a level comparable with the shorter VRS patients and the HCs. Meanwhile, the anti-inflammatory Th2 and Treg subsets, represented by CCR3+CD3+CD4+ and CCR5+CD25+CD3+CD4+ markers, respectively, were upregulated in both the naïve and memory phenotypes in our COVID-19 cohort relative to HCs, suggesting immunoregulation and counter-inflammatory mechanisms [10]. In the prolonged cases, the Treg population further showed an apparent decline as the disease progressed, to a level comparable with HCs, while the expression remained high in recovered patients with shorter VRS.
Overall, the NK cell population was lower in patients with a shorter VRS period than in HCs, dominated by the circulating CD56dim NK subtype. In contrast, the frequency of CD8+CD56+ NKT cells, notably the CD8+CD56+CD3+ NKT CD8 cell population, was higher in the COVID-19 cohort regardless of their VRS period. Both NK and NKT cell populations later decreased along the disease course in patients 03 and 04, while the NKT cell number remained high in patients 01 and 02.
Compared to HCs, CD11c+HLA-DR+ dendritic cells (DCs) were upregulated regardless of the VRS period. The total monocyte number was uniquely downregulated in the patient with the most extended VRS period. These monocytes were primarily of the CD14++CD16− classical monocyte subset (data not shown); thus, downregulation in the most extended VRS period patient was apparent in this subset. The frequency remained relatively steady throughout the disease course. CD14+CD16++ non-classical monocytes were upregulated in both prolonged VRS period patients, while CD14++CD16+ intermediate monocytes were downregulated except in the patient with the most extended VRS period.
Discussion
Transmitted primarily via respiratory droplets, the viral load of SARS-CoV-2 reaches its peak within 5-6 days of symptom onset [11,12]. Released viral RNA is recognized as a pathogen-associated molecular pattern and triggers a local immune response by recruiting macrophages and monocytes, followed by priming of adaptive T and B cells [3]. Recruitment of immune cells, notably lymphocytes, into the airway may explain the lymphopenia commonly observed in peripheral blood counts of patients [13]. Although WBC and lymphocyte counts remained within the normal range in our cohort, patients with a prolonged VRS period displayed notably higher and lower frequencies of B and T cells, respectively. Previous studies have similarly reported a decline in CD4 and CD8 cells in acute moderate or severe COVID-19 cases, which improved during the resolution period [7,14,15]. Dissecting further, HLA-DR+CD4+ cells were downregulated, while HLA-DR+CD8+ cells were upregulated, in COVID-19 patients compared to HCs. Earlier studies have also reported a high frequency of CD38+HLA-DR+CD8+ T cells in uncomplicated COVID-19 cases (compared to healthy controls) and resolved severe cases (compared to severe persistent cases). However, the observation on CD38+HLA-DR+CD4+ T cells was contradictory to our current result [16,17]. Co-expression of CD38 and HLA-DR uniquely represents cell activation in response to viral infection [18], while HLA-DR expression alone reflects a more general T cell activation. Thus, the discrepancy between these studies may not be directly explained.
Dissecting the Th cell phenotypes, we observed an apparently lower CXCR3+ Th1 subset, along with higher CCR3+ Th2 and CCR5+ Treg frequencies, in our COVID-19 cohort. The Th1 cell-polarized response is activated by the destruction of airway tissue and parallels previous observations in SARS-CoV and MERS-CoV infections [19]. Moreover, CD4+ cells specific for the SARS-CoV-2 spike protein have been identified in acute infection and have a Th1 cell cytokine profile [20]. Although only scarce evidence is available on other Th cell subsets in COVID-19 cases to date, higher blood plasma levels of the anti-inflammatory IL-6 and IL-10 related to Th2 subsets [21,22], and lower Treg [23], were reported to be associated with patients requiring intensive care in the hospital. We observed an interesting dynamic of Th1 and Treg cell frequencies in our longer-VRS cases. Th1 showed an apparent increase during the active disease and a decline preceding negative conversion of PCR testing. In contrast, Treg showed a time-dependent decline along the disease course. This may point to resolving inflammation and viral clearance.
Lower NK cell counts have been correlated with disease severity [23,24], explaining the apparent decline as the disease progressed in our prolonged patients. However, why the NK cell frequencies, dominated by the circulating CD56dim NK subtype, were also low compared to HCs during the recovery phase of our shorter VRS cases remains unknown. The upregulation of the CD8+CD56+ NKT subset suggested a possible role in antiviral mechanisms, both by its direct cytolytic effect and by indirect activation of antibody-producing B cells [25,26]. Along the disease course, both NKT and NKT CD8 cells showed a further increase, followed by a noticeable decline, indicating possible viral clearance and disease resolution.
In some patients, a dysfunctional immune response triggers a cytokine storm that mediates widespread lung damage. A previous study identified human monocytes as the primary source of IL-1, IL-6, and nitric oxide, the main hallmarks of the event [27]. Conflicting observations on the monocyte profile upon COVID-19 infection are noted. Zhang et al. found significantly increased circulating CD14+CD16+ monocytes in COVID-19 patients, with high enrichment of the intermediate and non-classical subtypes [28]. However, the study did not compare the cohort with a healthy control group. On the other hand, Sanchez-Cerrillo et al. pointed out a substantial decrease in circulating monocytes in COVID-19 patients, with specific enrichment of intermediate and non-classical monocytes in the lungs of patients with severe and critical disease [29]. Moreover, a sudden decrease in monocyte expression of HLA-DR, indicating monocyte dysfunction, immediately preceded progression to severe respiratory failure [30]. Overall, circulating CD14+CD16+ monocyte frequency was notably lower in our most prolonged VRS period patient. The CD14+CD16++ non-classical monocyte subset showed a marked increase in both patients with prolonged VRS, while CD14++CD16+ intermediate monocytes were downregulated in the entire cohort except the patient with the longest VRS, suggesting a correlation with immunopathology. A previous study has reported similar observations in uncomplicated patients compared to HCs [16].
This study has several limitations. First, only a small number of patients representing distinct VRS periods were enrolled in the study, and this may not universally reflect COVID-19 patients. Second, one of our longer VRS patients was notably older than the others, which could have impacted the immune profile and response to COVID-19 infection (reviewed in [31]). It is also worth noting that, in the present study, VRS was detected by PCR assay only, instead of virus isolation. As PCR may detect both viable and non-viable viruses, further studies may adopt more conclusive approaches to determine the viral shedding state.
Conclusions
In the present study, we provided novel contributions to understanding the spectrum and kinetics of immune responses in uncomplicated COVID-19 patients with distinct positive VRS periods. We observed that patients with a prolonged VRS period showed an immune profile comparable to those with severe disease, as observed by lower CD4, CD8, and NK cell frequencies, although they remained clinically stable throughout hospitalization. We also characterized a dynamic of the CXCR3+ Th1 cell proportion, which gradually increased during the acute phase and decreased preceding viral RNA clearance, as well as of the anti-inflammatory CCR3+ Th2, CCR5+ Treg, and NKT cells, notably the NKT CD8 cells, which showed a high frequency in the acute period with a subsequent decline along with disease resolution. While our data indicate that a unique inflammatory signature is associated with different viral RNA clearance status, further studies should involve a larger cohort to define the value of these immune cell signatures as predictive biomarkers.
Informed Consent Statement: Informed consent was obtained from participants involved in the study.
Data Availability Statement: All data generated or analyzed during this study are included in this published article. | 2021-03-29T05:23:33.102Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "5a5a96257c2379bd36a58439e275d65ff535bbff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/13/3/514/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a5a96257c2379bd36a58439e275d65ff535bbff",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255808140 | pes2o/s2orc | v3-fos-license | Strain-based and sex-biased differences in adrenal and pancreatic gene expression between KK/HlJ and C57BL/6 J mice
The ever-increasing prevalence of diabetes and associated comorbidities serves to highlight the necessity of biologically relevant small-animal models to investigate its etiology, pathology and treatment. Although the C57BL/6 J model is amongst the most widely used mouse models due to its susceptibility to diet-induced obesity (DIO), there are a number of limitations, namely that (1) unambiguous fasting hyperglycemia can only be achieved via dietary manipulation and/or chemical ablation of the pancreatic beta cells, and (2) heterogeneity has been noted in the obesogenic effects of hypercaloric feeding, together with sex-dependent differences, with males being more responsive. The KK mouse strain has been used to study aspects of the metabolic syndrome and prediabetes. We recently conducted a study which characterized the differences in male and female glucocentric parameters between the KK/HlJ and C57BL/6 J strains as well as diabetes-related behavioral differences (Inglis et al. 2019). In the present study, we further characterize these models by examining strain- and sex-dependent differences in pancreatic and adrenal gene expression using Affymetrix microarrays together with endocrine-associated serum analysis. In addition to strain-associated differences in insulin tolerance, we found significant elevations in KK/HlJ mouse serum leptin, insulin and aldosterone. Additionally, glucagon and corticosterone were elevated in female mice of both strains. Using 2-factor ANOVA and a significance level set at 0.05, we identified 10,269 pancreatic and 10,338 adrenal genes with an intensity cut-off of ≥2.0 for all 4 experimental groups. In the pancreas, gene expression upregulated in the KK/HlJ strain related to increased insulin secretory granule biofunction and pancreatic hyperplasia, whereas the ontology of upregulated adrenal differentially expressed genes (DEGs) related to cell signaling and neurotransmission. We established a network of functionally related DEGs commonly upregulated in both endocrine tissues of KK/HlJ mice which included the genes coding for endocrine secretory vesicle biogenesis and regulation: PCSK2, PCSK1N, SCG5, PTPRN, CHGB and APLP1. We also identified genes with sex-biased expression common to both strains and tissues, including the paternally expressed imprinted gene neuronatin. Our novel results have further characterized the commonalities and diversities of pancreatic and adrenal gene expression between the KK/HlJ and C57BL/6 J strains as well as differences in serum markers of endocrine physiology.
Introduction
Type 2 diabetes (T2D) is a progressive disease with multiple contributing etiologies, in which diverse genetic backgrounds interact with environmental factors to promote insulin insufficiency, frequently associated with peripheral insulin resistance, and subsequent inability of pancreatic beta cells to compensate [1,2]. Obesity is a major risk factor for type 2 diabetes, and the steep escalation in the global prevalence of both obesity and T2D has been accompanied by emerging evidence of clinically relevant sex-related differences, with males diagnosed at a younger age and with a lower body mass index (BMI), whilst females generally have a higher total fat mass, which is a major risk factor for T2D [3,4]. There are also significant sex-related differences in the incidence of diabetic complications as well as differences in the counter-regulatory adrenal response to hypoglycemia and exercise stress [5][6][7][8]. Additionally, clinical studies have shown that there are a number of important sex-related differences in pancreatic islet function and pathophysiology, as well as the response to treatment regimens [9]. These realizations have emphasized the need for animal models which combine cost-effectiveness with biological relevance in order to further our understanding of the etiology and treatment of diabesity. Rodent models are particularly appropriate because they are plentiful, cost-effective and technically suitable for manipulation. They have also been used to provide detailed analysis of gene expression in relevant target organs using validated microarray analysis. A further advantage is that these models can also be used to study the etiology and treatment of diabetes-associated aspects of behavior such as anxiety, depression and cognitive impairment [10,11].
Experimental diabetic models can be classified into (a) spontaneous or genetically derived, (b) transgenically derived, (c) diet-induced obesity (DIO), (d) chemically induced or (e) surgically induced. In this regard the C57BL/6 J strain is especially well suited for prediabetes research due to its susceptibility to DIO [12]. Even when placed on a standard chow diet, C57BL/6 J mice develop moderately elevated fasting blood glucose and HbA1c levels, as well as attenuated glucose tolerance, compared to a number of inbred strains [1,13,14]. However, overt fasting hyperglycemia (fasting blood glucose levels of ≥250 mg/dL [15]) may only be achieved by dietary manipulation and/or chemical ablation of the pancreatic beta cells [1]. Additionally, heterogeneity exists in the obesogenic effects of hypercaloric feeding, together with sex-dependent differences, with males being more responsive [16,17], a situation which does not accurately reflect the human condition. Notwithstanding, the C57BL/6 J DIO model has been highly successful despite the over-reliance on males within the experimental paradigms [18,19]; however, several alternative strains are available which encompass various advantages and differences [1]. One of these, the KK inbred strain, has been utilized for the investigation of the metabolic syndrome and prediabetes due to their inherent glucose intolerance, insulin resistance and hyperinsulinemia [20]. Initially bred for increased body weight, KK mice are characterized by moderate hyperglycemia and hyperphagia even when maintained on a standard chow diet [21]. The selectively bred KK/HlJ substrain [22] has been used in the study of diabetic nephropathy [23,24], fatty liver disease [25] and corneal degeneration [26]. Fewer studies have addressed sex-dependent differences within this strain, and even less is known about their suitability for behavioral research.
We recently performed a study in which we compared the glucose- and insulin-related physiological characteristics of both male and female KK/HlJ mice with those of the well-characterized C57BL/6 J strain, as well as strain- and sex-dependent differences in diabetes-related behavioral characteristics [27]. Other reports have demonstrated that, in addition to their susceptibility to diabesity and as part of the counter-regulatory response to hypoglycemia, the KK/HlJ strain is prone to albuminuria and age-related vascular mineralization [28]. Morphological analysis of KK mouse pancreatic islets has revealed marked hypertrophy and hyperplasia, with enlargement of the Golgi apparatus and endoplasmic reticulum [29]. Additionally, KK mice display enlargement of the adrenal cortex with hyperplasia of the zona fasciculata and reticularis cells, a more extensive Golgi apparatus and markedly fewer lipid vesicles present [30]. In order to characterize these differences and further our understanding of KK/HlJ glucocentric metabolism, we have now employed microarray genomic analysis [31]. The aim of the present study was therefore to examine the strain- and sex-related differences in pancreatic and adrenal gene expression profiles which might account for the glucose- and insulin-related physiological differences and similarities between the C57BL/6 J and KK/HlJ strains. It is hoped that the investigation will contribute to the future development of biological sex-based and personalized therapies.
Materials and methods
Animals and treatments C57BL/6 J (stock #000664) and KK/HlJ (stock #002106) mice of both sexes were obtained from the Jackson Laboratory (Maine, USA) in two batches aged between 5 and 12 weeks upon arrival, and acclimatized for between 7 and 20 days in a sterile holding facility with veterinary monitoring before transfer to the main animal facility, where they were bred to increase study numbers. At the holding facility all animals were given free access to standard chow (Saudi Grains Organization (SAGO) Riyadh, catalog #1005: 4% crude fat, 20% crude protein, 3.5% crude fiber) and ad libitum water as previously described [27]. All next generation animals were used for the experiments described below. Experimental subjects were housed 3 to a cage (N = 18 per strain and per sex), in a controlled environment (pathogen-free conditions of 12 h light/dark cycle, 22 ± 2°C) with free access to standard chow and water. Food and water intake was measured at 6 and 13 weeks of age by the subtraction method. The care of the animals was in accordance with the protocols approved by the Animal Care and Use Committee of the King Faisal Specialist Hospital & Research Centre.
Glucocentric measurements
Insulin tolerance test (ITT)
Changes in the response to an exogenous insulin challenge were assessed by a random-fed ITT performed at 18 weeks of age. A baseline blood glucose reading was established from arterial blood collected from the tail using a glucometer (Contour Next, Bayer, NJ). An intraperitoneal injection of insulin (Sigma, IL) was administered at a dose of 0.75 U/kg body weight, and whole blood glucose levels were measured at 15, 30, 45 and 60 min after injection as previously described [27]. Assessment of insulin tolerance was made after calculating the area under the curve for glucose (AUCGLUCOSE), the rate of glucose utilization (KITT), and the half-life of glucose levels (T1/2). AUCs were calculated using the trapezoidal rule. KITT, defined as the percentage decline in glucose per minute, was calculated from the natural log (Ln) of the glucose concentrations at times t1 and t2 using the formula KITT = [(Ln(glucose at t1) − Ln(glucose at t2))/(t2 − t1)] × 100. The serum T1/2, defined as the time in minutes required for the glucose concentration to be halved, was calculated as [32]: T1/2 = 0.693/(KITT/100).
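A minimal sketch of these three calculations in Python is given below, assuming glucose readings in mg/dL and using the first and last sampling points as t1 and t2; the function name and example values are illustrative, not data from the study.

```python
import numpy as np

def itt_metrics(t_min, glucose):
    """AUC (trapezoidal rule), KITT (%/min) and T1/2 (min) from an ITT curve."""
    t = np.asarray(t_min, dtype=float)
    g = np.asarray(glucose, dtype=float)
    auc = np.trapz(g, t)                                    # AUC for glucose
    # KITT: percentage decline in glucose per minute, from natural logs
    kitt = (np.log(g[0]) - np.log(g[-1])) / (t[-1] - t[0]) * 100
    t_half = 0.693 / (kitt / 100)                           # first-order half-life
    return auc, kitt, t_half

# Hypothetical readings (mg/dL) at 0, 15, 30, 45 and 60 min after 0.75 U/kg insulin
print(itt_metrics([0, 15, 30, 45, 60], [180, 140, 110, 95, 90]))
```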
Biochemical analysis
At the conclusion of the study (20 weeks of age), blood glucose levels were assessed in 6-h fasted animals using arterial blood collected from the tail [33]. Mice were then euthanized with a mixture of xylazine and ketamine (10 mg/kg and 100 mg/kg, respectively) and blood was rapidly collected from the inferior vena cava and processed for further analysis. Plasma insulin was measured by ELISA (Cat# 90080; Crystal Chem Inc., IL). The homeostasis model assessment of insulin resistance (HOMA-IR) was calculated from fasted insulin and glucose levels according to the formula [34]: HOMA-IR = fasting insulin (µU/mL) × blood glucose (mmol/L)/22.5. Serum glucagon, corticosterone and aldosterone analyses were performed using Crystal Chem Cat# 81518, Enzo Life Sciences kit Cat# ADI-900-097 and Enzo Life Sciences kit Cat# ADI-901-173, respectively, according to the manufacturers' recommendations. ELISAs were used to detect changes in the metabolic hormones leptin and C-peptide, as well as the cytokines IL-6 and TNF alpha, according to the manufacturers' instructions (Mouse Metabolic Magnetic Bead Multiplex assay, Catalog #MMHMAG-44 K; Merck Millipore).
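A small sketch of the HOMA-IR calculation as defined above; the example values and the mg/dL-to-mmol/L conversion step are illustrative assumptions.

```python
def homa_ir(insulin_uU_per_mL, glucose_mmol_per_L):
    """HOMA-IR = fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5."""
    return insulin_uU_per_mL * glucose_mmol_per_L / 22.5

# Glucometers report mg/dL; dividing by 18.0 converts glucose to mmol/L
print(homa_ir(12.0, 160 / 18.0))   # hypothetical fasted insulin and glucose values
```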
RNA isolation
Total RNA was prepared from snap-frozen male and female adrenal and pancreatic tissue using the Qiagen RNeasy Lipid Tissue Mini Kit, Cat #74804 (Qiagen, CA, USA), according to the manufacturer's instructions, and stored at −80 °C, as described previously [35]. This method was slightly modified for pancreatic RNA extraction, according to De Lisle, 2014 [36]. RNA integrity was measured using a 2100 Bioanalyzer instrument and an RNA 6000 Nano LabChip assay (Agilent Technologies, CA, USA). RNA concentrations were determined by absorption at 260-nm wavelength with an ND-8000 spectrophotometer (Nanodrop Technologies, DE, USA).
Microarray gene expression analysis
Gene expression was analyzed using 12 GeneChip® Mouse Gene 2.0 ST arrays representing 26,515 genes, as previously described [35]. To minimize the differences of individual variability and increase the statistical power for the identification of potential biomarkers, microarray analysis was performed using equal amounts of purified RNA pooled from all of the study subjects (N = 18 per treatment group) and applied to 3 identical arrays from the same batch. Targets were prepared from pancreatic and adrenal tissues, and microarrays were processed as described in the Affymetrix GeneChip Whole Transcript Expression Analysis manual using the Ambion WT Expression Kit and the Affymetrix WT Terminal Labeling Kit, as per the manufacturers' instructions. Briefly, approximately 100 ng (adrenal) and 500 ng (pancreatic) of total RNA was used to synthesize double-stranded DNA with random hexamers tagged with a T7 promoter sequence. Arrays were scanned using the Affymetrix 3000 7G scanner and GeneChip Operating Software version 1.4 to produce .CEL intensity files. This software also provided summary reports by which array QA metrics were evaluated, including average background, average signal, and 3′/5′ expression ratios for the spike-in controls, β-actin and GAPDH. Microarray data were deposited at the MIAME-compliant NCBI gene expression hybridization array data repository (GEO: http://ncbi.nlm.nih.gov/geo) under accessions GSE141313 and GSE141310 (expression data from pancreatic and adrenal tissue, respectively).
Quantitative PCR (qPCR) validation of microarray analysis
qRT-PCR was performed on a LightCycler 480 instrument (Roche Molecular Biochemicals, Mannheim, Germany) using the hot-start reaction mix for the SYBR Green I master mix (Roche), as previously described [37]. Amplifications followed the cycling conditions suggested for the LightCycler 480 instrument in the SYBR Green Master Mix handbook (initial activation at 95°C for 5 min; 45 cycles of 94°C for 15 s, primer-dependent annealing temperature for 20 s, 72°C for 20 s). All PCR reactions were performed in triplicate using cDNA synthesized from the same batch and starting amount of total RNA. Primer pairs were synthesized at a local facility in our institution and used at a final concentration of 1 μM. A complete list of the genes and primer sequences is detailed in Supplemental Table s1. Relative gene expression values were analyzed using the 2^−ΔΔCT method [38]. Pearson correlation analysis between qPCR and microarray data was displayed using a scatter plot.
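As an illustration of the 2^−ΔΔCT calculation [38], a minimal Python sketch is shown below; the CT values are hypothetical.

```python
def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCT method.

    dCT  = CT(target) - CT(reference gene)
    ddCT = dCT(treated/strain of interest) - dCT(control)
    """
    d_ct_treat = ct_target_treat - ct_ref_treat
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Example: the target amplifies 2 cycles earlier (relative to the reference
# gene) in one group than the other -> ~4-fold up-regulation
print(ddct_fold_change(22.0, 18.0, 24.0, 18.0))  # 4.0
```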
Data analysis
Statistical analyses were performed using IBM SPSS statistics software version 20 (SPSS Inc., Chicago, IL) as previously described [27,35]. Data were presented as means ± SEM for body characteristics and the Insulin Tolerance Test (ITT). Differential pancreatic and adrenal gene expression analyses were performed using the Partek Genomics Suite software version 6.6 (Partek Incorporated, USA) using samples of either pancreatic or adrenal tissue pooled from mice (N = 18, applied in triplicate) grouped by strain (KK/HlJ or C57BL/6 J) and sex (male or female). The probe set data were categorized and grouped by means of Principal Component Analysis (PCA), and the Robust Multi-Array Average (RMA) algorithm was used for background correction [39] as implemented in the microarray analysis software (MAS). The standard RMA algorithm used the log2-transformed perfect match (PM) values followed by quantile normalization. The transformed PM values were then summarized by the median polish method. Probesets without unique Entrez gene identifiers were removed from further analysis, and values below log 4 were filtered out. For identification of strain- and sex-dependent differentially expressed genes (DEGs) we used a 2-factor design (male KK/HlJ versus male C57BL/6 J; male KK/HlJ versus female KK/HlJ; female KK/HlJ versus female C57BL/6 J; male C57BL/6 J versus female C57BL/6 J) with significance set at p < 0.05. Regulated genes were identified using the False Discovery Rate (FDR) method [40], in which p-values were adjusted simultaneously across multiple subgroup comparisons. The significant and differentially expressed genes were selected by means of a fold-change cut-off (> ±1.4) and FDR-adjusted ANOVA p-value.
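The combined significance and fold-change filtering step can be illustrated with a standard Benjamini–Hochberg FDR adjustment [40], as in the sketch below; this is a simplified stand-in for the Partek pipeline, and the function names are ours.

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR-adjusted p-values."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)
    # Step-up adjustment: p_(i) * n / i, then enforce monotonicity
    ranked = p[order] * n / np.arange(1, n + 1)
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0, 1)
    return out

def select_degs(log2_fc, p_values, fc_cutoff=1.4, alpha=0.05):
    """Flag genes passing both |fold change| > 1.4 and FDR-adjusted p < alpha."""
    q = benjamini_hochberg(p_values)
    passes_fc = np.abs(np.asarray(log2_fc, dtype=float)) > np.log2(fc_cutoff)
    return passes_fc & (q < alpha)
```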
We next selected subsets of DEGs for further analysis which were expressed either in a strain-specific manner irrespective of sex, or in a sex-dependent manner irrespective of strain, using a fold-change cut-off of > ±1.4. Ingenuity Pathway Analysis (IPA) software (Ingenuity Systems, Redwood City, CA) was used to further analyze the functionality of the identified subsets. Genes with known gene symbols according to the Human Genome Organisation (HUGO) and their corresponding expression values were uploaded into the IPA software, where gene symbols were mapped to their corresponding gene objects in the Ingenuity Pathways Knowledge Base (IPKB). To perform functional enrichment tests of the candidate genes, we used IPA for Gene Ontology (GO) term analysis of Biological Function and Diseases pathways, and the Database for Annotation, Visualization and Integrated Discovery (DAVID) for subcellular compartment localization. Networks of potentially interacting DEGs were identified using IPA and placed into node-edge diagrams composed of focus molecules (DEGs identified by the microarray analysis) and other interacting molecules. All edges were supported by at least 1 reference from the scientific literature or from canonical information contained in the IPKB. Nodes were displayed using various shapes representing the gene product, and scores generated by the IPA software represented the significance of the molecules in the network. The analysis also included pathways with intermediate regulators that involve more than one link, to create a comprehensive picture of the possible gene interactions.
Glucose and insulin homeostasis
In addition to significantly heavier body weight together with greater pancreas and visceral adipose tissue weights, impaired glucose homeostasis was apparent in KK/HlJ mice of both sexes, as demonstrated by a greater incremental change in the AUC GLUCOSE during a random-fed Insulin Tolerance Test administered at 18 weeks of age (p < 0.001, Table 1). Conversely, female C57BL/6 J mice had a higher first-order K ITT, shorter glucose T ½ and lower fasting serum glucose concentrations, indicative of a greater insulin sensitivity in these mice (p < 0.05). Both female and male KK/HlJ groups had overt hyperinsulinemia, with 3.8- to 4.8-fold increases in serum insulin and C-peptide levels compared to the C57BL/6 J strain, accompanied by markedly elevated HOMA-IR levels.
Selected serum hormone and pro-inflammatory cytokine analysis
Levels of glucagon and corticosterone were elevated in female mice of both strains, whereas serum leptin and aldosterone were elevated in KK/HlJ mice of both sexes (Table 1, P ≤ 0.0001). Pro-inflammatory serum IL-6 levels were increased in the KK/HlJ strain, significantly (P ≤ 0.05) in females and with a trend towards significance in males, whereas TNFα was mildly elevated only in male C57BL/6 J mice (Table 1, P = 0.015).
Pancreatic gene expression
Affymetrix microarray analysis of sex- and strain-regulated differences in pancreatic gene expression was used in order to gain insight into the mechanisms underlying the observed variances in glucose and insulin homeostasis, as well as hormonal differences. We used a False Discovery Rate and a significance level set at 0.05 to identify 41,345 probesets for further analysis between the 4 experimental groups, i.e., male (M) or female (F), KK/HlJ (KK) or C57BL/6 J (C57). In order to identify strain- and/or sex-specific subgroups, 2-factor ANOVA was applied to the four relevant contrast groups (KK-M v C57-M; KK-M v KK-F; KK-F v C57-F; C57-M v C57-F), resulting in the detection of 10,269 genes with known identities. Genes were considered significant by applying a combined criterion for true expression differences consisting of a > ±1.4-fold change within a given contrast and a P-value < 0.05, and the numbers of DEGs between these contrast groups were depicted in a 4-way Venn diagram indicating the overlapping associations between the groups. In order to further illustrate the differences in pancreatic gene expression between the 4 groups, we used Partek hierarchical cluster analysis of the log-transformed data to generate a heatmap depicting color-coded differences in the expression of the top 80 genes based on row z-scores, using Euclidean distance measurements and the average linkage method for clustering (Fig. 1b). In addition, pancreatic gene expression was further classified by differences in strain-biased expression (Fig. 1c) and sex-biased expression (Fig. 1d) for subsequent analysis.
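The overlaps summarized in the Venn diagram reduce to simple set intersections over the per-contrast DEG lists. A schematic Python sketch follows; the gene identifiers are hypothetical and chosen only for illustration.

```python
from itertools import combinations

# Hypothetical DEG identifier sets for the four contrasts
deg_sets = {
    "KK-M v C57-M": {"Iapp", "Ins1", "Nnat", "Fgf21"},
    "KK-M v KK-F": {"Nnat", "Uty", "Ddx3y"},
    "KK-F v C57-F": {"Iapp", "Fgf21", "Xist"},
    "C57-M v C57-F": {"Uty", "Ddx3y", "Xist"},
}

# Pairwise overlaps; the same pattern extends to triples and the
# 4-way intersection shown in the Venn diagram
for (a, sa), (b, sb) in combinations(deg_sets.items(), 2):
    print(f"{a} & {b}: {len(sa & sb)} shared genes")

# Genes unique to a single contrast
for name, s in deg_sets.items():
    others = set().union(*(v for k, v in deg_sets.items() if k != name))
    print(f"unique to {name}: {sorted(s - others)}")
```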
Strain-related pancreatic gene expression
Differences in pancreatic gene expression could provide molecular insights into strain-related differences in glucocentric phenotypes. Figure 1c indicates that out of the 1017 genes exhibiting strain differences in male mice (Set 1), the majority (N = 646) were up-regulated in the KK/HlJ strain, whereas in female mice (Set 3), more than twice as many genes (N = 1389) were expressed at significantly higher levels in the C57BL/6 J strain. In order to understand more about the function of these strain-biased genes, we used Gene Ontology (GO) enrichment analysis in the Biological Processes and Diseases category using the Ingenuity Pathways Knowledge Base (IPKB), and cellular compartment localization analysis using DAVID. Figure 2a indicates the top 8 biological functions and diseases associated with the strain-dependent DEGs which were increased in KK/HlJ relative to C57BL/6 J, plotted as enrichment scores (−log[p-value]). In keeping with the differences in glucocentric physiology and serum analysis that were observed between the two strains (Table 1), top biological functions and disease ontologies associated with these pancreatic DEGs included molecular transport, glucose tolerance, obesity and cancer. Positionally, these genes mapped to cellular compartments including secretory, integral components of the plasma membrane and cytoplasmic vesicles (Fig. 2b). Conversely, the top biological functions and diseases associated with pancreatic genes upregulated in the C57BL/6 J strain included systemic autoimmune syndrome, diabetes and cellular compromise: degradation of cells (Fig. 2c). Cellular compartment analysis mapped the DEGs upregulated in C57BL/6 J mice to immunoglobulin complex formation, the plasma membrane and blood microparticles (Fig. 2d, P ≤ 0.05). We next used IPA software to create a molecular network of functionally related pancreatic genes which were increased in the KK/HlJ strain based on highest fold changes (P ≤ 0.05). Figure 3a indicates the top network of DEGs which were significantly upregulated in KK/HlJ mice of both sexes compared to C57BL/6 J by ≥ ±1.4-fold. In agreement with the ~4-fold elevation in serum insulin levels that we observed in this strain, significantly up-regulated pancreatic genes included many genes coding for proteins associated with the formation of β-cell insulin granules, also known as dense core secretory vesicles (DCSVs), including islet amyloid polypeptide (IAPP: amylin), which was increased with respect to C57BL/6 J mice by an average of 4.51-fold in males and females. IAPP was functionally linked with insulin 1 (Ins1) and insulin 2 (Ins2), increased by averages of 2.04- and 3.06-fold; and Ins1 was functionally associated with a major β-cell transcription and proliferation regulatory molecule, the islet-specific transmembrane protein tyrosine phosphatase, receptor type, N (PTPRN, also known as IA-2), up-regulated by an average of 3.8-fold compared to the C57BL/6 J strain. Ins1 and PTPRN were functionally linked to the key DCSV exocytosis-regulatory tSNARE molecule SNAP25 (synaptosomal-associated protein), upregulated by an average of 2.5-fold, as well as being linked to increased vitamin D receptor gene expression (VDR: 2.25-fold).
Other molecules on this network known to be associated with the biogenesis, regulation and function of pancreatic β-cell insulin granules included the secretogranins SCG2 and SCG3, the stanniocalcin STC2, the chromogranin CHGB, the key diabetes susceptibility gene proprotein convertase subtilisin/kexin type 2 (PCSK2), the synaptotagmin SYT5 and the outward rectifying potassium channel KCNK16 (TALK1), all upregulated in the KK/HlJ strain by between 1.51- and 6.75-fold. The network of highly upregulated pancreatic genes also included the major endocrine regulatory molecule fibroblast growth factor 21 (FGF21), increased by an average of 4.67-fold and functionally linked to IAPP; apolipoprotein A4 (APOA4), linked to apolipoprotein B and apolipoprotein A1; and pro-inflammatory arachidonate 12-lipoxygenase (ALOX12: 4.10-fold), functionally linked to pancreatic phospholipase A2 group IIA (PLA2G2A: 8.20-fold) and further linked to arginase type II (ARG2: 3.36-fold), the mitochondrial form of the enzyme which is known to be induced by obesity [41]. Strongly upregulated genes in the periphery of the network included the intracellular Golgi-associated NAD-synthesizing enzyme NMNAT2 (nicotinamide nucleotide adenylyltransferase 2: increased by an average of 23.53-fold), sucrase isomaltase (Sis: 10.47-fold), mucin 13 (Muc13: 10.14-fold), solute carrier family 13a1 (SLC13A1: 5.76-fold) and serine peptidase inhibitor class A (SERPINA7: 3.23-fold).
Conversely, the top network of functionally related pancreatic genes with the highest expression in the C57BL/6 J strain included many immune-related genes, such as the immunoglobulin heavy variable (Ighv) genes Ighv1-9, Ighv10-1 and Igkv9-120 and several other Ighv transcripts, all of which were elevated by between 17.81- and 55.33-fold compared to the KK/HlJ strain (Fig. 3b, P ≤ 0.05). In addition, several other immunologically related transcripts were elevated in the pancreatic tissues of C57BL/6 J mice, including antigen-presenting H2-T22 (histocompatibility 2, T region locus 22: average of 17.34-fold), together with genes encoding the cell surface recognition molecules CD180, CD79B and CD8b1.
Sex-biased pancreatic gene expression
As stated previously, Fig. 1d indicates the Venn diagram analysis of 440 genes with sex-dependent pancreatic expression in the two strains. We found a total of 224 genes with sex-biased expression of ≥ ±1.4-fold in the pancreatic tissues from KK/HlJ mice, and 216 sex-biased genes in C57BL/6 J mice. In the KK/HlJ strain, there were 134 genes upregulated in males compared to 90 down-regulated, whereas in the C57BL/6 J strain we detected only 64 upregulated and 152 down-regulated DEGs. In order to be included as truly sex-biased, we looked for genes with a significant (P ≤ 0.05) fold change of ≥ ±1.4 in the ANOVA comparison of expressed genes in males from both the KK/HlJ and the C57BL/6 J strain compared to females from both strains. Within this constraint there were 27 qualifying pancreatic genes, which included the Y-chromosome-linked Eif2s3y, Uty and DDX3Y genes, all upregulated in males from both strains by between 11- and 18-fold compared to females (Table 2). Other genes with male-biased expression included twelve of the major urinary proteins (MUPs), which were overexpressed in males by between 2.69- and 8.01-fold compared to females, as well as neuronatin (Nnat), lysine-specific demethylase 5D (KDM5D), complement component 7 (C7) and haptoglobin (HP). Genes with female-biased expression in both strains were far fewer, but included the well-characterized X-chromosome-linked gene XIST (X-inactive specific transcript), the cell adhesion molecule Glycam1, PDK4 (pyruvate dehydrogenase kinase, isoenzyme 4) and SUSD3 (Sushi Domain Containing 3), all with increased expression in females by between 1.7- and 98.7-fold (Table 2, P ≤ 0.05).
Adrenal gene expression
In addition to differences in adiposity and glucose homeostasis between the C57BL/6 J and KK/HlJ strains, our analysis indicated strain- and sex-dependent differences in serum corticosterone and aldosterone, both of which are adrenal-derived steroid hormones. For identification of strain- and sex-regulated adrenal genes we applied the same inclusion criteria, i.e., expression differences consisting of a > ±1.4-fold change within a given contrast (KK-M v C57-M; KK-M v KK-F; KK-F v C57-F; C57-M v C57-F) and a P-value of ≤0.05. This approach resulted in the detection of 10,338 DEGs, which were used to generate a four-set Venn diagram in order to analyze the numbers of common and unique genes within this dataset (Fig. 4a, P ≤ 0.05). The highest numbers of DEGs were found in Set 1 (KK-M v C57-M: 2792 genes) and Set 3 (KK-F v C57-F: 2711 genes), indicating greater strain-related gene expression differences compared to the sex-related differences in Set 2 (KK-M v KK-F: 2587 genes) and Set 4 (C57-M v C57-F: 815 genes). In the overlapping adrenal gene sets, there were 102 significant genes detected between the 4 comparison groups, and the highest number of genes was found in the subset of Set 1 and Set 3 (Fig. 4a, 852 genes, P ≤ 0.05). Figure 4b shows a differentially expressed gene heatmap with z-score hierarchical clustering of genes based on DEGs with the greatest fold change difference in individual sets. The differences in adrenal gene expression between the four contrast groups were further classified using 2-way Venn diagrams of strain-biased (Fig. 4c) and sex-biased (Fig. 4d) expression for subsequent analysis.
Strain-related adrenal gene expression
To better understand the unique gene expression patterns of male and female KK/HlJ and C57BL/6 J adrenal glands, and to look for any patterns of expression which could account for the adrenal hyperplasia and other changes reported by light microscopy studies [30], we next considered subgroups of strain-dependent DEGs, defined as being either up-regulated in both male and female KK/HlJ mice compared to male and female C57BL/6 J mice by a factor of ≥1.4-fold, or up-regulated in both male and female C57BL/6 J mice compared to male and female KK/HlJ mice. This resulted in a set of 497 adrenal genes upregulated in KK/HlJ mice compared to C57BL/6 J, and 717 which were down-regulated irrespective of sex (Fig. 4c); these were analyzed for enrichment of functional annotation using IPA. Figure 5a shows the top 8 biological functions and diseases associated with these subsets, including the cell-to-cell signaling: neurotransmission, nervous system development and cancer: melanoma categories. These genes compartmentalized primarily to the neuronal body, synapses and cell membranes, according to the DAVID database (Fig. 5b). Conversely, the top biological function and disease ontologies for adrenal genes upregulated in C57BL/6 J mice of both sexes included glucose metabolism disorders, concentration of lipid, and systemic autoimmune syndrome and inflammation (Fig. 5c). Top cellular compartments included the extracellular exosome, extracellular space and blood microparticles (Fig. 5d, P ≤ 0.05).
To further identify key adrenal genes involved in this cellular function and establish the connectivity between these genes, we next used IPA software to create the top molecular networks of functionally related genes which were differentially expressed between the two strains based on highest fold changes. Figure 6a shows the top network of adrenal DEGs with significantly stronger expression in the KK/HlJ strain, which includes the progesterone-catabolizing enzyme Akr1c18 (AKR1C3: 17β-HSD), which was elevated by 3.1-fold in male KK/HlJ and 15.8-fold in female KK/HlJ mice compared to the C57BL/6 J strain. We also found an increase in the metabolic marker oxidoreductase enzyme retinol saturase (RETSAT), which was elevated by 2.3- and 7.4-fold in male and female KK/HlJ mice respectively. Other strain-associated genes which were markedly increased in the KK/HlJ strain included the neuroendocrine marker secretagogin (SCGN: average of 5.38-fold increase), linked to the key dense core secretory vesicle (DCSV)-regulating chromogranin CHGB and to PTPRN, as well as the apolipoprotein B mRNA-editing enzyme APOBEC2 (average of 4.56-fold increase). Other noteworthy genes upregulated in the KK/HlJ strain included the membrane-anchored metabolic regulator vanin 1 (VNN1), which has previously been shown to be involved in the development of adrenocortical neoplasia [42], the adrenocortical enzyme glutamic pyruvate transaminase 2 (GPT2), the ribosomal protein RPL30, the glycerophosphodiester phosphodiesterase GDPD3 and several of the small nucleolar RNA, C/D box genes, including Snord53, SNORA62 and SNORA74A.
Genes which were upregulated by greater than 5-fold in C57BL/6 J adrenal tissues of both sexes compared to the KK/HlJ strain included more than 10 of the major urinary proteins, such as Mup1, 2, 3, 8, 13, 14 and 15 (Fig. 6b). We also found significant increases in the expression of the albumin gene (ALB), by between 21.5- and 80.7-fold, increased fibrinogen gamma chain (FGG) expression, as well as increased expression of several members of the serine peptidase inhibitor family: SERPINA1c, SERPINA1d and SERPINC1. Other noteworthy upregulated genes included apolipoproteins A-I (APOA1) and A-II (APOA2) and apolipoprotein B (APOB), group-specific component (GC: the gene encoding vitamin D binding protein), carbamoyl-phosphate synthase 1 (CPS1), uncoupling protein 1 (UCP1) and the proliferation marker TMEM45B.
Sex-related adrenal gene expression
Our analysis showed greater numbers of truly sex-biased genes in the adrenal glands compared to the pancreatic tissue, with 471 genes exhibiting a significant (P ≤ 0.05) fold change of ≥ ±1.4 in the ANOVA comparison (Fig. 4d), comprising 292 genes with upregulated expression in the males of both strains compared to 179 genes with female-biased expression in both strains. Table 3 indicates the top 30 genes which were expressed at significantly higher levels in male adrenal tissues from both KK/HlJ and C57BL/6 J mice, ranked by fold change, as well as the top 30 genes with female-biased expression in the 2 strains. We also found 14 transcripts with significant fold changes but without known gene names assigned. The highest-ranking adrenal genes included several that were also overexpressed in a sex-biased pattern in the pancreatic tissue, such as the Y-chromosome-linked DDX3Y, Eif2s3y and Uty, all upregulated in males from both strains by between 9- and 18-fold compared to females. Also greatly overexpressed in male adrenal tissues of both strains were 4 genes encoding amplified spermatogenic transcripts X (Astx, Astx1c, Astx3 and Astx4d), the male-specific histone demethylase lysine-specific demethylase 5D (KDM5D), LY6D, and the proteoglycan aggrecan (Acan: Table 3, 8.52-fold, P ≤ 0.05). Genes which are less commonly associated with male-biased expression included matrix metallopeptidase-12 (MMP12), neuronatin (Nnat), the Hydroxy-Delta-5-Steroid Dehydrogenase, 3 Beta gene (HSD3B6) and the CXC motif chemokine receptor type 4 (CXCR4). Table 3 also shows the top 30 genes with female-biased expression in adrenal tissue, based on fold-change magnitude. One of the genes with the highest level of female-biased expression was Akr1c18 (Aldo-Keto Reductase Family 1, Member C18, also known as 20α-hydroxysteroid dehydrogenase), in agreement with previous in situ hybridization studies [43]. Additionally, we found a high female bias in the expression of Xist, as well as of Akr1D1 (steroid 5β-reductase), FETUB (fetuin-B), Hamp2 (hepcidin antimicrobial peptide 2), NR0B1 (Nuclear Receptor Subfamily 0, Group B, Member 1, also known as Dax1), which is a key dominant-negative regulator of transcription, and GPAM (glycerol-3-phosphate acyltransferase, mitochondrial).
Gene expression common to both tissues
The final part of our microarray analysis was to examine the subset of genes which were differentially expressed between the two strains in both the adrenal and pancreatic tissues of male and female mice. Figure 7 shows a heatmap with z-score hierarchical clustering of genes with the highest differences in sex-biased or strain-associated expression in both the adrenal and pancreatic tissues, identified by 2-factor ANOVA in the four comparisons KK-M v C57-M; KK-M v KK-F; KK-F v C57-F; C57-M v C57-F, with a P-value of ≤0.05. As expected, the analysis showed a number of commonalities in sex-biased gene expression, with the Y-chromosomal genes Uty, DDX3Y and Eif2s3y all exhibiting large differences in expression of between 10- and 22-fold in male adrenal and pancreatic tissues, whereas the only gene with significant female-biased expression common to both tissues was XIST, the gene encoding the X inactivation-specific transcript. Other genes with male-biased expression of greater than 2-fold in both tissues and strains included the neuroendocrine signaling molecule neuronatin (Nnat), the major urinary pheromone Mup20 (Darcin), and complement component 7 (C7) (Fig. 7, P ≤ 0.05). Interestingly, our analysis detected more than 60 genes with strain-biased expression common to both the pancreatic and adrenal endocrine tissues, as shown in Table 4, in which genes with an averaged fold change are classified as being either (a) significantly upregulated in both endocrine tissues of KK/HlJ mice by ≥1.5-fold (N = 32), or (b) upregulated in both the pancreas and adrenal glands of C57BL/6 J mice by ≥1.5-fold (N = 36). The gene ontologies of these common genes are indicated in Fig. 8a, which shows the top biological functions and disease categories of pancreatic and adrenal genes which were commonly upregulated in KK/HlJ mice of both sexes, and included the categories of control of the volume and morphology of pancreatic islet beta, delta and APUD (endocrine polypeptide) cells, as well as development of synaptic plasticity. Conversely, the ontology of pancreatic and adrenal genes which were commonly upregulated in C57BL/6 J mice included genes involved in the development of diabetes, the antimicrobial response and inflammatory conditions (Fig. 8b).
We were interested to establish the functional relationships between these commonly upregulated pancreatic and adrenal genes. Figure 9a indicates the top network generated from genes upregulated in the KK/HlJ strain in both tissues based on magnitude of fold change, and featured DCSV-associated PCSK2, increased by an average of 3.12-fold; PCSK1N, upregulated by 1.80-fold; chromogranin CHGB, upregulated by 3.6-fold; and SCG5, the gene encoding the molecular chaperone secretogranin 5, upregulated by an average of 2.28-fold. The key DCSV membrane protein PTPRN (IA-2) was functionally linked to the glutamate ionotropic AMPA receptor GRIA2 (GluA2) via the neuronal cell adhesion gene APLP1, otherwise known as amyloid beta (A4) precursor-like protein 1. Our analysis also showed that PTPRN was elevated in KK/HlJ mice by an average of 3.8-fold (Table 4). Genes with higher expression in the pancreata and adrenal glands of C57BL/6 J mice of both sexes were ranked by fold change, and the highest-ranking genes were functionally networked in Fig. 9b. This network included gamma-aminobutyric acid A receptor, subunit alpha 3 (GABRA3: 8.32-fold increase) linked to the small GTPase RAB6B (8.13-fold increase); and IFIH1 (interferon induced with helicase C domain 1, also known as MDA5: 1.99-fold) linked to CFD (adipsin: 5.32-fold), to Ifna4 (interferon alpha 4: 2.95-fold), to H2-T22 (histocompatibility 2, T region locus 22: 17.34-fold) and to TSPAN6 (tetraspanin 6: 1.77-fold). Other notable genes upregulated in C57BL/6 J mice of both sexes and mapped to this network included the peroxisomal inflammatory marker DECR2 (2-4-dienoyl-Coenzyme A reductase 2: increased by 2.12-fold), functionally linked to Adig (Adipogenin: 2.65-fold); and H2BC4 (Histone Cluster 1 H2B Family Member C), which was functionally linked to TNFα. Our analysis also identified 13 strain-associated DEGs common to both tissues and sexes, with predicted gene identification numbers but without recognized gene names (listed in Table 4 for reference).
Validation of microarray analysis using qRT-PCR
In addition to our serum analysis, which included insulin and related pancreatic and adrenal hormones, we used quantitative real-time PCR (qRT-PCR) to confirm our microarray results, using a selection of 25 pancreatic and adrenal genes chosen on the basis of biological relevance (Fig. 10a-f). A complete list of these genes and the primer sequences is inventoried in Supplementary file S1. Pearson correlation coefficients between the microarray analysis and qRT-PCR were calculated and displayed as a scatter plot (Fig. 10f, R2 = 0.7812, P ≤ 0.001).
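The reported agreement corresponds to a simple Pearson correlation over paired fold changes; a minimal sketch with hypothetical values follows.

```python
import numpy as np

# Hypothetical paired log2 fold changes for a handful of validated genes
microarray = np.array([2.1, 4.5, -1.8, 3.0, 0.5, -2.2])
qpcr = np.array([2.4, 3.9, -1.5, 3.4, 0.9, -2.6])

r = np.corrcoef(microarray, qpcr)[0, 1]
print(f"Pearson r = {r:.3f}, R^2 = {r**2:.3f}")
```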
Discussion
Small-animal models of diabesity are an important and cost-effective tool in the scientific investigation of the global increase in obesity and diabetes. Our analysis of strain- and sex-based differences in pancreatic and adrenal gene expression is a continuation of our previous research on the physiological and behavioral differences between these 2 strains in terms of their usefulness as rodent models of the pathogenesis and treatment of these conditions. To our knowledge this is the first systematic analysis of gene expression differences, and the data complement previous light microscopic and morphometric studies concerning involvement of the pancreatic and adrenal glands in the etiology of diabesity [29,30]. Our analysis confirms previous findings that KK/HlJ mice are hyperinsulinemic [29], and one of the most interesting findings was that the pancreata of KK/HlJ mice expressed higher levels of more than 10 well-characterized genes involved in the formation and function of β-cell insulin granules: heterogeneous populations of dynamic membrane-bound vesicles loaded with insulin, but also composed of over 150 proteins which act as a signaling node and metabolic sensor for the biogenesis, transport and storage of insulin [44]. Insulin granules store their cargo in at least two functionally distinct compartments: a fluid suspension containing many small molecules and proteins, and a dense core composed primarily of insulin and zinc. Early studies by Nakamura and Yamada described an increase in the number of pancreatic islets compared to the C57BL/6 J strain together with a 4-fold elevation in pancreatic insulin bioactivity [29]; and in addition to a similar elevation in serum insulin, we found a 2- to 3-fold higher level of pancreatic insulin (Ins1 and Ins2) gene expression, as well as increased levels of transcripts coding for the secretory molecule islet amyloid polypeptide (IAPP), a 37-residue hormone peptide which contributes towards glucose homeostasis by inhibiting glucagon release and regulating insulin secretion [45]. Other insulin granule-associated genes previously identified by proteomic studies [46] and upregulated in the KK/HlJ mouse pancreata included the key pro-IAPP processing enzyme PCSK2, together with the diabetes islet cell autoantigen protein tyrosine phosphatase receptor type N (PTPRN, IA-2), synaptosomal-associated protein SNAP25, synaptotagmin SYT5, chromogranin CHGB, secretogranins SCG2 and SCG3, the stanniocalcin STC2, and the outward rectifying potassium channel KCNK16. We also noted a substantial increase in the expression of pancreatic NMNAT2 (nicotinamide mononucleotide adenylyltransferase 2) in these mice. Although the exact role(s) of NMNAT2 have yet to be fully elucidated, there is considerable evidence that it may promote cancer cell survival by accelerating glycolysis via a mechanism that includes a reduction in the expression of the transcriptional regulator sirtuin SIRT6 [47]. This suggests a proliferative role for NMNAT2, and indeed previous studies have indicated that the pancreas of KK/HlJ mice exhibited hyperplasia [28,29], although to date this is the first indication of the involvement of NMNAT2 in pancreatic hyperplasia.
Taken together, these findings suggest a close functional relationship between the observed strain-associated differences in pancreatic gene expression and the physiological differences associated with the KK/HlJ strain documented in Table 1 and detailed in other relevant publications [27][28][29][30].
Our analysis indicated an increase in serum IL-6 in addition to the observed increase in visceral fat in KK/HlJ mice. Previous studies have shown a direct correlation between epididymal adipose tissue and inflammatory status [48]; pro-inflammatory markers highly expressed in our KK/HlJ pancreatic gene network included ALOX12, PLA2G2A, MUC13 and ARG2, all of which have been associated with inflammatory paradigms in previous research [49][50][51], and all of which have been shown to play various roles in pancreatic inflammation, oxidative stress and glucose metabolism [52][53][54][55].
We observed a very different microarray profile of upregulated DEGs in the pancreata of C57BL/6 J mice. Amongst the genes with the highest fold change differences between the two strains, genes encoding the immunoglobulin G heavy chain variable (Ighv) region were prominent, as well as genes of the immunoglobulin kappa (κ) locus. Studies by Tong & Liu [56] have shown that IgG-positive cells comprise about 1.4% of the total pancreatic cells in mice, forming a thin septum surrounding the pancreatic ducts; and as in humans, there are distinct differences in the repertoire of Ighv and Igκ variable sequences between inbred mouse strains [57]. Interestingly, expression of the adipokine complement factor D (CFD: adipsin) was elevated in C57BL/6 J mice, and other studies have shown that not only does CFD regulate the alternative complement pathway by generating complement component C3a, but it also augments pancreatic β-cell insulin secretion in vivo [58], suggesting a key role in glucose homeostasis.
We were interested to ascertain the identities of strain-biased genes common to both pancreatic and adrenal endocrine tissues, and whether we could identify any functional relationships between these DEGs. Amongst those upregulated in the KK/HlJ strain, we identified a network of over a dozen functionally linked genes with 2-fold or higher increases in expression compared to the C57BL/6 J strain, including the proprotein convertase subtilisin/kexin type 2 (PCSK2), found within dense core secretory vesicles (DCSVs) of neuroendocrine tissues including the adrenal and pancreatic glands, where it is known to be involved in the cleavage and activation of several hormones and neuropeptides [59]. We also found increases in the diabetes-associated islet-cell autoantigen PTPRN, a 60-kDa type 1 membrane protein associated with the pancreatic and adrenal DCSVs [60,61], together with chromogranin B, a master regulator of DCSV biogenesis and function [62]. These three DEGs seemed to form a functional cluster common to both adrenal and pancreatic tissues in our microarray analysis, and point to an increase in the number of DCSVs in the neuroendocrine tissues of KK/HlJ mice. In the case of the pancreas these would contain insulin and zinc, whereas in adrenal chromaffin cells the cargo would consist of catecholamines, neuropeptides and also micro RNAs [63]. Evidence for an increase in KK-mouse pancreatic and adrenal vesicular content is provided by earlier light microscopy studies [29,30]. Functional deletion of PTPRN (IA-2) in mice results in impaired secretion of insulin, whereas overexpression leads to an increase in DCSVs and insulin secretion [64], which may have contributed to the hyperinsulinemia that we [27] and others [29] have observed in the KK/HlJ strain. However, because PTPRN is expressed in several other neuroendocrine tissues, other studies have shown that double knock-out of PTPRN and its homologue PTPRN2 (IA2-β) causes female infertility due to a reduction in pituitary DCVs and subsequent lowering of serum luteinizing hormone levels, as well as anxiogenic behavior and learning deficits associated with a decrease of norepinephrine, dopamine and serotonin in the brain [65]. Since in our study we did not detect a significant increase in PTPRN2 during microarray analysis, and because our behavioral studies indicated that the hyperinsulinemic KK/HlJ mice with elevated PTPRN expression are well adapted for behavioral research studies and in some cases even superior to the C57BL/6 J strain [27], it is tempting to suggest that in certain types of behavior and learning tests PTPRN has a more pronounced effect than PTPRN2, as has indeed been shown to be the case in studies by Cai and Notkins [64]. Furthermore, the biogenesis of DCVs is under the control of chromogranin B (CHGB) [66], which also exhibited increased gene expression in the adrenal and pancreatic tissues of the KK/HlJ mice. Recent studies have shown that siRNA-targeted loss of CHGB in vitro impairs glucose-stimulated insulin secretion and reduces the density of insulin-containing granules [67]. Additionally, in the adrenal gland, loss of CHGB in knockout mice was shown to reduce chromaffin granule abundance by approximately 35%, which promoted deregulation of catecholamine release, whereas lentiviral over-expression led to a greater abundance of DCVs [68], suggesting a key role for CHGB and PTPRN in endocrine secretion.
Increased expression of both PTPRN and CHGB genes in KK/HlJ mice may explain the higher levels of serum insulin and aldosterone that we observed in these mice.
In terms of sex-biased gene expression common to both tissues, we found, as expected, a number of well-characterized genes such as the female-biased XIST and the Y-chromosome markers Eif2s3y, Uty, DDX3Y and KDM5D [69]. One of the most interesting genes with male-biased expression was neuronatin (Nnat), first identified as selectively expressed in newborn mice and involved in neurogenesis, but more recently known to be a paternally expressed imprinted gene [70] involved in a number of diverse functions, including neuronal growth and brain development [71], synaptic plasticity [72], the cellular stress response [73] and glucose-mediated insulin secretion [74]. Nnat has two splice variant forms: the α form is composed of three exons, which can be alternatively spliced to produce Nnat β. The expression of each isoform is tissue-specific and in some instances may be regulated by nutrient status; for example, in white adipose tissue Nnat is expressed at higher levels in mice fed a high-energy diet than in low-fat diet-fed mice [75]. Other studies have shown that whereas β-cell-targeted knockout of Nnat in male mice fed a standard chow diet did not impair fasting blood glucose levels, β-cell KO-Nnat+/−p mice fed a high-energy diet exhibited elevated fasting blood glucose together with glucose intolerance compared to wildtype C57BL/6 J mice, despite no apparent effect on body weight, feeding or energy expenditure [76]. In our study, all mice consumed the same standard chow diet, and the majority of the glucocentric differences that we observed were strain-related, although males from both strains had poorer insulin tolerance than the females. Given the diversity of functions attributed to Nnat, it would also be interesting to know the significance of the observed male-biased expression of Nnat in the adrenal tissue of these mice.
When we examined sex-biased differences in the pancreatic and adrenal tissues separately, we found many interesting differences in sex-biased gene expression. This observation is in agreement with Yang et al, who showed that in mice, sexually dimorphic genes are quite tissue-specific [77]. In the pancreas we found a subset of major urinary proteins (MUPs) with clear sex-biased expression towards the males in both strains, as has been previously observed in other studies [78]. In the adrenal glands of these same mice, however, no such bias in expression was observed. More interestingly, we found greater numbers of adrenal genes with sex-biased expression than pancreatic sex-biased DEGs. Moreover, in the KK/HlJ adrenal glands there were larger numbers of female-biased genes compared to males, whereas in the C57BL/6 J adrenal glands male-biased genes predominated, with the mirror-opposite situation in the pancreata, as shown by our Venn diagram analysis. This is in agreement with previous studies of conserved mammalian sex differences in gene expression, which have shown that across the body there are marked spatial differences in sex bias, with some tissues having more sex-biased genes than others. For example, a recent study by Naqvi et al [79] found that mammalian adrenal and pituitary tissues have greater numbers of sex-biased DEGs compared to mammalian heart, liver, thyroid or brain. In humans, one study found that the heart and kidney express a number of DEGs with opposite trends in sex bias: genes from the RNA U1 family were found to be sex-biased towards the female in the heart, whereas in the kidneys the same genes were more abundantly expressed in males [80]. Other researchers have noted temporal differences in murine sex-biased expression, in the sense that many genes can exhibit female-biased expression at one post-natal developmental time point and male-biased expression during another [81,82], indicating that the transcription of sex-biased murine genes is regulated not only by genetic background but by many influences, including epigenetic factors and hormonal, environmental and nutritional status, in addition to spatial and temporal variance.
In conclusion, we have carried out a comprehensive analysis of strain- and sex-biased differences in the expression of pancreatic and adrenal genes in male and female C57BL/6 J and KK/HlJ mice, as an extension of our previous work on the glucocentric, physiological and behavioral differences in these strains [27]. Our data may contribute to the understanding of differences in small-animal models for research into the pathogenesis of diabetes, obesity and associated disorders.
"year": 2021,
"sha1": "64f70334edd8845d6f001178d3872a7fc229ac12",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/counter/pdf/10.1186/s12864-021-07495-4",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "64f70334edd8845d6f001178d3872a7fc229ac12",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
Characteristics of Fibromyalgia Independently Predict Poorer Long-Term Analgesic Outcomes Following Total Knee and Hip Arthroplasty
Objective. While psychosocial factors have been associated with poorer outcomes after knee and hip arthroplasty, we hypothesized that augmented pain perception, as occurs in conditions such as fibromyalgia, may account for decreased responsiveness to primary knee and hip arthroplasty.

Methods. A prospective, observational cohort study was conducted. Preoperative phenotyping was conducted using validated questionnaires to assess pain, function, depression, anxiety, and catastrophizing. Participants also completed the 2011 fibromyalgia survey questionnaire, which addresses the widespread body pain and comorbid symptoms associated with characteristics of fibromyalgia.

Results. Of the 665 participants, 464 were retained 6 months after surgery. Since individuals who met criteria for being classified as having fibromyalgia were expected to respond less favorably, all primary analyses excluded these individuals (6% of the cohort). In the multivariate linear regression model predicting change in knee/hip pain (primary outcome), a higher fibromyalgia survey score was independently predictive of less improvement in pain (estimate −0.25, SE 0.044; P < 0.00001). Lower baseline joint pain scores and knee (versus hip) arthroplasty were also predictive of less improvement (R2 = 0.58). The same covariates were predictive in the multivariate logistic regression model for change in knee/hip pain, with a 17.8% increase in the odds of failure to meet the threshold of 50% improvement for every 1-point increase in fibromyalgia survey score (P = 0.00032). The fibromyalgia survey score was also independently predictive of change in overall pain and patient global impression of change.

Conclusion. Our findings indicate that the fibromyalgia survey score is a robust predictor of poorer arthroplasty outcomes, even among individuals whose score falls well below the threshold for the categorical diagnosis of fibromyalgia.
The estimated lifetime risk of symptomatic knee osteoarthritis is ~45% (1). Between 1991 and 2010 the number of total knee arthroplasties (TKAs) per capita among US Medicare beneficiaries nearly doubled, and there was a 59% increase in revision TKA (2). Based on temporal trends in aging and obesity, the numbers of TKAs and total hip arthroplasties (THAs) are anticipated to increase substantially in the coming years (3,4). Although TKA and THA have been shown to improve chronic pain and function (5), studies estimate that ~20% of TKA and 10% of THA patients fail to derive the desired analgesic benefit (6-9). Cross-sectional studies of long-term pain outcomes have identified pain in other locations, as well as negative affect and cognitions (i.e., depression and catastrophizing, respectively), as independent risk factors for lack of improvement in pain following TKA and THA (7,8,10,11).
One possible explanation for the differences in long-term analgesic outcomes may be mechanistic. There is a growing appreciation of the importance of augmented central nervous system pain processing and other symptoms in many chronic pain states (12,13). A number of pain disorders without clear peripheral pathology have been given specific names, such as fibromyalgia, irritable bowel syndrome, and interstitial cystitis. The most "systemic" of these conditions, fibromyalgia, is characterized by widespread body pain and comorbid somatic symptoms (i.e., fatigue, poor sleep, depression, and memory difficulties), all of which are thought to be of central nervous system origin (12). Research has demonstrated that these patients have alterations in central neurotransmitters that, at least in part, lead to both augmented pain and sensory processing and the comorbid symptoms. Opioids, nonsteroidal antiinflammatory drugs, surgical procedures, and other peripherally directed interventions are generally thought to be less effective for central pain states (12). Our group recently showed that patients with higher fibromyalgia survey scores consumed substantially more opioids in the acute postoperative period after TKA and THA (14). Most importantly, the fibromyalgia survey score is not just a dichotomous label; rather, it appears relevant as a continuous variable within the population (15). For example, every 1-point increase in the fibromyalgia survey score from 0 to 31 was associated with consuming an adjusted 9 mg more oral morphine equivalents to treat postoperative pain following THA and TKA (14).
Additional support for the hypothesis of poorer outcomes in patients who have characteristics of fibromyalgia comes from earlier studies. For example, poorer long-term analgesic outcomes in arthroplasty patients have been associated with multifocal pain, one of the hallmarks of fibromyalgia (6-8,10,16). One of the physiologic correlates for fibromyalgia and other conditions where pain is thought to have become centralized is diffuse hyperalgesia (12). Two recent cross-sectional postoperative studies using quantitative sensory testing showed that patients with pain after revision TKA have more widespread body pain and lower pain thresholds (17,18). To date, no prospective study has been performed to show that measures of centralized pain are associated with poorer arthroplasty outcomes, nor has any previous study compared the predictive value of these measures versus classic measures of negative affect (depression and anxiety) or cognitions (catastrophizing) already known to be associated with poor outcomes.
Given the current utilization and future projections for arthroplasty (2-4), the ability to predict poorer outcomes and triage patients to alternate analgesic therapies that might be more effective (e.g., centrally acting analgesics) has enormous socioeconomic implications. Thus, the objective of this prospective, observational cohort study was to assess the associations between fibromyalgia survey scores and chronic pain outcomes after primary TKA and THA. We hypothesized that patients with higher fibromyalgia survey scores would report less long-term pain reduction following arthroplasty.
PATIENTS AND METHODS
Study design. University of Michigan Institutional Review Board approval was obtained. The reporting of this prospective, observational cohort study conforms to the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) Statement (19). Between July 27, 2010 and May 31, 2013, adult patients (≥18 years old) who were scheduled for primary, unilateral TKA or THA were prospectively recruited prior to surgery. Patients were excluded if they were undergoing bilateral arthroplasty, were undergoing revision arthroplasty, did not speak English, were unable to provide written informed consent, or were incarcerated. All patients provided written informed consent. Acute postoperative outcomes (anesthesia/acute pain) from a portion of this cohort were published earlier (14), but the long-term surgical outcomes explored herein have not been presented previously.
Phenotyping battery. Patients completed validated self-report questionnaires prior to surgery, and other relevant data were obtained from surgical and anesthesia medical records. Additional details of the preoperative phenotyping battery have been described previously (14).
Fibromyalgia survey criteria. The fibromyalgia survey is a validated self-report measure consisting of 2 scales, with one assessing widespread body pain and the other evaluating comorbid symptoms (12,15). First, the Widespread Pain Index (WPI) was calculated using the Michigan Body Map to assess the 19 specific body areas described in the measure (score 0-19). The second aspect of the criteria was evaluated using the comorbid Symptom Severity (SS) scale (score 0-12). The resulting total fibromyalgia survey score ranged from 0 to 31. Previously described cut points were used to categorize patients as "fibromyalgia positive" (15). Specifically, patients were classified as fibromyalgia positive if they had a WPI of ≥7 and an SS score of ≥5, or a WPI of 3-6 and an SS score of ≥9 (20).
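The scoring and classification rules above translate directly into code; a minimal sketch follows (the function name is ours, not from the study).

```python
def fibromyalgia_survey(wpi, ss):
    """Total 2011 fibromyalgia survey score (0-31) and positivity call.

    wpi: Widespread Pain Index, 0-19 body areas endorsed
    ss:  comorbid Symptom Severity scale, 0-12
    Positive if WPI >= 7 and SS >= 5, or WPI 3-6 and SS >= 9.
    """
    total = wpi + ss
    positive = (wpi >= 7 and ss >= 5) or (3 <= wpi <= 6 and ss >= 9)
    return total, positive

print(fibromyalgia_survey(wpi=8, ss=6))  # (14, True)
print(fibromyalgia_survey(wpi=4, ss=9))  # (13, True)
print(fibromyalgia_survey(wpi=5, ss=4))  # (9, False) -> "subclinical"
```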
Pain severity. The 4 pain severity questions from the Brief Pain Inventory (BPI) (worst, least, average, and right now; each on a numeric rating scale of 0-10, where 0 = no pain and 10 = pain as bad as you can imagine) were used to create a single composite score (0-10) for severity of overall body pain (22).
Pain descriptors. The PainDETECT Questionnaire is a 9-item screening tool used to detect descriptors of neuropathic pain. Scores of ≥19 suggest that a neuropathic component is likely (23). The neuropathic pain assessment was specific to the surgical site (knee or hip) (23).
Psychological measures. The Hospital Anxiety and Depression Scale (HADS) contains 7 questions about anxiety and 7 questions about depression, with a score range of 0 to 3 for each question (score 0-21 for each measure, with higher scores indicating more depressive symptoms and anxiety) (24). The Coping Strategies Questionnaire-Catastrophizing contains a subscale for pain catastrophizing, which is a valid and reliable measure of this cognition. Scores range from 0 to 36, with higher scores indicating more catastrophizing (25).
In addition, research assistants recorded all opioid use as an average daily dose. Each opioid was then converted to oral morphine equivalents using previously described conversions (26) (Palliative Care Consortium [http://www.gha.net.au/Uploadlibrary/406205172GRPCC-CPG002_1.0_2011-Opioid.pdf] and The Hopkins Opioid Program [www.hopweb.org]). Body mass index (BMI) and the American Society of Anesthesiologists (ASA) physical function score (an ordinal measure of comorbidities with a range of 1 to 5 [https://www.asahq.org/resources/clinical-information/asa-physical-status-classification-system]) were queried from the anesthesia electronic medical record (Centricity; General Electric Healthcare). The primary anesthetic was categorized as general anesthesia alone, general anesthesia plus femoral nerve block, general anesthesia plus neuraxial anesthesia (spinal or epidural), or neuraxial anesthesia alone. In a review of patient records, it was noted that only 4.2% of the cohort had radiographic evidence of arthritis rated as something other than "severe" (0.52% mild and 3.65% moderate). Due to the low variability, radiographic evidence of arthritis was not included as a covariate in the outcomes analyses.
Longitudinal assessment. Patients were evaluated 6 months after arthroplasty using the same questionnaires that were assessed in the baseline phenotyping noted above. The outcomes assessments were sent and returned by postal mail.
Postoperative record review. The medical records of all included participants were reviewed for potential complications or nonphenotypic factors that might explain more pain or disability in the 6-month followup period (reviews conducted by AGU, BRH, and NIW). Patients who had additional arthroplasty (i.e., THA or TKA for a different joint), hardware fracture, revision surgery, postoperative joint infection requiring incision and drainage surgery, and/or other significant surgery or postoperative outcomes (e.g., coronary artery bypass, subdural hematoma) prior to their 6-month outcomes assessment were excluded from the long-term analyses. These exclusions are displayed in the patient flow diagram (Figure 1) and further detailed in the Results section below.
Primary and secondary outcomes. Primary outcomes analyses were conducted excluding the patients who met the criteria for being "positive" for fibromyalgia to ensure that the results were not driven by this small subset of patients. All analytic models were also conducted with the entire cohort (including fibromyalgia-positive patients) minus the exclusions previously noted (data not shown). Patients who had fibromyalgia symptoms but did not satisfy the previously defined thresholds are referred to as having "subclinical" disease.
The 6-month change in knee/hip pain (WOMAC pain subscale) was used for the primary outcome. Secondary outcomes included change in the composite measure of the BPI (mean of the current, worst, least, and average pain response [range 0-10]) and the patient's global assessment of change.
Statistical analysis. Data were entered into the APOLO Electronic Data Capture system (27). Missing data for the validated instruments were handled as follows. Catastrophizing, fibromyalgia survey score, and WOMAC scales were computed with complete data only. BPI scales were calculated as the mean of all items or the mean of 3 items if 1 item was missing. If more than 1 item was missing, no scale score was computed. The PainDETECT Questionnaire score was calculated as the sum of all 9 items or the sum of 8 items if 1 item was missing. If more than 1 item was missing, the PainDETECT Questionnaire score was not calculated. HADS subscales were calculated using the sum of all subscale items. If 1 item was missing, the missing value was imputed with the mean of the remaining items. The scale score was then computed as the sum of all items. If more than 1 item was missing, no scale scores were computed. Data were analyzed using R software version 3.1.1.
The cohort was divided into thirds based on the preoperative fibromyalgia survey score for preoperative descriptive data. These tertiles were not based on previously defined cut points. The continuous score from the fibromyalgia survey criteria was used for the linear and logistic regression outcomes models, with the models presented excluding fibromyalgia-positive patients. Additional models included these patients for comparison (data not shown). The change in knee/hip pain (WOMAC pain subscale) and overall body pain (BPI) were analyzed as continuous scores using multivariate linear regression models. Both knee/hip and overall pain were also analyzed using a multivariate logistic regression model, with a successful outcome defined as a 50% improvement in pain 6 months after arthroplasty. The patient's global assessment of change was dichotomized as patient responses of "very much improved" or "much improved" versus all other responses for a multivariate logistic regression model. Given the invasive nature of arthroplasty, the "slightly improved" response on the patient's global assessment of change was not deemed a successful outcome. All of the baseline covariates were included in the regression models. Age, BMI, and self-report measures were analyzed as continuous variables, and other demographic responses, primary anesthetic type, ASA score, and knee versus hip surgery were either dichotomized or treated as categorical variables as appropriate.
Model-based hypotheses testing and backwards variable selection were conducted using likelihood ratio tests. Briefly, we identify a set of all potential explanatory variables as the first step. The second step in building a regression model is to identify the best combination of explanatory variables to include in the model. The model is first calculated with all potential explanatory variables (full model), then recalculated after dropping the variable with the least significant association with the response variable. Significance is assessed by the likelihood ratio test. The process continues until all variables remaining in the model are statistically significant (28,29). For transparency, the fibromyalgia survey score results in the full model prior to backwards selection are presented in the results.
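A minimal sketch of this backwards-selection loop, assuming a logistic outcome model, one degree of freedom per dropped variable, and a pandas DataFrame of candidate predictors; the study was conducted in R, so this Python version and its names are illustrative rather than the authors' code:

```python
import statsmodels.api as sm
from scipy.stats import chi2

def backward_select(y, X, alpha=0.05):
    # Start from the full model; repeatedly drop the predictor whose removal
    # is least significant by a likelihood ratio test, until every remaining
    # predictor is statistically significant at `alpha`.
    kept = list(X.columns)
    while len(kept) > 1:
        full = sm.Logit(y, sm.add_constant(X[kept])).fit(disp=0)
        worst, worst_p = None, -1.0
        for col in kept:
            reduced_cols = [c for c in kept if c != col]
            reduced = sm.Logit(y, sm.add_constant(X[reduced_cols])).fit(disp=0)
            lr_stat = 2 * (full.llf - reduced.llf)   # likelihood ratio statistic
            p = chi2.sf(lr_stat, df=1)               # 1 df for a single variable
            if p > worst_p:
                worst, worst_p = col, p
        if worst_p <= alpha:                          # everything is significant
            break
        kept.remove(worst)
    return kept
```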
RESULTS
Recruitment and retention. A total of 1,034 patients were approached for participation, of whom 665 agreed to participate (64.3%). The mean ± SD age of the cohort was 62.3 ± 11.3 years, and 52.3% were women. The cohort was predominantly white (91.4%), and 41.6% had TKA (versus THA). There were no differences in age (P = 0.76) or sex (P = 0.34) when comparing participants to nonparticipants; however, there was a significantly higher proportion of nonwhites in the nonparticipant group when compared to participants (86.3% of the nonparticipants were white and 93.2% of the participants were white; P = 0.001). Some patients were excluded from the analysis, due to additional arthroplasty during the followup period (n = 50), hardware fracture (n = 27), joint infection (n = 25), revision of the same joint during the followup period (n = 6), and other medical adverse events recorded during the followup period (n = 7). Some patients were excluded for multiple reasons. After postsurgical exclusions, withdrawals, and loss to followup, there were 464 patients with 6-month outcome data (80.6% of eligible participants retained) (Figure 1). Patients lost to followup reported a significantly worse preoperative phenotype (e.g., greater pain, more anxiety and depressive symptoms, lower function, etc.).
Higher fibromyalgia survey scores predictive of poorer outcomes regardless of whether individuals met criteria for fibromyalgia. There was a wide distribution of fibromyalgia survey scores (mean ± SD 6.35 ± 4.18, median 6, range 29, interquartile range 6). A total of 6.2% of the patients scored at or above the previously defined cut points for meeting the criteria for being "fibromyalgia positive" (15). These patients were excluded from the outcomes analyses unless otherwise noted. Higher fibromyalgia scores were associated with higher preoperative pain severity and use of neuropathic pain descriptors, more negative affect (i.e., depression, anxiety), increased tendency to catastrophize pain, worse physical function, and more opioid use (Table 1).
All of the covariates listed in Table 1 were included in the multivariate regression models. Seventy-three patients (18.2%) did not meet the threshold of at least 50% improvement in the WOMAC pain subscale for the logistic regression model. The outcome was best predicted by the fibromyalgia survey score (P = 0.00032), as well as the baseline WOMAC pain score and THA (versus TKA) (Figure 2). The fibromyalgia survey score was predictive of failing to meet the threshold for improvement, with the odds increasing by 17.8% for every 1-point increase on the scale. The same covariates were predictive in the multivariate linear regression model for the WOMAC pain subscale (continuous measure) (Table 2).
The fibromyalgia survey score was also predictive of robust change in all of the secondary outcomes. The fibromyalgia survey score independently predicted reduced improvement in overall pain (BPI). For every 1-point increase in the fibromyalgia survey score, patients reported an adjusted 0.19 points less improvement. In addition to the other independent predictors in the multivariate linear regression model of change in overall pain (Table 3), the ASA physical function score and depression were predictive in the multivariate logistic regression model. Thirty-seven patients (8.4%) did not meet the patient's global assessment of change threshold for success, and the fibromyalgia survey score was again predictive of failed outcomes. The odds of failing to meet the threshold for success for the patient's global assessment of change increased by ~18% for every 1-point increase in fibromyalgia survey score (OR 1.18 [95% CI 1.05-1.31], P = 0.0038). The higher tertiles of the fibromyalgia survey score had higher rates of failure to meet the thresholds for change in knee/hip pain (WOMAC knee/hip pain severity [difference not significant]), overall body pain (BPI), and patient's global assessment of change (Table 4).
The models were also conducted with the fibromyalgia-positive patients included, and fibromyalgia survey scores remained predictive of change in linear and logistic regression models for knee/hip pain and overall body pain, as well as the patient's global assessment of change logistic model (data not shown).
No measure of negative affect (depression, anxiety), negative cognitions (catastrophizing), or neuropathic pain consistently remained in the best models presented, suggesting that these covariates have less predictive power than the fibromyalgia survey score. In the full multivariate models with all of the candidate predictors (prior to backwards selection), the fibromyalgia survey score was a significant predictor for failure to meet the threshold for knee or hip pain improvement on the WOMAC pain subscale.
Table 3 footnote: * Independent predictors of change in pain using the Brief Pain Inventory (BPI) over the 6-month followup period are shown. Negative numbers indicate less improvement in pain. Patients categorized as fibromyalgia positive were excluded from the analysis. R² = 0.44. OME = 24-hour total oral morphine equivalents (measured in mg); THA = total hip arthroplasty; TKA = total knee arthroplasty. † Nonwhite, non-African American race.
DISCUSSION
In this large, prospective, observational cohort study of arthroplasty outcomes, patients with higher preoperative fibromyalgia survey scores were less likely to report improvement in pain in the affected knee or hip (Table 2 and Figure 2), overall body pain (Table 3), and global impression of change. To our knowledge, this is the first study to describe the fibromyalgia survey criteria (15) as a predictor of long-term pain outcomes after surgery. Most importantly, this measure showed this predictive ability across the entire cohort, not just in the 6% of the individuals studied who met the categorical criteria for fibromyalgia. This single, simple-to-administer measure was a powerful predictor of a poor outcome and was the only preoperative phenotypic measure to consistently show predictive utility across the different outcome domains. As such, the measure may have value in screening for appropriateness for arthroplasty in the clinical setting.
Patients with higher fibromyalgia survey scores had higher levels of pain in the affected knee or hip, as determined by WOMAC scores, and higher levels of overall body pain, as determined by BPI, preoperatively (Table 1). As is often noted in trials of analgesic therapies (16), patients with higher baseline pain have more room to improve and are thus more likely to improve when change in pain is assessed as the primary outcome (Table 2 and Figure 2). Nonetheless, despite starting with higher baseline pain, patients with higher fibromyalgia survey scores were still less likely to meet the threshold for change in overall pain and change in affected knee or hip pain.
The findings of this study provide some additional mechanistic rationale for a portion of the failures following TKA and THA. Total joint replacement addresses what has long been thought to be an exclusively or predominantly peripheral disease. The current conceptualization of fibromyalgia is that the central sensitivity inherent to this and related conditions leads to pain augmentation/amplification (12). Despite the overwhelming data supporting this conceptualization, there are some who contest these ideas. Patients with fibromyalgia survey scores that were elevated but still subclinical (e.g., not meeting previously described cut points to be fibromyalgia positive) demonstrated poorer outcomes. We suggest that augmented central nervous system pain processing is likely more prominent in patients with moderate and higher fibromyalgia survey scores when compared to those with lower scores; however, future studies are needed to assess this concept. Individuals in this study all had severe enough radiographic evidence of arthritis to qualify for arthroplasty, so they all presumably did have some ongoing peripheral nociceptive input that might benefit from arthroplasty. However, these data suggest that in some individuals with arthritis or other peripheral nociceptive input, this superimposed pain amplification is clinically relevant and might even be playing a more prominent role in a given individual's overall pain experience as opposed to what is occurring solely in the knee or hip (12,13). The notion that there is a subset of individuals with osteoarthritis, or any other chronic pain condition, that has centralized pain augmentation is well supported by quantitative sensory testing and neuroimaging studies (30,31). This study, however, is the first large prospective study to show that it may be very important to use simple screening tools such as the 2011 fibromyalgia survey criteria to identify clinical or subclinical fibromyalgia in medical practice (17,18,31). Combined with our recent study showing that this same measure was also predictive of markedly increased opioid requirements in the immediate postoperative period (14), it now appears that individuals with this phenotype respond differentially to treatments for both acute (i.e., opioids) and chronic (i.e., surgery) pain. Additional research is needed to identify the precise biologic underpinnings of the poorer outcomes associated with higher fibromyalgia survey scores, as well as whether this measure might finally allow us to move toward the elusive "personalized analgesia" sought for acute and chronic pain. It is possible that patients with higher fibromyalgia survey scores would be more likely to benefit from therapies targeting centralized pain, including medications (e.g., serotonin and norepinephrine reuptake inhibitors, gabapentinoids), exercise, and cognitive behavioral therapy (12). These findings could have implications for other pain therapies, including minimally invasive interventions and surgery for spine pain (32).
Other measures and concepts have been studied and described as predictive of chronic postsurgical pain outcomes. Of these, measures of affect, catastrophizing, and neuropathic pain descriptors have gained a great deal of interest (7); however, none of these measures was retained in the best models for the outcomes analyzed, suggesting that these are at least weaker predictors of the outcomes assessed when compared to the fibromyalgia survey score. Wylde et al (7) described depression as independently predicting poorer pain status in a cross-sectional study after TKA and THA. Whereas meeting criteria for depression was predictive, the adjusted OR for depression (categorical variable) was only 1.27-1.29. By comparison, they found that the adjusted OR for having higher pain scores after arthroplasty in patients with widespread body pain (pain at ≥5 locations) was 11.8 for TKA and 14.8 for THA. These data are derived from a single, academic center cohort and therefore may not be generalizable. In fact, one manner by which we know this cohort to be unusual is the low overall rate of categorical fibromyalgia; only 6% of study participants met the criteria for fibromyalgia, whereas in other osteoarthritis cohorts this is generally found to be 10-20% (33). Also, the cohort was followed up for 6 months after surgery, and it is possible that some patients may have continued to improve after that time point. There are limited data to support continued improvement after 6 months; however, there are multiple postoperative, cross-sectional studies showing a high proportion of the population with continued moderate to severe pain (6-8,10). Hence, we believe that a 6-month time point allowed for sufficient time to evaluate outcomes and also avoided additional attrition. In its favor, this study included the core components recommended by the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) group (34). The retention rate was very high; however, the group lost to followup reported a worse overall pain phenotype, which could have influenced the results.
The fibromyalgia survey score was a robust predictor of poorer arthroplasty outcomes. Fibromyalgia-like features suggest the presence of augmented central nervous system pain processing and may provide a mechanistic explanation for the failure of a peripherally driven intervention in some patients. | 2018-04-03T05:58:54.031Z | 2015-04-27T00:00:00.000 | {
"year": 2015,
"sha1": "9db8f3f21404bf87ffcc5bd3c317e2f3a7bb8e36",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1002/art.39051",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9db8f3f21404bf87ffcc5bd3c317e2f3a7bb8e36",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54667774 | pes2o/s2orc | v3-fos-license | EXECUTIVE COMPENSATION: A MULTI-TASKING MODEL
This study develops a model of a multi-tasking executive whose behavior is motivated by the specific forms of compensation received. This model extends the theory of corporate finance in two significant ways: first, it examines risk-averse executive behavior in a multitasking environment, and, second, it yields a theoretical understanding of why one form of variable compensation provides different incentives than another. As a generalization, we find that option compensation is more effective than stock compensation in inducing the executive to take on investment risk, while the inverse is true for inducing the executive to issue debt or pay dividends.
Introduction
In a modern firm, equity holders have largely given up any role in the daily operations of the firm; instead, executives act as their agents and purportedly carry out their wishes through a wide range of activities. Equity holders, however, retain control over executives through compensation policy. This study develops a model of a multi-tasking executive whose behavior is motivated by the specific forms of compensation received. It investigates the effects of different forms of compensation on risk-averse executive behavior in a multitasking environment in order to determine how different firm policies are motivated by different compensation structures.
The need for some form of variable incentive as part of an executive's compensation plan has long been understood, but there is a lack of models explaining why specific types of variable compensation are needed and what effects those different types will have on the behavior of executives.17 The difficulty is exacerbated by the many forms of variable compensation increasingly in use.18 Beyond a fixed salary, executives receive, among many other types, bonuses, stock options, premium-priced stock options, performance shares, performance units, restricted equity, phantom equity, dividend-based compensation, etc.19 If the labor market for executive talent is well-functioning, there must exist some undisclosed need for this plethora of forms and means of compensating executives.20 This study examines the use of two proto-typical forms of variable compensation, an equity position and stock options, as inducements for executives. The first section of the paper reviews the literature and isolates some flaws of earlier models. The second section constructs a tractable, discrete-time model of risk-averse executive behavior. The third section examines the implications of that model, and the final section concludes.
17 Jackson and Lazear (1991), Choe (1999) and Nohel and Todd (2000) are a few exceptions. 18 Yermack (1993) documents the dramatic increase in the use of variable compensation for managers, while Holderness, Kroszner and Sheehan show that executive ownership has also increased. 19 For the full range and explanations of individual forms, see Smith and Watts (1982) or Murphy (1985).
Two broad assumptions underpin the model. First, we assume that the firm operates in a Berle and Means environment,21 i.e., equity holders are individually well-diversified, and their numerous asset holdings make it inefficient for them either to devote a significant amount of time to an individual firm or to acquire the necessary firm-specific knowledge. Control is delegated to executives with specialized skills and knowledge who run the firm, but are themselves beholden to the equity holders for compensation. Specifically, we assume that equity holders delegate control over three policy areas: 1) investment policy: executives decide how the firm's resources are invested; 2) financing policy: executives decide how capital is acquired to fund those projects; and 3) payout policy: executives decide what level of dividends is distributed to equity holders. Equity holders, however, retain control over compensation policy: they decide both the levels and forms of compensation given to executives.
20 It does not seem likely that these are perfect substitutes for each other; however, some researchers assume this: Hall (1998), for instance, suggests that exchanging CEOs' stock holdings with options would approximately double pay-to-performance sensitivity. Later we shall see that these two forms of compensation are optimal over different ranges of volatility, and we cannot assume that their characteristics and the behavior they engender are identical. 21 Berle and Means (1932) and Fama and Jensen (1983).
Our second assumption concerns the risk preferences of these agents. Executives do not hold diversified portfolios; instead, a significant portion of their total income is derived from the firm, and their wealth is highly correlated with the value of the firm.22 Lacking a well-diversified portfolio, executives are highly risk-averse, especially relative to equity holders (Amihud and Levi (1981)).23
The Literature
The "multitasking" problem arises when agents are required to perform multiple tasks that have complex interactions (see Crawford (1994), Feltham and Xie (1994), Holmström and Milgrom (1991), Prendergast (1999), and Sinclair-Desgagne (1999)).In this context, we jointly model the investment, financing and payout decisions 24 and observe the effects each form of compensation on executive multi-taking behavior.
While a number of studies indicate the need to offer variable compensation (for example, Antia and Mayer (1984) or Smith and Watts (1982, 1986)), few25 explicitly characterize the optimal forms.26 This study seeks to fill this gap by constructing a model that predicts the effects of different forms of compensation on executive and (consequently) firm behavior.
Risk-averse executives with fixed compensation are concerned with two liabilities: first, they themselves have a direct claim against the firm for their own future fixed compensation, and, second, since the cost of financial distress and bankruptcy would reduce future fixed compensation, executives are indirectly concerned with future liabilities to debt holders. Executives are concerned that firms retain sufficient wealth to cover both their own future claims and those of debt holders. Both of these induce the executive to reduce investment risk, debt level and dividend payouts in order to safeguard wealth for use against these future liabilities.
22 See Coffee (1988) for the most developed exposition of the differing risk attitudes of managers and equity holders. There has been relatively little empirical study of the risk aversion of managers, but see Moers and Peek (2000). 23 This is further substantiated by studies showing that the level of executive compensation is higher in firms with more risk; see Per (1999). 24 As Holmström (1992) notes, the problem with executive action is not a lack of effort or "slacking", as in other compensation scenarios, but the choosing between efforts toward self-gain rather than shareholder wealth. 25 We shall consider below the handful of studies that distinguish between different forms of compensation. 26 Some studies provide general observations about the advantages of one form of compensation over another, but none offers an explicit model.
The Investment Problem: Risk-averse executives have an incentive to lower asset risk to reduce firm volatility below that optimal for equity holders, creating the problem of under-investment. A range of models have sought to describe executive risk-taking behavior and the effect upon it of differing compensation design.27 Most have argued that executives have little opportunity to diversify their wealth portfolio (Heckerman (1975), Jensen and Meckling (1976), Smith and Stulz (1985), Lambert (1986), Hirshleifer and Suh (1992), Hermalin (1993), McConaughy and Mishra (1997), Gray and Cannella (1997), Murphy (1998)).28 Executives can, however, have their compensation altered in such a way that their incentives will be aligned with those of equity holders. Some studies only recommend a generalized form of performance-based compensation (e.g., McConaughy and Mishra (1997)), while others specifically model equity (e.g., Bizjak, Brickley and Coles (1993)) or option-based compensation (e.g., Haugen and Senbet (1981), Green (1984), Hirshleifer and Suh (1992)). None of these, however, compares the efficacy of alternate forms of variable compensation in solving the under-investment problem.
The Financing Problem: Risk-averse executives also have an incentive to issue less debt than is optimal for equity holders, creating the problem of under-leveraging (Ross (1977), Grossman and Hart (1982), Antia and Meyer (1984), Jensen (1986), Lang (1987), Firth (1995), Mehran (1992), Garvey and Hanka (n.d.)).29 While the debt asset substitution problem has occupied much of scholars' interest in capital structure,30 a second and independent asset substitution problem occurs between managers and equity holders (the asset substitution problem occurs when incentives are not aligned and the agent has an incentive to undertake investments with different characteristics than the principal, especially with regard to the risk of the investment). Like equity holders, managers also have incentives to shift asset risk, though in a direction opposite to that of equity holders. Managers receiving a fixed, expected compensation from the firm's cash flows would seek to reduce volatility below that optimal for equity holders. Managers have an interest in reducing the risk of the projects undertaken by the firm, since lower investment risk (or smaller investments in risky projects) will decrease the volatility of firm income. This creates the problem of under-investment.
27 The empirical results of these studies, as well as the empirical studies in this area, are reviewed later. 28 A parallel problem can be found in the actions of fund managers (Chevalier and Ellison (1999), Carpenter (1998)). 29 While managers have, in general, a motive to reduce debt, in the context of a takeover threat they may have reason to increase debt in order to fend off that threat (Garvey and Hanka (n.d.)). 30 That is, equity holders have an incentive to shift to more risky assets once debt has been issued; the option-like characteristics of levered equity entail that the value of equity is increasing in the volatility of the underlying firm (and the value of debt decreasing). A conflict is created, since the risk of investment may not be fully observable (and thus not contractible by debt holders), and equity holders have the incentive to substitute riskier for less risky investments in order to extract wealth from debt holders once debt has been issued. Debt holders, of course, anticipate this shift, and they charge correspondingly higher rates, unless the equity holders have some mechanism to precommit to an investment policy of low risk.
The Payout Problem: Finally, risk-averse executives have an incentive to lower dividend yield below that optimal for equity holders, creating the problem of the over-retention of earnings (Smith and Watts (1982), Jensen and Smith (1985)). Unfortunately, the effect of equity holder-executive conflicts and compensation design on payout policy has been largely neglected. While some scholars have tested the empirical relationship31 (largely within the tradition of the pay-for-performance studies with dividends as a proxy for performance), theoretical models are scarce. One series of studies develops the notion (Easterbrook (1984)) of payout policy as a mechanism for reducing agency costs associated with external funding and as a substitute for executive ownership (Rozeff (1982), Crutchley and Hansen (1989), Schooley and Barney (1994), Chen and Steiner (1999)). Lambert, Lanen, and Larcker (1989) test the possibility that the use of option compensation would reduce dividends, while Fenn and Liang (1999) hypothesize the opposite effect for equity-based compensation. Chang (1993) and White (1996) approach the agency conflict over the payment of dividends themselves, and see dividend-based compensation as a means to force executives to pay them out (rather than misuse the capital). While individual studies have considered each of these agency conflicts in isolation, none has addressed the multitasking question, nor has any offered a rationale for choosing between different forms of variable compensation.
Overview
Executives are delegated control over the investment, financing and payout policies of the firm and set these to maximize the utility of their own compensation. Executives are compensated through two contingent claims: first, option compensation, a European call upon the value of equity (contingent upon the terminal value of the equity); and, second, equity compensation, a dividend cash flow and a capital gains cash flow.32 But to model the risk-averse executive, we must further introduce a non-linearity in the form of a utility function with the risk-averse characteristics described below. The value of compensation to the executive is the non-linear, discounted utility of these two contingent claims.
The approach will be to develop a discrete model using a binomial tree structure to represent the value processes of the firm and the securities valued upon it. Executives, under a given compensation structure, will choose the optimal corporate policies (from a discrete set of possibilities) maximizing their own utility.
The Firm
The firm begins with an initial equity endowment, and executives, by implementing different investment, financing and payout policies, may alter that value.
We assume that equity holders and debt holders are well diversified and operate in a complete market, so that we can endogenize the no-arbitrage value of both debt and equity.
The Executive
The executive is risk-averse, and the executive's utility function, u(x), is twice differentiable, additive and time-independent, i.e., a standard von Neumann-Morgenstern utility function.33 We use a simple negative exponential utility function, u(x) = -e^(-αx), satisfying the general conditions (u' > 0, u'' < 0) for a risk-averse utility function (when α > 0), with α = 0.25. As such, the utility of executive compensation will be increasing in the value of compensation, but decreasing in the volatility of compensation. Further, executives acknowledge a time value to utility and discount future utility by their intertemporal discount rate of utility, r_u. Executives obtain all of their wealth from their investment of human capital in the firm. We assume that all executive cash flows are consumed, and that executives do not save, do not hold independent portfolios and cannot hedge the risk of variable compensation.34
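As a concrete illustration of this specification, here is a minimal Python sketch of the utility function and the intertemporal discounting it implies. The symbol for the risk-aversion coefficient (`ALPHA`) and the per-period discounting convention are our reading of the text, not code from the paper:

```python
import math

ALPHA = 0.25  # risk-aversion coefficient of the negative exponential utility

def utility(x, alpha=ALPHA):
    # u(x) = -exp(-alpha * x): u' > 0 and u'' < 0 whenever alpha > 0, so the
    # executive values more compensation but is averse to its volatility.
    return -math.exp(-alpha * x)

def discounted_utility(cash_flows, r_u):
    # Utility is additive and time-independent; the utility of the cash flow
    # at time t is discounted at the intertemporal rate of utility r_u.
    return sum(utility(c) / (1.0 + r_u) ** t
               for t, c in enumerate(cash_flows))
```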
Executive Compensation
While there is, in practice, a great range of forms of variable compensation, we consider the two most common. First, executives may receive compensation in the form of equity participation in the firm, modeled as a restricted equity plan. That is, conditional upon the solvency of the firm, executives receive dividend cash flows throughout their employment, but only obtain the share value at the terminal date. Second, executives may receive stock options in the form of European call options that can be exercised only at the termination date.35 The option and equity compensation is initially expressed as a proportion of unlevered firm value: thus, a 1% equity position is a restricted equity grant equal to 1% of the value of the unlevered firm, and a 2% option position is a European call on 2% of the value of the unlevered firm with an exercise price equal to the initial value of the firm and an expiration date equal to the terminal period. After the compensation is awarded, executives select the capital structure maximizing utility. Since the grant of an equity stake to executives is restricted and the options cannot be exercised early, we assume that executives neither participate in the equity repurchase nor exercise options prior to the terminal date, and their equity and option proportions are adjusted for any change in the leverage of the firm.
32 Equity compensation may be regarded as a contingent claim since it is contingent upon the firm being solvent. 33 This environment is an application of the more general model developed in Mirrlees (1976), Holmström (1979), and Grossman and Hart (1983). 34 Ofek and Yermack (1999) show that managers may "unwind" positions by selling shares which they already own.
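The two payoff profiles just described can be sketched as follows. This is a simplified reading that ignores the leverage adjustment mentioned above, and all names are ours, not the paper's:

```python
def option_compensation(equity_T, frac_option, v0):
    # European call on `frac_option` of terminal equity, exercisable only at
    # the terminal date, with the strike set at the initial firm value v0.
    return frac_option * max(equity_T - v0, 0.0)

def equity_compensation_flows(dividends, equity_T, frac_equity, solvent):
    # Restricted equity: a share of each dividend while the firm is solvent,
    # plus the share value itself, received only at the terminal date.
    flows = [frac_equity * d if alive else 0.0
             for d, alive in zip(dividends, solvent)]
    if solvent[-1]:
        flows[-1] += frac_equity * equity_T
    return flows
```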
Executive Objective Function and Choice Variables
Executives have discrete choice variables corresponding to the areas of corporate policy under their sway, i.e., investment, financing and payout policies: they may choose the investment volatility by choosing the standard deviation of aggregate investment risk;36 they may choose the level of debt by determining the coupon paid to debt holders;37 and they may choose the payout to equity holders in the form of the dividend yield. The objective function of the executive is to set the optimal investment, financing and dividend policies that maximize their personal utility.
Model Structure
We construct (for a given set of parameters) a binomial tree of the price paths of the unlevered firm. At each node, we can then price equity using the Leland equity formula to obtain a binomial tree of levered equity values (Leland (1994)). The utility received by the executive at each node is the value of the utility function for the total compensation received at that node. Since there is a time value to utility, executives discount the utility at each node by the intertemporal discount rate of utility. Utility is assumed to be independent and additive, so the aggregate utility of a compensation structure is the sum of the weighted38 discounted utility at each node. We utilize a grid search to find the corporate policies that maximize executive utility for a specified compensation structure.
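A minimal sketch of this valuation loop is given below. The Cox-Ross-Rubinstein parameterization of the tree and the discrete discounting convention are our assumptions (the paper only specifies a binomial tree), and the Leland (1994) pricing of levered equity and compensation is abstracted behind the `comp_at_node` callback:

```python
import math

ALPHA, R_U = 0.25, 0.05   # risk aversion (from the text) and an assumed r_u

def u(x):
    # Negative exponential utility, as specified above.
    return -math.exp(-ALPHA * x)

def firm_tree(v0, sigma, rf, horizon, n_steps):
    # Binomial tree of unlevered firm value; node (t, j) has j up-moves.
    dt = horizon / n_steps
    up = math.exp(sigma * math.sqrt(dt))
    down = 1.0 / up
    q = (math.exp(rf * dt) - down) / (up - down)   # pseudo probability
    nodes = [[v0 * up**j * down**(t - j) for j in range(t + 1)]
             for t in range(n_steps + 1)]
    return nodes, q, dt

def aggregate_utility(comp_at_node, nodes, q, dt):
    # Sum over all nodes of pseudo-probability-weighted utility of the total
    # compensation at the node, discounted at the intertemporal rate r_u.
    total = 0.0
    for t, level in enumerate(nodes):
        for j, v in enumerate(level):
            weight = math.comb(t, j) * q**j * (1.0 - q) ** (t - j)
            total += weight * u(comp_at_node(t, v)) / (1.0 + R_U) ** (t * dt)
    return total
```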
To explore the implications of this model we use a benchmark set of parameters:39 The firm's initial equity endowment is $1,000.00. The risk-free rate of interest is assumed to be 5% and the corporate marginal tax rate 40%; the former is a typical value for that rate over a long-term economic horizon, while the latter approximates the marginal tax rate for a large corporation. Following general practice (Murphy 1998), option compensation is awarded at-the-money, and it has a five-year expiration date. The cost of bankruptcy is 10%.
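Putting the pieces together, the grid search can be sketched as below. The benchmark values follow the text, but the candidate policy grids are illustrative assumptions (the paper does not list its grid points), and `executive_utility` stands for a wrapper around the tree construction and utility aggregation sketched above:

```python
# Benchmark parameters from the text.
V0, RISK_FREE, TAX_RATE, BANKRUPTCY_COST, MATURITY = 1000.0, 0.05, 0.40, 0.10, 5

# Illustrative discrete policy menus for the three choice variables.
SIGMAS = [0.10, 0.20, 0.30, 0.40]           # investment volatility
COUPONS = [0.0, 20.0, 40.0, 60.0]           # debt coupon
DIVIDEND_YIELDS = [0.00, 0.02, 0.04, 0.06]  # dividend yield

def best_policy(executive_utility):
    # Exhaustive grid search: the executive picks the (sigma, coupon, yield)
    # triple that maximizes the utility of a given compensation structure.
    candidates = ((s, c, d) for s in SIGMAS
                  for c in COUPONS
                  for d in DIVIDEND_YIELDS)
    return max(candidates, key=lambda policy: executive_utility(*policy))
```

In practice, `executive_utility` would price levered equity at each node with the Leland formula, evaluate the two compensation claims, and return the aggregate discounted utility.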
Implications of the Model
While the goal of our study is to examine the implications of compensation structure in a multi-tasking environment, it is worth briefly considering the simpler cases. We begin by examining the effects of the two forms of variable compensation in three cases, namely, those in which the executive has control over only one of the three choice variables (investment, debt and dividend), while the other two are exogenous and constant.
In each figure below, the surface depicts the optimal selection of one choice variable by an executive maximizing personal utility over the possible combinations of option and equity compensation, i.e., each surface consists of points returned by the grid search. The range of equity compensation is allowed to vary from zero to 2%, while the option compensation may vary up to 3%.40 Thus the origin, in the foreground, depicts the executive's choice when they have no compensation. The left section of the surface is the region with relatively more equity compensation and the right with relatively more option compensation.
The Single-Tasking Case: Investment Policy
As we would expect, since the value of options is increasing in the volatility of the underlying security, it is option compensation that most compellingly induces more risky investment, and an increase of investment risk is most normally to be associated with option compensation.
38 For simplicity, we use the pseudo probabilities as weights. 39 While many typical values are used in the benchmark, this is not to imply that the model is in any way "calibrated" to real market conditions. 40 It must be recalled that these values reflect the compensation percentage before the executive employs any leverage. After debt is issued, the percentages and consequent compensation and utility are adjusted to compensate for changes in leverage.
The Single-Tasking Cases: Financing Policy and Payout Policy
By contrast, stock compensation motivates both the issuance of debt and the payment of dividends. The executive will take advantage of the tax subsidy for debt over almost any combination of equity and option compensation, so long as there is at least some significant equity component in that mixture.
Only in the case of solely option compensation will the executive not issue debt.
A similar relationship occurs with payout policy. The payment of a dividend provides no additional cash flow to an executive compensated almost exclusively with options, and the payment of a dividend both decreases the value of the stock and increases the expectation of bankruptcy, and, consequently, reduces the value of the executive's options on that stock.
Increases in both debt level and dividend payments are normally to be associated with stock compensation.
The Double-Tasking Case: Financing and Payout Policies
While there are three possible scenarios in which the executive has control over two of the three corporate policies, i.e., investment policy and financing policy, investment policy and payout policy, and financing policy and payout policy, only the last is of interest. In the first two, option compensation influences the executive to raise the level of investment risk, while stock compensation influences the executive either to raise debt or to issue dividends. The case of financing policy and payout policy is more interesting, since both of these were, in the single-tasking cases, motivated by stock compensation.
The behavior of the executive can be described in terms of three "regions" over the policy surfaces:
I. When there is almost exclusively option compensation, the executive neither issues debt nor pays a dividend.
II. When the compensation mixture is most heavily dominated by stock compensation, the executive both issues debt and pays dividends. But, while the maximum dividend is paid, the maximum level of debt is not issued.
III. When there is a significant amount of both stock and option compensation, but option compensation dominates, the executive issues the maximum level of debt, but does not pay a dividend.
Region I is explained for the reasons noted above (0); that is, these actions do not benefit the executive with almost exclusive option compensation.
The other two regions (II and III) imply that the use of debt versus dividend payment presents a trade-off for the executive, though not one that is wholly exclusive. In paying a dividend, the value of the firm declines, thus decreasing the amount of debt that can be issued. Inversely, issuing debt (since in our model debt is substituted for equity) decreases the possible dividend. The interesting implication of this model is that, in this trade-off, option compensation is more effective in motivating the executive to issue debt than is stock compensation. This makes intuitive sense, since option compensation is strictly decreasing in the dividend payment, but not in the debt level.41
41 Recall that a change in capital structure is accomplished by issuing debt and repurchasing equity, and that the executive's compensation percentages are adjusted to reflect the fact that they do not participate in the equity repurchase.
When stock compensation predominates, executives select the maximum dividend; when option compensation dominates, the dividend is reduced and is strictly below that maximum. Dividends have contrasting effects on the compensation cash flows to executives: first, dividends are beneficial in that they allow executives with stock compensation to receive cash flows prior to the terminal period. By decreasing the duration of compensation cash flows, they increase the executive's utility. Second, however, the payment of dividends decreases firm value and increases the probability of financial distress. Since bankruptcy is costly, this effect decreases the value of both option and equity compensation. With stock compensation, the advantages of dividends predominate over those of debt: executives increase the utility of their compensation more through the payment of dividends, since dividends are fixed cash flows accruing over the life of the compensation structure, and suffer relatively small harm from the increased cost of financial distress. By contrast, with option compensation, there is no advantage to the payment of dividends, and the loss is two-fold: first, there are the costs of financial distress, but, even when the firm remains solvent, the payment of a dividend decreases the probability that the option will be as far in-the-money at expiration. Thus, when the option component of compensation dominates the equity component, the payout ratio is decreased. When option compensation is sufficiently high, there is a payout incentive cost.
The Multi-Tasking Case: Investment, Financing and Payout Policies
Finally, we consider the full multi-tasking case, where the executive has control over all three firm policies: investment, financing and payout.
Since this is the central result of the study and the interactions are complicated, we will discuss each region in some detail. Essential to understanding these results is the risk-averse attitude of the executive. While implementing any of the policies may increase compensation and utility, any implementation will increase risk. As we place more policies under the control of the executive, we intensify the risk-return trade-offs and the likelihood that one or more policies will not be implemented.
The curvature of the utility function captures the declining marginal utility of income and places a practical ceiling on the amount of risk that the executive will bear. The executive must choose, among the various policies, those that will maximize utility (given the specific compensation mixture). Again, we can distinguish a series of "regions" over the policy surfaces:
I: When there is almost exclusively option compensation, the firm neither issues debt nor pays a dividend, since (as we have considered above (0)) neither of these policies increases the utility derived from option compensation.
II: As discussed above (0), high levels of stock compensation not only induce the executive to restrict investment risk, but also cause them to select paying dividends over issuing debt when they are mutually exclusive, due to the excess risk that employing both policies would engender.
III: The increase in stock compensation over Region I shifts the executive from increasing investment risk to issuing debt and paying dividends. Since debt increases the utility from option compensation more than do dividends (0), the presence of significant option compensation causes the executive to favor the maximum level of debt over that of dividends.
While the connections between compensation structure and firm policies are complex, we can see how the policies implemented by the executive vary over the policy surface in response to two factors: first, the differing incentives of stock versus option compensation, and, second, the limitations placed on the executive's choices by the "risk ceiling" imposed by a risk-averse utility function.
Conclusion
We know there needs to be a variable component to executive compensation, but a "multitasking" problem arises when agents are required to perform multiple tasks that have complex interactions. We have assumed that equity holders delegate control over three policy areas: 1) investment policy: executives decide how the firm's resources are invested; 2) financing policy: executives decide how capital is acquired to fund those projects; and 3) payout policy: executives decide what level of dividends is distributed to equity holders. Thus, the decision for equity holders is how to structure compensation so that executives will establish investment, financing and payout policies that are optimal for equity holders.
This model extends the theory of corporate finance in two significant ways: first, it examines risk-averse executive behavior in a multitasking environment, and, second, it yields a theoretical understanding of why one form of variable compensation is preferable to another. Ceteris paribus, we can say that option compensation is more effective than stock compensation in motivating the executive to increase investment risk, while the inverse is true for motivating the executive to issue debt or pay dividends. Within the trade-off between issuing debt and paying dividends (given that a certain level of stock compensation is present), option compensation motivates the executive to issue debt more than pay dividends. In the single-tasking cases, the model suggests that, in most cases, some option compensation is necessary to stimulate the executive to make any increase in investment risk, while some stock compensation is necessary to stimulate the executive to issue any debt or pay any dividend. | 2018-12-12T15:33:10.864Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "f6dba141915ec86eaf61fcb37052664c336512b5",
"oa_license": "CCBYNC",
"oa_url": "https://virtusinterpress.org/spip.php?action=telecharger&arg=4469&hash=fc2eee3639a04037ff99997d0d099bbcffad9535",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f6dba141915ec86eaf61fcb37052664c336512b5",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
17805015 | pes2o/s2orc | v3-fos-license | Origin and development of modern medicine at the University of Padua and the role of the “Serenissima” Republic of Venice
[first paragraph of article] The University of Padua Medical School played a fundamental role in the history of medicine. Padua is a very old town, probably one of the oldest in North Italy. Traditional legend tells that the Trojan prince Antenore founded Padua in 1183 BC. At the beginning of the Roman Empire, Padua was an important town, both for its strategic position as an ultimate defence point against barbarian populations of North Europe and for its famous horse breeding, which made it the main supplier of horses to the Roman army. In the late Middle Ages, even before the rule of Venice, Padua was a prosperous city state adhering to the values of tolerance, civilization and democracy. During that era, Padua was particularly famous for its school of civil and religious law, which was the cornerstone for the upcoming university.
INTRODUCTION
The University of Padua Medical School played a fundamental role in the history of medicine. Padua is a very old town, probably one of the oldest in North Italy. Traditional legend tells that the Trojan prince Antenore founded Padua in 1183 BC. At the beginning of the Roman Empire, Padua was an important town, both for its strategic position as an ultimate defence point against barbarian populations of North Europe and for its famous horse breeding, which made it the main supplier of horses to the Roman army. In the late Middle Ages, even before the rule of Venice, Padua was a prosperous city state adhering to the values of tolerance, civilization and democracy. During that era, Padua was particularly famous for its school of civil and religious law, which was the cornerstone for the upcoming university.
The University of Padua is one of the oldest universities in the world. It was founded in 1222 when a large number of scholars and professors left the University of Bologna to look for more academic freedom. As the city of Padua was long recognized for its cultural richness and liberal schools, the University was established spontaneously, not by "ex privilegio", which was a special decree of the emperor or the Pope needed to build a new university at that time. 1 For centuries, many influential physicians and scientists contributed to the development of modern medicine in this institution. In addition to its impact on the history of modern medicine, the University of Padua hosted many eminent figures in all fields of science and humanities, such as Pietro d'Abano (physician and philosopher), Francesco Petrarca (poet), Pietro Bembo (poet and linguist), Giacomo Casanova (humanist), Prospero Alpino (botanist and voyager), Pietro Arduino (botanist), Nicolaus Copernicus (astronomer and physician), Elena Cornaro Piscopia (philosopher and the first woman to have a doctorate degree in the western world) and Galileo Galilei (mathematician and astronomer).
THE INFLUENCE OF THE REPUBLIC OF VENICE
Venice was originally created on a group of 118 small islands in the lagoon in the 6th century by populations from the countryside escaping the barbarian invasions of those times (Figs. 1, 2). This unique town became economically prosperous from the Middle Ages to the 18th century, mainly due to trade with Byzantine, Muslim and Asiatic populations. Other than its status as a maritime authority, Venice also expanded onto the Italian mainland; at its full expansion, it dominated a great part of northeast Italy, the Dalmatian coast from Istria to Albania, the Greek Peloponnese, Crete and Cyprus. Its rule was on average well accepted by the dominated populations, because Venetian administrators left adequate independence to local authorities and favoured the economic prosperity of the ruled states.
The most famous symbol of Venice is the Lion of St. Mark (Fig. 3). The statue of the bronze lion was placed on the top of one of two columns in Piazza San Marco in 1172 or 1184 (Fig. 4). The statue probably originally comes from the Middle East and dates to the late classic period and/or to the beginning of Hellenism (4th-3rd century B.C.). There is an interesting story behind why a maritime city like Venice takes a lion as its mascot. According to an old Venetian legend, St. Mark had got lost after being shipwrecked at the Venetian Lagoon and found rest by the Venetian shores. An angel, in the form of a winged lion, appeared to the apostle and foretold him that one day he would repose and would be adored in those lands: "Pax tibi Marce, evangelista meus. Hic requiescet corpus tuum", which means: "Peace to you, Mark the Evangelist. Here rests your body". In 828, relics, believed to be the body of St. Mark, were stolen from Alexandria, Egypt, by two Venetian merchants and were taken to Venice. The renowned St Mark's Basilica was built there to house these relics, and the Lion of St. Mark has since become the symbol of the city. 2 In 1404, Padua fell under the rule of the Republic of Venice. Venetian administrators decided to improve the quality of the University of Padua. As Padua was already famous and prosperous, Venetian politicians made the University of Padua the principal centre of teaching and research of the Republic, while continuing to guarantee the freedom of thought as the typical characteristic of this institution. Since that time, research at the University increased substantially. Venice brought in the best professors of the time and guaranteed them great freedom of research and teaching, so as to justify the motto "Universa universis patavina libertas": the freedom of Padua is universal and for everyone. The administrators of the University, also called the Riformatori, were delegated by the Great Council of Venice to rule the University and to secure freedom and tolerance for students and professors, who came from all over Europe. Leonardo Donato (or Donà) (1536-1612), who later became a famous Doge of Venice, was one of these Riformatori. Donato contributed to the call of Galileo to Padua, permitted the construction of the anatomical theatre under the teaching of Fabrici D'Acquapendente, and defended the autonomy of the Venetian Republic. Interestingly, Donato became Doge of Venice when Galileo started to use the telescope to investigate the moon, the Milky Way, Venus and Jupiter (Fig. 5).
There is another important historical fact that best exemplifies the tolerance of Padua. After the Catholic Reformation, the University of Padua remained the only university under Catholic rule still open to Protestant students and professors. In fact, it became the favourite academic destination for north European students, who were largely Protestants. In the Hall of Forty of the Bo Palace, which exhibits 40 of the most renowned foreign students, more than half of the portraits are of physicians from north European Protestant countries such as England, Poland and Germany. One portrait was of Olof Rudbeck (1630-1702), a Swedish physician who founded an anatomical theatre in his hometown Uppsala on the model of that in Padua (Fig. 6). Of note, some claim that Rudbeck was the first to describe the lymphatic system.
THE STRUCTURE OF UNIVERSITY OF PADUA MEDICAL SCHOOL AT THE RENAISSANCE
The golden age of Padua Medical School started with the Renaissance, a cultural movement between the 14th and the 17th centuries that profoundly influenced Europe and was characterized by the rediscovery of classic Greek and Roman texts and philosophy. At that time, the school was structured in five chairs: theoretical medicine, practical medicine, anatomy and surgery, botany and semeiotics. Each chair had Ordinary and Extraordinary positions. The Extraordinary position was given to teachers who had some experience in teaching, whereas the Ordinary position was given to teachers who had been at the Extraordinary level for at least three years. 3 Interestingly, both Ordinary and Extraordinary chairs had two teachers: one in Primo and the other in Secundo loco, both giving their lectures on the same day and time but in different halls. In this manner, the administrators of the University could observe which lecture was more attended by students in order to determine the quality of the teaching. The professor in Primo loco was older, and when he retired, the younger professor moved from Secundo to Primo loco, thereby establishing a sort of promotion. The Extraordinary chair also had a third position reserved for professors born in Padua and Venice, because Paduan and Venetian citizens and patricians could not teach in the university under normal circumstances. Venetian administrators established this rule to avoid the abuse of chairs by private interests and nepotism.
Theoretical medicine was dedicated to the teaching of disease causes, symptoms and therapeutics, and was based on the texts of Avicenna (Ibn Sina), Galen and Hippocrates. Practical medicine was taught with analysis of clinical cases taken from the classic literature, in particular Avicenna and Rhazes (El-Razy). 3 Anatomy and surgery were based on human and animal dissections in the anatomical theatre. Botany was taught under the names Ad Lecturam and Ad Ostensionem simplicium, and was related to Padua's Botanic Garden. The Botanic Garden of Padua, founded in 1545, was the first botanic garden in the world.
ADVANCES IN ANATOMY
During the Renaissance, the most significant contributions of Padua were related to the study of anatomy, which is still recognised as the base of all medical disciplines. Modern anatomy and anatomical illustration were brought into existence by the work of Andreas Vesalius (1514-1564), a Belgian scholar and teacher of anatomy and surgery at the University of Padua. He produced two seminal texts: Tabulae anatomicae sex in 1538 and the De humani corporis fabrica in 1543. Vesalius' faith in direct observation of the natural world was based on Aristotle's philosophy, which was the scientific methodology of Padua. In the De humani corporis fabrica masterpiece, Vesalius founded modern anatomy and liberated this discipline from the traditional teachings of Galen. 6,7 Vesalius proved that the human anatomy according to Galen, which ancient and medieval medicine followed, was not actually based on the study of man. Given that the dissection of the human body was culturally unacceptable in Galen's time in the ancient Roman Empire, Vesalius demonstrated that Galen used to study the anatomy of different animals, particularly apes: "Indeed, those who are now dedicated to the ancient study of medicine [...] are beginning to learn to their satisfaction how little and how feebly men have laboured in the field of Anatomy to this day from the times of Galen, who, although easily chief of the master, nevertheless did not dissect the human body; and the fact is now evident that he described (not to say imposed upon us) the fabric of the ape's body, although the latter differs from the former in many respects" (Vesalius 1543, p. 12). 7 Moreover, the anatomical works of Vesalius were magnificently illustrated, probably by Jan Stephan van Calcar (1499-1546), a scholar of Titian (Fig. 7). These illustrations, bridging the gap between art and science, became the reference model for all following anatomical depictions.
Other eminent anatomists of the Padua Renaissance Medical School were Realdo Colombo (1516-1559), Gabriele Falloppia (1523-1562), Girolamo Fabrici d'Acquapendente (1533-1619), Giulio Cesare Casserio (1522-1616), and Johannes Wesling (1598-1649). Realdo Colombo, a scholar of Vesalius, was the first to describe the pulmonary circulation in the western world. 8 In the context of the discovery of the pulmonary circulation, it is important to mention AbulHassan Alaa Eldin ElKarshy, best known as Ibn Nafis (1213-1288), a Syrian-born physician who spent most of his life in Cairo, Egypt. In the work Commentary on the Anatomy of the Canon of Avicenna, Ibn Nafis clearly described the pulmonary circulation and refuted Galen's theory on the existence of pores in the interventricular septum allowing blood to pass from the right to the left ventricle. This manuscript was only rediscovered in the first half of the 20th century. 9 Therefore, it is unknown whether Colombo, and later Harvey, had studied the works of Ibn Nafis or not. Falloppia described the falloppian tubes and had many other important contributions in anatomy. Fabrici d'Acquapendente, a pioneer in embryology and comparative anatomy, was considered the father of embryology as well as a renowned surgeon. Most importantly, d'Acquapendente designed the first stable anatomical theatre of the world, which was inaugurated in 1595 and still exists at the Bo Palace of the University of Padua (Fig. 8). Casserio and Wesling were the first to describe what is known today as the Circle of Willis 10 (Fig. 9). Casserio, who was Fabrici's scholar and opponent, also gave original descriptions of the eye and other sense organs.
Figure 7. Illustration, from De humani corporis fabrica, of a "flayed" figure showing the external muscles.
The Padua Renaissance anatomical school was the most prominent in Europe. It reflected a period in which the first basic problem of medicine started to be solved. This basic problem was the analysis of the structure of the human body and its internal organization, the so-called "microcosm". The school of anatomy in Padua provided new concepts that raised critical questions about the plausibility of the traditional humoral theory, which had influenced both the theory and practice of medicine since the times of Hippocrates. It postulated that the factors responsible for function and dysfunction of the human body are not anatomical structures, but rather the fundamental fluids thought to compose human organs and the organism, i.e., the four humours: blood, black bile, yellow bile, and phlegm.
ADVANCES IN PHYSIOLOGY
The next step, both historically and epistemologically, was the development of physiology. The discovery of the circulation of the blood by William Harvey (1578-1657) marked the beginning of human physiology in the modern sense. Before this discovery, human physiology was based on Galen's theory, which postulated that arteries and veins were two almost completely separate systems. Galen thought that venous blood was generated in the liver, from which it was distributed to and consumed by all the organs of the body, and that a 'spirit' circulated in the arteries and was mixed with blood passing through hypothetical pores in the interventricular septum. While a student of Fabrici d'Acquapendente in Padua, Harvey first conceived the idea of blood circulation, which he later published in his famous 1628 book, De motu cordis. 11 Harvey proved that the amount of blood leaving the left ventricle each day could only be explained by movement in a closed circuit. He supported his discovery with the correct interpretation of the venous valves described by Fabrici 12 (Fig. 10) and with the application of Aristotelian concepts on the perfection of circular movement, which he had studied in Padua. In fact, Harvey quoted Fabrici and Aristotle in 1628, declaring that they had been his principal sources of inspiration. 11 It is noteworthy that Harvey used a mathematical and quantitative approach in his studies, as sketched below. This methodology was also well established in Padua thanks to the teachings of Galileo Galilei (1564-1642), who was professor of mathematics in Padua (1592-1610) at the time of Harvey's stay.
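Harvey's quantitative argument can be rendered as simple arithmetic. The sketch below uses modern illustrative figures (Harvey's own 17th-century estimates were far more conservative but led to the same conclusion); it is an editorial illustration, not part of the historical record:

# Back-of-the-envelope version of Harvey's closed-circuit argument,
# using modern textbook values (assumed here for illustration only).
stroke_volume_ml = 70              # blood ejected by the left ventricle per beat
beats_per_minute = 70
minutes_per_day = 24 * 60

daily_output_litres = stroke_volume_ml * beats_per_minute * minutes_per_day / 1000
print(f"daily left-ventricular output = {daily_output_litres:,.0f} L")  # about 7,056 L

# The body holds only about 5 L of blood, so the heart cannot be pumping
# freshly generated blood: the same blood must recirculate in a closed circuit.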
The quantitative approach, following the Galilean example, had also been systematically developed in medicine by Santorio Santorio (1561-1636). Santorio, a professor of theoretical medicine in Padua, invented several instruments to measure physiological parameters, such as temperature and pulse frequency 13 (Fig. 11). He was one of the pioneers of iatromechanism.
The 17th-century developments in physiology corresponded to a new range of questions in medicine. These questions followed from the advances in anatomy and were related to the attempt to interpret the functions of the newly discovered anatomical structures. For example, the discovery of the pancreatic duct by Georg Wirsung (1589-1643) in Padua was the subject of an intense discussion about its physiological function. 14
ADVANCES IN PATHOLOGY
Padua witnessed a further fundamental advancement in medicine in the 18th century: the shift from merely studying normal anatomy and physiology to analysing the natural history of disease and understanding sick structures and functions. Giovanni Battista Morgagni (1682-1771) (Fig. 12), Professor of Anatomy in the Padua Medical School, introduced the anatomo-clinical method, which still represents an up-to-date scientific approach combining basic research and clinical practice. 15 Morgagni, celebrated as the father of physiopathology, developed a pathological anatomy that was deeply clinically and physiologically oriented. His principles were based on the idea that altered functions cause deformations in organs. 16 Morgagni learned from his 'spiritual master' in Bologna, Marcello Malpighi (1628-1694), to apply the mechanistic theory of the human body, or iatromechanism. 17 This theory was based on the view of blood as a collection of particles and the body as a set of mechanical tubes, engines and implements. Iatromechanism can be defined as the school of medicine in which all physiological events were treated as strict consequences of the laws of physics. This stands in contrast to the humoral theory, which is based on non-materialistic, 'spiritual' humours that govern the body. In Padua, Antonio Vallisneri (1661-1730) and Domenico Guglielmini (1655-1710) embraced the philosophy of iatromechanism. On the other hand, Alessandro Knips Macoppe (1662-1744), Professor of Practical Medicine in Padua, was an opponent of iatromechanism. Macoppe believed that it was useless to study and cure diseases in this mechanical sense and that it was enough to observe empirically the natural history of diseases and the effects of therapies. Of note, Macoppe wrote a manual of "medical etiquette" that was widely circulated in his time and represents one of the first modern works on the topic. 18
Figure 11. The instrument invented by Santorio (on the left) to measure pulse frequency, called the "Pulsilogio".
OTHER FIELDS OF MEDICINE
Besides anatomy, physiology and pathology, the Padua Medical School has contributed to many other fields of medicine. Alessandro Benedetti (1450-1495), a Padua anatomist, described the spread of syphilis (or the Gallic disease) in Italy with the descent of Charles VIII of France in 1494-1495. 19 Girolamo Fracastoro (1476/8-1553) attributed the plague epidemics of Venice to seminaria, a sort of life-like particles that would be identified as viruses and bacteria three centuries later 20 (Fig. 13). Fracastoro conceived seminaria as material particles able to transmit infection in ways different from living entities. However, seminaria shared some properties with modern conceptions of pathogens, since they were believed to multiply inside the host or to remain latent outside it for a given period. The study of those epidemics, from the standpoint of medical history, represented a great intellectual advance.
Bernardino Ramazzini (1633-1714) (Fig. 14), professor of theoretical medicine, founded occupational medicine with his book "De morbis artificum diatriba". 21 The development of occupational medicine can be recognized as a further step towards the concept that diseases occur due to specific causes and in concrete contexts.
THE LAST TWO CENTURIES
A very important advancement between the 19th and 20th centuries was the development of constitutional medicine by Achille De Giovanni (1838-1916), a Paduan clinician, Dean of the Faculty of Medicine and Magnificent Rector (Fig. 15). Constitutional medicine was the basis for the study of the hereditary nature of diseases and also for the development of endocrinology. 22 The progress of microbiology from the late 19th century emphasized external causes, the 'seeds' of disease. Constitutional medicine balanced this concept with a new analysis of the internal causes of disease, the individual 'ground' that could help or hinder disease development.
Therefore, the individual 'constitution' became an important subject. It was defined as the set of morphological, functional and psychological characteristics of the patient, together with the hereditary history (now explained by genetic heritage). At first, the different constitutional types were examined chiefly with anthropometric instruments able to measure the proportions between the different parts of the body. The harmonies and disharmonies of development were also assessed, with the latter considered possible sources of morbidity. De Giovanni designed and used an anthropometric table (Fig. 16), which allowed him to perform a large number of measurements and to establish the existence of three fundamental morphological types, each characterized by specific morpho-functional combinations and pathological predispositions. Constitutional medicine paid special attention to endocrinology, which considers hormones essential to individual and psychological development. In fact, the founder of modern endocrinology, Nicola Pende (1880-1970), was a student of De Giovanni.
In the last century, with advances in anesthesiology and surgery, Padua remained in the vanguard. Thanks to the late prominent cardiothoracic surgeon Vincenzo Gallucci (1935-1991), the first heart transplantation in Italy took place in Padua 23 (Fig. 17). Padua has also been a pioneer in other organ transplantation programs, such as those for kidney and liver. Bassini, Cevese, Belloni, Aloisi, Siliprandi, and Rizzi were the main protagonists of this recent history and deserve to be mentioned. Today, Padua is keeping up with new discoveries in both clinical and basic sciences (genetics, transgenic mice, molecular biology), novel therapies, and up-to-date imaging technologies (total body scan, magnetic resonance, computed tomography angiography, positron emission tomography).
CONCLUSION
History is not just analysis and memory of the past. It must always be viewed as a research that shows that the past constantly reverberates in the present and projects itself into the future. Therefore, we believe that the Padua's history of medical advances can also be projected in the future. All through its history, Padua adhered to the principles of intellectual liberty, proper scientific methodology and universal tolerance. Today, when medicine has become a global scientific phenomenon, these principles remain the solid foundation for further scientific achievements and prosperity of humankind. | 2017-10-18T15:37:31.371Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "902f9899a771f25a6ffd5cbb984bfdab76f74612",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5339/gcsp.2013.21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4ace987c8ad1d6c7d192e2ead7ded289a2dd66b",
"s2fieldsofstudy": [
"History",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264157411 | pes2o/s2orc | v3-fos-license | Prevalence and associated factors of self-medication among the college students in Tehran
1 Department of Public Health, Maragheh University of Medical Sciences, Maragheh, Iran. 2 Department of Public Health, School of Health, Shahid Beheshti University of Medical Sciences, Tehran, Iran. 3 Social Determinants of Health Research Center, Kurdistan University of Medical Sciences, Sanandaj, Iran. 4 Department of Health Education, School of Health, Kermanshah University of Medical Sciences, Kermanshah, Iran. 5 Health Network of Gilan-e Gharb, Kermanshah University of Medical Sciences, Kermanshah, Iran.
INTRODUCTION
Self-medication is the provision and consumption of drugs by people themselves for the treatment of self-diagnosed ailments and symptoms (Fresle and Wolfheim, 1997). Any medication taken without prior medical advice, regardless of the cause, amount, and duration of use, is considered self-medication (WHO, 2001). Today, self-medication is one of the greatest social, economic and public health problems in many countries, including Iran (Bennadi, 2014). It occurs through the consumption of an industrial or home-made drug, obtaining medicines without a prescription, use of previously prescribed drugs in similar cases, use of residual or leftover drugs at home, or refusal to take a currently prescribed drug (WHO Guidelines, 2000). It also includes the use of alternative therapies such as herbal remedies, dietary supplements, drugs traditionally made at home, and the use of medications prescribed for one person to treat other family members, especially in the treatment of children and the elderly (WHO Drug Information, 2000). Evidence suggests that there is no correct pattern of drug consumption in Iran and that the medicinal system faces problems such as excessive, inappropriate and arbitrary drug consumption (Tavakoli, 2001). Among the causes of the high drug consumption in Iran, compared with the world average and standard, are self-medication without a prescription and a widespread misguided culture of drug use (Davati et al., 2008). Incorrect use of medicines is a global problem and has been reported in European countries at about 68% (Bretagne et al., 2006) and in America at about 42% (Combest et al., 2005). Previous studies have also shown that the prevalence of self-medication among students is high, with reported rates of 65.2% among medical students in Bangladesh (Alam et al., 2015), 88.18% in Karnataka, India (Patil et al., 2014), and 76% in Karachi (Zafar et al., 2008). Studies have also shown that the prevalence of self-medication among Iranian students ranges between 35 and 83% (Purreza et al., 2013; Khaksar et al., 2006; Baghiani and Ehrampoush, 2006). Among the most important causes influencing this behavior are economic problems, lack of access to doctors, insufficient time for medical consultation, and availability of drugs (Khaksar et al., 2006); busy doctors' offices, incomplete or similar delivery of drugs by pharmacies, and previous experience of using a drug and recovering from similar symptoms (Heidari et al., 2000); and waste of time and the possibility of dismissal from work (Asefzadeh et al., 2002). Self-medication has led to increased bacterial resistance, failure of optimal treatment, unintentional and intentional poisonings, drug market disruption, financial loss, and increasing per capita drug consumption in the community (Hughes et al., 2001). Arbitrary medication can also lead to delays and errors in disease diagnosis, aggravation of disease, impaired treatment, increased side-effects, and even endangerment of life (WHO Global Strategy for Containment, 2001). Due to the continuous expansion of the self-medication phenomenon in the community and the individual's direct role in the selection and use of drugs, this study was conducted to determine the prevalence of self-medication and its associated factors among students in Tehran.
METHODS
This was a cross-sectional descriptive study conducted in 2016. The study population consisted of students at universities in Tehran, namely the University of Tehran, Shahid Beheshti University, and Islamic Azad University. The samples were selected by cluster sampling: one third of the colleges of each university were selected randomly as clusters, and within each cluster, samples were recruited through convenience sampling. Out of a total of 89 schools, 30 colleges (one third) were selected, in proportion to the number of faculties at each university. To determine the sample size, the maximum prevalence was assumed (P = 0.5); with a confidence level of 95% (α = 0.05) and an acceptable error of d = 0.03, the sample size was calculated as 1067 people. Allowing for a cluster (design) effect of 2.1, the sample was increased to 1300 people. The number of samples from each university was calculated according to the number of students, taking into account the sex ratio of the student population (60% female and 40% male). Students were enrolled if they were willing to cooperate in the study, had no diagnosed chronic disease, and were not under treatment during the study period. The data collection tool was a researcher-made questionnaire based on previous national and international studies. The questionnaire was composed of 5 sections: demographic data (7 questions), knowledge (10 questions), situations in which self-medication is carried out (14 questions), types of drugs used (16 questions), and causes of self-medication (15 items). To determine the validity of the questionnaire, the content validity ratio (CVR) was calculated from the ratings of a panel of experts, with reference to Lawshe's table; items were considered important and necessary if their calculated ratio was greater than 0.62 at an acceptable level of statistical significance (p < 0.05). To calculate the content validity index (CVI), the experts were asked to rate each question for simplicity, clarity and relevance on a 4-point Likert scale. Items with a CVI score higher than 0.79 were considered appropriate, those between 0.70 and 0.79 questionable, and those below 0.70 unacceptable; the unacceptable items were excluded. The final version of the questionnaire was completed by 30 students, and the same students completed it again 10 days later. The Spearman-Brown correlation coefficient for test-retest reliability was 0.74, and Cronbach's alpha for the internal consistency of the items was 0.78.
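The sample-size and content-validity arithmetic described above can be made explicit with a short sketch. The snippet below is purely illustrative (the study itself reports using SPSS, not Python); the function names and the example expert counts are hypothetical:

# Illustrative sketch of the sample-size and CVR calculations described above.
import math
from scipy.stats import norm

def cochran_n(p: float, d: float, alpha: float = 0.05) -> int:
    """Cochran's formula, n = z^2 * p * (1 - p) / d^2, for estimating a proportion."""
    z = norm.ppf(1 - alpha / 2)          # z = 1.96 for a 95% confidence level
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

def lawshe_cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: CVR = (ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

print(cochran_n(p=0.5, d=0.03))   # 1068, matching the ~1067 reported above
print(lawshe_cvr(9, 10))          # 0.8, above the 0.62 cut-off from Lawshe's table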
To collect the data, the researchers visited the colleges and distributed the questionnaires to the students. After the purpose of the study was explained and participants were assured of the confidentiality of their information, informed consent was obtained via a form on the front page of the questionnaire, and the participants then completed the questionnaire in about 30 minutes. Data were analyzed with SPSS version 19 using central tendency indices, the t-test, the chi-square test, the Pearson correlation coefficient, and logistic regression.
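As a rough illustration of the final modelling step (performed in SPSS in the study itself), the sketch below fits a logistic regression on synthetic data and exponentiates the coefficients to obtain odds ratios of the kind reported in the Results; the variable names, effect sizes, and data are invented for the example:

# Hypothetical sketch of the logistic-regression step; synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1269                                  # same size as the analyzed sample
male = rng.integers(0, 2, n)
nonmedical = rng.integers(0, 2, n)
poor_knowledge = rng.integers(0, 2, n)
# Simulate the binary outcome with made-up effect sizes
logit_p = 0.4 + 0.5 * male + 1.0 * nonmedical + 1.5 * poor_knowledge
self_med = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"self_med": self_med, "male": male,
                   "nonmedical": nonmedical, "poor_knowledge": poor_knowledge})
fit = smf.logit("self_med ~ male + nonmedical + poor_knowledge", df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios (cf. OR = 3.42 and OR = 9.2 below)
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the ORs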
RESULTS
Of the 1300 selected students, 1269 responded to the questionnaire (response rate = 97.6%), with a mean age of 21.13 ± 1.19 years; 766 of the respondents were women (60.35%) and 503 were men (39.65%). Other demographic information on the research subjects is shown in Table 1. The results showed that the rate of self-medication in the past six months was 80.7%. The highest rates of self-medication were for headache (65.4%), colds (41.9%), menstrual difficulties in women (49.3%), and cough or sore throat (27.2%) (Table 2). The drugs most frequently used for self-medication were analgesics (65.2%), cold tablets (53.1%), and antibiotics (42%) (Table 3). The most important reasons for self-medication given by the subjects were perceiving the illness as minor (64.6%), having previous experience of the disease and its treatment (40%), and a positive result of earlier self-medication and lack of trust in doctors (both 30%). On the other hand, lack of access to a doctor, restrictions due to physical or mental condition, and a belief in the safety of medications had the lowest frequencies (Table 4). In this study, gender, type of college, insurance status, and marital status were evaluated for possible relationships with self-medication. The results showed statistically significant differences in self-medication by gender, university, and level of knowledge (p < 0.05): non-medical students were more likely to self-medicate (OR = 3.42), and those with poor knowledge were more likely to self-medicate than students with good knowledge (OR = 9.2). There were no significant associations of health insurance status or marital status with self-medication (Table 5).
DISCUSSION
In this study, 80.7 percent of respondents reported a history of self-medication in the past six months. A review study in Iran estimated the overall rate of self-medication at about 53% in the general population, 67% among students, 36% among housewives, and 68% among the elderly (Azami-Aghdash et al., 2015). Studies in other countries have reported a high prevalence of self-medication among students, for example 92.3% in Slovenia (Klemenc-Ketis et al., 2010), 76% in Karachi (Zafar et al., 2008), and 98% in Palestine (Sawalha, 2008). Some other studies have reported lower rates of self-medication among students, for example 42% among students in Islamabad, Pakistan (Hussain and Khanum, 2008) and 56.9% among Nigerian students (Olayemi et al., 2010).
In our study, headaches, colds, menstrual problems, cough, and sore throat were the conditions most conducive to self-treatment. These findings are consistent with the results of other studies (Pandya et al., 2013; Kasulkar and Gupta, 2013; Zafar et al., 2008; Sawalha, 2008). In most of these studies, headache was the most common cause of self-medication, which can be explained by the fact that headache is a common symptom present in most diseases; patients therefore take medicine to relieve it. On the other hand, in the general population of Iran, the highest rate of self-medication concerns respiratory diseases (Azami-Aghdash et al., 2015). In our study, the drugs most used for self-medication were analgesics, cold tablets, and antibiotics. Given the diseases and conditions most conducive to self-medication (headaches, colds, menstrual issues, and cough and sore throat), it is logical that people take such drugs. Studies in the general population also show that analgesics and antibiotics are the drugs most used for self-medication (Sahebi et al., 2009; Seyam, 2003). Moreover, 79.4 percent of those who had treated themselves for a cold had consumed antibiotics (Moghadamnia and Ghadimi, 2001). Similar results have been obtained in studies conducted in other countries (Lukovic et al., 2014; Pandya et al., 2013; Kasulkar and Gupta, 2013).
As the results of this study showed, the arbitrary use of antibiotics is high (42% of all participants). In the study of Sarahroodi, 42.3 percent of medical students and 48% of non-medical students had self-medicated with antibiotics, of whom 73.3 percent had used them for respiratory problems such as sore throat and colds (Sarahroodi and Arzi, 2014). Patil and colleagues reported that antibiotics were the drugs most commonly self-medicated among undergraduate medical students, reported by 248 (63.91%) students, of whom only 92 (37.1%) completed the full course of the antibiotic regimen (Patil et al., 2014). In many other studies as well, the arbitrary use of antibiotics, at more than 30 percent, was among the five leading categories of self-medicated drugs (Pandya et al., 2013; El Ezz and Ez-Elarab, 2011; Sawalha, 2008; Zafar et al., 2008).
In our study, people self-medicated because they underestimated and trivialized their disease, had previous experience of the disease and its treatment, or had obtained a positive result from earlier self-medication. This finding confirms the results of other studies (Sawalha, 2015; Lukovic et al., 2014; Zafar et al., 2008). In some studies, on the other hand, the main reported reasons for self-medication were having appropriate information about diseases and medicines (Khaksar et al., 2006) and lack of time (Ali et al., 2010).
In this study there was a significant difference in self-medication by sex, with self-medication higher among men than women; in this respect, the findings are consistent with the results of other studies (Lukovic et al., 2014; Purreza et al., 2013; Khaksar et al., 2006), although some studies have reported the opposite pattern (Sawalha, 2008). In addition, studies in the general population have shown that self-treatment among Iranian women is higher than among men (Seyam, 2003). The results also showed that knowledge and university type were correlated with the degree of self-medication in the past six months: self-medication was lower among people with better knowledge and among students of the medical universities than in the other groups. This finding confirms the results of other studies (Lukovic et al., 2014; Sawalha, 2008).
Limitations
Our study has some limitations. First, the study was cross-sectional and each variable was measured only once; exposure and outcome were measured simultaneously, which prevented the detection of causal relationships. Second, the study was based on self-reported data covering the last 6 months, so some inaccurate data may have been collected and analyzed because of recall bias.
CONCLUSION
Self-medication among students is an important concern, and our study and others have shown high levels of self-medication in this group. In our study, self-medication was associated with being male, lower knowledge, and being a non-medical student. Reducing self-medication with antibiotics requires the adoption of strict rules and regulations so that people cannot obtain these drugs without a prescription. Education about the negative effects of drugs, for students and other community groups, is also essential.
Financial support and sponsorship: Nil.
Table 1: Demographic characteristics of participants.
Table 2: Frequency of conditions and situations leading to self-medication.
Table 3: Frequency of drugs used for self-medication in the past 6 months.
Table 4: Reasons for self-medication from the students' perspective.
Table 5: Factors affecting self-medication in the past 6 months. | 2019-03-31T13:32:40.858Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "5623410be51c9307c2695d7ee4022d6fca4f035b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7324/japs.2017.70720",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "47ae110ac73175e2733f8c199dc28bb346a172ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232405863 | pes2o/s2orc | v3-fos-license | Morus alba L. Plant: Bioactive Compounds and Potential as a Functional Food Ingredient
Morus alba L. (M. alba) is a highly adaptable plant that is extensively incorporated in many traditional and Ayurvedic medications. Various parts of the plant, such as the leaves, fruits, and seeds, possess nutritional and medicinal value. M. alba is rich in phytochemicals, including phenolic acids, flavonoids, flavonols, and anthocyanins, as well as macronutrients, vitamins, minerals, and volatile aromatic compounds, indicating its excellent pharmacological abilities. M. alba also has high nutraceutical value, with notable contents of protein, carbohydrates, fiber, organic acids, vitamins, and minerals, and a low lipid content. However, despite its excellent biological properties and nutritional value, M. alba has not been fully exploited as a potential functional food ingredient. Therefore, this review reports on the nutrients and bioactive compounds available in M. alba leaves, fruit, and seeds; its nutraceutical properties; its functional properties as an ingredient in foodstuffs; and microencapsulation techniques to enhance polyphenol stability. Finally, as scaling up to larger production is needed to accommodate industrial demand, the studies on and limitations of M. alba upscaling processes are reviewed.
Introduction
Morus alba Linn (M. alba), also known as white mulberry, belongs to the Moraceae family [1]. It is a small deciduous tree cultivated in various tropical, subtropical, and temperate countries, including China, Japan, Korea, Thailand, Indonesia, India, Vietnam, Brazil, Africa, and others [2,3]. Many traditional medicines incorporate M. alba fruit, leaves, roots, branches, and bark in Ayurveda medication systems due to their health benefits and antioxidants [4].
M. alba contains abundant bioactive compounds, including phenolic acids, flavonoids, flavonols, anthocyanins, macronutrients, vitamins, minerals, and volatile aromatic compounds [5,6]. Its fruits and leaves contain significant amounts of quercetin, rutin, and apigenin; ferulic, chlorogenic, and protocatechuic acids are also the significant compounds in fruits. These natural bioactive compounds hold potent biological activities proven to exhibit excellent pharmacological effects against various diseases. These include antioxidative, diuretic, antiobesity, hypoglycemic, hypotensive, anticholesterol, antidiabetic, and antimicrobial properties [7,8]. Moreover, the high quantity of phenolic compounds also contributes to M. alba's functional properties in food applications. For example, the flavonoids and caffeoylquinic acids in M. alba could benefit as colorants, flavorants, food fortificants, antioxidants, preservatives, and antimicrobial agents against bacteria and fungi, all of which are essential in the food industry. Simultaneously, their anthocyanins could act as natural antioxidative food colorants [9,10].
Mulberry species are consumed in various countries because they are nutritious, delicious, nontoxic, and rich in active compounds. The leaves of M. alba are rich in protein, carbohydrates, fiber, and vitamins, especially ascorbic acid and β-carotene [13]. Studies have also found that the leaves contain high amounts of important minerals such as calcium (Ca), potassium (K), magnesium (Mg), and zinc (Zn). Moreover, according to Sánchez-Salcedo et al. [12], M. alba leaves possess high iron (Fe) values (119.3-241.8 mg/kg) and a low level of sodium (0.01 mg/100 g), making them suitable dietary material for sodium-restricted individuals. The leaves also contain considerable amounts of organic acids, including citric acid (0.26-3.85 mg/g FW), malic acid (7.37-12.49 mg/g FW), tartaric acid (0.085-0.212 mg/g FW), succinic acid (1.02-5.67 mg/g FW), lactic acid (0.29-0.83 mg/g FW), fumaric acid (0.058-0.39 mg/g FW), and acetic acid (0.029-0.1 mg/g FW), which contribute to the potential health benefits of M. alba leaves [14].
The fruit of M. alba shows the same nutritional richness, with a protein content (10.15-13.33%) higher than that of other mulberry species [6]. A study by Owon et al. [15] showed a higher protein value for M. alba fruit (12.98%) than for black mulberry (10.85%), golden berry (9.16%), and strawberry (7.65%). This high protein content allows the fruit to contribute meaningfully to the recommended dietary allowance (RDA) for protein, which is 0.8 g/kg of body weight [16]. Considerable percentages of minerals, including N, P, K, Mg, Mn, Ca, Zn, Cu, Fe, and Se, were observed in M. alba fruits, and a higher ascorbic acid value was obtained from M. alba fruit (22.4 mg/100 g) than from the M. rubra species (19.4 mg/100 g) [1,11]. A summary of these macro- and microelements can be seen in Tables 1 and 2.
Zhang et al. [43] identified flavonols, flavonoids, anthocyanins, hydroxycinnamic acids, and benzoic acids as the major polyphenol classes in M. alba fruit. The fruit's polyphenol composition is significantly influenced by maturity stage: extract of the blackish, fully ripe M. alba fruit contained higher total sugar and anthocyanins, while the ripe red fruit contained more β-carotene and ascorbic acid [44]. This is because sugar acts as a precursor in the synthesis of anthocyanins, hence their higher value in mature fruit. A study by El-Baz et al. [45] found that the type of solvent influenced the extraction of phytochemicals: ethanol (EtOH) extract was the most efficient, with 18 compounds isolated, mostly esters (96.32%), followed by the dichloromethane (DCM) and ethyl acetate (EtOAc) fractions with a total of 12 compounds. The major bioactive compound groups in M. alba fruit are shown in Table 4. These extensive reports on the bioactive-compound content of M. alba fruit suggest its excellent potential for functional food-product development.
M. alba fruit also has potent antioxidative activity. According to D'urso et al. [59], phenylpropanoids and flavonols are the two main compound classes responsible for the fruit's antioxidant activity. A high total flavonoid content (187.23 mg QUE/g DW), expressing strong DPPH radical scavenging (IC50 = 0.518 mg/mL) and FRAP activity (0.685 at 4.0 mg/mL), was obtained from M. alba fruit [34]. The flavonoids showed activity in haemolysis and antihaemolysis assays of H2O2-induced red-blood-cell haemolysis in mice. In addition, inhibition of lipid peroxidation of 45.5%, 42.8%, and 39.4% was detected in the liver, microsomes, and mitochondria, respectively. The total antioxidant activity of M. alba fruit was similar to that of strawberry (39.40 and 51.31 mg Trolox/g of fresh weight, respectively) [59].
Meanwhile, M. alba seeds showed stronger DPPH radical scavenging activity than L-ascorbic acid (IC50 = 31.5 µM) and α-tocopherol [48]. These results indicate that M. alba contains antioxidative phenolic compounds potentially useful as nutraceutical agents in functional foods (Figure 1). However, temperature influences the contents of phenolic acids and flavonoids during drying [60]. Therefore, studies should pursue drying methods, solvent types, and extraction conditions that are more efficient and can better preserve yields, bioactive components, and biological activities. Overall, the existing literature clearly indicates the efficient antioxidative capacity of M. alba both in vivo and in vitro, making it a promising ingredient for nutraceutical development.
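As a side note on how figures such as the DPPH IC50 values cited above are typically derived, the sketch below fits a four-parameter logistic curve to percent-inhibition data and reads off the half-maximal concentration; the concentrations, inhibition values, and starting parameters are hypothetical, invented for illustration:

# Illustrative IC50 estimation for a DPPH-type dose-response assay.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic curve; y = (top + bottom) / 2 at c = ic50."""
    return bottom + (top - bottom) / (1 + (ic50 / c) ** hill)

conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])           # extract, mg/mL
inhibition = np.array([8.0, 15.0, 31.0, 49.0, 72.0, 88.0])  # % DPPH scavenging

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 0.5, 1.0])
print(f"estimated IC50 = {params[2]:.3f} mg/mL")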
Antidiabetic Property
Natural plant extracts can ameliorate insulin production and inhibit intestinal glucose absorption, making them significant aids in managing diabetes [63]. Ahn et al. [64] supplemented diabetic mice with M. alba for 14 days and found a significant reduction of plasma total cholesterol, hepatic total cholesterol, and triglyceride concentrations. The M. alba leaf-supplemented group showed the most significant activities, including decreased plasma glucose and insulin and elevated levels of protein S6 kinase (pS6K), phosphorylated Akt (pAkt), and phosphorylated AMP-activated protein kinase (pAMPK), indicating that M. alba leaves have better antidiabetic and antidyslipidemic properties than the fruits. Moreover, M. alba leaf extract and oligopeptides (0.5-3 kDa) were found to potently inhibit both α-glucosidase and α-amylase [58,65], delaying the breakdown of sugars, reducing the absorption rate of glucose, and thereby controlling postprandial hyperglycemia. M. alba leaves contain 30-170 mg/100 g DW of DNJ, a potent α-glucosidase inhibitor that effectively suppresses postprandial blood glucose elevation with administration of just 6.5 mg [66]. Additionally, Jeon and Choi [67] isolated eight compounds with α-glucosidase inhibitory activity, of which chalcomoracin (IC50 = 6.00 µM) and 4′-prenyloxyresveratrol (IC50 = 28.04 µM) showed the strongest inhibition.
That aside, a two-week supplementation of M. alba fruit extract could improve insulin sensitivity and reduce hepatic glucose production in type 2 diabetic mice [68]. The fruit-supplemented mice showed a significantly lower level of blood glucose and blood glycosylated haemoglobin (HbA1c) (9.083%) as compared to the diabetic-control and rosiglitazone mice groups (12.921% and 7.454%, respectively). The fruit extract also significantly enhanced the activation of pAMPK and p-Akt substrate of 160 kDa (pAS160). Consequently, it significantly increased the GLUT4 level in skeletal muscles and decreased the glucose 6-phosphatase and phosphoenolpyruvate carboxykinase levels in the liver, thus suppressing the hepatic gluconeogenesis process [68].
Antihyperlipidaemia and Antiobesity Activity
Studies have found that M. alba leaves could significantly reduce the levels of cholesterol, triglyceride, and lipid peroxidation in the blood plasma and post-mitochondrial fraction of cholesterol-fed mice, with accompanying decreases in body-weight gain, the atherogenic index, coronary artery indices (CRIs), Lee's index, and waist circumference [69,70]. The downregulation of leptin (0.39-fold) and resistin (0.71-fold), along with the upregulation of adiponectin (1.53-fold), enabled M. alba leaves to act directly on visceral fat mass and attenuate adiposity and its derangements. The ability of M. alba leaves to suppress obesity, reduce visceral adiposity, and limit cardiometabolic alterations was attributed to their polyphenols, particularly chlorogenic acid, quercetin, caffeic acid, rutin, and kaempferol [70].
Ten compounds were found to have antiadipogenic activity: three benzofuran derivatives, two phenolic derivatives, two flavonoids, one alkaloid, one lignan, and one coumarin. These compounds exerted 2.1-36.6% antiadipogenic activity on 3T3-L1 adipocytes [71]. The efficacy of M. alba leaves in significantly reducing the expression of proprotein convertase subtilisin/kexin type 9 (PCSK9), a key regulator of the LDL receptor, was reported by Lupo et al. [72]. This was accompanied by significant reductions in the mRNA levels of 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR) and fatty acid synthase (FAS) in HepG2 cells, by 51.1% and 37.2%, respectively, whereas at the protein level, expression of the LDL receptor was elevated 1.8-fold. However, in hepatic cells, no significant effect on PCSK9 promoter activity was found, despite the inhibition of PCSK9 expression at both the mRNA and protein levels. Both 5% and 10% M. alba fruit supplementation significantly down-regulated serum and liver thiobarbituric acid-reactive substances (TBARS) in mice [73], whereas a significant increase in red blood cells, liver superoxide dismutase, and blood glutathione peroxidase levels occurred only in the 10% fruit-supplemented mice. The changes in HDL-C and LDL-C in high-fat-diet animals were attributed to the total anthocyanin (0.0087%) and flavonoid (0.39%) contents of the fruit [73]. These findings show the capacity of M. alba fruit to manage hyperlipidaemia. In another study, M. alba fruit modulated obesity-induced cardiac dysfunction by inhibiting lipogenesis and fibrosis and enhancing lipolysis in high-fat diet-induced obesity [74]. The fruit extract also attenuated reactive oxygen species (ROS), vascular dysfunction, inflammatory markers, and lipid accumulation while reducing collagen in obese rats; these are considered factors in defense against, and remediation of, obesity-related cardiac dysfunction and cardiac fibrosis. The chemical structures of compounds with antihyperlipidaemic activity can be seen in Figure 3. To sum up, M. alba leaf and fruit extracts have significant obesity-repressing and cholesterol-reducing properties, making them excellent defensive agents against obesity, atherosclerosis, and hyperlipidaemia-related disorders. Nevertheless, the mechanism by which the active components induce the uptake of LDLR and LDL is still unknown. Therefore, further study is needed for a more profound understanding of M. alba's antihyperlipidaemic mechanism.
Neuroprotective Ability
According to Gupta et al. [75] and Rayam et al. [76], M. alba leaves can facilitate gamma-aminobutyric acid (GABA) transmission, thus exhibiting anticonvulsant ability in rats [77]. A 25 mg/kg methanolic leaf extract postponed the onset of pentylenetetrazole (PTZ)-induced chronic seizure, while 50 and 100 mg/kg extracts decreased convulsion duration [75]. These concentrations also significantly reduced maximal electroshock-induced tonic hindlimb extension [75]. Meanwhile, significant antiepileptic activity of M. alba leaves at 200 and 400 mg/kg, at 6.8 and 3.16 s, respectively, was observed by Rayam et al. [76]. Parkinson's disease is a motor disorder caused by progressive degeneration of dopaminergic neurons, and glyphosate has been linked to the underlying oxidative stress [82]. According to Rebai et al. [82], the protective ability of M. alba leaf extract could counter the neurotoxicity and harmful effects of glyphosate, while attenuating the levels of lactate dehydrogenase, malondialdehyde, and protein carbonyls. The leaf extract also chelated free ions to scavenge H2O2 and increased calcium and superoxide dismutase activity in the brain. These neuroprotective effects of M. alba leaves are due to synergism or antagonism among the bioactive phenolic components of the leaf extract.
In the context of Alzheimer's disease, a 28-day administration of 100 mg/kg ethanolic M. alba leaf extract enhanced both learning and memory function in spatial-memory-impaired rats, whereas the 200 mg/kg and 400 mg/kg extracts inhibited memory impairment [78]. A high-dose (160 mg/kg daily) administration of DNJ improved cognitive impairment, alleviated Aβ deposition in the hippocampus through β-secretase 1 (BACE1) inhibition, and reduced the expression of brain inflammatory factors such as TNF-α, IL-1β, and IL-6 in mice [79]. DNJ also minimized the decline of the brain-derived neurotrophic factor/tyrosine kinase receptor (BDNF/TrkB) signaling pathway in the mouse hippocampus, implying an ability to improve the hippocampal neuron microenvironment [79]. Similar results were reported for M. alba fruit extract, including improved spatial memory and learning aptitude and decreased neuronal apoptosis and Aβ plaques in mouse hippocampal and cortical tissues [80]. M. alba fruit also increased anti-inflammatory cytokines (IL-4) and reduced astrocytes and proinflammatory cytokines (TNF-α, IL-1β, and IL-6) in both the hippocampus and cortex, thus alleviating neuroinflammation [80]. In addition, M. alba fruit extract significantly reversed the AlCl3 neurotoxin-induced alteration of striatal neurotransmitters and decreased acetylcholinesterase (AChE) activity in the brain by 50%; significant increases in the neurotransmitters norepinephrine, epinephrine, 5-hydroxytryptamine (serotonin), and dopamine were observed in the treated mice [81]. Based on these data, M. alba can attenuate Alzheimer's disease via alleviation of Aβ deposition, reduction of inflammatory factors, and support of the BDNF/TrkB signaling pathway, demonstrating protective effects on memory and learning abilities.
Moreover, M. alba fruit extract significantly improved and delayed the progression of Parkinson's-disease-like motor and non-motor impairment symptoms in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)/probenecid rat group [83]. M. alba fruit inhibited olfactory dysfunction in rats, as shown by shortened pellet retrieval time, ameliorated hypokinesia and bradykinesia, and inhibited gait dysfunction. The fruit extract also inhibited dopaminergic neuron degeneration and Lewy body formation through α-synuclein suppression, while reversing the negative effects of MPTP/probenecid on ubiquitin expression in both the substantia nigra and striatum [83]. Cyanidin-3-glucoside (C3G) (Figure 4) from M. alba fruit dose-dependently prevented membrane damage and preserved mitochondrial function and mitochondrial membrane potential (MMP) in primary cortical neurons of rats exposed to oxygen-glucose deprivation for 3.5 h [84], suggesting a neuroprotective effect of C3G. To sum up, results from various studies provide solid evidence of the neuroprotective effects of M. alba leaves and fruit, making them a promising nutraceutical ingredient to target neurodegenerative disorders, especially Alzheimer's and Parkinson's diseases.
Antimicrobial and Antiviral Activity
Over the past decade, interest in plant-derived antimicrobial drugs has increased dramatically. Because of the high production costs and side effects of antibiotics, as well as their non-targeted mechanisms of action, various medicinal plants have been tested and have shown positive effects against bacteria and other microorganisms; they are therefore being used as a means against bacterial diseases. Accordingly, M. alba leaves were reported to inhibit the growth of Staphylococcus aureus, Bacillus cereus, and Pseudomonas fluorescens [85]. However, of the 13 chosen cultivars, only two showed low to moderate inhibitory effects on Escherichia coli. The extracts with the highest total phenolic contents could inhibit all three bacterial strains, confirming the correlation between the total phenolic content of the leaf extract and its antibacterial ability [85]. Thabti et al. [86] found that aqueous and methanolic M. alba leaf extracts showed antibacterial activity against Salmonella ser. Typhimurium, Staphylococcus epidermidis, and Staphylococcus aureus.
Staphylococcus aureus showed the highest sensitivity to the leaf extract, while no effect against Aspergillus niger was observed.
On the other hand, morin isolated from M. alba fruit exerted moderate growth-inhibiting activity against Streptococcus spp. [88]. Investigation of the antimicrobial abilities of pectin isolated from M. alba fruit (Figure 5) showed antibacterial activity against selected Gram-positive and Gram-negative bacteria (Bacillus cereus, Staphylococcus aureus, Streptococcus mutans, E. coli, Pseudomonas aeruginosa, and S. Typhimurium) at concentrations of 500-1000 µg/mL [89]. In addition, methanolic and water extracts of M. alba leaves were reported to be active against S. Typhimurium and Staphylococcus aureus [90]. However, acetone extract expressed better inhibitory activity against Gram-positive bacteria, followed by the ethanolic and methanolic extracts, while no inhibition of Gram-negative bacteria was detected with any of the extracts [91]. In another study, biotransformation of M. alba fruit extract with Lactobacillus brevis DF01 and Pediococcus acidilactici K10 yielded effective antibacterial activity against S. Typhimurium through the reduction of growth and biofilm formation [92]. Moreover, in Jacob et al. [93], DNJ derived from M. alba fruit was shown to exert a positive effect against bovine viral diarrhea virus (BVDV), GB virus-B (GBV-B), woodchuck hepatitis virus (WHV), and hepatitis B virus (HBV).
Meanwhile, M. alba seeds exerted an antiviral effect against the foodborne viral surrogates feline calicivirus-F9 (FCV-F9) and murine norovirus 1 (MNV-1) [49]. The addition of 1 mg/mL of seed extract during pretreatment significantly inhibited FCV-F9 and MNV-1 plaque formation by 65% and 47%, respectively. Among the polyphenols, cyanidin-3-rutinoside showed the best reduction of MNV-1 polymerase gene expression. The significant inhibition of viral infection by M. alba seeds at the pretreatment stage suggests that their greatest effect is on the initial stage of viral replication.
Based on these studies, the antimicrobial and antibacterial activities of M. alba leaves and fruit have been demonstrated to be effective, especially against S. Typhimurium, a globally known foodborne enteric pathogen responsible for acute gastroenteritis associated with undercooked or contaminated foods. Moreover, given the lack of antiviral drugs, M. alba could be utilized as an effective alternative to prevent and control virus-related diseases. Nonetheless, further studies focusing on specific highly antioxidative compounds and on biotransformation are still required to support the antimicrobial and antiviral ability of M. alba to better control foodborne bacterial and viral surrogates.
Mulberry species are well known as a source of compounds that inhibit the initiation and development of cancers. The primary anticancer protective ability is due to the number of antioxidants available in M. alba. Several studies on various human cancer cells, including cervical cancer, lung carcinoma, hepatocellular carcinoma, breast cancer, and colorectal cancer have been conducted to verify the anticancer ability of M. alba. A study on methanolic M. alba leaf extract used against P19 embryonal carcinoma showed the highest cytotoxic effect with IC 50 of 273, 117, and 127 µg/mL at 48 h, 96 h, and 144 h of treatment, respectively, as compared to the other 3 plants extracts [94]. In Fathy et al. [95], leaf extract inhibited HepG2 proliferation by repressing nuclear factor kappa B (NF-κB) gene expression and modulating the biochemical markers alfa-fetoprotein (AFP), alkaline phosphatase (ALP), gammaglutamyl transpeptidase (γ-GT), and albumin (ALB). Moreover, the leaves induced morphological changes in HepG2 cells to a more mature hepatocyte form [95]. Morphological changes also occurred in HeLa cells with 50 mM of morin isolated from M. alba leaves, followed by the formation of flocculent apoptosomes at 150 mM and spherical suspensions of dead/apoptotic cells at 220 mM. This morin-induced apoptosis occurred via multiple pathways, including increment of death-receptor expression, mitochondrial pathway, the elevation of apoptosis-related gene expression, and ROS-induced apoptosis [96].
Furthermore, in the area of different extraction solvents, n-hexane fractions of M. alba fruit expressed the best cytotoxicity on HCT116 (IC 50 = 32.3 µg/mL), while dichloromethane (DCM) fractions expressed best on MCF7 (IC 50 = 43.9 µg/mL) [45]. Moreover, the DCM fruit fraction was safely recognized based on its zero effect on a normal cell line (Bj1), while EtOAc fraction afflicted 47.3% cytotoxicity on Bj1. Nonetheless, low to no effect was seen from the extracts on HepG2 (0-22.8%) and PC3 cancer cells (0-39.6%). This showed that a sample's cytotoxicity varied with extraction yields based on the solvent's varying polarity [98]. Cho et al. [99] showed that the cyanidin-3-glucoside (C3G) from M. alba fruit could inflict cytotoxicity and dose-dependently increased human breast cancer cell death. M. alba C3G's active apoptosis occurred via elevation of cleaved caspase-3, reduction of Bcl-2, and DNA fragmentation. Moreover, 25 days of a C3G diet dose-dependently reduced the size of the tumor in tumor-transplanted mice. This proves their capability to inhibit the proliferation and growth of cancer cells in both in vitro and in vivo models [99].
A newly found indole acetic acid derivative from M. alba fruit revealed dose-dependent cytotoxicity on HeLa cells. The apoptosis mechanism was deduced via the death-receptormediated extrinsic pathway and mitochondria-mediated intrinsic path owing to the activation of caspase-8 and -9 [100]. Intriguingly, Ramis et al. [101] observed that the leaves exhibited only a slightly better cytotoxicity effect on colon cancer cells as compared to fruit, despite their higher total phenolics, total flavonoids, and antioxidants. However, a nearly similar cytotoxicity effect as the fruit was exerted on liver hepatocellular cells. Many studies have reported M. alba's multidirectional mechanism of action, which consists of eliminating reactive oxygen and nitrogen species and reducing the number of negative mutations and inflammation, while promoting apoptosis and activating the immune system. However, further investigations on in vivo studies of M. alba cytotoxicity are suggested to be conducted on more aggressive and metastatic cancers or late-stage cancers, thus giving a better chance for advanced-stage cancer patients.
Toxicity Study of Morus alba
The toxicity of M. alba leaves was previously determined in 4 weeks of oral administration of low (1%) and high (5%) doses to both male and female rats [102]. Throughout the observation period, no disturbances to growth or organ weight, nor any effects on the biochemical, hematologic, or pathological examinations were reported, indicating the consumption of M. alba leaves was safe. In an acute toxicity study, an intraperitoneal administration of M. alba leaf extract showed a median lethal dose (LD50) of approximately 4 g/kg in mice and 5 g/kg in Winstar rats [103]. However, they found no significant toxicity from the same 5 g/kg leaf extract when orally administered to both subject groups. During the test, the only effect recorded was the depression of the central nervous and respiratory systems, which recovered within 15 to 30 min. In a subchronic toxicity study, 60 days of orally administered M. alba leaf extract (1, 2, and 3 g/kg daily) showed no significant effects on blood chemistry or hematologic values, as well as no significant histopathological abnormalities in the major organs of the mice. There were no deaths reported in any of the studies [103].
Throughout the 14 days of the trial, the behavioral signals and biomass of the mice were not affected by 300 and 2000 mg/kg of body weight of ethanolic M. alba leaves [87].
Toxicity Study of Morus alba
The toxicity of M. alba leaves was previously determined in 4 weeks of oral administration of low (1%) and high (5%) doses to both male and female rats [102]. Throughout the observation period, no disturbances to growth or organ weight, nor any effects on the biochemical, hematologic, or pathological examinations were reported, indicating that the consumption of M. alba leaves was safe. In an acute toxicity study, intraperitoneal administration of M. alba leaf extract showed a median lethal dose (LD50) of approximately 4 g/kg in mice and 5 g/kg in Wistar rats [103]. However, no significant toxicity was found when the same 5 g/kg leaf extract was orally administered to both subject groups. During the test, the only effect recorded was depression of the central nervous and respiratory systems, which recovered within 15 to 30 min. In a subchronic toxicity study, 60 days of orally administered M. alba leaf extract (1, 2, and 3 g/kg daily) showed no significant effects on blood chemistry or hematologic values, as well as no significant histopathological abnormalities in the major organs of the mice. No deaths were reported in any of the studies [103].
Throughout the 14 days of the trial, the behavioral signals and biomass of the mice were not affected by 300 and 2000 mg/kg of body weight of ethanolic M. alba leaf extract [87]. However, a significant reduction of mean corpuscular volume (MCV) and mean corpuscular haemoglobin concentration (MCHC) occurred in the 2000 mg/kg-treated mice. The MCHC of the 300 mg/kg mice was also lower than in the control group, indicating activity of leaf compounds on erythrocytes, leading to lower cell production. Both mice groups showed a lower proportion of lymphocytes with an increase in segmented leukocytes. The 2000 mg/kg-treated mice showed significant alterations in alanine aminotransferase (ALT) and alkaline phosphatase enzymes. Gamma-glutamyl transferase (GGT), a marker of hepatocyte instability, remained the same in all groups, indicating a nonhepatotoxic effect of M. alba leaves. Conversely, the 2000 mg/kg dose altered kidney and liver structure, whereas the 300 mg/kg dose only altered leukocyte proportions, with no toxicity or any irreversible cellular damage observed [87]. Nonetheless, compared to these results, intraperitoneal injection of the same extract concentrations was more damaging to the mice, as histological analysis showed significant alterations in the liver, kidney, and spleen [104]. These studies indicate that the ethanolic leaf extract at 2000 mg/kg exerted low oral toxicity in mice; as there were no deaths, it appears safe to use with caution, although M. alba leaves can induce biochemical, haematological, and histopathological alterations.
An acute toxicity study on M. alba fruit showed a nonsignificant effect of 1000 mg/kg of polysaccharide intragastric administration on animal behavior. Respiratory distress, posture, emaciation, and mortality were assessed throughout the 1-week observation period [105]. A subchronic toxicity study of 90 days of M. alba fruit oral administration (0, 40, 200, and 1000 mg/kg) in Sprague Dawley rats reported no significant adverse reactions, with no influence on food and water intake, or on body mass gain [106]. No significant toxic impact was detected between the treated groups and the control based on organ weight, biochemical values, and hematological and urine analysis [106]. No deaths were reported in any of the studies.
Furthermore, an in vivo neurobehavioral study of M. alba fruit (100, 300, 1000 mg/kg) showed no significant effect on the general health, metabolism, and growth of mice based on their alertness, body weight, daily food and water intake, and organ weight [107]. The low levels of the liver toxicity biomarkers ALT and aspartate transaminase (AST) indicated a nondetrimental effect of M. alba fruit on the liver. No indication of renal toxicity was observed, as the levels of renal-function biomarkers (blood urea nitrogen, creatinine, cholesterol, glucose, and albumin) were within normal ranges. The histological analysis confirmed that no morphological changes or internal injuries appeared, even at the highest oral dose of M. alba fruit (1000 mg/kg) throughout 28 days, indicating the safe consumption of M. alba fruit [107]. However, this study did not include neurochemical measurements. Therefore, to understand the precise mechanism of action and to confirm the nontoxic effect of the fruit extract on the normal growth of animals, estimating the brain's neurotransmitter levels is necessary.
Functional Ingredients in Food Applications
M. alba leaves, fruit, and seeds are composed of different matrices, which vary their potential applications in the food industry. A substantial amount of nutrients has been found in M. alba food products, including carbohydrates; protein; minerals like calcium, iron, and zinc; and vitamins. Yu et al. [33] found a high content of crude protein, total phenolics, and DNJ in M. alba leaves, indicating their health properties and potential as ingredients in the food industry. Recently, M. alba leaves have been used for herbal tea, especially in Asian countries [5]. The bitterness of this tea was found to be positively correlated with the TPC and TFC content of the leaves and their radical-scavenging abilities [108].
The antioxidative properties of M. alba leaves have also demonstrated their food functionality in paratha [109]. Adding M. alba leaves to paratha dose-dependently increased the dough's protein, fat, ash, and polyphenol levels, as well as its DPPH and ABTS scavenging and Fe3+-reducing and -chelating capabilities. However, upon frying, a decrease in polyphenolic compounds (by 0.16-0.21%) and DPPH activity (by 22-34.6%) occurred due to polyphenol degradation caused by heating [110], while an increase in ABTS-scavenging activity (by 1.4-6.6%), Fe3+ reduction (by 1.3-2.9%), and Fe3+ chelation (by 247.5-4906.3%) occurred due to Maillard-reaction products (MRPs) that allowed metal chelation to inhibit oxidation [111]. A higher cooking temperature with a shorter cooking time can have less impact on bound polyphenols and sugars than the contrary [112]. Further studies on commercial-range parathas, such as frozen and ready-to-cook forms, which are more convenient alternatives, are warranted, and the cooking method should be revised to limit the loss of mulberry's functional activities.
Moreover, better pork quality was observed when 15% M. alba leaf powder (MLP) was added to the pig's diet. Backfat thickness, shear force, cooking loss, and drip loss were decreased, while inosinic acid content, intramuscular fat, pH, meat color value, and antioxidative capacity were increased [113,114]. In addition, 15% MLP-treated pigs showed higher mRNA levels of type I and IIa muscle fibers, which have lower shear force, adequate diameter, and greater tenderness than type IIb fibers [114]. However, the 15% MLP also had adverse effects on pig growth, as it reduced the average daily gain, feed efficiency, carcass weight, and dressing percentage.
On the contrary, the inclusion of 15% and 30% MLP in the diet of lambs maintained growth and carcass performance while increasing the total weight of the rumen and stomach. The 15% MLP supplementation proved to be the most promising concentration, maintaining growth, intake, and carcass performance while alleviating oxidative stress, improving blood metabolites, and improving the quality (redness) of the longissimus lumborum muscle [115]. Despite the nonsignificant effect on the meat's pH, the mRNA expression of Cu/Zn SOD and GPx against lipid peroxidation was elevated, which implied that MLP had the ability to enhance lamb meat's redness, quality, and shelf life.
According to Zhang et al. [116], M. alba leaf extracts can prevent oxymyoglobin and metmyoglobin oxidation to maintain refrigerated beef's color. In addition, leaf extract significantly reduced the peroxide and thiobarbituric acid reactive substances (TBARS) values and increased superoxide dismutase and glutathione peroxidase activities during beef storage, which suggested that mulberry leaves were able to reduce lipid oxidation. Overall, M. alba leaves are a good supplement for ruminants and pigs, as well as a good natural antioxidant to preserve the color and increase the quality and shelf life of meat, which are important in the food industry. Further research is needed to find a suitable MLP dietary level with a lower negative impact on pig growth, and a longer observation period is required to establish the maximum capability of M. alba leaf extract for meat quality and preservation, ensuring the appropriate amount needed for the desired shelf life and protecting the food-processing industry from economic losses.
M. alba fruit, on the other hand, was found to be rich in volatile compounds, with higher phenolic contents compared to other berries and mulberry varieties [38,117]. The fruit is a good source of pectin (4.75-7%), which has a wide range of food-industry applications as an emulsifier, thickener, stabilizer, gelling agent, and a fat or sugar replacement in low-calorie foods [118]. Due to its remarkable gel-forming property, the authors utilized pectin to produce jellies, jams, and preserves. They found that pectin promotes multiple biological activities, such as anti-inflammatory, antibacterial, antioxidant, and antitumour activities. Thus, pectin-containing M. alba fruit has a probable function as a food ingredient [89].
Currently, M. alba fruit is used in cooking, baking, and for dessert purposes due to its sweet taste and attractive bright color, which is ideal for sherbets, jams, fruit tarts, pies, jellies, teas, wines, and cordials [19]. The utilization of the fruit as a natural food colorant derives from its cyanidin and delphinidin glycoside anthocyanins. These are reddish-purple to purple pigments that possess high FRAP and DPPH antioxidant activities [47,119]. Previously, Natić et al. [119] isolated 14 anthocyanin compounds with a total of 45.42-208.74 mg cyn-3-glu/100 g frozen weight; the fruit anthocyanins were characterized more recently by Kim and Lee [47]. In jam processing, the color quality of M. alba jam was influenced by storage temperature. The jam stored at 5 °C possessed better color qualities (L* = lightness, a* = green to redness, b* = blue to yellowness) than the jam stored at 25 °C, but the difference was not statistically significant. In addition, during the four months of storage, no significant influence of light conditions on the quality and fluctuation of total anthocyanin and phenolic contents was observed [120]. This showed that mulberry jam was stable during storage. The high antioxidant and color-enhancing properties recommend M. alba and its anthocyanins as natural, functional food colorants, since they are safer to consume than synthetics and capable of delivering enhanced color quality with value-added properties to the product. However, studies of this colorant's stability in other foods such as jellies, puddings, or yogurts should be conducted to reinforce the coloring efficiency. To our knowledge, no one has conducted further analysis of the antioxidant capacity, or of other biological benefits, of an M. alba anthocyanin-containing finished product, despite the proanthocyanidins' significant antimicrobial effects in the gastrointestinal tract and inhibition of sugar-, protein-, and fat-uptake-related enzymes [121], which are valuable in products targeting consumers concerned about calories or body weight. Therefore, such studies on M. alba anthocyanins and their stabilization methods will determine their benefits as functional food ingredients.
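To make such color-quality comparisons concrete, the sketch below computes the CIE76 color difference (ΔE*ab) between two (L*, a*, b*) measurements. Whether the cited jam study used exactly this metric is an assumption, and all values are invented examples.

```python
# Hedged sketch: CIE76 color difference between two CIELAB measurements,
# e.g., jam stored at 5 °C versus 25 °C. All L*, a*, b* values are invented.
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

jam_5c = (32.5, 8.1, 2.4)    # example (L*, a*, b*) after storage at 5 °C
jam_25c = (30.9, 7.2, 3.0)   # example values after storage at 25 °C
print(f"Delta E*ab = {delta_e_cie76(jam_5c, jam_25c):.2f}")
```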
Furthermore, the investigation of M. alba fruit's polyphenolic compounds (MFPs) in dried minced pork slices (DMPS) revealed a dose-dependent antioxidative activity during processing and storage [122]. The MFPs effectively inhibited oxidation of muscle protein and oxidation-induced texture deterioration, and reduced DMPS hardness during heating and storage. MFPs also significantly inhibited aerobic bacteria growth on DMPS and masked the odor and flavor of pork. Interestingly, MFP-added samples during storage and post-heating revealed higher redness (a*) values as compared to the preparation-stage samples and the control group. Meanwhile, an increment not surpassing the control group occurred for yellowness (b*) and lightness (L*) [122]. Similar results were found in MFP-added Cantonese sausages during 28 days of room-temperature storage in Xiang et al. [123]. The a* and L* elevations were suggested to be caused by anthocyanin degradation during heating and storage. Simultaneously, the rise in b* value was attributed to the oxidation of ferrous heme iron induced by lipid oxidation products. The samples' hardness increased appreciably, owing to the polyphenols' physicochemical modifications during storage. Moreover, MFPs at 1.0 g/kg provided protection against lipid- and protein-oxidation-induced damage, based on lower TBARS and carbonyl contents, together with a higher level of sulfhydryl groups than in sausages without MFPs. Aside from the effective control of total volatile basic nitrogen (TVB-N) and microbial stability, MFPs also significantly reduced residual nitrite, which indirectly reduced the production of carcinogenic nitrosamine, supporting the biosafety of MFPs [123]. In summary, MFPs are a promising natural antioxidant with effective protection against TVB-N increase and lipid and protein oxidation, and they could also effectively regulate Cantonese sausages' biosafety. While this study can serve as a preliminary reference for research on other meat-based products, given the slight unfavorable color defect, further research should prioritize harnessing the antioxidant function to achieve better sensory and quality properties.
A recent study added M. alba leaves and fruit, respectively, to liquefied and creamed rape honey to assess their effects on the enriched honey's phenolic profile, antioxidant activity, and glycoside hydrolysis activities [124]. Rape honey is naturally deficient in polyphenols, but its antioxidant activity was enriched and diversified by 70- and 7-fold after the addition of M. alba leaf and fruit extracts, respectively. A similar increase also occurred in M. alba-enriched creamed rape honey. Furthermore, dose-dependent elevations of α-glucosidase and β-galactosidase activities were observed in leaf-enriched creamed honey. Meanwhile, in M. alba-fruit-enriched honey, a slight decrease in α-glucosidase and a non-dose-dependent increase in β-galactosidase occurred. The addition of M. alba, especially the leaves, reduced diastase activity by about 50%, suggesting an inhibitory effect against carbohydrate-hydrolyzing enzymes [124]. This is advantageous, as it slows down the loss of the diastase enzyme in honey, which allows longer storage while still retaining a major amount of beneficial enzymes.
The same compound-enrichment effect was observed when M. alba (leaf extract + fruit) was added to baked bread (MAB), as studied by Kobus-Cisowska et al. [125]. They reported significantly high total phenolic acids, flavonols, and antiradical activities in the bread after baking and after 30 days of frozen storage. The MAB also revealed substantially high isoquercetin, chlorogenic acid, and protocatechuic acid contents. Positive correlations of the sums of phenolic acids and of flavonols with DPPH inactivation, ABTS scavenging, and reducing-power activities were reported. These high values show significant M. alba functionality in enhancing the health benefits of foods for consumption. MAB also revealed higher sweet and insipid scent intensity, despite its grassier and bittersweet taste compared to the control bread's sour and salty taste. MAB's bitterness is due to M. alba's polyphenols, such as vanillic and ferulic acids, which contribute a bitter, bean-like taste, while the flavonol glycosides affect bitter and sour taste. Despite the differences, in sensory profiling MAB received a similar level of scent desirability and higher taste desirability compared to bread without M. alba enrichment. DPPH and ABTS activities, reflecting the continued activity of the responsible antiradical compounds, were stable throughout 30 days of frozen storage in another study on MAB [125]. However, changes in the intensity of frozen MAB characteristics did occur; its sweet scent and grassy taste declined, yet its grassy scent and sweet taste intensified. From these data, M. alba leaf and fruit extracts could be used as natural food fortifiers in bread, as they enhanced the nutrition level, bioactive compounds, and antiradical activity without negatively influencing the bread's sensory and microbiological qualities. Nevertheless, a prior analysis of the dough's bioactive compound and antiradical levels should have been conducted in this study in order to quantify the percentage of compound and antioxidant-activity loss after baking. Despite the positive results from various studies, these data are still far from enough to solidify a food-industry function. A broader food-variety study, concise shelf-life data for products, and analysis of M. alba extracts' effects, abilities, and optimal preservation of shelf life are some aspects for further exploration.
Essential oils (EOs) are plant-based aromatic oily liquids usable in several industries, including the food industry, as functional flavoring substances due to their health benefits. They are also integrated into food packaging to increase food shelf life due to their antimicrobial and antioxidative activities [126]. Despite EOs' increasing popularity, only a couple of studies have been conducted on the compound constitution of M. alba EOs. Zhi-ming et al. [127] and Radulović et al. [128] found some volatile compounds in M. alba leaf EOs with noteworthy biological effects. These compounds mainly consist of terpenoids, phytol, heptacosane, hexahydrofarnesyl acetone, hexadecanoic acid, β-bisabolene, carotenoid derivatives, and geranyl acetone, and are used as food additives and flavor enhancers. For example, phytol has antioxidant, anti-inflammatory, and antinociceptive activities [121,129]. Hexadecanoic acid is well known for its anti-inflammatory [130] and cytotoxic potential [131], whereas β-bisabolene possesses a strong cancer cytotoxicity effect [132] and has a balsamic odor that led to its approval as a food additive. Heptacosane has been used as a nutritional supplement and sweetener to mend aftertaste [133]. Meanwhile, terpenoids are used for their medicinal, flavor-enhancing, and fragrance properties in various industries [134]. Carotenoid derivatives find similarly wide application and are also extensively utilized as food colorants and provitamins, and as fortifiers to introduce their health benefits into food [135]. Nonetheless, no specific analysis of M. alba EOs or their compounds' potentials has been conducted before. Therefore, further studies of their biological properties and applications as functional food additives are needed.
Leaf EOs aside, M. alba-fruit-derived oil revealed rich fatty-acid contents, including linoleic acid (58.89% DW), palmitic acid (12.46% DW), oleic acid (11.87% DW), and stearic acid (5.67% DW) [136]. Moreover, M. alba seed oil revealed a high content of total phenolics, total flavonoids, tocopherols (1644.70-2012.27 mg/kg oil), and fatty acids such as linoleic acid (75.83-81.1%), palmitic acid (8.80-9.51%), oleic acid (5.64-10.38%), stearic acid (3.96-5.01%), linolenic acid (0.41-0.45%), and myristic acid (0.07%) [53][54][55]. The high amount of bioactive compounds in the seeds contributed to their high DPPH (62-72%), ABTS (0.026-0.072 mmol/g), and FRAP (0.15-0.68 mmol Fe2+/g) abilities, as shown by the strong correlations of total phenolic content with DPPH and FRAP (r = 0.965 and r = 0.951, respectively) [54]. Moreover, the abundant flavor and volatile compounds in the leaf EOs could offer antioxidant [137], antifungal [138], and antibacterial [139] effects in medicine when utilized as a flavoring agent, fragrance enhancer, antimicrobial agent, and food preservative for pulses, grains, cereals, fruits, and vegetables in the food industry [140]. Meanwhile, M. alba seed oil could be a good source of tocopherol and essential fatty acids, especially linoleic acid, an omega-6 PUFA that can help decrease serum cholesterol and inhibit arterial thrombosis formation [141]. Therefore, the oil derived from M. alba leaves, fruit, and seeds contains many beneficial compounds and is potentially applicable in the food industry as a functional food ingredient. Nonetheless, one of the main complications in utilizing an oil is the quantitative or qualitative diversity of its compound content, which can cause variable biological efficacy [142]. EOs and oils carry a strong aroma, which might restrict their application due to the complex matrices and different interconnecting microenvironments of foods. Adequate effectiveness may also require levels of natural flavor complexes that exceed the organoleptically acceptable range, affecting the natural taste of food and human health [126,143]. The limited number of studies and data on M. alba-derived EOs and oils are far from enough to solidify M. alba's potential in any industry. Therefore, further studies that isolate more of their compounds, and analyses of M. alba-derived EOs and oils for their nutrition, biological activities, potential applications, effectiveness, toxicity, and stability in the food system, are needed to solidify their place as functional food ingredients.
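For reference, correlation coefficients like the r = 0.965 and r = 0.951 cited above are Pearson correlations between paired measurements; a minimal sketch follows. The paired values below are invented, not the published seed-oil data.

```python
# Minimal sketch: Pearson correlation between total phenolic content and an
# antioxidant assay readout. All paired measurements are invented examples.
import numpy as np

tpc = np.array([12.1, 14.8, 16.3, 18.9, 21.4])    # total phenolics (example units)
dpph = np.array([62.0, 64.5, 66.8, 69.9, 71.7])   # DPPH scavenging, % (example)

# Pearson r: covariance of the two variables divided by the product of
# their standard deviations; np.corrcoef returns the correlation matrix.
r = np.corrcoef(tpc, dpph)[0, 1]
print(f"Pearson r = {r:.3f}")
```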
Microencapsulation of Morus alba
M. alba, with its biologically beneficial polyphenolic compounds, is a suitable functional food ingredient. Unfortunately, the poor stability and heat and light sensitivity of phenolic compounds are a widely known weakness [144]. There is evidence that the stability of M. alba polyphenols can be destroyed by environmental factors, including pH, temperature, and oxygen [145]. These downsides have limited their commercial applications, especially for hot-processed foods, as reported in Cheng et al. [146] and their previous studies. Many studies have investigated methods to stabilize and protect polyphenols from degradation. Microencapsulation is an effective technique to ameliorate the unstable physicochemical properties of a substance or compound, including its solubility and dispersibility [147]. Xu et al. [145] stated that M. alba polyphenol (MP) stability in dried minced pork slices was significantly enhanced after gum Arabic microencapsulation with a core/wall ratio of 1:90. After 20 days of storage, the microencapsulated polyphenol (MMP) pork slices showed lower losses of total phenolic, flavonoid, and anthocyanin contents (15.10%, 16.87%, and 19.88%, respectively), with higher recovery (by 1.35-fold, 1.26-fold, and 1.16-fold, respectively). Moreover, both MP and MMP exhibited a synergistic inhibitory effect on protein and lipid oxidation, with MMP showing better oxidation and color stability. In addition, Cheng et al. [146] found that 1% β-cyclodextrin MMP significantly improved the dried minced pork slices' recovery of total phenolic, flavonoid, and anthocyanin contents as compared to the nonencapsulated product, by 5.6%, 5.8%, and 18.6%, respectively. Furthermore, as compared to the control, the MP-added pork slices showed 1.03-fold higher FRAP and 1.18-fold higher ABTS•+ scavenging ability, whereas the β-cyclodextrin MMP showed 1.2-fold and 3.58-fold increases, respectively. These positive effects were attributed to β-cyclodextrin's protective ability toward phenolic compounds. However, the degradation of certain compounds, such as quercetin, cryptochlorogenic acid, and caffeic acid, was not effectively prevented, presumably due to differences in structure, thermochemical stability, and reaction mode between these compounds and β-cyclodextrin.
The promising results for phenolic-compound recovery were strongly supported by the β-cyclodextrin microencapsulation of MP, which showed optimal encapsulation efficiency at a core/wall ratio of 1:6 with ultrasound treatment at 450 W and 25 °C for 90 min. Under the optimized parameters, the processing stability of MMP, including thermal, light, and storage stability, was significantly enhanced. Simultaneously, the encapsulation efficiencies of MMP for total polyphenol, flavonoid, and anthocyanin contents were higher than 97%, verifying the success of MP encapsulation in ameliorating the unstable properties of M. alba's polyphenols [144]. Gum Arabic and β-cyclodextrin are credible encapsulating agents for M. alba polyphenols to add higher nutritional value to food. The results create a solid basis for further MMP studies on their natural-pigment and biopreservative abilities in other thermally processed products when substituted for synthetics. However, detailed laboratory measurements with evaluations of color and sensory characteristics should be the focus of future studies. Moreover, the bioavailability of MMP and its synergistic inhibitory effect on protein and lipid oxidation in functional meat products must be further studied in larger-scale production to accommodate MMP integration at the commercial scale.
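For orientation, encapsulation efficiencies like the >97% figures above are conventionally computed as the fraction of the loaded compound retained inside the capsules; the sketch below uses this standard definition. Whether the cited study used exactly this formula is an assumption, and the numbers are invented.

```python
# Hedged sketch of a standard encapsulation-efficiency calculation:
# EE% = (total loaded - unencapsulated fraction) / total loaded * 100.
# All quantities are invented examples.
def encapsulation_efficiency(total_mg: float, free_mg: float) -> float:
    """Percent of compound retained inside the microcapsules."""
    if total_mg <= 0:
        raise ValueError("total amount must be positive")
    return (total_mg - free_mg) / total_mg * 100.0

# Example: 50 mg polyphenol loaded, 1.2 mg recovered unencapsulated
print(f"EE = {encapsulation_efficiency(50.0, 1.2):.1f}%")  # -> EE = 97.6%
```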
Industrial Scale-Up
Assimilation into industrial and commercial production involves a larger production scale to accommodate market demand, hence the need for upscaling, which proceeds from the bench or laboratory scale to the pilot scale, and finally to the demonstration scale for commercial demand [148]. The bench or laboratory plant is an early-stage study that provides a sample's successful assessment, clinical trials, and system parameters. This stage provides initial upscaling data and factors before following up with efficient pilot-plant production. However, the procedures and materials operated at the laboratory scale are usually not practical on a larger scale. Unsuitable scale-up factors, policies, and parameters often critically influence products' values, properties, and volumes, leading to energy and economic losses [148]. The upscaling process requires considerable time, effort, experience, in-depth research, and high capital before its successful implementation. These difficulties are the reason why upscaling information is scarcer than laboratory-scale studies of the same samples.
So far, only a handful of the attempted pilot-scale studies have been successful, while most require further modification and trials. In Flaczyk et al. [149], the authors obtained about a 4% lower M. alba leaf-extraction yield than at the laboratory scale. The study also reported lower protein (by 13.61%), glucose (by 38.78%), phenolics (by 44.94%), phenolic acids (by 53.03%), and flavonols (by 11.36%), as well as lower DPPH (by 35.72%) and ABTS (by 20.1%) activities. Nonetheless, at the pilot scale they found higher concentrations of saccharose and galactose, by 23.77% and 23.68%, respectively. These differences were due to intermittent losses during pilot-scale processing, as well as the different extraction and concentration conditions between the two scales. Therefore, further analysis and determination of qualitative indicators are necessary for the optimization process.
Gang et al. [150] obtained preliminary data on the optimal extraction of polysaccharides from M. alba leaves. This study determined the optimized temperature, sample-to-solvent ratio, and extraction time via a Box-Behnken design (BBD) coupled with response surface methodology (RSM). Extraction at 76.7 °C with a 1:20.6 sample-to-solvent ratio for 1.87 h was reported to produce the highest polysaccharide yield (105.57 mg/g), consistent with the predicted value (106.64 mg/g). A recent pilot-plant study reported the optimal extraction conditions for DNJ from M. alba leaves [151]. The RSM exploration showed the best DNJ extraction value (31.62 mg/g) at 79.2 °C with a 1:9.82 sample-to-solvent ratio for 1.46 h. This value was higher than for the small-scale extraction process. These studies showed that extraction-parameter optimization is crucial in obtaining the best outcome, and that RSM, or BBD coupled with RSM, is a reliable method for optimization of upscaled extraction. The data obtained on the optimized extraction of M. alba leaf polysaccharides and DNJ can be used as a foundation for industrial applications of M. alba.
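To illustrate the RSM idea behind these optimizations, the sketch below fits a second-order polynomial to extraction responses and solves for the stationary point. Only two factors (temperature and time) are shown, and all design points are synthetic examples; the cited studies used full Box-Behnken designs with dedicated software.

```python
# Hedged sketch of response surface methodology: fit a quadratic model
# y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t to extraction yields and
# locate the stationary point. All design data are synthetic examples.
import numpy as np

# Synthetic design points: (temperature °C, time h) -> yield (mg/g)
X = np.array([[60, 1.0], [60, 2.0], [80, 1.0], [80, 2.0],
              [70, 1.5], [70, 0.8], [70, 2.2], [58, 1.5], [82, 1.5]])
y = np.array([80.0, 88.0, 92.0, 90.0, 105.0, 86.0, 95.0, 84.0, 96.0])

T, t = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point: set both partial derivatives to zero and solve the
# resulting 2x2 linear system (a maximum if the fitted surface is concave).
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
T_opt, t_opt = np.linalg.solve(H, g)
print(f"Predicted optimum: {T_opt:.1f} °C, {t_opt:.2f} h")
```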
Sample-processing parameters aside, the equipment and types of machinery used at the pilot scale also affect the process and outcome. Komaikul et al. [152] found that both stilbenoid and mulberroside A production from an M. alba cell culture were affected by the use of different types of bioreactors. The study revealed that round-bottom bioreactors could produce three times more biomass than flat-bottom bioreactors, though no significant difference in mulberroside A production was found. However, significantly higher mulberroside A was obtained from shake-flask-cultured cells without an air-driven system compared to air-driven bioreactors. Meanwhile, stilbenoid production was favored in smaller air-driven bioreactors (1 L) with low aeration rates. In sum, the production of mulberroside A and stilbenoid in M. alba cell cultures can be affected by factors such as biomass circulation, aeration, and endogenous enzymatic hydrolysis. Therefore, a study focusing on target-compound stabilization and reduction of aeration-induced negative effects would be a valuable approach for M. alba at the pilot-plant scale. Further studies and optimization of M. alba pilot-plant processing should include factors such as formula standardization, chemical equilibrium, product-quality maintenance, thermodynamics, processing equipment, and energy consumption to ensure M. alba's successful application at the industrial level.
Conclusions and Future Perspectives
Taken together, M. alba leaves and fruit have a high content of bioactive compounds such as phenolic acids, flavonoids, flavonols, anthocyanins, macronutrients, vitamins, and volatile aromatic compounds. These compounds contribute significantly to the properties of M. alba in preventing and treating conditions such as oxidative stress, diabetes, hyperlipidaemia, neurological disorders, microbial infections, and cancer. Numerous studies have been conducted on M. alba fruits and leaves, especially to determine their bioactive compounds, pharmaceutical potential, and toxicity effects. However, limited studies have been done on M. alba seeds. Therefore, it is essential to expand knowledge of the seeds so that the full potential of M. alba can be realized.
There are significant knowledge gaps for M. alba leaves and fruit regarding their working mechanisms, bioavailability, and biochemistry in the human body. Moreover, the multiple phytochemicals and bioactive compounds contained in different M. alba cultivars, maturity stages, extraction conditions, and methodologies have produced heterogeneous data. This has led to difficulty in evaluating and standardizing their activities. For that reason, larger-scale trials with well-designed and uniform parameters will provide substantial input in obtaining homogeneous data, which would benefit further understanding of the in vivo effects of M. alba leaves and fruit on human health, including their bioavailability and biological effects with respect to enzymes, pH, and especially the gut microbiota.
There is limited information available on the application of M. alba in the food industry as a functional food ingredient, despite its current presence in commercial products. Studies have found that M. alba has promising prospects for functional food applications. Its flavonoids and phenolic acids can act as flavorants, food fortificants, antioxidants, preservatives, and antimicrobial agents against bacteria and fungi, while its anthocyanins could be used as natural antioxidative food colorants to substitute for synthetic colorants. Given the wide variety of foods, more studies on a broader range, such as frozen, ready-to-eat, or processed foods, are necessary to understand the reactions and impact of M. alba on other food ingredients, as the food system itself is very complex. Within this vast food-application potential, research emphasizing the effect of processing and storage on the content and stability of bioactive compounds in M. alba-based foodstuffs is needed for a better understanding of the bioactive compounds' role and interactions in the food matrix to enhance food quality and properties.
Additionally, regarding the degradation of heat-sensitive bioactive compounds, microencapsulation techniques have been proven to naturally protect and significantly enhance M. alba polyphenols' stability and properties. This result could establish a factual basis for further study of their utilization in heat-processed products and better industrial-processing performance, thus expanding the application of M. alba in various food products. However, integration into industry requires scaling up from the laboratory scale to the pilot plant, and subsequently to the industrial plant, to reach a production scale that ensures demand can be met. Regrettably, studies on the pilot scale-up process for M. alba leaves and fruit are still very scarce. This is possibly due to the difficulty of implementing bioprocess scale-up, which requires considerable time, effort, experience, in-depth research, and high capital. Unsuitable scale-up factors, procedures, and parameters would critically influence product values, properties, and volumes, leading to undesirable energy and economic losses for a company. In conclusion, M. alba indeed possesses beneficial biological properties and is suitable for inclusion as a functional food ingredient. Nevertheless, the exploitation of M. alba's functionality and properties within the industrial field is still scarcely explored.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article. | 2021-03-30T05:11:48.390Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "6624d44959882ead97125c62aff4bd470dbc7ab8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/10/3/689/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6624d44959882ead97125c62aff4bd470dbc7ab8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220383749 | pes2o/s2orc | v3-fos-license | Cultivation With Powdered Meteorite (NWA 1172) as the Substrate Enhances Low-Temperature Preservation of the Extreme Thermoacidophile Metallosphaera sedula
Recent studies have uncovered a vast number of thermophilic species in icy environments, permanently cold ocean sediments, cold sea waters, and cool soils. The survival of thermophiles in psychrobiotic habitats requires thorough investigation of the physiological and molecular mechanisms behind their natural cryopreservation. Such investigations are mainly impeded by the restricted cultivation of thermophiles at low temperatures under laboratory conditions. Artificial culture media used under laboratory conditions usually fail to support cultivation of thermophiles in the low-temperature range. In this study we cultivated the extreme thermoacidophilic archaeon Metallosphaera sedula with preliminarily powdered and sterilized multimetallic extraterrestrial mineral material (the meteorite NWA 1172) under a low-temperature regime in laboratory conditions. Our data indicate that M. sedula withstands cold stress and can be maintained at low temperatures when supplemented with the meteorite NWA 1172 as the sole energy source. Cultivation with the meteorite NWA 1172 reveals new, previously unknown psychrotolerant characteristics of M. sedula, emphasizing that culture conditions (i.e., the "nutritional environment") may affect microbial survival potential in stress-related situations. These observations facilitate further investigation of the strategies and underlying molecular mechanisms of the survival of thermophilic species in permanently cold habitats.
INTRODUCTION
Diverse extremophilic microorganisms have been discovered in habitats characterized by parameters that go beyond the range of their physiological activity. For example, a variety of heat-loving microbial species have been isolated from low-temperature environments across the globe. Recent independent investigations have uncovered a vast number of thermophilic species in icy habitats (Bulat et al., 2004; Lavire et al., 2006; Bulat, 2016; Papale et al., 2019; Gura and Rogers, 2020); permafrost environments (Gilichinsky et al., 2007; Demidov and Gilichinsky, 2009; Mironov et al., 2010; Shcherbakova et al., 2011); cool soils with temperatures constantly below 25 °C (Marchant et al., 2002, 2008, 2011; Rahman et al., 2004; Zeigler, 2014); "arctic thermophiles" in permanently cold (−2 to 4 °C) ocean sediments (Hubert et al., 2009, 2010; de Rezende et al., 2013; Müller et al., 2014; Robador et al., 2016; Bell et al., 2018, 2020; Chakraborty et al., 2018); and non-spore-forming hyperthermophiles in cold (2-4 °C) seawater (Huber et al., 1990; Wirth, 2017). How these thermophilic species, abundantly represented in permanently cold habitats, can tolerate low temperatures significantly below their minimum requirement for growth is an intriguing subject of current investigations. Thermophilic spore-forming bacteria in the marine sediments of Svalbard have been repeatedly reported (Vandieken et al., 2006; Hubert et al., 2009, 2010; Cramm et al., 2019). Their endospores, with specialized cellular features that protect cells from extremely harsh environmental factors, contribute to the survival of these thermophilic bacteria in Arctic sediments. Black smoker-associated hyperthermophiles utilize both adherence to suitable surfaces and fast motility as the driving forces for survivability in cold seawater for prolonged periods of time (Mora et al., 2014; Wirth, 2017). Wiegel (2002) proposed the hypothesis of temporary nanoniches in mesobiotic environments. Such temporary nanoniches provide short-term, limited conditions for alkaliphilic thermophilic bacteria and therefore dictate a fast growth rate to cope with this limitation (Wiegel, 2002). Icelandic basaltic and rhyolitic glass and minerals in a sub-Arctic environment, with temperatures below the range required for thermophile activity, were proposed to support transient growth of thermophiles in summer months (Cockell et al., 2015). The low albedo of these rocks has been suggested to facilitate their thermal conductivity and support the function of these igneous materials as microclimatic environments harboring thermophilic microbial communities (Kelly et al., 2010, 2011; Cockell et al., 2015). Apparently, these nanoniches inside the rock pores provide a certain potential for the temporary "re-awakening" of such communities during warm periods. On the molecular level, it has been suggested that microorganisms capable of growth in the temperature range of both thermobiotic and mesobiotic environments may contain two different sets of key enzymes whose synthesis is regulated by temperature (Wiegel, 1990).
Understanding the freezing tolerance and survival limits of thermophiles in low-temperature habitats is also crucial for studies of microbial transfer through space and between celestial bodies, e.g., in the context of lithopanspermia. One of the scientific concepts for the origin and distribution of life is the long-distance, shielded, viable interplanetary transfer of the most ancient microbial forms of life entombed in lithic habitats (Mileikowsky et al., 2000; Fajardo-Cavazos et al., 2007; Horneck et al., 2008; Nicholson, 2009; Onofri et al., 2012; Kawaguchi et al., 2016). Extreme temperature fluctuations affect microbial "space travelers" during their interstellar transfer. Knowledge of the thermal limits of microbial life embedded in extraterrestrial mineral materials is crucial for envisaging microbial survival during all of lithopanspermia's stages (microbial launching due to impact ejection from the planet of origin, long-term interplanetary traveling of microbes inside rocks, and the capture of new life by the recipient planet). Furthermore, microbial cold stress experiments are helpful for interpreting possible spatial and temporal environmental micro(nano)niches suitable for microbial life on Mars. Such near-surface microenvironments on Mars could provide potentially habitable niches protected not only from UV but also from extreme temperature fluctuations. Chin et al. (2010) indicated that chaotropic metabolites and chaotropic environments can increase tolerance to subzero temperatures, extending growth windows in cold ecosystems. Chaotropic ions in the Mars regolith might form microenvironments that support a potential Martian biosphere, favoring the growth and preservation of a microbiota at low temperature. Comprehensive laboratory investigations of the survival limits of thermophiles in low-temperature habitats can help to address the question of whether there are habitable niches on Mars today and whether they harbor life.
However, our understanding of the physiology of how non-spore-forming thermophiles adapt to the cold is far from explicit. The major limitation in the field comes from the restricted cultivation of thermophiles at low temperatures under laboratory conditions. Artificial culture media used under laboratory conditions usually fail to support cultivation of thermophiles in the low-temperature range. One of the rare exceptions is Geobacillus thermoleovorans strain T80, which was cultivated at 4 °C over a long-term period (9 months) and supported in soil microcosms at low temperatures in short-term experiments (1 week; Marchant et al., 2008). Wiegel (1990) described that the extreme thermophile Methanobacterium thermoautotrophicum can grow between 22 and 78 °C, and the addition of sterile anaerobic sediments permitted its incubation at lower temperatures. Studies under laboratory conditions show that hyperthermophiles can survive at least 9 months in cold surroundings when stored in low-temperature seawater (Mora et al., 2014). In this data report, we show that the extreme thermoacidophile Metallosphaera sedula is among the very few thermophiles that can be supported at cold temperatures under laboratory conditions. Supplementation with the stony meteorite NWA 1172 permits the preservation of the thermoacidophile M. sedula under a low-temperature regime and provides further possibilities to study the molecular machinery and mechanisms implicated in the survivability of non-spore-forming thermophiles at deep subfreezing temperatures.
RESULTS AND DISCUSSION
The heat-, acid-, and heavy metal-resistant M. sedula represents a robust microbiological subject for stress-related investigations, with a number of studies published (Peeples and Kelly, 1995; Beblo et al., 2009, 2011; Maezato et al., 2012; Mukherjee et al., 2012; McCarthy et al., 2014; Milojevic et al., 2019a). At the same time, there is an evident gap in knowledge regarding the cold stress reactions of this extreme thermophilic archaeon. M. sedula has been described as a well-defined, obligate thermophile which requires a temperature range from 50 to 80 °C for growth, with an optimum of 73 °C (Huber et al., 1989). However, the sediments from Pisciarelli solfatara, a volcanic field near Naples, Italy, where M. sedula was first isolated, are much cooler, between 25 and 52 °C (Huber et al., 1989). Employing standard culture techniques and evaluating microbial growth, it is evident that none of the validated terrestrial energy sources (chalcopyrite, pyrite, and other inorganic electron donors) support the cultivation of M. sedula in this lower temperature range (Huber et al., 1989). However, chemolithoautotrophic cultures of M. sedula in the presence of preliminarily sterilized meteorite material (the stony H5-type chondrite NWA 1172; Russell et al., 2002; Milojevic et al., 2019b) as the sole energy source permitted the maintenance of M. sedula for 2 months at an average temperature of 12 °C (Table S1). Ramping down the cultivation temperature of M. sedula to this cold temperature regime was achieved by providing a regularly exchanged ice-supplemented environment for the glass bioreactors (Figure 1). Similar incubation on sulfide ores did not yield detectable cells after 2 months (Figure S1). Examination by multi-labeled fluorescence in situ hybridization (MiL-FISH) with an M. sedula-specific 16S rRNA-targeted probe confirmed the identification of cells from the cultures supplemented with the meteorite at the cold regime as M. sedula cells (Figure 2 and Figure S2). Furthermore, sequencing of the functional and phylogenetic M. sedula marker gene msed0966 (putative rusticyanin gene; Kelly, 2008, 2010a,b) as well as its 16S rRNA gene confirmed the presence of M. sedula in low-temperature cultures (Figure S3). Additionally, the content of the cultures was analyzed by scanning electron microscopy (SEM) (Figures 3A-C and Figure S4). Interestingly, our SEM observations of cold-maintained M. sedula indicated the presence of an extracellular matrix evenly spread over the cell surface, appearing as a layer of cellular appendages wrapping around the colonies of M. sedula similar to a biofilm layer (Figure 3 and Figure S4). In conjunction with these observations, when analyzing the cold-maintained cultures by fluorescence microscopy, slightly autofluorescent, dense, and opaque formations were observed (Figures 2A-C), which can be inferred to be the same aforementioned extracellular matrix. A branched extracellular matrix has not been detected in M. sedula cultures grown at 73 °C on terrestrial minerals (Blazevic et al., 2019; Milojevic et al., 2019a) or on NWA 1172 (Milojevic et al., 2019b), and might represent an adaptive feature of this thermophilic archaeon for withstanding stressful cold conditions. Another property of cold-maintained cells of M. sedula is their tendency to clump and condense into cellular aggregates (Figure 3 and Figure S4), which is an additional possible strategy for coping with low temperatures.
Our study indicates that the utilization of a specific mineral source and/or nutrients can influence resistance to cold stress, contributing to the preservation of microbial cells in the cold. The characteristic inability of thermophilic M. sedula to be preserved in common laboratory media at temperatures below 50 °C can be attributed to deficiencies in these media, e.g., in certain essential metabolites/metals that are indispensable for the cultivation of this organism at low temperatures but are not required at 73 °C. Our previous studies showed the beneficial contribution of NWA 1172 as the sole electron donor, with a superior growth rate of M. sedula over chalcopyrite, suggesting the preferential nature of this multimetallic material as an energy source for M. sedula (Milojevic et al., 2019b). The NWA 1172 stony meteorite is a non-carbonaceous H-type ordinary chondrite with high iron abundance (largely present in metallic form; Russell et al., 2002) and a wide range of other metal elements (Milojevic et al., 2019b). These metals might alternatively be used by M. sedula as specific metabolic cofactors offering more optimal structural and/or constitutive elements for enzyme activities, e.g., altering protein structural flexibility and shaping protein biophysical and biochemical properties.
A plausible explanation of the observed resistance of NWA 1172-grown M. sedula to cold stress can be inferred from heavy-metal chaotropicity, which extends the life window at low temperatures (Chin et al., 2010). Heavy metals may chaotropically enhance macromolecular flexibility (Chin et al., 2010; Cray et al., 2012) and so enable cell function and help maintain cellular structure at low temperatures. By exerting chaotropic activity, heavy metals can cause specific toxic effects, inhibiting growth even at low concentrations in susceptible microorganisms. However, the studied archaeon M. sedula is an extremely metallophilic microorganism that tolerates elevated heavy-metal concentrations (Maezato et al., 2012; Mukherjee et al., 2012; Blazevic et al., 2019; Milojevic et al., 2019a). Cultivation on the multimetallic NWA 1172 meteorite (see Milojevic et al., 2019b for the heavy-metal composition of NWA 1172) exposes M. sedula to higher heavy-metal concentrations under physiological conditions, thus helping to increase psychrotolerance. It is important to note that the effect of chaotropes that enhance microbial activity and growth at low temperatures appears to be initiated at around 10-12 °C (Chin et al., 2010), which is the temperature range in our study. Thus, heavy-metal chaotropicity may provide a mechanistic understanding of the increased psychrotolerance of M. sedula grown on NWA 1172.
CONCLUSION
In this study we report that, when cultivated with meteorite but not with chalcopyrite, cells of M. sedula can be preserved under a cold temperature regime, denoting that optimized nutrient fitness favors microbial preservation under extreme stress conditions. Cultures of M. sedula maintained at low temperature may serve as a laboratory model to explore the metabolic potential of non-spore-forming thermophiles in psychrobiotic environments and the molecular mechanisms behind natural cryopreservation. It is important to note that future follow-up investigations should deliver a systematic study to determine which constituents of the meteorite are the crucial ingredients ensuring M. sedula culture preservation at low temperatures. In this regard, the survivability of M. sedula at low temperatures is a topic that deserves more attention and thorough analysis in the future. Moreover, the fact that an ancient inhabitant of terrestrial thermal springs and extreme thermophilic chemolithotroph M. sedula can be maintained at cold temperatures may complement the ongoing scientific debates concerning the psychrophilicity or thermophilicity of the last universal common ancestor (Wächtershäuser, 1992; Bada et al., 1994; Bada and Lazcano, 2002; Stetter, 2006; Akanuma et al., 2013), adding new circumstances in terms of the survivability of ancestral thermophiles in cool primordial environments.
Cultivation Setup
Chemolithoautotrophic cultivation of M. sedula was performed as described before (Blazevic et al., 2019; Milojevic et al., 2019a,b) in the DSMZ88 Sulfolobus medium defined above, in 1 L glassblower-modified Schott-bottle bioreactors (Duran DWK Life Sciences GmbH, Wertheim/Main, Germany) operated with a thermocouple linked to a heating and magnetic stirring plate (IKA RCT Standard/IKA C-MAG HS10, Lab Logistics Group GmbH, Meckenheim, Germany) for agitation and temperature control. The cultivation of M. sedula under the cold temperature regime was achieved by providing a regularly exchanged ice-supplemented environment for the fermentation glass bioreactors (Figure 1). For this, the bioreactors were placed in plastic vessels filled with crushed ice and additionally supplemented with a tap-water heat exchange system, consisting of 10 mm inner diameter silicon tubes connected to a tap water supply. Melting of ice due to room temperature fluctuations resulted in an established dynamic temperature profile of M. sedula cultures with an average temperature of 12 °C, which was monitored with an electronic thermocouple inside the bioreactor and with a thermometer outside the bioreactor. For fermentation at the cold temperature regime, the medium was maintained at room temperature for 2 h prior to inoculation. Directly after inoculation, the glass bioreactors were provided with the ice-supplemented environment and water heat exchange system in order to maintain the cold temperature regime. Inocula used were exponentially growing H2-oxidizing autotrophic cultures of M. sedula. For chemolithoautotrophic growth, cultures were supplemented with 10 g/liter of either chalcopyrite (provided by E. Libowitzky from the mineral collection of the Department of Mineralogy and Crystallography, University of Vienna) or NWA 1172 (provided by the NHM, Vienna). The minerals were ground and temperature-sterilized at 180 °C in a heating oven for a minimum of 24 h prior to autoclaving (121 °C, 20 min). Cells were monitored by phase contrast/epifluorescence microscopy. For the visualization of cells wiggling on solid particles, they were stained by a modified DAPI (4′,6-diamidino-2-phenylindole) procedure (Huber et al., 1985), observed, and recorded with a ProgRes MF cool camera (Jenoptik) attached to a Nikon Eclipse 50i microscope, operated with an F36-500 bandpass filter set (ex, 377/50 nm; em, 447/60 nm).
Scanning Electron Microscopy
Cells of M. sedula harvested at stationary phase were prepared for electron microscopy by fixing in a solution of 1% (v/v) glutaraldehyde in Na-cacodylate buffer. Samples were dehydrated in a graded series of ethanol solutions and dried chemically using hexamethyldisilazane (HMDS). Fixed samples were mounted on aluminum stubs, sputter-coated with Au, and examined with a Hitachi S-4100 SEM (Hitachi, Tokyo, Japan).
Multi-Labeled-Fluorescence in situ Hybridization (MiL-FISH)
Metallosphaera sedula cells were fixed in 2% (v/v) paraformaldehyde (PFA) at room temperature for 1 h, washed three times in distilled water, and centrifuged at 10,000 rpm. Fixed cells were mounted on 10-well Diagnostica glass slides (Thermo Fisher Scientific Inc., Waltham, USA) and MiL-FISH was conducted directly on them as described previously (Schimak et al., 2015; Kölbl et al., 2017; Milojevic et al., 2019b). Briefly, cells were hybridized with 30% (v/v) formamide for 16 h. As an experimental positive control, Gramella forsetii strain KT0803 was hybridized with the fourfold-labeled general bacterial probe EUB338. A positive control for the specificity of the phylotype-specific probe M.sedula_174 was provided by including M. sedula DSM5348 in all experiments. After hybridization, slides were washed for 15 min at 48 °C [14-900 mM NaCl, 20 mM Tris-HCl (pH 8), 5 mM EDTA (pH 8), and 0.01% (v/v) SDS] at a stringency adjusted to the formamide concentration used. Cells were counterstained by incubation for 10 min with 10 mg ml−1 DAPI, followed by rinsing in distilled water 3 times before CitiFluor (CitiFluor Ltd., London, England) mounting medium was applied to the slides with a coverslip. Fluorescence images were taken with an AxioCam MRm camera mounted on an Axioscope 2 epifluorescence microscope (Carl Zeiss AG, Oberkochen, Germany) equipped with an F36-525 Alexa 488 (ex, 472/30 nm; em, 520/35 nm) filter cube. Images were recorded with the PC-based AxioVision (release 4.6.3 SP1) imaging software.
DATA AVAILABILITY STATEMENT
All data generated for this study are included in the article/Supplementary Material.
AUTHOR CONTRIBUTIONS
ZZ, MS, and TM performed experiments and provided editorial input. All authors made extensive contributions to the analysis, acquisition, and interpretation of data provided in this report. All authors reviewed the report and accepted the final version of it. | 2020-07-08T13:09:50.331Z | 2020-07-08T00:00:00.000 | {
"year": 2020,
"sha1": "5d609212a03ddd6f2a8931f2734ac3e6f1d3ea32",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fspas.2020.00037/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "5d609212a03ddd6f2a8931f2734ac3e6f1d3ea32",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
15903001 | pes2o/s2orc | v3-fos-license | Notch3 Gene Amplification in Ovarian Cancer
Gene amplification is one of the common mechanisms that activate oncogenes. In this study, we used single nucleotide polymorphism arrays to analyze genome-wide DNA copy number alterations in 31 high-grade ovarian serous carcinomas, the most lethal gynecologic neoplastic disease in women. We identified an amplicon at 19p13.12 in 6 of 31 (19.5%) ovarian high-grade serous carcinomas. This amplification was validated by digital karyotyping, quantitative real-time PCR, and dual-color fluorescence in situ hybridization (FISH) analysis. Comprehensive mRNA expression analysis of all 34 genes within the minimal amplicon identified Notch3 as the gene that showed the most significant overexpression in amplified tumors compared with nonamplified tumors. Furthermore, Notch3 DNA copy number is positively correlated with Notch3 protein expression based on parallel immunohistochemistry and FISH studies in 111 high-grade tumors. Inactivation of Notch3 by both γ-secretase inhibitor and Notch3-specific small interfering RNA suppressed cell proliferation and induced apoptosis in the cell lines that overexpressed Notch3 but not in those with a minimal amount of Notch3 expression. These results indicate that Notch3 is required for proliferation and survival of Notch3-amplified tumors and that inactivation of Notch3 can be a potential therapeutic approach for ovarian carcinomas.
Introduction
Gene amplification is one of the key mechanisms in activating oncogenes in human cancer (1). Ovarian cancer is the most malignant gynecologic neoplasm. Each year, ∼16,000 women will succumb to this disease. In ovarian carcinomas, amplifications of cyclin E1 (2), Her2/neu (3), AKT2 (4), L-Myc (5), and Rsf-1 (6) have been reported. Because these amplifications occur only in a subset of tumors, it is expected that additional oncogenic amplifications will be identified. Recent developments in molecular genetic techniques that allow a genome-wide exploration of DNA copy number in cancer have provided investigators unprecedented opportunities to analyze the cancer genome in great detail. For example, single nucleotide polymorphism (SNP) arrays have recently been shown to be an effective tool in discovering amplified chromosomal regions (7,8). In the current study, we did SNP array analysis on 31 high-grade ovarian serous carcinomas purified from fresh clinical samples and identified a chromosomal region at 19p13.12 that is frequently amplified. One of the genes within this amplicon, Notch3, showed the most significant correlation between gene copy number and transcript expression in ovarian cancer, suggesting that Notch3 is a candidate oncogene in the chr19p13.12 amplicon.
Materials and Methods
Tumor specimens. For SNP arrays, tissue samples, including 31 high-grade and 7 low-grade ovarian serous carcinomas, were obtained from the department of pathology at the Johns Hopkins Hospital and the Norwegian Radium Hospital in Norway. Tumor cells were affinity purified with anti-EPCAM-conjugated beads. In addition, genomic DNA from 13 normal ovarian tissues was prepared for controls. For quantitative reverse transcription-PCR (RT-PCR), 89 frozen tumor tissues were used to extract RNA, and nine normal ovarian tissues were also included as controls. Acquisition of tissue specimens and clinical information was approved by an institutional review board (Johns Hopkins University) or by the Regional Ethics Committee (Norway).
SNP array. SNPs were genotyped using 10K arrays (Affymetrix, Santa Clara, CA) in the Microarray Core Facility at the Dana-Farber Cancer Institute (Boston, MA). A detailed protocol is available at the Core center web page. Briefly, genomic DNA was cleaved with the restriction enzyme XbaI and ligated with linkers, followed by PCR amplification. The PCR products were purified and then digested with DNaseI to a size ranging from 250 to 2,000 bp. Fragmented PCR products were then labeled with biotin and hybridized to the array. Arrays were then washed on the Affymetrix fluidics stations. The bound DNA was then fluorescently labeled using streptavidin-phycoerythrin conjugates and scanned using the GeneChip Scanner 3000.
dChip software version 1.3 was used to analyze the SNP array data as described previously (7,8). Data were normalized to a baseline array with median signal intensity at the probe intensity level using the invariant set normalization method. A model-based (PM/MM) method was used to obtain the signal values for each SNP in each array. Signal values for each SNP were compared with the average intensities from 13 normal samples. To infer the DNA copy number from the raw signal data, we used a Hidden Markov model (7) based on the assumption of diploidy for normal samples. Mapping information for SNP locations and cytogenetic bands was based on curation of Affymetrix and University of California Santa Cruz hg15. A cutoff of >2.8 copies in more than three consecutive SNPs was defined as amplification.
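A minimal sketch of this calling rule, not the authors' dChip implementation: the amplification search reduces to a run-length scan over the inferred per-SNP copy numbers, where the input list, default threshold, and function name below are illustrative assumptions.

```python
# Run-length scan implementing the stated cutoff: flag any run of more than
# three consecutive SNPs (i.e., at least four) whose inferred copy number
# exceeds 2.8. The input is assumed to be ordered along the chromosome.

def call_amplifications(copy_numbers, threshold=2.8, min_run=4):
    """Return (start, end) index pairs of qualifying amplified segments."""
    segments, start = [], None
    for i, cn in enumerate(copy_numbers):
        if cn > threshold:
            if start is None:
                start = i                      # open a candidate segment
        else:
            if start is not None and i - start >= min_run:
                segments.append((start, i - 1))
            start = None                       # close the candidate segment
    if start is not None and len(copy_numbers) - start >= min_run:
        segments.append((start, len(copy_numbers) - 1))
    return segments

# Example: a six-SNP amplified segment embedded in a diploid background.
print(call_amplifications([2.0, 2.1, 4.5, 5.0, 4.8, 4.9, 4.7, 4.6, 2.0]))  # [(2, 7)]
```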
Digital karyotyping. Purified carcinoma cells as described in the SNP array method were used to generate digital karyotyping libraries as previously described (9). Approximately 120,000 genomic tags were obtained for each digital karyotyping library. After removing the nucleotide repeats in the human genome, the average number of filtered tags was 66,000 for each library. We set a window size of 300 (300 virtual tags) for the analysis in this study. Based on Monte Carlo simulation, the parameters used in this study can reliably detect a >0.6 Mb amplicon with >5-fold amplification with >99% sensitivity and 100% positive predictive value.
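The window statistic behind such an analysis can be sketched as follows; this is a hypothetical illustration of tag density per window of 300 virtual tags, normalized to the genome-wide mean, rather than the published digital karyotyping pipeline, and all names are assumptions.

```python
# Windowed tag-density sketch: experimental tags are counted per virtual
# (genomic) tag, averaged over non-overlapping windows of 300 virtual tags,
# and normalized to the genome-wide mean; a density near 1.0 suggests a
# diploid region, while elevated densities suggest amplification.

def window_densities(observed_tag_counts, window=300):
    """observed_tag_counts[i] = experimental tags matching virtual tag i (genome order)."""
    genome_mean = sum(observed_tag_counts) / len(observed_tag_counts)
    densities = []
    for start in range(0, len(observed_tag_counts) - window + 1, window):
        w = observed_tag_counts[start:start + window]
        densities.append((sum(w) / window) / genome_mean)
    return densities
```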
Fluorescence in situ hybridization and quantitative real-time PCR. BAC clones (RP11-937H1 and RP11-319O10) containing the genomic sequences of the 19p13.12 amplicon at 15.00 to 15.25 Mb were purchased from BACPAC Resources (Children's Hospital, Oakland, CA). BAC clone RP11-752A21, located at 19q13.42 (59.26-59.46 Mb), was used to generate the reference probe. The method for fluorescence in situ hybridization (FISH) has been detailed in a previous report (6).
Relative gene expression and genomic amplification levels were measured by quantitative real-time PCR using methods previously described (9,10). PCR primers were designed using the Primer 3 program, and the nucleotide sequences of the primers for determining transcript expression are listed in Supplementary Table S1. The primer sequences used to determine the 19p13.12 genomic amplification were 5′-GCCTGTGGCTGAAAATTAAGG-3′ and 5′-TCAATGTCCACCTCGCAATAG-3′.
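For orientation, a generic 2^-ΔΔCt computation for relative quantification is sketched below; the cited references (9, 10) describe the method actually used, so the formula choice, Ct values, and names here are assumptions for illustration only.

```python
# Generic 2^-ddCt sketch: a target gene's Ct is normalized to a reference
# (housekeeping) gene within each sample, then the tumor sample is compared
# with a control sample. All numbers below are invented.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of the target in a sample versus a control sample."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** -(delta_ct_sample - delta_ct_control)

# Example: hypothetical Notch3 Ct of 22.1 (tumor) and 26.3 (benign OSE),
# with a housekeeping gene at Ct 18.0 and 18.2, respectively.
print(relative_expression(22.1, 18.0, 26.3, 18.2))  # 16.0-fold overexpression
```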
Immunohistochemistry. A rabbit polyclonal anti-Notch3 antibody was purchased from Santa Cruz Biotechnology (Santa Cruz, CA) and was used for immunohistochemistry. An EnVision+ System peroxidase kit (DAKO, Carpinteria, CA) was used for staining following the protocol provided by the manufacturer. Tissue microarrays (triplicate 1.5 mm cores from each specimen) including 111 high-grade serous carcinomas and 10 normal ovaries were used to facilitate immunohistochemistry. Immunointensity was scored as negative (0), negligible (1+), moderate (2+), or intense (3+), and two investigators independently scored all the samples. For discordant cases, a third investigator scored the sample, and the final intensity score was determined by the majority score.
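The consensus rule can be made concrete with a small sketch; the helper below is a hypothetical rendering of the described two-reader scheme with a third-reader majority, not software the authors report using.

```python
# Consensus scoring sketch: two independent intensity scores (0-3); a third
# reader is consulted only for discordant cases, and the final score is the
# majority value among the three readers.
from collections import Counter

def final_intensity(score_a, score_b, score_c=None):
    if score_a == score_b:
        return score_a
    assert score_c is not None, "discordant case requires a third score"
    value, _count = Counter([score_a, score_b, score_c]).most_common(1)[0]
    return value  # a three-way split is not covered by the described rule

print(final_intensity(2, 2))     # concordant case  -> 2
print(final_intensity(1, 3, 3))  # discordant case  -> 3
```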
Cell proliferation and apoptosis assays. Cells were grown in 96-well plates at a density of 4,000 per well. Type 1 γ-secretase inhibitor was purchased from Calbiochem (San Diego, CA) and was dissolved in DMSO as a 4 mmol/L stock solution. Cells were treated with γ-secretase inhibitor at a final DMSO concentration of <0.1%. Notch3-specific small interfering RNA (siRNA) (rGrUrCrArArUrGrUrUrCrArCrUrUrCrGrCrArGrUrU and rGrCrGrUrGrGrArUrUrCrGrGrArCrCrArGrUrCrUrGrArGrArGrGrG) and a control siRNA targeting the Luciferase gene (rGrArUrUrArArArUrCrUrUrCrUrArGrCrGrArCrUrGrCrUrUrCrGrC) were synthesized by Integrated DNA Technologies (Coralville, IA). Cells were treated with siRNA at a final concentration of 200 nmol/L.
Cell number was measured by the fluorescence intensity of SYBR Green I nucleic acid gel stain (Molecular Probes, Eugene, OR) using a fluorescence microplate reader (Fluostar from BMG, Durham, NC). Data were determined from five replicates and expressed as the percentage of inhibitor- or Notch3 siRNA-treated cells versus DMSO- or control siRNA-treated cells. BrdUrd uptake and staining were done using a cell proliferation kit (Amersham, Buckinghamshire, England, United Kingdom), and apoptotic cells were detected using an Annexin V staining kit (BioVision, Mountain View, CA). The percentage of BrdUrd-positive and Annexin V-positive cells was determined by counting ∼400 cells from each well in 96-well plates. The data were expressed as mean ± 1 SD from triplicates.
Results and Discussion
Amplification of chromosome 19 in ovarian serous carcinomas. SNP arrays were used to search for genome-wide DNA copy number alterations in 31 high-grade and 7 low-grade ovarian serous carcinomas. In addition, 13 normal ovarian tissues were analyzed as controls. We found several distinct amplifications on chromosome 19 specific to high-grade serous carcinomas. Among them, amplification of the cyclin E1 locus was present in 10 of 31 (32.2%) samples and amplification of the AKT2 locus in 9 of 31 (29%) samples. Both cyclin E1 and AKT2 have been previously reported as potential oncogenes that are frequently amplified in ovarian cancer. More importantly, a novel amplification on chr19p13.12 was identified in 6 of 31 (19.5%) high-grade carcinomas. The peak copy number changes in these six amplified tumors ranged from four to nine copies based on SNP array analysis (Fig. 1A and B). In contrast to high-grade serous carcinomas, no evidence of amplification at 19p13.12, or at the cyclin E1 or AKT2 loci, could be detected in low-grade ovarian tumors or normal ovarian tissues.

Figure 1. Identification of chr19p13.12 amplification in high-grade ovarian serous carcinomas. A, SNP array analysis shows several distinct amplicons on chromosome 19 (top to bottom, p to q arm), including the cyclin E1 and AKT2 loci and a novel amplicon at chr19p13.12 (bracket). An increase in DNA copy number is indicated by a color gradient, with 0 copies in white and five or more copies in red. Each column represents an individual tumor sample. B, quantitative real-time PCR on genomic DNA validates the amplification in the six tumors with chr19p13.12 amplification identified by the SNP array analysis. C, comparison of DNA copy number by SNP array and digital karyotyping on chromosome 19. The SNP array data are compared with digital karyotyping analysis of two representative specimens. Left, tumor 2, which contains a high copy number amplification at chr19p13.12. Right, tumor 1, which contains a low copy number gain at chr19p13.12. Both SNP array and digital karyotyping analyses show a similar pattern of DNA copy number alterations.
Three independent methods were used to validate the 19p13.12 amplification. First, digital karyotyping was done on a tumor with a low level of gain (four copies) and a tumor with a high level of amplification (nine copies) based on SNP array analysis. Digital karyotyping is a recently developed genome-wide technology that allows a precise measurement of DNA copy number at high resolution (9). The method has been used to identify new amplicons in human cancer (6, 11-13), and using digital karyotyping, we found locations and amplitudes of these amplicons similar to those detected by SNP arrays (Fig. 1C). Second, quantitative real-time PCR was used to measure the copy number of the predicted amplification in all six tumors (Fig. 1B). The amplification levels based on the SNP arrays were generally in agreement with quantitative PCR but with an underestimation of copy number in those samples with a high amplitude of amplification. This could be due to saturation of the probe hybridization signal associated with the array platform. Third, dual-color FISH analysis was done on all six amplified tumors using a BAC clone located at the 19p13.12 amplicon and a BAC clone located at 19q as the reference. All six cases showed increased target signals compared with the reference signals. Among them, four cases showed a pattern of chromosomal gain and two cases showed a pattern of homogeneously staining regions. Taken together, the above results confirmed amplification of 19p13.12 in high-grade ovarian serous carcinomas.
Notch3 as the candidate cancer-related gene in the 19p13.12 amplicon. To select the most promising tumor-associated gene within the minimal amplicon, we aligned the amplicons from these six tumors and delineated a common region of amplification (minimal amplicon), which spanned from 14.60 to 16.47 Mb on chromosome 19p and contained 34 genes (Fig. 2). To identify the candidate amplified tumor-associated gene within the amplicon, we correlated gene copy number and gene expression levels for all 34 genes based on the rationale that a tumor-driving gene, when amplified, should be overexpressed to activate its tumorigenic pathway, whereas coamplified "passenger" genes that are not related to tumor development may or may not do so (14). This approach can be useful to narrow down the candidate gene list, although some amplification events may not necessarily result in overexpression of the gene of interest. The mRNA levels were measured using quantitative RT-PCR on five high-grade carcinomas with 19p13.12 amplification, 11 high-grade tumors without such amplification, and five benign ovarian tissues including four OSE samples and a benign ovarian cyst. The expression levels for each gene were normalized to the average value of the benign tissues (Fig. 2). The Mann-Whitney U test was used to compute and compare the difference in gene expression levels between the 19p13.12-amplified versus nonamplified high-grade carcinomas. Among the 34 genes within the minimal amplicon, Notch3 showed the most consistent and significant up-regulation in 19p13.12-amplified tumors compared with nonamplified tumors (P = 0.0018). Other genes in the amplicon, such as DDX39 and ADHD9, also showed significant overexpression in the amplified versus nonamplified tumors (P = 0.01 and P = 0.02, respectively). It is plausible that these genes also contribute to tumorigenesis in ovarian cancer. In this study, we selected Notch3 for further characterization because it showed the most significant P value. We further correlated Notch3 DNA and mRNA copy number in a set of 31 samples in which both tissue and RNA samples were available for FISH and quantitative RT-PCR analyses. Our data showed a moderate positive correlation between these two variables, with a correlation coefficient of 0.481 (Spearman rank-order test, P < 0.05).

Figure 2. Gene expression analysis of the 19p13.12 amplicon in ovarian tumors. Left, alignment of the amplicons from the 19p13.12-amplified tumors reveals a common region of amplification spanning from 14.60 to 16.47 Mb at chromosome 19p. Right, quantitative real-time PCR was done for all 34 genes located within the minimal amplicon in high-grade serous carcinomas with or without 19p13.12 amplification, and benign OSE cells were used as controls. The expression level of each gene (top to bottom, centromeric to telomeric) in individual specimens is shown as a pseudocolor gradient based on the relative expression level of a given specimen to the average value derived from five benign OSE samples.
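Both tests named here are standard nonparametric procedures; a sketch of how they might be run with SciPy is shown below, using invented numbers rather than the study's data.

```python
# Illustrative SciPy calls for the two tests used above; the arrays are
# made-up stand-ins for per-tumor expression values and paired FISH/mRNA
# measurements, not data from this study.
from scipy.stats import mannwhitneyu, spearmanr

amplified = [8.2, 6.9, 12.4, 7.7, 9.1]         # relative Notch3 mRNA, amplified tumors
nonamplified = [1.2, 2.0, 0.8, 3.1, 1.7, 2.4]  # relative Notch3 mRNA, nonamplified tumors
u_stat, p_value = mannwhitneyu(amplified, nonamplified, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_value:.4f}")

fish_ratio = [1.0, 1.2, 2.5, 3.0, 1.1, 2.2]    # Notch3 DNA copy ratio by FISH
mrna = [1.1, 1.9, 6.5, 8.0, 0.9, 4.2]          # paired Notch3 mRNA level
rho, p_corr = spearmanr(fish_ratio, mrna)
print(f"Spearman rho = {rho:.3f}, P = {p_corr:.4f}")
```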
The above results, together with previous reports showing that Notch3 participates in both development and oncogenesis, suggest that Notch3 is a candidate "driving" gene in the 19p13.12 amplicon. Therefore, Notch3 was prioritized for further characterization in this study. We then did quantitative RT-PCR on a large panel of ovarian serous tumors to determine Notch3 mRNA levels. As shown in Fig. 3A, Notch3 was overexpressed in 66% (51 of 77) of high-grade tumors, but only in 33% (4 of 12) of low-grade tumors, compared with normal ovarian surface epithelium (OSE). In addition, the top five tumors with the highest Notch3 mRNA expression harbored 19p13.12 amplification (Fig. 3B), further supporting Notch3 as a candidate amplified gene that is important in tumor development. The Mann-Whitney U test showed that there was a statistically significant difference in the levels of Notch3 expression between OSE and high-grade serous carcinomas (P < 0.001) and between OSE and low-grade serous carcinomas (P < 0.05). However, there was no significant difference between low-grade and high-grade carcinomas (P = 0.3187).
Immunohistochemistry and FISH were done in parallel on 111 tumor samples to correlate protein expression and Notch3 DNA copy number. Notch3 immunoreactivity was detected in both the nucleus and cytoplasm of tumor cells in 61 of 111 (55%) carcinomas but not in normal ovarian epithelial cells (Fig. 4A). Notch3 immunointensity was scored as negative (0), negligible (1+), moderate (2+), or intense (3+) and was correlated with the Notch3 DNA copy number ratio based on FISH analysis (Fig. 4B). Specifically, the intensity of Notch3 staining positively correlated with the Notch3 DNA copy ratio (Spearman rank-order correlation coefficient = 0.4). Furthermore, there was a statistically significant difference between tumors with intense staining intensity (3+) and those without (intensity score = 0, 1+, and 2+; Mann-Whitney U test, P < 0.0001). We observed that 8 of 22 carcinomas showed Notch3 overexpression (3+) but without Notch3 gene amplification (gene copy ratio ≤2). This finding implies that, in addition to gene amplification, mechanisms such as epigenetic activation of the Notch3 promoter in response to environmental cues may contribute to Notch3 overexpression in those tumors. Although a positive correlation of Notch3 gene copy ratio and protein expression was observed, there are some tumors (5 of 50 tumors) with a Notch3 DNA copy ratio >2 but with only weak Notch3 protein expression (0/1+). This result suggests that these carcinomas may use other oncogene(s) residing on the same chromosomal arm as the Notch3 locus to promote tumor progression in ovarian serous carcinomas.

Figure 5. Effects of γ-secretase inhibitor on cell proliferation and apoptosis in cell lines. A, Western blot analysis shows a higher expression level of Notch3 protein in OVCAR3, A2780, and MCF-7 cells compared with the other cell lines, in which the Notch3 protein is not detectable. B, γ-secretase inhibitor significantly reduces the cell number in cell lines with Notch3 overexpression, including OVCAR3, A2780, and MCF-7, compared with those without Notch3 overexpression. γ-Secretase inhibitor decreases DNA synthesis (C) and increases apoptosis (D) in OVCAR3, A2780, and MCF-7 cells based on measurements of BrdUrd uptake and Annexin V labeling assays, respectively.
Notch receptors participate in signal transduction by translocating their cytoplasmic domain to the nucleus, where it activates an array of downstream effectors that can play important roles in cell proliferation and survival (15). The association of genetic changes in Notch3 with human cancer has recently been established in lung carcinoma. The translocation t(15;19) was identified in non-small-cell lung cancer cell lines, and the breakpoint has been mapped to 50 bp upstream of the Notch3 locus. The translocation of chromosome 19p was found to correlate with overexpression of full-length Notch3 mRNA (16). Our data here provide new evidence that, besides translocation, gene amplification is another mechanism to activate Notch3 in human cancer.
The genetic findings are supported by mouse models. For example, expression of the intracellular domain (a constitutively active form) of Notch3 in mouse thymus induced T-cell leukemia/lymphoma (17). Furthermore, constitutive expression of activated Notch3 in the central nervous system initiates the formation of brain tumors in the choroid plexus in mice (18), suggesting that deregulation of Notch3 plays a role in neoplastic transformation.
Functional analysis of Notch3 expression. To determine if Notch3 is essential for cell growth and survival in cell lines that overexpress Notch3, we used γ-secretase inhibitor 1, which prevents the activation of Notch3 by inhibiting the proteolysis and translocation of the Notch3 cytoplasmic domain to the nucleus. This compound has been shown to be a potent and specific inhibitor of the Notch pathway (19,20). γ-Secretase inhibitor was applied to the culture medium of nine cell lines, including three cancer cell lines with Notch3 overexpression (OVCAR3, A2780, and MCF-7), an immortalized OSE cell line (OSE 29), and five cancer cell lines (SKOV3, MPSC1, HTB75, TOV-G21, and ES-2) without Notch3 overexpression (Fig. 5A). We first determined the concentration of γ-secretase inhibitor to be used and found that the IC50 was lowest in OVCAR3 (1 μmol/L). Therefore, we treated the different cell lines with inhibitor at 1 μmol/L in culture and found that there was a substantial reduction in the cell number of OVCAR3, A2780, and MCF-7 cells, which overexpressed Notch3, compared with the other cell lines that did not have Notch3 overexpression (Fig. 5B, P < 0.001, Student's t test). To assess the mechanisms underlying the growth inhibition by the γ-secretase inhibitor in OVCAR3, A2780, and MCF-7 cells, we measured the percentage of BrdUrd-labeled cells for cellular proliferation and Annexin V-labeled cells for apoptosis (Fig. 5C and D). We found that γ-secretase inhibitor significantly reduced cellular proliferation and induced apoptosis in all three cell lines with Notch3 overexpression compared with the DMSO controls (P < 0.001, Student's t test). siRNA was used to knock down the expression of Notch3 in the same nine cell lines used for the γ-secretase inhibitor assay. The knockdown effect of siRNA was shown by Western blot (Fig. 6A). siRNA treatment significantly reduced Notch3 protein expression compared with the mock or control siRNA-treated groups. Similar to the effects of γ-secretase inhibitor, Notch3 siRNA reduced cell number most significantly in OVCAR3, A2780, and MCF-7 cells, which overexpressed Notch3, compared with the other cell lines (Fig. 6B, P < 0.001, Student's t test). The BrdUrd-positive cells decreased and the Annexin V-labeled cells increased in Notch3 siRNA-treated cells compared with control siRNA-treated cells (Fig. 6C and D, P < 0.001, Student's t test).
Our in vitro data on inactivating Notch3 by γ-secretase inhibitor and siRNA may have clinical implications for ovarian cancer patients and suggest that Notch3 can be a candidate therapeutic target. γ-Secretase inhibitors have been studied over the past several years as a potential therapeutic intervention in Alzheimer's disease. Very recently, γ-secretase inhibitors have been shown to inhibit epithelial cell proliferation and induce goblet cell differentiation in intestinal adenomas of Apc−/− (min) mice (20). Furthermore, a γ-secretase inhibitor was shown to be able to inhibit the growth of Kaposi's sarcoma cells in a mouse tumor model (19). Therefore, with these promising effects in both in vitro and in vivo systems, γ-secretase inhibitors can be used as a new target-based therapy for those tumors with Notch3 activation.
The current study suggests that Notch3 is a strong candidate oncogene among the genes within the chr19p13.12 amplicon in ovarian carcinomas. This is because the Notch3 gene shows a high correlation between gene amplification and overexpression and is functionally essential for tumor growth and survival. Although the above represents our preferred interpretation, other alternative interpretations should be pointed out. For example, Notch3 may not be the only gene with a high correlation of DNA copy number and gene expression level after analyzing a large series of amplified and nonamplified tumors. It is possible that other coamplified gene(s) within the Notch3 amplicon also play a role in tumorigenesis, and they may cooperate with Notch3 in propelling tumor progression.
In conclusion, we have identified Notch3 as a candidate amplified oncogene that is overexpressed in 66% of ovarian serous carcinomas. Our findings suggest that Notch3 amplification may play an important role in the development of ovarian carcinomas; moreover, these findings provide a rationale for future development of Notch3-based therapy for ovarian cancer.
Figure 3. Overexpression and amplification of Notch3 in high-grade ovarian carcinomas. A, the mRNA expression level of Notch3 for each specimen is measured by quantitative RT-PCR and is expressed as fold increase relative to the average value derived from nine OSE samples. Each symbol represents one specimen. The circled sample harbored 19p13.12 amplification, which is shown in (B). B, FISH analysis shows a homogeneously stained region in an amplified tumor (right) with a high level of mRNA expression. A nonamplified tumor (left) is also shown as a control.
Figure 4. Immunoreactivity of Notch3 in ovarian tumors. A, immunoreactivity of Notch3 is not detectable in normal OSE (a and b) but is overexpressed in 55% of high-grade serous carcinomas. Immunolocalization of Notch3 is detected in both the nucleus and cytoplasm of the tumor cells (c and d). B, correlation of Notch3 immunointensity and DNA copy number ratio in high-grade serous carcinomas. The intensity of Notch3 immunoreactivity positively correlates with the Notch3 DNA copy ratio based on FISH analysis. Furthermore, there is a statistically significant difference between tumors with intense staining intensity (3+) and those without (Mann-Whitney U test).
Figure 6. Effects of Notch3 knockdown on cell proliferation and apoptosis in cell lines. A, Western blot analysis shows a significant reduction of Notch3 protein in Notch3 siRNA-treated cells compared with the mock or control siRNA-treated cells. B, Notch3 siRNA significantly reduces the cell number in cell lines with Notch3 overexpression, including OVCAR3, A2780, and MCF-7, compared with those without. Treatment with Notch3 siRNA decreases DNA synthesis as measured by BrdUrd uptake (C) and increases apoptosis as measured by Annexin V labeling (D) in OVCAR3, A2780, and MCF-7 cells. | 2017-04-16T02:35:09.531Z | 2006-06-15T00:00:00.000 | {
"year": 2006,
"sha1": "4707c0b017705af9cb6eb1d49f4aaba132237d14",
"oa_license": "CCBY",
"oa_url": "https://aacr.figshare.com/articles/journal_contribution/Supplementary_Table_1_from_i_Notch3_i_Gene_Amplification_in_Ovarian_Cancer/22365225/1/files/39809493.pdf",
"oa_status": "GREEN",
"pdf_src": "Grobid",
"pdf_hash": "4707c0b017705af9cb6eb1d49f4aaba132237d14",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
264412491 | pes2o/s2orc | v3-fos-license | Improvement of emulsifying stability of coconut globulin by noncovalent interactions with coffee polyphenols
Highlights • Coffee polyphenols formed a strong interaction with coconut globulin. • Coffee polyphenols promoted the unfolding of the coconut globulin molecular structure. • The interaction enhanced the affinity of coconut globulin for the O/W interface. • Coffee polyphenols improved the emulsifying stability of coconut globulin.
Introduction
Coconut milk, obtained by crushing, squeezing, and extracting mature coconut meat, is nutrient-dense and rich in fats, proteins, minerals, and vitamins (Zhao et al., 2023). Coconut milk consists of 31.0-35.0% fat and 3.5-4.0% protein (Zhao et al., 2023). Coconut protein, an amphiphilic molecule, can be adsorbed, unfolded, and rearranged at the oil-water (O/W) interface, stabilizing the oil-in-water emulsion (Chen et al., 2023a). Due to its high nutritional value and unique taste, it is widely used in drinks, ice cream, and cooking. However, coconut milk, like other emulsions, represents a thermodynamically unstable system (Chen, Chen, Fang, Pei, & Zhang, 2024). Since coconut protein has limited emulsifiability, the droplets of coconut milk without emulsifiers easily agglomerate and aggregate, and therefore tend to phase separate (Tangsuphoom & Coupland, 2009).
The most common method to improve the stability of coconut milk is to add emulsifiers, surfactants, and thickeners, such as proteins, sorbitol esters, ethoxy esters, and sucrose esters (Ariyaprakai, Limpachoti, & Pradipasena, 2013). However, for safety reasons, the amount of these additives is limited. Previous research suggests that the stability of coconut milk can be improved by adding natural proteins (improving emulsification) and starch (increasing viscosity) (Arlai & Tananuwong, 2021; Tangsuphoom & Coupland, 2009). Nevertheless, the addition of protein and starch can affect the taste and flavor of coconut milk, which depend on the delicate balance of fat, protein, and carbohydrates.
Coconut milk contains proteins such as albumin, globulin, glutenin, and gliadin, with globulin and albumin serving as emulsifiers to stabilize the coconut milk emulsion (Tangsuphoom & Coupland, 2008). In coconut milk, coconut globulin (CG) is the most abundant protein and the one with the highest emulsifying capacity (Kwon, Park, & Rhee, 1996). Our previous work demonstrated that CG can be oxidized by cold plasma to improve its emulsifying properties (Chen et al., 2023c). Furthermore, changing environmental factors (pH and temperature) can adjust the emulsifying properties of CG (Ma et al., 2023). Therefore, modification of CG can improve its emulsifying properties, thereby enhancing the stability of coconut milk.
Protein modification through interactions with other food components such as polyphenols has been identified as a promising strategy to improve the functional properties of proteins (Li et al., 2021). Phenolic compounds, possessing at least one aromatic ring and one or more hydroxyl groups, are natural antioxidants (Ma et al., 2023). Polyphenols can interact with proteins through covalent or noncovalent interactions. Covalent interactions are based on the oxidation of polyphenols and nucleophilic addition, while noncovalent interactions include hydrogen bonds, hydrophobic interactions, and electrostatic interactions (Li et al., 2021). These interactions can be exploited in emulsion-based food systems to improve the oxidative stability of the product, as demonstrated in products such as milk and soy milk (Li et al., 2021). Therefore, the interaction between CG and polyphenols could promote the adsorption of protein molecules at the O/W interface, thereby improving emulsifying stability. Coffee, one of the most popular beverages worldwide, is rich in polyphenols such as caffeic acid (CA), chlorogenic acid (CHA), and ferulic acid (FA), which have potential antioxidant, hypoglycemic, antihypertensive, antibacterial, and anti-inflammatory effects (Bondam, da Silveira, dos Santos, & Hoffmann, 2022). Additionally, certain studies have demonstrated that coffee polyphenols (CPs) such as CA and CHA can interact with proteins to enhance their functional properties (Qi et al., 2023). However, the mechanisms by which different CPs improve protein emulsifying properties remain unclear.
In this study, CA, CHA, and FA were selected as CP models to investigate the mechanism by which different CPs improve the emulsifying properties of CG. Thermodynamic parameters and molecular docking were used to determine the interactions between CA, CHA, FA, and CG. Methods such as sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), circular dichroism (CD) spectroscopy, Fourier transform infrared (FTIR) spectroscopy, and fluorescence spectroscopy were employed to characterize the impact of the interaction between CPs and CG on protein structure. The interfacial tension and emulsifying activity of CG-CPs were observed. Finally, the storage stability, centrifugal stability, pH stability, and salt stability of the prepared emulsions were characterized.
Preparation of CG
The extraction of CG was carried out according to the procedure described by Chen et al. (2023c). Coconut was obtained from Wenchang, Hainan Province, and used for extraction. The oil was removed from the freeze-dried coconut meat powder with n-hexane, and CG was extracted with 0.5 mol/L NaCl solution at a ratio of 1:5 (w/v) for 4 h. The extract was then freeze-dried and stored at 4 °C.
Preparation of CG and CG-CPs complexes
The CG was dissolved in 0.01 mol/L phosphate buffer solution (PBS, pH 6.0), stirred for 2 h, and left overnight for complete hydration, achieving a final solution concentration of 2.0 mg/mL. CA, CHA, and FA were each dissolved in 0.01 mol/L PBS (pH 6.0) to a concentration of 2.0 mg/mL. The solutions of CA, CHA, and FA were added dropwise to equal volumes of the CG solution under low-speed stirring to ensure sufficient reaction. The pH of all samples was adjusted to 6.0 using 0.1 mol/L HCl and 0.1 mol/L NaOH solutions to match the pH of commercial coconut milk products (≥5.9) (Zhao et al., 2023). The solutions were labeled CG-CA, CG-CHA, and CG-FA, respectively. The CG solution (1 mg/mL) was used as a control, and all samples were stored at 4 °C for short-term storage.
Fluorescence spectrum measurement
The fluorescence spectra were measured using a fluorescence spectrophotometer (F-7000, Hitachi, Japan) according to the method described by Lu et al. (2022). The CG solution was combined with the CA, CHA, and FA solutions to prepare samples with a CG concentration of 1 mg/mL in the final complex. The concentrations of CA, CHA, and FA were 0, 20, 40, 60, 80, 100, 120, and 140 μmol/L, respectively. The interactions within the complexes were analyzed at temperatures of 298, 303, and 308 K. A 0.01 mol/L PBS buffer (pH 6.0) was used as a blank. Intrinsic emission fluorescence spectra of the samples were recorded in the range of 290 to 550 nm at an excitation wavelength of 280 nm. The scanning speed was 1200 nm/min, and the slit width was set to 2.5 nm.
Molecular docking simulation
Molecular docking was used to simulate the binding mode and affinity of the polyphenols and CG. The CG ID (5WPW) was entered into the RCSB database (https://www.rcsb.org/) to obtain the CG molecular model. MOE 2012 software was used for preprocessing, which included adding hydrogen atoms, optimizing charges, and removing water molecules. The structures of the polyphenols were obtained from PubChem (https://pubchem.ncbi.nlm.nih.gov). Since the entire protein represents the potential binding site, the conformation with the lowest binding energy was selected as the optimal conformation. The binding site and interaction forces were then analyzed.
Determination of CG, CG-CA, CG-CHA and CG-FA structure

SDS-PAGE was performed using a precast Biosharp PAGE gel (Tris-Gly, 4%-20%, 10 ×). Samples were prepared by mixing a 1 mg/mL sample solution with loading buffer (SDS-PAGE protein loading buffer, ×) at a volume ratio of 4:1, then heating the mixtures in a water bath at 100 °C for 3-5 min for denaturation. A volume of 10 μL of each sample solution was loaded into each well, and the voltage was set to 90 V (for the stacking gel)/120 V (for the separating gel). The gel was stained with Coomassie Brilliant Blue R250 and then destained with the destaining solution.
CD spectra were obtained with a Bio-Logic MOS-500 circular dichroism spectrometer (Isere, France) in the 190-250 nm range. Sample solutions were diluted to 0.5 mg/mL, and deionized water and CA, CHA, and FA solutions without CG were used as controls. The spectra were an average of three scans.
The FTIR spectra of the samples were measured using an FTIR spectrometer (TENSOR 27, Bruker, Germany). Freeze-dried powder samples (10 mg) were mixed with KBr at a ratio of 1:100 (w/w) and pressed into tablets. Spectra were recorded in the wavenumber range of 4000-400 cm⁻¹ with air as the background. Each FTIR spectrum was an average of 32 scans.
The free -SH and S-S bond contents of the samples were determined as previously reported by Chen et al. (2023c). The freeze-dried samples were dissolved in Tris-Gly-urea buffer (0.086 mol/L Tris, 0.090 mol/L glycine, 8 mol/L urea, 0.004 mol/L ethylenediaminetetraacetic acid, pH 8.0) to prepare 6 mg/mL solutions. The solutions were centrifuged, then 80 µL (4 mg/mL) of 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) was added to 2 mL of supernatant and allowed to react for 5 min at 25 °C. The -SH content was calculated by measuring the absorbance at 412 nm. To determine the S-S bond content, 2 mL of the above supernatant was reacted with β-mercaptoethanol (4 mg) for 2 h at 25 °C. Then, a 12% trichloroacetic acid solution was added and the mixture was centrifuged for 1 h, followed by repeated washing and precipitation. After adding 80 µL of the DTNB solution, the absorbance was measured at 412 nm, and the -SH and total S-S contents were calculated according to Eqs. (1) and (2):

SH (μmol/g) = 73.53 × A412 × D / C (1)

S-S (μmol/g) = (SHtotal − SHfree) / 2 (2)

where A412 is the absorbance value at 412 nm, D is the dilution ratio, and C is the protein concentration in the sample (mg/mL).
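A small numerical sketch of Eqs. (1) and (2) is given below; the constant 73.53 (10^6 divided by DTNB's molar extinction coefficient of 13,600 L·mol⁻¹·cm⁻¹) comes from the standard Ellman procedure rather than the text above, and the helper names are assumptions.

```python
# DTNB sulfhydryl arithmetic per the reconstructed Eqs. (1)-(2); the 73.53
# factor is the conventional Ellman constant and is assumed here.

def sh_content(a412, dilution, protein_mg_per_ml):
    """Free or total sulfhydryl content in umol/g protein."""
    return 73.53 * a412 * dilution / protein_mg_per_ml

def ss_content(a412_total, a412_free, dilution, protein_mg_per_ml):
    """Disulfide bonds: half the difference between total and free SH."""
    total = sh_content(a412_total, dilution, protein_mg_per_ml)
    free = sh_content(a412_free, dilution, protein_mg_per_ml)
    return (total - free) / 2.0

# Example: A412 = 0.12 at 6 mg/mL protein, no dilution.
print(round(sh_content(0.12, 1.0, 6.0), 2))  # 1.47 umol/g
```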
Particle size and zeta potential measurement
Sample solutions (5 mL) were diluted with deionized water to a concentration of 0.5 mg/mL, and the particle size and zeta potential were measured three times using a Malvern ZS 90 (Malvern Instruments, Malvern, Worcestershire, UK). The temperature was kept constant at 25 °C during the measurements.
Turbidity measurement
The turbidity of the samples was determined according to a previously published method (Zheng et al., 2022) with minor modifications. The samples were diluted to 0.5 mg/mL, and the absorbance was measured at 600 nm with a UV spectrophotometer (TU-1901, Universal Instruments, Beijing).
Transmission electron microscope (TEM)
The microstructure of the samples was characterized using a transmission electron microscope (JEM-2100, JEOL, Japan). Samples were placed dropwise on a carbon-coated copper grid and stained with drops of uranyl acetate while protected from light exposure. After drying, the samples were observed under the transmission electron microscope and images were taken.
Determination of CG, CG-CA, CG-CHA and CG-FA properties
Protein solubility was determined by a modified Bradford method (Chelh, Gatellier, & Sante-Lhoutellier, 2006). A standard solution of bovine serum albumin (BSA) was prepared and a standard curve was recorded. The sample solutions (1 mg/mL) were centrifuged at 8000 r/min for 20 min, and 1 mL of the supernatant was collected to determine the protein concentration. Protein solubility was expressed as the protein concentration in the supernatant relative to the protein concentration before centrifugation.
Surface hydrophobicity was determined by the bromophenol blue method. According to the procedure described by Chelh et al. (2006), the prepared protein solution (1 mg/mL, 0.1 mL) was diluted to an appropriate concentration, mixed with a 1 mg/mL bromophenol blue solution (20 µL), and centrifuged. The absorbance of the supernatant was measured at 595 nm. The surface hydrophobicity of the protein was indicated by the amount of bound bromophenol blue.
The dynamic interfacial tension was measured at 25 °C using a drop profile tensiometer (OT100, Ningbo NB Scientific Instruments, China). Due to the technical requirements of the tensiometer, samples were sufficiently diluted, each drop volume was maintained at 10 μL, and soybean oil was loaded into a syringe. The dynamic interfacial tension was measured for 7200 s, until no further change in interfacial tension was observed.
Preparation of emulsions
Referring to a previous method with modifications (Wu, Wu, Lin, & Shao, 2022), CG and CG-CPs solutions were mixed with soybean oil at a volume ratio of 9:1 to obtain a mixture with a 10% oil phase and a final CG concentration of 10 mg/mL. These solutions were sheared with a high-speed shearing machine at 10,000 rpm for 1 min to prepare macroemulsions. The macroemulsions were then processed twice through a NanoGenzier high-pressure homogenizer (Genizer, USA) at 20 MPa to generate microemulsions. The temperature was maintained at approximately 25 °C during shearing to produce oil-in-water emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA. Subsequently, 0.02% thiomersal was added to each emulsion to inhibit microbial growth. All emulsions were stored at 4 °C.
Emulsifying activity
The droplet size distribution and average droplet size of the emulsions were measured using a Malvern Master Sizer 2000 instrument (Malvern Instruments, Malvern Hills, UK). For this experiment, the universal spherical analysis model was used, and the refractive indices of the oil droplets and the dispersed water phase were set to 1.46 and 1.33, respectively, with obscuration values ranging from 5% to 15%.
Viscosity measurements were carried out using a Haake Mars rheometer (Thermo Fisher, USA). The shear rate was gradually increased from 1 to 100 s⁻¹ during the measurement, while the temperature of the sample (1 mL) was maintained at 25 °C. By comparing the flow properties of different samples, the effect of the addition of CPs on the flow properties of the CG-stabilized emulsion was determined. All data represent the average of three replicates.
Emulsion stability under different conditions
The droplet morphology of the emulsions after 0, 7, 14, 21, and 28 days of storage was directly observed with an inverted microscope (Leica Microsystems CMS GmbH, Germany).
The centrifugal stability of the emulsions (0.4 mL) was further analyzed using a LUMiSizer stability analyzer (LUMiSizer, L.U.M. GmbH, Germany). The optical wavelength was set to 870 nm and the optical factor to 1.00. The number of profiles was set to 300, the time interval to 10 s, and the run time to 50 min.
To evaluate the pH stability of the emulsions, fresh emulsion samples (10 mL) were adjusted to pH 4.0, 5.0, 6.0, 7.0, and 8.0 using 0.1 mol/L HCl and 0.1 mol/L NaOH solutions, and the droplet size of the emulsions was measured. The creaming index (CI) of the emulsion was determined using the following Eq. (3):

CI (%) = (HS / HE) × 100 (3)

where HE and HS represent the total height of the emulsion and the height of the serum layer of the emulsion, respectively. To evaluate salt stability, different concentrations of NaCl solution (0.05-0.3 mol/L) were added to the emulsion (10 mL), and the creaming index and droplet size were determined after 3 h.
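As a small worked example of Eq. (3), with illustrative heights and names:

```python
# Creaming index of Eq. (3): the serum-layer height as a percentage of the
# total emulsion height; a higher CI indicates more phase separation.

def creaming_index(serum_height, emulsion_height):
    return serum_height / emulsion_height * 100.0

print(creaming_index(4.0, 50.0))  # 8.0 (%) for a 4 mm serum layer in a 50 mm column
```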
Statistical analysis
Three independent experiments were performed for all treatments, and the results are expressed as mean ± standard deviation (SD). Data processing and statistical analysis were performed using IBM SPSS Statistics 22. The significance level was set at p < 0.05.
Interaction between CG and CPs
As shown in Fig. 1a-b, the fluorescence intensity of CG at an excitation wavelength of 280 nm decreased with the introduction of CPs. When polyphenols are introduced into a protein solution containing fluorophores, the fluorescent groups of the protein are disrupted, resulting in a decrease in fluorescence intensity (Condict & Kasapis, 2022). Therefore, it was concluded that the quenching of CG fluorescence intensity was a result of the interaction between CG and CPs.
To determine the type of quenching, the fluorescence emission spectrum data were processed using the Stern-Volmer equation at temperatures of 298, 303, and 308 K (Condict et al., 2022):

F0/F = 1 + Ksv[Q] = 1 + Kq·τ0·[Q] (4)

where [Q] is the added concentration of CPs; F0 and F denote the fluorescence intensity of the composite system without and with the introduction of CPs, respectively; Kq represents the rate constant of the quenching process; Ksv is the Stern-Volmer quenching constant; and τ0 is the average lifetime of the fluorescent molecules without the presence of CPs (τ0 = 10⁻⁸ s). Dynamic quenching is typically due to collisions between the fluorescent agent and the quencher, while static quenching is the result of the formation of a stable fluorescent agent-quencher complex (Sadeghi-kaji, Shareghi, Saboury, & Farhadian, 2019). Dynamic quenching is characterized by the fact that the quenching constant increases with increasing temperature, as the latter promotes intermolecular collisions. In static quenching, however, the quenching constant decreases with increasing temperature (Acharya, Sanguansri, & Augustin, 2013).
Therefore, the influence of temperature on the interaction between CG and CPs was investigated, and the data were fitted using Eq. (4) to obtain Ksv and Kq. As shown in Table 1, the Ksv for all CG-CPs decreased with increasing temperature. The Ksv for CG-CHA was the highest at 298, 303, and 308 K, suggesting that CHA has the greatest influence on CG molecules. Furthermore, the minimum quenching rate constant Kq was 4.07 × 10¹² L·mol⁻¹·s⁻¹, which was significantly higher than the maximum dynamic quenching constant (2.0 × 10¹⁰ L·mol⁻¹·s⁻¹) (Sadeghi-kaji et al., 2019). These results further confirmed that the fluorescence quenching of CG by CPs was static quenching caused by nonradiative energy transfer within the complexes formed between CG and CPs.
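A minimal sketch of this fit is shown below, using the quencher concentration series reported above together with made-up intensity values; τ0 = 10⁻⁸ s follows the definition under Eq. (4), and everything else is an assumption.

```python
# Stern-Volmer fit of Eq. (4): regress F0/F against [Q] to obtain Ksv as the
# slope, then Kq = Ksv / tau0. Intensities below are invented illustrations.
import numpy as np

Q = np.array([20, 40, 60, 80, 100, 120, 140]) * 1e-6  # quencher, mol/L
F0 = 1000.0                                            # intensity without CPs
F = np.array([905, 828, 762, 706, 658, 615, 578])      # intensity with CPs

ksv, intercept = np.polyfit(Q, F0 / F, 1)              # slope = Ksv (L/mol)
tau0 = 1e-8                                            # s, mean fluorophore lifetime
print(f"Ksv = {ksv:.3e} L/mol, Kq = {ksv / tau0:.3e} L/(mol*s)")
```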
The static quenching model was used to calculate the binding constant (Ka) of the complexes between CG and CPs and the number of binding sites (n) on the protein. This can be achieved using the following logarithmic Eq. (5) (Tian et al., 2023):

log[(F0 − F)/F] = log Ka + n·log[Q] (5)

The values of Ka and n indicate the binding capacity of CPs to CG: higher Ka and n values mean a stronger interaction between the protein and CPs, indicating more stable complex formation. All Ka values were well above 10⁴ L·mol⁻¹, indicating that CG and CPs had strong binding affinity. The Ka values for CG-CA and CG-CHA decreased with increasing temperature, suggesting that the interaction between CG and CA/CHA was less favorable at higher temperatures. Conversely, higher temperatures promoted the interaction between CG and FA.
These results are consistent with those of Qi et al. (2023), where it was found that when trypsin bound to CA and CHA, the Ka of trypsin-CA was lower than that of trypsin-CHA. The increased binding affinity could be due to the presence of a larger number of hydroxyl groups in CHA, which facilitated the formation of hydrogen bonds with proteins, thus leading to a more robust interaction. At 298 K, both the Ka and n values for CG-CHA were larger than those of CG-CA and CG-FA, indicating the strongest combination between CHA and CG (Tian et al., 2023). The n values for CG-CPs were greater than 1, suggesting that CA/CHA/FA and CG had more than one binding site.
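A companion sketch for Eq. (5), reusing the same illustrative arrays: a double-log regression gives n as the slope and log10(Ka) as the intercept.

```python
# Double-log fit of Eq. (5): log10((F0 - F)/F) versus log10([Q]); slope = n,
# intercept = log10(Ka). Inputs are the invented values from the Eq. (4) sketch.
import numpy as np

Q = np.array([20, 40, 60, 80, 100, 120, 140]) * 1e-6
F0 = 1000.0
F = np.array([905, 828, 762, 706, 658, 615, 578])

n, log_ka = np.polyfit(np.log10(Q), np.log10((F0 - F) / F), 1)
print(f"n = {n:.2f}, Ka = {10 ** log_ka:.3e} L/mol")
```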
To further identify the main driving forces (hydrogen bonds, hydrophobic forces, and electrostatic forces) behind the interaction of CG and CPs, thermodynamic parameters were calculated using the Van't Hoff Eqs. (6) and (7) (Tian et al., 2023):

ln Ka = −ΔH/(RT) + ΔS/R (6)

ΔG = ΔH − TΔS (7)

where T denotes the absolute temperature; R represents the gas constant, 8.314 J·mol⁻¹·K⁻¹; and ΔH, ΔS, and ΔG represent the changes in enthalpy, entropy, and Gibbs free energy, respectively. The thermodynamic parameters ΔH and ΔS provide insights into the dominant forces in a given interaction. When ΔH > 0 and ΔS > 0, hydrophobic forces predominate. If ΔH < 0 and ΔS < 0, this suggests that van der Waals forces or hydrogen bonds dominate. If ΔH < 0 and ΔS > 0, electrostatic interactions are the main factor (Liu et al., 2020). According to Table 1, the ΔH and ΔS values of CG-CA and CG-CHA were below 0, indicating that the interaction between CA/CHA and CG was predominantly driven by hydrogen bonds and van der Waals forces.
Table 1. The binding parameters and thermodynamic parameters for the interaction of CG with CA, CHA, and FA.

These results were consistent with the study showing that these forces were the main drivers of the interaction between CA/CHA and trypsin (Qi et al., 2023). However, other research found that the whey protein-CHA complex was mainly stabilized through hydrophobic interactions (Zhang et al., 2021). These variations could be attributable to differences in protein type and solvent pH. Moreover, the ΔH and ΔS values of CG-FA were above 0, suggesting that hydrophobic forces dominated the interaction between FA and CG. All ΔG values were less than 0, indicating that the reactions between CG and CA/CHA/FA were spontaneous (Zhang et al., 2021). Protein fluorescence originates from aromatic amino acid residues, mainly tryptophan (Trp). Consequently, the endogenous fluorescence spectrum can highlight changes in the spatial structure of proteins caused by the interaction between polyphenols and proteins (Parolia et al., 2022). With increasing CA, CHA, and FA concentrations, the fluorescence intensity of CG continuously decreased. Additionally, at 298 K and a CA, CHA, and FA concentration of 140 μmol/L, there was an obvious red shift of the maximum emission wavelength from 336 nm for CG to 353 nm for CG-CA, 368 nm for CG-CHA, and 350 nm for CG-FA. This shift signified that the molecular structure of CG unfolded due to the interaction with CPs and that the internal Trp residues of CG were exposed to a polar environment (Qi et al., 2023).
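As a worked illustration of the Van't Hoff analysis above (Eqs. (6) and (7)): a linear fit of ln Ka against 1/T yields ΔH from the slope and ΔS from the intercept, with ΔG following at each temperature; the Ka values below are invented placeholders, not the values in Table 1.

```python
# Van't Hoff sketch for Eqs. (6)-(7): ln Ka = -dH/(R*T) + dS/R; dG = dH - T*dS.
import numpy as np

R = 8.314                             # J/(mol*K)
T = np.array([298.0, 303.0, 308.0])   # K
Ka = np.array([5.2e4, 4.1e4, 3.3e4])  # L/mol, illustrative only

slope, intercept = np.polyfit(1.0 / T, np.log(Ka), 1)
dH = -R * slope                       # J/mol (negative here: exothermic binding)
dS = R * intercept                    # J/(mol*K)
dG = dH - T * dS                      # J/mol at each temperature
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
print("dG (kJ/mol):", np.round(dG / 1000, 1))
```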
Molecular docking can provide deeper insights into the interaction between CPs and CG by visualizing the binding sites and interaction forces between receptor and ligand molecules (Chen et al., 2023b). Fig. 2 displays the 3D docking modes and the two-dimensional diagrams of the interacting amino acid residues from the docking simulation. The order of the molecular docking energies between CG and CPs was CG-CHA (-6.01 kJ/mol) < CG-FA (-5.34 kJ/mol) < CG-CA (-4.36 kJ/mol), indicating that the CG-CHA complex had the most stable molecular conformation. This finding was consistent with the fluorescence fitting results. The hydrogen bond interaction between CG and CA was mediated by Lys 218, His 195, and Arg 57. CG and CHA interacted through four hydrogen bonds involving the amino acids His 156, Lys 293, Arg 295, and Asn 300. A hydrogen bonding interaction was found between CG and FA via Lys 319. These results highlighted that the primary driving force for the formation of the CG-CPs complexes was hydrogen bonding.
In conclusion, all CPs could interact robustly with CG. Due to its limited -OH and -COOH groups, the interaction between CG and FA was primarily hydrophobic, followed by hydrogen bonding. CA has two -OH groups and one -COOH group, so its noncovalent interactions with CG molecules were dominated by hydrogen bonds, followed by hydrophobic interactions (Qi et al., 2023). CHA is an ester formed from CA and quinic acid (Mortele et al., 2021), suggesting that CHA can easily form noncovalent interactions with the protein. As expected, CHA formed a stable complex with CG through strong hydrogen bonds and hydrophobic interactions.
Effect of CPs on CG structure
Fig. S1a shows the SDS-PAGE results for CG, CG-CA, CG-CHA, and CG-FA. In their reduced forms, CG and CG-CPs were observed between 17 and 55 kDa. Predominantly, CG and CG-CPs exhibited a 55 kDa band that corresponds to 11S globulin (cocosin), which plays a vital role in maintaining the stability of coconut milk (Patil & Benjakul, 2017). In its native state, 11S globulin exists as a hexamer and consists primarily of two acidic polypeptides (32 kDa) and two basic polypeptides (22 kDa), which are linked via disulfide bonds (Carr, Plumb, & Lambert, 1990). Furthermore, 7S globulin, another key protein, consists of bands at 19, 29, and 33 kDa (Benito, Gonzalez-Mancebo, de Durana, Tolon, & Fernandez-Rivas, 2007). As shown in Fig. S1a, the bands of the CG-CPs complexes mirrored those of CG, with no new bands evident, indicating that the CG-CPs complexes were formed through noncovalent interactions rather than through covalent bonding to form a new substance (Ma et al., 2023). With the incorporation of CA, CHA, and FA, the 55 kDa band showed slightly deeper staining, while the bands corresponding to the lower molecular weight components of CG-CPs gradually faded. These observations suggested the formation of macromolecular aggregates as a result of the interaction between CG and CPs.
The interaction between CPs and CG could lead to a rearrangement of the intermolecular forces that maintain the secondary structure of CG, leading to conformational changes in CG. Far-UV CD analysis is widely used to investigate such changes in secondary structure upon binding with CPs. As illustrated in Fig. S1b, the main chain conformation of CG in the 190-250 nm range showed two negative peaks at 208 and 222 nm, a typical feature of the protein α-helix structure. These two negative peaks at 208 and 222 nm generally represent the π-π* and n-π* transitions of α-helix peptide bonds, respectively (Chen et al., 2023c). The addition of CA, CHA, and FA resulted in a reduction in the molar ellipticity of these negative peaks, indicating a reduction in the α-helix content of CG due to its partial unfolding and destabilization (Zhang, Sahu, Xu, Wang, & Hu, 2017). CPs have the potential to interact with the side chains of various amino acid residues in CG and with the carbonyl and amino groups in peptide bonds, leading to the formation of hydrogen bonds. Such interactions could alter the hydrogen bond structure and loosen the peptide chain of the native protein, leading to denaturation. Therefore, the interaction between CG and CPs disrupted the secondary structure of CG molecules and converted them from a loosely structured, order-dominated form to a partially disordered structure.
FTIR spectroscopy is an excellent tool for understanding the functional groups involved in the binding between CG and CPs. The FTIR spectrum of CG exhibits a strong and broad absorption peak at 3303 cm⁻¹, which is attributed to O-H and N-H stretching vibrations and signifies the formation of hydrogen bonds (Liu et al., 2022). When CG was combined with CA, CHA, and FA, this peak shifted from 3303 cm⁻¹ to 3423 cm⁻¹, 3436 cm⁻¹, and 3421 cm⁻¹ (Fig. S1c), respectively. This shift could be related to the hydrogen bond interaction between CG and CA/CHA/FA. The expansion of the CG molecular structure facilitated by the addition of CPs provided more binding sites for hydrogen bonds, thereby promoting the formation of these bonds. However, the strength of the hydrogen bonds was influenced by the type of CP. CHA, with more -OH and -COOH groups, showed the most obvious red shift, indicating a stronger interaction. In addition, the absorption peak of CG at 2926 cm⁻¹, which corresponds to the C-H stretching vibration of methyl and methylene groups, can be used to characterize the hydrophobic interaction between CG and CPs. All CG-CPs complexes exhibited a blue shift at this peak, indicating the presence of hydrophobic interactions (Wang et al., 2022). Lastly, the characteristic absorption peaks of the amide I region (1600-1700 cm⁻¹) and amide II region (1500-1600 cm⁻¹) in the FTIR spectra represent significant structural features of proteins. The amide I region arises from the stretching vibration of C=O and the bending vibration of N-H, while the amide II region arises from the stretching vibration of C-N and the bending vibration of N-H (Chen et al., 2023c). Upon interaction with CPs, changes in the amide I band of CG indicated changes in the CG structure due to the formation of the complex.
Changes in the amounts of free -SH groups and S-S bonds in proteins can provide valuable insights into their structural transformations. The free -SH groups in CG-CA and CG-CHA were observed to decrease significantly, while no significant change was observed for CG-FA (p < 0.05) (Fig. S1d). Moreover, the content of S-S bonds increased in CG-CA and CG-CHA. This phenomenon could be attributed to the strong reactivity of the -OH groups of CPs, which interacted with the free -SH groups of CG, thereby reducing the number of free -SH groups.
In summary, the presence of CPs led to the formation of macromolecular aggregates with CG. The interaction between CPs and CG promoted the structural transformation of the protein from an ordered to a disordered structure. This expansion of the protein structure provided more sites for CPs to form hydrogen bonds. However, it is worth noting that the structural influence of CPs on CG depended on the type of CP, with the order of impact being CHA > CA > FA. This observation correlated with the strength of hydrogen bond formation, the bond between CHA and CG being the strongest.
Effect of CPs on particle size, polydispersity index (PDI), zeta potential, and turbidity of CG
Fig. S2 illustrates the particle size, PDI, zeta potential, and turbidity of CG and CG-CPs. In Fig. S2a, the average particle size of CG was found to be 228 nm, and the PDI was 0.35. With the addition of CPs, the particle sizes of CG-CA, CG-CHA, and CG-FA significantly increased to 247, 245, and 249 nm, respectively (p < 0.05), signifying that CPs caused CG aggregation. These findings were consistent with the study of Feng et al. (2021), which found that the average particle size and zeta potential of nanoparticles can be controlled through interactions with pectin and soy protein isolate. The TEM images further confirmed that the interaction between CG and CPs resulted in an increase in CG particle size, as shown in Fig. S2b. Conversely, the PDI values of CG-CA, CG-CHA, and CG-FA decreased to less than 0.35, suggesting that the presence of CPs contributed to a more uniform dispersion within the system. Additionally, the absolute zeta potential values of the CG-CA, CG-CHA, and CG-FA complexes were found to be higher than that of CG (-12.07 mV). These data implied that CPs facilitated the exposure of more negatively charged amino acid residues of CG, thereby enhancing electrostatic repulsion between the complex molecules and inhibiting their aggregation. Compared to CG, the turbidity values of CG-CA, CG-CHA, and CG-FA were increased, with CG-CHA showing a significant increase (p < 0.05). This increase could be attributed to the interactions between CG and CA/CHA/FA.
In conclusion, the interactions between CG and CPs led to an increase in both particle size and absolute zeta potential value, while the PDI of CG-CPs decreased. These changes signified an enhancement in the stability of the CG-CPs complexes, reflecting the critical role of CPs in modulating the physical properties of the system.
Effect of CPs on solubility, surface hydrophobicity, and interfacial tension of CG
For the successful integration of proteins into various food systems, a comprehensive understanding of their functional properties is essential, as these properties may behave differently in different food contexts. Specifically, the solubility, surface hydrophobicity, and interfacial properties of proteins play a crucial role in determining their emulsifying ability (Bertsch, Mayburd, & Kassner, 2003). Given this background, the present study conducted a detailed characterization of CG and CG-CPs, focusing on these critical aspects to shed light on their potential applications and behavior in different food environments.
As illustrated in Fig. 3a, the solubility of CG diminished noticeably in the presence of CPs. The –OH and –COOH groups of CA, CHA, and FA acted as powerful hydrogen-bond donors and acceptors, forming robust hydrogen bonds with the peptide backbone. Owing to the relatively low molecular weights of CA, CHA, and FA, a single CG molecule could interact with multiple CA/CHA/FA molecules (as evidenced by the results of the molecular docking simulations). This led to the formation of an interconnected network structure, subsequently causing precipitation. Furthermore, the CG structure was extended by the CPs, exposing the internal hydrophobic groups. This exposure hindered the interaction between the hydrophilic groups on the protein surface and water molecules, further contributing to the observed decrease in solubility.
Fig. 3b shows the surface hydrophobicity results for CG, CG-CA, CG-CHA, and CG-FA. For proteins with low solubility, the bromophenol blue method proves to be more suitable than the fluorescence-probe method using 8-anilinonaphthalene-1-sulfonic acid. The amount of bound bromophenol blue serves as an indicator of surface hydrophobicity: the higher the bound amount, the more pronounced the hydrophobicity (Bertsch et al., 2003). In its native state, CG exhibited relatively weak surface hydrophobicity. This can be attributed to the abundance of hydrophilic groups on the CG surface, while the hydrophobic groups are predominantly located in the interior (Tangsuphoom & Coupland, 2009). The addition of CPs facilitated the unfolding of the CG molecular structure, thereby exposing the internal hydrophobic groups, which subsequently led to an enhancement in hydrophobicity.
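For readers unfamiliar with the bromophenol-blue (BPB) assay, the sketch below shows how the bound-dye amount is typically computed; the formula (the usual BPB_bound = BPB_total × (A_blank − A_sample)/A_blank relation, with absorbances read near 595 nm) and all numbers are standard assumptions for illustration, not values reported in this study.

```python
# Hedged sketch: bromophenol blue (BPB) binding as a surface-hydrophobicity index.
# Assumption: BPB_bound = BPB_total * (A_blank - A_sample) / A_blank, the usual
# form of the assay; the absorbance values below are illustrative placeholders.

def bpb_bound(a_blank: float, a_sample: float, bpb_total_ug: float = 200.0) -> float:
    """Return the amount of bound BPB (ug); higher values mean higher hydrophobicity."""
    if a_blank <= 0:
        raise ValueError("blank absorbance must be positive")
    return bpb_total_ug * (a_blank - a_sample) / a_blank

# A polyphenol-treated sample leaves less free dye in the supernatant (lower
# A_sample), i.e., more BPB is bound and the surface hydrophobicity is higher.
print(bpb_bound(a_blank=0.80, a_sample=0.55))  # native protein (example): 62.5 ug
print(bpb_bound(a_blank=0.80, a_sample=0.35))  # protein-polyphenol complex: 112.5 ug
```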
During the formation of the O/W interfacial film, the hydrophobic groups of the protein orient toward the oil phase while the hydrophilic groups point toward the water phase, which helps to reduce the interfacial tension between the two phases. The hydrophobic groups of the CG molecules were gradually adsorbed onto the surface of the oil droplets and then rearranged to form an interfacial film (Liao, Elaissari, Dumas, & Gharsallaoui, 2023). Compared to the equilibrium interfacial tension of water (17.57 mN/m), the interfacial tensions of CG, CG-CA, CG-CHA, and CG-FA were significantly reduced to 10.32, 8.83, 9.02, and 10.05 mN/m, respectively. As a globular protein, CG exhibited a low rearrangement rate, since it maintained its original conformation at the O/W interface (Han, Liu, & Tang, 2023). As illustrated in Fig. 3c, after adsorption and rearrangement, CG formed only a thin interfacial film at the O/W interface, an effect attributed to its poor surface hydrophobicity. The polyphenols CA, CHA, and FA promoted the unfolding of CG, thereby exposing more hydrophobic groups and enhancing the protein-oil interaction. The noncovalent interactions between FA and CG were predominantly hydrophobic, exerting only a moderate influence on the structure of CG. Conversely, the –COOH and –OH groups of CA could form hydrogen bonds with the amino acid residues of CG, fostering the formation of a dense and thick interfacial film at the O/W interface. Notably, the quinic acid moiety of CHA introduced additional hydrogen-bonding groups that interacted with CG, making CG-CHA more prone to form a robust and substantial interfacial film at the O/W interface. In summary, the hydrogen bonds between CG and CPs promoted molecular cross-linking at the O/W interface, resulting in a dense and thick interfacial film. Additionally, CPs facilitated the formation of S–S bonds, which further reinforced the film.

Overall, the interaction between CPs and CG promoted the unfolding of the CG structure, masking surface hydrophilic groups and thereby decreasing the interaction with water molecules, as evidenced by the reduced solubility. Concurrently, this interaction facilitated the exposure of internal hydrophobic groups, resulting in an increase in surface hydrophobicity. These noncovalent interactions between CG and CPs caused the peptide chain to extend and expose the aromatic amino acids of the protein. Consequently, the affinity of CG-CPs for the O/W interface was enhanced. Particularly noteworthy were the robust hydrogen bonds between CHA and CG, which proved beneficial for the formation of a dense and thick interfacial membrane at the O/W interface. The adsorption mechanism of CG, CG-CA, CG-CHA, and CG-FA at the O/W interface is illustrated in Fig. S3.

Effect of CPs on droplet size and apparent viscosity of emulsions stabilized by CG

Fig. 4a-b illustrate the droplet sizes of emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA; the droplet sizes of the CG-CPs-stabilized emulsions were markedly smaller, demonstrating that droplet aggregation was effectively inhibited in these emulsions. This inhibition can be attributed to an increase in electrostatic repulsion and steric hindrance between droplets, especially for CHA. Fig. 4c presents the apparent viscosity of the emulsions stabilized by CG and CG-CPs. The apparent viscosity of these emulsions decreased with increasing shear rate, indicating that the emulsion structure was disrupted during shear (Chen et al., 2023a). The interaction between CG and CPs appeared to promote cross-linking of the droplets, leading to an increase in apparent viscosity; this suggests that the interaction between CG and CPs restricted molecular movement through the formation of a convoluted cross-linked structure. Moreover, the strong hydrogen-bonding interaction between CG and CHA led to extensive cross-linking and a denser structure, which in turn resulted in a higher viscosity. Additionally, CG-CPs exhibited greater deformation resistance at the O/W interface, which was likely due to this dense cross-linked interfacial network.
Storage stability, centrifugal stability, pH stability, and salt stability of emulsion stabilized by CG and CG-CPs
In order to demonstrate the effect of the noncovalent interactions between CG and CPs on the storage stability of the emulsions, the changes in the droplet microstructure of the emulsions were recorded over a period of 28 days (Fig. 4d). Within the CG-stabilized emulsion, droplets were observed to merge and aggregate. Conversely, the fresh emulsion droplets stabilized by CG-CPs were small and uniform in size. In all instances, the droplet size in the emulsions gradually increased with storage time, highlighting that the emulsions became unstable during storage. This instability could be due to flocculation and coalescence of the emulsion droplets (Liu, Xu, Xia, & Jiang, 2021). After 28 days of storage, coalesced oil droplets were more prominent in the CG-stabilized emulsion, while the emulsions stabilized by CG-CPs experienced only slight changes. These findings confirmed that the emulsions stabilized by CG-CPs were relatively more stable, especially those stabilized by CG-CHA.
In this study, when the emulsion was irradiated with parallel near-infrared light, the changes in the light transmitted through the liquid over time and space during centrifugation were recorded as a transmission curve. After centrifugation, the emulsion became unstable, with the heavier water phase settling to the bottom of the cuvette and the lighter oil droplets migrating to the top. Consequently, the movement of the emulsion droplets was visualized from the intensity of the transmitted light over time, allowing quantification of the destabilization process of the emulsion droplets (Chen et al., 2023c). Fig. 5a depicts the change in light transmittance of the CG- and CG-CPs-stabilized emulsions during centrifugation. The CG-stabilized emulsion exhibited a rapid shift in light transmittance, with the transmittance of most of the emulsion increasing from less than 10% to over 40%, indicating that it quickly became unstable during centrifugation. In contrast, the emulsions stabilized by CG-CPs showed effectively improved centrifugal stability, with only a gradual increase in light transmittance. Notably, after centrifugation of the CG-CHA-stabilized emulsion, most emulsion layers maintained a light transmittance below 20%, suggesting a higher degree of stability.
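The analysis described above is often condensed into a single number: the normalized rise of the mean transmittance during centrifugation. A minimal sketch follows; the index definition and the synthetic transmission profiles are illustrative assumptions, not the instrument's proprietary algorithm or data from this study.

```python
import numpy as np

# Hedged sketch: condense centrifugal transmission profiles T(time, position) into
# a normalized instability index in [0, 1]; larger values mean faster clarification
# (i.e., a less stable emulsion). Profiles are transmittance in percent.
def instability_index(profiles: np.ndarray, t_max: float = 100.0) -> float:
    t0 = profiles[0].mean()          # mean transmittance of the first scan
    tk = profiles[-1].mean()         # mean transmittance of the last scan
    return (tk - t0) / (t_max - t0)  # fraction of the maximum possible clarification

# Illustrative data: a stable emulsion barely clarifies; an unstable one does.
stable = np.linspace(8, 15, 50)[:, None] * np.ones((1, 100))
unstable = np.linspace(8, 45, 50)[:, None] * np.ones((1, 100))
print(round(instability_index(stable), 2))    # ~0.08
print(round(instability_index(unstable), 2))  # ~0.40
```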
Fig. S4 and Fig. 5b-c demonstrate the effects of pH on the visual appearance, CI, and droplet size of the prepared emulsions. Between pH 6.0 and 8.0, the emulsions stabilized by CG and CG-CPs remained relatively stable, showing only slight changes in droplet size. However, when the pH decreased to 4.0, the droplet size of the emulsions stabilized by CG and CG-CPs increased to about 16 μm. This indicated that droplets aggregated into larger units near the isoelectric point (pI ≈ 4.3). CG, being an acidic protein, typically carries a relatively low surface charge in an acidic environment (pH 4.0-5.0) and is prone to aggregation, leading to decreased emulsion stability (Chen et al., 2023). At pH 4.0 and 5.0, the emulsions stabilized by CG, CG-CA, and CG-FA separated into a transparent serum layer at the bottom of the container, while the emulsion stabilized by CG-CHA showed no detectable phase separation. When the pH was neutral or alkaline, CA, CHA, and FA were prone to oxidation, especially CA and CHA, which possess a catechol structure and can oxidize to quinones. These quinones could react with nucleophilic groups in CG and covalently couple to form yellow polymers. Additionally, the covalent coupling of CG with CA/CHA/FA could promote further protein unfolding and exposure of hydrophobic groups, enhancing the emulsifying properties (Lin et al., 2022).
Fig. S5 and Fig. 5d-e present the effects of varying salt-ion concentrations on the stability of the emulsions stabilized by CG and CG-CPs, based on visual appearance, CI, and droplet size. At low Na⁺ concentrations (0-0.1 mol/L), the stability of the emulsion improved due to the salting-in effect. However, after adding 0.2-0.3 mol/L NaCl solution, the electrostatic repulsion between droplets was screened by the salt ions, leading to droplet aggregation and coalescence (Yu, Wang, Li, & Wang, 2022). Both the droplet size and CI of the emulsions stabilized by CG-CPs were significantly smaller than those of the CG-stabilized emulsions. These results indicated that the CG-CPs complexes effectively enhanced the salt-ion stability of the emulsion after modification with CPs.
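For reference, the creaming index (CI) used in Figs. 5b and 5d is conventionally computed from the serum-layer height; a minimal sketch under that standard assumption follows. The formula and the sample heights are illustrative, not values from this study.

```python
# Hedged sketch: creaming index CI (%) = 100 * H_serum / H_total, the standard
# definition for gravity/pH/salt stability tests; the numbers are illustrative.

def creaming_index(h_serum_mm: float, h_total_mm: float) -> float:
    """Percentage of the total emulsion height occupied by the separated serum layer."""
    if not 0 <= h_serum_mm <= h_total_mm:
        raise ValueError("serum height must lie between 0 and the total height")
    return 100.0 * h_serum_mm / h_total_mm

# Example: an emulsion with a 12 mm serum layer in a 40 mm column gives CI = 30 %,
# while one with no visible serum (e.g., CG-CHA near the pI) gives CI = 0 %.
print(creaming_index(12.0, 40.0))  # 30.0
print(creaming_index(0.0, 40.0))   # 0.0
```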
Conclusion
In this study, CA, CHA, and FA were selected as representative polyphenols of CPs to investigate the underlying mechanism by which CPs enhance the emulsification stability of CG. It was found that the interactions among CA, CHA, and CG were mainly characterized by hydrogen bonds, with CHA being particularly notable in this regard. This was attributed to the quinic acid moiety of CHA, which provides additional –OH and –COOH groups, thereby leading to the strongest hydrogen bonding with CG. Furthermore, the interactions between FA and CG were mainly controlled by hydrophobic forces. These specific interactions caused the protein structure of CG to unfold and expose its hydrophobic groups, thereby promoting the formation of a dense and robust interfacial film at the O/W interface. As a result, the emulsions stabilized by CG-CPs exhibited remarkable stability in storage, centrifugation, pH, and salt-tolerance tests, with the emulsions stabilized by CG-CHA being particularly effective. This study thus provides a solid theoretical basis for improving the stability of CG and coconut milk, and offers innovative insights into the potential applications of coconut milk in the food industry.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 2. Molecular docking of CG with CA, CHA, and FA. Crystal structure of CG and structural formulas of CA, CHA, and FA (a); simulated 3D docking modes and schematic diagrams of the interactions of the 2D docking amino acid residues of CG-CA (b), CG-CHA (c), and CG-FA (d).

Fig. 4. Droplet size distribution (a), droplet size (b), and viscosity (c) of the emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA; corresponding photomicrographs (d) of the emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA after storage. Different lowercase letters indicate significant differences between samples (p < 0.05).

Fig. 5. Stability-analysis spectra of the emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA; profiles start at the red line and end at the green line (a). Creaming index (b) and droplet size (c) of the emulsions stabilized by CG, CG-CA, CG-CHA, and CG-FA at different pH; creaming index (d) and droplet size (e) after adding various concentrations of NaCl. Different capital and lowercase letters indicate significant differences between samples (p < 0.05). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
| 2023-10-23T15:09:25.759Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "13743334b56b0dfaf4dbc3cbbaee20b590d84ef2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.fochx.2023.100954",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf5333322a10784b07b9378b5e7e25c1cd212c08",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": []
} |
239248302 | pes2o/s2orc | v3-fos-license | DC-DC 3SSC-A-Based Boost Converter: Analysis, Design, and Experimental Validation
A detailed analysis and validation of the DC-DC boost converter based on the three-state switching cell (3SSC) type A are presented in this paper. The study of this topology is justified by the small amount of research that employs the 3SSC-A and by the advantages inherent to 3SSC-based converters, such as the division of current stresses between the semiconductors, the distribution of thermal losses, and the high power density. Therefore, a complete static analysis of the converter is described, as well as the study of all voltage and current stresses in the semiconductors, the development of a loss model for all components, and a comparison with other step-up structures. Additionally, the small-signal model is validated by comparing the theoretical frequency response with a simulated AC sweep analysis. Finally, implementing a simple controller structure, the converter is experimentally validated through a 600 W prototype, where its overall efficiency is examined for various load conditions, reaching 96.8% at nominal load.
Introduction
DC-DC converters play a key role in energy conversion and conditioning applications that require reduced weight and size, such as electric and hybrid vehicles, aeronautical equipment, space probes and satellites, renewable energy systems, among other applications [1][2][3]. With regard to the development of topologies for DC-DC converters, there is a trend in the search for equipment with a higher power density, and advances are mainly motivated by the requirements of lower cost and volume [4]. In this scenario, researchers have been focused on finding ways to improve and overcome the limitations in the power processing capacity of classic DC-DC converters. Among the various approaches, the soft switching techniques can be mentioned, which lead to greater efficiency in energy conversion, but the cost and complexity increase due to the multiple elements of the resonant tank network [5].
The interleaving technique in power electronics converters is the usual solution to achieve high power levels and modularity, however, the appropriate current sharing between the association of multiple cells should be taken into consideration, which demands the use of sophisticated control schemes to maintain the current balance between the semiconductors [6]. Similar to interleaving solutions, the three-state switching cell (3SSC) was introduced in [7], being an interesting solution for increasing power density, with a high efficiency level, and without the need for special control strategies.
Since the 3SSC was proposed, a variety of topologies for DC-DC, AC-DC, and DC-AC converters have been presented in the literature [8][9][10]. All of these approaches present interesting advantages inherent to the application of the 3SSC, i.e., reduction of weight and volume of the filter elements, current stress division between the semiconductors and, consequently, distribution of the losses, providing the reduction of the heat-sink's size [11][12][13]. Although the 3SSC uses a high-frequency autotransformer, its dimensions are compact, because that element operates at twice the switching frequency and only in two quadrants of the B-H curve [14].
It is noteworthy that most of the applications with the 3SSC mainly use the type-B cell (3SSC-B) topology [15]. This may be associated with the fact that the 3SSC-B-based buck, boost, and buck-boost converters have a static gain identical to the classic non-isolated converters over the whole duty-cycle range when they operate in continuous conduction mode (CCM). This characteristic has been an attractive feature for the development and exploration of new non-isolated step-up DC-DC structures employing the 3SSC-B [16][17][18][19].
Nevertheless, a characteristic of classic boost converters, including the 3SSC-B-based boost, is the right-half-plane (RHP) zero in the control-to-output-voltage transfer function in CCM. This fact causes the non-minimum phase characteristic, imposing limitations for the bandwidth and the dynamic response in the control-loop by using single-loop control architectures, i.e., to ensure stability with adequate damping [20]. These limitations have motivated the effort to develop step-up structures that have a satisfactory dynamic performance without the need to apply cascade or complex control architectures [21][22][23].
In this context, the DC-DC 3SSC-A-based boost topology, initially proposed by [7] and briefly explored by the authors in [24], becomes an interesting alternative among step-up converters with minimum-phase characteristics, where a faster dynamic response can be attained by employing a simple closed-loop scheme with a single control loop, which also reduces computational and signal-conditioning costs in practical implementations.
Therefore, taking into account the reduced amount of research that employs the 3SSC-A and the advantages presented by 3SSC-based converters, this paper fills a gap in the literature with the following contributions:

• A generalized and detailed static analysis of the DC-DC 3SSC-A-based boost converter, including the discontinuous conduction mode (DCM) and the critical conduction mode (CRM).

• A complete theoretical study of voltage and current stresses in the semiconductors, a comparison with other step-up structures with minimum-phase characteristics, the description of a loss model for all components, and the validation of the small-signal model by simulation results.

• Verification of the dynamic response of the control scheme with a single-loop architecture and of the efficiency under several load conditions by experimental results.
This paper is organized as follows: Section 2 describes the converter analysis, including loss models, modeling, and design considerations. A comparison with similar converters is detailed in Section 3. Section 4 presents the experimental validation, followed by the final considerations.
The 3SSC-A-Based Boost: Static Analysis
The 3SSC-A-based boost converter consists of an autotransformer with windings (T1, T2), two controlled switches (S1, S2), two diodes (D1, D2), one inductor (L), and an equivalent load (R_o) connected in parallel with the output capacitor (C_o), as shown in Figure 1. This topology must operate with a duty cycle of less than 0.5 in order to prevent the switches from operating in an overlapping condition, which can be considered a disadvantage of 3SSC-A-based converters. In this section, the operation principle in steady state is analyzed.
Operation Principle
The operation principle of the 3SSC-A-based boost converter is analyzed under CCM, DCM, and CRM, considering all of the converter's components as ideal elements, with the electrical variables defined as follows: V_GS1,2 are the command signals of switches S1 and S2.

CCM

First Stage (t0 < t < t1) (Figure 1a): Initially, the switch S1 is turned on, while the switch S2 is turned off. The autotransformer's winding currents are equivalent to half of the input current i_in, which is guaranteed by the unitary turns ratio; consequently, since the windings T1 and T2 have the same impedance, the voltages across them are equal and equivalent to V_in. The current i_T1 flows through the switch S1, the current i_T2 flows through the diode D2, and the inductor L stores energy.

Second Stage (t1 < t < t2) (Figure 1b): The switch S1 is turned off and the diode D1 is turned on, while S2 and D2 remain turned off and turned on, respectively. The current flowing through T1 and T2 and the magnetic flux in the autotransformer core are null. Thus, the polarity of V_L is inverted and the energy stored in the inductor L is transferred to the load.
Third Stage (t2 < t < t3): This stage is similar to the first stage, but now the switch S2 is turned on while S1 remains turned off; the diode D1 remains turned on and D2 is turned off.
Fourth Stage (t3 < t < Ts): This stage is identical to the second stage, where the inductor current flows through the diodes D1 and D2 and the autotransformer windings.
DCM
This operation mode presents six operation stages, defined according to the theoretical waveforms shown in Figure 2b. Some of the operation stages in DCM are equivalent to those in CCM and will not be described in detail.
First Stage (t0 < t < t1): This stage is identical to the first stage in CCM.
Second Stage (t1 < t < t2): This stage is identical to the second stage of CCM operation.
Third Stage (t2 < t < t3) (Figure 1d): In this stage, the current in the inductor becomes zero, the switches S1 and S2 remain turned off, and the diodes D1 and D2 turn off. Thus, there is no power transfer from the input source to the load; the power supplied to the load comes from the output capacitor C_o.
Fourth Stage (t3 < t < t4): Identical to the third stage of the CCM.
Fifth Stage (t4 < t < t5) (Figure 1b): Identical to the fourth stage of the CCM.
Sixth Stage (t5 < t < Ts): Identical to the third stage of the DCM. In Figure 2b, it is verified that the current in the inductor is null during the third and sixth operation stages, characterizing the DCM. The semiconductors are subjected to a maximum voltage equivalent to twice the input voltage V_in.
CRM
In this mode, while maintaining the 180-degree delay, each switch is turned on at the exact moment when the current in the inductor becomes null, causing the current to increase again. The inductor current becomes null every half switching period, so the minimum current I_m is equal to zero and the current ripple ΔI_L in the inductor is equal to its maximum current I_M. The first and second operation stages in CRM are equivalent to the first and second stages in DCM, respectively.
Output Characteristic of the Converter
The static gains for the CCM, DCM, and CRM operation modes are expressed by (1), (2), and (3), respectively, where I_o is the average output current, f_s is the switching frequency, D is the duty cycle, and γ represents the normalized output current, defined in (4).
From these expressions, the static gain curves of the proposed converter are presented in Figure 3. Analogously to the classic boost converter, the output voltage is a function of the load current in DCM. The maximum static gain in CRM occurs at γ = 0.0625 and D = 0.25 for the 3SSC-A-based boost converter, which differs from the classic boost converter, whose maximum gain in CRM occurs at γ = 0.125 and D = 0.5. In this regard, the CCM region is wider for the 3SSC-A-based boost converter, i.e., for the same operating point, the required inductance becomes half of that of the classical boost converter.
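As a quick numerical cross-check of the CCM operating point reported in Section 4 (V_in ≈ 180 V, V_o = 300 V, D = 0.33), the sketch below assumes the CCM static gain of the 3SSC-A-based boost takes the form G = 1 + 2D for D < 0.5. This expression is an inference from the text (it matches the measured operating point and the statement that the maximum gain equals that of the first-order KY converter), since Equation (1) itself is not reproduced here.

```python
import numpy as np

# Hedged sketch: CCM static gain of the 3SSC-A-based boost, assumed G(D) = 1 + 2D
# for 0 < D < 0.5 (inferred from the text; Eq. (1) is not reproduced here).
def static_gain_ccm(d: float) -> float:
    if not 0.0 < d < 0.5:
        raise ValueError("the 3SSC-A boost must operate with D < 0.5")
    return 1.0 + 2.0 * d

# Cross-check against the experimental operating point reported in Section 4.
v_in, d_op = 180.0, 0.33
print(v_in * static_gain_ccm(d_op))   # ~298.8 V, close to the measured V_o = 300 V

# Sweep of the gain curve over the allowed duty-cycle range.
for d in np.arange(0.05, 0.50, 0.10):
    print(f"D = {d:.2f} -> G = {static_gain_ccm(d):.2f}")
```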
Filter Elements
The inductor current ripple ΔI_L, described in (5), can be obtained by applying Kirchhoff's voltage law to Figure 1a.
The normalized current ripple is given by (6) and plotted in Figure 4, where it can be seen that the maximum value occurs at D = 0.2072. The inductance L is determined by rearranging (5), according to (7).
The critical inductance L crit , described in (8), corresponds to the threshold value of inductance between CCM and DCM. From Figure 3, the threshold value of γ is 0.0625, and the critical inductance is calculated by replacing that value in (4).
Considering the charge variation on the capacitor C o during a switching period, the minimum capacitance required to obtain the desired voltage ripple ∆V o is given by (9).
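A minimal sizing sketch follows. Since Equation (9) is not reproduced here, the sketch assumes the usual charge-balance result C_o ≥ I_o·D/(2·f_s·ΔV_o), where the factor 2·f_s reflects the output ripple occurring at twice the switching frequency (as confirmed experimentally in Section 4, where the 100 kHz ripple corresponds to f_s = 50 kHz). Treat both the expression and the numbers as illustrative assumptions.

```python
# Hedged sketch of output-capacitor sizing from charge balance.
# Assumption (Eq. (9) is not reproduced here): C_o >= I_o * D / (2 * f_s * dV_o),
# with the effective ripple frequency 2*f_s typical of 3SSC-based converters.

def min_output_capacitance(i_o: float, d: float, f_s: float, dv_o: float) -> float:
    """Minimum capacitance (F) for a desired peak-to-peak voltage ripple dv_o (V)."""
    return i_o * d / (2.0 * f_s * dv_o)

# Illustrative numbers close to the prototype: 600 W / 300 V -> I_o = 2 A,
# D = 0.33, f_s = 50 kHz, and a 1 % ripple target (3 V).
c_min = min_output_capacitance(i_o=2.0, d=0.33, f_s=50e3, dv_o=3.0)
print(f"C_o >= {c_min * 1e6:.1f} uF")  # ~2.2 uF under these assumptions
```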
Semiconductor Stresses in CCM
The average and RMS values of the switches currents are given by (10) and (11), respectively.
The average and RMS values of the diodes currents are given by (12) and (13), respectively.
The maximum voltage stress across the switches and diodes is given by (14) and (15), respectively.
Semiconductor losses are described according to [25,26]. Hence, the conduction and switching losses of the IGBT are given by (16) and (17), respectively.
where I_S,peak is the peak switch current, V_CE(sat) is the collector-emitter saturation voltage, and t_on and t_off are the turn-on and turn-off switching times. For the diodes, only the conduction losses and the reverse-recovery losses, which occur at turn-off, are considered, as described by (18) and (19), respectively, where V_F is the forward voltage, R_D is the dynamic resistance, t_rr is the reverse recovery time, and I_r is the maximum instantaneous reverse current.
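Since Equations (16)-(19) are only referenced above, the sketch below uses the textbook forms commonly cited from loss models of this kind: IGBT conduction loss V_CE(sat)·I_avg, a hard-switching loss estimate ½·V·I_peak·(t_on + t_off)·f_s, diode conduction loss V_F·I_avg + R_D·I_rms², and a reverse-recovery term ≈ ½·V·I_r·t_rr·f_s. All parameter values are placeholders, not datasheet values from this prototype.

```python
from dataclasses import dataclass

# Hedged sketch of the semiconductor loss model (textbook forms; Eqs. (16)-(19)
# are not reproduced in the text, so these expressions are assumptions).

@dataclass
class Igbt:
    v_ce_sat: float   # collector-emitter saturation voltage (V)
    t_on: float       # turn-on time (s)
    t_off: float      # turn-off time (s)

@dataclass
class Diode:
    v_f: float        # forward voltage (V)
    r_d: float        # dynamic resistance (ohm)
    t_rr: float       # reverse recovery time (s)
    i_r: float        # max instantaneous reverse current (A)

def igbt_losses(q: Igbt, i_avg, i_peak, v_sw, f_s):
    p_cond = q.v_ce_sat * i_avg                            # conduction loss
    p_sw = 0.5 * v_sw * i_peak * (q.t_on + q.t_off) * f_s  # hard-switching estimate
    return p_cond + p_sw

def diode_losses(d: Diode, i_avg, i_rms, v_block, f_s):
    p_cond = d.v_f * i_avg + d.r_d * i_rms**2              # conduction loss
    p_rr = 0.5 * v_block * d.i_r * d.t_rr * f_s            # reverse-recovery loss
    return p_cond + p_rr

# Placeholder example (one switch + one diode of the 3SSC-A boost):
q = Igbt(v_ce_sat=1.7, t_on=40e-9, t_off=120e-9)
d = Diode(v_f=1.0, r_d=0.03, t_rr=35e-9, i_r=3.0)
print(igbt_losses(q, i_avg=1.1, i_peak=2.2, v_sw=360.0, f_s=50e3))   # ~5.0 W
print(diode_losses(d, i_avg=1.0, i_rms=1.4, v_block=360.0, f_s=50e3))  # ~2.0 W
```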
Autotransformer Design and Losses
The selection of the high-frequency transformer can be defined by the product of the core magnetic cross-section area A e and the window area A w [27]. This relationship, known as the core area product A e A w , is defined in (20) as a function of the converter's electromagnetic parameters.
where P o is the output power, B max is the maximum flux density, J max is the maximum current density, and η is the window utilization factor. Since the turn ratio of the autotransformer is unitary, the number of turns in each winding is found from (21).
where D max is the maximum duty cycle.
The average and RMS current through the autotransformer windings T 1 and T 2 is given by (22) and (23), respectively.
The estimation of autotransformer losses is based on the methodology presented in [28,29]. Thus, the total losses P T are equal to the core losses P Tcore plus the copper losses P Tcopper , defined by (24).
The total core losses vary essentially as a function of the AC magnetic flux density and the operating frequency, whose relationship can be represented by the improved Steinmetz Equation (25).
where k, α, and β are extracted from the core loss per unit volume as a function of flux density and frequency, from the datasheet provided by the manufacturer; V_core is the core volume and B_pk is defined as half of the peak AC flux density. It is noteworthy that the f_s·B_pk figure of merit is directly related to the total losses in the core and is inversely proportional to the core magnetic volume. Thus, in a practical design, it is up to the designer to trade off the loss and volume parameters.
The total copper losses include the sum of losses in the T 1 and T 2 windings, given by (26).
where ρ_cu is the copper resistivity constant, l_wdg denotes the length of the winding, n is the number of litz-wire strands, and A_cu is the conductor cross-sectional area.
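Equations (24)-(26) are only referenced above, so the sketch below assembles the total autotransformer loss from the standard Steinmetz core-loss form P_core = k·f^α·B_pk^β·V_core and a DC copper-loss term P_cu = ρ_cu·l_wdg·I_rms²/(n·A_cu) per winding. The Steinmetz coefficients and geometry are placeholder values for a mid-size EE ferrite core, not the parameters of the prototype's NEE core.

```python
# Hedged sketch of the autotransformer loss estimate (Eqs. (24)-(26) are not
# reproduced in the text; the forms below are the standard Steinmetz and DC
# copper-loss expressions, with placeholder ferrite/geometry values).

RHO_CU = 1.72e-8  # copper resistivity (ohm*m)

def core_loss(k, alpha, beta, f, b_pk, v_core):
    """Steinmetz core loss (W): k * f^alpha * B_pk^beta * V_core."""
    return k * f**alpha * b_pk**beta * v_core

def winding_copper_loss(l_wdg, n_strands, a_cu, i_rms):
    """DC copper loss (W) of one litz winding with n parallel strands."""
    r_dc = RHO_CU * l_wdg / (n_strands * a_cu)
    return r_dc * i_rms**2

# Placeholder example: the magnetics see 2*f_s = 100 kHz, B_pk = 0.1 T.
p_core = core_loss(k=0.4, alpha=1.6, beta=2.6, f=100e3, b_pk=0.10, v_core=17.3e-6)
p_cu = 2 * winding_copper_loss(l_wdg=0.9, n_strands=8, a_cu=0.01e-6, i_rms=1.7)
print(f"P_T ~= {p_core + p_cu:.2f} W")  # total = core + both windings, Eq. (24) form
```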
Inductor Losses
The average and RMS current through the inductor is given by (27) and (28), respectively.
The calculation of losses in the inductor is similar to that of the autotransformer, i.e., the total losses are split into copper losses and core losses, as expressed in (29). The copper loss is defined by (30).

The core losses in the inductor can be obtained by applying Equation (25), taking into account that, for a small AC component of the current, L can be assumed constant throughout the AC excitation; B_pk is then given by (31), where N_L is the number of turns of the inductor.
Capacitor Losses
The RMS current through the capacitor is given by (32). The power dissipation in the capacitor can be expressed as a function of the RMS current I_C,RMS flowing through the equivalent series resistance (ESR), as in (33).
Transfer Function and Control Design
The adopted modeling technique is based on the basic AC modeling approach proposed in [30]. In this regard, aiming to obtain the small-signal AC equivalent circuit shown in Figure 5, small-signal perturbations are applied around the equilibrium point, and the frequency response is validated by the AC sweep analysis performed in PSIM® software, as shown in Figure 6a. From that circuit, the control-to-output G_vd(s) and line-to-output G_vg(s) transfer functions are obtained, as described by (34) and (35), respectively. By analyzing the transfer function G_vd, the proposed converter behaves as a minimum-phase system, due to the absence of right-half-plane (RHP) zeros. Therefore, the controller design process is simplified for the 3SSC-A-based boost converter, and the problems associated with RHP zeros are eliminated. Thus, it is possible to obtain a satisfactory dynamic response without implementing an additional control loop. Moreover, it is observed that the dynamic characteristics are similar to those of the classic buck converter operating in CCM, which can be mathematically evidenced by the transfer function expressed in (35). This differs from the transfer function of the 3SSC-B-based boost converter presented in [31], which has characteristics like the classic boost converter. In this regard, by using the design specifications from Table 1, the Bode diagram of the transfer function G_vd, including the sensor gain H = 8.33 × 10⁻³ V/V, is illustrated in Figure 6b (blue curve). The average voltage-mode control technique is applied to the regulation of the output voltage of the converter and, due to the characteristics of the control-to-output transfer function, a PI controller was adopted. The conventional phase-margin and gain-margin stability criteria are applied, where a crossover frequency between 1/10 and 1/4 of the switching frequency and a phase margin of 45° ≤ PM ≤ 90° should be attained to provide a good response with an adequate output-voltage overshoot. As shown in Figure 6 (red curve), by using a proportional gain K_p = 0.1033 and an integral gain K_i = 7944 for the PI controller, the crossover frequency and phase margin obtained for the voltage control loop are 5 kHz and 90°, respectively.
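The margin-check procedure described above can be reproduced numerically. In the sketch below, the plant is a placeholder second-order, minimum-phase transfer function standing in for G_vd (Equation (34) is not reproduced in the text, so its corner frequency, damping, and DC gain are assumed values); only the sensor gain H and the PI gains are taken from the text. The point of the sketch is the loop-shaping workflow, not a reproduction of the paper's exact 5 kHz / 90° result.

```python
import control as ct  # python-control package

# Hedged sketch of the single-loop voltage-mode design check. The plant below is a
# PLACEHOLDER second-order, minimum-phase G_vd (buck-like, no RHP zero), NOT the
# prototype's actual transfer function.
w0, q, g0 = 2 * 3.14159 * 2e3, 1.0, 300.0         # assumed corner, damping, DC gain
g_vd = ct.tf([g0], [1 / w0**2, 1 / (q * w0), 1])  # g0 / (s^2/w0^2 + s/(q*w0) + 1)

h = 8.33e-3                          # voltage-sensor gain from the text
kp, ki = 0.1033, 7944.0              # PI gains from the text
c_pi = ct.tf([kp, ki], [1, 0])       # C(s) = Kp + Ki/s

loop = c_pi * g_vd * h               # open-loop gain T(s) = C * G_vd * H
gm, pm, wcg, wcp = ct.margin(loop)   # stability margins of the assumed loop
print(f"phase margin = {pm:.1f} deg at {wcp / (2 * 3.14159):.0f} Hz crossover")
```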
Comparison with Other Boost Converter Topologies

Table 2 summarizes the main characteristics of the 3SSC-A-based boost converter compared to other structures with current-source characteristics at the output and an RHP-zero-free control-to-output transfer function. It is observed that the first-order KY converter has the lowest number of semiconductors; however, it has the same number of controlled switches as the 3SSC-A-based boost converter. On the other hand, since the controlled switches share the same reference, the command scheme of the 3SSC-A-based boost can result in simpler circuits. The interleaved tri-state boost converter exhibits the largest number of semiconductors, requiring the greatest complexity in the command and control scheme among the aforementioned topologies. With regard to voltage stress, the first-order KY converter presents the least stress on the switches.
The 3SSC-A-based boost converter is the only one that uses an autotransformer, with a turns ratio equal to unity, which guarantees the distribution of current stress between the semiconductors. In addition, all the energy-storage elements of this structure operate at twice the switching frequency, which provides a reduction in weight and volume compared to the other topologies. Moreover, the interleaved tri-state boost has a high voltage gain compared to the other converters; however, its static gain has a nonlinear behavior and depends on the switching logic adopted for this structure. It is important to remark that, although the 3SSC-A-based boost operates with a duty cycle of less than 0.5, its maximum static gain is equal to that of the first-order KY converter.
Experimental Results
The experimental set-up implemented to carry out the laboratory tests is shown in Figure 7. The power circuit of the 3SSC-A-based boost converter was assembled according to the parameters presented in Table 1. It is noteworthy that, by applying the methodology presented in Section 2 for the autotransformer design, the NEE-30/15/14 core was selected; however, owing to the components available at the laboratory, the NEE-42/21/20 core was used. Figure 8 shows the control signals of the switches S1 and S2, demonstrating that the converter operates without overlapping of the pulses, with D = 0.33. It also shows that the maximum voltage v_S1 is approximately 360 V, and that the current i_S1 increases linearly, reaching approximately 2.2 A. The waveforms related to the voltage and current stresses on the switches S1 and S2 are shown in Figure 9. It is observed that the switches do not remain turned on simultaneously, with a 180-degree delay between the command signals being evident. It is worth remarking that the interaction between the autotransformer leakage inductance and the IGBT collector-emitter capacitance results in an equivalent resonant LC circuit during turn-on, which causes the current spikes seen in i_S1 and i_S2 in Figure 9. Moreover, the leakage inductance would also cause voltage spikes on the switches; however, these spikes were reduced by using RLD snubber circuits. Additionally, the effect of the autotransformer winding leakage inductance could be alleviated by increasing the coupling factor, which can be achieved by replacing the EE core with a toroidal core, increasing the occupation of the window area, and applying suitable interleaved winding techniques [32]. Figure 9: v_S1 (200 V/div); v_S2 (200 V/div); i_S1 (5 A/div); i_S2 (5 A/div); time: 5 µs/div.
In Figure 10, the voltage v_D1, the current i_L, and the currents i_D1 and i_S1 are illustrated. The maximum reverse voltage on the diodes is equivalent to 2V_in ≈ −360 V, in accordance with the theoretical analysis. Looking at the currents i_S1 and i_D1, it can be verified that the switch S1 and the diode D1 do not conduct simultaneously, validating the complementary operation between these semiconductors. Furthermore, it can be observed from the behavior of the current i_L that the converter operates in CCM, and that the frequency of the ripple current ΔI_L is equal to 100 kHz, corresponding to twice the switching frequency, with an average value of 2 A. Moreover, it can be seen that the diodes D1 and D2 conduct simultaneously when the current i_L decreases linearly, i.e., at the moment when the switches S1 and S2 are turned off, and the current through each diode is equivalent to half of the current i_L, with an average value of 1 A. Figure 11 shows the waveforms of the voltage V_in, the voltage V_o, the current i_L, and the input current i_in. These results evidence that the converter operates as a step-up structure, with V_o equal to 300 V, corresponding to the required gain. It is verified that the current i_in presents a greater ripple than the current i_L, and its shape shows the characteristic of a voltage-fed converter, in contrast to the classic boost converter and the 3SSC-B-based boost topology, which present current-fed characteristics. The dynamic responses of V_o, i_L, and i_D1 during a load variation are shown in Figure 12. Initially, at t = 400 ms the load decreases from 600 W to 300 W; then, at t = 1250 ms, the load is increased from 300 W to 600 W. An output-voltage overshoot lower than 6% of the rated voltage can be verified. According to the loss model described in Section 2.2 and Table 2, the distribution of theoretical power losses in the converter, operating at full load, is shown in Figure 13, corresponding to total theoretical power losses of 17.2 W and a theoretical efficiency of around 97.2%. It is observed that the semiconductors are the elements that contribute the most to the total losses, and that the power losses in the magnetic elements are mainly given by the copper losses. Moreover, due to the low current ripple and the small ESR, the power losses in the capacitor are extremely reduced when compared to the other elements. The efficiency of the experimental prototype was evaluated in a load range of 100 W to 600 W, as shown in Figure 14. The performance of the prototype is greater than 91% over the whole defined power range, reaching approximately 96.8% at nominal load. Under nominal power conditions, the thermal distribution in the converter was evaluated by means of a thermal imaging camera, as presented in Figure 15. The images confirm an even distribution of thermal losses between the semiconductors, which results from the current division inherent to the operating characteristic of the 3SSC-A. At full load, the maximum component temperature is less than 45 °C.
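As a quick arithmetic check of the loss budget, the stated 17.2 W of total theoretical losses at 600 W output indeed corresponds to the quoted theoretical efficiency:

```python
# Efficiency from the theoretical loss budget: eta = P_o / (P_o + P_loss).
p_out, p_loss = 600.0, 17.2
eta = p_out / (p_out + p_loss)
print(f"eta = {eta:.3%}")  # 97.213 %, matching the ~97.2 % quoted in the text
```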
Conclusions
This paper presented the 3SSC-A-based boost converter, filling a gap in the literature regarding the complete study of this topology. The incorporation of the type-A 3SSC structure results in advantageous characteristics compared to the classic boost converters, offering a low ripple in the output-capacitor current, which reduces the losses due to the capacitor's equivalent series resistance and, consequently, increases the useful life of this element.
Through the study of the converter dynamics, it was found that, unlike classic boost topologies, the control-to-output transfer function of the 3SSC-A-based boost has a minimum-phase characteristic, which allows the use of a single control loop with a simple controller, offering fast load-transient responses similar to the classic buck converter behavior. Besides the high efficiency, the 3SSC-A-based boost converter presents the typical advantages related to the 3SSC, such as reduction of the weight and volume of the passive components, division of the current stress between the semiconductors, and distribution of the thermal losses, allowing a reduction in the size of the heat sinks.
The 3SSC-A-based boost converter thus becomes an attractive solution for step-up applications that must supply sensitive critical loads with low current ripple. Furthermore, the use of the structure in several energy-conditioning applications that require high performance and high power density could be explored. | 2021-10-20T15:07:33.381Z | 2021-10-17T00:00:00.000 | {
"year": 2021,
"sha1": "e0f2d44bbeb6ac5646d7ebd6bd6bb40112b93f10",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/20/6771/pdf?version=1634472415",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4fbe6b60589ffdf9f5c8d6cda913402f7e2853ac",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
19171196 | pes2o/s2orc | v3-fos-license | WSe2 light-emitting tunneling transistors with enhanced brightness at room temperature
Monolayers of molybdenum and tungsten dichalcogenides are direct bandgap semiconductors, which makes them promising for opto-electronic applications. In particular, van der Waals heterostructures consisting of monolayers of MoS2 sandwiched between atomically thin hexagonal boron nitride (hBN) and graphene electrodes allows one to obtain light emitting quantum wells (LEQWs) with low-temperature external quantum efficiency (EQE) of 1%. However, the EQE of MoS2 and MoSe2-based LEQWs shows behavior common for many other materials: it decreases fast from cryogenic conditions to room temperature, undermining their practical applications. Here we compare MoSe2 and WSe2 LEQWs. We show that the EQE of WSe2 devices grows with temperature, with room temperature EQE reaching 5%, which is 250x more than the previous best performance of MoS2 and MoSe2 quantum wells in ambient conditions. We attribute such a different temperature dependences to the inverted sign of spin-orbit splitting of conduction band states in tungsten and molybdenum dichalcogenides, which makes the lowest-energy exciton in WSe2 dark.
Furthermore, a strong spin-orbit interaction in these compounds has been predicted by density functional theory 27,28: in WX2 (MoX2) the lowest-energy states in the conduction band and the highest-energy states in the valence band have the opposite (same) spin orientations, preventing (enabling) their recombination with the emission of a photon (see Fig. 1F). Thus, according to the theoretically predicted spin-state ordering, the lowest-energy excitonic sub-band in WX2 corresponds to dark excitons, separated from the bright exciton sub-band by an energy combining the electron spin-orbit splitting Δ_SO (of the order of 30-40 meV [27][28][29][30]) and the electron-hole exchange interaction energy. As we show here, such band-structure properties of WX2 strongly impact the LEQW performance, leading to a significant enhancement of the room-temperature EQE of the WSe2 LEQWs in the electroluminescence (EL) regime. This is in contrast to the more common behavior observed in MoX2 LEQWs 10, where the EL intensity falls by up to 100 times when the temperature is varied from 6 to 300 K, leading to a significant reduction of the EQE.
Experimental procedure
The main component of our stacked-layer van der Waals heterostructure LEQWs is a light-emitting monolayer of WSe2 encapsulated between thin (2-5 monolayer) hexagonal boron nitride (hBN) barriers, with top and bottom transparent graphene electrodes for vertical current injection (Fig. 1). The layer stacks in the van der Waals structure were manufactured using a multiple 'peel/lift' procedure 31,32 in ambient conditions. The high quality of the samples is confirmed by cross-sectional TEM measurements (Fig. 1B), which demonstrate the absence of contamination between the layers on the micron scale, occurring as a result of the self-cleansing effect 32,33 (see Supplementary Information for AFM and dark-field optical microscopy of different devices). We also fabricated similar LEQW structures comprising MoSe2 monolayers. The optical properties of the LEQW devices are studied using micro-photoluminescence (μPL) at low bias voltages (typically V_b < 1.8 V) or micro-electroluminescence (μEL) at larger biases (typically V_b > 1.8 V), with samples placed in a variable-temperature cryostat (see Methods).
By applying a bias voltage, V_b, between the two graphene electrodes we are able to pass a tunnel current through the device (Fig. 1E), with the magnitude of the current determined by the largest thickness of one of the hBN barriers. Fig. 1E shows the current density (J) vs bias voltage (V_b) for four devices having different hBN barrier thicknesses. At high bias we are able to simultaneously inject electrons and holes into the transition metal dichalcogenide (TMDC) layer, Fig. 1C, which is observed as a strong increase of the tunnel conductivity. The lifetime of the injected carriers within the active region of the quantum well is expected to vary as τ ∝ θ^(−N), where θ is the probability of an electron tunneling through a single layer of hBN and N is the number of hBN atomic layers [34][35][36] (denoted L below). If the lifetime is long enough, then excitons can form within the TMDC and recombine radiatively (Fig. 1F). For hBN thicknesses below 2 L, a high proportion of the current is carried by direct tunneling through the whole heterostructure, leading to a reduction of the current-to-light conversion efficiency. For thicker barriers, the lifetime of the carriers increases, leading to improved light-emission efficiency. However, in this case the maximum current density drops, leading to dimmer LEQWs. We find that 2-3 layers of hBN is an optimal compromise between the overall brightness and the efficiency (eN_ph/I) of our devices.
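To illustrate the exponential sensitivity of the carrier lifetime to barrier thickness implied by τ ∝ θ^(−N), the short sketch below evaluates the scaling for an assumed single-layer tunneling probability θ; the value of θ is a placeholder, not a measured quantity from these devices.

```python
# Hedged sketch: injected-carrier lifetime vs hBN barrier thickness, tau ∝ theta^(-N).
# theta (single-layer tunneling probability) is an illustrative placeholder.
theta = 0.1
tau_1l = 1.0  # lifetime for a 1 L barrier, in arbitrary units

for n_layers in range(1, 6):
    tau = tau_1l * theta ** (-(n_layers - 1))  # normalized so tau(1 L) = tau_1l
    print(f"{n_layers} L barrier -> tau ~ {tau:g} (arb. units)")
```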
Results
Typical light-emission behavior of a WSe2 LEQW at T = 4.2 K is shown in Fig. 2. The PL bias dependence is shown in Fig. 2B, where at zero bias we measure the spectrum shown in Fig. 2E, exhibiting several peaks. We use the notation adopted in You et al. 37 and similarly observe neutral exciton X0 (1.725 eV) and trion X− (1.69 eV) peaks, as well as a number of additional features, with the most pronounced peaks P1 (1.669 eV) and P2 (1.649 eV) and a band at photon energies below 1.64 eV denoted P3. We would like to stress that, although features P1-P3 are always present in all our samples, their relative intensity varies from device to device. We attribute the low-energy peaks to excitons localized on defects in the TMDC; this is in agreement with the theoretical prediction that their intensity decays faster upon heating than the intensity of the trion line 38. For biases |V_b| > 2 V, we observe strong electroluminescence (EL). In Figs. 2A, C, D, and F the peaks in EL can be easily traced to the peaks in the PL spectra, as only insignificant energy shifts of ~3 meV of the spectral features compared with PL occur in the whole range of biases. Figure 2E shows that at low T, with increasing |V_b|, EL is first observed from the lowest-energy peak P3, then from P2, gradually reaching the maximum through the strong increase of P1 (for clarity, we present the low-bias behavior in the Supplementary Information). The high-energy EL peaks X0 and X− also grow with |V_b|. Also, at low temperatures their intensities are somewhat weaker than that of P1.
It is evident from Fig. 2B that this device shows asymmetric behavior for positive and negative biases, especially notable in the EL (Figs. 2A, C, E, F) as well as in the tunnel current (Fig. 1E, black curve). This is typical for the majority of van der Waals LEQWs, likely due to the different thicknesses of the top and bottom hBN barriers. EL spectra from a symmetric WSe2 LEQW device are presented in the Supplementary Information and show similar EL and bias-dependent tunnel current for positive and negative polarities at low temperature.
Considering that the key for applications is room-temperature operation of the device, in Fig. 3 we show the temperature dependence of the EL, comparing two LEQWs with the same architecture, one with a monolayer of MoSe2 and the other with a monolayer of WSe2. The dependence of the tunnel current density on bias voltage for these two devices, Fig. 1E, shows that for both samples the current density at a given bias voltage is of the same order of magnitude. The emission of the WSe2-based device becomes around 200 times brighter at 300 K as compared to T = 6 K, Fig. 3A. Qualitatively similar behavior has been observed in previous PL studies on monolayers of WSe2: typically a factor-of-ten 39,40 increase in the range from 6 K to 300 K. Such a PL variation has also been observed for the WSe2 device shown in Fig. 3A (see SI). We have checked that this unusual temperature dependence of PL and EL is reproduced in a multiplicity of WSe2 LEQW devices. Depending on the particular device, the integrated EL increase from T = 6 to 300 K is in the range from 4 to 200 times, with the increase of 200 times occurring in the samples with thin 2-3 L hBN barriers. In such devices the trion emission dominates both in PL and EL and the X0 emission is not observed (Fig. 3A), leading to a simpler spectrum with one peak at 1.698 eV and a shoulder at 1.666 eV at T = 6 K. Note that such samples also provide the brightest EL, exceeding 1 million counts per second at room temperature (see Methods for details of the LEQW substrate leading to enhanced collection efficiency).
In Fig. 3(B-D) we also carry out direct comparisons with a MoSe2 LEQW fabricated in an identical way to the WSe2 devices discussed above. As shown in Fig. 3B,C, the brightest EL from MoSe2 devices is observed at low T, where a single EL peak is observed close to the spectral position of the trion emission peak (see Supplementary Information for bias-dependent PL characterization). As the temperature is increased, this peak broadens significantly and the integrated EL intensity decreases by about 100 times, see Fig. 3C. This is similar to the previously reported ten-fold decrease in the EL intensity of MoS2 LEQWs between 10 K and 300 K 10.
The strong increase of EL with temperature corresponds to a rise of the external quantum efficiency (EQE) in WSe2 LEQWs and follows an Arrhenius relation with an energy gap of 40 meV (see Fig. 3D). This makes van der Waals heterostructures with embedded WSe2 monolayers a highly promising material for ultra-thin flexible LEQWs. Figure 4 shows the EQE temperature dependence for three WSe2 devices (data for additional devices are shown in the SI). Here the EQE is defined as EQE = eN_ph/I, where N_ph is the number of photons emitted by the device, I is the current passing through the device, and e is the electron charge. It is observed in Fig. 4A that the EQE shows the characteristic increase with temperature, reaching 5% at T = 300 K, a factor-of-250 improvement in room-temperature performance as compared to the best single-monolayer MoS2 LEQWs 10.
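A minimal sketch of the two quantities used here follows: the EQE as defined in the text, and a thermally activated EL model of the form I_EL(T) ∝ exp(−E_a/k_B T), the kind of relation used for the Arrhenius fit. The photon-number definition and E_a = 40 meV follow the text; the illustrative photon rate and current are placeholders.

```python
import math

E_CHARGE = 1.602e-19   # C
K_B = 8.617e-5         # eV/K

def eqe(photons_per_s: float, current_a: float) -> float:
    """External quantum efficiency, EQE = e * N_ph / I (photons out per electron in)."""
    return E_CHARGE * photons_per_s / current_a

def arrhenius_el(t_kelvin: float, prefactor: float, e_a_ev: float = 0.040) -> float:
    """Thermally activated EL intensity ~ exp(-E_a / k_B T), with E_a = 40 meV."""
    return prefactor * math.exp(-e_a_ev / (K_B * t_kelvin))

# Placeholder example: 1e13 emitted photons/s at 32 uA gives EQE ~ 5 %.
print(f"EQE = {eqe(1e13, 32e-6):.1%}")
# Activation contrast between 100 K and 300 K for a 40 meV barrier:
print(arrhenius_el(300, 1.0) / arrhenius_el(100, 1.0))   # ~22x
```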
In addition, in Fig. 4C we observe a monotonically increasing EQE as a function of bias voltage and injection current density, up to the maximum measured value of j = 10³ A/cm². Fig. 4B shows the corresponding EL spectra obtained from Device 3 in Fig. 4A, operated at room temperature at increasing injection current densities, with a peak emission of more than 1.3 million counts per second. Indeed, a common drawback of commercial LED lighting is a suppression of the EQE (the so-called 'efficiency droop') at high injection currents, caused by increased non-radiative processes 41,42. In contrast, the presented WSe2-based van der Waals heterostructure LEQWs become brighter at higher temperature, and their efficiency remains high and even increases with current density up to values as high as 1000 A/cm², as shown in Fig. 4C.
We suggest that the mechanism of the unusual T-dependence of EL and PL in WSe2 LEQWs is related to the spin-orbit splitting of the spin states in the free-carrier bands, as illustrated in Fig. 1D, leading to the specific sequence of bright and dark exciton states in these materials 27,28. In the discussion below we focus on the behavior of the neutral exciton, which dominates the PL and EL at room T in the majority of our LEQWs. The dark and bright exciton states are split by Δ = Δ_SO + Δ_e-h, where Δ_SO ≈ 30-40 meV [27][28][29][30] is the spin-orbit splitting of the electrons in the conduction band (which has the opposite sign to that in MoX2) and Δ_e-h ≈ 1.5 meV is the interband (conduction-valence) exchange energy.
At low temperature the exciton population accumulates in the low-energy dark exciton sub-band, which mostly decays via non-radiative escape. At the same time, at low T the bright exciton sub-band is weakly populated, as it is shifted to higher energy by Δ with respect to the dark exciton; hence, the intensity of the X0 line is low. As the temperature increases, thermal activation increases the bright exciton population. This population is mostly contained in the high-momentum exciton 'reservoir', whereas light emission occurs from exciton states with negligible momentum (k ≈ 0). Excitons from the 'reservoir' can be scattered into the light-emitting states as a result of interaction with acoustic phonons and electrons (the latter is particularly significant in the EL regime), or defects.
The fact that, at high T, the intensity of the X0 line increases both in PL and EL is a manifestation of this thermal-activation behavior. Note that for several of the studied devices the Arrhenius fit to the exponential increase of the EL with increasing temperature yields a characteristic energy of ~40 meV (inset to Fig. 3D), which is quite close to the theoretically predicted Δ_SO 27.
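The thermal-activation argument can be made semi-quantitative with a two-level Boltzmann estimate of the bright-exciton occupation for a dark-bright splitting Δ ≈ 40 meV. The sketch below deliberately ignores the momentum 'reservoir' and any temperature dependence of the non-radiative rates, so it only illustrates the trend, not the measured enhancement factor.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def bright_fraction(t_kelvin: float, delta_ev: float = 0.040) -> float:
    """Two-level Boltzmann estimate: fraction of excitons in the bright sub-band,
    which lies delta_ev ABOVE the dark sub-band in WSe2 (opposite to MoSe2)."""
    boltz = math.exp(-delta_ev / (K_B * t_kelvin))
    return boltz / (1.0 + boltz)

for t in (6, 100, 200, 300):
    print(f"T = {t:3d} K -> bright fraction ~ {bright_fraction(t):.2e}")
# At 6 K the bright sub-band is essentially empty; by 300 K roughly 18 % of the
# population is bright, qualitatively explaining why WSe2 LEQWs brighten with T.
```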
This is in contrast to MoSe2 LEQWs, where the dark exciton is higher in energy than the bright exciton. As a result, MoSe2 LEQWs always show the opposite behavior, with a notable decrease of light emission with increasing T. While it is expected that the emission efficiency would decrease due to the transfer of some exciton population into the dark exciton sub-band and also into the 'reservoir' states, such a strong decrease, by a factor of 100 and more, may only be possible if the non-radiative escape time shortens at elevated temperature, which appears to be a stronger effect in MoSe2 compared with WSe2.
In conclusion, we have fabricated high-efficiency LEQWs made from van der Waals heterostructures comprising a single layer of WSe2 as the active light-emitting material, hBN tunnel barriers, and graphene electrodes for vertical current injection. Such WSe2-based LEQWs show enhanced performance at room temperature compared with low-T operation. This enhancement is in contrast to the MoSe2 and MoS2 LEDs studied in this work and reported earlier, where both PL and EL decrease by a factor of 10 to 100 when the temperature is varied from 6 to 300 K. With room-temperature external efficiencies of 5%, such LEDs present significant promise for the future development of flexible opto-electronic components. The efficiency can be boosted further by creating 'multiple quantum well' devices 10 and by fine tuning of the hBN tunnel barrier thickness. One of the remaining challenges is the scalable production of these components, only possible with well-controlled wafer-scale growth techniques 43,44.
Materials and Methods
LED fabrication: First, bulk hexagonal boron nitride (hBN) is mechanically cleaved and exfoliated onto a freshly cleaned Si/SiO2 substrate. After this, a graphene flake is peeled from a PMMA membrane onto the hBN crystal, followed by a thin hBN tunnel barrier; then an hBN tunnel barrier on PMMA is used to lift a WSe2 or MoSe2 single layer from a second substrate, and both of these crystals are peeled together from the PMMA onto the hBN/Gr/hBN stack, forming hBN/Gr/hBN/WX2/hBN. Finally, the top graphene electrode is peeled onto the stack, thus completing the LED structure (see Supplementary Information for fabrication details). After the stack is completed, we follow standard microfabrication procedures for adding electrical contacts to the top and bottom graphene electrodes. We also transferred some complete stacks onto highly reflective distributed Bragg reflector (DBR) substrates. From LEDs placed on such DBRs we are able to collect up to 30% of the emitted light, as opposed to just 2% from the Si/SiO2 substrate.
Electrical and optical measurements: Samples were mounted either in a liquid-helium flow cryostat for temperature-dependent measurements or in an exchange-gas cryostat for measurements at T = 4.2 K. Light emission from the samples was collected with high-numerical-aperture lenses positioned either outside the flow cryostat or inside the exchange-gas cryostat. The photoluminescence (PL) and electroluminescence (EL) signals were measured using a 0.5 m spectrometer and a nitrogen-cooled charge-coupled device (Princeton Instruments, Pylon CCD). Electrical injection is performed using a Keithley 2400 source-meter. PL was excited with continuous-wave lasers at 532 nm or 637 nm, focused in a spot size of ~2 to 3 μm on the sample surface. The optical image in Figure 1F was captured using a Nikon DS-Qi2 monochrome camera with a quantum efficiency of 20% at 750 nm.
Scanning transmission electron microscopy (STEM):
STEM imaging was carried out using a Titan G2 probe-side aberration-corrected STEM operating at 200 kV and equipped with a high-efficiency ChemiSTEM energy-dispersive X-ray detector. The convergence angle was 19 mrad and the third-order spherical aberration was set to zero (±5 μm). The multilayer structures were oriented along the <hkl0> crystallographic direction by taking advantage of the Kikuchi bands of the silicon substrate (see Supplementary Information and ref. 19 for a more detailed description).
ASSOCIATED CONTENT
Supporting Information: Description of the device fabrication, temperature-dependent electroluminescence data for additional WSe2 LEDs, low-temperature photoluminescence of MoSe2 LEDs, details of our quantum efficiency estimations, and further discussion. This material is available free of charge via the Internet at http://pubs.acs.org.
Fabrication
Quantum well heterostructure devices are assembled by a multiple peel-lift van der Waals assembly procedure which has been described in detail previously [1-3].
In summary, the devices are constructed as follows. First, an hBN flake of thickness 5-35 nm is deposited onto a thermally oxidized silicon wafer (90 or 290 nm oxide thickness) to form an atomically flat substrate. A graphene flake is then peeled from a poly(methyl methacrylate) (PMMA) membrane onto the hBN substrate, followed by a thin hBN tunnel barrier of thickness 2-5 monolayers (L).
Another hBN tunnel barrier of thickness 2-5 L on a PMMA membrane is then used to lift (by van der Waals forces) a single layer of transition metal dichalcogenide (TMDC) from a separate SiO 2 substrate. The hBN together with the single-layer TMDC is then peeled onto the already constructed hBN-Gr-hBN structure to form the stack hBN-Gr-hBN(2-5L)-TMDC(1L)-hBN(2-5L). Finally, a graphene flake is peeled from a membrane to complete the light-emitting quantum well (LEQW) device. Estimation of the hBN tunnel barrier thickness is conducted using a combination of optical and atomic force microscopy measurements.
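The assembly sequence above is easy to get wrong because the TMDC/hBN pair is deposited inverted. The following minimal Python sketch (purely illustrative; the layer labels and step grouping are our own, not from the paper) encodes the peel-lift order and checks that the finished stack matches the target sequence.

```python
# Illustrative sketch of the peel-lift assembly order; layer labels are ours.
TARGET = ["hBN(substrate)", "Gr", "hBN(2-5L)", "TMDC(1L)", "hBN(2-5L)", "Gr"]

def assemble(steps):
    """Build the stack bottom-up; each step adds one or more layers."""
    stack = []
    for layers in steps:
        stack.extend(layers)
    return stack

steps = [
    ["hBN(substrate)"],          # exfoliated onto Si/SiO2
    ["Gr"],                      # bottom graphene from PMMA
    ["hBN(2-5L)"],               # first tunnel barrier
    ["TMDC(1L)", "hBN(2-5L)"],   # TMDC lifted together with the second barrier;
                                 # it lands TMDC-side down, hence this ordering
    ["Gr"],                      # top graphene electrode
]

assert assemble(steps) == TARGET
print(" / ".join(assemble(steps)))
```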
Electrical contacts are patterned using electron beam lithography followed by evaporation of Cr/Au (5 nm/50 nm), allowing for independent electrical contacts to both the top and bottom graphene electrodes.
Devices were also fabricated on distributed Bragg reflector (DBR) substrates, which allow for the collection of 30% of the emitted light and lead to much brighter LEQWs. Details of the Bragg reflector substrates can be found in [4, 5]. Heterostructure LEQWs on DBRs were first fabricated on a thermally oxidized Si wafer; the whole heterostructure stack was then transferred by the wet transfer method from the SiO 2 substrate to the DBR mirror, followed by e-beam lithography and metallization. This step was found to be necessary due to poor adhesion of flakes to the DBR mirrors, preventing direct exfoliation onto the mirror surfaces. The DBR dielectric pairs were also found to delaminate when removing tape during mechanical exfoliation.
All TMDC materials were sourced from either HQ-Graphene or 2Dsemiconductor.

Figure S6. Photoluminescence (PL) and electroluminescence (EL) spectra measured at T = 6 K. A, contour map of PL vs bias voltage for a WSe 2 LEQW with the current density shown on the right y-axis and the data plotted in white. B, PL spectrum at V b = 0 V and C, EL spectra for positive and negative biases. D,E,F, same as in A,B,C but for a symmetric LEQW with 2L hBN tunnel barriers around the WSe 2 layer. G,H,I, same as in A,B,C, but for a symmetric (2L hBN barriers) MoSe 2 LEQW. PL was measured using a 532 nm laser at a power of 32 µW.
4. Low bias EL spectra for the device shown in Figure 2 of the main text.
Temperature dependence of the low energy peaks
Here we consider in more detail the temperature dependence of the low energy peaks P 1 and P 2 in a typical WSe 2 LEQW device. As the temperature increases from 20 to 80 K, the EL intensity of the high energy peak X 0 increases by a factor of 2 and the X − peak grows by a factor of 1.5 (see Fig. S8). At the same time the low energy peaks P 1 and P 2 start to decay, with P 2 showing a sharp decay by a factor of 2. At higher temperatures these low energy peaks start to broaden and merge, and it becomes hard to trace individual features. The observed redistribution of the EL intensity clearly shows thermal-activation type behavior, where the occupation of the low energy states decreases while the population of the high energy states grows with temperature. Note that, eventually, at room T, the neutral exciton line dominates in PL and EL in the majority of LEQWs studied in this work (see also PL results on WSe 2 monolayer films in Refs. [6, 7]). The PL yield of the MoSe 2 device drops off by a similar amount to the EL in this device. However, for the WSe 2 device the PL intensity only grows by roughly a factor of 10, compared with the 200-fold increase of the EL in this structure.
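The thermal-activation picture described above is what motivates the Arrhenius analysis in Figure 3D of the main text. As a hedged illustration, the sketch below fits synthetic (temperature, intensity) data, not the measured values, with ln I vs 1/T to extract an activation energy; the data arrays and the assumed exponential form are placeholders.

```python
# Illustrative Arrhenius fit, I(T) ~ exp(-Ea/kT); all data here are synthetic.
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical (temperature K, integrated EL intensity) pairs for the high-T region
T = np.array([200.0, 225.0, 250.0, 275.0, 300.0])
I = np.array([10.0, 21.0, 40.0, 68.0, 110.0])

# Linear fit of ln(I) vs 1/T: slope = -Ea/k_B for thermally activated behavior
slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
E_a = -slope * k_B
print(f"activation energy ~ {E_a * 1e3:.0f} meV")
```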
Collection efficiency
The quantum efficiency is defined as the number of photons emitted per number of injected carriers, Ne/I (N is the number of emitted photons per second, e the electron charge, and I the current passing through our collection area). In order to estimate the number of emitted photons we need to estimate our collection efficiency. The total collection efficiency is defined as η = η Lens η optic η system . η optic accounts for the loss of all the optical components in the optical circuit. It was measured directly using a 1.96 eV laser and a power meter to determine the loss at each component. We find η optic = 0.18. η system converts the number of photons arriving at the incoming slit of the detector into detector counts. It takes into account the loss of photons which pass through the slit and grating and onto the CCD, and has again been measured directly by using the 1.96 eV laser and taking spectra of the laser at different powers in order to obtain counts vs incident photons. For our system we get 4203 integrated cts/sec per 1 pW. Taking into account that 1 pW of power corresponds to N = P/hν = 3177476 photons per second, we arrive at a conversion coefficient between the number of integrated counts and the number of photons incident on the slit of the spectrometer per second, leading to the system efficiency η system = 4203/3177476 = 1.32 x 10 -3 . η Lens is the efficiency of the lens collection [8]. We use a 50x objective with a numerical aperture NA = 0.55. LEDs are fabricated on one of two substrates: Si-SiO 2 (290 nm), with refractive indices of Si (n = 3.734) and SiO 2 (n = 1.645), or distributed Bragg reflectors, which consist of 10 alternating quarter-wave pairs (187.5 nm) of SiO 2 (n = 1.46) and NbO 2 (n = 2.122) (see [4, 5]).
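To make the bookkeeping above concrete, the following sketch reproduces the η_system arithmetic using the numbers quoted in the text (4203 counts/s per pW at 1.96 eV) and shows how an external quantum efficiency would follow from measured counts and injection current. The η_lens value uses the ~2% collection quoted for Si/SiO2 substrates; the function names and structure are ours, not the authors' analysis code.

```python
# Sketch of the efficiency bookkeeping; numbers are those quoted in the text.
PHOTON_ENERGY_EV = 1.96
E_CHARGE = 1.602e-19  # C

def photons_per_second(power_W, energy_eV=PHOTON_ENERGY_EV):
    return power_W / (energy_eV * E_CHARGE)

# System efficiency: 4203 integrated counts/s measured for 1 pW at the slit
eta_system = 4203.0 / photons_per_second(1e-12)   # ~1.32e-3
eta_optic = 0.18                                  # measured optical-path loss
eta_lens = 0.02                                   # ~2% quoted for Si/SiO2 (~30% on DBR)

eta_total = eta_lens * eta_optic * eta_system

def external_quantum_efficiency(counts_per_s, current_A):
    """EQE = emitted photons per injected carrier, N e / I."""
    n_emitted = counts_per_s / eta_total          # undo all collection losses
    n_injected = current_A / E_CHARGE
    return n_emitted / n_injected

print(f"eta_system = {eta_system:.2e}")
```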
Cross sectional imaging
Further details of cross sectional imaging of heterostructures produced from 2D materials can be found in [1, 9].
Sample preparation
In summary, a dual beam instrument (FEI Dual Beam Nova 600i) was used for site-specific preparation of cross sectional samples suitable for TEM analysis using the lift-out approach [10]. This instrument combines a focused ion beam (FIB) and a scanning electron microscope (SEM) column in the same chamber and is also fitted with a gas-injection system to allow local material deposition and material-specific preferential milling to be performed by introducing reactive gases in the vicinity of the electron or ion probe. The electron column delivers the imaging abilities of the SEM and is at the same time less destructive than FIB imaging. For heterostructures fabricated on insulating substrates, such as DBRs, a thin layer of Au was initially deposited to prevent charging during SEM imaging. SEM imaging of the device prior to milling allows one to identify an area suitable for side view imaging. After sputtering of a 10 nm carbon coating and then a 50 nm Au-Pd coating on the whole surface ex situ, the Au/Cr contacts on graphene were still visible as raised regions in the secondary electron image. These were used to correctly position and deposit a Pt strap layer on the surface at a chosen location, increasing the metallic layer above the device to ~2 μm. The Pt deposition was initially done with the electron beam at 5 kV and 1 nA up to a thickness of about 0.5 μm in order to reduce beam damage, and subsequently with the ion beam at 30 kV Ga+ and 100 pA to build up the final 2 μm thick deposition. The strap protects the region of interest during milling as well as providing mechanical stability to the cross sectional slice after its removal. Trenches were milled around the strap using a 30 kV Ga+ beam with a current of 1-6 nA, which resulted in a slice about 1 μm thick. Before removing the final edge supporting the milled slice and milling beneath it to free it from the substrate, one end of the Pt strap slice was welded to a nano-manipulator needle using further Pt deposition. The cross sectional slice, with typical dimensions of 1 μm x 5 μm x 10 μm, could then be extracted and transferred to an Omniprobe copper half grid as required for TEM. The slice was then welded onto the grid using Pt deposition so that it could be safely separated from the nanomanipulator by FIB milling. The lamella was further thinned to almost electron beam transparency using a 30 kV Ga+ beam at 0.1-1 nA. A final gentle polish with Ga+ ions (at 5 kV and 50 pA) was used to remove side damage and reduce the specimen thickness to 20-70 nm. The fact that the cross sectional slice was precisely extracted from the chosen spot was confirmed for all devices by comparing the positions of identifiable features such as Au contacts and/or hydrocarbon bubbles, which are visible both in the SEM images of the original device and within TEM images of the prepared cross section.
Scanning transmission electron microscope imaging and energy dispersive x-ray spectroscopy analysis
High resolution scanning transmission electron microscope (STEM) imaging was performed using a probe-side aberration-corrected FEI Titan G2 80-200 kV with an X-FEG electron source operated at 200 kV. High angle annular dark field (HAADF) and bright field (BF) STEM imaging was performed using a probe convergence angle of 26 mrad, a HAADF inner angle of 52 mrad, and a probe current of ~200 pA. Energy dispersive x-ray (EDX) spectrum imaging was performed in the Titan using a Super-X four silicon drift EDX detector system with a total collection solid angle of 0.7 srad. The multilayer structures were oriented along an <hkl0> crystallographic direction by taking advantage of the Kikuchi bands of the Si substrate.
Figure 1.
Figure 1. (A) Schematic of the device architecture. (B) High resolution transmission electron microscopy image of a cross-sectional slice of a WSe 2 LEQW on a DBR substrate. (C) Band alignment at high bias of a WSe 2 LEQW. (D) Schematic representation of the band structure of WSe 2 . Red and blue arrows denote spin orientation. (E) Current density vs bias voltage V b for the presented devices. (F) 50x magnification monochrome image of a WSe 2 LEQW device with an applied bias of V b = 2 V and current of 2 μA, taken in ambient conditions with weak backlight illumination. Red false color: Au contacts to bottom graphene; blue false color: Au contacts to top graphene (central white region corresponds to strong electroluminescence). See supplementary information for fabrication details.
Figure 2.
Figure 2. Contour maps of the EL spectra from a 5±1L hBN-WSe 2 -3L hBN LED at T = 4.2 K for negative (A) and positive (C) bias voltage. (B) PL contour map as a function of V b , measured at an excitation power of 10 μW and an excitation energy of 1.95 eV. Current density vs bias voltage for this device is shown in Fig. 1E (black curve). (E) PL spectrum at V b = 0 V. (D), (F) Bias dependence of the EL for negative and positive polarities, respectively (low bias spectra are presented more clearly in the supplementary materials).
Figure 3.
Figure 3. (A) Electroluminescence spectra taken at different temperatures for a WSe 2 QW with 2L hBN tunnel barriers, measured with an applied bias of V b = 2 V (J-V b is shown in Fig. 1E with a blue curve). This sample demonstrates a 200-fold increase of the EL output when the temperature is increased from T = 6 K to 300 K. (B) Electroluminescence spectra recorded at V b = 1.8 V for various temperatures for a MoSe 2 LEQW (J-V b is shown in Fig. 1E with a magenta curve) having identical structure to the WSe 2 device in (A). The device shows a typical decrease of light emission with increased temperature. (C) Temperature dependence of the integrated EL intensity for WSe 2 (blue) and MoSe 2 (red), showing opposite trends with increasing T. The intensities are normalized by those measured for each device at T = 6 K. (D) Arrhenius plot of the EL yield with temperature for the WSe 2 device shown in (A). Inset shows the high temperature region used for the linear fit.
Figure 4.
Figure 4. (A) Temperature dependence of the quantum efficiency for three typical WSe2 LED devices measured at bias voltages and injection currents of 2.8 V and j = 0.15 µA/μm² (Device 1), 2.8 V and j = 0.5 µA/μm² (Device 2), and 2.3 V and j = 8.8 µA/μm² (Device 3). (B) Individual electroluminescence spectra plotted for four different injection current densities for Device 3. (C) The external quantum efficiency plotted against bias voltage and injection current density at T = 300 K for Device 3. The EQE increases monotonically even up to current densities of 10 µA/μm² (1000 A/cm²).
Figure S1.
Figure S1. A, hBN crystal exfoliated onto an oxidized silicon wafer; dark field images are shown on the right. B, a graphene flake is peeled from a PMMA membrane onto the large hBN crystal. C, the graphene flake is then covered with the first hBN tunnel barrier, which is again peeled from a PMMA membrane. D, the quantum well is completed by using a second hBN tunnel barrier to lift a WSe 2 flake from a separate Si-SiO 2 substrate; the second tunnel barrier together with the WSe 2 layer is then peeled onto the hBN-Gr stack. E, finally the top graphene electrode is peeled on, completing the heterostructure stack (inset: magnified image of the heterostructure region). Scale bars 50 μm in A, 25 μm in B-E.
Figure S2.
Figure S2. Top: optical micrograph of the completed device; the dark regions either side of the overlap region correspond to an etch mesa in PMMA used to remove excess graphene outside the overlap region. Bottom: image of the device under a bias of V b = 2 V. The central overlap region shows strong electroluminescence. (Scale bar: 25 μm)
Figure S5.
Figure S5. A, typical I-V b dependence for a LEQW device, showing only weak dependence on temperature. B, ratio of the tunnel conductivity at a given temperature to that at T = 6 K, showing only a small increase from T = 6 K to T = 300 K, taken at V b = 2.8 V.
Figure S7.
Figure S7. A, electroluminescence contour maps for positive and negative bias voltage plotted on a logarithmic scale to better show the onset of EL. B,C, electroluminescence spectra in the low bias regime for positive and negative bias.
Figure S8.
Figure S8. Comparison of the photoluminescence intensity for MoSe 2 and WSe 2 devices shown in Figure 3D.
6. Figure S9 shows the temperature dependence of the photoluminescence integrated intensity for the MoSe 2 (red) and WSe 2 (blue) devices shown in Figure 3D of the main text.
Figure S9.
Figure S9. Comparison of the integrated photoluminescence intensity as a function of temperature for the MoSe 2 and WSe 2 devices shown in Figure 3D.
Figure S10.
Figure S10. Total power emitted within a polar angle for an emitting dipole placed on Si-SiO 2 and on a distributed Bragg reflector (DBR), as well as in free space.
"year": 2015,
"sha1": "d3aa4dcdee7c66205a170ea5c9b19e3ff0695cb5",
"oa_license": null,
"oa_url": "http://eprints.whiterose.ac.uk/94114/1/1511.06265v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d3aa4dcdee7c66205a170ea5c9b19e3ff0695cb5",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science",
"Physics"
]
} |
Biochemical Analysis Based on Zinc Uptake of Chickpea (Cicer arietinum L.) Varieties Infected by Meloidogyne incognita
Significant constraints in chickpea (Cicer arietinum L.) production cause a global yield loss of slightly more than 14% due to plant-parasitic nematodes. The root-knot nematode (Meloidogyne sp.) is an endoparasite and a major species affecting the chickpea plant. Chemical management, however, is costly and promotes pest resurgence in the pathogen, so ecologically based nematode management is needed, although progress is also hampered by the difficulty of breeding for resistance against such plant-parasitic nematodes. This was the primary reason to conduct the present experiment, which aimed to enhance resistance in chickpea plants, assessed through zinc uptake, by using the bioagent Pseudomonas fluorescens alone or in combination with other treatments. Different treatments, including nematode, bacterium, and chemical applications, were applied to the chickpea cultivars RSG 974, GG 5, and GNG 2144 to sustain the enhancement of disease resistance. The zinc content of chickpea variety GNG 2144 was highest in the treatment in which only the bacterium (P. fluorescens) was inoculated, i.e., 3.14 mg/100 g of root, followed by GG 5 at 2.79 mg/100 g of root and RSG 974 at 2.35 mg/100 g of root, in descending order. Application of P. fluorescens, combined or alone, increased the Zn concentration in roots of chickpea plants compared with the healthy check, followed by chemically treated plants.
INTRODUCTION
Root-knot nematode (Meloidogyne sp.) is an endoparasite. The infective second-stage juvenile readily penetrates the plant roots near the apical meristem within 24 h after inoculation [1]; however, other regions of the roots are not immune to attack [2]. The mature females, which develop in the root tissue, produce conspicuous galls in the infected zone of the roots. The formation of giant cells through the dissolution of cell walls and coalescing of their contents was reported for root-knot infection of Nicotiana hybrids [3]. Further support for this process of syncytial gall formation came from studies of root-knot infections on tomato [2]. Similar evidence for cell wall dissolution has been reported for many hosts infected by Meloidogyne species [4,5,6,7]. Researchers have also surveyed 30 soil samples from medicinal plants and found M. incognita in all the samples [8]. Furthermore, an inverse relationship was observed between M. incognita and the growth of Ocimum sanctum [9]. The effects of M. incognita on different plants have been observed by several workers, e.g., on Manihot esculenta [10], tomato [11], resistant cotton genotypes [12], olive explants [13], sunflower [14], fruit crops [15,16], pulse crops [17], and chickpea [18]. Chickpea (Cicer arietinum L.) is an important pulse crop with an annual production of 11.5 million tonnes worldwide [19,20]. However, chickpea yield tends to be low and unstable, with a world average yield of 850 kg/ha [20], well below the estimated yield potential of 4,000 kg/ha [21,22].
India is the world's largest consumer of chickpea and the world's largest producer, contributing over 70% of total global chickpea production [23]. However, a significant decrease in chickpea production occurs due to plant-parasitic nematodes. Plant-parasitic nematodes constrain chickpea (Cicer arietinum) production, with annual yield losses estimated at 14% [24] of total global production. Nematode species causing significant economic damage in chickpea include root-knot nematodes (Meloidogyne artiella, M. incognita, and M. javanica) [25]. To manage the root-knot nematode problem in the present study, we applied the biocontrol agent Pseudomonas fluorescens, as it has a potent antagonistic nature against plant diseases [26]. Field trials in Vigna mungo to control the root rot disease complex caused by Macrophomina phaseolina and the cyst nematode Heterodera cajani were carried out at Coimbatore. P. fluorescens was applied as a seed treatment (2 g/kg seed), and the results showed lower root rot incidence, a lower nematode population, and increased pod yield [27]. P. fluorescens alone or in combination with pesticides controlled the wilt disease complex of pigeon pea involving H. cajani. P. fluorescens alone increased plant growth, nodulation, and phosphorus content, and decreased nematode multiplication and wilting in infected plants. Field trials at Kanpur, India, for controlling root rot disease (Rhizoctonia bataticola) in chickpea cv. C235 using the bacterial antagonist P. fluorescens at 500 g/ha gave some control compared with the untreated control when given as soil inoculation plus seed treatment [28].
Considering the importance of the subject, the present investigation was undertaken to determine the changes, if any, in the zinc content of chickpea inoculated with the root-knot nematode M. incognita in combination with P. fluorescens as a bioagent, where different treatments of nematode, bacterium, and chemicals were used to sustain the enhancement of disease resistance in the chickpea cultivars RSG 974, GG 5, and GNG 2144.
MATERIALS AND METHODS
Cultivars of chickpea were sown in 15 cm diameter earthen pots filled with steam-sterilized soil. In order to understand the chemical and genetic basis of resistance, three varieties were chosen for a detailed analysis. These varieties were grown with complete care. One set each of uninoculated (healthy) and inoculated (infected) plants was analyzed to test the effects of root-knot nematode infection on the growth and vigor of the plants and their root system. A week after germination, seven treatments with four replications were applied to each of the chickpea varieties RSG 974, GG 5, and GNG 2144:
T1 - Meloidogyne incognita alone @ 1000 J2/pot
T2 - Bacterium, P. fluorescens, alone @ 7 g/pot
T3 - M. incognita inoculated one week prior to the bacterium
T4 - Bacterium inoculated one week prior to M. incognita
T5 - M. incognita and bacterium inoculated at the same time
T6 - Carbofuran 3G @ 2.5 kg a.i./ha
T7 - Control (untreated)
Healthy and inoculated plants were harvested 45 days after planting. The harvested roots were washed thoroughly under running tap water to remove adhering soil particles and were kept separately for chemical analysis.
Estimation of Micronutrient 'Zn' in Roots
Samples were digested with a diacid mixture (HNO₃-HClO₄) [29]. The digested sample was introduced to AAS (atomic absorption spectroscopy) for Zn analysis after standardizing the AAS with the respective standards.
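As an illustration of this step, the sketch below builds a linear AAS calibration curve and converts a measured absorbance to mg Zn per 100 g of root. All numbers (standard concentrations, absorbances, digest volume, sample mass) are hypothetical; the actual instrument standardization and dilution scheme are not reported here.

```python
# Illustrative AAS calibration sketch; all values are synthetic placeholders.
import numpy as np

# Hypothetical Zn working standards (mg/L) and their measured absorbances
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_abs = np.array([0.002, 0.051, 0.104, 0.206, 0.411])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # absorbance = slope*conc + intercept

def zn_mg_per_100g(absorbance, digest_volume_mL, sample_mass_g):
    conc_mg_L = (absorbance - intercept) / slope          # from the standard curve
    mg_in_digest = conc_mg_L * digest_volume_mL / 1000.0  # total Zn in the digest
    return mg_in_digest / sample_mass_g * 100.0           # normalize to 100 g of root

print(round(zn_mg_per_100g(0.120, digest_volume_mL=50.0, sample_mass_g=2.0), 2))
# -> ~2.9 mg Zn per 100 g of root for this hypothetical sample
```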
Effectiveness of Zinc Contents on the Resistance/Susceptibility of Chickpea Varieties Influenced by the Root-knot Nematode, M. incognita, and P. fluorescens

Zn content in variety RSG 974
The total zinc content of chickpea variety RSG 974 (Table 1) was highest in treatment 2, where only the bacterium (P. fluorescens) was inoculated, at 2.35 mg/100 g of root.
Zn content in variety GG 5
The total zinc content of chickpea variety GG 5 (Table 2) was likewise highest in treatment 2 (P. fluorescens alone), at 2.79 mg/100 g of root.
Zn content in variety GNG 2144
The total zinc content of chickpea variety GNG 2144 (Table 3) was highest in treatment 2 (P. fluorescens alone), at 3.14 mg/100 g of root.
DISCUSSION
The total zinc content of chickpea variety RSG 974 (Table 1) was highest in treatment 2, where only the bacterium (P. fluorescens) was inoculated, i.e., 2.35 mg/100 g of root, an increase of 45.06% over the control (treatment 7), followed by treatment 6, where only carbofuran was applied, i.e., 2.11 mg/100 g, an increase of 30.25%. These findings are quite similar to those of [30], who concluded that inoculation of isolates BT3 and CT8 improved the growth parameters of chickpea and increased the plant's Zn uptake by 3.9-6.0%.
The total zinc content of chickpea variety GG 5 (Table 2) showed the same pattern, being highest in treatment 2 (P. fluorescens alone) at 2.79 mg/100 g of root. The total zinc content of chickpea variety GNG 2144 (Table 3) was highest in treatment 2, where only the bacterium (P. fluorescens) was inoculated, i.e., 3.14 mg/100 g of root, an increase of 65.87% over the control (treatment 7), followed by treatment 6, where only carbofuran was applied, i.e., 3.07 mg/100 g, an increase of 62.3%, in descending order. It has been reported that zinc-solubilizing bacteria can release Zn from its insoluble compounds, and that the strongest bacterium for Zn solubilization is P. fluorescens (Ur21) [32]. The lowest zinc content was recorded in treatment 1, where only M. incognita was inoculated, i.e., 2.20 mg/100 g of root of variety GNG 2144, with a low percentage increase (16.4%) over the control. This finding is similar to the findings of [33], who observed that Zn content was lower in infected plants of African marigold than in healthy plants due to Tylenchulus semipenetrans infection.
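A quick consistency check of the reported figures can be done by back-computing the implied control value. In the sketch below, the control zinc content for GNG 2144 is derived from the treatment-2 value and its reported percentage increase, and the other percentages are recomputed; the small discrepancies (62.2% vs 62.3%, 16.2% vs 16.4%) are within rounding.

```python
# Back-of-envelope check of the percentages reported above for GNG 2144.
# Treatment values (mg/100 g of root) are from the text; the control value
# is derived from the reported 65.87% increase, not reported directly here.
t2_zn, t2_pct = 3.14, 65.87                # P. fluorescens alone
control = t2_zn / (1 + t2_pct / 100)       # implied control: ~1.89 mg/100 g

def pct_increase(treated, control):
    return (treated - control) / control * 100

print(round(pct_increase(3.07, control), 1))  # carbofuran: ~62.2 (reported 62.3)
print(round(pct_increase(2.20, control), 1))  # M. incognita alone: ~16.2 (reported 16.4)
```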
Certain members of P. fluorescens have been shown to be potential biocontrol agents that suppress plant diseases by protecting the seeds and roots from fungal infection. They are known to enhance plant growth promotion and reduce the severity of many fungal diseases [26].
CONCLUSION
Chickpea (Cicer arietinum L.) production is hampered by considerable constraints, which result in a global yield loss of about 14% due to plant-parasitic nematodes. The root-knot nematode (Meloidogyne sp.) is an endoparasite that has a large impact on chickpea plants. Chemical management is costly and strengthens pest resurgence in the pathogen, so ecologically based nematode control is required, which is additionally impeded by the difficulty of breeding resistance against plant-parasitic nematodes. The primary goal of this study was to improve resistance in chickpea plants by employing the bioagent Pseudomonas fluorescens alone or in combination to increase zinc uptake. Various treatments, involving nematodes, bacteria, and chemicals, were utilised to sustain disease resistance in the chickpea cultivars RSG 974, GG 5, and GNG 2144. Zn enhances biocontrol activity by reducing toxic materials produced by the pathogen [34]. Zn content was higher in GNG 2144 and GG 5 than in the tolerant variety RSG 974 among the three chickpea cultivars, and P. fluorescens played the leading role in increasing Zn content in the roots of chickpea plants.
"year": 2021,
"sha1": "132cca9f5ae8076e843087fa1fa5a8abfcde9059",
"oa_license": null,
"oa_url": "https://www.journalijpss.com/index.php/IJPSS/article/download/30767/57743",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f319e5e6461b0988f5779643e89f748c7230cbd3",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Leukemia Cancer Mortality Trend in Iran, from 1995 to 2004
BACKGROUND
Cancer is the third most common cause of death in Iran, and leukemia is one of the most important causes of cancer mortality. Mortality data are important for monitoring the effects of screening programs, early diagnosis, demographic factors, and other prognostic factors. The aim of this study was to evaluate mortality rates and trends of leukemia cancer among the Iranian population over almost a decade, from 1995 to 2004.
METHODS
National death statistics reported by the Ministry of Health and Medical Education (MOH&ME) from 1995 to 2004, stratified by age group, sex, and cause of death, were included in this study. Leukemia cancer mortality is expressed as annual rates per 100,000, overall and by sex and age group.
RESULTS
The overall mortality rate of leukemia cancer increased over the study period, from 0.44 to 2.54, and leukemia mortality was seen more often in men than in women.
CONCLUSION
The mortality rate of leukemia has significantly increased throughout Iran. Risk factors associated with leukemia must first be identified for its prevention and control. Future studies revealing leukemia risk factors among the Iranian population will therefore be crucial in order to control its burden.
Introduction
Cancer is a major worldwide cause of morbidity and mortality. According to the World Health Organization (WHO), the global burden of cancer will continue to increase during the next 20 years [1]. Among the different types of cancers, leukemia has greatly increased in frequency.
Leukemia is one of the most common cancers among children. Recently, however, the incidence has increased in adults as well [2].
In the United States, cases diagnosed as leukemia and lymphoma account for a third of the total [2,3]. According to the World Health Organization (WHO), leukemia has increased worldwide. The cancer registry has recorded about 250,000 new cases annually [4], with a case fatality rate of 76% [5].
In Iran, registered incidence has increased in recent years, and leukemia now ranks 7th after skin cancer, breast cancer, stomach cancer, colorectal cancer, bladder cancer, and prostate cancer [6,7]. This cancer is among the most fatal cancers in Iran, together with gastric cancer, lung cancer, and liver cancer [5].
Cancer mortality data are important, together with other epidemiologic indicators such as incidence and survival, for monitoring the effects of screening programs, early diagnosis, and other prognostic factors, as well as population risk [8].
The aim of this study was to determine leukemia mortality trends among the Iranian general population over almost a decade, from 1995 to 2004.
Materials and Methods
"National death statistic" which has reported by Ministry of Health, and Medical Education (MOH&ME) from 1995 to 2000 (registered death statistics for Iranian population at the Information Technology and Statistic Management Center, MOH&ME), then from 2001 to 2004 (published by MOH&ME) [9][10][11], stratified by age group, sex, and cause of death (coded according to the 10th revision of the International Classification of Diseases [ICD-10]) have included in this analysis. Leukemia cancer [ICD-10; C91-95] has expressed as the annual mortality rates/100,000, overall, both by sex and age group (0-5, 5-14, 15-49 and >=50 years of age). The populations of Iran in 1995-2004 have estimated by age group and sex using the census from 1996 conducted by Statistics Centre of Iran and its estimation according to population growth rate for years before and after national census [12].
Results
All death records due to leukemia cancer from 1995 to 2004 were included in the analysis. The overall mortality rate of leukemia cancer increased dramatically during these years, from 0.79 to 6.45 per 100,000 over the study period (Table 1 and Figure 1). Moreover, leukemia mortality was higher among males than females (Table 1 and Figure 2). In the male population, the rate of leukemia cancer mortality increased from 0.90 (in 1995) to 7.47 (in 2004) per 100,000, while in females the rate increased from 0.67 to 5.38 over the same years (Table 1). Mortality also increased with age, so the highest rate was observed among elderly people older than 50 years (Table 1).
Discussion
This study provides comprehensive projections for mortality rates due to leukemia cancer, based on the national registry data, indicating a remarkable increasing trend in leukemia cancer mortality during the study period. Our findings contrast with those from other countries.
In recent decades, important changes in leukemia mortality have occurred all over the world, with mortality decreasing across all ages. According to World Health Organization statistics, the mortality rates of this cancer have been declining among all age and sex groups in countries such as France, Italy, the UK, and Japan [13]. A Chinese study showed that from 1987 to 1999 there was a significant decrease in leukemia mortality in the 15-34 years age group [14]. Among those 15-24 years of age in Latin America, leukemia is the first cancer-related cause of death [15], but even there, data have shown decreases in most countries of the region [16]. In Western Europe, the decline in leukemia death rates among adolescents was smaller than among children [17].
Our study also indicated a mortality increase among children younger than 15 years. In contrast, a Brazilian study of leukemia mortality rates among children revealed a notable decline in Brazil [18], and another study in Rio de Janeiro, Brazil, indicated the same continuous downward trend among children aged <15 years over a 25-year period, again with a higher rate among males [19].
Leukemia treatments have been established and improved greatly in recent decades, including a 5-year survival rate for children and adolescents of 84% [20]. However, leukemia mortality among adults has not declined as much as children's rates [2]. So, even though the mortality rates for young adults have decreased, this decrease is much smaller in comparison with the children's mortality rate [2].
According to some Asian studies, overall obesity is a risk factor for mortality among adults [21]. Besides, a national study in the US on a state-specific basis showed that leukemia mortality decreased in states where smoking rates declined, but remained unchanged in states where smoking prevalence was relatively stable [22].
This study revealed an increasing trend of leukemia cancer mortality in the Iranian population, specifically at older ages and in men, which could reflect the higher prevalence of risk factors such as obesity and smoking in these groups. These results will help in understanding the direction of leukemia mortality in Iran. A limitation of this study is the underestimation of cancer mortality in Iran due to poor registration [9]. Also, we could not obtain complete crude data for all ages to compute age-standardized mortality rates for international comparison. The results may nevertheless be useful for health practitioners and policy makers in monitoring and projecting future rates.

In conclusion, the burden of leukemia (including incidence, prevalence, and mortality) has significantly increased throughout Iran. Identification of risk factors associated with leukemia is a prerequisite for its prevention and control [4]. Future studies revealing the risk factors of this cancer among the Iranian population will therefore be crucial in order to control its burden.
In conclusion, the burden of leukemia (including incidence, prevalence and mortality) has significantly increased throughout Iran. Identification of risk factors associated with leukemia would be headmost for its prevention and control [4]. So, future studies to reveal the risk factors of this cancer among the Iranian population would be crucial in order to control its burden. | 2017-03-31T15:22:25.567Z | 2013-09-03T00:00:00.000 | {
"year": 2013,
"sha1": "22a982b40ba5af8e814323a2f7f6b524bc99d7f2",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "22a982b40ba5af8e814323a2f7f6b524bc99d7f2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Mechanism of Hearing Loss in Paget's Disease of Bone
Objectives/Hypothesis: The mechanism of hearing loss (HL) in Paget's disease of bone was investigated. The present study was a systematic, prospective, controlled set of clinical investigations to test the hypothesis that there is a general underlying mechanism of HL in Paget's disease of bone and to gain additional insights into the auditory and otologic dynamics of this disease. Specific questions were 1) whether the mechanism is cochlear or retrocochlear and 2) whether the bone mineral density of the cochlear capsule is related to hearing levels.

Study Design: Several double-blinded, cross-sectional, prospective, correlational studies were conducted in a population of elderly human subjects with skull involvement with Paget's disease versus a control population of elderly subjects free of Paget's disease. Demographic and clinical data were recorded. Longitudinal observations were made in subjects under treatment.

Methods: Subjects were recruited from a Paget's disease clinic. Pure-tone auditory thresholds, word recognition, and auditory brainstem responses (ABRs) were recorded. The dimensions of the internal auditory canals were measured using computed tomographic (CT) images and digital image analysis. The precision, accuracy, and temporal stability of methods to measure the bone mineral density of the cochlear capsule and an adjacent area of nonotic capsule bone were validated and applied. Correlations were sought between hearing levels and cochlear capsule bone mineral density.

Results: ABRs were recorded in 64 ears with radiographically confirmed Paget's disease involving the skull. Responses were absent in eight ears, all of which had elevated high pure-tone thresholds. ABRs were interpreted as normal in 56 ears; none were abnormal. The mid-length diameter and minimum diameter of the internal auditory canal of 68 temporal bones from subjects with Paget's disease were found to have no statistically significant relationship to hearing thresholds. The Pearson product-moment correlation coefficients (age- and sex-adjusted) in the group with Paget's disease involving the temporal bone were −0.63 for left ears and −0.73 for right ears for high-frequency air conduction pure-tone thresholds (mean of 1, 2, and 4 kHz) versus cochlear capsule density. Correlation coefficients (age- and sex-adjusted) between cochlear capsule density and air-bone gap (mean at 0.5 and 1 kHz) for the affected group were −0.67 for left ears and −0.63 for right ears. All correlations between hearing thresholds and cochlear capsule density in pagetic subjects were significant at P < .001. The regressions were consistent throughout the ranges of hearing level. There were no significant correlations between cochlear capsule mean density and hearing level in the volunteer subjects.

Conclusions: The evidence supports the existence of a general, underlying, cochlear mechanism of pagetic HL that is closely related to loss of bone mineral density in the cochlear capsule. This mechanism accounts well for both the high-frequency sensorineural HL and the air-bone gap. Early identification, radiographic diagnosis of temporal bone involvement, and vigorous treatment with third-generation bisphosphonates are important to limit the development and progression of pagetic HL.
INTRODUCTION
Paget's disease is a disorder of osteoclasts. Its etiology is unknown. There is evidence of the involvement of a paramyxovirus 1 and genetic factors. 2 Pagetic osteoclasts are larger, and they are more active resorbers of bone by an order of magnitude or more. Progressively enlarging lytic lesions are formed. Osteoblasts react to the accelerated osteolysis of pagetic osteoclasts by forming new bone at a greatly accelerated rate. Often, this bone is poorly lamellated. When the rate of bone resorption is controlled by medication, the rate of bone formation is gradually reduced. Lytic areas are at least partially repaired, and the new bone formed has a normal lamellar appearance. 3 A bibliography on Paget's disease for health professionals is maintained by the National Institutes of Health Osteoporosis and Related Bone Diseases National Resource Center at http://www.osteo.org/research.asp.
Hearing loss (HL) is a prominent feature of Paget's disease of bone when the skull is involved. 4-7 In auditory studies, the majority of patients were found to have a high-frequency sensorineural HL and a low-frequency air-bone gap. 5,6,8 The mechanism of HL is not well understood.
Because of the lack of an animal model for Paget's disease of bone, studies of the mechanisms of HL are limited to postmortem studies and in vivo clinical investigations. Before 1990, efforts to understand the mechanism of HL in many otologic disorders, including Paget's disease of bone, were limited to descriptive histopathologic studies, usually of small numbers of cases as they became available for study. The histopathologic findings in human temporal bones with Paget's disease have now been described in more than 50 cases, including some with audiometric data. 7,9,10 Several mechanisms for the sensorineural HL were suggested in these studies, including compression of the auditory nerve 11 or vascular shunts. 7 It was suggested that the air-bone gap may be caused by stiffness changes in the soft tissue elements of the middle ear. 7 Epitympanic spurs and proliferation of fibrous tissue adjacent to the ossicles have occasionally been demonstrated. 5,12 In a differing analysis, Khetarpal and Schuknecht 9 were unable to demonstrate histopathologic correlates of HL in a thorough study of 26 temporal bones. Most remarkable was the finding that even in cases of severe HL, there was no correlation between the hearing threshold and the number or location of surviving hair cells. 9 The authors also showed no evidence of ossicular fibrosis or ankylosis to account for the air-bone gap. Khetarpal and Schuknecht proposed that "both the conductive and sensorineural components of the HL in Paget's disease are caused by changes in bone density, mass, and form that dampen the finely tuned motion mechanics of the middle and inner ears." 9 They were unable to offer direct evidence to support this suggestion.
The present study was a systematic, prospective, controlled set of investigations to test the hypothesis that there is a general underlying mechanism of HL in Paget's disease of bone and to gain additional insights into the auditory and otologic dynamics of this disease. A group of elderly subjects with Paget's disease involving the skull and a control group of elderly subjects without Paget's disease were recruited for several investigations.
Auditory brainstem responses (ABRs) were recorded to search for evidence of retrocochlear disease. The presence of normal ABRs would be taken as evidence of normal retrocochlear function and a cochlear site of lesion. Dimensional measurements were made of the internal auditory canal using digital image analysis from computed tomographic (CT) images. Correlations were sought between internal auditory canal dimensions and hearing levels (both high-frequency air-conduction thresholds and the low-frequency air-bone gap) to determine whether narrowing of the internal auditory canal or compression of the auditory nerve would account for hearing levels in subjects with Paget's disease of the skull.
Precise, accurate, stable techniques of quantitative CT (QCT) have been used to measure the mineral density of human bone in vivo. For this study, a QCT method was developed and validated to measure the bone mineral density of small volumes of bone in the skull, specifically the cochlear capsule. Subjects with Paget's disease involving the temporal bone were scanned. Correlations were sought between density values versus hearing levels. If statistically significant correlations were found, the hypothesis would be supported.
Experimental Subjects
All subjects in this study gave informed consent according to procedures of the institutional review board (Human Rights Committee) and the principles of the Declaration of Helsinki. Forty-two subjects with Paget's disease of the skull were evaluated during the term of the study. The diagnosis of Paget's disease was established using standard clinical criteria, including medical history, physical examination, laboratory tests, and radiologic studies, by an endocrinologist subspecializing in bone and mineral diseases. The diagnosis was confirmed by clinical evaluation of CT scans by a neuroradiologist. The duration of disease, severity of disease, and degree of cochlear involvement varied widely.
A priori exclusion criteria included systemic bone disease, prior middle ear surgery, and known extraneous causes of HL (otosclerosis, chronic otitis, classic Menière's disease, and mastoiditis). Subjects with diabetes, mild hypothyroidism corrected by hormone replacement, or with histories of noise exposure were not excluded.
Thirty-seven subjects with Paget's disease involving the skull were recruited for studies of ABRs and dimensional studies of the internal auditory canal. Subjects ranged in age from 40 to 87 years. The mean age was 68 years. There were 23 female and 14 male subjects. Nine (24%) of these subjects were black and 28 (76%) were white.
Thirty-three subjects (66 ears) were available for studies of ABRs. Of these, two left ears were excluded from analysis because of prior middle ear surgery, leaving 31 left ears and 33 right ears, or 64 total ears with ABR data for analysis.
Thirty-seven subjects were available for CT scanning for dimensional measurements. The processing of CT images included a data reduction step that resulted in inadvertent omission of the medial portion of the internal auditory canal from the image data set of one right and two left ears (total 3 ears). Dimensional measurements could not be made from these incomplete data sets, leaving 36 right and 35 left ears (total 71 ears) for analysis of dimensions in pagetic versus normal subjects.
Thirty-five patients with Paget's disease involving the skull were available for both studies of the bone mineral density of the cochlear capsule (QCT) and hearing levels. There were 21 female and 14 male subjects with Paget's disease involving the skull. Eight (23%) of these subjects were black and 27 (77%) were white. The 35 patients ranged in age from 40 to 87 years. The mean age was 68 years. Data from three ears from three subjects were excluded from the analysis, two ears for a history of ear surgery and another because of chronic mastoiditis. Data from the remaining 67 ears (33 left and 34 right ears) were analyzed.
Volunteer Subjects
A control group of 22 volunteer subjects over age 50 was recruited from among the spouses of pagetic subjects and others. Two volunteers were excluded, one each for Ménière's disease and otosclerosis, leaving a total group of 20 subjects. There were 10 men and 10 women in the control group. One woman was of black descent. The remaining 19 control subjects were white. The mean age was 59 years.
Volunteers had no clinical history of Paget's disease and underwent determination of serum alkaline phosphatase to exclude Paget's disease. A neuroradiologist unaware of the clinical diagnosis reviewed each set of QCT images to confirm the presence of Paget's disease in each pagetic subject and its absence in the volunteers. Other exclusion criteria for volunteers were the same as for the pagetic subjects.
Audiometry and Auditory Brainstem Responses
Pure-tone behavioral audiometric thresholds for air and bone conduction were determined at 2.5 dB increments. Because of reports that a high-frequency sensorineural HL and a low-frequency air-bone gap are characteristic in Paget's disease, 5,8,13,14 audiometric frequencies that reflected these two effects were selected. A high pure-tone average was calculated as the arithmetic mean of pure tone thresholds at 1, 2, and 4 kHz. The air-bone gap was calculated as the mean of air-conduction thresholds at 0.5 and 1 kHz minus the bone-conduction thresholds at those frequencies.
The ABR was recorded with a commercially available clinical evoked potentials system (Quantum 84, Cadwell, Kennewick, WA). Electrodes were applied with a conductant paste to the vertex (noninverting input Cz), left and right mastoids (inverting inputs M1 and M2, respectively), and at the midfrontal location (ground Fpz). Simultaneous ipsilateral and contralateral recordings were conducted to optimize the identification of component wave V. The bioelectric activity was differentially amplified (10⁵), filtered (100-3,000 Hz), and averaged (1,500-4,000 samples).
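The essence of this recording chain (band-pass filtering and averaging of stimulus-locked epochs) can be sketched as below. This is an illustrative simulation with a synthetic "wave V" and an assumed sampling rate; it is not the Quantum 84 processing pipeline.

```python
# Illustrative ABR-style filtering and averaging on synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20_000                      # assumed sampling rate, Hz (not stated in the text)
b, a = butter(2, [100, 3000], btype="bandpass", fs=fs)

rng = np.random.default_rng(0)
n_epochs, n_samples = 2000, 200  # ~10 ms epochs; epoch count within 1,500-4,000
t = np.arange(n_samples) / fs
template = 0.5 * np.exp(-((t - 0.0055) / 0.0004) ** 2)   # toy "wave V" at ~5.5 ms
epochs = template + rng.normal(0, 5.0, size=(n_epochs, n_samples))

avg = filtfilt(b, a, epochs.mean(axis=0))  # averaging pulls the response out of noise
print(f"wave V latency ~ {t[np.argmax(avg)] * 1e3:.2f} ms")
```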
Stimuli consisted of 0.1 ms unipolar electrical rectangular pulses. Either condensation or rarefaction clicks were used, depending on which yielded the higher quality waveform. The sound stimulus was produced by a piezoelectric transducer that was coupled to the ear by a 26 cm length of silastic tubing terminating in a foam earplug. The stimulus intensity was 75 dB nHL for those subjects with normal hearing sensitivity and 85 dB nHL for those subjects with hearing losses. The stimulus repetition rate was 21.3 pulses per second. Each recording was duplicated at least once and was superimposed so that an assessment of waveform stability could be made. The latencies of ABR components waves I, II, III, IV, and V were tabulated (when identifiable) as were the amplitudes of waves I, III, and V. The wave V/wave I amplitude ratio was calculated.
There is no universally accepted method of interpretation of ABRs. In this study, ABRs were interpreted as though they were diagnostic tests for retrocochlear disease (disease of the auditory nerve and central auditory pathways). It was reasoned that abnormal ABRs could be taken as evidence of abnormal auditory nerve function (as opposed to abnormal cochlear function) as the site of the auditory lesion. ABR findings were compared with normative data established for interpretation for retrocochlear disease. These data consisted of latency and amplitude values from 32 clinic subjects with normal hearing. ABRs were considered normal if values were within 2.5 SD of the normal hearing group. 15 If latency or amplitude values were not within 2.5 SD of the values of the normal hearing group, another level of analysis was applied. No specific "correction factor" in milliseconds of latency per dB of high-frequency HL was used; however, the degree of HL in the higher frequencies and the steepness of the audiometric slope in these frequencies were assessed. In the presence of a high-frequency cochlear HL, the portion of the cochlea that gives rise to the transduction events that will result in the ABR is understood to be the domain that normally codes for middle- and low-frequency sounds. The traveling wave takes a finite amount of time (approximately 1.2-1.5 ms) to reach the middle- to low-frequency portion of the cochlear duct, producing a latency shift, which is reflected in the ABR in subjects with high-frequency sensory HL. Thus, measured latencies that exceeded the reference values by the time expected based on the degree of high-frequency HL were interpreted as normal (for the degree and configuration of the HL) and consistent with intact auditory nerve function. 15
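The decision rule described above can be summarized in a few lines. The sketch below is our paraphrase of that logic, not the authors' software: a wave V latency is flagged only if it exceeds the normative mean by more than 2.5 SD after allowing an expected cochlear (traveling-wave) shift; all numeric inputs in the example are hypothetical.

```python
# Paraphrase of the interpretation rule; all example numbers are hypothetical.
def interpret_wave_v(latency_ms, norm_mean_ms, norm_sd_ms, cochlear_shift_ms=0.0):
    """cochlear_shift_ms: expected traveling-wave delay (e.g., ~1.2-1.5 ms) for
    the degree and slope of a high-frequency loss; 0 for normal hearing."""
    upper_limit = norm_mean_ms + 2.5 * norm_sd_ms + cochlear_shift_ms
    return "normal" if latency_ms <= upper_limit else "abnormal"

print(interpret_wave_v(6.9, norm_mean_ms=5.6, norm_sd_ms=0.25, cochlear_shift_ms=1.2))
```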
Dimensional Measurements
Measurement of the dimensions of the internal auditory canal was accomplished with digital image processing. In step 1, a perpendicular pair of planar CT images (axial and coronal) was selected from the three-dimensional data set using digital image analysis software developed for this purpose. 16 The axial plane was selected first and included a section of maximal diameter throughout the internal auditory canal. The vestibule and portions of the cochlear and vestibular labyrinths were also included in this sectional image. The line of rotation between the axial and coronal planes was the line passing through the center of the internal auditory canal (Fig. 1).
In step 2, the length, the mid-length diameter, and the minimum diameter of the internal auditory canal in the axial and coronal planes of each temporal bone were measured. To simplify data analysis, the two sets of data (axial and coronal) for length and mid-length diameter were averaged. The smaller of the two minimum diameters (axial and coronal) was taken to identify all cases where auditory nerve compression might occur. Left and right ears were analyzed separately.
The precision of the dimensional measurements was tested in two ways.
Step 2 was repeated for each pair of axial and coronal images in eight temporal bones at least 24 hours after the initial set of measurements (24 pairs of observations). Steps 1 and 2 were also repeated in eight temporal bones each (24 pairs of observations). During each second set of measurements, the operator was unaware of (i.e., "blinded" to) the first result. For the length and mid-length diameter, the average coefficients of variation were less than 3% for the first test of precision described above and less than 5% for the second test. The measurement of the minimum diameter was somewhat less precise, with a coefficient of variation of 5% for the first test and 11% for the second. 15
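For reference, the coefficient of variation across blinded repeat measurements can be computed as sketched below. The exact averaging convention used in the study is not stated, so this version (mean of within-pair CVs) is one reasonable assumption, and the sample values are made up.

```python
# Sketch of a precision metric: average CV across paired repeat measurements.
import numpy as np

def coefficient_of_variation(first, second):
    pairs = np.stack([first, second])                    # 2 x n paired observations
    cv_per_pair = pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)
    return cv_per_pair.mean() * 100                      # average CV in percent

a = np.array([10.2, 7.9, 11.5, 9.8])   # first blinded measurement (hypothetical)
b = np.array([10.4, 8.1, 11.1, 9.6])   # repeat measurement >=24 h later
print(f"CV = {coefficient_of_variation(a, b):.1f}%")
```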
Radiology
Much of the present study depended on the development and validation of the radiologic method of measuring bone mineral density of the cochlear capsule using a particular type of QCT. 17 The method of measuring bone mineral density of the cochlear capsule used in this study was a modification of a technique originally used to measure the regional bone mineral density in human vertebral bodies in vivo. 17 A specific convolution filter (mathematical algorithm to calculate the image from raw data) was applied to optimize the accuracy and precision of density measurements without sacrificing the high quality of the images. The technique resulted in a three-dimensional array of values expressed as physical density (e.g., mg/cm³). The amount of radiation exposure was similar to that of conventional CT.
The scan protocol used an x-ray tube voltage of 130 kVp, 100 mA current, 4 seconds exposure time, and a 1 mm² focal spot. The head holder incorporated a solid bone-mineral calibration phantom marketed by Image Analysis (Irvine, CA). This device had a graded series of five density standards consisting of hydroxyapatite embedded in a plastic resin (Fig. 2). Thirty to 50 scans with a slice thickness of 1 mm were taken through the temporal region of the skull of each subject in the axial plane. A scan diameter of 30 cm was reconstructed to 512 × 512 voxels, producing voxels of dimensions 0.585 × 0.585 × 1.0 mm. A subregion of 128 × 128 voxels was selected for analysis (Fig. 2). Measurement resolution was determined to be 1.6 mm in each dimension, providing a sample volume of 0.004 cm³. This method has previously produced measurements with noise of 0.7 to 1.3%, accuracy of 2.7%, and serial precision in vitro of 0.2%. 15 The long-term precision was monitored by use of a solid calibration standard (Image Analysis, Irvine, CA). A very stable result for a period of almost 4 years was accomplished. 16 To assess measurement accuracy, four cochlea-shaped plastic density standards were inserted into the temporal bones of a skull phantom. Measurements were confirmed to be linear through the range of densities encountered in Paget's disease. Measurements of the mineral density of normal cortical bone (1,850 mg/cm³) were consistent with the results of other laboratories. 18
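The phantom-based calibration amounts to a linear map from CT numbers to physical density. The sketch below illustrates the idea with made-up CT values; the true phantom densities and scanner readings are not reproduced here.

```python
# Illustrative phantom calibration: CT numbers -> physical density (mg/cm^3).
import numpy as np

# Hypothetical phantom: known densities and their measured mean CT numbers
phantom_density = np.array([0.0, 200.0, 400.0, 800.0, 1200.0])  # mg/cm^3
phantom_ct = np.array([-5.0, 310.0, 628.0, 1255.0, 1890.0])

slope, intercept = np.polyfit(phantom_ct, phantom_density, 1)   # linear calibration

def ct_to_density(ct_values):
    return slope * np.asarray(ct_values) + intercept

roi_ct = np.array([2400.0, 2500.0, 2450.0])  # voxels sampled from a region of interest
print(f"mean density = {ct_to_density(roi_ct).mean():.0f} mg/cm^3")
```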
Image Analysis
A custom software package with a trilinear interpolation algorithm was used to translate and rotate the image data so that a thin section of the temporal bone oriented in a standard plane could be selected for analysis interactively on a computer workstation. This plane incorporated the lateral semicircular canal and a near mid-modiolar section of the cochlea (Fig. 3A). 19 Regions of interest were constructed from these sectional images to sample the bone density in the cochlear capsule (Fig. 3B). 19 Regions of interest were defined with particular care to correspond to the structure of the cochlear capsule and to avoid overlapping the area of study with the air-containing space of the middle ear. Such overlapping could result in errors caused by partial volume effects. Three-dimensional regions of interest that included the entire cochlear capsule were used for studies of treatment response. Two studies were conducted to assess the consistency of image analysis. First, a second region of interest was constructed on the original image section at least 24 hours after constructing the first region of interest, and the density measurements from each region of interest were compared. In this manner, the error attributable to selection of the regions of interest was demonstrated to be 0.7%. 16 In the second study, the image section that passed through the lateral semicircular canal was reselected, and a region of interest on this second sectional image was defined. The error attributable to the complete process of image analysis was found to be 1.5%, including selection of the lateral canal plane and construction of the regions of interest. 16
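The trilinear interpolation underlying the reslicing step can be written compactly; a minimal, self-contained version is shown below. This is a generic implementation of the standard algorithm, not the study's custom software.

```python
# Generic trilinear interpolation, the resampling used to extract oblique sections.
import numpy as np

def trilinear(volume, x, y, z):
    """Interpolate a (z, y, x)-indexed volume at a fractional coordinate."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                value += w * volume[z0 + k, y0 + j, x0 + i]
    return value

vol = np.arange(27, dtype=float).reshape(3, 3, 3)   # toy 3x3x3 volume
print(trilinear(vol, 0.5, 0.5, 0.5))                # midpoint of the first cell: 6.5
```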
Statistical Analysis
The relationship between dimensions and hearing measures was assessed by correlations. Internal auditory canal dimensions were compared between the Pagetic subjects and controls by use of t tests and analysis of covariance. In each case, adjustment was made for the covariates age and sex.
Regression methods were used to examine the relationship between hearing thresholds and mean bone mineral density of the cochlear capsule. Linear regression relationships and Pearson product-moment correlations were computed to assess the simple associations. Analysis of covariance was used to compute partial correlation coefficients, adjusted for the covariates of age and sex. A Bonferroni adjustment to the P values considered statistically significant (0.05/2 = 0.025 or less) was performed to account for the two measures of hearing that were tested.
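The age- and sex-adjusted (partial) correlations reported in the Results can be obtained by regressing both variables on the covariates and correlating the residuals. The sketch below demonstrates this on synthetic data; the variable names and simulated relationships are illustrative only.

```python
# Partial correlation via covariate residuals, demonstrated on synthetic data.
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after removing covariate effects."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
age = rng.uniform(40, 87, 60)
sex = rng.integers(0, 2, 60).astype(float)
density = 1800 - 5 * age + rng.normal(0, 40, 60)          # synthetic covariate link
threshold = 100 - 0.04 * density + rng.normal(0, 4, 60)   # synthetic negative relation
print(round(partial_corr(threshold, density, [age, sex]), 2))
```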
Auditory Brainstem Responses
Eight ears in five subjects had no recordable auditory evoked responses because of low hearing levels. ABRs from the remaining ears were normal, indicating that there was no electrophysiologic evidence of impairment of auditory nerve transmission.
Dimensional Measurements
Results are summarized in Tables I and II. 15 Correlations were sought between the length, mid-length diameter, and minimum diameter versus the high pure-tone threshold average (1, 2, and 4 kHz) and versus the low-frequency air-bone gap (0.5-1 kHz). No association was found between internal auditory canal dimensions and hearing level in the volunteer subjects. There were significant correlations of length versus both high-tone average and air-bone gap (Table II). t tests did not demonstrate that any of the dimensions were significantly different between pagetic and control groups.
There were no significant correlations between hearing level and the minimum diameter in either the pagetic or the volunteer subjects. There were significant correlations between mid-length diameter and high-tone average, but these were weak and were significant only in left ears (Table II). In one extensively pagetic temporal bone, the minimum coronal diameter was 1.17 mm (pure-tone average at 0.5, 1, 2, and 4 kHz = 42 dB), the smallest diameter measured in this study. The corresponding diameter on the opposite side was 1.76 mm (pure-tone average = 42 dB).
Comparison of Left versus Right Ear Cochlear Capsule Density
Left and right cochlear capsule densities were highly correlated (r = 0.82, P < .001). The regressions were consistent throughout the ranges of cochlear capsule density (Fig. 4). These findings confirm clinical observations that pagetic involvement of the skull is most commonly symmetrical.
Relationship of Bone Mineral Density of the Cochlear Capsule to Hearing Levels
Significant Pearson product-moment correlations were found between high-frequency pure-tone air conduction hearing thresholds and mean bone mineral density of the cochlear capsule in 33 left ears and 34 right ears of the 35 Paget's disease subjects both with and without correction for age and sex (Table III) 16 (data representing three ears were not included because of exclusion criteria). Scatter plots (Fig. 5) illustrate the consistency of the regression functions throughout the range of the high-frequency pure-tone hearing levels. Multiple regressions showed that correlations between cochlear capsule density and hearing threshold at individual frequencies from 1 through 4 kHz were not substantially different from each other.
Significant correlations also were found between mean cochlear capsule density and the air-bone gap in the pagetic subjects (Fig. 5, C and D). In contrast, there were no significant correlations between cochlear capsule mean density and the high-frequency pure-tone hearing levels in the volunteer subjects (Fig. 5, E and F). Because normal subjects are not expected to have conductive HL or air-bone gaps, it is questionable whether calculation of correlations involving the air-bone gap in the volunteer subjects is meaningful.

The robustness of the correlations was explored by examining the effect of risk factors for HL. The effect of risk factors, including noise exposure (by history) and diabetes mellitus, was examined by comparing selected pagetic subjects without risk factors (n = 22) to those with risk factors (n = 13). There were no significant differences between the two groups in correlations of either the high pure-tone threshold level or the air-bone gap. There was no apparent relationship between sex and high-frequency pure-tone thresholds; however, men may have had a smaller air-bone gap by 3.8 dB for left ears or 3.7 dB for right ears (P = .10 and P = .03, respectively) after adjustment for age and mean bone density of the cochlear capsule. 16
DISCUSSION
The ABR results are consistent with intact auditory nerve function in this group of subjects with Paget's disease of the skull. Correlational data in right temporal bones suggested that the length of the internal auditory canal might be related to high-tone hearing level. However, this association was less clear on the left side and appeared qualitatively weaker for conductive losses compared with sensorineural losses (Table II). The biologic or clinical significance of these possible weak associations is unclear, although it is conceivable that extensive Paget's disease is associated with bossing of the porus acusticus, which has been observed histologically in some cases of moderate to advanced disease. 9

Olivares and Schuknecht 20 measured the diameters of the internal auditory canal in 435 histologically prepared temporal bones. Specimens with evidence of bone disease, including Paget's disease, were specifically excluded. One hundred forty-four ears with sensorineural HL were purposely included. The authors found no relationship between the width of the internal auditory canal and hearing level. Ten subjects had canal widths of 3 mm or less and average pure-tone thresholds of 30 dB or better at frequencies 0.5, 1, and 2 kHz. The authors concluded that it is doubtful that narrowing of the internal auditory canal is associated with HL in nonpagetic sensorineural HL. The differences between the measurements of the mid-length diameter of the internal auditory canal from digital radiographic data (mean of all mid-length diameters in pagetic subjects = 3.97 mm) and the 6% smaller measurements derived from histologic preparations are likely attributable to shrinkage during histologic processing.

The possibility of partial auditory nerve compromise in occasional instances cannot be excluded absolutely; however, our radiologic and electrophysiologic data indicate that auditory nerve function is intact in most cases of HL associated with Paget's disease of bone. The results support the hypothesis that the primary effect of Paget's disease on hearing is on the cochlea. The results with QCT show the feasibility of determining regional bone mineral density in the temporal bone, including small structures such as the cochlear capsule. Although the linear resolution of measurement is 1.6 mm, voxel dimensions of 0.585 mm enable accurate measurement of mean density of small regions by sampling multiple voxels, typically 60 to 100 for each cochlear capsule. Errors caused by voxels that include in their volume different structures of adjacent regions (partial volume effect) were minimized but not completely eliminated.
A strong and statistically significant relationship was demonstrated between the bone mineral density of the cochlear capsule and both the high-frequency pure-tone air-conduction thresholds and the air-bone gap in subjects with Paget's disease of the skull. There does not appear to be an inflection in the regression line or suggestion of a threshold phenomenon in this relationship. This consistency of the regression throughout its range suggests that the effect of Paget's disease on hearing is a continuous, graded process.
The bone of the normal cochlear capsule is lamellar bone with few haversian canals and vascular elements and thus consists of dense, compact bone tissue. 21 Consequently, bone lysis by Paget's disease would result in density values less than normal, whereas bone sclerosis could only thicken the cochlear capsule, not increase its mineral density. In accordance with this principle, in this study, pagetic involvement of the skull was associated only with reductions in cochlear capsule density or normal values, depending on the degree of the pagetic effect.
The findings in the studies reported here strongly support the hypothesis that there is a general underlying mechanism of HL in Paget's disease of bone. The finding of normal ABRs supports the cochlea as the site of the pagetic lesion rather than the auditory nerve. The lack of evidence for nerve compression further supports the cochlear hypothesis and demonstrates that there is very little distortion of consequence in the shape of the temporal bone, except perhaps in rare, extreme cases. The demonstrated relationship between cochlear capsule mean density and hearing levels also supports the cochlear hypothesis and the hypothesis of a general underlying mechanism. This relationship also suggests that alteration of bone mineral density may be close to the mechanism of HL.
These correlational findings are not necessarily in direct conflict with the findings of earlier histologic studies of pagetic temporal bones. The possibility of additional mechanisms, particularly in advanced cases, is not excluded.
Audiometric data, particularly data involving the determination of bone-conduction thresholds, are subject to considerable variability, especially in elderly subjects. The subjects in this study were older than the general population. Consequently, it must be acknowledged that the strength of the correlations is remarkable. The positive finding of correlations between cochlear capsule mineral density versus hearing levels supports the validity of the negative finding of no significant correlations between dimensional measurements of the internal auditory canal and hearing levels.
The idea that an air-bone gap can exist without a conductive HL in the manner usually understood may be difficult to accept. Khetarpal and Schuknecht 9 originated the concept with their report of no fibrosis or ankylosis involving the ossicles, their articulations, or suspensory ligaments. Currently, Paget's disease is the only instance known of an air-bone gap that does not indicate an impairment of the sound conducting mechanism. The implication is that the air-bone gap is a phenomenon of the acoustical mechanics of hearing by bone conduction. We are led to suspect that the air-bone gap may be caused in some way by enhanced conduction of low-frequency sound energy by pagetic bone. The strong correlations between bone mineral density of the cochlear capsule and the airbone gap suggest that the enhancement occurs primarily in the cochlear capsule, although enhancement may also occur in pagetic bone of the skull base more generally.
Pagetic bone has been observed to change shape. Gradual bowing of the weight-bearing long bones is common when they are involved. HL in Paget's disease does not seem to be related to changes in the shape of the skull or inner ear. Platybasia, a flattening and widening of the skull base, occurs only in the most advanced cases of pagetic involvement of the skull, whereas HL occurs with almost any degree of involvement of the temporal bone. The dimensional analysis of pagetic temporal bones in this study did not show changes in the internal auditory canal. Also, we did not observe qualitative changes in the shape of the cochlea.
Treatment with the newer bisphosphonates provides rapid arrest of the pagetic process and long-term control. 22 Acute spinal neuropathy caused by involvement of the bone adjacent to the spinal cord recovers in hours to days during bisphosphonate treatment, possibly because of a vascular steal phenomenon or the effects of toxic cytokines on neural structures. 23 Nevertheless, it has been the common observation, confirmed in this study and the author's clinical experience, that pagetic HL does not recover despite vigorous treatment. Consequently, spinal neuropathy is not a suitable model for pagetic HL. The possibility that pagetic bone releases toxic cytokines that impair cochlear function irreversibly but without resulting in loss of hair cells was not addressed by the study and cannot be excluded. Also, spiral ligament atrophy, which occurs prominently in otosclerosis, is not a typical feature of pagetic HL.
Histologic studies show that the pagetic process involves the entire cochlear capsule in essentially all cases observed. This finding suggests that the entire capsule is involved early. A reasonable inference from the cross-sectional data of the study is that bone mineral density is gradually lost as the disease progresses. It may be suspected that continuous remodeling of pagetic otic capsule bone results in progressive sensorineural HL, predominantly in the higher frequencies. One is tempted to speculate that perhaps Paget's disease reveals the normal auditory function of otic capsule bone. This bone normally does not undergo remodeling. When it does undergo remodeling in Paget's disease, a gradual decline occurs in auditory function.
Sensory transduction in the cochlea is a process of interaction between mechanical (acoustic) and electrical events. It appears that conformational change in membrane-bound ion channels may be induced by mechanical deformation. The flow of ions across hair cells is dependent on the maintenance of the "silent" current. 24 This current flow may be reduced in pagetic bone. It is also possible that acoustic energy is absorbed by pagetic bone, resulting in lower amplitude displacements of the basilar membrane in response to sound.
This study demonstrates the importance of studying a disease process in a systematic, prospective manner. It also demonstrates the difficulties of doing so. Progress requires the combination of a large group of subjects, advanced investigative techniques that are validated, and a multidisciplinary approach. Bone disorders that affect hearing, such as otosclerosis, are among the most common otologic disorders. Progress in understanding disorders of bone that affect hearing is hampered by a general lack of knowledge of the biology of calcified tissue, by the lack of more detailed knowledge about the unique features of otic capsule bone, and the lack of a suitable animal model. It is likely that each disease process (Paget's disease, otosclerosis, osteogenesis imperfecta, etc.) will prove to involve different mechanisms of HL. Despite the difficulties, the effort is worthwhile. Not only can more be learned about the causes, prevention, and treatment of HL, but we also may gain insight into normal auditory function.
CONCLUSIONS AND CLINICAL RECOMMENDATIONS
The findings in this study support a new view of the mechanism of HL in Paget's disease of bone and a new approach to clinical management of skull involvement:

1. There is a general underlying mechanism of HL in Paget's disease.
2. The mechanism of HL is cochlear.
3. The loss of bone mineral density in the cochlear capsule is a marker of disease effect and is close to the mechanism of both the sensory HL and the air-bone gap.
4. The strong correlation between the bone mineral density of the cochlear capsule and air-bone gap supports the suggestion that the air-bone gap in Paget's disease is not caused by pathology of the ossicular chain but by alteration of the acoustical mechanics of the ear. This alteration may include facilitation of bone conduction of sound.
5. Treatment that normalizes biochemical markers of active disease does not result in clinically significant improvement in hearing levels.
6. The pagetic process and associated HL are established well before signs appear in the facial skeleton. Consequently, it is not appropriate to withhold treatment until such signs appear.

7. HL caused by Paget's disease of bone is gradually progressive but potentially preventable with early diagnosis and vigorous treatment. Third-generation bisphosphonates are highly effective in controlling the pagetic process.
8. Pagetic HL should be suspected in any middle-aged or elderly person, especially if hearing levels progress more rapidly than would be expected in presbycusis or if a low-frequency air-bone gap is present. 22,25 A serum alkaline phosphatase level can exclude the disease if normal or lead to definitive diagnostic evaluations if abnormally elevated. | 2018-04-03T05:57:38.404Z | 2004-04-01T00:00:00.000 | {
"year": 2004,
"sha1": "4d4576f577399f1d886fa00c47ca5ac7e2fcda53",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3813977?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d4576f577399f1d886fa00c47ca5ac7e2fcda53",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119033570 | pes2o/s2orc | v3-fos-license | On the design of random metasurface devices
Metasurfaces are generally designed by placing scatterers in periodic or pseudo-periodic grids. We propose and discuss design rules for functional metasurfaces with randomly placed anisotropic elements. By analyzing the focusing performance of random metasurface lenses as a function of their density and the density of the phase-maps used to design them, we find that the performance of 1D metasurfaces is mostly governed by their density while 2D metasurfaces strongly depend on both the density and the near-field coupling configuration of the surface. The proposed approach is used to design all-polarization random metalenses at near infrared frequencies. Challenges, as well as opportunities of random metasurfaces compared to periodic ones are discussed. Our results pave the way to new approaches in the design of nanophotonic structures and devices from lenses to solar energy concentrators.
Introduction
Originally designed at radio wave frequencies for radar and space communications, metasurfaces have been implemented to design devices at visible and infrared wavelengths such as carpet cloaks [6][7][8][9] , holograms 10-17 , optical flat lenses [18][19][20][21] and solar concentrators 22,23 to name a few. Metasurfaces control the reflection and refraction of waves at interfaces using phase-shifting elements 24 . In optics, whether they are designed from metallic materials using plasmonic phenomena 7,18,[24][25][26][27][28][29] or dielectrics to obtain higher efficiencies at the cost of larger elements 20,22,30,31 , whether they are relying on subwavelength gratings 32,33 , resonators 7,22,24,27,28 , waveguides 19,26,31 and, or, geometric phase 18,20,26,32,34,35 to tune the phase of the wave, metasurfaces are generally designed in a periodic framework where their constituting elements are placed in a periodic grid 30 . Recent advances on the control of light in complex media 36 have motivated the study of random or disordered metasurfaces for specific applications such as decreasing the radar cross-section [37][38][39][40] , improving SERS enhancement 41 , reducing laser coherence 42 , designing wide band-gaps 43 , or increasing light-matter interaction and the absorption of solar cells 44,45 . One of the advantages of random media is the very high number of degrees of freedom that they support and which can be harnessed to control waves on scales smaller than the wavelength, or to multiplex more information for communications 36,46 . This has recently led to the design of random metasurfaces for wave front shaping [47][48][49] . However, the design of such devices still remains elusive due to the disordered distances between neighboring elements, the near-field coupling, and variations of the local density of elements. Some theoretical approaches can address the homogenization problem of homogeneous random polarizability materials in periodic arrays of resonators 50 , or for identical polarizabilities in disordered arrays of scatterers 51,52 . The relation providing the phase-shift of the elements constituting a metasurface as a function of their dimensions is determined either analytically, when possible, or with numerical simulations for a single element or for periodic arrays of identical elements. However, metasurfaces are generally made of elements of different sizes to provide a phase-shift that varies spatially. Hence, the previous approaches may fail 53 as near-field coupling introduces errors in the phase-shifts provided by the elements. Important questions are thus whether this periodic arrangement is always the best solution and whether it is possible to design functional metasurfaces within a random framework with general guidelines. While random and disordered metasurfaces can be complex to design, they also have potential advantages. For instance, the random design process optimizes the area of the metasurface. In a periodic metasurface, small and large elements have the same footprint. On the contrary, in a random metasurface, the random design or the pseudo random algorithm finds more easily a spot for a small element than for a larger one. This optimizes the local density and the footprint of the elements. Furthermore, the absence of periodicity eliminates any spurious diffraction orders that arise from large periods, due for example to large resonators made of low index materials. 
The circular symmetry of the elements is also statistically restored by the randomness 54 , which enables the design of polarization independent metalenses with anisotropic elements. This contrasts with the current works implementing polarization independent lenses using circular or fourfold symmetric cross-section elements 19,31,55-57 . Here, using anisotropic gold nanoparticles as resonators, we design random metasurface lenses at the wavelength of 1.5 μm. To establish general rules and guidelines for such designs, the resonators are first considered in a periodic framework to numerically obtain their phase maps ϕ(l), i.e. the phase-shift provided by the element as a function of a geometrical parameter (the length in our case) for different periods of the array. These periods, corresponding to a density of the phase map, are then used as references from which a resonator can be chosen. Then, using different phase maps, one- and two-dimensional metasurfaces of periodic or random elements of various densities can be constructed. The performances of the designs are then discussed.
Phase maps and unit cell simulations
To design random metalenses such as the one represented in Fig. 1(a), phase-shifting elements need to be used. Figure 1(b) presents a possible implementation using anisotropic gold nanoresonators. The nanoparticle has a width of 50 nm, a height of 40 nm, and the length is tuned between 150 nm and 500 nm. Gold is modeled using a Drude model with a plasma frequency ωp = 1.367×10^16 rad/s and a collision frequency ωc = 6.478×10^13 rad/s 58 . This plasmonic particle is supported by a dielectric SU8 spacer with a refractive index n_SU8 = 1.59, optimized to a thickness of 70 nm, on top of a metallic ground plane. The building block of the metasurfaces is first investigated using in-plane periodic boundary conditions, as shown in Fig. 1(b). The period in the direction of the width of the element (period py in the y direction) will be swept but is kept to 150 nm in Fig. 1. The period along the longer dimension of the element (px in the x direction) is set to 900 nm and is kept constant throughout the paper. Using the frequency domain solver of the commercial software CST, the complex reflection coefficient of our structure is computed. The illuminating plane wave has a frequency varying from 50 THz to 350 THz and is polarized along x (i.e. the long axis of the particle). The particle is transparent to the orthogonal polarization. Varying the length of the particle from 150 nm to 500 nm shifts its fundamental resonance frequency as shown in Fig. 1(c). The phase of the wave reflected by the particle and the metallic plane can thus be controlled. Fig. 1(d) shows the phase shift of an element around 200 THz (λ0 = 1500 nm) for different particle lengths. The shortest element is taken as phase reference. Fig. 2(a) shows the phase shift as a function of the length of the nano-bars for different spacer thicknesses at 200 THz. For a single resonance, the complete 2π phase shift is only obtainable asymptotically far away from the resonance. The SU8 spacer thickness sets the quality factor Q of the resonances which, in turn, controls the maximum value of the phase-shift: the thicker the SU8 layer, the lower the Q factor and the smaller the maximum phase-shift. However, the higher the Q factor, the sharper the slope of the reflected phase. Hence, a compromise has to be made between the maximum value of the phase shift and the slope of the reflected phase around 200 THz (Fig. 1(d)). A very steep change of phase introduces discretization errors. A thickness of 70 nm (the red curve on Fig. 2(a)) appears to be a good compromise as the difference between 2π and the maximum phase-shift is smaller than 37°. The phase-shift required to design a lens in reflection or concentrator with a focal length f is given by the parabolic law:

ϕ(y) = (2π/λ0) (√(y² + f²) − f)   (1)

The phase-shift required to design a metalens of 30 μm width with a focal length of 20 μm is shown in Fig. 2(b). Knowing the phase-shift required at any position of the metasurface (Fig. 2(b)) and the phase shift induced at reflection on a periodic nano-bar array as a function of the nano-bar length (Fig. 2(a)) (reference phase map) leads to choosing the length of the nano-bars as a function of their position on the metasurface. Fig. 2(c) presents the length required at a given position to realize the phase shift plotted in Fig. 2(b), for different phase-maps represented by different periods py (from 150 nm to 500 nm) in the periodic array. Changing the period shifts the resonant frequency of an array of identical elements.
This has two origins: near-field coupling, which becomes stronger as the distance between the particles is decreased, and the density of particles itself, even in the limit of negligible near-field coupling. Indeed, the denser the array, the more field will be phase-shifted by the particles compared to the field which is only reflected by the ground plane. The total reflected field, which is the sum of the field reflected by the mirror and the field scattered by the elements, therefore has different phases for different densities.
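The design pipeline of this section reduces to two numerical steps: evaluate the required phase at each position with Eq. (1), then invert a tabulated phase map ϕ(l) to pick the nano-bar length. The sketch below illustrates this; the tabulated phase map is a monotonic placeholder, not data from the CST simulations.

```python
import numpy as np

lam0 = 1.5   # design wavelength in micrometers (200 THz)

def lens_phase(y, f, lam=lam0):
    """Required phase shift (rad, mod 2*pi) at position y for a
    reflective lens of focal length f, Eq. (1)."""
    return (2 * np.pi / lam) * (np.sqrt(y**2 + f**2) - f) % (2 * np.pi)

# Placeholder phase map phi(l): phase shift vs. nano-bar length (nm),
# standing in for a curve like Fig. 2(a); it stops ~37 deg short of 2*pi.
lengths_nm = np.linspace(150, 500, 64)
phase_map = np.linspace(0.0, 2 * np.pi - 0.65, 64)

def length_for_phase(phi):
    """Invert the (monotonic) phase map by linear interpolation."""
    return np.interp(phi, phase_map, lengths_nm)

y = np.linspace(-15.0, 15.0, 301)       # 30-um-wide 1D metalens
bar_lengths = length_for_phase(lens_phase(y, f=20.0))
```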
One-dimensional random metalenses
For a periodic metasurface, on one hand, choosing a period py of a phase-map sets the density of elements per unit area of the periodic array: the relation between the periods and the density is d = 1/(px·py). On the other hand, the main questions that arise are which density optimizes the focusing of a random metalens, and which phase-map should be chosen to design a metasurface at this density. A naïve response would be to select the phase-map with the same density as the random metasurface to be designed, but, as will be seen later, the answer is not straightforward. These questions are all the more important as random metasurfaces have fluctuations of the near-field coupling that may affect the efficiency if the density becomes too high. We simulated 16 different random metasurfaces corresponding to four densities of phase-maps (four periods in the y direction (py) of 100 nm, 150 nm, 250 nm and 500 nm) and four equivalent densities in the random metasurface of 25, 17, 10 and 5 elements per square wavelength. On a matrix with the rows representing the density of the phase-maps and the columns representing the density of the random metasurface, elements on the diagonal are thus those for which the two densities are equal. Figure 3 represents 16 one-dimensional metasurfaces with a width (along "y") of 30 μm and a focal length (along "z") of 20 μm. Elements (nano-bars) are set along "x" perpendicularly to the 1D metasurface (i.e., out of plane in Fig. 3). Periodic boundary conditions define the out-of-plane direction, with a period of 900 nm.
Our algorithm to design a random metasurface corresponds to random loose packing 59 and consists of the following steps. First, we randomly select a position (for 2D metasurfaces, we also choose a random orientation for the element). Using the phase maps of Fig. 2(a-c), we compute the length that a particle at this position should have. We then check whether it overlaps or is too close to previously placed particles. A minimum distance of 10 nm is set in order to put the particles as close as possible and reach the maximum density, but this value can be varied. If the particles are too close or overlap, we reject the particle and select another random position. If it does not overlap, we approve the placement and move on to the next particle. The process is repeated until we manage to place a defined number of elements (from 300 for a density of 25 λ0⁻² to 60 for a density of 5 λ0⁻²) or until we repeatedly fail to place a given element, which would mean that the maximal density of the random metasurface has been reached. Using CST simulations in the time domain for better computational efficiency, we computed the density of energy of the reflected field normalized by the density of energy of the incident plane wave for the sixteen 1D random metasurfaces. Each design is randomly repeated, simulated ten times, and averaged to ensure that the results are statistically significant. The average results for the 16 (4 by 4) metasurfaces are displayed in Fig. 3. Unaveraged results are very similar, as the maximum value of the standard deviation is found to be about 10% of the average value (see supplementary information). All elements are parallel to the polarization direction of the incident field and only the distance between bars varies randomly in the 1D metasurfaces. Metasurfaces on the diagonal of the figure and below the diagonal have better focusing performance than those above the diagonal. This figure shows that for the designed 1D random metasurfaces, the density of elements in the metasurface plays a more important role than the density of the particular phase-map chosen to design it. This is an interesting conclusion, as near-field coupling fluctuation in the random system, when the phase-map is no longer representative of the metasurface, does not seem to alter the focusing ability of the metasurface.
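A minimal sketch of this random loose packing loop is given below. The overlap test uses bounding circles around each bar rather than the exact rectangle intersection, and the helper required_length is a self-contained placeholder for the phase-map inversion of the previous sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def required_length(x, y, f=10.0, lam=1.5):
    """Bar length (um) needed at (x, y): parabolic lens phase inverted
    through a placeholder linear phase map (150-500 nm)."""
    phi = (2 * np.pi / lam) * (np.hypot(np.hypot(x, y), f) - f) % (2 * np.pi)
    return 0.150 + (0.500 - 0.150) * phi / (2 * np.pi)

def pack(n_target, side=10.0, width=0.050, min_gap=0.010, max_fail=20000):
    """Random loose packing of bars on a side-by-side (um) surface.
    Overlap is approximated with bounding circles (conservative)."""
    placed = []          # tuples (x, y, angle, length)
    fails = 0
    while len(placed) < n_target and fails < max_fail:
        x, y = rng.uniform(-side / 2, side / 2, size=2)
        ang = rng.uniform(0.0, np.pi)        # random in-plane orientation
        L = required_length(x, y)
        r = 0.5 * np.hypot(L, width)         # bounding-circle radius
        if all(np.hypot(x - px, y - py) >
               r + 0.5 * np.hypot(pL, width) + min_gap
               for px, py, _, pL in placed):
            placed.append((x, y, ang, L))
            fails = 0
        else:
            fails += 1
    return placed

elements = pack(n_target=300)
```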
Two-dimensional random metalenses
We now consider 2D metalenses. As in the 1D case, metasurfaces are constructed using the same phase-maps corresponding to reference periods py of 100 nm, 150 nm, 250 nm and 500 nm. The size of the 2D metasurfaces is chosen to be 10 by 10 μm, with a focal length of 10 μm to limit the computational volume. The phase shift provided by the elements is now a function of x and y. The nominal frequency is still 200 THz (λ0 = 1.5 μm). We again simulate 10 sets of 16 metasurfaces with densities from 25 λ0⁻² to 5 λ0⁻² and present the averaged results. The process to design the random metalenses is the same as for the 1D case, but with randomly placed and oriented elements as shown in Fig. 1(a). A minimum distance of 10 nm between adjacent elements is enforced and makes the structures realistic to fabricate. The focusing results, i.e., the reflected density of energy in the plane y=0, which corresponds to a cross section of the central volume, are shown in Fig. 4. Metasurfaces produce well-defined focal spots of the size of Abbe's limit, λ0/(2NA) = 1.6 μm and λ0/NA² = 6 μm. Very interestingly, the best results are not achieved anymore for metasurfaces below the diagonal. This contrasts with the 1D case, where metasurfaces of the lower-left part of the figure, i.e. which are designed for equal or higher densities than their corresponding phase-maps, provided good results. Here the three best results are obtained with the densest phase-map with py=100 nm. With this phase-map density, even the random metasurface with a density of 5 λ0⁻² leads to a visible focusing spot. The density of the phase-maps thus seems to play a more important role than the density of the random metasurface itself, even if the two parameters obviously both play a role. The phase shift in denser phase-maps accounts for a more important near-field coupling between particles. Near-field coupling thus seems to play a more important role in the designed 2D random metasurfaces than in the design of 1D metasurfaces. In 2D metasurfaces the elements are randomly oriented, whereas they were all aligned in the 1D random metasurfaces. Near-field interactions in 2D metasurfaces are different from what is modeled in the phase-maps, which raises fundamental questions on the optimal design of random metasurfaces. As noted in a recent paper 53 , near-field interactions between adjacent elements in a gradient metasurface do not account for the fact that particles are different, because phase maps use identical elements. This problem is exacerbated in metasurfaces in which the cross-talk between elements is not negligible. While ideas to address this problem and improve the efficiency were proposed for gradient metasurfaces using periodic grids 53 , the question still remains open for random structures that thus operate away from their optimum.

Figure 4: Average of the density of energy over 10 samples of 2D random metasurfaces. Influence of the density of elements and reference phase-maps on the focusing of 2D random metalenses for 16 metasurfaces designed with different densities and phase-maps. The metasurfaces are located in the plane z=0.
Discussion
In the previous section, we have proposed a strategy to design 2D random metasurface lenses. A 2D random metasurface using anisotropic elements such as nano-bars is expected to be polarization independent, an important property for many applications. The theoretical focusing power of the 2D random metasurfaces is expected to be half of the focusing power of the corresponding periodic 2D metasurface using the same anisotropic element. However, there is not a one-to-one map between the periodic and the random structure of the same density, as near-field coupling between elements in the two surfaces is different. We present in the supplementary information the polarization dependent periodic metasurface with the same element. Figure 5 shows the cross section of energy at the focal spot at a distance of 10 μm from the metasurfaces for two incident polarizations for 2D random metasurfaces and for the metasurface with a periodic grid. The random 2D metalens is polarization independent as expected, while the metasurface with periodic grid and aligned elements is not. However, as previously discussed, the power at the focal spot in Fig. 5(b) is smaller than half the power at the focal spot in Fig. 5(a), and this stems from the differences in near-field interaction configurations. To further optimize the efficiency of random metasurfaces, a possible solution would consist in refining the references and using phase-maps representative of the configuration of randomness, accounting for the near-field coupling fluctuation in the metasurface. A local phase method that optimizes the length of the element for quasi-periodic metasurfaces has been proven to be effective 53 , but it would be very computationally intensive for 2D random metasurfaces. Finally, an interesting feature of random metalenses can be seen in Fig. 1(a): the density of elements is not homogeneous. At locations where elements are larger, the density is smaller, while at positions where elements are shorter, the density is higher. Such a distribution can be expected to compensate for the fact that smaller elements have a smaller scattering cross-section. Hence, using disordered metalenses provides additional degrees of freedom in the design of devices. Figure 6 presents the distribution of the length of nano-bars for a random and a periodic metasurface lens of 100 by 100 μm with a focal length of 250 μm and a density of 25 elements per square wavelength (111,000 elements on the surface). We can see that a simple random loose packing 59 algorithm favors smaller elements. Such a feature may be engineered by tailoring the algorithm used to design the metasurface, for instance using close packing algorithms or hyperuniform media 60-63 .
Conclusion
We proposed a method to design 1D and 2D random metasurface lenses. Using extensive numerical simulations, we demonstrated successful focusing by 1D and 2D random metasurfaces. By implementing random metalenses of various densities using phase-maps of the same density (but periodic), we found that the main metric affecting the performance of random 1D metasurfaces is the density of the metasurface, while in 2D random metasurfaces the density of the phase-maps, or the near-field coupling between elements, seems to play a more important role than the density of elements on the metasurface itself. Randomness statistically restores the circular symmetry of the devices and enables polarization independent lenses. We have also demonstrated that random metasurfaces contain a larger number of small scatterers than their periodic counterparts, and this may favor higher intensity at the focus if the optimal near-field coupling between random structures is obtained. Further investigations need to be performed to understand the role of the orientation disorder and the strength of the near-field coupling to optimize 2D random metalenses. Our results pave the way to the design of random metasurfaces for devices as diverse as lenses and concentrators. We also believe that random metasurfaces may overcome limitations on the diffraction efficiency of periodic systems, especially for dielectric metasurfaces that are made with larger elements. Random structures are also more amenable to self-assembly fabrication for large scale systems. | 2019-01-20T05:03:19.401Z | 2018-02-04T00:00:00.000 | {
"year": 2018,
"sha1": "9aa7191c64cedce904ad163ece2539851bf64afc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9aa7191c64cedce904ad163ece2539851bf64afc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
201230211 | pes2o/s2orc | v3-fos-license | Effects of nutrient composition on the formation of biofilm and biocorrosion in MEOR biostimulation medium based on response surface methodology
Nutrient addition in Microbial Enhanced Oil Recovery (MEOR) applications is one of the methods used to increase oil production. Nutrient injection in MEOR applications must be done carefully because improper nutrient addition can induce biofilm formation and biocorrosion. The composition of a biostimulation medium for optimizing beneficial indigenous bacterial growth in Microbial Enhanced Oil Recovery (MEOR) was evaluated using Response Surface Methodology (RSM) with Central Composite Design (CCD). The three main factors for the medium composition were molasses, Diammonium Phosphate (DAP), and NPK. RSM was used to identify the medium composition with the strongest effect on biofilm production and strength, which has a high potential to lead to biocorrosion. Molasses, NPK, and diammonium phosphate (DAP) were utilized as supplementary carbon, nitrogen, and phosphate substrates in brine water as the basal medium. The molasses concentration was 0%-10% while DAP and NPK were 0%-0.5%. Both aerobic and anaerobic sessile bacteria, as well as acid-producing bacteria, were enumerated by total plate and turbidity methods. Statistical analysis with α=0.05 shows that molasses has a significant effect on biofilm strength and sessile anaerobic bacteria. DAP has a significant effect on biofilm strength and on sessile aerobic and anaerobic bacteria. NPK has a significant effect on sessile anaerobic bacteria. Molasses addition decreased the brine pH through microbial activity, increasing the corrosion rate of the carbon steel ST-37 coupon. Meanwhile, the addition of DAP increased the pH of the brine medium and decreased the corrosion rate.
Introduction
Indonesia's energy demand is increasing along with economic growth and a huge population. On the other hand, oil production decreased from 287.1 million barrels in 2006 to 251.87 million barrels in 2015 [1]. Microbial Enhanced Oil Recovery (MEOR) is a tertiary method that increases oil production using microbial activity that benefits the uptake of oil from the reservoir. Biostimulation is one of the MEOR approaches: suitable nutrients are added to the reservoir to increase indigenous microbial growth as well as the production of its metabolites [2,3].
Unfortunately, the injection of those nutrients should be considered carefully because it could lead to microbial accumulation along carbon-steel pipelines and equipment, inducing biofouling and biocorrosion deterioration processes. The microbial cells tend to form a biofilm, which has caused many problems such as pipe clogging, biofouling, oil souring, and Microbiologically Influenced Corrosion (MIC) [4]. MIC causes crucial problems in industry; an estimated 40% of internal pipeline corrosion in the petroleum industry is attributed to it [5].
Improper concentrations of nutrients added in MEOR biostimulation could cause excessive microbial growth and microbial accumulation in the system, leading to biocorrosion. Microbial growth depends on the supply of carbon (C), nitrogen (N), phosphorus (P), and minor trace elements. A previous study showed that molasses, NPK, and diammonium phosphate were suitable to be applied as a medium for MEOR biostimulation. This study aimed to analyze the effect of several main nutrient concentrations, formulated with a response surface methodology experimental design as a biostimulation tool, on biofilm formation and its biocorrosion potential on a metal surface.
Microorganisms
The indigenous microbial consortium in this study was obtained from brine water of an oil reservoir located in South Sumatera, Indonesia. The physical and chemical characteristics of the brine sample are shown in Table 1.
Experimental design to evaluate the effect of nutrition composition
Combinations of carbon, nitrogen, and phosphate concentrations were assessed for the brine medium formulation with response surface methodology. The 20 experimental variations (Table 2) were obtained from second-order response surface methodology using a central composite design in Minitab 17 Statistical Software™. The nutrient concentrations used were 2-8% (w/v) of molasses as the carbon source and 0.1%-0.4% (w/v) diammonium phosphate (DAP) and NPK as the phosphate (P) and nitrogen (N) sources [7][8][9]. All three nutrient sources were diluted in brine water.
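For reference, a rotatable central composite design in three factors has 2³ = 8 factorial points, 6 axial points at ±α, and 6 center points, giving the 20 runs of Table 2. A minimal construction is sketched below; the center-point count and the coded-to-actual mapping are assumptions consistent with the text, not an export of the actual Minitab design. Note that the axial points extend the factorial ranges to roughly 0-10% molasses and 0-0.5% DAP/NPK, matching the ranges quoted in the abstract.

```python
import itertools
import numpy as np

def central_composite(k=3, n_center=6):
    """Coded CCD: 2^k factorial points, 2k axial points at +/-alpha
    (rotatable: alpha = (2^k)^(1/4)), and n_center center points."""
    alpha = (2 ** k) ** 0.25
    factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    return np.vstack([factorial, axial, np.zeros((n_center, k))])

design = central_composite()         # 8 + 6 + 6 = 20 coded runs

def decode(coded, low, high):
    """Map a coded level in [-1, 1] to an actual concentration."""
    return (low + high) / 2 + coded * (high - low) / 2

molasses = decode(design[:, 0], 2.0, 8.0)   # % (w/v)
dap      = decode(design[:, 1], 0.1, 0.4)   # % (w/v)
npk      = decode(design[:, 2], 0.1, 0.4)   # % (w/v)
```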
Analysis of biofilm strength
The 20 variations of brine water and nutrient formula were incubated in sterile 24-well microtiter plates with a final volume of 3 mL per well. Brine water without additional nutrients was used as a negative control. After incubation at 70 °C for 14 days, sessile microbial cells were counted on different specific media (see below). After the 14-day incubation, planktonic cells were removed and the remaining cells on the surface of each microtiter plate were washed three times with Phosphate Buffer Saline (PBS), pH 7.2. The remaining sessile microbial layer on the bottom of each well was heat-fixed at 70 °C for 45 minutes. Then, the remaining microbial layer was stained with 1 mL of 0.5% crystal violet for 5 minutes. After the crystal violet was removed thoroughly, the surface was washed three times with PBS. The remaining stained sessile cells were dissolved in 1 mL of 70% ethanol. 200 µL of the homogenized stain solution was measured in a 96-well microplate with a BIO-RAD ELISA Reader (595 nm) in triplicate for each sample [10,11]. The absorbance was categorized using the interpretation from Stepanović et al. [12].
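The Stepanović interpretation derives its cutoff ODc from the negative control (mean plus three standard deviations) and classifies samples against ODc, 2·ODc, and 4·ODc; note that the thresholds quoted later in the Results (0.4041, 0.8082, 1.616) follow exactly this doubling pattern. A sketch, assuming triplicate readings:

```python
import numpy as np

def classify_biofilm(sample_od, control_od):
    """Crystal violet OD595 categories after Stepanovic et al.:
    cutoff ODc = mean(control) + 3*SD(control)."""
    odc = np.mean(control_od) + 3 * np.std(control_od, ddof=1)
    od = np.mean(sample_od)
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

print(classify_biofilm([0.95, 1.01, 0.98], [0.11, 0.13, 0.12]))
```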
Microbial enumeration
Both planktonic and sessile microbes were enumerated. Planktonic microbes were quantified by absorbance on a spectrophotometer, while sessile microbes were quantified using the total plate count method on several growth media: Nutrient Agar (NA) for aerobic microbes and Postgate B for sulfate-reducing and anaerobic bacteria. All bacterial growth media were incubated at 70 °C, with incubation times of 1-2 days for aerobic bacteria and 7-14 days for anaerobic bacteria. All anaerobic bacteria were incubated at the same temperature for 7-14 days under a nitrogen gas purge to create anaerobic conditions [13].
Biocorrosion analysis
Brine water supplemented with the nutrient formulas (shown in Table 3) was inoculated in a final volume of 250 mL in vacuum glass bottles. In order to create an anaerobic condition, the brine water media were purged with nitrogen for three minutes to eliminate the dissolved oxygen. The prepared carbon steel ST-37 coupon was then placed in each bottle and incubated for 14 days at 70 °C. The bacteria were inoculated aseptically in an anaerobic-conditioned laminar flow cabinet. Autoclaved brine (Treatment A) was used as a control to eliminate the role of the biocorrosion process. After 14 days of incubation, three carbon-steel coupons from each formula were harvested and rinsed by the pickling method using 26% (v/v) HCl, then neutralized with 10% (w/v) NaOH and washed with sterile distilled water. Isopropyl alcohol and acetone were used to eliminate the biofilm extracellular polymeric substances (EPS) remaining attached to the coupon. The coupons were dried in a 70 °C oven for 30 minutes, moved to a desiccator to eliminate oxygen contact, and weighed with an analytical scale. The corrosion rate was calculated with equation (1):

Corrosion rate = (K × W) / (A × T × D)   (1)

where K is a unit-conversion constant, W is the weight loss of the coupon, A is the exposed coupon area, T is the exposure time, and D is the density of the metal [14].
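With the ASTM G1 convention for equation (1), K = 8.76×10^4 gives the rate in mm/year when W is in grams, A in cm², T in hours, and D in g/cm³. A sketch, assuming a typical carbon-steel density of 7.85 g/cm³ and illustrative coupon numbers:

```python
def corrosion_rate(weight_loss_g, area_cm2, hours, density=7.85):
    """Weight-loss corrosion rate in mm/year (ASTM G1 style),
    CR = K*W / (A*T*D) with K = 8.76e4."""
    K = 8.76e4
    return K * weight_loss_g / (area_cm2 * hours * density)

# Example: 12 mg lost from a 10 cm^2 coupon over a 14-day exposure.
print(f"{corrosion_rate(0.012, 10.0, 14 * 24):.3f} mm/year")
```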
EPS analysis
Biofilm was scraped from the carbon steel coupon using a laboratory spatula, washed with 1.5 mL TE buffer (pH 7.5, 10 mM Tris base, 10 mM EDTA, 2.5% NaCl), and then resuspended. The biofilm suspension was centrifuged at 4 °C and 4×10³ g for 20 minutes. The concentrated biomass was resuspended in 1.5 mL of 0.85% NaCl and 0.22% formaldehyde at 80 °C for 30 minutes for EPS (Extracellular Polymeric Substance) extraction. EPS was then harvested after centrifugation (4 °C, 1×10⁴ g) for 60 minutes [15].
The carbohydrate concentration in the EPS was quantified by the phenol-sulfuric acid method [16]. A 500 µL sample was placed in a sterile reaction tube, supplemented with 500 µL of 5% phenol and 2.5 mL of 96% sulfuric acid, and mixed with a vortex. The reaction was incubated at 30 °C for 30 minutes. The absorbance was measured at 490 nm in triplicate against a standard curve generated using a 0.01 mg/mL standard glucose suspension.
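Quantification against the glucose standard curve is itself a linear fit. The sketch below shows the idea; the standard absorbances are illustrative numbers, not measured values.

```python
import numpy as np

# Glucose standards (mg/mL) vs. illustrative A490 readings.
std_conc = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])
std_a490 = np.array([0.02, 0.15, 0.29, 0.42, 0.57, 0.70])

slope, intercept = np.polyfit(std_a490, std_conc, 1)

def eps_carbohydrate(a490):
    """EPS carbohydrate concentration (mg/mL) from A490 readings."""
    return slope * np.asarray(a490) + intercept

print(eps_carbohydrate([0.31, 0.33, 0.30]).mean())   # triplicate sample
```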
Effect of nutrient composition on biofilm formation
The varied concentrations of molasses, DAP, and NPK as main factors were used in the response surface analysis, resulting in the 20 experimental variations mentioned above (Table 2). The results of those variations on biofilm strength and bacterial cell numbers are shown in Table 4. Based on multiple regression analysis of the central composite design matrix, and taking into account the response variables of OD, aerobic, and anaerobic bacterial cell numbers given in Table 2, second-order polynomial equations were obtained as equations (2), (3), and (4).
Aerobic bacteria = 209.0 + 451 X2   (3)

Anaerobic bacteria = 18358 + 7397 X1 − 959 X2 − 7928 X3 − 9926 X1·X2 − 16563 X1·X3 + 279901 X2·X3   (4)

The p-values for OD, aerobic bacteria, and anaerobic bacteria are 0.042, 0.017, and < 0.001, respectively, with significance level α = 0.05. X1, X2, and X3 stand for the variables molasses, DAP, and NPK. The biofilm analysis was categorized into three biofilm strength classes based on the absorbance of each variation: non-biofilm, OD < 0.4041; weak biofilm, 0.4041 < OD ≤ 0.8082; and moderate biofilm, 0.8082 < OD ≤ 1.616 (Figure 1). All variations that use molasses as the carbon source resulted in biofilm formation. Weak biofilm formed when the DAP concentration added was greater than or equal to the NPK concentration, while moderate biofilm formed when the DAP concentration was lower than the NPK concentration. From the data, the biofilm strength rises as the molasses concentration is increased up to 5% (v/v) (Figure 2), whereas molasses additions above 5% lower the biofilm strength. Molasses is needed as the main carbon source and is very important in biofilm formation. The addition of DAP weakens biofilm formation: ammonium ions in DAP react with chloride ions to form chloramine compounds that have a disinfectant effect on the microbial cells in the brine water that form the biofilm [17]. NPK concentration did not affect biofilm strength.
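Coefficients such as those of Eqs. (3) and (4) come from an ordinary least-squares fit of a full second-order model to the 20-run design. A self-contained sketch with synthetic data follows (the real responses live in Table 4):

```python
import numpy as np

def quadratic_terms(X):
    """Design matrix columns for a full second-order RSM model:
    1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(7)
X = rng.uniform([0, 0, 0], [10, 0.5, 0.5], size=(20, 3))  # molasses, DAP, NPK
y = rng.normal(size=20)                                   # synthetic response

coef, residuals, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
# Term-wise p-values (as reported by Minitab) additionally require the
# coefficient covariance matrix; only the point estimates are shown here.
```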
Figure 2. Effect of molasses and DAP concentration on biofilm strength
Growth under anaerobic conditions was assessed on several media, and the anaerobic cell counts were higher than the aerobic cell counts. Statistical analysis yielded linear and quadratic terms showing that the DAP concentration has a significant effect on both anaerobic and aerobic bacteria in the biofilm, while the NPK and molasses concentrations have a significant effect on the growth of anaerobic cells. Statistical analysis of the acid-producing bacteria counts did not show a significant effect. Figure 3 summarizes the effects of nutrient addition on the anaerobic bacteria counts: increasing the molasses concentration when the DAP and NPK concentrations are low raises the number of anaerobic bacteria, while increasing the molasses concentration when DAP and NPK are high decreases it. The DAP concentration raises the total anaerobic bacteria when the NPK concentration is at a high level. The effect of each substrate on anaerobic cell growth is thus affected by the concentrations of the other substrates added. A biofilm basically consists of diverse microorganisms with varied physiological activities and nutrient needs, which causes complex microbial and chemical interactions. High concentrations of carbon, phosphate, and nitrogen increase the abundance of various bacterial communities, which intensifies nutrient competition. In order to obtain a high number of anaerobic bacteria, a high C/N ratio of nutrients is needed; it can also allow the growth of nitrifying bacteria, ammonifying bacteria, and phosphate-utilizing bacteria [18]. This is why proportional substrate addition should be done carefully at the right concentrations.
In this study, the effects of substrate addition on biofilm formation and anaerobic bacterial growth showed a similar pattern: increasing the DAP amount decreased growth, and increasing the molasses amount increased growth. This might be because all inoculation in this study was performed with an anaerobic approach, using nitrogen gassing and an anaerobic chamber. The biofilm cell quantification basically represents the total microbes involved in the biofilm formation process, which in this experiment was dominated by anaerobic bacteria.
Effect of the optimized nutrient formula on biocorrosion potential of carbon-steel coupons
The statistical analysis then gave an optimum medium for stimulating biofilm formation consisting of molasses, DAP, and NPK at 4.6%, 0.4754%, and 0.5%, respectively. On the other hand, the medium formulation that minimized biofilm formation consisted of 0.1% DAP without molasses or NPK supplementation. The results for those treatment variations are illustrated in Table 3. The composition of the carbon-steel ST-37 coupon was analyzed using SEM-EDX; the composition is shown as peak concentrations of each metal alloy. The carbon-steel coupon was prepared by manual scouring using graded sandpaper. The SEM-EDX result shows that the carbon-steel ST-37 coupon is composed of 74.08% iron (Fe), 0.07% carbon (C), and 0.96% nitrogen (N). Table 3 shows that the corrosion rate increased with a high DAP amount, as did the anaerobic and aerobic bacterial cell numbers (Figure 4). Based on the statistical analysis, DAP acted as an independent variable affecting the corrosion rate as it increased (shown in Table 3). Increasing the molasses concentration increased microbial growth and decreased the pH of the environment through microbial activity that metabolizes molasses into acetic acid [16]. When the environmental pH is low, abundant hydrogen ions (H+) are present in the environment; they induce oxidation-reduction reactions, interact with the cathode (the metal in this experiment), and erode the protective layer on the metal surface. This reaction causes oxidation of the metal to Fe2+, which interacts with hydroxide ions to form tubercles on the metal surface and is further oxidized to Fe3+ with the help of the iron-oxidizing bacteria present in the system. Fe3+ ions can form another solid crystal, FeOOH or goethite, on the surface of the metal when interacting with oxygen and hydroxide [19].
Goethite and vivianite are the protective crystal that protects the metal surface from corrosion process, while hematite can lead to further corrosion. The Phosphate substrate addition into the system can reduce the existence of goethite crystal in the metal surface and form vivianite and struvite crystal that soluble in the acidic pH. The acidic environment because phosphate addition into the system can cause formation reactive Phosphate compound, including iron phosphide. Phosphate is an anamorphic compound that it hardly detects by XRD analysis. This kind of compound is reactive that could increase metal corrosion rate [20].
The minimum formula has an acidic pH environment because of passivity in the metal surface. DAP contains Phosphate (P) that act as a corrosion inhibitor [21]. Phosphates act as inorganic anodic inhibitor so that could form insoluble protection layer and cohesively protect the metal surface, called passivity [22]. The formation of passivity in the metal surface could decrease organic compound accumulation and conditioning alkaline pH. The Brine water contains Fe 2+ and Fe 3+ that could react with Oxygen and Hydroxide ions to form goethite crystal. The corrosion rate in the chemical environment and Brine water alone systems show smaller result compare to several optimized formula (c) treatments (Table 3). This could happen because the existence of microbial activity that also releasing organic acids metabolites in OPT treatments induced chemical corrosion process in the system. Consequently, on chemical condition 9 and water formation treatments the corrosion rate measured is the result of metal weight loss because of the formation of a protective crystal, such as goethite and vivianite.
Microorganism might not affect the corrosion in an instance, but indirectly affect the corrosion process as a system, microorganism affect the corrosion by producing metabolites such as organic acid from carbon source utilization, such as molasses in this experiment, that form acidic environment in the system. NPK affected corrosion has not clearly known, but the existence as Nitrogen, Phosphor and Potassium source made changes in aerobic and anaerobic growth that also affect biofilm formation in the system. Even though the different result of NPK existence on the system depends also on different molasses and DAP used due to changes on the C:N:P ratio of media.
Conclusion
From this study, we can conclude that molasses addition as the main carbon source in the nutrients increases anaerobic bacteria, and adding up to 5% molasses increases biofilm strength, while molasses addition of more than 5% can decrease biofilm strength. The addition of DAP (Diammonium Phosphate) increases the total aerobic microbial cells, lowers the biofilm strength, lowers the anaerobic bacteria count when the molasses concentration increases, and raises the anaerobic bacteria count when the molasses concentration decreases. Increasing NPK addition raises the anaerobic bacteria count when molasses is low and DAP is high. The optimum molasses, DAP, and NPK concentrations for biofilm formation are 4.6%, 0.4754%, and 0.5%, and for minimum biofilm formation they are 0%, 0.1%, and 0%, in respective order. An increased number of microorganisms correlated with an increased corrosion rate observed in the optimized media compared to that observed in the minimum medium in this study. This biofilm formation might increase the potential for the biocorrosion process in the static environment of this study. | 2019-08-23T08:16:45.492Z | 2019-07-29T00:00:00.000 | {
"year": 2019,
"sha1": "37d3c938cca933ecde6d59a5400cefdf6d2fa2c8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/299/1/012015",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a232346c3ec7457b8cf6ca1d7094d71c470f131f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
125855041 | pes2o/s2orc | v3-fos-license | Lattice-Boltzmann Simulations of Fluid Flows in MEMS
The lattice Boltzmann model is a simplified kinetic method based on the particle distribution function. We use this method to simulate problems in MEMS, in which the velocity slip near the wall plays an important role. It is demonstrated that the lattice Boltzmann method can capture the fundamental behavior in micro-channel flow, including velocity slip and nonlinear pressure drop along the channel. The Knudsen number dependence of the position of the vortex center and the pressure contour in micro-cavity flows is also demonstrated.
The development of technologies in Micro-electromechanical systems (MEMS) has motivated the study of fluid flows in devices with micro-scale geometries, such as micro-channel and micro-cavity flows [1]. In these flows, the molecular mean free path of fluid molecules could be the same order as the typical geometric length of the device; then the continuum hypothesis, which is fundamental to the Navier-Stokes equation, breaks down. An important feature in these flows is the emergence of a slip velocity at the flow boundary, which strongly affects the mass and heat transfer in the system. In micro-channel experiments, it has been observed that the measured mass flow rate is higher than that based on a non-slip boundary condition [2]. The Knudsen number, K_n = l/L, can be used to identify the influence of the effects of the mean free path on these flows, where l is the mean free path of molecules and L is the typical length of the flow domain. It has been pointed out that for a system with K_n < 0.001, the fluid flow can be treated as continuum. For K_n > 10 the system can be considered as a free-molecular flow. The fluid flow for 0.001 < K_n < 10, which often appears in MEMS [1], can be considered neither a continuum nor a free-molecular flow. Traditional kinetic methods, such as molecular dynamics simulations [3] and the continuum Boltzmann equation approach, could be used to describe these flows. But these methods are more complicated than schemes usually used for continuum hydrodynamic equations. The solution of the Navier-Stokes equation including the velocity-slip boundary condition with a variable parameter has also been used to simulate micro-channel flows [4].
In the past ten years, the lattice Boltzmann method (LBM) [5] has emerged as an alternative numerical technique for simulating fluid flows. This method solves a simplified Boltzmann equation on a discretized lattice. The solution of the lattice Boltzmann equation converges to the Navier-Stokes solution in the continuum limit (small Knudsen number). In addition, since the lattice Boltzmann method is intrinsically kinetic, it can also be used to simulate fluid flows with high Knudsen numbers, including fluid flows in very small MEMS.
To demonstrate the utility of the LBM, we use the LBM model with three speeds and nine velocities on a two-dimensional square lattice. The velocities, $c_i$, include eight moving velocities along the links of the square lattice and a zero velocity for the rest particle: $(\pm 1, 0)$, $(0, \pm 1)$, $(\pm 1, \pm 1)$, and $(0, 0)$. Let $f_i(x, t)$ be the distribution function at $x, t$ with velocity $c_i$. The lattice Boltzmann equation with the BGK collision approximation [6,7] can be written as $f_i(x + c_i \delta t, t + \delta t) - f_i(x, t) = -\frac{1}{\tau}\left[f_i(x, t) - f_i^{(eq)}(x, t)\right]$, where $f_i^{(eq)}$ is the equilibrium distribution function and $\tau$ is the relaxation time. We have assumed that the spatial separation of the lattice is $\delta x$ and the time step is $\delta t$. A suitable equilibrium distribution is [7] $f_i^{(eq)} = t_i \rho \left[1 + \frac{c_{i\alpha} v_\alpha}{c_s^2} + \frac{v_\alpha v_\beta}{2 c_s^2}\left(\frac{c_{i\alpha} c_{i\beta}}{c_s^2} - \delta_{\alpha\beta}\right)\right]$. Here $c_s = 1/\sqrt{3}$, $t_0 = 4/9$, $t_1 = t_2 = t_3 = t_4 = 1/9$ and $t_5 = t_6 = t_7 = t_8 = 1/36$. The Greek subscripts $\alpha$ and $\beta$ denote the spatial directions in Cartesian coordinates. The density $\rho$ and the fluid velocity $v$ are defined by $\rho = \sum_i f_i$ and $\rho v = \sum_i f_i c_i$. In previous lattice-BGK models, $\tau$ was chosen to be a constant. This is applicable only for nearly incompressible fluids. In micro-flows, the local density variation is still relatively small, but the total density change, for instance the density difference between the inlet and exit of a very long channel, could be quite large. To include the dependence of viscosity on density we replace $\tau$ in Eq. (1) by $\tau' = \frac{1}{2} + \frac{1}{\rho}\left(\tau - \frac{1}{2}\right)$. Using the Chapman-Enskog multi-scale expansion technique, we obtain the Navier-Stokes equations (Eqs. (3) and (4)) in the limit of long wavelength and low Mach number, where $\nu = c_s^2 (2\tau - 1)/(2\rho)$ is the kinematic viscosity. In classical kinetic theory, the viscosity $\nu$ for a hard-sphere gas is linearly proportional to the mean free path. Similarly, we define the mean free path $l$ in the LBM as $l = a(\tau - 0.5)/\rho$, where $a$ is a constant.
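For concreteness, here is a brief NumPy sketch of the D2Q9 equilibrium distribution and the BGK collision step just described; the weights $t_i$, the lattice speed $c_s^2 = 1/3$, and the density-dependent relaxation time $\tau'$ follow the text, while the array layout and function names are our own illustrative choices.

```python
import numpy as np

# D2Q9 velocities and weights (c_s^2 = 1/3), as given in the text.
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
T = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
CS2 = 1.0 / 3.0

def f_eq(rho, v):
    """Second-order equilibrium distribution f_i^eq(rho, v).

    rho: (nx, ny) density field; v: (nx, ny, 2) velocity field.
    """
    cv = np.einsum('id,xyd->xyi', C, v)          # c_i . v
    vv = np.einsum('xyd,xyd->xy', v, v)          # v . v
    return T * rho[..., None] * (1 + cv / CS2
                                 + 0.5 * cv**2 / CS2**2
                                 - 0.5 * vv[..., None] / CS2)

def bgk_collide(f, tau):
    """One BGK collision: relax f toward f_eq using density-dependent tau'."""
    rho = f.sum(axis=-1)
    v = np.einsum('xyi,id->xyd', f, C) / rho[..., None]
    tau_prime = 0.5 + (tau - 0.5) / rho          # tau' from the text
    return f - (f - f_eq(rho, v)) / tau_prime[..., None]
```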
Our first numerical example is a micro-channel flow [2]. The flow is contained between two parallel plates separated by a distance H and driven by the pressure difference between the inlet pressure, $P_i$, and exit pressure, $P_e$. The channel length in the longitudinal direction is L. We take L = 1000, H = 10 (lattice units) in our simulations, satisfying L/H >> 1. The bounce-back boundary condition is used for the particle distribution functions at the top and bottom plates, i.e., when a particle distribution hits a wall node, the particle distribution scatters back to the fluid node opposite to its incoming direction. A pressure boundary condition is used at the inlet and the exit. The slip velocity $V_s$ at the exit of the micro-channel flow is defined through $u(y) = u_0(Y - Y^2 + V_s)$, where $u(y)$ is the velocity along the x (or flow) direction at the exit and $Y = y/H$. $u_0$ and $V_s$ can be obtained by fitting numerical results using the least squares method. This definition of the slip velocity is consistent with others [2,4]. In Fig. 1, we plot the slip velocity $V_s$ and the normalized mass flow rate $M_f = M/M_0$ as functions of the Knudsen number when the pressure ratio $\eta = P_i/P_e = 2$. The normalization factor, $M_0 = \frac{h^3 P_e}{24\nu}(\eta - 1)$, is the mass flow rate when the velocity slip is zero. To calculate the Knudsen number we have chosen a = 0.388 in order to match the simulated mass flow rate with experiments (see the theory curve in Fig. 3). Using a least squares fit to the data in Fig. 1, we obtain the slip velocity as a function of $K_n$ (Eq. (5)). If we assume that the Navier-Stokes equations are valid for the micro-flows except that the slip boundary condition $V_s$ in Eq. (5) replaces the traditional non-slip condition on the walls [4], Eqs. (3), (4) and (5) give the mass flow rate (Eq. (6)). For $\eta = 2$, this formula becomes $M_f = 1 + 24.1 K_n^2$, which agrees well with the numerical results in Fig. 1. In laminar Poiseuille flows, one usually assumes that the density variation along the channel is very small, and the pressure drop along the channel is nearly linear. In micro-channel flow, however, the ratio between the length and the width is much larger and the pressure drop is not linear. If there is no velocity slip at the walls, it has been shown [2,4] from the Navier-Stokes equation that the pressure along the channel depends nonlinearly on the dimensionless coordinate $X = x/L$. If the velocity at the boundaries is allowed to slip, the pressure drop along the channel will depend on the Knudsen number. In Fig. 2 we present the LBM simulation results for the normalized pressure deviation from a linear pressure drop, $(P - P_l)/P_e$, as a function of X for several Knudsen numbers, where $P_l = P_e + (P_i - P_e)(1 - X)$. It is seen that when $K_n \le 0.2$, $(P - P_l)/P_e$ is a positive nonlinear function of X. This agrees with the results in [4] using an engineering model. For $K_n \ge 0.2$, the LBM simulation shows that $(P - P_l)/P_e$ becomes negative, which is directly linked to the fact that the slip velocity depends on the square of $K_n$ in the LBM. For large $K_n$, the pressure can be derived from Eq. (6). The negative deviation from a linear pressure drop has not been experimentally observed before, and it would be interesting to verify this experimentally. In Fig. 3 the mass flow rates as functions of the pressure ratio $\eta$ when $K_n = 0.165$ are shown for our theory, the experimental work [2], the engineering model [4] and the LBM simulation. Our theory and the LBM simulation agree well with the experimental measurements.
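The slip-velocity extraction just described amounts to a linear least-squares fit of the exit profile $u(y) = u_0(Y - Y^2 + V_s)$. A sketch of that fit, applied to a synthetic profile since the simulated data are not reproduced here:

```python
import numpy as np

def fit_slip_velocity(y, u, H):
    """Fit u(y) = u0 * (Y - Y^2 + Vs), Y = y/H, returning (u0, Vs).

    Rewriting u = u0*(Y - Y^2) + (u0*Vs) = a*(Y - Y^2) + b makes the
    problem linear in (a, b); then u0 = a and Vs = b / a.
    """
    Y = np.asarray(y) / H
    A = np.column_stack([Y - Y**2, np.ones_like(Y)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(u), rcond=None)
    return a, b / a

# Synthetic exit profile with u0 = 1.0 and a known slip Vs = 0.05.
y = np.linspace(0.0, 10.0, 11)
u = 1.0 * (y/10 - (y/10)**2 + 0.05)
print(fit_slip_velocity(y, u, H=10.0))   # ~ (1.0, 0.05)
```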
It is noted that for large pressure ratios ($\eta \ge 1.8$), the LBM agrees reasonably well with Beskok et al. [4]. But for smaller pressure ratios, the difference increases because of the different dependence of the slip velocity on $K_n$. Our second LBM numerical simulation is the two-dimensional micro-cavity flow. The cavity size is $L_x = L_y = 40$ (lattice units). The upper wall moves with a constant velocity, $v_0$, from left to right. The other three walls are at rest, and bounce-back boundary conditions are used. To see the dependence on the Knudsen number in our simulations, we fixed the Reynolds number, $R_e = v_0 L_x/\nu = 2.4 \times 10^{-4}$, and require the Mach number to be small, $M_a = v_0/c_s \le 10^{-3}$. In Fig. 4, we show the streamlines for two different Knudsen numbers. In Fig. 5 we show the vertical positions of the vortex center and the mass flux between the bottom and the vortex center as functions of $K_n$. It can be seen that the center moves upward and the mass flow decreases with increasing Knudsen number. This occurs because the slip velocity on the upper wall causes momentum transfer to be less efficient. It has been shown [8] that the center of the vortex moves downward when the Reynolds number increases for very small $K_n$. Fig. 6 shows the pressure contours for the same parameters as in Fig. 4. Totally different pressure structures are observed for these two cases. When the Knudsen number is small, the continuum assumption is valid and the pressure contours are almost circles with centers at the left or the right corners. On the other hand, due to the slip velocity on the walls, the pressure contours become nearly straight lines at the higher Knudsen number. | 2019-04-22T13:05:33.845Z | 1998-06-11T00:00:00.000 | {
"year": 1998,
"sha1": "1616cd8527a9bb20639a93f21f5012374a9a5ad7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "492bfea2d094451f6149661908a71607ca6cb738",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264544118 | pes2o/s2orc | v3-fos-license | Fiber metal laminates for high strain rate applications with layerwise shock impedance tuning
Novel materials such as fiber-metal laminates (FMLs) have demonstrated significant potential in a variety of applications. They must contend with problems such as fatigue, creep, high-speed projectile impact, and deformation at high strain rates while in use. When employed as structural materials in aircraft, especially when exposed to shock wave impact and high velocity impact, the high strain rate characteristics of fiber-metal laminates become crucial. Shock impedance matching is a revolutionary approach used for shock-tuning the separate layers. The novelty of the current work is in developing custom shielding laminates, with in-depth analysis of the response of the shock impedance tuning of individual layers on the laminate behaviour at high strain rates. In the current study, five stackups of FMLs comprising metallic (AA 6061-T6) and fiber-reinforced polymer (FRP) plies were formulated, incorporating shock impedance matching. The fiber-polymer plies used in the FMLs include ultra-high molecular weight polyethylene (UHMWPE) and p-aramid for supplementing the impact resistance. Transmission loss functions (TL) estimated from the impedance tube experiments were used to indicate the shock tuning of the various laminates. The laminates underwent testing using a Split Hopkinson Pressure Bar (SHPB) apparatus to determine their properties at high strain rates (350 s⁻¹ to 460 s⁻¹). The variation in the shock energy (SE) absorbed by the laminates at various strain rates was analyzed as a function of the corresponding transmission loss employing regression. The dynamic stress-strain curves showed an increase in shock energy absorption at higher strain rates. The sequences SSP-IV and SSP-II showed the highest values of energy absorption as well as transmission loss.
List of symbols
$V_1$ Left face velocity (m/s); $V_2$ Right face velocity (m/s); $X_i$ Transmission loss (dB) for the ith reading; $N$ Number of transmission loss readings for the sequence; $SE_{abs,1}$ Shock energy absorption at 350 s⁻¹; $SE_{abs,2}$ Shock energy absorption at 460 s⁻¹; $\bar{X}$ Arithmetic mean of the transmission loss values for the corresponding sequence (dB).

Fiber-metal laminates (FMLs) comprising alternately layered metallic and fiber-reinforced polymer laminae are found in several applications such as aerospace, automotive, buildings, and shielding structures. Commonly utilized fiber-metal laminates include ARALL, CARALL and GLARE, with superior impact resistance and fatigue performance [1-3]. During regular operation, the structures may be subjected to spontaneous impact events like crash or collision in the case of aircraft and automobiles [4-6], while buildings may be subjected to seismic activities like earthquakes 7. Bird strikes pose a significant threat to aircraft, causing damage to the fuselage and endangering passengers 8. FMLs with shock impedance grading can provide better protection by dispersing and absorbing the energy from bird impacts. Further research and development in this area can lead to the creation of FMLs with optimized material compositions and microstructural designs to enhance their impact resistance capabilities 9. The aerospace industry is continuously looking for materials and structures that can survive high-velocity impacts, such as those brought on by damage from debris or foreign objects 10. In order to protect crucial components' structural integrity, FMLs with high strain rate characterisation can be designed to endure severe shock effects. FMLs offer the potential for increased resilience and safety in aircraft structures. By incorporating advanced materials and innovative designs, FMLs can exhibit superior damage tolerance and fracture toughness. This resilience can enable the aircraft to withstand and safely absorb impact loads, reducing the risk of catastrophic failures and enhancing passenger safety 11. Shielding structures designed for protection against ballistic, shockwave and blastwave impact suffer enormous material deformation in negligible time 12,13. One shielding application is Whipple shields, which protect satellites and spacecraft against hyper-velocity impact 14. Thus, the mechanical response of such structures under dynamic loading conditions and high strain rates is vital from the aspect of engineering design. The strain rates for ballistic and blast impacts reach 10-10⁴ s⁻¹ [15-19]. Typical construction of fiber-metal laminates includes metallic/alloy skins stacked alongside fiber-reinforced polymer laminae 6,20-23. In many works, one fiber ply type and one metallic ply type [24-26] have been used, with the order of the arrangement being arbitrary. When a higher number of fiber ply types is included, the response of the fiber-metal laminates is significantly affected by the order of the plies 27,28. In one of our recent works 29, in order to identify the ordered arrangement with the maximum Transmission Loss, shock impedance matching of fiber-metal laminates was carefully examined using computational, analytical, and experimental methods. In another recent work 30, the sequences were subjected to shockwave impact experiments using a shock tube. Although this study provided useful information on the deformation profiles and ply failure modes, the mechanical performance at high strain rates could provide further insight into the capability and quality of the stacking sequences.
High strain rate characterisation (10²-10⁴ s⁻¹) commonly uses the Split Hopkinson Pressure Bar (SHPB). Developed by Kolsky 15, the apparatus saw several enhancements to cater to a broader class of materials: metals, concrete, adhesives, composites [31-33]. Grote et al. 34 employed a SHPB made up of a set of 12.7 mm diameter steel bars with a yield strength of ∼1800 MPa (the striker bar, incident pressure bar, transmitter bar, and momentum trap). The loading pulse is transferred to the pressure bar in the form of a compression wave by the axial impact when the striker bar strikes the incident pressure bar at a high impact velocity. The specimen is deformed by the compression wave with a pressure pulse whose constant amplitude and duration are proportional to the striker bar's length. The impact velocity may be regulated by varying the air gun's pressure, since the amplitude of the incident pulse is directly related to the impact velocity. SHPB and direct impact tests have been used in tandem for several classes of materials 35,36. Richter et al. 37 deployed Digital Image Correlation (DIC), a non-contact technique, to track the strains during the SHPB's high-speed strain testing. Yang et al. 38 explored the behaviour of aramid fibre-reinforced polymer (AFRP) confined concrete subjected to high strain-rate compression at strain rates ranging from 80 to 170 s⁻¹. Kevlar CAS-415 AFRP was used to wrap cylindrical concrete samples with epoxy resin as the binder. Single-wrap, twin-wrap and three-layer-wrap samples along with bare concrete samples were tested at different strain rates. Twin-ply AFRP wrapped concrete performed better than the other samples, with a better ability to redistribute the internal forces coupled with the viscoelastic character of the hardened cement paste and time-dependent micro-crack growth. Additionally, the twin-AFRP-ply wrapped concrete showed similar ultimate strain values (∼0.033) at the different strain rates. The uncovered concrete showed disparities in its response to different strain rates. The dynamic increase factor (DIF) and the logarithmic strain rate were found by the authors to have a functional relationship. Gardner et al. 39 determined the dynamic constitutive properties of sandwich constructions built of E-glass vinyl ester facesheets using a SHPB device with a hollow transmission bar. The sandwich panels comprised Corecell™ A-series foam with a polyurea interlayer. The core consisted of three layers of A-series foams with increasing density with a polyurea interlayer. Two configurations were developed: one with the polyurea layer placed before the lightest foam, and the other where polyurea was placed after the heaviest foam. The overall thicknesses of the sandwich panels were kept constant at 4.8 mm. The foams displayed an augmented response to higher strain rates, able to absorb higher energies. Sassi et al. 40 studied how adhesively bonded glass fiber-polyester laminates behave in dynamic compression utilising Hopkinson bars, and the impact of high strain rates. The bonded laminates displayed high strain rate sensitivity, the authors discovered, with brittle fractures at the polyvinylester adhesive interfaces. Li et al. 41 studied the influence of high strain rates (up to 2051 s⁻¹) on 3-D braided composites employing split Hopkinson devices under dynamic compression on braids with various braiding angles. The dynamic characteristics improved as the strain rate increased, while the strain to failure decreased. Shear fracture, fibre breaking, interface debonding, and matrix cracking were the composites' failure modes. The amount of dynamic damage and fracture decreased when the braiding angle was raised. Sharma et al. 42 analysed the high strain rate response of glass fibre reinforced epoxy-AA2024 laminates under tension using a split Hopkinson pressure bar rig. The strain rate was calculated using a DIC method. The tensile strength significantly increased at high strain rates, according to the authors. Studies on the dynamic characterisation of fibre metal laminates have frequently used split Hopkinson pressure bar tests [43-45]. Khan and Sharma 45 assessed the high strain rate response (400-480 s⁻¹) of FMLs made of glass fiber plies and AA2024-T3 layers using a split Hopkinson bar. The rate sensitivity influences the matrix cracking and delamination among the layers. In another study on glass-fiber/AA2024-T3 FMLs, Sharma et al. 46 used a split Hopkinson bar along with a digital image correlation setup for strain measurements for high strain rate response. The authors observed that the highest strength was observed in the FML with all the glass-fiber layers stacked together, possibly attributed to fiber bridging. Zarezadeh-mehrizi et al. 47 modified FMLs containing glass fiber epoxy/AA6061-T6 by inserting a natural rubber elastomeric layer. The recent works on split Hopkinson pressure bar characterization are summarized in Table 1.
The novel aspect of the current work is the creation of specialized shielding laminates, together with a thorough analysis of the impact of shock impedance tuning for individual layers on laminate behavior at high strain rates. The FMLs comprised high-performance, ballistic-grade fiber-reinforced plies, made of aramid and UHMWPE fabrics, along with a low shock impedance, partially auxetic sheet of paperboard. AA6061-T6 skins have been used to sandwich the fiber-reinforced plies in the FMLs. To incorporate the shock impedance matching, the order and number of the core layers comprising the aramid bi-directional layer, the UHMWPE layer, and the paperboard layer were varied, with epoxy binder. Five configurations were considered for the study (refer to Fig. 2) and each configuration was assigned a Roman numeral succeeding the nomenclature SSP (Stratified Sandwiched Panels). The approach used in the study to determine how the plies' shock impedance variation affects the high strain rate response is shown in Fig. 1.
Materials
The AA6061-T6 sheets for the metallic skins, which are 0.7 mm thick, were provided by Hi-Tech Sales Corporation in Mangalore, India. The ballistic-grade materials used in the laminates, comprising the aramid BD 480 GSM woven fabric (plain weave, yarn count balanced in warp and weft directions), the UHMWPE UD 130 GSM fabric, and epoxy resin (CT/E 556 epoxy resin and CT/AH 951 polyamine hardener), were all bought from Composites Tomorrow Inc., based in Gujarat, India. The 650 GSM paperboard sheets were purchased from Vijay Papers in Karnataka, India. The provided epoxy resin had a pot life of 30 minutes, a density of 1150 kg/m³, and a mix viscosity of 1500 mPa·s. Table 2 shows the shock impedance values and densities of the constituent materials.
Fabrication process
One of the common methods for producing fibre metal laminates, compression moulding, was used to construct the various combinations 53,54. The fabrication setup is shown in Fig. 3. To increase the interfacial adhesion, the aluminium alloy AA6061-T6 surfaces were sanded with grit papers 54. The mold release agent was applied to the bottom die plate and a peel ply was placed above it. The different plies were then pre-processed, weighed for each stacking sequence, and progressively stacked on top of one another using the hand layup method. Between the layers, the premixed resin/hardener mixture (in the ratio 10:1 by weight) was uniformly coated as per the supplier-recommended fiber-to-matrix weight fractions. The upper die plate (coated with mold release agent) was then positioned above the peel ply, placed over the upward-facing surface of the distal AA6061 layer. Precisely machined spacers were placed between the upper and lower die plates to ensure uniform thickness of the stackups. Each arrangement was transferred to the cold pressing machine, pressed, and held for a dwell period of ∼25 h at room temperature to facilitate curing, as per the supplier recommendations. After curing, laminates were sent out for water jet machining to create test specimens: circular specimens of 30 mm and 100 mm diameter for the impedance tube experiments and of 10 mm diameter for the SHPB were cut by water jet machining. After the water jet cutting, the specimens of all sequences were inspected; no delamination or debonding among the plies was noticed.
Shock impedance matching
Shock impedance grading affects the intensity of the shock waves transmitted, demonstrating the efficacy of the shock shielding 55. Transmission Loss measurements were made for each of the sequences in the frequency range of 0 to 6300 Hz using impedance tube experiments. The impedance tube apparatus (make: BSWA SW) is shown in Fig. 4, and the response over the frequency range was acquired. For each sequence, two sets of specimens were subjected to the transmission loss measurements.
The key focus metric of the study, the transmission loss, denotes the amount of energy that each specimen fails to transmit, as shown in Eq. (1): TL = 10 log(…).
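Transmission loss in an impedance tube is conventionally the decibel ratio of incident to transmitted acoustic power; assuming Eq. (1) takes this standard form, a minimal computation is:

```python
import math

def transmission_loss(w_incident: float, w_transmitted: float) -> float:
    """Standard TL definition: TL = 10 * log10(W_incident / W_transmitted).

    Assumed form for Eq. (1); a higher TL means less energy transmitted.
    """
    return 10.0 * math.log10(w_incident / w_transmitted)

# A specimen transmitting 1% of the incident power blocks 20 dB.
print(transmission_loss(1.0, 0.01))   # 20.0 dB
```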
Dynamic testing using split Hopkinson pressure bar
In the current SHPB setup, the striker bar, incident bar, transmitter bar and momentum bar were made of maraging steel (elasticity modulus of 190 GPa and density ∼8000 kg/m³). The experimental setup of the split Hopkinson pressure bar is shown in Fig. 5.
When the loading device strikes the incident bar (time t = 0), a one-dimensional pressure wave travels in the direction of the specimen. At the free end, the compression wave is reflected as a tension wave. At the bar/specimen contact, this unloading wave is repeatedly reflected, with the remaining energy passing through the transmission bar. Strain gages were used to measure the stresses in the bars, and the data were sent to a data acquisition setup that includes a Wheatstone bridge for signal conditioning and a pre-amplifier to boost the voltage before it is sent to the oscilloscope.
When the incident bar is struck at an impact velocity $V_B$, an incident elastic wave travels through the bar with a velocity $c_B$, which can be calculated from Eq. (2); $E_m$ is the Young's modulus of the elastic material of the bar and $\rho_m$ is the mass density of the bar material. Figure 6 shows the test section of the bar setup. The velocities at the left and right faces are $V_1$ and $V_2$ respectively, given by Eq. (3) and Eq. (4). The average rate of axial strain in the specimen is given by Eq. (5), where $L_S$ is the length of the specimen. Equation (6) gives the engineering strain $\epsilon_E$ in the specimen. With the availability of the displacement and force data, the stresses on the input and output sides of the specimen were determined. Equation (7) gives the force at the input side ($F_1$) and Eq. (8) the force at the output side ($F_2$) of the specimen, where $A_B$ is the cross-sectional area of the bar. The average stress in the specimen is then given by Eq. (9), where $A_S$ is the cross-sectional area of the specimen.
To minimize radial inertia, the ratio of the specimen diameter to its length was maintained at ∼3.5. The specimen (from each sequence) was mounted in the test section with the facet plate facing the incident side and the distal plate facing the transmission side, as shown in Fig. 5. The free surfaces of the specimen were lubricated with grease to reduce the interfacial friction. The diameters of the incident and transmission bars were equal to 12.5 mm. To investigate the effect of high strain rates on the stress-strain response of each sequence, compression pressures of 10 bar and 30 bar were chosen (loading section setting). The striking velocity $V_0$ was noted during each trial, along with the strain and time data. The experiments were repeated for three specimens from each of the sequences. From the data, the strain-time plots and true stress-strain plots were obtained. The shock energy absorption for the different sequences was determined. The cross-sections of the tested specimens were inspected using optical microscopy (make: Olympus BX53M) and scanning electron microscopy (make: Zeiss EVO). A regression analysis was carried out between the Transmission Loss displayed by the sequences in our previous work 3 and the shock energy absorption by the respective sequences at the two strain rates. To conduct the regression analysis, the MINITAB® software was used.
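The quantities referenced as Eqs. (2)-(9) follow the standard one-dimensional two-wave SHPB reduction. The sketch below implements those conventional relations (bar wave speed, face velocities, strain rate, and average stress); it is a generic reduction under the usual sign conventions, not necessarily the authors' exact notation.

```python
import numpy as np

def shpb_reduce(eps_i, eps_r, eps_t, E_m, rho_m, L_s, A_b, A_s, dt):
    """Reduce SHPB strain-gage signals to specimen strain rate, strain, stress.

    Standard two-wave analysis: c_B = sqrt(E_m / rho_m);
    V1 = c_B*(eps_i - eps_r), V2 = c_B*eps_t; strain rate = (V1 - V2)/L_s;
    F1 = A_b*E_m*(eps_i + eps_r), F2 = A_b*E_m*eps_t;
    average stress = (F1 + F2) / (2*A_s).
    Sign conventions depend on gage polarity; compression is taken positive.
    """
    c_b = np.sqrt(E_m / rho_m)               # elastic bar wave speed
    v1 = c_b * (eps_i - eps_r)               # input (left) face velocity
    v2 = c_b * eps_t                         # output (right) face velocity
    strain_rate = (v1 - v2) / L_s
    strain = np.cumsum(strain_rate) * dt     # integrate rate over time
    f1 = A_b * E_m * (eps_i + eps_r)         # input-side force
    f2 = A_b * E_m * eps_t                   # output-side force
    stress = (f1 + f2) / (2.0 * A_s)         # average specimen stress
    return strain_rate, strain, stress
```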
Response of the FMLs to the impedance tube experiments
The results of the impedance tube experiments are shown in Fig. 7, with the variation of transmission loss with frequency for the different sequences. The sequences SSP-II and SSP-IV demonstrated the largest values of transmission loss, exhibiting more peaks than the other sequences and a constant Transmission Loss value of over 15 dB between 500 Hz and 6300 Hz. Our recent paper has more information on the shock impedance matching of FMLs 29. Table 3 shows the measured average transmission loss. The uncertainty in the transmission loss was estimated at the 95% confidence interval, as shown in Eq. (10). SSP-II showed the highest Transmission Loss, followed by SSP-IV and SSP-III. The presence and location of the paperboard ply was critical in influencing the Transmission Loss characteristics of the sequences.
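Eq. (10), the 95% confidence interval on the mean Transmission Loss, is presumably the usual Student-t interval $\bar{X} \pm t\, s/\sqrt{N}$; a sketch under that assumption, with hypothetical readings:

```python
import numpy as np
from scipy import stats

def tl_confidence_interval(tl_readings, confidence=0.95):
    """Mean TL with a Student-t half-width: X_bar +/- t * s / sqrt(N)."""
    x = np.asarray(tl_readings, dtype=float)
    n = x.size
    mean = x.mean()
    half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    return mean, half

# Hypothetical per-specimen TL readings (dB) for one sequence.
print(tl_confidence_interval([19.1, 21.4, 20.0, 20.2]))
```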
Response of the FMLs to SHPB experiments
The strain-time data for the different sequences have been plotted as shown in Fig. 8. For each sequence, the incident, reflected, and transmission strain values have been indicated. As the incident wave strikes the surface of the specimen, a portion gets transmitted through the specimen into the transmission bar; hence the strain amplitude is lower on the transmission side as measured by the strain gages. Another portion of the incident wave gets reflected into the incident bar; the sign of these strain curves is inverted owing to the reversal in the direction of shock travel, and they are expectedly weaker, with lower strain amplitude. As the striker velocity is increased (by increasing the pressure in the pressure cylinder), the overall amplitude of the strain pulses (in the incident, reflected and transmitted signals) is seen to increase, and this trend is seen throughout the sequences. The strain-time histories were compared with the results of high strain rate testing of AA7449-T7651 carried out by Mylonas et al. 57, displayed in Fig. 8f. Evidently, the strain rate used there is 1000 s⁻¹, leading to the spike in the amplitude of the strains measured for the incident, transmitted and reflected compression waves.
The true stresses and strains were calculated using Eqs. (9) and (6) respectively [58-60]; the true stress-strain variation for the different sequences has been plotted in Fig. 9. It is evident that the stress-strain curves cannot be used to compute the elasticity modulus in the SHPB tests owing to the high strain rates, which disrupt the equilibrium conditions required for the test volume 57,61,62. In all the sequences, a strain hardening effect was observed on increasing the strain rates. Strain hardening was observed in the sequences between 1.8 and 2.4 mm/mm. As AA6061-T6 is strain-hardenable, the facet and distal layers are where strain hardening occurs most frequently. The variation in the dislocation-dislocation interactions of the other constituent layers (aramid, UHMWPE and paperboard plies) in the individual laminates may be responsible for the variation in the strain hardening for the various sequences. Thus, it can be surmised that the arrangement of the constituent plies influences the response of the different sequences, and the acoustic impedance matching plays an important role in the transmittance of the compressive stresses from ply to ply [63-65]. The shock energy absorbed per unit volume was computed for all the sequences; the details are shown in Table 4. At the higher strain rate of 460 s⁻¹, the highest shock energy absorption was shown by SSP-IV, followed by SSP-II and SSP-III. When the strain rate was increased from 350 to 460 s⁻¹, the highest increase in the strain energy absorption was shown by SSP-IV (76.2%), followed by SSP-II (68.7%), SSP-III (68.4%), SSP-V (68.1%) and SSP-I (65.9%). Shock energy absorption depends on the shock attenuation capability of the individual plies in the respective laminates. Coupled with the fact that the extent of strain hardening varies across the sequences due to the ply arrangement, the sequences with the least shock transmission contribute the maximum shock energy absorption. Hence, among all the sequences, SSP-IV, SSP-II and SSP-III showed the best stress-strain response, strain hardening and shock energy absorption at high strain rates. The presence of the low acoustic impedance paperboard/epoxy ply as an intermediate layer in these sequences has positively contributed to the improved performance. The absence of an AA6061-T6 faceplate in SSP-V led to a pronounced reduction in the shock energy absorption, although it was the lightest arrangement among all the sequences.
Optical and SEM analysis of the specimens subjected to SHPB experiments
The optical micrographs of the samples put through SHPB tests are shown in Fig. 10. The sequences SSP-I, SSP-II, and SSP-IV displayed microcracks in the penultimate layers at the lower strain rate, whereas SSP-III and SSP-V displayed delamination between the paperboard and UHMWPE layers. At the higher strain rate of 460 s⁻¹, the plies were subjected to aggravated failures. SSP-I showed many microcracks in the penultimate layer, and delamination between the 2nd and 3rd plies of aramid. SSP-II showed development of microcracks in the paperboard layer. SSP-III showed an aggravated delamination and separation between the paperboard and UHMWPE layers. SSP-IV showed debonding between the paperboard and AA6061 layers. In SSP-V, the mild delamination at the lower strain rate transformed to moderate delamination at the higher strain rate, attributed to the absence of a high shock impedance (AA6061) faceplate. The scanning electron micrographs of the SHPB tested specimens (at 450 s⁻¹) are shown in Fig. 11. All of the sequences displayed microcracks in the intermediate plies. SSP-II and SSP-III displayed severe delamination between the paperboard and UHMWPE layers. SSP-IV showed a mild delamination between the UHMWPE and paperboard. In SSP-V, mild to moderate delaminations were observed in the intermediate plies in addition to the microcracks.
Regression analysis between shock impedance matching and shock energy absorption
The regression equation between the transmission loss and the shock energy absorption ($SE_{abs,1}$) is shown in Eq. (11): $SE_{abs,1} = 6.12 + 0.512 \times TL$. The regression analysis tables for the shock energy absorption at 350 s⁻¹ are shown in Tables 5 and 6, respectively.
The comparison of the Transmission Loss and the shock energy absorption data is shown in Fig. 12. The shock impedance mismatch introduced by the varied arrangement of the core layers plays an important role in shock attenuation in the sequences. SSP-II, displaying a high Transmission Loss, was found to display the second highest shock energy absorption at both of the tested high strain rates, while SSP-IV showed the second highest Transmission Loss with the highest shock energy absorption among the sequences. SSP-III showed the third highest Transmission Loss and shock energy absorption among the sequences. The remaining two sequences, SSP-I and SSP-V, although they displayed lower Transmission Losses and lower shock energy absorption compared with the other sequences, also showed close agreement between the Transmission Loss values and the shock absorption at the two strain rates. The ANOVA is shown in Table 7. A significance level of α ∼ 0.05 was taken during the analysis. The degrees of freedom (DoF), adjusted sum of squares (Adj. SS), adjusted mean squares (Adj. MS), the F-value and the P-value indicate the level of significance of the parameters. It was seen from Table 7 that P-value < α, which indicated the high level of significance for the study. The corresponding residual plots are shown in Fig. 13. The regression equation between the Transmission Loss and the shock energy absorption ($SE_{abs,2}$) is shown in Eq. (12). The regression analysis tables for the shock energy absorption at 460 s⁻¹ are shown in Tables 8 and 9, respectively. The ANOVA is shown in Table 10. It was observed from Table 10 that P-value < α, which indicated the high level of significance for the shock energy absorption and Transmission Loss. The corresponding residual plots are shown in Fig. 14.
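The fitted relations in Eqs. (11) and (12) are ordinary least-squares lines. The sketch below shows how such a fit and its $R^2$ would be computed; the (TL, SE) pairs are hypothetical placeholders, since the per-sequence tables are not reproduced here.

```python
import numpy as np

def fit_se_vs_tl(tl, se):
    """OLS fit se = b0 + b1*tl, returning (b0, b1, r_squared)."""
    tl = np.asarray(tl, dtype=float)
    se = np.asarray(se, dtype=float)
    b1, b0 = np.polyfit(tl, se, 1)           # slope, intercept
    resid = se - (b0 + b1 * tl)
    r2 = 1.0 - resid.var() / se.var()
    return b0, b1, r2

# Hypothetical per-sequence (TL in dB, shock energy absorbed) pairs.
tl = [14.5, 20.17, 18.68, 19.29, 13.9]
se = [13.2, 16.5, 15.8, 16.1, 13.5]
print(fit_se_vs_tl(tl, se))
```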
Conclusion
Five configurations of fiber-metal laminates comprising AA6061-T6 skins with aramid, UHMWPE, and paperboard core layers were experimentally characterized for Transmission Loss response on an impedance tube and high strain rate response on a split Hopkinson pressure bar. Based on the behaviour of the different sequences, the following conclusions were made:
• The shock impedance tuning influences the Transmission Loss functions of the sequences. Among the five sequences, SSP-II (20.17 ± 2.63 dB), SSP-IV (19.29 ± 2.72 dB), and SSP-III (18.68 ± 2.69 dB) showed the highest values of the average Transmission Loss. Thus, the location of the low impedance paperboard ply in the stackup minimized the transmitted (acoustic) energy in the sequences.
• The dynamic stress-strain curves display a marked rise at higher strain rates. The failure strains were found to reduce with increase in the strain rate. The SSPs offered enhanced capability to absorb shock energies.
• The sequence SSP-IV, followed by SSP-II and SSP-III, displayed the highest energy absorption at the high strain rates of 350 s⁻¹ to 460 s⁻¹. The addition of a low shock impedance ply as an intermediate ply assists in improving the strain rate sensitivity of fiber-metal laminates. The absence of a metallic ply as the facing layer severely affected the response of SSP-V, which backs the role of metals and alloys as prime facing materials in hybrid laminates.
• The primary failure modes in the laminates comprised micro-cracks, debonding and delamination among plies. The delamination was predominant at the paperboard ply interface, as seen in the sequences SSP-II, SSP-III, SSP-IV and SSP-V.
• The regression analysis between the shock energy absorption at the two high strain rates and the transmission loss displayed a high level of significance (P-value < α = 0.05). The regression coefficients were obtained as R² = 0.77 at 350 s⁻¹ and R² = 0.71 at 460 s⁻¹. The shock energy absorption was expressed as a function of transmission loss for the respective strain rates.
Future advancements in manufacturing methods, like automated layup methods and additive manufacturing, may significantly improve the production of FMLs. These developments may reduce costs, increase manufacturing effectiveness, and make it possible to create intricate FML structures with specific features. The reliability and safety of aircraft structures can be greatly increased through multifunctional integration such as that covered in this research.
Figure 1. Methodology for studying the influence of shock impedance matching on high strain rate response of laminates.
Figure 2. Representation of the layered arrangements (SSPs) with dimensions.
Figure 4. Impedance tube setup for transmission loss measurement.
Figure 5. Experimental setup of the split Hopkinson pressure bar.
Figure 6. Test section of the split Hopkinson pressure bar 56.
Figure 7. Transmission loss versus frequency using the numerical model for sequences SSP-I to SSP-V.
Figure 12. Comparison of transmission loss of the sequences with the shock energy absorption 29.
Figure 13. Residual plots for shock energy absorption at 350 s⁻¹ and transmission loss.
Figure 14. Residual plots for shock energy absorption at 460 s⁻¹ and transmission loss.
Table 1. Recent research on high strain rate characterization using the split Hopkinson pressure bar.
Table 3. Transmission loss functions for the different stacking sequences.
Table 4. Details of shock energy absorbed at different strain rates.
Table 5. Regression analysis table for shock energy absorption at 350 s⁻¹ and transmission loss: coefficients.
Table 6. Regression analysis table for shock energy absorption at 350 s⁻¹ and transmission loss: model summary.
Table 7. Analysis of variance table for shock energy absorption at 350 s⁻¹ and transmission loss.
Table 8. Regression analysis table for shock energy absorption at 460 s⁻¹ and transmission loss: coefficients.
Table 9. Regression analysis table for shock energy absorption at 460 s⁻¹ and transmission loss: model summary.
Table 10. Analysis of variance table for shock energy absorption at 460 s⁻¹ and transmission loss. | 2023-10-29T06:16:57.550Z | 2023-10-27T00:00:00.000 | {
"year": 2023,
"sha1": "a32362a30ae2e70a1865b89348039962b7daa022",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "75033ad1029ca2d59b0dd59a0d2b1438a5f1be83",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199002927 | pes2o/s2orc | v3-fos-license | The Design and Application of FEU: Automation Monitoring Unit for 5G base stations
In order to realize the co-construction, sharing, and intensive management of millions of stations across the network, and to support 5G going into operation efficiently, China Tower Group has developed a new-generation intensive, distributed operation and maintenance monitoring platform for the mobile base stations inherited from the three major telecom operators. Based on the development and application of this platform, the FEU (an intelligent power and environment monitoring unit) is proposed, which uses a dynamic shared library to embed the edge gateway and realize the edge computing capability of China Tower's intelligent operation and maintenance platform. It effectively solves the normalized monitoring of heterogeneous FSUs and massive numbers of sites, builds intensive operation and maintenance capability for China Tower, and lays a solid foundation for the tower-based IoT [1, 2] expansion business.
Introduction
As vital components of a communication system, mobile base stations provide the system interface and wireless functions to mobile terminals. In recent years, with the rapid development of communication technology, especially 4G networks, there has been a great increase in the number of mobile base stations. By 2020, the 5G network is planned to go into commercial use, which brings greater demand for constructing more mobile base stations; however, our base stations are characterized by decentralization, rugged environments and unstable power supplies. For the development of mobile communication, it is even more important to monitor the power and environment of base stations in real time. This depends on the cooperation of local FSUs, a centralized O&M monitoring platform and the operations team. Among these, the centralized O&M monitoring platform is the key to enhancing the stability, quality and efficiency of all the base stations.
The Current Situation of Power Environment Monitoring Platform
The operators usually use a three-tier power supply and environment monitoring system, deployed according to the hierarchy of group company, provincial company and city company. Over 95% of the base stations connect by wire (a few use wireless GPSK), and most of them use the method of central analysis [3]. After China Tower was established, it took over all the base stations from the three main Chinese operators. As the number of base stations reaches 2 million, located across all 31 provinces of China, China Tower has built a core R&D team and cooperates with equipment vendors to quickly achieve production support capability. The power supply monitoring system uses a centralized architecture, and the network management system is deployed in the headquarters with distributed nodes for computation. Transmission is wireless (mainly 3G and 4G). The signal data, performance data and remote control of base stations directly access the FSU. It adopts the end-analysis method [4,5] and no longer accesses the OMC directly.
As FSU equipment comes in a large number of types, their functional interfaces vary considerably. The building of the 5G network and the development of the IoT accelerate the extension of the types and number of such equipment, which places serious requirements on the compatibility of the FSU system. Frequent upgrading is necessary for the operation and maintenance of base stations. Because the vendors vary in their techniques, it is complicated to reform, upgrade and jointly adjust their interfaces. The loose coupling between the FSU and the SC platform makes problems hard to locate, and makes it difficult for the platform to control key functions such as registering FSUs, reporting alarms, collecting performance data, remote control and so on [3].
The Standardized Design and Application of the Intensive Distributed Platform Based on the FSU Interface
At the beginning, China Tower integrated the power supply and environment monitoring standards of China Telecom, China Mobile and China Unicom and established a unified monitoring standard and a cross-platform interface technical proposal [2]. After investigation and communication with 16 main domestic FSU vendors, China Tower reformed and upgraded their B interfaces for power supply and environment monitoring. Then, after extensive tests of the platform in the experimental network and at typical base stations, the architecture of the automatic, intensive and self-adaptive monitoring platform [2,6] was formed, as shown in Figure 1. In this architecture, the FSU works as a basic element of the whole monitoring network. By installing an embedded system and applications into the terminal automatic unit equipment, it can manage and process the monitoring data. The main business functions of the FSU are controlling the collection of data, interacting with the superior monitoring platform and, according to the specific hardware, making an integrated design of hardware and software. As the types and system software vary among FSU equipment, although there is a unified standard, the monitoring centre finds it difficult to maintain, analyse and locate the collected data, especially alarm data, because the data-processing procedures vary [7]. If the FSU software needs to be upgraded, each type of equipment needs its own application development, which causes low flexibility and high cost.
The Automation of FEU Edge Node Computing
As the 5G network arrives, base stations expand and IoT techniques develop [4,8], more and more sensors are put into monitoring, which causes explosive growth in the capacity and difficulty of platform operation. More than ten types of FSU equipment, numbering in the millions, need to upgrade their interfaces. It is quite a difficult job to update versions across different areas and vendors, which makes system updating a huge amount of work and leaves the software lacking robustness. Therefore, it is necessary to reform and unify this system. The iTower core R&D team of China Tower spent more than three months analysing the structure of the platform software [2], the business processes, the function of edge node computing, the core chips of the FSUs and the algorithms of the embedded systems [9,10]. Combining this with the functional specification of the intensive monitoring platform, the team designed an automatic embedded access unit, the FEU, which moves the terminal computing function into the FSU operating system and makes it possible for each base station to compute as a distributed unit. Based on this, China Tower planned to reform the access terminals and started the FSU automation project. The structure after the reform is shown in Figure 2; the FEU can report alarms, store data as history and so on. The wireless network is connected by the FSU. When the connection status (online/offline) of the VPN changes, the FSU must notify the FEU with all the registration information, such as the VPN address, IMSI, network standard and so on. As soon as the FEU judges that a heartbeat timeout has occurred, it informs the FSU, and the FSU handles it following the triple-reboot standard. The process is shown in Figure 3. The FEU is provided as a dynamic shared library to the FSU vendors. As each vendor uses a different CPU chip, the FEU needs to be compiled according to each vendor's cross-compilation environment. The FSU software can load the dynamic library via feu.h and libfeu.so, and it can then interact with the monitoring platform after starting the FEU function in the operation process. The FEU API consists of four parts: (1) configuration management, which sets the parameters for running the SDK via a configuration file, including the storage space offered to the SDK, the space for IPC pictures, the basic information of the FSU and its equipment, the information of the collecting points and so on; (2) call-back functions, which include modifying FTP service parameters and control orders, reporting heartbeat timeouts, getting the FSU version, getting the status of the VPN, correcting the clock, obtaining system resources and so on; (3) control of SDK running, which can start or stop the SDK function, allocate initial memory for the FSU, load the configuration parameters, create instances and so on; and (4) the business interaction management function, which is the core module of the FEU, receiving the business monitoring data from the FSU, including real-time remote-measurement data, real-time remote-signal data related to remote measurement, real-time remote-signal data, real-time remote-control data and submit functions, notification of the VPN status, getting the registration status of the FEU and so on. After the platform issues a remote control order, the FEU distributes it to the FSU through a call-back function; the FSU operates the controller and returns the result to the FEU, and the FEU then feeds it back to the platform.
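As an illustration of the dynamic-shared-library loading pattern described above (and only of the pattern: the function names feu_init, feu_start and feu_report_telemetry below are hypothetical stand-ins, since the real feu.h API is vendor-internal), an FSU-side process could bind libfeu.so like this:

```python
import ctypes

# Load the FEU dynamic shared library (path and symbol names hypothetical).
feu = ctypes.CDLL("./libfeu.so")

# Declare assumed signatures; the real feu.h defines the actual API.
feu.feu_init.argtypes = [ctypes.c_char_p]      # path to configuration file
feu.feu_init.restype = ctypes.c_int
feu.feu_start.restype = ctypes.c_int           # start the SDK function
feu.feu_report_telemetry.argtypes = [ctypes.c_int, ctypes.c_double]
feu.feu_report_telemetry.restype = ctypes.c_int

if feu.feu_init(b"/etc/feu/feu.conf") == 0 and feu.feu_start() == 0:
    # Report a real-time remote-measurement value for collecting point 42.
    feu.feu_report_telemetry(42, 53.7)
```
| 2019-08-02T11:23:38.534Z | 2019-07-01T00:00:00.000 | {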
"year": 2019,
"sha1": "20f9d605d31c13a2a57ba32753cf90ef37fe5cf9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1267/1/012066",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7e51ae0973f730ed9dbbae6cb0ef6ca100072fe9",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
4714921 | pes2o/s2orc | v3-fos-license | Early Diagnosis of Dementia from Clinical Data by Machine Learning Techniques
Dementia is the most prevalent degenerative disease in seniors, in which progression can be prevented or delayed by early diagnosis. In this study, we proposed a two-layer model, inspired by the method used in dementia support centers, for the early diagnosis of dementia using machine learning techniques. Data were collected from patients who received dementia screening from 2008 to 2013 at the Gangbuk-Gu center for dementia in the Republic of Korea. The data consisted of the patient's gender, age, education, the Mini-Mental State Examination in the Korean version of the CERAD Assessment Packet (MMSE-KC) for the dementia screening test, and the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K) for the precise dementia test. In the proposed model, MMSE-KC data are initially classified into normal and abnormal. In the second stage, CERAD-K data are used to classify dementia and mild cognitive impairment. The performance of each algorithm is compared across Naive Bayes, Bayes Network, Bagging, Logistic Regression, Random Forest, Support Vector Machine (SVM) and Multilayer Perceptron (MLP) using precision, recall and F-measure. Comparing the F-measure values for normal, mild cognitive impairment (MCI), and dementia, the MLP was highest in the F-measure for normal with 0.97, while the SVM was highest for MCI and dementia with 0.739. Using the proposed early diagnosis model for dementia reduces the time and economic burden and can help simplify the diagnosis method for dementia.
Introduction
Quality of life has increased with the development of medical technology, and as the average human lifespan increases the senior population is growing. Additionally, the pace of aging is accelerating. Countries today are facing an aging society, which poses many changes and challenges for society [1,2]. In particular, the number of dementia patients is increasing because of the increase of the senior population. Dementia is the most prevalent degenerative disease in seniors. There are 47.5 million people living with dementia around the world, a majority of whom (58%) live in middle- and low-income countries. Each year brings 7.7 million new cases of dementia [1]. The number of dementia patients is expected to more than triple by 2050. As the number of dementia patients increases dramatically, the socioeconomic, psychological, physical, and economic burdens on dependents' families are also increasing [2].
Dementia can be sorted into dementia caused by Alzheimer's disease, cerebrovascular dementia, and dementia caused by hypothyroidism, benign brain tumors, etc. Alzheimer's accounts for about 60% to 70% of dementia patients; it is associated with aging, family history, and depression. However, early diagnosis of Alzheimer's disease can serve to delay the progression of dementia. Cerebrovascular dementia, which affects about 20% to 30% of demented patients, is caused by diseases such as hypertension, heart disease, diabetes, arteriosclerosis, cerebral hemorrhage, and cerebral infarction. Cerebrovascular dementia can be prevented through risk factor management, and can be treated by medicine. Other dementias can be treated by surgery, such as removing a benign brain tumor, or by treating the underlying condition, such as hypothyroidism.
Previous research on dementia was focused on treatment and care after the onset of the disease.However, as mentioned before, early diagnosis may delay the progression of dementia [3].
Generally, there are three stages to dementia diagnosis. The first stage is a screening test for cognitive ability using the MMSE-KC (Mini-Mental State Examination in the Korean version of the CERAD Assessment Packet). The second stage involves performing a neuropsychological assessment using CERAD-K (the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease) for those who are not diagnosed as normal in the screening test. The final stage is to diagnose (R/O) dementia or mild cognitive impairment (MCI) by doctor consultation and carer interview. After the third stage, suspected patients are definitively diagnosed using MRI or CT and blood tests in hospital. As a result, patients are classified into categories of normal, MCI, and dementia.
In this paper, we propose a two-layer model for the early diagnosis of dementia, inspired by the diagnosis approach used in dementia support centers and using machine learning methods.The first layer is a screening test to classify subjects as normal or abnormal, while the second layer is close examination, classifying cases as MCI or dementia.
In the first stage, data preprocessing is performed based on the MMSE-KC data. The next step is to select the required features. Once the feature selection is completed, the data are learned using the selected features and classified into normal and cognitive-decline groups. Thus, the first stage identifies the normal group. In the second stage, CERAD-K data are learned, using machine learning algorithms, to classify MCI and dementia.
Therefore, the structure of the model is similar to the existing dementia screening method, and its effect is to simplify the dementia screening process.
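A minimal scikit-learn sketch of this two-layer pipeline: the first classifier screens MMSE-KC records into normal/abnormal, and a second classifier splits the flagged cases into MCI and dementia from their CERAD-K features. The choice of MLP and SVM here is illustrative, standing in for the seven algorithms compared in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

class TwoLayerDementiaModel:
    """Layer 1 screens MMSE-KC into normal/abnormal; layer 2 splits
    abnormal cases into MCI vs. dementia from CERAD-K features."""

    def __init__(self):
        self.screen = MLPClassifier(max_iter=2000, random_state=0)
        self.precise = SVC(random_state=0)

    def fit(self, X_mmse, X_cerad, y):
        # Label coding assumed here: 0 = normal, 1 = MCI, 2 = dementia.
        y = np.asarray(y)
        self.screen.fit(X_mmse, (y > 0).astype(int))   # normal vs. abnormal
        abnormal = y > 0
        self.precise.fit(np.asarray(X_cerad)[abnormal], y[abnormal])
        return self

    def predict(self, X_mmse, X_cerad):
        out = np.zeros(len(X_mmse), dtype=int)         # default: normal
        flagged = self.screen.predict(X_mmse) == 1     # failed the screening
        if flagged.any():
            out[flagged] = self.precise.predict(np.asarray(X_cerad)[flagged])
        return out
```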
The data were collected from patients who visited and were tested at the Gangbuk-Gu center for dementia in Seoul, Republic of Korea. We collected patient information such as age, gender, and education, and test results using MMSE-KC and CERAD-K. To these data we applied machine learning techniques, which are useful for data analysis and are used in various domains. We used supervised learning algorithms such as Support Vector Machine (SVM), Naive Bayes, Multilayer Perceptron (MLP), Bayesian Network, Bagging, Logistic Regression, and Random Forest, evaluated using F-measure, precision and recall.
We initially examined the influence of each feature through feature selection using chi-squared and information gain algorithms. As a result, MLP, SVM and Logistic Regression showed the highest F-measure values for normal, MCI, and dementia, respectively. This paper is organized as follows: Section 2 explains the existing diagnosis methods for dementia; Section 3 explains the architecture of the proposed prediction model for early diagnosis of dementia; and Section 4 explains the results and discussion. Finally, Section 5 presents the conclusions.
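The chi-squared and information-gain feature selection mentioned above can be reproduced with scikit-learn; the feature columns below (age, education, MMSE total) are illustrative stand-ins for the actual feature set.

```python
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif

def rank_features(X, y, names):
    """Score features by chi-squared and information gain (mutual information)."""
    chi_scores, _ = chi2(X, y)                  # requires non-negative features
    ig_scores = mutual_info_classif(X, y, random_state=0)
    return sorted(zip(names, chi_scores, ig_scores),
                  key=lambda t: t[1], reverse=True)

# Hypothetical data: columns are age, education (years), MMSE total score.
X = np.array([[72, 6, 21], [68, 12, 28], [80, 3, 15],
              [75, 9, 24], [70, 11, 27], [82, 4, 16]])
y = np.array([1, 0, 2, 1, 0, 2])                # 0 normal, 1 MCI, 2 dementia
for name, chi_s, ig_s in rank_features(X, y, ["age", "education", "mmse"]):
    print(f"{name}: chi2={chi_s:.2f}, info_gain={ig_s:.3f}")
```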
Related Works
Dementia, an illness of the brain, attacks cognitive activities such as memory, rationality, and thought. It is caused either by old age or traumatic injury, with approximately 60-70% of cases attributable to Alzheimer's disease [4]. Dementia increases in severity the longer it goes undiagnosed. The process of diagnosis involves three steps: the first involves consulting a physician; the second consists of completing an array of neuropsychological tests; the third involves an MRI scan [5]. This paper addresses the early diagnosis of dementia by means of neuropsychological testing in tandem with demographic information. Commonly used neuropsychological measures include the Mini-Mental State Examination (MMSE), the Consortium to Establish a Registry for Alzheimer's Disease (CERAD), the Blessed Orientation-Memory-Concentration Test (BOMC), the Montreal Cognitive Assessment (MoCA), a brief informant interview to detect dementia (AD8), and the General Practitioner Assessment of Cognition (GPCOG), with each presenting certain advantages and limitations. MMSE and CERAD are currently most used, since they can be administered regardless of the subject's gender, education, culture or religion [5-8].
MMSE-KC (Mini-Mental State Examination in the Korean version of the CERAD Assessment Packet) is used to screen and measure impairment of cognitive function. The MMSE tests and scores six domains: (1) orientation, (2) registration, (3) attention and calculation, (4) recall, (5) language and (6) constructional ability [9,10]. Tables 1 and 2 show the contents of the neuropsychological assessment tests used in the screening test (MMSE-KC) and the precise examination (CERAD-K) used in this paper.
Category: Description
Word fluency: Enumerate in one minute as many instances of "animal" as possible
Boston naming: Respond to the name of the picture shown

CERAD-K (the Korean version of the CERAD neuropsychological assessment battery) is mainly used for the close examination. CERAD began with researchers at sixteen major Alzheimer's research centers in the United States and was developed for the standard diagnosis and evaluation of Alzheimer's patients [11,12]. CERAD-K can examine areas in greater depth than the MMSE, such as language fluency, verbal memory ability, time span configuration ability, and depression.
Because analysis and decision-making about the results of such tests depend on the judgment of the psychologist (and thus human error cannot be avoided), machine-based analysis and data mining approaches have been widely used to alleviate inconsistencies. In this paper, machine learning algorithms are explored to determine whether the analysis of neuropsychological and demographic data can be automated for the early diagnosis of dementia. According to Chen and Herskovits [13], in their study of various statistical and machine learning methods, a Bayesian-network classifier and an SVM performed best in assessing participants afflicted by little or no dementia. In a study conducted by Joshi et al. [4], machine learning and neural network methods were used to classify dementia states and improve accuracy over current dementia screening tools, the MMSE and the Functional Activities Questionnaire. The findings showed that accuracy can be optimized by combining both tests with machine learning and neural network methods.
Trambaiolli and Lorena [3] previously used electroencephalography (EEG) data to classify patients with normal cognition and Alzheimer's or MCI by learning the EEG patterns of Alzheimer's patients with the SVM algorithm. EEG epochs showed a high accuracy (79.9%), and the SVM result was about 87%. Williams and Weakley [14] compared the CDR (Clinical Dementia Rating) score with methods of screening dementia using Naive Bayes, Decision Tree, Neural Network, and SVM. In evaluating the severity of dementia, Naive Bayes was the most accurate and SVM had the lowest accuracy. Cho and Chen [15] proposed a hierarchical double-layer structure for the early diagnosis of dementia: a model that predicts early diagnosis of dementia using a Bayesian network in the top layer after diagnostic prediction with FCM and PNN algorithms in the base layer, applied to cognitive tests such as MMSE and CERAD. In this model, the accuracy of FCM and PNN was 74% and 69%, respectively, but MCI and dementia were not well separated when comparing normal, MCI, and dementia. Shankle and Mani [16] performed CDR prediction using machine learning methods and electronic medical records; Naive Bayes had the highest accuracy, while the other algorithms were lower but still about 70% accurate.
The diagnosis of dementia consists in large part of assessing different cognitive abilities. As such, physicians frequently interpret test results in conflicting ways, which represents a major impediment to attaining high accuracy with machine learning algorithms in the absence of a specified model. In contrast to the aforementioned studies, we advance a two-tiered hierarchical approach for evaluating and distinguishing between normal, MCI, and early dementia. This approach is derived from the dementia support center's diagnostic method (a combination of cognitive screening, neuropsychological evaluation, and early diagnosis). In this research we aim to use neuropsychological and demographic information to predict normal, MCI, and dementia within our proposed model by applying seven frequently used machine learning models: Naive Bayes, Bayes Network, Bagging, Logistic Regression, Random Forest, SVM, and MLP. This method offers diagnostics that are at once intuitive and far-reaching.
Architecture of the Proposed Model
In this paper, we propose a model that learns data using machine learning algorithms and classifies subjects into normal, MCI, and dementia. The proposed model is a two-level hierarchical model similar to the dementia diagnosis method used in dementia support centers. The structure of the model is as follows: in the first stage, we separate a normal group from a cognitive decline group; in the second stage, we separate an MCI group from a dementia group.
In the first stage, data preprocessing is performed on the MMSE-KC data. The preprocessing step removes missing or incorrectly entered data. In addition, because differences in the data range of each attribute may affect machine learning algorithms, normalization is performed to set the range of the data to 0 to 1. The next step is to see how each feature influences the classification result through feature selection and to select the required features. Once feature selection is complete, a model is trained on the selected features and subjects are classified into normal and cognitive decline groups; the first stage thus identifies the normal group.
In the second stage, CERAD-K data are learned to classify MCI and dementia. The preprocessing and feature selection processes are the same as in the first stage. After data preprocessing, normalization, and feature selection are complete, machine learning algorithms are used to classify MCI and dementia.
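To make the two-stage structure concrete, the following minimal sketch shows one way such a hierarchy could be implemented in Python with scikit-learn. It is illustrative only: the feature matrices, the class encodings, and the per-stage classifier choices are assumptions for demonstration, not the implementation used in this study.

```python
# Illustrative sketch of the proposed two-stage hierarchy (assumed
# classifiers and encodings; not the implementation used in the study).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def fit_two_stage(X_mmse, y_screen, X_cerad, y_close):
    # Stage 1: normal (0) vs. cognitive decline (1), on MMSE-KC features.
    stage1 = MLPClassifier(max_iter=500).fit(X_mmse, y_screen)
    # Stage 2: MCI (0) vs. dementia (1), on CERAD-K (plus MMSE-KC) features.
    stage2 = SVC().fit(X_cerad, y_close)
    return stage1, stage2

def predict_two_stage(stage1, stage2, x_mmse, x_cerad):
    # Subjects classified as normal exit the pipeline after stage 1.
    if stage1.predict(np.asarray(x_mmse).reshape(1, -1))[0] == 0:
        return "normal"
    # Remaining subjects receive the close-examination classification.
    label = stage2.predict(np.asarray(x_cerad).reshape(1, -1))[0]
    return "dementia" if label == 1 else "MCI"
```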
In this paper, performance evaluation was carried out using various algorithms for data learning and classification model generation. The proposed model is shown in Figure 1.
Data Collection
The data used in the study were collected from people who visited the dementia center in Gangbuk-Gu, Seoul, from 2008 to 2013 and received a screening test.
The data collection procedure is as follows. First, MMSE-KC is used to examine the patient for cognitive decline. If the result indicates cognitive decline, CERAD-K is then conducted. After this close examination, the patient consults with a physician, and it is decided whether to seek a confirmatory diagnosis at a hospital or to participate in a program run by the center.
Two types of data were used in this study: the collected data consist of 14 attributes for Phase 1 and 31 attributes for Phase 2. The data used in Phase 1 were gender, age, education, and MMSE-KC scores, with the MMSE-KC result (normal, cognitive decline) used as the class label for classification. When performing neuropsychological testing, the age, education level, physical condition, and basic cognitive ability of the subject should be considered, so in addition to MMSE-KC score data, demographic data such as the patient's sex, age, and education level were collected. The second data set, used in Phase 2, consists of the subjects not classified as normal in Phase 1, with CERAD-K data added. In Phase 2, we used data from patients whose diagnosis (dementia or high risk of dementia) was confirmed at a hospital after the final examination.
The Phase 1 data consisted of a total of 14,000 patients: 9799 in the normal group and 4201 in the cognitive decline group. The mean age of the patients was 73 years (72 years in the normal group and 74 years in the cognitive decline group). The mean MMSE-KC score was 25 points for the normal group and 18 points for the cognitive decline group, a difference of about 7 points; the overall average was 23 points.
In Phase 2, the average age of all patients was 76 years, and the average age gap between MCI and dementia patients was 5 years. When measuring cognitive ability, the patient's education level also affected the results, but MCI and dementia did not differ much in this respect. The mean MMSE-KC score was 17 points out of 30 overall, with an average of 20 points for MCI patients and 15 points for dementia patients, slightly below the overall average. Details are shown in Tables 3 and 4.
Preprocessing
The collected data include records from patients who have problems with hearing or vision or who could not be examined due to anxiety, as well as data lost through errors or omissions during collection. Because only a few machine learning algorithms can ignore missing values during training (e.g., Bayes, Neural Network) and most algorithms are affected by such gaps, we removed missing values and errors through data preprocessing. Data preprocessing is the process of detecting incomplete, incorrect, inaccurate, or irrelevant parts of a record set, table, or database, and then replacing, modifying, or deleting them. Some missing values in our data may come from patients who did not properly understand the test; results for such patients were treated as cognitive decline based on family interviews and the other parts of the test results. Thus, if the data type was categorical, we changed missing values to abnormal; if numeric, to 0.
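As a minimal sketch of these missing-value rules (the column names below are hypothetical placeholders, since the study's actual attribute names are not reproduced here):

```python
# Hedged sketch of the missing-value rules described above; column names
# are hypothetical placeholders, not the study's actual attributes.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", None, "M"],            # categorical attribute
    "education_years": [6.0, None, 12.0],  # numeric attribute
    "recall_score": [3.0, 2.0, None],      # numeric attribute
})

# Categorical gaps are recoded as "abnormal"; numeric gaps as 0.
for col in df.columns:
    if df[col].dtype == object:
        df[col] = df[col].fillna("abnormal")
    else:
        df[col] = df[col].fillna(0)
print(df)
```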
Data normalization transforms values with different ranges to a common range, preventing an attribute with a larger range of values from receiving greater weight than attributes with smaller ranges.
For numeric data, there are four common approaches to normalization: first, converting to a range between 0 and 1; second, using a value between −1 and 1; third, standardizing using the mean and standard deviation of the attribute; and fourth, normalizing the data using the log value.
In the case of categorical data, algorithms using neural networks and statistical methods cannot process categorical values directly, so they are converted to binary data or one-hot encodings [17]. For example, with one-hot encoding, if the color attribute has three categories such as red, blue, and green, they are converted to the format (100), (010), (001). In this study, we use maximum-minimum normalization for numeric data and one-hot encoding for categorical data. Equation (1) shows the maximum-minimum normalization formula:

x' = (x − x_min) / (x_max − x_min). (1)
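The two steps can be sketched as follows (illustrative numeric values; the red/blue/green example mirrors the one in the text):

```python
# Sketch of maximum-minimum normalization (Equation (1)) and one-hot
# encoding; the numeric values here are invented for illustration.
import pandas as pd

df = pd.DataFrame({"age": [65, 73, 88],
                   "mmse_total": [28, 21, 14],
                   "color": ["red", "blue", "green"]})

# Equation (1): x' = (x - x_min) / (x_max - x_min), mapping values to [0, 1].
for col in ["age", "mmse_total"]:
    cmin, cmax = df[col].min(), df[col].max()
    df[col] = (df[col] - cmin) / (cmax - cmin)

# One-hot encoding: red/blue/green become (1,0,0), (0,1,0), (0,0,1).
df = pd.get_dummies(df, columns=["color"])
print(df)
```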
Feature Selection
Feature selection is a method of extracting the most relevant features from data with a certain pattern. It can be used to remove irrelevant data, eliminate redundancy, and identify which features contribute to a model with high accuracy. When creating a model, it is important to use as few features as possible, and feature selection allows the number of features to be reduced. In this paper, feature selection is performed using the chi-squared statistic and information gain.
The chi-squared test is used to analyze relationships between categorical variables and is useful for studying, for example, regional and political preferences or the relationship between food and obesity. The chi-squared test can be used in two broad contexts: as a goodness-of-fit test, to check whether observed data follow a predicted distribution, and as an independence test, to check whether two random variables are independent of each other (independence meaning that there is no cause-and-effect relationship between them). Equation (2) gives the chi-squared statistic used for feature selection:

χ² = Σ_i (O_i − E_i)² / E_i, (2)

where O_i is the observed frequency and E_i the expected frequency in cell i.
Information gain measures how well the data are separated when an attribute is selected: it is the entropy of the parent node minus the weighted entropy of the child nodes. Equation (3) gives the information gain when attribute A is selected:

Gain(A) = Entropy(S) − Σ_v (|S_v| / |S|) × Entropy(S_v), (3)

where S is the set of samples at the parent node and S_v the subset taking value v of attribute A. The larger the value of Gain(A), the greater the information gain and the better the discriminative power; see the study conducted by Garrard et al. [18].
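As a rough sketch of how features can be scored by both criteria (using scikit-learn, where mutual information serves as a close stand-in for information gain; this tooling choice is an assumption for illustration, not necessarily what was used in the study):

```python
# Hedged sketch of feature scoring with chi-squared and an information-
# gain-like criterion; synthetic data stand in for the MMSE-KC attributes.
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative features

chi2_scores, _ = chi2(X, y)                            # Equation (2)
ig_scores = mutual_info_classif(X, y, random_state=0)  # Equation (3) analogue

print("chi-squared ranking:", sorted(range(13), key=lambda i: -chi2_scores[i]))
print("information-gain ranking:", sorted(range(13), key=lambda i: -ig_scores[i]))
```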
In this study, feature selection was performed in two distinct phases, in which Phase 1 dealt with MMSE-KC data, while Phase 2 selected features among data from both MMSE-KC and CERAD-K.
Phase 1
As a result of feature selection over the thirteen attributes in Phase 1, both algorithms (chi-squared and information gain) showed almost identical results. Table 5 shows the results, in the order of location, timing, order execution, and memory recall. Feature selection on the Phase 2 data likewise gave the same results for both algorithms. The most influential feature appeared to be temporal order, followed by memory function (Trial 1), place order, and a language fluency test (Memory, Word Fluency, Boston Naming, Visuospatial). Among the MMSE-KC features in Phase 2, only time and location were highly ranked; the rest were all at the bottom. Details are given in Table 6, where the darker shading indicates the CERAD-K data added in Phase 2.
Classifiers
Of the various uses to which machine learning is put, data mining is among the most important. People are liable to err when analyzing data or seeking to discern relationships between features, and these mistakes interfere with problem solving. Such problems are often well suited to machine learning, which can thereby improve system efficiency and design. With machine learning algorithms, each instance within a dataset is represented by a consistent set of features, which may be continuous, categorical, or binary. Supervised learning describes cases in which instances are provided with known labels for the corresponding outputs, while unsupervised learning involves no labeling of instances. A great many applications of machine learning involve supervised tasks, so we focus here on the requisite techniques for accomplishing this labeling [19].
Support Vector Machine
The Support Vector Machine (SVM) is a machine learning method involving a mapping model for analyzing data and recognizing patterns; classification and regression analysis are its primary uses. When provided with a set of data falling into one of two categories, an SVM algorithm constructs a non-probabilistic binary linear classification model by which it can ascertain the correct category in which to place new data. The classification model it produces is expressed as a boundary within the space of the mapped data, and the SVM algorithm establishes the boundary with the widest margin. It is useful for both linear and nonlinear classification; for nonlinear classification, the given data must be mapped onto a high-dimensional feature space, and to do this efficiently a kernel trick may be used [20,21].
Naive Bayes
Naive Bayes is a probabilistic classifier applying Bayes' theorem, and it is one of the most widely used classification methods, for example in text and document classification [22]. Naive Bayes denotes a family of algorithms based on a common principle rather than a single training algorithm. It is trained very efficiently in a supervised learning setting and estimates parameters using maximum likelihood estimation. Naive Bayes classification combines this probability model with a decision rule, selecting the class with the maximum probability.
Random Forest
Random forest is an ensemble method that learns randomized decision trees. It consists of a learning step, which constructs a large number of decision trees, and a test step, which classifies and predicts when input vectors arrive [23]. Random forests are used in applications such as detection, classification, and regression. Their most important characteristic is that the constituent trees have slightly different properties due to randomness, which improves generalization performance by de-correlating the predictions of the individual trees. In addition, performance can be improved through ensemble learning techniques such as bagging and randomized node optimization.
Logistic Regression
Logistic regression analysis is a stochastic model used when the dependent variable is binomial; it is a statistical technique for predicting the likelihood of an event using a linear combination of independent variables [24]. The relationship between the dependent and independent variables is expressed as a concrete function and used in predictive models. In addition, unlike linear regression analysis, logistic regression is often used as a classification and prediction model in which outcomes are assigned to specific categories when the input data are categorical.
Bagging
Bagging is an ensemble learning method designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression [25]. Bagging reduces variance and helps avoid overfitting, and it is applied not only to decision tree learning and random forests but also to other methods.
Bayesian Network
A Bayesian network graphically models probabilistic relationships between pertinent variables. In data analysis, a Bayesian model offers a number of benefits when paired with statistical techniques. First, since the model charts dependencies between all variables, it can easily handle instances with gaps in the data entries. Second, a Bayesian network can be used to discern causal relationships, and thus to more fully grasp a problem domain and anticipate the effects of intervention. Third, a Bayesian model contains both probabilistic and causal semantics, making it particularly well suited for connecting data with prior knowledge (since prior knowledge frequently comes in causal form). Lastly, Bayesian networks combined with statistical methods provide a clearly delineated and effective way to prevent overfitting [26].
Multilayer Perceptron
A Multilayer Perceptron (MLP) is a feedforward neural network trained by way of a backpropagation algorithm. Because it is a supervised network, it must have a desired response for training: what an MLP learns is how to translate given inputs into that response. As such, MLPs are frequently employed in pattern classification. With one or two hidden layers, they can approximate almost any input-output mapping, and in challenging problems they have proven to be the equal of optimal statistical classifiers. For these reasons, they are at present arguably the most widely used network architecture: nearly all neural network applications make use of MLPs. An MLP is arranged so that neurons are divided into distinct layers, and each layer's output is connected to the inputs of the nodes in the subsequent layer. Hence the first (or input) layer represents the inputs to the network, and the last layer's outputs represent those of the network [27].
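All seven classifiers described above are available in standard libraries. As a rough, hedged sketch (with default hyperparameters, which are assumptions rather than the settings used in the study):

```python
# The seven classifiers compared in this study, instantiated with
# illustrative defaults (the study's actual settings are not shown here).
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(),
    "Bagging": BaggingClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=500),
    # A Bayesian network classifier is not part of scikit-learn; a library
    # such as pgmpy (or a toolkit like WEKA) would be needed for that model.
}
```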
Results and Discussion
In this section, we study the early diagnosis of dementia using the data mining techniques explained above: multilayer perceptron, random forest, bagging, SVM, logistic regression, Bayesian network, and Naive Bayes. We then compare them to discern which is most accurate in the diagnosis of dementia.
As stated earlier, in Section 3.2, we used data from the Gangbuk-Gu center for dementia. The data comprise two classification tasks: Phase 1, consisting of 14,000 records (9799 in the normal class and 4201 in the cognitive decline class), and Phase 2, consisting of 1236 records (663 in the MCI class and 573 in the dementia class). We used 10-fold cross-validation throughout.
In cross-validation, data are divided into two segments in order to statistically compare and assess learning algorithms: one segment (the training set) trains a model, while the other (the validation set) validates it. Usually, the segments are crossed over in successive rounds so that every data point is eventually used for validation. K-fold cross-validation is the standard form, providing the basis modified in special cases or in repeated rounds of cross-validation [28]. We report precision, recall, and F-measure for the diagnosis of dementia, obtained from Equations (4)-(6):

Precision = TP / (TP + FP), (4)
Recall = TP / (TP + FN), (5)
F-measure = 2 × Precision × Recall / (Precision + Recall). (6)
In Equations (4)-(6), TP (True Positives) is the number of samples correctly identified as positive, FP (False Positives) is the number of samples wrongly identified as positive, and FN (False Negatives) is the number of samples wrongly identified as negative [29].
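Putting the protocol together, the following sketch evaluates one classifier with 10-fold cross-validation and the metrics of Equations (4)-(6); the synthetic data are placeholders for the actual MMSE-KC features:

```python
# Hedged sketch of the evaluation protocol: 10-fold cross-validation with
# precision, recall, and F-measure. Synthetic data replace the real features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier

# Roughly mirror the Phase 1 class balance (9799 normal vs. 4201 decline).
X, y = make_classification(n_samples=2000, n_features=13,
                           weights=[0.7], random_state=0)

scores = cross_validate(MLPClassifier(max_iter=500), X, y, cv=10,
                        scoring=["precision", "recall", "f1"])
for key in ("test_precision", "test_recall", "test_f1"):
    print(key, round(scores[key].mean(), 3))
```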
Phase 1
As mentioned in Section 3, Phase 1 classifies subjects into the normal and cognitive decline categories. The results of the data mining techniques are given in Table 7, which shows the accuracy achieved for the normal and cognitive decline classes using the features in Table 5. Given the results of Table 7, Figure 2 compares precision, recall, and F-measure. According to Figure 2, the highest precision, recall, and F-measure in Phase 1 belong to MLP, with 0.97, 0.97, and 0.97, respectively, followed by random forest and bagging. Each algorithm has a level of error in the diagnosis of dementia; Table 8 reports these errors using four criteria: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE), and Root Relative Squared Error (RRSE). Based on the results of Table 8, Figure 3 compares the level of error under MAE, RMSE, RAE, and RRSE to determine which algorithm has the lowest error. According to Figure 3, MLP has the lowest MAE, with SVM close behind and lower than the remaining algorithms. MLP also has the lowest RMSE, with random forest, bagging, and logistic regression almost identical. For RAE, MLP again has the lowest error, followed by SVM. Finally, for RRSE, MLP has the lowest error, with random forest and bagging close together at the next lowest levels. As noted, the Phase 1 data are tested, and the data classification accuracy for the diagnosis of cognitive decline is computed according to Equation (7).
Accuracy = (TP + TN) / (TP + TN + FP + FN). (7)

According to Table 9, the comparison of data classification accuracy for the diagnosis of dementia is shown in Figure 4. As the results in Table 9 and Figure 4 make clear, MLP has the highest accuracy for the diagnosis of dementia (Phase 1), at 97.2%. Additionally, the classification accuracies of random forest and bagging were 96.3% and 94.4%, respectively. According to Figure 4, Bayes Network and Naive Bayes have the lowest classification accuracy. Moreover, the accuracies of SVM and logistic regression were almost equal: both reached 91.7%.
Phase 2
As mentioned in Section 3, Phase 2 classifies MCI and dementia. The results of the data mining techniques are given in Table 10, which shows the accuracy achieved for the MCI and dementia classes using the features in Table 6; Figure 5 compares the corresponding precision, recall, and F-measure. Each algorithm has a level of error in the diagnosis of dementia; Table 11 reports these errors using the four criteria MAE, RMSE, RAE, and RRSE. Given the results of Table 11, Figure 6 compares the errors under MAE, RMSE, RAE, and RRSE to determine which algorithm has the lowest error. According to Figure 6, SVM has the lowest MAE, with Naive Bayes close behind and lower than the remaining algorithms. Regarding RMSE, random forest and bagging had the lowest error, with bagging, random forest, and logistic regression very close together. Figure 6 also shows that SVM has the lowest error under the RAE criterion, with Naive Bayes close behind at the next lowest level for the diagnosis of dementia. Finally, with regard to RRSE, logistic regression has the lowest error, followed by Bayes Network.
As noted, the Phase 2 data have been tested, and the data classification accuracy for the diagnosis of cognitive decline was assessed according to Equation (7).
According to Table 12, the comparison of data classification accuracy for the diagnosis of dementia is shown in Figure 7. As the results in Table 12 and Figure 7 clearly demonstrate, SVM has the highest accuracy for the diagnosis of dementia (Phase 2), at 74.03%. Moreover, the classification accuracies of logistic regression and random forest were 73.71% and 72.98%, respectively. According to Figure 7, MLP had the lowest classification accuracy in Phase 2. The accuracies of bagging and Naive Bayes were 72.49% and 71.44%, respectively, and the classification accuracy of Bayes Network was 70.95%.
To sum up, in Phase 1, MLP showed the highest accuracy at 97.2%, followed by random forest and bagging; the lowest accuracy was Naive Bayes at 81.3%. In Phase 2, SVM performed best among the classifiers for MCI and dementia cases at 74.03%, followed by logistic regression and random forest. Whereas MLP was best in Phase 1 for predicting normal, SVM was best in Phase 2 for predicting dementia. The results of this study are consistent with findings from several researchers (e.g., [4,18]) showing that machine learning approaches can be used to diagnose dementia. Our efforts in the diagnosis of dementia are similar to those studies with respect to the machine learning approaches employed. However, inspired by the method used in dementia support centers for early diagnosis, our proposed model can not only diagnose dementia using data from simple patient tests, but also achieve higher accuracy in the early diagnosis of dementia.
Conclusions
As the senior population grows due to social aging, the prevalence of dementia increases, and the number of young dementia patients is also rising. In this study, we proposed a two-layer model for the early diagnosis of dementia, inspired by the methods used in dementia support centers and using machine learning techniques. MMSE-KC and CERAD-K data were used for screening and close examination to reduce the time and economic burden on patients and to increase screening accuracy with the employed machine learning algorithms. In the first stage, patients who need close examination are identified using MMSE-KC data. In the second stage, MCI and dementia are classified by adding CERAD-K data. In conclusion, we compared various classification models using dementia diagnosis data: in Phase 1 the highest F-measure belonged to MLP, while in Phase 2 the highest F-measure belonged to SVM. Our proposed model simplifies the task of interpreting test results by constructing a set of criteria to classify the patient, and can therefore diagnose dementia at early stages in a fast, inexpensive, and reliable way, improving on current clinical practice.
In future research, we will study a model that can predict dementia more precisely by incorporating patients' lifestyle or disease information, and we plan to further improve the accuracy.
Figure 1. The architecture of the proposed model.
Figure 2. Comparison of Classification Based on the Precision, Recall and F-measure Criteria for Diagnosis of Dementia (Phase 1).
Figure 3. Comparison of the Classification Algorithms Based on Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE) Criteria for the Diagnosis of Dementia (Phase 1).
Figure 4. The Comparison of Data Classification Accuracy in Phase 1.
Figure 5. Comparison of the Classification Based on the Precision, Recall, and F-measure Criteria for the Diagnosis of Dementia (Phase 2).
Figure 6. Comparison of the Classification Algorithms Based on MAE, RMSE, RAE and RRSE Criteria for Diagnosis of Dementia (Phase 2).
Figure 7. The Comparison of Classification Accuracy in Phase 2.
Table 1. Mini-Mental State Examination in the Korean version of the CERAD (Consortium to Establish a Registry for Alzheimer's Disease) Assessment Packet (MMSE-KC).
Table 2. Neuropsychological Testing (Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K)).
Table 5. Feature Selection Using Chi-squared and Information Gain (Phase 1).
Table 6. Feature Selection Using Chi-squared and Information Gain (Phase 2).
Table 8. The Results of Errors Obtained from Classification for the Diagnosis of Dementia (Phase 1). MLP: Multilayer Perceptron.
Table 9. Data Classification Accuracy of Dementia by Evaluating Experimental Data.
Table 10. The Results of Classification Based on Phase 2 (MCI, dementia).
Table 11. The Results of Errors Obtained from Classification for the Diagnosis of Dementia (Phase 2).
Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities
The machine learning community has become alert to the ways that predictive algorithms can inadvertently introduce unfairness in decision-making. Herein, we discuss how concepts of algorithmic fairness might apply in healthcare, where predictive algorithms are being increasingly used to support decision-making. Central to our discussion is the distinction between algorithmic fairness and algorithmic bias. Fairness concerns apply specifically when algorithms are used to support polar decisions (i.e., where one pole of prediction leads to decisions that are generally more desired than the other), such as when predictions are used to allocate scarce health care resources to a group of patients that could all benefit. We review different fairness criteria and demonstrate their mutual incompatibility. Even when models are used to balance benefits-harms to make optimal decisions for individuals (i.e., for non-polar decisions)–and fairness concerns are not germane–model, data or sampling issues can lead to biased predictions that support decisions that are differentially harmful/beneficial across groups. We review these potential sources of bias, and also discuss ways to diagnose and remedy algorithmic bias. We note that remedies for algorithmic fairness may be more problematic, since we lack agreed upon definitions of fairness. Finally, we propose a provisional framework for the evaluation of clinical prediction models offered for further elaboration and refinement. Given the proliferation of prediction models used to guide clinical decisions, developing consensus for how these concerns can be addressed should be prioritized.
Consistent and substantial differences in the treatment of medical conditions in patients who differ by race/ethnicity or by sex have raised concern that clinician bias may contribute to disparities in healthcare [2][3][4]. The emergence of artificial intelligence holds promise that computer-based algorithms may ameliorate human biases and possibly attenuate health disparities 5. However, computer scientists have recently become alert to the possibility that predictive algorithms can inadvertently introduce unfairness in decision-making. This is a major concern as algorithmic technologies have permeated many important sectors: criminal justice (e.g., predicting recidivism for parole decisions); the financial industry (e.g., credit worthiness); homeland security (e.g., "no fly" lists); and targeted ads (e.g., job listings). Indeed, legislation has recently been proposed in the U.S. that would direct the Federal Trade Commission to require the assessment of algorithmic fairness and bias by entities that use, store, or share personal information for algorithmically supported decision-making 6.
Despite the broader awareness of the importance of algorithmic fairness, and the rapidly expanding impact of algorithmic prediction in healthcare, how principles of algorithmic fairness might apply in clinical decision-making has received little attention in the medical literature 7,8 . In this perspective, we review methodological research from the computer science literature and relevant epidemiological principles, to clarify when fairness concerns might be germane and to introduce a practical framework for evaluating algorithmic bias and fairness in clinical decision-making and prediction in healthcare. While we focus on race, the discussion may extend to other classes (such as ethnicity, religion or creed, sex, national origin, etc.) legally protected against discrimination in certain settings. This perspective is intended for those stakeholders who are developing algorithms (e.g., clinical researchers, medical informaticians), as well as users of models, such as healthcare administrators, clinicians, and payers.
THE FUNDAMENTAL PROBLEM OF PREDICTION AND PREJUDICE: REFERENCE CLASS FORECASTING IS DISCRIMINATION BY GROUP MEMBERSHIP
Machine learning and statistical algorithms make predictions on individuals using mathematical models that are not explicitly programmed, but rather are developed using statistical rules that associate variables (or features) with outcomes (or labels) within a training data set. Machine learning is thus a form of "reference class forecasting" 9 whereby an individual's risk of a given outcome is estimated by examining outcome rates in a group of others with "similar" features. Because people have many different attributes, and because there are many different approaches to modeling, there are many different ways to define similarity; thus, any given individual's "risk" is model-dependent. Each different way of defining similarity leads to a different risk estimate-and often a very different risk estimate-for a given individual 10,11 .
The fact that "risk" is not a property that can be objectively measured in an individual (like blood pressure or cholesterol)-but instead can only be estimated in a group of other individuals judged to be similar in a set of selected features-suggests the overlap between the concepts of reference class forecasting and prejudice: in both, an individual's disposition is determined by that person's group membership.
A key statistical measure of model performance is how well the model discriminates between those who have the outcome and those who do not. Disentangling the two meanings of "discrimination"-discernment between individuals' risk of a future event on the one hand and unfair prejudice leading to inequity on the other (akin to what economist Thomas Sowell has referred to as Discrimination I and Discrimination II, respectively 12 )-is central to understanding algorithmic fairness, and more deeply problematic than generally appreciated.
COMMON SENSE FAIRNESS CRITERIA ARE SUPERFICIALLY APPEALING BUT MUTUALLY CONFLICTING
The specter of "machine bias" was highlighted in 2016. Using data from over 7000 arrests, an investigative report showed that commercial software (COMPAS) used to predict the risk of criminal re-offense assigned a higher risk of reoffending to black defendants than to whites, leading to potentially longer sentences. This was true even among those who did not subsequently recidivate, i.e., whose "true" risk is (retrospectively) 0%. These disparities emerged even though the algorithm was "race-unaware", i.e., race was not explicitly coded for in the statistical model (as it is potentially illegal to use protected characteristics in sentencing decisions); other features correlated with race were included. The observed unequal error rates between blacks and whites, even among those whose future behavior was the same, correspond to common sense notions of unfairness. It has been argued that unequal error rates also align with legal definitions of discrimination through "disparate impact" 13, which proscribes practices that adversely affect one group of people more than another, even when the rules (or the statistical models) are formally neutral across groups 14. Nonetheless, it is important to bear in mind that fairness and the legal standard of disparate impact are not purely statistical concepts, and involve ethical, political, and constitutional concerns 15.
However, the software developers argued that the model is fair, as it had similarly good calibration across both white and black populations. Calibration refers to the agreement between observed outcomes and predictions. For example, if we predict a 20% risk of recidivism in a group of subjects, the observed frequency of recidivism should be ~20 out of 100 individuals with such a prediction. Like unequal error rates, calibration also appears to conform to informal notions of fairness, in that a given score from a prediction model should correspond to the same probability of the outcome regardless of group membership (known as the test fairness criterion).
Subsequently, it was demonstrated mathematically that these two fairness criteria (equalized error types and test fairness) cannot both be satisfied when the outcome rates differ across the two groups (except in the unrealistic circumstance of perfect prediction), leading to the conclusion that unfairness is inevitable 13,16. Figure 1 provides a numerical illustration showing that, when outcome rates vary across two groups, a predictive test can have consistent error rates or consistent calibration across groups but not both. Because there are many different fairness criteria (Table 1), and these may be mutually incompatible [17][18][19], prioritizing across criteria necessarily involves a value judgment and may be sensitive to various contextual factors.

Fig. 1 Mutual incompatibility of fairness criteria. For two groups with different outcome rates, a predictive test can have consistent error rates or consistent calibration but not both. We present outcomes using coarsened prediction scores, thresholded to divide the population (N = 100) into low- and high-risk strata. Confusion matrices for a low prevalence group with a 20% outcome rate (Matrix A, red) and a high prevalence group with a 30% outcome rate (Matrices B and C, green) are shown. For the low prevalence group, a predictive test with 80% sensitivity and specificity identifies a high-risk (test+) stratum with an outcome rate of 50% (i.e., the positive predictive value) and a low-risk (test−) stratum with an outcome rate of ~6% (i.e., the false omission rate). However, as shown in Matrix B, the same sensitivity and specificity in the higher prevalence group give rise to outcome rates of ~63% and ~10% in the high- and low-risk strata, respectively. This violates the criterion of test fairness, since the meaning of a positive or negative test differs across the two groups. Holding risk-stratum-specific outcome rates constant would require a higher sensitivity and lower specificity (Matrix C). This violates the fairness criterion of equalized error rates. For example, the Type I error rate (i.e., the false positive rate) would almost double, from 20% in the low prevalence population to ~39% in the higher prevalence population. The diagnostic odds ratio was fixed at ~16 across this example; whole numbers are used to ease interpretation.
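The arithmetic behind Fig. 1 can be reproduced in a few lines; the sketch below (illustrative, not from the original analysis) shows how fixing sensitivity and specificity across groups with different outcome rates forces the positive predictive value to diverge:

```python
# Sketch of the Fig. 1 arithmetic: equal error rates across groups with
# different outcome rates imply unequal positive predictive values.
def ppv(prevalence, sensitivity, specificity):
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp)

# With 80% sensitivity and specificity in both groups (Matrices A and B):
for prev in (0.20, 0.30):
    print(f"outcome rate {prev:.0%}: PPV = {ppv(prev, 0.8, 0.8):.1%}")
# Prints 50.0% vs. ~63.2%: identical error rates, different calibration,
# so test fairness is violated, exactly as the figure illustrates.
```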
The impossibility of simultaneously satisfying the various fairness criteria points both to the inevitability of unfairness (defined by heterogeneous "common sense" outcomes-based measures) and to the limited validity, authority and usefulness of these measures. If we start from the premise that fair and unbiased decision-making is possible in theory, the impossibility results suggest that unequal outcomes will emerge from both fair and unfair decision-making. To satisfy a more stringent, narrow, and rigorous definition of unfairness, it is not enough to observe differences in outcomes; one must understand the causes of these outcome differences. Such a causal concept of fairness is closely aligned with the legal concept of disparate treatment (Table 1) 20. According to causal definitions of fairness, similar individuals should not be treated differently due to having certain protected attributes that qualify for special protection from discrimination, such as a certain race/ethnicity or gender. However, causality is fundamentally unidentifiable in observational data, except with unverifiable assumptions 20,21. Thus, we are more typically stuck with deeply imperfect but ascertainable criteria serving as (often poor) proxies for causal fairness.
A FUNDAMENTAL CONFLICT IN FAIRNESS PRINCIPLES
The conflict between fairness criteria reflects the fact that criteria based on outcomes do not correspond to causal notions of fairness. While a complete understanding of the true causal model determining an outcome (or label) promises in theory to provide the bedrock to determine fair processes for prediction and decision-making (by permitting the disentangling of legitimate causal attributes from illegitimate race-proxies), we note that differing conceptions of fairness would still ensure that fairness definitions remain deeply contested. There are two competing principles or goals in antidiscrimination law 15: anticlassification and antisubordination. The goal of anticlassification is to eliminate the unfairness individuals experience due to bias in decision-makers' choices, whereas antisubordination seeks to eliminate status-based inequality across protected classes. Enforcing balance in outcomes or results can only indirectly address anticlassification concerns (if at all), since large differences in group outcomes can arise with or without biased decision-making. Conversely, ensuring fair processes is unlikely to satisfy those who adhere to the antisubordination principle, since this requires adjudicating the degree of difference between groups that a fair society should tolerate.
FAIRNESS CONCERNS ARE NOT CLEARLY RELEVANT FOR ALL DECISIONAL CONTEXTS
Table 1. Fairness criteria.

Equality of classification/predictions conditioned on observed outcome (see blue arrow in Fig. 1):
- Equalized odds (also known as error rate balance; applies to a classification): The probability of being correctly classified conditional on the outcome should be the same for all values of the protected attribute.
- Balance on the positive class (applies to a predicted probability): The algorithm produces the same average prediction (or score) for participants/patients with the outcome across all values of the protected attribute. For a binary prediction (i.e., a classifier), this is equivalent to maintaining equal sensitivity and type II error (false negative rates).
- Balance on the negative class (applies to a predicted probability): The algorithm produces the same average prediction (or score) for participants/patients without the outcome across all values of the protected attribute. For a binary prediction (i.e., a classifier), this is equivalent to maintaining equal specificity and type I error (false positive rates).

Equality of outcomes conditioned on classification/prediction (see orange arrow in Fig. 1):
- Positive predictive value (PPV): For participants/patients assigned to the positive class, observed outcome rates (e.g., PPV) are the same across values of the protected attribute.
- Negative predictive value (NPV): For participants/patients assigned to the negative class, observed outcome rates (e.g., 1 − NPV, or the false omission rate) are the same across values of the protected attribute.
- Calibration (also known as test fairness; applies to a predicted probability): An algorithm is said to have good calibration if, for any given subgroup with a predicted probability of X%, the observed outcome rate is X% for all values of the protected attribute. For any single threshold, a well-calibrated prediction model will never have the same sensitivity and specificity for two groups with different outcome rates.

In addition to the limited validity and authority of proposed results-focused fairness criteria, it is important to recognize the limits of their relevance across decision contexts. In particular, the contexts described above (such as in the criminal justice system) are ones in which people advance potentially conflicting claims: "Issues of justice arise in circumstances in which people can advance claims… that are potentially conflicting, and we appeal to justice to resolve such conflicts by determining what each person is properly entitled to have. In contrast, where people's interests converge, and the decision to be taken is about the best way to pursue some common purpose… justice gives way to other values" 22.
In many of the non-medical examples, there are clearly competing interests: for example, between society's need for safety and security and an individual's claim to freedom and freedom from harassment, or between a lending institution's responsibility to remain financially healthy and an individual's desire for a loan. In these conditions, predictions can be said to be "polar", i.e., one end of the probability prediction is linked to a decision that is (from the perspective of the subject) always favorable or unfavorable 23. It is always better to get a lower recidivism score or a higher credit rating, for example, from the perspective of the individual whose score or rating is being predicted. In this context, the decision-maker's interest in efficient decision-making (i.e., based on accurate prognostication using all available information) is not aligned with the subject's interest in receiving the lowest (or highest) possible risk prediction. However, in the medical context, accurate prognostication helps decision-makers appropriately balance benefits and harms for care individualization, the common goal of the patient and provider. When the clinician/decision-maker's and patient's interests are aligned (or when the patient is in fact the decision-maker), and when race has important predictive effects not captured by other variables included in the model, including race/ethnicity as a variable in models used for this purpose improves predictions and decisions for all groups. Prediction supporting decisions in this context may be described as "non-polar" (Fig. 2a).
But in medicine too, there are contexts where the interests of the clinician/decision-maker and the patient diverge, such as when predictions are used to prioritize patients for rationed services that might benefit a broader population (e.g., organ transplantation, disease management programs, or ICU services). We label predictions used for microallocation of scarce medical resources as "positively" polar, indicating that patients may have an interest in being ranked high to receive a service that may be available only to some of those who could potentially benefit (Fig. 2b). This is in distinction to "negatively" polar predictions, in which prediction is used for the targeting of an intervention perceived as punitive or coercive (e.g., involuntary commitment, screening for child abuse, or quarantining patients at high infectious risk; Fig. 2c). Use of algorithms for microallocation (i.e., rationing based on individual characteristics) is likely to play a larger role in population health management in accountable care organizations or value-based insurance design. Allocating scarce health care resources on the basis of a protected characteristic, or using such characteristics as the basis for other "polar" decisions, appears to raise fairness concerns similar to those of many high-profile non-medical examples.
LEARNING FROM BIASED DATA

While fairness concerns are alleviated in the setting of non-polar prediction, additional problems arise when the data themselves are biased or mislabeled across classes (for polar and non-polar prediction alike). We use the term algorithmic bias (in distinction to fairness) specifically to refer to these issues related to model design, data and sampling that may disproportionately affect model performance in a certain subgroup. Consider, for example, prediction models developed on routinely collected electronic health data to target cancer screening of populations with higher cancer rates. Because cancer diagnosis is an imperfect proxy for cancer incidence, rates of "surveillance-sensitive" cancers (e.g., thyroid and breast cancer) are inflated in affluent compared to underserved communities 24 . This could lead to the mis-targeting of screening to the over-served, thereby establishing a continuously self-reinforcing positive feedback loop.
Similarly, consider mortality predictions that might support decisions in the intensive care unit, such as the determination of medical futility. Using "big data" across multiple health systems with different practice patterns might lead to the assignment of higher mortality probabilities to the types of patients seen at institutions with less aggressive approaches or lower-quality care. Collinearity between patient factors and care factors can bias prognostication and lead again to a self-reinforcing loop supporting earlier withdrawal of care in the underserved. Observed mortality is an imperfect proxy for mortality under ideal care, the true outcome of interest when constructing models for futility. The above are examples of label bias, which arises when the outcome variable is differentially ascertained or otherwise has a different meaning across groups. There may also be group differences in the meaning of predictor variables; this is known as feature bias. For example, feature bias may be a problem if diagnoses are differentially ascertained or thresholds for admission or healthcare-seeking differ across groups in the training data, and model features (prediction variables) include prior diagnosis or previous hospitalization. Label and feature biases, as well as differential missingness, can contribute to violations of subgroup validity, which arise when models are not valid in a particular subgroup. Subgroup validity may also be a concern in the context of sampling bias, where a minority group may be insufficiently represented in model development data (e.g., certain ethnic groups in the Framingham population 25 ) and the model might be tailored to the majority group. When effects found in the majority group generalize well to the minority group, this is not problematic, but generalization across groups should not be assumed. Sampling bias was a well-known issue with the highly influential Framingham Heart Study, which drew its study population from the racially homogeneous town of Framingham, Massachusetts; models derived from it can consequently lead to both over- and under-treatment of certain ethnic minorities 25,26 . More recently, polygenic risk scores derived largely from European populations have been shown to generally perform very poorly in non-European populations 27 . For similar reasons, there are concerns about the representativeness of the Precision Medicine Initiative (the "All of Us" Study 28 ).

Fig. 2 Non-polar and polar prediction-supported health care decisions. Understanding the specific decisional context of a prediction-supported decision in healthcare is necessary to anticipate potential unfairness. In the medical context-particularly in the shared decision-making context-patients and providers often share a common goal of accurate prognostication in order to help balance benefits and harms for care individualization. Predictions supporting decisions in this context may be described as "non-polar" (a). On the other hand, when one "pole" of the prediction is associated with a clear benefit or a clear harm, predictions may be described as "polar" in nature. In cases of polar predictions, the decision-maker's interest in efficient decision-making (i.e., based on accurate prognostication using all available information) is not aligned with the subject's interest in having either a lower (e.g., screening for abuse risk) or higher (e.g., microallocation of organs) prediction. "Positively" polar predictions correspond to those where patients may have an interest in being ranked high to receive a service that may be available only to some of those who can potentially benefit (b). This is in distinction to "negatively" polar predictions, in which prediction is used for the targeting of an intervention perceived as punitive or coercive (e.g., involuntary commitment, screening for child abuse, or mandatory quarantine of those at high infectious risk) (c). Issues of fairness pertain specifically to predictions used in decisional contexts that induce predictive polarity-since these are contexts in which people advance claims that are potentially conflicting.
SHOULD THE USE OF PROTECTED CHARACTERISTICS IN CLINICAL PREDICTION MODELS (CPM) DIFFER FOR POLAR VERSUS NON-POLAR PREDICTIONS?
Currently, there is no consensus or guidance on how protected characteristics-race in particular-should be incorporated in clinical prediction 29 . Previous work found race to be included only rarely in cardiovascular disease prediction models, even when it is known to be predictive 30 . Several authors explicitly acknowledged excluding race from prediction models due to concerns about the implications of "race-based" clinical decision-making 31 .
We have previously argued that much of the reluctance to use race in prediction models stems from overgeneralization of its potentially objectionable use in polar predictions in non-medical settings to its use for non-polar predictions in medical settings 29 . The ethical issues involved in using race or race proxies to move a person up or down a prediction scale with a clear directional valence (liberate versus incarcerate; qualify versus reject a loan application; receive versus not receive an available donor organ) are clearly different from those involved in optimizing one's own decisions about whether or not to take a statin; whether percutaneous coronary intervention might be better than coronary artery bypass; whether medical therapy might be superior to carotid endarterectomy; and so forth.
For these latter non-polar decisions, a mature literature exists on how to evaluate prediction models to optimize decision-making in individual patients 32 . When race is importantly predictive of health outcomes (as it often is), excluding race from a model will lead to less accurate predictions and worse decision-making for all groups. In particular, "race-unaware" models (i.e., models that exclude race) will often especially disadvantage those in minority groups, since predictions will more closely reflect outcomes and associations for patients in the majority. Indeed, race is used explicitly in popular prediction models that inform the need for osteoporotic 33 , breast [34][35][36][37] and prostate cancer screening 38,39 ; statin use for coronary heart disease prevention 40,41 ; and other common decisions.
For polar predictions, however, there are efficiency-fairness trade-offs that are not germane in the non-polar context. To take a non-medical example: in developing a model to predict loan default, the use of variables such as "income," "assets," and "credit history" might be uncontroversial-even if race-correlated. However, even if using race (or race proxies without a clear causal link to the outcome) in addition to these variables substantially improved model performance and increased the efficiency of decision-making and the overall net economic benefits, the use would still be unethical and violate the disparate treatment criterion. Similar principles presumably apply regarding the use of protected characteristics when predictions are used to ration resources in health care.
PUTTING IT ALL TOGETHER: TOWARDS A FRAMEWORK FOR BIAS AND FOR UNFAIRNESS
The above discussion suggests different considerations and approaches for polar and non-polar predictions. In the former context, we argue, both bias and fairness concerns apply whereas ensuring an unbiased model is sufficient in the latter.
How to ensure unbiased models

With the exception of label bias-which can be difficult to diagnose from the data alone because the outcome itself has a different meaning across groups, so that recognition of label bias requires external knowledge about how the data are ascertained-the above subgroup validity issues can generally be diagnosed by examining model performance separately in each of the groups (Fig. 3a). When a model is found to be poorly calibrated in a subgroup, and provided the minority populations are sufficiently represented in the data, this can often be addressed by the inclusion of a main effect for group status; inclusion of selected interactions between group status and other features; or development of stratified models. Indeed, the widely used Pooled Cohort Equations for coronary heart disease prediction addressed the subgroup validity issues identified in the Framingham score (i.e., poor model performance in ethnic minorities) by developing separate models for whites and African-Americans 42 .
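A minimal sketch of this diagnose-and-repair loop is given below (our illustration, not code from the paper): per-group calibration is audited by comparing observed and predicted outcome rates, and one candidate remedy, a main effect for group status, is added to a logistic model. The variable names and binning choice are assumptions.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression

def audit_by_group(p, g, y, n_bins=5):
    # p: predicted probabilities, g: group labels, y: binary outcomes.
    # Poor agreement between observed and predicted rates in one group
    # flags a subgroup validity problem.
    for grp in np.unique(g):
        m = g == grp
        obs, pred = calibration_curve(y[m], p[m], n_bins=n_bins)
        print(f"group {grp}: observed {np.round(obs, 2)} vs "
              f"predicted {np.round(pred, 2)}")

def fit_with_group_main_effect(X, g, y):
    # Remedy 1 from the text: explicitly encode group status as a feature.
    # (Remedies 2 and 3 -- interactions or stratified models -- follow the
    # same pattern.)
    Xg = np.column_stack([X, g])
    return LogisticRegression(max_iter=1000).fit(Xg, y)
```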
Labeling bias should be anticipated whenever a proxy is used as the outcome or label. Problems with proxy labels are very similar to the well-described, familiar problem of surrogate outcomes in clinical research 43,44 . Like surrogate outcomes, proxy labels can often seem compellingly similar to the outcome of interest and nevertheless be very misleading. The remedy here is to pick a better label (i.e., outcome definition). A high-profile example of this was recently reported, in which an algorithm used to target services to those with high health needs used future health care costs as a proxy for need. The bias was detected because black patients were sicker than similarly scored white patients, and the algorithm was remedied through the use of a better label that more directly captures health need 45 .
Addressing fairness concerns

Reducing model bias and differential performance may be insufficient to eliminate fairness concerns in decision contexts characterized by predictive polarity (such as when predictions are used to ration health care resources), where unambiguously favorable (or unfavorable) decisions are associated with a higher (or lower) score. Here, we identify two broad and fundamentally very different unfairness mitigation approaches: (1) an input-focused approach, and (2) an output-focused approach (Fig. 3b).
The input-focused approach relies on model transparency; it loosely aligns with anticlassification goals and avoidance of disparate treatment, since it promotes class-blind allocation by meticulously avoiding the inclusion of race or race proxies. Since any variable can be correlated with race and therefore serve as a proxy, only highly justified, well-established causal variables should be included in the model. The use of "high-dimensional" or "black box" prediction techniques typically favored in the machine learning community is generally problematic (since these approaches can predict race through other variables, whether or not race is explicitly encoded)-although methods that have been proposed to make these models more transparent have recently been adapted to address fairness 46 .
In contrast, the output-focused approach does not restrict model development, but relies on an evaluation of model predictions using outcomes-based fairness criteria (Table 1) and seeks to mitigate fairness concerns by making use of "fairness constraints". These constraints can be understood as formalized "affirmative action" rules that systematically reclassify subjects in an attempt to equalize allocation between groups 19,47 . This approach aligns loosely with the legal concepts of antisubordination and disparate impact; its disadvantage is that there is no agreed-upon mathematical definition of fairness. Because value judgments are key to any approach to fairness, robust input should be sought from the diverse set of stakeholders who develop, use, regulate, and are affected by clinical algorithms. These stakeholders include patients and their advocates, model developers (e.g., clinical researchers, informaticians), model users/deployers (e.g., healthcare administrators, clinicians, payers), and health policy, ethical and legal experts. Application of results-oriented criteria requires standards or consensus regarding what degree of disparity in the allocation of health care resources across groups might be intolerable.
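To make the output-focused option concrete, here is a small sketch (ours; one of many possible formalizations, not the paper's method) of a post-processing "fairness constraint" that sets a separate score cutoff per group so that each group receives the rationed service at the same rate. The capacity fraction and data are illustrative placeholders.

```python
import numpy as np

def equal_allocation_thresholds(p, g, capacity):
    # For each group, set the cutoff at the (1 - capacity) quantile of that
    # group's scores, so the top `capacity` fraction of every group is
    # selected -- a demographic-parity-style constraint.
    return {grp: np.quantile(p[g == grp], 1 - capacity)
            for grp in np.unique(g)}

rng = np.random.default_rng(1)
p = rng.random(1000)              # illustrative risk/benefit scores
g = rng.integers(0, 2, 1000)      # illustrative group labels
cuts = equal_allocation_thresholds(p, g, capacity=0.10)
selected = p >= np.array([cuts[grp] for grp in g])
for grp in (0, 1):
    print(f"group {grp}: selection rate = {selected[g == grp].mean():.2f}")
```

Whether such reclassification is appropriate, and at what capacity, is exactly the value judgment the text assigns to stakeholders rather than to the modeler.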
LIMITATIONS
To be sure, the framework we introduce is simplified and provisional, and is intended as a starting point. Adding further complexity, some predictive algorithms are applied in different decisional contexts with different ethical concerns. For example, the estimated GFR equations (which are race-aware) may be used to inform both resource prioritization (e.g., transplant lists) and appropriate medication dosing 48 . Sometimes the polarity of a prediction may be non-obvious. We also acknowledge that some objections to the use of race as a variable in prediction models have little to do with unfairness as described here 49 . Finally, we wish to underscore the political and legal complexities of identifying and mitigating algorithmic disparities and the need to integrate statistical and legal thinking, among other perspectives, in devising remedies.
CONCLUSION
People are often told-either by elders or by experience itself-that life is unfair; now there is mathematical support 16 for that gloomy bit of wisdom. Yet fairness is a central preoccupation of any decent society. While there is no universally accepted algorithmic solution to the problem of unfairness, the problem also cannot be solved by replacing algorithms with a human decision-maker; it is merely obscured. Formalizing predictions opens the issues up to communal (and mathematical) scrutiny, permitting us, for example, to more precisely understand the conflict between competing fairness notions and the limitations of these notions. This is an essential, though insufficient, step in developing consensus about how to impose human values on agnostic, data-driven algorithms, and how to supervise these algorithms to ensure fairer prediction and decision-making in healthcare and elsewhere. More rigorous and narrow (e.g., causal) definitions of unfairness might be a part of the answer, though a wholly technical solution seems unlikely. A set of principles 50 has been articulated to provide guidance to those developing and disseminating algorithms (Box 1)-principles that may ultimately get encoded into law 6 . If we can figure out how to encode fairness into computer programs, we may yet come to a deeper understanding of fairness, algorithmic and otherwise.

Fig. 3 Mitigating algorithmic bias and unfairness in clinical decision-making. Schematic (mitigation strategies targeting model inputs (1) and outputs (2)):
a. Reducing algorithmic bias - for models used to aid decisions balancing harms and benefits, when decision-maker and subject are aligned.
b. Reducing algorithmic unfairness - for models used for rationing.
Approach 1: Restrict model inputs. Models should include only well-established, causal risk factors; models should therefore be race-unaware, and also exclude non-causal variables that may be race proxies (e.g., zip code).
Approach 2: Examine models using fairness criteria. Ensure "fair" distributions of services by either using different decision thresholds or applying fairness constraints to strategically reclassify based on the protected attribute.
Legend: Bias arises through differential model performance across protected classes, such as across racial groups. a It is a concern in both polar and non-polar decision contexts and can be addressed by "debiasing" predictions, typically through the explicit encoding of the protected attribute to ameliorate subgroup validity issues, or by the more thoughtful selection of labels (in the case of labeling bias). Fairness concerns arise exclusively in polar decision contexts, and may persist even when prediction is not biased. b There are two broad and fundamentally very different unfairness mitigation approaches: (1) an input-focused approach, and (2) an output-focused approach. The goal of the input-focused approach is to promote class-blind allocation by meticulously avoiding the inclusion of race or race proxies. The output-focused approach evaluates fairness using criteria such as those described in Table 1 and Fig. 1. Fairness violations can be (partially) addressed through the use of "fairness constraints" (which systematically reclassify participants/patients to equalize allocation between groups) or by applying different decision thresholds across groups.
Received: 19 August 2019; Accepted: 17 June 2020.

Box 1. Principles for accountable algorithms developed by FAT/ML (adapted from fairness, accountability, and transparency in machine learning) 51
Responsibility: identify a person/persons and a process for monitoring and remedying issues related to the algorithm.
Explainability: ensure that the algorithm is understandable to users and stakeholders.
Accuracy: consider sources and impact of possible errors.
Auditability: establish a system that will allow transparent public auditing of the algorithm.
Fairness: anticipate and assess the potential for algorithmic unfairness. | 2020-07-30T14:31:40.173Z | 2020-07-30T00:00:00.000 | {
"year": 2020,
"sha1": "11686792dc3f8008a87b813bbefe59124145924d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41746-020-0304-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "685ed6952c4d26476f3c6f10a158bd0225fa5c60",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
1182076 | pes2o/s2orc | v3-fos-license | A Genetic Cascade of let-7-ncl-1-fib-1 Modulates Nucleolar Size and rRNA Pool in Caenorhabditis elegans
Ribosome biogenesis takes place in the nucleolus, the size of which is often coordinated with cell growth and development. However, how metazoans control nucleolar size remains largely unknown. Caenorhabditis elegans provides a good model to address this question owing to distinct tissue distribution of nucleolar sizes and a mutant, ncl-1, which exhibits larger nucleoli than wild-type worms. Here, through a series of loss-of-function analyses, we report that the nucleolar size is regulated by a circuitry composed of microRNA let-7, translation repressor NCL-1, and a major nucleolar pre-rRNA processing protein FIB-1/fibrillarin. In cooperation with RNA binding proteins PUF and NOS, NCL-1 suppressed the translation of FIB-1/fibrillarin, while let-7 targeted the 3’UTR of ncl-1 and inhibited its expression. Consequently, the abundance of FIB-1 is tightly controlled and correlated with the nucleolar size. Together, our findings highlight a novel genetic cascade by which post-transcriptional regulators interplay in developmental control of nucleolar size and function.
Introduction
Among the RNA/protein bodies within the nucleus, nucleoli bear the essential function of being the factories for ribosome subunit production and assembly, a stress sensor for cell cycle control, as well as a site for hepatitis D virus (HDV) replication and adenovirus-associated virus (AAV) assembly [1][2][3]. The size and morphology of the nucleolus are a cytological manifestation of ribosome biogenesis, and therefore of protein biosynthesis, and are closely coordinated with cell growth and development [4]. Accordingly, these attributes are sometimes also physiological indicators of cell cycle, cancer growth and malignancy, as well as stem cell differentiation and pluripotency [5,6]. However, without membrane delimitation, the principles that define nucleolar size and shape are poorly understood. Furthermore, the spatiotemporal regulation of nucleolar size and output, particularly in coordination with development and in non-dividing cells, is not fully characterized.
Caenorhabditis elegans represents an exploitable model for further interrogating nucleolus biology owing to the distinct distribution of nucleolar sizes in different cell types. A C. elegans mutant, ncl-1, described as a recessive mutation with enlarged nucleoli in nearly all cells of the worm [7,8], has a phenotype consistent with its role as a suppressor of rRNA biosynthesis. C. elegans ncl-1 phenotypes can be rescued by its Drosophila homolog, brat [9,10]. Mutations in the fly brat gene have a phenotype similar to the defect of ncl-1 mutants in C. elegans, affecting nucleolar size. In addition, brat mutants induce brain tumor formation [11]. These homologous proteins belong to the TRIM/RBCC/NHL (NCL-1, HT2A, and LIN-41) family, characterized by the presence of a RING domain, a B-box zinc finger, and a coiled-coil domain [12,13]. Because it lacks an RNA-binding motif, Brat protein was previously shown to associate with the 3'UTR of the hunchback transcript in partnership with two RNA-binding proteins, Pumilio (PUF) and Nanos (NOS), and to suppress expression of Hunchback protein at the translational level [14].
In this study, we dissected the molecular mechanism through which NCL-1 controls nucleolar size and function, and pinpointed fibrillarin-the rRNA 2'-O-methyltransferase and pre-rRNA processing factor [1,[15][16][17]-as a downstream effector. Further, this regulation is dynamically coordinated with development as part of a functional axis driven by let-7, a critical developmental regulator of heterochronic development in worms and flies [18][19][20] and of cancer formation and stem cell maintenance in the mammals [21].
Results
Suppression of nucleolar size and rRNA expression by NCL-1 is associated with nucleolar protein FIB-1

Although NCL-1 is abundantly expressed in the gonads of C. elegans [9], its effect on the nucleoli of germ cells was not previously characterized. In intact gonads of wild-type (N2) young adult worms, nucleolar structure is nearly absent in the -1 oocyte, which is immediately adjacent to the spermatheca (Fig 1A, upper panel). In contrast, the nucleolus was readily detectable in the -1 oocyte of the ncl-1(e1942) mutant (Fig 1A, lower panel). While nucleoli were evident in the germ cells and the -3 and -2 oocytes of both worms, ncl-1 worms exhibited considerably larger average nucleolar sizes, ranging from 119% to 176% of the wild-type diameter (Fig 1A and 1B). Profiling of ncl-1 mRNA expression by RT-qPCR revealed a progressive decline in mRNA abundance from the embryo through the four larval stages, followed by subsequent up-regulation in the adult (S1 Fig). This developmental stage-specific expression is consistent with previous in situ immunostaining of NCL-1, which demonstrated its expression in the proximal gonad and early embryos and its subsequent gradual disappearance in the late embryonic stages [9]. Further, this expression is in line with the non-detectable to small sizes of nucleoli in the -1 oocyte and early embryos (Fig 1C, left panel), supporting the notion that NCL-1 is a negative regulator of nucleolar size.
We also examined nucleolar morphology in worms devoid of functional fib-1. Consistent with its significance, fib-1 mutation led to lethality [22]; we thus characterized fib-1 mutant larvae (L1 stage) and found that nucleoli therein displayed size reduction (S2 Fig). To next determine whether FIB-1/fibrillarin is involved in nucleolar appearance and size, we depleted fib-1 in ncl-1(e1942) worms by RNAi feeding and measured the nucleolar size. The increase in nucleolar size in the blastomeres of ncl-1(e1942) worms (Fig 1C, middle panel) was significantly reversed by fib-1 abrogation, as shown by image analysis (Fig 1C, right panel, and 1D). This observation supports the notion that the amount of FIB-1 expression is directly associated with the control of nucleolar size by NCL-1. Moreover, Western blot and RT-qPCR analyses showed that worms expressing a greater amount of FIB-1 generally had a higher level of rRNA abundance (Fig 1E and 1F). Conversely, knockdown of FIB-1 led to an overall reduction in rRNA levels, further indicating a positive role of FIB-1 in this functional regard.

Fig 1 legend (partial): […] were quantitatively determined. Asterisks signify differences between the two worms: ***P < 0.0001; n = 8-14 gonad arms. (C) DIC microscopy of the blastomeres of wild-type (wt) and ncl-1 embryos, with or without fib-1 RNAi. Each blastomere is indicated as ABa, ABp, EMS and P. Insets represent enlarged versions of the boxed regions of ABp cells to highlight the nucleoli. Scale bar, 20 μm. (D) Quantitative representation of the results shown in (C), illustrating the distribution of nucleolar areas in the four blastomeres. Asterisk signifies difference between the indicated strains: *P < 0.05; n ≥ 31 embryos. (E) Knockdown of fib-1 was done in the indicated worms. The expression of Actin (lower panel) and endogenous FIB-1 (upper panel) was examined by Western blot analysis. (F) RT-qPCR analysis of 26S rRNA expression in the indicated strains of worms as shown in (E). *P < 0.05; ***P < 0.001; ns, not significant; n = 3. doi:10.1371/journal.pgen.1005580.g001
NCL-1 is a suppressor of FIB-1 expression
To examine whether NCL-1-mediated nucleolar size alteration occurs through the regulation of FIB-1 expression, we generated a pair of transgenic worms that express the FIB-1::GFP chimeric protein in both the N2 and ncl-1 backgrounds [respectively designated cguIs1 (strain SJL1) and ncl-1(e1942); cguIs1 (strain SJL14); see S1 Table]. Time-lapse fluorescence microscopy of embryos was performed to trace the level of GFP expression during early stages, and showed progressively higher GFP signals (Fig 2A and S1-S3 Movies). Dynamic up-regulation of GFP levels was more prominent in the ncl-1(e1942); cguIs1 embryos (62.8%) than in cguIs1 (26.9%) (Fig 2B). Random collections of embryos from both transgenic worms were further examined to quantify the GFP intensity of each embryo in the same field (Fig 2C), which revealed that embryos lacking NCL-1 exhibited higher levels of FIB-1::GFP (about 2-fold) (Fig 2D). Further expression analyses consistently showed elevated levels of FIB-1 in ncl-1(e1942) embryos (5.2-fold) and adult worms (1.7-fold) (Fig 2E). Unexpectedly, RT-qPCR analysis revealed comparable levels of fib-1 mRNA in wild type and ncl-1(e1942) at the embryo and adult stages (Fig 2F). Taken together, these findings indicate that ncl-1 is an upstream negative regulator of fib-1 expression at the post-transcriptional/translational level.
NCL-1 cooperates with PUF and NANOS to modulate fib-1 mRNA translation
We next aimed to test whether NCL-1 acts like its fly homologue Brat, which suppresses its target gene at the translational level by binding to the 3' UTR of transcripts [14]. Towards this end, we created two more pairs of transgenic worms [cguIs2 and ncl-1(e1942); cguIs2 (strain SJL2/strain SJL15), and cguIs19 and ncl-1(e1942); cguIs19 (strain SJL34/strain SJL38); see S1 Table]; SJL2 and SJL15 harbored a plasmid similar to that of cguIs1 worms, containing the full-length fib-1 3' UTR, while in SJL34 and SJL38 the fib-1 3' UTR was replaced with the unc-54 3' UTR sequence (Fig 3A). In agreement with the above observations, enlarged nucleoli and significantly increased levels of FIB-1::GFP expression were both evident in the tail hypodermis of ncl-1(e1942); cguIs1 and ncl-1(e1942); cguIs2 worms (Fig 3B, top two panels at right, and S3 Fig). In contrast, for the transgene harboring the unc-54 3' UTR, ncl-1 inactivation did not lead to a discernible difference in GFP intensity, despite the occurrence of enlarged nucleoli in cells of cguIs19; ncl-1 transgenic worms (Fig 3C). These observations and the quantitative data for the whole worms (Fig 3D and 3E) strongly support the notion that, rather than being a consequence of the altered nucleolus, the suppression of FIB-1 may arise from direct targeting of its 3' UTR by NCL-1.
Since Brat mediates its repressive role through other RNA-binding factors, we further tested the roles of C. elegans pumilio and nanos in the translational suppression of fib-1. A potentially direct involvement of these RNA-binding proteins was first supported by sequence analysis of the fib-1 3'UTR, which revealed a consensus PUF binding motif (Fig 4A). To demonstrate the link between this 3'UTR element and NCL-1-dependent control, we then generated worms with a 3'UTR reporter carrying mutations in the PUF binding sequence (cguEx18; Figs 3A and 4A) [23,24]. Fluorescence microscopy showed that, in comparison to the wild-type reporter (Fig 3B and 3D), this particular transgene exhibited considerably diminished responsiveness to the loss of ncl-1 (Fig 4B and 4C), giving rise to a lower level of fluorescence intensity. In further support of the roles of the PUF proteins, RNAi knockdown of puf-5, puf-8, puf-9 and nos-2 in cguIs1 worms resulted in the appearance of brighter GFP signals (Fig 4D and 4E). However, such an effect of nos/puf knockdown (puf-8 and puf-9 in particular) on the GFP reporter was reduced in the cguEx18 worms, in which the PUF binding sequence was altered (Fig 4F). Consistent with the ncl-1 knockdown and mutant worms, immunoblotting showed a rise in FIB-1::GFP abundance in these knockdown worms (Fig 4G). Collectively, these data imply that ncl-1 may coordinate with puf-5, -8, -9 and nos-2 to act directly on the 3' UTR element of fib-1, likely through a regulatory mechanism similar to that exhibited by brat, pumilio and nanos in the fly [14]. This demonstration of a response element in the fib-1 3'UTR, and of its regulatory relevance, strengthens the case for a specific and direct control mechanism.
We next interrogated the significance of let-7 in the regulation of nucleolar size by assessing vulva cells in the temperature-sensitive, loss-of-function let-7(n2853) mutants. Mutants grown at non-permissive temperature (25°C) displayed a significant reduction in nucleolar size in these cells, by 25% as compared to those at permissive condition (15°C) (Fig 5F and 5G). However, such temperature-sensitive nucleolar size alteration was not observed in a double mutant let-7; ncl-1 (strain SJL39) (Fig 5F and 5G), implying that let-7 acts upstream of ncl-1 transcript to directly suppress NCL-1 translation and regulate nucleolar sizes of the vulva cells. We further verified the link of let-7 to NCL-1-mediated regulation by assessing downstream FIB-1 expression and rRNA abundance in let-7(n2853) and let-7(n2853); ncl-1(e1942) worms. To this end, expression profiling revealed higher amounts of both FIB-1 ( Fig 5H) and ribosomal RNA species (Fig 5H and 5I and S5 Fig) in let-7(n2853) worms grown at 15°C vs. 25°C, in contrast to a lack of discernable differences in the let-7(n2853); ncl-1(e1942) worms between these rearing temperatures (Fig 5H and 5I, and S5 Fig). Such loss of phenotypes in the ncl-1(e1942) background is in agreement with let-7-ncl-1 interaction and functional antagonism. Based on these findings, we hypothesize that the genetic circuit of let-7-ncl-1-fib-1 constitutes a critical determinant in the regulation of nucleolar size and rRNA pool (Fig 6).
Fig 4 legend (partial): […] 10 μm (right panels). (C) FIB-1::GFP signals in the cguEx18 and ncl-1(e1942); cguEx18 worm pair (containing the mutated fib-1 3' UTR) were quantitatively determined, with ratios between the indicated strains shown in the bar graph. Asterisk signifies the difference: ***P < 0.001; n = 136-156 animals. (D) cguIs1 worms with RNAi targeting ncl-1, nos-2, puf-5, puf-8, or puf-9 were assessed for FIB-1::GFP expression as in (B). Scale bar: 100 μm (left panels) and 10 μm (right panels). (E) Quantitative image analysis of the results shown in (D), showing the relative ratios of average FIB-1::GFP signals between the indicated worm pairs. The bar graph depicts means ± S.E.M.; ***P < 0.001; n = 30-198 animals. (F) Quantitative image analysis of FIB-1::GFP reporter expression in worm strains derived from cguEx18, showing the relative ratios of average FIB-1::GFP signals between the indicated worm pairs. The bar graph depicts means ± S.E.M.; *P < 0.05; **P < 0.01; ***P < 0.001; ns, no significance; n = 30-198 animals. (G) Expression of the FIB-1::GFP reporter in worms with RNAi targeting the indicated genes was monitored by anti-GFP immunoblotting.

Discussion

let-7 is known as a critical regulator of heterochronic development in worms and flies [18,29]. Our studies outline for the first time a genetic cascade through which the coordinated actions of let-7 and NCL-1 modulate the expression of the major nucleolar protein FIB-1, thereby fine-tuning the size and function of the nucleolus (Fig 6). This let-7-ncl-1-fib-1 circuit and its control of nucleolus size may represent an adaptive mechanism that couples cellular protein production capacity to the metabolic state of individual cell types. Interestingly, in a recent genome-wide RNAi-based screen for molecular networks underlying nucleolus size regulation in Drosophila, both brat and fib were identified [4], substantiating the possibility that these factors constitute a conserved core of the regulatory network. Moreover, Vogt et al. demonstrated that nucleolus maturation during early embryonic development in mice is dependent on the pluripotency factor LIN28 [30], which is known as an essential regulator of let-7 biogenesis [19,20,31]. Intriguingly, Chan and Slack have also shown that the ribosomal protein RPS-14 is able to modulate let-7 function [32], which hints at the possibility of feedback regulation between let-7 and nucleolar dynamics. Our work thus contributes to these findings by reinforcing the relevance of the hierarchical organization of post-transcriptional regulators in the fundamental process of nucleogenesis. As FIB-1 expression in C. elegans is also regulated by the die-1 and let-363/TOR pathways [33,34], our findings further support the notion that intricate integration of multiple mechanisms underpins nucleolus integrity.

NCL-1 is a member of the TRIM/RBCC-NHL protein family, which has been implicated in the regulation of tumor suppression, cell growth, and cell differentiation. In Drosophila larval neuroblasts (stem cell-like precursors), the Brat homologue is distributed to only one daughter cell through asymmetric cell division and acts as an inhibitor of its self-renewal through post-transcriptional suppression of Myc expression. In brat mutants, both daughter cells grow, leading to the formation of larval brain tumors [35]. Similarly, the mammalian homologue TRIM3 has been reported as a tumor suppressor in human glioblastoma (GBM), a highly malignant human brain tumor, through its suppression of Myc [36]. Our study complements these findings on the NCL-1 homologues and further provides significant insight into understanding how a microRNA cooperates with TRIM/RBCC-NHL proteins to suppress tumor formation.

Fig 6 legend: A schematic model of the let-7-ncl-1-fib-1 circuit and its regulation of nucleolus size and function. Since let-7 is a heterochronic gene linked to the control of vulva formation at the L4 larval stage, this model depicts a novel let-7-driven regulatory cascade-the let-7-ncl-1-fib-1 pathway-that regulates the nucleolus size and rRNA expression in the vulva cells. In this context, let-7 increases in the L4 larva and targets the 3' UTR of the ncl-1 transcript to suppress NCL-1 translation. In other types of cells with low levels of let-7, such as the hypodermis, NCL-1 may accumulate and cooperate with two other RNA binding proteins, PUF and NOS, to suppress translation of the nucleolar protein FIB-1 and consequently the size of the nucleolus (see Fig 4B and 4C). However, in the vulva cells, in which NCL-1 is down-regulated, a higher abundance of FIB-1 enters the nucleolus to facilitate rRNA processing and likely contributes to the enlarged nucleolus exhibited by this particular cell type (see Fig 5F and 5G). A possible FIB-1 action on Pol I activity is not resolved in this study (the question mark in the scheme), although one recent study (Tessarz et al., Nature 505, 564-568, 2014) [47] has shown that FIB-1 impacts Pol I transcription through an epigenetic control.

Despite the prevalent requirement for proper maintenance of nucleolus size, our data did not exclude the possibility that the NCL-1-dependent control mechanism may have tissue- and developmental stage-specific relevance. First, while elevated FIB-1::GFP expression was robustly observed in the ncl-1 mutant, the extent to which it was up-regulated varied between cells/tissues. Strong evidence for this is shown in Fig 3B, in which we observed variation in nucleolar size changes between hyp 9 and hyp 10 cells. Second, and perhaps more intriguingly, even in the absence of the putative PUF binding site, loss of ncl-1 led to prominently up-regulated GFP reporter expression in the head region of the cguEx18 worms (Fig 4B). This observation of differential regulation thus implies that 1) there is an additional cis-acting element(s) in the fib-1 3' UTR, through which a yet unknown protein mediates brain-specific expression suppression, and/or 2) NCL-1 may functionally cooperate with other neuronal RNA-binding protein(s) to exert a context-dependent regulation of fib-1. This possibility of a modular organization of the NCL-1-based regulatory network, as well as its developmental implications, may be further resolved by genetic screens and/or biochemical characterization of NCL-1-interacting factors.
Worm transformation (microinjection and bombardment)
Germ line transformation by microinjection was performed as described by Mello and Fire [40]. Plasmids at a concentration of 100 ng/μl were injected into young adult N2 worms. An integrated line containing the plasmid Pfib-1::fib-1::gfp::3'UTRfib-1 in about a hundred copies (determined by RT-qPCR) was first obtained in the wild-type background (designated SJL1 cguIs1). A male of cguIs1 was then crossed with ncl-1(e1942) hermaphrodites, and GFP-positive worms were selected. This was followed by hermaphrodite selfing to generate homozygous worms [SJL14 ncl-1(e1942); cguIs1]. The same method was used to generate the other integration lines (see S1 Table), whereas strains SJL6 to SJL12 (S1 Table) were obtained by the bombardment method [41].
RNAi treatment
The RNAi library was obtained from Julie Ahringer's group [42][43][44]. Bacterial clones producing double-stranded RNA against each target gene were grown in LB broth containing ampicillin and tetracycline for 7 to 8 hrs, and subsequently induced to produce double-stranded RNA with 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) for 2 hrs. Concentrated bacteria were then seeded on RNAi plates (NGM agar, 1 mM IPTG, 100 mg/ml ampicillin, and 5 mg/ml tetracycline), onto which synchronized L1-L2 stage worms were placed and cultured for 36 hrs at 25°C. Young adult worms were collected for microscopy, RT-qPCR, and/or Western blot analyses.
Western blot
Protein extracts from embryos or worms at L4 or young adult stage were prepared by sonication and separated on 10% or 15% SDS-PAGE and transferred onto polyvinylidene fluoride (PVDF) membranes. Blocked membranes were then incubated with anti-FIB-1 (1:2,000 dilution, Santa Cruz) or anti-Actin (1:200,000 dilution, Millipore) antibody overnight at 4°C, and subsequently probed with secondary antibody-horseradish peroxidase conjugate (1:5,000 dilution, Sigma). Signals were detected with the ECL Western blot detection system (Thermo Scientific Inc., Waltham, MA).
RT-qPCR
Synchronized worms were collected by washing with M9 buffer and then subjected to sucrose density centrifugation to remove OP50 (E. coli) contamination. Total RNA was isolated from a frozen 1 ml aliquot (100 μl worm pellet dissolved in 1 ml TRIzol) by thawing and vigorous mixing according to the manufacturer's instructions. Genomic DNA was digested with DNase I (Promega). Reverse transcription reactions were performed with iScript Reverse Transcription Supermix for RT-qPCR (Bio-Rad) with 1 μg of RNA. Fifty ng of cDNA was used for each real-time PCR reaction, which was performed with the iCycler IQ real-time PCR detection system (Bio-Rad). For the quantitative detection of ncl-1, fib-1, act-1, gfp and 26S rRNA transcripts, the following primer pairs were respectively used (the act-1 transcript was simultaneously quantified as an internal control): Qncl-
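The primer list is truncated in this text. Independent of the specific primers, relative transcript levels normalized to the act-1 internal control are conventionally computed with the 2^-ΔΔCt method; the sketch below is our assumption of such a calculation, and all Ct values in it are made-up placeholders, not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # Livak 2^-ddCt method: normalize the target Ct to the act-1
    # reference within each sample, then compare sample to control.
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g., fib-1 in ncl-1(e1942) relative to wild type (illustrative Cts)
fold = relative_expression(ct_target=22.1, ct_ref=16.0,
                           ct_target_ctrl=23.0, ct_ref_ctrl=16.2)
print(f"relative fib-1 expression ~ {fold:.2f}-fold")
```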
Northern blot analysis
Synchronized late-L4 worms grown at 15°C or 25°C as indicated were homogenized by a bead-beating homogenizer (FastPrep-24, MP Biomedicals), and total RNA was isolated by acid guanidinium thiocyanate-phenol-chloroform extraction [45]. Total RNA was subjected to 1.2% agarose-formaldehyde gel electrophoresis (5 μg/lane) and transferred to a Hybond-N+ membrane (GE Healthcare). DNA probes were generated from PCR products amplified from C. elegans genomic DNA and labeled with 32P-dCTP (Perkin Elmer, PK-BLU513H) by hexamer priming. Primers for generating the ribosomal RNA species and actin probes were as described by Voutev et al. [46]. Hybridization was carried out at 55°C in 0.36 M Na2HPO4, 0.14 M NaH2PO4, 1 mM EDTA, 10% SDS, 25% formamide and 0.1 mg/ml salmon sperm DNA. Washes were done at 55°C sequentially in 4× SSPE, 4% SDS and 0.1× SSC, 0.1% SDS. Membranes were exposed to Kodak BioMax MS film.
Light microscopy and quantitative image analysis
To observe FIB-1::GFP expression, embryos or young adult worms of cguIs1 were pre-stained with WGA 555 (50 μg/ml) (Alexa Fluor 555 conjugate of wheat germ agglutinin, Invitrogen) at room temperature for 30 min (embryos) or 4 h (worms) and collected by washing 3 times with M9 buffer. They were then mixed with embryos or worms of ncl-1(e1942); cguIs1 in an equal ratio and mounted onto a 5% agar pad (worms) or a chambered coverglass (embryos) (Thermo) for image acquisition. Bright-field and fluorescence images were captured on an inverted or upright microscope (Leica DMIRE2 and DM2500) using a 10×/NA 0.3 air immersion objective lens and a cooled CCD (CoolSNAP K4, Roper Scientific). In order to distinguish the levels of GFP in the experimental and control embryos or worms within the same fluorescence microscope field, the average fluorescence intensity of the different strains in the same images was measured using Metamorph 7.7.10.0 offline (Molecular Devices) and quantitatively analyzed using Microsoft Excel software. For visualization of FIB-1::GFP expression and nucleolus size in worms, an upright microscope (Leica DM2500) with high-magnification, differential interference contrast (DIC) and fluorescence channels was used; images (shown in enlarged insets) were captured using a 63×/NA 1.4 oil immersion objective lens and a cooled CCD (CoolSNAP K4). Metamorph 7.7.10.0 and Microsoft Excel software were used to measure the nucleolus size.
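As a rough open-source stand-in for this Metamorph/Excel workflow (our sketch, not the pipeline the authors used), bright FIB-1::GFP foci can be segmented and their areas and mean intensities tabulated; the thresholding method and minimum object size below are illustrative parameter choices.

```python
import numpy as np
from skimage import filters, measure, morphology

def quantify_foci(gfp_img, min_size=20):
    # Segment bright foci by Otsu thresholding, drop tiny specks, then
    # report per-focus area (a proxy for nucleolar size) and mean
    # fluorescence intensity.
    mask = gfp_img > filters.threshold_otsu(gfp_img)
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    labels = measure.label(mask)
    props = measure.regionprops(labels, intensity_image=gfp_img)
    return [(r.area, r.mean_intensity) for r in props]

# Example on a synthetic image with one bright square "focus"
img = np.zeros((64, 64)); img[20:30, 20:30] = 1.0
print(quantify_foci(img))  # -> [(100, 1.0)]
```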
Deconvolution microscopy
For visualization of GFP signals in the vulva and seam cells, transgenic worms at the L4 stage were paralyzed and mounted onto a 5% agar pad for z-series image recording. The DIC and fluorescence signals were collected on a DeltaVision deconvolution microscope (PersonalDV, Applied Precision) using a 60×/NA 1.4 oil immersion objective lens and a cooled CCD (CoolSNAP HQ2, Roper Scientific). Metamorph software version 7.7.10.0 offline was used for image analysis.
Time-lapse image recording
Embryos of cguIs1 or ncl-1(e1942); cguIs1 as described above were plated onto a chamber coverglass for image acquisition. Phase contrast and fluorescence images were captured on an inverted microscope (Leica DMIRE2) using a 25×/NA 0.95 water immersion objective lens and an electron multiplying (EM) CCD (iXon ultra 897, Andor Technology). Images were recorded at 30s intervals and converted to pseudo-color using Metamorph software.
Statistical analysis
Statistical analyses were performed with a two-tailed Student's t-test for independent samples by using GraphPad Prism 5 software. P<0.05 was considered statistically significant.
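For completeness, the comparison described above maps directly onto SciPy's independent-samples t-test; a minimal sketch with made-up measurements (not data from the study) follows.

```python
from scipy import stats

# Two-tailed Student's t-test for independent samples, mirroring the
# GraphPad Prism analysis; both arrays are illustrative placeholders.
wt  = [2.1, 2.3, 1.9, 2.2, 2.0]   # e.g., wild-type nucleolar diameters (μm)
mut = [2.9, 3.1, 2.8, 3.3, 3.0]   # e.g., ncl-1(e1942)
t, p = stats.ttest_ind(wt, mut)   # two-sided by default
print(f"t = {t:.2f}, P = {p:.4f}")  # P < 0.05 -> statistically significant
```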
Supporting Information (WMV) S1 Table. Strains of worms generated in this study. | 2016-05-04T20:20:58.661Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "eb13769e3980c3e465e4adafd54f8dfa03ba8d7e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1005580&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb13769e3980c3e465e4adafd54f8dfa03ba8d7e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237566120 | pes2o/s2orc | v3-fos-license | The Prevalence and Genotype Distribution of Human Papillomaviruses Among Men in Henan Province of China
Background: This paper aimed to assess the prevalence of human papillomavirus (HPV) infection and the associations of sociodemographic and behavioral characteristics with HPV in unvaccinated men in Henan Province before the mass administration of the HPV vaccine, through a baseline survey. Methods: Between June 2015 and June 2020, 3,690 men were tested for HPV genotype at the Henan Provincial People's Hospital. HPV genotypes were detected by a polymerase chain reaction (PCR)-based hybridization gene chip assay. Results: The overall HPV infection rate was 29.97%. The most prevalent genotypes were HPV 6 (21.76%), 11 (12.68%), 16 (8.94%), 58 (5.37%), 18 (3.41%), 84 (3.25%), 61 (3.09%), and 81 (3.09%). Low-risk HPV (LR-HPV) infection (24.91%) and single infection (17.78%) were the most prevalent forms. The age-specific HPV distribution presented as a bimodal curve; the youngest age group (≤25 years) had the highest HPV infection rate (36.03%), followed by the 36-40-year-old group (33.68%). Men with junior high school education or above were more likely to have pure LR-HPV infection. Unmarried status and smoking were associated with increased single and LR-HPV infection. Men with multiple lifetime sex partners and those not using condoms were more likely to have LR-HPV infection. Conclusions: The data on the prevalence and HPV infection type distribution in men in Henan Province could serve as a valuable reference to guide nationwide screening. We provide a time-based estimate of the maximum impact of the HPV vaccine and critical reference measurements important for assessing the clinical benefits of HPV vaccination and the increase in non-vaccine HPV types.
HPV prevalence and genotype distribution are different between various nations and regions (5,6). Previous studies have mainly focused on HPV infection in women; epidemiological studies of HPV infection in men have been rare; the available data are still insufficient. Regarding HPV infection in women, it has been proposed that men play an important role as reservoirs and transmission agents. Therefore, studies are needed to outline HPV infection in men and help reduce HPV infection in women through contact with HPV-infected partners (7). This study aimed to investigate the HPV infection status in men in Henan Province to guide vaccine-based HPV prevention strategies.
Subjects
The study population consisted of 3,690 men (age, 20-85 years) attending a male outpatient clinic from June 2015 to June 2020 in Henan Province. A man was considered eligible to participate in the study if he met the following inclusion criteria: (1) aged 18-70 years; (2) resided in Henan Province; (3) had current or past sexual activity; (4) reported no previous diagnosis of penile cancer; (5) had not participated in an HPV vaccine clinical trial; (6) reported no prior diagnosis of human immunodeficiency virus (HIV) infection or AIDS; (7) was not currently receiving treatment for a sexually transmitted infection; (8) agreed to undergo an HPV test and participate in the present study.
Ethical Statement
This research was approved by the Ethics Committee of Henan Provincial People's Hospital (No. 2021062). All samples and data were collected after written informed consent was provided by the participants. The management and publication of patient information in this research were strictly in accordance with the Declaration of Helsinki, including confidentiality and anonymity; data were de-identified before analysis.
Specimen Collection
A single cytobrush was used to collect exfoliated cells from different penile areas: the dorsal and ventral area of the penile shaft, the external and internal surface of the prepuce, coronal sulcus, glans, and distal urethra. The cells were collected in a sampling tube and stored at 4 • C until processing. The men were admonished not to wash their external genitalia the morning of the collection to increase the number of cells in the collected samples.
HPV DNA Extraction, PCR Amplification, and Genotyping
All samples were stored in a specimen preservation medium and sent to our laboratory. DNA was obtained from disruption of the cells using the DNA Mag-Ax Kit (HybriBio Ltd.) in the automatic nucleic acid extraction instrument (HBNP-4801A
Statistical Analysis
Data analysis was performed using GraphPad Prism 5 statistical analysis software (GraphPad Software, La Jolla, CA). The chi-square test was used for comparisons between two groups. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated using unconditional logistic regression. P-values were two-sided, and differences were considered statistically significant at p < 0.05.
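A minimal sketch of the OR estimation step is shown below (ours, with placeholder data, not the study's analysis script): unconditional logistic regression yields log-odds coefficients whose exponentials are ORs, and exponentiating the coefficient confidence bounds gives the 95% CIs. The covariate names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 2))   # e.g., smoking, condom use (0/1)
y = rng.integers(0, 2, size=500)        # HPV positivity (0/1), placeholder

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
ors = np.exp(model.params)              # exponentiated coefficients = ORs
ci = np.exp(model.conf_int())           # exponentiated bounds = 95% CIs
for name, o, (lo, hi) in zip(["intercept", "smoking", "condom use"], ors, ci):
    print(f"{name}: OR = {o:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```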
Sociodemographic and Behavioral Characteristics of the Subjects
A total of 3,690 men aged 20-85 years were included in this study. The mean age was 41.6 ± 11.3 years. The sociodemographic and behavioral characteristics are described in Table 1. The majority of participants were aged between 26 and 55 years. Those who had attended primary school or below and those with junior high school education or above were roughly equal in number. Unmarried men outnumbered married men by more than two to one. About 60.62% of the men smoked. More than half had two or more sexual partners, and only 15.83% used a condom every time.
Prevalence and Genotype Distribution of HPV
As shown in Table 2, thirty-two of the 37 HPV genotypes that our HPV-DNA assay could detect were found, including 17 HR-HPV genotypes (16, 18, and others; the full list appears in Table 2). HPV 6 was the most prevalent genotype, while the HR-HPV infection rate was 11.90%. Moreover, HPV 6 was the most common LR genotype, and HPV 16 was the most common HR genotype among all patients. There was a significant difference between the HR and LR genotypes of HPV. The prevalence of the HR and LR genotypes is displayed in Figure 1.
Associations of Sociodemographic and Behavioral Characteristics With HPV
As shown in
Prevalence of Single and Multiple HPV Infection
As shown in Figure 2, single-type infection (17.78%) was more prevalent than multiple-type infection (12.20%) (Table 1, p < 0.001). Moreover, in the different age groups, the incidence of single- and multiple-type infections also differed significantly (Figure 3, p < 0.001).
Prevalence of HPV Grouped by Age
The participants were categorized into eight age groups at five-year intervals, from ≤25 to >55 years. The HPV infection prevalence across these groups presented as a bimodal curve, with the highest rate in the ≤25-year-old group (36.03%), followed by the 36-40-year-old group (33.68%) (Figure 5).
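Age-specific prevalence of this kind is a simple group-by computation; the sketch below (ours, with a placeholder table rather than the study data) shows one way to reproduce such a breakdown.

```python
import pandas as pd

# Placeholder per-participant table: age in years, hpv_pos as 0/1.
df = pd.DataFrame({"age": [24, 27, 38, 39, 52, 61],
                   "hpv_pos": [1, 0, 1, 1, 0, 0]})
bins = [0, 25, 30, 35, 40, 45, 50, 55, 200]
labels = ["<=25", "26-30", "31-35", "36-40", "41-45", "46-50", "51-55", ">55"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels)
prevalence = df.groupby("age_group", observed=False)["hpv_pos"].mean() * 100
print(prevalence.round(2))  # % HPV-positive per age group
```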
DISCUSSION
HPV infection in women has been widely studied worldwide. HR-HPV has been identified as the leading cause of cervical cancer, primarily in developing countries (8,9). Characterization of HPV infection and genotype distribution in men is a serious clinical issue, bearing on the prevention of genital cancer in men and, consequently, of HPV infection in women. Nonetheless, most studies on HPV infection in China have been conducted in women (10,11), and data on the epidemiology of HPV infection in men are quite rare. Moreover, the HPV infection rate varies between nations and regions (6). In the present study, we assessed the prevalence and genotype distribution of genital HPV in sexually active men in Henan Province, located in central China. A widespread immunization program with HPV vaccines would influence HPV genotype distribution (12), but our study participants had never been vaccinated against HPV. Therefore, our results provide preliminary information on HPV genotype-specific prevalence in a high-risk cohort of sexually active men in Henan Province. Thirty-two different HPV genotypes were detected in our study. The most common HPV genotypes were HPV 6 (24.88%) and HPV 11 (12.68%), consistent with other reports (13)(14)(15). In line with several studies on the prevalence and genotype distribution of HPV in male genital warts (16)(17)(18), HPV 16 was confirmed to be the most frequent HR-HPV genotype and the third most frequent HPV genotype overall, after HPV 6 and HPV 11. The results also placed HPV 58 and HPV 18 among the next most prevalent HR genotypes. Therefore, the vaccine-targeted HPV genotypes (HPV 6, HPV 11, HPV 16, HPV 58, and HPV 18) were among the most frequently detected HPV genotypes in our study. Potentially, the availability of the 9-valent HPV vaccine may allow us to prevent the most frequent HPV genotypes in men in our region.
HPV 16, 58, 51, 39, and 52 have been identified as the most common HR-HPV genotypes causing male genital warts in Shanghai, but in Guangdong Province the most common genotypes were HPV 52, 16, 81, and 58 (19, 20). In our study, the most common HR-HPV genotypes were HPV 16, 58, 18, and 39. We detected a high prevalence of vaccine-targeted HPV genotypes, which is attributable to the fact that men in this region are typically not vaccinated against HPV. For example, the 4-valent and 9-valent HPV vaccines can prevent HPV 6, 11, and 16. The 9-valent HPV vaccine does not cover HPV 84, 61, and 81; these therefore cannot be prevented by vaccination. This indicates that a new HPV vaccine should be considered to cover the most prevalent HPV genotypes in this region.
Multiple HPV infection has been associated with an increased risk of HPV persistence (21)(22)(23), though it was relatively infrequent in the present study. The infection rate with multiple HPV genotypes was 12.20% of all cases. This rate was lower than that found in previous studies, which reported multiple-genotype infection rates of 56.7% (13), 33.8% (15) and 59.7% (24). The variation between studies may be explained by differences in the detection protocols employed in different laboratories, the sampling approaches, and geographical variation in HPV genotype distribution. The prevalence of HPV infection was highest among men in the ≤25-year-old age group, and the difference between age groups was statistically significant (p = 0.001, Supplementary Table 2), whereas several other studies found few differences between ages (25,26). This study provides useful information about the epidemiology of genital HPV infection in men in Henan Province, which should be utilized when evaluating the efficacy of HPV vaccines in preventing vaccine-targeted HPV genotypes.
In conclusion, the prevalence of HPV infection was relatively high. Thirty-two different HPV genotypes were detected, and the most frequently detected genotypes were HPV 6, HPV 11, HPV 16, HPV 58, and HPV 18. We also found that unmarried status and smoking increased single and LR-HPV infection, and that men with multiple lifetime sex partners and those not using condoms were more likely to acquire LR-HPV infection. Understanding the epidemiological characteristics of HPV infection is essential to the development of prevention and control strategies for HPV.
In our study, even the 9-valent vaccine could not cover all of the HPV genotypes that we most frequently detected. Although some HPV infections in young people are temporary and can be naturally cleared by the immune system without resulting in clinical disease, they may still lead to continuous virus transmission to sexual partners. Therefore, it is necessary to monitor the infection status of HPV-positive men in our region.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Henan Provincial People's Hospital. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
HW, YY, and GL: conceptualization. HW, YY, and WY: data curation. GL and XL: investigation. XL and JZ: methodology. HW: project administration and resources. JZ: supervision. HW and YY: visualization, writing and editing. All authors contributed to the article and approved the submitted version. | 2021-09-20T13:16:15.254Z | 2021-09-20T00:00:00.000 | {
"year": 2021,
"sha1": "a2370515dfeed42bd6966aa4832192ec963ed975",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.676401/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2370515dfeed42bd6966aa4832192ec963ed975",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259634292 | pes2o/s2orc | v3-fos-license | A critical study on the role of industrial relation as an instrument in settling conflict between industries and their employees
Industrial relations or employment relations is the multidisciplinary academic field that studies the employment relationship, that is, the complex interrelations between employers and employees, labor/trade unions, employer organizations and the state. The newer name, "employment relations", is increasingly taking precedence because "industrial relations" is often seen to have relatively narrow connotations. Nevertheless, industrial relations has frequently been concerned with employment relationships in the broadest sense, including "non-industrial" employment relationships. This is sometimes seen as paralleling a trend in the separate but related discipline of human resource management. In simple terms, industrial relations deals with the worker-employer relationship in any industry. The government has attempted to make industrial relations healthier by enacting the Industrial Disputes Act 1947 to settle disputes and to reduce their frequency, which in turn improves relations. Industrial relations in countries, sub-regions and regions have been influenced by a variety of circumstances and actors, such as political philosophies, economic imperatives, the role of the State in determining the direction of economic and social development, the influence of unions and the business community, and the legacies of colonial governments. IR fulfilled the function of providing employees with a collective voice, and unions with the means to establish standardized terms and conditions of employment, not only within an enterprise but also across an industry, and sometimes across an economy. This was achieved through freedom of association, collective bargaining and the right to strike. Similar results were achieved in the South Asian sub-region, where political democracy, and sometimes socialist ideology, gave enormous bargaining power and influence over legislative outcomes even to unions with relatively few members.
Introduction
Industrial relations examines various employment situations, not just ones with a unionized workforce. However, according to Bruce E. Kaufman, "To a large degree, most scholars regard trade unionism, collective bargaining and labour-management relations, and the national labour policy and labour law within which they are embedded, as the core subjects of the field." Initiated in the United States at the end of the 19th century, the field took off in conjunction with the New Deal. However, it is generally regarded as a separate field of study only in English-speaking countries, having no direct equivalent in continental Europe. In recent times, industrial relations has been in decline as a field, in correlation with the decline in the importance of trade unions and with the increasing preference of business schools for the human resource management paradigm. Industrial relations has three faces: science building, problem solving, and ethical. In the science-building phase, industrial relations is part of the social sciences, and it seeks to understand the employment relationship and its institutions through high-quality, rigorous research. In this vein, industrial relations scholarship intersects with scholarship in labour economics, industrial sociology, labour and social history, human resource management, political science, law, and other areas. Industrial relations scholarship assumes that labour markets are not perfectly competitive and thus, in contrast to mainstream economic theory, that employers typically have greater bargaining power than employees. It also assumes that there are at least some inherent conflicts of interest between employers and employees (for example, higher wages versus higher profits) and thus, in contrast to scholarship in human resource management and organizational behaviour, that conflict is a natural part of the employment relationship. Industrial relations scholars therefore frequently study the diverse institutional arrangements that characterize and shape the employment relationship: from norms and power structures on the shop floor, to employee voice mechanisms in the workplace, to collective bargaining arrangements at company, regional, or national level, to various levels of public policy and labour law regimes, to varieties of capitalism (such as corporatism, social democracy, and neoliberalism) [1].
History of industrial relations
Pre-independence period of the industrial relations system
In the pre-independence days, workers were 'hired and fired', as the principle of demand and supply governed industrial relations. The employer was in a commanding position, and the conditions of employment and wages were very poor. The relationship between employers and workers during the period could be described as one of masters and servants. Workers' organisations during the period were either non-existent or in the nascent stage of emergence. The few workers' organisations set up during the period were mainly philanthropic organisations and lacked the elements of a modern union. When these conditions persisted despite the efforts of leaders, the way was paved for revolutionary movements. However, even by the end of the First World War, the trade union movement had not emerged. There were hardly any laws to protect the interests of workers except the Employers and Workmen (Disputes) Act, 1860, which was used to settle wage disputes. Most of the labour laws enacted during the period, such as the Workmen's Breach of Contract Act, 1859, the Employers and Workmen (Disputes) Act, 1860, and the Assam Labour Emigration Acts (1863-1901), were primarily intended to serve the interests of British employers. A notable feature of the industrial relations then existing in the country was the role of Jobbers or Sardars. In 1938, in order to meet the acute industrial unrest prevailing at the time, the Bombay government enacted the Bombay Industrial Relations (BIR) Act. For the first time, permanent machinery, called the Industrial Court, was established for settling disputes. This was replaced by the BIR Act, 1946, which was amended in 1948, 1949, 1953 and 1956. Soon after the Second World War, India faced many problems, such as a rise in the cost of living, scarcity of essential commodities, high population growth rates, massive unemployment, and an increasingly turbulent industrial relations situation.
Post-independence period of industrial relations system
After India attained independence, one of the significant steps taken in the field of industrial relations was the enactment of the Industrial Disputes Act, 1947, which not only provides for the establishment of permanent machinery for the settlement of industrial disputes but also makes awards binding and legally enforceable. Besides the Industrial Disputes Act, in December 1947 an industrial conference was held in India, where an appeal was made to labour and management, in the form of an Industrial Truce Resolution, to maintain industrial harmony. Some of the significant recommendations of the NCL were processed by the ILC and the Standing Labour Committee in 1970 and 1971, respectively, and the major policy decisions were taken up for implementation. Some of these recommendations relate to the statutory recognition of a representative union as the sole bargaining agent, to be determined by verification of paid membership, and to the appointment of industrial relations commissions (IRCs) in the States and at the Centre instead of the present tribunals. These recommendations were, however, never fully implemented, though some of them are in various stages of implementation, for instance those relating to workers' training, induction and education, working conditions, social security, and labour administration. The Essential Services Maintenance Act (ESMA) empowers the government to ban strikes, lay-offs and lockouts in what it deems to be "essential services", and to punish any person who participates in or instigates a strike deemed illegal under ESMA. Dunlop (1958) identified the main contextual variables, such as technology, labour and product markets, budgetary constraints, and the distribution of power within society, within a system involving groups of actors bound together by a set of beliefs. These input factors were seen to have an impact upon the rule-making output of the IR system; this framework is known as the systems approach. Baldev R. Sharma and P. S. Sundararajan, in their study "Organisational Determinants of Labour Management Relations in India", investigated the factors determining labour-management relations in 50 companies. Of the nine factors studied, the two included in the best equation, scope for advancement and grievance handling, were found to be the most critical determinants. Together these two factors accounted for 58 percent of the variation in labour-management relations across the 50 companies.
Need and significance of industrial relation
This study is necessary in the sense that the causes of conflict across various organizations in the country, especially in manufacturing organizations, have become a matter of concern to all well-meaning Nigerians. In 2012, there was industrial conflict across the country over the issue of minimum wages and other working conditions of employment, which affected the operational activities of many government parastatals and private organizations in both manufacturing and service industries. Secondly, this study will enable managers in the manufacturing industry to gain in-depth knowledge of the causes of conflict and of how to manage conflict in the manufacturing industry, not only in Nigeria but also across the world. So, in essence, this study seeks to ensure the smooth running of organizations and to enable the two sides (employers and workers) to work together harmoniously in pursuing and achieving organizational goals and objectives.

Alternate and systematic approaches to the study of industrial relations
The main approaches to the study of industrial relations are the systems approach to industrial relations (Dunlop's approach), the pluralist approach, the Marxist approach, sociological approaches, Gandhian approaches, and psychological approaches.

Objectives
The general objective of this study is to examine industrial conflict and its management strategies. The specific objectives are: to investigate the major causes of conflict in the company under study; to identify the conflict management strategies that the organizations under study use in solving conflict; and to find out the effective conflict management strategies in work organizations.
Research methodology
The study is exploratory and descriptive in nature. Primary data were collected through a questionnaire, and secondary data were collected from books, publications, company records, and websites.
Scope of the study
The study covers IR as a dynamic and developing socio-economic process; as such, there are almost as many definitions of IR as there are authors on the subject. IR is concerned with the systems and procedures used by unions and employers to determine the reward for effort and other conditions of employment, to protect the interests of the employed and their employers, and to regulate the ways in which employers treat their employees.
Findings and Conclusion
The current study was a comparative analysis of conflict and its management strategies. It established that conflict arises from various sources and is inevitable in organizational settings. It also highlighted conflict management strategies such as dominance, avoidance, smoothing, compromising, hierarchical decision making, and appeal procedures as voluntary means of conflict settlement in an organization, using three manufacturing companies.
The study adopted a correlational survey design of the ex post facto type to establish the relationship between conflict and its management strategies in work organizations. Two hundred and sixteen respondents were selected for the study using a multi-stage sampling technique. A well-structured questionnaire was administered, and the elicited information was analyzed using the t-test. The findings revealed that the main sources of conflict in the selected organizations were ineffective communication of grievances to top managers, poor government economic and industrial policies, and poor employee compensation and welfare. The managers used a combination of conflict management strategies, such as compromising (putting machinery in place to address the sources of conflict), intimidation of workers, and effecting necessary changes in processes and procedures; they also took advantage of problem-solving and dominating strategies. These were the most commonly used strategies for managing conflict among the managers. In addition, the results indicated that good conflict management strategies promote better labour-management relations, less disruption of work activities, and improved profitability.
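A minimal sketch of the kind of t-test analysis described above, assuming questionnaire items scored on a Likert-type scale, is shown below. The scores are hypothetical placeholders, not the study's data.

```python
# Independent-samples t-test on two groups of questionnaire scores.
from scipy.stats import ttest_ind

# Hypothetical Likert-scale ratings of a conflict management strategy.
managers = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
workers  = [3, 2, 4, 3, 2, 3, 3, 2, 4, 2]

t, p = ttest_ind(managers, workers)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 suggests the groups differ
```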
In the light of these results, a series of recommendations is presented: managers should develop diverse but appropriate strategies to resolve and manage conflicts as they arise, before they escalate to an unmanageable level; managers should occasionally stimulate functional conflict by encouraging divergent views and by rewarding staff and units/departments for outstanding performance; and proper communication procedures should be put in place to resolve conflict. For instance, when a disagreement arises among employees, it should be reported to management, which should then take statements from the parties involved, examine the issue, and make recommendations on how to resolve the conflict. | 2023-07-11T16:24:40.784Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ca124a1a91c5f0d3a5630b7007547cc549f4972e",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.allfinancejournal.com/article/view/95/4-1-2",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8e461bbc787e88a52fdcc0d0b382cba6f616bf9c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
49415144 | pes2o/s2orc | v3-fos-license | Comparison of training and detraining on redox state of rats: gender specific differences
Given that the oxidative stress response induced by training/detraining has still not been clarified and may be influenced by gender, the aim of our investigation was to compare the effects of swimming training and detraining on oxidative and antioxidative parameters in rats, with a special focus on sex differences. Wistar albino rats (n = 64) were divided into 4 groups: control, trained group, and groups exposed to 2 and 4 weeks of detraining. Each group included two subgroups: males and females. After sacrificing, hearts were isolated and retrogradely perfused according to the Langendorff technique. Levels of superoxide anion radical, hydrogen peroxide, nitrites and thiobarbituric acid reactive substances were measured in plasma and coronary venous effluent, while reduced glutathione and the activities of superoxide dismutase and catalase were measured in erythrocytes. Our results indicate that swimming training does not promote oxidative damage, nor does it act protectively within the heart. However, 2 and 4 weeks of detraining led to a partial loss of exercise-induced adaptation. It seems that moderate-intensity physical exercise of sufficient duration leads to beneficial adaptations, which may be partially lost during a detraining period. Positive antioxidative effects of training remained longer in males. The findings of the present study may help to elucidate the effects of training and detraining on the modulation of redox homeostasis, especially from the aspect of gender differences.
Introduction
Under normal physiological conditions, pro-oxidants are continuously produced as a result of essential metabolic processes and environmental factors (Pham-Huy et al. 2008). Aerobic organisms possess an antioxidant system consisting of a variety of enzymatic and nonenzymatic antioxidants that serve to counterbalance the effects of oxidants (Birben et al. 2012). Oxidative stress occurs when there is an imbalance between oxidants and antioxidants in favour of the oxidants. Since pro-oxidants are very reactive molecules, they can cause tissue damage by interacting in a destructive manner with practically every cellular component (Birben et al. 2012; Rahal et al. 2014).
Regular physical exercise has been shown to exert a myriad of beneficial health effects, such as promotion of health and lifespan, improvement of quality of life, and a decrease in the incidence of lifestyle-related diseases (Macera et al. 2003; Vina et al. 2012). The groundbreaking research which gave the first data on the association between exercise and oxidative stress was conducted by Dillard et al. (1978) approximately four decades ago. This finding stimulated further curiosity of many authors regarding the role of reactive oxygen species (ROS) and reactive nitrogen species (RNS) in skeletal muscle and other metabolically active organs during physical exercise (Davies et al. 1982; Ammeren et al. 1992; Gomez-Cabrera et al. 2008). Numerous papers have been published showing that low physiological levels of pro-oxidants produced in the muscles play an important role in maintaining their normal tone and contractility. On the contrary, excessive production of ROS leads to contractile dysfunction, followed by muscle weakness and fatigue (Powers and Jackson 2008; Rahal et al. 2014). It has been established that swimming training causes changes in antioxidant enzymes and alters muscle gene expression, thus contributing to exercise-induced adaptations of skeletal muscle (Venditti and Di Meo 1997; Elikov 2016; Ruzicic et al. 2016). However, it should be taken into consideration that various factors influence the oxidative stress response to swimming training, such as the type of exercise, its intensity and duration, and the gender and age of athletes (Ruzicic et al. 2016).
Interestingly, prolonged cessation of a training stimulus, known as detraining, may abolish training-induced adaptations in oxidative stress and antioxidant status markers (Fatouros et al. 2004; Agarwal et al. 2012). Nevertheless, there is a lack of data on the oxidative stress response to detraining.
Gender differences in the response to exercise-induced oxidative stress have gained increased attention despite controversial results (Liu et al. 2000; Balci and Pepe 2012; Farhat et al. 2017). To the best of our knowledge, only a few studies have explored potential gender differences in oxidative stress markers and the antioxidant defense system in the period of detraining.
Given that the oxidative stress response induced by detraining has still not been clarified and may be influenced by gender, the aim of our investigation was to compare the effects of swimming training and detraining on oxidative stress parameters and parameters of the antioxidant defense system in rats, with a special focus on sex differences.
Materials and Methods
The study was performed in the laboratory for cardiovascular physiology of the Faculty of Medical Sciences, University of Kragujevac, Serbia. It was approved by, and performed in accordance with, the regulations of the Faculty's Ethics Committee for the welfare of laboratory animals, the principles of Good Laboratory Practice, and European Council Directive 86/609/EEC.
Subjects
Sixty-four Wistar albino rats (males and females, eight weeks old at the beginning of the experiment, body weight 200 ± 50 g) were included in the study. The animals were housed at a temperature of 22 ± 1°C, with 12 hours of automatic illumination daily. They consumed commercial rat food (20% protein rat food, Veterinary Institute Subotica, Serbia) and water ad libitum.
Exercise training protocol
Rats were divided into 4 groups, and each group consisted of 2 subgroups, males (M) and females (F). The first group was the control group (C), with subgroups CM and CF (n = 8 for each subgroup). The second group was the trained group (T), with subgroups TM and TF (n = 8 for each subgroup). The third group included animals detrained for 2 weeks (D2), i.e., animals subjected to training followed by a 2-week detraining period, with subgroups DM2 and DF2 (n = 8 for each subgroup). The fourth group consisted of animals detrained for 4 weeks (D4), i.e., animals subjected to training followed by 4 weeks of detraining, with subgroups DM4 and DF4 (n = 8 for each subgroup). Rats from the control group were placed in the pool 5 times a week for 3 minutes to reproduce water-induced stress (Lima et al. 2013; Stojanovic Tosic et al. 2015). The trained group included rats subjected to moderate-intensity exercise, namely swimming training (8 weeks, 5 days/week, 60 min/day) in a specially designed pool according to the protocol in Figure 1A. In the week before the experiment, rats were gradually exposed to swimming from 5 to 15 minutes per day, in order to familiarize them with the swimming exercise. Subsequently, they started the 8-week training process. Rats were sacrificed a day after completing the training process. On the same day, rats of the same age from the C groups were sacrificed as well. Animals from the DM2, DF2 and DM4, DF4 groups were sacrificed after 2 and 4 weeks of training cessation, respectively (Figure 1).
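For bookkeeping, the 8-subgroup design can be represented compactly as follows; this is a sketch for analysis code, with labels following the paper.

```python
# Compact representation of the experimental design described above.
DESIGN = {
    # label: (sex, weeks_of_training, weeks_of_detraining, n)
    "CM":  ("male",   0, 0, 8), "CF":  ("female", 0, 0, 8),
    "TM":  ("male",   8, 0, 8), "TF":  ("female", 8, 0, 8),
    "DM2": ("male",   8, 2, 8), "DF2": ("female", 8, 2, 8),
    "DM4": ("male",   8, 4, 8), "DF4": ("female", 8, 4, 8),
}

assert sum(n for *_, n in DESIGN.values()) == 64  # all 64 rats accounted for
```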
Isolated rat heart preparation
After short-term ketamine/xylazine-induced narcosis, rats were sacrificed by decapitation. The chest was then opened via midline thoracotomy. The hearts were immediately removed, immersed in cold saline, and mounted on the stainless steel cannula of a Langendorff perfusion apparatus to provide retrograde perfusion under gradually increasing coronary perfusion pressure (CPP from 40 to 120 cmH2O). Krebs-Henseleit buffer was used for retrograde perfusion (in mmol/l: NaCl 118, KCl 4.7, CaCl2·2H2O 2.5, MgSO4·7H2O 1.7, NaHCO3 25, KH2PO4 1.2, glucose 11, and pyruvate 2). The buffer was equilibrated with 95% O2 and 5% CO2, with a pH of 7.4 and a temperature of 37°C. Following the establishment of heart perfusion, the preparations were stabilized for 30 min at a basal coronary perfusion pressure of 70 cmH2O. After the stabilization period, the perfusion pressure was reduced to 50 and 40 cmH2O and then gradually increased to 60, 80, 100 and 120 cmH2O to establish coronary autoregulation.
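As a convenience, the stated millimolar composition can be converted to weigh-in amounts per litre, as sketched below. The molar masses are textbook values; pyruvate is assumed to be supplied as sodium pyruvate, which is an assumption, as the salt form is not stated in the paper.

```python
# Convert the Krebs-Henseleit composition (mmol/l) to grams per litre.
COMPOSITION_MM = {  # target concentration, mmol/l
    "NaCl": 118, "KCl": 4.7, "CaCl2.2H2O": 2.5, "MgSO4.7H2O": 1.7,
    "NaHCO3": 25, "KH2PO4": 1.2, "glucose": 11, "Na-pyruvate": 2,
}
MOLAR_MASS = {  # g/mol (textbook values; Na-pyruvate form is an assumption)
    "NaCl": 58.44, "KCl": 74.55, "CaCl2.2H2O": 147.01, "MgSO4.7H2O": 246.48,
    "NaHCO3": 84.01, "KH2PO4": 136.09, "glucose": 180.16, "Na-pyruvate": 110.04,
}

for salt, mm in COMPOSITION_MM.items():
    grams = mm / 1000 * MOLAR_MASS[salt]  # mmol/l -> mol/l -> g/l
    print(f"{salt:>12}: {grams:.3f} g per litre")
```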
Biochemical analysis
In order to test the systemic oxidative stress response to training, blood samples for biochemical analysis were collected from the jugular vein after animal sacrifice. After centrifugation of heparinised venous blood, plasma and erythrocytes were separated. In plasma, the following parameters of redox balance were determined: the levels of superoxide anion radical (O2−), nitrites (NO2−), hydrogen peroxide (H2O2) and the index of lipid peroxidation (measured as thiobarbituric acid reactive substances, TBARS). Superoxide dismutase (SOD), catalase (CAT) and reduced glutathione (GSH) were determined in erythrocyte samples. At each CPP, coronary venous effluent was collected for the determination of the levels of pro-oxidants (O2−, NO2−, H2O2 and TBARS). The pro-oxidants were analyzed in plasma and in coronary venous effluent by the same methods, except for NO2−, where the protocols differ (see below). Biochemical parameters were measured spectrophotometrically, using a UV-1800 Shimadzu UV spectrophotometer (Japan).
Superoxide anion radical determination (O2−)
The level of superoxide anion radical (O2−) was measured using the nitro blue tetrazolium (NBT) reaction in TRIS buffer combined with the plasma sample or coronary venous effluent. The measurement was performed at a wavelength of 530 nm. For O2− determination in coronary venous effluent, the Krebs-Henseleit solvent was used as the blank control, while for plasma samples distilled water served as the blank control (Auclair and Voisin 1985).
Hydrogen peroxide determination (H2O2)
The protocol for the measurement of hydrogen peroxide (H2O2) is based on the oxidation of phenol red in the presence of horseradish peroxidase. A 200 μl sample was combined with 800 μl of phenol red solution (PRS) and 10 μl of horseradish peroxidase (POD, 1:20). The level of H2O2 was measured at 610 nm. For H2O2 determination in coronary venous effluent, the Krebs-Henseleit solvent was used as the blank control, while for plasma samples distilled water served as the blank control (Pick and Keisari 1980).
Nitrite determination (NO2−)
Nitric oxide (NO) decomposes rapidly to form stable nitrite/nitrate metabolites. Nitrite (NO2−) was determined as an index of nitric oxide production with Griess reagent. For NO2− determination in plasma, 0.1 ml of 3 N perchloric acid (PCA), 0.4 ml of 20 mM ethylenediaminetetraacetic acid (EDTA), and 0.2 ml of plasma were kept on ice for 15 min, then centrifuged for 15 min at 6,000 rpm. After pouring off the supernatant, 220 μl of K2CO3 was added. Nitrites were measured at 550 nm. Distilled water was used as a blank probe.
For NO2− determination in coronary venous effluent, 0.5 ml of the perfusate was precipitated with 200 μl of 30% sulfosalicylic acid, mixed for 30 min and centrifuged at 3,000 × g. Equal volumes of the supernatant and Griess reagent were mixed and stabilized for 10 min in the dark, and the sample was then measured spectrophotometrically at a wavelength of 543 nm. Nitrite concentrations were determined using sodium nitrite as the standard (Green et al. 1982).
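The quantification step common to these assays, reading concentrations off a standard curve built from the sodium nitrite standards, can be sketched as follows; the absorbance values are illustrative placeholders, not measured data.

```python
# Linear standard curve for the Griess assay and concentration read-off.
import numpy as np

standards_uM = np.array([0, 5, 10, 25, 50, 100])               # NaNO2 standards
absorbance   = np.array([0.01, 0.06, 0.11, 0.27, 0.52, 1.05])  # A at 543 nm

slope, intercept = np.polyfit(standards_uM, absorbance, 1)

def nitrite_conc(a543):
    """Convert a sample absorbance at 543 nm to nitrite concentration (uM)."""
    return (a543 - intercept) / slope

print(f"{nitrite_conc(0.33):.1f} uM")  # e.g. one coronary effluent sample
```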
Determination of the index of lipid peroxidation measured as TBARS
The degree of lipid peroxidation in the sample (plasma or coronary venous effluent) was estimated by measuring TBARS using 1% thiobarbituric acid (TBA) in 0.05 M NaOH, incubated with the sample at 100°C for 15 min and read at 530 nm. The TBA extract was obtained by combining a 0.8 ml sample with 0.4 ml of trichloroacetic acid (TCA); the samples were then kept on ice for 10 min and centrifuged for 15 min at 6,000 rpm. The Krebs-Henseleit solvent was used as the blank control when TBARS was determined in coronary venous effluent, while for plasma samples distilled water was used (Ohkawa et al. 1979).
Determination of antioxidant enzymes (CAT, SOD)
Isolated RBCs were washed three times with three volumes of ice-cold 0.9 mmol/l NaCl, and hemolysates containing about 50 g Hb/l (prepared according to McCord and Fridovich 1969) were used for the determination of CAT activity (Beutler 1982). Then 50 μl of CAT buffer, 100 μl of sample, and 1 ml of 10 mM H2O2 were added to the samples. Detection was performed at 360 nm. SOD activity was determined by the epinephrine method (Misra and Fridovich 1972). A 100 μl lysate and 1 ml of carbonate buffer were mixed, and then 100 μl of epinephrine was added. Detection was performed at 470 nm. Distilled water was used as a blank probe.
Determination of reduced glutathione (GSH)
The level of GSH was determined spectrophotometrically, based on GSH oxidation with 5,5′-dithiobis(2-nitrobenzoic acid). The GSH extract was obtained by combining 0.1 ml of 0.1% EDTA, 400 μl of haemolysate, and 750 μl of precipitation solution (containing 1.67 g of metaphosphoric acid, 0.2 g of EDTA and 30 g of NaCl, made up with distilled water to 100 ml; the solution is stable for 3 weeks at +4°C). After mixing on a vortex and extraction on cold ice (15 min), the mixture was centrifuged at 4,000 rpm (10 min). Distilled water was used as a blank probe. Measurement was performed at 420 nm. The concentration is expressed as nanomoles per millilitre of RBCs (Beutler 1975).
Statistical analysis
IBM SPSS Statistics 20.0 for Windows was used for statistical analysis. Values are expressed as mean ± standard deviation (SD). Descriptive statistics were used to calculate the arithmetic mean with dispersion measures (standard deviation, SD, and standard error, SE). The distribution of the data was checked by the Shapiro-Wilk test. Where the distribution between groups was normal, statistical comparisons were performed using one-way ANOVA with Tukey's post hoc test for multiple comparisons. The Kruskal-Wallis test was used for comparisons between groups where the distribution of the data was not normal. Values of p < 0.05 were considered statistically significant, while values of p < 0.01 were considered highly statistically significant.
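A sketch of the equivalent decision flow in Python is shown below (the study used SPSS). The group data are simulated placeholders, and scipy.stats.tukey_hsd requires a recent SciPy release.

```python
# Normality check, then parametric or non-parametric group comparison.
import numpy as np
from scipy.stats import shapiro, f_oneway, kruskal, tukey_hsd

rng = np.random.default_rng(0)
# Four simulated subgroups (n = 8 each) for one redox parameter.
groups = [rng.normal(loc, 1.0, size=8) for loc in (5.0, 6.5, 6.0, 5.5)]

normal = all(shapiro(g).pvalue > 0.05 for g in groups)
if normal:
    stat, p = f_oneway(*groups)
    print(f"ANOVA: F = {stat:.2f}, p = {p:.4f}")
    if p < 0.05:
        print(tukey_hsd(*groups))  # pairwise post hoc comparisons
else:
    stat, p = kruskal(*groups)
    print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p:.4f}")
```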
Results

Levels of superoxide anion radical (O2−)
There was a significant difference between groups in the level of O2− (F(7,56) = 8.229, p = 0.00). Significantly lower levels were noticed in trained females compared to 2 weeks detrained females (p = 0.007), and in trained males in comparison to 2 weeks (p = 0.008) and 4 weeks detrained males (p = 0.007). There was a decrease in 4 weeks detrained females compared to 2 weeks detrained females (p = 0.009). The level of O2− in 4 weeks detrained females was significantly lower than in 4 weeks detrained males (p = 0.006) (Figure 6A).
Levels of nitrites (NO2−)
A significant difference between groups in the level of NO2− was noticed (F(7,56) = 9.125, p = 0.00). There was a decrease in the level of NO2− in 2 weeks detrained females in comparison to trained females (p = 0.024), and in 2 weeks detrained males in comparison to trained males (p = 0.029). The level of this parameter in 4 weeks detrained males was decreased compared to trained males (p = 0.028). In addition, the level of NO2− in 4 weeks detrained females was increased compared to 2 weeks detrained females (p = 0.032). In the 4 weeks detrained group, the level of NO2− was significantly higher in female rats than in male rats (p = 0.026) (Figure 6B).
Levels of hydrogen peroxide (H2O2)
According to the results of one-way ANOVA, there was a statistically significant difference between groups in the level of H2O2 (F(7,56) = 8.125, p = 0.00). A decrease in the level of H2O2 was revealed in 2 weeks detrained males compared to trained males (p = 0.009) and in 2 weeks detrained females compared to trained females (p = 0.008). In addition, there was a significant decrease in the level of this parameter in 4 weeks detrained males compared to trained males (p = 0.007). In 4 weeks detrained females, levels were increased compared to 2 weeks detrained females (p = 0.006). In the group of rats subjected to 4 weeks of detraining, levels of H2O2 were higher in females (p = 0.004) (Figure 6C).
Levels of TBARS
There was a significant difference between groups in the level of TBARS (F(7,56) = 5.125, p = 0.01). The level of TBARS in plasma increased in trained groups compared to controls in females (p = 0.07) and males (p = 0.004). An increase in the level of this parameter in 4 weeks detrained females compared to trained females (p = 0.029) and to 2 weeks detrained females (p = 0.034) was noticed as well (Figure 6D).
Activity of superoxide dismutase (SOD)
The activity of SOD differed between groups (F(7,56) = 7.221, p = 0.00). A significant increase in the activity of this enzyme was observed in trained males in comparison to control males (p = 0.023). There was a decrease in the activity of SOD in 2 weeks detrained females (p = 0.021) and 4 weeks detrained females (p = 0.009) compared to trained females. In 4 weeks detrained males, the activity of SOD was decreased in comparison to 2 weeks detrained males (p = 0.006) and trained males (p = 0.008). There was a statistically significant increase in SOD activity in male rats compared to female rats in the period of 2 weeks of detraining (p = 0.003) (Figure 7A).
Activity of catalase (CAT)
A statistically significant difference between groups in the activity of catalase was found (F(7,56) = 10.221, p = 0.00). There was an increase in the activity of catalase (CAT) in the trained group compared to the control group in females (p = 0.008), while in males the difference was not significant. In 2 weeks detrained females, the activity was decreased compared to trained females (p = 0.009). A decrease in the activity of CAT was observed in 4 weeks detrained females compared to trained females (p = 0.032), and that value was increased compared to 2 weeks detrained females (p = 0.029). An increased activity was noticed in 4 weeks detrained males compared to trained males (p = 0.009) and 2 weeks detrained males (p = 0.016). The only difference between males and females was found in the trained groups, where females had higher CAT activity (p = 0.006) (Figure 7B).
Levels of reduced glutathione (GSH)
There was a statistically significant difference between groups in the level of GSH (F(7,56) = 4.298, p = 0.02). The level of reduced glutathione (GSH) was significantly increased in trained groups compared to control groups in females (p = 0.002) and males (p = 0.003). In 2 weeks detrained males, values were increased compared to trained males (p = 0.027). In the 4 weeks detrained males group, the level of GSH was increased compared to trained males (p = 0.022). In addition, the level of GSH was lower in 4 weeks detrained females than in 2 weeks detrained females (p = 0.021). A gender difference was noticed between 4 weeks detrained females and 4 weeks detrained males, with values in female rats lower than in male rats (p = 0.018) (Figure 7C).

Discussion

A potential source of oxidative stress during anaerobic training is increased aerobic metabolism (Stankovic and Radovanovic 2012). ROS and RNS production during exercise follows the principle of hormesis and may represent an adaptive response of cells to stressors such as physical activity. The responses of biological systems may be described by a bell-shaped curve whose two endpoints are inactivity and overtraining (Stojanovic Tosic et al. 2015; Radak et al. 2017).
Numerous investigations aiming to examine the influence of aerobic exercise on oxidative stress markers have mostly focused on treadmill or cycle ergometer protocols (Powers et al. 2016). We chose to examine the effects of swimming training because swimming is considered a natural ability of rats and has been proposed as a convenient model for studying the physiological changes and stress response to training (Balci and Pepe 2012; Araujo et al. 2015).
The results of our study clearly show that 8 weeks of swimming training led to a decrease in almost all pro-oxidants measured in the heart, both in male and female rats. Analysis of the aforementioned parameters in the coronary venous effluent during coronary autoregulation reflects oxidative stress in the endocardium of the left ventricle as well as in the endothelium of the coronary circulation. In order to complete the picture of the role of oxidative stress in the physiology of effort, we investigated the systemic oxidative stress response to training and detraining. Generally viewed, there were no changes in the release of the measured pro-oxidants in plasma, suggesting that the applied intensity and duration of swimming training may affect only local production of ROS (in the heart), while the systemic response was not changed. In addition, bearing in mind that the release of pro-oxidants was even decreased in the heart, it seems that swimming training of this type does not promote oxidative damage, nor does it act protectively within the heart.
Regarding the components of the antioxidant defense system (SOD, CAT and GSH), training led to a significant increase in GSH values in both sexes, in CAT in females, and in SOD in males. It can be assumed that the effects of training on these antioxidant parameters depend on their chemical characteristics. Also, the results for these enzymes cannot be interpreted independently of the O2−/H2O2 dynamic. In females, the unchanged activity of SOD in training induced less scavenging of O2−, leading to higher H2O2 values (compared to the 2 weeks detrained group), which can induce enhanced CAT activity. The increased activity of antioxidants is in correlation with the unaltered levels of pro-oxidants in the plasma and may explain these results.
Our results are in agreement with the study conducted by Balci and Pepe, who found decreased malondialdehyde (MDA) levels in the heart of female rats at rest; however, they did not observe any change in MDA and NO levels in the heart of male rats (Balci and Pepe 2012). Similarly, we did not reveal a difference in the level of NO2− in the heart between trained and untrained rats of either sex at rest. A potential explanation for this may be the interaction of NO with reactive oxygen species (ROS), particularly O2−. The significantly lower levels of O2− in the present study may be a consequence of this interaction, resulting in the generation of peroxynitrite (ONOO−).
A group of researchers whose methodology differed from ours in the duration of swimming training noticed that this type of activity reduced lipid peroxidation in the heart (Venditti and Di Meo 1997). It should be taken into consideration that pro-oxidants detected in blood plasma and erythrocytes reflect the redox state of all components involved in the motor act during physical exercise. In that sense, it is logical to expect different values of the oxidative stress parameters measured in the heart and in plasma (Elikov 2016). Furthermore, Hu et al. (2000) observed no change in the lipid peroxidation level in the heart, which may be explained by a period of swimming training (7 days, 45 minutes per day) too short to establish positive adaptations to exercise.
Regarding the antioxidant defense system, our results are not in accordance with those of Balci and Pepe (2012), who revealed that 8 weeks of swimming training caused a decrease in SOD activity and did not affect total GSH levels in the rat heart. Furthermore, Lima and co-workers demonstrated an increased reduced glutathione (GSH) content, a higher reduced/oxidized glutathione (GSH/GSSG) ratio, and higher superoxide dismutase activity in liver mitochondria after a 6-week swimming training protocol (Lima et al. 2013). Others showed an increase in serum superoxide dismutase activity induced by swimming as well (Botezelli et al. 2011). We expected gender differences in the response to training, since it has been reported that female rats often show lower oxidative damage than males (Stankovic and Radovanovic 2012). However, the values of all measured pro-oxidants were similar in female and male rats after training.
It is known that the exercise-induced adaptive process is reversible with respect to cardiovascular function and mitochondrial enzyme activity (Mujika and Padilla 2000). After 2 weeks of detraining, the release of O2− and H2O2 in the heart was increased compared to the values in training. Regarding the systemic response, we noticed the same trend in O2− production, while the levels of NO2− and H2O2 were decreased in comparison to the levels in training. An explanation for the decreased plasma level of H2O2 in the detraining period may be the increased catalase activity during training, which catalyzed the decomposition of hydrogen peroxide to water and oxygen. Furthermore, SOD and CAT activities were lower after 2 weeks of training cessation in females, and the level of GSH was higher in males in comparison with the values in training. The gender differences we noticed were an increased activity of SOD in males and a higher TBARS production in the heart of females.
We detected an increase in the cardiac release of O2− in both sexes, and of H2O2 in males, after four weeks of training cessation compared to the values in training. Interestingly, after 4 weeks of detraining, the levels of most of the measured pro-oxidants in females were similar to those in training. Regarding the parameters of the antioxidant defense system, 4 weeks of detraining led to a decrease in SOD activity in both sexes and in CAT in females compared with training. On the contrary, GSH and CAT values in males remained increased in detraining. There are data showing that increased total antioxidant capacity was associated with increased circulating CD34+/VEGFR2+ cells in detraining (Witkowski et al. 2010).
Radak et al. examined the effects of 8 weeks of swimming and 8 weeks of detraining on the level of free radical species in the cerebellum. They proved that the positive effects of training were maintained during detraining (Radak et al. 2006). Others showed that 16 weeks of walking/jogging at 50-80% of HRmax decreased MDA levels and increased total antioxidant capacity (TAC) and glutathione peroxidase (GPX) activity (Fatouros et al. 2004). However, after 4 months of training cessation those effects were eliminated. It has been previously reported that the effects of treadmill training on the paraventricular nucleus in hypertensive rats reversed after 2 weeks of training cessation (Agarwal et al. 2012).
Although the estrogen 17β-estradiol and different levels of ferritin may be responsible for the higher antioxidant protection noticed in females compared to males, we revealed lower antioxidant protection in females during detraining (Català-Niell et al. 2008; Stankovic and Radovanovic 2012). Based on our results, we may hypothesize that there are probably other mechanisms, independent of changes in estrogen and iron metabolism, contributing to sex differences in the oxidative-stress response to exercise cessation. Besides the mentioned mechanisms involved in the effects of training/detraining on the diversity between male and female antioxidative status, a very recent study on rats has shown a gender difference in mitochondrial function, which can be affected by exercise or its cessation (Farhat et al. 2017). It can be assumed that diminished mitochondrial activity can lead to depressed production of mitochondrial SOD and thus impair the functioning of cellular antioxidant pathways (Macak-Safranko et al. 2011). The lower antioxidant values in our study are consistent with the higher pro-oxidant levels noticed in females after 4 weeks of detraining.
We confirmed that sexual dimorphism in oxidative capacity exists. These results suggest different dynamics of production of specific pro-oxidants between the sexes during the period of training cessation. One of the limitations of our investigation was the absence of techniques through which the cellular mechanisms of the observed effects could be demonstrated. Therefore, further studies are necessary for a better understanding of the possible mechanisms underlying the gender differences in the response to training and detraining.
Our results illustrate that moderate-intensity physical exercise of sufficient duration leads to beneficial adaptations, manifested as an improvement of the antioxidant defense system. In addition, these results suggest that 2 and 4 weeks of training cessation may lead to a partial loss of exercise-induced adaptation. Positive antioxidative effects of training remained longer in males. The findings of the present study may help to elucidate the effects of training and detraining on the modulation of redox homeostasis, especially from the aspect of gender differences. | 2018-07-04T00:07:13.841Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "d070641ce26fc477862faa960c3b0718e8b23a8e",
"oa_license": null,
"oa_url": "http://www.elis.sk/download_file.php?product_id=5696&session_id=trhsql1c41bshp2jpp430rjvb3",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "15ed85371589f78e83e1f94afa64e0f67dc050a3",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195767485 | pes2o/s2orc | v3-fos-license | Improved hardness for H-colourings of G-colourable graphs
We present new results on approximate colourings of graphs and, more generally, approximate H-colourings and promise constraint satisfaction problems. First, we show NP-hardness of colouring $k$-colourable graphs with $\binom{k}{\lfloor k/2\rfloor}-1$ colours for every $k\geq 4$. This improves the result of Bul\'in, Krokhin, and Opr\v{s}al [STOC'19], who gave NP-hardness of colouring $k$-colourable graphs with $2k-1$ colours for $k\geq 3$, and the result of Huang [APPROX-RANDOM'13], who gave NP-hardness of colouring $k$-colourable graphs with $2^{k^{1/3}}$ colours for sufficiently large $k$. Thus, for $k\geq 4$, we improve from known linear/sub-exponential gaps to exponential gaps. Second, we show that the topology of the box complex of H alone determines whether H-colouring of G-colourable graphs is NP-hard for all (non-bipartite, H-colourable) G. This formalises the topological intuition behind the result of Krokhin and Opr\v{s}al [FOCS'19] that 3-colouring of G-colourable graphs is NP-hard for all (3-colourable, non-bipartite) G. We use this technique to establish NP-hardness of H-colouring of G-colourable graphs for H that include but go beyond $K_3$, including square-free graphs and circular cliques (leaving $K_4$ and larger cliques open). Underlying all of our proofs is a very general observation that adjoint functors give reductions between promise constraint satisfaction problems.
Introduction
Graph colouring is one of the most fundamental and studied problems in combinatorics and computer science. A graph G is called k-colourable if there is an assignment of colours {1, 2, . . . , k} to the vertices of G so that any two adjacent vertices are assigned different colours. The chromatic number of G, denoted by χ(G), is the smallest integer k for which G is k-colourable. Deciding whether χ(G) ≤ k appeared on Karp's original list of 21 NP-complete problems [Kar72], and is NP-hard for every k ≥ 3. More generally, in the promise constraint satisfaction problem PCSP(G, H) one fixes two objects G and H with G → H; these need not be graphs but can be arbitrary relational structures. Note that if G = H then we obtain the (search version of the) standard H-colouring and constraint satisfaction problem.
PCSPs have been studied as early as in the classic work of Garey and Johnson [GJ76] on approximate graph colouring, but a systematic study originated in the paper of Austrin, Guruswami, and Håstad [AGH17], who studied a promise version of (2k + 1)-SAT, called (2 + ε)-SAT. In a series of papers [BG16; BG18; BG19], Brakensiek and Guruswami linked PCSPs to the universal-algebraic methods developed for the study of non-uniform CSPs [BKW17]. In particular, the notion of weak polymorphisms, identified in [AGH17], allowed some ideas developed for CSPs to be used in the context of PCSPs. The algebraic theory of PCSPs was then lifted to an abstract level by Bulín, Krokhin, and Opršal in [BKO19]. Consequently, this theory was used by Ficak, Kozik, Olšák, and Stankiewicz to obtain a dichotomy for symmetric Boolean PCSPs [Fic+19], thus improving on an earlier result from [BG18], which gave a dichotomy for symmetric Boolean PCSPs with folding (negations allowed).
Prior and related work
While the NP-hardness of finding a 3-colouring of a 3-colourable graph was obtained by Karp [Kar72] in 1972, the NP-hardness of finding a 4-colouring of a 3-colourable graph was only proved in 2000 by Khanna, Linial, and Safra [KLS00] (see also the work of Guruswami and Khanna for a different proof [GK04]). This result implied NP-hardness of finding a $(k + 2\lfloor k/3\rfloor - 1)$-colouring of a k-colourable graph for k ≥ 3 [KLS00]. Early work of Garey and Johnson established NP-hardness of finding a (2k − 5)-colouring of a k-colourable graph for k ≥ 6 [GJ76]. In 2016, Brakensiek and Guruswami proved NP-hardness of finding a (2k − 2)-colouring of a k-colourable graph for k ≥ 3 [BG16]. Only very recently, Bulín, Krokhin, and Opršal showed that finding a 5-colouring of a 3-colourable graph, and more generally, finding a (2k − 1)-colouring of a k-colourable graph for any k ≥ 3, is NP-hard [BKO19].
In 2001, Khot gave an asymptotic result: he showed that for sufficiently large k, finding a $k^{(\log k)/25}$-colouring of a k-colourable graph is NP-hard. For hypergraphs, stronger results are known; here a c-colouring is an assignment of c colours to the vertices that leaves no hyperedge monochromatic. Dinur, Regev, and Smyth showed that for any constants 2 ≤ k ≤ c, it is NP-hard to find a c-colouring of a given 3-uniform k-colourable hypergraph [DRS05]. Other notions of colourings (such as different types of rainbow colourings) for hypergraphs were studied by Brakensiek and Guruswami [BG16; BG17], Guruswami and Lee [GL18], and Austrin, Bhangale, and Potukuchi [ABP20].
Some results are also known for colourings with a super-constant number of colours. For graphs, conditional hardness was obtained by Dinur and Shinkar [DS10]. For hypergraphs, NP-hardness results were obtained in recent work of Bhangale [Bha18] and Austrin, Bhangale, and Potukuchi [ABP19].
Results
For two graphs or digraphs G, H, we write G → H if there exists a homomorphism from G to H (in this paper, we allow graphs to have loops: the existence of homomorphisms for such graphs is trivial, but this allows us to make statements about graph constructions that work without exceptions). We are interested in the following computational problem: in PCSP(G, H), where G → H, the input is a graph promised to admit a homomorphism to G, and the goal is to find a homomorphism to H. To state our results it will be convenient to use the following definition: a graph H is left-hard if PCSP(G, H) is NP-hard for every non-bipartite loop-less G with G → H, and a graph G is right-hard if PCSP(G, H) is NP-hard for every loop-less H with G → H. If G → G′ and H′ → H, then PCSP(G, H) trivially reduces to PCSP(G′, H′) (this is called homomorphic relaxation [BKO19]; intuitively, increasing the promise gap makes the problem easier). Therefore, if H is a left-hard graph, then all graphs left of H (that is, H′ such that H′ → H) are trivially left-hard; note that by this definition, bipartite graphs are vacuously left-hard. If G is right-hard, then all graphs right of G (that is, G′ with G → G′) are right-hard.
For the same reason, since every non-bipartite graph admits a homomorphism from an odd cycle, to show that H is left-hard it suffices to show that PCSP($C_n$, H) is NP-hard for arbitrarily large odd n, where $C_n$ denotes the cycle on n vertices. Dually, since every loop-less graph admits a homomorphism to a clique, to show that G is right-hard it suffices to show that PCSP(G, $K_k$) is NP-hard for arbitrarily large k.
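As a concrete illustration of the relation → used throughout, here is a minimal brute-force homomorphism check. It is illustrative only and exponential in the number of vertices, but instant for the small graphs discussed here.

```python
# Brute-force test for the existence of a graph homomorphism G -> H.
from itertools import product

def cycle(n):
    """Edge set of the cycle C_n on vertices 0..n-1."""
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def clique(n):
    """Edge set of the complete graph K_n."""
    return {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)}

def hom_exists(edges_g, n_g, edges_h, n_h):
    """Search all vertex maps V(G) -> V(H) for one preserving edges."""
    for f in product(range(n_h), repeat=n_g):
        if all(frozenset((f[u], f[v])) in edges_h
               for e in edges_g for u, v in [tuple(e)]):
            return True
    return False

print(hom_exists(cycle(5), 5, clique(3), 3))  # True: C_5 is 3-colourable
print(hom_exists(clique(3), 3, cycle(5), 5))  # False: K_3 does not map to C_5
```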
It is conjectured that all non-trivial PCSPs for (undirected) graphs are NP-hard, greatly extending Hell and Nešetřil's theorem:

Conjecture 2.3 (Brakensiek and Guruswami [BG18]). PCSP(G, H) is NP-hard for every non-bipartite loop-less G, H with G → H. Equivalently, every loop-less graph is left-hard. Equivalently, every non-bipartite graph is right-hard.
In addition to the results on classical colourings discussed above (the case where G and H are cliques), the following result was recently obtained in a novel application of topological ideas.
Theorem 2.4 (Krokhin and Opršal [KO19]). $K_3$ is left-hard.
Improved hardness of classical colouring
In Section 3, we focus on right-hardness. We use a simple construction, called the arc digraph or line digraph, which decreases the chromatic number of a graph in a controlled way. The construction allows us to conclude the following, in a surprisingly simple way.

Proposition 2.5. There exists a right-hard graph if and only if $K_4$ is right-hard. (Jakub Opršal and Andrei Krokhin realised that in this Proposition, 4 can be improved to 3, using the fact that δ(δ($K_4$)) is 3-colourable, as proved by Rorabaugh, Tardif, Wehlau, and Zaguia [Ror+16]; details will appear in a future journal version.)

More concretely, we show in particular that $\mathrm{PCSP}(K_6, K_{2^k})$ log-space reduces to $\mathrm{PCSP}(K_4, K_k)$, for all k ≥ 4. This contrasts with [Bar+19, Proposition 10.3] ([Bar+19] is a full version of [BKO19]; Proposition 10.3 there is Proposition 5.31 in the previous two versions of [Bar+19]), where it is shown to be impossible to obtain such a reduction with minion homomorphisms, an algebraic reduction, described briefly in Section 4.3, that is central to the framework of [BKO19; Bar+19] (in particular, there exists a k such that PCSP($K_4$, $K_k$) admits no minion homomorphism to any PCSP($K_n$, $K_k$) for 4 < n ≤ k).
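To make the construction concrete, the following sketch (illustrative only) builds δ(G) and checks, for small cliques, the classical identity $\chi(\delta(G)) = \min\{k : \chi(G) \le \binom{k}{\lfloor k/2\rfloor}\}$ for undirected G viewed as symmetric digraphs, a fact going back to Harner and Entringer and to Poljak and Rödl, and exactly where the binomial coefficient in Theorem 2.6 below comes from.

```python
# delta(G), the arc digraph: vertices are the arcs of G; there is an arc
# from (u, v) to (v, w). For undirected G (all arcs present both ways),
# chi(delta(G)) = min{ k : chi(G) <= C(k, floor(k/2)) }.
from math import comb

def clique_arcs(n):
    """K_n as a symmetric digraph: all ordered pairs of distinct vertices."""
    return [(u, v) for u in range(n) for v in range(n) if u != v]

def arc_digraph(arcs):
    """Arcs of delta(G): ((u,v), (v,w)) for consecutive arcs of G."""
    return [((u, v), (v, w)) for (u, v) in arcs for (v2, w) in arcs if v2 == v]

def chromatic_number(arcs):
    """Chromatic number of the underlying undirected graph, by backtracking."""
    verts = sorted({x for a in arcs for x in a})
    nbrs = {v: set() for v in verts}
    for (a, b) in arcs:
        nbrs[a].add(b); nbrs[b].add(a)
    def extend(i, k, col):
        if i == len(verts):
            return True
        v = verts[i]
        return any(extend(i + 1, k, {**col, v: c}) for c in range(k)
                   if all(col.get(u) != c for u in nbrs[v]))
    return next(k for k in range(1, len(verts) + 1) if extend(0, k, {}))

def b(k):
    """min n with C(n, floor(n/2)) >= k."""
    return next(n for n in range(1, k + 2) if comb(n, n // 2) >= k)

for n in (3, 4, 5):
    print(n, chromatic_number(arc_digraph(clique_arcs(n))), b(n))
# -> (3, 3, 3), (4, 4, 4), (5, 4, 4): the construction drops chi(K_5) = 5 to 4.
```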
Furthermore, we strengthen the best known asymptotic hardness: Huang [Hua13] showed that for all sufficiently large n, $\mathrm{PCSP}(K_n, K_{2^{n^{1/3}}})$ is NP-hard. We improve this in two ways, using Huang's result as a black box. First, we improve the asymptotics from sub-exponential $2^{n^{1/3}}$ to single-exponential $\binom{n}{\lfloor n/2\rfloor} \sim 2^n/\sqrt{\pi n/2}$. Second, we show that the claim holds for n as low as 4.
Theorem 2.6 (Main Result #1). For all n ≥ 4, $\mathrm{PCSP}(K_n, K_{\binom{n}{\lfloor n/2\rfloor}-1})$ is NP-hard.

In comparison, the previous best result relevant for all integers n was proved by Bulín, Krokhin, and Opršal [BKO19]: $\mathrm{PCSP}(K_n, K_{2n-1})$ is NP-hard for all n ≥ 3. For n = 3 we are unable to obtain any results; for n = 4 the new bound $\binom{n}{\lfloor n/2\rfloor} - 1 = 5$ is worse than 2n − 1 = 7, while for n = 5 the two bounds coincide at 9. However, already for n = 6 we improve the bound from 2n − 1 = 11 to $\binom{n}{\lfloor n/2\rfloor} - 1 = 19$.
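A quick numeric comparison of the two bounds (illustrative):

```python
# Compare the bound C(n, n//2) - 1 of Theorem 2.6 with the 2n - 1 bound
# of [BKO19], reproducing the remarks for n = 4, 5, 6 above.
from math import comb, pi, sqrt

for n in range(4, 11):
    new, old = comb(n, n // 2) - 1, 2 * n - 1
    asymptotic = round(2**n / sqrt(pi * n / 2))  # 2^n / sqrt(pi*n/2)
    print(n, new, old, asymptotic)
# n=4: 5 vs 7 (worse); n=5: 9 vs 9 (equal); n=6: 19 vs 11 (better),
# with the gap growing exponentially from there.
```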
Left-hardness and topology
In Section 4, we focus on left-hardness. The main idea behind Krokhin and Opršal's [KO19] proof that $K_3$ is left-hard is simple to state. To prove that PCSP($C_n$, H) is NP-hard for all odd n, the algebraic framework of [BKO19] shows that it is sufficient to establish certain properties of polymorphisms: homomorphisms $f : C_n^L \to H$ for L ∈ ℕ (where $G^L = G \times \cdots \times G$ is the L-fold tensor product; the tensor, or categorical, product $G \times H$ of graphs G, H has pairs (g, h) ∈ V(G) × V(H) as vertices, and (g, h) is adjacent to (g′, h′) whenever g is adjacent to g′ in G and h is adjacent to h′ in H). For large n the graph $C_n^L$ looks like an L-torus: an L-fold product of circles, so the pertinent information about f seems to be subsumed by its topological properties (such as winding numbers, when H is a cycle). We refer to [KO19] for further details, but this general principle applies to any H, and in fact we prove (in Theorem 2.7 below) that whether H is left-hard or not depends only on its topology.
The topology we associate with a graph is its box complex. See Appendix A for formal definitions and statements. Intuitively, the box complex |Box(H)| is a topological space built from H by taking the tensor product $H \times K_2$ and then gluing faces to each four-cycle,
The classical example of this is an application of the Borsuk–Ulam theorem: there is no Z_2-map from S^n to S^m for n > m, where S^n denotes the n-dimensional sphere with antipodal symmetry. Hence if G and H are graphs such that |Box(G)| and |Box(H)| are equivalent to S^n and S^m, respectively, then there can be no graph homomorphism G → H. See Figure 1. This is essentially the idea in Lovász' proof [Lov78] of Kneser's conjecture that the chromatic number of Kneser graphs KG(n, k) is n − 2k + 2. In the language of box complexes, the proof amounts to showing that the box complex of a clique K_c is equivalent to S^{c−2}, while the box complex of a Kneser graph contains S^{n−2k}. We refer to [Mat08] for an in-depth, yet accessible reference.
We show that the left-hardness of a graph depends only on the topology of its box complex (in fact, it is only important which Z_2-maps it admits, which is significantly coarser than Z_2-homotopy equivalence):

Theorem 2.7 (Main Result #2). If H is left-hard and H′ is a graph such that |Box(H′)| admits a Z_2-map to |Box(H)|, then H′ is left-hard.
Using Krokhin and Opršal's result that K_3 is left-hard (Theorem 2.4), since |Box(K_3)| is the circle S^1 (up to Z_2-homotopy equivalence), we immediately obtain the following:

Corollary 2.8. Every graph H for which |Box(H)| admits a Z_2-map to S^1 is left-hard.
Two examples of such graphs (other than 3-colourable graphs) are loop-less square-free graphs and circular cliques K_{p/q} with 2 < p/q < 4 (see Lemma A.1 for proofs), which we introduce next. Square-free graphs are graphs with no cycle of length exactly 4. In particular, this includes all graphs of girth at least 5 and hence graphs of arbitrarily high chromatic number (but incomparable to K_4 and larger cliques, in terms of the homomorphism → relation). The circular clique K_{p/q} (for p, q ∈ N, p/q > 2) is the graph with vertex set Z_p and an edge from i to every integer at least q apart: i + q, i + q + 1, . . . , i + p − q. Circular cliques generalise cliques K_n = K_{n/1} and odd cycles C_{2k+1} = K_{(2k+1)/k}. Their basic property is that K_{p/q} → K_{p′/q′} if and only if p/q ≤ p′/q′. Thus circular cliques refine the chain of cliques and odd cycles, corresponding to rational numbers between integers. The circular chromatic number χ_c(G) is the infimum over p/q such that G → K_{p/q}; in particular χ_c(K_{p/q}) = p/q. Thus there cannot be a homomorphism from K_4 to K_{7/2} (of course in this case it is easier to show this directly).
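These definitions are easy to check mechanically. Below is a minimal Python sketch (names ours) that builds the circular clique K_{p/q} and brute-forces homomorphism existence, confirming K_3 → K_{7/2} but no homomorphism K_4 → K_{7/2}; the search is exponential, so this is feasible only for tiny graphs:

```python
from itertools import product

def circular_clique(p: int, q: int):
    """Adjacency of K_{p/q}: vertices Z_p, with i ~ j iff (j - i) mod p lies in [q, p-q]."""
    return {i: {j for j in range(p) if q <= (j - i) % p <= p - q} for i in range(p)}

def has_hom(g_edges, n_g, h_adj, n_h) -> bool:
    """Brute-force test: does a graph homomorphism G -> H exist?"""
    return any(all(f[v] in h_adj[f[u]] for u, v in g_edges)
               for f in product(range(n_h), repeat=n_g))

K3_edges = [(u, v) for u in range(3) for v in range(3) if u != v]
K4_edges = [(u, v) for u in range(4) for v in range(4) if u != v]
K72 = circular_clique(7, 2)
print(has_hom(K3_edges, 3, K72, 7))  # True:  K_3 -> K_{7/2}, since 3 <= 7/2
print(has_hom(K4_edges, 4, K72, 7))  # False: K_4 -/-> K_{7/2}, since 4 > 7/2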
Corollary 2.9. For every 2 < r ≤ r′ < 4, it is NP-hard to distinguish graphs G with χ_c(G) ≤ r from those with χ_c(G) > r′.
In this sense, we conclude that "K_{4−ε}" is left-hard, thus extending the result for K_3. However, the closeness to K_4 is deceptive and no conclusions on 4-colourings follow. For K_4, since the box complex is equivalent to the standard 2-dimensional sphere, we can at least conclude that to prove left-hardness of K_4 it would be enough to prove left-hardness of any other graph with the same topology: these include all non-bipartite quadrangulations of the projective plane, in particular the Grötzsch graph, 4-chromatic generalised Mycielskians, and 4-chromatic Schrijver graphs [Mat08; BL03]. In this sense, the exact geometry of K_4 is irrelevant. However, the fact that it is a finite graph, with only finitely many possible maps from C_n^L for any fixed n, L, should still be relevant, as it is for K_3. It is also quite probable that any proof for a "spherical" graph would apply just as well to K_4, where the proof could just be notationally much simpler.
Finally, in Appendix A we rephrase Krokhin and Opršal's [KO19] proof of Theorem 2.4 in terms of the box complex. In particular, left-hardness of K_3 follows from some general principles and the fact that |Box(K_3)| is a circle. The proof also extends to all graphs H such that |Box(H)| admits a Z_2-map to S^1, giving an independent, self-contained proof of Corollary 2.8 (and Theorem 2.4 in particular).
The general principle is that a homomorphism C_n^L → H induces a Z_2-map (S^1)^L → |Box(H)|, in a way that preserves minors (identifications within the L variables) and automorphisms. (In the language of category theory, the box complex is a functor from the category of graphs to that of Z_2-spaces, and the functor preserves products.) In turn, the Z_2-map induces a group homomorphism between the fundamental group of (S^1)^L, which is just Z^L, and that of |Box(H)|. This is essentially the map Z^L → Z obtained in [KO19]. While this rephrasing requires a few more technical definitions, the main advantage is that it allows us to replace a tedious combinatorial argument (about winding numbers preserving minors) with straightforward statements about preserving products.
Methodology - adjoint functors
While the proof of the first main result is given elementarily in Section 3, it fits together with the second main result in a much more general pattern. The underlying principle is that pairs of graph constructions satisfying a simple duality condition give reductions between PCSPs. To introduce them, let us consider a concrete example. For a graph G and an odd integer k, Λ_k G is the graph obtained by subdividing each edge into a path of k edges; Γ_k G is the graph obtained by taking the k-th power of the adjacency matrix (with zeroes on the diagonal); equivalently, the vertex set remains unchanged and two vertices are adjacent if and only if there is a walk of length exactly k between them in G. (For example, Γ_3 G has loops if G has triangles.)
We say a graph construction Λ (a function from graphs to graphs) is a thin (graph) functor if G → H implies ΛG → ΛH (for all G, H). A pair of thin functors (Λ, Γ) is a thin adjoint pair if ΛG → H if and only if G → ΓH.
We call Λ the left adjoint of Γ and Γ the right adjoint of Λ.
For all odd k, (Λ_k, Γ_k) is a thin adjoint pair. For example, since Γ_3 C_5 = K_5, we have G → K_5 if and only if Λ_3 G → C_5. This is a basic reduction that shows the NP-hardness of C_5-colouring; in fact, adjointness of various graph constructions is the principal tool behind the original proof of Hell and Nešetřil's theorem (characterising the complexity of H-colouring) [HN90].
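The identity Γ_3 C_5 = K_5 can be verified directly from the adjacency-matrix definition of Γ_k; a short illustrative Python sketch (names ours):

```python
import numpy as np

def power_graph(A: np.ndarray, k: int) -> np.ndarray:
    """Gamma_k: two vertices are adjacent iff some walk of length exactly k
    joins them; loops (the diagonal) are removed."""
    W = np.linalg.matrix_power(A, k)
    P = (W > 0).astype(int)
    np.fill_diagonal(P, 0)
    return P

n = 5
C5 = np.zeros((n, n), dtype=int)
for i in range(n):
    C5[i, (i + 1) % n] = C5[(i + 1) % n, i] = 1

K5 = 1 - np.eye(n, dtype=int)
print(np.array_equal(power_graph(C5, 3), K5))  # True: Gamma_3 C_5 = K_5
```

Indeed, on a 5-cycle a walk of 3 steps reaches exactly the four other vertices and never returns to its start, so the third power is the complete graph.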
In category theory, there is a stronger and more technical notion of (non-thin) functors and adjoint pairs. A thin graph functor is in fact a functor in the thin category of graphs, that is, the category whose objects are graphs, and with at most one morphism from one graph to another, indicating whether a homomorphism exists or not. In other words, we are only interested in the existence of homomorphisms, and not in their identity and how they compose. Equivalently, we look only at the preorder of graphs by the G → H relation (we can also make this a poset by considering graphs up to homomorphic equivalence). In order-theoretic language, thin functors are just order-preserving maps, while thin adjoint functors are known as Galois connections. We prefer the categorical language as most of the constructions we consider are in fact functors (in the non-thin category of graphs), which is important for connections to the algebraic framework of [BKO19], as we discuss in Section 4.3. While unnecessary for our main results, we believe it may be important to understand these deeper connections to resolve the conjectures completely.
Thin adjoint functors give us a way to reduce one PCSP to another. We say that a graph functor Γ is log-space computable if, given a graph G, ΓG can be computed in logarithmic space in the size of G.

Observation 2.10. Let (Λ, Γ) be a thin adjoint pair of functors with Λ log-space computable. Then PCSP(G, ΓH) log-space reduces to PCSP(ΛG, H), for all graphs G, H.
In some cases, a thin functor Γ that is a thin right adjoint in a pair (Λ, Γ) is also a thin left adjoint in a pair (Γ, Ω). This allows us to get a reduction in the opposite direction:

Observation 2.11. Let (Λ, Γ) and (Γ, Ω) be thin adjoint pairs of functors. Then PCSP(ΓG, H) and PCSP(G, ΩH) are log-space equivalent (assuming Λ and Γ are log-space computable).
Proof. The previous observation gives a reduction from PCSP(G, ΩH) to PCSP(ΓG, H). For the other direction, let F be an instance of PCSP(ΓG, H). Then ΛF is an appropriate instance of PCSP(G, ΩH). Indeed, if F → ΓG, then ΛF → G by adjointness. If ΛF → ΩH, then F → ΓΩH → H; the last arrow follows by adjointness from the trivial ΩH → ΩH.
The proofs of Observations 2.10 and 2.11 of course extend to digraphs and general relational structures. Note that the above proofs reduce decision problems; they work just as well for search problems: all the thin adjoint pairs (Λ, Γ) we consider with Λ log-space computable also have the property that a homomorphism ΛF → H can be computed from a homomorphism F → ΓH and vice versa, in space logarithmic in the size of F .
As we discuss in Section 4, all of our results follow from reductions that are either trivial (homomorphic relaxations) or instantiations of Observation 2.10. While for the first main result we prefer to first give a direct proof that avoids this formalism (in Section 3), it will be significantly more convenient for the second main result (in Section 4.1), where we use a certain right adjoint Ω_k to the k-th power Γ_k.
Hedetniemi's conjecture
Another leitmotif of this paper is the application of various tools developed in research around Hedetniemi's conjecture.
The arc digraph construction, which we will use in Section 3 to prove Theorem 2.6, was originally used by Poljak and Rödl [PR81] to show certain asymptotic bounds on chromatic numbers of products. The functors Λ_k, Γ_k, Ω_k were applied by Tardif [Tar05] to show that colourings to circular cliques K_{p/q} (2 < p/q < 4) satisfy the conjecture. Matsushita [Mat19] used the box complex to show that Hedetniemi's conjecture would imply an analogous conjecture in topology. This was independently proved by the first author [Wro19] using Ω_k functors, while the box complex was used to show that square-free graphs are multiplicative [Wro17]. See [FT18] for a survey on applications of adjoint functors to the conjecture.
The refutation of Hedetniemi's conjecture and the fact that methods for proving the multiplicativity of K_3 extend to K_{4−ε} and square-free graphs, but fail to extend to K_4, might suggest that Conjecture 2.3 is doomed to the same fate. However, it now seems clear that proving multiplicativity requires more than just topology [TW19]: known methods do not even extend to all graphs H such that |Box(H)| is a circle. This contrasts with Theorem 2.7: topological tools work much more gracefully in the setting of PCSPs.
The arc digraph construction
Let D be a digraph. The arc digraph (or line digraph) of D, denoted δD, is the digraph whose vertices are the arcs (directed edges) of D and whose arcs are pairs of the form ((u, v), (v, w)). We think of undirected graphs as symmetric relations: digraphs in which for every arc (u, v) there is an arc (v, u). So for an undirected graph G, δG has 2|E(G)| vertices and is a directed graph; the directions will not be important in this section, but will be in Section 4.2. The chromatic number of a digraph is the chromatic number of the underlying undirected graph (obtained by symmetrising each arc). The crucial property of the arc digraph construction is that it decreases the chromatic number in a controlled way (even though it is computable in log-space!). We include a short proof for completeness. We denote by [n] the set {1, 2, . . . , n}.
Lemma 3.1 (Harner and Entringer [HE72]). For any graph G: if δG is n-colourable then G is 2^n-colourable, and if G is $\binom{n}{\lfloor n/2\rfloor}$-colourable then δG is n-colourable.

Proof. Suppose δG has an n-colouring. Recall that we think of G as a digraph with two arcs (u, v) and (v, u) for each edge {u, v} ∈ E(G); thus δG contains two vertices (u, v) and (v, u), as well as (by definition of δ) two arcs from one pair to the other. In particular, an n-colouring of δG gives distinct colours to (u, v) and (v, u). Define a 2^n-colouring φ of G by assigning to each vertex v the set φ(v) of colours of incoming arcs. For any edge {u, v} of G, φ(v) contains the colour c of the arc (u, v). Since every arc incoming to u gets a different colour from (u, v), the set φ(u) does not contain c. Hence φ(u) ≠ φ(v), so φ is a proper colouring.

Suppose G has a $\binom{n}{\lfloor n/2\rfloor}$-colouring φ. We interpret colours φ(v) as ⌊n/2⌋-element subsets of [n]. Define an n-colouring of δG by assigning to each arc (u, v) an arbitrary colour in φ(u) \ φ(v) (the minimum, say). Such a colour exists because φ(u) and φ(v) are distinct sets of equal size, so φ(u) \ φ(v) is non-empty. The proof in fact works for digraphs as well. For graphs, it is not much harder to show an exact correspondence (we note however that most conclusions only require the above approximate correspondence). Let us denote b(n) := $\binom{n}{\lfloor n/2\rfloor}$.
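The second half of this proof is constructive and can be traced in code. The following Python sketch (names ours, purely illustrative) colours the arcs of δ(K_6) with 4 colours, starting from a colouring of K_6 by the b(4) = 6 two-element subsets of [4], and checks properness:

```python
from itertools import combinations

def arc_digraph(arcs):
    """delta(D): vertices are the arcs of D; ((u,v),(v,w)) is an arc of delta(D)."""
    return {a: [c for c in arcs if c[0] == a[1]] for a in arcs}

def colour_delta(arcs, phi):
    """Lemma 3.1, second direction: given a proper colouring phi of G by distinct
    floor(n/2)-subsets of [n], colour each arc (u,v) with the minimum element of
    phi[u] \\ phi[v] (non-empty, as the sets are distinct and of equal size)."""
    return {(u, v): min(phi[u] - phi[v]) for (u, v) in arcs}

# Example: G = K_6, coloured by the b(4) = 6 two-element subsets of [4].
subsets = [frozenset(s) for s in combinations(range(1, 5), 2)]
V = range(6)
phi = {v: subsets[v] for v in V}
arcs = [(u, v) for u in V for v in V if u != v]
c = colour_delta(arcs, phi)
succ = arc_digraph(arcs)
# Properness: consecutive arcs (u,v), (v,w) always receive different colours,
# since c[(u,v)] avoids phi[v] while c[(v,w)] lies in phi[v].
assert all(c[a] != c[b] for a in arcs for b in succ[a])
print(sorted(set(c.values())))  # at most 4 colours, drawn from [4]
```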
Lemma 3.2 (Poljak and Rödl [PR81]). For a (symmetric) graph G and n ∈ N: χ(δG) ≤ n if and only if χ(G) ≤ b(n). In other words, δG → K_n if and only if G → K_{b(n)}.
This immediately gives the following implication for approximate colouring:

Lemma 3.3. For all n, k ∈ N, PCSP(K_{b(n)}, K_{b(k)}) log-space reduces to PCSP(K_n, K_k).

Proof. Let G be an instance of the first problem. Then δG is a suitable instance of the second: by Lemma 3.2, if G → K_{b(n)} then δG → K_n, and if δG → K_k then G → K_{b(k)}.

Remark 3.4. As a side note, adding a universal vertex gives the following obvious reduction: PCSP(K_n, K_k) log-space reduces to PCSP(K_{n+1}, K_{k+1}), for n, k ∈ N.
Recall also that if n ≤ n′ ≤ k′ ≤ k, then PCSP(K_n, K_k) trivially reduces to PCSP(K_{n′}, K_{k′}). One corollary of Lemma 3.3 is that if any clique of size at least 4 is right-hard, then all of them are.

Proof. Let n ≤ n′. For one direction, right-hardness of K_n trivially implies right-hardness of K_{n′}.
On the other hand, we claim that if K_{b(n)} is right-hard, then so is K_n. Indeed, suppose K_n is not right-hard, that is, PCSP(K_n, K_k) is not NP-hard for some k ≥ n; then by Lemma 3.3, PCSP(K_{b(n)}, K_{b(k)}) is not NP-hard, so K_{b(n)} is not right-hard, and so on. Since, starting with n ≥ 4, the sequence b(b(. . . n . . . )) grows to infinity, we conclude that K_m is not right-hard for some m ≥ n′. Therefore, trivially, K_{n′} is not right-hard.
In other words, if any loop-less graph H is right-hard, then trivially some large enough clique K_{χ(H)} is right-hard; by the above, K_4 and all graphs right of it are right-hard. This proves Proposition 2.5. The proof fails to extend to n = 3 because b(3) = 3 is not strictly greater than 3.
We thus improve the asymptotics from sub-exponential f(n) := $2^{n^{1/3}}$ to single-exponential b(n) = $\binom{n}{\lfloor n/2\rfloor} \sim 2^n/\sqrt{\pi n/2}$. The informal idea of the proof is that any f(n) can be improved to b^{−1}(f(b(n))). Since b(n) is roughly exponential and b^{−1}(n) is roughly logarithmic, starting from a function f(n) of order exp^{(i+1)}(α · log^{(i)}(n)) with i-fold compositions and a constant α > 0, such as f(n) = $2^{n^{1/3}} = 2^{2^{\frac{1}{3}\log n}}$ from Huang's hardness, one step yields a similar composition but with i decreased. In a constant number of steps, this results in a single-exponential function. In fact, using one more step, but without approximating the function b(n), this results in exactly b(n) − 1. We note it would not be sufficient to start from a quasi-polynomial f(n), like $n^{\Theta(\log n)}$ in Khot's [Kho01] result.
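A single improvement step can also be traced numerically. In the Python sketch below (names ours; f0 is a rounded stand-in for Huang's bound, so the exact values are only indicative), one application of Lemma 3.3 already inflates the colour gap dramatically, e.g. from 8 to 60 at n = 20. Iterating symbolically, as in the proof below, is necessary because the numbers involved quickly become astronomically large:

```python
from math import comb

def b(n: int) -> int:
    """b(n) = C(n, floor(n/2)) ~ 2^n / sqrt(pi*n/2)."""
    return comb(n, n // 2)

def b_inv(m: int) -> int:
    """Largest j with b(j) <= m; roughly log2(m)."""
    j = 1
    while b(j + 1) <= m:
        j += 1
    return j

def f0(n: int) -> int:
    """Rounded stand-in for Huang's sub-exponential gap 2^(n^(1/3))."""
    return 2 ** max(1, round(n ** (1 / 3)))

def one_step(f, n: int) -> int:
    """One improvement step: hardness with gap f at b(n), pulled back
    through Lemma 3.3, gives a gap of b_inv(f(b(n))) at n."""
    return b_inv(f(b(n)))

for n in (10, 20, 30):
    print(n, f0(n), one_step(f0, n))  # e.g. at n=20 the gap jumps from 8 to 60
```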
Proof of Theorem 2.6. By Lemma 3.3, PCSP(K_{b(n)}, K_{b(k)}) log-space reduces to PCSP(K_n, K_k), for all n, k ∈ N. For any k ∈ N, let m = ⌊log k⌋ (all logarithms are base 2); then b(m) ≤ 2^m ≤ k, hence PCSP(K_{b(n)}, K_k) trivially reduces to PCSP(K_{b(n)}, K_{b(m)}). Therefore, composing the two reductions: PCSP(K_{b(n)}, K_k) log-space reduces to PCSP(K_n, K_{⌊log k⌋}). Starting from Theorem 3.6, we have a constant C such that PCSP(K_k, K_{2^{k^{1/3}}}) is NP-hard for all k ≥ C. Hence, substituting n = b(k): PCSP(K_{b(k)}, K_{2^{b(k)^{1/3}}}) is NP-hard, for sufficiently large k. Applying the above reduction, since ⌊log 2^{b(k)^{1/3}}⌋ = ⌊b(k)^{1/3}⌋ ≥ 2^{k/4} for sufficiently large k, we conclude: PCSP(K_k, K_{2^{k/4}}) is NP-hard, for sufficiently large k. We repeat this process to bring the constant further "down". That is, we substitute b(k) for k and apply the above reduction again. Since ⌊log 2^{b(k)/4}⌋ = ⌊b(k)/4⌋ ≥ 2^k/4k for sufficiently large k, we conclude: PCSP(K_k, K_{2^k/4k}) is NP-hard, for sufficiently large k. To apply the reduction one more time, notice that ⌊log(2^{b(k)}/4b(k))⌋ = b(k) − ⌈log 4b(k)⌉ ≥ b(k − 1) for sufficiently large k, hence: PCSP(K_k, K_{b(k−1)}) is NP-hard, for sufficiently large k.
Substituting b(k) for k one last time: PCSP(K_{b(k)}, K_{b(b(k)−1)}) is NP-hard, for sufficiently large k. Composing with Lemma 3.3 one last time: PCSP(K_k, K_{b(k)−1}) is NP-hard, for sufficiently large k. This concludes the improvement in asymptotics. Moreover, one can notice that the requirements on "sufficiently large k" get relaxed whenever we substitute b(k) for k. Formally, let k be maximum such that PCSP(K_k, K_{b(k)−1}) is not NP-hard. Then PCSP(K_n, K_{b(n)−1}) is not NP-hard for n = b(k). By maximality of k, k ≥ n. But k ≥ b(k) is only possible when k < 4. Hence hardness holds for all k ≥ 4.
Adjoint functors and topology
4.1. Thin functors Λ_k, Γ_k, Ω_k

Recall that Λ_k denotes k-subdivision and Γ_k denotes the k-th power of a graph. For all odd k, they are thin adjoint graph functors: Λ_k G → H if and only if G → Γ_k H. More surprisingly, Γ_k is itself the thin left adjoint of a certain thin functor Ω_k: Γ_k G → H if and only if G → Ω_k H. This characterises Ω_k G up to homomorphic equivalence. The exact definition is irrelevant, but we state it for completeness: for k = 2ℓ + 1, the vertices of Ω_k G are tuples (A_0, . . . , A_ℓ) of vertex subsets A_i ⊆ V(G) such that A_0 contains exactly one vertex. Two such tuples (A_0, . . . , A_ℓ) and (B_0, . . . , B_ℓ) are adjacent if A_i ⊆ B_{i+1} and B_i ⊆ A_{i+1} for all i < ℓ, and A_ℓ is fully adjacent to B_ℓ (meaning a is adjacent to b in G, for all a ∈ A_ℓ, b ∈ B_ℓ). We note that Λ_k and Γ_k are log-space computable, for all odd k; however, Ω_k is not: Ω_k G is exponentially larger than G. See [Wro19] for more discussion of the thin functors Λ_k, Γ_k, Ω_k and their properties.
Observation 2.10 tells us that PCSP(G, Ω_k H) log-space reduces to PCSP(Γ_k G, H) (in fact, by Observation 2.11, they are equivalent). To give conclusions on left-hardness, we will need to observe only two more facts about the functors Λ_k, Γ_k, Ω_k. First, Ω_k G → G for all G (it suffices to map (A_0, . . . , A_ℓ) ∈ V(Ω_{2ℓ+1} G) to the unique vertex in A_0). Second, it is not hard to check that Γ_k Λ_k G → G and hence by adjointness Λ_k G → Ω_k G, for all G and odd k (see Lemma 2.3 in [Wro19]). Together these yield the following lemma: for all odd k, a graph H is left-hard if and only if Ω_k H is left-hard.

Proof. If H is left-hard, then trivially so is Ω_k H, because Ω_k H → H. For the other implication, suppose Ω_k H is left-hard, that is, PCSP(G, Ω_k H) is hard for every non-bipartite G such that G → Ω_k H. By Observation 2.10, this implies PCSP(Γ_k G, H) is hard. Let G′ be any non-bipartite graph such that G′ → H. We want to show that PCSP(G′, H) is hard. Observe that Ω_k G′ is non-bipartite, because Λ_k G′ → Ω_k G′ and Λ_k subdivides each edge of G′ an odd number of times. Since Ω_k G′ → Ω_k H, using G := Ω_k G′ we conclude that PCSP(Γ_k Ω_k G′, H) is hard. Since Γ_k Ω_k G′ → G′, this implies PCSP(G′, H) is hard.
As an example, consider the circular clique K_{7/2} (we have K_3 → K_{7/2} → K_4). Knowing that K_3 is left-hard, one could check that Ω_3(K_{7/2}) is 3-colourable and hence left-hard as well; the above lemma then allows us to conclude that K_{7/2} is left-hard.
What other graphs could one use in place of K_{7/2}? The answer turns out to be topological. Intuitively, while the operation Γ_k gives a "thicker" graph, the operation Ω_k gives a "thinner" one. In fact, Ω_k behaves like barycentric subdivision in topology: it preserves the topology of a graph (formally: its box complex is Z_2-homotopy equivalent to the original graph's box complex) but refines its geometry. With increasing k, this eventually allows one to model any continuous map with a graph homomorphism; in particular, this is what underlies Theorem 2.7.
Other examples of adjoint functors
The arc construction δ is also an example of a digraph functor which admits both a thin left adjoint δ_L and a thin right adjoint δ_R; 8 this adjointness essentially gives a proof of Lemma 3.3. Indeed, [PR81] showed that sub(δ_R(K_k)) → K_{b(k)} (the sub is essential here); recall also that δ(sym(K_{b(n)})) → K_n. Therefore, PCSP(K_{b(n)}, K_{b(k)}) trivially reduces to PCSP(K_{b(n)}, sub(δ_R(K_k))), which by Observation 2.10 log-space reduces to PCSP(δ(sym(K_{b(n)})), K_k), which trivially reduces to PCSP(K_n, K_k), proving Lemma 3.3. From Observation 2.11 we also have that PCSP(δG, H) and PCSP(G, δ_R H) are log-space equivalent. Another example of a thin adjoint pair (but not triple) of functors is given by products and exponential graphs (see e.g. [FT13] for definitions): for any graphs F, G, H, we have F × G → H if and only if F → H^G. Here × is the tensor (or categorical) product; in particular G → H_1 × H_2 if and only if G → H_1 and G → H_2. Nevertheless, a few other products have an associated exponentiation as well. These and other examples fall into a pattern known as Pultr functors; see [FT13] for an extended discussion (we note here that central Pultr functors, like Γ_k or δ, are a kind of pp-interpretation). Foniok and Tardif [FT15] studied which digraph functors admit both thin left and right adjoints.
The box complex also admits a left adjoint, though the pair involves two categories. More precisely, the functor G ↦ Hom(K_2, G) (see definitions in Appendix A) gives a Z_2-simplicial complex that is Z_2-homotopy equivalent to the box complex. As proved by Matsushita [Mat19], it admits a left adjoint A from the category of Z_2-simplicial complexes (with Z_2-simplicial maps as morphisms) to the category of graphs.
Relation to the algebraic framework
We will need basic concepts from the algebraic approach to (P)CSPs, such as polymorphisms [AGH17; BG18], minions, and minion homomorphisms [BKO19]. We shall define them only for graphs as we do not need them for relational structures. We refer the reader to [BKW17; BKO19] for more details, examples, and general definitions.
An n-ary polymorphism of two graphs G and H is a homomorphism from G^n to H; that is, a map f : V(G)^n → V(H) such that, for all edges (u_1, v_1), . . . , (u_n, v_n) in G, the pair (f(u_1, . . . , u_n), f(v_1, . . . , v_n)) is an edge in H. We denote by Pol(G, H) the set of all polymorphisms of G and H.
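For small graphs, the defining condition can be checked by brute force; a toy Python sketch (names ours) verifying that projections are polymorphisms of K_3 while the coordinatewise minimum is not:

```python
from itertools import product

def is_polymorphism(f, g_edges, h_edges, n) -> bool:
    """Check that f : V(G)^n -> V(H) (given as a dict) maps every edge of G^n
    to an edge of H, i.e. f is an n-ary polymorphism of (G, H)."""
    h = set(h_edges)
    for es in product(g_edges, repeat=n):  # one edge of G per coordinate
        u = tuple(e[0] for e in es)
        v = tuple(e[1] for e in es)
        if (f[u], f[v]) not in h:
            return False
    return True

K3 = [(u, v) for u in range(3) for v in range(3) if u != v]
dom = list(product(range(3), repeat=2))
proj1 = {x: x[0] for x in dom}   # first projection
mini = {x: min(x) for x in dom}  # coordinatewise minimum
print(is_polymorphism(proj1, K3, K3, 2))  # True: projections always qualify
print(is_polymorphism(mini, K3, K3, 2))   # False: (0,1)~(1,0) maps to (0,0)
```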
Given an n-ary function f : A^n → B, the, say, first coordinate is called essential if there exist a, a′ ∈ A and ā ∈ A^{n−1} such that f(a, ā) ≠ f(a′, ā); otherwise, the first coordinate is called inessential or dummy. Analogously, one defines the i-th coordinate to be (in)essential. The essential arity of f is the number of essential coordinates.
Let f : A^n → B and g : A^m → B be n-ary and m-ary functions, respectively. We call f a minor of g if f can be obtained from g by identifying variables, permuting variables, and introducing inessential variables. More formally, f is the minor of g given by a map π : [m] → [n] if f(x_1, . . . , x_n) = g(x_{π(1)}, . . . , x_{π(m)}).
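Taking minors is a purely mechanical operation; a toy Python illustration (0-indexed, names ours):

```python
def minor(g, pi):
    """Minor of an m-ary function g given by pi : [m] -> [n] (0-indexed):
    f(x_1, ..., x_n) = g(x_pi(1), ..., x_pi(m))."""
    return lambda *xs: g(*(xs[p] for p in pi))

g = lambda x, y, z: (x, y, z)  # a 3-ary function
f = minor(g, [0, 0, 1])        # identify g's first two variables: f is binary
print(f(7, 9))                 # (7, 7, 9)
```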
A minion on a pair of sets (A, B) is a non-empty set of functions (of possibly different arities) from A to B that is closed under taking minors. A minion is said to have bounded essential arity if there is some k such that every function from the minion has essential arity at most k.
Let M and N be two minions, not necessarily on the same pairs of sets. A map ξ : M → N is called a minion homomorphism if (1) it preserves arities, i.e., maps n-ary functions to n-ary functions, for all n; and (2) it preserves taking minors, i.e., for each π : [m] → [n] and each m-ary g ∈ M, we have ξ(g)(x_{π(1)}, . . . , x_{π(m)}) = ξ(g(x_{π(1)}, . . . , x_{π(m)})). Minion homomorphisms provide an algebraic way to give reductions between PCSPs. Our methods do not give minion homomorphisms in general: while Observation 2.10 gives a reduction from PCSP(G, ΓH) to PCSP(ΛG, H), it does not give a minion homomorphism from which the reduction would follow (from Pol(ΛG, H) to Pol(G, ΓH)). Indeed it cannot, as discussed below Proposition 2.5. However, adjoint functors in the (non-thin) category of graphs do imply such a minion homomorphism.
In the remainder of this section, we assume knowledge of basic definitions in category theory. One can define minions in any Cartesian category C (i.e. a category with all finite products), using morphisms of C in place of functions. For objects G, H ∈ C, Pol_C(G, H) is the minion of morphisms from G^L (the L-fold categorical product of G) to H. A function π : [L] → [L′] induces a morphism π_G : G^{L′} → G^L. For a graph G, it maps (v_1, . . . , v_{L′}) to (v_{π(1)}, . . . , v_{π(L)}). In general, it can be defined as the product morphism ⟨p_{π(1)}, . . . , p_{π(L)}⟩ of appropriate projections p_i : G^{L′} → G. For a polymorphism f : G^L → H, the minor of f by π is then simply f ∘ π_G : G^{L′} → H.
For objects G and H of a category, we denote by hom(G, H) the set of morphisms from G to H.
Lemma 4.7. Let Γ : C → D and Ω : D → C be adjoint functors between Cartesian categories C, D. Then for all objects G in C and H in D, there is a minion homomorphism from Pol_D(ΓG, H) to Pol_C(G, ΩH). If, moreover, Γ preserves products, then this is a minion isomorphism.
Proof. This essentially amounts to checking definitions. We have a natural morphism ψ_L : Γ(G^L) → (ΓG)^L defined as the product morphism ⟨Γp_1, . . . , Γp_L⟩ for projections p_i : G^L → G. It is natural in the following sense: for every function π : [L] → [L′], we have ψ_L ∘ Γ(π_G) = π_{ΓG} ∘ ψ_{L′}, because each side is the unique morphism whose composition with p_i : (ΓG)^L → ΓG is Γp_{π(i)} (in other words, each side is the product morphism ⟨Γp_{π(1)}, . . . , Γp_{π(L)}⟩).
Let Ω be a right adjoint of Γ. Let Φ_{G^L,H} : hom(Γ(G^L), H) → hom(G^L, ΩH) be the natural isomorphism given by the definition of adjunction. Naturality here means in particular that Φ commutes with precomposition by Γ(π_G) on one side and by π_G on the other. Together with the previously discussed commutation, we can therefore define a minion homomorphism ξ : Pol_D(ΓG, H) → Pol_C(G, ΩH) by ξ(f) := Φ_{G^L,H}(f ∘ ψ_L), for f : (ΓG)^L → H. Indeed, ξ preserves minors, because ξ(f ∘ π_{ΓG}) = ξ(f) ∘ π_G, as seen on the perimeter of the resulting commuting diagram. If Γ preserves products, then ψ_L is an isomorphism. Since Φ_{G^L,H} is a bijection, this means ξ is a minion isomorphism.
A basic lemma in category theory says that if a functor Γ admits a left adjoint, then it preserves products (indeed, all limits). So a pair of adjoint pairs (Λ, Γ), (Γ, Ω) implies a minion isomorphism. Hence the first part of Lemma 4.7 is analogous to Observation 2.10, while the second part is analogous to Observation 2.11. We can also derive the second direction as a corollary to the following lemma.

Lemma 4.8. Let Γ : C → D be a functor between Cartesian categories that preserves products. Then for all objects G, H in C, there is a minion homomorphism from Pol_C(G, H) to Pol_D(ΓG, ΓH).

Proof. Recall from the proof of Lemma 4.7 the natural morphism ψ_L : Γ(G^L) → (ΓG)^L, for G ∈ C, L, L′ ∈ N, and π : [L] → [L′]. Since Γ preserves products, ψ_L is an isomorphism, so we can define a minion homomorphism ξ : Pol_C(G, H) → Pol_D(ΓG, ΓH) as follows: ξ(f) := Γ(f) ∘ ψ_L^{−1}, for f : G^L → H. This preserves minors, because from the diagram's commutation we have ξ(f ∘ π_G) = ξ(f) ∘ π_{ΓG}.

If we have adjoint functors in the (non-thin) category of graphs (or multigraphs), then Lemma 4.7 implies a minion homomorphism between the standard polymorphism minions (because a morphism is associated with a function between vertex sets). One could also apply Lemma 4.7 to the thin category of graphs, but the conclusion is then about minions of polymorphisms in that thin category, which is useless, since it does not distinguish between different projections G^L → G.
All the thin functors we have considered are in fact functors in the category of graphs or digraphs: in particular Λ_k, Γ_k, Ω_k, δ_L, δ, δ_R. The definitions can also be extended to give functors in the category of multi(di)graphs. The pairs (Λ_k, Γ_k) and (δ_L, δ) are adjoint pairs in the categories of multi(di)graphs (this fails in the category of (di)graphs; e.g. the number of homomorphisms Λ_3 G → H is not always equal to the number of homomorphisms G → Γ_3 H). This implies minion homomorphisms Pol(Λ_k G, H) → Pol(G, Γ_k H) and Pol(δ_L G, H) → Pol(G, δH).
In contrast, the pairs (Γ_k, Ω_k) and (δ, δ_R) are not adjoint pairs; they are only thin adjoints. Since Γ_k and δ are right adjoints (of Λ_k and δ_L), they preserve products. Applying Corollary 4.9 hence at least gives minion homomorphisms Pol(G, Ω_k H) → Pol(Γ_k G, H) and Pol(G, δ_R H) → Pol(δG, H). However, our results would only follow from the opposite direction. This is impossible to obtain in general: a minion homomorphism Pol(δG, H) → Pol(G, δ_R H) would imply the following minion homomorphism (trivially from δK_6 → K_4 and δ_R K_k → K_{2^k}), which is impossible by [Bar+19, Proposition 10.3]. Thus the seemingly technical difference between adjoints and thin adjoints turns out to be crucial.
As proved by Matsushita [Mat19], the hom complex Hom(K 2 , −) has a left adjoint from the category of Z 2 -simplicial complexes with Z 2 -simplicial maps to the category of graphs; the left adjoint preserves products.
Conclusions
The reduction in Lemma 3.3, on which our first main result relies, does not have a corresponding minion homomorphism. Given the simplicity of the reduction itself, this contrasts with the success of minion homomorphisms in explaining other reductions between promise constraint satisfaction problems. It remains to be seen whether this notion can be extended to a more general relation between polymorphism sets in a way that would imply Lemma 3.3.
The question of whether K_4 is left-hard remains open. In principle, it may be possible to extend the proof in Appendix A using more tools from algebraic topology to analyse Z_2-maps (S^1)^L → S^2 and deduce an appropriate minion homomorphism. It could also be interesting to consider how δ or δ_R affect the topology of a graph, cliques in particular.
Another direction could be to look at Huang's Theorem 3.6 not as a black box: could constructions like δ be useful to say something directly about PCPs?
A. Left-hardness using the box complex

Basic definitions in topology
For topological spaces X, Y , we call a continuous function f : X → Y a map, for short. Two maps f, g : X → Y are homotopic if they can be continuously transformed into one another; formally: there is a family of maps φ t : X → Y for t ∈ [0, 1] (called a homotopy) such that φ 0 = f , φ 1 = g and such that the function (t, x) → φ t (x) from [0, 1] × X to Y is continuous. Two spaces X, Y are homotopy equivalent if there are maps f : X → Y and g : Y → X such that g • f and f • g are homotopic to identity maps on X and on Y .
We shall only consider topological spaces described in the following simple combinatorial way. A (simplicial) complex K is a family of non-empty finite sets that is downward closed, in the sense that ∅ ≠ σ ⊆ σ′ ∈ K implies σ ∈ K. The sets in K are called faces (or simplices) of the complex, while their elements V(K) := ⋃_{σ∈K} σ are the vertices of the complex. The geometric realisation |σ| of a face σ ∈ K is the subset of R^{V(K)} defined as the convex hull of {e_v | v ∈ σ}, where e_v is the standard basis vector corresponding to the v coordinate in R^{V(K)}. The geometric realisation |K| of K is the topological space obtained as the subspace ⋃_{σ∈K} |σ| ⊆ R^{V(K)}. We represent the points of |K| as linear combinations of vertices λ_1 v_1 + ⋯ + λ_n v_n such that {v_1, . . . , v_n} ∈ K and the λ_i are non-negative reals summing to 1. We often refer to K itself as a topological space, meaning |K|. A simplicial map K → K′ is a function f : V(K) → V(K′) such that f(σ) := {f(v) | v ∈ σ} is a face of K′ whenever σ is a face of K. It induces a map |f| : |K| → |K′| by extending it linearly from vertices on each face: |f|(λ_1 v_1 + ⋯ + λ_n v_n) := λ_1 f(v_1) + ⋯ + λ_n f(v_n). For example, the circle may be represented as the triangle K = {{1}, {2}, {3}, {1, 2}, {2, 3}, {3, 1}}, meaning that |K|, which is the union of three intervals in R^3, is homotopy equivalent to the unit circle S^1 in R^2. Adding the face {1, 2, 3} to K would make |K| contractible, that is, homotopy equivalent to the one-point space.
Equivariant topology - topology with symmetries
Rather than asking about "non-trivial maps" (maps not homotopic to a constant map), it is easier to work with equivariant topology, that is, considering topological spaces together with their symmetries and symmetry-preserving maps. A Z_2-space is a topological space X equipped with a map − : X → X, called a Z_2-action on X, satisfying −(−x) = x (for all x ∈ X). We will call −x the antipode of x. The main example is the n-dimensional sphere: the Z_2-space defined as the unit sphere in R^{n+1} with Z_2-action x ↦ −x as vectors. A Z_2-map between two Z_2-spaces is a map f that preserves the action: f(−x) = −f(x) (this is also called an equivariant map). We write X →_{Z_2} Y if such a map exists (the Z_2-actions being clear from context).
Standard notions extend in a fairly straightforward way to equivariant notions. A Z_2-complex is a simplicial complex K together with a simplicial map − : V(K) → V(K) satisfying −(−v) = v; a Z_2-homotopy is a homotopy φ_t in which every φ_t is a Z_2-map. We say that two Z_2-spaces X, Y are Z_2-homotopy equivalent, denoted X ≃_{Z_2} Y, if there are Z_2-maps f : X →_{Z_2} Y and g : Y →_{Z_2} X such that g ∘ f and f ∘ g are Z_2-homotopic to the identity. Note this is stronger than just requiring X →_{Z_2} Y and Y →_{Z_2} X; homotopy equivalence is more similar to graph isomorphism than to homomorphic equivalence of graphs.
The box complex - the topology of a graph
The box complex Box(G) of a graph G is a Z_2-complex defined as the family of vertex sets of complete bipartite subgraphs of G × K_2 (with both sides non-empty) and their subsets. In particular it contains all edges of G × K_2 and every K_{2,2} = C_4 subgraph. The topology of box complexes of the following graphs is folklore.
Lemma A.1. (i) |Box(K_n)| ≃_{Z_2} S^{n−2}; (ii) |Box(C_n)| ≃_{Z_2} S^1 for odd n; (iii) |Box(K_{p/q})| ≃_{Z_2} S^1 for 2 < p/q < 4; (iv) for square-free graphs K, Box(K) is Z_2-homotopy equivalent to K × K_2 viewed as a 1-dimensional complex.

Proof. For (i), see Proposition 19.8 in [Koz08], Proposition 4.3 in [BK06], or Lemma 5.9.2 in [Mat08]. Informally, the vertices of Box(K_n) can be mapped bijectively to points in R^n of the form ±e_i := (0, . . . , 0, ±1, 0, . . . , 0). These are vertices of the cross-polytope in R^n (the n-dimensional counterpart of the octahedron). Faces of Box(K_n) are exactly those subsets of {±e_1, . . . , ±e_n} that do not contain repeated indices (+e_i and −e_i for any i), except for the two sets {+e_1, . . . , +e_n} and {−e_1, . . . , −e_n} (since a complete bipartite subgraph containing all n vertices on one side cannot contain any vertex on the other side). The complex is thus isomorphic to the cross-polytope in R^n, but with the interior and two opposite facets removed. The cross-polytope after removing the interior is Z_2-homotopy equivalent to S^{n−1}, and after removing two opposite facets it is Z_2-homotopy equivalent to S^{n−2}.
For (iv), let us denote the two vertices of Box(K) corresponding to v ∈ V(K) as v• and v◦. Observe that Box(K) would be isomorphic to K × K_2 (meaning the 1-dimensional simplicial complex with V(K × K_2) as vertices and with E(K × K_2) and their subsets as faces), except that it also contains faces joining a vertex to subsets of its whole neighbourhood N(v), for each v ∈ V(K) (except those with empty neighbourhood). However, these additional faces can be collapsed. Formally, every face not in E(K × K_2) is either of the form {v◦, w_1•, . . . , w_n•} or {w_1•, . . . , w_n•} for some w_i ∈ N(v) and n ≥ 2, or the same with • and ◦ swapped. Since K is square-free, even in the second case v is uniquely determined by the w_i. Hence we can match these faces in pairs. This matching is easily checked to satisfy the definitions of a so-called acyclic Z_2-matching in Discrete Morse Theory, which allows one to show that removing these faces gives a Z_2-homotopy equivalent complex: see Section 3 in [Wro19] for definitions and details.
For (ii), observe that by the above, Box(C_n) is Z_2-homotopy equivalent to C_n × K_2 = C_{2n} as a simplicial complex (for odd n). It is straightforward to give a Z_2-homotopy equivalence (in fact a homeomorphism) to S^1.
For (iii), we first consider the case when p is odd. Then, K_{p/q} × K_2 is isomorphic to the Cayley graph K′ of Z_{2p} with generators {±1, ±3, . . . , ±(p − 2q)} (the isomorphism maps (i, 0) to 2i and (i, 1) to 2i + p). In particular, K′ includes a cycle C_{2p} on 0, 1, . . . , 2p − 1, and the Z_2-action on K_{p/q} × K_2 corresponds to point reflection on C_{2p}. We thus have an inclusion map ι : |C_{2p}| → |K′| (where |K′| is shorthand for the geometric realisation of the box complex on the vertices of K_{p/q} × K_2, and C_{2p} is meant as a subcomplex). Note that p/q < 4 is equivalent to p − 2q < p/2, so two adjacent vertices of K′ are at distance < p/2 in C_{2p}. Therefore, every face of the box complex (a complete bipartite subgraph of K′) is contained in an interval of length < p in Z_{2p}. Every point in the geometric realisation of such a face can be unambiguously mapped by linear extension in the interval to a point in the geometric realisation of C_{2p}, giving a Z_2-map f : |K′| → |C_{2p}|. The maps ι, f give a Z_2-homotopy equivalence (f ∘ ι : |C_{2p}| → |C_{2p}| is equal to the identity, while ι ∘ f is Z_2-homotopic to the identity, since one can also linearly interpolate between the definition of f and the identity map). The proof for even p is similar, the main difference being that K′ should be the graph on Z_p × {0, 1} with (i, a) adjacent to (j, b) if a ≠ b and i, j are at distance ≤ (p − 2q)/2.
Note that for a loop-less graph K, Box(K) is a free Z_2-complex, which means every face σ is disjoint from −σ. This in turn implies that |Box(K)| is a free Z_2-space, which means that a point is never its own antipode. Proposition 5.3.2.(v) in [Mat08] shows that a free Z_2-complex of dimension n admits a Z_2-map to S^n. Hence for loop-less, square-free graphs K, we have |Box(K)| →_{Z_2} S^1.
The hom complex - preserving products
Instead, we will use the Hom complex Hom(K_2, G), which is Z_2-homotopy equivalent to Box(G), as proved by Csorba [Cso08]. Its vertices are homomorphisms K_2 → G, that is, oriented edges (u, v) of G. For every U, V ⊆ V(G) such that U × V ⊆ E(G), the set U × V and its subsets are faces of Hom(K_2, G). In other words, a set σ of oriented edges is a face if for every two (u, v), (u′, v′) ∈ σ, (u, v′) is an oriented edge of G. The Z_2-action swaps (u, v) to (v, u).
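On tiny graphs the faces of Hom(K_2, G) can be enumerated by brute force. The following Python sketch (names ours) does so for C_5 and recovers 10 vertices and 10 edges forming a 10-cycle, consistent with |Box(C_5)| being Z_2-homotopy equivalent to S^1 (Lemma A.1):

```python
from collections import Counter
from itertools import combinations

def hom_k2_faces(edges):
    """Faces of Hom(K_2, G): non-empty sets sigma of oriented edges such that
    for every (u,v), (u',v') in sigma, (u, v') is again an oriented edge."""
    und = set(edges) | {(v, u) for (u, v) in edges}
    arcs = sorted(und)
    faces = []
    for r in range(1, len(arcs) + 1):
        level = [set(s) for s in combinations(arcs, r)
                 if all((a[0], b[1]) in und for a in s for b in s)]
        if not level:
            break  # faces are downward closed, so no larger faces exist
        faces += level
    return faces

C5 = [(i, (i + 1) % 5) for i in range(5)]
print(Counter(len(s) for s in hom_k2_faces(C5)))
# Counter({1: 10, 2: 10}): 10 vertices and 10 edges, forming a 10-cycle
```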
This definition has the advantage that it respects products trivially (and exactly, not just up to homotopy equivalence): Hom(K_2, G × H) is isomorphic to Hom(K_2, G) × Hom(K_2, H) (as Z_2-simplicial complexes). The isomorphism simply maps the oriented edge between pairs (g_1, h_1) and (g_2, h_2) ∈ V(G) × V(H) to the pair of oriented edges ((g_1, g_2), (h_1, h_2)). In the same way, Hom(K_2, G^L) is isomorphic to Hom(K_2, G)^L, mapping pairs of L-tuples to L-tuples of pairs.

Lemma A.2. Let f : G^L → H be a graph homomorphism. Let f̃ : Hom(K_2, G)^L → Hom(K_2, H) be the induced simplicial Z_2-map, defined as f̃((u_1, v_1), . . . , (u_L, v_L)) := (f(u_1, . . . , u_L), f(v_1, . . . , v_L)). Then the transformation f ↦ f̃ preserves minors and composition.

This is straightforward from the definitions. Here by compositions we mean functions of the form h(f(g_1(x_1), . . . , g_L(x_L))) for g_i : G′ → G and h : H → H′; the graph homomorphisms g_i and h induce simplicial maps just as above for L = 1. Preserving compositions means in particular that if µ is an automorphism of G and µ̃ is the automorphism of Hom(K_2, G) it induces, then f(x_1, . . . , µ(x_i), . . . , x_L) induces f̃(x_1, . . . , µ̃(x_i), . . . , x_L).
In the geometric realisation, the above-mentioned isomorphism induces (by linear extension) an isomorphism from |Hom(K_2, G × H)| to |Hom(K_2, G) × Hom(K_2, H)|. The latter has a natural Z_2-homotopy equivalence to |Hom(K_2, G)| × |Hom(K_2, H)|, implicit in the following claim.

Lemma A.3. Let f : G^L → H be a graph homomorphism and let |f̃| : |Hom(K_2, G)|^L → |Hom(K_2, H)| be the induced Z_2-map. Then the transformation f ↦ |f̃| preserves minors up to Z_2-homotopy rel x_0 and preserves composition exactly.
Proof. Preservation of composition is again straightforward.
To see that the transformation preserves minors, consider for example the contraction (identification) of two coordinates; the general case is entirely analogous. Let f : X^2 → Y and let f/2 : X → Y be the minor obtained by contracting the two coordinates. Taking the geometric realisation of the minor, and taking the induced map first and only then contracting, give two maps whose images at each point lie in two faces of Y, one of which contains the other. We can thus continuously move from one to the other. Formally, let µ_{i,j} := λ_i if i = j and 0 otherwise. Then the interpolated functions f_t (for t ∈ [0, 1]) are always well-defined and give a Z_2-homotopy between |f/2| and |f̃|/2. For any vertex x_0 (i.e. λ_1 = 1), f_t(x_0) is constantly equal to f(x_0).
We thus have a minion homomorphism from Pol(G, H) to the minion of maps-up-to-homotopy |Hom(K_2, G)|^L → |Hom(K_2, H)|, which preserves automorphisms of G. This, as well as the minion homomorphism in the following subsection, can be interpreted as an instance of Lemma 4.8.
The fundamental group
For a topological space |X| and a point x_0 ∈ |X|, two maps from |X| to some topological space are homotopic rel x_0 if there are homotopies that do not move the image of x_0. In the fundamental group π_1(|X|, x_0), the elements are equivalence classes of loops at x_0 (maps [0, 1] → |X| mapping 0 and 1 to x_0) under homotopy rel x_0; the group operation is concatenation. We skip x_0 when it is not important, since π_1(|X|, x_0) is always isomorphic to π_1(|X|, x_0′) if |X| is path-connected, 10 which we implicitly assume throughout.
Including information about the Z 2 -symmetry in the fundamental group is a bit less obvious. For a Z 2 -space |X| we can look at the fundamental group of |X| but also the fundamental group of the quotient |X| /Z 2 (where every point is identified with its antipode; a.k.a. the orbit space or base space; we denote the equivalence class of x by ±x). One way to think of elements of π 1 (|X| /Z 2 , ±x 0 ) is as paths from x 0 to either x 0 or −x 0 , with concatenation defined using the Z 2 -action if necessary. Observe that π 1 (|X| /Z 2 ) contains π 1 (|X|) as a subgroup, consisting of paths from x 0 to x 0 .
Another way to describe the subgroup is by a group homomorphism ν_X : π_1(|X|/Z_2) → Z_2 mapping the subgroup (paths x_0 to x_0) to 0 and everything else (paths x_0 to −x_0) to 1. Thus π_1(|X|) is the subgroup given by the kernel of ν_X. 11 For example, consider S^1. The quotient S^1/Z_2 is again a circle, so π_1(S^1/Z_2) is isomorphic to Z (a loop in the quotient is represented by its winding number); ν is the remainder mod 2 (loops with odd winding number in the quotient correspond to paths from a point to its antipode in S^1) and π_1(S^1) is the subgroup 2Z of even integers. In contrast, the quotient S^2/Z_2 is the projective plane, so π_1(S^2/Z_2) is isomorphic to Z_2; ν is the identity and the subgroup π_1(S^2) is the trivial group.
In other words, to a Z 2 -space |X| we assign a group π 1 (|X| /Z 2 ) together with a group homomorphism ν X to Z 2 . Consider the category whose objects are such pairs (G, ν) (a group with a homomorphism to Z 2 ), while morphisms (G, ν G ) → (H, ν H ) are group homomorphisms G → H preserving ν. The categorical product of (G, ν G ) and (H, ν H ) is {(g, h) ∈ G × H : ν G (g) = ν H (h)} with coordinate-wise multiplication and the homomorphism to Z 2 defined in an obvious way (ν(g, h) := ν G (g) = ν H (h)).
Lemma A.4. A Z_2-map f : |X|^L → |Y| induces a group homomorphism f_* from the L-fold categorical product of (π_1(|X|/Z_2), ν_X) (in the category described above) to (π_1(|Y|/Z_2), ν_Y). The transformation f ↦ f_* preserves minors and preserves automorphisms of |X| that fix x_0. 13
Consider a graph homomorphism f : C_n^L → H (n odd). We have |Hom(K_2, C_n)| ≃_{Z_2} S^1 and hence π_1(C_n/Z_2) is Z with the group homomorphism ν_{C_n} : i ↦ (i mod 2). In particular, the relevant L-fold product Z̃^L is the subgroup of Z^L given by L-tuples in which the integers are all even or all odd, and π_1(C_n) is the subgroup 2Z of even integers in Z. For an arbitrarily fixed edge e_0 of C_n, the automorphism µ_{C_n} that mirrors the graph and fixes e_0 induces the automorphism of Z which maps i to −i.
Therefore, composing the transformations from Lemmas A.2, A.3, and A.4, we obtain a group homomorphism f_* : Z̃^L → π_1(H/Z_2) which preserves the homomorphism to Z_2 and the mirror automorphism on each coordinate.
Suppose that |Hom(K_2, H)| ≃_{Z_2} S^1, so again π_1(H/Z_2) = Z with the same homomorphism to Z_2 (i mod 2) and the same mirror automorphism (i ↦ −i). Since f_* preserves the homomorphism to Z_2, d := f_*(1, 1, . . . , 1) ∈ Z is an odd number, which means f_*(2, 2, . . . , 2) = 2d is non-zero. This is why we needed the Z_2-action: to conclude that f_* is non-trivial. We can now focus on what f_* does on the subgroup of even integers.
Let a_ℓ := f_*(0, . . . , 0, 2, 0, . . . , 0) ∈ Z, with the 2 in the ℓ-th coordinate. Then f_* on even numbers is completely determined by these elements: f_*(2i_1, . . . , 2i_L) = a_1 · i_1 + ⋯ + a_L · i_L (because it is a group homomorphism). By the above, Σ_{ℓ=1}^{L} a_ℓ is non-zero. Since f ↦ f_* preserves minors, we know that the minor i ↦ f_*(i, i, . . . , i) is a group homomorphism induced by some graph homomorphism C_n → H (namely by the corresponding minor v ↦ f(v, . . . , v)), hence the integer f_*(2, 2, . . . , 2) belongs to a set of at most |H|^n possibilities. The same holds for compositions with mirror symmetries: the group homomorphism i ↦ f_*(i, . . . , −i, . . . , i) with a minus on any subset of coordinates is induced by the graph homomorphism C_n → H defined as f(v, . . . , µ_{C_n}(v), . . . , v) with µ_{C_n} on the same set of coordinates. Hence for i_1, . . . , i_L ∈ {+1, −1}, the values f_*(2i_1, 2i_2, . . . , 2i_L) = a_1 · i_1 + ⋯ + a_L · i_L belong to a set of at most |H|^n possibilities. This implies that fewer than |H|^n of the integers a_ℓ are non-zero. Indeed, if there are L′ coordinates for which a_ℓ is non-zero, then one can set the corresponding i_ℓ to make each a_ℓ · i_ℓ positive, and then swap the i_ℓ one-by-one in any order, resulting in a strictly decreasing sequence of values a_1 · i_1 + ⋯ + a_L · i_L, hence in L′ + 1 distinct values. Hence L′ + 1 ≤ |H|^n. Therefore, the group homomorphism (2i_1, . . . , 2i_L) ↦ f_*(2i_1, . . . , 2i_L), from (2Z)^L to 2Z, has bounded (but non-zero) essential arity. Note that this is exactly the homomorphism f_*|_{π_1(C_n)^L}, from the subgroup π_1(C_n)^L to the subgroup π_1(H).
The same argument would work if instead of |Hom(K_2, H)| ≃_{Z_2} S^1 we only assumed we had a Z_2-map g : |Hom(K_2, H)| → S^1, since it would induce a group homomorphism g_* : π_1(H/Z_2) → Z which preserves the homomorphism to Z_2, in a way that preserves mirror automorphisms of S^1; it then suffices to compose g_* with f_* and continue as above.
This concludes the proof of the following:

Theorem A.5. Let H be a graph such that |Hom(K_2, H)| →_{Z_2} S^1. Then for all odd n, Pol(C_n, H) admits a minion homomorphism to a minion of bounded essential arity with no constant functions.
By Theorem 4.6, this concludes the direct proof that PCSP(C_n, H) is NP-hard for all odd n:

Corollary A.6. Let H be a graph such that |Hom(K_2, H)| →_{Z_2} S^1. Then H is left-hard.
Since |Hom(K_2, H)| is Z_2-homotopy equivalent to |Box(H)| (hence they admit the same Z_2-maps), this is exactly equivalent to Corollary 2.8; in particular it gives a proof of Theorem 2.4.
Further remarks
In the case of S^1, the fact that a Z_2-map g : |Hom(K_2, H)| → S^1 induces a group homomorphism g_* : π_1(H/Z_2) → Z which preserves the homomorphism to Z_2 is in fact an exact characterisation. That is, as stated by Matsushita [Mat19], standard covering space theory yields the following:

Lemma A.7. A connected Z_2-space |X| admits a Z_2-map to S^1 if and only if there exists a group homomorphism f : π_1(|X|/Z_2) → Z which preserves the action (that is, f^{−1}(2Z) = π_1(|X|)).
In the above proof, one could go directly from graphs to fundamental groups, avoiding simplicial complexes and topological spaces (though they remain the simplest way to prove that these fundamental groups preserve products). A direct definition of the fundamental group of the quotient space |Box(H)|/Z_2 is as follows. We consider closed walks (cycles that are allowed to self-intersect) from an arbitrary fixed vertex v_0 ∈ V(H). Two such walks are considered equivalent if one can be obtained from the other by adding/removing backtracks (a pair of consecutive edges going back and forth on the same edge of H) and 4-cycles (subwalks around a cycle of length 4). The elements of the group are equivalence classes of walks, with concatenation as multiplication. The resulting group is isomorphic to π_1(H/Z_2) (this combinatorial definition is known as the edge-path group; see [Mat17] or Sections 3.6 and 3.7 in [Spa66]). Considering walks in H × K_2 instead would yield a group isomorphic to π_1(H).
For example, for odd cycles and more generally circular cliques K_{p/q} with p/q < 4, the group is just Z (Lemma 4.1 in [Wro17] has a direct but technical proof); for square-free graphs the group is a free (non-Abelian) group. For K_4, the resulting group is just Z_2 (all walks of the same parity are equivalent), which corresponds to the fact that |Box(K_4)| is the 2-sphere and |Box(K_4)|/Z_2 is the projective plane.
Unfortunately, this makes the fundamental group useless for the question of whether K_4 is left-hard. Indeed, there is only one possible induced group homomorphism f_* : Z̃^L → π_1(K_4/Z_2) = π_1(RP^2) = Z_2: it maps L-tuples of even integers to 0 and L-tuples of odd integers to 1 (because it has to preserve the homomorphism to Z_2, which is the identity). Whether other tools of algebraic topology can be useful remains to be seen.
"year": 2019,
"sha1": "48875feae2e592c7bbfc4fd6b7d787c0e03cb495",
"oa_license": null,
"oa_url": "https://epubs.siam.org/doi/pdf/10.1137/1.9781611975994.86",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6bd282c5687d43dd8326e82aa555be7c24baff2d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
Using satellite imagery, drone imagery, and ground counts, we have assembled the first comprehensive global population assessment of Chinstrap penguins (Pygoscelis antarctica) at 3.42 (95th-percentile CI: [2.98, 4.00]) million breeding pairs across 375 extant colonies. Twenty-three previously known Chinstrap penguin colonies are found to be absent or extirpated. We identify five new colonies, and 21 additional colonies previously unreported and likely missed by previous surveys. Limited or imprecise historical data prohibit our assessment of population change at 35% of all Chinstrap penguin colonies. Of colonies for which a comparison can be made to historical counts in the 1980s, 45% have probably or certainly declined and 18% have probably or certainly increased. Several large colonies in the South Sandwich Islands, where conditions apparently remain favorable for Chinstrap penguins, cannot be assessed against a historical benchmark. Our population assessment provides a detailed baseline for quantifying future changes in Chinstrap penguin abundance, sheds new light on the environmental drivers of Chinstrap penguin population dynamics in Antarctica, and contributes to ongoing monitoring and conservation efforts at a time of climate change and concerns over declining krill abundance in the Southern Ocean.
dynamics 8,15–17. Other potential impacts, including tourism, extreme weather events, and disease outbreaks, may be important at local scales or for short periods of time, but are less compelling explanations for the widespread changes observed across the Antarctic penguins 3,18,19. The purpose of this paper is to address two research questions: (1) What is the global population and distribution of Chinstrap penguins? and (2) How does updated information on Chinstrap penguin abundance and distribution support or refute current hypotheses of penguin population dynamics around the Western Antarctic Peninsula? This paper provides a status report on our efforts to assemble all the available information on Chinstrap penguin distribution, abundance, and population trends over the past 40 years. We have used published data, additional unpublished data from our own recent field surveys, and estimates derived from high-resolution [0.31–3.0 m/pixel] satellite imagery obtained from Maxar, Planet, and Google Earth, as well as medium-resolution [30.0 m/pixel] Landsat imagery and unmanned aerial system (UAS) imagery. While unavoidably incomplete, this report provides the most comprehensive catalog to date of Chinstrap penguin colonies, their exact locations, and their population trends (see Supplementary Information), and identifies priority areas for future surveys. Finally, we discuss how these results fit into the current debate surrounding drivers of penguin population trends in the Antarctic Peninsula region.
Results
We estimate the global population of Chinstrap penguins at 3.42 (95th-percentile CI: [2.98, 4.00]) million breeding pairs (Table 1) in 375 extant breeding sites, not including recent extirpations. All survey details, updated population estimates, historical benchmarks, and estimates of population change are provided in the Supplementary Information.
Most Chinstrap penguin colonies are in the southwest Atlantic sector of the Southern Ocean, which includes the Antarctic Peninsula and associated islands: the South Orkney Islands, the South Sandwich Islands, and South Georgia Island (Fig. 1). The southernmost Antarctic Peninsula colonies are located on the north side of Marguerite Bay, at a latitude of approximately 67.8° S. Globally, the Chinstrap penguin's range also includes small colonies on Bouvet Island and in the Balleny Islands (Fig. 2). Of 398 total sites, we were able to verify the locations of 364 from satellite imagery (Table 2). Colonies were present, or presumed present, at 332 sites, with an additional 42 colonies unable to be assessed with available literature and imagery. This total includes 26 previously unreported colonies that, with the exception of several small colonies in the far south, were likely present but overlooked by previous surveys. There were 23 colonies that were either confirmed as having no breeding Chinstrap penguins or where we presume, but have not confirmed, absence; all of these represent potential extirpations. For 260 sites with updated abundance estimates, 43.8% had the highest level of precision (N1, ±5% accuracy) and 42.3% had the lowest (N5, nearest order of magnitude), following accepted standards (e.g. 20,21). While we did identify several previously unreported colonies, the identification of very small colonies (<10 breeding pairs) is difficult, and it is likely that additional small colonies will be discovered over time. These colonies do not significantly affect global totals. Using the best current estimates of abundance, mean colony size for 367 colonies was 9,327 breeding pairs (SD = 40,861; median = 1,100), excluding the 23 extirpated sites and eight sites with no current abundance information. A total of 19.1% of these colonies fell between 1 and 100 breeding pairs, 68.4% fell between 101 and 10,000 breeding pairs, and 12.5% had more than 10,000 breeding pairs.
To summarize the distribution of Chinstrap penguins in a management context, we use regions previously defined by the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR); regional totals are given in Table 3. Overall, 34.7% of colonies could not be assessed against a historical benchmark. Of those colonies for which a comparison could be made to counts in the 1980s, 40.4% have declined, 16.2% have increased, 24.6% have not changed significantly, 10.0% represent previously unrecognized colonies, and 8.8% have been extirpated. However, this picture changes when assessing total population rather than number of colonies. Of the same sample, 15.3% of the total population are in colonies that have declined or probably declined and 25.1% in colonies that have increased or probably increased, with 57.6% of the population not changing significantly and 2.0% representing previously unrecognized colonies. This is due to the fact that more than one-third of the global Chinstrap penguin population is concentrated in a few large colonies in the South Sandwich Islands, where the species is apparently stable 4. For colonies that could be assessed against a historical benchmark, most are declining on the Western Antarctic Peninsula (Fig. 3), while trends are mixed elsewhere. Many colonies in the eastern part of the Chinstrap penguin's range could not be assessed because of limited historical data; these are not shown in the figure.
Discussion
We estimate Chinstrap penguin abundance at 3.42 million breeding pairs. This estimate is broadly consistent with the BirdLife International total of 8 million mature individuals 1 . The BirdLife estimate required an update because it did not incorporate recent compilations of abundance, include full spatial coverage, or provide a robust treatment of uncertainty. Thus, this study provides a rigorous baseline against which past and future trends can be assessed. While substantively complete, our efforts to calculate a global Chinstrap penguin assessment remain a work in progress, as additional satellite imagery and field expeditions over the coming years will undoubtedly fill gaps in our understanding of current abundance and distribution.
To place our findings on the current abundance and distribution of Chinstrap penguins in context, we briefly summarize the various hypotheses put forth to explain why Chinstrap penguin populations have fluctuated dramatically since the very earliest days of monitoring efforts and provide some suggestions for future research to address gaps in our knowledge uncovered by this population assessment. www.nature.com/scientificreports/ decades 23 Chinstrap penguins have simultaneously begun to decline. Notably, however, sympatric populations of the other pygoscelid penguins-Adélie and Gentoo-have not behaved similarly. Fraser et al. 15 observed that while Chinstrap penguin populations had increased through the 1980s, Adélie penguins had not. Because both species are krill specialists, competition alone could not explain the disparity. Their "sea ice hypothesis" proposed that Chinstrap penguins, as a pagophobic species, had benefitted from a long-term decline in sea ice cover. Although this prediction did not fit the subsequent drop in Chinstrap penguin numbers on the Western Antarctic Peninsula, where sea ice decline has continued, it helped shape later discussions relating life history characteristics to penguin population change in the context of environmental impacts. Reviewing both Chinstrap and Adélie penguin populations, Trivelpiece et al. 8 formulated an inclusive hypothesis relating penguin populations to krill biomass. Competition from rebounding whale and fur seal populations, the effects of climate change on sea ice used by larval krill, and the development of a pelagic krill-trawling fishery were each proposed to exert downward pressure on total krill biomass, with cascading effects on penguin populations. This overall "krill hypothesis" remains a reasonable model for the decline of Chinstrap penguins on the Western Antarctic Peninsula since the 1980s, but it cannot distinguish which of these broad impacts is most significant. It also does not address the rapid increase in sympatric Gentoo penguin populations during the same period. Although Gentoo penguins have a somewhat more flexible diet, they also survive predominantly on krill during the Antarctic summer and often nest in the same colonies as Chinstrap and Adélie penguins 24,25 .
While competition from whales and fur seals cannot be ruled out as a contributing factor in the rapid decline of some krill-dependent species, most of the focus has been on climate change and krill fishing as the two most likely principal drivers underpinning the observed changes in pygoscelid composition. Scientists have increasingly emphasized the impacts of rising temperatures and retreating sea ice due to global climate change (e.g. 26,27 ), especially on the Western Antarctic Peninsula, where temperature increases have been disproportionately high. Warmer temperatures in this region are well documented; for instance, Bromwich et al. 28 found a 2.4 °C increase in Western Antarctic temperatures from 1958 to 2012. Extreme weather events, including record high air temperatures, are also evidently becoming more frequent 29 . Climate change, by influencing sea ice dynamics and marine environmental conditions, likely affects the biomass and distribution of krill in the Southern Ocean 30 and, in turn, penguin populations 8 .
Krill fishing may play a significant role in Chinstrap penguin population dynamics, especially at local scales 17 . Multiple studies have documented overlap between commercial krill trawls and penguin foraging ranges 16,31,32 . Because they make marginally longer foraging trips than Adélie or Gentoo penguins, Chinstrap penguins could have higher exposure to fishing interference during the breeding season. Watters et al. 32 predicted that today's "precautionary" limits on krill fishing would affect penguin breeding success with a magnitude similar to that of climate change, if seasonal local harvest rates exceed a threshold of 0.1. This is especially pertinent to Chinstrap penguin colonies adjacent to krill-fishing hotspots, though continued declines in areas currently experiencing less intense krill trawling (e.g. Elephant Island 6 ) suggest that krill fishing does not fully explain the widespread Chinstrap penguin population changes identified by our assessment.
A renewed focus on the overwinter period. If breeding success is the primary driver of population change in pygoscelid penguins, then one might expect clear distinctions between Chinstrap, Adélie, and Gentoo penguin productivity throughout the Western Antarctic Peninsula over the past half century. However, several decades-long investigations of these penguins nesting together, in mixed colonies, have reported no significant difference in chicks fledged (e.g. 5,13 ). As previously argued by Hinke et al. 13 , these data strongly indicate that contemporary Chinstrap penguin declines are largely driven by factors in the non-breeding portion of the year rather than by summer production.
All three Pygoscelis spp. are central-place foragers at their summer colonies with heavily overlapping ranges 16 . After the nesting season, though, each occupies a very different niche and geographic area. Chinstrap penguins migrate the farthest, swimming up to 4500 km away from their colonies to overwinter at latitudes near 60° S in the Southern Ocean; those from the Western Antarctic Peninsula mostly travel westward 33,34 . Adélie penguins from the northern Antarctic Peninsula migrate east to overwinter in the Weddell Sea pack ice zone 33 . Gentoo penguins, meanwhile, stay close to their Antarctic Peninsula breeding colonies throughout the winter 16 .
If austral summer conditions are nominally similar for all three sympatrically breeding Pygoscelis spp., could we be observing a mismatch between the timing or location of winter resources and overwinter foraging behavior? Such trophic mismatches have been an area of concern for other bird species, and long-distance migrant birds have declined at a faster rate than resident species in response to global climate change (e.g. 35,36 , see also 37 ). This has generally been explained as a disconnect between inflexible migratory instincts and rapidly changing environmental conditions in one part of the birds' range, a concept known as "decoupling" 38 . If the Southern Ocean ecosystem is changing faster than migratory penguins can adapt, then the long-distance migrants, Chinstrap and Adélie penguins, could be disproportionately affected.
One area of concern salient to understanding Chinstrap penguin population declines is that krill biomass may be shifting. In a wide-ranging analysis of krill trawl data, Atkinson et al. 39 found that krill have become more concentrated near Antarctic shelves, krill densities have declined near the species' northern limit, and the range of Antarctic krill has contracted south by more than 400 km in the past 90 years. Winter tracking data show Chinstrap penguins foraging within a narrow latitudinal band along the northern edge of known krill distribution, in a zone vulnerable to such a shift 34 . If the winter range of these penguins is out of sync with krill concentrations, and if Chinstrap penguins depend on krill year-round, then a decoupling hypothesis specific to the nonbreeding season could predict the decline of Chinstrap penguin populations while allowing for an increase in Gentoo penguins, and would be consistent with the similar breeding productivity of all three pygoscelid species. This perspective on the importance of Chinstrap penguin winter foraging is inspired by recent tracking studies, and suggests that continued research on the overwinter activities of the three Pygoscelis spp. penguins (regarding both foraging areas and diet) is key to understanding the extent to which changes in population reflect summer or winter conditions.
Moving forward. While recent surveys in the Elephant Island and Low Island region have filled in some critical gaps for Chinstrap penguins 6 , our assessment has identified additional areas that should be considered priorities for future surveys. The South Orkney Islands contain several very large Chinstrap penguin colonies that should be resurveyed, particularly in the vicinity of Sandefjord Bay and Monroe Island as well as on Saddle Island. The South Sandwich Islands contain some of the largest and most poorly surveyed Chinstrap penguin colonies globally, though recent UAS imagery has been collected that may soon provide updated population counts for several of these sites. The Aitcho Islands group just west of Robert Island hosts a number of previously unreported Chinstrap penguin colonies that were first identified in Landsat satellite imagery and have since been confirmed using ground surveys and UAS imagery. Though none of these colonies is particularly large, there remain several Chinstrap penguin colonies which have never been surveyed. A complete exploration of this area is both logistically feasible and necessary. While we were recently able to census Cape Wallace on Low Island, we were unable to survey the similarly sized Cape Garry, which is likely to be one of the largest Chinstrap penguin colonies outside the South Sandwich Islands despite probable declines. Finally, as an aid to future surveys by satellite imagery, we have provided in the Supplementary Information shapefiles representing the bounding boxes for 139 colonies investigated using satellite imagery so these colonies can be easily relocated.
Methods
In this population assessment, we refer to a group of nesting Chinstrap penguins (with a minimum size of one breeding pair) as a "colony," and the location of snow- and ice-free terrain where such a colony may be located as a breeding "site." The distinction is important because sites are permanent and exist regardless of whether penguins colonize them. Most actual and potential breeding sites for Chinstrap penguins in Antarctica have been previously defined 42 ; because of natal philopatry, each Chinstrap penguin breeding population is considered separate from other colonies at adjacent sites, although weak population structure implies some movement over generations 40,41 . Where we have subdivided sites or grouped sites relative to previous accounts, we have noted those distinctions in the database accompanying this report in the Supplementary Information. Site names for this assessment were based on previous compilations; in some cases, names have been updated to reflect what is currently available through the Mapping Application for Penguin Populations and Projected Dynamics 42 . Colony locations have been cross-checked against available satellite imagery, and a large number of sites have had their locations updated to reflect the precise location of the Chinstrap penguin colony. Subareas for breeding populations, as well as Small-Scale Management Units within subarea 48.1, followed CCAMLR definitions. In our compilation of Chinstrap penguin abundance, we prioritized census data from direct methods (ground counting of individual occupied nests or chicks, or counts based on UAS imagery). Where no updated count was available, we used satellite imagery to estimate the area of guano coverage at the colony, similar to the approach used by Lynch and LaRue 43 for Adélie penguins, prioritizing high-resolution commercial imagery and using Planet or Landsat imagery only when no cloud-free high-resolution imagery was available. In six cases, we were able to confirm continued occupation of a site using satellite imagery but were unable to estimate abundance, either because the guano signature was too diffuse or because a portion of the colony was obscured by clouds. At about one-third of all sites, we were unable to update the abundance estimate for known colonies. Instead, for the purposes of estimating regional or global population totals, we used the most recently available estimate, noting that in most cases the older abundance estimates were from the 1980s. Insufficiently surveyed coastlines in Chinstrap penguin-dominated areas were visually searched in high-resolution satellite imagery; in this manner, we discovered a few new or previously unreported colonies.
On satellite images, Chinstrap penguin colonies were identified by the spatial and spectral characteristics of their guano 43 , and guano was delineated manually, informed by experience and by historic maps of guano extent. Different species of penguins may sometimes be differentiated in high-resolution imagery 43 , but identifying Chinstrap penguin colonies at mixed-species sites required knowledge of the site or multiple images in which species could be distinguished based on breeding phenology. For that reason, we did not attempt to estimate Chinstrap penguin abundance using satellite imagery at mixed-species sites. Our determination as to the current status of a colony at a site followed the logic tree illustrated in Supplemental Figure S1. For colonies that we were unable to update, we considered all large colonies (> 499 breeding pairs at last census) to be "presumed present", whereas smaller colonies were designated as "unknown".
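The size-based fallback rule is simple enough to express in code. The following Python sketch is ours, not the authors' software; the function name and inputs are hypothetical, and it captures only the final branch of the status logic, not the full decision tree of Supplemental Figure S1:

```python
# Hypothetical sketch of the final status rule for colonies lacking an
# updated census: large colonies (> 499 pairs at last census) are
# "presumed present"; smaller ones are "unknown".

def colony_status(last_count: int, has_updated_estimate: bool) -> str:
    """Assign a coarse occupancy status to a single colony."""
    if has_updated_estimate:
        return "current estimate available"
    # Large colonies are unlikely to disappear unnoticed.
    return "presumed present" if last_count > 499 else "unknown"

print(colony_status(12_000, False))  # presumed present
print(colony_status(250, False))     # unknown
```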
Unlike the case for Adélie penguins 43 , we do not yet have enough coincident satellite and ground count data to construct a rigorous statistical model for Chinstrap penguin density. In 10 instances, a ground count and satellite image coincided within a seven-year period, and we used these counts to estimate a very crude nesting density of 0.5 nests/m². We have used this nesting density to convert the area of guano in satellite imagery to an estimate of the breeding population. Recognizing the uncertainty in nesting density (beyond the uncertainty associated with guano area itself), we have designated all such population estimates as being in the lowest precision category (accuracy = 5, correct to the "nearest order of magnitude"). For these sites only, counts were rounded to the nearest hundred (< 1,000), nearest thousand (< 100,000), or nearest hundred thousand.
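As a concrete illustration of this conversion, the Python sketch below applies the 0.5 nests/m² density and the stated rounding thresholds; the function name and example areas are ours, not part of the published pipeline:

```python
# Sketch of the guano-area-to-abundance conversion. The 0.5 nests/m^2
# density and the rounding thresholds come from the text; everything
# else (names, example values) is illustrative.

def area_to_breeding_pairs(guano_area_m2: float, density: float = 0.5) -> int:
    """Convert mapped guano area (m^2) to a coarse breeding-pair estimate."""
    raw = guano_area_m2 * density
    if raw < 1_000:
        step = 100        # round to the nearest hundred
    elif raw < 100_000:
        step = 1_000      # round to the nearest thousand
    else:
        step = 100_000    # round to the nearest hundred thousand
    return int(round(raw / step) * step)

print(area_to_breeding_pairs(8_400))    # 4,200 pairs -> 4,000
print(area_to_breeding_pairs(950_000))  # 475,000 pairs -> 500,000
```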
In communicating the precision of each population estimate, we have followed the tradition established by previous accounts (e.g. 9,20,21,43 ) by binning count accuracy on a 5-point scale (see Supplementary Information). Recognizing that our uncertainty on current abundance reflects both the uncertainty of the original survey and the time elapsed since the most recent survey of a colony, we have downgraded the precision of counts older than 2015 by one step (e.g., from accuracy = 2 to accuracy = 3) for counts from 2005-2014, two steps for counts from 1995-2004, and three steps for counts from 1985-1994, noting that the accuracy code saturates at 5. Doing so allows us to most accurately communicate the uncertainty of our estimate of current abundance (which includes both the uncertainty in the most recent count as well as the time elapsed since that most recent count). To estimate the abundance in a region encompassing multiple sites, we simply summed the samples from the distribution representing our best estimate for each site to arrive at a distribution for their sum. This procedure allowed us to estimate uncertainty regarding the total population in any region, such as within each of the Small-Scale Management Units. When summing groups of sites, we have propagated the uncertainties inherent to each site's current abundance by sampling from a truncated (0, ∞) Gaussian distribution with the mean equal to the population estimate and with standard error equal to 2.5% (2σ = 5% for accuracy = 1), 5% (2σ = 10% for accuracy = 2), 12.5% (2σ = 25% for accuracy = 3), 25% (2σ = 50% for accuracy = 4), and 45% (2σ = 90% for accuracy = 5). We opted to use a truncated Gaussian distribution here, rather than the bias-corrected log-normal distribution used by Che-Castaldo et al. 44 , because the extreme skew of the latter distribution for accuracy 5 counts, which are common in the database, interfered with a sensible estimate of the difference between two populations, which was needed to identify which colonies changed significantly in abundance over the past several decades.
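A minimal Python sketch of this propagation scheme is given below, using scipy's truncated normal. The accuracy-to-standard-error table and the age-based downgrading follow the text; the site list is invented, and the treatment of counts older than 1985 (also downgraded three steps) is our assumption:

```python
import numpy as np
from scipy.stats import truncnorm

# Relative standard errors by accuracy code, as stated in the text.
REL_SE = {1: 0.025, 2: 0.05, 3: 0.125, 4: 0.25, 5: 0.45}

def downgrade(accuracy: int, year: int) -> int:
    """Penalize older counts by 1-3 accuracy steps, saturating at 5."""
    if year >= 2015:
        steps = 0
    elif year >= 2005:
        steps = 1
    elif year >= 1995:
        steps = 2
    else:                 # 1985-1994 per the text; earlier counts assumed same
        steps = 3
    return min(accuracy + steps, 5)

def site_samples(estimate, accuracy, year, n=10_000, rng=None):
    """Draw abundance samples from a (0, inf)-truncated Gaussian."""
    sigma = REL_SE[downgrade(accuracy, year)] * estimate
    a = (0.0 - estimate) / sigma      # standardized lower bound at zero
    return truncnorm.rvs(a, np.inf, loc=estimate, scale=sigma,
                         size=n, random_state=rng)

rng = np.random.default_rng(1)
sites = [(52_000, 2, 1986), (4_000, 5, 2019), (120_000, 3, 2011)]  # invented
total = sum(site_samples(e, acc, yr, rng=rng) for e, acc, yr in sites)
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"regional total: {total.mean():,.0f} ({lo:,.0f} to {hi:,.0f})")
```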
Change in abundance. At each site, we compared our current abundance estimate with a previous population estimate to determine the degree to which populations increased, decreased, or remained stable over time. In most cases, this benchmark population estimate was from the early to mid-1980s, which provided an approximately 40-year span over which to judge population change. To evaluate population change, we drew random samples from the distributions representing the historic count and the updated count, and differenced them to create a distribution of population changes. This procedure allowed us to propagate uncertainties from the population estimates to an estimate for population change. If more than 95% of this distribution indicated an increase (decrease), we designated that colony as having "increased" ("decreased"), whereas 87.5-95% of the distribution indicating an increase (decrease) yielded the designation "probable increase" ("probable decrease"). If no updated abundance estimate or no historic estimate was available, the population change was designated "unknown". This population assessment involved no contact with animals, though we report on previously collected but unpublished data obtained under a survey protocol approved by the Stony Brook University Institutional Animal Care and Use Committee (IRBNet ID 237420) and carried out under a permit granted by the National Science Foundation under the Antarctic Conservation Act (45 CFR §673 et seq.) with an initial environmental evaluation approved by the U.S. Environmental Protection Agency Office of Federal Activities.
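The change-classification rule described above can be sketched as follows; this Python fragment is ours, the threshold logic mirrors the text, and the two sample vectors stand in for the truncated-Gaussian draws described earlier:

```python
import numpy as np

def classify_change(hist, curr):
    """Classify population change from Monte Carlo abundance samples."""
    p_increase = np.mean(curr - hist > 0)
    for p, label in ((p_increase, "increase"), (1 - p_increase, "decrease")):
        if p > 0.95:
            return label + "d"            # "increased" / "decreased"
        if p >= 0.875:
            return "probable " + label
    return "no significant change"

# Invented example: historic ~10,000 pairs, current ~7,000 pairs.
rng = np.random.default_rng(0)
hist = rng.normal(10_000, 1_250, 10_000).clip(min=0.0)
curr = rng.normal(7_000, 1_750, 10_000).clip(min=0.0)
print(classify_change(hist, curr))        # probable decrease
```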
Data availability
All count data generated or analyzed for this study are included with this published article as a Supplementary Information file (1 file in .xlsx format). Supplementary files defining bounding boxes for all penguin sites are also provided (695 files in .dbf, .prj, .qpj, .shp, and .shx formats). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-11-12T09:09:29.173Z | 2020-11-10T00:00:00.000 | {
"year": 2020,
"sha1": "ba758d71e9f35a7542eb218611777408441adb83",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-76479-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfa5c470d259e39496fba0418c0901a91b1d8c6a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
57130753 | pes2o/s2orc | v3-fos-license | miR-18a reactivates the Epstein-Barr virus through defective DNA damage response and promotes genomic instability in EBV-associated lymphomas
Background The Epstein-Barr virus (EBV) is closely associated with several types of malignancies. EBV is normally present in the latent state in the peripheral blood B cell compartment. The EBV latent-to-lytic switch is required for virus spread and virus-induced carcinogenesis. Immunosuppression or DNA damage can induce the reactivation of EBV replication. EBV alone is rarely sufficient to cause cancer. In this study, we investigated the roles of host microRNAs and environmental factors, such as DNA-damaging agents, in EBV reactivation and its association with lymphomagenesis. Methods We first analyzed the publicly available microRNA array data containing 45 diffuse large B-cell lymphoma patients and 10 control lymph nodes or B cells with or without EBV infection. In situ hybridization for miR-18a and immunohistochemistry were performed to evaluate the correlation between the expression of miR-18a and the nuclear EBV protein EBNA1 in lymphoid neoplasms. The proliferative effects of miR-18a were investigated in EBV-positive or EBV-negative lymphoid neoplasm cell lines. EBV viral load was measured by quantitative real-time EBV PCR and a FISH assay. Genomic instability was evaluated by array CGH. Results In this study, we analyzed the publicly available microRNA array data and observed that the expression of the miR-17-92 cluster was associated with EBV status. In situ hybridization for miR-18a, which is a member of the miR-17-92 cluster, showed a significant upregulation in lymphoma samples. miR-18a, which shares a homologous sequence with EBV-encoded BART5, promoted the proliferation of lymphoma cells in an EBV status-dependent manner. The DNA-damaging agent UV or hypoxic stress induced EBV activation, and miR-18a contributed to DNA damage-induced EBV reactivation. In contrast to the promoting effect of ATM on lytic EBV reactivation under normoxia, ATM inhibited lytic EBV gene expression and decreased the EBV viral load in the presence of hypoxia-induced DNA damage. miR-18a reactivated EBV through inhibiting the ATM-mediated DNA damage response (DDR) and caused genomic instability. Conclusions Taken together, these results indicate that DNA-damaging agents and host microRNAs play roles in EBV reactivation. Our study supports the interplay between the host cell DDR, environmental genotoxic stress and EBV. Electronic supplementary material The online version of this article (10.1186/s12885-018-5205-9) contains supplementary material, which is available to authorized users.
Background
The Epstein-Barr virus (EBV) infects nearly all humans. Although EBV infection is asymptomatic in most individuals, EBV is closely associated with several types of malignancies, including lymphomas, nasopharyngeal carcinoma (NPC), gastric adenocarcinoma and gastric lymphoepithelioma-like carcinoma [1][2][3]. Due to its tropism for B lymphocytes, the most common forms of EBV-associated lymphoproliferative disorders are B-cell lymphomas: Hodgkin lymphomas (HL) and non-Hodgkin lymphomas (NHLs), including Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) [4]. EBV also plays a complex and multifaceted role in T/NK cell lymphomas [5].
The virus is present in two states, an active lytic state and a latent state, and the virus can only spread during the active state [6]. During primary EBV infection, a portion of individuals, particularly adolescents, develop infectious mononucleosis (IM). EBV infection in vitro can transform resting B cells into immortalized lymphoblastoid cell lines (LCLs). Under normal conditions, EBV is present in the latent state in the peripheral blood B cell compartment. EBV can be reactivated under certain circumstances. Reactivation of viral replication plays an important role in the development of EBV-associated malignancy. Oral hairy leukoplakia is unequivocally due to lytic EBV infection [7]. Beyond hematological malignancies, the lytic form of EBV infection has also been observed in malignant breast epithelial cells in certain cases of breast cancer [8]. EBV reactivation-induced cell proliferation and migration caused the relapse of NPC [9]. Elevated levels of EBV DNA are observed in EBV-positive lymphoma patients with active disease but not in EBV-positive patients in remission or those with EBV-negative tumors, indicating that the EBV viral load at lymphoma diagnosis is an indicator of disease activity and of biological characteristics associated with a negative prognosis [10].
Environmental factors serve to promote EBV-driven tumors, which are primarily of B cell origin but also of epithelial and NK or T cell origin. Immunosuppression or DNA-damaging agents, including chemotherapy, certain HDAC inhibitors, and radiation, can induce the reactivation of EBV replication [11]. EBV infection, in turn, has been implicated in DNA damage. Phosphorylated H2AX, a reporter of DNA damage, was increased in EBV-carrying cells in the absence of exogenous stimuli [12].
Deregulation of the cellular Myc proto-oncogene is the hallmark of all types of Burkitt lymphoma. The Myc-mediated oncogenic signaling pathway regulates miRNAs, especially the miR-17-92 cluster. miR-17-92 is transcribed as a polycistron, which is subsequently processed into 7 mature miRNAs: miR-17-3p and -5p, miR-18a, miR-19a, miR-19b, miR-20a and miR-92a. The miR-17-92 cluster is frequently amplified or overexpressed in lymphoma [13]. Compared with the other miR-17-92 members, which show distinct functional interplays with Myc, the role of miR-18a remains poorly defined. In our previous study, miR-18a was shown to promote the malignant progression of nasopharyngeal carcinoma, which is closely related to EBV infection [14]. miR-18a shares a homologous sequence with EBV-encoded BART5. In this study, we aimed to elucidate the roles of miR-18a in the tumorigenesis of lymphoma and determine whether they are mediated by EBV.
Cell culture and patient samples
The human Burkitt lymphoma cell lines P3HR-1 and Raji (EBV-positive) and BJAB (EBV-negative) were cultured in RPMI-1640 (HyClone, Life Sciences, Logan, UT, USA) supplemented with penicillin G (100 U/mL), streptomycin (100 mg/mL) and 10% fetal calf serum. Cells were grown at 37°C in a humidified atmosphere of 5% CO2 and were routinely sub-cultured. Cells were obtained from the Cell Bank of the Cancer Research Institute, Central South University, and authenticated by STR profiling. Patients diagnosed with lymphoma (n = 100) were included in this study. Twenty non-cancerous individuals were enrolled as a control group. All cases enrolled in this study were identified at Xiangya Hospital, Central South University, China. The clinical and laboratory characteristics of the cases are summarized in Table 1. The patients were informed of the sample collection and signed informed consent forms. The collection and use of samples were approved by the ethical review committees of Xiangya Hospital, Central South University.
Analysis of public datasets
The microRNA array datasets (GSE42906, GSE36926) were collected from the National Center for Biotechnology Information's Gene Expression Omnibus (GEO, NCBI). Using the GEO2R tool (http://www.ncbi.nlm.nih.gov/geo/geo2r/), we analyzed changes in the miRNA expression spectrum. Expression of the miR-17-92 cluster and EBV-encoded microRNAs was analyzed and visualized with the Multiple Experiment Viewer (MeV).
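The GEO2R/MeV workflow is interactive, but the underlying comparison amounts to per-miRNA two-group testing plus unsupervised clustering. The Python sketch below is a generic stand-in run on simulated data, not the actual GEO2R pipeline or the study's matrices:

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
n_mirnas, n_tumor, n_ctrl = 200, 45, 10          # mirrors the cohort sizes
expr = rng.normal(0.0, 1.0, (n_mirnas, n_tumor + n_ctrl))
expr[0, :n_tumor] += 1.5                         # simulate one upregulated miRNA

# Per-miRNA t-test between tumor and control columns.
t, p = stats.ttest_ind(expr[:, :n_tumor], expr[:, n_tumor:], axis=1)
print("smallest p-value:", p.min())

# Unsupervised hierarchical clustering of samples, as a heatmap tool would do.
top = np.argsort(expr.var(axis=1))[-50:]         # 50 most variable miRNAs
order = leaves_list(linkage(expr[top].T, method="average"))
print("clustered sample order:", order[:10], "...")
```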
Immunohistochemistry and in situ hybridization
Lymphoma samples and normal lymph node tissues were fixed and embedded in paraffin wax. Next, 4- to 6-μm-thick paraffin sections were dewaxed and rehydrated. The detailed procedures for the immunohistochemistry for EBNA1 and the in situ hybridization for miR-18a have been described in our previous publication [14]. Tissue sections were incubated with the primary antibody against EBNA1 (Novus Biologicals, Littleton, Colorado, USA) at 4°C overnight in a humidified chamber. Finally, after dehydration and mounting, the sections were observed and imaged under a microscope (Olympus BX-51, Japan). Goat serum and PBS were used instead of the primary antibody as a negative control and a blank control, respectively. A semi-quantitative scoring system for IHC was used in which both the staining intensity and the positive areas were recorded.
Cell cycle analysis
Cells were fixed with 70% ethanol overnight and were incubated in PBS containing 0.5 mg/ml RNase A (Takara, Japan) at 37°C for 1 h. Cells were treated with propidium iodide at the final concentration of 5 μg/ml (Beyotime Biotechnology, Hangzhou, China) and subjected to flow cytometry analysis.
Quantitative real-time PCR
Total RNA was isolated from cells using Trizol® reagent (Invitrogen, Carlsbad, CA, USA). cDNA was synthesized from total RNA using the RevertAid First Strand cDNA Synthesis Kit (Fermentas, Waltham, MA, USA). To measure the expression of miR-18a, a microRNA qPCR quantitation kit was purchased from Takara, Japan. Real-time PCR was performed using the Bio-Rad IQ™ 5 Multicolor Real-Time PCR Detection System (Bio-Rad, Berkeley, CA, USA). The data were analyzed using iQ5 software. Relative EBV-related gene expression was determined and normalized to the expression of GAPDH or β-actin (for hypoxic conditions) using the 2^-ΔΔCt method. miR-18a expression was calculated and normalized to U6 using the 2^-ΔΔCt method. The data are representative of the means of three experiments. Student's t-test was applied for comparisons; p < 0.05 indicated a significant difference. Quantitative real-time EBV PCR was performed on samples collected from the media of cells using the Epstein-Barr Virus DNA Quantitative Fluorescence Diagnostic Kit (Sansure Biotech, Hunan, China). Viral DNA was extracted, and PCR was performed according to the manufacturer's instructions. The qPCR protocol was as follows: 94°C for 5 min, followed by 45 cycles of 94°C for 15 s and 57°C for 30 s. The EBV copy number was calculated according to the standard curve.
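The two quantification steps named above, the 2^-ΔΔCt method and standard-curve copy-number calculation, can be made explicit in a short sketch. The function names are ours, and the slope and intercept below are illustrative placeholders, not the kit's calibration values:

```python
def ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method, normalizing the target
    gene to a reference gene (e.g., GAPDH or U6) and to a control sample."""
    delta_ct = ct_target - ct_ref
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(delta_ct - delta_ct_ctrl)

def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Standard curve Ct = slope * log10(copies) + intercept, inverted."""
    return 10 ** ((ct - intercept) / slope)

print(ddct(24.1, 18.0, 26.3, 18.1))        # ~4.3-fold upregulation
print(f"{copies_from_ct(28.5):.0f} EBV copies")
```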
EBV purification and fluorescence in situ hybridization
An EBV-specific probe was constructed from EBV obtained from the productive EBV B-cell lineage P3HR-1, as described in detail previously [15,16]. Briefly, the culture media were collected from P3HR-1 cells and subjected to several freeze-thaw cycles. After centrifugation at 3,000 rpm for 20 min, the supernatant was collected and filtered through a 0.45-μm membrane filter. EBV viral DNA was extracted using the Qiagen QIAamp Virus MinElute Spin Kit (Qiagen, Valencia, CA, USA). PCR was performed to amplify a 3,267-bp EBV DNA probe. The DNA probes were labeled with biotin-dUTP by the random-primed labeling method (Roche, Mannheim, Germany). The slides were incubated with a hybridization mixture containing 50 ng of probe and 5 μg of salmon sperm DNA at 37°C for 14 h. Following hybridization, the slides were incubated with avidin-FITC for 30 min at room temperature, followed by incubation with anti-FITC. After washing, drying and mounting, the slides were examined under fluorescence microscopy (Olympus FSX100).
Western blotting
The procedures for western blotting have been described in our previous publication [14]. Briefly, approximately 50 μg of protein was separated by SDS-PAGE and transferred to PVDF membranes (Millipore, MA, USA). The membranes were incubated with the primary antibody overnight at 4°C, followed by a brief wash with PBST and incubation with the secondary antibody for 1 h at 37°C. An anti-GAPDH antibody was purchased from Millipore (Billerica, MA, USA) and used as a loading control. Finally, ECL solution (Millipore, MA, USA) was added to cover the blot surface, the signals were captured, and the intensity of the bands was quantified using the Bio-Rad ChemiDoc XRS+ system (Bio-Rad, CA, USA). The following antibodies were used: ATM and EBNA1 (Novus Biologicals, USA), γ-H2AX (Abcam, USA), β-actin (Santa Cruz Biotechnology, CA, USA) and GAPDH (Cell Signaling Technology, USA).
In vitro cell proliferation assessment
The proliferation of lymphoma cells in suspension was measured using the CCK-8 assay (Beyotime Biotechnology, Hangzhou, China). The cell suspension was seeded into a 96-well plate. After treatment, 10 μl of CCK-8 solution was added to each well, and the plate was incubated for an additional 4 h. Next, the absorbance was measured at 450 nm using a microplate reader. The experiment was repeated three times, and six parallel samples were measured each time.
Hypoxia and UV treatment
The cells were incubated in a hypoxic chamber with 0.1% O2, 95% N2 and 5% CO2 (Don Whitley Scientific, H35 Hypoxystation) for 48-72 h. For UV treatment, cells were exposed to UV light (254 nm) for 10 min to cause DNA damage.
Immunofluorescence analysis
Cells were collected, washed and subsequently fixed in fixation buffer (4% paraformaldehyde in PBS). Next, the cells were permeabilized for 30 min with 0.25% Triton X-100 and blocked for 60 min with 5% BSA. Cells were incubated with an anti-γ-H2AX antibody at a 1:800 dilution at 4°C overnight, washed in TBST 3 times, and incubated with the secondary antibody, FITC-conjugated IgG (BD Biosciences, USA), at a dilution of 1:300 for 1 h. Next, cells were washed in TBST at room temperature, dried and mounted with mounting medium containing the nuclear stain DAPI (Vector Laboratories). The cells were examined by fluorescence microscopy (Olympus FSX100).
Luciferase assay
Reporter constructs were generated in which the 3'UTR of ATM, either wild-type or with the miR-18a binding site mutated, was cloned downstream of the luciferase open reading frame. Luciferase activity was measured using the Dual-Glo luciferase assay system (Promega), with firefly luciferase activity normalized to the corresponding Renilla luciferase activity. 293 cells were co-transfected with the luciferase constructs and miR-18a mimics or a mimics negative control (mimics NC); the Renilla construct was co-transfected as an internal control. Data are presented as the mean ± SD of two experiments with six replicates (Student's t-test, *p < 0.05; **p < 0.01).
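As a worked illustration of this readout, the sketch below normalizes firefly signal well-by-well to the co-transfected Renilla control and compares mimics with the NC by t-test; all numbers are invented:

```python
import numpy as np
from scipy import stats

firefly_nc  = np.array([9800., 10150., 9900., 10300., 9700., 10050.])
renilla_nc  = np.array([5000., 5100., 4950., 5200., 4900., 5050.])
firefly_mim = np.array([5200., 4900., 5100., 4800., 5300., 5000.])
renilla_mim = np.array([5050., 4950., 5000., 5150., 5100., 4900.])

ratio_nc = firefly_nc / renilla_nc        # normalized reporter activity
ratio_mim = firefly_mim / renilla_mim
t, p = stats.ttest_ind(ratio_mim, ratio_nc)
print(f"relative activity: {ratio_mim.mean() / ratio_nc.mean():.2f}, p = {p:.2g}")
```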
Alkaline comet assay
The alkaline comet assay was performed according to the protocol of the alkaline comet assay kit (Genmed Scientifics, USA). The detailed protocol has been previously published [17]. Briefly, cells were pelleted and resuspended in 1 mL of PBS. Next, 10 μL of resuspended cells was mixed with an equal volume of prewarmed low-melting-point agarose. The agarose-cell mixture was placed on a fully frosted slide precoated with agarose and spread gently with a coverslip. After 10 min at 4°C, the slides were immersed in precooled (4°C) lysis solution for 80 min in a dark chamber. After soaking in electrophoresis buffer for 30 min, the slides were subjected to electrophoresis (25 V) for 30 min. Finally, the cells were stained with 10 μg/ml propidium iodide (Beyotime Biotechnology, ST511), and individual cells were viewed using an Olympus FSX100 fluorescence microscope.
Array CGH
The detailed protocol has been previously published [17]. CGH was performed using Agilent SurePrint G3 Human Catalog 2x400K CGH Microarrays (Agilent, CA, USA). Genomic DNA of P3HR-1 or Raji cells transfected with miR-18a or mimics NC was extracted with the Universal Genomic DNA Kit (CW2298S, Cwbio, Beijing, China). The visualization and image analyses were performed using Feature Extraction (Agilent, Santa Clara, CA, USA).
Statistical analysis
Survival data were analyzed by Kaplan-Meier analysis. The log-rank test was used to determine the difference among survival curves according to miR-18a and EBNA1 expression. Pearson's χ2-test was used to assess the association between miR-18a and EBNA1. The real-time PCR data are represented as the means ± standard deviation. Statistical significance was determined using a two-tailed Student's t-test. Statistical analyses were performed with SPSS 11.0 and GraphPad Prism. For cell cycle analysis, the statistical significance of fold change over the miRNA negative control was determined using Student's t-test. All p values were two-sided, and a p value of less than 0.05 was considered significant.
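For readers reproducing this analysis outside SPSS/GraphPad, the same Kaplan-Meier and log-rank comparison can be run with the Python lifelines package; the follow-up times and groups below are simulated, not the study's patient data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_high = rng.exponential(24.0, 50)   # months to event, miR-18a-high (simulated)
t_low = rng.exponential(48.0, 50)    # miR-18a-low (simulated)
e_high = t_high < 60                 # event observed within 60-month follow-up
e_low = t_low < 60
t_high, t_low = t_high.clip(max=60), t_low.clip(max=60)

kmf = KaplanMeierFitter()
kmf.fit(t_high, e_high, label="miR-18a high")
print("median survival (months):", kmf.median_survival_time_)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.3g}")
```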
Increased expression of miR-18a in lymphoma patients is associated with EBV infection and shorter survival
We first investigated the expression of miR-18a and the miR-17-92 cluster in lymphoma samples and its association with EBV infection. Publicly available microRNA array data from 45 diffuse large B-cell lymphoma patients and 10 control lymph nodes or B cells with or without EBV infection were compared (GSE42906, GSE36926). Using the GEO2R tool, we found that the relative expression level of miR-18a was higher in B-cell lymphoma patients than in control lymph nodes (Fig. 1a). Unsupervised hierarchical clustering of microRNA expression showed that EBV-infected B cells had upregulated miR-17-92 cluster expression and were clustered together (Fig. 1b), indicating that the expression of the miR-17-92 cluster was correlated with EBV infection status. miR-18a, which shares sequences with EBV-miRNA-BART5, was upregulated in EBV-infected B cells; however, EBV-miRNA-BART5 did not show upregulated expression in EBV-positive B cells. miR-155, which can be altered by EBV infection, was notably upregulated. miR-29a/b/c, which share sequences with EBV miRNA BART1-3p, were downregulated. The expression levels of miR-18a and the nuclear EBV protein EBNA1 in 100 lymphoid neoplasm tissues (59 BL or DLBCL, 34 NK/T-cell lymphomas and 7 HL) and 20 non-cancerous control tissues were determined by in situ hybridization and immunohistochemistry. The expression levels of miR-18a and EBNA-1 across the entire cohort are described in Table 1. Compared with non-neoplastic lymphatic tissues, the expression of miR-18a in tumor biopsies from lymphomas was upregulated (p = 0.0127, Fig. 1c, d). Higher levels of miR-18a were correlated with advanced stage and extranodal lymph node metastasis (p < 0.01) but not with age or gender (Table 1). The expression of miR-18a was positively correlated with the expression of EBNA1 (Fig. 1e, Pearson r: 0.7745, 95% confidence interval: 0.6813-0.8431, R squared: 0.5539, P < 0.0001). Higher expression levels of miR-18a and EBNA1 were correlated with a shorter overall survival (p = 0.002, p = 0.011, respectively, Fig. 1f, g).
miR-18a promotes cell proliferation in EBV-positive lymphoma cells and increases the EBV viral load
We next investigated whether miR-18a affected lymphoma cell growth. Transfection of miR-18a into EBV-positive P3HR-1 and Raji cells and into EBV-infected BJAB cells promoted cell proliferation; however, when EBV-negative BJAB cells were transfected with miR-18a, decreased proliferation was observed (Fig. 2a), indicating that miR-18a promotes lymphoma cell growth in an EBV-dependent manner. Cell cycle analysis indicated that miR-18a promoted the entry of G1 cells into S-phase in EBV-infected BJAB and EBV-positive Raji cells, and a miR-18a inhibitor decreased the number of cells in the S-phase of the cell cycle in Raji cells; in EBV-negative BJAB cells, however, no effect of miR-18a on the cell cycle was observed, indicating that the cell growth effects induced by miR-18a were associated with EBV status (Fig. 2b, c). In EBV-infected BJAB cells, the miR-18a inhibitor showed no obvious effect on the cell cycle, suggesting that the miR-18a inhibitor may inhibit the reactivation of EBV but does not reduce the burden of primary EBV infection (Fig. 2c).
Transfection of miR-18a mimics into P3HR-1 and Raji cells resulted in a higher EBV viral load and increased EBV gene expression, including that of BZLF1, which controls entry into the EBV lytic replication phase (Fig. 3a-c). EBNA-1, which is always expressed in EBV-carrying proliferating cells, was overexpressed after the transfection of miR-18a (Fig. 3b). In situ hybridization generated more fluorescent spots in miR-18a-transfected P3HR-1, Raji and EBV-infected BJAB cells. EBV genome staining localized to the nucleus and was recognized as both latent punctate foci of episomal genomes and bright lytic staining (Fig. 3d, Additional file 1: Figure S1). There were no positively stained spots in EBV-negative BJAB cells (Additional file 1: Figure S1).
miR-18a is a DNA damage sensor and mediates EBV reactivation after DNA damage
Studies have revealed that DNA-damaging agents, including chemotherapy, hypoxia and radiation, can reactivate EBV [11,18,19]. The expression of γ-H2AX, the biomarker for DNA damage, was increased in EBV-positive Raji cells after treatment with UV or hypoxia (Fig. 4a, b). We observed that UV exposure and hypoxia treatment both caused an increase in the EBV viral load in EBV-positive lymphoma cells (Fig. 4c). EBV-related gene expression was dramatically increased after treatment with UV or hypoxia, suggesting that the increased EBV replication was mediated by the DNA damage response induced by UV or hypoxia (Fig. 4d).
To determine whether DNA-damaging agents reactivate the EBV genome through miR-18a, we measured the expression of miR-18a upon treatment with UV and hypoxia. After UV exposure and hypoxic treatment, Raji and P3HR-1 cells showed a significant increase in miR-18a, suggesting that miR-18a is a DNA damage sensor (Fig. 5a). We also found that EBV infection can increase the expression of miR-18a, which may occur because of the DNA damage arising from EBV infection (Additional file 1: Figure S2). Inhibition of miR-18a reduced the EBV copy number and EBV-related gene expression in the EBV-positive lymphoma cell line Raji after exposure to hypoxia and UV (Fig. 5b, c), indicating that UV exposure or hypoxic treatment caused EBV reactivation through miR-18a expression.
miR-18a reactivated EBV by targeting ATM-mediated DDR
The components of the DNA damage signaling pathway, including ATM, ATR and DNA-PK, play key roles in defending against neoplastic transformation. The DNA-PK inhibitor AZD 8055 can increase EBV-related gene expression (Fig. 6a). Because EBV reactivation after DNA damage caused by UV exposure or hypoxic treatment occurs through miR-18a, we further investigated the roles of miR-18a in DNA damage. Previous research has shown that ATM is a potential target of miR-18a [20]. A schematic diagram of the miR-18a binding sites in the 3'UTR of ATM is shown in Fig. 6b. Transfection with miR-18a mimics led to a significant decrease in luciferase activity compared with the miRNA control. By contrast, the luciferase activity of the mutant 3'UTR construct remained unchanged in miR-18a-overexpressing cells (Fig. 6b). Western blot analysis showed that the overexpression of miR-18a suppressed endogenous ATM expression in EBV-infected BJAB cells (Fig. 6b). However, the inhibitory effect of miR-18a on ATM was not obvious in EBV-negative BJAB cells (Additional file 1: Figure S3). Under hypoxic conditions, transfection with ATM inhibited the expression of EBV-related genes and reversed the promoting effect of miR-18a on EBV-related gene expression (Fig. 6c, d). Conversely, under normoxia, transfection with ATM increased the expression of the EBV lytic genes BMRF1 and BRLF1 and decreased the expression of the EBV lytic gene BZLF1 and the latent genes EBER1/2 and LMP1 (Fig. 6e). Transfection with ATM inhibited the promoting effect of miR-18a on the EBV viral load (Fig. 6f). These results indicated that miR-18a reactivated EBV through inhibiting ATM expression.
Fig. 3 miR-18a increases the EBV viral load. a Transfection of miR-18a mimics in P3HR-1 and Raji cells resulted in a higher EBV viral load. Quantitative real-time EBV PCR was performed on samples collected from the culture media of cells. Viral DNA was extracted, and PCR was carried out according to the kit instructions. The EBV copy number was calculated according to a standard curve. b Expression of EBNA1 after transfection of miR-18a mimics and inhibitor in Raji cells, as measured by western blotting. c Relative EBV gene expression after transfection of miR-18a. d Visualization of episomal and integrated EBV DNA by fluorescence in situ hybridization.
Fig. 4 DNA damage reactivates EBV. a Expression of γ-H2AX as measured by immunofluorescence after treatment with hypoxia or UV. EBV-positive Raji cells were stained with anti-γ-H2AX and DAPI. Expression of γ-H2AX is indicated by green foci; DAPI was used to stain the cell nuclei. The merged images present DAPI and FITC as blue and green, respectively. b Expression of γ-H2AX as measured by western blotting after treatment with hypoxia or UV in Raji cells. β-actin served as a loading control. c EBV copy number as measured by real-time PCR after treatment with hypoxia or UV. d Relative expression of EBV genes as measured by real-time PCR after treatment with hypoxia or UV.
miR-18a promotes genomic instability through the reactivation of EBV
Because miR-18a targets ATM and impairs the DNA damage response, we further investigated whether miR-18a affected genomic stability. When transfected with miR-18a, EBV-positive Raji cells showed increased expression of γ-H2AX (Fig. 7a, b). The comet assay showed long comet tails after transfection of miR-18a in P3HR-1 and Raji cells (Fig. 7c, d). However, transfection of miR-18a in EBV-negative BJAB cells did not produce obvious overexpression of γ-H2AX or long comet tails, indicating that miR-18a caused DNA damage through EBV reactivation (Fig. 7b, d).
DNA damage that is not repaired leads to genomic instability and cancer. In an array-CGH assay, DNA copy number changes were evaluated by comparing test DNA isolated from Raji and P3HR-1 cells transfected with miR-18a against normal reference DNA from control lymphoma cells. A graphical presentation of the regions of gain (blue) and loss (red) is shown in Fig. 7e. The abnormalities in miR-18a-transfected cells included gains on chromosomes 16 and 20, as well as losses on chromosomes 1, 2, 3 and X (Fig. 7e). Thus, miR-18a transfection inhibited ATM-mediated DNA repair and reactivated EBV, leading to genomic instability.
Discussion
In this study, we performed a comprehensive analysis of the expression of the miR-17-92 cluster and EBV status in lymphomas. Overexpression of miR-18a was observed in lymphoma tissue and EBV-infected B cells. We demonstrated that miR-18a can reactivate EBV through inhibiting the DNA damage response and can induce genomic instability through EBV reactivation.
Fig. 5 DNA-damaging agents reactivate EBV through miR-18a. a Relative expression of miR-18a after treatment with hypoxia or UV. Real-time PCR was used to measure the mRNA expression of EBV-related genes. β-actin served as an internal control. b miR-18a inhibitor decreased the EBV copy number induced by hypoxia or UV in Raji cells. c miR-18a inhibitor decreased EBV gene expression induced by hypoxia or UV. Relative expression of EBV genes as measured by real-time PCR; upper: hypoxia treatment; lower: UV treatment.
Fig. 6 miR-18a reactivates EBV by targeting ATM. a The DNA-PK inhibitor AZD 8055 increased EBV gene expression in Raji cells. Real-time PCR was used to measure the mRNA expression of EBV-related genes. GAPDH served as an internal control. b ATM is a potential target of miR-18a. I: Schematic representation of the 3'UTR of ATM. The red bar shows the predicted miR-18a binding sites in the 3'UTR of ATM. The sequence of mature miR-18a aligned to the target sites is shown. II: Luciferase activity assay. Reporter constructs were generated in which the 3'UTR of ATM, wild-type or miR-18a binding-site mutant, was cloned downstream of the luciferase open reading frame. 293 cells were cotransfected with the luciferase construct and miR-18a mimics or control miRNA. The Renilla construct was also cotransfected as an internal control. Luciferase activity was normalized to Renilla luciferase activity. The data are presented as the means ± SD of two experiments with six replicates (Student's t-test, *p < 0.05; **p < 0.01; ***p < 0.001). III: miR-18a decreased the expression of ATM. Western blotting of ATM was performed 48 h after the transfection of miR-18a mimics and miR-18a inhibitor. GAPDH was used as an internal control. c ATM inhibited the expression of EBV-related genes under hypoxic conditions (*p < 0.05; **p < 0.01). d ATM reversed the promoting effect of miR-18a on EBV-related gene expression (*p < 0.05; **p < 0.01). e EBV gene expression after transfection of ATM under normoxia (*p < 0.05; **p < 0.01). f Transfection with ATM inhibited the promoting effect of miR-18a on the EBV viral load.
EBV establishes a persistent, asymptomatic, latent infection in a restricted pool of resting B cells [6,21]. In immunocompetent adults, the EBV viral load is stable for years. It is assumed that the latently infected memory B cells undergo periodic reactivation to produce infectious virus [22]. EBV reactivation begins with expression of the immediate-early transcription factor BZLF1, which initiates a cascade of expression of early and late genes [23]. The roles of EBV in the lytic or latent state in the development of lymphoma are not fully understood. B cell transformation after EBV infection in vitro is believed to reflect a latent infection. However, the in vitro model is not consistent with the viral gene expression programs observed in most EBV-positive B-cell tumors [24]. Although the EBV latent gene-encoded products activate survival pathways to promote the progression of cancer and cellular transformation, an increased level of lytically infected cells may increase the likelihood of EBV-associated malignancies [25,26]. EBV viral load measurement has been suggested to predict and monitor EBV-associated tumors, including nasopharyngeal carcinoma, post-transplant lymphoproliferative disorder, and Hodgkin's disease [27]. AIDS patients and organ transplant patients, who have a high risk of developing EBV-associated lymphomas, also have a high level of lytic as well as latent EBV infection [26,28].
A high level of EBV particles would be predicted to increase the number of latently infected B cells. The differentiation of EBV-infected memory B cells into plasma cells and acute stress represent two distinct pathways of EBV reactivation [23]. Although this treatment is controversial, antiviral drugs that block virus replication and kill proliferating infected cells have been administered to EBV-positive lymphoma patients [29]. Our data showed that miR-18a caused a burst of lytic gene expression and concomitant transcription of EBV latent genes. miR-18a promoted the proliferation of EBV-positive lymphoma cells but not EBV-negative lymphoma cells, suggesting that miR-18a-induced EBV reactivation mediated the increase in lymphoma cell growth.
Although EBV infection is related to B cell malignancy, the slow course of cancer induction and the ubiquity of infection worldwide suggest that EBV alone is rarely sufficient to cause lymphoma. EBV infection, endogenous and exogenous DNA damage, and host repair mechanisms act synergistically in the tumorigenesis of lymphomas. Exposure to DNA-damaging agents, such as certain chemotherapies, ultraviolet light or hypoxia, can induce the reactivation of viral replication, causing fold increases in the EBV viral load [6,21]. The replication of EBV, in turn, increases replicative stress, similar to oncogene activation, leading to the induction of the DNA damage response (DDR). The DDR emerges as a barrier to tumor progression in response to cellular DNA replicative stress [30]. The host DDR, which is orchestrated by the ATM and ATR kinases, senses EBV-induced oncogenic stress, causes cell cycle arrest or apoptosis, and blocks the long-term outgrowth of most infected cells [19,31]. Inhibition of the DDR kinase ATM markedly increases the transformation efficiency of primary B cells [19]. ATM kinase expression is reduced in EBV-associated nasopharyngeal carcinomas [32]. It was reported that the DNA repair sensor ATM causes EBV lytic reactivation in Burkitt lymphoma cells in p53-dependent and p53-independent ways [19]. The EBV latent-to-lytic switch is mediated by the viral proteins BZLF1, BRLF1 and BRRF1 [11]. The ability of the EBV BRLF1 and BRRF1 proteins to induce lytic reactivation in EBV-infected AGS cells is ATM dependent. However, overexpression of BZLF1 induces lytic gene expression in the presence or absence of ATM activity [19]. ATM enhances BZLF1 promoter activity in the context of the intact EBV genome [11]. In our study, under normoxia, ATM induced the overexpression of BRLF1 and BMRF1 but decreased the expression of BZLF1 in EBV-positive Raji cells. Under DNA damage stress, ATM decreased lytic and latent EBV gene expression and reversed the induction of the EBV viral load by miR-18a. We speculate that ATM decreases EBV gene expression, including that of lytic and latent genes, in the context of genomic stress induced by EBV or DNA-damaging agents. As a host mechanism, miR-18a acts as a sensor of DNA damage stress induced by hypoxia, UV and EBV infection, inhibiting the DDR and promoting genomic instability. miR-18a inhibited the ATM-mediated DDR and increased viral proliferation, enhancing the growth of lymphomas and indicating that the oncogenic effect of EBV depends on the host microenvironment. DNA damage that is not repaired leads to genomic instability and cancer. It is now generally accepted that oncoproteins encoded by tumor viruses can drive genomic instability and initiate tumorigenesis. EBV infection promotes genomic instability and telomere dysfunction [12]. Repeated malarial infection helps EBV to cause lymphomas. However, it was reported that antibodies raised to fight malaria cause DNA damage that can lead to Burkitt's lymphoma because of the ligation of inappropriate segments of chromosomes during antibody affinity maturation [33]. Thus, host factors contribute to the genomic instability induced by viral replication.
Fig. 7 miR-18a induces DNA damage. a Expression of γ-H2AX as measured by immunofluorescence in Raji cells; EBV-positive Raji cells were stained with anti-γ-H2AX and DAPI. Expression of γ-H2AX is indicated by green foci; DAPI was used to stain the cell nuclei. The merged images present DAPI and FITC as blue and green, respectively. b Expression of γ-H2AX as measured by western blotting in EBV-positive or -negative cells. c Detection of DNA damage after transfection of miR-18a. The comet assay was applied: cells were electrophoresed in agarose gels on a coverslip and stained with propidium iodide, and labeled DNA was visualized under a fluorescence microscope. d Detection of DNA damage after transfection of miR-18a in EBV-negative and -positive BJAB cells. Magnification, ×100. e Graphic presentation of all chromosomal changes. Cells transfected with miR-18a or the mimics negative control were analyzed by comparative genomic hybridization array (array CGH). The regions of DNA gain (blue) and loss (red) are shown.
There are three EBV-encoded microRNAs that share sequences with human miRNAs: EBV miRNA BART1-3p with human miRNA miR-29a/b/c; BART5-5p with miR-18a/b; and BART22-3p with miR-520d-5p and miR-524-5p [14,34]. EBV-encoded microRNAs have been reported to regulate human and viral transcripts. EBV can also alter host miRNA expression [34]. BART miRNAs are present in all EBV-infected cells, with much higher expression in epithelial cells than in infected B lymphocytes. BART miRNAs act as repressors of EBV lytic replication and probably maintain a balance between the virus and its host [35]. In our study, EBV enhanced the expression of miR-18a (Additional file 1: Figure S2). In contrast to the roles of BART miRNAs, the host miR-18a promoted EBV replication. Most EBV miRNAs co-target mRNAs with host miRNAs, in particular with members of the miR-17-92 miRNA cluster [34]. Our study found that ATM is targeted by miR-18a in EBV-infected BJAB cells but not in EBV-negative BJAB cells, consistent with EBV miRNAs co-targeting mRNAs with host miRNAs. Although they share seed sequence identity, miR-18a and BART5-5p showed different potential to regulate the expression of LMP1: BART5-5p can repress the expression of LMP1, whereas miR-18a-5p cannot [34].
Conclusions
Taken together, this study's results demonstrate the interplay of host factors, environmental factors and EBV infection status. Environmental genotoxic stresses, such as UV and hypoxia, caused EBV reactivation and overexpression of miR-18a. miR-18a targeted ATM and inhibited the host DDR, thereby causing EBV reactivation and genomic instability and contributing to the development of lymphoproliferative diseases. Inhibition of miR-18a may be a novel approach to prevent the reactivation of EBV.
Additional file
Additional file 1: Figure S1. Visualization of episomal and integrated EBV DNA by fluorescence in situ hybridization in EBV-positive or -negative BJAB cells. Figure S2. EBV infection increased the expression of miR-18a. Real-time PCR was used to measure the mRNA expression of EBV-related genes. Figure S3 | 2019-01-17T05:55:09.308Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "25e8e95b88680a2843ad45978f271b18ca90b362",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-018-5205-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68460254abe4c710973c783ac8a1c09e1af7ed02",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
52309333 | pes2o/s2orc | v3-fos-license | Moderate hypothermia inhibits microglial activation after traumatic brain injury by modulating autophagy/apoptosis and the MyD88-dependent TLR4 signaling pathway
Background Complex mechanisms participate in microglial activation after a traumatic brain injury (TBI). TBI can induce autophagy and apoptosis in neurons and glial cells, and moderate hypothermia plays a protective role in the acute phase of TBI. In the present study, we evaluated the effect of TBI and moderate hypothermia on microglial activation and investigated the possible roles of autophagy/apoptosis and toll-like receptor 4 (TLR4). Methods The TBI model was induced with a fluid percussion TBI device. Moderate hypothermia was achieved under general anesthesia by partial immersion in a water bath for 4 h. All rats were killed 24 h after the TBI. Results Our results showed downregulation of microglial activation and autophagy, but upregulation of microglial apoptosis, upon post-TBI hypothermia treatment. The expression of TLR4 and of its downstream adaptor myeloid differentiation primary response 88 (MyD88) was attenuated. Moderate hypothermia reduced neural cell death post-TBI. Conclusions Moderate hypothermia can reduce the number of activated microglia by inhibiting autophagy and promoting apoptosis, probably through negative modulation between autophagy and apoptosis. Moderate hypothermia may attenuate the pro-inflammatory function of microglia by inhibiting the MyD88-dependent TLR4 signaling pathway. Electronic supplementary material The online version of this article (10.1186/s12974-018-1315-1) contains supplementary material, which is available to authorized users.
Background
In developing countries, traumatic brain injury (TBI) is a major cause of morbidity and mortality. The pathological process of TBI is quite complicated and is commonly divided into two phases, primary and secondary injury. The activation of resident microglia plays a key pro-inflammatory role in the acute secondary phase post-TBI [1][2][3].
Autophagy is a highly conserved intracellular process that degrades abnormal accumulations of toxic substances, proteins, and damaged organelles so that the proteins and other substances can be recycled efficiently. After TBI, autophagy can both protect cells and damage them [4][5][6]. In our previous study, we found that TBI could induce autophagy and apoptosis in neurons and glial cells, whereas hypothermia plays a cytoprotective role in the acute phase of TBI. Furthermore, we demonstrated that post-TBI hypothermia could upregulate the autophagy pathway, modulate apoptosis, and reduce cell death in neurons and glial cells; a possible mechanism for this is the negative regulatory effect of autophagy on apoptosis [7,8]. We also found autophagy induction in microglia after TBI [9], but the regulatory effect of moderate hypothermia on microglial activation remains unknown.
In this study, we evaluated the effect of hypothermia on the microglial activation post-TBI and preliminarily explored the possible mechanisms, with regard to the relationship between autophagy, apoptosis, and the MyD88-dependent TLR4 pathway.
Animals and experimental design
All animal procedures in this study were approved by the Animal Care and Experimental Committee of the School of Medicine of Shanghai Jiao Tong University. Adult male Sprague-Dawley rats (280-300 g) were used. Rats were randomly divided into four groups: sham injury with normothermia group (SNG; 37°C; n = 60), sham injury with hypothermia group (SHG; 32°C; n = 60), TBI with normothermia group (TNG; 37°C; n = 60), and TBI with hypothermia group (THG; 32°C; n = 60). Rats were housed in individual cages in a temperature- and humidity-controlled animal facility with a 12-h light/dark cycle. Rats were housed in the animal facility for at least 7 days before surgery and were given free access to food and water during this period.
Surgical preparation
Rats were anesthetized through intraperitoneal injection (i.p.) of 10% chloral hydrate (3.3 mL/kg) and were then mounted in a stereotaxic frame. An incision was made along the midline of the scalp, and a 4.8-mm diameter craniectomy was performed on the left parietal bone (midway between the bregma and the lambda). A rigid plastic injury tube (a modified Luer-loc needle hub, with an inside diameter of 2.6 mm) was secured over the exposed, intact dura by using cyanoacrylate adhesive. Two skull screws (2.1 mm in diameter, 6.0 mm in length) were placed in burr holes 1 mm rostral to the bregma and 1 mm caudal to the lambda. The injury tube was secured to the skull with dental cement. Bone wax was used to cover the open needle hub connector after the dental cement had hardened (5 min). The scalp was closed with sutures. The animals were returned to their cages for recovery.
Lateral fluid percussion brain injury
A fluid percussion device (VCU Biomedical Engineering, Richmond, VA) was used to cause TBI, as described in detail previously [10,14]. The rats were subjected to TBI 24 h after the surgical procedure to minimize possible confounding factors of the surgery. In brief, the device consisted of a Plexiglas cylindrical reservoir filled with 37°C isotonic saline. One end of the reservoir had a rubber-covered Plexiglas piston mounted on O-rings, and the opposite end had a pressure transducer housing with a 2.6-mm inside diameter male needle hub opening. On the day of the TBI, rats were anesthetized with 10% chloral hydrate (3.3 mL/kg, i.p.) and endotracheally intubated for mechanical ventilation. The suture was opened, and the bone wax was removed. The rats were disconnected from the ventilator, and the injury tube was connected to the fluid percussion cylinder. A fluid pressure pulse was then applied for 10 ms directly onto the exposed dura to produce a moderate TBI (2.1-2.2 atm). The injury was delivered within 10 s after disconnection from the ventilator. The resulting pressure pulse was measured in atmospheres by using an extracranial transducer (Statham PA 85-100; Gould, Oxnard, CA) and recorded on a storage oscilloscope (Tektronix 5111; Tektronix, Beaverton, OR). After the initial observation, the rats were ventilated with a 2:1 nitrous oxide/oxygen mixture, and the rectal and temporal muscle temperatures were recorded. The needle hub, screws, and dental cement were then removed from the skull, and the scalp was sutured closed. The rats were extubated as soon as spontaneous breathing was observed. The SNG and SHG rats were subjected to the same anesthetic and surgical procedures as the rats in the other groups but without being subjected to injury.
Manipulation of temperature
The frontal cortex brain temperature was monitored with a digital electronic thermometer (model DP 80; Omega Engineering, Stamford, CT) and a 0.15-mm diameter temperature probe (model HYP-033-1-T-G-60-SMP-M; Omega Engineering) inserted 4.0 mm ventral to the surface of the skull. The probe was removed before the fluid percussion injury and replaced immediately after the injury. Rectal temperatures were measured with an electronic thermometer with an analog display (model 43 TE; YSI, Yellow Springs, OH) and a temperature probe (series 400; YSI). A brain temperature of 32°C was achieved by immersing the body of the anesthetized rat in ice-cold water. The skin and fur of the animals were protected from direct contact with the water by placing each animal in a plastic bag (head exposed) before immersion. Animals were removed from the water bath when the brain temperature had dropped to within 2°C of the target temperature. It took approximately 30 min to reach the target brain temperatures, which were maintained for 4 h under general anesthesia at room temperature by intermittent application of ice packs as needed. The animals were then gradually rewarmed to normothermia (37°C) over a 90-min period, because rapid rewarming could affect the secondary injury processes.
Hematoxylin and eosin (HE) staining
Rats were subjected to deep anesthesia with 10% chloral hydrate. At 24 h after TBI, rats were perfused transcardially with 4% paraformaldehyde. The brains were removed, further fixed at 4°C overnight, and then immersed in 30% sucrose/phosphate-buffered saline (PBS) at 4°C overnight. Specimens were mounted in optimal cutting temperature compound (OCT). Serial sections were obtained using a cryostat, adhered to polylysine-coated glass slides, and stained with toluidine blue for 30 min; two to three drops of glacial acetic acid were then added. Once the nuclei and granulation were clearly visible, the sections were mounted in Permount or Histoclad. Images of the injured cortex and ipsilateral hippocampus were captured at ×100 using a microscope (Nikon Labophot; Nikon USA, Melville, NY). There were six rats in each of the four groups.
Immunohistochemical staining
The 4-mm-thick, formalin-fixed, OCT-embedded sections were subjected to immunofluorescence analysis to determine the immunoreactivity of ionized calcium-binding adapter molecule 1 (Iba-1) and cleaved caspase 3. Endogenous peroxidase was blocked by treatment with 3% hydrogen peroxide for 5 min, followed by a brief rinse in distilled water and a 15-min wash in PBS. Sections were cooled at room temperature for 20 min and rinsed in PBS. Nonspecific protein binding was blocked by incubation in 5% horse serum for 40 min. Sections were incubated with primary antibodies (goat anti-rat Iba-1, diluted 1:100, Abcam; rabbit anti-rat cleaved caspase 3, diluted 1:100, CST) for 1 h at room temperature and then subjected to a 15-min wash in PBS. Sections were incubated with Alexa Fluor 488 donkey anti-goat secondary antibodies for Iba-1 and with Alexa Fluor 555 donkey anti-rabbit secondary antibodies for cleaved caspase 3, microtubule-associated protein 1 light chain 3 (LC3), and Beclin-1 (1:1000 dilution, Invitrogen) for 1 h at room temperature. For negative controls, sections were incubated in the absence of a primary antibody. At least 10 randomly selected microscopic fields (×630 magnification; Zeiss LSM880; Zeiss, Germany) were used for counting the Iba-1-positive and cleaved caspase 3-positive cells. There were six rats in each of the four groups.
Immunofluorescence microscopy for cell localization
The primary antibodies for immunofluorescence were goat anti-rat Iba-1 (1:100 dilution, Abcam), rabbit anti-rat cleaved caspase 3 (1:100 dilution, CST), rabbit anti-rat LC3 (1:100 dilution, CST), and rabbit anti-rat Beclin-1 (1:100 dilution, Proteintech). The sections were incubated with the primary antibodies in PBS with 1% bovine serum albumin for 30-40 min at room temperature; this was followed by washing and application of the secondary antibodies. The secondary antibodies were Alexa Fluor 488 donkey anti-goat for Iba-1 (1:1000 dilution, Invitrogen) and Alexa Fluor 555 donkey anti-rabbit for cleaved caspase 3, LC3, and Beclin-1 (1:1000 dilution, Invitrogen). We performed double labeling of Iba-1 with cleaved caspase 3, LC3, or Beclin-1 to detect their expression in microglia. After a final wash, the sections were covered with cover slips with anti-fading mounting medium, sealed with clear nail polish, and stored at 4°C for preservation. At least 10 randomly selected microscope fields were observed in each group, and the number of positive cells was statistically analyzed (×630 magnification; Zeiss LSM880; Zeiss, Germany). There were six rats in each of the four groups.
Western blot analysis
At 24 h after TBI, the injured cortex and ipsilateral hippocampus were harvested. The frozen brain samples were mechanically lysed in 20 mM tris(hydroxymethyl)aminomethane (Tris; pH 7.6) containing 0.2% sodium dodecyl sulfate (SDS), 1% Triton X-100, 1% deoxycholate, 1 mM phenylmethylsulfonyl fluoride, and 0.11 IU/mL aprotinin (all purchased from Sigma-Aldrich, Inc.). The lysates were centrifuged at 12,000 × g for 20 min at 4°C. The protein concentration was estimated by the Bradford method. The samples (60 μg/lane) were separated by 12% SDS polyacrylamide gel electrophoresis and electro-transferred onto a polyvinylidene difluoride membrane (Bio-Rad Lab, Hercules, CA). The membrane was blocked with 5% skim milk for 2 h at room temperature and incubated with primary antibodies against TLR4 (1:1000 dilution, Proteintech), MyD88 (1:1000 dilution, Proteintech), cleaved caspase 3 (1:100 dilution, CST), and Iba-1 (1:1000 dilution, Abcam). β-Actin (1:10,000 dilution, Sigma-Aldrich) was used as the loading control. After the membrane had been washed six times in a mixture of Tris-buffered saline and Tween-20 (TBST) for 10 min each time, it was incubated with the appropriate horseradish peroxidase-conjugated secondary antibody (1:10,000 dilution in TBST) for 2 h. The blotted protein bands were visualized with enhanced chemiluminescence Western blot detection reagents (Amersham, Arlington Heights, IL) and exposed to X-ray film. The developed films were digitized using an Epson Perfection 2480 scanner (Seiko Corp, Nagano, Japan). The results were quantified by Quantity One software (Bio-Rad). The band density values were calculated as the ratios of TLR4, MyD88, Iba-1, and cleaved caspase 3 to β-actin. There were six rats in each of the four groups.
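The band-density normalization just described is simple per-lane arithmetic. The sketch below illustrates it with hypothetical densitometry values; the numbers, array names, and group labels are ours and not taken from the study.

```python
# Minimal sketch of the band-density normalization described above: each
# target band is divided by the beta-actin band from the same lane.
# All densitometry values below are hypothetical, for illustration only.

target_density = [1820.0, 2450.0, 3980.0, 2610.0]  # e.g., TLR4 bands, one lane per group
actin_density = [2100.0, 2050.0, 2080.0, 2120.0]   # beta-actin loading control, same lanes

ratios = [t / a for t, a in zip(target_density, actin_density)]
for group, r in zip(["SNG", "SHG", "TNG", "THG"], ratios):
    print(f"{group}: target/beta-actin = {r:.2f}")
```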
Quantitative real-time polymerase chain reaction (qRT-PCR)
Total RNA was isolated from the injured cortex and the ipsilateral hippocampus by using Trizol (Invitrogen) according to the manufacturer's instructions. cDNA was synthesized by using a reverse transcription kit (TAKARA). PCR was performed by using SYBR Advantage Premix (TAKARA). The primers for TLR4 were (forward) 5′-TGT TCC TTT CCT GCC TGA GAC-3′ and (reverse) 5′-GGT TCT TGG TTG AAT AAG GGA TGT C-3′. The primers for MyD88 were (forward) 5′-GGT TCT GGA CCC GTC TTG C-3′ and (reverse) 5′-AGA ATC AGG CTC CAA GTC AGC-3′. Relative mRNA expression was calculated with the 2^(−ΔΔCt) method; SNG values were taken as 100%. β-Actin was used as the control. All experiments were done in triplicate. There were six rats in each of the four groups.
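Since the 2^(−ΔΔCt) calculation is the quantitative core of this section, a minimal sketch follows. The Ct values are hypothetical; β-actin is the reference gene and SNG the calibrator group, as stated above.

```python
# Minimal sketch of the 2^-ddCt relative-quantification method.
# Ct values below are hypothetical; beta-actin is the reference gene
# and SNG is the calibrator group, as in the text.

def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression = 2^-((Ct_t - Ct_ref) - (Ct_t,cal - Ct_ref,cal))."""
    d_ct = ct_target - ct_ref              # delta Ct, treated sample
    d_ct_cal = ct_target_cal - ct_ref_cal  # delta Ct, calibrator (SNG)
    return 2.0 ** -(d_ct - d_ct_cal)

# Example: TLR4 in a TNG sample relative to SNG (hypothetical Ct values).
print(fold_change(ct_target=24.1, ct_ref=17.3, ct_target_cal=26.8, ct_ref_cal=17.5))
```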
Enzyme-linked immunosorbent assay (ELISA) analysis of TNF-α and interleukin-1β (IL-1β)
At 24 h after TBI, rats were subjected to deep anesthesia by 10% chloral hydrate. The brains were quickly removed by dissection and kept over ice in physiologic salt solution. The injured cortex and ipsilateral hippocampus specimens were separated, cut into small pieces, dispersed by aspiration into a pipette, and suspended in 1 mL of physiologic salt solution in a test tube. Samples were kept over wet ice for 20 min before use. The homogenates were centrifuged at 7500 rpm for 20 min. The supernatants were used for measuring TNF-α and IL-1β concentrations with commercial ELISA kits (Shanghai Enzyme-linked Biotechnology Co., Ltd.) by following the manufacturer's instructions. There were six rats in each of the four groups.
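The text defers to the manufacturer's instructions for converting absorbance readings into cytokine concentrations. One common approach, shown below as a hedged sketch, is to fit a four-parameter logistic (4PL) standard curve and invert it; the calibrator concentrations and optical densities are entirely hypothetical, and real kits specify their own calibrators and curve model.

```python
# Hedged sketch: fitting a four-parameter logistic (4PL) ELISA standard curve
# and back-calculating sample concentrations. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/mL
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.30])       # absorbance

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 100.0, 3.0], maxfev=10000)
a, b, c, d = popt

def od_to_conc(od):
    # Inverse of the 4PL curve: back-calculate concentration from absorbance.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.70))  # hypothetical sample OD -> pg/mL
```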
Statistical analysis
All data are presented as the mean ± the standard deviation (SD). SPSS for Windows version 23.0 (SPSS, Inc., Chicago, IL) was used for statistical analysis of the data. All data were subjected to one-way analysis of variance. Post hoc comparisons were made with Fisher's least significant difference test. Statistical significance was inferred at P < 0.05.
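Because the analysis pipeline (one-way ANOVA followed by Fisher's LSD post hoc test) is standard, a minimal sketch is given below, with SciPy standing in for SPSS. The group data are hypothetical (n = 6 per group, matching the study design).

```python
# Sketch of the statistics described above: one-way ANOVA followed by
# Fisher's LSD post hoc comparisons. Data are hypothetical (n = 6 per group).
import numpy as np
from scipy import stats

groups = {
    "SNG": np.array([1.0, 1.1, 0.9, 1.2, 1.0, 0.8]),
    "SHG": np.array([1.1, 1.0, 1.2, 0.9, 1.1, 1.0]),
    "TNG": np.array([2.1, 2.4, 1.9, 2.2, 2.6, 2.0]),
    "THG": np.array([1.5, 1.4, 1.7, 1.3, 1.6, 1.5]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Fisher's LSD: pairwise t-tests using the pooled within-group variance (MSE).
all_vals = np.concatenate(list(groups.values()))
k, n_total = len(groups), all_vals.size
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)
names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        g1, g2 = groups[names[i]], groups[names[j]]
        t = (g1.mean() - g2.mean()) / np.sqrt(mse * (1 / g1.size + 1 / g2.size))
        p = 2 * stats.t.sf(abs(t), df=n_total - k)
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, P = {p:.4f}")
```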
Histologic examination of the injured cortex and the ipsilateral hippocampus
The brains of SNG and SHG rats showed normal neuronal structure. In TNG and THG rats, visible contusions and hemorrhaging were observed at the gray/white matter interface (Fig. 1).
Discussion
Our results suggest that microglial activation post-TBI could be suppressed by moderate hypothermia and that the negative regulation of autophagy and apoptosis may play a role in this process. In addition, hypothermia may also act by inhibiting the MyD88-dependent TLR4 pathway. Thus, moderate hypothermia exerts anti-inflammatory and neuroprotective effects.
Fig. 5 Immunofluorescence analysis of LC3 (red) and Iba-1 (green) from the injured cortex and ipsilateral hippocampus. a Immunohistochemical staining of LC3 and Iba-1 from the injured cortex and ipsilateral hippocampus. Arrows indicate co-localization of LC3 and Iba-1 (magnification, ×630). b Number of LC3-positive microglia in the injured cortex and ipsilateral hippocampus. At least 10 randomly selected microscopic fields were used for counting. Data in the bar graphs represent mean ± SD. ****P < 0.001
In this study, we found that moderate TBI induced by fluid percussion caused cortical and ipsilateral hippocampal cell death, microglial activation, and microglial autophagy 24 h after TBI. We also found that moderate hypothermia could reduce the amount of hippocampal and cortical cell death. Furthermore, we found that microglial autophagy was suppressed, and microglial apoptosis was increased, after moderate hypothermia 24 h post-TBI. In addition, the attenuation of microglial activation was observed through the downregulated expression of Iba-1, which is an important biomarker for microglial activation. These results suggest that moderate hypothermia may reduce the number of activated microglia by inhibiting autophagy and promoting apoptosis.
Apoptotic cell death is one of the most common pathologic changes after TBI [15][16][17][18]. The activation of cleaved caspase 3, which is an executioner caspase, presents an irreversible point in the complex cascade of apoptosis induction. Activated caspase-3 has been detected in neurons, astrocytes, and oligodendrocytes post-TBI in previous studies [19][20][21][22][23]. In this study, the number of microglia labeled by cleaved caspase-3 significantly increased after moderate hypothermia post-TBI, which indicates that hypothermia accelerated apoptosis of the microglia.
Autophagy is a highly regulated process involving the bulk degradation of cytoplasmic macromolecules and organelles in mammalian cells through the lysosomal system. Beclin-1, an autophagic biomarker, is a novel Bcl-2-homology-3 domain-only protein that participates in autophagy regulation with several co-actors [24]. LC3, a mammalian orthologue of yeast ATG8, is synthesized as pro-LC3, which is cleaved by ATG4 protease and converted into LC3-I. Once autophagy is activated, LC3-I is conjugated to phosphatidylethanolamine (lipidated) to form LC3-II. The amounts of LC3-II and of p62 degraded by autophagy provide an estimate of the autophagy activity [25]. Diskin et al. first demonstrated, using a closed-head injury model in mice in 2005, that the Beclin-1 level increased near the site of an injury [26]. Viscomi et al. found that autophagy could serve as a protective mechanism for maintaining cellular homeostasis after TBI, which could be enhanced by rapamycin through inactivation of the mammalian target of rapamycin [27]. However, the role of autophagy is still controversial. In previous studies, we demonstrated that moderate hypothermia post-TBI could activate the neuronal and glial autophagy pathway, which could negatively modulate apoptosis and reduce cell death [8,14]. On the other hand, this negative regulatory effect of autophagy on apoptosis may play a different role in microglia and may be associated with different cell types. In this study, the use of LC3 and Beclin-1 as autophagic biomarkers showed that microglial autophagy could be inhibited by moderate hypothermia.
Fig. 6 Immunofluorescence analysis of Beclin-1 (red) and Iba-1 (green) from the injured cortex and ipsilateral hippocampus. a Immunohistochemical staining of Beclin-1 and Iba-1 from the injured cortex and ipsilateral hippocampus. Arrows indicate co-localization of Beclin-1 and Iba-1 (magnification, ×630). b Number of Beclin-1-positive microglia in the injured cortex and ipsilateral hippocampus. At least 10 randomly selected microscopic fields were used for counting. Data in the bar graphs represent mean ± SD. ****P < 0.001
From the above discussion, it can be stated that moderate hypothermia reduces the number of activated microglia. Moderate hypothermia inhibits autophagy and promotes apoptosis, which may be the possible mechanism. However, how moderate hypothermia specifically affects the autophagy and apoptosis of microglia post-TBI is not very clear. This issue needs further research.
In addition to reducing the number of activated microglia, moderate hypothermia may also directly affect the microglial pro-inflammatory function. In the current study, we found that a moderate TBI induced by fluid percussion led to high expression of the inflammatory cytokines TNF-α and IL-1β. We also found that moderate hypothermia post-TBI could inhibit the expression of TLR4 and MyD88 and reduce the levels of TNF-α and IL-1β after 24 h. This suggests that activated microglia might initiate neuroinflammatory responses post-TBI through the MyD88-dependent TLR4 pathway, and that moderate hypothermia may reduce the release of inflammatory cytokines from activated microglia by inhibiting this pathway.
Toll-like receptors play a role in microglial activation, and toll-like receptor-associated pathways mediate the release of pro-inflammatory cytokines [28]. TLR4 is the most abundant toll-like receptor expressed in microglia [10,11]. TLR4 can activate IRAK and TRAF6 via the MyD88-dependent pathway. TRAF6 induces the activation of transforming growth factor-β-activated kinase 1, which leads to the activation of the mitogen-activated protein kinase and IκB kinase (IKK) cascades [29,30]. When activated by these signals, IKK phosphorylates two serine residues located in an IκB regulatory domain, and the IκB proteins are ubiquitinated and degraded by proteasomes. The NF-κB complex is then freed to enter the nucleus, where it can induce the expression of pro-inflammatory cytokines such as IL-6, TNF-α, and IL-12 [12,13,31,32]. The current study demonstrated that moderate hypothermia might decrease the levels of inflammatory cytokines by inhibiting the expression of the relevant proteins in the MyD88-dependent TLR4 pathway.
Iba-1, also known as allograft inflammatory factor 1, is a 17-kDa EF-hand protein that is specifically expressed in macrophages/microglia and is upregulated during the activation of these cells; its expression is also upregulated in microglia following nerve injury [33]. There is constitutive expression of Iba-1 in microglial cells, but Iba-1 has also been regarded in several articles as an important biomarker to detect the activation of microglia [34][35][36]. After referring to these papers, we chose to detect the expression level of Iba-1 for microglial activation by immunohistochemical staining and Western blotting. However, Iba-1 is not an ideal biomarker to distinguish microglial cells from macrophages. Other biomarkers, such as CD11b and CD68, are not of high specificity to microglia. The key to studying microglial activation after TBI is to figure out the immune subtypes of microglia. Therefore, we are planning to define the subtypes of macrophages/microglia post-TBI with a series of biomarkers in animal models in the future.
Fig. 7 Immunofluorescence analysis of cleaved caspase 3 (red) and Iba-1 (green) from the injured cortex and ipsilateral hippocampus. a Immunohistochemical staining of cleaved caspase 3 and Iba-1 from the injured cortex and ipsilateral hippocampus. Arrows indicate co-localization of cleaved caspase 3 and Iba-1 (magnification, ×630). b Number of cleaved caspase 3-positive microglia in the injured cortex and ipsilateral hippocampus. At least 10 randomly selected microscopic fields were used for counting. Data in the bar graphs represent mean ± SD. ****P < 0.001
Fig. 8 Western blotting of TLR4 and MyD88 from the injured cortex and ipsilateral hippocampus. Data in the bar graphs represent mean ± SD. β-Actin was used as the loading control. *P < 0.05; **P < 0.01; ***P < 0.005; ****P < 0.001
Fig. 9 qRT-PCR of TLR4 and MyD88 from the injured cortex and ipsilateral hippocampus. Changes in the expression of TLR4 (a) and MyD88 (b) are shown as n-fold. Data in the bar graphs represent mean ± SD. ***P < 0.005; ****P < 0.001. There were six rats in each of the four groups
In previous studies, we investigated the role of autophagy and apoptosis in neurons and glial cells post-TBI, as well as the modulation of both processes by moderate hypothermia. The term "glial cells" includes both macroglia and microglia. Macroglia, including oligodendrocytes, astrocytes, and ependymal cells, are derived from ectodermal tissues, whereas microglia are derived from the earliest wave of mononuclear cells that originate in the yolk sac blood islands early in development. Microglia are resident immune cells in the central nervous system. In our opinion, microglia are distinct from macroglia, although they can be classified as glial cells. The current study is based on this view and extends our previous research. The effects of moderate hypothermia post-TBI are quite different between microglia and macroglia, which is our core concern. This study initially explored the changes in microglial autophagy and apoptosis after TBI and the regulation of these processes by hypothermia, and it only draws preliminary conclusions.
The specific mechanism of moderate hypothermia on the TLR4 pathway of microglia post-TBI remains unclear. Logically, it would be necessary to inhibit or block the TLR4/MyD88 pathway to study the effects of hypothermia on microglial activation. But this pathway is quite important, and we are concerned that inhibition or knockout of this pathway may affect the survival rate of the experimental animals after TBI. Therefore, we are planning to establish an in vitro model of a cell stretch injury. We can then perform more efficient gene editing on microglia and study the specific mechanism of moderate hypothermia on the TLR4 pathway.
The negative correlation between autophagy and apoptosis has been widely observed in our past studies and in this study. The autophagy inhibitor 3-methyladenine (3-MA) was used after moderate hypothermia post-TBI in a previous report [9], and a concomitant increase in apoptosis was observed. Therefore, we preliminarily speculated that there is a negative modulation between autophagy and apoptosis. We intend to establish in vitro or in vivo models of TBI in the future and apply moderate hypothermia to microglia with and without the addition of 3-MA, to further investigate the effect of hypothermia on the pathological processes of autophagy and apoptosis.
"year": 2018,
"sha1": "c8f5500e7bcb943bd72e7cb7efebe19db7ad43b0",
"oa_license": "CCBY",
"oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/s12974-018-1315-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8f5500e7bcb943bd72e7cb7efebe19db7ad43b0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Unusual architecture of the p7 channel from hepatitis C virus
The hepatitis C virus (HCV) has developed a small membrane protein, p7, which remarkably can self-assemble into a large channel complex that selectively conducts cations [1-4]. We were curious as to what structural solution the viroporin has adopted to afford selective cation conduction, because p7 has no homology with any of the known prokaryotic or eukaryotic channel proteins. The p7 activity can be inhibited by amantadine and rimantadine [2,5], which also happen to be potent blockers of the influenza M2 channel [6] and licensed drugs against influenza infections [7]. The adamantane derivatives were subjects of HCV clinical trials [8], but large variation in drug efficacy among the various HCV genotypes has been difficult to explain without detailed molecular structures. Here, we determined the structures of this HCV viroporin as well as its drug-binding site using the latest nuclear magnetic resonance (NMR) technologies. The structure exhibits an unusual mode of hexameric assembly, where the individual p7 monomers, i, not only interact with their immediate neighbors, but also reach farther to associate with the i+2 and i+3 monomers, forming a sophisticated, funnel-like architecture. The structure also alludes to a mechanism of cation selection: an asparagine/histidine ring that constricts the narrow end of the funnel serves as a broad cation selectivity filter, while an arginine/lysine ring that defines the wide end of the funnel may selectively allow cation diffusion into the channel. Our functional investigation using whole-cell channel recording showed that these residues are indeed critical for channel activity. NMR measurements of the channel-drug complex revealed six equivalent hydrophobic pockets between the peripheral and pore-forming helices to which amantadine or rimantadine binds, and compound binding specifically to this position may allosterically inhibit cation conduction by preventing the channel from opening. Our data provide a molecular explanation for p7-mediated cation conductance and its inhibition by adamantane derivatives.
Many viruses have developed integral membrane proteins to transport ions and other molecules across the membrane barrier to aid various steps of viral entry and maturation [9,10]. These membrane structures, known as viroporins, usually adopt minimalist architectures that are significantly different from those of bacterial or eukaryotic ion channels. Therefore, understanding the structural basis of how viroporins function broadens our knowledge of channels and transporters while generating new opportunities for therapeutic intervention.
The viroporin formed by the HCV p7 protein has been sought after as a potential anti-HCV drug target [5,11]. p7 is a 63-residue membrane protein that oligomerizes to form ion channels with cation selectivity for Ca2+ over K+ and Na+ [2,3,12,13], and more recent studies also reported p7-mediated intracellular H+ conductance [14]. The p7 channel is required for viral replication [15]; it has been shown to facilitate efficient assembly and release of infectious virions [16,17], though the precise mechanism of these functions remains unclear. The channel activity can be inhibited by adamantane and long alkyl-chain iminosugar derivatives and hexamethylene amiloride in vitro, with varying reported efficacies [2,3,12,13]. In addition to ion conduction, p7 has been shown to specifically interact with the non-structural HCV protein NS2, suggesting that its channel activity could be regulated [18,19].
There is not yet a detailed structure of the p7 channel, though a number of pioneering NMR studies showed that the p7 monomer has three helical segments: two in the N-terminal half of the sequence and one near the C-terminus [12,20]. A single-particle electron microscopy (EM) study obtained a 16 Å resolution electron density map of the p7 oligomer using the random conical tilting approach [4]. The map shows that the p7 channel is a 42 kDa hexamer and adopts a flower-like shape that does not resemble any of the known ion channel structures in the database.
How does the small p7 polypeptide assemble into what appears to be a complex channel structure? Has the viroporin adopted novel structural elements for cation selectivity and channel gating? Amantadine or rimantadine blocks the influenza M2 channel by binding to the small pore formed by four transmembrane helices [21][22][23], but the pore of the p7 hexamer is expected to be much bigger, and it is thus unclear how these small molecules would fit. We sought to address these important questions by determining detailed structures of the p7 hexamer and its drug-binding site.
We systematically tested p7 amino acid sequences from various HCV genotypes and found that the sequence from genotype 5a (EUH1480 strain) generated samples that were sufficiently soluble for structure determination (Supplementary Fig. 1). This p7 construct, designated here as p7(5a), could be efficiently reconstituted in dodecylphosphocholine (DPC) micelles at near physiological pH and generated high quality NMR spectra (Supplementary Fig. 2). Negative-stain EM of the DPC-reconstituted p7(5a) in NMR buffer showed hexameric, flower-shaped particles that are similar to those in the electron micrographs of the p7 (JFH-1 strain, genotype 2a) hexamer in dihexanoyl-phosphatidylcholine (DHPC) micelles used earlier for single-particle reconstruction [4] (Supplementary Fig. 3). Moreover, isothermal titration calorimetry and NMR chemical shift perturbation analyses of the p7(5a)-rimantadine interaction showed that the drug binds specifically to the reconstituted protein with a binding constant (Kd) of 50-100 μM at 3 mM detergent concentration (Supplementary Figs. 4 & 5). The above results together indicate that the p7(5a) polypeptides reconstituted in DPC micelles form structurally relevant hexamers.
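A Kd in the 50-100 μM range is typically extracted from such titrations by fitting a one-site binding isotherm. The sketch below fits chemical-shift perturbations with the standard quadratic ligand-depletion model; the titration points are hypothetical, and this is one common analysis route rather than necessarily the authors' exact procedure.

```python
# Hedged sketch: estimating Kd from NMR chemical-shift perturbation (CSP)
# titration data with a one-site binding isotherm (ligand-depletion form).
# All titration points below are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

P = 0.8  # total protein (monomer) concentration in mM, as in the NMR samples

def csp_isotherm(L, kd, dmax):
    """Fraction bound from the quadratic 1:1 binding equation, scaled by dmax."""
    b = P + L + kd
    return dmax * (b - np.sqrt(b * b - 4.0 * P * L)) / (2.0 * P)

lig = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 4.0, 8.0])              # rimantadine, mM
csp = np.array([0.0, 0.010, 0.024, 0.041, 0.060, 0.072, 0.079])  # ppm

(kd, dmax), _ = curve_fit(csp_isotherm, lig, csp, p0=[0.1, 0.08])
print(f"Kd ~ {kd * 1000:.0f} uM, CSP_max ~ {dmax:.3f} ppm")
```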
Structure determination of the p7(5a) hexamer by NMR employed the approach taken earlier for oligomeric membrane proteins [24][25][26], which involves 1) determination of the local structures of the monomers and 2) assembly of the oligomer with intermonomer distance restraints and orientation constraints. The NMR-derived restraints define an ensemble of structures with a backbone r.m.s. deviation of 0.74 Å (Fig. 1a). Each monomer consists of an N-terminal helix (H1) from residues 5-16, a middle helical segment (H2), with a kink at Gly34, from residues 20-41, and a C-terminal helix (H3) from residues 48-58. These secondary structures are consistent with earlier NMR studies of p7 monomers in DHPC detergent and organic solvent [12,20]. There are essentially no intramonomer contacts between the helical segments (Fig. 1a). The monomers are intertwined to form a tightly packed channel, where H1 and H2 form the channel interior and H3 is lipid-facing and packs against H2 of the i+2 and H1 of the i+3 monomer (Fig. 1a&b). The intermonomer association between H3 and H2 appears to be stabilized by interactions involving conserved residues such as Trp30, Tyr42, and Leu52, and the contacts between H3 and H1 are mostly between the alanine-rich region of H1 (residues 10-15) and Ala61 and Ala63 of H3 (Fig. 1c). The overall structure of the p7(5a) hexamer has a flower-like shape that agrees with the EM map (EM database ID:1661), fitting the map with a correlation coefficient of 0.94 (Fig. 1d).
The channel cavity has a funnel profile that resembles a champagne flute and is largely hydrophilic (Fig. 2a). The H2 helices form the wide cylindrical region (internal diameter, I.D. ~12 Å) by packing with each other at large angles (angle between adjacent helices ~ -47°), and the H1 helices assemble at smaller packing angles (~ -34°) to form the narrow conical region of the funnel (smallest I.D. at 6.8 Å). Residues 17-19 constitute the flexible joint between H1 and H2; their NMR resonances are significantly broader than those of other regions of the protein, suggesting the presence of conformational exchange.
The channel architecture described above represents a novel topology and exemplifies how HCV has optimized the short p7 polypeptide to achieve a rather complex channel structure. What, then, are the elements for cation conduction and gating? An in-depth examination of the channel interior found two strongly conserved polar residues with salient structural features (Fig. 2b). One is Asn9, which forms a ring of carboxamides that constricts the conical region of the channel (Fig. 2c). Residue 9 is asparagine in all strains except in genotype 2 viruses, where it is substituted with histidine. Both asparagines and histidines have affinity for monovalent and divalent cations. We hypothesize that the Asn9 ring serves as a broad selectivity filter that dehydrates cations, allowing them to pass the hydrophobic ring formed by Ile6. The Ile6 ring defines the narrowest point of the channel and likely serves as a hydrophobic gate. Another feature is the Arg35 ring that defines the wider, C-terminal end of the channel (Fig. 2b). Placement of a positively charged ring at the other end of the pore was incomprehensible to us initially because it can repel cations, but the recent structure of an Orai Ca2+ channel also revealed a stretch of basic residues in the ion-conducting pore [27]. We hypothesize that one of the roles of Arg35 is to bind and obstruct anions at the pore entrance while allowing cations to diffuse into the pore. In this model, cation conduction is unidirectional from the C- to the N-terminal end of the channel.
To test the above hypotheses, we established an assay that uses the two-electrode voltage-clamp technique to record p7-mediated current in Xenopus oocytes (METHODS). Due to the poor stability of oocytes that overexpress p7(5a), p7 (JFH-1 strain, genotype 2a) was used instead for these experiments. As expected from the proposed role of residue 9 in selectively dehydrating cations, replacing His9 of p7(2a) with alanine caused ~70% reduction in channel conductance at +80 mV (Fig. 2d). The proposed role of Arg35 implies that placing negatively charged residues at the channel entrance would bind cations and hinder their diffusion into the pore, and indeed the R35D mutation also reduced conductance by ~70% (Fig. 2d).
We next investigated the mechanism of amantadine binding to the p7 channel using proteins that are 15N-labeled and deuterated, so that nuclear Overhauser enhancements (NOEs) between the protein backbone amide protons and drug protons could be measured unambiguously. At 10 mM amantadine (not corrected for drug partitioning to detergent micelles), the 15N-edited NOESY spectrum showed NOE crosspeaks between the adamantane protons and the amide protons of Val26, Leu55, Leu56, and Arg57 (Fig. 3a). We then identified contacts between the drug and protein sidechains using protein that is (1H/13C)-labeled at the methyl positions of alanines, valines, and leucines but is otherwise deuterated. In this case, the 13C-edited NOESY showed several methyl-drug NOEs (Fig. 3b).
These NOEs were used to dock amantadine into the structure determined in the absence of drug. In doing so, we emphasize that the relevance of the p7-amantadine complex is confined to only the drug-binding region, because we do not know how and to what degree drug binding alters the global conformation of the channel. The relatively poor stability of the protein-drug complex at the current stage of our study precludes full-scale structure determination. Nonetheless, the available NMR data show that the drug adamantane binds to six equivalent hydrophobic pockets between the pore-forming and peripheral helices (Fig. 3c). The pocket consists of Leu52, Val53, and Leu56 from H3, and Phe20, Val25, and Val26 from H2. The amantadine amino group on average points to the channel lumen. The same NOESY spectrum as above, recorded using a sample with 5 mM rimantadine, indicates that rimantadine binds to the same pocket with the methyl and amino groups pointing to the lumen (Supplementary Fig. 6).
The binding site is overall consistent with a mutational study showing that mutations in residues 50-55 significantly reduced drug sensitivity of the channel [28]. It is also consistent with an L20F mutation in genotype 1b virus, originally identified in clinical trials, that confers amantadine resistance [8,29]. In the p7(5a) structure, residue 20 is an integral part of the drug pocket and is in direct contact with the drug adamantane. Therefore, replacing Leu20 in p7(1b) with phenylalanine is expected to reduce hydrophobic interaction with the drug. Interpreting previous functional data in the context of the structure suggests that the binding site shown in Fig. 3c is relevant to drug inhibition and that interactions between the drug adamantane and protein hydrophobic residues are critical for inhibition. Variations in the hydrophobicity of the binding pocket among the p7 variants (Supplementary Fig. 7) thus explain the large differences in drug efficacies observed between different HCV genotypes.
We have learned from KcsA and other channels that a gated ion channel generally adopts two essential features: pore elements that provide ion selectivity and a gating mechanism that can transiently open the channel to allow ion permeation. By virtue of being a funnel, the p7 structure suggests that the tip of the funnel, represented by the Ile6 and Asn9 rings, is the key region for channel gating (Fig. 4). The role of the Asn9 ring is to provide ion selectivity by recruiting and dehydrating cations near the funnel exit, whereas the Ile6 ring is a hydrophobic constriction that would prevent water from freely passing through. Channel activation may involve reorientation of the H1 helices that widens the funnel tip, analogous to the dynamic C-terminal helix of KcsA [30], and such structural rearrangement can be afforded by the flexible hinge between H1 and H2, the intervening loop between H2 and H3, and the C-terminal tail that "latches" onto H1. We thus propose that binding of adamantane derivatives inhibits channel activity by restricting this structural rearrangement. Our NMR titration data (Supplementary Fig. 5c) are consistent with this proposal: in the absence of rimantadine, the Ile6 methyl resonance is split into an intense and a weak peak, possibly corresponding to the open and closed state, respectively, and increasing the drug concentration shifted the equilibrium and made the weak peak stronger. Although rigorous testing of the model is needed, this preliminary observation suggests the existence of multiple states of the p7 channel.
Sample Preparation
The amino acid sequence of p7 from genotype 5a was slightly modified to allow for efficient reconstitution and protein sample stability. In this sequence, Thr1 is replaced with Gly, Ala12 is replaced with Ser, and the three cysteines at positions 2, 27, and 44 are replaced with Ala, Thr, and Ser, respectively (Supplementary Fig. 1). The p7(5a) construct was cloned, expressed, and purified as previously described [1,2]. Briefly, the protein was expressed as a fusion to His9-trpLE that formed inclusion bodies. The peptide was released from the fusion protein by CNBr digestion and subsequently separated on a Proto-18C column by reverse-phase chromatography (more details given in Supplementary Methods). The lyophilized peptide was then dissolved in 6 M guanidine and DPC and refolded by dialysis against the NMR buffer. A typical NMR sample contains 0.8 mM protein (monomer concentration), 200 mM DPC, and 25 mM MES (pH 6.5).
Assignment of NMR resonances
All NMR experiments were conducted at 30 °C on Bruker spectrometers equipped with cryogenic probes. Sequence-specific assignment of backbone chemical shifts was accomplished using three pairs of triple-resonance experiments, recorded using a 15N/13C/2H-labeled sample. The triple-resonance experiments were relaxation-optimized (TROSY) [3], including HNCA, HN(CO)CA, HNCACB, HN(CO)CACB, HN(CA)CO, and HNCO [4]. Protein sidechain aliphatic and aromatic resonances were assigned using a combination of NOESYs, including 15N-edited NOESY-TROSY (60 ms NOE mixing time, τNOE) and 13C-edited NOESY-HSQCs (τNOE = 100 ms). Specific stereo assignments of the methyl groups of valines and leucines were obtained from a constant-time 1H-13C HSQC spectrum recorded using a 15% 13C-labeled sample [5].
Assignment of local NOEs for determining the secondary structures
The same 15N-edited NOESY-TROSY and 13C-edited NOESY-HSQC experiments above, with short τNOE, were used to assign local NOEs. Combining the NOE restraints with chemical shifts, we could very precisely define the helical and loop regions of the individual monomers.
Measurement of residual dipolar coupling (RDC) constants
The backbone 1H-15N RDCs were measured using a modified approach [6] of the strain-induced alignment in a gel method [7,8]. In this experiment, the p7(5a) channel in DPC micelles was soaked into a cylindrically shaped polyacrylamide gel (4.5%), initially of 6 mm diameter, which was subsequently radially compressed to fit within the 4.2 mm inner diameter of an open-ended NMR tube. The 1H-15N RDCs were obtained from 1JNH/2 and (1JNH + 1DNH)/2, which were measured by interleaving a regular gradient-enhanced HSQC and a gradient-selected TROSY [9]. The largest 1H-15N RDC measured is 33.5 Hz.
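The RDC arithmetic implied by this paragraph is worth making explicit: the interleaved HSQC/TROSY pair yields peak offsets of 1JNH/2 in the isotropic sample and (1JNH + 1DNH)/2 in the gel-aligned sample, so the RDC is twice their difference. The sketch below uses hypothetical peak offsets.

```python
# Sketch of the RDC arithmetic described above. The interleaved HSQC/TROSY
# experiments yield peak offsets of 1J_NH/2 (isotropic) and
# (1J_NH + 1D_NH)/2 (aligned); the frequencies below are hypothetical.

half_splitting_iso = -46.6      # Hz, = 1J_NH / 2, from the isotropic sample
half_splitting_aligned = -38.2  # Hz, = (1J_NH + 1D_NH) / 2, from the gel-aligned sample

j_nh = 2.0 * half_splitting_iso                             # one-bond scalar coupling
d_nh = 2.0 * (half_splitting_aligned - half_splitting_iso)  # residual dipolar coupling
print(f"1J_NH = {j_nh:.1f} Hz, 1D_NH = {d_nh:.1f} Hz")
```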
Assignment of intermonomer NOEs
Intermonomer NOEs between protein backbone amide and sidechain methyl protons were assigned using a sample that was reconstituted with a 1:1 mixture of 15N-, 2H-labeled p7(5a) peptide and 13C-labeled peptide. Recording a 15N-edited NOESY-TROSY (τNOE = 300 ms) on a 900 MHz spectrometer with this sample allowed exclusive detection of NOE crosspeaks between the 15N-attached protons of one monomer and the 13C-attached protons of other monomers. The intermonomer NOEs between the neighboring H1 helices and neighboring H2 helices effectively defined the central cavity formed by these helices. The initial structural solution of the pore assembly then allowed us to assign complementary and self-consistent intermonomer NOEs between the aliphatic and aromatic protons in a pair of 15N-edited NOESY-TROSY and 13C-edited NOESY-HSQC spectra recorded using a 15N-, 13C-labeled sample. These spectra were recorded with τNOE of 120 ms and 150 ms, respectively.
The packing of the H1 and H2 helices between adjacent monomers and the RDC-derived orientation constraints together positioned the H3 helix of monomer i to be in contact with H2 of the i+2 and H1 of the i+3 monomers, and this conformation was confirmed by unambiguous amide-methyl NOEs between H3 and H1/H2. The conformation as defined by the intermonomer NOEs was subjected to numerous rounds of self-consistency tests against the NOE crosspeaks in the 13C-edited NOESY-HSQC spectrum to ensure that all NOEs are consistent with the structure. The overall distribution of intermonomer NOEs is illustrated in Supplementary Fig. 7.
Assignment of NOEs between protein and drug
We prepared a sample containing 15N-, 2H-labeled p7(5a), 10 mM amantadine (or 5 mM rimantadine), and perdeuterated DPC. The sample was used to record a 15N-edited NOESY-TROSY (τNOE = 300 ms) on a 900 MHz spectrometer. This experiment allowed exclusive detection of NOEs between the exchangeable amide protons and the drug protons. For assigning NOEs between the protein sidechain methyl protons and the drug protons, we prepared the ALV-labeled protein that is 1H-, 13C-labeled at the methyl positions of alanines, valines, and leucines but is otherwise deuterated. The NOEs were measured using a 13C-edited NOESY with diagonal suppression, i.e., by interleaving two experiments: one with NOE mixing (300 ms) of the H_z magnetization (NOE crosspeaks) and the other with mixing of the H_zC_z magnetization (no NOE crosspeaks) [10]. Subtracting the two spectra mostly cancelled the strong methyl diagonal peaks (~0.8 ppm) and thereby unveiled the weak methyl-drug NOEs at ~1.7 ppm.
Structure calculation of the p7(5a) hexamer
Structures were calculated using the program XPLOR-NIH [11]. The monomer structures (mainly the secondary structures) were first calculated using intramonomer NOE-derived distance restraints, backbone dihedral restraints derived from chemical shifts using the TALOS program [12], and RDC restraints. A total of 10 monomer structures were calculated using a standard simulated annealing (SA) protocol. Six copies of the lowest-energy monomer structure were used to construct an initial model of the hexamer using intermonomer NOE restraints collected from the mixed-labeled sample for the H1 and H2 helical segments. For each intermonomer restraint between two adjacent monomers, six identical distance restraints were assigned, respectively, to all pairs of neighboring monomers to satisfy the condition of C6 rotational symmetry (as indicated by the EM data). The assembled hexamer was then subjected to refinement against RDCs to accurately orient the three helical segments. Finally, using the SA protocol, the hexamer was refined against the complete set of NOE restraints (including intramonomer and intermonomer distance restraints), dihedral restraints, and RDC restraints. A total of 60 hexamer structures were calculated, and 15 low-energy structures were selected as the structural ensemble. Ramachandran plot statistics for the structure ensemble, calculated using PROCHECK [13], are as follows: most favored (96.6%), additionally allowed (2.8%), generously allowed (0.6%), and disallowed (0.0%).
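The six-fold replication of each intermonomer restraint under C6 symmetry is mechanical; the sketch below shows how a single adjacent-pair restraint can be expanded over all six equivalent monomer pairs. The chain identifiers and restraint tuple format are our own illustration, not actual XPLOR-NIH restraint syntax.

```python
# Sketch of the C6-symmetric expansion of intermonomer NOE restraints.
# One restraint observed between adjacent monomers is copied to all six
# equivalent monomer pairs. Chain IDs and the tuple format are illustrative.

CHAINS = ["A", "B", "C", "D", "E", "F"]  # six monomers of the hexamer

def expand_c6(resid_i, atom_i, resid_j, atom_j, d, dminus, dplus, offset=1):
    """Replicate a restraint between monomer i and monomer i+offset over C6 symmetry."""
    restraints = []
    for k in range(6):
        c1 = CHAINS[k]
        c2 = CHAINS[(k + offset) % 6]
        restraints.append(((c1, resid_i, atom_i), (c2, resid_j, atom_j), d, dminus, dplus))
    return restraints

# Example: a hypothetical NOE between H2 of one monomer and H1 of its
# neighbor, restrained to 4.0 A (-2.2 / +1.0 A bounds).
for r in expand_c6(26, "HN", 10, "CB", 4.0, 2.2, 1.0):
    print(r)
```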
Whole-cell channel recording assay for p7
The cRNAs of the p7(2a) variants were synthesized and injected into Xenopus laevis oocytes at ~15 ng per oocyte. After about 16-30 hours of expression, healthy oocytes were collected and subjected to channel recording using the two-electrode voltage-clamp technique [14]. The oocytes were first bathed in standard ORi solution (90 mM NaCl, 2 mM KCl, 2 mM CaCl2, and 5 mM MOPS, pH 7.4) before being impaled with two microelectrodes. For recording p7-mediated current, we used a voltage-clamp protocol consisting of rectangular voltage steps from −100 to +80 mV in 10 mV increments, applied from a holding voltage of −60 mV. Expression levels of the p7 variants in oocytes were examined by confocal microscopy using HA-tagged p7. More experimental details are described in Supplementary Methods.
Figure 1 (legend, partial): ... Table 1. b, Two-dimensional drawing illustrating the intermonomer interactions among the H1, H2, and H3 helical segments that are responsible for the hexameric assembly. c, Three-dimensional cartoon representation describing the global arrangement of helical segments and amino acids that appear to play a role in the packing of H3 against H1 and H2. d, Fitting the lowest energy structure from the ensemble to the 16 Å EM map (EM database ID:1661) [4]. The fitting correlation is 0.94 as calculated with the program Chimera.
Figure 2. The pore properties of the p7(5a) channel. a, The pore surface calculated using the program HOLE, showing the shape and constrictions of the pore. b, Sectional view of the channel showing the pore-lining residues, with residues in red being strongly conserved. The numbers next to the helical segments represent the monomers to which the helices belong. c, A close view of the rings formed by Asn9 and Ile6 that constrict the N-terminal end of the channel. d, The current-voltage relationships of wild-type p7(2a) and the H9A and R35D mutants. Each data point is the mean ± SEM (standard error of the mean) calculated over measurements from six different oocytes (n = 6).
Figure 3 (legend): a, Representative strips from the 3D 15N-edited NOESY-TROSY spectrum (300 ms NOE mixing time) recorded using a sample containing 15N-, 2H-labeled p7(5a) and 10 mM amantadine, showing amantadine NOEs to the backbone amide protons of Val26, Leu55, Leu56, and Arg57. b, Representative strips from the 3D diagonal-suppressed 13C-edited NOESY-HSQC spectrum recorded using a sample that is 1H-, 13C-labeled at the methyl positions of alanines, valines and leucines but is otherwise deuterated, showing drug NOEs to the sidechain methyl protons of Val26, Leu52, and Val53. The spectra in a and b were recorded at a 1H frequency of 900 MHz. c, Amantadine docked into the p7(5a) hexamer using restraints from the NOEs in a and b (left) and a close view of amantadine in the binding pocket (right).
"year": 2013,
"sha1": "49918259ef9af657c4280106a65e40185a0e8b92",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3725310?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4ce7106f5e2917687aed984e5682d4d70a24187",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Microbially Mediated Methylation of Arsenic in the Arsenic-Rich Soils and Sediments of Jianghan Plain
Almost nothing is known about the activities and diversities of microbial communities involved in As methylation in arsenic-rich shallow and deep sediments; the correlations between As biomethylation and environmental parameters also remain to be elucidated. To address these issues, we collected 9 arsenic-rich soil/sediment samples from the depths of 1, 30, 65, 95, 114, 135, 175, 200, and 223 m in Jianghan Plain, China. We used microcosm assays to determine the As-methylating activities of the microbial communities in the samples. To exclude false negative results, we amended the microcosms with 0.2 mM As(III) and 20.0 mM lactate. The results indicated that the microbial communities in all of the samples significantly catalyzed arsenic methylation. The arsM genes were detectable in all the samples with the exception of the 175 m sample, and 90 different arsM genes were identified. All of these genes code for new or new-type ArsM proteins, suggesting that new As-methylating microorganisms are widely distributed in the samples from shallow to deep sediments. To determine whether microbial biomethylation of As occurs in the sediments under natural geochemical conditions, we conducted microcosm assays without exogenous As and carbon sources. After 80.0 days of incubation, approximately 4.5-15.5 μg/L DMAsV was detected in all of the microcosms with the exception of that from 30 m, and 2.0-9.0 μg/L MMAsV was detected in the microcosms from 65, 114, 135, 175, 200, and 223 m; moreover, approximately 18.7-151.5 μg/L soluble As(V) was detected in the nine sediment samples. This suggests that approximately 5.3, 0, 8.1, 28.9, 18.0, 8.7, 13.8, 10.2, and 14.9% of total dissolved As was methylated by the microbial communities in the sediment samples from 1, 30, 65, 95, 114, 135, 175, 200, and 223 m, respectively. The concentrations of biogenic DMAsV show significant positive correlations with the depths of the sediments, and negative correlations with the environmental NH4+ and NaCl concentrations, but show no significant correlations with other environmental parameters, such as NO3−, SO42−, TOC, TON, Fe, Sb, Cu, K, Ca, Mg, Mn, and Al. This work helps to better understand the biogeochemical cycles of arsenic in arsenic-rich shallow and deep sediments.
INTRODUCTION
Arsenic is widely distributed in the Earth's crust at an average concentration of 2.0 mg/kg. It occurs in more than 200 minerals, usually in combination with sulfur and metals (Nordstrom, 2002). Levels of arsenic differ considerably from one geographic site to another, depending on the biogeochemical conditions of the sites and the anthropogenic activities carried out in the vicinity. Arsenic deposited in sediments, rocks, and minerals is usually insoluble. However, problems arise when mineral arsenic is mobilized and released into groundwater (Polizzotto et al., 2008; Stuckey et al., 2016). Arsenic-contaminated groundwater exists in more than 70 countries worldwide, including Bangladesh, India, Pakistan, Burma, Nepal, Vietnam, Cambodia, China, and the United States (Kirk et al., 2004; Gu and Wang, 2012; Shao et al., 2016). Natural processes, biochemical reactions, and anthropogenic activities are responsible for the dissolution and release of arsenic from minerals into groundwater (Fendorf et al., 2010).
Increasing evidence suggests that microbial communities play major roles in the transformation, mobilization, or immobilization of arsenic in arsenic-rich sediments (Das et al., 2016; Chen et al., 2017). Diverse AOB and DARPs catalyze the redox reactions of arsenic in the sediments (Oremland and Stolz, 2003; Rhine et al., 2005; Islam et al., 2013; Kulp, 2014). AOB can oxidize As(III) into As(V) using oxygen as the terminal electron acceptor under aerobic conditions, or using nitrate, selenate, or chlorate as the terminal electron acceptor under anaerobic conditions (Silver and Phung, 2005; Stolz et al., 2006; Van der Zee and Cervantes, 2009; Kumari and Jagadevan, 2016; Zeng et al., 2016; Yang et al., 2017). As(V) can also be directly reduced into As(III) by DARPs using lactate, pyruvate, acetate, or other organic/inorganic materials as the sole electron donor (Sierra-Alvarez et al., 2005; Lear et al., 2007; Song et al., 2009; Kudo et al., 2013; Ohtsuka et al., 2013; Osborne et al., 2015). DARPs were found to be the major driver of the dissolution and release of arsenic from sediments into groundwater (Ohtsuka et al., 2013; Osborne et al., 2015; Chen et al., 2017; Wang et al., 2017).
In addition to the microorganism-catalyzed redox reactions of arsenic, it was recently found that biomethylation of arsenic occurs in the arsenic-rich sediments of the southern Willamette Basin, near the Eugene-Springfield area of Oregon, United States (Maguffin et al., 2015). An aquifer injection test indicated that putative biomethylation could produce dimethylarsinate at a rate of approximately 0.09% of total dissolved arsenic per day, comparable to rates of dimethylarsinate production in surface environments. It was estimated that global biomethylation of arsenic in aquifers has the potential to transform 100 tons of inorganic arsenic into methylated arsenic species per year (Maguffin et al., 2015). This suggests that microbial methylation of arsenic in arsenic-rich sediments may play a key role in the global biogeochemical cycles of arsenic. Actually, methylarsenicals have been detected in groundwater worldwide since 1973 (Braman and Foreback, 1973; Del Razo et al., 1990; Lin et al., 1998, 2002; Shraim et al., 2002; Xie et al., 2009; Watts et al., 2010; Christodoulidou et al., 2012). However, the substantial links between arsenic methylation and microbial communities, as well as the correlations between the arsenic methylation activities of indigenous microbial communities and environmental parameters in arsenic-rich sediments, remain to be elucidated.
Abbreviations: AFS, atomic fluorescence spectrometry; AOB, arsenite-oxidizing bacteria; ArsM, As(III)-S-adenosylmethionine methyltransferase; DARPs, dissimilatory arsenate-respiring prokaryotes; DMAsV, dimethylarsinic acid; EC, electrical conductivity; HPLC-ICP-MS, high-performance liquid chromatography linked to inductively coupled plasma-mass spectrometry; MMAsV, monomethylarsonic acid; TMAO, trimethylarsine oxide; TMAsIII, trimethylarsine; TOC, total organic carbon; TON, total organic nitrogen.
Many bacteria, archaea, fungi, and animals are able to methylate As. Arsenic methylation is catalyzed by ArsM (Ajees et al., 2012). The first arsM gene was cloned from the soil bacterium Rhodopseudomonas palustris (Qin et al., 2006); it codes for a protein with 283 amino acid residues. ArsM proteins exist in both prokaryotic and eukaryotic microorganisms. Purified ArsM can convert As(III) into DMAs(V), TMAO, and volatile TMAs(III) (Qin et al., 2006, 2009; Slyemi and Bonnefoy, 2012; Chen et al., 2014; Zhu et al., 2014). The arsM gene can be used as a molecular marker for investigating the diversity of As-methylating microbes (Jia et al., 2013). Recently, several single bacterial strains with significant As-methylating activities were isolated from paddy soils, wastewater ponds, and microbial mats (Kuramata et al., 2015; Huang et al., 2016; Wang et al., 2016). Functional analyses suggest that ArsMs play a role in the detoxification of As(III) and could be exploited for bioremediation of arsenic-contaminated groundwater (Qin et al., 2006).
Jianghan Plain is an alluvial plain located in the central and southern Hubei Province, China. Geochemical surveys indicated that groundwater in some areas of Jianghan Plain is contaminated by arsenic (Gan et al., 2014). Previously, we found that DARPs are widespread in the deep sediments of this plain and significantly catalyze the dissolution, reduction, and release of arsenic from sediments into groundwater. Our group also found that sulfate markedly enhances the DARPs-mediated dissolution, reduction, and release of arsenic and iron from the mineral phase into groundwater. This study aimed to explore the diversity and functions of the arsenic-methylating bacteria in the arsenic-rich sediments of Jianghan Plain.
Sampling
The sampling site (113°36′35.028″ E, 30°8′34.944″ N) was near a paddy field in Jiahe village, which belongs to the Shahu town of Xiantao city, Hubei province, China (Figure 1). The Xiantao area is one of the most important food production regions in China. Jiahe is a small rural village that lies on the bank of a rivulet, approximately 1.9 km from the Tongshun River. A borehole with a depth of 230 m was drilled using the direct-mud rotary drilling technique as described elsewhere (Keely and Boateng, 1987). Soil and sediment samples were collected from depths of 1, 30, 65, 95, 114, 135, 175, 200, and 223 m. The external layers of the sediment cores were carefully removed to avoid contamination and oxidation by oxygen. The samples were placed in sterilized tubes inside an anaerobic bag buried in ice. All of the samples were transported to the laboratory within 12.0 h.
Geochemical Analyses
To determine the total arsenic content, one gram of dried sample powder was mixed with 20.0 mL of aqua regia. After 2.0 h of incubation at 100 °C, the mixture was centrifuged at 10,000 r/min for 5.0 min, and the supernatant was collected for determination of the arsenic concentration using AFS (AFS-9600, Haiguang, Beijing, China). Soluble arsenic species, including total soluble As, MMAs(V), and DMAs(V), were determined by high-performance liquid chromatography linked to inductively coupled plasma-mass spectrometry (HPLC-ICP-MS) (LC-20A, Shimadzu, Japan; ELAN DRC-e, PerkinElmer, United States). An anion-exchange column (PRP-X100, Hamilton; 4.1 mm i.d. × 250 mm, 10 µm) was used for the separation. The mobile phase consisted of 10.0 mM (NH4)2HPO4 and 10.0 mM NH4NO3, adjusted to pH 6.0 with 4% formic acid, and was pumped through the column at a flow rate of 1.0 mL min⁻¹.
Total organic carbon was determined with a TOC analyzer (multiN/C 3100, Analytik Jena, Germany). Concentrations of anions were measured using ion chromatography (DX-120, Dionex, United States). Concentrations of metal ions were determined using ICP-AES (IRIS Intrepid II XSP, Thermo Fisher Scientific, United States).
Determination of As-Methylating Activities of the Sediment Samples
To detect the As-methylating activity of the microbial community in the different sediment samples, active microcosms were prepared in triplicate by inoculating 8.0 g of sample into 20.0 mL of simulated groundwater amended with 0.2 mM As(III) and 20.0 mM lactate in a 50-mL flask. Triplicate controls were each prepared by mixing 8.0 g of autoclaved sample with 20.0 mL of simulated groundwater. All of the mixtures were incubated at 30 °C without shaking, under microaerobic conditions. After 80.0 days, approximately 1.0 mL of culture was removed from each flask for determination of the concentrations of MMAs(V) and DMAs(V) using HPLC-ICP-MS.
Methylated As Release Assay in the Absence of Exogenous As and Carbon Source
To determine whether the As-methylating microbes can catalyze the release of methylated As from the arsenic-rich sediments under natural conditions, we performed microcosm assays with the sediment samples in the absence of exogenous As and organic carbon. An active microcosm was prepared by mixing 8.0 g of sediment sample with 20.0 mL of simulated groundwater in a 50-mL flask. All of the microcosms were incubated at 30 °C for 80.0 days without shaking, under microaerobic conditions. After 80.0 days of incubation, approximately 1.0 mL of slurry was removed from the flasks for measurement of the concentrations of soluble As, MMAs(V), and DMAs(V).
Analysis of the Correlations Between Methylated As and Geochemical Parameters
Spearman's rank correlation coefficient was used to determine the correlations between the concentrations of methylated As species produced by microbial communities and the geochemical parameters of the environment. The SPSS software was used for statistical analyses. Correlations were considered statistically significant at a 95% confidence level (P < 0.05).
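For readers who wish to reproduce the statistics outside SPSS, the same rank-correlation test is available in standard scientific libraries. The sketch below is a minimal example using SciPy, with the DMAs(V) release values reported later in the text (Figure 3A) serving only as illustrative input; it is not the study's analysis script.

# Minimal sketch of a Spearman rank correlation (SciPy in place of SPSS).
# Input values are taken from the release assay reported in the Results
# (Figure 3A) purely for illustration.
from scipy.stats import spearmanr

depth_m = [1, 65, 95, 114, 135, 175, 200, 223]
dmas_v_ug_per_L = [4.5, 7.0, 10.5, 7.0, 8.5, 7.0, 15.5, 12.5]

rho, p_value = spearmanr(depth_m, dmas_v_ug_per_L)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
# As in the study, correlations with P < 0.05 would be called significant.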
Amplification, Cloning, and Analysis of arsM Genes From Genomic DNAs of the Samples
To assess the diversity of As-methylating microbes in the samples, three pairs of primers were used to amplify arsM genes from the genomic DNA of each sample. The primers are listed in Table 1. Metagenomic DNA was extracted from the samples using the MiniBEST Bacterial Genomic DNA Extraction Kit (TaKaRa, Japan). A nested PCR technique was used to amplify the arsM genes from the genomic DNA. The first run of PCR was performed using the primers arsMF1 and arsMR2; the PCR products from the first run were purified and used as templates for the second run with the primers arsMF2 and arsMR1 or arsMF3 and arsMR1. The predicted lengths of the PCR products using the three sets of primers are 340, 230, and 210 bp, respectively. PCR amplifications were carried out in a reaction volume of 50 µL containing 2 µL of bacterial genomic DNA as template, 1.0 unit of Taq DNA polymerase (TaKaRa, Japan), 10 pmol of each primer, and 0.2 mM dNTPs. Reaction conditions were as follows: pre-denaturation at 94 °C for 5 min; 30 cycles of 94 °C for 30 s, 65 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 10 min. PCR products were gel purified using the E.Z.N.A gel extraction kit (Omega, United States). Purified DNA was ligated into a T vector. An arsM gene library was generated by introducing the recombinant plasmids into Escherichia coli DH5α competent cells. All of the clones from the library were sequenced and analyzed as described previously (Zhong et al., 2017).
The distinct arsM genes obtained were translated into amino acid sequences using the ExPASy server. A protein sequence homology search against the GenBank database was performed using the BLAST server. Multiple sequence alignments were conducted using the ClustalW2 software as described elsewhere (Mu et al., 2016a). A phylogenetic tree of the obtained ArsMs and their closely related homologues was constructed using the maximum likelihood method implemented in MEGA 6.0 as described previously (Xu et al., 2014; Mu et al., 2016b).
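As a point of reference, the translation step can be reproduced with standard sequence tooling. The sketch below uses Biopython rather than the ExPASy web server; the input file name is hypothetical.

# Minimal sketch: translate cloned arsM nucleotide sequences into protein
# sequences. Biopython stands in for the ExPASy server used in the study;
# "arsM_clones.fasta" is a hypothetical input file of in-frame sequences.
from Bio import SeqIO

for record in SeqIO.parse("arsM_clones.fasta", "fasta"):
    protein = record.seq.translate(to_stop=True)  # standard genetic code
    print(f">{record.id}\n{protein}")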
Accession Number(s)
The arsM sequences from the Jianghan Plain have been deposited into the GenBank database under the accession numbers MH177487 to MH177576.
Characterization of the Sampling Site
We collected nine sediment samples from depths of 1, 30, 65, 95, 114, 135, 175, 200, and 223 m, respectively. Geochemical analyses indicated that the sediments contain high contents of total As (ranging from 6.74 to 42.11 mg/kg) and soluble As (ranging from 1.9 to 100.7 µg/L). The arsenic contents show no correlation with the depths of the sediments. The concentrations of total arsenic also show no correlation with those of soluble arsenic, suggesting that multiple factors controlled arsenic dissolution and release from the sediments. The sediment samples contain 0.27-8.37 g/kg TOC and 0.18-1.26 g/kg TON; these substances could provide essential carbon and nitrogen sources for the growth of microorganisms in the sediments. The sediments also contain relatively high contents of sulfate (ranging from 14.53 to 863.87 mg/kg), and relatively low contents of ammonium (ranging from 3.12 to 52.77 mg/kg) and nitrate (ranging from 0.23 to 3.03 mg/kg).
As-Methylating Activities of the Microbial Communities From the Sediments
A microcosm assay was used to determine the As-methylating activities of the nine sediment samples. The microcosms were amended with 0.23 mM As(III) as the substrate of ArsM enzymes, and 20.0 mM lactate as the carbon source. No detectable amounts of methylated arsenic species were observed in the autoclaved sediment slurries. In contrast, when the sediment microcosms were not autoclaved, 2.06, 35.53, 18.83, 53.99, 121.86, 7.54, 7.00, 260.38, and 3.02 µg/L DMAs(V) were detected in the microcosms from 1, 30, 65, 95, 114, 135, 175, 200, and 223 m, respectively (Figures 2A,B). No significant amounts of MMAs(V) were detectable in any of the sediment samples, with the exception of that from the depth of 135 m, in which 3.30 µg/L of biogenic MMAs(V) was produced. This suggests that the microbial communities in the nine sediment samples were able to significantly catalyze As methylation, with DMAs(V) as the dominant product.
Methylated As Release From the Sediments
To determine whether the As-methylating microbes catalyze the release of methylated As from the arsenic-rich sediments under natural conditions, we performed a methylated As release assay using microcosms without any exogenous organic carbon or As(III) compounds. We found that 4.5, 7.0, 10.5, 7.0, 8.5, 7.0, 15.5, and 12.5 µg/L DMAs(V) were released from the slurries of the samples from depths of 1, 65, 95, 114, 135, 175, 200, and 223 m, respectively, and no significant amount of DMAs(V) was detected in the microcosm from 30 m (Figure 3A). In addition, 6.5, 2.0, 2.0, 6.5, 5.5, and 9.0 µg/L MMAs(V) were detected in the slurries of the samples from depths of 65, 114, 135, 175, 200, and 223 m, respectively, and no detectable MMAs(V) was found in the slurries of the other sediment samples. This indicates that significant amounts of DMAs(V) were generated in the sediment samples after 80.0 days of incubation without shaking, even though no exogenous carbon or As(III) was added to the microcosms. We also examined the concentrations of soluble As in the sediment samples after 80.0 days of incubation. Approximately 85.05, 18.69, 86.64, 36.36, 39.00, 97.20, 50.79, 151.53, and 84.09 µg/L of soluble As(V) were detected in the slurries of the samples from 1, 30, 65, 95, 114, 135, 175, 200, and 223 m, respectively (Figure 3B). This suggests that 5.29, 0, 8.08, 28.87, 17.95, 8.74, 13.78, 10.23, and 14.87% of the total soluble As was methylated by the microbial communities in the microcosms of the samples from 1, 30, 65, 95, 114, 135, 175, 200, and 223 m, respectively.
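The methylated percentages quoted above follow directly from the measured concentrations: each is the biogenic DMAs(V) divided by the soluble As(V) measured after incubation. A small sketch of that arithmetic is shown below; it reproduces the quoted values to within rounding.

# Sketch of the percentage calculation: biogenic DMAs(V) relative to the
# soluble As(V) measured after 80 days of incubation (Figures 3A and 3B).
depths_m = [1, 30, 65, 95, 114, 135, 175, 200, 223]
dmas_v = [4.5, 0.0, 7.0, 10.5, 7.0, 8.5, 7.0, 15.5, 12.5]  # ug/L; 30 m below detection
soluble_as_v = [85.05, 18.69, 86.64, 36.36, 39.00, 97.20, 50.79, 151.53, 84.09]  # ug/L

for depth, dma, total in zip(depths_m, dmas_v, soluble_as_v):
    print(f"{depth:>3} m: {100 * dma / total:.2f}% of soluble As methylated")
# e.g. at 1 m: 100 * 4.5 / 85.05 = 5.29%, matching the value quoted above.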
Correlations of the Methylated As Species and Some Geochemical Parameters
Spearman's rank correlation coefficient was used to determine the correlations between methylated As species and geochemical parameters. We found that the concentrations of biogenic DMAs(V) from the sediments of different depths show significant positive correlations with the depths of the sediments, and significant negative correlations with the concentrations of NaCl and NH₄⁺ (see Discussion).
Diversities of As-Methylating Microbes in the Sediment Samples
To understand the microbial basis of the As-methylating activities of the nine sediment samples, we explored the diversity of the As-methylating microbes present in the microbial communities by cloning, sequencing, and analyzing the arsM genes from the metagenomic DNAs of the samples. The arsM genes were detectable in all of the samples with the exception of the sample from the depth of 175 m. The distinct arsM sequences recovered (deposited under GenBank accessions MH177487 to MH177576) are summarized in Supplementary Figure S1. The lengths of these arsM gene fragments are either 233 bp (for the genes from depths of 1, 65, 95, 135, 200, and 223 m) or 209 bp (for the genes from depths of 30 and 114 m). They share 25-98% sequence identity with each other. This suggests that As-methylating microbes are widespread in the arsenic-rich shallow and deep sediments. We did not detect any arsM gene in the genomic DNA of the sample from 175 m; this does not mean that there are no arsM genes in the microbial community of that sample. Most likely, the primers used in this study are not complementary to the corresponding regions of the arsM genes in the sample from 175 m.
The amino acid sequences of the obtained ArsM proteins were each used as queries to search against the GenBank database using the BLAST server. These ArsMs share 35-98% sequence homology with other known ArsMs from bacteria and archaea. A phylogenetic tree was constructed based on the multiple alignments of the ArsM proteins from this study and their closely related ArsMs from other known microorganisms (Figure 5). An ArsM sequence from archaea was chosen as the outgroup.
DISCUSSION
Global arsenic transformation includes redox reactions and methylation/thiolation cycles. Despite the fact that As is toxic to microorganisms, diverse prokaryotes and eukaryotes obtain energy from the oxidation of As(III) or the reduction of As(V). Some microorganisms also detoxify As by oxidizing As(III) to the less toxic As(V), or by using the arsC gene of the ars operon to convert intracellular As(V) into As(III), which is then transported out of the cell by an energy-dependent efflux pathway (Slyemi and Bonnefoy, 2012).
As methylation is regarded as another As detoxification mechanism for both prokaryotic and eukaryotic microbes. However, some intermediates of the pathway were shown to be more toxic than the inorganic forms of arsenic (Li H. et al., 2016). It is likely that the production of MMAs(III) or DMAs(III) not only plays a role in detoxification, but also acts as a primordial antibiotic to attack other species (Li J. et al., 2016). Biomethylation of As is widespread in nature and has been observed in bacteria, archaea, fungi, algae, plants, animals, and humans. It was estimated that biological methylation of arsenic causes approximately 2.1 × 10⁷ kg of As per year to be volatilized and released into the atmosphere (Srivastava et al., 2011; Mestrot et al., 2013).
During the last decade, investigations of microbial As methylation have focused on three aspects: (i) the environmental diversity of arsM genes; (ii) the isolation and characterization of cultivable As-methylating bacteria; and (iii) the construction of engineered bacterial strains expressing high levels of arsM genes for bioremediation of As contamination in the environment. As-methylating bacteria widely exist in As-contaminated paddy soils and the rice rhizosphere (Jia et al., 2013; Zhao et al., 2013; Zhang S.Y. et al., 2015; Reid et al., 2017), activated sludges (Cai et al., 2013), contaminated aquatic ecosystems (Desoeuvre et al., 2016), copper mines (Xiao et al., 2016), composting manure (Zhai et al., 2017), and estuarine wetlands. A significant correlation was found between arsC and arsM genes in paddy soils, suggesting that the two genes coexist in the microbial As resistance system (Zhang S.Y. et al., 2015). To date, at least seven cultivable As-methylating bacteria, including Clostridium sp. BXM, Rhodopseudomonas palustris, Arsenicibacter rosenii, Cytophagaceae sp. SM-1, Shewanella oneidensis MR-1, Pseudomonas alcaligenes NBRC14159, and Streptomyces sp. GSRB54, have been isolated from different As-contaminated environments (Qin et al., 2006; Kuramata et al., 2015; Wang et al., 2015, 2016; Huang et al., 2016, 2017). They possess significant As-methylating activities under aerobic, anaerobic, or microaerobic conditions. Moreover, several engineered As-methylating bacterial strains expressing high levels of ArsM proteins, such as Pseudomonas putida and Bacillus subtilis, were constructed for bioremediation of As-containing soils and organic manure (Huang et al., 2015); however, all of these efforts remained in the laboratory.
Recently, it was found that microbial methylation of arsenic could also occur in the arsenic-rich aquifers at depths of 20-40 m in the Southern Willamette Basin, Oregon, United States (Maguffin et al., 2015). However, little is known about the distributions, diversities, and activities of As-methylating microbes in arsenic-rich sediments from different depths. In this study, we found that a large diversity of As-methylating microorganisms is distributed in the arsenic-rich sediments from depths of 1-223 m in Jianghan Plain in central China. All of the sediment samples possess significant As-methylating activities. We also found that the microbial communities in the sediments catalyzed As dissolution and methylation in the absence of exogenous carbon source and As. To the best of our knowledge, this is the first report on the diversities and functions of As-methylating microbes from arsenic-rich sediments. This work also provides direct evidence that the microbial communities in the deep sediments are extensively involved in global arsenic methylation reactions.
As shown in Supplementary Figure S1, we failed to detect the presence of the arsM gene in the sample from the depth of 175 m; however, As-methylating activity was nonetheless detectable in this sample (Figure 3A). We also observed that the diversity of the arsM genes is not always consistent with the concentrations of methylated As in the different samples. This inconsistency is likely attributable to arsM-like genes that we failed to detect because of mismatches with the primers used for PCR amplification.
It is interesting that the concentrations of biogenic DMAs(V) from the sediments show significant positive correlations with the depths of the sediments. This suggests that pressure may be beneficial to microbial As-methylating reactions. We also found that the concentrations of DMAs(V) generated by microbial reactions show significant negative correlations with those of NaCl and NH₄⁺ in the sediments. However, the mechanism underlying this observation remains to be elucidated. Because municipal wastewater typically contains high concentrations of NaCl and NH₄⁺, it can be inferred that anthropogenic activities could decrease As methylation in the sediments. Considering that the sampling site of this study was located in a paddy field, and that the groundwater in the area contains high concentrations of NH₄⁺ (Gan et al., 2014), it is no surprise that the groundwater in Jianghan Plain contained little methylated As.
As shown in Table 2 and Figure 3B, a comparison of the soluble As in the original sediment samples and in the incubated samples indicated that in some cases the latter is higher, and in other cases it is lower. This observation is attributable to the microbial communities from different depths having different As mobilization/immobilization activities.
Arsenic-rich groundwater is widely distributed in more than 70 countries in the world. Our work clearly indicates that As-methylating microorganisms ubiquitously exist in the arsenic-rich sediments from 1 to 223 m and possess appreciable biomethylation activities. This suggests that the microbial communities in arsenic-rich sediments play important roles in the global transformation of inorganic As into organic forms, and that the amount of methylated As released from the sediments, as predicted by Maguffin et al. (2015), may be significantly underestimated; however, many in situ experiments are required to achieve an accurate prediction.
As shown in Figure 6, a comparison of the As-methylating activities in the presence or absence of exogenous As and organic carbon indicated that addition of exogenous As and C markedly stimulated the methylation activities of the microbial communities in the sediments from the depths of 30, 65, 95, 114, 175, and 200 m. In contrast, exogenous As and C inhibited the microbial arsenic-methylating activities of the sediments from the other depths; this could be because the supplemented As and C predominantly promoted the growth of non-As-methylating microbes in the sediments, which competitively inhibited the growth of As-methylating microbes.
Based on the data of this study and related knowledge (Carlin et al., 1995; Qin et al., 2006; Ye et al., 2012; Yang and Rosen, 2016), we propose a conceptual model for the microbially catalyzed methylation of As in the arsenic-rich sediments (Figure 7). The microbial reactions start with microbial-community-catalyzed weathering, dissolution, and release of As(III) and As(V) from the sediments under microaerobic conditions (1). Bacterial ArsC catalyzes the reduction of As(V) into As(III) (2). The released and produced As(III) binds to the repressor (ArsR) of the bacterial arsM-arsC-arsH gene operon (3), thereby activating the expression of the arsM, arsC, and arsH genes (4). ArsM catalyzes As methylation by converting As(III) into MMAs(V) (5), which is further converted into DMAs(V) via reduction and methylation reactions catalyzed by ArsH and ArsM (6). Finally, DMAs(V) is converted into TMAs(III) and TMAO by ArsM and ArsH (7).
CONCLUSION
Methylation of As plays important roles in the global biogeochemical cycles of arsenic. However, little is known about whether biomethylation of As occurs in arsenic-rich shallow and deep sediments; the activities and diversities of the microorganisms involved in As methylation in such sediments also remain to be elucidated. In this study, we found that the microbial communities from the 1-223 m sediments of Jianghan Plain have efficient As-methylating activities. They can significantly catalyze the dissolution and methylation of As in the absence of exogenous As and carbon source. Functional gene analyses indicated that a large diversity of novel As-methylating microbes exists in all of the sediment samples from 1 to 223 m, with the exception of the 175 m sample. The concentrations of biogenic DMAs(V) show significant positive correlations with the depths of the sediments, and negative correlations with the environmental NH₄⁺ and NaCl. This suggests that anaerobic or microaerobic conditions and pressure could favor the microbial biomethylation of As. These findings provide the first direct evidence that the microbial communities in arsenic-rich shallow and deep sediments catalyze arsenic methylation and thus play key roles in the global transformation of arsenic from inorganic to organic forms. The data of this study also strongly suggest that global arsenic methylation in arsenic-rich sediments may be significantly underestimated, because previous investigations focused only on biomethylation in sediments from depths of 20-40 m. This work is also useful for better understanding the biogeochemical cycles of arsenic in arsenic-rich sediments.
AUTHOR CONTRIBUTIONS
YY, WS, ZP, XC, and XZ conducted the experiments. X-CZ, YY, WS, and YW analyzed the data. X-CZ conceived and designed the experiments, wrote the paper, contributed the reagents, materials, and analysis tools.
"year": 2018,
"sha1": "cef4fb06631fe4d5b0efb2571af9541cd009ca86",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2018.01389/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cef4fb06631fe4d5b0efb2571af9541cd009ca86",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
33400940 | pes2o/s2orc | v3-fos-license | High-brightness Cs focused ion beam from a cold-atomic-beam ion source
We present measurements of focal spot size and brightness in a focused ion beam system utilizing a laser-cooled atomic beam source of Cs ions. Spot sizes as small as (2.1 ± 0.2) nm (one standard deviation) and reduced brightness values as high as (2.4 ± 0.1) × 107 A m−2 Sr−1 eV−1 are observed with a 10 keV beam. This measured brightness is over 24 times higher than the highest brightness observed in a Ga liquid metal ion source. The behavior of brightness as a function of beam current and the dependence of effective source temperature on ionization energy are examined. The performance is seen to be consistent with earlier predictions. Demonstration of this source with very high brightness, producing a heavy ionic species such as Cs+, promises to allow significant improvements in resolution and throughput for such applications as next-generation circuit edit and nanoscale secondary ion mass spectrometry.
Introduction
Ion beams focused to nanoscale dimensions have become an essential tool for nanotechnology, spanning a wide variety of disciplines ranging from three-dimensional imaging and analysis of samples in biology, geology and materials science, to circuit diagnosis and repair in state-of-the-art semiconductor manufacturing. Over the course of several decades, the gallium liquid metal ion source (LMIS) has become the most common approach to producing these beams, with its simple construction, high brightness and stable output. For many applications, the Ga LMIS continues to be the source of choice, with several commercial versions available, and a robust literature of source characterizations and application demonstrations [1][2][3][4][5][6][7][8][9].
Recently, an increased demand for higher performance focused ion beams (FIBs) has emerged. Growing requirements for higher resolution, more beam current density, and better control over sputtering and damage have led the research community to develop a number of alternatives to the Ga LMIS. For example the He gas field ion source (GFIS) [10,11], and more recently the Ne GFIS [12], have significantly higher brightness than the LMIS and can produce correspondingly smaller spot sizes. Inductively coupled plasma sources have also become available [13]. While these plasma sources do not have a brightness as high as the LMIS or the GFIS, they have the advantages of a higher current and access to a high-sputteryield, low-contamination, heavy ion species such as Xe. The extension to species other than Ga is an important one, as it opens possibilities for not only optimizing sputter yield and controlling contamination, but also for selective nanoscale implantation of specific species [14]. To this end, extension of the LMIS to alloys has recently seen development, expanding the possibility to as many as 46 different ionic species, with the incorporation of a mass filter in the source [15].
Sources based on ionization of laser-cooled atoms have recently attracted attention in the search for higher brightness, access to new ionic species, and overall better performance [16]. With this type of source, neutral atoms are cooled using momentum transfer from near-resonant laser light [17] to temperatures in the microkelvin range, and then ionized via a focused laser beam to create a very high brightness ion beam. Advantages of this approach include a brightness that does not rely on producing ions from a potentially unstable very sharp tip, an inherently narrow energy spread, and well-developed laser-cooling technology for over 27 ionic species, many of which are not easily addressed with alloy LMIS, GFIS, or plasma sources. It is also worth noting that while extraordinarily cold temperatures are involved in this type of source, laser cooling allows this to be accomplished without the need for cryogens of any sort.
Two types of cold atom ion source have been successfully demonstrated based on this principle. In a magneto-optical trap ion source (MOTIS) [18,19], atoms are cooled and collected in a three dimensional trap, consisting of a quadrupole magnetic field and three pairs of counter-propagating laser beams incident from three orthogonal directions, before being ionized by one or more additional lasers. This type of source has been realized with Cr [20,21], Rb [19,[22][23][24], and Li [25,26]. Alternatively, an atomic beam of neutral atoms can be cooled to microkelvin temperatures in only the two transverse dimensions before entering an ionization region [27]. This approach overcomes a limitation on the current produced in a MOTIS arising from the slow transport of cold atoms into the ionization region [28], since a constant flux of atoms is available from the beam. Successful demonstrations of this type of source producing Cs ions have recently appeared [29][30][31], showing great promise.
In this paper we present measurements on a Cs cold atomic beam ion source in a so-called LoTIS configuration [32], demonstrating a brightness well over 2 × 10⁷ A m⁻² sr⁻¹ eV⁻¹ (as much as 24 times higher than that seen with the Ga LMIS), and show spot sizes in the single-digit nanometer range. These results represent a breakthrough in heavy ion source development. Up to now, the Ga LMIS, with its maximum brightness [9] of 10⁶ A m⁻² sr⁻¹ eV⁻¹, has remained the most practical choice for high speed, high resolution milling. The introduction of this new type of Cs ion source will enable a higher flux of ions in a smaller spot size with a larger sputtering rate per ion, realizing much improved throughput and resolution.
Although the LoTIS has a number of inherent advantages over a Ga LMIS besides higher brightness, such as a lower energy spread and the potential for very long term stability, the fact that it can be implemented with Cs is particularly advantageous. For example, table 1 shows Monte Carlo calculations [33] of sputter rate, ion depth, and straggle at 30 keV incident energy for several ion species accessible with high brightness sources. As seen in the table, Cs is calculated to have a 31% greater sputter rate, while maintaining a 14% smaller penetration depth and 36% smaller straggle, when compared with Ga. The comparison with Ne and He is even more favorable. This comparison shows that better milling performance can be expected when using Cs. In addition, a LoTIS-based Cs source should prove very useful for nanoscale secondary ion mass spectrometry (SIMS) applications, where Cs is the ion of choice for studying electronegative species. Current Cs ion sources used in SIMS have a typical brightness [34] in the range of 500 A m⁻² sr⁻¹ eV⁻¹, so the LoTIS could represent a more than 10⁴-fold improvement.
Experiment
The LoTIS, figure 1, has been described in previous publications [29,32]. Briefly, Cs vapor from a heated Bi-Cs alloy source enters a room-temperature rectangular glass cell where it is captured and cooled in a two-dimensional magneto-optical trap [35]. A 'pusher' laser beam tuned to the Cs atomic resonance at 852 nm and oriented along the axis of the trap creates a high-flux, slow atomic beam with mean velocity of approximately 10 m s⁻¹, which exits through a 1 mm aperture. The beam then enters a magneto-optical compressor [36], which is essentially another two-dimensional magneto-optical trap with increasing magnetic field gradients along the beam axis. The compressor reduces the beam diameter to a few tens of micrometers, resulting in a peak atomic flux of nearly 10¹⁸ m⁻² s⁻¹. The beam then enters a magnetically shielded region where it is further cooled in two dimensions using polarization gradient optical molasses [37] to a temperature of (10 ± 3) μK.
Ionization occurs in the next section, where two conducting plates with apertures create an extraction electric field. A pair of focused, crossed ionization laser beams, one tuned to the Cs resonance near 852 nm and the other tuned near 508 nm, ionizes the atoms in a small volume created by the overlap of the foci of these two beams. The 508 nm laser frequency is chosen to promote 6P₃/₂ Cs atoms excited by the 852 nm laser beam to an energy level between the field-free ionization threshold and the classical ionization saddle point set by the extraction field [38]. The exact frequency is chosen based on balancing the relative need for low ion temperature or high ion current. The focal spot sizes of the 852 nm and 508 nm ionization laser beams are chosen in a range between 8 and 160 μm (1/e² diameter), depending on the desired current and resolution, with the larger diameters yielding higher currents at somewhat lower brightness, due largely to Coulomb effects. After exiting the ionization region at a beam energy of up to 1 keV, the ion beam enters a set of electrostatic lenses where it is accelerated to the desired energy and formed into a beam with the desired diameter and divergence.
The entire LoTIS is mounted in place of a Ga LMIS on a commercial high resolution FIB column, with the usual condenser lens, stigmators, deflectors, and objective lens. In our case the condenser lens is not used because sufficient control over the beam divergence is provided by the LoTIS acceleration optics. There is no need to use a beam limiting aperture in this system; because there is no minimum emission associated with the source, controlling the ion source current is instead a matter of choosing the power and geometry of the ionizing laser beams.
The energy spread of an ion source is an important consideration because it can lead to limitations of the spot size due to chromatic aberration. Unlike the LMIS, which has an energy spread of 4-5 eV arising from the emission characteristics of the Taylor cone and Coulomb interactions when the typical current of 2 μA is extracted, the LoTIS has a very small inherent spread, dominated by the extraction potential gradient across the ionization laser beams' extent along the direction of the field. Previous measurements [32] have shown this spread to be in the range of 0.45 eV with an extraction field of 100 kV m⁻¹. This small energy spread contributes to the enhanced performance of the source, since it makes it possible to use a larger convergence angle in the focused beam without introducing excessive chromatic aberration.
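The scaling of this inherent spread is straightforward to estimate: ions born at different axial positions within the ionization volume sit at different potentials in the extraction field, so ΔE ≈ qEd for an extent d. The sketch below illustrates this estimate; the ~4.5 μm effective extent is inferred from the quoted numbers and is not an independently reported figure.

# Back-of-the-envelope estimate of the LoTIS inherent energy spread:
# ions created across an axial extent d in a uniform extraction field E
# differ in energy by roughly Delta_E = q * E * d.
E_field_V_per_m = 100e3   # extraction field quoted in the text, V/m
spread_eV = 0.45          # measured energy spread quoted in the text, eV

# For a singly charged ion, 1 eV of energy corresponds to 1 V of potential,
# so the implied effective axial extent of the ionization volume is:
d_m = spread_eV / E_field_V_per_m
print(f"implied ionization-volume extent: {d_m * 1e6:.1f} um")  # about 4.5 um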
Results
Our initial prototype source was designed and constructed to operate at 10 keV ion beam energy. While resolution can always be improved by increasing the ion beam energy, the initial purpose of the prototype is to carry out measurements of spot sizes and convergence angles with the aim of characterizing the source's brightness. A 10 keV beam is entirely adequate for this purpose. Figure 2 contains images exemplifying the qualitative performance of the source. In figure 2(a) we show a scanning ion micrograph of a standard tin ball microscopy resolution sample, acquired by collecting secondary electrons while scanning a 10 keV, 1 pA ion beam. This image, acquired in a single scan over 17 s, illustrates the level of beam stability and resolution that can be obtained with a LoTIS. Figure 2(b) shows a scanning ion micrograph of a pattern milled by a similar ion beam, demonstrating the milling capability of a 10 keV Cs+ beam. We show this just as an example, although optimum resolution and milling rate may well be achieved at a higher beam energy.
Spot size measurements
After optimizing the ion optics for best resolution, spot sizes were measured by scanning the ion beam across the edge of a cleaved Si wafer and collecting secondary electron emission (figure 3). For each horizontal line scan in the image, a fit was made to an error function plus background:

S(x) = A + ((B - A)/2)[1 + erf((x - x₀)/(√2 s_x))],   (1)

where S(x) is the signal as a function of pixel position x, and the dark level A, the bright level B, the centroid x₀, and the standard deviation s_x are free parameters. Allowing the centroid to be a free parameter for each scan reduced the effects of sample vibrations and/or beam position instabilities, which were present at the level of approximately 5-10 nm, but on a time scale much slower than the transit time of the scan across the edge. The resulting beam widths were averaged across the image. After exploring the parameter space of acceleration optics, focus, and stigmation settings, the smallest observed focal spot for a 10 keV, 1.2 pA beam was found to have a standard deviation of (2.1 ± 0.2) nm (1.6 nm 35-65 width, or 2.8 nm 25-75 width; the 35-65 width is defined as the distance over which the edge profile rises from 35% to 65% of the maximum value, and the 25-75 width is similarly defined for a rise of 25%-75%). The uncertainty in this value contains statistical variation from line scan to line scan, as well as systematic components arising from possible beam focus errors, and is intended to be interpreted as one standard deviation.
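A minimal sketch of this per-line edge fit is shown below, using SciPy's curve_fit with equation (1); the synthetic line scan and starting guesses are illustrative only.

# Sketch of the edge-profile fit of equation (1): error function plus
# background, with free parameters A (dark level), B (bright level),
# x0 (centroid), and s_x (spot standard deviation). Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_profile(x, A, B, x0, s_x):
    return A + 0.5 * (B - A) * (1.0 + erf((x - x0) / (np.sqrt(2.0) * s_x)))

x = np.linspace(-20.0, 20.0, 200)                      # position, nm
rng = np.random.default_rng(0)
signal = edge_profile(x, 10.0, 100.0, 0.0, 2.1) + rng.normal(0.0, 2.0, x.size)

popt, _ = curve_fit(edge_profile, x, signal, p0=[10.0, 100.0, 0.0, 3.0])
print(f"fitted spot standard deviation s_x = {popt[3]:.2f} nm")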
While scanning the beam across an edge is a common method for FIB spot characterization, it can sometimes be misleading if milling of the edge occurs during the measurement [39]. Milling-induced systematic effects on the spot size result were investigated by reversing the direction of the beam scan over the silicon edge; the spot size measurement was not found to depend on this choice of direction. It is believed that in our case milling was minimal due to the relatively low 10 keV beam energy, the low beam currents (≈1 pA) used for high resolution operation, and the use of the fastest scan speed compatible with a good signal-to-noise ratio. Any residual effects were minimized by fitting the edge profile line by line.
Brightness measurements
The reduced, or scaled, brightness of an ion beam is independent of beam energy and does not depend on the presence of apertures in the beam, and thus is a good figure of merit for describing the performance of a source. Given a source's reduced brightness and energy spread, it is possible to predict the expected spot size for any final beam energy, focal length, and convergence angle in a given focusing scenario, provided the chromatic and spherical aberration coefficients of the lens are known [40]. There are several definitions for a beam's reduced brightness in the literature, with coefficients depending, for example, on whether the beam has a uniform, Gaussian, or other distribution. For present purposes we consider the peak reduced brightness at the center of a cylindrical beam with Gaussian distributions in both the transverse spatial and the angular coordinates. We note this is an appropriate description for a LoTIS, since the ion beam is generated by laser beams with nearly Gaussian distributions and the beam is not defined by any apertures. In this case, we write

B_r = I / (4π² σ_x σ_y σ_θx σ_θy U),   (2)

where I is the beam current, σ_x and σ_y are the standard deviations in the transverse directions, σ_θx and σ_θy are the standard deviations in convergence angles, and U is the beam energy [16].
We measure the brightness of the LoTIS as follows. Using the objective lens of the FIB column, we create a focal spot which we measure in the manner described above along two orthogonal axes. We then obtain the convergence angle by turning off the objective lens and scanning the beam across the cleaved Si edge again along those same axes. The spatial distribution of the unfocused ion beam at the sample is a good measure of the distribution at the principal plane of the objective, because the ratio of the focal distance to the full length of the column is small (≈0.05) and the divergence of the beam is also small (<3 μrad). The standard deviation of the convergence angle is then derived from the standard deviation of the spatial distribution at the principal plane σ_L and the focal length of the lens f (in our case 30 mm) via σ_θ = σ_L/f. Combining this with the measured beam current, energy, and standard deviation of the focal spot σ_x in equation (2) yields the peak reduced brightness.
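Putting these pieces together, equation (2) turns the measured quantities into a brightness. The sketch below runs through that arithmetic with illustrative inputs chosen to be consistent with the quoted 30 mm focal length, 10 keV energy, and 7.4 pA current; the rms sizes are assumptions, not the study's measurement records.

# Sketch of the peak reduced brightness calculation of equation (2),
# B_r = I / (4 * pi^2 * sx * sy * s_theta_x * s_theta_y * U).
# The rms sizes below are assumed values for illustration only.
import math

I = 7.4e-12                 # beam current, A (quoted in the text)
U = 10e3                    # beam energy, eV (quoted in the text)
f = 30e-3                   # objective focal length, m (quoted in the text)
sigma_L = 2.4e-6            # assumed rms beam size at the principal plane, m
sigma_theta = sigma_L / f   # rms convergence angle, rad
sigma_x = sigma_y = 11e-9   # assumed rms focal spot size, m

B_r = I / (4 * math.pi**2 * sigma_x * sigma_y * sigma_theta**2 * U)
print(f"reduced peak brightness: {B_r:.2e} A m^-2 sr^-1 eV^-1")  # ~2.4e7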
For the purpose of brightness measurement, the system was operated in a regime where the beam exits the accelerator essentially collimated, with a relatively small diameter in the objective lens (≈2.4 μm). This regime was chosen so that the focal spot size would be dominated by the beam brightness, and the contributions from aberrations in the column would be negligible. With this beam configuration-which is different from the configuration chosen for measuring the smallest focal spot discussed above-the focal spot is typically larger than 5 nm. The chromatic aberration contribution is estimated to be less than 0.5 nm, and the spherical aberration contribution is three orders of magnitude smaller. Several measurements were also performed over a range of opening angles that included the ones used in figure 4; these brightness values did not vary appreciably, as they would have if aberrations were significant. This larger spot size is additionally advantageous because it minimizes the possible impact of sample interactions or environmental perturbations on the results.
The largest reduced peak brightness observed was (2.4 ± 0.1) × 10⁷ A m⁻² sr⁻¹ eV⁻¹ with a 7.4 pA beam operating at 10 keV. Figure 4 shows brightness measurements as a function of beam current, where the current was varied by changing the ionization laser powers. For these measurements, the ionization laser spot sizes and accelerator voltages were held fixed. The brightness falls off below 2 pA because the ionization efficiency is smaller at lower ionization laser intensity. It is important to note that the lower brightness at 1.0 pA does not represent a fundamental limitation of the system. Higher brightness could in principle be obtained at lower currents by focusing the ionization laser more tightly [29].
Given the maximum measured brightness of 2.4 × 10⁷ A m⁻² sr⁻¹ eV⁻¹, even smaller focal spot sizes should in principle be achievable than the 2.1 nm spot size demonstrated to date. We believe that platform and environmental difficulties account for this discrepancy. In addition, it is possible that aberrations due to ion-optical misalignment or fabrication tolerances are degrading focusing performance for very small probe sizes. Achieving the nearly 1 nm spot sizes implied by the above brightness and energy spread will likely require a FIB platform and source accelerator optics with more stringent design specifications.
Source temperature
A measurement of the effective transverse ion temperature is of interest to help clarify whether the underlying cold atom temperature is dominant, or whether other effects such as Coulomb interactions cause additional heating. It is possible to extract the effective transverse temperature of the ions leaving the source, T, by equating the emittance at the source, ε_s = σ₀√(k_B T/(2U)), to the emittance at the focus, ε_f = σ_x σ_α (k_B is Boltzmann's constant) [25]. Defining the effective focal length f of the objective as f ≡ σ₀/σ_α, we can write

T = 2U σ_x² / (k_B f²).   (3)

We used equation (3) to obtain measurements of T by focusing the ion beam onto the cleaved Si edge using the accelerating optics near the ion source. Ray tracing simulations were used to determine the effective focal length f of this accelerating lens configuration, and the rise distance of the secondary electron signal was used, as described above, to characterize the standard deviation of the current distribution at the focal spot σ_x. Figure 5 shows the derived ion temperature of a 10 keV, 1 pA beam as a function of ionization photon energy, measured in gigahertz detuning above the classical field ionization threshold. Also shown in the figure is a line indicating the transverse temperature of the neutral atoms as they emerge from the polarization gradient optical molasses, measured by turning off the ionization lasers and observing the beam width using laser induced fluorescence after expansion over a distance of 140 mm. Close to threshold, the measured ion temperature is seen to be consistent with the atom temperature of 10 μK. At higher photon energies, the ion temperature increases, presumably as excess photon energy begins to add recoil energy to the ions.
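A numerical illustration of equation (3), as reconstructed above, is given below; the focal length and spot size are hypothetical values chosen only to show that nanometer-scale spots at 10 keV correspond to temperatures of order 10 μK.

# Sketch of the temperature extraction of equation (3),
# T = 2 * U * sigma_x**2 / (kB * f**2), using the form reconstructed above.
# The focal length and spot size are hypothetical illustrative values.
kB_eV_per_K = 8.617e-5    # Boltzmann constant, eV/K
U = 10e3                  # beam energy, eV
f = 30e-3                 # hypothetical effective focal length, m
sigma_x = 6.4e-9          # hypothetical rms focal spot size, m

T = 2 * U * sigma_x**2 / (kB_eV_per_K * f**2)
print(f"effective transverse ion temperature: {T * 1e6:.1f} uK")  # ~10 uK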
Summary and conclusion
In this paper we have presented measurements on a laser-cooled atomic beam LoTIS Cs ion source, demonstrating a peak reduced brightness as high as (2.4 ± 0.1) × 10⁷ A m⁻² sr⁻¹ eV⁻¹. We have also measured spot sizes as small as (2.1 ± 0.2) nm using a 10 keV beam, and shown example images and milling patterns. The brightness measurements confirm earlier predictions of the performance of this type of source [29,32] and demonstrate its potential for producing a high performance FIB. The brightness attained by this source is significantly higher than that of LMIS or plasma sources, suggesting that smaller spot sizes and higher milling rates can be attained.
While the results presented here demonstrate improved performance over other sources, it should be noted that the system discussed here is still not fully optimized. With further optimization, it is reasonable to expect that even smaller spot sizes will be possible. The maximum brightness value observed in this work is entirely consistent with creating a subnanometer focal spot with a 30 keV, 1 pA beam [32].
Work is ongoing on optimizing this source, with the next steps being demonstration of even smaller focal spot sizes at higher beam energies, and exploration of the utility of the source for traditional FIB applications such as circuit edit, transmission electron microscope sample preparation, and general nanofabrication. As improvements to the source continue, its high resolution, along with its ability to produce a wide range of currents from picoamperes to nanoamperes, promises to open an even broader array of applications in present and next-generation nanotechnology.

Figure 1. Schematic of the LoTIS cold atomic beam ion source. Cs atoms are trapped and cooled in a 2D magneto-optical trap (MOT), pushed into a magneto-optical compressor, further cooled in polarization-gradient optical molasses, then photoionized in a two-step process and extracted with an electric field.

Figure 2. (a) Secondary electron image of a standard tin ball resolution target acquired using a focused 10 keV, 1 pA Cs+ ion beam from the LoTIS. (b) Secondary electron image of a pattern milled in the edge of a Cu grid using a similar Cs+ ion beam. Milling time for this pattern was approximately 120 s.

Figure 5. Ion temperature T as a function of ionization laser detuning Δ above the classical field ionization threshold. The atom temperature is shown with a dashed blue line. Error bars indicate one standard deviation uncertainty, derived by combining uncertainties in measurements of current and spot sizes.
"year": 2017,
"sha1": "f4beb8586a08fb008898b7cdf722e495669fb52a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2399-1984/aa6a48",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4beb8586a08fb008898b7cdf722e495669fb52a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics",
"Medicine"
]
} |
Comparison of Systolic Blood Pressure Measurements by Auscultation and Visual Manometer Needle Jump
Purpose
This study was designed to investigate differences in systolic blood pressure measurements as obtained through auscultation and observation of the visual jump on the manometer.
Methods
Men (n = 21; 26.9 ± 7.4 yrs) and women (n = 22; 29.3 ± 13.9 yrs) volunteered to have resting systolic blood pressure (SBP) assessments. During the same cuff inflation-deflation cycle of traditional sphygmomanometry, the initial visual jump of the manometer needle and the first Korotkoff sound heard were recorded. Duplicate assessments were made in each arm with 30 sec between intra-arm trials.
Results
Paired t-test results indicated there were no within-method differences between arms for visual jump (R: 132.1 ± 11.3; L: 131.8 ± 10.5 mmHg) or auscultation (R: 116.8 ± 9.0; L: 113.5 ± 8.8 mmHg). There were methodological differences within arm with visual jump being the higher of the two (right: t(42) = -12.69; left: t(42) = -11.37; p < .001).
Conclusion
If visual jump determination of SBP cannot be avoided, re-assessment using a more traditional method (i.e. auscultation) is recommended.
INTRODUCTION
Blood pressure is a significant indicator of many diseases and disorders and, globally, is the most influential of the modifiable risk factors for cardiovascular disease (11). In the US, one in three adults has high blood pressure; the prevalence of American adults with hypertension is highest for non-Hispanic blacks (approximately 45%) and lowest for non-Hispanic Asians (approximately 29%) (3). As reported by the World Health Organization, hypertension is estimated to be at the root of 7.5 million deaths worldwide (17). Fortunately, blood pressure is a plastic phenotype meaning that high blood pressure can be prevented and reduced over time via suitable mechanisms including exercise (16) and medication (7,8). However, appropriate action can only take place once blood pressure has been measured in an individual, and as high blood pressure is symptomless (often termed the "silent killer") the actual recordings are extremely important.
Blood pressure measurements are important for gauging the health of an individual and occur in many settings. Such measurements take place at different frequencies depending on the individual, but the American Heart Association (1) suggests that blood pressure screenings should occur at least once every two years starting at age 20 if blood pressure is less than the standard 120/80 mmHg, with more frequent visits being required when over the standard reading. These recordings allow the individual and their doctor to monitor blood pressure over time.
While automated blood pressure assessment is growing in popularity in doctors' offices and in-home settings, the most common method of measuring blood pressure involves the use of a blood pressure cuff and stethoscope (14). This method, known as auscultation, requires the practitioner to listen to the artery (usually the brachial artery) for sounds that indicate the point of systolic and diastolic blood pressure. When these Korotkoff sounds are heard, the practitioner observes the manometer and records the reading. Nurses and other practitioners have been observed using a method where auscultation is not used; instead, the needle/gauge of the manometer is observed, with a visual jump of the needle indicating the point of systolic blood pressure (SBP); diastolic blood pressure (DBP) cannot be measured with this method.
While there is minimal literature (2) that even references using a "needle bounce manometer," online forums and primary research (personal communications with and observations of athletic trainers, nurses, etc.) indicate that the method is not uncommon. That this method is occasionally used, and that there is no research supporting or comparing the method with an accepted BP assessment method, is concerning. Online forums suggest the method can differ by as much as 20 mmHg from a reading made using auscultation. If the method is commonly used (as suggested by primary research) and taught in degree programs and clinical settings, it is important that the method is analyzed for its validity and reliability. If the inaccuracy of the method is as extreme as some sources suggest, it could have potentially detrimental effects; the risk of developing cardiovascular disease, for example, doubles with every 20/10 mmHg increment above 115/75 mmHg (5). This would mean that an inaccurate reading could result in an inappropriate course of treatment.
This study was designed to compare visual jump and auscultated SBP values obtained within the same cuff inflation-deflation cycle for a sample of adults. The visual jump SBP values were hypothesized to be significantly higher than those heard via auscultation in the same cuff inflation-deflation cycle.
Participants
A convenience sample of English-speaking men (n = 21; 26.9 ± 7.4 yrs) and women (n = 22; 29.3 ± 13.9 yrs) between the ages of 18 and 65 yrs volunteered. To participate in the study as approved by the university's Institutional Review Board, volunteers had to be free of all exclusion criteria: inability to consent, age outside designated range, pregnancy, missing all or part of an arm. An a priori power analysis (G*Power 3.1.9.2) (6) was performed to estimate sample size; with a power of .95, alpha value of .05, and effect size of .70, we determined that a total of 13 participants were needed. To compare possible differences by sex, we sought to recruit 25 men and 25 women to account for exclusion criteria found during screening, missing data, or attrition.
Protocol
Recruitment was conducted via word-of-mouth and posted flyers. Interested persons contacted a research team member and were given the opportunity to review the approved Informed Consent at their leisure prior to undergoing screening for inclusion criteria. Those screening into the study were scheduled for an assessment appointment and given instruction on the importance of wearing a shirt with short sleeves so blood pressure was not taken over clothing. No other pretest guidelines were provided as the sole interest was in how closely SBP values determined via visual jump and auscultation would be.
At the scheduled appointment, the participant met with a research team member who instructed the participant to sit in a chair with a supportive back and place their feet flat on the floor. Participants asked any questions they had, gave written consent, and completed a brief health history questionnaire. Participants bared their upper arm, and the researcher positioned an appropriately sized blood pressure cuff in accordance to standard procedure (12). Approximately 5 min after the client had been sitting, the researcher palpated the brachial artery. With stethoscope earpieces and participant's arm properly positioned, the researcher quickly inflated the cuff to 200 mmHg, released the pressure to allow deflation at a rate of 2 to 3 mmHg/s, mentally noted when the needle of the aneroid gauge deviated from its downward fall with a rhythmic jump, and continued to deflate the cuff while listening for the first (auscultated SBP) Korotkoff sound. The researcher continued listening for another 10 mmHg beyond the last (auscultated DBP) Korotkoff sound before releasing the remaining pressure in the cuff. These values were recorded and, 30 sec later, the process was repeated on the same arm with inflation stopping at 20 mmHg above where the visual jump had been observed in the first trial. The same procedure was used on the arm contralateral to the initial arm assessed, resulting in two blood pressure assessment cycles per arm, a total of four measurements being recorded per participant. The arm (right or left) in which the first assessment was taken was randomly selected.
The same trigger-style palm aneroid sphygmomanometer (American Diagnostic Corporation, Hauppauge, NY, USA) and stethoscope (3M Littmann, Maplewood, MN, USA) were used throughout the study. Three technicians, all of whom had been similarly trained over the course of a semester, performed the blood pressure assessments. Intertester reliability for visual jump and auscultated SBP was determined to be r = .99.
As indicated in Table 1, visual jump SBP values were higher than auscultated values in both arms.
There were significant within-arm differences for the sample, with the visual jump method producing higher values than auscultation (right: t(42) = -12.69; left: t(42) = -11.37; p < .001). This pattern of within-arm differences was also noted for the women (right: t(21) = -8.62; left: t(21) = -7.94; p < .001) and men (right: t(20) = -9.17; left: t(20) = -8.00; p < .001). Frequencies of the mean SBP values were stratified based on the SBP assessment method, sex, and the new hypertension guidelines (15). As seen in Table 2, classifications of SBP are notably different and shifted into higher mmHg ranges when using the visual jump of the manometer needle.
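For readers reproducing the analysis, the within-arm comparison is a standard paired t-test with df = n - 1 = 42. The sketch below shows the test in SciPy; the arrays are random placeholders standing in for the 43 per-participant SBP values, not the study data.

# Minimal sketch of the within-arm paired t-test (visual jump vs auscultation).
# The arrays are placeholders for the 43 per-participant SBP values.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
auscultated = rng.normal(116.8, 9.0, 43)                  # placeholder values
visual_jump = auscultated + rng.normal(15.0, 5.0, 43)     # placeholder offset

t_stat, p_value = ttest_rel(auscultated, visual_jump)     # df = 43 - 1 = 42
print(f"t(42) = {t_stat:.2f}, p = {p_value:.3g}")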
DISCUSSION
The results of this study are the first to formally document that SBP values are higher when relying on the first visual jump of the manometer needle as opposed to the standard auscultation assessment of systolic blood pressure. Additionally, the individual difference between assessment methods ranged from 2.5 to 38.0 mmHg, with the mean difference being approximately 15.3 mmHg for men and 14.2 mmHg for women in the current sample. Although these methodological differences are slightly lower than the previously mentioned 20 mmHg, the individual differences varied widely with only 17 of the 43 participants having a mean methodological difference between 15.0 and 25.0 mmHg.
Auscultated SBP is defined as the first Korotkoff sound heard during cuff deflation and is dependent on the technician's ability to hear the onset of blood flow turbulence when the pressure in the occluded artery exceeds that in the inflated cuff (4,12). The 1.4 mmHg difference for the men's between-arm SBP auscultation values in the current study is statistically significant (p < .04) but may not be clinically significant. Recently, a between-arm difference in SBP of ≥ 5 mmHg has been suggested as a criterion for predicting future cardiovascular events (10). The men's inter-arm difference for auscultation may be due to normal hemodynamic variation, a preponderance of right-arm dominance, starting-arm selection, and/or measurement error.
The visual jump of the manometer needle may represent a sub-auditory volume of blood flowing through the monitored artery beneath the stethoscope. Alternatively, the jump may represent pressure changes transmitted through the cuff to the manometer as the pulsatile blood flow arrives in vessels at the point of vascular occlusion (4). These pulsations of pressure seen on the manometer describe the initial oscillations that precede the point of maximal oscillation corresponding to the "gold standard" of systolic blood pressure measures: intra-arterial systolic pressure (12). It was the initial, not maximal, jump (oscillation) that was recorded in the current study. In contrast, the proprietary computer algorithms on which the oscillometric blood pressure assessment technique is based differ in regard to which magnitude of oscillation is designated as the SBP (4,12).
Proper and accurate assessment of resting blood pressure is an important skill taught in most exercise science programs. During that training, it is important that students learn that the auditory/auscultated SBP values are more accurate than the visual jump in all instances. Client safety relies on accurate blood pressure assessment before, during, and after aerobic capacity testing. Systolic blood pressure is expected to increase from baseline in response to increasing exercise intensities and decrease toward baseline as the workload is reduced and the test stopped. Relative blood pressure-related contraindications to maximal exertion exercise need to be ruled out before testing begins (13). Likewise, a drop in SBP during a maximal exertion exercise test may warrant test termination. This determination relies on the baseline blood pressure values taken with the client in the posture required for the exercise test (i.e. seated for cycling) (13). Personal trainers and fitness professionals who periodically assess their clients' blood pressure can determine if the prescribed exercise program is having the desired effect or is in need of modification. Therefore, it is recommended that students repeatedly practice with trained and skilled technicians until accurate auscultation of blood pressure is mastered (9). Use of the visual jump SBP should be avoided except in emergency situations that impair the ability to hear, or as a cue to the technician that the auscultated value will follow shortly.
Limitations of the current study could be reduced by using a teaching stethoscope with two sets of earpieces and scheduling appointments so that simultaneous determinations could be made by two of the similarly trained technicians. The American College of Sports Medicine recommends a minimum of 1 minute elapse between inflation-deflation cycles (13), but our inter-trial assessment time lapse was shorter. Thus, there is a possibility that this may have introduced some error in the second reading within each arm.
Ultimately, in the situations in which the visual jump of the manometer needle might be used to identify SBP (e.g. riding in the back of an ambulance, a loud environment, poor hearing acuity, lack of a functioning stethoscope), the value recorded should be identified as an estimate only and in need of reassessment via a more precise method such as auscultation. | 2019-03-08T14:17:18.333Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "28734cdbe6f54a5b50af568052dacc91d405e3c4",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/97/5/1500/1389775/1500.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "28734cdbe6f54a5b50af568052dacc91d405e3c4",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
264538016 | pes2o/s2orc | v3-fos-license | Metabolic Syndrome and Adipokines Profile in Bipolar Depression
Metabolic syndrome (MS) is a growing social, economic, and health problem. MS coexists with nearly half of all patients with affective disorders. This study aimed to evaluate the neurobiological parameters (clinical, anthropometric, biochemical, adipokines levels, and ultrasound of carotid arteries) and their relationship with the development of MS in patients with bipolar disorder. The study group consisted of 70 patients (50 women and 20 men) hospitalized due to episodes of depression in the course of bipolar disorders. The Hamilton Depression Rating Scale was used to assess the severity of the depression symptoms in an acute state of illness and after six weeks of treatment. The serum concentration of adipokines was determined using an ELISA method. The main finding of this study is that the following adipokines correlated with MS in the bipolar depression women's group: visfatin, S100B, and leptin correlated positively, whereas adiponectin, leptin receptor, and the adiponectin/leptin ratio correlated negatively. Moreover, the adiponectin/leptin ratio showed a moderate to strong negative correlation with insulin level, BMI, waist circumference, triglyceride level, and treatment with metformin, and a positive moderate correlation with HDL. The adiponectin/leptin ratio may be an effective tool to assess MS in depressed female bipolar patients.
Introduction
Metabolic syndrome (MS) is a growing social, economic, and health problem. MS consists of abdominal obesity, hypertension, and disorders of carbohydrate and lipid metabolism [1]. The prevalence of metabolic syndrome in the Polish population is estimated at about 20%, 18% of men and 22% of women [2]. Insulin resistance and obesity (especially abdominal obesity) play a key role in the pathogenesis of MS [1,3].
Adipose tissue is a highly specialized tissue that plays an important endocrine function through the synthesis and secretion of adipokines [3][4][5][6]. The enlarged adipocytes secrete pro-inflammatory adipokines, promoting systemic inflammation and contributing to metabolic syndrome [1]. Among the most important of these adipokines are leptin, resistin, adiponectin, visfatin, and interleukin-6 (IL-6) [1,3,5]. Leptin regulates feeding behavior, energy homeostasis [4,7], and lipid metabolism [5,8,9]. It also controls glucose homeostasis and insulin sensitivity [3,4,9]. Moreover, leptin is known to promote a proinflammatory immune response, and it is suggested to be an important factor linking obesity, MS, and cardiovascular diseases [3,8,9]. Another essential role in the development of inflammation is played by resistin [1,5]. The role of resistin in the pathogenesis of insulin resistance in humans remains unclear [4,5]; however, its role in inflammation and the development of metabolic dyslipidemia is well known [9]. Adiponectin plays a vital role in the regulation of glucose and lipid metabolism [1,3,5,7]. Adiponectin acts as an endogenous insulin sensitizer, increasing glucose uptake and promoting fatty acid oxidation [7]. Moreover, adiponectin plays a protective role against the development and progression of insulin resistance, MS, and cardiovascular diseases, and also inhibits pro-inflammatory factors [3,7]. Visfatin similarly affects insulin sensitivity due to its insulin-mimetic capacity [1,5,10]. It binds insulin receptors and enhances glucose uptake, transport, and lipogenesis [10]. However, unlike adiponectin, visfatin is a pro-inflammatory mediator that activates multiple inflammatory pathways (e.g., mitogen-activated protein kinase and phosphatidylinositol 3 kinase) [10]. Increased expression of IL-6 has numerous implications for the pathogenesis of obesity and its complications. IL-6, by reducing the expression of the insulin receptor and adiponectin and inhibiting the activity of lipoprotein lipase, leads to the intensification of insulin resistance [1,11]. Moreover, IL-6 affects the functioning of the vascular endothelium by stimulating the synthesis of acute phase proteins. This leads to the formation and progression of atherosclerotic lesions, inflammation, and endothelial dysfunction [1,3].
The S100B is a calcium-binding protein responsible for transcriptional regulation and DNA repair, cell differentiation, cell growth and migration, and programmed cell death [12]. The S100B is predominantly expressed by astrocytes [13], and also by other cell types: melanocytes [14], chondrocytes, adipocytes [15,16], skeletal muscle, and a few other cell types [17]. In their research, Fujiya et al. [18] proposed that the S100B functions as an adipokine in the interaction between adipocytes and macrophages. They proved that S100B upregulated the expression of TNF-α and proinflammatory markers in macrophages, and TNF-α augmented S100B secretion from preadipocytes. Moreover, silencing of S100B in preadipocytes significantly reduced TNF-α secretion from macrophages [18]. Recent publications [17,19] suggest the involvement of S100B in obesity and diabetes mechanisms, possibly by participating in the inflammatory processes.
Bipolar disorder patients are almost twice as likely to have MS as the general population [20][21][22]. Moreover, patients taking antipsychotic medication have a higher risk of developing MS than antipsychotic-free patients [20,21]. Psychiatric patients are more likely to develop obesity and metabolic abnormalities than healthy people [21][22][23]. Factors that predispose psychiatric patients to MS include genetics and an unhealthy lifestyle (e.g., smoking, excessive alcohol intake, poor sleep hygiene, physical inactivity, and unhealthy nutritional patterns), but also the use of psychotropic medication (antipsychotics, antidepressants, and mood stabilizers) [20][21][22][23].
The aim of this study was to evaluate the neurobiological parameters (clinical, anthropometric, biochemical, adipokines levels, and ultrasound of carotid arteries) and their relationship with the development of MS in patients with bipolar affective disorder. The first research hypothesis is that depending on coexisting MS, patients differ in neurobiological parameters and the level of adipokines tested. The second research hypothesis is that differences in neurobiological parameters and the level of tested adipokines depend on the state of the disease (exacerbation vs. improvement).
Participants
The study group included 50 women (45.0 ± 14.29 years old) and 20 men (50.9 ± 15.73 years old) (Table 1) with a diagnosis of bipolar disorder based on DSM-IV criteria [24,25]. Only patients with current depression episodes were included in the study. Patients were evaluated by psychiatrists twice, upon admission to the hospital in an acute state of illness (exacerbation) and after six weeks of treatment as recommended by the National Consultant [26], to assess depression symptoms using the 17-item version of the Hamilton Depression Rating Scale (HAMD) [27]. The exclusion criteria were severe and unstable medical conditions, pregnancy, autoimmune diseases, severe and chronic somatic diseases (except diabetes, hypertension, and obesity), infectious diseases four weeks before and during the study period, neuropsychiatric illnesses associated with cognitive impairment, or a prior clinical diagnosis of schizophrenia or schizoaffective disorder. The study protocol did not interfere with the treatment of patients, was compliant with the indications, and was supervised by the attending physicians. Patients were treated with various combinations and doses of drugs with different mechanisms of action (antidepressants, antipsychotics, and mood stabilizers). The most often prescribed drugs were: antidepressant-venlafaxine (21 women and 7 men), mood stabilizers-lithium carbonate (23 women and 10 men) and valproic acid (9 women and 11 men), antipsychotics-quetiapine (32 women and 8 men), olanzapine (13 women and 5 men) and clozapine (12 women and 3 men). More detailed information about the pharmacological therapy can be found in the Supplementary Material (Table S1). No patient was on monotherapy, and no two patients in the study group received the same set of drugs.
Upon admission to the hospital, all patients underwent anthropometric measurements: height, waist circumference (with an accuracy of 1.0 cm), and body weight (with an accuracy of 0.1 kg). These data were used to estimate visceral obesity and body mass index (BMI). Following the recommendations of the World Health Organization (WHO) regarding visceral obesity [28], we adopted the following cut-off points for significantly increased risk of metabolic complications: >88 cm in women and >102 cm in men. A BMI below 25 kg/m² was considered a normal body mass index, whereas overweight was diagnosed with BMI values between 25 and 29.9 kg/m², and obesity with a BMI above 30 kg/m² [29]. In the statistical analysis, we distinguished two groups: normal weight, and obesity (BMI above 25 kg/m²), the latter combining the WHO overweight and obesity categories. Biochemical tests were performed to determine the lipid profile (HDL, LDL, and TG [mg/dL]) and insulin concentration (mU/mL). Based on anthropometric measurements and laboratory tests, the internist diagnosed metabolic syndrome under the guidelines of the International Diabetes Federation [30] and decided which patients required metformin treatment (500 mg daily).
The intima-media complex (IMC) was measured in the common carotid arteries. Distal IMC of both carotid arteries was measured by duplex and B-mode ultrasound using a SonoScape S6 ultrasound with a 6-12 MHz linear transducer. IMC measurements were taken at several points approximately 1 cm proximal to the common carotid sinus. The IMC thickness result is presented as the average of the measurements taken. In addition, the maximum systolic velocity in the tested vessels was determined [31].
The study was approved by the Bioethics Committee of University Medical Sciences in Poznan (Resolution No. 1082/15 of 3 December 2015) [32]. All study participants provided written informed consent and were recruited in 2015-2018 in the Adult Psychiatry Clinic by the Department of Psychiatry, University of Medical Sciences in Poznan.
Biochemical Analysis
The blood samples were collected from the patients twice: upon admission to the hospital and after six weeks of treatment, in order to assess the concentration of adipokines in the blood serum. In the exacerbation state, we measured concentrations of visfatin (VIS), adiponectin (ADIPO), S100B, interleukin-6 (IL-6), leptin (LEP), leptin-receptor (LEP_R), and resistin (RES). After six weeks of treatment only ADIPO, VIS, and S100B were measured. Venous blood was collected into EDTA tubes and centrifuged at 1000× g for 15 min at 4 °C to obtain serum samples, aliquoted into Eppendorf tubes, and stored at −80 °C. Commercial Enzyme-Linked Immunosorbent Assay (ELISA) tests (Table 2) were used to quantify the selected proteins in the blood serum. Optical density was read with a spectrophotometric plate reader (Asys UVM 340 Microplate Reader from Biochrom Ltd., Cambridge, UK) at a wavelength of 450 nm ± 10 nm. A four-parameter algorithm (four-parameter logistic curve) was used to assay the concentration in the tested samples. All samples and standards were run in duplicates, and the mean value of the two assays was used for statistical evaluation. All ELISA tests were performed according to the manufacturer's instructions without any modifications.
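The four-parameter logistic (4PL) step can be made concrete with a short sketch. The plate-reader software implements its own fit; the version below, with hypothetical standard concentrations, optical densities, and starting values, only illustrates the idea.

```python
# Minimal sketch of a 4PL ELISA standard-curve fit (hypothetical standards).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = lower asymptote, b = slope factor,
    c = inflection point (EC50), d = upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # ng/mL (hypothetical)
std_od = np.array([0.05, 0.12, 0.35, 0.90, 1.60, 2.05])  # OD at 450 nm

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.0, 1.0, 3.0, 2.2])
a, b, c, d = params

def od_to_conc(od):
    """Invert the fitted 4PL curve to estimate a sample concentration."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

# In practice the mean OD of duplicate wells would be converted here.
print(od_to_conc(0.50))
```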
Statistical Analysis
Statistical analyses were performed using STATISTICA 13.3 (StatSoft, Krakow, Poland). The significance level p < 0.05 was adopted for all analyses. The distribution of variables was studied by the Shapiro-Wilk test. Variables with normal distribution were tested using parametric tests (Student's t-test and ANOVA with Tukey's post-hoc test). Variables that did not meet the normal distribution criteria were tested using nonparametric tests (Mann-Whitney U test, Wilcoxon pair order test, ANOVA rank Kruskal-Wallis test, Chi-square test, and Friedman ANOVA with Kendall Concordance). Spearman's rank correlation coefficient was applied to assess the relationship between the analyzed variables. The ROC curve was used to test the diagnostic ability of the ADIPO/LEP ratio.
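The analyses were run in STATISTICA, but the test-selection logic described above (a normality screen, then a parametric or nonparametric comparison, plus Spearman correlation) can be illustrated with an analogous SciPy sketch on hypothetical data.

```python
# Minimal sketch of the normality-driven test selection (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(0.0, 0.5, 30)   # e.g. an adipokine level, with MS
group_b = rng.lognormal(0.3, 0.5, 30)   # e.g. the same adipokine, without MS

def compare_groups(a, b, alpha=0.05):
    # Use Student's t-test only if both groups pass the Shapiro-Wilk test.
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)

name, result = compare_groups(group_a, group_b)
print(name, result.pvalue)

# Spearman's rank correlation between two variables (e.g. ADIPO/LEP vs BMI).
rho, p = stats.spearmanr(group_a, group_b)
```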
Results
Spearman's rank correlation coefficient was applied to assess the influence of gender on the studied variables. Seven of the examined variables (hypothyroidism, waist circumference, HDL, ADIPO, S100B, LEP, and LEP_R) correlated with sex; therefore, we decided to study the women and men groups separately. Depression symptoms upon admission to the hospital in an acute state of illness (exacerbation) and after six weeks of treatment were assessed using the 17-item version of the Hamilton Depression Rating Scale (HAMD) [27]. Hamilton's total score was higher in exacerbation in both sexes (Table 3). Thirty-five patients (23 women and 12 men) did not achieve remission (score ≤ 7) [33], but they had a 25-56% reduction in the total score. The total number of antidepressant drugs taken correlated positively with the Hamilton total score (R = 0.3633, p = 0.0195).
Because insulin resistance and obesity play a crucial role in MS pathogenesis, we conducted a statistical analysis of metabolic syndrome and variables such as insulin levels, BMI, and metformin treatment. Women with MS had higher insulin levels (p = 0.0197) and higher BMI values (p = 0.007) than women without MS. Unfortunately, these observations were not confirmed in the group of men, probably due to the small study group. We have also tested the thickness and maximum systolic velocity of the intima-media complex in the common carotid arteries, but no statistically significant results were obtained in either sex (though these were correlated with age, duration of disease, and waist circumference).

[Table 3 note] Abbreviations: HAMD-Hamilton Depression Rating Scale; VIS-visfatin; ADIPO-adiponectin; E-an exacerbation; 6-after six weeks of treatment; SD-standard deviation; ns-non-significant values. Student's t-test was applied only to HAMD; all other parameters were calculated using the Wilcoxon pair order test, with a significance level of p < 0.05.

Pro-inflammatory adipokines have been tested in the context of obesity, metabolic syndrome, and metformin treatment, as well as in relation to the mental state of patients. Obese women had statistically significant elevated levels of S100B and LEP and lower LEP_R levels in exacerbation (Table 4). Women with MS had statistically significant differences in the level of adipokines (VIS, ADIPO, S100B, LEP, and LEP_R) in exacerbation compared with women without MS. The VIS, S100B, and LEP levels were increased, while ADIPO and LEP_R concentrations were reduced. Similar results were obtained when comparing three groups: with and without MS, and MS metformin-treated. Post-hoc tests showed significant differences not only between patients with and without MS, but also between MS and MS treated with metformin (Table 4). We also compared the ADIPO/LEP ratio; women without MS (mean ± SD 1.1 ± 0.95) had a significantly (Z = 3.90, p = 0.0001) higher ratio than women with MS (mean ± SD 0.4 ± 0.76). Moreover, the ADIPO/LEP ratio showed moderate to strong negative correlations with insulin level, BMI, waist circumference, TG level, and metformin treatment, and a positive moderate correlation with HDL concentrations. We conducted the ROC curve analysis for the ADIPO/LEP ratio regarding MS (Supplementary Table S2). The result was statistically significant (AUC = 0.846, p < 0.0001), with the cut-off point at 0.31 (Youden index 0.64). The sensitivity and specificity were 76.5% and 87.5%. In the case of our data, this meant two people without MS were classified as ill, and eight ill people were classified as healthy. We believe that 23.5% of misclassified ill people is too high a percentage, so we decided to set a better-fitting cut-off point. In relation to our data, we selected a 0.48 cut-off point (Youden index 0.599) with 91.2% sensitivity and 68.8% specificity. In the analysis between the level of adipokines and the mental state of the patients, only the level of visfatin changed (decreased) significantly (Table 3). In contrast to women, there were no changes in the level of adipokines in the group of men in the context of obesity, metabolic syndrome, or metformin treatment (probably due to the small study group). However, the ADIPO/LEP ratio showed a strong negative correlation with BMI and waist circumference (R = −0.6568, p = 0.0057 and R = −0.6139, p = 0.0088), but no correlation with MS was observed. Interestingly, concerning the mental state of the patients, changes were observed in the level of
three adipokines (VIS, ADIPO, S100B), not just one, as was the case in the group of women (Table 3). As in the women's group, the visfatin level decreased, while ADIPO and S100B levels increased after six weeks of treatment.
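The ROC cut-off selection described above can be sketched as follows. The group sizes (34 with MS, 16 without) are inferred from the reported misclassification counts, the adipokine values are invented, and the original analysis used STATISTICA rather than scikit-learn.

```python
# Minimal sketch of ROC analysis with a Youden-index cut-off
# (J = sensitivity + specificity - 1), on hypothetical data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
y_true = np.r_[np.ones(34), np.zeros(16)]                 # 1 = metabolic syndrome
ratio = np.r_[rng.normal(0.4, 0.3, 34), rng.normal(1.1, 0.5, 16)]

# A lower ADIPO/LEP ratio indicates MS, so score with the negated ratio.
fpr, tpr, thresholds = roc_curve(y_true, -ratio)
auc = roc_auc_score(y_true, -ratio)

youden = tpr - fpr
best = np.argmax(youden)
print(f"AUC = {auc:.3f}, cut-off = {-thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```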
Spearman's rank correlation coefficient was applied to assess the influence of neurobiological parameters (clinical, anthropometric, biochemical, adipokines, and ultrasound of carotid arteries) on the development of MS in patients with bipolar affective disorder. In women, thirteen parameters were significantly correlated with MS (Table 5). A positive, strong correlation was observed with metformin treatment, whereas other parameters showed moderate (TG, insulin, BMI, waist circumference, S100B, and LEP) and weak (hypothyroidism and VIS_E) positive correlations (Table 5). A negative, weak correlation was also observed for three parameters: HDL, ADIPO_E, and LEP_R (Table 5). In the men's group, only metformin treatment was significantly correlated with MS (R Spearman = 0.8819, p < 0.0001).
Discussion
In our study, not all patients achieved remission, but they showed an improvement, defined as a 20-30% reduction in the total scores of HAMD [26]. Bipolar depression is challenging to treat, and drugs should be administered wisely to avoid mood switches in patients [34][35][36]. Antidepressant therapy is associated with an increased risk of mania or hypomania, so it should be taken with mood stabilizers and/or antipsychotic medications [35].
In our study, women suffering from MS had higher insulin levels, BMI, and waist circumference than patients without MS. This is unsurprising because one of the criteria of MS is overweight (mainly abdominal), whereas insulin resistance plays a crucial role in MS pathogenesis [1,3]. BP patients have a higher MS prevalence than the general population [20][21][22]. Most psychiatric patients have at least one metabolic disorder [22,37]. Beyond other factors like genetics, physical inactivity, unhealthy diet, and addictions, medications, especially antipsychotic drugs, have well-established weight gain side effects [20,21]. In clinical practice, clozapine and olanzapine are associated with a higher risk of MS [20,21,38,39], quetiapine and risperidone cause moderate alterations [39], whereas aripiprazole has little effect on body weight [21,38]. Unfortunately, we could not assess the effect of treatment on BMI because patients were weighed only at the time of admission to the hospital. Therefore, it is important to balance the potential benefits and harms in bipolar depression treatment, especially long-term treatment, because high doses or multiple medications can be associated with harmful metabolic consequences [21,39]. Interestingly, there appears to be a correlation between the higher clinical effectiveness of atypical antipsychotics and the increased risk of metabolic alterations [39].
The following adipokines correlated with MS in the women's group: VIS, S100B, and LEP correlated positively, whereas ADIPO, LEP_R, and the ADIPO/LEP ratio correlated negatively. It has been proven that leptin concentrations are significantly increased in obesity [4,5,40]. The concept of "leptin resistance" was proposed to explain this phenomenon [8]. It posits that tissues have decreased sensitivity to leptin, so higher leptin levels are needed to correct the metabolic imbalance in obesity [3]. Leptin binds to and activates its transmembrane receptor, the LEP-R, which plays a crucial role in regulating body mass via a negative feedback mechanism between adipose tissue and the hypothalamus [8]. Similar to our observations, higher leptin and lower leptin-receptor concentrations in obese patients were detected in the study by Koch et al. [41].
Decreased ADIPO levels in patients with obesity [40], coronary heart disease, diabetes, and hypertension demonstrate a high tendency to develop MS [3]. Moreover, patients with BP during a depression episode showed decreased levels of ADIPO [42][43][44]. Obesity and MS are characterized by increased leptin and decreased adiponectin concentrations [6,45]. Therefore, the ADIPO/LEP ratio has been suggested as a marker of adipose tissue dysfunction [6,45]. In our study, MS patients reached significantly lower values of the ADIPO/LEP ratio than patients without MS, which is consistent with the study conducted by Frühbeck et al. [45]. Moreover, the ADIPO/LEP ratio was negatively correlated with BMI and waist circumference, which is consistent with a previous study [46]. In our study, insulin level was significantly correlated with the ADIPO/LEP ratio. In the literature, it has been claimed that the ADIPO/LEP ratio correlates with insulin resistance better than adiponectin or leptin alone [6]. In our ROC curve analysis, the statistical software proposed a 0.31 cut-off point with 76.5% sensitivity and 87.5% specificity. The 23.5% of misclassified ill people is too high a percentage, so we chose a better-fitting cut-off point of 0.48, with 91.2% sensitivity and 68.8% specificity. At that point, the specificity is lower, but we believe it is better to order more detailed diagnostics for a healthy person than to miss an ill person. The ADIPO/LEP ratio has been proposed as a predictive marker for MS with a cut-off point lower than 0.5 [45].
Visfatin is secreted by adipose and visceral adipose tissue [7], skeletal muscle, liver, and lymphocytes [40]. This cytokine acts like insulin by binding to insulin receptors, increasing glucose uptake [10,47]. The serum visfatin level correlates with the BMI, waist circumference, and insulin resistance index [47]. A meta-analysis study showed a significant increase in visfatin serum concentration in overweight/obese participants compared with normal BMI participants and also in type 2 diabetes mellitus participants compared with the control group [48]. The visfatin serum concentration was also higher in MS patients [49], consistent with our study.
The S100B, which is expressed in adipose tissue [15,16], has been associated with the pathophysiology of obesity-promoting macrophage-based inflammation [17,50]. The serum level of S100B correlates with insulin resistance, metabolic risk score, and fat cell size [17]. In studies conducted on mice [50], plasma and white adipose tissue S100B levels were increased by diet-induced obesity. Also, in a human study, serum levels of S100B positively correlate with BMI; S100B levels in obesity were significantly higher than in overweight and normal weight subjects [51]. In another study, participants with MS had a significantly higher level of S100B than the control group [19]. Moreover, serum levels of S100B were positively correlated with abdominal obesity and triglyceride serum levels [19]. In our study, BP patients with obesity and MS had a higher serum level of S100B in exacerbation than BP patients with normal BMI and without MS. Our results show, on the one hand, the relationship of S100B levels with obesity and MS, and on the other hand, with the mental state of patients. A meta-analysis showed elevated levels of serum S100B in patients with affective disorders (depression and mania) compared with the control group [52,53]. The same relationship was also observed in drug-naïve adolescents diagnosed with first-episode unipolar major depression [54].
Conclusions
We partially confirmed our first research hypothesis: 'Depending on coexisting MS, patients differ in neurobiological parameters and the level of adipokines tested'. This statement is correct for the women's group. The second research hypothesis, 'Differences in neurobiological parameters and the level of tested adipokines depend on the state of the disease (exacerbation vs. improvement)', is correct in both the women's and men's groups, but for a different set of proteins.
In this study, we showed that in bipolar depression, adipokines correlated with MS in the women's group: VIS, S100B, and LEP had a positive correlation, whereas ADIPO, LEP_R, and the ADIPO/LEP ratio showed a negative correlation.
Table 1 .
Characteristics of the bipolar patient group by gender.
Table 2 .
Parameters of commercial ELISA tests used to assess concentrations of adipokines in the serum.
Table 3 .
Clinical and biochemical parameters were analyzed at two time points: upon admission to the hospital in an acute state of illness (exacerbation) and after six weeks of treatment.
Table 4 .
The comparison of biochemical parameters concerns three variables: obesity, metabolic syndrome, and metformin treatment in the women's group.
Abbreviations: VIS-visfatin; ADIPO-adiponectin; Il 6-Interleukin 6; LEP-leptin; LEP_R-receptor for leptin; RES-resistin; E-exacerbation; 6-after six weeks of treatment; W-patients without metabolic syndrome; MSMT-patients with metabolic syndrome with metformin treatment; MSnT-patients with metabolic syndrome not treated with metformin; ns-non-significant values; significance level p < 0.05. Comparison between normal weight and obesity (Obesity column in the table), patients with and without metabolic syndrome (Metabolic syndrome column in the table), and post-hoc analysis were conducted using the Mann-Whitney U test. Comparison between three groups (patients without and with metabolic syndrome, and patients with metabolic syndrome treated with metformin) was conducted using the ANOVA rank Kruskal-Wallis test (Metformin treatment column in the table).
Table 5 .
Spearman's rank correlation coefficient and descriptive statistics for female bipolar patients with and without metabolic syndrome. | 2023-10-28T15:04:19.918Z | 2023-10-25T00:00:00.000 | {
"year": 2023,
"sha1": "b7a0c0204b75b2f2f949403a62034d4282f64557",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/15/21/4532/pdf?version=1698295995",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6db6ace2b9432650349cc605eef9c3f8b07b5ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
119207423 | pes2o/s2orc | v3-fos-license | Differences at low l in Planck's first light sky map of the cosmic microwave background from WMAP's and COBE's
The recent release of the first light sky map of the cosmic microwave background (CMB) from the Planck satellite provides an initial opportunity for comparison with the WMAP and COBE sky maps and their reconstruction algorithms. The precision of the match between Planck's and WMAP's anisotropies below several degrees in size, which corresponds to spherical harmonics with high l, provides confidence that the differences between the anisotropies at low l are substantial. If the Planck first light sky map is taken as the gold standard, the results seem to suggest the low l components of the WMAP map and a considerable part of the COBE sky map have a similar reconstruction artefact. As the Planck first light sky map covers only about 10% of the sky, any conclusions drawn from this comparison are speculative but deserving of further investigation.
Introduction. -Imaging of the brightness of the whole sky between about 40 GHz and 400 GHz is believed to provide a "baby picture" of the universe. Referred to as the cosmic microwave background (CMB), variations in the intensity of the CMB with direction in the sky are believed to be due to the structure of the universe when it was about 400,000 years old. As these variations, which are referred to as anisotropies, are only about 1 part in 100,000 of the intensity of the CMB, extremely stable and reliable instrumentation and processing are required to detect them with high confidence.
The release of the Planck first light (PFL) sky map by the Planck team on Sept 17, 2009 was in the form of a single image covering about 10% of the sky. While the release of the image seemed to be intended primarily for publicity reasons, the high quality of the image allowed for a cautious comparison with the WMAP and COBE sky maps. While the acquisition frequency of the sky map was not provided in the release, the image's minimal foreground suggests it was acquired at about 143 GHz. As the PFL sky map only covers a fraction of the sky and is only a single image, any conclusions drawn from the sky map must be considered speculative. However, as the full formal release of Planck's first year data is not due for another year or two, cautious comparison seems worthwhile.
The first satellite missions to report the detections of the anisotropies were COBE [SmootGF1992, BennettCL1996] and WMAP [BennettCL2003, HinshawG2003, HinshawG2009]. Neither satellite measured the sky maps directly. Instead, to generate sky maps of the whole sky, each satellite measured the difference between two points in the sky many times per second while it was scanned over a large region of the sky. These measurements were made at multiple frequencies and polarisations simultaneously. To generate a map of the whole sky, observations from at least 1 year of measurements were reconstructed into a sky map. A sky map was generated for each frequency/polarisation pair for each satellite.
A great deal of effort went into the design of both the COBE and WMAP satellites to ensure the reliability of any detection of anisotropies. The measurements were designed such that for an anisotropy to be considered detected it must show up in all of the frequency bands and in both polarisations. Launched about a decade after COBE, WMAP had more sensitive instrumentation which allowed it to acquire sky maps with 3,145,728 pixels over the whole sky as compared to COBE's 6,144. In the series of papers reporting the initial WMAP results, the smoothed versions of the WMAP sky maps, which corresponded to spherical harmonics at low l, were shown to correspond well with the COBE sky maps [Hinshaw2003].
The COBE and WMAP image reconstruction algorithms were designed and implemented by very competent and well funded researchers [SmootG1992, BennettCL1996, BennettCL2003, HinshawG2003, HinshawG2009]. Thus any major problems with the image reconstruction of the sky maps are likely to primarily reflect problems with the current state of the art in the design and assessment of image reconstruction algorithms.
A common feature of the Planck, WMAP and COBE satellites was that all three made simultaneous measurements at multiple frequencies and polarisations. However, the design of the Planck satellite is different from WMAP's and COBE's in several ways.
First, Planck measures the sky at a single point as opposed to WMAP's and COBE's differential measurements between two points. Second, Planck repeats the measurement of a ring on the sky every minute for 1 hour. It then moves onto the next ring. Planck's trivially simple image reconstruction for each ring is to average all the revolutions together into a single ring. Thus, each point in the sky will be measured 60 times during one hour and the 60 measurements will be averaged together. Therefore, within 1 hour, Planck has a reliable measurement of the intensity of the CMB over a single ring. It takes Planck 6 months to scan the whole sky.
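A minimal sketch of this ring averaging follows; the number of samples per revolution and the noise level are hypothetical, and the toy sine wave merely stands in for the sky signal along the ring.

```python
# Minimal sketch of Planck-style ring averaging (hypothetical numbers).
import numpy as np

n_revolutions = 60          # one revolution per minute over the 1-hour ring
n_samples_per_rev = 5400    # hypothetical samples per revolution

rng = np.random.default_rng(3)
sky_ring = np.sin(np.linspace(0, 2 * np.pi, n_samples_per_rev))  # toy signal
tod = sky_ring + rng.normal(0, 1.0, (n_revolutions, n_samples_per_rev))

ring_estimate = tod.mean(axis=0)          # average the 60 revolutions together
print(np.std(ring_estimate - sky_ring))   # noise drops by roughly sqrt(60)
```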
Several papers have raised questions about the reliability of the WMAP sky maps at low l based on unusual properties of the sky maps. The improbable alignment of the quadrupole and octupole of the sky maps, particularly with the earth's orbit around the sun, has been discussed extensively in the literature [Oliveria-CostaA2004, SchwarzDJ2004, LandK2007, LiuH2009]. These alignments are commonly referred to by the colourful term the "Axis of Evil". Another paper noted a puzzling correlation between the large-scale non-Gaussian patterns in the CMB and WMAP's observation numbers [LiTP2009].
Cover [CoverKS2009] also found a perplexing property of the image reconstruction used for the official WMAP sky maps. It was found that for each of the sky maps from WMAP's 20 channels, sky maps with no anisotropies were a better fit to the uncalibrated time ordered data (TOD) than WMAP's official sky maps. In this calculation WMAP's calibration parameters for each channel were allowed to vary. This result raised the possibility that there was something amiss with the calibration of the WMAP measurements that had a substantial impact on the official sky maps.
Methods. -Three sky maps were used for the analysis presented in this paper. The first was the sky map at 94 GHz from the WMAP 5 year analysis smoothed to 20 arc minutes. The second image was the same as the first, but with the PFL image overlaid in the regions of the WMAP sky map where the PFL data was available. The PFL data had also been smoothed to 20 arc minutes. The third image was the COBE sky map. It was the result of a combination of the sky maps from all of COBE's polarisations and frequency bands over COBE's full 4 years of observations. All three of the images used in this analysis were obtained from the slide show presented by Gary Hinshaw of the WMAP team at the Bielefeld International workshop on Cosmic Structure and Evolution (Sept 23-25, 2009, Bielefeld, Germany). As part of this analysis, the validity of the comparison between the PFL image and the WMAP sky maps was double checked. The WMAP team had overlaid the PFL image on the WMAP sky map at 94 GHz. Visual comparison of the WMAP-only and WMAP/Planck greyscale images indicated that anisotropies composed of spherical harmonics with high l were in very good agreement between WMAP and the PFL. The slide show is available online at http://www.physik.uni-bielefeld.de/igs/cosmology2009/cosmic-ws09.html.
The first step of this analysis was to convert each of the 3 images from false colour to greyscale. Greyscale images are routinely used in medical imaging as colour sometimes de-emphasises important structures in images. The next step was to subtract the WMAP/Planck sky map from the WMAP only sky map. Figure 1 shows the WMAP only, WMAP/Planck and the difference image sky maps. The shape of the PFL sky map, which is limited to about 10% of the sky, determines the shape of the difference sky map.
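A minimal sketch of these two steps using NumPy and Pillow follows; the file names are hypothetical placeholders for raster copies of the published figures.

```python
# Minimal sketch of greyscale conversion and map subtraction
# (hypothetical file names).
import numpy as np
from PIL import Image

wmap_only = np.asarray(Image.open("wmap_94ghz.png").convert("L"), dtype=float)
wmap_planck = np.asarray(Image.open("wmap_planck_overlay.png").convert("L"),
                         dtype=float)

# The difference is non-zero only where Planck first-light pixels were overlaid.
difference = wmap_only - wmap_planck
```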
A scatter plot of the Planck pixel values versus the difference pixel values can provide valuable insight into the quality of the difference image. A scatter plot of the values of the pixels for the PFL sky map versus the difference sky maps is shown in Fig 2. The log of the pixel count is plotted for each Planck/difference pixel pair.
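A minimal sketch of such a plot as a two-dimensional histogram with log counts follows; the pixel arrays are hypothetical and generated uncorrelated, mimicking the reported result.

```python
# Minimal sketch of a log-count scatter plot as a 2D histogram
# (hypothetical pixel values).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
planck_pixels = rng.normal(128, 30, 100_000)
difference_pixels = rng.normal(0, 15, 100_000)   # uncorrelated by construction

counts, xedges, yedges = np.histogram2d(planck_pixels, difference_pixels,
                                        bins=100)
plt.imshow(np.log1p(counts).T, origin="lower",
           extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
           aspect="auto")
plt.xlabel("Planck pixel value")
plt.ylabel("Difference pixel value")
plt.show()
```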
The difference sky map was compared to the COBE sky map. The greyscale version of the COBE sky map is shown in Fig. 3(a). Before overlaying the difference sky map on the COBE sky map, it underwent two processing steps. First it was smoothed with a Gaussian blurring filter with a radius of 10 WMAP pixels. Second, it was multiplied by a gain and had a baseline added. The gain and baseline were chosen based on visual comparison with the COBE sky map. The COBE sky map with the difference sky map overlaid is shown in Fig. 3(b).
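A minimal sketch of the smoothing, gain, and baseline steps follows; the sigma value stands in for the stated 10-pixel radius, and the gain and baseline numbers are placeholders for the visually chosen values.

```python
# Minimal sketch of the two processing steps (placeholder values).
import numpy as np
from scipy.ndimage import gaussian_filter

difference = np.zeros((512, 1024))          # stand-in for the difference map
smoothed = gaussian_filter(difference, sigma=10)  # sigma ~ 10-pixel radius
gain, baseline = 2.0, 30.0                  # chosen by eye against COBE map
overlay = gain * smoothed + baseline
```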
Results. -Visual comparison of the WMAP (Fig 1a) and Planck sky map (Fig 1b) shows little obvious difference. However, careful inspection, while showing agreement at high l, suggests a difference at low l. The difference image (Fig 1c) shows primarily smoothly varying structures. This implies that for high l, the WMAP and Planck sky maps are closely matched and well calibrated. However, there are substantial differences at low l between the WMAP and Planck sky maps.
The scatter plot in Fig. 2 shows the amplitude of the difference sky map is about half of the anisotropies in the Planck sky map. Also, the scatter plot shows little correlation between the Planck and difference sky maps.
More careful examination of Fig. 1(c) shows the difference sky map to be primarily bright in the northern hemisphere and dark in the two regions of the southern hemisphere. Examination of Fig. 3(b) shows these bright and dark regions generally align with the corresponding bright and dark regions on the COBE sky map. More careful examination of each of these three regions shows a rough correlation of the variation of intensity within each of the regions with the corresponding locations on the COBE sky maps.
Discussion. -The availability of the PFL sky map provides an initial opportunity to assess the performance of the image reconstruction used to calculate the WMAP and COBE sky maps. However, it must always be kept in mind that the PFL sky map was released primarily for publicity reasons. As a consequence, the quality of the sky map is substantially smoothed from that available to the Planck team and virtually no supporting documentation is available for the image other than that generally available for Planck.
The most important characteristic of the difference image (Fig 1c) is what it does not have. The WMAP sky map clearly shows a texture on a scale of about 1 degree that corresponds to the peak of the WMAP power spectrum. The amplitude of this texture on the difference image is much smaller, indicating the PFL image has a very similar texture and calibration as the WMAP sky map. The difference image does have a fine grain speckle that is close to the pixel size of the image. This is likely due to slight registration problems between the WMAP and PFL images.
The scatter graph shown in Fig. 2 gives an indication of the quality of the PFL sky map. It shows no correlation between the PFL values and the pixel values of the difference image. Thus, it is unlikely the difference image is due to an artefact in the PFL. This result supports the working hypothesis that the PFL sky map is of sufficient quality for some initial analysis.
If the scatter plot showed any correlation between the PFL and difference images there would be cause for concern that the difference image is some artefact of the PFL. For example, if the values of the pixels in the difference image increased with the pixel values in the PFL this would suggest the difference image contained some information from the PFL image. This additional information could have been due to, for example, reversing the colour coding of the published PFL. Since there is no correlation, this suggests the information in the difference image had little to do with the PFL image. While the scatter plot does not completely rule out the possibility that the difference image is an artefact in the PFL, it provides reassurance that this is not the case.
The initial impression that the WMAP sky map minus the PFL sky map seems to be roughly equal to the COBE sky maps suggests there is something wrong at low l in one or more of the sky maps. When combined with the fact that the WMAP and COBE sky maps match up at low l [HinshawG2003], the results seem to be at odds with the widely held belief that both WMAP and COBE at low l are reliable sky maps of the CMB.
Planck's relatively simple scan pattern and image reconstruction make calibration of the Planck data simpler than WMAP's or COBE's. Because of the averaging over each 1 hour ring, the calibration over the averaged ring can be characterized by two parameters: a gain and a baseline. However, precise calibration between rings can be more complicated. Nevertheless, Planck's simple scan pattern should measure both high and low l spherical harmonics with equal reliability.
In contrast, each year of TOD for each of WMAP's 20 channels requires 2 parameters for every hour of TOD. Thus, a total of 17,532 parameters (2 parameters/hour × 24 hours × 365.25 days) are required per channel-year, the minimum number of WMAP calibration parameters required to generate a sky map. Therefore, as mentioned above, reliable calculation of the WMAP calibration parameters is the more challenging problem. The WMAP calibration issue may yield sky maps that have differing sensitivities to various spherical harmonics.
As outlined in the introduction, there have been concerns expressed about the accuracy of the WMAP sky map at low l. From an image interpretation point of view, the most widely discussed concerns have focused on the improbable alignment of the quadrupole and octupole of WMAP's sky maps, including with the earth's orbit around the sun [Oliveria-CostaA2004, SchwarzDJ2004, LandK2007, LiuH2009].
Examining the image reconstruction used in WMAP, Cover [CoverKS2009] found the perplexing result that no anisotropies were a better fit to the TOD than the official sky maps for each of the 20 channels when the calibration parameters were allowed to vary. The form of the analysis used was not implemented in a way to determine if the problem was concentrated at low or high l. But a reanalysis only constraining low l or high l harmonics to zero could provide useful insight into this issue.
One possible scenario that appears to account for the difference between Planck and WMAP-COBE at low l is suggested by the Axis of Evil. The alignment of the Axis of Evil with the earth's orbit around the sun suggests it could be due to a reconstruction artefact. Imperfect calibration of the WMAP TOD could allow some of the Doppler shifted CMB due to WMAP's orbit around the sun to contaminate the sky maps [CoverKS2009]. As WMAP's orbit tracks the earth's, it would explain the Axis of Evil's improbable alignments. As the error in the calibration is at most a few percentage points, any artefact might be a perturbation and thus added to the CMB true signal [HinshawG2003,CoverKS2009].
If Planck is considered to have measured the true signal and WMAP is considered the true signal plus artefact, then WMAP minus Planck would yield the artefact. This suggests the difference sky map is a good estimate of the artefact. The match of the difference image with the COBE sky map would then suggest a considerable part of the COBE sky is also a reconstruction artefact. Under this scenario, the similarity between WMAP and COBE at low l [HinshawG2003] is consistent with both having similar artefacts. But how is it possible that WMAP and COBE could have similar reconstruction artefacts?
The fact that there is a substantial difference at low l between the Planck and WMAP-COBE sky maps suggests the possibility that there is far less information in the WMAP and COBE TOD about the low l anisotropies than previously realised. This does not seem to be a problem at high l as Planck and WMAP give the same results about high l anisotropies. This is likely because they both had the same information about the CMB in their TOD at high l even though they used very different scanning patterns.
Both WMAP and COBE used a sophisticated scanning pattern and image reconstruction algorithm. The scanning patterns of each of the satellites only concentrated on one region of the sky at once and then gradually moved to the next. The measurements from the regions of the whole sky were then stitched together during the image reconstruction. This might have left ambiguity in the TOD with regards to the low l sky. In other words, a wide range of low l sky maps may have been consistent with the uncalibrated TOD. The process of selecting the calibration coefficients may have actually substantially constrained the range of low l sky consistent with the TOD. So a much smaller range of low l sky maps may have been consistent with the calibrated TOD than the uncalibrated TOD. For WMAP and COBE to yield similar but incorrect sky maps at low l would require them to introduce similar biases during their image reconstruction, possibly via the choice of the calibration parameters.
How can we test the hypothesis that there is far less information in WMAP and COBE TOD about the low l sky than previously realized? Cover [CoverKS2009] proposed the criterion of asking how consistent WMAP's TOD were with sky maps containing no anisotropies while allowing WMAP's calibration parameters to vary. This was the first time the impact that WMAP's calibration parameters had on the WMAP sky maps was discussed in the literature other than by the WMAP team. After applying the proposed criterion, Cover found that, for each of WMAP's 20 channels, a sky map that contained no anisotropies, but did include the dipole, was a better fit to the TOD than the official WMAP sky map. This finding suggests the large low l differences between WMAP and Planck may be due to a problem with the calibration of WMAP's TOD. This hypothesis could be tested by a more detailed application of Cover's reanalysis.
Cover's proposed criterion constrained the anisotropies to zero for all l spherical harmonics and then allowed the calibration parameters to vary. The reanalysis could be repeated by first constraining the amplitude of the high l spherical harmonics to zero but allowing the low l harmonics, as well as the calibration parameters, to vary. If there is little information in the TOD about the low l anisotropies, allowing the low l spherical harmonics to vary will do little to improve the fit to the TOD as compared to no anisotropies. In contrast, constraining the low l harmonics to zero while allowing the high l harmonics to vary should substantially improve the fit.
The details of the spherical harmonics of the CMB obtained from the sky maps are believed to tell much about the early universe. It would be desirable to be able to extract more information about the spherical harmonics from PFL image. However, the limited fraction of the sky covered, combined with the degradation of the data due to its presentation as a PFL image, would pose barriers to reliable values.
As the PFL image was released primarily for publicity reasons it is important to consider other possible causes of the difference at low l other than WMAP and COBE having a similar reconstruction artefact. The precise match at high l strongly suggests the PFL image and the WMAP sky maps were properly scaled for comparison. The histogram in Fig. 2 suggests the difference image is independent of the PFL image, apparently ruling out the colour display of the PFL image as a source of the difference. The possibility that Planck has poor measurement abilities at low l is judged extremely unlikely based on the care that went into the design of the satellite. Thus, given the limited information available about the PFL image, the best guess for the difference at low l is judged to be a real difference between the true CMB and the WMAP and COBE sky maps at low l.
In light of the results of the analysis presented in this paper, the substantial differences between the WMAP and PFL sky maps at low l need to be studied carefully for the rest of the sky. Also, the match between the COBE sky map and the difference between the WMAP and PFL sky maps needs to be confirmed over the rest of the sky. As only about 10% of the sky was covered by the PFL image, and given the publicity nature of the release of the image, the results of this study must be considered speculative. Fig. 1: (a) Official WMAP sky at 94 GHz (b) The same sky map as (a) but with pixels replaced by those from Planck's first light sky map where available (c) difference between the WMAP and WMAP/Planck sky maps. The shape of the non-zero pixels in (c) corresponds to the region of Planck's first light image. All three images have the same brightness scale. The colour versions of (a) and (b) were provided by WMAP/NASA. | 2019-04-12T20:07:30.998Z | 2009-10-27T00:00:00.000 | {
"year": 2009,
"sha1": "418aaf18890c6c1781b26e634063caecf8249232",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a85e3d830822c7424b1c31dc95b29bceae26ebf4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4145278 | pes2o/s2orc | v3-fos-license | Maternal autonomy and birth registration in India: Who gets counted?
This paper examines the effect of maternal socio-economic status in the household, such as autonomy, ability, freedom and bargaining power, on child birth registration in India using the nationally representative India Human Development Survey-II (IHDS-II), 2011–12. We have estimated a multilevel mixed effects model which accounts for the hierarchical structure of the data and allows us to examine the effects of unobserved 'district' and 'community' characteristics along with individual child level characteristics on the probability of birth registration. The results show that between-district and between-individual differences account for considerably high and almost equal proportions of the variation in the probability of birth registration in India. At the individual child level, mother's socio-economic status, such as her ability to move around independently and her exposure to the outside world, significantly raises the probability of birth registration. More importantly, the marginal effects of the maternal autonomy indicators, mother's ability to move around freely and her control over resources, on birth registration vary across districts in India. Other variables such as institutional birth, mother's antenatal care seeking behaviour, caste, religion, household wealth and parental education are significant determinants of birth registration.
Introduction
Civil Registration and Vital Statistics (CRVS) systems have renewed momentum in the new Sustainable Development Goals (SDG) guidelines. CRVS is both a target in its own right and fundamental for maternal and child health, social inclusion, access to education and health services [1]. Civil registration is the way by which countries keep a continuous and complete record of births and deaths. It is important at a national and state level for policy and planning purposes. The World Health Organisation (WHO) considers it the most reliable source of statistics. It is the birthright of each child to be registered and issued with a birth certificate [2]. Globally 35% of births go unaccounted for in registration [3]. The issue has been highlighted as a scandal of invisibility where most people born in Africa and Asia die without leaving a trace in any legal record or official statistic [4]. Over the past decade there has been a [4][5][6][7]. . .. . . to Counting Births and Deaths [3,[8][9][10] that established the CRVS system as a necessary component of SDG.
Globally 230 million children under the age of five have never been recorded. More than half (59%) live in Asia, and an estimated 71 million, one in three, live in India. Many barriers prevent people from registering births and deaths. There are countries that do not have the necessary system in place to make birth and death registration mandatory, whereas in other countries only urban people have access to registration services. India has started its own CRVS improvement initiatives and introduced the requisite legislative and administrative reforms to improve civil registration. As a result, birth registration coverage increased from 60% in 2001 to more than 80% in 2010, but the process is still incomplete. Over the last two decades, there has been significant emphasis on promoting access to Maternal and Child Health (MCH) services in India while similar emphasis on completing birth registration has been lacking. In the SDG era, where the goal is to promote access and equitable health for all through Universal Health Coverage (UHC), universal birth registration needs to be prioritised. Fagernas and Odame (2013) [11] note that birth registration systems would be useful in tracking progress towards health-related goals.
There is little empirical research so far to identify, at the individual birth and death registration level, what exactly hinders the registration process in India. This paper is an important first step that examines the individual, household, community and district level determinants of birth registration using a multilevel hierarchical mixed model. In India, the process of birth registration is based on an informant reporting structure, where the primary responsibility lies on individual informants who report a birth. This can be the head of the household in case of home events and institutional heads in case of institutional deliveries. Under such circumstances, where only 13% (84%) of pregnant women in the poorest (the richest) population quintile delivered in health facilities in 2005 [12,13], it is important to focus on the relative significance of individual, household, community and district level enabling factors in determining the registration of a birth. This paper analyses whether independent and informed mothers are more likely to register their children in India. Many mothers lack the knowledge on how to register a child's birth [2] and consequently are unaware of what it entails and delivers to their child. The status of birth registration would significantly improve across the world if women were educated, well informed and independent, given their role as primary caregivers for children. However, women continue to have little household decision-making authority in many developing countries including India. Improvements in maternal socio-economic status have been strongly linked in the literature to better educational and demographic outcomes, improved child welfare and allocation of household resources in favour of children [14][15][16][17][18]. UNICEF (2013) [2] notes that mothers with some schooling are more likely to know how to register a child than their uneducated peers. In India, birth registration levels increase with mothers' education.
Study design and data source
We used the latest round of the India Human Development Survey-II (IHDS-II), 2011-12, for our analysis. IHDS-II is a nationally representative, multi-topic survey of 42,152 households in 1,503 villages and 971 urban neighbourhoods across India. The survey collects a wide range of information on household health, education, employment, economic status, marriage, fertility, gender relations, social capital, village infrastructure, wage levels, and panchayat composition. IHDS-II is the most up-to-date household survey available on India. It also contains a comprehensive set of information on gender relations and women's status in the household, such as their autonomy, ability, freedom, exposure to information and bargaining power, which gives us a unique opportunity to study the relationship between these variables and child birth registration in India.
Our sample contains information on 9,333 children less than 5 years old in 31 states, 367 districts and 2,189 villages/neighbourhoods. India is a large country with 31 states and 5 union territories, which are subdivided into 686 districts and further layers of administrative units. Maternal and Child Health (MCH) care is implemented through the Department of Family Welfare (DFW), mostly at the district and sub-district levels, through different tiers of the health care delivery system, e.g. Subcentres (SCs), Primary Health Centres (PHCs), Community Health Centres (CHCs) and District Hospitals. Previous studies have found significant disparities in MCH service coverage and efficiency differences in service delivery across districts in India [19, 20]. Therefore, along with individual and household level characteristics, we consider villages/neighbourhoods and districts as the two higher levels of analysis in a mixed effects hierarchical model, as policies and service provision at these levels might influence birth registration. State level covariates are included at the individual level of the analysis.
Methodology
We use multilevel models to take account of the hierarchical, or clustered, structure of the data. For example, children who live in the same household are more likely to have similar birth registration outcomes than children randomly chosen from the population at large. Households are further nested within communities, with children living in the same community facing the same set of cultural and institutional barriers and enabling factors for birth registration as each other, but a different set from children in other communities. Communities are in turn nested within higher administrative units such as districts, with children living in the same district likely to share similar policy and health care infrastructure. However, our analysis does not consider multiple children from the same household, because the birth registration information in the data pertains to the mother's last birth only [21-24].
Our dependent variable is a binary response taking the value 1 if the child has a birth certificate and 0 otherwise. It is based on the survey question asking whether the mother possesses a birth certificate for her last birth that occurred in the last 5 years (i.e. since January 2005).
Our explanatory variables are grouped into three levels to reflect the hierarchical nature of the data. Level 1 variables correspond to child/household/maternal characteristics together with community and state level contextual covariates. Level 2 corresponds to the community as a random effect, with community/village level characteristics entering as random slopes, and Level 3 corresponds to the district as a random effect, with some child level maternal characteristics entering as random slopes. We run a three-level mixed effects random slope logit model. We started our estimation by running two mixed effects logit null models with no covariates [24]. The first null model introduces a random intercept component at Level 3 (district level), while the second introduces an additional random intercept term at Level 2 (community level). We then introduce state, community, maternal, household and child level covariates, with one slope coefficient at Level 2 and two slope coefficients at Level 3, in a stepwise manner following forward selection in a three-level mixed effects random slope logit model. The aim here is to study variations in the null models that are due to each of the confounding factors.
The corresponding equations for the two null models and the full mixed effects model are presented below [24]. The first null model, a mixed effects binary response logit model with a random intercept component at Level 3 (district level), can be represented as

$$\log\left(\frac{\Pr(y_{ik}=1)}{1-\Pr(y_{ik}=1)}\right)=\beta_0+v_{0k},\qquad v_{0k}\sim N(0,\sigma_v^2).$$

The latent variable formulation is

$$y_{ik}^{*}=\beta_0+v_{0k}+e_{ik},\qquad y_{ik}=1 \ \text{if}\ y_{ik}^{*}>0,$$

where $y_{ik}$ is the outcome variable for whether the $i$th child in the $k$th district has a birth certificate, $\beta_0$ the overall sample mean, $v_{0k}$ the district level random intercept (the effect of being in district $k$ on the log-odds that $y=1$), $\sigma_v^2$ the district level (residual) variance, or the between-district variance on the log-odds that $y=1$, and $e_{ik}$ the individual level residuals. In a two-level model the aim is to split the residual variance into two components corresponding to the two levels in the data structure [24]. The second null model, with an additional Level 2 (community) random intercept $u_{0jk}$ within districts, can be represented as

$$\log\left(\frac{\Pr(y_{ijk}=1)}{1-\Pr(y_{ijk}=1)}\right)=\beta_0+u_{0jk}+v_{0k}.$$

Finally, we model the binary response for whether the child has a birth certificate as a three-level logistic random slope model:

$$\log\left(\frac{\Pr(y_{ijk}=1)}{1-\Pr(y_{ijk}=1)}\right)=\beta_0+\boldsymbol{\beta}'\mathbf{x}_{ijk}+u_{0jk}+u_{1jk}z_{jk}+v_{0k}+v_{1k}a_{ijk}+v_{2k}b_{ijk},$$

where $\mathbf{x}_{ijk}$ is the vector of covariates, $z_{jk}$ is the community level variable with a random slope at Level 2 (the proportion of institutional births in the village), and $a_{ijk}$ and $b_{ijk}$ are the maternal characteristics with random slopes at Level 3 (mother's autonomy and mother's ability). Here $u_0$ and $u_1$ are the random intercept and slope coefficients at Level 2 (community level), assumed to follow normal distributions with zero means, variances $\sigma_{u0}^2$ and $\sigma_{u1}^2$ respectively, and covariance $\sigma_{u01}$. Because $u_{0jk}$ and $u_{1jk}$ are allowed to be correlated (i.e. $\sigma_{u01}$ is not assumed to equal zero), they follow a bivariate normal distribution:

$$\begin{pmatrix}u_{0jk}\\u_{1jk}\end{pmatrix}\sim N\left(\begin{pmatrix}0\\0\end{pmatrix},\begin{pmatrix}\sigma_{u0}^2&\sigma_{u01}\\\sigma_{u01}&\sigma_{u1}^2\end{pmatrix}\right).$$

Similarly, $v_0$ is the random intercept and $v_1$ and $v_2$ are the random slope coefficients at Level 3 (district level):

$$\begin{pmatrix}v_{0k}\\v_{1k}\\v_{2k}\end{pmatrix}\sim N\left(\begin{pmatrix}0\\0\\0\end{pmatrix},\begin{pmatrix}\sigma_{v0}^2&\sigma_{v01}&\sigma_{v02}\\\sigma_{v01}&\sigma_{v1}^2&\sigma_{v12}\\\sigma_{v02}&\sigma_{v12}&\sigma_{v2}^2\end{pmatrix}\right).$$

All analyses were conducted using Stata 14. We calculated robust standard errors clustered by districts to relax the assumption of independent and identically distributed errors within districts.
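For readers who prefer code, the model above can be approximated with lme4 in R; this is only an illustrative sketch (the authors estimated the model in Stata 14), and the data frame and variable names (ihds, birth_cert, comm_prop_inst_birth, etc.) are assumptions, not taken from the survey files:

```r
# Minimal sketch of a three-level random-slope logit in R with lme4.
# NOTE: 'ihds' and all variable names are hypothetical placeholders.
library(lme4)

m3 <- glmer(
  birth_cert ~ child_age + inst_birth + mother_ability + mother_autonomy +
    comm_prop_inst_birth +                              # other covariates omitted for brevity
    (1 + comm_prop_inst_birth | district:community) +   # Level 2: random intercept + 1 slope
    (1 + mother_autonomy + mother_ability | district),  # Level 3: random intercept + 2 slopes
  data = ihds, family = binomial(link = "logit")
)
summary(m3)     # fixed effects on the log-odds scale
exp(fixef(m3))  # odds ratios
VarCorr(m3)     # random-effect variances and covariances (sigma_u01, etc.)
```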
Explanatory variables
This research is conceptually aligned with the literature on maternal and child health (MCH), which focuses on the factors influencing the utilisation of maternal and newborn health services in low and middle-income countries. The factors associated with the utilisation of MCH-related services (such as prenatal, delivery and postnatal services) are unlikely to be different from those that influence the birth registration decision in the period following the birth of a child. The literature identifies a number of factors associated with the utilisation of such services, including: lack of education (i.e., mother's and husband's education level); lack of decision-making authority (i.e., women's authority and autonomy); socio-economic barriers (i.e., low household living standards, low household income, no insurance coverage); social class structure and religion (i.e., religion and caste of the household); limited access to healthcare facilities (i.e., transportation); geographical location (i.e., distance to health care facilities); and a lack or shortage of trained and skilled health care professionals (i.e., capacity and knowledge of skilled health care professionals) [25].
Some recent studies on the determinants of birth registration have highlighted the association between child, household and community level sociodemographic and economic factors and birth registration. Amo-Adjei and Annim (2015) [26] find that mother's education, household wealth and urban residence are positively associated with the likelihood that a child is registered in Ghana. Religion is also found to be a significant determinant of birth registration, with children whose parents practice a traditional religion at significant risk of not being registered. There is also evidence of a significant regional effect, with children from the eastern region of Ghana less likely to be registered [26]. Okunlola et al. (2017) [27] similarly find that birth registration increases with household wealth index and the educational attainment of the mother in Nigeria. They also note that lack of access to registration services and the indirect costs associated with registration contribute to low birth registration. According to Chereni (2016) [28], social and cultural factors are as important as economic factors in influencing birth registration in Zimbabwe, and birth registration is viewed as an outcome of the interaction between economic, non-economic, personal and structural factors. Isara and Atimati (2015) [29] advocate a community-based approach in which birth registration centres are established within communities to increase accessibility and awareness of the benefits of birth registration. Our study controls for a comprehensive list of variables associated with birth registration in the wider literature and in India in particular [25].
As noted above, our explanatory variables are grouped into three levels. The variables at Level 1 correspond to child/household/maternal characteristics together with village/community level and state level contextual covariates. The child level attributes include the child's age as a continuous variable and dummy variables representing institutional/non-institutional place of birth and the gender of the child. The parental level attributes are the mother's age and the educational level of both parents as continuous variables, and categorical variables representing the mother's self-assessed health status. We also include the mother's migrant status, representing her childhood place of residence (i.e. whether her natal family resides in the same village/town or in another), and her Ante Natal Care (ANC) seeking behaviour, in particular whether she had four or more ANC visits during her last pregnancy. We have also included a variable representing the proportion of children who have died out of the total children ever born to the mother at the time of the survey.
More importantly, we have included a wide range of variables representing gender relations and the mother's social and economic status in the household, such as her role in household decision making, her control over household resources, her ability to independently visit places of need, her freedom of movement and her bargaining capacity in the household. Understanding how women's autonomy, ability and freedom at the household level are associated with birth registration, and how factors at the community or higher socio-political level (such as districts) moderate this relationship, is an important contribution of this research. Previous studies have highlighted the positive influence of maternal autonomy on feeding practice, birth weight and infant growth in India [30-32]. It is also noted in the literature that maternal autonomy could increase self-motivation and bring about behavioural change that would improve the welfare of the mother and her family. A systematic review by Upadhyay et al. (2014) [33] finds a positive association between women's empowerment and lower fertility, longer birth intervals, and lower rates of unintended pregnancy.
Methodologically, Upadhyay et al. (2014) [33] highlighted the importance of choosing appropriate measures that better approximate women's empowerment. They further noted that studies using multiple and multidimensional measures of empowerment were more likely to find consistent results. Shroff et al. (2011) [31] conducted a confirmatory factor analysis to develop multiple dimensions of maternal autonomy, finding that individual dimensions of autonomy can operate differently in influencing child growth and wellbeing. Accordingly, we identified 20 variables in our dataset that describe the mother's autonomy, ability, freedom, exposure to information/other resources and bargaining power in the household [32, 34, 35] and conducted factor analysis to summarise them and identify any common underlying themes. Our analysis clearly identified four underlying constructs, and depending on the nature and category of the variables clustering under each, we have named Factor 1 'Mother's Autonomy', Factor 2 'Mother's Ability', Factor 3 'Mother's Freedom in Movement' and Factor 4 'Mother's Bargaining Capacity'. However, three variables that we identify as the mother's exposure to the outside world did not group into any of these four factors, and we have decided to include them independently in the model. The set of variables grouped under each factor, their rotated factor loadings (pattern matrix), unique variances and the three independent mother's exposure variables are listed in Table 1 below.
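As an illustration of this step, factor extraction with an oblique rotation can be sketched in base R as follows; the item names are hypothetical stand-ins for the 20 empowerment variables listed in Table 1:

```r
# Hedged sketch of the factor analysis: four factors from 20 empowerment items.
# The item names below are placeholders; the actual items appear in Table 1.
items <- c("decide_large_purchase", "decide_child_illness",
           "visit_health_centre_alone", "travel_bus_alone",
           "cash_in_hand_for_expenses")   # ... plus the remaining items (20 in total)

fa <- factanal(x = na.omit(ihds[, items]), factors = 4,
               rotation = "promax", scores = "regression")
print(fa$loadings, cutoff = 0.3)  # rotated loadings (pattern matrix)
fa$uniquenesses                   # unique variances, as in Table 1
```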
The household level variables that we control for include urban-rural residence, household wealth status captured using wealth quintiles, and the caste and religious affiliation of the household. We also control for state level contextual covariates in our regression: health expenditure as a percentage of Net State Domestic Product (NSDP), per capita public expenditure on health, literacy rate, gross enrolment rate, infant mortality rate and a dummy variable identifying the low-income states in India. The state level variables were all extracted from the Economic Survey 2012-13 [36] and a publication of the National Health Accounts Cell [37].
At Level 2, we have controlled for community/village level random effects and a random slope for the proportion of institutional births in the village. Additionally, village/community level variables are included as Level 1 contextual covariates in the model. These include village/community level mean years of schooling, the proportion of institutional births and quintiles of median per capita household consumption expenditure. Given that IHDS-II used cluster sampling, all community level variables were created by aggregating the relevant individual survey responses within clusters, as sketched below.
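A minimal sketch of this aggregation step in R (using dplyr; all names are assumptions) could look like this:

```r
# Build community-level contextual covariates by averaging individual
# responses within IHDS sampling clusters ('community' is a placeholder id).
library(dplyr)

ihds <- ihds %>%
  group_by(community) %>%
  mutate(
    comm_mean_schooling  = mean(mother_schooling, na.rm = TRUE),  # mean years of schooling
    comm_prop_inst_birth = mean(inst_birth, na.rm = TRUE)         # share of institutional births
  ) %>%
  ungroup()
```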
At Level 3, we have controlled for district level random effects and random slopes for two maternal socio-economic status summary variables, Mother's Autonomy and Mother's Ability. Previous studies have highlighted the need to account for the influence of communities and the broader socio-political environment on women's empowerment. It is documented that an individual woman's empowerment process is simultaneously shaped by individual, social, cultural and political forces [34]. This calls for multilevel modelling to analyse the complex interactions between women's empowerment measures at the individual level and at levels above the individual [33, 38]. All continuous variables in our model are centred at their mean values.
Descriptive statistics
Summary statistics for the variables are presented in Table 2. Over 62 percent of children in our sample have birth certificates. The average age of the children is 2.19 years, and 55 percent are male. Over 71 percent of children in the sample were born in an institution.
Mothers' characteristics reveal that the average mother is 28 years of age and has received around 6 years of schooling. Moreover, over 80 percent of mothers self-assess their health status as good or very good. Only 45 percent of mothers received the recommended four or more ANC check-ups during their last pregnancy. Summary statistics for the mother's socio-economic status factor variables, namely mother's autonomy, mother's ability, mother's freedom of movement and mother's bargaining capacity, are also presented in the table. In addition, the variables representing the mother's exposure to the outside world reveal that 29 percent of mothers in the sample have been to a metropolitan city, another state or abroad in the past five years. A larger proportion, about 62 percent, state that they have gone out on family outings, while 57 percent maintain that they often engage in discussions with their husbands on various topics including work, community and politics.
Discussion of results
We begin by estimating two null models. The first null model is estimated with only district level random effects, and the second with both district level and village/community level random effects. The results from these models are presented in Table 3. The intraclass correlation coefficient (ICC) in model 1 indicates that 46.7 percent of the total variation in birth registration in India lies between districts, while the remaining 53.3 percent lies within districts. Fig 1 shows a caterpillar plot of the residuals for all 367 districts in the sample from model 1, together with 95% confidence intervals. For a substantial number of districts, the 95% confidence interval does not overlap the horizontal line at zero, indicating that birth registration coverage in these districts is significantly above average (above the zero line) or below average (below the zero line) [also see 24]. As within-district differences account for most of the variation in birth registration, we examine the effect of village/community level differences within districts in model 2. The ICC from model 2 indicates that 44 percent of the variation in birth registration is explained by between-district differences, while between-community differences account for 11 percent and within-community differences for 45 percent of the variation.
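For reference, these ICCs follow the standard latent-response convention for logit models, in which the individual-level residual is assigned the logistic variance $\pi^2/3$; with the district and community intercept variances denoted $\sigma_v^2$ and $\sigma_u^2$ as above, the shares quoted in the text correspond to

$$\rho_{\text{district}}=\frac{\sigma_v^2}{\sigma_v^2+\sigma_u^2+\pi^2/3}\approx 0.44,\qquad \rho_{\text{community}}=\frac{\sigma_u^2}{\sigma_v^2+\sigma_u^2+\pi^2/3}\approx 0.11,$$

with the remaining share, $(\pi^2/3)/(\sigma_v^2+\sigma_u^2+\pi^2/3)\approx 0.45$, lying within communities (and, in model 1, $\rho_{\text{district}}=\sigma_v^2/(\sigma_v^2+\pi^2/3)\approx 0.467$).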
As noted above, within-community differences (i.e. differences at the household and individual level) account for most of the variation in birth registration within districts. We therefore estimate a comprehensive model that controls for household and individual child level covariates at Level 1, including mother's social and economic status, together with state and community level contextual characteristics. We also allow for two random slopes at the district level for the mother's social and economic status indicators, namely mother's autonomy and mother's ability. A random slope at the community level is also included for the proportion of institutional deliveries at the village level. The results from the estimation of this model are shown in Table 4.
The primary focus of our analysis in this paper is the set of variables representing the mother's social and economic status and bargaining power in the household. The summary variable (estimated using factor analysis) representing 'mother's ability' is significantly associated with birth registration. Mothers who are able to visit health centres, friends/relatives or a local shop, or travel short distances by train/bus on their own, are more likely to have their children registered. This variable reflects the mother's capacity to leave the house unaccompanied and move around without needing a chaperone. Higher maternal mobility is related to greater decision-making ability within the household [39]. As primary caregivers for children, mothers' ability to move around is crucial for a number of activities that enhance the welfare of children, such as immunisation, health check-ups and, quite possibly, birth registration [40]. A mother who is capable of moving around on her own can also register the birth of her children without depending on her husband. We can also see that the random slope coefficient for 'mother's ability', included at the district level, is statistically significant. This shows that the marginal effect of 'mother's ability' on birth registration is not constant but varies across Indian districts. Moreover, the negative covariance estimate between the intercept and the mother's ability variable indicates that districts with above-average birth registration tend to have a flatter than average slope, that is, below-average effects of maternal ability.
We also find that two other variables representing the mother's exposure to the outside world, namely whether the mother had been to a metro/another state/abroad in the last 5 years, and whether she had gone out on family outings, are significant determinants of birth registration. In particular, having a mother who had been to a metro/another state/abroad increases the odds of birth registration by a factor of 1.18. A mother travelling to another state or abroad signifies that she enjoys more autonomy in the household [39], and undertaking such travel presents opportunities to interact with various people and encounter different ideas that could enhance the wellbeing of children. Similarly, having a mother who had gone out on family outings to cinemas or restaurants increases the odds of birth registration by a factor of 1.14, which probably reflects an already empowered mother with relatively higher bargaining power in the household, or the ability to positively influence decisions concerning children, including that of birth registration.
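To make these odds ratios concrete, here is a worked example of ours (not the authors'): taking the sample mean registration rate of 62% from Table 2 as an illustrative baseline, an odds ratio of 1.18 implies

$$\beta=\ln 1.18\approx 0.17,\qquad \text{odds}_0=\frac{0.62}{0.38}\approx 1.63,\qquad \text{odds}_1=1.18\times 1.63\approx 1.93,\qquad p_1=\frac{1.93}{1+1.93}\approx 0.66,$$

i.e. a shift of roughly four percentage points in the probability of registration.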
Additionally, we included a random slope coefficient for 'mother's autonomy' at the district level. The 'mother's autonomy' variable reflects the mother's decision-making power on issues such as the purchase of property/land, wedding expenses, what to do if a child falls sick and whom the child should marry. A mother who enjoys a greater degree of autonomy has greater access to, and control over, economic resources in the household. It is documented in the literature that autonomous mothers allocate more resources towards their children [41, 42]. Thus, mothers with control over resources would be more likely to register their children by bearing the direct and indirect costs that might be involved in the birth registration process. The random slope coefficient for this maternal autonomy variable at the district level is statistically significant, indicating that the marginal effect of this variable varies significantly across districts. This may reflect the extent of regional diversity in gender relations in India. Previous studies have shown significant spatial and socio-cultural differences in various dimensions of women's empowerment across regions in India [43]. For example, it has been argued that women in South India have more voice in family life, more freedom of movement and more exposure to the outside world than their counterparts in North India [44]. Gupta and Yesudian (2006) [43], on the other hand, found that states situated in the central part of India (such as Uttar Pradesh, Rajasthan, Madhya Pradesh, Bihar and Orissa) had low empowerment of women. Given the spatial (and socio-cultural) differences in India, our findings highlight that policy initiatives to increase birth registration (or women's autonomy, for that matter) should be designed by taking district (regional) idiosyncrasies into account.
Among the child-specific factors included in the regression, the place of birth of the child is statistically significant. The odds of birth registration increase by a factor of 4.6 for a child who has had an institutional birth; institutional delivery has one of the largest impacts on the likelihood of birth registration in our model. The high likelihood of registration for institutionally delivered children can be attributed to the fact that reporting such births to the Registrar of Births and Deaths is the responsibility of the medical officer in charge. More importantly, medical officers who attend the birth of a child are obligated to report the incidence to the Registrar [45]. Brito et al. (2013) [46] also find a high probability of registration for institutionally delivered children in Latin America and the Caribbean. Our results do not show any significant difference in the odds of birth registration between boys and girls. A multivariate analysis by UNICEF (2005) [47] identifying the determinants of birth registration across 63 countries also concluded that gender is insignificant. The age of the child is another significant determinant, with the odds of registration increasing with age. Turning to the household and other mother level factors in our model, we see that parental education is positively associated with the likelihood of birth registration. An increase in the mother's (father's) years of schooling by one year raises the odds that a child will be registered by a factor of 1.05 (1.03). The positive association between maternal education and birth registration has also been established in other countries [also see 26, 46]. Harding et al. (2015) [48] investigate the transmission channels through which maternal education affects child outcomes, in particular academic achievement. One channel explored is social capital, with educated mothers more likely to be part of a social network of other educated people who possess the knowledge, skills and resources beneficial for children. Thus, educated mothers can be expected to receive valuable information and advice on various aspects of a child's life, including the benefits of birth registration. The positive effect of the father's education may result from similar benefits of the father's social capital.
The mother's number of antenatal visits during her last pregnancy is another significant variable positively associated with birth registration. The odds of birth registration for mothers who have had four or more antenatal check-ups are 2.18 times those of mothers who have had no check-up. Antenatal care provides pregnant women with education, counselling, screening and treatment to ensure that mother and foetus remain in good health [49]. Moreover, antenatal visits increase the mother's awareness of what to expect after the birth of a child by providing information on postpartum care, including breastfeeding, immunisation and, quite possibly, the importance of birth registration [50].
Our results also indicate that household wealth is significantly associated with birth registration. The odds of birth registration for households in the top wealth quintile are 1.49 times those of households in the bottom quintile. Other cross-country studies have also established the positive impact of wealth status on birth registration [2, 47]. This might capture the possibility that households in lower wealth quintiles are put off birth registration by the late fee that applies to children not registered within 21 days of birth. The Office of the Registrar General (2010) [45] also notes that births not registered after 30 days but within a year can be registered on production of an affidavit and permission from the prescribed authority, on top of the prescribed late fee. Amo-Adjei and Annim (2015) [26] demonstrated that the late fee was a barrier to registration in Ghana. There might also be indirect costs, such as the cost of transportation or income lost due to time away from work, which might hinder poorer households from having their children registered [2]. It might also simply be that poor households fail to realise the long-term benefits of registering their children's births.
Moreover, the dummy variable for religious background shows that the odds of birth registration for a child born into a Muslim family are 0.80 times those of a child born into a Hindu family. The public health literature in India examining Hindu-Muslim differences in fertility planning may shed some light on this result. There are studies that indicate a higher level of unmet need for family planning among Muslims [51]. Singh et al. (2012) [52] also find that the utilisation of safe delivery care was significantly lower among Muslim women than among women of other religions in India. It has also been pointed out that mistrust of government family planning programs and clinics may prevent Muslims from availing themselves of family planning services [51]. In another context, Hussain et al. (2014) [53] attribute the significant Hindu-Muslim disparity in the incidence of polio to Muslim mistrust of the polio eradication program in India, which in their view may be rooted in the socio-political and historical context of the country. Similar factors may be at play here, making Muslims less likely to register their children than Hindus.
We also find that a child belonging to the Scheduled Castes (SC), Scheduled Tribes (ST) or other castes has a lower probability of registration compared with the Forward/General castes. Since India's independence in 1947, the social caste structure has been identified as a significant hindrance to the socioeconomic development of minority groups. Despite government efforts to empower lower caste groups and minorities, significant disparities across all indicators of development persist. Studies indicate that the social caste structure in India is a significant predictor of safe delivery and post-natal care utilisation, with women from the Scheduled Castes (SC) and Scheduled Tribes (ST) less likely to access such services [52]. Thus, it is not surprising that we also find women from these minority groups less likely to register their children, given their lower inclination to access post-natal and other services. Nevertheless, further research is needed to establish whether this is due to some cultural practice or to a lack of proper knowledge or awareness in these communities. There is similar evidence from Ghana that members of minority ethnic or religious groups are less likely to register births than majorities [47].
Among the community level factors included in our model, mean years of schooling and the proportion of institutional deliveries come out as significant. An increase in mean years of schooling by one year is associated with a 1.1-fold increase in the odds of birth registration. Thus, education is important not only at the household level but also at the community level. Living in a community with a higher proportion of institutional births also raises the probability of birth registration. These results indicate that there are advantages that spill over to the individual/household from living in a community with high mean years of schooling and institutional delivery. We included a random slope coefficient for the proportion of institutional deliveries at the community level to allow the effect of this variable on birth registration to vary across communities in India, but it did not come out as significant.
Living in a non-low-income state has the largest positive effect on birth registration in our regression. The odds of birth registration for a child living in a non-low-income state are 9.2 times those of a child living in a low-income state. This might reflect the poor state of health and other infrastructure, leading to poor service delivery, in India's low-income states. Chotia and Rao (2015) [54] note that 'low income (BIMARU) states … still lack basic health infrastructure in many of their villages and towns leading to low positions in health index rankings'. The gross enrolment ratio at the state level is also significant and is associated with higher birth registration. However, health expenditure as a percentage of state GDP negatively affects birth registration. This result might capture the possibility that national birth certificate campaigns to raise birth registration in India have largely targeted those states where health expenditure as a percentage of GDP is lower.
Conclusion
This paper examines the determinants of birth registration in India using a multilevel hierarchical mixed model. In particular, we looked at the mother's autonomy, ability and bargaining power within the household and their significance for child birth registration. The rationale for using a multilevel mixed effects model is the hierarchical nature of the data: children and households are nested within communities, which in turn are nested within districts. The estimation results provide useful policy implications for increasing birth registration in India. Our results show that 45 percent of the variation in birth registration in India lies between individual level differences, 44 percent between district level differences and the remaining 11 percent between community level differences. This brings the policy focus onto individuals and districts for targeting birth registration. At the individual level, the results indicate that the summary variable representing mother's ability increases the probability of birth registration. The ability to move around independently is an important trait for mothers, because such mothers do not have to wait for their husbands to take their children for immunisation, health check-ups, birth registration and so on. Two variables representing the mother's exposure to the outside world (i.e. whether the mother had been to a metro/another state/abroad in the last 5 years, and whether she had gone out on family outings) are also significant. So, mothers who have better bargaining power in the household and are exposed to liberating and progressive ideas from their surroundings, which they can use to advance children's welfare, are more likely to register the birth of their children. Again, the random slopes at the district level for two indicators of the mother's bargaining power, namely mother's ability and mother's autonomy, came out as significant, confirming that while these variables significantly influence birth registration, the level of their influence varies across districts in India. Policies aiming to empower mothers in order to improve maternal and child health outcomes and birth registration should therefore have a district level focus.
Our estimates further showed that institutional delivery is a highly significant determinant that increases the probability of birth registration. This is consistent with a priori expectations, as medical officers presiding over deliveries are duty-bound to report the birth. The number of the mother's antenatal visits also came out as significant, which is not surprising given that mothers also receive advice on postpartum care when they access antenatal services. Finlayson and Downe (2013) [55] note that the costs of visiting antenatal facilities (even when antenatal services are provided for free) are the major reason behind low access to antenatal care in India, as in other low and middle-income countries. The Janani Suraksha Yojana cash transfer program in India, in which pregnant women are given a small sum of money to attend antenatal care and deliver in a recognised health care facility, has had significant success in increasing antenatal attendance [55]. Given this success, such cash transfer programs encouraging women's attendance of antenatal care should be continued and extended to include birth registration. Robertson et al. (2013) [56] note that cash transfer programmes are an increasingly popular approach to meeting the health and development needs of vulnerable children. For example, the conditional cash transfer program rolled out in Mexico under PROGRESA led to increased preventative care, including prenatal care and child nutrition monitoring, and higher school enrolment [57]. In the African context, a social cash transfer program in Malawi reduced child morbidity and increased school enrolment, and similar benefits have been observed in Kenya and South Africa. More importantly, Robertson et al. (2013) [56] found that conditional cash transfers led to an increase in the proportion of children with birth certificates in Zimbabwe. Baruah et al. (2013) [58] similarly established the positive impact of a conditional cash transfer on the birth registration of female children in Assam, India. Under this conditional cash transfer scheme, known as the Majoni scheme, girls born after February 1, 2009 have 5,000 rupees deposited in a bank account, conditional on the institutional delivery of the female child and compulsory registration, among other things. The scheme resulted in an increase in formal requests to have female children registered from 24 percent to 39 percent [58].
Our study also established that the probability of birth registration increases with wealth quintile. The direct and indirect costs associated with the registration process may discourage households in lower wealth quintiles from having children registered. We also find that disadvantaged groups such as the Scheduled Castes (SC), Scheduled Tribes (ST) or other castes have a lower probability of birth registration compared with the Brahmin and Forward/General castes.
Children in low income states are also less likely to be registered. This suggests that policy should give special attention to raising birth registration rates of these disadvantaged groups and states.
"year": 2018,
"sha1": "5341a44a459c4890f06e0abdb6810713f8f69296",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0194095&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5341a44a459c4890f06e0abdb6810713f8f69296",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
199327831 | pes2o/s2orc | v3-fos-license | “The role of the banking system in supporting the financial equilibrium of the enterprises: the case of Ukraine”
The financial equilibrium (“financial health”) of the enterprises is a prerequisite for their sustainable development, which ensures macroeconomic stability of the economy and the welfare of the state. It should be supported by the banking system, which performs the function of the effective reallocation of capital. Recently, the Ukrainian banking system itself is in a challenging situation and is undergoing a period of trans- formation. The purpose of the study is to assess how sufficiently the banking system of Ukraine supports the financial equilibrium of enterprises and to find the possibilities to strengthen its role in the progress of the real sector of economy. The authors single out three stages of financial equilibrium growth; each of them can be supported by the relevant banking services. The empirical analysis proves that the Ukrainian banks suc-cessfully ensure only the first stage, namely, liquidity balancing. To quantitatively as- sess the role of the banking system in supporting the enterprises’ financial equilibrium, a multivariate regression applying mathematical gnostic analysis in the program shell R Console is used. The research makes it possible to find out that only the economy monetization, the share of time deposits of economic entities and growth rate of mort- gage loans have a positive effect. The authors conclude that the problems of both enterprises and the banking system are in the sphere of development and implementation of government economic policy and are aggravated by the restrictive monetary policy.
INTRODUCTION
The Ukrainian economy now faces many challenges of current and strategic development: overcoming the remote consequences of the deep 2014-2015 recession, ensuring acceleration of the GDP growth, creating environment for public debt reduction, deep restructuring and modernization of the innovative basis, etc. Only financially strong and economically stable business entities in rather favorable external environment are able to realize these ambitious and at the same time, vital tasks to preserve an independent state.
Along with the institutional factors that play an extremely important role in implementing economic reforms in emerging market countries, the banking system should have a serious positive impact on the real economy. Its development, the innovative improvement of products and services, as well as the monetary effects of its performance should help to strengthen and balance corporate finance. The role of the banking system in a transformational economy is determined pre- cisely by the extent to which it serves as a source of funds for the real sector. The growing separation of the bank capital from the productive sphere increases the uncertainty of long-term economic prospects, the likelihood of deploying new financial crisis, restricts investment, and finally adversely affects the financial equilibrium of economic entities, which is the basis of ensuring the equilibrium of the national economy.
LITERATURE REVIEW
The financial equilibrium of enterprises is a rather new issue for Ukrainian economic science, although foreign scholars have long examined it through economic modeling of enterprises' “financial health” as the opposite of bankruptcy. Altman and Fleur (1981) were among the first to build a mathematical model of the turning point in the financial position of a firm threatened by bankruptcy. Dhaliwal, Li, and Xie (2010) research the relationship between the financial health of corporations and the preferences of institutional investors in the choice of financial instruments for investment. The authors find a positive influence of institutional investors on corporate finance. It has also been proven that the value of institutional investors' capital is a function of the financial health of corporations.
Belolipetskii (2000) proves that equilibrium in corporate finance produces financial equilibrium throughout the national economy, taking into consideration the same key financial conflict as at the enterprise level: between yield, liquidity and risk. Kostyrko (2008) argues that financial equilibrium is the basis for an integrated assessment of an enterprise's financial position. Lvova (2019) examines three key theoretical foundations for corporate financial health assessment: bankruptcy prediction, investment analysis and financial systems assessment. Gudz (2018) proposes a broad concept of financial equilibrium, considering its levels, its role in corporate finance, the types of financial equilibrium, the stages of its growth, and management methods.
When examining the external factors that ensure the financial equilibrium of business entities, researchers focus on the performance of the banking system, the situation on financial markets, and the nature of the central bank's monetary policy. In the mid-1980s, Bernanke and Gertler (1985) questioned the validity of the Modigliani-Miller theorem, according to which economic decisions do not depend on the financial structure and the addition of financial intermediaries to this environment has no consequences for real activity. They argue that financial intermediaries (in particular, banks) provide important real services to the economy and that these services have substantive implications for the behavior of economic entities. The constant business relations between banks and companies were shaped into the concept of “relationship banking”, which was analyzed by Besanko and Thakor (1993) in the context of banks' portfolio choices and borrowers' well-being, and by Boot (2000) regarding its origin, scope, and benefits for all parties. Davis (1994) argues that, under asymmetric information and incomplete contracts, bank financing is preferable to bond financing for maintaining the balance of corporate finances, especially for small firms. There is direct evidence that bank relationships reduce investment and financial constraints on liquidity and may improve overall performance. Schnitzer (1999) shows that banks play a central role in financing and monitoring firms in transition economies, due to poorly developed capital markets, a lack of accumulated profits for internal financing, and a dominance of strongly insider-controlled firms. Sufi (2005), in his Ph.D. dissertation, examines the importance of commercial banks in the financing decisions of corporations, analyzing, inter alia, the use of credit lines. The results suggest that lines of credit provide bank-managed flexibility for the firms that are able to obtain them; firms use credit lines as the incremental source of adjustments in leverage ratios and nominal levels of debt. Bolton and Freixas (2006) indicate the impact of monetary policy transmission effects on the composition of the firm's financing. Savchenko (2011) defines the mechanism of the banking system's influence on economic equilibrium, characterizing it as a set of elements that transform the impulses generated by the banking system. The author notes that both macro- and microeconomic systems (including enterprises) can be recipients of this influence, and their equilibrium depends on such impulses. Kovalenko (2013) considers the economic essence of the financial and credit interaction between banks and enterprises, emphasizing the conditions for ensuring the efficiency of such interaction and its impact on the financial position of business entities. Similar problems are raised in the study by Neizvestna (2016). Banks' assistance in enterprises' investments as a basis for their steady and balanced development is analyzed by Bilenko (2015) using the countries of Central and Eastern Europe as an example.
RESULTS AND DISCUSSION
It is well known that a high level of production, ensured by financially stable economic entities, is the basis of a nation's welfare. In the authors' view, financial equilibrium is one of the most important characteristics of enterprises' financial performance.
2.1. The essence of financial equilibrium of enterprises as a subject of the banking system's influence

Financial equilibrium can be defined as the balancing of the processes that form the enterprise's financial position, depending on their intensity, the direction of change, the nature of internal interaction, and sensitivity to the influence of external factors (Gudz, 2018). The role of financial equilibrium is to ensure the viability and sustainability of corporate finance. Liquidity, solvency, and financial stability can be considered its forms, and financial potential balanced with the needs of sustained development is its material embodiment.
The enterprises' financial equilibrium as an economic phenomenon belongs to the system of economic relations and is one of the key points in the development of the national economy. After all, enterprises in financial equilibrium are guarantors of price stability and of the competitiveness of the economy, promising employers, powerful budget sources, and active and efficient consumers of financial services. Therefore, ensuring the financial equilibrium of economic entities makes it possible to create a favorable environment for the development of an economically powerful state, with the prospect of its transformation into a state of social equilibrium.
It is obvious that the problem of supporting the financial equilibrium of economic entities as the basis for economic progress should be solved not only by the financial management of the enterprises themselves and by a favorable institutional environment for their activity (competition protection, the tax system, amortization policy, customs regulation, the exchange rate regime, the nature of monetary policy, etc.), but also by the national banking system. The dialectic unity of the real and financial sectors of the economy stems from the fact that they must mutually create the conditions for each other's stable performance and development.
While the financial sector is able to grow by breaking away from the real one on the strength of a speculative component, the development of production is extremely complicated without the participation of banks. The latter, as J. Schumpeter proved in the early 20th century, provide funding to create “new combinations”, that is, the innovations that ensure progress.
Financial equilibrium is a dynamic phenomenon that passes through certain stages, from the lower, basic forms that underpin the survival of the enterprise to the highest, which constitute a countercyclical anti-bankruptcy buffer and the financial potential for its stable development. The implementation of each of these stages can be supported by certain banking services or by creating the possibilities to implement financial decisions (Figure 1). This can take place most effectively within the framework of the concept of “relationship banking”, which means: “… the establishment of a long-term relationship between a bank and its customers. The main advantage is that it enables the bank to develop in-depth knowledge of a customer's business, which improves its ability to make informed decisions regarding loans and other services to the company. The latter expects to benefit by increased support during difficult times” (Law, 2018).
2.2. Banking services in ensuring the corporate finance equilibrium
The first stage in the formation of the financial equilibrium of an enterprise presupposes establishing liquidity equilibrium, which is the basis for ensuring its stable solvency. It is based on a rational asset structure, timing of incoming and outgoing cash flows, timeliness and completeness of settlements with debtors and lenders, as well as a balanced structure of requirements and obligations. It is difficult to overestimate the role of banks in this case because payment intermediation is one of their basic functions. The velocity of money, the ability of economic entities to forecast and manage their cash flows, and the turnover of their funds depend on how efficiently payment systems operate in the country.
Established in Ukraine in 1993, the nationwide RTGS-class payment system, the System of Electronic Payments (SEP), currently handles 97% of interbank transfers in the national currency. From its origination, the SEP, performing interbank transfers in file (batch) mode, ensured receipt of a transfer within no more than two hours. This was much shorter than the system of postal or telegraph letters of advice that existed in the USSR, as well as the payment processing time (up to three days) in the clearing systems of many developed countries. Currently, the SEP executes payments in file mode in 15-20 minutes; in addition, since 2001 it has been possible to make payments in real time, with the immediate transfer of funds to the payee's account.
In 2018, the system processed 357.0 million payments amounting to UAH 25.0 trillion. Of these payments, 91% originated from customer accounts, and 90% of those were initiated from the current accounts of business entities (National Bank of Ukraine, 2019a). Consequently, the latter are the most active users of SEP services.
During the last decade, Ukrainian banks have been actively developing channels for remote access to client accounts by implementing innovative information technologies. In the early 2010s, an analysis of the 60 most powerful banks belonging to groups I-III of the National Bank of Ukraine (NBU) classification (by assets) found that the “Client-Bank” system was offered by all of the largest banks (17 institutions), but in the other groups (22 and 21 banks) by only 76% of institutions. For Internet banking for legal entities, these indicators were 68.8%, 58.8%, and 42.9%, respectively, and for mobile banking, 75%, 53%, and 5% (Yehorycheva, 2012). Currently, the “Client-Bank” and “Internet-Client-Bank” services are offered by all Ukrainian banks, even the smallest or recently registered ones. Mobile applications are in the range of services of at least one third of banks.

[Figure 1 (stages of financial equilibrium growth): balance of requirements and obligations; balance of liquidity and profitability taking into account the threat of bankruptcy.]
However, nowadays it is not enough for enterprises to have only timely and prompt settlements. The progress of payment technologies should be combined with the bank's assistance in managing cash flows and account balances in order to optimize them. All these aspects are included in the cash management service, which is intended for large corporate clients, in particular holding companies. This service, innovative for Ukraine, has been adopted from foreign markets. It is complex and high-tech, and requires a high level of personnel qualification and customer awareness. Therefore, not even all banks of foreign banking groups offer it: Alfa-Bank, Raiffeisen Bank Aval, OTP Bank, Ukrsibbank, Prominvestbank, ING Bank and several others do. Cash management ensures the effective use of time lags between the receipt and use of funds, management of the client's credit position, forecasting of financial flows, etc. The service includes the following components: consolidated statement, headquoting, cash pooling, and zero balancing.
In addition, competently organized cash management makes it possible to use the overdraft effectively as a product to maintain liquidity. This type of short-term financing is actively used by enterprises to cover the time gap between incoming and outgoing cash flows. Its main advantage is that all funds received on the client's account are directed to pay off the debt, which allows saving on interest payments. The nature of these operations makes it impossible to determine the real amounts of such loans from the available statistics on current account turnover. Some idea of the dynamics of lending in the form of overdrafts can be obtained from its share in the total amount of loans provided by banks to economic entities: over 1998-2007, this share increased from 0.5% to 3.5% (National Bank of Ukraine, 2019b), and at the beginning of 2019 it was 2.3% (National Bank of Ukraine, 2019c).
Factoring financing is a vital prerequisite for ensuring financial stability and economic growth for companies that supply goods on deferred payment terms. It solves the problem of the working capital deficit for a supplier, and does so without increasing its payables. Other advantages of factoring financing are that it is granted for the period of the actual payment delay, does not require the pledging of collateral, and may grow together with the sales volume of the enterprise.
In Ukraine, classic factoring was first offered to clients in 2001 by Ukrsotsbank. Being rather complex, this service is nowadays provided by only 20-25 banks (out of a total of about 80), and the amount of purchased factoring claims constitutes less than one percent of the loan portfolio of the banking system. Moreover, the banking factoring market is highly concentrated: as of January 1, 2018, almost 60% of the total amount of financing (UAH 2,936.0 million) was provided by three banks: Ukreximbank, FUIB and Tascombank (National Bank of Ukraine, 2019c). Under the terms offered by banks, financing covers from 80 to 95% of the volume of supplies, the payment delay is up to 90-180 days, and an electronic factoring service is offered. It is important that some banks offer a type of factoring new for Ukraine, non-recourse factoring, in which the risk of non-payment by the client's debtors is completely transferred to the factor.
The discounting of bills plays the same role as factoring financing in achieving the financial equilibrium of enterprises. However, because there is no tradition of using bills of exchange in the Ukrainian economy, this service is not in demand. It is offered by few banks, with a share in the credit portfolio of less than 0.01% (National Bank of Ukraine, 2019c).
Leasing services are extremely attractive for businesses, especially small and medium-sized ones, which are not always creditworthy in the classical sense. Leasing as a method of renewing fixed assets does not require significant one-time expenses; it allows a quick switch to a new technological base and its constant improvement. In this case, there is no withdrawal of own funds, the relevant assets and liabilities are balanced by amounts and terms, and the enterprise is given the opportunity to reduce tax payments. All of this positively affects the financial equilibrium of enterprises, ensuring the creation of a countercyclical buffer. However, currently only 20-25 banks provide leasing services in Ukraine. The total volume of these operations as of January 1, 2018 amounted to UAH 17.6 billion, which was about 4% of the loan portfolio. The banking leasing market is extremely concentrated, since more than 75% of its volume is constituted by Privatbank's transactions.
Of course, one of the main sources of replenishing enterprises' funds is bank lending, which is intended to ensure development and innovation and to increase the profitability of equity capital through the effect of financial leverage, thereby maintaining financial equilibrium. Ukrainian banks provide economic entities with one-time loans and credit lines, loans for working capital replenishment and investment, and loans for the specific needs of certain sectors of the economy, in particular agribusiness and SMEs.
Official statistics (Figure 2, which plots the ratio of loans to economic entities to GDP, %, and the share of long-term loans in total loans to economic entities, %) show that the absolute amounts of lending to business entities are slightly increasing; however, given the decline in the official hryvnia exchange rate and inflation trends, in real terms they have been decreasing since 2014. This is reflected in the fall in the rate of lending penetration (the ratio of loans to GDP), which in 2017 reached its minimum of the last 10 years. In terms of ensuring the financial equilibrium of industrial enterprises as a basis for economic development, the structure of the banks' loan portfolio by type of economic activity is unsatisfactory: one third of loans are directed to trade, which is largely focused on the sale of imported goods, and only a quarter of loans to manufacturing.
At the same time, the share of long-term loans in the total amount has been gradually increasing in recent years, which is generally a positive trend. However, the own funds of enterprises remain the main source of investment, exceeding the share of bank loans tenfold (State Statistics Service of Ukraine, 2019).
Finally, bank time deposits play a significant role in balancing the liquidity and profitability of economic entities. Ukrainian banks offer deposits to business entities for terms from a few days to two years or more. The latter are often used as collateral for a loan. The amount of time deposits of business entities has tripled over the past ten years, and more than 90% of long-term (two years or more) funds belong to state-owned enterprises. However, the share of time deposits of business entities in the total amount of their bank funds decreased from 51% to 32%, which indicates that enterprises now have relatively less temporarily available funds (National Bank of Ukraine, 2019b).
2.3. Analysis of the impact of the banking system performance on the financial equilibrium of enterprises in Ukraine

Table 1 presents a database for modeling the relationship between indicators of the performance of the banking system and the financial equilibrium of enterprises.
The given database is used to develop a multivariate regression using mathematical gnostic analysis in the R Console program shell. R is a programming language for statistical processing of data combined with mathematical analysis. The GWLS function (gnostic weighted least squares) was used to develop a model for forecasting the financial equilibrium of enterprises; the resulting regression equation is referred to below as model (1). Model (1) has a forecast error of the effective index lower than the standard deviation of its actual value, which indicates the adequacy of model (1) for predicting the financial equilibrium index of enterprises. Appendix A presents the statistical characteristics of this model. Model (1) can be considered valid, since the null hypothesis is rejected at a probability below 0.02.
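The gnostic GWLS routine belongs to a specialized R toolkit and is not reproduced here. As a rough, non-authoritative illustration of the weighted least-squares step that such a model rests on, the following Python sketch fits coefficients by the standard normal equations; the factor matrix, weights and index values are synthetic stand-ins, not the paper's data.

```python
import numpy as np

# Synthetic stand-ins: 10 yearly observations of 8 factors (X1..X8)
# and the financial equilibrium index of enterprises (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))   # factor values (hypothetical)
y = rng.normal(size=10)        # index values (hypothetical)
w = np.ones(10)                # observation weights (equal here)

# Add an intercept column and solve the weighted normal equations:
# beta = (X'WX)^(-1) X'Wy
Xd = np.column_stack([np.ones(len(y)), X])
W = np.diag(w)
beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

print("intercept:", beta[0])
print("coefficients for X1..X8:", beta[1:])
```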
The analysis of the model's coefficients indicates an inverse relationship between the amounts of bank lending to enterprises and their financial equilibrium. This is at variance with the economic role of lending in a market economy: instead of stimulating the progress of the real economy in Ukraine, bank lending is hampering it. The reason lies in the strict terms of lending, dictated by long-term economic instability with rising risks in banking and the restrictive nature of monetary policy. Sensitive to a wide range of negative external factors, banks are trying only to save their business rather than to develop it (Table 2). Restrictive monetary policy deforms the socio-economic role of banks; a reallocation of funds from the real sector of the economy to the banking sector takes place. By raising the discount rate, the central bank stimulates the profitability of bank assets by transferring to a borrower not only the risk of default on his own loan, but on other loans as well. It is much more important to increase the turnover of bank assets, but for this to happen the solvency of enterprises should be increased. However, the latter is now half the optimal level, with a tendency to decrease over the last decade (State Statistics Service of Ukraine, 2019). In Ukraine, the share of problem bank loans in the total portfolio grew twenty-four times over 2008-2017. At the beginning of 2019, more than half of the banks' loan portfolio consisted of non-performing assets (National Bank of Ukraine, 2019c). According to model (1), an increase in the share of non-performing assets in the loan portfolio of banks by 1 p.p. leads to a decrease in the index of financial equilibrium of enterprises by 0.03711 units on average.
Not being used efficiently in the real sector, banks' funds are directed to servicing the government budget deficit. Over the past ten years, the amount of domestic government loan bonds (DGLBs) in bank assets has grown sixty-four times, with state-owned banks as the main buyers of the bonds and the latter's share in their assets amounting to 30% (National Bank of Ukraine, 2019c). In 2017, the amount of banks' investment in government debt securities exceeded the amount of long-term loans granted to economic entities by four times. With each increase of this ratio per unit, the financial equilibrium index of enterprises is reduced by an average of 0.02918 units. Investing banks' funds into liquid and profitable government securities is explained, on the one hand, by the fact that DGLBs are refinanced.

An increase in the financial resources of the economy contributes to the greater diversification of the demand for money, strengthening the investment capacity of cash flows for the growth of the real sector, as well as creating financial grounds to prevent an artificial shortage of payment means. However, the economic environment in Ukraine is characterized by the opposite processes. The monetization of the Ukrainian economy in 2017 reached its lowest rate of the last ten years, 40.53%. For comparison, in the OECD countries this indicator is 117% and in China 200%, while in Ukraine it has fallen to the range typical for developing countries (40-60%) (World Bank, 2017). This indicates an aggravation of the problem of an artificial shortage of money, which hampers investment in the economy. As model (1) shows, growth of the monetization of the economy by 1 p.p. contributes to an increase of the index of financial equilibrium of enterprises by 0.03744 units. It is important to note that this factor is an incentive for the formation of the highest level of financial equilibrium of enterprises, which reflects their capacity for progressive development.
Mortgage lending is one of the internal financial incentives for national economic development.
In Ukraine, mortgage loans provided to households had unstable dynamics in 2008-2017 (Table 1). However, their positive impact on the financial equilibrium of enterprises is captured by model (1): with an increase in the growth rate of mortgage loans by 1 p.p., the index of financial equilibrium of enterprises increases by an average of 0.00083 units. This interaction is realized through a system of economic interconnections between different branches of the economy. Housing construction expands demand, first of all, for industrial goods. In turn, it stimulates the development of transport, trade and services, and also creates new jobs. The development of mortgage lending is a catalyst for the progress of the economy as a whole, but only when domestic production rather than imports increases. The high import dependence of Ukraine for most groups of manufactured products reduces the positive effect of bank mortgage lending.
The closer the partnership of business entities with banks, the greater the economic effect of synergy. This is illustrated by deposit services. Efficient financial management balances liquidity and profitability at an acceptable level of risk, and time deposits are one of the tools for resolving this financial collision. As model (1) shows, with growth of the share of time deposits of business entities by 1 p.p., the index of their financial equilibrium increases by 0.10503 units. The available liquidity of enterprises is thus effectively channeled through the banking sector into the sectors of the economy that need growth. (Relative contributions of the factors, from the accompanying figure data: level of the economy monetization (X8) 34%; share of non-performing assets in the banks' loan portfolio (X4) 28%; growth rate of loans granted by banks to economic entities (X1) 8%; share of term deposits of economic entities in the amount of resources attracted by banks from this category of clients (X5) 7%; average weighted annual interest rate on bank loans granted to economic entities in national currency (X3) 6%; ratio of the value of DGLBs in banks' assets to long-term loans granted to economic entities (X7) 5%; growth rate of mortgage loans provided by banks to households (X2) 1%.)
The processing of the raw data in the R Console software made it possible to explore the role of banking performance indicators in forecasting the index of financial equilibrium of enterprises in the medium-term (Figure 3) and the short-term (Figure 4) prospects.
The monetization of the economy, the share of non-performing assets in the loan portfolio of banks and the discount rate are crucial for the financial equilibrium of enterprises in the three-year perspective. This means that in the medium-term period priority is given to factors of a general economic nature, which are equally important both for banks and for enterprises.
It is interesting to compare how these priorities change in the short-term period (Figure 4). The provision of bank loans on affordable terms ranks first. This means that currently it is important for the financial management of enterprises to ensure the balance of cash flows, payment requirements and obligations.
Another feature of the short-term financial equilibrium is that the use of monetary instruments has a delayed effect. That is why the role of the discount rate in ensuring the short-term financial equilibrium of enterprises is minimal. Similarly, the growth rate of mortgage lending stimulates the development of economic sectors only in the long run. Thus, in the current period, the mutual influence of the financial positions of banks and enterprises is most pronounced. There is cross-sectoral interaction between the banking and the real sector of the economy. Quantitative growth that ensures the stability of banks and the financial equilibrium of enterprises stimulates a qualitative transformation of the environment for their performance.
The results of the forecast based on model (1) show that in the next two years, if the existing direction of monetary and economic policy is preserved, there is no ground to expect a significant increase in the index of enterprises' financial equilibrium (Figure 5).
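As a rough illustration of how such a fitted linear model translates projected factor changes into a change of the index, the sketch below applies the marginal effects reported in the text for X2, X4, X5, X7 and X8; the projected factor changes themselves are hypothetical placeholders, not values from the paper.

```python
# Marginal effects of model (1) reported in the text, in units of the
# financial equilibrium index per 1 p.p. (or per unit for X7) change.
REPORTED_EFFECTS = {
    "X2": 0.00083,   # growth rate of mortgage loans to households
    "X4": -0.03711,  # share of non-performing assets in loan portfolio
    "X5": 0.10503,   # share of time deposits of business entities
    "X7": -0.02918,  # DGLB-to-long-term-loans ratio
    "X8": 0.03744,   # monetization of the economy
}

# Hypothetical projected factor changes over the forecast horizon.
projected_changes = {"X2": 2.0, "X4": 1.5, "X5": -1.0, "X7": 0.5, "X8": 0.8}

delta_index = sum(REPORTED_EFFECTS[k] * projected_changes[k]
                  for k in projected_changes)
print(f"Projected change of the equilibrium index: {delta_index:+.5f}")
```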
CONCLUSION
The theory and the practice of interaction between the banking and the real sectors of the economy in Ukraine diverge. This results, first of all, from the negative impact of bank lending on the financial equilibrium of enterprises. The banking system is not able to provide economic entities with affordable funds because of incorrectly set priorities of government economic regulation. The assets of Ukrainian enterprises are seven times higher than the financial resources of banks. Therefore, the basis for development should be laid by economic policy, while monetary policy has to take adequate compensating and stimulating measures. In Ukraine, the situation is quite the opposite.
The banking and the real sectors of the economy share a common problem: violation of the capital accumulation process. The financial capacity of both is being exhausted, as confirmed by the decreased monetization of the Ukrainian economy. Therefore, today it is not a question of growth, but of the renewal of the socio-economic role of banks. This requires reducing the outflow of capital from the country through the channels of import of goods and export of national economy revenues. Enterprises need long-term investment through replenishment of their own capital rather than an increase in liabilities, because half of the banks' loans to economic entities are now non-performing.
There is a conflict of financial opportunities. Enterprises cannot be effective recipients of bank loans; in turn, banks cannot take on higher financial risks. The solution to the problem lies in the development and implementation of a government economic policy whose main task must be the innovative development of domestic production.
The role of the banking system in realizing these pathways may be enhanced by the implementation of a lenient refinancing regime for banks that actively grant investment loans. The liquidity of such banks will be increased. It is worth adding fiscal incentives, for instance, an exemption of the interest income on issued investment loans from taxation. This will enhance the financial interest of banks in expanding investment lending. Exemption of interest income on long-term deposits from taxation will contribute to the formation of long-term liabilities of banks. At the same time, the use of fiscal instruments to limit …
"year": 2019,
"sha1": "99230ab123f6ad3dd26db6e4596009f971f1b7b2",
"oa_license": "CCBY",
"oa_url": "https://www.businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/12235/BBS_2019_02_Yehorycheva.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "225fd2930d213cf2f4783d474b3cb1c58bb48529",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Mycobiota and mycotoxin-producing fungi in southern California: their colonisation and in vitro interactions
ABSTRACT Fungal contamination in water-damaged houses has become a major concern because of its potential health effects. During our survey of 100 water-damaged houses in southern California, we recorded 42 outdoor and 14 indoor fungal species throughout the year. Six commonly occurring indoor fungal species are Alternaria alternata, Aspergillus niger, Chaetomium globosum, Cladosporium herbarum, Penicillium chrysogenum and Stachybotrys chartarum. In damp building materials, S. chartarum was found to be associated with A. niger, C. globosum and P. chrysogenum but not with A. alternata and C. herbarum. Stachybotrys chartarum showed a strong antagonistic effect against A. alternata and C. herbarum and significantly inhibited their in vitro growth, but had no effect on A. niger, C. globosum and P. chrysogenum. Two trichothecenes produced by Stachybotrys sp., trichodermin and trichodermol, significantly inhibited spore germination and in vitro growth of A. alternata and C. herbarum but had no effect on A. niger, C. globosum, P. chrysogenum and S. chartarum. In damp building materials (drywall, ceiling tile, and oak wood), S. chartarum significantly inhibited the growth of A. alternata and C. herbarum and had no effect on A. niger, C. globosum and P. chrysogenum in these substrata.
Introduction
Although fungi are ubiquitous, not all fungi grow in the same environment. The populations of fungal species differ significantly between outdoor and indoor environments (Andersen et al. 2011, 2017, 2021). Certain fungi, such as species of Cladosporium and Penicillium, can grow in both wet and semi-dry materials, whereas species of Chaetomium and Stachybotrys need wet building materials to grow and multiply. Fungal exposures in damp or water-damaged houses have become a major concern because of their potential health effects, and there is a clear relationship between contaminated indoor environments and illness. Indoor air quality became an important issue in the 1960s, when researchers found that indoor pollutant levels in water-damaged houses can reach or exceed outdoor levels (Anderson et al. 1997; Shelton et al. 2002). It is now a well-known fact that indoor air quality is an essential part of our health, since we spend 90% of our time indoors inhaling approximately 15 m³ of ambient air every day (Sundell 2004). The occupants of wet, mouldy buildings show an increase in subjective complaints, and children in damp homes show higher rates of respiratory and other illnesses (Montgomery et al. 1989; Anderson et al. 1997; Hyvarinen et al. 2002). It has been reported that fungi colonising one area of the home can spread and contaminate the entire home (Hegarty et al. 2019). The list of symptoms generally consists of upper respiratory complaints, including headache, eye irritation, epistaxis, nasal and sinus congestion, cough, and gastrointestinal complaints (Platt et al. 1989). Exposure to fungal spores enhances the histamine release triggered by both allergic and non-immunologic mechanisms in cultured leukocytes (Mahmoudi and Gershwin 2000). Besides water damage, high temperature and relative humidity can also contribute to the higher occurrence of indoor fungi in the house (Christensen et al. 1995; Davies et al. 1995; Burge 2004).
We studied 100 water-damaged houses in southern California and compared the occurrence of fungal flora both inside and outside the houses over a period of one year. We found that the fungal populations varied significantly between outdoor and indoor damp environments and that certain fungi occur in both environments. The six most commonly occurring fungal species in the indoor air and damp building materials were Alternaria alternata (Fr.) Keissl., Aspergillus niger van Tieghem, Chaetomium globosum Kunze ex Steud., Cladosporium herbarum (Pers.) Link ex Gray, Penicillium chrysogenum Thom and Stachybotrys chartarum (Ehrenb. ex Link) Hughes (Chakravarty and Kovar 2013). Certain fungi occurred alone or in combination with other fungi. Interactions between different fungi in a water-damaged house are unavoidable, because spores of a single fungal species alone may contain various metabolites, and a water-damaged site is always a habitat of more than one fungal species (Anderson et al. 1997; Nielsen et al. 2001). Many mycotoxins are thought to be involved in chemical signalling between organisms or species, and the production of some mycotoxins may be stimulated or inhibited when microorganisms interact with each other (Nielsen et al. 2001). We found that S. chartarum grew alone or in combination with A. niger, C. globosum and P. chrysogenum in damp or water-damaged building materials, but not with A. alternata and C. herbarum. The frequent occurrence of S. chartarum with A. niger, C. globosum and P. chrysogenum in water-damaged building materials suggests that they often share their habitat in common damp or water-damaged building materials without any effect on the mycotoxins produced by S. chartarum. Stachybotrys chartarum produces secondary metabolites known as macrocyclic trichothecenes that are harmful to human and animal health and act at the cellular level (Nielsen et al. 2001; Brasel et al. 2005; Kock et al. 2021). Several compounds in the trichothecene family have been isolated, including trichodermin and trichodermol, which are toxic to several species of fungi (Ueno 1983; Hiratsuka et al. 1994; Hinkley et al. 2000; Skrobot et al. 2017).
The objectives of this study were to investigate (1) the population of fungal flora in outdoor environments in southern California and their occurrence in indoor environments, and (2) their colonisation and in vitro interactions in damp building materials.
Sampling location and fungal flora
Outdoor and indoor samples were taken from 100 water-damaged houses in Los Angeles County between 1 June 2020 and 3 May 2021. According to the US National Weather Service, National Oceanic and Atmospheric Administration (www.noaa.gov), summer (June to August) temperatures ranged from 35°C to 25°C during the daytime and 18°C to 15°C during the evening. Autumn (September to November) temperatures ranged from 30°C to 22°C during the daytime and 18°C to 12°C during the evening. Winter (December to February) temperatures ranged from 20°C to 18°C during the daytime and 10°C to 8°C during the evening. Spring (March to May) temperatures ranged from 25°C to 20°C during the daytime and 15°C to 10°C during the evening. The average rainfall during the study period was about 13 cm. Air samples for both outdoor and indoor environments were collected using a Zefon Air-O-Cell sampler at a flow rate of 15 litres per minute (www.zefon.com). The air passes through a collection device (cassette) that traps fungal spores on a cover slip; the cover slip carries a sticky, optically clear sampling medium that permanently collects and holds the spores. Samples were collected twice a month for a period of one year. After being returned to the laboratory, the cassettes were opened and the cover slips were removed and stained with cotton blue. Fungal spores were identified using a Nikon Labophot-2 compound microscope at magnifications up to 1000X, based on their morphological characteristics (de Hoog et al. 2000; Samson et al. 2004; Domsch et al. 2007).
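Spore-trap counts such as these are conventionally converted to airborne concentrations from the sampler flow rate and sampling time. The paper does not give its conversion or sampling duration, so the sketch below is a generic illustration with hypothetical numbers.

```python
def spores_per_m3(raw_count: int, flow_l_per_min: float,
                  minutes: float) -> float:
    """Convert a raw spore count to spores per cubic metre of air.

    Total air volume sampled (m^3) = flow (L/min) * time (min) / 1000.
    """
    volume_m3 = flow_l_per_min * minutes / 1000.0
    return raw_count / volume_m3

# Hypothetical example: 120 Cladosporium spores counted on a cassette
# run at 15 L/min (as in this study) for 5 minutes.
print(round(spores_per_m3(120, 15.0, 5.0)))  # -> 1600 spores/m^3
```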
Fungal species and mycotoxin
Surface samples from the 100 water-damaged houses were taken with culture swabs from drywall, ceiling tile and wood in areas where fungal growth was visible. Five samples were taken from each building material. Immediately after being returned to the laboratory, the culture swabs were aseptically streaked onto Petri plates containing four different nutrient media. Three were obtained from Hardy Diagnostics (1430 W McCoy Lane, Santa Maria, CA 93455, USA): malt extract agar with 0.1% chloramphenicol (Cat # W80), Dichloran-Glycerol-18 (Cat # W85) and potato dextrose agar (Cat # W60); the fourth was carrot agar (carrot 250 g, yeast extract 1 g, agar 20 g, distilled water 1000 mL, pH 6.5). The plates were incubated at 25°C in the dark for 7 to 10 days. The fungal species from the Petri plates were surveyed through DNA metabarcoding analyses of the rDNA ITS2 region.
Six species of fungi were consistently isolated and identified through DNA metabarcoding analysis of the rDNA ITS2 region. These species were Alternaria alternata, Aspergillus niger, Chaetomium globosum, Cladosporium herbarum, Penicillium chrysogenum and Stachybotrys chartarum. Isolated and identified fungi were deposited at the fungal culture collection bank of the Pasteur laboratory.
Two trichothecenes, trichodermin (PLT07) and trichodermol (PLT09), were used in this study. Trichodermin and trichodermol were extracted from a culture of S. chartarum. Stachybotrys chartarum was grown in liquid malt extract (10-litre still culture) for 3 weeks. The mycelia were removed from the liquid culture by filtration through cheesecloth. The culture filtrate was passed through an XAD-16 column, which was then washed with acetone (1 litre). The acetone eluate was concentrated under reduced pressure, diluted with water and extracted with ethyl acetate three times. Removal of the ethyl acetate under reduced pressure gave a yellow solid, which was chromatographed on a Sephadex LH-20 column (acetone). The crude fraction was chromatographed on thin-layer chromatography plates (1:1 acetone-hexane) and yielded two fractions. Chromatography of the first fraction on a silica gel column (3% acetone-dichloromethane) yielded 5.2 mg of trichodermin. Chromatography of the second fraction on silica gel (5% methanol-dichloromethane) yielded 2.3 mg of trichodermol. The spectroscopic data (IR, MS, ¹H NMR and ¹³C NMR) obtained for these two compounds were consistent with the literature (Ueno 1983).
In vitro antagonism study
Antagonism of S. chartarum against A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum was studied on malt extract agar (MEA), potato dextrose agar (PDA) and carrot agar (CA) media in 90-mm Petri plates. Stachybotrys chartarum (the slower-growing fungus) was inoculated with 5-mm agar plugs at the margin of the plate and allowed to grow at 25°C in the dark. For each medium, there were 15 replicates. Seven days later, 5-mm mycelial disks of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum from agar cultures were placed separately on the agar plates opposite S. chartarum, and the Petri plates were incubated as described above. Colony diameters of the fungi were measured 6 days later. The inhibition zones formed around A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were also measured after 6 days; each inhibition zone was measured in a straight line from the edge of the S. chartarum colony to the edge of the colony of the test fungus.
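Dual-culture data of this kind are usually summarized as mean zone widths and as percent growth inhibition relative to an unconfronted control. The paper reports only the zone ranges, so the helper below, with made-up replicate values, is just an illustration of that arithmetic.

```python
from statistics import mean, stdev

def percent_inhibition(control_diam_mm: float, treated_diam_mm: float) -> float:
    """Percent reduction of colony diameter relative to the control."""
    return 100.0 * (control_diam_mm - treated_diam_mm) / control_diam_mm

# Hypothetical inhibition-zone widths (mm) for 5 of 15 replicates of
# A. alternata confronted with S. chartarum on MEA.
zones = [9.5, 14.0, 18.5, 21.0, 23.5]
print(f"zone: {mean(zones):.1f} +/- {stdev(zones):.1f} mm")

# Hypothetical colony diameters: 62 mm alone vs 38 mm opposite S. chartarum.
print(f"growth inhibition: {percent_inhibition(62, 38):.1f}%")
```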
Effect of culture filtrates of S. chartarum on the in vitro growth of five species of fungi in agar diffusion plates
The effect of culture filtrates of S. chartarum on A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum was studied in agar diffusion plates. These plates were prepared from 90-mm Petri plates containing 40 mL of MEA, PDA or CA medium by removing 5-mm-diameter agar plugs from each of the four quarters of the plate. Five-mm-diameter agar plugs of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were separately inoculated in the centre of the agar diffusion plates and incubated at 25°C in the dark. The culture filtrate of S. chartarum was prepared by filtering 15-day-old liquid cultures (grown in carrot extract (CE) in 250-mL flasks on a shaker) through both Whatman No. 1 filter paper and a 0.45-μm Millipore filter and then drying on a rotary evaporator at 45°C. The evaporated sample was resuspended in 5 mL of distilled water. One mL of filter-sterilised concentrated culture filtrate of S. chartarum was added to the diffusion wells of each of the 15 replicate plates containing 3-day-old cultures of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum. The plates were incubated at 25°C in the dark. After incubation for 7 days, the zone of inhibition formed around each diffusion well was measured. Microscopic examinations of the hyphae of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were also made.
Effect of culture filtrates of S. chartarum on the in vitro mycelial growth of five species of fungi
For this experiment, S. chartarum was grown in liquid malt extract (ME), potato dextrose broth (PDB) and CE at 25°C in the dark. After 15 days, the mycelia were harvested on Whatman No. 1 filter paper, and the culture filtrate was collected and stored at 2°C in the dark overnight. Fifty mL of liquid medium (ME, PDB or CE) was autoclaved in 250-mL flasks for 15 min at 121°C. When cooled, the flasks were separately inoculated with five agar plugs (5-mm diameter) of actively growing mycelia of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum. The agar was removed with a sterile scalpel so that only mycelial mats were inoculated into the flasks. Five mL of culture filtrate of S. chartarum was then added to each flask; the control received 5 mL of sterile distilled water. There were 15 replicates for each treatment and each fungal species. The flasks were kept in the dark on a shaker at room temperature (24 ± 2°C). After a 4-week incubation period, the mycelia of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were harvested on Whatman No. 1 filter paper, oven-dried at 70°C for 48 h, and the dry weight of the mycelia was determined.
Effect of trichodermin and trichodermol on the spore germination and in vitro growth of six species of fungi
To test the effect of trichodermin and trichodermol on spore germination, A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were grown on 2% MEA at 25°C in the dark for 1 week, whereas S. chartarum was grown on CA for 2 weeks. Spore suspensions were prepared by transferring spores from the fungal cultures with a transfer loop into 9 mL of sterile distilled water, and the concentration of each suspension was adjusted to approximately 10⁵ spores/mL. Ten μL of spore suspension of A. alternata, A. niger, C. globosum, C. herbarum, P. chrysogenum or S. chartarum was mixed separately with 10 μL of filter-sterilised trichodermin or trichodermol in a cavity slide. The slides were kept moist by placing them on glass rods over moistened filter paper in Petri plates sealed with parafilm. There were 15 replicates for each treatment and each fungal species. Spore germination was recorded after 24 h of incubation at 25°C in the dark; 100 spores were counted for each treatment.
To test the effect of trichodermin and trichodermol on the in vitro growth of A. alternata, A. niger, C. globosum, C. herbarum, P. chrysogenum and S. chartarum, these fungi were grown in multiwell tissue culture plates (1.8 × 1.5 cm, diameter × length of individual wells). Two percent MEA was used for A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum; CA was used for S. chartarum. Twenty-five μL of trichodermin or trichodermol at 1, 10, 100 and 1000 ppm in acetone-dichloromethane was added separately to the surface of the nutrient medium in each well. For each concentration, 15 multiwells were used. In the control, only 25 μL of acetone-dichloromethane was used. All the multiwell plates were kept in a laminar flow hood for 1 min to allow the solvents to evaporate. Each agar well was individually inoculated with a 5-mm agar plug of A. alternata, A. niger, C. globosum, C. herbarum, P. chrysogenum or S. chartarum, and the multiwell plates were then wrapped with parafilm and incubated at 25°C in the dark. There were 15 replicates for each treatment and each fungal species. After 5 days of incubation, the colony diameters of A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum were measured and the mycelia were observed under a microscope. For S. chartarum, colony diameter was measured and microscopic observations were made after 10 days.
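A dose-response series like this is typically reduced to percent inhibition of colony diameter at each concentration relative to the solvent control. The values below are hypothetical, chosen only to show the calculation.

```python
# Hypothetical mean colony diameters (mm) of A. alternata after 5 days
# at each trichodermin concentration; control = solvent only.
control_mm = 17.0
treated_mm = {1: 16.8, 10: 11.2, 100: 6.5, 1000: 2.1}  # ppm -> diameter

for ppm, diam in treated_mm.items():
    inhibition = 100.0 * (control_mm - diam) / control_mm
    print(f"{ppm:>5} ppm: {inhibition:5.1f}% inhibition")
```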
Growth of fungal species in the building materials
Three building materials (powdered drywall, powdered ceiling tile and oak wood chips) were used in this study. The materials were crushed into coarse particles using a grinder. One hundred grams of drywall, ceiling tile or wood chips were soaked separately in ME in each of 95 flasks. After 1 h, the ME was drained and the flasks were autoclaved for 60 min at 121°C. There were five replicates for each treatment and each fungal species. When cooled, 30 flasks containing drywall, 30 flasks containing ceiling tile and 30 flasks containing oak wood chips were separately inoculated with five agar plugs (5-mm diameter) of actively growing A. alternata, A. niger, C. globosum, C. herbarum, P. chrysogenum or S. chartarum. Five control flasks received five agar plugs (5-mm diameter) without any fungal species growing on them. The flasks were kept in an incubator at 25°C in the dark. After 45 days of incubation, the flasks were removed from the incubator, the contents were observed under a microscope, and the fungi were re-isolated from the inoculated drywall, ceiling tile and wood chips.
Effect of S. chartarum on the growth of five fungal species in the building materials
The three building materials described above were used in this study. One hundred grams of drywall, ceiling tile or wood chips were soaked separately in ME in each of eighty 250-mL flasks. After 1 h, the ME was drained and the flasks were autoclaved for 60 min at 121°C. There were five replicates for each treatment. When cooled, 25 flasks containing drywall, 25 flasks containing ceiling tile and 25 flasks containing oak wood chips were aseptically inoculated with five agar plugs (5-mm diameter) of actively growing mycelia of S. chartarum. The flasks were incubated at 25°C in the dark and shaken periodically to fragment the mycelia of the fungus on the drywall, ceiling tile and wood chips. After 28 days, five flasks each of S. chartarum growing on drywall, ceiling tile and wood chips were separately inoculated with five agar plugs (5-mm diameter) of A. alternata, A. niger, C. globosum, C. herbarum or P. chrysogenum and returned to the incubator. Five control flasks containing only S. chartarum did not receive any further inoculation. The following treatments resulted: S. chartarum + A. alternata, S. chartarum + A. niger, S. chartarum + C. globosum, S. chartarum + C. herbarum, S. chartarum + P. chrysogenum, and S. chartarum alone. After 45 days of incubation, the flasks were removed from the incubator, the contents were observed under a microscope, and the fungi were re-isolated from the inoculated drywall, ceiling tile and wood chips.
Statistical analysis
Data were subjected to analysis of variance (Zar 1984). Individual means were compared using Scheffe's test for multiple comparisons in SAS software (SAS Institute Inc 2016). In the tables, means followed by the same letters (a, b, c and so on) for a particular fungal species against S. chartarum and its metabolites trichodermin and trichodermol are not significantly (P = 0.05) different from each other by Scheffe's test for multiple comparison.
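For readers without SAS, the same analysis can be approximated in Python: a one-way ANOVA followed by Scheffe's pairwise criterion computed from the ANOVA mean square error. This is a generic sketch with made-up replicate data, not the authors' SAS code.

```python
import numpy as np
from scipy import stats

# Hypothetical colony diameters (mm): three treatment groups of 15 replicates.
groups = {
    "control":  np.random.default_rng(1).normal(17.0, 1.0, 15),
    "10 ppm":   np.random.default_rng(2).normal(11.0, 1.0, 15),
    "1000 ppm": np.random.default_rng(3).normal(2.5, 1.0, 15),
}
data = list(groups.values())
k, N = len(data), sum(len(g) for g in data)

f_stat, p_val = stats.f_oneway(*data)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

# Scheffe's criterion: means i, j differ (alpha = 0.05) when
# |mean_i - mean_j| > sqrt((k-1) * F_crit * MSE * (1/n_i + 1/n_j))
mse = sum(((g - g.mean()) ** 2).sum() for g in data) / (N - k)
f_crit = stats.f.ppf(0.95, k - 1, N - k)
names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(data[i].mean() - data[j].mean())
        crit = np.sqrt((k - 1) * f_crit * mse * (1 / len(data[i]) + 1 / len(data[j])))
        verdict = "different" if diff > crit else "same letter"
        print(f"{names[i]} vs {names[j]}: |diff| = {diff:.2f}, crit = {crit:.2f} -> {verdict}")
```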
Fungal population
A total of 100 water-damaged houses were studied. A total of 42 fungal species from the outdoor air and 14 fungal species from the indoor air were consistently recorded throughout the year (Table 1). There was no seasonal variation of these fungi during the study period.
Fungal population size and diversity were greater outdoors than indoors. The indoor fungal species were recorded from the environments of water-damaged houses.
Species of Alternaria, Aspergillus, Chaetomium, Cladosporium, Penicillium and Stachybotrys were always present in water-damaged building materials (Table 2). Species of Alternaria and Cladosporium were absent from a substrate when Stachybotrys was present. On the other hand, species of Aspergillus, Chaetomium and Penicillium were associated with Stachybotrys in these water-damaged building materials (Table 2). Stachybotrys was isolated from the surface of the building materials of 83 water-damaged houses, whereas species of Alternaria, Aspergillus, Chaetomium, Cladosporium and Penicillium were consistently isolated from all 100 houses in all three building materials.
Inhibitory effect of S. chartarum towards five fungal species
The in vitro growth of A. alternata and C. herbarum was significantly inhibited when grown in dual culture with S. chartarum on all three culture media used (Table 3). The inhibition zones formed around the colonies ranged from 9.5 mm to 23.5 mm for A. alternata and from 8.0 mm to 25.5 mm for C. herbarum on the three nutrient media tested (Table 3). The growth of A. niger, C. globosum and P. chrysogenum was unaffected whether grown with or without S. chartarum, and no inhibition zones were formed around their colonies (Table 3).
Similarly, in agar diffusion plates, inhibition zones were observed when A. alternata and C. herbarum were treated with the culture filtrate of S. chartarum (Table 4). The inhibition zone ranged from 6.5 mm to 25.5 mm for A. alternata and from 15.0 mm to 31.5 mm for C. herbarum (Table 4). No inhibition zones were observed for A. niger, C. globosum and P. chrysogenum (Table 4). Control plates without the culture filtrate of S. chartarum showed no inhibition zones (Table 4).
Effect of culture filtrates of S. chartarum on the in vitro mycelial growth of five species of fungi
The mycelial growth of A. alternata and C. herbarum was significantly inhibited when treated with the culture filtrate of S. chartarum (Table 5), and vacuolation of their hyphae was observed under a microscope. The mycelial growth of A. niger, C. globosum and P. chrysogenum was not affected whether grown with or without the culture filtrate (Table 5), and no vacuolation of their hyphae was observed under a microscope when grown with the culture filtrate of S. chartarum.
Effect of trichodermin and trichodermol on the spore germination of six species of fungi
At 1 ppm, trichodermin and trichodermol did not affect the spore germination of A. alternata and C. herbarum (Table 6). Spore germination of A. alternata and C. herbarum was significantly reduced by trichodermin and trichodermol at 10, 100 and 1000 ppm (Table 6). Neither compound affected the spore germination of A. niger, C. globosum, P. chrysogenum or S. chartarum at 1, 10, 100 or 1000 ppm (Table 6).
Effect of trichodermin and trichodermol on the in vitro mycelial growth of six species of fungi
Trichodermin and trichodermol at 1 ppm had no effect on the in vitro growth of A. alternata and C. herbarum (Table 7). The in vitro growth of A. alternata and C. herbarum was significantly reduced by trichodermin and trichodermol at 10, 100 and 1000 ppm (Table 7). Neither compound affected the in vitro growth of A. niger, C. globosum, P. chrysogenum or S. chartarum at 1, 10, 100 or 1000 ppm (Table 7).
Effect of S. chartarum on the colonisation of building materials by five fungal species
All six species of fungi grew well on drywall, ceiling tile and oak wood chips (Table 8). Both A. alternata and C. herbarum failed to grow on these substrates when grown together with S. chartarum (Table 8). The growth of A. niger, C. globosum and P. chrysogenum was not inhibited when grown together with S. chartarum on these substrates (Table 8). Values in the tables are the means of 15 replicates; means followed by the same letters (a, b, c) in a row for a particular species are not significantly (P = 0.05) different from each other by Scheffe's test for multiple comparison.
Discussion
The present study shows that a higher number of fungal species is present in outdoor environments and that there is a significant difference between the fungal populations of indoor and outdoor environments. Indoor fungal flora is thought to be a function of dispersal from the outdoor environment together with growth and resuspension within the indoor environment (Ara et al. 2004; Horner et al. 2004; Adams et al. 2013; Atya et al. 2019; Andersen et al. 2021).
The source of the fungi that grow indoors is the outside environment, and many of these fungi are capable of finding suitable growth conditions indoors. In this study, species of Aspergillus and Stachybotrys were found in both outdoor and indoor environments; Hegarty et al. (2019), however, found these two genera only in the indoor environment. The present study shows that the indoor environment supported the growth of water-indicator fungi as well as leaf- and soil-borne fungi. Estensmo et al. (2021) concluded from their study that indoor fungal flora is structured by occupancy as well as outdoor seasonality, and that temporal variability should be accounted for in indoor air quality studies. Modern building materials, once moistened, may provide rich ecological niches for various fungi. It has long been observed that damp homes or homes with high humidity have a musty smell or obvious fungal growth. Building materials in southern California consist of wood, drywall, insulation and ceiling tile, and these materials provide an excellent food source for fungi. Once fungal spores land on these damp building materials, they germinate, grow and multiply within a few weeks. The occupants of such homes show an increase in subjective complaints, including upper respiratory, asthmatic, gastrointestinal and other illnesses (Davies et al. 1995; Anderson et al. 1997; Taskinen et al. 1997, 1999). In this study, we found that species of Alternaria, Aspergillus, Chaetomium, Cladosporium, Penicillium and Stachybotrys grow luxuriantly on these substrata. These fungi are water indicators and are present when there is a food source and high humidity. It is interesting to note that A. alternata and C. herbarum grew on a substrate when S. chartarum was absent and failed to grow when S. chartarum was present. On the other hand, colonies of A. niger, C. globosum and P. chrysogenum were often associated with S. chartarum. In vitro inhibition of fungi has been attributed to parasitism, or surface contact followed by penetration and formation of intercellular hyphae within the host hyphae, or to the production of antifungal compounds by the antagonist fungus (Demain 1984). In this study, S. chartarum was an effective antagonist against A. alternata and C. herbarum owing to the production of antifungal substances excreted into the culture media. Stachybotrys chartarum was capable of inhibiting the growth of both A. alternata and C. herbarum. Although S. chartarum did not parasitise or penetrate the hyphae of A. alternata and C. herbarum, the metabolites produced by S. chartarum appeared to kill these two fungi. The cytoplasm of A. alternata and C. herbarum coagulated and disintegrated when exposed to the S. chartarum culture, its culture filtrate, and trichodermin and trichodermol. The mycelial growth of A. alternata and C. herbarum was also inhibited in both agar and liquid culture by S. chartarum, and the crude extract of S. chartarum strongly reduced their growth. These results indicate that the inhibition of A. alternata and C. herbarum by S. chartarum was not hyphal parasitism but rather was chemical in nature.
All six fungal species described above are known to produce mycotoxins. Mycotoxins are secondary metabolites of fungi that represent a chemically diverse group of organic, non-volatile, low-molecular-weight compounds. Mycotoxins are usually produced when conditions favour fungal growth, such as moisture, pH, growth medium and temperature. Among these fungi, S. chartarum is considered one of the most toxic and produces cytotoxic compounds known as trichothecenes. Trichothecenes are secondary metabolites produced by species of Stachybotrys as well as by other fungi; they are harmful to human and animal health, causing a wide range of diseases (D'Mello et al. 1999; Shifrin and Anderson 1999; Gutleb 2002; Skrobot et al. 2017). Over 150 trichothecenes and trichothecene derivatives have been isolated and characterised (Gutleb et al. 2002). They are all non-volatile, low-molecular-weight sesquiterpene epoxides that share a tricyclic nucleus and usually contain an epoxide at C-12 and C-13, which is essential for toxicity (Desjardins et al. 1993). The trichothecene skeleton is chemically stable and is not degraded by heat or by neutral or acidic pH (Eriksen 2003). In this study, two trichothecenes produced by species of Stachybotrys, trichodermin and trichodermol, showed a significant inhibitory effect on A. alternata and C. herbarum at 10, 100 and 1000 ppm. Both spore germination and mycelial growth of these fungi were significantly inhibited by these compounds. Interestingly, A. niger, C. globosum, P. chrysogenum and S. chartarum were not affected by any concentration of trichodermin or trichodermol up to 1000 ppm. This indicates that the effect of these trichothecenes varies considerably among different fungal species.
The antifungal activity of S. chartarum was also confirmed on drywall, ceiling tile and oak wood chips, where it prevented the growth of A. alternata and C. herbarum. For antibiotic metabolite production to occur on these substrates, certain conditions for secondary metabolism would have to be met. Secondary metabolite formation usually occurs after a period of hyphal growth has taken place, thus establishing a certain hyphal mass, age or growth rate. It is interesting to note that the growth of S. chartarum was not affected by its own metabolites. Both A. alternata and C. herbarum colonised building materials abundantly in the same houses when S. chartarum was absent; however, they failed to colonise the same building materials when S. chartarum was present.
In this study, distinct interaction patterns between S. chartarum and A. alternata or C. herbarum were identified and shown to be determined by the antifungal compounds produced by S. chartarum. Trichodermin and trichodermol were toxic to both A. alternata and C. herbarum at concentrations of 10 ppm and above but had no effect on A. niger, C. globosum and P. chrysogenum up to 1000 ppm. This may explain why A. alternata and C. herbarum were absent from building materials colonised by S. chartarum while the growth of A. niger, C. globosum and P. chrysogenum was not affected.
This study shows that there exists an associated mycobiota on damp building materials based on interactions between these fungi. The first association of fungi was between A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum.
Although all of these fungi produce mycotoxins, they have found their niche on damp or wet building materials without inhibiting each other's growth. A second, strong association was seen between the mycotoxin-producing fungi A. niger, C. globosum, P. chrysogenum and S. chartarum, which seem to coexist on damp or wet building materials.
Conclusions
Fungal populations varied significantly between outdoor and indoor damp environments, and certain fungi occurred in both. In damp houses, fungal populations can reach or exceed outdoor levels, causing indoor fungal contamination. Modern building materials, once moistened, may provide rich ecological niches for various fungi, and damp homes or homes with high humidity often have a musty smell or obvious fungal growth. Indoor fungal flora is thought to be a function of dispersal from the outdoor environment together with growth and resuspension within the indoor environment. The source of the fungi that grow indoors is the outside environment, and many of these fungi are capable of finding suitable growth conditions indoors. In this study, the indoor environment supported the growth of water-indicator fungi as well as leaf- and soil-borne fungi. Six commonly occurring fungal species in damp or water-damaged houses in southern California are A. alternata, A. niger, C. globosum, C. herbarum, P. chrysogenum and S. chartarum. Stachybotrys chartarum was a strong antagonist against A. alternata and C. herbarum but had no inhibitory effect on the growth of A. niger, C. globosum and P. chrysogenum. Two trichothecenes produced by S. chartarum, trichodermin and trichodermol, significantly inhibited spore germination and in vitro growth of A. alternata and C. herbarum but had no inhibitory effect on A. niger, C. globosum, P. chrysogenum and S. chartarum. Stachybotrys chartarum significantly inhibited the growth of A. alternata and C. herbarum in the building materials (drywall, ceiling tile and oak wood) and had no effect on the growth and colonisation of A. niger, C. globosum and P. chrysogenum. Two associations of fungi appeared in the damp or water-damaged building materials: the first between A. alternata, A. niger, C. globosum, C. herbarum and P. chrysogenum, and a second, strong association between A. niger, C. globosum, P. chrysogenum and S. chartarum, which seem to coexist.
"year": 2022,
"sha1": "7f4fb094317e2a5414fac1b80b1ca7d8a9609a40",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21501203.2022.2104950?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "d71b386f142f846cba8444bf54208e0ce3526533",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Pilon: An Integrated Tool for Comprehensive Microbial Variant Detection and Genome Assembly Improvement
Advances in modern sequencing technologies allow us to generate sufficient data to analyze hundreds of bacterial genomes from a single machine in a single day. This potential for sequencing massive numbers of genomes calls for fully automated methods to produce high-quality assemblies and variant calls. We introduce Pilon, a fully automated, all-in-one tool for correcting draft assemblies and calling sequence variants of multiple sizes, including very large insertions and deletions. Pilon works with many types of sequence data, but is particularly strong when supplied with paired-end data from two Illumina libraries with small (e.g., 180 bp) and large (e.g., 3-5 Kb) inserts. Pilon significantly improves draft genome assemblies by correcting bases, fixing mis-assemblies and filling gaps. For both haploid and diploid genomes, Pilon produces more contiguous genomes with fewer errors, enabling identification of more biologically relevant genes. Furthermore, Pilon identifies small variants with high accuracy as compared to state-of-the-art tools and is unique in its ability to accurately identify large sequence variants, including duplications, and to resolve large insertions. Pilon is being used to improve the assemblies of thousands of new genomes and to identify variants from thousands of clinically relevant bacterial strains. Pilon is freely available as open source software.
Introduction
Massively parallel sequencing technology has dramatically reduced the cost of genome sequencing, making the generation of large numbers of microbial genomes accessible to a wide range of biological researchers. For example, a single Illumina HiSeq2500 has the ability to generate the equivalent of 300 bacterial genomes of sequencing data in a single day using only one flow cell. Comparisons of whole genome sequence data from hundreds of microorganisms have provided unprecedented views on all aspects of microbial diversity, and there is growing recognition that 'hundreds' of genomes is the minimum scale needed to address pressing questions related to microbial evolution, diversity, pathogenicity and resistance to antimicrobial drugs [1-4]. As such, the methods needed to analyze these large volumes of data - including assembling and calling variants relative to a reference - must be robust, accurate, scalable, and able to operate without human intervention. Several computational methods exist that make improvements to the quality of draft assemblies by recognizing and correcting errors involving (i) single bases and small insertion/deletion events (indels) [5], (ii) gaps [6], (iii) read alignment discontinuities [7], or by reconciling multiple de novo assemblies into an improved consensus assembly [8]. However, no single tool performs integrated assembly improvement of all error types. Computational tools for identifying sequence polymorphisms also exist [9,10], but focus primarily on identifying variants in the human genome [11], and particularly small events (SNPs and small indels) or structural rearrangements (chromosomal rearrangements) [11]. Furthermore, many of these tools require multiple steps to identify and subsequently filter variants to remove noise and false calls. In addition, tools able to identify variants that exceed the length of the sequence reads (read-length) being evaluated generally indicate the approximate chromosomal location and estimated size of the predicted variant relative to the reference, but often do not provide exact coordinates [11]. For insertions that are longer than the read-length - particularly common in the microbial world - current tools do not assemble and report the inserted sequence.
We introduce Pilon, an integrated software tool for comprehensive microbial genome assembly improvement and variant detection, including detection of variants that exceed sequence read-length. Conceptually, Pilon treats assembly improvement and variant detection as the same process (Figure 1). Both start with an input genome - either an existing draft assembly or a reference assembly from another strain - and use evidence from read alignments to identify specific differences from the input genome supported by the sequencing data. Applying those changes to a draft genome assembly yields an improved assembly, while reporting the changes with respect to a reference genome yields variant calls.
In genomic regions where read alignments are poor, Pilon is capable of filling out and correcting sequence through an internal local reassembly process. This capability allows Pilon to further improve assemblies by filling gaps and correcting local misassemblies, and it also enables Pilon to capture many large insertion, deletion, and block substitution variants in their entirety. These larger events are often completely missed or inaccurately characterized by conventional variant calling tools that rely solely on read alignments. Pilon has built-in heuristics to determine which corrections and calls are of high confidence, so no separate filtering criteria need be specified. This allows for the automated processing of hundreds or thousands of data sets representing different microbial species with minimal human intervention.
We benchmarked Pilon both as an assembly refinement tool and as a variant caller. For assembly refinement, we used finished reference genome sequences from Mycobacterium tuberculosis F11, Streptococcus pneumoniae TIGR4 and Candida albicans SC5314 as benchmarks to evaluate the accuracy of Pilon in improving draft assemblies. Pilon-improved assemblies were more contiguous and complete than non-Pilon-improved assemblies and contained improved sequences for genes implicated in pathogen-host interaction and virulence. We also evaluated Pilon's performance against tools specializing in assembly base quality improvement and gap filling, and, in each case, Pilon made more correct improvements while making far fewer mistakes than the other tools. For variant calling, we used read data from M. tuberculosis F11 to call polymorphisms against the finished M. tuberculosis H37Rv genome to evaluate Pilon's ability to accurately call polymorphisms. Pilon performed as well as or better than two state-of-the-art variant detection tools in calling small variants, and Pilon differentiated itself in its ability to identify large-scale variants.
Assembly improvement evaluation
Assessing accuracy on bacterial assemblies. To test the accuracy of Pilon's improvements on bacterial assemblies, we sequenced and created draft assemblies for two bacterial strains with finished references: S. pneumoniae TIGR4 and M. tuberculosis F11 (see Methods). These strains were chosen because they represent different GC content (40% and 66% GC content, respectively) and both possess genomic features that are known to confound assemblers, leading to mis-assembled and/or incomplete genome sequences [12-14]. Sequence reads from both libraries were aligned back to their respective draft assemblies using BWA [15], and Pilon was run with those alignments.
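A typical invocation of this align-then-polish workflow looks roughly like the sketch below. File names, thread counts and memory settings are placeholders; the BWA, samtools and Pilon options shown are standard ones, but consult each tool's documentation before relying on them.

```python
import subprocess

def run(cmd: str) -> None:
    """Run a shell command, stopping on the first failure."""
    subprocess.run(cmd, shell=True, check=True)

draft = "draft.fasta"  # draft assembly (or reference, for variant calling)

# 1. Index the draft and align each library, sorting to BAM.
run(f"bwa index {draft}")
run(f"bwa mem -t 4 {draft} frag_1.fastq frag_2.fastq | "
    f"samtools sort -o frags.bam -")
run(f"bwa mem -t 4 {draft} jump_1.fastq jump_2.fastq | "
    f"samtools sort -o jumps.bam -")
run("samtools index frags.bam")
run("samtools index jumps.bam")

# 2. Run Pilon with the fragment and jumping libraries; --changes and
#    --vcf report the corrections/variants, --fix all enables all fix types.
run(f"java -Xmx8G -jar pilon.jar --genome {draft} "
    f"--frags frags.bam --jumps jumps.bam "
    f"--output pilon_out --changes --vcf --fix all")
```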
To assess the benefits of running Pilon, we compared the original draft and Pilon-improved assemblies to each other and to their respective finished genome sequences. Pilon made significant improvements to the contiguity of both draft assemblies, increasing the contig N50 size by 443 Kbp for S. pneumoniae TIGR4 (see Table 1) and 196 Kbp for M. tuberculosis F11, even though the F11 draft assembly had been generated with assistance from a close reference. In addition, Pilon assemblies were more complete, with the M. tuberculosis F11 and S. pneumoniae TIGR4 Pilon-improved assemblies containing an additional 11,516 bp and 9,608 bp, respectively.
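Contig N50, the statistic used here, is the length of the shortest contig among the longest contigs that together cover at least half of the total assembly length. A minimal sketch of the standard computation:

```python
def contig_n50(lengths: list[int]) -> int:
    """Return the N50 of a set of contig lengths."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty input

# Toy example: contigs of 100, 80, 50, 30 and 20 kb (total 280 kb).
# The 100 + 80 kb contigs already cover >= 140 kb, so N50 = 80 kb.
print(contig_n50([100_000, 80_000, 50_000, 30_000, 20_000]))  # -> 80000
```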
Observed gains in genome coverage and contig N50 were principally due to Pilon's ability to recognize and fill (or partially fill) by local assembly "captured gaps", i.e., missing sequence between contigs within a scaffold. When run with default settings, Pilon does not introduce ambiguous bases or additional Ns during this process. Across the two draft assemblies, Pilon completely and accurately filled 17 of the 44 captured gaps (39% closure rate), including 8 gaps that represented more than 1 Kbp in sequence length (see Table S1). None of Pilon's gap closures were incorrect, though one was judged to be "no worse": the sequence used to bridge the gap was correct, but an error in the original assembly in one of the gap flanks was not detected by Pilon. An additional 14 gaps (32% of total captured gaps) were partially filled by Pilon, and 13 (93%) of those extensions were error-free. The one partial fill judged to be "Incorrect" involved a repetitive structure that Pilon extended into flanking sequence belonging to a different copy of the repeat.
We compared Pilon's ability to close gaps in these assemblies with two other tools commonly used for this purpose, IMAGE [16] and GapFiller [17] (see Table S1b). Pilon's overall gap closure rate was only somewhat higher than that of the other tools, but its accuracy was dramatically better. Across the two assemblies, IMAGE closed 13 captured gaps (30% closure rate), but only two of those closures were found to be correct by alignment with the reference (15% precision). Similarly, GapFiller closed 16 of the captured gaps (36% closure rate) in the two assemblies, but only four of its closures were correct (25% precision). In addition to filling captured gaps, Pilon also corrected 43 single bases and 4 small indels across both genomes, and all 47 changes were found to be correct by alignment against the reference (100% accuracy; see Table S2). By comparison, iCORN [18] made 47 single-base changes and 2 single-base deletions, but only 35 of its 49 changes (71%) were correct.
Optionally, Pilon can also make changes to genomic locations at which it finds significant evidence for more than one alternative, choosing the allele with the most support even where the evidence is insufficient to make a confident call. When run with this option on these assemblies, Pilon made 10 changes to ambiguous bases, but only 3 were verified to be correct. This option is turned off by default starting with Pilon version 1.8.
Pilon also detected and attempted to fix local mis-assemblies by reassembling contig regions that were suspected to be incorrectly assembled. Three of these regions were correctly fixed (see Table 1 and Table S1) and a fourth we classified as "No worse". For the latter, Pilon correctly identified a repetitive region within the original M. tuberculosis F11 draft assembly that contained extra sequence with respect to the M. tuberculosis F11 reference. However, Pilon's change introduced a deletion with respect to the reference, underscoring the difficulty of accurately assembling repetitive regions with short read data [13].
For the M. tuberculosis F11 and S. pneumoniae TIGR4 Pilon-improved assemblies, there were 13 and 4 regions, respectively, where Pilon detected a problem in the draft assembly, but was unable to provide solutions. In each of these cases, Pilon flagged the coordinates of the problematic region, and, in 10 of these cases, it also reported the length of the detected tandem repeat confounding resolution of the region. For example, Figure 2 shows scaffold00001 coordinates 3159800-3159898 of the M. tuberculosis F11 draft assembly, along with Pilon-generated genome browser tracks representing some of the internal metrics it used to identify this region as problematic. In this case, Pilon noted that it was unable to resolve a 57 bp tandem repeat, which enabled an experienced analyst to confirm the presence of a mis-assembly and accurately narrow the bounds of the unresolvable region. Manual comparison of the draft assembly with the reference revealed that there should have been three full and one partial copies of the 57 bp repeat in tandem, whereas the draft assembly only contained one full and one partial copy of the repeat.
Effect of assembly improvements on gene calls. To assess the impact of Pilon improvement on gene calls (i.e., functional interpretation of the genome), we examined Pilon improvements with respect to genes by investigating the regions that were affected by Pilon modifications and the effect of these modifications on coding sequences. Thirty-two genes and seven intergenic regions were impacted by Pilon changes to the M. tuberculosis F11 Pilon-improved assembly; of these, nearly all (95%; 37 of 39) were correct improvements. Nearly half (13) of the genes that were affected by a fix involved transposases that were completely or partially filled with sequence that perfectly matched the reference genome (see Table S3). One additional transposase had a single base pair corrected with perfect match to the reference. A particularly complex 13 Kbp region in M. tuberculosis F11 is highlighted in Figure 3. This region harbors three sets of transposases in close proximity that were not captured in the draft M. tuberculosis F11 assembly, but were accurately filled in by Pilon. Two of the gaps were completely closed, and the third transposase set was completely captured along with an additional gene. However, due to Pilon's conservative overlap requirement for closure (95 bp), that gap was not closed despite a 42 bp overlap in the extended flank sequences.
Of the remaining 19 genes, 6 were PE/PPE family protein encoding genes. Five corrections were perfect and, in one case (TBFG_11946), Pilon identified the problematic region, but could not completely resolve the problem. However, the correction that Pilon applied did not make the situation worse. Pilon also identified and accurately corrected a mis-assembly (highlighted in Figure S1) in which a gene had been truncated due to a collapsed repeat in the draft assembly. In S. pneumoniae TIGR4, 20 genes and 12 intergenic regions were affected by fixes from Pilon. A majority (15 of 20) of the improved genes were transposases, of which Pilon was able to completely or partially fill 8 that matched completely and perfectly with the reference; the remaining 7 were individual base pair corrections. Pilon was also able to partially fill other genes encoding repetitive cell wall surface proteins, including choline binding protein A [19] and pneumococcal surface protein A [20], both implicated in adhesion and virulence in S. pneumoniae.
Assessing accuracy on the assembly of the larger, polymorphic genome of C. albicans. To evaluate Pilon's ability to accurately improve assemblies of diploid genomes containing a high level of heterozygosity, we ran Pilon on an Illumina ALLPATHS-LG assembly of the SC5314 strain of C. albicans (Methods), for which there is a high quality reference curated by the Candida Genome Database (www.candidagenome.org). At 14.3 Mb, the C. albicans genome is 3- to 7-fold larger than the bacterial genomes evaluated here. It consists of 8 chromosomes that are present at diploid levels with an average of one SNP found at every 330-390 bases, although large regions of most chromosomes display loss of heterozygosity [21,22].
Pilon was capable of improving the assembly and added >33 Kb of sequence (see Table 1). While the increase in contig N50 was relatively small (56 Kb), Pilon completely or partially filled 61% (156 of 256) of the total captured gaps, including in both homozygous and heterozygous regions of the genome. Homozygous regions had a slightly higher fraction of completely closed gaps (33%; 8 of 24) as compared to heterozygous regions (20%; 46 of 232). Of completely filled gaps, 93% had full-length alignment to the reference (including 300 bp of their flanking sequences) at 94% sequence identity or higher. Less than 100% identity is to be expected when comparing a heterozygous genome assembly against a flat reference. In several of the lower-identity cases, most of the base differences were in the flanks present in the draft assembly rather than the filled gap itself, suggesting that the gaps may have been caused by the original assembler's inability to assemble sequence through a highly polymorphic region.
Pilon also identified and corrected regions in the reference assembly where the read alignment evidence disagreed, including 44 regions that were likely mis-assembled. The nearly 27,000 corrected single bases were mostly at heterozygous sites; Pilon identified these as potential bases to fix, as the majority of read evidence favored an alternate allele from the reference base in the draft. These positions represented about half of the ~70,000 heterozygous SNP positions in this Candida genome [23]. While we did not investigate every change that Pilon made to this assembly, our results indicate that Pilon is suitable to be run on larger diploid genomes and can improve the quality of a draft assembly, resulting in fewer and longer contigs and an improved gene set.
Assembly improvements in a production environment. Given promising results from the benchmarking experiments, we implemented Pilon in the Broad Institute's de novo genome assembly production pipeline and assessed its performance by comparing assembly metrics from Pilon-improved assemblies of 50 representatives of the Enterobacteriaceae (including Escherichia, Klebsiella, and Enterobacter) to non-Pilon-improved versions. Pilon reduced the mean number of contigs in the 50 assemblies from 33.7 to 20.9 (see Figure S2), a 38% reduction in total contigs representing closure of 47% of captured gaps. As a result, Pilon nearly doubled the contig N50 from 392 Kbp to 780 Kbp (99% increase; see Figure S3), capturing, on average, an additional 14,681 bp of sequence per assembly. This increase in genome size equates roughly to the addition of ~14 genes per genome (based on the average bacterial gene size of ~1 kb). Scaffold numbers were unchanged since, currently, Pilon does not attempt to join or break scaffold structures.
Variant detection evaluation
Assessing accuracy of polymorphism calls. To test the accuracy of Pilon's variant detection capability, we used BWA to align approximately 200-fold coverage of reads from the same M. tuberculosis F11 fragment and long insert libraries used in the assembly improvement assessment to the M. tuberculosis H37Rv finished reference genome. We generated two sets of variant calls with Pilon, one using both fragment and long insert reads as input, and one using only fragment reads. We also ran two popular variant detection tools, GATK UnifiedGenotyper (GATK-UG) and SAMtools/BCFtools (SAMtools), starting with the same aligned fragment BAM. All variant sites, including substitutions, deletions or insertions, were identified, and two categories of variants were assessed: single nucleotide polymorphisms (SNPs) and multi nucleotide polymorphisms (MNPs) greater than 1 bp. Predicted polymorphisms were compared to a curated truth set of variants produced by comparing the M. tuberculosis F11 finished genome to the finished M. tuberculosis H37Rv genome (see Methods), resulting in a list of 1,325 events (summarized in Table 2) of which the majority were SNPs. We then compared Pilon's performance to that of the other two variant detection algorithms.
Overall, Pilon performed better in identifying SNPs, including single nucleotide insertions and deletions, than did GATK-UG or SAMtools (Table 3). Pilon identified 8 to 11 percentage points (pp) more single nucleotide substitutions, 4 pp more single nucleotide deletions, and 4 to 8 pp more single nucleotide insertions from the curation set than did GATK-UG or SAMtools, respectively. Pilon's ability to precisely call single nucleotide substitutions was also high: only 3% of calls were not accounted for in the curation set, which was on par with the other two tools. Similarly, Pilon had perfect precision in calling single nucleotide insertions, and only 5% of single nucleotide deletion calls were not accounted for in the curation set.
For MNPs, we allowed for a combination of two or more smaller events in the prediction set to contribute to a larger variant since there may be equivalent ways of representing the changes as a series of smaller edits (see Methods). Pilon greatly outperformed the other two variant callers in accurately identifying variants that involved more than one nucleotide (see Table 3; bottom three rows). Pilon identified three times as many multi nucleotide insertions as either GATK-UG or SAMtools (63% versus 17 or 21% of curated events), but made slightly more false predictions. For multi nucleotide deletions, Pilon identified two times as many events from the curation set than did GATK-UG or SAMtools and made fewer unsupported calls. In addition, Pilon identified all six curated multi-substitution events while the other two tools missed at least one, even when multiple SNPs were accounted for in these regions.
We next examined how overlapping the three tools were in either missing or overcalling variants. Panel A of Figure 4 summarizes the total number of variants appearing in the curation set that could not be detected by one or more of the variant callers. Pilon uniquely missed only one curated variant, while SAMtools and GATK-UG missed many more (32 and 13, respectively). The majority of variants that were missed by Pilon were also missed by SAMtools and GATK-UG (52 events). In addition, all three tools made predictions that were not supported by the curation set (summarized in Panel B of Figure 4), but, among unique unsupported events, Pilon and GATK-UG had ~3-fold fewer than SAMtools. Altogether, there were only 21 predictions where two or more of the tools agreed that there should be a variant called, most of which were SNPs, although four of the seven events shared by Pilon and GATK-UG were multi-nucleotide indels (5-15 nt in length).
Given the broad definition of 'multi' in Table 3 (>1 bp), we also evaluated how well Pilon performed for variants that were larger than 50 bp (see Table 4). We chose 50 bp since it is a length that is larger than the size of events which short-read aligners are typically able to align, but shorter than the individual read length of the data used (101 bp). Overall, Pilon was able to accurately identify 74% of these large variants, including 100% of substitutions, 68% of insertions and 77% of deletions from the curated list. Of the eleven insertions that were missed by Pilon, eight involved a repetitive element (5 tandem insertions and 3 IS6110 insertions). Similarly, the six deletions not detected by Pilon involved deletion of one or more copies of a tandem repeat. Four of these tandem repeat regions were correctly reported by Pilon as possible tandem repeat variants in its standard output, but Pilon currently makes no attempt to provide a definitive copy number call in the presence of significant tandem repeat structures. Pilon also identified three events >50 bp that did not match variants in the curation set. These unsupported calls occurred within complex variable regions of the genome in which multiple nearby repeat structures prevented Pilon from correctly identifying the precise correct location or form of the events.
Since Pilon performed well in identifying and resolving MNPs and since GATK-UG and SAMtools were not explicitly designed to call these large variants, we sought to compare Pilon's MNP calls to those of methods specifically designed for MNP detection [11]. Though neither was described for use on microbial data, we evaluated how well BreakDancer [24] and CLEVER [25], two algorithms developed to detect large variants in eukaryotes, performed in calling MNPs on the M. tuberculosis test set. BreakDancer was unable to identify any MNP found in the curation set, and CLEVER identified 21 multi nucleotide deletions, of which only 1 corresponded to a variant in the curated list. No large insertions or substitutions were predicted (data not shown).
Evaluating Pilon variant calls without long insert data. It is unsurprising that Pilon was better able to call larger variants since it is optimized to use both fragment (or small) and long (or large) insert libraries. Since many sequencing projects do not have access to long insert data, and also to make a more direct comparison to existing variant callers that are not optimized to accept these data, we evaluated Pilon's performance using data from fragment insert libraries alone. To do this, we ran Pilon using the aligned fragment paired end reads from the M. tuberculosis F11 genome against the M. tuberculosis H37Rv finished reference genome. We then compared this output ("Pilon-frags") to the previously analyzed output from GATK and SAMtools and to Pilon output using data from both library types ("Pilon").
Pilon-frags performed well in identifying both single and multi nucleotide variants (see Table 3). Pilon-frags identified only 2 pp fewer single nucleotide substitutions, 4 pp fewer single insertions and 4 pp fewer single deletions as compared to the original Pilon output. Pilon-frags performance in calling SNPs was better than or on par with both GATK-UG and SAMtools. Remarkably, Pilon-frags was also able to identify a large fraction of the MNPs, with nearly identical performance to Pilon with long insert read data. Pilon-frags also performed very well in calling variants larger than 50 bp (see Table 4), with one fewer insertion call and 4 fewer deletion calls as compared to Pilon.
To better understand the qualitative differences in what Pilon and Pilon-frags reported, we examined the concordance between results for each variant type. For SNP calls, we observed high concordance in the outputs from Pilon-frags and Pilon (95.2%; 871 of 915 events) (see Table S5). Discordance in SNPs often involved a position where a variant was found in both Pilon runs, but was considered high quality in one and low quality in the other. In fact, only 7 of 915 SNPs (0.8% of total) were confidently predicted to differ between the two Pilon run conditions, suggesting that the value of long insert library data when calling SNPs is small. However, for SNPs within repetitive regions of the genome, long insert data appeared to be very helpful in disambiguating these events (Table S6). Small indel variant calls were also highly concordant for the two Pilon runs (93.3%; 56 of 60 events), and concordance was 78.5% (73 of 93 events) for large indels.
For larger variants, the discordance between Pilon with and without long insert data was larger (see Table S5), particularly in regions of the genome encoding transposable IS6110 repeat elements. While Pilon-frags detected many of these events, the sequences that were assembled and reported at these sites were often incomplete, as illustrated in Table S7. Given the length of the IS6110 repeat (~1.3 Kbp), the fragment pairs, only ~180 bp apart, were unable to span the entire length of the IS6110 elements, leading to two large indels being called, one coming in from each side of the IS6110 (e.g., Table S7, position 1,541,957). Pilon's improved ability to capture the full sequence of larger insertions is the primary value of including long insert read data for variant calling applications.
Assessing large-scale genome duplications. In addition to identifying substitutions and indels of various sizes, Pilon is able to identify areas in which the read evidence suggests additional copies of large genomic regions (>10 Kbp) compared with the input draft assembly or reference genome. These regions could indicate large collapsed repeats in an assembly improvement application or large genomic duplications in a variant detection application. To evaluate Pilon's ability to identify large duplications, we resequenced M. tuberculosis T67, a strain previously reported to harbor a large-scale duplication [26], using fragment and long insert libraries, and aligned the reads to the M. tuberculosis H37Rv finished reference. Pilon was then run to detect variants in T67 using H37Rv as a reference.
Pilon identified two duplication events that were >10 Kbp in size and separated by ~3 Kbp at M. tuberculosis H37Rv coordinates 3,494,063-3,551,070 (57 Kbp) and 3,554,192-3,712,284 (158 Kbp), resulting in a combined duplication of ~215 Kbp. The left gene boundary (Rv3128c) of the first predicted duplication and right gene boundary (Rv3427c) of the second predicted duplication corresponded to the upstream and downstream boundaries in the previously reported M. tuberculosis T67 duplication [26]. Upon closer inspection, the 3 Kbp intervening region contained two copies of the IS6110 element, which are routinely found in multiple copies within the M. tuberculosis genome (16 copies in H37Rv) [27]. Because these elements occur so frequently in the genome, the incremental coverage from the duplication was not sufficient for Pilon to identify them as part of the duplication event, breaking a true large duplication into two reported pieces.
Discussion
Pilon is an assembly improvement algorithm and variant caller that identifies differences between a draft assembly (or a closely related reference assembly) and the evidence supported by the sequencing data, resulting in either an improved assembly or a list of variants in VCF format. We have demonstrated Pilon's performance on several microbial genomes with varying GC content and different ploidy. Our results indicate that applying Pilon yields more contiguous and accurate assemblies. Furthermore, variant calls made by Pilon are of high quality when compared with other state-of-the-art tools, and Pilon's ability to find insertion and deletion events considerably larger than read lengths sets it apart from traditional variant calling tools. For many of the subtasks that Pilon performs, there are a variety of existing tools that might be used in sequence to achieve similar output as Pilon. However, using existing tools to achieve the full complement of analyses performed by Pilon would require implementation of a complicated workflow that hands data around to various tools, and development of post-processing algorithms to ensure that results from the various tools are in agreement. Pilon is a single, benchmarked tool that performs comparably well, if not better, than other tools that do only a fraction of the work. In addition, Pilon is easy to use and is successfully being utilized for assembly and variant detection of thousands of data sets in the Broad Institute's production pipelines.
We showed that the assembly improvements that were introduced by Pilon were both accurate and biologically relevant. In particular, several highly repetitive genes that were captured by Pilon are known to play a role in virulence and host-pathogen interactions [28,29]. To date, it has been difficult to study these genes comparatively, because they are often not captured or are only partially captured in draft assemblies. Furthermore, we were able to place more genes with repetitive features accurately in the genome. In the M. tuberculosis case, Pilon improved the sequence accuracy and placement of genes encoding transposases, which have an important role in genome reorganization in this species [27] and are used in strain typing schemes [30]. In addition, for M. tuberculosis, Pilon had a significant impact on the repetitive, GC-rich and not well understood PE and PPE genes, an expanded family of highly repetitive genes that account for about 10% of the gene repertoire in this species [31], and are implicated in pathogen-host interactions and virulence [32,33]. Pilon was able to resolve these repetitive structures because of its ability to use the long-distance mate pair information afforded by long insert libraries. In addition, data from long insert libraries often enable Pilon to completely fill in large sequence insertions and assemble across gaps.
For variant calling, the primary benefit of Pilon over other variant callers is its ability to capture large sequence polymorphisms and highly polymorphic local regions by performing a local assembly to generate complete sequences. By capturing intervening sequences in large variants, Pilon enables a more comprehensive view of the biological differences between strains, e.g., new genes that confer antibiotic resistance or virulence. In addition, by integrating the SNP and large sequence variant detection in a single tool, Pilon is less likely to erroneously call SNPs in regions that are affected by a large variant. Long insert libraries provide information that resolves larger, multi-nucleotide events, in particular by allowing Pilon to completely assemble inserted sequences. However, even without long insert libraries, most of these events are identified, albeit often with an incomplete alternative sequence.
Pilon was also able to perform comparatively well in calling small variants, including SNPs and small indels. While all three variant callers benchmarked in this study had similar precision in their calls, Pilon demonstrated better recall (fewer false negatives) on single-base polymorphisms. There are two reasons for this. First, there are a few local areas of the genome with very high polymorphism rates between F11 and H37Rv. When the local polymorphism rate is very high, the short-read aligners are unable to produce enough good alignments for any of the tools to call the base differences from the resulting pileups. However, Pilon was able to detect some of these problem areas and reassemble them into block substitutions, correctly capturing dozens of polymorphic locations the other tools were unable to resolve. Second, the use of long insert libraries allowed Pilon to make definitive calls inside some repeat regions by heavily weighting long insert reads which were unambiguously anchored to the flanks of the repeat area, capturing additional true SNPs. We note that GATK contains a highly sophisticated collection of tools for variant calling. However, several of its tools and their associated best practices for human variant calling are not applicable to many microbial projects, because they rely upon a database of known variant locations (such as those found at dbSNP) to perform recalibration. For microbial variant calling, the extent of variation across the microbial species under investigation is typically unknown, so no such catalog of variation is available a priori. There is also a fundamental difference between GATK and Pilon's approach to variant calling. GATK's UnifiedGenotyper is designed to be aggressive in detecting possible variants, relying on a user-controlled VariantFiltration step to filter out calls of lower confidence or quality to minimize false positives. Pilon, on the other hand, relies exclusively on internal heuristics to make a determination of which calls are confident. This makes Pilon easy to use "out of the box" in a highly automated environment, though it is less configurable than GATK for custom applications.
There are several areas where improvements could be made in future versions of Pilon with respect to assembly improvement and variant calling. First, it seems likely that there will be some benefit in iteratively applying Pilon in the assembly improvement and insertion variant calling process. Currently, Pilon builds out the gap/inserted sequence without re-aligning reads to build off the newly extended sequence. An iterative strategy has recently been used with some success by PAGIT [6]. Second, Pilon currently does not attempt to make fixes to larger structural issues within assemblies or make changes to scaffold architecture. With data from long insert libraries, it should be feasible to break and/or join scaffolds accurately. Third, tandem repeats continue to be challenging and may require a more specific approach. These regions are inherently difficult with short read data because there is no unambiguous information in the data to determine how many copies are present. This challenge is true for any de novo assembly; in order to resolve a tandem repeat, reads are needed that anchor into unique sequence on either side and read through the entire tandem repeat sequence. Lacking this, tandem copy numbers can only be speculated from mate-pairing information, depth of coverage calculations and library insert sizes.
Currently, Pilon is rather conservative in its corrections: (i) it uses a large cut-off to merge overlaps, and (ii) it will not attempt to resolve significant tandem repeat structures definitively. Notwithstanding the challenges encountered with tandem repeats, Pilon does an excellent job with other repetitive sequences: it is able to fix many genes of known repetitive gene families and to fill in many transposable elements. While we have evaluated Pilon's assembly improvements on both haploid and diploid genomes and obtained positive results for both, we acknowledge that there is still significant opportunity for future improvement in Pilon's handling of diploid genomes. Pilon could be enhanced to understand IUPAC ambiguity codes in its input genome and generate them in its output, and Pilon's heuristics for identifying insertions and deletions in diploid genomes could be improved, including its ability to recognize and report heterozygous indels. Finally, the local reassembly process could be improved to perform better in heterozygous regions. Even so, our results indicate that in its current form, Pilon is able to make valuable improvements to diploid genomes.
We have evaluated Pilon's performance using microbial genomes with finished references. However, there is no inherent limitation on the size of genomes to which Pilon can be applied. For example, we have used Pilon to improve assemblies of larger genomes, including 16 strains of the Anopheles genus (~200 Mbp diploid genome), but we were unable to verify the accuracy of Pilon's improvements since these genomes have not been finished. Pilon runs within minutes on small microbial genomes and will complete overnight on larger eukaryote genomes, such as Anopheles, which is similar to the tools included in our benchmarking.
Conclusion
Ultimately, Pilon has great utility and addresses an urgent need for better and more efficient methods to deal with the thousands of microbial genomes that are being produced. We have shown that Pilon performs well compared to the state of the art for both assembly improvement and variant detection, often outperforming these tools. Pilon is also unique in its user-friendly, integrated approach to assembly improvement and in its ability to identify large variants accurately in microbial genomes. As a recent addition to the production process for microbial genomes at the Broad Institute, Pilon has been used to automatically improve the quality of over 8,000 prokaryote and eukaryote genomes prior to their submission to GenBank, and it has been used to call variants on over 6,000 genomes.
Detailed algorithm description
Input requirements. Pilon requires an input genome in FASTA format and one or more BAM files containing sequencing reads aligned to the input genome. The BAM files must be sorted in coordinate order and indexed. For Illumina data, these BAM files are usually produced by an aligner such as BWA [15] or Bowtie 2 [34]. It is recommended that single best hit or random selection among equal best alignments is used as input into Pilon. Pilon can use three types of BAM files:
1. Fragments: paired read data of short insert size, typically <1 Kbp. Reads should be in forward-reverse (FR) orientation.
2. Long inserts: paired read data of longer insert size, typically >1 Kbp. Reads should be in forward-reverse (FR) orientation. Sequencing of long insert libraries that are generated using the standard Illumina mate-pair library preparation protocol typically results in reverse-forward (RF) read orientation, so they will need to be reversed in the BAM file.
3. Unpaired: unpaired sequencing read data.
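As a small illustration of these input requirements, the following Python sketch (using pysam; the file name is a placeholder, and the logic is ours, not Pilon's) coordinate-sorts and indexes a BAM before handing it to Pilon:

```python
"""Sketch: prepare a BAM to meet Pilon's input requirements
(coordinate-sorted and indexed)."""
import pysam

def prepare_bam(path: str) -> str:
    # Inspect the @HD header line to see whether the file is already
    # coordinate-sorted.
    with pysam.AlignmentFile(path, "rb") as bam:
        sort_order = bam.header.to_dict().get("HD", {}).get("SO", "unknown")
    if sort_order != "coordinate":
        sorted_path = path[:-4] + ".sorted.bam"
        pysam.sort("-o", sorted_path, path)  # coordinate sort, as Pilon requires
        path = sorted_path
    pysam.index(path)  # write the BAM index alongside the file
    return path
```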
To use Pilon with default arguments, read length should be 75 bases or longer and total sequence coverage should be 50x or greater, though deeper total coverage of >100x is beneficial. Pilon can also make use of longer reads, such as those from Sanger capillary sequencing and circular-consensus or error-corrected reads from Pacific Biosciences (PacBio) sequencing. However, Pilon is not currently tuned to the error model of raw PacBio reads, and their use may introduce false corrections.
Pilon makes extensive use of pairing information when it is available, so paired libraries are highly recommended. Pilon is capable of using paired libraries of any insert size, as it scans the BAMs to compute statistics, including the insert size distribution.
Improving local base accuracy and identifying SNPs. Pilon improves the local base accuracy of the contigs through analysis of the read alignment information. First, Pilon parses the alignment information from the input data and summarizes the evidence from all the reads covering each base position. Alignments can be less trustworthy near the ends of reads, especially in differentiating between indels and base changes, so Pilon ignores the alignments from a small number of bases at each end of the read, which is configurable at run time. For each base position in the genome, Pilon builds a pileup structure which records both a count and a measure of the weighted evidence for each possible base (A, C, G, T) from the read alignments. The contribution of base information from each read is weighted by the base quality reported by the sequencing instrument as well as the mapping quality computed by the aligner.
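To make the weighting concrete, a simplified sketch of such a pileup structure is shown below; the exact way base and mapping qualities are combined is an assumption here, and Pilon's internal weighting may differ:

```python
"""Illustrative sketch of a quality-weighted pileup, not Pilon's code."""
from collections import Counter

class Pileup:
    def __init__(self):
        self.count = Counter()   # raw observations per base
        self.weight = Counter()  # quality-weighted evidence per base

    def add(self, base: str, base_qual: int, map_qual: int) -> None:
        if base not in "ACGT":
            return
        self.count[base] += 1
        # One simple weighting scheme (assumed): the evidence a read
        # contributes is capped by the weaker of its base quality and
        # its mapping quality.
        self.weight[base] += min(base_qual, map_qual)
```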
When Pilon is building pileups from paired alignment data, only reads from "valid" pairs (i.e., those with the PROPER_PAIR flag set in the BAM by the aligner, indicating the reads of a pair align in proper orientation with a plausible separation) contribute evidence to the pileups. It is crucial that the PROPER_PAIR flag is set accurately by the tool that produced the BAM file. A count of non-valid alignments covering each position is also kept to help identify areas of possible mis-assembly. Pilon also keeps track of "soft clipping" in the alignments, which excludes sub-sections of a read which aligned poorly. A tally of soft-clip transitions is kept at each genomic location as another aid in identifying possible local mis-assemblies.
From the pileup evidence, Pilon classifies each base in the input genome into one of four categories:
1. Confirmed: the vast majority of evidence supports the base in the input genome;
2. Changed: the vast majority of evidence supports a change of the base in the input genome to another allele;
3. Ambiguous: the evidence supports more than one alternative at this position;
4. Unconfirmed: there is insufficient evidence to make a determination at this position due to insufficient depth of coverage by valid reads.
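A schematic version of this classification, building on the pileup sketch above and using illustrative depth and majority thresholds (not Pilon's actual cutoffs):

```python
"""Sketch: map pileup evidence onto Pilon's four per-base categories."""
from collections import Counter

def classify(weights: Counter, depth: int, ref_base: str,
             min_depth: int = 5, majority: float = 0.75) -> str:
    total = sum(weights.values())
    if depth < min_depth or total == 0:
        return "Unconfirmed"          # not enough valid coverage
    best, w = weights.most_common(1)[0]
    if w / total < majority:
        return "Ambiguous"            # more than one supported alternative
    return "Confirmed" if best == ref_base else "Changed"
```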
Ambiguous bases can occur for several reasons. If the genome is diploid, this is expected at heterozygous polymorphic locations. Difficult-to-sequence regions may result in a large enough fraction of sequencing errors to result in an ambiguous call. Finally, if the input genome has a smaller number of copies of a repeated genomic structure than occurs in the true genome (a "collapsed repeat"), the aligned reads may have originated from more than one instance of the repeat structure; where there are differences in the true instances of the repeat, the alignments can show mixed evidence.
Paired read information, especially information from long insert libraries which span a longer distance, is extremely valuable in helping resolve ambiguous locations due to collapsed repeats. Pairs for which one read lands inside a repeat element, but the other lands in unique anchoring sequence on the flanks of the repeat, help to resolve the true base content of the repeat structure. Data from long inserts will typically have a higher alignment mapping quality than short-range fragment pairs that lie completely within the repeat because the fragment pairs may not be able to be placed uniquely among the repeats. Since Pilon uses mapping quality to weigh the evidence from each read, the long inserts can often pick the correct haplotype variations of the repeat structure.
Pilon includes corrections to single-base errors in its output genome, and optionally, it can also change ambiguous bases to the allele with the preponderance of evidence.
Finding and fixing small indels. While recording the base-by-base pileup evidence, Pilon also records the location and content of indels present in the alignments. Indel alignments which represent equivalent edits to the input genome may appear at different coordinates in the alignments. For instance, if the input genome has the sequence ACCCCT, but the read evidence suggests one of the Cs should be deleted (ACCCT), each individual read alignment might show a deletion at any of the four C coordinates. Pilon shifts alignment indels to their leftmost equivalent edit in the input genome, so that the evidence from all the equivalent edits is combined into evidence for a single event at one location.
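The left-shifting of a deletion can be expressed compactly; the sketch below is a standard left-normalization routine consistent with the ACCCCT example above, not Pilon's code:

```python
def left_shift_deletion(ref: str, pos: int, length: int) -> int:
    """Shift a deletion of ref[pos:pos+length] to its leftmost equivalent
    position (0-based), so equivalent edits reported at different alignment
    coordinates pool their evidence at one location."""
    while pos > 0 and ref[pos - 1] == ref[pos + length - 1]:
        pos -= 1
    return pos

# For ACCCCT, deleting one C reported at any of positions 1..4
# normalizes to position 1:
assert all(left_shift_deletion("ACCCCT", p, 1) == 1 for p in (1, 2, 3, 4))
```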
Pilon makes an insertion or deletion call if a majority of the valid reads support the change, though that threshold is lowered somewhat for longer events, as it is typically more difficult for aligners to identify longer indels in short read data. Called indels from the input genome are fixed in Pilon's output genome.
Fixing mis-assemblies, detecting large indels, and filling gaps. Pilon is capable of reassembling local regions of the genome when there is sufficient evidence from the alignments that the contiguity of the input genome does not match the sequencing data. For assembly improvement applications, this could be an indication of a local mis-assembly. For variant calling applications, this could be caused by insertions or deletions too large to be reflected in the short read alignments.
Pilon tries to identify areas of potential local read alignment discontiguity in the contigs of the input genome by employing four heuristics: (i) a large percentage of reads containing a soft-clipped alignment at a given base position, (ii) a large ratio of invalid pairs to valid pairs spanning a location, (iii) areas of extremely low coverage and (iv) rapid drops in alignment coverage over a distance on the order of a read length. Once Pilon has identified an area for local reassembly, it treats the suspicious region (which may be a single base or a larger region) as untrusted, using alignments to the trusted flanks on both sides to identify a collection of reads that might contribute evidence for the true sequence in the suspicious region.
Unpaired reads with partial alignments to the flanks are included in the collection. For paired data, Pilon identifies pairs in which one of the reads is anchored by proper alignment to one of the flanks (e.g., with forward orientation on the left flank, or reverse orientation on the right flank), but whose mate is either unmapped or improperly mapped (e.g., to a remote location in the genome). For fragment pairs, both reads of such pairs are included in the collection; for long inserts, only the unanchored read is included in the collection.
From the collected reads, Pilon builds a De Bruijn assembly graph (default K = 47). For each k-mer in the reads, it uses the same pileup structure to record the bases which follow that k-mer, including weighting by base quality. Then, the pileups are evaluated to determine the link(s) to the next k-mer(s); this results in either a single base call, resulting in one forward link to the next k-mer, or an ambiguous call, resulting in two links forward and a branch in the assembly graph. This process automatically prunes the assembly graph of most sequencing errors, as infrequent base differences are unlikely to present enough evidence to affect the forward links. A minimum coverage cutoff of five for each forward link also prunes the assembly graph of many false links that could appear because of sequencing errors.
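A stripped-down sketch of this graph construction is shown below; base-quality weighting is omitted for brevity, while the k-mer size of 47 and the minimum-coverage cutoff of five follow the text:

```python
"""Sketch of the local De Bruijn graph: for each k-mer, tally the
following base and prune low-support links as likely sequencing errors."""
from collections import defaultdict, Counter

K = 47          # Pilon's default k-mer size
MIN_LINK = 5    # minimum coverage to keep a forward link

def build_graph(reads):
    nexts = defaultdict(Counter)
    for read in reads:
        for i in range(len(read) - K):
            nexts[read[i:i + K]][read[i + K]] += 1
    graph = {}
    for kmer, tally in nexts.items():
        links = [base for base, n in tally.items() if n >= MIN_LINK]
        if links:
            graph[kmer] = links  # one link = unambiguous; more = a branch
    return graph
```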
Pilon then tries k-mers from the trusted flanks as starting points to walk into the untrusted region from each side, building all possible extensions with up to five branching points (2^5 = 32 possible extensions). Tandem repeats with combined length >K cause loops in the local assembly graph, and they are detected by noting when the assembly walk reaches an already-incorporated k-mer. Pilon currently does not attempt to determine the copy number of such tandem repeats; instead, it will report the length of the repeat structure encountered in its standard output, and it will not attempt to close the two sides.
When no tandem repeat is detected, the resulting extensions from each side are combinatorially matched for possible perfect overlaps of sufficient length (2K+1) to be considered for closure. If there is exactly one such closure and it differs from the input genome, the assembled flank-to-flank sequence will replace the corresponding sequence in the input genome. Since the default k-mer size is 47, an overlap of 95 bases is required for closure.
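The closure test can be sketched as follows, using the 2K+1 overlap requirement from the text; the brute-force combinatorial matching shown is a simplified stand-in for Pilon's implementation:

```python
def close_gap(left_exts, right_exts, k=47):
    """Match flank extensions for a perfect overlap of at least 2K+1 bases
    (95 bp at the default K=47). Returns the unique flank-to-flank
    sequence, or None if there is not exactly one closure. Both extension
    lists are assumed to be in genome orientation."""
    min_overlap = 2 * k + 1
    closures = set()
    for left in left_exts:
        for right in right_exts:
            # Try every sufficiently long perfect suffix/prefix overlap.
            for olap in range(min_overlap, min(len(left), len(right)) + 1):
                if left[-olap:] == right[:olap]:
                    closures.add(left + right[olap:])
    return closures.pop() if len(closures) == 1 else None
```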
If there are no closures or more than one possible closure, Pilon will identify a consensus extension from each flank. If an optional argument is set to allow opening of new gaps, Pilon will replace the suspicious region with the consensus extensions from each flank and create a gap between them; otherwise, it simply reports that it was unable to find a solution. These reports identify areas that an assembly analyst might wish to investigate manually.
Pilon also attempts to fill gaps between contigs in a scaffold ("captured gaps") in the input genome. In order to fill captured gaps, Pilon employs the same local reassembly technique described above, treating the gap itself as the "untrusted" region. If there is a unique closure, the gap is filled; otherwise, consensus extensions from each flank are used to reduce the size of the gap. Pilon does not currently attempt to join or break scaffolds.
Large collapsed repeat (segmental duplication) detection. Pilon includes heuristics that attempt to flag areas indicative of large (>10 Kbp) collapsed repeats with respect to the input genome. These are characteristically large contiguous areas that appear to have double (or higher) read coverage compared to the rest of the genomic element being analyzed. Long insert data are excluded from this computation, as we have found long insert coverage to be far more variable across some genomes. Pilon does not attempt to fix these potentially collapsed regions, but it does report them in its standard output for further investigation.
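A simple version of this coverage heuristic is sketched below; the fixed window size and the exact 2x threshold are illustrative assumptions:

```python
def flag_collapsed_repeats(coverage, window=10_000, ratio=2.0):
    """Flag windows whose mean fragment-read coverage is ~2x (or more) the
    genome-wide mean, suggesting a >10 Kbp collapsed repeat or, in variant
    mode, a segmental duplication. `coverage` is a per-base depth list
    computed from fragment libraries only, per the text above."""
    genome_mean = sum(coverage) / len(coverage)
    flagged = []
    for start in range(0, len(coverage) - window + 1, window):
        mean = sum(coverage[start:start + window]) / window
        if mean >= ratio * genome_mean:
            flagged.append((start, start + window, mean / genome_mean))
    return flagged
```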
In variant calling applications, large segmental duplications in the sequenced strain with respect to the reference have the same signature as large collapsed repeats in a draft assembly; a duplicated region of the genome will result in double the number of reads covering that sequence. Pilon's reporting of large collapsed repeat regions can therefore be used to identify candidate segmental duplications.
Output files. Pilon generates a modified genome as a FASTA file, including all single-base, small indel, gap filling, mis-assembly and large-event corrections from the input genome. In the assembly improvement case, this is the improved assembly consensus. In variant detection mode, this is the reference sequence which has been edited to represent the consensus of the given sample more closely.
Pilon can optionally generate a Variant Call Format (VCF) file [http://vcftools.sourceforge.net/specs.html], which lists copious detailed information about the base and indel evidence at every base position in the genome, including two scores regarding variant quality: the QUAL column, and a depth-normalized call quality (QD) field in the INFO column. For additional details on the VCF format, we refer to the VCF specification referenced above. Changes generated by local reassembly, often triggered by larger polymorphisms in variant calling applications, are included as structural variant records (SVTYPE=INS and SVTYPE=DEL). Pilon can also, optionally, generate a "changes" file which lists the edits applied from input to output genome in tabular form, including source and destination coordinates and source and destination sequence. Finally, Pilon will optionally (with the -tracks option) output a series of visualization tracks ("bed" and "wig" files) suitable for viewing in genome browsers such as IGV [35] and GenomeView [36]. Tracks include basic metrics across the genome, such as sequence coverage and physical coverage, as well as some of the calculated metrics Pilon uses in its heuristics for finding potential areas of mis-assembly, such as the percentage of valid read pairs covering each location.
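Because the VCF is plain text, the structural variant records are straightforward to extract downstream; for example, a small parser for the SVTYPE=INS/DEL records (field layout per the VCF specification cited above; this helper is ours, not part of Pilon):

```python
def structural_variants(vcf_path):
    """Collect the SVTYPE=INS / SVTYPE=DEL records produced by Pilon's
    local reassembly from a Pilon VCF."""
    events = []
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):
                continue  # skip header lines
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _, ref, alt, qual, _, info = fields[:8]
            info_map = dict(f.split("=", 1) for f in info.split(";") if "=" in f)
            if info_map.get("SVTYPE") in ("INS", "DEL"):
                events.append((chrom, int(pos), info_map["SVTYPE"], ref, alt, qual))
    return events
```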
Pilon's standard output also contains useful information, including coverage levels, the percentage of the input genome confirmed, and a summary of the changes made, as well as specifically flagged issues that were not corrected: potential large collapsed repeat regions, regions of suspected mis-assembly for which no fix could be found, and detected tandem repeats that were not resolved.
Data generation
All sequencing data used for these experiments were generated on an Illumina HiSeq 2000 machine. For sequencing M. tuberculosis F11 and T67, two libraries were generated: one PCR-free 180 bp insert paired fragment library [37] and one 3-5 Kbp long insert library [38]. S. pneumoniae TIGR4 data also consisted of two libraries: one robotically size-selected 180 bp insert paired fragment library [37] and a 3-5 Kbp long insert library [38]. The sequencing data for C. albicans SC5314 were generated from three libraries: a robotically size-selected 180 bp insert paired fragment library [37], a gel-cut 4 Kbp long insert library [39], and a 40 Kb Fosill library [40]. Sequencing data were submitted to the Sequence Read Archive with identifiers SRX347313, SRX347312, SRX105400, SRX110130, SRX347317 and SRX347316.
Evaluation methods
Assembly improvement. All draft assemblies were generated using ALLPATHS-LG [41]. The draft assembly for Mycobacterium tuberculosis F11 utilized 100x of the 180 bp insert fragment library and 50x of the 3-5 Kb long insert library and was executed using ALLPATHS-LG v45395 with the ASSISTED_PATCHING=2.1 parameter and the M. tuberculosis H37Rv reference genome for assisting (GenBank accession: CP003248). The draft assembly for S. pneumoniae TIGR4 was created using ALLPATHS-LG v45925 with default parameters and using 100x of the 180 bp insert fragment library and 50x of the 3-5 Kb long insert library. The C. albicans SC5314 assembly utilized 100x of the 180 bp insert fragment library, 100x of the gel-cut 4 Kb long insert library and 50x of the Fosill library, and was assembled with ALLPATHS-LG v39846 utilizing the ASSISTED_PATCHING and HAPLOIDIFY options with the C. albicans SC5314 reference sequence for assisting.
We benchmarked Pilon's ability to close gaps in the draft bacterial assemblies against two tools built for this purpose, IMAGE v2.4.1 [16] and GapFiller v1.10 [17]. The same sets of sequencing reads used as input to Pilon were used for IMAGE (fragment library only) and GapFiller (fragment and long insert libraries). IMAGE was run in the manner implemented in the PAGIT [6] example scripts: 6 iterations, one with a k-mer size of 61, three with a k-mer size of 49, and two with a k-mer size of 41. GapFiller was run for 10 iterations with a libraries.txt file specifying a ratio r = 0.5 and library insert sizes computed by Pilon from the aligned BAMs.
To evaluate the quality of Pilon's single base and small indel corrections to the draft assemblies, we also ran iCORN v0.97 [18], the consensus sequence improvement tool in PAGIT, on the same draft assemblies using the same sets of fragment reads. iCORN was run in the manner implemented in the PAGIT example scripts, only changing the library insert size mean and range parameters. For TIGR4, we used a mean of 180 and a range of 120-300. For F11, we used a mean of 226 and a range of 100-500, since the PCR-free library preparation resulted in a wider range of insert sizes.
Fixes to the assemblies (Table S1) made by Pilon and the other assembly improvement tools were assessed by extracting the changed region of sequence in the output genome along with 300 bp flanks on each side. These extracted sequences were aligned to their respective finished reference genomes with BLASTN [42], and the accuracy of the changes was assessed by manually inspecting the alignments, judging each fix as "Correct" or "Incorrect". For larger block changes which resulted from local reassembly (gap filling and fixing of local mis-assemblies), a third category of "No worse" was established for situations in which: (i) the draft assembly contained a mis-assembly in the changed region, (ii) Pilon made a change attempting to fix the mis-assembly, and (iii) the fix was not entirely correct, but was no worse than the original problem.
For the assembly improvement statistics, 'Bases added' was calculated by tallying bases added at locations where fixes resulted in a net gain of bases during the gap filling and mis-assembly correction processes, as reported in the Pilon standard output by the "fix gap" and "fix break" lines.
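A sketch of this tally is shown below; note that the precise layout of Pilon's "fix gap"/"fix break" report lines is assumed here (fields like "+N" and "-N" giving sequence gained and removed) and may differ between Pilon versions:

```python
def bases_added(pilon_stdout_path):
    """Tally net bases gained from Pilon's "fix gap" and "fix break"
    report lines, mirroring the 'Bases added' statistic described above."""
    total = 0
    with open(pilon_stdout_path) as log:
        for line in log:
            if line.startswith("fix gap") or line.startswith("fix break"):
                tokens = line.split()
                # Assumed convention: "+N" / "-N" tokens report bases
                # inserted and removed by the fix.
                plus = sum(int(t[1:]) for t in tokens
                           if t.startswith("+") and t[1:].isdigit())
                minus = sum(int(t[1:]) for t in tokens
                            if t.startswith("-") and t[1:].isdigit())
                total += max(plus - minus, 0)  # only count net gains
    return total
```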
Variant calling. Variant calls were made using M. tuberculosis H37Rv (GenBank accession: CP003248) as the reference and the aligned M. tuberculosis F11 fragment and long insert reads as input data. From the sequenced fragment and long insert libraries, a random subset of read pairs was selected from each library to obtain an estimated 200x coverage of the M. tuberculosis H37Rv reference genome. Each library's reads were aligned to the M. tuberculosis H37Rv reference genome using BWA (version 0.5.9-r16) to generate BAM files suitable for input to the variant calling processes.
Pilon: Pilon was run with the -variant command line option, specifying the M. tuberculosis H37Rv reference genome and the above BAM file(s) as inputs. We evaluated two Pilon variant calling sets, one generated using both fragment and long insert library BAMs, and one using only the fragment library BAM.
GATK UnifiedGenotyper: The same aligned fragment library BAM file was used as input for variant calling with GATK UnifiedGenotyper, and the resulting calls were filtered with GATK's VariantFiltration step. These VariantFiltration settings filtered out variant calls at locations with fewer than 10 unambiguous read alignments, where 80% or more of the read depth had ambiguous mappings, where fewer than 80% of the reads supported the alternate allele, or where more than half of the reads contained spanning deletions. This filter expression was based on one previously used to call variants from the European Escherichia coli O104:H4 outbreak [38], adjusting depth and allele balance thresholds to yield the best performance tradeoff between false negative and false positive results on these data.
SAMtools/BCFtools: The same aligned fragment library BAM file described above was used as input for variant calling using SAMtools/BCFtools v0.1.19 according to recommendations found on the SAMtools webpage (http://samtools.sourceforge.net/mpileup.shtml). samtools mpileup was used to generate pileups in BCF format, and variants were called using bcftools with the -bcg option. Finally, variants were filtered using vcfutils.pl varFilter -d 10 to filter out calls at locations where the aligned coverage was less than 10 reads. We chose the minimum depth of 10 to be consistent with the filtering used for GATK UnifiedGenotyper.
CLEVER and BreakDancer: The aligned fragment library and the combined fragment and long insert libraries described above were used as input for CLEVER v2.0rc3 and BreakDancer 1.3.6. clever -sorted -use_xa was used to generate calls for CLEVER. bam2cfg.pl -g -h was used to generate the BreakDancer config file, which was then used with breakdancer-max.
Curating differences between F11 and H37Rv. Differences between the finished M. tuberculosis F11 (GenBank accession: CP000717) and M. tuberculosis H37Rv (GenBank accession: CP003248) references were curated by employing a banded Smith-Waterman algorithm to align syntenic regions of the two genomes. Alignments were run separately for each syntenic portion of the two sequences. When the alignment diverged significantly, the program was run again to pick up at the next syntenic block. The resulting alignments over syntenic regions identified coordinates of small blocks of mismatches, typically only a single base long, but in some cases up to 289 bp. Areas where there was a significant break in synteny or where the banded Smith-Waterman alignment produced questionable results were analyzed using either Nucmer [43], ClustalW [44] or Blast2 [45] to verify the nature of the difference and to obtain more accurate coordinates. In some cases, the alignments proved too difficult to obtain accurate coordinates from, but approximate definitions of these differences were obtained. The resulting table of differences between the two references (Table S8) has each difference annotated with its most likely coordinates, with two exceptions where the variation between the strains was so high that it was impossible to know whether each difference was captured individually. The two highly variable regions corresponded to coordinates 1636857-1639600 and 3928967-3949709, which, together, account for less than 0.5% of the M. tuberculosis H37Rv genome. These regions were excluded from all variant analyses.
Variant Assessment. The resulting variant calls were compared to a manually curated set of differences between M. tuberculosis F11 and M. tuberculosis H37Rv as described above. Based on this comparison, recall and precision were calculated according to the strategy described in [46]. Briefly, recall is a measure of completeness of calls against the curated truth set; false negatives lower the recall score. Precision is a measure of the accuracy of the calls made; false positives lower the precision score. Specifically, recall = tp_c/(tp_c+fn) and precision = tp_p/(tp_p+fp), where tp_c is the number of true positive calls based on the curation set, tp_p is the number of true positive calls from the set of predicted variants, fp is the number of false calls from the set of predicted variants, and fn is the number of missed calls based on the curation set. The F-measure is the harmonic mean of the recall and precision rates, providing an "overall" metric that captures the tradeoff between recall and precision.
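These definitions translate directly into code; the counts in the usage line below are hypothetical, chosen only to show the computation:

```python
def scores(tp_c, tp_p, fp, fn):
    """Recall, precision, and F-measure exactly as defined above:
    recall = tp_c/(tp_c+fn), precision = tp_p/(tp_p+fp), and the
    F-measure is their harmonic mean."""
    recall = tp_c / (tp_c + fn)
    precision = tp_p / (tp_p + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    return recall, precision, f_measure

# Hypothetical example: 900 curated events recovered, 50 missed,
# 910 predictions correct, 30 unsupported.
print(scores(tp_c=900, tp_p=910, fp=30, fn=50))  # ~(0.947, 0.968, 0.958)
```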
True positives in the prediction set had at least one variant site called in the curation set. For variants that affected more than a single base in the curation set (i.e., multi nucleotide polymorphisms), we allowed for a combination of two or more smaller events in the prediction set to be marked as correct, since tools may call a densely polymorphic region as a block substitution rather than a series of equivalent single-base changes. For example, for the multi nucleotide substitution ACCGT→CCTGA in the curation set, three SNP calls at the same location, A→C, C→T and T→A, would be counted as a true positive. In addition, predicted variants that affected more than 20 bases were allowed to match only partially with the curation set because there can be different ways to manually curate sites that vary between the M. tuberculosis F11 and H37Rv finished reference genomes. In particular, resolution of tandem repeats was challenging for both prediction and curation of variants since it was difficult to determine which copy of the repeat was inserted or deleted. In these cases, we counted the variant as correct if a similar event was predicted within 100 nucleotides.
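The proximity rule for large events can be sketched as a simple predicate; the (position, length) representation of events is our assumption:

```python
def matches_curation(pred_pos, pred_len, curated, tolerance=100):
    """Apply the matching rule above for variants affecting >20 bases: a
    prediction counts as correct if a similar curated event lies within
    100 nucleotides. `curated` is a list of (position, length) tuples."""
    return any(abs(pred_pos - c_pos) <= tolerance
               for c_pos, c_len in curated
               if c_len > 20 and pred_len > 20)
```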
Availability
Pilon is open source software available under the GNU General Public License Version 2 (GPLv2). Pilon is written in the Scala programming language, and it makes extensive use of the open source Picard Java libraries (http://picard.sourceforge.net) for parsing BAM and FASTA files. Pilon is compiled into a single Java Archive (JAR) file which runs inside a 64-bit Java Virtual Machine environment. Binary and source distributions can be obtained from GitHub (http://github.com/broadinstitute/pilon/releases/). Results in this paper were obtained with Pilon version 1.5. A summary of all command-line options is available in Table S9.
Online documentation, as well as two example data sets to test Pilon on the same data as was used in this manuscript, are available from the web site http://broadinstitute.org/software/pilon/. We provide the Streptococcus pneumoniae TIGR4 data set as an assembly improvement example and the M. tuberculosis F11 data set as a variant calling example.
Figure 1.
Figure 1. Simplified overview of the Pilon workflow for assembly improvement and variant detection. The left column depicts the conceptual steps of the Pilon process, and the center and right columns describe what Pilon does at each step while in assembly improvement and variant detection modes, respectively. During the first step (top row), Pilon scans the read alignments for evidence where the sequencing data disagree with the input genome, makes corrections to small errors and detects small variants. During the second step (second row), Pilon looks for coverage and alignment discrepancies to identify potential mis-assemblies and larger variants. Finally (bottom row), Pilon uses reads and mate pairs which are anchored to the flanks of discrepant regions and gaps in the input genome to reassemble the area, attempting to fill in the true sequence including large insertions. The resulting output is an improved assembly and/or a VCF file of variants. doi:10.1371/journal.pone.0112963.g001
Figure 2. Example Pilon-generated genome browser tracks. This region was flagged by Pilon as containing a possible local mis-assembly, but Pilon was unable to determine a fix due to a tandem repeat sequence. The tracks shown here include: the Pilon Features track, indicating the extent of the region flagged by Pilon as containing a potential mis-assembly; the Valid Coverage track, indicating the sequence coverage of valid read pair alignments excluding the clipped portions of the alignments; the Clipped Alignments track, indicating the number of reads soft-clipped at each location; and the Pct Bad Alignments track, indicating the percentage of the total reads aligned to each location which are not part of Valid Coverage. These tracks are created with the '--tracks' command-line option. Together, these tracks reveal the true bounds of the mis-assembly, and indicate that there are likely missing copies of the tandem repeat in the draft assembly. In this case, manual analysis revealed the draft assembly was missing two of three full copies of a 57-base tandem repeat. doi:10.1371/journal.pone.0112963.g002
Figure 4. Venn diagram of the overlap in false negative (A) and false positive (B) calls by the three variant detection tools, Pilon, GATK UnifiedGenotyper and SAMtools. False negative calls are the number of unique events from the curation set that were missed by each tool. Overlaps in the Venn diagram show the number of variants that were missed by multiple tools. False positive calls are the number of predictions from M. tuberculosis F11 that were not supported by the curation set. Overlaps indicate predictions that were shared among tools. doi:10.1371/journal.pone.0112963.g004
Table 1. Summary assembly statistics before and after Pilon improvement.
Table 2. Summary of variant types curated in the M. tuberculosis H37Rv and M. tuberculosis F11 finished genome comparison.
Table 3. Recall and precision metrics for M. tuberculosis F11 variants called against M. tuberculosis H37Rv by Pilon (with and without long insert library data), GATK UnifiedGenotyper and SAMtools.
Table 4. Pilon's performance in calling variants in M. tuberculosis F11 that were larger than 50 nt.
Variants are divided by type across the rows. Missed variants are those that were annotated in the curation but were not identified by Pilon. The called variants are those that were annotated in the curation that Pilon accurately identified. doi:10.1371/journal.pone.0112963.t004
Table S1. Assessment of gap filling and local reassembly fixes. b: Comparison of assembly gap closures among Pilon, IMAGE, and GapFiller. (PDF)
Table S2. Assessment of base corrections by Pilon and iCORN. (PDF)
Table S3. Detailed information regarding the gene-based assessment of F11 assemblies. (XLSX)
Table S5. Summary of SNPs, small in-dels, and large in-dels in M. tuberculosis F11 relative to H37Rv. (PDF)
Table S6. Example SNPs only found with regular Pilon. (PDF)
Table S9. Pilon Command Line Arguments. (PDF) | 2018-04-03T03:36:32.664Z | 2014-11-19T00:00:00.000 | {
"year": 2014,
"sha1": "9ecb5716024a3df6da5f461a8df1944323953db7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0112963&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ecb5716024a3df6da5f461a8df1944323953db7",
"s2fieldsofstudy": [
"Computer Science",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
261571577 | pes2o/s2orc | v3-fos-license | A Comparative Analysis of CNN Models in Deep Learning for Leaf Disease Detection
There is huge growth in the population of India, and therefore a need for more food. In farming, however, unavoidable plant diseases cause a major problem. There is a need to figure out how much of the food produced by farmers is affected, because in the coming future a greater number of people are to be fed. Plant leaf disease detection is very important, because farmers make their money depending on the amount of growth in the crop. Here CNN comes to help: it is a tool smart enough to identify diseases and their types. In order to detect disease in plants, a Convolutional Neural Network (CNN) with the help of image processing is used here in our paper. A Convolutional Neural Network is an artificial neural network specially designed to deal with image recognition [1] tasks when an image is input. The idea is to use CNN models to spot diseases in apple, grape, corn, and potato plants. We propose an algorithm for this task. This paper mainly focuses on the CNN, AlexNet, and VGG16 models in deep learning, which will be compared in the study.
Introduction:
The main food source of India is crop production. To improve the productivity of crops we are taking the help of advanced technology. Many varieties of plants and crops are cultivated by Indian farmers. Therefore a lot of research has been centred on searching for ways to cultivate more food which is healthy. In the present scenario, depending completely on human speculation to efficiently detect diseases in plants is not a good choice. Yet modern advances in computer vision provide solutions to the issues faced with plants and leaves that are rapid, consistent and accurate. In recent years an outstanding amount of research has been done in deep learning in fields like image recognition, sentiment analysis and speech recognition. Therefore a convolutional neural network will be the most effective means to detect diseases in leaves and plants and solve the problem raised [2]. An algorithm is proposed for leaf disease detection. We will be importing the needed libraries in the initial stage: OS, TensorFlow, pandas, Matplotlib, cv2, Keras, random, NumPy, etc. Next, a function is defined to label images and to load the training data. The images are categorized based on the plant diseases' code names. Resizing is done to a group of random images while training, and matching labels are added consequently.
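A minimal sketch of this loading and labeling step might look as follows; the directory layout, image size, and label codes are assumptions for illustration, not taken from the paper.

```python
import os
import cv2
import numpy as np

IMG_SIZE = 128  # assumed resize target; adjust to the network's input shape

def load_training_data(data_dir, label_map):
    """Read leaf images, resize them, and pair each with its disease label."""
    images, labels = [], []
    for class_name, class_code in label_map.items():
        class_dir = os.path.join(data_dir, class_name)
        for file_name in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, file_name))
            if img is None:
                continue  # skip unreadable files
            images.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE)))
            labels.append(class_code)
    return np.array(images) / 255.0, np.array(labels)

# Hypothetical label map from disease code names to numeric labels
X, y = load_training_data("dataset/train", {"apple_healthy": 0, "apple_scab": 1})
```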
Leaf images are added for testing, and a CNN algorithm adds a series of layers for classification; a minimal model along these lines is sketched below. These layers, such as convolutional and pooling layers, are used for process optimization in every phase, and the required output is obtained from the dense layer. The learning rate is used to affect the rate of our model's learning process. Later the data is loaded into our built model to designate the nature of a leaf as healthy or diseased using a variable, and afterwards the model is saved in this variable. This variable is used for the detection.
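A minimal Keras version of the layer stack described above is sketched here; the number of layers, filter counts, and learning rate are unspecified in the paper, so the values below are placeholders (X and y are the arrays from the loading sketch above).

```python
from tensorflow import keras
from tensorflow.keras import layers

# X, y: image array and labels from the loading sketch above
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolutional feature extraction
    layers.MaxPooling2D((2, 2)),                   # pooling for optimization
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),         # dense output: healthy vs. diseased
])

# The learning rate controls how fast the model learns (placeholder value)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=10, validation_split=0.2)
model.save("leaf_disease_model.h5")  # the saved model is reused for detection
```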
LITERATURE SURVEY
Nishant Shelar and colleagues spotted infections in leaves and categorized them according to the diseased leaf categories using various learning algorithms. The network acquires and analyzes data from leaf photos in order to determine healthy or diseased leaves of medicinal plants using image processing methods. [1] The paper by Sumit and colleagues introduces a novel approach to image recognition by utilizing deep learning techniques. The researchers investigated three distinct neural network architectures: Faster R-CNN, R-CNN, and SSD. Their efforts resulted in a commendable validation accuracy of 94.6%. This proposed method is capable of identifying a range of diseases affecting leaves from apple, cherry, grape, peach, pepper, potato, strawberry, and tomato plants. [2] S. Bharath, K. Vishal Kumar, R. Pavithram, T. Malathi, "Detection of Plant Leaf Disease using CNN". Image processing and a CNN model can be used to improve plant disease detection techniques. It uses data on 38 different plant leaf diseases for prediction. The conclusion of this paper is to predict the pattern of plant disease using CNN. [3] The investigation led by Murk Chohan and colleagues was centred on the identification of diseases in plant leaves. To enhance the dataset's sample size, they implemented augmentation techniques. For testing purposes, 15% of the data from the Plant Village dataset was employed. [4] Ali Arshagi and colleagues undertook a study in this paper, focusing on the analysis of five distinct categories of potato diseases, including Healthy, Black Scurf, Common Scab, Black Leg, and Pink Rot. The researchers proceeded to evaluate the outcomes produced by various disease classification methods, including AlexNet, GoogLeNet, VGG, and R-CNN. [5]

PROPOSED METHODOLOGY:
We cannot feed the image data directly to our neural network. To prepare image data for input into a neural network, a series of pre-processing steps are required. This involves converting the images into a format that the network can understand, which often means transforming them into NumPy arrays.
Feature Extraction:
In Convolutional Neural Networks (CNNs), filters are employed to extract features from images. The remarkable aspect is that during training, the network autonomously learns these filters, determining their parameter values through an intrinsic learning process that adjusts the filter contents. As a hyperparameter, you prescribe the desired quantity of filters and their dimensions, influencing the architecture's capability to discern intricate image characteristics.
CNN Classification:
The classifier for image classification is built using a special kind of neural network called a CNN. Its main job is to look at pictures and decide which category they belong to. It learns to pick out important parts from the pictures and uses them to figure out the right category, like telling what's in a photo. When you give the network a picture in the form of numbers (a NumPy array), it gives you a number between 0 and 1 that tells you how sure it is about its decision.
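For example, classifying a single new leaf image with the saved model could look like this; the 0.5 decision threshold and the file names are illustrative assumptions.

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("leaf_disease_model.h5")  # model saved earlier

img = cv2.imread("new_leaf.jpg")  # hypothetical input image
img = cv2.resize(img, (128, 128)) / 255.0
prob = model.predict(np.expand_dims(img, axis=0))[0][0]  # value in [0, 1]
# Which class the high end of the scale means depends on the label encoding
print("diseased" if prob >= 0.5 else "healthy", f"(score: {prob:.2f})")
```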
VGG16:
The VGG16 model attains an impressive test accuracy of 92.7% on ImageNet, an extensive dataset encompassing over 14 million training images distributed among 1000 distinct object classes. VGG16 represents an advancement over the AlexNet architecture by adopting a strategy of substituting the large filters with a series of smaller 3×3 filters. Unlike AlexNet, where the kernel size is 11 for the initial convolutional layer and 5 for the subsequent layer, VGG16 utilizes a consistent kernel size of 3×3 | 2023-09-07T15:09:18.604Z | 2023-09-03T00:00:00.000 | {
"year": 2023,
"sha1": "6fdb2e2eb95bdc8a8305596b5ec7c1fd56241492",
"oa_license": "CCBYSA",
"oa_url": "https://www.ijfmr.com/papers/2023/5/6041.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fa23e9b066c8e4aaa159cea4bf74bc35cbd7d8ea",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
259132832 | pes2o/s2orc | v3-fos-license | Integrative profiling analysis reveals prognostic significance, molecular characteristics, and tumor immunity of angiogenesis-related genes in soft tissue sarcoma
Background Soft tissue sarcoma (STS) is a class of malignant tumors originating from mesenchymal stroma with a poor prognosis. Accumulating evidence has shown that angiogenesis is an essential hallmark of tumors. Nevertheless, there is a paucity of comprehensive research exploring the association of angiogenesis-related genes (ARGs) with STS. Methods The ARGs were extracted from previous literature, and the differentially expressed ARGs were screened for subsequent analysis. Next, the least absolute shrinkage and selection operator (LASSO) and Cox regression analyses were conducted to establish the angiogenesis-related signature (ARSig). The predictive performance of the novel ARSig was confirmed using internal and external validation, subgroup survival, and independence analysis. Additionally, the associations of the ARSig with the tumor immune microenvironment, tumor mutational burden (TMB), and therapeutic response in STS were further investigated. Notably, we finally conducted in vitro experiments to verify the findings from the bioinformatics analysis. Results A novel ARSig was successfully constructed and validated. STS patients with a lower ARSig risk score in the training cohort had an improved prognosis. Consistent results were observed in the internal and external cohorts. The receiver operating characteristic (ROC) curve, subgroup survival, and independence analyses further indicate that the novel ARSig is a promising independent prognostic predictor for STS. Furthermore, the novel ARSig proved relevant to the immune landscape, TMB, immunotherapy, and chemotherapy sensitivity in STS. Encouragingly, we also validated that the signature ARGs are significantly dysregulated in STS, and that ADRB2 and SRPK1 are closely connected with the malignant progression of STS cells. Conclusion In sum, we constructed a novel ARSig for STS, which could act as a promising prognostic factor for STS and inform future clinical decisions, immune landscape assessment, and personalized treatment of STS.
Conclusion: In sum, we construct a novel ARSig for STS, which could act as a promising prognostic factor for STS and give a strategy for future clinical decisions, immune landscape, and personalized treatment of STS. KEYWORDS soft tissue sarcoma, angiogenesis, prognosis, immune landscape, immunotherapy Background Sarcomas are a class of malignant tumors originating from mesenchymal tissue, about 80% of which originate from soft tissue and 20% from bone (1). Among them, soft tissue sarcoma (STS) comprises more than 70 histological subtypes, and the most frequently observed subtypes are leiomyosarcoma, liposarcoma, synovial sarcoma, and rhabdomyosarcoma (2). Although STS is relatively rare, it has high lethality. According to statistics, more than 5,800 sarcoma patients die yearly in the United States, accounting for 40% of new cases (3). Since it is highly aggressive, with early relapse and metastasis, the clinical outcome of STS is not ideal (4). Previous studies have demonstrated that the 5-year survival rate after diagnosis of STS is only 55.5-56.5%, and only about 20% for patients with metastasis or recurrence (3,5). Overall, the prognosis of patients with STS remains dismal, and progress in recent years seems to have reached a bottleneck. Therefore, it is urgent to find reliable biomarkers for early diagnosis, risk stratification, and prognosis prediction of STS.
Angiogenesis is the process of forming new blood vessels from pre-existing ones, which offers an adequate metabolic supply and nutrients for tumor growth and is widely considered to play an essential role in tumorigenesis and development (6). With the sustained rapid cellular proliferation and high metabolic rate of tumor cells, the rapid development of new vascular networks is often required, which is driven by angiogenic factors such as the vascular endothelial growth factor (VEGF) family, hypoxia-inducible factors (HIFs), and fibroblast growth factors (FGFs) (7). Tumor angiogenesis not only supplies nutrients and natural migration pathways for tumors but also promotes tumor progression and regulates the tumor microenvironment (8). Accordingly, targeted tumor angiogenesis therapy has been investigated as a potential anti-tumor therapeutic approach. For instance, anlotinib, a multikinase angiogenesis inhibitor, shows anti-tumor ability in several STS entities (9). In addition, the identification of promising angiogenesis-related markers and signatures has also been pursued as an attractive strategy for tumor diagnosis and prognostic evaluation. Yuan Yang et al. established a prognostic signature relying on angiogenesis-related genes (ARGs), which can help to predict prognosis, immune infiltration status, and chemotherapy sensitivity in hepatocellular carcinoma (10). However, it remains unclear whether an angiogenesis-related signature (ARSig) can be used in the prognosis and therapy prediction of STS.
Herein, we first constructed a novel signature for STS based on the ARGs, which exhibited excellent predictive performance for the prognosis of STS. Subsequently, functional enrichment analysis was conducted to investigate the underlying mechanisms. Additionally, the relationships between the ARSig and the tumor immune microenvironment, immune therapy response, and the sensitivity of chemotherapeutic agents were investigated using a series of bioinformatic analyses. This may provide a promising predictor for prognosis prediction and clinical management of STS.
Data collection
The expression profile, copy number variation (CNV), somatic mutation, and clinical characteristics of the STS cohort were downloaded from The Cancer Genome Atlas database (TCGA, https://www.cancer.gov/aboutnci/organization/ccg/research/structural-genomics/tcga). Individuals lacking survival information and other clinicopathological features were excluded from subsequent analysis, and the R package "GeoTcgaData" was utilized to convert Ensembl IDs to gene symbols. In addition, the expression and clinical data of three independent cohorts (GSE17674, GSE21050, and GSE71118) were extracted from the Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/) database. The clinical information of the above patients is shown in Tables S1-S3. The R package "AnnoProbe" was applied to map probes, and the R package "limma" was applied to calculate the average values of multiple probes. Among them, the GSE17674 data set was utilized to identify differentially expressed ARGs, while the GSE21050 and GSE71118 cohorts were considered external validation cohorts for the validation analysis. For normalization, the RNA-sequencing data were log2-transformed. The ARGs were obtained from previous literature, and their detailed information is shown in Table S4 (11,12).
the DEARGs. Visualization used volcano plots and heatmaps based on the R packages "ggplot2" and "heatmap". Principal component analysis (PCA) was performed to explore the distribution differences of samples.
Screening of DEARGs related to the prognosis of STS
To explore the relevance between the DEARGs and the prognosis of STS, we applied univariate Cox regression analysis to screen the DEARGs related to prognosis in STS. The screening criterion was set as P-value < 0.05, and these prognostic DEARGs were selected for subsequent signature construction.
Derivation of angiogenesisrelated signatures
All TCGA-STS cases (n=260) were randomly split into a training cohort (n=130) and a testing cohort (n=130) by the "caret" package in R software. In the training cohort, the least absolute shrinkage and selection operator (LASSO) regression analysis was performed to identify candidate signature ARGs from the prognostic DEARGs. Subsequently, the candidate signature ARGs were included in the multivariate Cox regression analysis to construct the optimal ARSig. The ARSig risk score of each STS individual was computed as follows: ARSig risk score = Σ(βi × Xi), where βi and Xi represent the regression coefficient and expression level of gene i, respectively. Next, every STS case was assigned to a high- or low-risk group according to the median risk score of the training cohort. To compare the difference in overall survival (OS) between the distinct ARSig risk groups, we then performed Kaplan-Meier (KM) survival analysis using the "survival" package. In addition, the receiver operating characteristic (ROC) curve and the area under the curve (AUC) were used to assess the predictive accuracy of the novel ARSig (15). The distribution of ARSig risk scores and survival status were plotted in R software.
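As a concrete illustration of this weighted-sum construction, the sketch below computes the risk score and the median split in Python; the coefficient values and the toy expression matrix are placeholders, not the fitted values from Table S9 (the study itself used R).

```python
import numpy as np
import pandas as pd

# Placeholder Cox coefficients for the five signature ARGs
# (illustrative values only -- NOT the fitted coefficients from Table S9)
coefficients = pd.Series({
    "ADRB2": -0.42, "SRPK1": 0.35, "SQSTM1": -0.28,
    "SULF1": 0.21, "MAGED1": 0.18,
})

# Toy expression matrix (samples x genes); in practice this comes from TCGA
expression_df = pd.DataFrame(
    np.random.default_rng(1).normal(8.0, 2.0, (6, 5)),
    columns=coefficients.index,
)

# Risk score = sum_i beta_i * X_i over the signature genes
risk = expression_df.mul(coefficients, axis=1).sum(axis=1)

# Patients are then split at the median risk score of the training cohort
groups = np.where(risk > risk.median(), "high-risk", "low-risk")
print(risk.round(2).tolist(), groups)
```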
Evaluation and validation of the novel ARSig
To estimate the credibility of the novel ARSig, we performed internal and external validation based on the testing cohort, the entire cohort, GSE21050, and GSE71118. The above analyses were also conducted in the internal and external validation cohorts. Moreover, subgroup clinical survival analysis based on different clinical features was performed to investigate the general applicability of the novel ARSig. To assess whether the novel ARSig was an independent indicator of OS in STS, we performed univariate and multivariate Cox regression analyses combining multiple clinical characteristics. In addition, prognostic signatures for STS based on gene expression were systematically searched from PubMed for predictive performance comparison. Table S5 includes previously published prognostic models collected in this study.
Identification of DEGs and functional enrichment analysis
We performed differential expression analysis and functional enrichment analysis to explore the differences in molecular function between the distinct risk groups. Initially, the differentially expressed genes (DEGs) were screened using the limma package. The criterion for screening DEGs was a false discovery rate-adjusted P-value < 0.05 and |logFC| > 0.585. Also, volcano plots and heat maps were applied to visualize the differential expression analysis results. Subsequently, functional enrichment analysis based on these DEGs was performed utilizing the "clusterProfiler" package, including Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis (16). The functional enrichment analysis results were visualized using bubble plots.
Identification of top ten hub genes
The "GOSemSim" package was used to conduct the Friend analysis for screening the hub gene (17). The association between the signature ARGs and each hub gene was investigated utilizing Pearson's correlation analysis. Then, the difference in the expression of these hub genes between the low-and high-risk groups was compared. The KM survival analysis was applied to explore the relationship between the expression of each hub gene and the OS of patients with STS.
Gene set enrichment analysis and Gene set variation analysis
To identify the enriched cellular pathways in the high- and low-risk STS cohorts, we performed GSEA and GSVA analyses (18, 19). For GSEA, the KEGG gene set (c2.cp.kegg.v7.4.symbols.gmt) was extracted from the Molecular Signatures Database. Then, the GSEA was carried out using the "clusterProfiler" package, and the result was visualized using R software. Meanwhile, the R package "GSVA" was applied to conduct the GSVA analysis, and the limma package was employed to compare the differences in the enriched pathways between the low- and high-risk groups. Pathways with |logFC| > 0.15 and false discovery rate-adjusted P-value < 0.05 were considered significantly enriched pathways and illustrated in clustered heat maps.
Relationship of ARSig with Tumor Microenvironment, immune checkpoints, and immune cell infiltration in STS
Besides, the association of the novel ARSig with the TME and immune cell infiltration was explored in our study. First, we assessed the TME score using the ESTIMATE (Estimation of Stromal and Immune cells in Malignant Tumor tissues using Expression data) algorithm (20). The TME score consists of immune, stromal, and tumor purity scores. Then, the CIBERSORT algorithm was utilized to assess the abundance of immune infiltrating cells (21). Generally, immune checkpoint gene expression is closely associated with sensitivity to immunotherapy. Therefore, we obtained the immune checkpoints from previous literature and compared their expression levels between the distinct risk groups. Furthermore, the connection of the TME score and immune cell infiltration with the prognosis of STS was investigated by KM survival analysis.
Mutation and CNV analysis
To explore the relationships between the ARSig and somatic mutations, we analyzed mutation annotation data from the TCGA database using the "maftools" package. Next, the tumor mutation burden (TMB) scores for each STS patient were calculated, and the difference in the TMB scores between the two risk groups was compared by statistical analysis. In addition, the mutations of the genes with mutation Top 20 in the low-and high-risk groups were visualized using waterfall plots. Furthermore, we analyzed the association of the ARSig risk scores with the cancer stem cell (CSC) index.
Immunotherapy response and drug sensitivity analysis
To further guide treatment selection for STS, we assessed the responses to immunotherapy and chemotherapeutic agents in STS. The response to immune checkpoint inhibitors (anti-CTLA-4 and anti-PD-L1) of STS patients in the distinct risk groups was evaluated by the Subclass Mapping (SubMap) algorithm (22). The Bonferroni correction was employed to correct the P-value of the test level, and a Bonferroni P-value less than 0.05 was considered statistically significant. For chemotherapy drug sensitivity comparison, the R package "pRRophetic" was applied to determine the half-maximal inhibitory concentration (IC50) (23). Then, the Wilcoxon signed-rank test was applied to compare the IC50 of chemotherapy agents between the two different risk groups.
Establishment of a predictive nomogram
Based on the multivariate Cox regression analysis result, a nomogram composed of independent prognostic factors was constructed using the R package "rms" (24). Additionally, the calibration curve and decision curve analysis (DCA) were drawn utilizing the R packages "caret" and "rmda", which could assess the predictive reliability of the nomogram. Moreover, we further constructed the ROC curve to estimate the predictive performance of the nomogram by using the "survivalROC" package in R software.
Cell lines and cell culture
The sources of the cell lines used in the present study were all described in previous research (25). All the cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM, Procell) containing 10% fetal bovine serum and 1% penicillin-streptomycin solution. Cell cultures were maintained at 37°C in a humidified atmosphere containing 5% CO2.
Quantitative reverse transcription PCR
Total RNA was collected using the RNA Express Total RNA Kit (New Cell & Molecular Biotech), and RNA was reverse transcribed utilizing the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific), according to the manufacturer's instructions. Next, RT-qPCR was performed with Hieff qPCR SYBR Green Master Mix (High Rox Plus) (YEASEN Biotech Co., Ltd). GAPDH was used as the internal reference for normalization. The relative expression of each gene was calculated with the 2^(−ΔΔCt) method. The specific primer sequences used in the present study are shown in Table S6.
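For reference, the 2^(−ΔΔCt) calculation reduces to a few lines of code; the Ct values in the example below are made up for illustration.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method, normalized to GAPDH."""
    delta_ct = ct_target - ct_ref                  # dCt of the treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # dCt of the control sample
    delta_delta_ct = delta_ct - delta_ct_ctrl
    return 2 ** (-delta_delta_ct)

# Made-up Ct values: the target amplifies two cycles earlier than in the control
print(relative_expression(22.0, 16.0, 24.0, 16.0))  # -> 4.0-fold change
```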
Cell transfection
Negative control (NC), ADRB2, and SRPK1 siRNAs were purchased from Hanbio (Shanghai, China). SW872 cells were seeded in a 6-well plate. When cell confluence reached 50%, 50 nmol of NC, ADRB2, or SRPK1 siRNA was separately transfected into cells using 5 µL of Lipofectamine 2000 reagent (Invitrogen) for 12 hours. The sequences of the siRNAs used in our research are shown in Table S7.
Cell proliferation assays
Cell Counting Kit-8 (CCK-8, New Cell & Molecular Biotech) was used to detect the viability of SW872 cells. SW872 cells were placed in a 96-well plate (2000 cells per well) and incubated overnight. Cells were transfected and cultured for the indicated times (0, 24, 48, 72, and 96 hours). To each well, 10 µL of CCK-8 solution combined with 90 µL of DMEM containing 10% FBS was added. After 1.5 hours of incubation, the optical absorbance at 450 nm was measured with a microplate reader.
Colony-forming assays
The colony-forming assays were carried out for cell proliferation detection. After transfection, 1000 SW872 cells were seeded in 6-well plates and cultured for 2 weeks. Cells were fixed in 4% PFA for 15 minutes and stained with 0.2% crystal violet for 15 minutes.
Wound healing assay
Wound healing assays were performed to assess migration capacity. SW872 cells were placed in different 6-well plates and underwent transfection when cell confluence reached 70%. When cell confluence reached 100%, wound healing assays were performed using a 100 µL pipette tip to scratch the cells and make a separate wound. Afterward, wounded cells were washed with PBS, and the remaining cells were cultured in DMEM containing 2% FBS. Migration capacity was evaluated by light microscopy by quantifying the area covered by migrated cells at 0 and 48 hours.
Transwell assays for migration
After the above-mentioned transfection, transwell migration assays were carried out using a 24-well chamber (Corning). Cells (2 × 10⁴) were suspended in 100 µL of DMEM and added to the upper layer of the chambers. 700 µL of DMEM containing 10% FBS was added below the chambers. Cells were cultured for 24 hours at 37°C, and then the upper chambers were cleaned with cotton swabs. SW872 cells that penetrated and adhered to the bottom of the chamber were fixed with 4% PFA for 15 min and stained with 0.5% crystal violet for 15 min. Chambers were imaged under a microscope.
Transwell assays for invasion
After the transfection, transwell invasion assays were used to examine cell invasion ability. First, 50 µL of Matrigel (diluted with DMEM containing 10% FBS at 1:8) was loaded in a 24-well chamber (Corning). DMEM containing 10% FBS was added to the lower chamber, and a suspension of DMEM containing 5 × 10⁴ cells was added to the upper chamber. After incubation for 24 hours at 37°C, the upper chambers were cleaned with cotton swabs. SW872 cells that penetrated and adhered to the bottom of the chamber were fixed with 4% PFA for 15 min and stained with 0.5% crystal violet for 15 min. Chambers were imaged under a microscope.
Statistical analysis
R software (version 4.0.1) and GraphPad Prism (version 9.0.0) were used for statistical analysis. Differences between the two distinct risk groups were compared with the Wilcoxon test. A Chi-square test was used to analyze the clinicopathological characteristics of the two risk groups. The difference in the overall survival rate of STS between the high- and low-risk groups was compared using the log-rank test. The expression of signature ARGs between normal and STS cell lines was evaluated by one-way analysis of variance (ANOVA). The Pearson correlation test was applied to explore the correlation between two variables. A P-value less than 0.05 represents a statistically significant difference.
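Although the study performed these tests in R and GraphPad Prism, the same tests can be sketched in Python with scipy; all array contents below are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
low = rng.normal(5.0, 1.0, 60)   # placeholder measurements, low-risk group
high = rng.normal(5.6, 1.0, 70)  # placeholder measurements, high-risk group

# Wilcoxon rank-sum test comparing the two risk groups
print(stats.ranksums(low, high))

# Chi-square test on a 2x2 table of clinicopathological counts (placeholder counts)
chi2, p, dof, expected = stats.chi2_contingency([[30, 35], [25, 40]])
print(chi2, p)

# Pearson correlation between two paired continuous variables
paired = low * 0.5 + rng.normal(0.0, 0.3, 60)
print(stats.pearsonr(low, paired))
```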
Establishment and validation of the novel ARSig for STS
The flow chart of our study is presented in Figure S1. Initially, we identified 5499 DEGs (3900 upregulated and 1599 downregulated) in the STS cohort through differential expression analysis. The volcano plot and heat map of these DEGs are presented in Figures 1A, B. The PCA analysis indicates that the STS and normal tissue samples could be clearly separated by the combined expression of these DEGs (Figure 1C). Next, we obtained 1605 ARGs from previous studies. From the intersection between DEGs and ARGs, we identified 511 DEARGs in STS, including 403 upregulated and 108 downregulated ARGs (Figure 1D). The upregulated and downregulated ARGs are shown as cluster heatmaps and volcano plots in Figure S2. Subsequently, we found 116 DEARGs relevant to the prognosis of STS by univariate analysis (Table S8), which were enrolled for the angiogenesis-related signature construction. For ARSig construction, we first screened the candidate prognostic DEARGs through LASSO Cox regression analysis (Figures 1E, F). Next, multivariate Cox regression analysis was applied to optimize the signature (Figure 1G). As a result, the novel ARSig composed of five prognostic DEARGs (ADRB2, SRPK1, SQSTM1, SULF1, and MAGED1) was established according to the multivariate analysis results (Table S9). With increasing risk score, the number of STS deaths also increases. Consistently, the KM analysis suggests that STS patients with a lower risk score displayed a significantly improved survival rate compared with those with a higher risk score (Figure 1J). Furthermore, the AUC of the ROC curve for 1-, 3-, and 5-year survival was 0.835, 0.843, and 0.801, respectively, which indicates the predictive power of the novel ARSig (Figure 1K).
To estimate the predictive robustness of the novel ARSig, we performed internal validation in the testing and the entire STS cohorts. As shown in Figures S3-4, we observed similar results in the training and the testing STS cohorts. We also used the external cohorts (GSE21050 and GSE71118) to verify the predictive performance of the novel ARSig (Figures 1L-S). Consistent with the results from the internal cohorts, the distribution plots and Kaplan-Meier survival analysis indicated that the STS patients in the low-risk group exhibit a better prognosis than those in the high-risk group. In aggregate, these results confirmed that the novel ARSig had promising performance in predicting the prognosis of patients with STS.
Evaluating the performance of novel ARSig
To determine the prognostic generality of the novel ARSig, we further compared the risk score between distinct clinical subgroups and carried out subgroup KM survival analyses. There was no significant difference in the risk score distribution between the distinct clinical subgroups, indicating that the novel ARSig was relatively independent of the clinical characteristics (Figures 2A-E, S5). In addition, the subgroup survival analysis demonstrates that the low-risk group patients have improved OS compared to the high-risk subgroup across distinct clinical features (age, gender, margin status, metastasis status, and new tumor events; Figures 2F-J). Importantly, we also implemented univariate and multivariate Cox regression analyses to investigate whether the novel ARSig is an independent prognostic factor for STS patients. The univariate analysis indicates that the risk score, age, margin status, metastasis, and new tumor events are remarkably associated with OS (Figure 2K). Encouragingly, the multivariate analysis result further confirmed that the ARSig risk score is an independent prognostic indicator affecting the OS of STS (Figure 2L). Moreover, we also found that the C-index of our signature based on ARGs performs better than almost all previous signatures (Figure S6). To facilitate the clinical application of the novel ARSig, we further constructed a nomogram incorporating the ARSig risk score and independent clinical factors. According to the nomogram, we could precisely estimate the 1-year, 3-year, and 5-year survival rates of each STS individual (Figure 2M). Encouragingly, the calibration curves exhibit that the actual values of the 1-, 3-, and 5-year OS match those predicted by the nomogram, indicating the nomogram we built is reliable and accurate (Figure 2N). The 1-, 3-, and 5-year areas under the ROC curve of the nomogram are 0.854, 0.763, and 0.787, respectively (Figure 2O). Also, the DCA demonstrates that the nomogram has the best clinical net benefit compared with other variables (Figure 2P). Overall, these findings show that the novel ARSig was successfully constructed and exhibits reliable and excellent performance for the OS prediction of STS.
The signature ARGs in STS
Subsequently, we performed KM survival analysis to investigate the respective prognostic value of each signature ARG. We found that STS patients with reduced expression of ADRB2 and SQSTM1 have poorer OS (Figures 3A, B), while augmented levels of MAGED1, SRPK1, and SULF1 seem to account for a better prognosis in STS (Figures 3C-E). Collectively, these results imply that the abnormal expression of these signature ARGs might be relevant to the prognosis of STS.
Functional enrichment analysis and angiogenesis-related hub genes in STS
To comprehend the differences in functional pathways among the distinct risk groups, we identified 1006 DEGs between the low- and high-risk groups (Figures 3F, G). Then, functional enrichment analysis was conducted based on these DEGs. The GO analysis results indicate that these DEGs are mainly enriched in immune-related functions, like humoral immune response, humoral immune response mediated by circulating immunoglobulin, regulation of humoral immune response, immunoglobulin complex, and immunoglobulin receptor binding (Figure 3H). Also, Figure 3I shows the top twenty pathways in which these DEGs are enriched. Among them, Human T-cell leukemia virus 1 infection, Viral protein interaction with cytokine and cytokine receptor, and Antigen processing and presentation are immune-related, while Cell adhesion molecules are associated with tumorigenesis. Moreover, we defined ten potential hub genes (AHNAK2, GPC2, DBNDD2, OLFM1, SCRG1, TNFAIP8L2, FILIP1L, CYSTM1, PARM1, and NCAPG) in the identified angiogenesis-associated GO processes through the Friends analysis (Figure 3J). We observed a remarkable co-expression relationship between the signature ARGs and these ten hub genes (Figure 3K). Almost all these hub genes display abnormal expression in STS compared to normal tissue, except for SCRG1 (Figures 3L-U). Equally, the KM survival analysis also suggests that all ten hub genes exhibit significant prognostic effects in STS (Figure S7).
Exploring the underlying pathways in STS
To further investigate the differences in molecular mechanisms between the distinct risk groups, we performed GSEA and GSVA analyses. The GSEA shows that high-risk STS patients are mainly associated with tumorigenesis pathways, such as basal cell carcinoma, cell cycle, DNA replication, the hedgehog signaling pathway, and the Wnt signaling pathway (Figure S8A). Meanwhile, the pathways mainly enriched in the low-risk group are relevant to immune function (Figure S8B). In the following GSVA analysis, we obtained results consistent with the previous GSEA, with the low-risk group mainly concentrated in complement and coagulation cascades, the chemokine signaling pathway, and graft-versus-host disease (Figure 4A). Altogether, these results provide promising clues for inferring the underlying mechanism by which the novel ARSig reflects STS progression.
TME and immune cell infiltration analysis
Given these functional enrichment analysis results and the critical role of tumor immunity in tumor development, we further investigated the immune status among the various ARSig risk groups. Initially, the ESTIMATE analysis indicated that the low-risk STS patients displayed enhanced immune and stromal scores and a lower tumor purity score, hinting that the STS cohort in the low-risk group has better immune infiltration (Figures 4B-D). Also, we found that patients with augmented immune and stromal scores or an attenuated tumor purity score exhibit an ameliorated prognosis (Figures 4E-G). Subsequently, we evaluated the infiltration proportions of 22 types of immune cells in STS by applying the CIBERSORT algorithm (Figure S9A). We observed that the abundance of naive B cells, CD8 T cells, resting CD4 memory T cells, monocytes, M1 macrophages, resting dendritic cells, and resting mast cells was elevated in the low-risk group, while the infiltration levels of CD4+ T cells, resting NK cells, M0 macrophages, and activated dendritic cells in the low-risk group were lower than those in the high-risk group (Figures 4H, I). Besides, there were remarkable correlations of the ARSig risk score and signature ARGs with the proportions of immune cell infiltration (Figures 4J; S9B). Notably, the KM survival analysis demonstrates that enhanced infiltration levels of naive B cells, activated NK cells and CD8 T cells are relevant to an improved prognosis in STS (Figures S9C-E). In contrast, patients with an increasing abundance of M0 macrophages, M2 macrophages, and CD4+ T cells have poorer OS (Figures S9F-H).
Association of the novel ARSig with tumor mutation burden
Considering the importance of CSCs and TMB in tumor generation and development, we explored their association with the novel ARSig. Figures 5A-C indicate the relationship between ARSig risk scores and the CSC index. We found that the risk score is positively correlated with the CSC index, and STS patients with a lower CSC index exhibit an ameliorated prognosis. For TMB, a higher risk score is correlated with an elevated TMB score (Figures 5D, E). Also, the waterfall plot indicates that TP53, TTN, and RB1 are the top three genes by mutation rate in the low-risk group (Figure 5F). Similarly, TP53 shows the highest mutation frequency in the high-risk group, followed by ATRX and MUC16 (Figure 5G). Then, we investigated somatic copy number alterations in these signature ARGs and hub genes. Among them, MAGED1, AHNAK2, and TNFAIP8L2 have widespread CNV increases, while OLFM1 and SCRG1 display CNV decreases (Figure 5H). The locations of the CNV alterations in these genes on their respective chromosomes are presented in Figure 5I. We further observed that the high-risk group is accompanied by an elevated frequency of copy number amplification compared to the low-risk group (Figures 5J, K).
Prediction efficacy of the immunotherapy and chemotherapy
(Figure 4 caption fragment: The correlation between the ARSig risk score and the infiltration of immune cells. * P < 0.05, ** P < 0.01, *** P < 0.001; ns, not significant.)
Immune checkpoint modulators are known to play a critical role in tumor immunity and immunotherapy. We found that the expression of virtually all immune checkpoints is upregulated in the low-risk group compared with the high-risk group (Figure S10). Therefore, we further assessed the response to immune checkpoint inhibitors (CTLA4 blocker and PD1 blocker) in the subgroups classified by ARSig risk score. As presented in Figure 6A, the STS patients in the low-risk group have a better response to the PD1 blocker (Bonferroni P-value < 0.05). Equally, we estimated the response of the STS cohort to commonly used chemotherapeutic agents by comparing the difference in IC50 between the distinct risk groups. The STS cohort in the low-risk group has a higher IC50 of axitinib, cisplatin, cytarabine, docetaxel, doxorubicin, gemcitabine, midostaurin, pazopanib, vinblastine, vinorelbine, and vorinostat than those in the high-risk group (Figures 6B-L). In contrast, the IC50 of lenalidomide, erlotinib, and gefitinib in the low-risk group is lower than in the high-risk group (Figures 6M-O).
The effect of signature ARGs in STS
Importantly, we verified the expression of each signature ARG in STS cell lines using RT-qPCR. As shown in Figure S11, we observed that all the signature ARGs are significantly dysregulated in STS cell lines. Considering that ADRB2 and SRPK1 are aberrantly elevated in STS, we further explored the function of ADRB2 and SRPK1 in STS. As shown in Figures 7A, 8A, the expression of SRPK1 and ADRB2 was significantly downregulated in SW872 cells after siRNA transfection. The CCK-8 results show that the attenuation of SRPK1 and ADRB2 slowed the proliferation rate of SW872 cells (Figures 7B, 8B). Consistently, the colony-forming ability of SW872 cells was attenuated with the downregulation of SRPK1 and ADRB2 (Figures 7C, 8C). Also, compared to the negative control groups, the percentage of EdU-positive cells exhibits a downward trend in the siRNA-SRPK1 and siRNA-ADRB2 groups (Figures 7D, 8D). On the other hand, the scratch test indicates that the migration distance of SW872 cells in the siRNA-SRPK1 and siRNA-ADRB2 groups was significantly less than that of the control group (Figures 7E, 8E). Moreover, the transwell migration and invasion assays reveal that diminishing SRPK1 and ADRB2 could inhibit SW872 cell migration and invasion (Figures 7F, G, 8F, G). Hence, these results imply that the abnormal overexpression of ADRB2 and SRPK1 could promote the malignant phenotype of soft tissue sarcoma cells, further validating our bioinformatic analysis results.
Discussion
STS is a heterogeneous malignant disease deriving from mesenchymal tissue, constituting 1% of adult malignancies and 15% of malignant neoplasms in childhood (26). Due to the aggressiveness, metastasis, and relapse of the tumor, the overall survival rates of STS remain suboptimal. Therefore, it is critical to establish an effective prognostic biomarker for risk stratification and precise prognostic prediction of STS. Angiogenesis has been revealed to play a crucial role in carcinogenesis and progression, and it is highly dependent on angiogenic cytokines (27, 28). For instance, the secretion of VEGF is essential to tumor vascularization, and its inhibition disrupts tumor progression (29). HIF1 is a heterodimeric protein consisting of HIF1a and HIF1b subunits, and it is also known to be an important stimulus for tumor angiogenesis (30). In addition, several recent studies have demonstrated that angiogenesis-related gene signatures are closely linked to the prognosis of various cancer patients. Xin Qing et al. identified an angiogenesis-associated gene signature that contributes to predicting the prognosis, clinical characteristics and TME of gastric cancer (12). Similarly, an angiogenesis-related gene signature exhibited a promising ability for prognosis and treatment response prediction in glioblastoma multiforme and can help with the selection of therapeutic strategies in glioblastoma multiforme (11). However, numerous studies have only evaluated the role of single ARGs in STS. Research systematically elucidating the holistic impact of combinations of diverse ARGs is still lacking.
In the present study, we identified 116 DEARGs with prominent prognostic significance in STS. Subsequently, a novel ARSig consisting of five angiogenesis-associated genes was successfully established using LASSO, univariate, and multivariate Cox regression analyses. The novel prognostic ARSig exhibited an effective ability to stratify the prognosis of STS. Our results show that the STS patients in the low-risk group have an improved prognosis, while the prognosis of STS in the high-risk group is significantly poorer. Next, the prediction performance of the novel ARSig was further confirmed using the ROC curve, internal validation, and subgroup survival analysis. In addition, the univariate and multivariate Cox analyses demonstrate that the ARSig risk score is an independent prognostic predictor for the OS of STS. Encouragingly, a consistent validation result in predicting OS was also found in the external cohorts (GSE21050 and GSE71118), which further corroborates the reliability and potential of our signature. Herein, we constructed a novel prognostic signature based on ARGs, which could be used as a reliable and independent marker to help conduct personalized prognostic evaluations in STS.
(Figure 7 caption fragment: The invasion abilities of NC, SRPK1-siRNA1, and SRPK1-siRNA2 groups were demonstrated using the transwell assay for invasion. ** P < 0.01, *** P < 0.001, **** P < 0.0001.)
To further investigate the association of the novel ARSig with STS, we explored the differences in underlying mechanisms between the two distinct risk groups using GSEA and GSVA. Interestingly, we observed that both the GSEA and GSVA results show that STS patients with a higher risk score are mainly enriched in cell cycle, DNA
replication, and hedgehog signaling pathways. Growing evidence has confirmed that these pathways are involved in the progression of various tumors. For instance, PLA2G10 could promote the cell cycle progression of soft tissue leiomyosarcoma cells through upregulation of the expression of cyclin E1 and CDK2 (31). Dysregulation of DNA replication results in abnormal gene phenotypes that trigger normal cells to transform into malignant ones (32). In addition, the hedgehog signaling pathway also plays a vitally important role in tumors. Dongdong Cheng et al. proved that CNOT1 cooperates with LMNA to aggravate the occurrence of osteosarcoma by regulating the hedgehog signaling pathway (33). On the contrary, the patients in the low-risk group seem related to immune responses, which may affect the tumor immune microenvironment of STS. Given these results and previous studies, it is reasonable to believe that these identified pathways provide novel insights into the relationship between the novel ARSig and the tumor biology of STS. Meanwhile, ten key hub genes (AHNAK2, GPC2, DBNDD2, OLFM1, SCRG1, TNFAIP8L2, FILIP1L, CYSTM1, PARM1, and NCAPG) associated with the prognosis of STS were determined using the Friends analysis. The Friends analysis is a commonly used method for identifying hub genes in a pathway (34). Surprisingly, the functional roles of these ten hub genes in tumors have been widely reported in previous studies. AHNAK2 has been shown to be a prognostic marker in papillary thyroid cancer, clear cell renal cell carcinoma (ccRCC), and lung adenocarcinoma (35)(36)(37). Minglei Wang et al. revealed that the overexpression of AHNAK2 could drive tumorigenesis and progression of ccRCC by facilitating EMT and cancer cell stemness (36). FILIP1L is a tumor suppressor with diminished expression in various tumors (38). For instance, the downregulation of FILIP1L causes the aberrant stabilization of a centrosome-associated chaperone protein, thereby driving aneuploidy and progression in colorectal adenocarcinoma (39). Guoming Chen et al. demonstrated that GPC2 could serve as a potential prognostic, diagnostic, and immunological biomarker in pan-cancer (40). In addition, it has been revealed that elevated TNFAIP8L2 inhibits the survival and proliferation of colorectal cancer cell lines, while endogenous TNFAIP8L2 facilitates tumorigenesis upon exposure to a dangerous environment (41). NCAPG is overexpressed in cardia adenocarcinoma (CA), and it can suppress apoptosis and promote the epithelial-mesenchymal transition of CA cell lines via activation of the Wnt/β-catenin signaling pathway (42). Consistently, OLFM1 could inhibit the growth and metastasis of colorectal cancer cells by affecting the NF-κB signaling pathway (43). Also, the oncogenic potential and important role of PARM1 in leukemogenesis were proved by Cyndia Charfi et al.; PARM1 could promote anchorage and cell proliferation capacity (44). However, research on the roles of SCRG1, DBNDD2 and CYSTM1 in tumorigenesis and development is currently lacking. Collectively, these hub genes exhibit a significant association with tumors, representing a promising clue for future biomarker research in STS.
(Figure 8 caption fragment: The cell proliferation rate of NC, ADRB2-siRNA1, and ADRB2-siRNA2 groups was detected by CCK-8 assay. (C) Colony formation abilities in NC, ADRB2-siRNA1, and ADRB2-siRNA2 groups; colony numbers are shown in the corresponding column at the right. (D) The cell proliferation rate of NC, ADRB2-siRNA1, and ADRB2-siRNA2 groups was detected using EdU assay; percentages of EdU-positive cells are quantified in the corresponding columns at right. (E, F) The migration ability of NC, ADRB2-siRNA1, and ADRB2-siRNA2 groups was illustrated by scratch tests and the transwell assay for migration. (G) The invasion abilities of NC, ADRB2-siRNA1, and ADRB2-siRNA2 groups were demonstrated using the transwell assay for invasion. ** P < 0.01, *** P < 0.001, **** P < 0.0001.)
It has been shown that the tumor immune microenvironment is closely related to tumor progression and invasion, with the tumor immune microenvironment receiving considerable attention in the past
few years (45). In the low-risk group, the immune and stromal scores and the abundance of immune infiltration were significantly augmented, indicating that the STS cohort with a low risk score has a better immune status. Consistently, previous research has demonstrated that immune infiltration is an overlooked prognostic factor for tumors (46), and an ameliorated immune status is related to the prognosis of STS (47). Interestingly, we observed decreased M0 and enhanced M1 macrophage infiltration in the low-risk group, and STS patients with greater M0 and M2 macrophage infiltration have a poorer prognosis. As we all know, macrophages are very versatile cells with a high degree of plasticity and have various functions in pathological processes (48). Macrophages are broadly categorized into M1 classically activated macrophages and M2 alternatively activated macrophages (49). Among them, M1 macrophages have anti-tumour effects, while M2 macrophages have pro-tumour effects (50). Therefore, it is reasonable to believe that the infiltration degree of macrophages may partly account for the different tumor immune microenvironments among distinct risk groups, and that the different immune status is closely correlated with the prognosis of STS in the different ARSig risk groups.
Recently, immunotherapy has become a promising strategy and is expected to become a predominant anti-tumor treatment in the future (51). However, not all malignancies benefit from immunotherapy (52). Therefore, stratifying and differentiating patients is necessary for the effectiveness of immunotherapy (53). In the present study, we observed that the low-risk STS patients had elevated expression of immune checkpoint genes. Similarly, the STS cohort with low ARSig risk scores exhibits a positive response to anti-PD1, indicating that the novel ARSig has a potential ability to predict response to immunotherapy in STS. Also, chemotherapy is another important therapeutic method for patients with STS (54). We found that the low-risk STS cohort responded better to lenalidomide, erlotinib, and gefitinib, while the high-risk STS patients are more sensitive to axitinib, cisplatin, cytarabine, docetaxel, doxorubicin, gemcitabine, midostaurin, pazopanib, vinblastine, vinorelbine, and vorinostat. This may help clinicians choose an appropriate chemotherapy plan based on the risk score. In general, the novel ARSig we present may provide insight into individualized immunotherapy and chemotherapy for STS.
Notably, we finally detected the expression levels and the effects of the signature ARGs using in vitro experiments in STS cell lines, and the results show that there was a significant difference in the expression of these ARGs between the STS and control cells, increasing the credibility of our study. It is worth mentioning that some ARGs have been demonstrated to be associated with the malignant progression of cancer. For example, ADRB2 signaling could facilitate the progression and sorafenib resistance of hepatocellular carcinoma via inhibiting the autophagic degradation of HIF1α (55). SRPK1 is frequently overexpressed in gastric cancer, resulting in tumor cell growth by regulating small nucleolar RNA expression (56). Consistently, our study reveals that ADRB2 and SRPK1 could promote the proliferation, migration, and invasion ability of SW872 cells. As members of the ARGs, the specific mechanisms by which SRPK1 and ADRB2 play a role in angiogenesis are also worth exploring. Currently, studies have reported that the inhibition of SRPK1 can reduce the expression of pro-angiogenic VEGF, thereby maintaining the production of anti-angiogenic VEGF isoforms (57). Also, Yingwei Chang et al. proved that SRPK1 could affect angiogenesis via the PI3K/Akt signaling pathway (58). However, the mechanism of ADRB2 in angiogenesis remains unclear. Hence, these results further confirm the reliability of our study, but the specific mechanisms of ADRB2 and SRPK1 in the angiogenesis of STS are worth further exploration in the future.
Conclusion
Briefly, our study reveals that the identified ARSig is a robust prognostic marker for OS prediction in patients with STS. Furthermore, stratification based on the novel ARSig could guide clinical decisions, tumor immune microenvironment prediction, and personalized immunotherapy and chemotherapy of STS. It is reasonable to believe that our study offers a valuable basis for further research.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found within the article/Supplementary Materials.
Author contributions
SH and ZL contributed to the conception and gave final approval of the version; BL performed the study concept and design and wrote the manuscript. CL performed the experiments. CF, HW, HZ, and CT helped with data analysis. All authors contributed to the manuscript revision, read, and approved the submitted version. | 2023-06-12T13:08:06.203Z | 2023-06-12T00:00:00.000 | {
"year": 2023,
"sha1": "0e6ecd44501960df87f06700b9e65d72fc1e7c95",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0e6ecd44501960df87f06700b9e65d72fc1e7c95",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236913736 | pes2o/s2orc | v3-fos-license | Interaction between the level of immunoglobulins and number of somatic cells as a factor shaping the immunomodulating properties of colostrum
The aim of this study was to investigate the association between immunoglobulins and SCC as a factor in shaping the content of the immunostimulatory components of colostrum. Seventy-eight multiparous Polish Holstein–Friesian cows were selected for the experiment. Colostrum samples were collected immediately after calving (up to a max. of 2 h). The cows were divided into groups according to the following levels: Immunoglobulins (IG class)—(IG1) over 50 g/L, (IG2) up to 50 g/L; SCC class—(SCC1) up to 400 000/ml, (SCC2) 400–800 000/ml, (SCC3) over 800 000/ml. Colostrum assigned to the IG1 SCC1 group had a statistically significantly higher (p ≤ 0.01) concentration of both whey proteins and fatty acids compared to the IG1 SCC2 and SCC3 groups. The concentration of IgG, IgM, and IgA was shown to be higher in IG1 SCC1 than IG2 SCC3 by 226%, 149%, and 115%, respectively. The concentration of lactoferrin was shown to be higher in IG1 SCC1 than IG2 SCC3 by 149%. The determination of colostrum quality based on the concentration of immunoglobulins in the colostrum may not be sufficient because serum IgG concentrations at birth show a linear increase relative to colostrum SCC. A breakdown of colostrum into quality classes, taking into account the level of SCC, should therefore be introduced.
The concentration of colostrum immunoglobulins (Ig) varies with the cow's health status 1 , amount of colostrum produced, calving season 2 , breed, age 3,4 , production system 5 , length of non-lactating period 6 , prepartum milking 7 , time delay between parturition and first milking 8,9 , and heat stress 10 . Additionally, increased levels of Ig were demonstrated in cows inoculated with vaccines containing E. coli, Coronavirus and Rota antigens 11 . In cow's milk, IgG is the main isotype, followed by IgA and IgM 12 . IgG plays a role in the immune response to infections, while IgA is involved in protecting the mucous membranes, and IgM is the first line of defense against infections 13 . Due to its structure in ruminants, the placenta prevents the transfer of antibodies to the fetus; therefore, the concentration of Ig in the blood serum of calves is low 14 . The neonate is immunonaive and dependent on passively acquired maternal immunoglobulins 15 . Likewise, colostrum intake by calves is crucial to protect against gastrointestinal tract infections. This uptake of IgG occurs partly via passive transport across the epithelium when it is not yet fully closed, but also actively via the FcRn, the neonatal IgG receptor that is expressed on the intestinal epithelium in neonatal calves 16 . The functional but naive immune system of the newborn does not permit an effective immune response for the first three weeks of life 17 . A very important aspect of calf rearing is the feeding of high-quality colostrum, in conjunction with appropriate frequency, quantity, and temperature 19 . When one of these elements does not meet the standards, the consequence is the calves' increased susceptibility to diarrhea and infections, which in turn has a direct impact on production economics. McGuirk reported that calves with failed passive transfer have mortality and morbidity risks up to six times higher than those that have succeeded. Colostrum is also one of the factors responsible for the microbial colonization of the gastrointestinal tract. Puppel et al. 19 reported that high-quality colostrum stimulated significant development of Lactobacilli and Bifidobacterium spp., simultaneously reducing Coliforms and Enterococci. Strain growth depends on lactoferrin (LF) and lysozyme (LZ) concentration, which are involved in maintaining the balance of the intestinal microflora, due to their properties [20][21][22] . Additionally, lactoferrin and casein inhibit lipid peroxidation as well as the formation of peroxide radicals and iron oxide 23 .
The quality and quantity of cows' mammary gland secretions are closely related to udder health. Puppel et al. 1 reported that in colostrum from the first milking, the IgG concentration was two-fold greater and the C18:2 cis9trans11 three-fold greater in colostrum with a somatic cell count (SCC) ≤ 400 000 cells/ml than in colostrum with ≥ 400 000 cells/ml. It should be emphasized that fatty acids are antibacterial agents that destabilize bacterial cell membranes 24 due to their amphipathic properties. The consequence is increased membrane permeability and cell lysis, as well as inhibition of the enzymatic activity of the membrane 25 . Calves cannot fight infectious agents because their immune system is underdeveloped at the time of birth. After birth, they therefore depend immunologically on the successful passive transfer of maternal Ig via colostrum 26 . Ferdowsi Nia et al. 27 reported that feeding neonates with high-SCC colostrum decreased serum IgG levels and also increased the incidence of diarrhea in calves.
The aim of this study was to investigate the association between immunoglobulins and SCC as a factor in shaping the content of the immunostimulatory components of colostrum in the first milking after calving.
Results
Colostrum samples were collected from multiparous cows and evaluated for IgG and SCC content. Based on IgG and SCC, samples were assigned to the following categories: immunoglobulin class (IG1, IG2) and somatic cell count class (SCC1, SCC2, SCC3). The colostrum assigned to IG1 was characterized by significantly higher concentrations of casein, total protein, and fat. The study showed that IG class had a statistically significant (p ≤ 0.01) effect on the levels of the colostrum's functional parameters in the first milking after calving (Fig. 1).
In comparing IG1 and IG2, the concentrations of lactoferrin (LF), α-lactalbumin (ALA), and β-lactoglobulin (BLG) were higher by almost 30% for IG1 than for IG2 (Fig. 2a), while immunoglobulins G, M, and A were higher by almost 35% (Fig. 2b). Based on the analysis of the obtained results, a statistically significant (p ≤ 0.01) relationship between IG class and the level of bioactive whey proteins was demonstrated (Fig. 2a,b).
Colostrum assigned to the IG1 group had significantly higher concentrations of C18:1 trans11, C18:2 n-6, C18:3 n-3, and C18:2 cis9trans11. The study showed that the IG class had a statistically significant (p ≤ 0.01) effect on the levels of bioactive fatty acids during the first milking after calving (Fig. 3).
The colostrum assigned to the IG1 SCC1 group had significantly higher concentrations of protein, fat, and casein compared to the IG1 SCC2 and SCC3 groups (Fig. 4). The study showed that the IG × SCC interaction had a statistically significant (p ≤ 0.01) effect on the functional parameters of colostrum during the first milking after calving.
Colostrum assigned to the IG1 SCC1 group had statistically significantly (p ≤ 0.01) higher levels of bioactive whey proteins compared to the IG1 SCC2 and SCC3 groups (Fig. 5a,b). The study showed that the IG × SCC interaction had a statistically significant (p ≤ 0.01) effect on the levels of lactoferrin, α-lactalbumin (Fig. 5a), and immunoglobulins during the first collection after calving (Fig. 5b).
The lowest level of β-lactoglobulin, in IG1 SCC1, was reported as 6.311 g/L, while the highest level, in IG2 SCC3, was 12.941 g/L. The study showed that the IG × SCC interaction had a statistically significant (p ≤ 0.01) effect on the level of β-lactoglobulin (Fig. 5a).
The colostrum assigned to the IG1 SCC1 group had significantly (p ≤ 0.01) higher concentrations of bioactive fatty acids compared to the IG1 SCC2 and SCC3 groups (Fig. 6). The highest level of C18:2 cis9trans11, in IG1 SCC1, was demonstrated to be 0.515 g/100 g fat, while the lowest level, in IG2 SCC2, was 0.231 g/100 g fat. The study showed that the IG × SCC interaction had a statistically significant (p ≤ 0.01) effect on the level of conjugated linoleic acid (CLA) (Fig. 6).
Discussion
The most important immunostimulating components in colostrum are immunoglobulins, which affect the immunity of the calf organism [28][29][30] . In order for the body to take full advantage of these components, attention is paid to factors such as the quality of the colostrum, the time elapsed since birth (which is crucial, due to the decreasing absorption capacity of these substances through the intestinal epithelium), and the quantity of colostrum 2,31 . IgG are powerful effector molecules that can mediate tissue inflammation by complement activation and by engaging classical FcγRs and C-type lectin receptors 32 . The concentration of IgG, IgM, and IgA was shown to be higher by 226%, 149%, and 115%, respectively, in IG1 SCC1 than in IG2 SCC3 (Fig. 5b). Maunsell et al. 6 reported that altered cell function caused by mastitis might reduce IgG1 transport and result in low colostral IgG1 concentrations in infected glands. Bollinger et al. 33 reported that the intestinal epithelium has the capability of transporting Ig into the lumen via the expression of the polymeric immunoglobulin receptor. However, bacteria can bind free IgG in the intestine, or block IgG molecules from being taken up and transported into enterocytes, thereby impairing IgG absorption 34 . It is assumed that good quality colostrum is characterized by a high concentration of IgG; concentrations of this component should be higher than 50 g/L 35 . The calf's intestinal absorption phase is nonspecific for the class of immunoglobulins and is operative, essentially, only during the first 24 h after birth 36 . With an average intestinal absorption of immunoglobulins of 20-30%, the calf should ingest 100-200 g of Ig during the first six hours of life. This allows adequate passive transfer, which is guaranteed by the colostrum assigned to IG1 SCC1. Therefore, it can be concluded that colostrum characterized by a higher concentration of Ig has higher immunostimulatory properties. Immunomodulating and/or antimicrobial substances, including lactoferrin and lysozyme, are present in mammary secretions and may contribute to the protection of neonates 37 . The study showed that the IG × SCC interaction had a statistically significant (p ≤ 0.01) effect on lactoferrin concentration. The concentration of LF was shown to be higher in IG1 SCC1 than in IG2 SCC3 by 149% (Fig. 5a).
In juveniles there is an initial lack of an intestinal barrier, so LF has the opportunity to stay in the intestine longer and consequently penetrate the bloodstream 38 . LF increases enterocyte differentiation both in vivo and in vitro. In addition, LF was found to increase in vitro enterocyte proliferation, resulting in higher cell density in cell flasks 39 . Therefore, it can be concluded that colostrum characterized by a higher concentration of lactoferrin has higher immunostimulatory properties.
Bravo-Santano et al. 40 and Kenny et al. 41 reported that the innate immune system in epithelial cells takes advantage of long-chain free fatty acids such as C18:2 n-6 and C18:1 cis9, because they are considered bactericidal or bacteriostatic depending on their structural characteristics 25 . The concentrations of C18:1 trans11, C18:2 n-6, C18:3 n-3, and C18:2 cis9trans11 were shown to be higher in IG1 SCC1 than in IG2 SCC3 by 267%, 197%, 169%, and 222%, respectively (Fig. 6). Studies have shown that the de novo synthesis of fatty acids and the acylation of long-chain fatty acids using glycerol can be impaired in mammary glands during the inflammatory process. Meydani et al. 42 reported that n-3 FAs decrease bacterial lipopolysaccharide-induced production of the pro-inflammatory cytokines IL-1 and TNF from peripheral blood lymphomonocytes. However, in the event of inflammation, membrane enrichment with n-3 FAs reduces the ability of endothelial cells to respond to stimulation by bacterial lipopolysaccharide, IL-1, IL-4, or TNF in terms of intercellular adhesion molecule-1, as well as soluble mediators, such as IL-6 and IL-8, that are able to provide positive feedback to amplify the inflammatory response 43,44 . Thus, it can be concluded that colostrum of good cytological quality significantly influences the natural defensive mechanisms of calves, because the above-mentioned acids show anti-inflammatory effects.
In conclusion, the level of immunostimulatory components in colostrum is variable, and one of the modulating factors is cytological quality. A breakdown of colostrum into quality classes, taking into account the level of SCC (up to 400 000/ml, 400 000-800 000/ml, over 800 000/ml), should therefore be introduced.
Methods
All cows were handled in accordance with the regulations of the Polish Council on Animal Care, and the Second Ethics Committee for Animal Experimentation in Warsaw of the Ministry of Science and Higher Education (Poland) reviewed and approved all procedures (Approval number: WAWA2/086/2018). During the experiment, the cows were under veterinary care. Dry cows were fed according to the guidelines of the Nutrient Requirements Committee.
Seventy-eight multiparous (in second lactation) Polish Holstein-Friesian cows were selected for the experiment. Colostrum samples (250 ml) were collected in sterile plastic containers containing the preservative Mlekostat CC immediately after calving (up to a max. of 2 h), before the first suckling of the calf, and then transported to the Warsaw University of Life Sciences and stored frozen (− 20 °C) until the planned analysis.
Each sample of colostrum was centrifuged for 15 min at 5,000 × g in a microcentrifuge, and the fat layer was then removed. The remaining solubilized sample (5 mL) was heated to 40 °C, and a 10% solution of acetic acid was then added to precipitate the casein fraction. After thawing, each sample was centrifuged for 15 min at 14,000 × g in a microcentrifuge. The supernatant was filtered through a nylon filter and used in the further steps of the analysis: whey proteins and immunoglobulins.
Concentrations of whey proteins were determined using an Agilent 1100 Series RP-HPLC (Agilent Technologies, Waldbronn, Germany). Separations were performed at ambient temperature using a solvent gradient on a Jupiter C18 300A column (Phenomenex, Torrance, CA, USA). The chromatographic conditions were as follows. Solvent A was acetonitrile (Merck, Darmstadt, Germany), water (Sigma-Aldrich), and trifluoroacetic acid (Sigma-Aldrich) in a ratio of 70:930:1 (v/v/v). Solvent B was acetonitrile, water, and trifluoroacetic acid in a ratio of 930:70:1 (v/v/v). The flow rate was 1.4 ml/min and the detection wavelength was 220 nm. All samples were analyzed in duplicate. The identification of peaks as lactoferrin and lysozyme was confirmed by comparing them with standards (Sigma-Aldrich, USA).
Concentrations of immunoglobulins (G, M, A) were determined using an Agilent 1100 Series RP-HPLC (Agilent Technologies, Waldbronn, Germany). The chromatographic conditions were as follows. Solvent A was acetonitrile (Merck, Darmstadt, Germany), water (Sigma-Aldrich) and trifluoroacetic acid (Sigma-Aldrich) in a ratio of 20:980:1 (v/v/v). Solvent B was acetonitrile, water, and trifluoroacetic acid in a ratio of 980:20:1(v/v/v). The column was first equilibrated at 25% mobile phase A for 2 min at a 2 mL/min flow rate. The elution was performed as a gradient of mobile phase A, from 25 to 60% over 5 min at 2 mL/min. The detection wavelength was 280 nm. All samples were analyzed in duplicate. The identification of peaks as immunoglobulins was confirmed by comparing them with the standards of Bovine Ig (Sigma-Aldrich, USA).
Fatty acid methylation was carried out using the trans-esterification method PN-EN ISO 5509:2000 45 . Concentrations of fatty acids were determined using an Agilent 7890 GC gas chromatograph (Agilent Technologies, Waldbronn, Germany) and a Varian Select FAME column. The separation was performed with a pre-programmed temperature profile: 130 °C for 1 min; 130-170 °C at 6.5 °C min −1 ; 170-215 °C at 2.75 °C min −1 ; 215 °C for 12 min; 215-230 °C at 20 °C min −1 ; and 230 °C for 3 min. Helium at a flow rate of 25 cm s −1 and constant pressure was used as the carrier gas; the injector temperature was 240 °C, and the detector temperature was 300 °C. All samples were analyzed in duplicate. Each peak was identified using pure methyl ester standards (Supelco, USA).
Statistical analysis.
The data were compiled statistically by multi-factor analysis of variance using the least squares method. The distribution of the bioactive components was checked for normality using the Shapiro-Wilk test. All tests were conducted using IBM SPSS 23 46 . After a preliminary analysis of the samples, cows were divided into groups according to the level of immunoglobulins (IG class): (IG1) over 50 g/L, (IG2) up to 50 g/L; and SCC class: (SCC1) up to 400 000/ml, (SCC2) 400 000-800 000/ml, (SCC3) over 800 000/ml.
The statistical model was: y_ijk = µ + A_i + B_j + (A × B)_ij + e_ijk, where y_ijk is the dependent variable, µ is the overall mean, A_i is the fixed effect of the IG class (i = 1-2), B_j is the fixed effect of the SCC class (j = 1-3), (A × B)_ij is the interaction between IG class and SCC class, and e_ijk is the residual error.
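For readers who wish to reproduce such an analysis, the following is a minimal Python sketch of a two-factor least-squares model with an IG × SCC interaction, preceded by a Shapiro-Wilk normality check; the file name and column names (lactoferrin, ig_class, scc_class) are illustrative assumptions rather than the study's actual data files.

```python
# Minimal sketch of the described analysis: Shapiro-Wilk normality check,
# then a two-factor least-squares model with an IG x SCC interaction.
# The CSV file and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import shapiro
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("colostrum_samples.csv")  # one row per colostrum sample

# Normality check of an example dependent variable (lactoferrin, g/L)
w_stat, p_norm = shapiro(df["lactoferrin"])
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_norm:.3f}")

# y_ijk = mu + A_i + B_j + (A x B)_ij + e_ijk, fitted by least squares
model = smf.ols("lactoferrin ~ C(ig_class) * C(scc_class)", data=df).fit()
print(anova_lm(model, typ=2))  # F-tests for IG, SCC, and the interaction
```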
Data availability
All data generated or analyzed during this study are included in this published article. The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request. | 2021-08-05T06:18:20.986Z | 2021-08-03T00:00:00.000 | {
"year": 2021,
"sha1": "d770cee923d2ff6b08db83250bbb5e9c351542e4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-95283-1.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a9da36bf1f13ecac08022b51887c444d26a1dfb9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235729929 | pes2o/s2orc | v3-fos-license | Favorable Pregnancy Outcomes in Women With Well-Controlled Pulmonary Arterial Hypertension
Introduction: Since pregnancy in women with pulmonary arterial hypertension (PAH) is associated with a high risk of morbidity and mortality, it is recommended that pregnancy be avoided in PAH. However, some women with mild PAH may consider this recommendation unsuitable. Unfortunately, knowledge of pregnancy outcomes and the best management of PAH during pregnancy is limited. Methods: Data from all women with PAH who were followed during pregnancy by a multidisciplinary team at a tertiary referral center for PAH and who delivered between 2004 and 2020 were retrospectively analyzed in a case series. PAH risk factor profiles, including WHO functional class (WHO-FC), NT-pro-BNP, echocardiographic pulmonary arterial pressure (PAP), and right heart function, were analyzed prior to, during, and following pregnancy. Results: In seven pregnancies of five women with PAH (median age 29 (27; 31) years), there were no abortions or terminations. Five pregnancies were planned (all in WHO-FC I-II); two were incidental (WHO-FC II, III). During pregnancy, none of the women had complications or clinical worsening of PAH. After a median pregnancy duration of 37 1/7 weeks, all gave birth to healthy babies by cesarean section under spinal anesthesia. During pregnancy, PAP tended to increase, whilst the courses of WHO-FC and NT-pro-BNP were variable and no trend could be detected. Conclusion: Women with PAH with a low risk profile who were closely followed by a multidisciplinary team had a favorable course during and after pregnancy, resulting in successful deliveries of healthy newborns.
INTRODUCTION
Pulmonary arterial hypertension (PAH) is a relatively rare condition with exertional dyspnea as the main symptom (1). Diagnosis of PAH requires evidence of precapillary pulmonary hypertension by right heart catheterization (RHC), defined by a mean pulmonary artery pressure (PAP) >20 mmHg and a pulmonary artery wedge pressure <15 mmHg along with a pulmonary vascular resistance (PVR) >3 WU at rest (2). Patients in whom mean PAP decreases by >10 mmHg to below 40 mmHg during vasoreactivity testing are classified as vasoreactive. Prognosis is more favorable in patients with PAH with a positive response to vasoreactivity testing (1,3). After confirmation of the PAH diagnosis in an expert center, timely initiation of often combined therapy is warranted in nonvasoreactive PAH, whereas rather high doses of calcium channel blockers (CCB) are used in vasoreactive patients. Supportive therapy, as well as counseling patients concerning their disease and general measures, is an important part of a combined approach with drug therapy (4). A cornerstone of counseling women with PAH of childbearing age is discussing contraception, family planning, and the risks of pregnancy (5). It is generally recommended that pregnancy be avoided in patients with PAH, particularly with persistently compromised pulmonary hemodynamics (1,6). Physiological changes associated with pregnancy and puerperium pose a great challenge to the cardiovascular system, including large increases in blood volume and oxygen consumption (7,8). Patients with PAH have a limited ability to compensate for these physiologic changes and to increase cardiac output and blood volume due to the pathology of the pulmonary vasculature. Both pregnancy and delivery expose patients with PAH to the risk of right heart failure, with the highest incidence of maternal mortality occurring during the first days after delivery (7). In addition to these pathophysiological threats, some of the commonly used PAH drugs are teratogenic. Thus, secure contraception should be discussed, prescribed, and repeatedly assessed in patients with advanced PAH. If an unforeseen pregnancy occurs, early termination should be considered, which, however, does not go without risk either (6). Despite the enormous risks which pregnancy implies in PAH for both the patient and the fetus, some patients with mild symptoms and little impairment due to PAH in everyday life decide to become pregnant or to continue an incidental pregnancy. However, literature on the management and course of pregnancies in PAH is scarce (1,9,10). The aim of the present retrospective study was to analyze outcomes of all pregnancies in women with PAH treated in Switzerland's largest expert center for pulmonary hypertension. More data on pregnancies in a diverse spectrum of patients with PAH are needed as a basis for better counseling of women with PAH of childbearing age.
METHODS
This is a retrospective analysis of data from women diagnosed with PAH who conceived a pregnancy between the years 2004 and 2020 and were followed in the center for pulmonary hypertension at the University Hospital Zurich. Only patients diagnosed with PAH by RHC before pregnancy were eligible. All patients provided general consent for retrospective data analysis, and the local ethics committee declared that no additional authorization was needed. The course during pregnancy of one patient (ID 4) was already published in a case report in 2009 (8). Abbreviations: PAH, pulmonary arterial hypertension; RHC, right heart catheter; PAP, pulmonary artery pressure; PVR, pulmonary vascular resistance; CCB, calcium channel blockers; 6MWD, 6-min walking distance; FAC, fractional area change; TAPSE, tricuspid annular plane systolic excursion; TRPG, tricuspid regurgitation pressure gradient; NT-pro-BNP, N-terminal pro brain natriuretic peptide; ERA, endothelin receptor antagonist; WHO-FC, World Health Organization functional class; FU, follow-up.
Baseline data including demographics such as age, parity, classification of PAH, medication, co-morbidities, hemodynamics by RHC, echocardiographic values, World Health Organization functional class (WHO-FC), and 6-min walking distance (6MWD) were collected. The course of pregnancy was closely followed, and hemodynamics by echocardiography, including right ventricular function (fractional area change, tricuspid annular plane systolic excursion (TAPSE), and tricuspid regurgitation pressure gradient), blood N-terminal pro brain natriuretic peptide (NT-pro-BNP), and the WHO-FC were documented during the 1st, 2nd, and 3rd trimesters, at follow-ups 3-5 months and 1-2 years after delivery, and at the most recent follow-up. If the 6MWD was not available at follow-up, the peak work rate and oxygen uptake from ergospirometry were retrieved. Risk profiles were assessed by the REVEAL 2.0 risk score (14) and the ESC/ERS risk score (risk class of each parameter divided by the number of available parameters) (15,16). Complications during pregnancy such as eclampsia, preeclampsia, thrombosis, and worsening PAH-related symptoms were noted. The method of delivery, anesthesia, and perinatal complications, as well as the condition of the neonates, were documented, including neonatal death, small for gestational age (under the 10th customized centile) (11), preterm delivery (prior to 37 weeks of gestation) (12), low birth weight (≤2,500 g) (13), and need for admission of the newborn to the neonatal intensive care unit. Well-controlled PAH was defined as a low-risk profile under PAH-targeted therapy (10).
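As an illustration of the abbreviated risk assessment mentioned above, the sketch below averages the risk class of each available parameter in the spirit of the ESC/ERS approach; the parameter names and the 1/2/3 coding (low/intermediate/high) are assumptions made for demonstration, not a validated implementation.

```python
# Hedged sketch: average risk class over the assessable parameters,
# as described (sum of risk classes divided by available parameters).
# Risk classes are assumed to be coded 1 = low, 2 = intermediate, 3 = high.
from typing import Optional

def mean_risk_score(risk_classes: dict) -> Optional[float]:
    available = [v for v in risk_classes.values() if v is not None]
    return sum(available) / len(available) if available else None

# Hypothetical patient: three parameters assessable, echocardiography missing
score = mean_risk_score({"who_fc": 1, "six_mwd": 1, "nt_pro_bnp": 2, "echo": None})
print(round(score, 2))  # 1.33 -> predominantly low-risk profile
```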
Data Presentation and Statistics
The different variables prior to, during, and following pregnancy are descriptively presented as median (quartiles) or numbers (%).
Patients and Baseline Characteristics
During the observational period, seven pregnancies occurred in five women diagnosed with PAH. In three women, pregnancy was planned [ID 1 (2×), 2, and 3 (2×)], whilst two women became pregnant unplanned (ID 4 and 5). All of these pregnancies resulted in successful childbirth. Women with abortion would have been included as well, but no abortion occurred. All of these patients were regularly seen during pregnancy at the University Hospital Zurich, approximately once per month. Two patients gave birth twice, and data from both pregnancies were analyzed (ID 1.1/1.2 and 3.1/3.2). One patient (ID 2) already had a child, but her PAH diagnosis was made after her first pregnancy, and herein only data from the second pregnancy were included. The other two women were nulliparous. Baseline characteristics of the last observation before pregnancy are shown in Table 1. The median maternal age at the beginning of the pregnancy was 29 (26; 36) years. Three patients were in WHO-FC I, three in WHO-FC II, and one in WHO-FC III (ID 4). The latter was a woman with PAH due to systemic lupus erythematosus with an unplanned pregnancy, and she was the only one with a PVR >4 WU before becoming pregnant. Three women were classified as having idiopathic PAH, whilst the last one had schistosomiasis-associated PAH. The three women whose pregnancies were planned were in the low-risk group according to risk stratification. We did not exclude patients with high-risk PAH, but none of them became pregnant, likely because these women were firmly advised against pregnancy. The present collective of women with PAH who became pregnant was estimated to be <8% of the women with PAH of childbearing age followed at our center. Of interest, all three women with planned pregnancies revealed a >20% reduction of the PVR in vasoreactivity testing in the pre-pregnancy RHC and were prescribed CCB, albeit a formal >10 mmHg reduction of the mean PAP was not reached (but starting from a low mean PAP level). The most recent RHC was performed between 5 days and 20 months before pregnancy. For the women with PAH who gave birth twice (ID 1 and 3), no RHC was performed between the two pregnancies. The median 6MWD was 630 (482; 750) m and the median NT-pro-BNP was 44 (41; 70) ng/l (norm <130 ng/l).
Course of the Pregnancy
The median pregnancy duration was 37 1/7 (37 0/7; 38 0/7) weeks. There were no clinical signs of worsening PAH during pregnancy. The courses of echocardiographic data, NT-pro-BNP, and WHO-FC before, during, and after pregnancy are shown in Table 2; no significant differences between pre- and post-pregnancy values were detected by Wilcoxon test for any value. The changes in WHO-FC are illustrated in Figure 1.
There were no statistically significant changes in NT-pro-BNP, which tended to increase in two and to decrease in five patients. Changes in drug therapy and a detailed description of the medications of each woman before and during pregnancy as well as in childbed are shown in Table 3. In one patient (ID 2), the phosphodiesterase type 5 inhibitor was stopped during pregnancy because of headache, and this patient additionally received diuretics due to leg edema. In another patient (ID 4), a CCB was given instead of a phosphodiesterase type 5 inhibitor as she reported that she had previously responded well to CCB, albeit formal vasoreactivity criteria were not fulfilled in the preceding RHC (3). In the two patients who were taking an endothelin receptor antagonist (ERA) when the unintended pregnancy was discovered (ID 4 and 5), this medication was immediately stopped because of potential teratogenic effects (17,18). The patient with ID 4 was switched to inhaled iloprost, as she was in WHO-FC III before the discovery of the pregnancy, she had to stop the ERA, and we wanted to keep her on dual combination therapy. The patient with schistosomiasis-associated PAH (ID 5) was switched from ERA to off-label oral selexipag after interdisciplinary discussion with the hospital pharmacist, as this patient had an unfavorable hemodynamic profile at diagnosis, which responded very well to vasodilator combination therapy (reduction of the PVR from initially 9.2 to 2.5 WU before pregnancy onset), and we thus wanted to keep her on combination therapy. In childbed, or in some cases already in the third pregnancy trimester, patients received prophylactic anticoagulation in the form of subcutaneous low molecular weight heparin. Patient ID 4 was on vitamin K antagonist therapy at the time of discovery of the pregnancy because of lupus antigens; this was immediately stopped due to the known teratogenic effect and replaced by low molecular weight heparin (19).
None of the women had severe pregnancy-associated complications such as preeclampsia, eclampsia, or thrombosis. In one patient (ID 2), early pregnancy bleeding occurred at 8 2/7 weeks of gestation; therefore, progesterone was applied. In another pregnancy (ID 3.2), there was a rhesus constellation between mother and fetus, which was treated with maternal injection of anti-D prophylaxis.
Perinatal Period and Delivery
In all cases, a cesarean section under regional anesthesia was recommended by our multidisciplinary team and successfully conducted under spinal anesthesia. Extended hemodynamic monitoring, including invasively measured blood pressure, central venous pressure, and arterial and mixed-venous oxygen saturation, was performed during delivery. All mothers gave birth to healthy babies. None of the neonates was small for gestational age or had low birth weight. After delivery, all mothers were observed in the intensive care unit for 1-2 days for safety reasons. Six cesarean sections were planned deliveries before term, with the latest at 38 0/7 gestational weeks. In one woman (ID 1.1), it was decided to terminate the gestation for safety reasons at week 33 5/7 during an extremely hot summer with temperatures over 35 °C, when the patient suffered from leg edema and increased dyspnea. This patient had postpartum uterine bleeding under therapeutic anticoagulation on day two after delivery, which was treated only with fluid management and iron supplementation. When this bleeding occurred, the patient was on the normal ward and was readmitted to the intensive care unit for 1 day of safety observation without need for additional therapies.
Follow-Up and Safety
Patients were regularly followed after pregnancy in the pulmonary hypertension center. Follow-up data on WHO-FC, NT-pro-BNP, and echocardiography are shown in Table 2, and on 6MWD or cardiopulmonary exercise testing in Table 4. Symptoms returned to pre-pregnancy levels, and only the patient with PAH associated with systemic lupus erythematosus had slightly worse hemodynamics at follow-up. She had difficulties performing exercise tests due to chronic knee pain. In all women, 6MWD 3-5 months after delivery was shorter compared to pre-pregnancy values, but in the majority, exercise capacity returned to pre-pregnancy values after 1-2 years.
DISCUSSION
This case series summarizes seven successful pregnancies of women with well-controlled PAH who were stratified to be in the prognostic low-risk group. With one exception, patients were in WHO-FC I-II and had low NT-pro-BNP values. Hemodynamics were not severely impaired, and PVR was <4 WU before pregnancy. All women were closely followed in our center, gave birth to healthy children, and no serious adverse events or relevant worsening of PAH occurred (mean follow-up 4 years and 2 months). Compared to other series (9,10,20), all pregnant PAH patients in the present report were in a low-risk group before pregnancy onset according to current guidelines or REVEAL scores (1).
During pregnancy, the body has to adapt to physiological changes affecting the cardiovascular system and several other organs. In patients with PAH, the pulmonary vascular disease impairs a physiologic vasodilator response. This may lead to increased pulmonary vascular resistance and the imminent risk of right heart failure (10,21). Pregnancy may also uncover previously undiagnosed PAH, as shown in a series from Sheffield, where 4/9 pregnant women had a de-novo diagnosis of PAH discovered due to excessive dyspnea during pregnancy (9,22). The management of pregnancies in patients with PAH needs a multidisciplinary team in a specialist center with experience in managing PAH as well as complicated pregnancies and deliveries (9,10,20,23). These women should be closely seen by experts and monitored for their disease as well as for fetal growth retardation (6,24). The best follow-up strategy for pregnant patients with PAH is not known. Common risk assessments, such as the WHO-FC, may be biased by an increased dyspnea sensation caused by the increased neural respiratory drive also seen in pregnancies of healthy women (25). Exercise tests by the 6MWD or ergospirometry are subject to the same bias. Due to the increased cardiac output in pregnancy, an increase in the tricuspid regurgitation pressure gradient is a physiological consequence, and the same is true for right ventricular volumes, which has to be considered during echocardiographic follow-up. Common symptoms and signs of PAH, such as dyspnea, leg edema, and fatigue, are also frequently found during pregnancy, and their distinction thus needs expertise (6,9,10,26). The physiological adaptation to pregnancy may be only slightly impaired in patients with well-controlled PAH who are in a good functional class and have only minor restrictions in physical activity. Good exercise tolerability before pregnancy onset may be a sign of preserved right ventricular contractile reserve and thus a beneficial indicator of favorable pregnancy outcomes. The paradigm of avoiding pregnancy for active women with well-controlled PAH may be associated with a very high psychosocial burden and a lack of acceptance of these recommendations. Pulmonary hypertension centers face the dilemma of adherence to guidelines versus individualized patient counseling and the provision of best follow-up care in difficult circumstances. In our collective, all patients wished to become mothers. In three patients, counseling was provided before contraception was stopped; in two of these, the ERA was stopped, and RHC was performed 3 months after stopping in order to verify the absence of hemodynamic worsening. Two unplanned pregnancies occurred under potentially teratogenic ERA despite repeated counseling and prescription of contraceptive drugs.
In the current case series, all five women were on optimized PAH-targeted medical treatment, with 3/5 revealing borderline vasoreactivity in the most recent RHC before pregnancy. During pregnancy, these women did not reveal signs of PAH worsening as assessed by WHO-FC, NT-pro-BNP, or hemodynamics by echocardiography, and thus, as in other series, RHC was not repeated (9). At echocardiography, right heart function was normal in all women, and only the woman with lupus-associated PAH and an unplanned pregnancy was in a higher WHO-FC III, which even improved to II in the third trimester, possibly related to a favorable immunomodulatory effect of systemic lupus erythematosus during pregnancy (27). However, her lupus worsened 11 weeks after delivery with affection of the hair, skin, joints, and serosa, resulting in a need to intensify immunosuppressive therapy.
In patients with PAH in the intermediate- to high-risk group, labor, delivery, and the post-partum period are associated with a high risk of mortality, described to be up to 36%, but with better outcomes in multidisciplinary PH centers in recent years (1,9,10,28,29). Like others, we recommended delivery by cesarean section under spinal anesthesia, as an expert team can be present and unforeseen hemodynamic threats during labor can be avoided (9,10,20,30). The first week of the post-partum period carries the greatest risk of maternal death (31). In a study published in 2014, 77 pregnancies were observed with an overall mortality of 16%, whereof 11% died in the post-partum period related to PAH, with the exception of one (22). An increase in cardiac output of up to 80% may be observed during this period due to auto-transfusion associated with uterine involution and the resorption of leg edema, which may lead to right ventricular failure; therefore, we observed women with PAH for several hours in the ICU. Another risk in the post-partum period is thromboembolic events; therefore, prophylactic anticoagulation was started right after cesarean section in our patient collective (6). In all cases, close monitoring is required during delivery and in the post-partum period (23).
The major limitation of this study is the small number of patients who became pregnant during the observational period. However, PAH is a rare disease, and affected women are generally counseled against pregnancy.
All pregnant women seen at our center between 2004 and 2021 had a favorable outcome, most probably due to their well-controlled PAH with low-risk profiles managed by a multidisciplinary team in a reference center for pulmonary hypertension. We hope that our series, together with others, helps treating physicians to counsel women with well-controlled PAH in the childbearing phase who do not wish abortion. However, it has to be clearly stated that pregnancy may pose a considerable risk of disease progression to women with PAH with less favorable risk profiles and should thus be avoided.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
SU is the guarantor and takes responsibility for the content of the manuscript, including the data and analysis. NC, SU, and SS contributed to acquiring, analyzing, and interpreting the data, writing and revising the article critically for important intellectual content and providing final approval of the version to be published. CB, ML, ES, FG, AG, RS, and FK contributed to data collection, analysis and revising the article critically for important intellectual content. All authors take responsibility for all aspects of the reliability and freedom from bias of the data presented and their discussed interpretation. All authors contributed to the article and approved the submitted version. | 2021-07-05T13:31:25.599Z | 2021-07-05T00:00:00.000 | {
"year": 2021,
"sha1": "1136db9e396dd054f3ce68f19b6077beaf3b04d9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.689764/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1136db9e396dd054f3ce68f19b6077beaf3b04d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17852995 | pes2o/s2orc | v3-fos-license | Binge Eating Behavior and Weight Loss Maintenance over a 2-Year Period
Objective. To investigate the relationship between binge eating behavior and weight loss maintenance over a two-year period in adults. Design. Secondary data analysis using the Keep It Off study, a randomized trial evaluating an intervention to promote weight loss maintenance. Participants. 419 men and women (ages: 20 to 70 y; BMI: 20–44 kg/m2) who had intentionally lost ≥10% of their weight during the previous year. Measurements. Body weight was measured and binge eating behavior over the past 6 months was reported at baseline, 12 months and 24 months. Height was measured at baseline. Results. Prevalence of binge eating at baseline was 19.4% (n = 76). Prevalence of binge eating at any time point was 30.1% (n = 126). Although rate of weight regain did not differ significantly between those who did or did not report binge eating at baseline, binge eating behavior across the study period (additive value of presence or absence at each time point) was significantly associated with different rates of weight regain. Conclusion. Tailoring weight loss maintenance interventions to address binge eating behavior is warranted given the prevalence and the different rates of weight regain experienced by those reporting this behavior.
Introduction
In the United States, 34% of adults aged 20 and over are considered overweight, 34% obese, and 6% extremely obese [1]. It is estimated that approximately two-thirds of overweight and obese individuals are currently trying to lose weight [2,3]. Despite the fact that losing weight and maintaining weight loss can be difficult, research shows that 17-20% of overweight or obese individuals are able to lose weight and maintain their weight loss for at least 1 year [4,5]. A sustained reduction of 10% of initial body weight is generally considered a success for clinical and research purposes, as this amount, even for very obese persons, can result in numerous health and economic benefits, such as an increase in life expectancy and a reduction in comorbidities such as diabetes, hypertension, and coronary heart disease, along with associated medical costs [5,6]. Thus, obtaining better understanding of factors that influence successful weight loss maintenance (WLM) is important and could lead to the development of more effective intervention strategies.
The National Weight Control Registry (NWCR) has provided important information regarding correlates of successful weight loss maintenance (defined as having lost at least 13.6 kg (30 pounds) and having kept it off for a minimum of one year). The NWCR is a group of over 3,000 individuals who have lost an average of 30 kg and maintained this loss for an average duration of 5.5 years [5]. Data from the NWCR indicates that successful weight loss maintainers have high physical activity levels, eat a diet low in fat and high in carbohydrate, and regularly self-monitor their weight, while higher levels of depression, disinhibition, and binge eating increase the risk of weight regain [5], suggesting that binge eating may be a behavior important to successful WLM.
Although considerable research examining the relationship between binge eating and weight loss has been conducted [7][8][9][10][11], there is a paucity of research on the relationship between binge eating and prolonged WLM. Of interest, rates of binge eating in NWCR participants are comparable to those observed in community samples, with 8% of members reporting 4 or more binges per month [12]. Beyond this observational study, empirical evidence on the relationship between binge eating and WLM is scarce. The weight loss literature on this topic is mixed. Most studies have not found binge eating to be a strong predictor of weight loss success [7][8][9][10][11]. However, one study found that binge eaters who abstained from binging earlier in the program had significantly greater weight loss than those who continued binge eating [13] and several other studies have found that individuals who engage in binge eating regain weight more rapidly after treatment compared to nonbingers [9][10][11]. It has also been documented that participants who binge eat are more likely to drop out of treatment programs [7,[9][10][11], which may mask potential relationships that exist between binge eating and weight loss outcomes.
Although there is limited support for binge eating as a predictor of weight loss, binge eating behavior may be negatively associated with WLM, based on evidence from the studies cited previously, along with the evidence that successful weight maintainers report low rates of binge eating [12] and individuals who binge eat have been shown to gain more weight over time [14].
The Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), now includes binge eating disorder (BED) as a psychiatric illness in recognition of the psychological distress and impaired functioning associated with this behavior. A binge eating episode is defined as consumption of an unusually large amount of food accompanied by a sense of loss of control over eating during the episode [15]. To meet the DSM-5's criteria for BED, an individual must engage in a binge eating episode at least 1 time per week for 3 months [15]. Furthermore, the DSM-5 denotes classifications for severity of binge eating: if 1-3 binge episodes occur weekly, this is classified as mild binge eating disorder. BED of moderate severity occurs at a frequency of 4-7 episodes per week, severe at a frequency of 8-13 episodes per week, and extreme if 14 or more episodes occur weekly [15]. The prevalence of BED in individuals seeking treatment for obesity has been reported to be as high as 25%, compared to estimates of 2-5% from community samples [12].
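To make these frequency bands concrete, the small helper below maps a weekly episode count onto the DSM-5 severity labels quoted above; the function name and the "subthreshold" label for frequencies below the diagnostic criterion are illustrative choices, not DSM-5 terminology.

```python
# Illustrative mapping of weekly binge episode counts to DSM-5 severity bands.
def dsm5_bed_severity(episodes_per_week: int) -> str:
    if episodes_per_week < 1:
        return "subthreshold"  # below the >= 1 episode/week criterion
    if episodes_per_week <= 3:
        return "mild"
    if episodes_per_week <= 7:
        return "moderate"
    if episodes_per_week <= 13:
        return "severe"
    return "extreme"

assert dsm5_bed_severity(2) == "mild"
assert dsm5_bed_severity(14) == "extreme"
```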
Although binge eating behavior has been studied extensively, controversy exists about how best to operationalize the disorder in a research setting [16]. Many studies have measured binge eating at a single time point (e.g., [7,14]), while few studies have examined binge eating over time [17,18]. This is important because it is possible that binge eating behavior fluctuates over time, and a single time point may not capture all individuals experiencing symptoms. Though data have been used to assert that binge eating behaviors remain stable [18], other research shows that certain disordered eating behaviors, like binge eating, fluctuate over time [17]. Though Hawkins and Clement [18] reported temporal stability of binge eating (test-retest = 0.88), their sample consisted of undergraduates, most of whom were of normal weight, and the binge eating scale utilized had only moderate internal consistency (0.68). Moreover, the period of assessment for stability was five weeks, and it is possible that cycles of binge eating exceed five weeks. Consequently, it may be important to measure binge eating at multiple time points in order to more accurately assess the relationship between binge eating and weight change, particularly among adults who are trying to maintain a clinically meaningful weight loss.
An important and potentially confounding factor to consider when studying binge eating is depression. Though depression and binge eating are not interchangeable, it has been suggested that there is a relationship between the two [19]. Empirical data have supported this relationship, especially in women [8]. A number of previous studies have found depression to be associated with both binge eating [9,[20][21][22] and weight status [23,24]. More recently, negative affective states have been shown to precede binge eating in overweight and obese adults [25]. Thus, measuring depression and attempting to prevent depression scores from confounding the relationship between binge eating behavior and weight loss maintenance is advisable.
The current study addresses a gap in the literature by examining the relationship between binge eating behavior and WLM among adults who recently lost at least 10% of their body weight and enrolled in a study to test the effect of an intervention on WLM success. Because binge eating may fluctuate over time and this may influence WLM, binge eating behavior was assessed at multiple time points. Binge eating behavior was measured at baseline, 12 months, and 24 months, allowing for both cross-sectional and longitudinal approaches to categorizing this behavior. We hypothesize that (H1) individuals who report binge eating behaviors at baseline will have a greater rate of weight regain over the 24-month trial compared to individuals who do not report binge eating at baseline. Additionally, accounting for binge eating status at multiple time points, taking a longitudinal approach to this construct, will be informative beyond the use of baseline binge eating status alone. Specifically, we hypothesize that (H2) participants who report binge eating at multiple time points will have a greater rate of weight regain over the 24-month trial relative to participants who report no binge eating or less consistent binge eating (reporting binge eating at one out of 3 time points). Because of its potential role in the relationship between binge eating and weight regain, depression was included in analyses.
Participants.
Keep It Off (KIO) participants are members of the Minnesota-based HealthPartners managed-care organization who intentionally lost at least 10% of their body weight during the previous year. Briefly, a participant's progression through the study was as follows: investigators recruited participants who had lost at least 10% of their body weight in the past year; interested individuals were telephone-screened for eligibility (n = 875); and those who met inclusion criteria were enrolled, randomized to one of two weight loss maintenance groups (n = 419), and then received either the self-directed or guided weight loss maintenance intervention. Participants in the self-directed intervention received less education about weight loss maintenance (two 20-minute phone calls and a self-monitoring logbook to keep track of progress) compared to the guided intervention (twenty-four 20-minute phone calls plus follow-ups, along with the self-monitoring logbook, which they were required to submit at regular intervals to intervention staff) [26].
During eligibility screening, participants were asked to describe how (e.g., through a weight loss program or by decreasing caloric intake on their own) and why they purposefully lost weight. Inclusion criteria for participation in the study were as follows: 19 to 70 years old, BMI > 20.5 kg/m2, and the capacity to communicate with research staff by telephone. Participants could not have a history of anorexia nervosa, previous bariatric surgery, a recent diagnosis of a non-skin cancer or congestive heart failure, and/or current participation in another phone-based weight loss program or in another weight-management study. The KIO intervention and primary outcomes are described in detail elsewhere [26,27].
Weight and Height.
Weight and height were measured at baseline, 12 months, and 24 months during in-person visits, with subjects wearing light clothes and without shoes (Seca 770 Medical Scale; Seca 214 Portable Height Rod). Body Mass Index (BMI) was calculated from the measured weight and the in-person height measurement.
Binge Eating Behavior.
Binge eating behavior was self-reported via survey at baseline, 12 months, and 24 months during in-person visits and defined using three items from the Eating Disorder Diagnostic Scale (EDDS): (1) eating what other people would regard as an unusually large amount of food: "Have there been times when you felt that you have eaten what other people would regard as an unusually large amount of food (e.g., a quart of ice cream) given the circumstances?" (Response options = yes/no); (2) a perceived loss of control during these episodes: "During the times when you ate an unusually large amount of food, did you experience a loss of control (feel you could not stop eating or control what or how much you were eating)?" (Response options = yes/no); and (3) the frequency of binge episodes: "How many TIMES per week on average over the past 6 months have you eaten an unusually large amount of food and experienced a loss of control?" (Response options = 0, 1, 2, ..., 13, 14+) [28]. The EDDS has good psychometric properties: internal consistency, alpha = 0.89; test-retest reliability, r = 0.87 [28,29].
Categorizing Binge Eating Behavior
At baseline and at 12 and 24 months, each participant was categorized as a binge eater if they reported eating a large amount of food and feeling a loss of control over eating at least once per week on average over the past 6 months. In cases of missing data, if it could not be verified that the participant self-reported this behavior at least once a week, in accordance with DSM-5 criteria, they were labeled as missing or as not experiencing binge eating, depending upon the other information available.
For example, some participants may have reported eating a large amount and skipped the question about experiencing a loss of control but reported a frequency of at least 1 time per week on average; these participants were classified as engaging in binge eating because they reported a frequency specified by the DSM-5, and the final question clearly states episodes where both a large amount and a loss of control were experienced.
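The classification rule just described can be expressed as a short sketch; the 0/1 item coding with None for missing responses is an assumption made for illustration and does not reflect the study's actual variable scheme.

```python
# Sketch of the per-time-point classification rule described in the text.
# Items: large_amount (1/0/None), loss_of_control (1/0/None),
# frequency (average episodes per week, or None if skipped).
def classify_binge_eating(large_amount, loss_of_control, frequency):
    # The frequency item already combines "large amount" and "loss of control",
    # so an average of at least 1 episode/week suffices even if a gate item
    # was skipped (the example given in the text).
    if frequency is not None and frequency >= 1:
        return True
    # An explicit "no" on either gate item, or a reported frequency below
    # 1/week, rules out DSM-5-level binge eating at this time point.
    if large_amount == 0 or loss_of_control == 0 or frequency is not None:
        return False
    return None  # cannot be verified -> treated as missing

assert classify_binge_eating(1, None, 2) is True
assert classify_binge_eating(1, 1, 0) is False
assert classify_binge_eating(1, None, None) is None
```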
Binge eating behavior was categorized as being present or absent at each time point (baseline, 12 months, and 24 months). For H1, we were interested in how binge eating behavior at baseline was associated with rate of weight regain over two years. For H2, we were interested in how the consistency of binge eating behavior at multiple time points (e.g., an individual may not have reported binge eating at baseline but may have reported binge eating at 12 months and/or 24 months) was associated with rate of weight regain over two years.
Binge Eating Behavior at
Baseline. H1 is concerned with whether participants met the criteria for binge eating at baseline. All analyses pertaining to this hypothesis used the aforementioned definition of binge eating behavior and applied it to baseline only.
Binge Eating Behavior over Time
Any Binge Eating. Assessment of binge eating based on multiple time points may provide different estimates of the occurrence of this behavior relative to a single point in time, such as baseline. For this reason, a variable was created to assess whether participants met criteria for binge eating behavior at any data collection time point throughout the two-year study. This is a dichotomous variable: if a participant met the aforementioned criteria for binge eating at baseline, 12 months, or 24 months, they were classified as having binge eating behavior. As shown in Table 2, nearly a third of the sample self-reported binge eating behavior at at least one time point during the two years of the study.
Recurrent Binge Eating.
To address H2, a cumulative binge score assessing recurrent binge eating was created; this score is the sum of self-reported binge eating behavior at each time point, with a minimum of 0 indicating no binge eating at any time point and a maximum of 3 indicating a participant self-reporting binge eating at all three time points (baseline, 12 months, and 24 months). A three-level categorical variable was then created for individuals reporting no binge eating at any time point, binge eating behaviors at one time point, or binge eating behaviors at 2 or more time points. Since a score of 2 or more meant reporting binge eating more often than not (at 2 or more of the 3 possible time points), we classified these participants as consistent binge eaters and those with a score of 1 as inconsistent binge eaters. This variable was treated as categorical in the modeling.
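As a concrete illustration, the cumulative score and the three-level consistency variable could be derived as in the pandas sketch below; the column names and toy values are assumptions.

```python
# Sketch: cumulative binge score (0-3) and the three-level consistency
# variable derived from binge status at baseline, 12 m, and 24 m.
import pandas as pd

df = pd.DataFrame({
    "binge_0m":  [0, 1, 1],
    "binge_12m": [0, 0, 1],
    "binge_24m": [0, 1, 1],
})
df["binge_sum"] = df[["binge_0m", "binge_12m", "binge_24m"]].sum(axis=1)
df["binge_group"] = pd.cut(
    df["binge_sum"], bins=[-1, 0, 1, 3],
    labels=["no binge", "inconsistent", "consistent"],
)
print(df)  # sums 0, 2, 3 -> "no binge", "consistent", "consistent"
```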
Severity of Binge Eating Behavior
Based on the new DSM-5 definitions for binge eating severity, severity of binge eating was categorized according to the weekly frequency of binge episodes (mild: 1-3; moderate: 4-7; severe: 8-13; extreme: 14 or more episodes per week). Depression was measured using an 11-item version of the Center for Epidemiologic Studies Depression Scale (CES-D) [30]. The 11-item scale has a maximum possible score of 33. A participant's score on the CES-D is the sum of responses to 11 items (e.g., "I felt sad"; rated on a scale of "rarely or none," "some," "moderate," and "all") regarding the amount of time during the past week for which they felt they agreed with the statement, with two items reverse-coded. A score of 9 or higher on the 11-item scale corresponds to a score of 16 or higher on the 20-item scale and represents clinically significant symptoms of depression [31]. Additional demographic variables used in the analyses included gender and age.
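A minimal sketch of the 11-item CES-D scoring described above follows; which two items are reverse-coded is not specified here, so the indices used below are placeholders rather than the instrument's actual reverse-scored items.

```python
# Illustrative CES-D (11-item) scoring: items rated 0-3, two reverse-coded,
# summed to a 0-33 total; >= 9 flags clinically significant symptoms.
REVERSE_ITEMS = {3, 7}  # placeholder indices for the two reverse-coded items

def cesd11_score(responses):
    assert len(responses) == 11 and all(0 <= r <= 3 for r in responses)
    return sum((3 - r) if i in REVERSE_ITEMS else r
               for i, r in enumerate(responses))

resp = [1, 0, 2, 3, 0, 1, 0, 3, 1, 0, 1]
score = cesd11_score(resp)
print(score, "clinically significant" if score >= 9 else "below cutoff")
```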
Statistical Analysis.
Measures of central tendency and dispersion were calculated for demographic variables, depression, binge eating, and body weight for the study sample and then stratified by baseline binge eating status and by consistency of binge eating behavior. t-tests and analysis of variance compared means between groups for continuous variables, and chi-square tests compared proportions between groups for categorical variables.
Hypotheses 1 and 2 were tested by estimating mixed models to predict three weight measures per person from two key predictors and their interaction: baseline binge eating status/consistency of binge eating status and the time at which weight was measured (baseline, 12 months, and 24 months). Main effects for treatment group assignment, depression, age at baseline, and gender were included as covariates, as well as the treatment-by-time interaction to control for intervention efficacy. Interactions between binge eating and depression and between treatment group, time, and binge status were included in preliminary models to ensure that the effects of primary interest were not modified by other modeled covariates. Neither interaction was significant, and thus neither was retained in the final models. The nonsignificant interaction between binge eating and depression indicated that participants who were more depressed did not show a significantly different relationship between binge eating and weight over the two years compared to participants who were less depressed. Study participant was the unit of analysis in these models, with repeated weight observations nested within participants. Two random effects, the weight intercept and the time slope, were estimated for each participant, with the remaining parameters treated as fixed.
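The analyses were run in SAS and SPSS; purely as a hedged illustration of the model structure, the following Python/statsmodels sketch fits an analogous mixed model with a random intercept and random time slope per participant on synthetic data (all variable names and values are invented for the example).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
base = pd.DataFrame({
    "id": np.arange(n),
    "binge_base": rng.integers(0, 2, n),
    "treat": rng.integers(0, 2, n),
    "dep_base": rng.normal(5.5, 3.0, n),
    "age": rng.normal(46.5, 10.0, n),
    "gender": rng.integers(0, 2, n),
})
long_df = base.loc[base.index.repeat(3)].reset_index(drop=True)
long_df["time_yr"] = np.tile([0.0, 1.0, 2.0], n)
long_df["weight_lb"] = (180 + 4 * long_df["time_yr"]
                        + 2 * long_df["binge_base"] * long_df["time_yr"]
                        + rng.normal(0, 8, len(long_df)))

# Random intercept and random time slope per participant; fixed effects
# for the key predictors and covariates, as described in the text.
model = smf.mixedlm(
    "weight_lb ~ binge_base * time_yr + treat * time_yr"
    " + dep_base + age + gender",
    long_df, groups="id", re_formula="~time_yr",
)
print(model.fit().summary())   # H1 hinges on the binge_base:time_yr term
```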
The prediction that binge eaters would regain weight at a faster rate over the two-year period would be most strongly supported by a significant interaction between binge status and time. The key difference between the H1 and H2 models was that binge eating behavior had two values (i.e., not a binge eater at baseline and binge eater at baseline) in the H1 model but three (i.e., not a binge eater at any time, inconsistent binge eater, and consistent binge eater) in the H2 model.
All statistical analyses were completed using Statistical Analysis System (SAS) version 9.3 and the Statistical Package for the Social Sciences (SPSS) version 20 [32,33].

Results

Participant Characteristics. Table 1 presents baseline demographic and psychosocial characteristics for the sample. The majority of participants were Caucasian, married, employed, nonsmoking females with an average age of 46.5 years. The average BMI was 28.5 kg/m², classifying most participants as overweight despite recent weight loss of at least 10% of body weight. The average baseline depression score was 5.5, indicating nonclinically significant depression symptoms.
Prevalence of self-reported binge eating is presented in Table 2 for baseline, 12 months, 24 months, and any of the three time points. Binge eating behavior was self-reported by about one-fifth of the sample at baseline and 12 months; the proportion of the sample reporting binge eating behavior was about one out of six at 24 months. About 30% of the sample met this criterion when considering any time point. As shown in Table 2, the majority (three-quarters or greater) of participants reported binge eating episodes that would be classified as mild, occurring between 1 and 3 times per week. Binge eating behavior was missing for 28 participants at baseline (6.7% of the sample), 76 participants at 12 months (18.1% of the sample), and 80 participants at 24 months (19.1% of the sample). Table 3 presents demographic and descriptive statistics for the study sample, stratified by binge eating behavior at baseline and by binge eating behavior consistency (no binge, inconsistent binge, or consistent binge). Significant differences observed between those not reporting binge eating behaviors and those reporting binge eating behaviors at baseline included weight, BMI (calculated using weight), baseline depression, and average depression score, all of which were higher in those reporting binge eating behavior. BMI was not significantly different between genders at any time point but was significantly, though weakly, positively associated with both baseline depression and average depression at each time point (p < 0.05; r values: 0.109-0.217).
Stratifying by binge consistency shows that approximately 69.9% of participants reported no binge eating behavior, 17.7% reported inconsistent binge eating behavior (reported binge eating at one time point), and 12.4% were consistent binge eaters (reported binge eating at two or three time points). Similar to the comparisons made by baseline binge status, these categorizations are associated with significant differences in baseline BMI and depression scores. Baseline depression score was not significantly different between groups of binge eating consistency in this analysis; however, age was: those reporting no binging and consistent binging were significantly older than those reporting inconsistent binging.

Binge Eating Behavior and WLM. Figure 1 presents descriptive results of weight change over 24 months for (a) those who reported presence or absence of binge eating at baseline (no baseline binge eating corresponded with an average regain of 7.7 pounds (3.5 kg); baseline binge eating corresponded with an average regain of 11.7 pounds (5.3 kg)) and (b) those who reported no binge eating, inconsistent binge eating, or consistent binge eating (average regains of 7.0 pounds (3.2 kg), 12.2 pounds (5.5 kg), and 13.4 pounds (6.1 kg), resp.), supporting the hypothesis that binge eating is associated with more weight regain. Figure 2 displays the model-estimated weight gain trajectories by (a) baseline binge eating behavior and (b) binge consistency. In both graphs, those who report binge eating are heavier at baseline, corroborating the significant differences reported in Table 3. In Figure 2(a), individuals who report no binge eating behavior at baseline gain roughly 3.8 pounds (1.7 kg) per year while individuals who report binge eating behavior at baseline gain about 5.7 pounds (2.6 kg) per year (p value for time < 0.001). The difference in rate of regain was in the predicted direction but did not reach conventional levels of statistical significance (p < 0.08, pseudo-R² = 0.545).
As for the additional parameters that were estimated, the intervention-by-time interaction was statistically significant (p = 0.011), which is consistent with the primary outcome paper result that participants in the guided versus self-directed intervention arm regained weight at a slower rate over the 24-month followup [26]. Baseline depression and gender were significant main effects (p = 0.006 and p < 0.001, resp.), meaning that more depressed people weighed more than less depressed people and women weighed less than men. Age was not related to weight regain (p = 0.08). Figure 2(b) depicts the weight trajectories for groups categorized by binge consistency. The yearly rate of regain was, on average, 3.6 pounds (1.6 kg) for participants who did not meet criteria for binge eating at any of the three time points, 6.1 pounds (2.8 kg) for those reporting binge eating at one time point, and 6.5 pounds (3.0 kg) for those reporting binge eating at 2 or 3 time points. In contrast to the first model, which only considered binge eating status at baseline, this model reveals that participants who did not report binge eating, reported inconsistent binge eating, or reported consistent binge eating regained weight at significantly different rates (p = 0.013). The effect appears to be driven by the two binge groups relative to the group reporting no binge eating. Models were rerun using BMI as an outcome. Weight and BMI trajectories followed similar patterns, and significance levels of terms were very close to the reported values. For ease of interpretability, weight was chosen as the outcome.
Discussion
This is one of the first studies to assess the relationship between binge eating behaviors and weight loss maintenance. Participants in this study had lost at least 10% of their body weight in the past year and were randomized to either a self-directed program or a guided intervention designed to help maintain weight loss. Results show that binge eating behavior is associated with greater weight regain independent of the effect of the WLM intervention.
Numerous individuals reported binge eating at baseline and no binge eating at future evaluations, and vice versa. In response to this observation, a new variable was created to capture binge eating consistency throughout the study. Data were modeled separately using baseline binge eating behavior and the measure of binge eating consistency to examine how results might differ according to whether only baseline binge eating is considered or a more longitudinal proxy for binge eating behavior is used. Concerning our first hypothesis, when considering only baseline binge eating, rate of weight regain was not statistically different (p = 0.077) between groups, though differences in rates were in the hypothesized directions, as depicted by Figure 2(a). When using the newly created variable to address binge eating behavior consistency throughout the study, differences in rate of weight change were more pronounced and statistically significant (p = 0.013), supporting our second hypothesis. This could be due to increased power to detect an effect or a truly increased rate of weight regain when taking binge behavior at any time point into account. Both baseline and average depression scores were significantly related to weight change over 2 years. The effect of the treatment group on weight regain over time was also substantial, as those randomized to the self-directed group regained significantly more weight by the 24-month followup compared to those in the guided maintenance intervention. However, no significant interaction was found between depression and binge eating, or among binge eating, treatment group, and time, as binge eating was equally detrimental for those in both treatment groups. This may be a result of the fact that the intervention focused more broadly on weight loss maintenance strategies and did not specifically target binge eating behaviors.
The present investigation has several limitations. First, because the primary goal of the Keep It Off study was to evaluate the effectiveness of a WLM intervention, data may not have been collected in a manner that would most accurately assess binge eating behavior. Three questions were used from the Eating Disorder Diagnostic Scale along with guidance from newly defined DSM-5 criteria to determine whether an individual self-reported engaging in binge eating behavior. The EDDS is subject to limitations inherent in any self-report measure: retrospective recall bias, memory (especially over a 6-month period), and distortion or inaccurate reporting due to desire to please the experimenter or social desirability. However, in one study self-report was shown to have adequate validity compared with the gold-standard interview method for assessing binge eating disorder behaviors [34]. Another limitation is the inability to assess the directionality of binge eating and weight change due to impracticably small numbers of participants grouped according to the direction of change in binge status. Trends were in the expected direction, as individuals who reported binging at the end but not at the beginning of a year gained more weight than those reporting binging at the beginning but not at the end of the year. Future research investigating directionality may be designed to better investigate "binge cycles" using clinical populations.
Finally, the sample recruited for this study was not representative, as participants were mostly Caucasian (86.87%) and female (81.62%). Previous research shows that, unlike other eating disorders, BED is distributed fairly equally among women and men. Research also shows that differences exist in the rates and severity of binge eating, as well as in the psychological correlates of binge behaviors by ethnicity. It would be advisable for future research on binge eating to include greater proportions of male subjects and ethnic minorities.
Results from this study demonstrate primarily that individuals reporting engaging in binge eating behaviors regain weight at a faster rate than those who do not report binge eating behaviors. For future studies, assessing binge eating behavior at multiple time points may be helpful in examining the magnitude and change in binge eating behaviors over time. This may capture those individuals who do not meet the full criteria for BED, but who engage in a level of subclinical binge eating behaviors that could affect health outcomes.
Based on the findings presented here, binge eating behaviors affect a substantial percentage of individuals who have lost weight and are trying to prevent weight regain. Binge eating behaviors were shown, in this study, to significantly affect the amount and rate of weight regained over a two-year period. As mentioned, little attention has been paid to the role of binge eating in weight loss maintenance. Work has been done looking at different treatment modalities for individuals experiencing binge eating and suggests that cognitive behavioral therapy (CBT) works to decrease symptomatology but does not substantially reduce body weight, while standard weight loss therapy is more successful in terms of amount of weight loss during the active treatment phase [35][36][37][38]. However, as time progresses after treatment, recurrence of binge eating symptoms may contribute to weight regain, which could be one factor contributing to nonsignificant weight differences between treatment groups at followup [35]. Additionally, interpersonal therapy (IPT) has been studied in treating BED, found to reduce symptoms [39], and performed significantly better than behavioral weight loss [40]. IPT may be a promising treatment modality to include when treating BED. Developing interventions that offer more tailored support is essential in moving forward to promote weight loss maintenance in the appreciable portion of individuals who have lost weight and experience binge eating behavior. | 2018-04-03T05:07:33.152Z | 2014-05-08T00:00:00.000 | {
"year": 2014,
"sha1": "58368fa387dcea78688121af73311085a8b9fb4e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jobe/2014/249315.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eff416a955a5b3993386459483baaf528365a88c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56443823 | pes2o/s2orc | v3-fos-license | Dynamical Properties of Omani Crude Oils for Flow Through a Vertical Annulus and a Cylindrical Pipe
Abstract: In this study we investigated the dependence of density and viscosity on temperature for crude oil samples collected from different hydrocarbon fields in the Sultanate of Oman. The measured data were then used to study the dynamical properties of these hydrocarbon samples at different temperatures. We modeled the flow of the different crude oil samples through (a) a vertical annulus and (b) a cylindrical pipe under the influence of gravity and different applied pressures, using the Newtonian approximation for laminar fluid flow to obtain the flow properties of these samples. A computer program was developed in which the fluid flow depends on temperature, with the separation between the laminar and turbulent flow regimes set at Reynolds numbers below and above 2000. The models adopted for the lateral velocity profile, the mass flow rate, and the viscous force on the solid surface are not new, but the calculations presented in this work deal specifically with Omani crude oil samples of different API values; the calculated results shed light on the dynamical properties of these particular samples within the Newtonian approximation. The measured physical properties and the subsequent calculations can be useful for different purposes such as crude oil extraction and its transport through pipelines.
Introduction
Temperature dependence of thermo-physical properties, e.g., densities and viscosities of hydrocarbon fluids plays an important role in many fields of petroleum industries including enhanced oil recovery, oil purification, and transportation of produced fluids (Ahmed, 2000). When producing heavy oils, the high viscosity manifests as one of the impediments to recovering these oils from the oil rigs. The temperature dependence of the transport properties influences the relevant flow dynamics, which in turn affect the flow mechanism (Li et al. 2004). We present here a model study of the temperature dependence of certain dynamic properties such as velocity distribution, mass rate of flow and viscous force on the solid surface through a vertical annulus and a cylinder.
The present investigation primarily aims at understanding the profile of a number of dynamic properties of Omani crude oils through a vertical annulus at various temperatures and applied pressures. We may mention that fluid flow through an annulus has wide applications in various branches of science and technology including nuclear reactor engineering, oil and gas production, aero-engines, turbo-machinery, chemical engineering, steam generators and heat exchangers. Knowledge of the dynamics of energy flows, either in the form of heat transfer or fluid flows through a vertical annulus or normal cylinder under atmospheric and pressurized conditions, can help us understand the re-flooding phenomenon during the emergency cooling in a water-cooled reactor (Shiotsu and Hama, 2000). It is particularly important in the petroleum industry because annular flow of liquid and gas in a pipeline segregates the material of lighter molecular weight by restricting its flow down the center of the pipe while allowing the material of heavier molecular weight to form a thin film and flow along the pipe wall. The lighter mass fluid or gas can also be in the form of a mist or colloidal suspension known as an emulsion. The interface between the flowing materials may not be entirely precise, and can involve gas and liquid mixtures. It has been suggested (Prada and Bannwart, 2001) that the use of a core-annular flow pattern may be made attractive as an artificial lift method in heavy oil wells by inducing the flow pattern by the lateral injection of relatively small quantities of water in order to get a lubricated oil core along the pipe. Concurrently, we have also investigated the flow properties of a number of Omani crude oil samples through a vertical cylinder. The research work on vertical upward core flow is nonetheless scanty; in particular, the theme of research on flow dynamics with Omani crudes is novel and in its infancy (Arafin et al. 2011). However, the works of Shertok (1975), Bai et al. (1996) and Ho and Li (1994) on various samples are worth mentioning here.
The prime interest in annular flow is to study the parameters relevant to the transport of fluids through straight pipes, whether horizontal or vertical. However, presently, interest in annular flow goes beyond straight pipes to include intersections in pipeline networks, such as T-junctions, where phase segregation is likely to exist (Adechy and Issa, 2004).
We have used here the Newtonian fluid approximation relevant only to laminar and linear fluid flows. Certainly crude oils are far from Newtonian fluids. It is relevant to mention here that the Newtonian approximation is strictly limited to homogeneous, isotropic and non-compressible systems where stress-velocity gradient relations are linear. Above all, any non-Newtonian fluid has a non-zero term on the right hand side of the momentum-balance equation. Consideration of a non-Newtonian approximation will certainly necessitate the non-linear form of the Navier-Stokes equation. Thus it is apparent that the present approximation yields qualitative results only.
Thus for a non-laminar and nonlinear case, the situation will certainly be complex, and that will be considered in a future endeavor. In the present investigation, initially, the densities of the samples were measured by using the Anton Paar density meter (DMA 5000) within temperatures from 20°C to 70°C, at steps of 5°C. Subsequently, the kinematic and dynamic viscosities for all the samples were measured within this range of temperatures using a state-of-the-art Cannon-Fenske viscometer. Finally, the density and viscosity data were utilized to calculate the velocity distribution, maximum and average velocities of the bulk samples, and their mass flow rate under various thermal conditions. The calculations were constrained by prefixing the Reynolds number, Re, at 2000, ensuring a laminar flow as suggested by Bird et al. (2002).
The layout of the paper is as follows: In Section 2 we describe the experimental details for measuring the densities and viscosities of the samples, followed by a display of graphical results for the temperature dependence of density and viscosity, and we end with fitting formulas. The basic model-based formulation for the dynamical quantities is presented in Section 3. Details of the calculated results are presented and discussed in Section 4, followed by some concluding remarks in Section 5.
Experimental measurements
In this section, we describe the experimental details of measuring the densities and viscosities at various temperatures for all the five samples.
Density measurements at various temperatures
We briefly introduce here the classification of the crude oil samples that were procured from different oil fields in Oman. The two samples (Oman Export and Receive Line) comprising mixtures of crude oils were collected from the Petroleum Development of Oman (PDO). The other three samples were collected from the Erad field, Mabruk field, and Zal-41 field located in different regions in Oman. We have used the American Petroleum Institute (API) oil gravity number to classify our oil samples: API = 141.5/ρ₀ − 131.5. Here, ρ₀ is the density (g cm⁻³) measured at temperature 15.6°C and at atmospheric pressure. The API numbers usually vary from 5 for very heavy oils to nearly 100 for light condensates (Batzle and Wang, 1992). Clearly, our samples may be classified into three types. The densities of the samples were measured by using the Anton Paar density meter (DMA 5000) as shown in Figure 1. The unit consists of a U-shaped oscillating tube, a system for electronic excitation, frequency counting, and a display. The injected sample volume is kept constant and vibrated. The density is calculated based on a measurement of the sample oscillation period and temperature. The temperature was controlled to ± 0.01°C during the measurement using a built-in thermostat. By measuring the damping of the U-tube's oscillation caused by the viscosity of the filled-in sample, the instrument automatically corrects the viscosity-related errors. The Anton Paar density meter is calibrated to measure density to an accuracy of ± 5 × 10⁻³ kg m⁻³. This device was used to measure the density in the range of temperature varying from 20°C to 70°C, through temperature increments of 5°C.
From the practical point of view, density data of crude oil as a function of temperature provide important information, which is useful for various industrial applications ranging from exploration to refining and transportation. The density data of crude oil were plotted in Figure 2. These data can be adequately represented by the equation:

ρ(T) = ρ_r + m(T − T_r),    (2)

where ρ_r is the density at 20°C, m is the slope of the density versus temperature curve (dρ/dT), T is the temperature (°C), and T_r is the room temperature (20°C). The error in density, Δρ, is 0.005 kg m⁻³. The equations for density as a function of temperature obtained by a fitting procedure are written in the form of Eq. 2 as follows:

Erad sample: ρ(T) = 933.008 − 0.638(T − T_r)
Oman Export sample: ρ(T) = 868.980 − 0.699(T − T_r)
Receive Line sample: ρ(T) = 851.426 − 0.701(T − T_r)
Mabruk sample: ρ(T) = 824.645 − 0.683(T − T_r)
Zal-41 sample: ρ(T) = 817.772 − 0.686(T − T_r)
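A hedged sketch of the fitting procedure: the synthetic (T, ρ) points below are generated from the Erad fit itself plus noise (they are not the measured data), and the API conversion uses the standard definition given above.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.arange(20.0, 75.0, 5.0)                      # deg C, 20-70 in steps of 5
rho = 933.008 - 0.638 * (T - 20.0) + rng.normal(0, 0.05, T.size)

m, rho_r = np.polyfit(T - 20.0, rho, 1)             # slope and rho at 20 deg C
print(f"rho(T) = {rho_r:.3f} + ({m:.3f})(T - 20)")  # ~ 933.008 - 0.638(T - 20)

# API gravity from the density at 15.6 deg C in g cm^-3 (standard formula);
# rho_0 = 0.939 reproduces the Erad API value of 19.19 quoted later.
rho_0 = 0.939
print(f"API = {141.5 / rho_0 - 131.5:.2f}")
```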
Viscosity measurements at various temperatures
Two types of Cannon-Fenske viscometers, type 350 and type 300 with calibration constants 5 × 10⁻⁶ m² s⁻¹ and 2.5 × 10⁻⁶ m² s⁻¹ respectively, were used to measure the viscosity. A Cannon-Fenske viscometer of type 350 was used to measure the viscosity of the Erad sample, the heaviest crude among the five samples. For the rest of the crude oil samples, a Cannon-Fenske viscometer of type 300 was used. The working principle of the viscometer is based on the fact that the average velocity of steady flow in a round tube depends inversely on viscosity. The viscometer determines the kinematic viscosity by timing the fluid flow through a capillary tube as it passes between two etched lines on the glass wall. In our experiment, the viscometer was inserted into a constant-temperature bath whose temperature was controlled to ± 0.1°C using a Haake D8 circulation thermostat. To establish the efflux time, the thermostat was set at a desired temperature and the sample liquid was allowed to fall freely down past an upper mark, and the time taken for the meniscus to pass the lower mark was measured. The kinematic viscosity (ν_k) was determined by multiplying the measured transit time of the fluid column in seconds with the calibration constant. The error of the viscosity measurement is < 0.35%. The error in the measurement of viscosity is too small to be shown in the plots (Figures 3a-b).
From Figures 3a and 3b, it is clear that the kinematic viscosity decreases exponentially (Arrhenius type) as the temperature increases; this correctly reflects the effect of temperature on the crude oil viscosity. From the plot, it is also evident that crude oil samples with lower API values have higher kinematic viscosity. An identical behavior of the viscosities of various Omani crude oils has been reported previously (George et al. 2006).
The equations of kinematic viscosity as a function of temperature were obtained by exponential fits of the data for the various crude oil samples. The dynamic viscosity, μ (Pa s), is determined from the product of kinematic viscosity, ν, and density, ρ (μ = νρ).
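Since the fitted exponential expressions themselves are not reproduced above, here is a hedged sketch of how such an Arrhenius-type fit can be obtained; the sample viscosity values are invented placeholders of roughly the right magnitude for a heavy crude.

```python
import numpy as np
from scipy.optimize import curve_fit

def arrhenius(T_K, a, b):
    """Kinematic viscosity model nu(T) = a * exp(b / T), T in kelvin."""
    return a * np.exp(b / T_K)

T_C = np.arange(20.0, 75.0, 5.0)
nu = 8.5e-10 * np.exp(4000.0 / (T_C + 273.15))      # placeholder "data", m^2/s
nu *= 1 + np.random.default_rng(2).normal(0, 0.01, T_C.size)

(a, b), _ = curve_fit(arrhenius, T_C + 273.15, nu, p0=(1e-9, 4000.0))
print(f"nu(T) = {a:.3e} * exp({b:.1f}/T)")

rho = 933.008 - 0.638 * (T_C - 20.0)                # density fit (Sec. 2.2)
mu = arrhenius(T_C + 273.15, a, b) * rho            # dynamic viscosity, Pa s
```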
It is noted that the viscosity of a fluid is highly temperature-dependent. The viscosity of the crude oil decreases exponentially as the temperature increases from 20°C to 70°C. The present results are consistent with the correlation approach (Naseri et al. 2005) for the prediction of crude oil viscosity.
Formulation
The present study applies only to steady-state flow relevant to Newtonian fluids. By steady-state flow it is understood that the flow conditions at each point in the stream do not change with time. For this approximation, the momentum balance equation is (Bird et al. 2002):

p_in − p_out + Σ F_j = 0,

where p_in and p_out are the momentum entering into and going out of the system respectively. ΣF_j is the sum of the forces acting on the system. As shown in Figure 4, we focus our attention on a region of length L, sufficiently far from the ends of the wall so that the entrance and exit disturbances are not included in L. This ensures that in this region the velocity component V_z does not depend on z.
Flow through a vertical annulus
The flow of fluids in an annulus (Figure 4) is encountered frequently in physics, chemistry, biology, and engineering. The laminar flow of fluid in an annulus may be analyzed by means of the momentum balance described in the previous section. By considering an incompressible fluid flowing in steady state in the annular region between two coaxial circular cylinders of radii κR and R (Figure 4), the equation for the velocity profile of Bird et al. (2002) can be written with slight rearrangement in the form:

v_z = ((P₀ − P_L)R²/4μL) [1 − (r/R)² + ((1 − κ²)/ln(1/κ)) ln(r/R)],    (6)

where κ is the ratio of the inner to the outer radius (κ = r_in/R_out), P₀ is the atmospheric pressure, and r is the distance measured from the outer surface of the inner cylinder to the inner surface of the outer cylinder. The quantity P_L represents the combined effect of the pressure p_L and the gravitational term (ρgL). μ and ρ are the viscosity and density of the fluid respectively. With the help of Figure 4 it can be shown that the term (P₀ − P_L) is the net pressure, ΔP, which can be replaced by ΔP = Δp = p_a − ρgL. p_a is the applied pressure as shown in Figure 4. Since ρ is a function of temperature, the net pressure, Δp, is a function of temperature as well. The fluid will remain static if the net pressure is equal to zero, which means that the applied pressure p_a at the bottom of the annulus will exactly balance ρgL. Since our prime objective is to determine the temperature dependence of flow properties, we express all the relevant equations as functions of temperature through the temperature dependence of μ and ρ. Therefore Eq. 6 can be written in terms of the net pressure as:

v_z(T) = (Δp R²/4μ(T)L) [1 − (r/R)² + ((1 − κ²)/ln(1/κ)) ln(r/R)].    (7)

The second term in the square bracket may be termed the pipe characteristic function, which at any point along r depends solely on the pipe dimension.
Eq. 7 works only under the conditions that (i) the fluid is of constant density ρ (incompressible), (ii) the flow is laminar, (iii) the annulus length (L) is very large and (iv) there are no end effects. In fact, at the tube entrance and exit the flow will not necessarily be parallel everywhere, so the tube surface effect is ignored (Bird et al. 2002).
Since we need to know the maximum velocity, the average velocity and the shear stress at a surface, we have to know the range of the applied pressure for the flow to be laminar at a given radius, length, viscosity and density. In this study we have assumed the flow to be laminar by restricting the Reynolds number, Re, to 2000 and constraining Re by the equation (Bird et al. 2002):

Re = 2R(1 − κ)⟨v_z⟩ρ/μ,    (8)

where ⟨v_z⟩ is the average velocity of the velocity profile v_z, given by:

⟨v_z⟩ = (Δp R²/8μL) B.    (9)

B may be termed the annular constant. It is obtained from the expression:

B = (1 − κ⁴)/(1 − κ²) − (1 − κ²)/ln(1/κ).    (10)

Combining equations (8) and (9) it can be shown that:

Δp(T) = (4 Re L/(R³(1 − κ)B)) μ²(T)/ρ(T).    (11)

Because μ²(T) decreases more rapidly with increasing temperature than ρ does, Δp will therefore decrease with increasing temperature. Substituting Eq. 11 in Eq. 7 and Eq. 9, the expressions for the velocity profile and the average velocity reduce respectively to:

v_z(r, T) = (Re μ(T)/(R(1 − κ)ρ(T)B)) [1 − (r/R)² + ((1 − κ²)/ln(1/κ)) ln(r/R)],    (12)

⟨v_z⟩(T) = Re μ(T)/(2R(1 − κ)ρ(T)).    (13)

Both will decrease with increasing temperature because the viscosity decreases more rapidly with increasing temperature than the density. Although Re depends on temperature through ρ and μ, as suggested by Eq. 8, we have kept it fixed at 2000 for all temperatures. This forces the average velocity to adjust according to Eq. 8. By doing so, no generality is lost with regard to the condition for laminar flow. This strategy makes the computation simple and less cumbersome. The mass rate of flow, w, is given by (Bird et al. 2002):

w = ρ⟨v_z⟩πR²(1 − κ²).    (14)

Substituting Eq. 13 in Eq. 14 one gets:

w(T) = πR(1 + κ) Re μ(T)/2.    (15)

Again, the mass rate of flow w will decrease with increasing temperature because of the presence of μ in the numerator of Eq. 15. By keeping Re fixed at 2000, the net pressure Δp, ⟨v_z⟩, v_z and w have been calculated from Eq. 11, Eq. 13, Eq. 12 and Eq. 15 respectively as functions of temperature, T. The viscous force exerted by the fluid on the walls of the annulus, F_z, is given by (Bird et al. 2002):

F_z = πR²(1 − κ²)Δp.    (16)

Since Δp decreases sharply with temperature (Eq. 11) at a fixed Re, F_z is expected to decrease sharply with increasing temperature as well.
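Putting Eqs. 11-16 together, a minimal sketch of the temperature sweep follows; the density fit is from Section 2.2, while the viscosity fit constants are assumed for illustration.

```python
import numpy as np

def annulus_flow(T, rho_fit, nu_fit, R=0.5, kappa=0.2, L=50.0, Re=2000.0, g=9.81):
    """Laminar annular-flow quantities at fixed Re (Eqs. 11-16 above).

    rho_fit(T) -> kg m^-3 and nu_fit(T) -> m^2 s^-1 are the measured fits;
    geometry matches the paper (R = 0.5 m, kappa = 0.2, L = 50 m).
    """
    rho = rho_fit(T)
    mu = rho * nu_fit(T)                                        # mu = rho * nu
    B = (1 - kappa**4) / (1 - kappa**2) - (1 - kappa**2) / np.log(1 / kappa)
    dp = 4 * Re * L * mu**2 / (R**3 * (1 - kappa) * B * rho)    # net pressure
    v_avg = Re * mu / (2 * R * (1 - kappa) * rho)               # average velocity
    w = np.pi * R * (1 + kappa) * Re * mu / 2                   # mass flow rate
    Fz = np.pi * R**2 * (1 - kappa**2) * dp                     # viscous force
    p_applied = dp + rho * g * L                                # applied pressure
    return dp, v_avg, w, Fz, p_applied

# Example with the Erad density fit and an assumed Arrhenius viscosity fit.
rho_erad = lambda T: 933.008 - 0.638 * (T - 20.0)
nu_erad = lambda T: 8.5e-10 * np.exp(4000.0 / (T + 273.15))     # assumed
print(annulus_flow(40.0, rho_erad, nu_erad))
```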
Flow through a vertical cylinder
The equation of the velocity profile v_z obtained for a laminar flow through a cylindrical tube (Bird et al. 2002) is expressed as:

v_z = ((P₀ − P_L)R²/4μL) [1 − (r/R)²],    (17)

where R is the radius of the cylinder. The variable r is the distance measured from the center to the inner wall of the cylinder. The temperature dependence of v_z for a cylinder is:

v_z(r, T) = (Δp R²/4μ(T)L) [1 − (r/R)²].    (18)

The Reynolds number, Re, for the flow in a cylinder is given (Bird et al. 2002) by the equation:

Re = 2R⟨v_z⟩ρ/μ.    (19)

As in the case of annular flow, we have kept the Reynolds number, Re, fixed at 2000 for the study of laminar flow in a cylinder as well. Following the same derivation procedure as in an annulus, one can show, in terms of Re, that the net pressure Δp, average velocity ⟨v_z⟩ and velocity profile v_z for a cylinder can be written as:

Δp(T) = 4 Re L μ²(T)/(R³ρ(T)),

⟨v_z⟩(T) = Re μ(T)/(2Rρ(T)),

v_z(r, T) = (Re μ(T)/(Rρ(T))) [1 − (r/R)²].    (20)

And, finally, the mass rate of flow is

w(T) = πΔp R⁴ρ(T)/(8μ(T)L).

The viscous force exerted by the fluid on the wall of the cylinder, F_z, is given by (Bird et al. 2002):

F_z = πR²Δp.

As in the case of an annulus, F_z will decrease with increasing temperature in a similar fashion.
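The analogous sketch for the cylindrical pipe, using the relations above with the paper's geometry (R = 0.1 m, L = 30 m); the fit functions follow the same conventions as in the annulus sketch, and the viscosity fit is again an assumed placeholder.

```python
import numpy as np

def cylinder_flow(T, rho_fit, nu_fit, R=0.1, L=30.0, Re=2000.0):
    """Laminar Hagen-Poiseuille quantities at fixed Re for the vertical pipe."""
    rho = rho_fit(T)
    mu = rho * nu_fit(T)                      # dynamic viscosity, Pa s
    dp = 4 * Re * L * mu**2 / (R**3 * rho)    # net pressure
    v_avg = Re * mu / (2 * R * rho)           # average velocity
    w = np.pi * R * Re * mu / 2               # mass flow rate (= rho v pi R^2)
    Fz = np.pi * R**2 * dp                    # viscous drag on the wall
    return dp, v_avg, w, Fz

rho_erad = lambda T: 933.008 - 0.638 * (T - 20.0)
nu_erad = lambda T: 8.5e-10 * np.exp(4000.0 / (T + 273.15))   # assumed fit
for T in (20.0, 40.0, 70.0):
    print(T, cylinder_flow(T, rho_erad, nu_erad))
```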
Vertical annulus
The calculated results on dynamical properties of the two extreme samples, namely the Zal-41 [lightest] and Erad [heaviest] samples, through an annulus are presented and discussed in this section. The outer radius (R_out) and inner radius (R_in) of the annulus are 0.5 m and 0.1 m respectively (r_in/R_out = κ = 0.2), and the length is L = 50 m. Since the velocities [Eqs. 7 and 9] within the bulk of the fluid depend on the viscosity μ(T), which decreases as the temperature increases, it seems that the profile of v_z(T) as shown in Figure 6 is in contradiction with these equations. But the constraint infused by using a fixed Reynolds number Re = 2000 in Eq. 12 and Eq. 13 turns back the velocity profile; this is now reflected by the joint effect of μ(T) and ρ(T).

The profile of the curves in Figure 7 can be explained in terms of Eq. 11, which clearly demonstrates that for a fixed Reynolds number Re the net pressure Δp(T) will decay because of the presence of the factor μ²(T)/ρ(T), which decreases as T increases, keeping also in mind that μ(T) decreases more rapidly than ρ(T). The rapid increase of viscosity values at the low end of the temperature range is also truly reflected. Concurrently, it is also relevant to mention that the results ideally reflect the varying viscous nature of the lightest (Zal-41) and heaviest (Erad) samples through a rapid variation of the profile of Δp(T) at this limit.

The profiles of Δp(T) and w(T) display some similar features; this is primarily because of the presence of the viscosity in the numerators of Eq. 11 and Eq. 15.
Vertical cylinder
The calculated results on dynamical properties of two extreme samples, namely the Zal-41 [lightest] and Erad [heaviest] samples, through cylindrical tubes are presented and discussed in this section. The length and radius of the cylinder are 30 m and 0.1 m respectively. Figures 10(a-b) show plots of the net pressure values as a function of temperature for upward laminar flow (Re = 2000) of the Zal-41 and Erad crude oil samples through the cylindrical pipe. The net pressure is quite high at low temperatures and decreases sharply up to about 40°C, after which it slowly reduces to zero at higher temperatures. The heavy crude oil (Erad sample), which has a low API value (19.19), shows a sharp decrease in net pressure in comparison with the light crude oil (Zal-41 sample) having a high API value of 40.89. The shape of the curves can be predicted from Eq. (11), which involves the viscosity term μ² in its numerator, indicating that this term decreases more rapidly than the density ρ in the denominator. The zero net pressure means that the applied pressure p_a should equal the pressure (ρgL) due to the column of liquid in the cylinder. Since ρ decreases with increasing temperature, the applied pressure should decrease as well in order to balance the liquid-column pressure for no flow through the pipe. In general, p_a will decrease with increasing temperature for a fixed Reynolds number. Owing to the presence of the net pressure Δp(T) in the viscous force F_z, Figure 13 follows the profile of Δp(T) as shown in Figure 11.
Conclusions
We have investigated a few dynamical properties of some Omani crude oils through an annulus and a cylindrical pipe of fixed dimensions, in terms of the measured density and viscosity of these samples at various temperatures. The underlying methodology involves a simple model of fluid dynamics based on the Newtonian approximation applied to laminar flow.
Figure 4. Flow through an annulus. The various forces acting on the top and bottom surfaces of the annulus.
Figure 5. Flow in a vertical cylinder of length L and radius R. The various forces acting on the top and bottom faces of the cylinder are shown by arrows.
Figure 6. The velocity profiles for the flow of (6a) light (Zal-41 sample) and (6b) heavy (Erad sample) crude oils through an annulus at various temperatures.
Figure 9. The viscous force, F_z, on the walls of the annulus as a function of temperature at Reynolds number 2000 for: (9a) light (Zal-41) and (9b) heavy (Erad) crude oils. Owing to the presence of the net pressure Δp(T) in the viscous force F_z, Figure 9 follows the profile of Δp(T).
Figure 10. The net pressure as a function of temperature for upward laminar flow (Re = 2000) of: (10a) a Zal-41 sample and (10b) an Erad sample through a cylindrical pipe.
Figure 11. The net pressure on the fluid samples in a cylinder as a function of temperature at various Reynolds number Re values for: (11a) a Zal-41 sample and (11b) an Erad sample.
Figure 12. The mass rate of flow w through a cylindrical pipe as a function of temperature at various Reynolds number Re values for: (12a) a light Zal-41 sample and (12b) a heavy Erad sample.
Figure 13. The viscous force F_z on the walls of the cylinder as a function of temperature at Reynolds number 2000 for: (13a) light (Zal-41) and (13b) heavy (Erad) crude oils. | 2019-04-08T13:13:18.028Z | 2011-12-01T00:00:00.000 | {
"year": 2011,
"sha1": "87199fa5370d772c1d282159afdc0870fe515e83",
"oa_license": "CCBY",
"oa_url": "https://journals.squ.edu.om/index.php/squjs/article/download/381/392",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "87199fa5370d772c1d282159afdc0870fe515e83",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
260091282 | pes2o/s2orc | v3-fos-license | Effect of Doping on the phase stability and Superconductivity in LaH10
We present a computational investigation into the effects of chemical doping with 15 different elements on phase stability and superconductivity in the LaH10 structure. Most doping elements were found to induce softening of phonon modes, enhancing electron-phonon coupling and improving critical superconducting temperature while weakening dynamical stability. Unlike these dopants, Ce was found to extend the range of dynamical stability for LaH10 by eliminating the van Hove singularity near the Fermi level. The doped compound, La0.75Ce0.25H10, maintains high-temperature superconductivity. We also demonstrate that different Ce doping configurations in the LaH10 structure have a minimal effect on energetic stability and electron-phonon coupling strength. Our findings suggest that Ce is a promising dopant to stabilize LaH10 at lower pressures while preserving its high-temperature superconductivity.
The search for binary hydrides [20,21] has revealed diverse structures and chemistry in these compounds, which provide a broad platform to optimize energetic stability and superconductivity.
Since the superconductivity in hydrides is mainly due to H, doping on the metal site is likely to maintain its superconductivity. Recently, high-throughput screening in MgB2-like systems showed that doping on the metal site can effectively improve the stability and maintain the superconductivity [29]. Metals from the same family share similar characteristics, allowing them to be combined into disordered solid mixtures. This property allows us to use binary compounds as foundational blueprints for crafting ternary alloy superhydrides from the original crystal structure [30][31][32][33]. LaH10, with the highest Tc among experimentally synthesized superconductors, is a potential parent structure for doping to manipulate its HTS and pressure-dependent stability.
In this paper, based on first-principles calculations, we investigate the effects of chemical doping on phase stability and superconductivity in the LaH10 structure. A total of 15 elements are selected as dopants: K, Rb, Cs, Ca, Sr, Ba, Sc, Y, Ti, Zr, Hf, In, Tl, Ce, and Lu. The first thirteen elements are more likely to donate electrons to H atoms to enhance the stability of the H cage framework, and the strong correlation effect caused by d electrons is not significant [21]. Ce and Lu have also been theoretically predicted to have good superconducting potential [34,35]. We will use the La0.75M0.25H10 model to examine their dynamical stability and superconductivity under high pressure.
Stability calculations
The La0.75M0.25H10 structure was constructed by replacing one La atom with an M metal (M = K, Rb, Cs, Ca, Sr, Ba, Sc, Y, Ti, Zr, Hf, Ce, Lu, In, Tl) in the conventional cell (four formula units (f.u.)) shown in Fig. 1(a). This results in a symmetry reduction to Pm-3m. Structure relaxations and electronic-property calculations were carried out using the Perdew-Burke-Ernzerhof (PBE) [36] functional in the framework of the projector augmented wave (PAW) method [37] as implemented in the VASP code [38]. The configurations of valence electrons used in the PAW method are shown for these elements in Table S1. A plane-wave basis set with an energy cutoff of 500 eV and uniform Γ-centered k-point grids with a density of 2π × 0.025 Å⁻¹ were employed in the self-consistent calculations and structure relaxations. The structures were optimized until the maximum energy and force were less than 10 eV and 1 meV/Å, respectively.
To investigate the dynamical stability, we used the finite displacement method by constructing a supercell with ~352 atoms and uniform Γ-centered k-point grids with a density of 2π × 0.025 Å⁻¹. The second-order force constants were extracted and the harmonic phonon dispersion relations were calculated with the Phonopy code [39]. We employed the quasi-harmonic approximation (QHA) to explore finite-temperature thermodynamics.
Electron-phonon coupling calculations
Harmonic phonon dispersion and electron-phonon coupling (EPC) were calculated within density functional perturbation theory (DFPT) [40], as implemented in the QUANTUM ESPRESSO package [41,42]. Ultrasoft pseudopotentials [43] with the PBE functional were used with a kinetic energy cutoff of 80 Ry and a charge density cutoff of 800 Ry. The valence electron configurations used in the USPPs were the same as in the PAW potentials, so the calculations performed with QE and VASP were consistent. Self-consistent electron density and EPC were calculated by employing 8×8×8 k-point meshes and 4×4×4 q-point meshes. A dense 16×16×16 k-point mesh was used for evaluating the electron-phonon interaction matrix.
The main input to the Eliashberg equations is the Eliashberg spectral function α²F(ω), defined as [44,45]

α²F(ω) = (1/2πN(E_F)) Σ_{qν} (γ_{qν}/ω_{qν}) δ(ω − ω_{qν}),    (1)

where N(E_F) is the density of states at the Fermi level E_F, and ω_{qν} represents the phonon frequency of the mode ν with wave vector q. The phonon linewidth γ_{qν}, which is the imaginary part of the phonon self-energy, is defined as

γ_{qν} = 2πω_{qν} Σ_{nm} ∫_{BZ} (dk/Ω_BZ) |g^ν_{kn,k+qm}|² δ(ε_{kn} − E_F) δ(ε_{k+qm} − E_F),    (2)

where g^ν_{kn,k+qm} is the EPC matrix element, and Ω_BZ is the volume of the Brillouin zone (B.Z.). The EPC constant λ is calculated by

λ = 2 ∫₀^∞ dω α²F(ω)/ω = Σ_{qν} λ_{qν}.    (3)

We chose a Gaussian smearing width of 0.02-0.03 Ry based on the convergence test in Supplementary Note 1. Tc was first estimated using the McMillan-Allen-Dynes (MAD) formula [44,45] with Coulomb pseudopotential μ* = 0.13 [46,47].
Tc = (ω_log/1.2) f₁f₂ exp[−1.04(1 + λ)/(λ − μ*(1 + 0.62λ))],    (4)

where f₁ and f₂ are two separate correction factors [44], which are functions of λ, ω_log, ω̄₂, and μ*. The logarithmic average frequency is computed as:

ω_log = exp[(2/λ) ∫₀^∞ (dω/ω) α²F(ω) ln ω].    (5)
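As a numerical illustration of Eqs. (3)-(5), the sketch below evaluates λ, ω_log, and the plain MAD Tc from a tabulated α²F(ω); the strong-coupling correction factors f₁, f₂ are omitted for brevity, and the toy spectral function is invented, not computed data.

```python
import numpy as np

def allen_dynes_tc(omega, a2f, mu_star=0.13):
    """Estimate lambda, omega_log and Tc from a tabulated alpha^2F(omega).

    omega is a positive, uniform grid in cm^-1; f1 and f2 are omitted,
    so this is the plain McMillan-Allen-Dynes estimate.
    """
    lam = 2.0 * np.trapz(a2f / omega, omega)
    w_log = np.exp((2.0 / lam) * np.trapz(a2f * np.log(omega) / omega, omega))
    w_log_K = w_log * 1.4388                      # cm^-1 -> K (hc/k_B)
    tc = (w_log_K / 1.2) * np.exp(-1.04 * (1 + lam)
                                  / (lam - mu_star * (1 + 0.62 * lam)))
    return lam, w_log, tc

# Toy spectral function peaked near 1200 cm^-1 (illustrative only).
w = np.linspace(10.0, 2500.0, 500)
a2f = 2.0 * np.exp(-((w - 1200.0) / 300.0) ** 2)
print(allen_dynes_tc(w, a2f))
```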
Migdal-Eliashberg approach
The thermodynamic properties of the superconducting ternary La0.75M0.25H10 hydrides were also estimated using the Migdal-Eliashberg (ME) approach due to the strong electron-phonon coupling constants observed in these systems. The isotropic Eliashberg equations defined on the imaginary-frequency axis, which incorporate the superconducting order parameter function φ_n = φ(iω_n) and the electron mass renormalization function Z_n = Z(iω_n), take the following form [48,49]:

φ_n = πT Σ_m [λ(n − m) − μ*] φ_m/√(ω_m²Z_m² + φ_m²),

Z_n = 1 + (πT/ω_n) Σ_m λ(n − m) ω_m Z_m/√(ω_m²Z_m² + φ_m²),

where β = 1/(k_B T), and the electron-phonon interaction pairing kernel is given by

λ(n − m) = 2 ∫₀^∞ dΩ Ω α²F(Ω)/(Ω² + (ω_n − ω_m)²).

Hence, the superconducting order parameter was defined by the ratio Δ_n = φ_n/Z_n, and the superconducting transition temperature was estimated from the relation Δ_{n=1}(μ*, T = Tc) = 0. We used the same Coulomb pseudopotential as the one used in the MAD calculations, i.e., μ* = 0.13. The Eliashberg equations were solved iteratively in a self-consistent way with a maximal error of 10 between two successive iterations. The convergence was controlled by a sufficiently high number of Matsubara frequencies: ω_n = (π/β)(2n − 1), where n = 0, ±1, ±2, . . . , ±M and M = 1100 [50][51][52].
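A hedged sketch of the self-consistent iteration follows (a bare-bones imaginary-axis solver, not the mixed-representation code used for the published numbers); the Matsubara cutoff is truncated for speed and the tolerance is an assumption.

```python
import numpy as np

def eliashberg_delta(omega, a2f, T_K, mu_star=0.13, M=200, n_iter=500, tol=1e-10):
    """Iterate the isotropic imaginary-axis Eliashberg equations.

    omega (meV) and a2f tabulate alpha^2F; returns Delta_n = phi_n / Z_n
    on the Matsubara frequencies w_n = pi*T*(2n - 1). M = 200 here for
    speed (the paper uses 1100).
    """
    kB = 0.086173                        # meV / K
    T = kB * T_K
    n = np.arange(-M + 1, M + 1)
    wn = np.pi * T * (2 * n - 1)

    # lambda(n - m) depends only on n - m; precompute distinct differences.
    k = np.arange(-(2 * M - 1), 2 * M)
    lam_k = np.array([np.trapz(2 * omega * a2f
                               / (omega**2 + (2 * np.pi * T * kk)**2), omega)
                      for kk in k])
    lam = lam_k[(n[:, None] - n[None, :]) + (2 * M - 1)]

    phi = np.ones_like(wn)
    Z = np.ones_like(wn)
    for _ in range(n_iter):
        den = np.sqrt(wn**2 * Z**2 + phi**2)
        phi_new = np.pi * T * (lam - mu_star) @ (phi / den)
        Z_new = 1.0 + (np.pi * T / wn) * (lam @ (wn * Z / den))
        converged = np.max(np.abs(phi_new - phi)) < tol
        phi, Z = phi_new, Z_new
        if converged:
            break
    return phi / Z   # Tc: temperature where Delta at the lowest w_n -> 0
```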
Ⅲ. RESULTS AND DISCUSSION

Phase stability
We first evaluate the dynamical stability of the ternary La0.75M0.25H10 structures. Previous work has shown that, once anharmonic and quantum nuclear effects (QNE) are included, LaH10 can be stabilized at pressures as low as ~130 GPa [53,57], similar to the experimental observation at ~140 GPa [58]. Therefore, the pressure stability range of the present La0.75Ce0.25H10 is expected to expand further by including anharmonic and QNE effects.
Given the harmonic dynamical stability, we evaluate the thermodynamic stability of La0.75Ce0.25H10. We calculated its enthalpy on the ternary phase diagram at 200 GPa, as shown in Fig. S2(a). The results show that the energy of the La0.75Ce0.25H10 structure is only 1 meV/atom higher than that of the convex hull. In addition, we also considered finite-temperature thermodynamics (see Supplementary Note 2) and found that La0.75Ce0.25H10 (Pm-3m) has promising thermodynamic stability up to 300 K.
Electron-phonon coupling and superconductivity
We calculate the EPC constant λ using the DFPT method and Eliashberg theory for the dynamically stable structures at 400, 250, and 200GPa. We first compute the superconducting transition temperature ( ) by the MAD formula, presented in Table 1. Due to the large λ (>2) in these compounds, we also employ Eliashberg formalism to investigate the impact of EPC on the and superconducting energy gap. The temperature-dependent behavior of the superconducting energy gap ∆( ) is computed by solving the ME equations in the mixed representation (defined simultaneously on the imaginary and real axis) [59,49]. The results are presented in Fig. 2 In Fig. 3(a) To understand the origin of the increased λ and by doping, we use La0.75Hf0.25H10 as an example and compare its phonon spectra to the LaH10 in Fig. 4. We find the substitution of La with Hf induces significant softening of high-frequency phonon modes. As shown in Fig. 4(a), with the Hf substitution, a few phonon modes appear in the low-frequency range of 360-900 cm -1 , while no phonon modes exist in the same area for LaH10. The H atoms dominate these phonon modes (see the projected phonon DOS in Fig. S7). Comparing the Eliashberg spectral function between LaH10 and La0.75Hf0.25H10 in Fig. 4 (b) and (c), one can see the phonon softening at the range of 360-900 cm -1 significantly promotes the EPC in this region. Similar enhancement of phonon linewidth in 360-900 cm -1 can be found by comparing Fig. 4 (d) and (e). If we integrate Eq. (3) to = 900cm , we find the contribution to from frequencies less than 900cm is 0.18 and 1.01 for LaH10 and La0.75Hf0.25H10, respectively. Therefore, the phonon softening in La0.75Hf0.25H10 significantly enhances the EPC. This mechanism is also seen in other superconducting systems [60][61][62][63]. The analysis of La0.75Hf0.25H10 illustrates that substituting La with Hf changes the bonding with H atoms and softens vibrational modes. Such phonon softening enhances the EPC and increases the λ and , simultaneously. We also analyzed the EPC for other dopants and found similar effects, as shown in Fig. S8 and Table S2, i.e., the substitution of La leads to phonon softening, which contributes to strong EPC in the middle-and low-frequency regions.
The effects of Ce
Ce is the only substituent that increases the pressure range of LaH10 stability while maintaining the high-temperature superconductivity, with a slight weakening of the EPC in the harmonic approximation. To understand the effect of Ce substitution on the dynamical stability, we compare the phonon spectra of LaH10 and La0.75Ce0.25H10 at 200 GPa in Fig. 5(a) and (b). In LaH10, the imaginary-frequency modes on the Γ-X, Γ-M, and Γ-R paths are dominated by the vibrations of hydrogen atoms. When Ce is introduced, these modes become stiffer, and the imaginary frequencies disappear. In Fig. 5(c), the calculation is repeated with an ultrasoft pseudopotential that excludes the Ce-4f electrons; this leads to charge transfer and the reappearance of imaginary modes, as discussed in Supplementary Note 3. These results suggest a strong effect of the Ce-4f electrons in stabilizing LaH10 at low pressures. So far, the substitutional effect of Ce was only considered with the Pm-3m La0.75Ce0.25H10 structure.
We further examine the stability of other La0.75Ce0.25H10 polymorphs at 200 GPa. The phonon spectra in Fig. S9 suggest that five phases are dynamically stable, as noted in Fig. 6. To explore the possible superconductivity in these structures, we employ a recently developed frozen-phonon method to compute the zone-center EPC strength for the stable structures. This efficient method can identify strong-EPC candidates in hydrides because the zone-center EPC strongly correlates with the full-Brillouin-zone EPC in these materials [64]. Using this method, we compute the zone-center EPC for the 5 dynamically stable polymorphs. As shown in Fig. 6, the different structures show a zone-center EPC similar to that of the Pm-3m phase. Therefore, the Ce occupation in La0.75Ce0.25H10 does not affect its energetic stability or EPC. To confirm the zone-center EPC calculations, we also performed DFPT calculations of the full-Brillouin-zone EPC for the P4/mmm phase (see details in Fig. S10). We obtained λ of P4/mmm as 2.64, slightly smaller than that of the Pm-3m phase (λ = 3.08). This is consistent with the zone-center EPC calculations. The Tc was estimated as 215 K (with the ME approach) at 200 GPa, which is slightly smaller than that of the Pm-3m phase (246 K). Since these polymorphs have similar energies, they may form a random solid solution in experimental synthesis. Nevertheless, such a mixture should maintain the HTS because of the similar electron-phonon coupling strength in these phases.
Ⅳ. CONCLUSIONS
In summary, based on first-principles calculations, we have investigated the effects of chemical doping on phase stability and superconductivity in the LaH10 structure.
Supplementary Note 1 | Convergence test of electron-phonon coupling calculations
We test the convergence of λ with different smearing coefficients using the method suggested in Refs. [1][2][3]. As shown in Fig. S1, considering the same precision for all structures (La0.75M0.25H10 with lattice constant ~4.5 Å), we believe that a broadening of ~0.02-0.03 Ry is a suitable selection.

Supplementary Note 2 | Finite-temperature thermodynamics

We analyze the finite-temperature thermodynamic effect for La-Ce-H. The convex hull in Fig. S2(a) indicates that the phases competing with La0.75Ce0.25H10 are LaH10 and La0.5Ce0.5H10. Therefore, we consider the finite-temperature thermodynamics by computing the Gibbs free energy change of the reaction 0.5LaH10 + 0.5La0.5Ce0.5H10 → La0.75Ce0.25H10. We employed the quasi-harmonic approximation (QHA) to compute the Gibbs free energy of these phases. With static calculations, La0.75Ce0.25H10 (Pm-3m) is above the convex hull by 1 meV/atom. From Fig. S2(b), we find that the free energy difference does not change significantly with temperature. Therefore, the vibrational effect does not contribute strongly to the stability of La0.75Ce0.25H10.
Supplementary Note 3 | Effect of 4f electron
To study the effect of the 4f electrons on the stability of La0.75Ce0.25H10, we compute the phonon spectrum with and without the Ce-4f electrons. Figure S3 shows that inclusion of Ce's 4f electrons stabilizes the imaginary-frequency phonon modes.
To understand this effect, we analyzed the spatial difference of the charge density distribution between calculations with and without the 4f electrons in Fig. S4(a). With the presence of the Ce-4f electrons, the charge near the H atoms forming the Ce cage is significantly reduced (blue region). The Bader charge analysis also shows a charge decrease of ~0.005-0.015 per H atom in the Ce cage. This can be attributed to the significant increase of charge near the Ce atoms when the Ce-4f electrons are included. The change of the charge distribution modifies the partial density of states. As shown in Fig. S4(b), with the Ce-4f electrons the states of H-1s at the Fermi level are reduced. All this leads to the stabilization of the phonon spectrum. | 2023-07-24T04:01:44.116Z | 2023-07-21T00:00:00.000 | {
To understand this effect, we analyzed the spatial difference of charge density distribution between calculations with and without 4f electrons in Fig. S4(a). With the presences of Ce-4f electrons, the charge near H atoms forming the Ce cage are significantly reduced (blue region). The Bader charge analysis also shows a charge decrease of ~0.005-0.015 per H atom in the Ce cage. This can be due to the significant increase of charge near Ce atoms with Ce-4f electron inclusion. The change of charge distribution modifies the partial density of states. As shown in Fig. S4(b), with the Ce-4f electron the states of H-1s at the Fermi level are reduced. All these leads to the stabilization of the phonon spectrum. | 2023-07-24T04:01:44.116Z | 2023-07-21T00:00:00.000 | {
"year": 2023,
"sha1": "29dc455153bb49c8f202a289509e29c39cd49dd9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bc6f820053985bfb6f8ebf31d2e2ec4cfd888616",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247986525 | pes2o/s2orc | v3-fos-license | Research Article On Compact Trans-Sasakian Manifolds
We study 3-dimensional compact and simply connected trans-Sasakian manifolds and find necessary and sufficient conditions under which these manifolds are homothetic to Sasakian manifolds. The first two results deal with finding necessary and sufficient conditions on a compact and simply connected trans-Sasakian manifold to be homothetic to an Einstein Sasakian manifold, and the third result deals with finding a necessary and sufficient condition on a compact and simply connected trans-Sasakian manifold to be homothetic to a Sasakian manifold.
Introduction
It is well known that for an almost contact metric manifold (M, F, ζ, η, g) (cf. [1]), the product M̄ = M × R has an almost complex structure J, which with the product metric ḡ makes (M̄, ḡ) an almost Hermitian manifold. The properties of the almost Hermitian manifold (M̄, J, ḡ) control the properties of the almost contact metric manifold (M, F, ζ, η, g) and provide several structures on M such as a Sasakian structure and a quasi-Sasakian structure (cf. [1][2][3]). There are sixteen known types of structures on (M̄, J, ḡ) (cf. [4]), and using the structure in the class W₄ on (M̄, J, ḡ), a structure (F, ζ, η, g, α, β) was introduced on M, which is called a trans-Sasakian structure (cf. [5]); it generalizes the Sasakian structure, Kenmotsu structure, and cosymplectic structure on a contact metric manifold (cf. [2,3]), where α and β are real functions defined on M.
Recall that a trans-Sasakian manifold (M, F, ζ, η, g, α, β) is called a trans-Sasakian manifold of type (α, β), and trans-Sasakian manifolds of type (0, 0), (α, 0), and (0, β) are called cosymplectic, α-Sasakian, and β-Kenmotsu manifolds, respectively. It is on account of a result proved in [6] that a trans-Sasakian manifold of dimension five or greater reduces to a cosymplectic manifold, an α-Sasakian manifold, or a β-Kenmotsu manifold, so there is an emphasis on studying three-dimensional trans-Sasakian manifolds.
Among other questions, finding conditions under which a compact 3-dimensional trans-Sasakian manifold (M, F, ζ, η, g) is homothetic to a Sasakian manifold is of prime importance. The geometry of a 3-dimensional trans-Sasakian manifold is also important owing to Thurston's conjecture (cf. [7]), and fetching conditions on a 3-dimensional trans-Sasakian manifold (M, F, ζ, η, g) to match it among Thurston's eight geometries becomes more interesting. It is worth noting that in Thurston's eight geometries, the first place is occupied by the spherical geometry S³.
Interesting work on 3-dimensional trans-Sasakian manifolds is found in [14,15], where the authors have considered other aspects of Thurston's eight geometries. In [10], it is asked whether the function β on a 3-dimensional compact trans-Sasakian manifold (M, F, ζ, η, g, α, β) satisfying grad β = ζ(β)ζ necessitates the trans-Sasakian manifold being homothetic to a Sasakian manifold. In [15], it is shown that this question has a negative answer.
Einstein Sasakian manifolds are very important due to their geometric significance (cf. [16]). In this paper, in our first two results, we find necessary and sufficient conditions on a compact simply connected 3-dimensional trans-Sasakian manifold (M, F, ζ, η, g, α, β) to be homothetic to an Einstein Sasakian manifold, and in the third, we find a necessary and sufficient condition on a compact simply connected 3-dimensional trans-Sasakian manifold (M, F, ζ, η, g, α, β) to be homothetic to a Sasakian manifold.
In the first result, we consider a compact and simply connected trans-Sasakian manifold (M, F, ζ, η, g, α, β) of positive constant scalar curvature τ; the function β satisfying the Fischer-Marsden equation shows that the functions α and β are related to τ by the inequality β(α² − β² − τ/4) ≥ 0, and the Ricci operator Q satisfying a Codazzi-type equation with respect to the vector field ζ necessarily implies that (M, F, ζ, η, g, α, β) is homothetic to an Einstein Sasakian manifold. In the second result, we show that a compact simply connected trans-Sasakian manifold with the function α constant along the integral curves of ζ, scalar curvature τ satisfying the inequality α(6α² − τ) ≥ 0, and the Ricci operator Q satisfying a Codazzi-type equation with respect to the vector field ζ is necessarily homothetic to an Einstein Sasakian manifold. Finally, in the last result, we show that on a compact and simply connected trans-Sasakian manifold, the function β satisfying the differential inequality ζ(β²) ≤ −2β³ and the vector fields (∇Q)(grad α, ζ) and ζ being orthogonal necessarily imply that (M, F, ζ, η, g, α, β) is homothetic to a Sasakian manifold, where the covariant derivative (∇Q)(U, ζ) = ∇_U Qζ − Q(∇_U ζ) for a smooth vector field U on M.
For a smooth function h on a Riemannian manifold (M, g), the operator A_h defined by A_h(U) = ∇_U grad h, for a smooth vector field U on M, is called the Hessian operator of h, and it is a symmetric operator. Moreover, the Hessian Hess(h) of h is defined by Hess(h)(U, V) = g(A_h(U), V).
The Laplace operator Δ on (M, g) is defined by Δh = div(grad h). The Fischer–Marsden differential equation on a Riemannian manifold (M, g) is (cf. [18]) (Δh)g + h Ric = Hess(h).
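For reference, the operators just introduced can be collected in display form; this is a reconstruction from the in-text definitions using the standard conventions, and the sign convention of the last line (the Fischer–Marsden equation) should be checked against [18]:

\begin{align}
  A_h(U) &= \nabla_U\, \mathrm{grad}\, h, \\
  \mathrm{Hess}(h)(U,V) &= g\big(A_h(U),\, V\big), \\
  \Delta h &= \mathrm{div}\,(\mathrm{grad}\, h), \\
  (\Delta h)\, g \;+\; h\, \mathrm{Ric} &= \mathrm{Hess}(h).
\end{align}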
Trans-Sasakian Manifolds Homothetic to Einstein Sasakian Manifolds
In this section, we find necessary and sufficient conditions for a compact and simply connected 3-dimensional trans-Sasakian manifold (M, F, ζ, η, g, α, β) to be homothetic to an Einstein Sasakian manifold.
is homothetic to an Einstein Sasakian manifold of positive scalar curvature, if and only if, the Ricci operator Q satisfies a Codazzi-type equation with respect to ζ.
Proof. Suppose (M, F, ζ, η, g, α, β) is a compact simply connected 3-dimensional trans-Sasakian manifold satisfying the hypothesis. Then, equation (13) gives, and taking the trace in the above equation and using equation (12), we have. Note that by equation (3) we have ∇_ζ ζ = 0, and therefore Hess(β)(ζ, ζ) = ζζ(β). Using this equation and equation (17) in equation (16), we get. Now, using equation (5), we have Ric(ζ, ζ) = 2(α² − β² − ζ(β)). Thus, the above equation becomes. Using equation (6), we have div(ζ(β)ζ) = ζζ(β) + 2βζ(β), and inserting it in the above equation, we conclude. Integrating the above equation, we get. Using the inequality in the statement, we conclude. Since M is simply connected, it is connected, and therefore equation (22) implies either (i) β = 0 or (ii) α² − β² − τ/4 = 0. Suppose (ii) holds; then, as τ is a constant, we get ζ(α²) = ζ(β²), which in view of equation (4) implies βζ(β) = −2α²β; that is, 3β²ζ(β) = −6α²β². Thus, we have. Using equation (6), we have div(β³ζ) = ζ(β³) + 2β⁴, and inserting it in the above equation, we get. Integrating the above equation, we get. Now, using (ii) in the above integral, we have, and since the scalar curvature τ > 0, through the above integral we conclude that β = 0. Thus, equations (2), (3), (4), and (5) take the forms ζ(α) = 0, ... Taking the covariant derivative in the second equation of equation (28), we get, and using equation (27) in the above equation, we arrive at. Now, using the Codazzi-type condition on Q in the hypothesis, we get. Using the second equation in equation (27), we compute the Lie derivative of g with respect to ζ and conclude; that is, ζ is a Killing vector field, and the flow of ζ consists of isometries of the Riemannian manifold M. Thus, we have, and using equation (27), we conclude. Combining the above equation with equation (31), we have. Taking the inner product with ζ in the above equation, we conclude. We claim that, M being simply connected, α ≠ 0; for if α = 0, then by equation (27) we see that ζ is parallel and that η is closed, which implies that η is exact; that is, η = df for a smooth function f on M. This implies ζ = grad f, and M being compact, there is a point q ∈ M such that (grad f)(q) = 0, so ζ(q) = 0, a contradiction to the fact that ζ is a unit vector field. Hence α ≠ 0, and equation (36) implies U(α) = 0 for all U ∈ Γ(TM); that is, α is a nonzero constant. Now, equation (28) gives Q(ζ) = 2α²ζ, and taking the covariant derivative in this equation yields. Using the condition in the hypothesis and equation (34), with α ≠ 0, in the above equation, we get. Operating F on the above equation while using equation (1) and Q(ζ) = 2α²ζ, we conclude. This proves that M is an Einstein manifold. Finally, using equation (27), with α a nonzero constant, we compute. Hence, by Theorem 1, we conclude that M is homothetic to a compact simply connected Einstein Sasakian manifold of positive scalar curvature. The converse is trivial. | 2022-04-07T15:10:17.103Z | 2022-04-05T00:00:00.000 | {
"year": 2022,
"sha1": "c8d96440f94e57f4cb75ed2419343495d58367ce",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/amp/2022/9239897.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bb53fbe8cdc6b6941137596ede68eccf7f341eef",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
247618974 | pes2o/s2orc | v3-fos-license | Efficient Pairing in Unknown Environments: Minimal Observations and TSP-based Optimization
Generating paired sequences with maximal compatibility from a given set is one of the most important challenges in various applications, including information and communication technologies. However, the number of possible pairings explodes in a double-factorial order as a function of the number of entities, manifesting the difficulty of finding the optimal pairing that maximizes the overall reward. In the meantime, in real-world systems, such as user pairing in non-orthogonal multiple access (NOMA), pairing often needs to be conducted at high speed in dynamically changing environments; hence, efficient recognition of the environment and the discovery of high-reward pairings are in high demand. In this paper, we demonstrate an efficient pairing algorithm that recognizes compatibilities among elements and finds a pairing that yields a high total compatibility. The proposed pairing strategy consists of two phases. The first is the observation phase, where compatibility information among elements is obtained by only observing the sum of rewards. We show an efficient strategy that allows obtaining all compatibility information with minimal observations. The minimum number of observations under these conditions is also discussed, along with its mathematical proof. The second is the combination phase, by which a pairing with a large total reward is determined heuristically. We transform the pairing problem into a traveling salesman problem (TSP) in a three-layer graph structure, which we call Pairing-TSP. We demonstrate heuristic algorithms for solving the Pairing-TSP efficiently. This research is expected to be utilized in real-world applications such as NOMA and social networks, among others.
I Introduction
A Introduction of a pairing problem
Various systems and applications, including information and communication technologies, require combining multiple elements into an array of pairs. The process of partitioning a set of N elements into N/2 disjoint sets with exactly 2 elements each is called "pairing" in this paper. An example is found in non-orthogonal multiple access (NOMA) in the latest wireless communication systems [1][2][3][4][5][6]. In NOMA, multiple terminals share a common frequency band simultaneously, which greatly improves the frequency utilization efficiency. The key process here is user pairing: the base station allocates higher and lower transmission power for communications to the terminals located far from and near the base station, respectively. The terminals then conduct successive interference cancellation (SIC) calculations to extract the original signal. Therefore, determining the combination of user pairing that maximizes the total data rate of all users is critical. However, to the best of the authors' knowledge, optimal pairing algorithms that can work with a large number of users or terminals have not been proposed, even though various pairing algorithms have been proposed in previous studies [7,8]. When the number of users is N = 10, the total number of possible pairings is 945. With N = 100, the total number is on the order of 10^78, a double-factorial scaling, as introduced in Sect. III. Therefore, an efficient pairing strategy is indispensable. The importance of pairing is also observed in other situations and applications, such as college admission [9], economics [10], and donor exchange [11], among others [12,13].
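As a quick check of these counts, the number of pairings of N elements is the double factorial (N − 1)!!, which is easy to evaluate directly. A minimal sketch (the function name num_pairings is ours, not from the paper):

    def num_pairings(n: int) -> int:
        """Number of pairings of n elements (n even): the double factorial (n-1)!!."""
        count = 1
        for k in range(n - 1, 0, -2):
            count *= k
        return count

    print(num_pairings(10))             # 945, as quoted above
    print(len(str(num_pairings(100))))  # 79 digits, i.e. on the order of 10^78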
In this paper, we demonstrate a fast pairing algorithm consisting of an efficient recognition of compatibilities among elements as well as an efficient determination of a pairing that yields high total compatibility. Here, compatibility quantifies the performance of a given pair, and total compatibility is the sum of the compatibilities of all pairs in a pairing, where a pairing denotes a given set of disjoint pairs covering all elements. The optimal pairing is the one that maximizes the total compatibility of the system. However, in general, obtaining the globally maximal total compatibility would require an exhaustive search over all pairings. Therefore, a heuristic algorithm is needed to obtain an approximately maximal total compatibility. This study highlights the following two aspects in discussing the pairing problem.
The first point is the time duration required to obtain information about the compatibilities of the system, which we call observation time hereafter. In the absence of prior information about compatibilities, multiple observations are required to infer the compatibility between all elements. Furthermore, we presuppose that we cannot directly measure individual compatibility among elements; only the total compatibility of a certain pairing is observable. The fewer observations, the shorter the overall time required for pairing. More generally, the objective of an algorithm for compatibility observation is to guess as accurately as possible the real compatibilities in as few steps as possible, which is schematically illustrated in Fig. 1(a). In this paper, we demonstrate that by exploiting the inherent structural properties of the pairing problem, which we call exchange rules, the number of observations needed for acquiring all compatibilities is significantly reduced.
The second point is the efficient derivation of the optimal pairing based on the information on compatibility; we call the required time for this process combining time in this paper. Even if complete information about the compatibilities is available, it may take a considerable amount of time to find the optimal pairing because of the huge number of possible pairings, as schematically represented in Fig. 1(b). In this paper, we transform the derivation of optimal pairing into a traveling salesman problem (TSP). TSP is a widely known combinatorial optimization problem to find the shortest pathway in a graph G(V, E) for a salesman while visiting all vertices V via edges E. In addition to the compatibility information, we append two more layers to account for the requirement of the pairing problem; we call the re-formulated pairing problem the Pairing-TSP. Notably, the resulting graph is not fully connected. Once the situation is represented by a TSP problem, we can benefit from a variety of heuristic algorithms in the literature to efficiently solve the combinatorial explosion issue. Furthermore, this paper proposes a novel heuristic algorithm that is different from conventional algorithms and suitable for pairing problems.
Regarding the second point discussed above, a related problem is maximum weight matching (MWM). In MWM, the goal is to select edges from a weighted graph G(V, E) so that no two selected edges share common vertices while maximizing the sum of the weights of the selected edges. This is a combinatorial optimization problem. The pairing problem discussed in this study is a particular case of MWM where the graph is complete and the number of vertices is even. Several efficient algorithms have been proposed for solving MWM. For example, Gabow [14] proposed an algorithm with a computation time of O(|E||V| + |V|² log |V|); Cygan et al. [15] developed a randomized algorithm with a computation time of O(L|V|^ω) for graphs with integer weights (ω < 2.373 is the exponent of N × N matrix multiplication [16] and L is the maximum integer edge weight); while Duan et al. [17] worked on an algorithm achieving an approximation ratio of (1 − ε)M, with computation time O(|E| ε⁻¹ log ε⁻¹) for arbitrary weights and O(|E| ε⁻¹ log N) for integer weights (ε is an arbitrary positive value and M is the maximum value). Here, |V| = N and |E| = N(N − 1)/2.
In our paper, we approach the problem from a different perspective and present new methods for the pairing problem. In particular, we formulate it as a TSP. Additionally, to the best of our knowledge, the MWM literature does not consider any approach to obtaining the compatibility information, which is the main aspect of the observation phase in our manuscript.
Figure 1: There are two phases. (a) The first is the observation phase to grasp the compatibility among elements. (b) The second is the combining phase to find a pairing yielding high compatibility.
B Overview of this paper
With a view to the efficient realization of optimal pairing, the present study demonstrates an efficient observation strategy to measure the compatibilities among the entities on the basis of limited information. Furthermore, based on the insight that the optimal pairing problem is transformed into a TSP problem in a three-layer graph structure, we demonstrate heuristic algorithms to find a high-performance pairing that can be applied even when the number of elements is large.
This paper is organized as follows. First, we formulate the pairing problem in Sect. II. Second, for the observation phase, in Sect. III, we show the minimum number of observations needed to infer the complete set of compatibilities and propose an observation algorithm with a computational complexity of the square of the number of elements. Sect. IV examines the combining phase, where we introduce how to convert the pairing problem to a TSP and propose an algorithm for solving the resulting Pairing-TSP. In Sect. V, we numerically evaluate the performances of the combining phase algorithms. Sect. VI concludes the paper.
II Objective function and constraints
Here we assume that the number of elements is an even natural number N, and the index of each element is a natural number between 1 and N. We define the set of all users U as U = {1, 2, …, N}. Then, we define the set of all possible pairs over U as D = {{i, j} | i, j ∈ U, i ≠ j}. The compatibility between the elements i and j is denoted by C_{i,j}. The reward function f of a pair is given by its compatibility: f({i, j}) = C_{i,j}. We define a pairing S as a subset of D consisting of N/2 pairwise-disjoint pairs, so that every element of U appears in exactly one pair. Then C_sum(S), which is called "the total compatibility of pairing S" hereafter, is defined as C_sum(S) = Σ_{{i,j}∈S} f({i, j}) = Σ_{{i,j}∈S} C_{i,j}. And we define the set of all pairings 𝒮 = {S}. The pairing problem discussed in this study is formulated as follows: max C_sum(S) subject to S ∈ 𝒮.
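This formulation can be made concrete with a small brute-force sketch (ours, not the paper's; feasible only for small N, since it enumerates all (N − 1)!! pairings):

    from itertools import combinations

    def all_pairings(elems):
        """Recursively enumerate every pairing of an even-sized list of elements."""
        if not elems:
            yield []
            return
        first, rest = elems[0], elems[1:]
        for i in range(len(rest)):
            for sub in all_pairings(rest[:i] + rest[i + 1:]):
                yield [(first, rest[i])] + sub

    def c_sum(S, C):
        """Total compatibility of pairing S; C maps frozenset({i, j}) -> C_{i,j}."""
        return sum(C[frozenset(p)] for p in S)

    # Toy instance with N = 6 and arbitrary compatibilities:
    C = {frozenset(p): (p[0] * p[1]) % 7 for p in combinations(range(1, 7), 2)}
    best = max(all_pairings(list(range(1, 7))), key=lambda S: c_sum(S, C))
    print(best, c_sum(best, C))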
III Observation Phase
A Exchange Rule
As discussed in the Introduction, we assume that each C i,j cannot be directly observed, but C sum (S) is observable. By observing such C sum (S) values with different pairings S, we can recognize all C i,j values. The number of all available pairings is (N − 1)!!, meaning that the number of necessary observations is at most (N −1)!!. Here, the number denoted by n!! is the double factorial of an odd number n defined by n(n − 2)(n − 4) · · · 3 · 1. Therefore, the total number of possible pairings dramatically increases when N becomes large, indicating the importance of efficiently recognizing compatibilities with as few observations as possible. In the following, we prove that the total compatibility C sum (S) of all possible pairings can be calculated based on a limited number of observations, leading to a significant reduction of the required observations.
To improve the readability of the following discussion, we define the exchange rule [i, j, k, l] as the difference [i, j, k, l] = (C_{i,j} + C_{k,l}) − (C_{i,k} + C_{j,l}). This exchange rule describes the amount of change in the total compatibility between a pairing S containing {i, j} and {k, l} and a pairing S′ containing {i, k} and {j, l}. Therefore, each exchange rule can be calculated from two observations. For example, to find [1, 2, 3, 4] in N = 8, we can observe C_{1,2} + C_{3,4} + C_{5,6} + C_{7,8} and C_{1,3} + C_{2,4} + C_{5,6} + C_{7,8} and calculate the difference. For a large N, there are many sets of pairings corresponding to any given exchange rule, such that finding one exchange rule gives the amount of change between multiple sets simultaneously. For example, [1, 2, 3, 4] mentioned above is also the difference between C_{1,2} + C_{3,4} + C_{5,7} + C_{6,8} and C_{1,3} + C_{2,4} + C_{5,7} + C_{6,8}.
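In code, an exchange rule can be evaluated from two observations of the total compatibility only. A sketch under our reading of the definition; observe() stands in for the black-box measurement, and C is a compatibility table like the one in the sketch above:

    def observe(S, C):
        """Black-box observation: only the total compatibility of a pairing is visible."""
        return sum(C[frozenset(p)] for p in S)

    def exchange_rule(i, j, k, l, rest, C):
        """[i,j,k,l] = (C_ij + C_kl) - (C_ik + C_jl), from two observations.
        'rest' pairs up the remaining N-4 elements and cancels in the difference."""
        S1 = [(i, j), (k, l)] + rest
        S2 = [(i, k), (j, l)] + rest
        return observe(S1, C) - observe(S2, C)

    # Example for N = 8: [1,2,3,4] with {5,6} and {7,8} held fixed,
    # matching the two observations quoted in the text:
    # exchange_rule(1, 2, 3, 4, [(5, 6), (7, 8)], C)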
B Observation Algorithm
In this part we propose a simple algorithm with observation time in O(N²). As an example, we will use the C_{i,j} setting shown in Table 1 to illustrate the proposed observation algorithm. As discussed earlier, we assume that each compatibility C_{i,j} cannot be observed directly. It will prove beneficial not to calculate the original set of compatibilities C_{i,j} directly, but to use a derived set of compatibilities denoted by C̃_{i,j} and C̃_sum(S) with the two following properties: for any given pairing, C_sum(S) = C̃_sum(S), and some C̃_{i,j} are always equal to 0. If such properties hold, we can calculate any total compatibility via C̃_{i,j} with a reduced number of observations, instead of via C_{i,j}.
Indeed, we found that such C̃_{i,j} exist and can be defined explicitly. In this definition, we found that the number of nonzero C̃_{i,j} (i > j) elements is (N − 1)(N − 2)/2, which is smaller than the number of C_{i,j} (i > j) elements. That is, this definition reduces the number of nonzero elements.
When we denote a pairing as S, from the definition of C̃_{i,j} we can write C_sum(S) and C̃_sum(S) in the same form; the resulting equations prove that C_sum(S) and C̃_sum(S) are equal for any pairing S. As a consequence, any exchange rule can be written using either C_{i,j} or C̃_{i,j} while providing the same value. Thanks to this property and to C̃_{1,j} = 0 for any j, the computation can be greatly simplified. For example, if we compute the value of the exchange rule [1, i, j, k] for C̃_{i,j}, we can transform the equation accordingly. That is, we can obtain the difference between the two elements (C̃_{i,k} and C̃_{j,k}) from a single exchange rule. In the proposed observation algorithm, the values in Eqs. (2), (3), and (4) are obtained from observations. By definition, the following equations hold. Eq. (5) represents the changes along the horizontal direction (Table 2(a)). Let C̃_{2,3} be given by x; then C̃_{i,j} is represented using Eqs. (5) and (6). Then, by using Eq. (4), x is determined, and subsequently all C̃_{i,j} values are determined, as summarized in Table 2
C Minimum Number of Observations
We proved the following theorem. Theorem 1: The minimum number of observations required to know the entire set of compatibilities C̃ equals the total number of linearly independent pairings. This theorem is based on the idea that if there are x linearly independent pairings in total, then the required number of observations is x. The theorem is proved by the following argument. First, by design, the set of C̃_{i,j} preserves the total compatibility C_sum(S) obtained from C_{i,j} for all pairings, and the number of independent
IV Combining Algorithm
Based on the C̃_{i,j} obtained by the observation algorithm, we can compute the total compatibility C_sum(S) of all possible pairings S. However, as discussed in the Introduction, the number of pairings scales up very quickly as a function of N. In this section we re-formulate the pairing problem into a Traveling Salesman Problem (TSP) to realize an efficient combining algorithm.
A Traveling Salesman Problem
TSP concerns finding the route that minimizes the total cost of traveling through a given set of locations, with the cost between each pair of locations given. The salesman starts his or her tour from a starting node and visits all other nodes exactly once before returning to the starting node. The complexity of the TSP stems from the large number of possible routes, which scales up very quickly with the number of nodes, such that brute-force consideration of all possible routes is too costly in general.
B Solving the Pairing Problem as a TSP: Pairing-TSP
Figure 2: The path of the traveling salesman problem in the three-layer graph structure (Pairing-TSP) corresponds to the pairing problem. An example case with the number of elements (N) being 6 is illustrated. The first and second layers have N nodes, and the third layer has N/2 nodes. All nodes in the first layer are connected with each other. Each node in the second layer is connected to a different node in the first layer and to all nodes in the third layer. By constructing such a three-layer graph structure, the solution of the corresponding TSP problem provides a pairing yielding high compatibility.
In this study, we transform the problem of heuristically finding the pairing with a large total compatibility into a TSP with a three-layer network structure, which is schematically shown in Fig. 2. We call the re-formulated problem Pairing-TSP.
In this Pairing-TSP, we arrange the first and the second layers to have N nodes each, while the third layer is configured with N/2 nodes. Let the N nodes of the first and the second layers be indexed with natural numbers ranging from 1 to N. In the first layer, the cost of the route between the nodes i and j is given by −C_{i,j}. There is a one-to-one correspondence between the nodes in the first layer and the nodes in the second layer; in other words, there is a unique link between each node i in the first layer and the corresponding node i in the second layer. As other links between the first and second layers are not permitted, the Pairing-TSP results in a non-complete graph.
Finally, the third layer consists of N/2 nodes indexed between 1 and N/2, N being even. Here, the nodes in the second layer and the nodes in the third layer are fully connected. That is, the node i in the second layer is connected with nodes j (j = 1, · · · , N/2) in the third layer. Note that the cost of all routes except intra-first-layer links is set to zero. Nevertheless, remember that the salesman must visit all nodes in the second and the third layer too, not just the first layer. Now we demonstrate that the solution of such Pairing-TSP corresponds to the solution of pairing by noticing the following two inherent constraints.
First, consider a route that goes from a node in the third layer to a node in the second layer and then goes back to the third layer, as shown by the red lines in Fig. 3(a). Such a route fragment cannot be included in the solution of TSP. Each node in the third layer can be connected to at most 2 nodes in the second layer. Therefore, if different nodes in the third layer were connected to the same node in the second layer, there would be at least one node in the second layer that cannot be connected to the third layer. For these reasons, a route fragment such as the red lines in Fig. 3(a) is forbidden.
Secondly, the case of the thick red lines in Fig. 3(b), where three consecutive connections occur in the first layer, cannot be included in the solution of TSP. The reason is that if such connections existed, then the configuration of Fig. 3(a) would have to appear somewhere. Therefore, by construction, the salesman never visits three consecutive nodes in the first layer; instead, after visiting two nodes in the first layer, the salesman always moves to the second layer. Finally, in the solution of Pairing-TSP, the salesman visits two nodes in the first layer consecutively and then proceeds via the second and the third layers. When the connection between the nodes i and j in the first layer is included in the solution of Pairing-TSP, we consider the elements i and j to be paired.
Since the summation of the cost along the route of a solution of Pairing-TSP and the total compatibility of the corresponding pairing are opposite in sign, minimizing the cost of Pairing-TSP is equivalent to maximizing the total compatibility C_sum(S) by appropriate pairing construction. For these reasons, we can guarantee the correspondence between the original pairing problem and Pairing-TSP.
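The three-layer graph can be encoded as an ordinary TSP cost matrix, with forbidden edges given infinite cost. This is our encoding; the node numbering is an assumption, not taken from the paper:

    import math

    def build_pairing_tsp(C):
        """Cost matrix for Pairing-TSP from an N x N compatibility matrix C.
        Nodes 0..N-1: first layer; N..2N-1: second layer; 2N..2N+N/2-1: third layer."""
        N = len(C)
        size = 2 * N + N // 2
        cost = [[math.inf] * size for _ in range(size)]        # forbidden by default
        for i in range(N):
            for j in range(N):
                if i != j:
                    cost[i][j] = -C[i][j]                       # first layer: negated compatibility
            cost[i][N + i] = cost[N + i][i] = 0.0               # unique link, layer 1 <-> layer 2
            for k in range(N // 2):
                cost[N + i][2 * N + k] = cost[2 * N + k][N + i] = 0.0  # layers 2 <-> 3 complete
        return cost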
C Pairing-Nearest Neighbor Method (PNN)
In solving Pairing-TSP, we propose two algorithms on the basis of existing algorithms for the general TSP. The first one is what we call the pairing-nearest neighbor method, referred to as PNN in short hereafter. PNN is a modification of the nearest neighbor method, an algorithm that visits the nearest unvisited node from the current node [18]. As discussed in Sect. IV.B, a solution of Pairing-TSP does not allow three or more consecutive node visits in the first layer, so the salesman needs to go to the second, the third, and the second layer before coming back to the first layer again. If there are multiple least-cost routes to the next node, they are assumed to be chosen randomly.
Algorithm 1 Pairing-Nearest Neighbor Method (PNN)
Require: Array indexes start at 1
1: input: C (C is the N × N compatibility matrix, whose element C[i][j] stores compatibility C_{i,j})
2: q (q stores the nodes the salesman visits, and q[x] denotes the node which the salesman visits xth)
3: s ← start point in the first layer
4: Move from the 2nd layer to the 1st layer.
In all cases, the destination is chosen only from the unvisited nodes. In this manner, duplicate visits to any node are avoided.
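Since the listing above survives only partially, the following sketch reproduces the effective behavior of PNN on the first layer. It is our simplification: because all inter-layer edges cost zero, the detour through layers 2 and 3 amounts to restarting from any unvisited first-layer node.

    import random

    def pnn(C, start=0):
        """Pairing-nearest-neighbor sketch: pair the current node with the unvisited
        node of maximal compatibility (i.e. least TSP cost -C), breaking ties randomly."""
        N = len(C)
        unvisited = set(range(N))
        s = start
        pairing = []
        while unvisited:
            unvisited.remove(s)
            best = max(C[s][j] for j in unvisited)
            partner = random.choice([j for j in unvisited if C[s][j] == best])
            unvisited.remove(partner)
            pairing.append((s, partner))
            if not unvisited:
                break
            s = next(iter(unvisited))   # zero-cost hop via layers 2 and 3
        return pairing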
D Pairing 2-opt Method (P2-opt)
The second algorithm is what we call the pairing 2-opt method, referred to as P2-opt hereafter, which is a modification of the 2-opt method [19]. The 2-opt method compares the original route and one alternative route and updates the current solution by reconnecting some of the nodes so that the total cost decreases [19]. There are three possible pairing combinations for 2 given pairs of 4 nodes. Therefore, the proposed P2-opt compares the costs of three routes. If the compatibility is not improved by recombining any of the pairs, the algorithm terminates. Fig. 4 illustrates the reconnection procedure of the proposed P2-opt with an example of pairing when N = 6. A pseudo-code of P2-opt is shown in Algorithm 2. The three alternatives are represented by lines 8 to 10. In P2-opt, the rewiring is considered only on the first layer among these three alternatives. This rewiring never introduces duplicate visits. Note that the connections involving the 2nd and 3rd layers have zero cost for the salesman. Therefore, any rewired route in the first layer, which is a pairing S, provides a certain route for the salesman in the three-layer graph structure.
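A sketch of P2-opt follows. It is ours, not Algorithm 2 itself: the published algorithm restarts its round-robin scan after every exchange, whereas this version simply iterates passes until no exchange occurs or the limit l is hit.

    def p2_opt(pairing, C, l=600):
        """Pairing 2-opt: for every two pairs, compare the three matchings of their
        4 nodes and rewire whenever an alternative improves total compatibility."""
        S = [list(p) for p in pairing]
        exchanges = 0
        improved = True
        while improved and exchanges < l:
            improved = False
            for x in range(len(S)):
                for y in range(x + 1, len(S)):
                    (a, b), (c, d) = S[x], S[y]
                    cur = C[a][b] + C[c][d]
                    alt1 = C[a][c] + C[b][d]
                    alt2 = C[a][d] + C[b][c]
                    if max(alt1, alt2) > cur:
                        S[x], S[y] = ([a, c], [b, d]) if alt1 >= alt2 else ([a, d], [b, c])
                        exchanges += 1
                        improved = True
                        if exchanges >= l:
                            return [tuple(p) for p in S]
        return [tuple(p) for p in S]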
V Simulation
A Problem Setting
We constructed the compatibility set C_{i,j} by generating uniform random numbers between 0 and 10000. A total of 100 different C_{i,j} sets were generated for each setting, and the average over the different settings was examined. Note that each set of compatibilities C_{i,j} is reconstructed here following the observation algorithm based on the construction of C̃_{i,j} described in Sect. III. We want to compare the performance of PNN versus random pairing, evaluate how much P2-opt can improve a solution found by PNN through additional rewiring steps, and examine how the performance gain depends on the number of rewirings introduced in Sect. IV.
B Performance Indicator for the Derived Pairing
Let C_sum(S) be the total compatibility that corresponds to the pairing S derived through the combining algorithm. The larger C_sum(S) is, and the closer it is to the global maximum, the better. To quantify the performance of the combining algorithm in terms of how far C_sum(S) is from the maximum, we define the performance indicator P as P = (C_sum(S) − (N/2) C_min) / ((N/2)(C_max − C_min)), where N is the number of nodes in the first layer, C_max is the upper limit value of C_{i,j}, and C_min is the lower limit value of C_{i,j}. In this simulation, C_max = 10000 and C_min = 0. P ranges from 0 to 1 and represents the relative distance of the current pairing from the theoretical minimum or maximum possible values of C_sum(S), 0 corresponding to the absolute worst and 1 to the absolute best pairing, respectively.
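The indicator in code (a one-liner sketch; the normalization follows the stated properties of P, with the original displayed formula lost to extraction, and the defaults mirror the simulation setting):

    def performance(c_sum_value, N, c_max=10000, c_min=0):
        """P in [0, 1]: position of C_sum(S) between the worst total (N/2 * c_min)
        and the best total (N/2 * c_max)."""
        return (c_sum_value - (N / 2) * c_min) / ((N / 2) * (c_max - c_min))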
C Performance of PNN and P2-opt
We conducted a performance comparison between (a) No-Strategy, (b) PNN, and (c) PNN and P2-opt as a function of the number of elements N from 100 to 1000, as summarized in Fig. 5. Herein the exchange limit l was fixed at 600. "No-Strategy" indicates random selection of the route in the first layer. "PNN and P2-opt" means that we obtain an initial solution by PNN and update it by P2-opt.
D Effect of P2-opt
As described in Sect. IV.D., P2-opt aims at reducing the total cost of a TSP route by locally exchanging connections. To examine the effect of such an exchange, here we set an upper limit to the number of exchanges in P2-opt, which we define by the P2-opt exchange limit denoted by l. Fig. 6 shows the evolution of P as a function of l for different element numbers N from 100 to 1000 in intervals of 100, each point representing the average among 100 different compatibility sets. This result highlights two trends: first, P saturates beyond a certain limit l; second, as N increases, increasing l improves the performance until a new saturation level. Indeed, when N = 100, the performance reached its maximum value with l = 100, whereas P monotonically increases until l = 600 when N = 1000. These observations demonstrate that a sufficient exchange limit exists depending on the number of first-layer nodes of the given problem.
E Number of checks of P2-opt
In the P2-opt algorithm, two pairs of the current pairing are compared at every turn, and the nodes are reconnected accordingly if the rewiring improves the total compatibility (Fig. 4). Here, the order in which the pairs are checked is round-robin, meaning that each time the pairs are reconnected, they are rechecked from the beginning. Therefore, there is a possibility of double-checking, meaning that certain reconnections are re-calculated. That is to say, there is room for further accelerating the algorithm by reducing the number of checks. In the meantime, the computation cost of the P2-opt algorithm reflects how often compatibilities of pairs are compared, which we call the number of checks (NOC). The circular marks and their associated error bars in Fig. 7 represent the mean and the standard deviation of the NOC, respectively, when the number of elements N ranges from 100 to 2000 in intervals of 100. For each N, 100 different compatibility sets were examined. The exchange limit l was set to 600 regardless of N. However, when P2-opt reaches a locally maximal pairing and the algorithm terminates, the total number of exchanges is actually less than l. From Fig. 7, we can observe several trends. Clearly, the NOC increases as the number of elements increases. However, the slope flattens when the number of elements is greater than approximately 1200. Furthermore, the standard deviation gets larger when the number of elements goes beyond 1200. To examine the mechanisms behind these tendencies, we analyzed the time evolution of the NOC per exchange loop. The curves in Fig. 8 represent the evolution over exchange loops of the NOC for compatibility settings whose number of elements ranges from 100 to 1200 in intervals of 100, averaged over 100 different compatibility sets for each setting. The P2-opt exchange limit l was fixed at 600. From Fig. 8, we observe that the average NOC initially increases as the number of exchange loops elapses. Initially, any rewiring may improve the total compatibility; hence the NOC per exchange loop is small. As the number of exchanges increases, rewiring may not necessarily improve the total compatibility because the calculated route may already be a good solution. Therefore, the NOC until an actual rewiring happens increases. Beyond a certain point, the calculated route has a relatively low cost; therefore, the NOC grows until P2-opt has converged, and becomes 0 once P2-opt has converged. In Fig. 8, 100 trials were simulated for each N and averaged over, such that the NOC gradually decreases after some point because the number of converged trials steadily increases.
Indeed, in the case of N = 500, the NOC becomes almost zero when the exchange loop count reaches 300. Similarly, in the case of N = 1000, the NOC becomes very small when the total number of exchanges is 600. In the case of N = 1200, however, the NOC is still large, approximately 8 × 10⁴, when the total number of exchanges is 600. That is to say, the search for a better solution may be insufficient. This observation is consistent with the change of slope in Fig. 7 at N = 1200. In other words, when N is small, the variance is small because a sufficiently low-cost route solution has been obtained, whereas when N is greater than 1000, l is insufficient, so the variance becomes large and the slope of the graph against N becomes shallow.
F Comparison of computational costs
In this section, we discuss the computational complexity of each method. First, the total number of possible pairings is (N − 1)!!. Therefore, the computational complexity of enumeration is (N − 1)!!, and the number of observations required is also (N − 1)!!. On the other hand, the number of observations needed for the proposed observation algorithm is O(N²), as shown in Sect. III.
VI Conclusion
In this study, we propose an algorithm for efficiently and heuristically determining a pairing that provides large total compatibility among entities, a process that lies at the heart of some of the latest information and communication technologies, such as non-orthogonal multiple access (NOMA) in wireless networks and matching problems in economics, among others. We identify two main phases to optimize the pairing: observation and combination. One of the main hypotheses of this study is that one can only observe the total compatibility of any given pairing.
In the meantime, the number of all possible pairings grows as (N − 1)!!, where N is the number of entities. Therefore, efficient strategies to measure the compatibility among elements are essential. We demonstrate that the minimum number of observations needed to know the complete set of all compatibilities is smaller than the total number of combinations of this set. This finding does not depend on the combining phase. Also, by exploiting the exchange relationships inherent in the problem, we propose an efficient algorithm scaling as O(N²) to observe all compatibilities among elements. In the combining phase, we demonstrate that the derivation of the best pairing is equivalent to solving a traveling salesman problem (TSP) in a three-layer graph structure, which we name Pairing-TSP. We propose two heuristic approaches to efficiently solve Pairing-TSP: the pairing-nearest neighbor (PNN) and the pairing 2-opt (P2-opt) methods, both of which exploit characteristics unique to the architecture of Pairing-TSP. Numerical simulations confirm the principles of the algorithms. In summary, the present study first proposed an algorithm to estimate the compatibility among elements only via the total compatibility with minimal observations. Then, through the insight that the pairing problem is equivalent to solving a special class of TSPs, we demonstrated heuristic methods to accomplish pairing efficiently. We consider that the contents herein contribute to achieving more efficient pairing than conventional methods, especially for the case of a large number of users in NOMA systems, as well as other pairing applications. We expect our findings to be applicable also to social systems such as social networking services and education.
VII Effect of Initial Node in PNN
In the PNN, traveling starts from a node in the first layer.
Here we examined the effect of the starting node on the resultant pairing performance. More specifically, we analyzed the standard deviation of the performance indicator P defined in Sect. V.B while changing the starting node through all N nodes in the first layer. In the simulations, N was varied from 100 to 1000 in intervals of 100, while 100 types of compatibilities were prepared for each given N. We calculated standard deviations over the N initial nodes for each of the 100 compatibility sets. Then, we averaged all 100 standard deviations for each N.
Figure 9: The standard deviation of the performance for each method as a function of N when the initial point is changed. N is set from 100 to 1000 and we prepare 100 types of compatibilities for each N.
The red, green, and blue circular marks in Fig. 9 show the standard deviation of the performance indicator P as a function of the number of elements N when the pairing was conducted with the completely random strategy (No-Strategy), PNN, and PNN and P2-opt, respectively. We can observe that the standard deviation decreases as the number of elements increases for all methods. In particular, the dependence of the performance on the initial node for PNN and P2-opt is smaller than that for No-Strategy and PNN. Since the maximum standard deviation is smaller than 0.015 at N = 100 in the cases of PNN and of PNN and P2-opt, we can conclude that the initial node selection in PNN has a negligible effect on the resultant pairing quality. | 2022-03-24T06:47:33.457Z | 2022-03-23T00:00:00.000 | {
"year": 2022,
"sha1": "1d4e19d6c04609e977f866c138a8c7a96955be5c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7219bf2562bf848913d6ceb265d91a391a36c2ce",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
118678878 | pes2o/s2orc | v3-fos-license | Systematic study of the hidden order in URu$_{2}$Si$_{2}$ as a multipolar order; the role of triakontadipoles
A systematic search for possible order parameters of the so-called hidden order of URu$_{2}$Si$_{2}$ is conducted. Among the possible candidates that do fulfill the experimental symmetry restrictions on the hidden order parameter, we find one candidate that stands out -- one of the components of a triakontadipole multipole tensor that belongs to the $A_{1u}$ irreducible representation of the point group $D_{4h}$. This solution is characterized by a $\vec{Q}=(0,0,1)$ ordering of the triakontadipoles and has a symmetry that forbids magnetic moments as well as most other multipole ordering on the uranium sites. This hidden order phase is closely related to the antiferromagnetic phase, which is manifested in the similarities of the geometries of the calculated Fermi surfaces. Finally, it is found that this non-magnetic solution allows for another secondary superimposed order parameter that belongs to $B_{2u}$, which gives rise to an anisotropic in-plane susceptibility without breaking the tetragonal crystal symmetry.
The hidden order (HO) of the heavy fermion material URu₂Si₂ has attracted a lot of interest since its discovery in 1985 [1], as described in a recent review [2]. The main enigma is that at 17 K there is a clear signature of a second-order phase transition but no observable order parameter (OP) in the HO phase. The only signal is a tiny staggered magnetic moment, which cannot account for the change in entropy at the phase transition. Another important aspect of the HO is that under pressure it goes through a phase transition to an antiferromagnetic (AF) phase with relatively large moments. This phase transition is now conclusively established to be of first order [3]. This, together with the fact that the order parameter of the HO phase does not cause any change in observable symmetries, such as crystal symmetry or magnetic moments, puts severe conditions on its symmetry properties.
There are numerous theoretical suggestions for the HO parameter in the literature; see Ref. [2] for a fuller account. Some are explicitly itinerant in nature, e.g., unconventional density waves [4], orbital currents [5], or helicity order [6]. Others focus more on a local atomic order parameter. In fact, all multipolar orders up to rank five have been suggested: quadrupoles [4], octupoles [8], hexadecapoles [9], and triakontadipoles [10].
In this Letter we allow for a general real-space atomic order parameter that can arise from ordering of the itinerant uranium f-states. First we systematically study which symmetries of the OP are consistent with various experimental observations. Then we test which of the OP candidates can be stabilized in realistic calculations of the correlated f-electrons by means of a combined density functional theory and correlation treatment (DFT+U). These complementary studies point towards which OP are allowed by symmetry and compatible with the electronic structure of URu₂Si₂. This study points conclusively towards an OP which is a superposition of two triakontadipole components belonging to different irreducible representations of the isogonal point group: A_{1u} ⊕ B_{2u}.
We will allow for a staggered OP as a superposition of several independent OP, where each OP takes the general form ψ_α(Q) = (1/N) Σ_n e^{iQ·R_n} ⟨f†_n Γ^α f_n⟩, where Q is an ordering wave vector, R_n are the uranium atomic positions, N is the number of atoms in the crystal, f†_n is the f-electron creation operator at atom site n, and Γ^α is the operator for the local multipole of type α. Here Γ^α is a matrix-operator in the 14-dimensional space of f-orbitals and f_n is a vector-operator in the same space. In order for Eq. (4) to include all possible OP stemming from the f-shell, α should enumerate all possible degrees of freedom within this shell. This is known to be handled by the so-called tesseral multipole tensor moments (TMTM) [1-3, 7, 10, 12], with α = {kpr; t}, where the corresponding expansion matrices Γ^{kpr}_t are known matrices in the f-orbital space; see also the Supplementary Materials (SM) [16] for more details. The multipole tensors in Eq. (5) have a simple physical interpretation; for even k, they are multipoles of the charge (p=0) or spin-magnetization (p=1), while for odd k they are multipoles of the corresponding currents. The rank of the tensor is given by r and its time reversal (TR) symmetry is given by (−)^{k+p}.
Table I: The TMTM components that correspond to the IR of the group D_{4h}. In the enumeration of the components t, n ≥ 0 is an integer. The experimental compatibilities for HO OP belonging to different IR of D_{4h} are ranked from the discussion points i-iv. A plus (minus) sign means that it has some advantageous (disadvantageous) features, for TR-even (g) and TR-odd (u) IR, respectively. For point iii there is the possibility of superimposed OP, and the different letters indicate, without any ranking order, which two IR have to contribute together.
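For clarity, the staggered OP can also be written using the density-matrix notation of the Supplementary Materials; this display is our reconstruction from the in-text definitions:

\begin{equation}
  \psi^{kpr}_{t}(\vec{Q}) \;=\; \frac{1}{N} \sum_{n} e^{i \vec{Q} \cdot \vec{R}_n}\,
  \mathrm{Tr}\!\left[ \Gamma^{kpr}_{t}\, \rho_n \right],
  \qquad \rho_n = \big\langle f_n^{\phantom{\dagger}} f_n^{\dagger} \big\rangle ,
\end{equation}

where the trace runs over the 14 f-orbitals, so that the 196 tensor components exhaust the freedom of the local density matrix.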
It is easy to show that the TMTM have simple transformation rules under point group operations: for a rotation by an angle φ around the quantization axis c, R(ĉ, φ) w^{kpr}_t = cos(tφ) w^{kpr}_t + sin(tφ) w^{kpr}_{−t}, and for a rotation by π around the first perpendicular direction a, R(â, π) w^{kpr}_t = (−)^{t+r} sgn(t) w^{kpr}_t. In Table I the irreducible representations (IR) of the TMTM are determined for the tetragonal crystal point group D_{4h} through the characters of its two generators, c₄ = R(ĉ, π/2) and c₂ = R(â, π), where a and c denote the lattice directions. Before we present our calculations, we reinvestigate which of the TMTM OP are compatible with the most important constraints on the HO OP established by the large body of experimental studies. It is not possible to cover all experimental aspects [2], so we concentrate on experimental observations that have simple and direct implications for the symmetry aspects of the HO.
i) Non-broken lattice symmetry. At high temperatures URu₂Si₂ has the crystal symmetry of the space group 139 (I4/mmm) with its isogonal point group D_{4h} (4/mmm). It is established that there are no lattice distortions at the HO transition at 17 K, i.e., the HO phase also belongs to the tetragonal crystal class. Since the uranium atoms are situated at maximally symmetric Wyckoff sites, the local site symmetry is also D_{4h}. From Table I it is then clear that if the local four-fold rotation remains, the OP has to belong to the A_1 or A_2 IR of the point group. However, when the crystal symmetries are taken into account, one can see that the broken local four-fold symmetry can be taken care of by crystal symmetry operations, as has been noted earlier for the case of quadrupoles [18]. The remedy is a four-fold screw axis generated by the non-symmorphic symmetry operation (c₄|½ ½ ½). In the cases of the B_1 and B_2 IR, the tetragonal symmetry is recovered for an ordering wave vector Q = (001), while for E the corresponding wave vector is (0 0 ½). ii) Vanishing magnetic moments. Since it is now established that the phase transition under pressure from the HO phase to the AF phase is of first order [3], the HO OP cannot belong to A_{2u}, since that is the IR of magnetic moments along the c-axis. If we are looking for a symmetry reason for the non-existence of magnetic moments in the HO phase, the IR of the HO OP cannot belong to E_u either, the IR of the in-plane magnetic moments.
iii) Low-symmetry in-plane susceptibility. Recently it has been reported that the HO phase has a broken four-fold symmetry [5]. An analysis has shown that this arises from the OP squared interacting with the applied magnetic field squared, which leads to a non-vanishing B_{2g} IR of the magnetic susceptibility [6,16]. In Ref. [6] it was shown that only an OP belonging to E has a B_{2g} IR in the direct product representation of its square. Another option is that the OP is a linear combination of two independent OP of different IR. Then, since [16] A_{1ν} ⊗ B_{2ν} = A_{2ν} ⊗ B_{1ν} = B_{2g}, we see that the coexistence of these types of OP would also lead to the observed variation of the magnetic susceptibility. Note that the two OP have to have the same TR symmetry ν, i.e., even or odd. iv) Good hideout. From Table I we can observe that even when the primary OP has a high rank, which makes it hard to observe directly, it will leave traces in terms of induced multipoles of lower ranks. In that sense there are two types of OP that have the chance to be better hidden than others. They are OP with t = −4 and either even r ≥ 4 belonging to the IR A_2, or odd r ≥ 5 belonging to the IR A_1.
Summary of candidates. Our survey is summarized in Table I. A linear combination of OP best fulfills the criteria from points i-iv, either A_{1u} ⊕ B_{2u} or A_{2g} ⊕ B_{1g}, while the best pure OP belongs to either A_{1u} or A_{2g}.
Electronic structure calculations. In order to determine which of the TMTM are compatible with the electronic structure, we perform a systematic survey in terms of realistic calculations. Care has to be taken to allow for TMTM OP solutions as in Eq. (4) that belong to the different IR of the isogonal group D_{4h} as listed in Table I. In our approach we break the symmetry by inducing staggered w^{kpr}_t on the uranium sites and determine the largest possible symmetry group that is compatible with their existence. Then, by iteration, we determine whether this starting assumption converges to a non-trivial solution. In principle it is possible that two cases of induced OP of the same IR lead to two different solutions, but they may of course also converge to the same solution.
The electronic structure is determined with the DFT+U approximation within the APW+lo method as implemented in the elk-code [8,9]. The calculations were performed in a similar way as described earlier [2,7,10], and are presented in more detail in the SM [16].
For each possible TMTM component of each IR given in Table I, we have started a calculation with a large value of the corresponding multipole. All calculations enforce a staggering wave vector Q = (001) for tensors with rank r ≥ 1. It was found that one solution exists for each TR-labelled IR; these results are collected in Table II for the case of U = 1 eV. In order to quantify the importance of different tensor components, we have utilized the concept of polarization π^{kpr}_t [7], a normalization-independent quantity that directly measures the importance of the different contributions to the polarization of the density matrix. It is proportional to the square of the components of the TMTM w^{kpr}_t, and all components except kpr = 000 add up to a total polarization π_tot; see the SM [16] for more details.
In the case of TR-even IR, all initializations converged to the trivial unpolarized case of A_{1g}, with only the rotationally invariant tensors, w^{000}_0 and w^{110}_0, being non-zero. They correspond to expectation values of the f-occupation and of the operator ℓ·s, respectively. The calculated large value of w^{110}_0 leads to an enhancement of the intrinsic spin-orbit coupling, which brings the solution into a relativistic regime with predominantly j = 5/2 occupation. The resulting OP is not actually staggered.
When the TR symmetry is broken, we find in all cases that components of the triakontadipole w^{615}_t dominate. In addition we find smaller contributions from all TMTM components that are allowed by symmetry as given in Table I, i.e., both TR-odd and TR-even. However, only two TMTM have significant polarizations, w^{110}_0 and w^{615}_t, for all cases except A_{2u} and E_u. In all these cases w^{110}_0 has a similar polarization as in the TR-even case. The case of A_{2u} is the most complicated. Here there is a strong competition for the exchange energy between the magnetic dipole moments along the c-axis and the two allowed triakontadipole components. The magnetic dipoles have significant polarization for all three multipole variants, spin w^{011}_0, orbital w^{101}_0, and spin dipolar w^{211}_0, while the triakontadipole w^{615}_0 has a contribution of similar strength, but w^{615}_4 has the largest polarization. Although all initializations lead to the same solution, the convergence is often extremely slow, which indicates a flat energy landscape. This was observed in the earlier study where we first found the triakontadipoles in the A_{2u} phase [10]. For the two-dimensional IR E_u we have performed two calculations where the OP is oriented along the in-plane symmetry directions (100) and (110), respectively. Both solutions are stable although different, indicating a strong anisotropy. As anticipated, both cases possess non-vanishing magnetic moments. The local magnetic moments are 1.1 and 0.8 µ_B, respectively. No polarized solution could be found in the case of tetragonal E_u, i.e., with the ordering vector Q/2.
For the TR-odd case we have also looked for the possibility of superimposed OP. Only one new solution was found when allowing for various combinations of superimposed OP, but it is one of the HO OP candidates singled out above as fulfilling all the experimental constraints. It is A_{1u} ⊕ B_{2u}, with ψ^{615}_{−4}(Q) two orders of magnitude larger than ψ^{615}_2(Q). The product of these two OP interacts with the global magnetic field in the ab-plane and gives rise to a nonzero B_{2g} IR of the magnetic susceptibility [16], which explains the in-plane response of the torque experiments [5].
In Fig. 1 the variation of the Fermi surface (FS) sheets is displayed, from the uncorrelated case to the finite U = 0.5 eV and 1.0 eV cases, for both of the two solutions with the largest polarizations, A_{1u} and A_{2u}. At U = 0 they reproduce the FS of Ref. [26]: A_{2u} corresponds to their AF solution, and the, in this case unpolarized, A_{1u} corresponds to their paramagnetic solution. In this limit, we observe that the FS of the two solutions are radically different, while at finite U they become surprisingly similar. Hence, if we identify the non-magnetic solution with the A_{1u} OP as the HO phase and the magnetic solution A_{2u} as the AF phase, the calculated FS are in excellent qualitative accordance with recent Shubnikov-de Haas measurements [27], which observe only minor changes across the phase transition from the HO to the AF phase under pressure. In the calculated case we can see a strong resemblance in both topology and size between the FS of the two solutions at both U = 0.5 eV and U = 1 eV. Since the FS for the two different U values do not even have the same topology, the FS geometry is strongly U dependent. It is therefore left for a future study to calculate the U variation of the FS in more detail and to compare more quantitatively with experiments. In this study we have performed a detailed and systematic survey of possible HO OP in terms of ordering within the uranium f-bands. A picture arises from some common features of Tables I and II. First, we observe that there is no indication at all in the calculations for a TR-even HO OP. Secondly, it is clear that the AF solution at high pressure is the IR A_{2u}, where the OP in the calculation is a superposition of magnetic vector OP and TR-odd rank 5 TMTM components, with the latter dominating. Thirdly, this phase competes with an almost pure TMTM OP of the kind ψ^{615}_{−4}(Q), which belongs to the IR A_{1u}. These two solutions, A_{2u} and A_{1u}, are among the ones that have the largest polarizations, as shown in Table II. The HO phase A_{1u} only allows a few multipolar components with tesseral component t = −4, where all except w^{615}_{−4} become small. This leads to an optimally hidden OP. It is further found that there exists a solution where this A_{1u} OP is superimposed with a smaller B_{2u} triakontadipole OP, which would explain the recently observed anisotropic in-plane susceptibility [5].
From Table II we can directly observe that the various solutions for the different IR all have a large contribution from the rank 5 TMTM OP ψ^{615}. These results are in good accordance with our earlier observations that these triakontadipoles play a large role for URu₂Si₂ in particular [10] and for time-reversal symmetry breaking at moderate or strong spin-orbit coupling in general [7]. The latter is summarized as Katt's rules. After completion of this study we became aware of a recent computational study [28] which also identifies the importance of large rank 5 multipoles in the electronic structure of URu₂Si₂. In those random phase approximation (RPA) calculations only the j = 5/2 states are included (see [16] for a discussion), and it was concluded that the magnetic r = 5 E_u state is the best candidate for the HO phase.
The support from the Swedish Research Council (VR) is thankfully acknowledged. The calculations have been performed at the Swedish high performance centers HPC2N and NSC under grants provided by the Swedish National Infrastructure for Computing (SNIC).
ORDER PARAMETERS
In this study our main focus is on order parameters (OP) of the general form ψ^{kpr}_t(Q) = (1/N) Σ_n e^{iQ·R_n} Tr[Γ^{kpr}_t ρ_n], where Q is an ordering wave vector, R_n are the uranium atomic positions, N is the number of atoms in the crystal, f†_n is the f-electron creation operator at atom site n, Γ^{kpr}_t is the operator for the local tesseral multipole tensor moment (TMTM) component, ρ_n = f_n f†_n is the local density matrix, and the trace is over the f-orbitals. For an f-shell, 0 ≤ k ≤ 6, 0 ≤ p ≤ 1 and |k − p| ≤ r ≤ k + p, which constitutes 26 different multipole tensors, of which 13 are time-reversal (TR) even and 13 TR-odd, and the total number of tensor components is 196 = 14 × 14. This accounts for the full freedom of the 14-dimensional density matrix ρ_n = f_n f†_n. In the 14-dimensional space of f-orbitals, Γ^{kpr}_t is a matrix-operator and f_n is a vector-operator. In a {jm_j}-representation of the f-states (jj-basis), the matrices Γ^{kpr}_t are given in terms of Wigner coefficients; here ℓ = 3, s = 1/2, the (...)- and {...}-symbols are the Wigner-3j and -9j symbols, respectively, N^{kpr} is a normalization factor, and [a...b] = (2a + 1)...(2b + 1). [1,2] The operator T brings a spherical tensor, which was used in earlier studies [2,3], to a tesseral form. The tesseral form is convenient when considering rotational symmetries, as in the present study. The TMTM in Eq. (4) have a simple physical interpretation; for even k, they are multipoles of the charge (p=0) or spin-magnetization (p=1), while for odd k they are multipoles of the corresponding currents. The rank of the tensor is given by r and its time-reversal (TR) symmetry is given by (−)^{k+p}. Hence all possible OP stemming from the f-shell are covered by a superposition of OP in terms of the TMTM of Eq. (4). For instance, the OP of an ordinary spin density wave is given by ψ^{011}_t(Q). This can easily be seen since Γ^{011}_t = σ̃_t, the Pauli spin matrices in tesseral form, i.e., σ̃_1 = σ_x, σ̃_{−1} = σ_y and σ̃_0 = σ_z. Another example is ψ^{112}_2(Q), which is one of the staggered quadrupole models suggested in Ref. 4.
Symmetry properties
It is easy to show that the TMTM have simple transformation rules under point group operations. For a rotation by an angle $\varphi$ around the quantization axis (which we denote $z$) and for a rotation by $\pi$ around the first perpendicular direction (which we denote $x$) they behave as
$$R(\hat z, \varphi)\, w^{kpr}_t(n) = \cos(t\varphi)\, w^{kpr}_t(n) + \sin(t\varphi)\, w^{kpr}_{-t}(n),$$
$$R(\hat x, \pi)\, w^{kpr}_t(n) = (-)^{t+r}\,\mathrm{sgn}(t)\, w^{kpr}_t(n),$$
respectively. Hence the rotational properties are determined by the rank $r$ and the component $t$ only. In Table I of the main Letter (ML) the irreducible representations (IR) of the TMTM are determined for the isogonal point group $D_{4h}$ through the characters of its two generators, $c_4 = R(\hat c, \pi/2)$ and $c_2 = R(\hat a, \pi)$. In addition, the TMTM behave under a TR operation $\Theta$ as
$$\Theta\, w^{kpr}_t(n) = (-)^{k+p}\, w^{kpr}_t(n).$$
As the local site group of the uranium atoms is equal to the isogonal point group, Table ML-I describes the local symmetry for the different IRs. It is then clear that only $A_1$ and $A_2$ are compatible with the fourfold rotational symmetry of the tetragonal space group. However, this symmetry is recovered for the other IRs as well if supercells are allowed for. This comes from the non-symmorphic group element $\{c_4 \mid \tfrac{1}{2}\tfrac{1}{2}\tfrac{1}{2}\}$, a fourfold screw operation that connects the corner ($n$ even) and body-centered ($n$ odd) uranium sites in the original bct structure. It maintains the tetragonal symmetry also for the other IRs: for $B_1$ and $B_2$ the screw operation changes the sign of the TMTM between the two sublattices, while for $E$ (with $q$ an integer, $0 \leq q \leq (r-1)/2$, and odd $t = [q] = 2q+1$) $w^{kpr}_t$ and $w^{kpr}_{-t}$ span the two-dimensional IR. The operations leading to a change of sign of the TMTM, i.e. Eqs. (11) and (14), correspond to ordering vectors $\vec Q = (001)$ and $\vec Q/2$, respectively.
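The IR assignment implied by the two rotation rules above can be tabulated mechanically. The sketch below classifies a component $(r, t)$ under the $D_4$ generators; the sign convention for $\mathrm{sgn}(t)$ at $t = 0$ is an assumption, and the g/u (parity) and TR labels are not determined by these rotational rules.

```python
# Sketch: classify a tesseral component w^{kpr}_t under the generators
# c4 = R(z, pi/2) and c2 = R(x, pi), using only the two rules quoted above
# (rank r and component t determine the rotational behaviour).
def classify(r: int, t: int) -> str:
    if t % 2:                        # odd t: cos(t*pi/2) = 0, w_t mixes with w_-t
        return "E"
    label = "A" if t % 4 == 0 else "B"
    sgn = -1 if t < 0 else 1         # assumption: treat sgn(0) as +1
    c2 = (-1) ** ((abs(t) + r) % 2) * sgn
    return label + ("1" if c2 == 1 else "2")

for r in range(6):
    print(f"r = {r}:", {t: classify(r, t) for t in range(-r, r + 1)})
```

For instance, the rank-5 component with t = −4 comes out as A1, consistent with the $A_{1u}$ assignment of $\psi^{615}_{-4}$ quoted above.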
Susceptibility
As discussed in the ML, there is no experimental signature that the tetragonal crystal symmetry is broken in the hidden order (HO) phase. However, recently an anisotropic component was detected in measurements of the in-plane susceptibility on single URu$_2$Si$_2$ crystals [5]. This was subsequently analyzed by Thalmeier and Takimoto [6]. Here we extend their analysis and show that the experiments can be explained by a non-vanishing $B_{2g}$ contribution to the in-plane susceptibility. The experiments are based on measurements of the torque on the sample in a uniform external magnetic field $\vec H$ constrained to the ab plane of the tetragonal crystal. This torque is given by
$$\vec\tau = \mu_0 V\, (\chi\vec H) \times \vec H,$$
where $V$ is the sample volume, $\chi$ the susceptibility tensor and $\vec H = (H_a, H_b, 0) = H(\cos\varphi, \sin\varphi, 0)$. The in-plane part of the symmetric $\chi$ tensor is decomposed into three IRs,
$$\chi = \begin{pmatrix} \chi_{A_{1g}} + \chi_{B_{1g}} & \chi_{B_{2g}} \\ \chi_{B_{2g}} & \chi_{A_{1g}} - \chi_{B_{1g}} \end{pmatrix}.$$
Then the torque is given by
$$\tau_z = \mu_0 V H^2 \left(\chi_{B_{1g}} \sin 2\varphi - \chi_{B_{2g}} \cos 2\varphi\right),$$
i.e. with two-fold symmetric contributions from the $B_{1g}$ and $B_{2g}$ IRs of the susceptibility, respectively. Furthermore, these susceptibilities will have contributions from the HO, $\Psi(\vec Q)$, through its interaction with the magnetic field in the Landau free-energy expansion, where only second-order interactions appear due to the staggering of the HO, $\vec Q \neq 0$, while the magnetic field is uniform.
Higher-order couplings are also possible. Hence, in a general case with an OP having contributions from all possible IRs, both in-plane susceptibility components acquire terms bilinear in the OP components. Here $\Psi_{E_\nu a}$ and $\Psi_{E_\nu b}$ span the two-dimensional IR $E_\nu$ and are chosen such that $c_4 \Psi_{E_\nu a} = \Psi_{E_\nu b}$, $c_2 \Psi_{E_\nu a} = (-)^{r+1} \Psi_{E_\nu a}$ and $c_2 \Psi_{E_\nu b} = (-)^{r} \Psi_{E_\nu b}$, with $r$ the rank of the OP tensor.
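To make the torque analysis concrete, the sketch below decomposes a hypothetical in-plane susceptibility tensor into its $A_{1g}/B_{1g}/B_{2g}$ parts and checks the two-fold torque formula reconstructed above against a direct evaluation of $\vec\tau = \mu_0 V (\chi\vec H) \times \vec H$; all numerical values are invented for illustration.

```python
# Sketch: in-plane susceptibility decomposition and the resulting torque
# tau_z(phi) = mu0 V H^2 (chi_B1g sin 2phi - chi_B2g cos 2phi).
import numpy as np

chi = np.array([[1.00, 0.02],
                [0.02, 0.97]])                 # hypothetical (a, b) block of chi
chi_A1g = (chi[0, 0] + chi[1, 1]) / 2
chi_B1g = (chi[0, 0] - chi[1, 1]) / 2
chi_B2g = chi[0, 1]

mu0, V, H = 4e-7 * np.pi, 1e-9, 1.0            # SI units; ~1 mm^3 sample assumed
phi = np.linspace(0, 2 * np.pi, 361)

Hvec = H * np.stack([np.cos(phi), np.sin(phi)])   # field rotated in the ab plane
M = chi @ Hvec                                    # induced in-plane magnetization
tau_direct = mu0 * V * (M[0] * Hvec[1] - M[1] * Hvec[0])   # z-component of M x H
tau_formula = mu0 * V * H**2 * (chi_B1g * np.sin(2 * phi) - chi_B2g * np.cos(2 * phi))
print("max deviation:", np.max(np.abs(tau_direct - tau_formula)))   # ~0 numerically
```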
Relativistic effects
In the presence of strong spin-orbit coupling, there will be a large splitting between the $j = \ell - 1/2 = 5/2$ and $j = \ell + 1/2 = 7/2$ states. It is then useful to study the density matrix in a jj basis,
$$\rho_n = \begin{pmatrix} \rho_n^{\frac{5}{2}\frac{5}{2}} & \rho_n^{\frac{5}{2}\frac{7}{2}} \\ \rho_n^{\frac{7}{2}\frac{5}{2}} & \rho_n^{\frac{7}{2}\frac{7}{2}} \end{pmatrix},$$
where each sub-matrix $\rho^{j_1 j_2}_n$ is spanned by $m_1$ and $m_2$ with $-j_1 \leq m_1 \leq j_1$ and $-j_2 \leq m_2 \leq j_2$.
Strong spin-orbit coupling

Let us first discuss the limit of very strong spin-orbit coupling. In this limit the TMTM $w^{110}_0$ is related to $w^{000}_0$ through $w^{110}_0 = -\frac{4}{3} w^{000}_0$, and only the j = 5/2 occupation is non-zero. As we will discuss below, URu$_2$Si$_2$ does not fulfill this criterion and hence rather possesses an intermediately strong spin-orbit coupling. It is nevertheless interesting to study this limit, not least since it is assumed in some other theoretical studies. In this relativistic limit the j = 7/2 states are much higher in energy than the j = 5/2 states, and the occupation is restricted to the sub-matrix $\rho^{\frac{5}{2}\frac{5}{2}}_n$. Now, from the definition of $\gamma^{jjr}_t$ in Eq. (7), one can see that $r$ in Eq. (23) is given by the vector coupling of two j = 5/2 angular momenta, and hence can take values between 0 and 5. From the general relations for exchange of two columns in the 9j symbols, and the fact that $2(\ell + s + j)$ is always even, the 9j symbol in Eq. (23) has to be zero for odd $k + p + r$. Hence only even $k + p + r$ TMTM contribute in this limit. This in turn gives that odd- (even-) $r$ multipolar moments have to be TR-odd (TR-even). The different rank-$r$ TMTM are then related through Eq. (23); e.g., in the case of $r = 5$, the TMTM with kpr = 505, 415 and 615 have fixed ratios, with the 615 TMTM being the largest. This is just an example of the general case: the spin-dependent ($p = 1$) TMTM have the largest weight for the $r = k - 1$ tensor moments, as these are favored by the spin-orbit coupling for a less-than-half-filled f shell.
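The vanishing of the jj-diagonal 9j symbol for odd $k + p + r$ can be checked symbolically. The sketch below uses SymPy's wigner_9j with the row arrangement $(\ell\, s\, j;\ \ell\, s\, j;\ k\, p\, r)$, which is the arrangement suggested by the surrounding text (an assumption, since the displayed equation itself was lost in extraction).

```python
# Sketch: check that the 9j symbol {l s j; l s j; k p r} with l = 3, s = 1/2,
# j = 5/2 vanishes for odd k+p+r (row-exchange phase (-1)^(sum of all entries),
# and the two identical rows force the symbol to equal minus itself).
from sympy import S
from sympy.physics.wigner import wigner_9j

l, s, j = 3, S(1) / 2, S(5) / 2
cases = [(0, 1, 1), (1, 0, 1), (5, 0, 5), (4, 1, 5), (6, 1, 5), (1, 1, 1), (2, 1, 2)]
for k, p, r in cases:
    val = wigner_9j(l, s, j, l, s, j, k, p, r)
    parity = "odd" if (k + p + r) % 2 else "even"
    print(f"kpr = {k}{p}{r} ({parity} k+p+r): 9j = {val}")
```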
Intermediately strong spin-orbit coupling
For a full f shell the TMTM OP with the highest rank is $\psi^{617}_t(\vec Q)$ of rank 7, and in this case the TR symmetry is not given by the rank $r$, as odd $k + p + r$ TMTM are also allowed. They are included in all the analysis for completeness, although these tensor moments are somewhat obscure. Within these, the orbital and spin degrees of freedom couple in an axial way, as in e.g. $w^{111} = \frac{2}{3}\,\vec\ell \times \vec s$. These tensor moments only arise from the off-diagonal $j_1 \neq j_2$ blocks of Eq. (22), as they are not allowed in the block-diagonal part, where the corresponding 9j symbols vanish according to Eq. (24). They will be included in the analysis, and it would be fascinating if they were to play a role, but in the present case of URu$_2$Si$_2$ they are of marginal interest.
That URu$_2$Si$_2$ belongs to the case of intermediately strong spin-orbit coupling can be seen directly from the occupation numbers. Both the f occupation, $w^{000}_0$, and the effective spin-orbit coupling, $w^{110}_0$, are essentially independent of the assumed IR and take values around 2.6 and −2.5, respectively. This corresponds to $w^{110}_0$ having | 2012-09-13T22:47:21.000Z | 2012-09-13T00:00:00.000 | {
"year": 2012,
"sha1": "3fd8e293ea4b64a02717244799307dd31e5344eb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3fd8e293ea4b64a02717244799307dd31e5344eb",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
41073434 | pes2o/s2orc | v3-fos-license | Role of Pradhamana Nasya and Trayodashanga Kwatha in the management of Dushta Pratishyaya with special reference to chronic sinusitis
Dushta Pratishyaya is the chronic stage of Pratishyaya, which occurs due to neglect or improper management of the disease Pratishyaya. In modern science, chronic sinusitis can be correlated with Dushta Pratishyaya on the basis of the signs, symptoms, complications, and prognosis. Changing lifestyles, rapid urbanization, and the increase in cases of antibiotic resistance are responsible for the rise in the prevalence of sinusitis. In the present clinical study, 37 patients were registered and were randomly divided into three groups: A, B, and C; of the 37 patients, 31 completed the full course of treatment. In group A, Trayodashanga Kwatha with Madhu was given orally; in group B, Pradhamana Nasya with Trikatu + Triphala Churna was administered; and in group C (combined group), Pradhamana Nasya was administered initially, followed by oral Trayodashanga Kwatha with Madhu. In group A, complete relief was observed in 10% of the patients; in group B, marked improvement was observed in 81.82% of patients; and in group C, marked relief was observed in 60% of patients. In comparison to other groups (Group A and Group B), Group C showed percentage wise better results in most of the symptoms.
the atmosphere. Due to the increase in environmental pollution and the busy lifestyles of today, rhinitis is a common disease in the present era. [4] Improper management of this stage leads to sinusitis, which may later lead to chronic sinusitis. [3] More than 120 million Indians suffer from at least one episode of sinusitis each year [4] and, according to the American Academy of Otolaryngology and Head and Neck Surgery, more than 37 million Americans suffer from at least one episode of sinusitis each year. [5] Jamnagar is a coastal city with many industries. Due to the air pollution, respiratory infections are common. A large number of patients with sinusitis also report to the ENT-Shalakya OPD of IPGT and RA.
Once the sinuses are infected, improper management and poor dietary habits can lead the disease into a chronic phase. [3] This chronic sinusitis is too difficult to drain out completely. It remains as a focus of infection, leading to inflammation in all associated structures, e.g., the tonsils, ear, pharynx, larynx, etc. [6] Ultimately, it may lead to complications such as otitis media, orbital cellulitis, osteomyelitis, etc. [7] In modern medical science, a wide range of antibiotics and decongestants are available for the treatment of sinusitis. [8] But these drugs can help only in the initial stage. Once pus collection forms in the sinuses and is not drained spontaneously, only surgical intervention can help. [7] After drainage of the sinuses, antibiotics can help. FESS (functional endoscopic sinus surgery), the Caldwell-Luc operation, Howarth's operation, etc., are the chief operative procedures to drain the sinus if conservative measures fail. [3] These surgical procedures are associated with many complications, including bleeding, oro-antral fistula, infraorbital anaesthesia, neuralgia, and paraesthesia. [7] The modern treatment modalities for chronic sinusitis are also expensive and not free from side effects. Also, frequent use of antibiotics leads to the gradual development of drug resistance. Moreover, no drug for the treatment of allergy and viral infection is available in modern science. Acharya Sushruta has not clearly mentioned the line of treatment of Dushta Pratishyaya. The treatment advised by Vagbhattacharya for Dushta Pratishyaya is similar to that for Rajayakshma and Krimi Roga. [9] Among the various treatment modalities, Nasya is the chief procedure to drain Doshas from Shirah. [9] Considering all these facts, we carried out this clinical trial to find out the best treatment protocol for the management of Dushta Pratishyaya. As Dushta Pratishyaya is the chronic stage of Pratishyaya and Kapha Dosha is predominant in this condition, Pradhamana Nasya was selected as the chief Shodhana procedure in this study. Trikatu + Triphala Churna for Pradhamana Nasya [10] and Trayodashanga Kwatha [11] were selected. In the combination of Trikatu + Triphala Churna, Triphala neutralizes the Tikshnata of Trikatu and thus makes it easier for the patient to tolerate it. The main ingredient of Trayodashanga Kwatha is Dashamula, and it is Shothahara, Rasayana, and Tridoshashamaka. [12,13] For assessment of the results, we evaluated the subjective improvement in clinical features and, in addition, as an objective parameter, we measured C-reactive protein (CRP) levels, which is a biomarker of inflammation.
Selection of patients
Subjects for the study were selected from among the patients attending the OPD/IPD of the Department of Shalakya, Institute for Post Graduate Teaching & Research in Ayurveda, Gujarat Ayurved University, Jamnagar. We used simple random sampling for selecting the subjects for the study.
Inclusion criteria: Patients having signs and symptoms of Dushta Pratishyaya (chronic sinusitis) and between the ages of 8 years and 80 years were selected for the study.
Exclusion criteria: Patients below 7 years and above 80 years; those having history of hypertension or diabetes mellitus; those with any chronic debilitating infectious disease; those with any inflammatory disease; and those requiring surgical treatment (e.g., for nasal polyp) were excluded from the study.
A proforma was prepared for data collection, incorporating all the relevant points from the point of view of both Ayurveda and modern medicine.
The trial drugs were prepared in the pharmacy of Gujarat Ayurved University.
Ethical clearance
The study protocol was cleared by the ethical committee of the Institute. Written consent was taken from each patient for participation in the study. Patients were free to withdraw from the study at any time without giving any reason. [16]

Dose for Pradhamana Nasya: 1-3 Muchchuti (250-750 mg). Duration: Pradhamana Nasya was given for a maximum of seven sittings, with an interval of 1 day between each sitting. [17]

3. Group C (combined group): Shodhana + Shamana Chikitsa. Pradhamana Nasya was given as described in group B. After completion of seven sittings of Nasya, Trayodashanga Kwatha with Madhu was given orally for 45 days.
Investigations
Follow-up: All patients were asked to come for follow-up every 7 days for 1 month.
Criteria for assessment
The assessment was done by evaluating the changes in the signs and symptoms after treatment with the help of a suitable scoring method, assigning scores in the range 0 to 4. Changes in the value of CRP were also noted.
Statistical analysis
The information gathered on the basis of observations was subjected to statistical analysis, and Student's paired 't' test was applied. For comparison between two therapies, Student's unpaired 't' test was applied. The results were interpreted at the P<.05, P<.01, and P<.001 significance levels.
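As an illustration of the two tests named above, the sketch below applies SciPy's paired and unpaired t-tests to hypothetical 0-4 symptom scores; the numbers are invented for demonstration and are not the trial's data.

```python
# Sketch: paired t-test within a group (before vs. after treatment) and
# unpaired t-test between two groups, on hypothetical 0-4 symptom scores.
from scipy import stats

before = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]   # hypothetical pre-treatment scores
after = [1, 2, 1, 1, 2, 1, 0, 2, 1, 1]    # hypothetical post-treatment scores
t_paired, p_paired = stats.ttest_rel(before, after)
print(f"paired t = {t_paired:.2f}, P = {p_paired:.4f}")

group_b = [1, 2, 1, 2, 1, 1, 2, 1]        # hypothetical post-treatment, group B
group_c = [1, 0, 1, 1, 0, 1, 1, 0]        # hypothetical post-treatment, group C
t_unpaired, p_unpaired = stats.ttest_ind(group_b, group_c)
print(f"unpaired t = {t_unpaired:.2f}, P = {p_unpaired:.4f}")
```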
Observations
The study was conducted on 37 patients who were randomly allotted into three groups.
In Nasavarodha, groups A and B showed almost equal results, i.e., 95% and 95.83%, respectively. Groups A and B showed equal results in Ghranaviplava (100%). All three groups showed equal results in Jwara (100%).
Effect of therapy on x-ray findings
In opacity of the sinuses, group A showed statistically highly significant results in both maxillary sinuses and the right frontal sinus, group B showed highly significant results in both maxillary and frontal sinuses, and group C showed highly significant results in both frontal sinuses.
In the ethmoid sinuses, group A showed percentage-wise better results than group C, but the results are statistically insignificant [Table 2].
Effect of therapy on tenderness over sinuses
Although Group A and Group C showed percentage-wise better results than Group B for tenderness over the frontal, maxillary and ethmoid sinuses, these results are statistically insignificant, whereas the results of Group B are highly significant for the frontal sinuses and significant for the maxillary sinuses [Table 3].
Effect of therapy on hematological values
The total leucocyte count was reduced by 12.44% in group C. The absolute eosinophil count was reduced by 12.69% in group C and by 1.78% in group B. The erythrocyte sedimentation rate (ESR) decreased by 27.5% in group B and by 6.66% in group A; the ESR increased by 35.02% in group C. CRP decreased by 71.25% in group C, by 42.56% in group A, and by 1.91% in group B. In this study, a total of 13 (41.94%) patients in the three groups showed an increase in CRP after treatment (as compared with the values before treatment), though in all cases the increased values were almost within the normal range. All these patients, however, had symptomatic relief. The increase in CRP was more common in group B (Pradhamana Nasya) and was elevated beyond the normal range in some patients. Possibly, the oral drug (Trayodashanga Kwatha) given in the other two groups (Group A and Group C) has an anti-inflammatory action, and the benefit of this drug was not available to the patients in group B; this may be the reason why the CRP level increased more in the patients of Group B [Table 4].
The changes in hematological values are statistically nonsignificant but all patients had good relief in symptoms.
Probably the time period was too short for changes to be seen; further studies with a longer duration of treatment may provide the answer.
Total effect of therapy
In group A, complete relief was observed in 10% of the patients, marked relief in 70% of the patients, and moderate relief in 20% of the patients. In group B, marked improvement was observed in 81.82% of the patients and moderate improvement was observed in 18.18% of the patients. In group C, marked relief was observed in 60% of the patients, moderate relief in 30% of the patients, and mild relief in 10% of the patients.
Selection of the problem
Changing lifestyles, increased pollution, rapid urbanization, and increase in resistance to antibiotics are responsible for the increased prevalence of upper respiratory tract infections. The incidence of upper respiratory tract infection is very high in India. The most common problem related to the upper respiratory tract is Pratishyaya, or rhinitis, which in the later stage converts into Dushta Pratishyaya, or chronic sinusitis. In modern medical science, a wide range of effective antibiotics and decongestants are available. But these drugs can help only in the initial stage; if pus collection forms in the sinuses and does not drain spontaneously, only surgical intervention can help. After drainage of the sinuses, antibiotics can help. The surgical procedures may themselves lead to complications. The modern medical treatment modalities for chronic sinusitis are expensive and not free from side effects. Hence, we felt the need to derive a treatment protocol that would help drain the sinuses, remove the pathology, and promote immunity. [18] Triphala has Ruksha Guna and Tridoshashamaka as well as Sroto Shodhana, Shothahara, Vatanulomana, Kaphanissaraka, antibacterial, anti-inflammatory, and immunomodulatory properties. [18] All these properties of the Trikatu + Triphala Churna Yoga help to remove the pathology and promote local immunity. Thus, Trikatu + Triphala Churna was selected for Pradhamana Nasya in the present study.
Selection of drug
For Abhyanga in the Purva Karma of Nasya, Saindhava Taila was selected, which is described by Acharya Sushruta in the context of Shwasaroga Chikitsa. Saindhava and Tila Taila have Snigdha Guna and Tridoshashamaka properties. [18] Saindhava also has Sukshmasrotogami properties, [19] by which it reaches the minute channels. Hence, Saindhava Taila was selected for Abhyanga as the Purvakarma of Pradhamana Nasya in the present study. Swedana Karma (which is also done in Purva Karma) causes liquefaction of the accumulated Doshas, especially the vitiated Kapha.
Again, sinusitis is inflammation of the sinus mucosa, and Acharya Charaka has accepted Dashamula as a Shothahara Kashaya. In this context, Trayodashanga Kwatha is also indicated in Pratishyaya.
Observations
In the present study, most of the patients (97.29%) were taking Sheetambu, followed by 64.86% who were taking Ati Guru Ahara and 59.45% Vishamashana. The Aharaja Nidanas are a reflection of the changing and busy lifestyles of today and play a role in the pathogenesis of the disease.
The data on Viharaja Nidana reveal that all the patients showed an increase of symptoms in Ritu Vaishamya and Ritu Sandhi, whereas Raja Sevana was found in 78.37% and Dhuma Sevana in 72.97% of the patients. Ritu Vaishamya and Ritu Sandhi are etiological factors that are very difficult to avoid, and Raja-Dhuma Sevana is also very difficult for all persons to avoid at all times; these etiological factors are described as Sadyojanak Nidana for Pratishyaya by Acharya Sushruta. These observations reveal that the area of this study is prone to atmospheric pollution. Chronic contact with such unavoidable Hetus will nullify the effect of the therapy. This could be the reason why patients were not getting relief even after taking treatment for many years.
Krodha as Manasika Nidana was observed in the maximum number of patients (67.56%). It is a reflection of the Prakriti of the patients and also of today's lifestyles, and it plays a major role in the pathogenesis of the disease.
In Kasa, group B showed better results. This may be due to the sudden reduction of postnasal drip due to the treatment. In Mukhadaurgandhya also, group B showed better results, which suggests that Pradhamana Nasya is very effective for removing Dushta Kapha quickly.
When we used Student's unpaired 't' test to compare the results of the combined group with those of the Trayodashanga Kwatha group, there was no statistically significant difference between the groups in the relief obtained in any of the symptoms. However, on analyzing the percentage of relief obtained, the combined therapy gives better results in Shirahshula, postnasal drip, Shirogaurava, Kasa, Aruchi, and Mukha Daurgandhya than Trayodashanga Kwatha alone.
On comparing Group C (combined therapy) with Group B (Pradhamana Nasya), Group C showed better (highly significant) results in Kasa and Mukha Daurgandhya and significantly better results in postnasal discharge. In other symptoms, such as Nasasrava, Shirogaurava and Aruchi, Group C showed symptom-wise better results than Group B, but the difference is statistically insignificant.
Better results in the Sarvadehika Lakshanas were also seen in the combined group. The combined group also showed less recurrence of symptoms after completion of treatment compared with the other two groups. This may be due to an immunomodulatory property of the drug. Recurrence was more common in group B than in the other groups. This may be due to incomplete Shodhana. While Nasya will relieve the local pathology, the general vitiation of Doshas and Agni is not dealt with efficiently; hence, though rapid symptomatic relief may be seen after Nasya, recurrence can occur.
The variations in the results also depend upon the chronicity of the disease, effect of the weather, and the Bala of the individuals. It also depends upon the Bhishagvashyata of the patients.
Mode of action of Nasya
In the Purva Karma of Nasya, Abhyanga and Swedana are done. Abhyanga causes Mruduta of the Doshas, and Swedana causes Vilayana (liquefaction) of the accumulated Doshas. In the language of modern science, Abhyanga and Swedana increase the local blood supply, and Swedana also liquefies the mucus. Due to vasodilatation, the permeability of the blood vessels increases, which makes drug absorption faster.
In the Pradhana Karma, the drug in Churna form is administered into the nostrils through the Pradhamana Nadiyantra with the patient in the head-low position. The drug thus reaches the Shringataka and from there, through different Siras, spreads to other parts such as the Netra and Shirah, removing the morbid Doshas. [22] By the properties of the drug, it causes Srotoshuddhi and restores the Anulomana Gati of Vayu (mitigation of Vayu), which is hampered in Dushta Pratishyaya.
In the Pashchata Karma, Urdhvanga massage and Swedana help to drain out the Doshas, and Swedana also causes Srotomukhavishodhana.
In addition, the drug compound has Srotoshodhana, anti-inflammatory, antibacterial, and other properties, which help to treat the disease.
Mode of action of Trayodashanga Kwatha
Most of the ingredients of Trayodashanga Kwatha are Katu and Tikta Rasa Pradhana and Laghu, Ruksha, Tikshna Guna Pradhana, having Ushna Veerya and Katu Vipaka, with Vatanulomana, Shothahara, and Srotoshodhana properties. All these properties are very useful to remove the Srotorodha and promote the expulsion of vitiated Kapha from the sinuses. The Deepana and Pachana properties of Trayodashanga Kwatha cause Amapachana. By Amapachana and also Dhatvagnideepana, the Sara Dhatus are formed properly (Samyaka), which increases the Vyadhikshamatva (immunity). The Vedanasthapana, Kasahara, Kanthya, and other properties provide symptomatic relief. The anti-inflammatory properties of the ingredients reduce the inflammatory process in the nose and paranasal sinuses. The antibacterial activity arrests secondary infection and prevents recurrence of the disease.
Conclusion
Each of the three groups showed better results relative to the other two in different symptoms. In comparison with the other groups (Group A and Group B), Group C showed percentage-wise better results in most of the symptoms.
The recurrence rate in the combined group was significantly lower than in the other groups. Better results in the Sarvadehika Lakshanas were also seen in the combined group. For good and long-lasting results, Shodhana or Shamana therapy alone will not be adequate. A combination of Shodhana and Shamana therapy will yield better and longer-lasting effects. | 2018-04-03T05:33:45.265Z | 2010-07-01T00:00:00.000 | {
"year": 2010,
"sha1": "6558c8b6c3c7e92f3ac8837e42e933a5dabe566a",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3221066",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "1dcde3dfbb6a9524439b8a413aec61b6cb9cba5f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261573518 | pes2o/s2orc | v3-fos-license | STAIN FUNGI CONTROL IN PINUS SP. WOOD WITH SILICA MESOPOROUS PARTICLES LOADED WITH ESSENTIAL OILS
The use of essential oils (EO) carried onto mesoporous silica particles (MSPs) was tested to control pinewood stains. Three types of MSPs were synthesized and physicochemically characterized with N2 physisorption (type IV isotherms), X-ray diffraction [Miller indices (100), (110), (200)], scanning electron microscopy, zeta potential (negative values), dynamic light scattering (< 200 nm) and thermogravimetric analysis (5% to 10% weight loss). A response surface design was used to find the EO loading conditions to control stain; the latter was measured as colour change with the CIEDE 2000 formula. The essential oil loading onto MSPC was physicochemically confirmed by a weight loss of 47% in the thermogravimetric analysis. The Citrus, Syzygium sp. and Tagetes sp. oils carried onto mesoporous particles MSPC (30:1 w/w) controlled the pinewood stain caused by Alternaria sp. and Geosmithia sp. This was demonstrated by the absence of pigmentation and scarce fungal growth.
INTRODUCTION
Wood stain is caused by a heterogeneous fungal consortium, which varies depending on the wood extraction sites, the living history of the tree, and the species. Generally, wood stain does not affect the structure of the wood but causes aesthetic and economic losses (Komut 2022). The microbial decomposition of wood is controllable by an array of different organisms or their components. Sajitha et al. (2018) controlled sapstain on rubber wood caused by the fungus Lasiodiplodia theobromae with the use of Bacillus subtilis' secondary metabolites. Studies like this exhibit the potential use of secondary metabolites to control wood stains; other fungi are also sensitive to plant bioactive compounds (Deresa and Diriba 2023).
Essential oils (EOs) are complex mixtures of plant secondary metabolites, abundant in e.g., terpenes, terpenoids and phenylpropenes. These components usually attach to the surface of microorganisms, penetrate the cell membrane, and cause their death (Jobdeedamrong et al. 2018). EOs are used in agricultural and biotechnological industries for their antifungal, antimicrobial and antioxidant activities. For instance, sapstain biocontrol in yellow pinewood (Pinus spp.) was achieved with Pongamia pinnata seed oil (Sahu et al. 2022). Likewise, components isolated from EOs such as cinnamaldehyde, eugenol and α-pinene inhibited yeast-like Candida spp. (Saracino et al. 2022). These show that EOs can control fungi that affect surfaces, namely, compact discs, parchment, and other forms of information storage (Cappitelli and Sorlini 2005), as well as, structural materials, such as wood. However, their physicochemical properties, including insolubility in water and instability, complicate their use (Jin et al. 2019). The loading of essential oils (EOs) within silica mesoporous particles (MSPs) protects their physicochemical and bioactive properties.
The advantages of MSPs are of interest for carrying EOs because MSPs are mechanically resistant, chemically stable and biocompatible. Their applications are determined by properties such as a broad specific surface area and adjustable diameter and pore size. Complexes formed by mesoporous particles and EOs or their pure components are effective in controlling the growth of problematic microorganisms (Bravo et al. 2018). Lu et al. (2020) reported an increase in the release time of citral from 5 h to 96 h when loaded onto mesoporous silica nanocolumns. This establishes the possibility of using MSPs as biocide agent carriers in agriculture, among other industries, such as the wood industry. The interest in obtaining environmentally friendly alternatives to synthetic chemical agents to control wood stains is still relevant. In this work, the use of EOs loaded onto MSPs is assessed to control wood stains caused by Alternaria sp. and Geosmithia sp. on Pinus sp. wood.
Conservation and maintenance of the fungal isolates
Wood stain fungi Alternaria sp. and Geosmithia sp. were isolated from pinewood splinters from a stained pine beam provided by the wooden collection of the Faculty of Engineering and Wood Technology at the Michoacan University of Saint Nicholas of Hidalgo. Molecular methods identified the fungi to the genus level, and they were propagated on potato dextrose agar (Martínez-Pacheco et al. 2022).
Essential oils (EOs)
Syzygium sp. (clove) (SO) and citric (CO) essential oils were obtained from a local market. Tagetes sp. essential oil (TO) was donated by Espinoza-Madrigal (2019).
Analogue MCM-41 particle (MSP) synthesis
A hydrolysis/condensation particle synthesis was performed according to Ma et al. (2011). Briefly, 0.2 g of hexadecyltrimethylammonium bromide (CTAB) was dissolved in 38.4 mL of deionized water by magnetic stirring at 400 rpm at 25ºC. Afterwards, 13.6 mL of 96% ethanol and 4 mL of 29% ammonium hydroxide were added to the mixture and stirred at 400 rpm for 5 min. Then 3 mL of cyclohexane or 2.5 mL of benzene were added to the mixture as swelling agents and mixed at 400 rpm for 5 min. Three types of particles were synthesized and designated as follows: MSP (without swelling agent), and MSPC and MSPB (with cyclohexane and benzene as swelling agents, respectively). Subsequently, 1 mL of tetraethyl orthosilicate (TEOS) was added and stirring continued at 400 rpm for 3 h. The particles were recovered by vacuum filtration and then calcined at 540ºC in a furnace for 9 h to remove the surfactant. The particles were kept in airtight glass containers.
MSP and pinewood probe characterization
Wood from Pinus sp. without any preservative treatment was collected from a local sawmill and fractioned into wooden blocks (probes) of uniform dimensions. MSP diameters and surface charges were determined through dynamic light scattering (DLS) and zeta potential techniques (NanoBrook® 90 Plus). Suspensions of the samples were prepared in ultrafiltered deionized water at 2 ppm and sonicated for 10 min (Branson® 2510). Pore diameter and size distribution were analyzed via nitrogen adsorption/desorption isotherms (Quantachrome® ASiQwin) with the BJH mathematical model and compared with the IUPAC classification of isotherms and hysteresis loops. The X-ray diffraction patterns (XRD) were obtained with a D8 Advance DAVINCI® diffractometer at low angles (2θ = 0º to 5º) with CuKα as the radiation source (λ = 1.5406 Å). A thermogravimetric analysis (TGA) was performed on the particles with and without EO in a simultaneous thermal analyzer (STA 6000) with a heating ramp from 25ºC to 550ºC in 10ºC/min heat steps. The results were expressed as relative weight. MSP morphology and the surface of the probes after removing the fungi were analyzed via scanning electron microscopy (SEM) (Jeol® JSM 7600F) at 5 kV. Before their analysis, the probes were covered with a copper layer by sputter coating. The EO composition was characterized by gas chromatography (GC, Thermo Scientific model TRACE® 1310). Helium was used as the mobile phase. A 15 m long column with a 0.25 mm internal diameter and a 0.25 μm stationary phase layer was used. The chromatograms were analyzed with the Xcalibur® software (Thermo Scientific® 2019) and compared with standards and the NIST® (2008) MS v.2.0 library. The reported compounds were those that met the following criteria: an area percentage ≥ 2 and a coincidence factor ≥ 900.
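The surface-area figures reported below were produced by the instrument software; as an illustration of how such a value is derived, the following sketch fits the standard linearized BET equation to hypothetical N2 adsorption points (the relative pressures, volumes and the N2 cross-section of 0.162 nm² are textbook-style values, not data from this study).

```python
# Sketch: BET surface area from N2 adsorption data via the linearized form
# 1/[v((p0/p) - 1)] = (c - 1)/(v_m c) * (p/p0) + 1/(v_m c).
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # p/p0, BET range
v_ads = np.array([180., 210., 228., 242., 254., 265.])   # cm^3(STP)/g, hypothetical

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)
v_m = 1.0 / (slope + intercept)        # monolayer capacity, cm^3(STP)/g
c = slope / intercept + 1.0            # BET constant

# 1 cm^3(STP) N2 ~ 2.687e19 molecules; N2 cross-section ~0.162 nm^2.
S_BET = v_m * 2.687e19 * 0.162e-18     # m^2/g
print(f"v_m = {v_m:.1f} cm3/g, c = {c:.0f}, S_BET ~ {S_BET:.0f} m2/g")
```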
Loading of EOs onto MSPs
The EOs were loaded onto the MSPs, using hexane as a solvent: 100 mg of MSPs were stirred in a hexane-EO solution in an orbital shaker at 400 rpm at 25ºC. An experimental design with the response surface method according to Khairudin et al. (2019) was used, with three levels of the EO:MSP ratio (10:1, 20:1 or 30:1 w/w) and of time (12 h, 24 h or 36 h).
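The treatment grid implied by these two factors at three levels each can be enumerated directly; the sketch below lists the nine runs (a dedicated response-surface package would add centre points and quadratic model fitting, which are omitted here).

```python
# Sketch: enumerating the 3x3 factor grid (EO:MSP ratio x contact time)
# that underlies the loading experiment.
from itertools import product

ratios = ["10:1", "20:1", "30:1"]   # EO:MSP (w/w)
times_h = [12, 24, 36]              # orbital-shaker contact time, hours
for i, (ratio, t) in enumerate(product(ratios, times_h), start=1):
    print(f"treatment {i}: ratio {ratio} w/w, {t} h")
```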
Evaluation of the control of pinewood stain from Alternaria sp. and Geosmithia sp. with EOs onto MSPs
Stain control was evaluated in situ on probes measuring 7 mm x 3 mm x 70 mm following the ASTM D4445-09 (2009) standard. They were sterilized at 121ºC and 121 kPa for 20 min. After that, the probes were covered with a layer of MSPs loaded with EOs obtained in the previous experiment (4 mg MSPs). The MSPs from each treatment of the experimental design were applied on the probes, arranged in Petri dishes (90 mm x 15 mm) on glass slides (three probes per Petri dish). A 25 mm² inoculum of the fungi was added to each probe, and the probes were incubated at 25ºC for 4 weeks. The probes with MSPC+CO were inoculated with Geosmithia sp., and the ones with MSPC+SO and MSPC+TO were inoculated with Alternaria sp. Probes treated with hexane, 4 mg of empty MSPs, 1.8 mg of non-loaded EO and 2% (w/v) OPP were used as controls. Humidity in the Petri dishes was maintained with filter paper dampened with sterile deionized water. The colour of the probes was measured with a colour meter (TES® 135A) at the following stages: after sterilization, after the addition of the MSPs and after removing the inoculum from the probes. Three colour measurements were made on each probe using the CIE L*a*b* coordinates. The colour difference (ΔE) was calculated with the software R using the CIEDE 2000 formula (Luo et al. 2001). The code programmed for this experiment was published on RPubs by Méndez-Pérez (2021).
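For readers without the R code cited above, an equivalent ΔE computation is available in Python via the colormath package; the L*a*b* values below are hypothetical probe readings, not measurements from this study.

```python
# Sketch: CIEDE2000 colour difference between two CIE L*a*b* measurements,
# e.g. a probe before and after fungal exposure (values are hypothetical).
from colormath.color_objects import LabColor
from colormath.color_diff import delta_e_cie2000

before = LabColor(lab_l=78.2, lab_a=5.1, lab_b=21.4)   # hypothetical pre-exposure
after = LabColor(lab_l=62.7, lab_a=4.3, lab_b=18.9)    # hypothetical post-exposure
dE = delta_e_cie2000(before, after)
print(f"Delta E (CIEDE2000) = {dE:.2f}")
```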
Statistical analysis
All experiments were performed in duplicate. The results are shown as mean ± standard error. The software Statistica© v.8 was used to obtain the significance of the data.
Physicochemical characterization of MSPs and EOs
Growth of chromogenic fungi is a relevant problem that has not been solved. Essential oils inhibit the growth of stain fungi such as Alternaria sp. and Geosmithia sp. Loading EOs onto mesoporous particles is an alternative to extend their biocide action.
Three types of particles were obtained with different swelling agents and were physicochemically characterized. They showed type IV isotherms (Fig. 1a) with type A (or H1) hysteresis and tubular pores. MSPs exhibited a specific surface area of 1235.20 m²/g and a pore volume of 0.83 cm³/g. This result is similar to the physicochemical parameters reported for MCM-41 particles (Shao et al. 2020). The swelling agents cyclohexane and benzene produced particles with disordered pore structures (Popa et al. 2020), as shown in Fig. 1d. The mean pore diameter of 2.20 nm was similar in MSPB and MSP. The pore size distribution was uniform for MSPs, but it was not uniform for MSPC and MSPB. This pore size distribution was attributed to the swelling agent being introduced into some micelles and not into others. Similarly, trimethyl benzene (TMB) as a swelling agent produced different pore diameters in mesoporous hydroxyapatite particles (Zeng et al. 2014). The X-ray diffraction analysis (Fig. 1b) for the MSP showed the typical pattern for a hexagonal array of MCM-41 particles.
Fig. 1: Characterization of the MSP and MSPC+CO particles: a) N2 adsorption isotherms, b) X-ray diffraction, c) SEM of MSPC without EO, d) pore size distribution, e) thermogravimetric analysis, and f) SEM of MSPC with EO. The colours blue, red, green and yellow represent MSP, MSPC, MSPB, and MSPC+CO, respectively.
The values of zeta potential (Fig. 3) for the MSP, MSPC and MSPB were in the stability interval (the instability interval is -30 mV to 30 mV). The negative values of zeta potential on the surface of the MSPs are a consequence of silanol groups, which were also observed on mesoporous silica particles with three different pore sizes (2 nm, 5 nm and 10 nm) (Ahn and Kwak 2020). The morphologies of MSPC without EO (Fig. 1c) were observed by scanning electron microscopy. The particles exhibited a semispherical shape; the MSPC diameters were between 200 nm and 400 nm.
Thermogravimetric analysis was done for the MSP and MSPC (Fig. 1e). The MSPs lost approximately 5% of their total weight. The weight loss at 100ºC is a consequence of the loss of water physisorbed on the surface. A similar result for MCM-41 particles was reported by Hachemaoui et al. (2020).
The antifungal effect of the EOs depends on their composition and the quantity of active molecules. The gas chromatograms of the EOs (Fig. 2) show the major components of the CO (Fig. 2a). D-limonene stood out with a relative abundance of 47.34%, while α-pinene, β-pinene, γ-terpinene and terpinolene were 7.39%, 21.35%, 15.34% and 3.01%, respectively. D-limonene is a monoterpene that inhibits the growth of a wide variety of fungi by damaging the cell membrane (Sattary et al. 2020). In the SO chromatogram (Fig. 2b) the major components were eugenol (52.89% relative abundance), caryophyllene (14.17%), eugenol acetate (12.89%), humulene (5.89%) and caryophyllene oxide (3.02%). Eugenol is effective as an antifungal because it inhibits ergosterol synthesis, altering the cytoplasmic membrane (Li et al. 2021).
Influence of the swelling agents on the particle size and zeta potential in loading of EOs onto MSPs
To understand the interaction between variables while looking for a more significant antimicrobial effect in the loading of EOs onto MSPs, response surface experimental designs were used. Yue et al. (2020) used this methodology to inhibit Botrytis cinerea with tea tree EO and cyclodextrins. The experimental design was used to select the loading conditions of the essential oil that resulted in the most significant stain control on pinewood.
The influence of the swelling agents on the way the EOs are carried onto the MSPs was analyzed. Changes in particle size and zeta potential indicate the adhesion of the EOs on the surface of the particles or inside the material's pores (Fig. 3). Based on this, we selected the MSPC to fulfil the purpose of this work, since the zeta potential values of the MSPC+CO were similar to the values of the empty MSPC; the inference is that the particles' surface did not present a layer of oil and the particles carried the oil inside their pores. On the other hand, the zeta potential values of the MSP changed, which suggests the formation of an oil layer on the particles. This does not allow for control of the quantity of EO carried on the particle (Gao et al. 2019).

Fig. 3: Influence of the swelling agents on the particle size and zeta potential in loading the EOs onto MSPs. The particle diameter (bars) and zeta potential (dots) values of MSP, MSPC and MSPB are represented in blue, red, and green, respectively. The results are the mean of n = 3 ± SE. * Indicates a significant difference in particle diameter between the control and the treatment (Student's t-test, p > 0.05). ▲ Indicates a significant difference in zeta potential between the control and the treatment (Student's t-test, p > 0.05).
The MSPC were characterized after the addition of the CO to further confirm its presence inside the pores. In the X-ray diffractogram for MSPC (Fig. 1b) the Bragg reflections were modified, meaning that the structure is irregular. MSPC+CO showed a diffraction pattern similar to MSPC. Poyatos-Racionero et al. (2021) obtained comparable diffractograms with MCM-41 particles loaded with carvacrol, cinnamaldehyde, and thymol. The change in the Bragg reflections is due to the quantity of organic matter (EO) inside the material's pores. The surface of the MSPC+CO (Fig. 1f) presented roughness attributed to oil adhesion on the particles. According to the TGA analysis, 47% of the total weight of the MSPC+CO particles is EO. This confirms the presence of the oil in the particles.
Loading conditions of the EOs onto MSPC and control of stain in Pinus sp.
The loading conditions of the EOs onto the MSPC were selected based on the results shown in Fig. 4. The factors considered were the EO:MSPC ratio and the time of interaction between the EOs and MSPs. The colour change (ΔE) was the criterion for measuring the stain control on the probe. The anti-stain effect on pinewood was evaluated with MSPCs loaded with CO, TO and SO. The results using MSPC+CO (Fig. 4c) exhibited a lower change of colour when the EO:MSPC ratio was 30:1 w/w. The MSPC+TO showed similar behaviour with a stronger influence of time: 12 h were adequate for interaction. The photographs of the probes after treatment with MSPC+EO showed control of stain with the three oils. The loaded EOs had better control of the stain on pinewood than the non-loaded oils. The three EOs loaded onto MSPC had similar effects in the control of wood stain (Fig. 4). The treatments that resulted in the least stain on pinewood were: 11 for MSPC+SO, 8 for MSPC+TO and 4 for MSPC+CO. Delta E values reached lower levels with MSPC+TO.

Fig. 4: Selection of the loading conditions of the EOs onto the MSPC for stain control on Pinus sp. probes. Panels A, B and C show the response surface graphs that resulted from the experimental design for SO, TO and CO, respectively. The colour blue represents low stain and the colour red represents high stain. The photographs show representative probes of the treatments with EOs without a carrier, the treatments with loaded EOs and the controls (without inoculum, inoculated with Alternaria sp. and with Geosmithia sp.).
Control of the development of Geosmithia sp. on the surface of Pinus sp. treated with MSPC+CO
The SEM images (Fig. 5) after treatment of the probes with MSPC+CO showed the absence of fungal growth and pigmentation. A decrease in fungal biomass was observed (Figs. 5c,d,f). A scarce quantity of conidia (marked with red arrows) was found on the treated probes. The effectiveness of the MSPC in carrying EO was demonstrated. Therefore, stain control on the probes was achieved because the fungi Alternaria sp. and Geosmithia sp. are sensitive to the three EOs studied. Panel D stood out; it corresponds to treatment 4, in which the EO:MSPC ratio was 30:1 w/w and the interaction time for the loading of EO was 12 h. This evidence showed that the CO loaded onto MSPC prevented the adherence of the fungus to the surface of the wood and, consequently, controlled the stain on the probes.
CONCLUSIONS
The essential oil loading onto MSPC was physicochemically confirmed by a weight loss of 47% in the thermogravimetric analysis. The Citrus, Syzygium sp. and Tagetes sp. oils carried onto mesoporous particles MSPC (30:1 w/w) controlled the pinewood stain caused by Alternaria sp. and Geosmithia sp. This was demonstrated by the absence of pigmentation and scarce fungal growth.
ACKNOWLEDGMENTS
Thanks to UMSNH for partial funding. TMP and JMA obtained a CONAHCyT scholarship. | 2023-09-07T15:12:24.515Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "85b87271ee909f94636438436927a2520aaaba94",
"oa_license": null,
"oa_url": "https://doi.org/10.37763/wr.1336-4561/68.4.692703",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c880b8ebcd8e33a2d67e43f5f42adb79c8fa39f4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
235321251 | pes2o/s2orc | v3-fos-license | Use of Electronic Medical Records to Estimate Changes in Pregnancy and Birth Rates During the COVID-19 Pandemic
Key Points Question Can electronic health care records be used to monitor and project changes in pregnancy and birth rates after the COVID-19 pandemic societal shutdown? Findings In this cohort study of pregnancies within a large US university health care system, a model using electronic medical records (used retrospectively from 2017 and modeled prospectively to 2021) projected an initial decline in births associated with the COVID-19 pandemic societal shutdown, predominantly related to fewer conceptions following the societal changes instituted to control COVID-19 spread. This decline was followed by a projected birth volume surge anticipated to occur in summer 2021. Meaning These findings suggest that electronic medical records can be used to model and project birth volume changes and demonstrate that the COVID-19 pandemic societal changes are associated with reproductive choices.
Introduction
The COVID-19 pandemic and associated societal measures to control the spread of the virus have brought about significant changes in almost every aspect of life in the US and globally. The economic and societal effects of the pandemic are vast, with consequences not only for large systems such as health care and education, but also for individuals and families. Previous large societal disruptions in the US, such as the 1918 H1N1 Influenza pandemic, 1 Great Depression (1929), 2 and Great Recession (2008) 3 have influenced population growth and fertility rates, but the exact effects of COVID-19 on fertility and birth rates are speculative to date. In the US, a multitude of factors could variably influence pregnancy rates. For example, economic concerns may lead people to postpone conception, whereas decreased access to contraceptive services could lead to increased unintended pregnancy rates. A 2020 Guttmacher Institute study 4 demonstrated that 40% of women reported changes in plans for childbearing due to the COVID-19 pandemic, 41% of women with children reported worry about being able to take care of their children, and 33% of women had to delay or cancel an appointment for reproductive health or contraception care. Additionally, the COVID-19 pandemic has highlighted longstanding economic and health disparities in the US, 4-6 yet how such disparities will influence fertility rates and obstetric complications are unknown at this time.
Anticipatory planning for birth rates is important for health care systems to appropriately anticipate increasing or decreasing staffing needs and patient volumes. Population size and population dynamics are of interest to economists to document size of economy and model working and/or aging populations. Often, the consequences of major societal events such as economic and natural disasters or infection pandemics are documented only after the fact or as decreasing birth rates are noted. At the University of Michigan Hospital, we use projection modeling and active management of estimated date of deliveries (EDD) to control obstetric birth volumes in our system and anticipate staffing needs for our birth center. In this analysis, we applied our EDD management modeling techniques to project anticipated delivery volumes after the COVID-19 pandemic and describe the projected decrease in birth rates from our center.
Methods
This observational cohort study including all pregnancy episodes within the health care system of the University of Michigan Hospital, a large US academic hospital, examined birth rates retrospectively from 2017 through current pregnancy episodes and modeled them prospectively to October 2021.
The primary exposure was the COVID-19 pandemic societal shutdown. The stay-at-home order in Michigan was placed on March 15, 2020. Our primary outcome of interest was the start of pregnancy episodes within our health care system, trends in the volume of pregnancies, and evaluation of potential explanations for these volume changes. This analysis was approved by the University of Michigan institutional review board, which exempted the study because it used deidentified data kept by the system for institutional quality improvement, and deemed that no direct informed consent from individual patients was required. Data reporting and analyses followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cohort studies.
Pregnancy episodes begin with a patient-initiated contact with our health care system to request pregnancy-related care, such as initial prenatal visit or ultrasound for pregnancy dating.
Pregnancy episodes remain open until 1 of the following events occurs: (1) delivery, (2) documented pregnancy loss (ie, miscarriage, ectopic, etc), or (3) no contact with the patient for 90 days after the EDD. The final EDD was determined by best obstetric estimate 7 based on last menstrual period (LMP) and ultrasound and is associated with the pregnancy episode in the medical record. We excluded any deleted episode that had been entered in error as noted by a comment entered in the electronic medical record (EMR).
Statistical Analysis
We conducted descriptive data analyses using the pregnancy episode data from our health care institution's internal EMR system. The data included patient-specific demographic characteristics (eg, age and race), clinical characteristics (eg, parity), and socioeconomic indicators (eg, insurance status, zip code of residence). Race/ethnicity is patient-reported and documented in the EMR. Because this analysis consists of pregnancy episodes, no sex-specific analyses were performed. Baseline demographic characteristics in our obstetric population were analyzed in 2 ways: an annual comparison of pregnancy episodes between 2019 and 2020 and a comparison of new pregnancy episodes started before and after the COVID-19 mandated societal shutdown measures in March 2020. For the before and after comparisons, we defined time periods as January 1 through March 31, 2020, as the pre-societal shutdown period, and April 1 through June 30, 2020, as the post-societal shutdown period. Demographic characteristics were compared with parametric or nonparametric tests as appropriate for continuous variables and with χ2 tests for categorical variables.
We analyzed the volume of initiation of new pregnancy episodes in our system compared with prior years and explored the association between the COVID-19 lockdown measures and the volume of pregnancy episodes and projected future births. We identified the number of pregnancy episodes that began each week between January 2017 and March 2021 and used an interrupted time series analysis to characterize the differences in pregnancy starts both before and after the initiation of the spring lockdown. A Poisson regression model was used, and Fourier terms were used to adjust for annual seasonal patterns in conception, as described by Bernal et al. 8 Because there is a delay of at least 2 weeks between conception and the start of a pregnancy care episode, we assigned the intervention binary variable of pre- and post-societal lockdown to begin March 29, 2020, 2 weeks after the start of the spring stay-at-home order, and last until June 14, 2020, 2 weeks after the end of the order. To examine whether prenatal care was being delayed during the COVID-19 pandemic, we examined the time between LMP and contact with the health care system, comparing the time frame before the COVID-19 shutdown with the time period after the shutdown using both the Kolmogorov-Smirnov test and the Fleming-Harrington test for right-censored, 2-sample time data. We projected future delivery volumes each month from October 2020 through February 2021 using EDDs of active pregnancy episodes with a growth rate applied, which represented the volume of patients who would deliver in an upcoming month who did not yet have a record in the EMR. This growth rate was based on data from 2019. We tested the accuracy of our birth volume projection models compared with actual birth volumes for December 2020 through February 2021.
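A minimal sketch of such an interrupted time-series model is given below, using a Poisson GLM in statsmodels with one annual Fourier pair and a lockdown indicator; the weekly counts are simulated and the lockdown window is an assumed index range, so this only illustrates the model form, not the study's estimates.

```python
# Sketch: interrupted time-series Poisson regression (after Bernal et al.)
# on simulated weekly pregnancy-episode counts with Fourier seasonality.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = np.arange(220)                                      # ~Jan 2017 onward
lockdown = ((weeks >= 168) & (weeks < 179)).astype(float)   # assumed window
mu = np.exp(3.4 + 0.15 * np.sin(2 * np.pi * weeks / 52) - 0.35 * lockdown)
y = rng.poisson(mu)                                         # simulated counts

X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * weeks / 52),   # annual Fourier pair
    np.cos(2 * np.pi * weeks / 52),
    lockdown,
]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
print("lockdown rate ratio:", float(np.exp(fit.params[3])))
```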
We explored factors contributing to observed pregnancy volume changes, including the COVID-19 shutdown of in vitro fertilization (IVF) cycles and preterm birth rates. Rates of preterm birth (ie, delivery at <37 weeks' gestation) per week were explored and quantified using changepoint analysis to determine whether an abrupt change occurred in the time-series data. Preterm birth rates before and after the identified changepoint were statistically compared using the Kolmogorov-Smirnov test.
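The paper does not name its changepoint tool; as one possible implementation, the sketch below detects a mean shift in a simulated weekly preterm-rate series with the ruptures package and then compares the two segments with a two-sample Kolmogorov-Smirnov test.

```python
# Sketch: changepoint detection on a simulated weekly preterm-rate series,
# followed by a KS comparison of the pre- and post-changepoint segments.
import numpy as np
import ruptures as rpt
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
rates = np.concatenate([rng.normal(13.3, 1.5, 80),    # simulated pre-shift rates (%)
                        rng.normal(10.2, 1.5, 26)])   # simulated post-shift rates (%)

breaks = rpt.Pelt(model="l2").fit(rates).predict(pen=10)  # last entry is len(rates)
cp = breaks[0]                                            # first detected changepoint
stat, p = ks_2samp(rates[:cp], rates[cp:])
print(f"changepoint at week {cp}; KS statistic = {stat:.2f}, P = {p:.3g}")
```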
Two-sided P < .05 were considered significant. Statistical analyses were performed with the R statistical programming language version 3.6.0 (R Project for Statistical Computing) and Microsoft Excel (Microsoft Corp). Missing data were handled in the following manner: missing race data were included in the other category, unknown Hispanic ethnicity was categorized as non-Hispanic, unknown insurance was categorized as non-Medicaid/Medicare, unknown zip codes were categorized as greater than 30 miles away, and unknown age was excluded from age analysis but included in pregnancy episode and birth projection analysis.

Results

We used our institutional EDD capacity modeling to project future delivery volumes based on pregnancy episodes in our system (Figure 3). In this modeling, we captured known EDDs within our system and calculated anticipated delivery volumes based on those EDDs. As the time latency to EDD increases, there are fewer EDDs in our system, which is expected because the early gestational ages of those pregnancies with EDDs that are more than 30 weeks away increase the proportion of EDDs that have not yet presented for care. Sixty days away from any given EDD, we are aware of 95% of patients who will deliver on a given day, and even 150 days (approximately 5 months) away from any given EDD, we have relatively complete information about projected deliveries (83% are known).
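A toy version of this projection logic is sketched below: known EDDs are counted per month and inflated by the fraction of deliveries historically known at that lead time, linearly interpolated between the two anchor points quoted in the text (95% at 60 days, 83% at 150 days); the EDD dates are synthetic.

```python
# Sketch: project monthly births from known EDDs, inflating by the fraction of
# deliveries known at each lead time (crude interpolation between anchors).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
today = pd.Timestamp("2020-10-01")
edds = today + pd.to_timedelta(rng.integers(1, 180, size=450), unit="D")
known = pd.Series(1, index=edds).sort_index().resample("MS").sum()

def fraction_known(lead_days: float) -> float:
    # linear between the two quoted anchors; clamped outside that range
    return float(np.interp(lead_days, [60, 150], [0.95, 0.83]))

for month, n in known.items():
    lead = (month - today).days
    projected = n / fraction_known(max(lead, 0))
    print(f"{month:%Y-%m}: {n} known EDDs -> ~{projected:.0f} projected births")
```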
Based on our model, decreased delivery volume was expected from October 2020 through spring 2021. Given that pregnancy episodes are closed after delivery occurs, if there were a sizeable increase in preterm births, pregnancy episodes would fall out of the system and could contribute to the appearance of decreased pregnancy episode volumes. We found, on the contrary, that overall preterm birth rates decreased after the onset of the COVID-19 pandemic (eFigure 2 in the Supplement). Changepoint analysis to detect abrupt changes in time-series data determined that July 2020 represented the time point after which preterm birth rates appeared to be different. Prior to July 2020, the rate was 13.3% (12.0 preterm births of 89.8 weekly births) vs 10.2% from July to December 2020 (9.1 preterm births of 89.7 weekly births) (P = .001).
Discussion
In this cohort study, we demonstrated that EMR data on pregnancy episode volumes and projected birth volumes could be monitored and projected with accuracy, without waiting for changes in birth volume to signal decreasing (or increasing) birth rates after major societal events. We documented an anticipated decline in births after the COVID-19 pandemic, starting in November of 2020 and persisting until spring of 2021, after which a rebound in anticipated births is projected that may exceed the anticipated birth volume based on annual trajectories derived from the prior 5 years of institutional data. In our institution, we modeled pregnancy episode volume to surveil for capacity constraints and projected periods of high and low delivery volume for internal planning reasons.
These same modeling techniques can be applied to estimate impacts on anticipated birth rates within a hospital or health care system, or for local/state epidemiologic surveillance. Our data suggest that the anticipated decrease in the birth rate may be best explained by lower conception rates in the weeks and months immediately following the March 2020 COVID-19 pandemic major societal shutdown. Additionally, we found that preterm birth rates may have decreased after the COVID-19 pandemic shutdown.
Pandemics and other major societal events alter population dynamics by both changing fertility rates and changing aging and death rates. [1][2][3][9][10][11] Changing birth rates in other societal crises have been linked retrospectively to changes in economic conditions, morbidity and mortality rates among reproductive age populations, and other destabilizing societal conditions (eg, separations caused by war deployments, access to health care/contraception). Often, changes in birth rates are recorded as birth rates change, not modeled prospectively to anticipate these changes and plan accordingly. 1 How the COVID-19 pandemic may affect birth rates has been speculated in the lay press 12-17 but has not been fully documented, even as the societal impacts of the pandemic have persisted for longer than a year.
Population dynamics are of interest for governments, businesses, and economists because fluctuations in young and aging, workforce, and school-aged populations are critical variables in the ability to plan appropriately for social well-being, to make investments, and to anticipate economic patterns. 9,11,18,19 In fact, encouragement of childbearing is the focus of recent government and societal policies, such as 12-month paid parental leave and other financial bonuses for childbearing in countries concerned with declining fertility rates. 18
Strengths and Limitations
Strengths of our study include the ability to use novel modeling techniques to project birth rate volumes with relative certainty prospectively instead of waiting until decreased birth rates are observed. Additionally, by layering on demographic characteristics, such as maternal age and race/ethnicity, assisted reproduction, and preterm birth rates, we can estimate the effects of the COVID-19 pandemic on birth demographic characteristics prospectively. We could apply changing demographic patterns within our models to anticipate fertility rates in specific patient populations, plan for high- and low-risk obstetric volumes, and plan for neonatal intensive care unit and pediatric subspecialty needs.
This study had several limitations that should be considered. First, our data are based on a single tertiary care academic center that serves both a local population and referral populations transported from throughout the state of Michigan. Thus, our findings may or may not be generalizable to other centers or other regions of the country. Second, we found no significant maternal demographic differences from pre- to post-COVID-19 pandemic time frames. This may be because it is too early to detect those nuanced changes or because of the specific demographic composition of our obstetric population. Third, our projections require that the pregnancy be known to the health care system. Thus, we fail to capture early pregnancy losses, terminations, or pregnancies that might be ongoing but not presenting for prenatal care.
Conclusions
In this cohort study, we documented decreased birth rates following the COVID-19 pandemic societal changes, followed by a projected birth volume surge, suggesting that major societal changes may factor into reproductive choices. We demonstrated the use of EMR modeling to project birth rates and investigate in real time the underlying reasons for changes in observed birth rates. These projection modeling techniques can be used in partnerships between hospitals and governmental or societal organizations to minimize the detrimental effects of the COVID-19 pandemic on society. | 2021-06-04T06:16:23.112Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "045884af8c99e9b50f84f04ae4bd55fdf6ffa35b",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2780572/stout_2021_oi_210344_1622057768.42286.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5541e9524538639b071b4ff299c9752fb9df70df",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226472951 | pes2o/s2orc | v3-fos-license | Comparative Study on Performance Improvement of SPWM/THIPWM-Based DCMLI Grid-Connected DFIG
Multi-level voltage source converters are integrated in various renewable energy power generation technologies, such as wind and solar, for applications that need higher voltage and higher power. In the wind power generation market, the doubly fed induction generator (DFIG) is now the leading technology, as it is economically feasible and offers a variable-speed, efficient substitute to fossil fuels. This paper proposes a DFIG based on back-to-back diode-clamped multilevel converter systems (DCMLI) fired comparatively by sinusoidal pulse width modulation (SPWM) and third harmonic injection pulse width modulation (THIPWM) techniques. Using these techniques, the DFIG performance is compared for different wind speeds under normal operating conditions. The proposed approach shows that the DCMLI systems generate a near-sinusoidal voltage with lower total harmonic distortion (THD), thus upgrading the power quality produced by the DFIG. Lastly, the variation of the induced rotor voltage frequency and the active power flow with wind speed, as the rotor speed changes from super-synchronous to sub-synchronous speeds, is investigated.
Introduction
Over the last three decades, renewable energy resources have strongly penetrated the conventional power system grid. Wind energy generation has always existed and will continue to display its crucial presence in the near future. Power system operators and planners face an enormous challenge to ensure stability, security and reliability of power supply with a significant contribution of wind energy conversion system (WECS) power plants. The capacity of large wind turbines ranges from 3 to 12 MW. Most wind turbines are of the variable-speed type, using either a doubly fed induction generator (DFIG) with pitch control or a synchronous generator without a gearbox connection.
DFIG-based variable-speed WECSs account for almost fifty percent of all wind turbines installed globally. This is because they have attractive features: lower cost, lower power rating of the back-to-back converters, wide speed span, independent power control, and lower power losses [1]. The DFIG can be operated in two different modes, sub-synchronous or super-synchronous, because the converters have bi-directional power flow capability.
The DFIG is connected directly to the grid via the three-phase stator winding, while the rotor is connected to the grid via back-to-back voltage source converters, the grid side converter (GSC) and rotor side converter (RSC). The converters are linked via a DC bus with a coupling capacitor [2,3]. The converter power rating is about 20 to 30% of the rated capacity of the DFIG.
In the present paper, a WECS based on back-to-back diode-clamped multilevel converter systems (DCMLI) fired comparatively by SPWM and THIPWM techniques is proposed. A comparative study of the wind turbine system based on a DCMLI fired by SPWM has been carried out for different wind speed profiles under normal conditions. Also, the variation of the induced rotor voltage frequency and the active power flow due to wind speed variation, as the rotor speed changes from super-synchronous to sub-synchronous speeds, is considered in the following sections.
The principle of the multi-level inverter
By means of multi-level voltage source converter technology, it is now possible to obtain higher voltage with standard power rating components and nearly pure voltage and current signals. Further, filter size and cost are also reduced by integrating multilevel technology. Multilevel inverters have been commonly used in WECSs because the voltage stress on their switching devices is lower, variations of active and reactive power are reduced, the operating switching frequency can be decreased, total harmonic distortion (THD) is lower, the output current has very low distortion, and the dV/dt value is lower [3,4]. This is obtained by using more switches than in two-level converters. Many multilevel inverter topologies exist, such as diode-clamped multilevel inverters, flying capacitor multilevel inverters and cascaded H-bridge multilevel inverters [5][6][7][8].
Grid-Connected Asynchronous DFIG Wind Power Generator
The variation in wind speed changes the DFIG rotor speed, which results in a change in the rotor current frequency. A rotor voltage of controllable frequency and magnitude can be injected into the three-phase DFIG rotor circuit; therefore, a controllable three-phase voltage source is needed. In this study, two-, three-, four- and five-level DCMLIs are used. The main characteristics of this topology are:
- The profile of its output waveform is nearly sinusoidal.
- All capacitors have to be pre-charged at the beginning of operation.
- The output voltage level depends on the capacitor voltage.
In order to keep the three-phase output voltage balanced, the capacitor voltages must be equal [9]. In the proposed system, SPWM and THIPWM techniques have been implemented as shown in Fig. 1. The total voltage across all capacitors is the DC voltage Vdc, equal to 1150 V. Each switching device has a limited voltage stress across it, clamped to a fraction of Vdc (Vdc/4 in the five-level case) via the clamping diodes. An n-level inverter needs 2(n-1) switching devices, (n-1)(n-2) diodes, and (n-1) voltage sources or capacitors [10,11]. The five-level DCMLI is illustrated in Fig. 2. The capacitors C1, C2, C3 and C4 are connected in series and split the DC-bus voltage into five levels; the total voltage across all capacitors equals 1150 V, each capacitor is 50 mF, and their total capacitance equals 10 mF for the five-level converter case.
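As a quick consistency check on the component counts quoted above, the sketch below (Python, illustrative only) tabulates the per-phase requirements for the two- to five-level cases. Note that the (n-1)(n-2) diode figure counts series-connected diodes of switch voltage rating, as discussed in the next paragraph.

```python
# Sketch: per-phase component counts for an n-level diode-clamped inverter,
# using the relations quoted above: 2(n-1) switches, (n-1)(n-2) clamping
# diodes of switch voltage rating, and (n-1) DC-bus capacitors.
def dcmli_components(n_levels: int) -> dict:
    n = n_levels
    return {
        "switches_per_phase": 2 * (n - 1),
        "clamping_diodes_per_phase": (n - 1) * (n - 2),
        "dc_bus_capacitors": n - 1,
    }

for n in (2, 3, 4, 5):
    print(n, dcmli_components(n))
# For n = 5: 8 switches, 12 series-rated clamping diodes, 4 capacitors.
# Fig. 2 draws six clamping-diode positions (D1-D3'); the count of 12
# corresponds to stacking diodes so each carries only switch-level voltage.
```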
Additionally, each phase has four complementary pairs of switches, arranged so that when a switch is turned ON, its complementary pair is turned OFF. The complementary switch pairs for phase leg A are (S1, S5), (S2, S6) and so on. Furthermore, the switches in a diode-clamped inverter that are switched ON for a phase leg are always in series and adjacent, as shown in Table 1. The five levels of output voltage Van are Vdc/2, Vdc/4, 0, -Vdc/4 and -Vdc/2, as shown in Table 1, where Vdc equals 1150 V in this system. To obtain the voltage level Vdc/2, the switches S1, S2, S3 and S4 are switched ON. Similarly, to obtain the voltage levels -Vdc/2 and 0, the switches (S5, S6, S7, S8) and (S3, S4, S5, S6) are switched ON, respectively, as shown in Table 1. The switching states in one leg of a five-level DCMLI are given in Table 1. The switches S1 to S8 are IGBT switches with a power rating of 500 kVA, equal to 30% of the DFIG rating. If each blocking diode has the same voltage rating as the active switches, some clamping positions require several diodes in series; for each phase the required number of diodes is then (m-1) × (m-2). Therefore, the number of blocking diodes in a DCMLI grows quadratically with the number of levels. The clamping diodes of the five-level diode-clamped inverter are D1, D1', D2, D2', D3 and D3'. These six diodes help clamp each switch voltage to a fraction of the DC bus voltage.
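The Table 1 pattern lends itself to a compact generative rule: the four conducting switches form an adjacent window that shifts down one position per voltage step. The sketch below reproduces the five states; the entries for Vdc/2, 0 and -Vdc/2 are stated explicitly above, and the two intermediate levels are filled in from the standard adjacent-conduction property of diode-clamped legs.

```python
# Sketch: switching states for one leg of the five-level DCMLI described
# above. The states for Vdc/2, 0 and -Vdc/2 are given explicitly in the
# text; the intermediate levels follow the same "four adjacent switches
# on" pattern.
VDC = 1150.0  # V, DC-bus voltage used in the paper

def five_level_states():
    states = {}
    levels = [VDC / 2, VDC / 4, 0.0, -VDC / 4, -VDC / 2]
    for i, van in enumerate(levels):
        # Top level turns on S1..S4; each step down shifts the window by one.
        on = tuple(f"S{j}" for j in range(1 + i, 5 + i))
        states[on] = van
    return states

for switches, van in five_level_states().items():
    print(switches, "-> Van =", van, "V")
```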
During the past decades, SPWM has been used extensively in AC motor control applications. In this method, the three-phase modulating reference signals are compared with a common triangular carrier signal, and the firing signals of the converter's power switching devices are determined from their intersection points. However, this method cannot use the full supply voltage of the inverter and produces high harmonic distortion in the output voltage waveform, because the switching characteristics of SPWM are not symmetrical. THIPWM is a modified technique for generating modulation signals, and it has several advantages compared with SPWM: reduced commutation losses, greater modulation index amplitude, higher utilization of the DC-bus voltage and reduced total harmonic distortion (THD) of the output voltage waveform [12][13][14][15][16][17][18].
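A minimal sketch of the two modulating references follows; the 1/6 third-harmonic injection shown is the classic choice and is assumed here, since the exact injection ratio used in the paper is not stated. The flattened crest of the THIPWM reference is what permits the roughly 15% higher DC-bus utilization mentioned above.

```python
# Sketch of the two modulating references compared in this section: pure
# sinusoidal SPWM and third-harmonic-injection PWM. Gating signals would
# come from comparing these references with a triangular carrier.
import numpy as np

def spwm_ref(t, m=1.0, f=50.0, phase=0.0):
    return m * np.sin(2 * np.pi * f * t + phase)

def thipwm_ref(t, m=1.0, f=50.0, phase=0.0):
    # Classic 1/6 third-harmonic injection; m can be raised up to ~1.155
    # before overmodulation because the peak of the sum stays below 1.
    return m * (np.sin(2 * np.pi * f * t + phase)
                + (1.0 / 6.0) * np.sin(3 * (2 * np.pi * f * t + phase)))

t = np.linspace(0, 0.02, 2001)            # one 50 Hz cycle
print("SPWM peak:  ", spwm_ref(t).max())   # ~1.00
print("THIPWM peak:", thipwm_ref(t).max()) # ~0.87 -> headroom for higher m
```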
A modulation signal and carrier-based SPWM and THIPWM techniques are applied to a four-level DCMLI by comparing the reference waveform signal with the triangular carrier signal; the resulting voltage across the three upper switches in phase c is shown in Fig. 3 and Fig. 4. These techniques have been used to study the effect of a multi-level inverter (MLI) on the power quality of wind energy.
The model of the power grid
The present paper adopts a DFIG with 1.5 MW capacity and vector control [19][20][21]; the DFIG-based WECS is built in MATLAB. The DFIG is connected via a 575 V/25 kV transformer to a 25 kV bus, which is connected to a 120 kV grid through a 25 km feeder line. The grid consists of a 25 kV/120 kV transformer and a 150 MW power plant with a 13.8 kV generator and a 13.8 kV/25 kV step-up transformer. In the DFIG, the stator is directly connected to the grid, and back-to-back MLIs fired by SPWM and THIPWM techniques in the rotor circuit form the link between the rotor and grid sides. The back-to-back MLIs are coupled by a 10 mF DC-link capacitor, as illustrated in Fig. 1. No further control is applied to the pitch angles in vector control, so the paper studies only the effect of the DCMLI on the THD of the DFIG voltage and current. The wind variation case is studied in three steps: 12 m/s from zero to 2 seconds, 9 m/s from 2 seconds to 8 seconds and 12 m/s from 8 seconds to 19 seconds, as shown in Fig. 5.
Simulation results and discussions
The results show that the five-level diode-clamped SPWM converter and the four-level THIPWM converter give the optimum results and meet the IEEE Std 519 limits. The full detailed simulation data are presented for vector control of the DFIG-based WECS with a 5-level SPWM diode-clamped converter. The vector control is divided into GSC and RSC control [22][23]. The THD and performance in output voltage and current for the two-, three-, four- and five-level SPWM converters are compared at different operating points in the system, as shown in Table 2 and indicated in Fig. 1, for wind speeds of 12 m/s and 9 m/s; all of these results are achieved without installing any static filters. The THD in output voltage and current at different points for the two-, three- and four-level THIPWM converters is also given in Table 2.
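In principle, the THD figures of Table 2 can be reproduced from simulated waveforms; the sketch below shows one way, where the sampling rate, harmonic count and synthetic test signal are assumptions made for illustration rather than the paper's simulation settings.

```python
# Sketch: voltage THD from a sampled waveform via FFT.
# THD = sqrt(sum of harmonic magnitudes^2) / fundamental magnitude.
import numpy as np

def thd(signal, fs, f0, n_harmonics=50):
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    def mag(f):                      # magnitude at the nearest FFT bin
        return spectrum[int(round(f * n / fs))]
    fund = mag(f0)
    harm = [mag(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harm))) / fund

# Synthetic check: 5% fifth harmonic on a 50 Hz fundamental -> THD ~ 5%.
fs, f0 = 20_000, 50
t = np.arange(0, 1.0, 1 / fs)
v = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 5 * f0 * t)
print(f"THD = {100 * thd(v, fs, f0):.2f} %")
```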
A two-level DCMLI fired by SPWM shows a THD of 16.2%, which is highly unacceptable, as illustrated in Table 2. Therefore, a five-level DCMLI fired by SPWM is used to improve the supply quality in the rotor circuit. The output voltage waveform of the five-level DCMLI is illustrated in Fig. 6. The DCMLI fired by SPWM injects the same magnitude of output voltage while improving its quality: the THD is vastly reduced to 3.21%, as shown in Fig. 7. A two-level inverter fired by THIPWM (Table 2) shows a THD of 11.56% at 12 m/s wind speed, which is also unacceptable but lower than that of the two-level SPWM inverter. The output voltage waveform of the four-level THIPWM inverter has a THD at the PCC of 4.39%, which is acceptable according to IEEE Std 519. The number of diode-clamped levels needed with the THIPWM technique is lower than with SPWM: only four levels are needed to reach an acceptable THD value according to IEEE Std 519, instead of five levels with SPWM. The performance of the DFIG under the three-step wind speed pattern appears as variations in active power, rotor speed and induced rotor voltage frequency. The DC-link voltage is held constant at 1150 V, its reference value, throughout the simulation, as shown in Fig. 8. Furthermore, the rotor speed profile shows that at 12 m/s wind speed the rotor operates in super-synchronous mode, settling at 1.2 p.u.; when the wind speed decreases, the rotor speed starts to decrease and the mode becomes sub-synchronous, as illustrated in Fig. 9.
The other graphs, in Fig. 10 (a) and Fig. 10 (b), show the root mean square (RMS) values of voltage and current (Vgrid, Igrid) at the PCC after the 575 V/25 kV step-up transformer. In addition, the grid side converter voltage (VGSC) is shown in Fig. 10 (c). The graphs show the effect of changing the wind speed from 12 m/s to 9 m/s and from 9 m/s to 12 m/s on the voltage and current at these points. The active power flow at 12 m/s wind speed, directed from the rotor side to the grid, is illustrated in Fig. 11 (a); the active power flow at the stator is shown in Fig. 11 (b). The total power from the wind system is the power from the rotor side converter plus the power from the DFIG stator at its rated value, and all power flows in the same direction, draining to the grid, as shown in Fig. 11 (c). When the wind speed changes to 9 m/s, the active power flow changes as illustrated in Fig. 11; the active power flow symbols are defined in Fig. 1. The total active power from the wind system, "Ps-Pr", decreases because the direction of active power flow at the rotor side reverses, becoming from grid to rotor side, to compensate for the decrease in rotor speed and flux. The frequency of the rotor side voltage is increased to raise the flux frequency and maintain the active power induced from the DFIG stator at its rated value. Figure 12 illustrates the change in induced rotor voltage frequency. At 12 m/s wind speed, the frequency reaches 27 Hz; when the wind speed decreases to 9 m/s, the frequency starts to increase at 2 seconds. The frequency increases to compensate for the decrease in wind speed, which would otherwise reduce the magnetic field applied to the rotor; the active rotor power reverses its direction from the grid to the rotor side to supply the shortage of magnetic flux. Figure 13 shows the load sharing between the grid and the wind turbine system for a 10 MW, 3 MVAR dynamic load during the wind speed profile of the whole system shown in Fig. 1. Figure 13 (a) shows the active power flow from the wind turbine to the dynamic load, and Fig. 13 (b) shows the active power flow from the grid to the dynamic load. Table 3 shows the modulation index (MI) values during the wind speed changes for the five-level SPWM and four-level THIPWM GSC.
Conclusions
The performance of a diode-clamped multilevel inverter based, vector-controlled DFIG is investigated under different operating conditions. The main focus of this work is to use wind energy with minimal harmonic distortion levels at different wind speeds without static filter intervention. A 5-level DCMLI using the SPWM technique is capable of maintaining voltage THD at 3.21% under normal operating conditions of 12 m/s, and THD does not exceed 4.19% under the worst conditions of low wind speed. A 4-level THIPWM-based DCMLI maintains THD at 4.39% at normal wind speeds and at 4.72% at low wind speeds. The graphs of the rotor injected voltage frequency and active power flow during super-synchronous and sub-synchronous rotor speeds in response to wind speed variations show superior performance. The main contribution of this research work is a simulative study of the internationally well-adopted DFIG grid interconnection using multilevel power electronics topology, avoiding static filters and determining the number of levels that achieves the IEEE Std 519 THD criterion of less than 5% for both techniques. The study showed that the proposed approach using THIPWM or SPWM resulted in superior performance in terms of THD mitigation in all operating conditions. Comparative results were demonstrated and discussed with definite conclusions. | 2020-08-27T09:06:46.972Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "33a150a5522b08da4c776de732898c5e8fcb32de",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/46/e3sconf_ceege2020_01005.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "881edc4f09a89d51f99cd9d5c7f2f1ef206c928e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
248146192 | pes2o/s2orc | v3-fos-license | Treatment of Paranasal Sinus Indolent Mucormycosis
Background and Objectives: Mucormycosis of the nasal cavity and paranasal sinuses is a rare but highly aggressive infection, especially in patients with diabetes or immunosuppressed patients. However, a chronic non-invasive type of mucormycosis can be observed in immunocompetent patients. In this study, we investigated indolent mucormycosis cases and assessed the clinical and radiological outcomes of indolent mucormycosis of the paranasal sinus in healthy patients treated by endoscopic sinus surgery (ESS) alone. Materials and Methods: A retrospective chart analysis of 9 patients diagnosed with indolent mucormycosis of the paranasal sinus and treated by ESS between 2007 and 2017 was performed. The following data were collected from the medical records: age, sex, clinical presentation, pre- and postoperative endoscopic findings, underlying diseases, pathology, and pre- and postoperative radiological findings. Radiologic images were reviewed to assess the involved side and sinus and bony sinus wall changes. Results: The histopathologic findings revealed mucormycosis with broad, non-septated, right-angled hyphae. Despite the diagnosis of mucormycosis, the patients were not given any antifungal agents after surgery. There was no disease progression or recurrence. Conclusion: In cases of paranasal sinus mucormycosis, ESS alone is thought to be sufficient treatment for indolent cases in immunocompetent patients without preoperative computed tomography or endoscopic evidence of invasion, and antifungal treatment may not be necessary.
Introduction
Mucormycosis of the nasal cavity and paranasal sinuses is a rare, opportunistic infection caused by fungi of the class Phycomycetes, order Mucorales. It is highly aggressive and causes a rapidly progressive, life-threatening disease, especially in patients with diabetes or immunosuppressed patients. The most effective treatment consists of immediate surgical debridement, administration of systemic antifungal drugs, and recovery of compromised immunity. 1,2) However, an asymptomatic or chronic indolent type of mucormycosis can be observed in immunocompetent patients. [1][2][3] There is no consensus on treatment for indolent sinonasal mucormycosis patients.
In this study, we collected 9 cases of mucor fungal balls and assessed the clinical and radiologic results of indolent mucormycosis of the paranasal sinus in healthy patients treated by endoscopic sinus surgery (ESS) alone.
Materials and Methods
A retrospective review of nine patients diagnosed with chronic indolent mucormycosis of the paranasal sinus and treated by ESS alone between 2007 and 2017 was performed. Institutional Review Board approval was obtained (IRB number 2018-0153). Informed patient consent was waived because this was a retrospective chart review. The following data were collected from the medical records: age, gender, clinical presentation, pre- and post-ESS endoscopic findings, underlying diseases, histopathology, and pre- and postoperative radiologic findings.
Radiologic images were reviewed to assess the involved side and sinus and changes of the bony sinus wall (Table 1). Preoperative paranasal sinus computed tomography (CT) showed total or partial opacification of the unilateral maxillary sinus with focal calcifications, without bony destruction of the involved sinus or invasion of the orbit or brain, in all patients (Fig. 1). Five patients had inflammatory nasal polyps in the involved paranasal sinus (Fig. 2).
Outcomes
All patients underwent ESS based on clinical symptoms and paranasal CT findings suspicious for fungal sinusitis. Surgical procedures included uncinectomy, ethmoidectomy and middle meatal antrostomy with removal of clay-like, thick brownish-green material. After surgery, the patients were given antibiotics for 3 days and then discharged per the general protocol for fungal ball.
The histopathologic findings showed mucormycosis with broad, non-septated, right-angled hyphae (Fig. 3). Despite the diagnosis of mucormycosis, the patients were not given any antifungal agents after surgery.
All patients had monthly angled endoscopic follow-up (Fig. 4) and paranasal CT follow-up at 3 months postoperatively (Fig. 5). The average follow-up duration was 11.7 months.
There was no recurrence.
Discussion
Mucormycosis is a clinical disease caused by fungi from 4 families of the order Mucorales, a member of the class Zygomycetes. The zygomycetes are hyaline fungi commonly found on breads and fruits. They are ubiquitous in soil, in vegetation, and in the air, making them frequent inhabitants of the upper airway mucosa, but they do not become an infectious source in healthy people. 1,4) They become pathogenic when the patient's resistance has been altered. 4) Some studies suggest that mucormycosis is an opportunistic infection in approximately 5% to 12% of all fungal infections in the high-risk group. 5) Rhinocerebral or craniofacial zygomycosis is a common form and is often fatal. Typically, the patient is at high risk, with leukopenia or metabolic acidosis. Diabetes mellitus is the single most common disease associated with the development of invasive mucormycosis. 5,6) In this study, two patients presented with diabetes mellitus, but both were well controlled and did not show any signs of acidosis, and the disease appeared to be a non-invasive fungal ball.
Rhinocerebral mucormycosis is the most common form, accounting for one-third to one-half of all mucormycosis cases. 5) The first manifestation of rhinocerebral mucormycosis is sinusitis or periorbital cellulitis, 7,8) followed by suffusion of the conjunctiva, blurry vision, and soft tissue swelling. 14,15) However, indolent rhinocerebral mucormycosis in immunocompetent patients is often characterized by subtle symptoms and signs. 16,17) The symptoms in this study were facial pain, nasal stuffiness, foul odor and postnasal drip, non-specific features similar in pattern to chronic sinusitis and fungal ball. No necrotic eschar was observed on endoscopic examination.
The treatment of invasive mucormycosis requires reversal of the predisposing factors, aggressive surgical removal of infected tissue and systemic antifungal medical therapy. 5,16) There is no consensus on the extent of surgery in the treatment; some reports showed that limited mucormycosis can be cured with surgical debridement alone 2,18) or with surgery combined with a course of amphotericin B. 16) In a previous study, the authors reported that indolent mucormycosis of the paranasal sinus in a four-case series of immunocompetent patients could be treated by endoscopic sinus surgery alone, and antifungal drugs may not be needed. 2) In this study, we showed, in a nine-case series, that paranasal indolent mucormycosis in healthy patients could be treated by ESS alone.
Conclusion
Paranasal sinus mucormycosis is very rare in healthy patients, and this study includes only a small number of patients. However, ESS alone is thought to be sufficient for the treatment of chronic indolent cases in immunocompetent patients without evidence of invasion, and antifungal treatment may not be necessary. | 2022-04-14T15:23:32.580Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "20edc7501d79e6ea25b493545cda54060948ac42",
"oa_license": "CCBYNC",
"oa_url": "http://www.jcohns.org/download/download_pdf?pid=jcohns-33-1-3",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "965a2c7dc611d48e375c78822ce1019499fb4ee5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
158878467 | pes2o/s2orc | v3-fos-license | Plowshares or Swords? Fostering Common Ground Across Difference
With political polarization challenging forward progress on public policy and planning processes, it is critical to examine possibilities for finding common ground across difference between community participants. In my research on contentious planning processes in the United States, I found four areas of convergence between participants over transportation policy and process, related to public process and substantive matters. These convergences warrant planners' attention because they united stakeholders coming from different vantage points.
Introduction
Political polarization in the United States is hindering progress in public policy and meaningful engagement at all levels of government. How do legislative requirements-like those for regional sustainability planning in California-help or hinder meaningful public engagement? What are the biggest challenges and opportunities for improving engagement?
Public process design is critical when participants are ideologically divided and do not trust each other or the public agencies in charge. In these cases, it is important to seek common ground on contentious, ideologically charged issues connected to sustainability. For example, all participants in a process may not agree on whether climate change exists, but they might agree that electric and hybrid vehicles should pay their fair share of road costs. They may not be able to agree on whether high-density development is beneficial, but they could pursue joint fact-finding to assess its effects on property rights and values, gentrification and displacement, and public services like schools, police and fire departments.
During my research on contested sustainability planning and infrastructure processes, unexpected areas of convergence emerged in the San Francisco Bay Area, the Atlanta, Georgia region, and the City of Gainesville, Florida (Trapenberg Frick, 2013, 2016, forthcoming; Trapenberg Frick, Weinzimmer, & Waddell, 2015). 1 These convergences arose despite staunch disagreement over which planning strategies would support prosperity in these areas. In the Bay Area, the Metropolitan Transportation Commission and the Association of Bay Area Governments held meetings aimed at developing the region's first Sustainable Communities Plan, known as Plan Bay Area and adopted in 2013. Tea Party and property rights activists came out in force to block these meetings and were not alone in their opposition. Plaintiffs from across the political spectrum filed four lawsuits against the plan: two had connections to Tea Party and property rights activists; one was brought by the building industry; and one was filed by environmental organizations. In the progressive stronghold of Marin County, citizens not affiliated with Tea Party or property rights groups opposed requirements associated with higher density development planning if cities wished to access regional funds.
In the Atlanta region, Tea Party and property rights activists led the opposition to a 2012 regional sales tax proposal. The measure would have dedicated half of the new tax revenue to public transit projects. A coalition of strange bedfellows emerged: Sierra Club and National Association for the Advancement of Colored People leaders joined the opposition, in part because they felt the proposed transit projects were not the ones the area needed. Although it is hard to say what effect the coalition had on the measure, the tax measure failed decisively with 63 percent of voters in opposition.
A loose coalition also emerged in Gainesville between Tea Party and property rights activists and some residents from East Gainesville, a lower-income African American neighborhood. They argued that the City's proposed Bus Rapid Transit line was too costly and unnecessary. The BRT line was initially proposed for funding in a county-based transportation sales tax put before the voters in 2012. Due partly to this opposition, the county dropped the transit line from funding consideration in tandem with other transit projects.
Areas of Common Ground
I found four areas of convergence between participants over transportation policy and process in these areas. These convergences warrant planners' attention because they united stakeholders coming from different vantage points.
First, some conservative activists in Atlanta supported a vehicle-miles-traveled (VMT) fee as a replacement for the gas tax if major administrative and privacy challenges were overcome. They argued that drivers of electric and hybrid vehicles are not paying their full share of transportation system costs. Progressives have often advocated for this fee transition as well, with the hope that funding could be directed to transit, bicycle, and pedestrian projects.
Second, conservative activists in both the Bay Area and Atlanta questioned the wisdom of running costly rail lines in low-density areas. Their arguments aligned with those of environmentalists and other progressives who would rather have seen transit investment in central cities for equity and efficiency reasons, and with academic researchers who caution that mass transit needs a sufficient density of residents and jobs to generate significant transit ridership. In Gainesville, likewise, some conservative activists supported improved bus service for low-income residents for reasons related to equity and cost.
Third, conservative activists in the Bay Area and Atlanta regions questioned the authenticity of the planning process, suggesting that planners merely went through the motions to arrive at a predetermined outcome. Progressive activists in those regions and planning scholars have had similar concerns, debating for decades whether large-scale planning processes with public meetings and hearings are meaningful formats for public input.
Fourth, activists across the political spectrum opposed the 2012 sales tax proposal in Atlanta because they viewed it as a regressive across-the-board tax rather than a user fee. Planning scholars similarly caution against sales taxes to fund transportation infrastructure. They argue that in states where local sales taxes for transport run rampant, states should move towards a user fee approach. This could include gas taxes, tolls, congestion pricing, parking charges and transit fares. Federal gas tax revenue, a major funding source, has declined significantly as the U.S. Congress has not increased the tax since 1993. Local areas have looked to increasing sales taxes through voter-approved ballot measures to shore up the difference. In contrast to the Atlanta case of opposition, some Bay Area environmental activists have reluctantly supported sales tax increases over the years if they included a broad-based package of transportation modes.
Opportunities
When the public is ideologically divided over planning issues, a way to move forward could be by seeking areas of common ground like the ones outlined above. As one Tea Party leader advised me, "When the left and right sits down and actually communicates with each other, many times both sides are amazed that there is agreement on issues. You just have to be able to respect the fact [that] both sides have a right to believe the way they do politically and not focus on it. If you disagree on 90% of the issues, you will be much more successful if you try to find a way to work together on the 10% you agree on." Planners could draw from the theory of agonism to reframe their approach to civic engagement. I draw inspiration from political theorists Chantal Mouffe and William Connolly's key scholarship in this area. In agonistic contexts, participants come to consider their opponents as legitimate adversaries rather than as enemies unworthy of engagement. In such moments, people maintain their core values and identities (Mouffe, 2013). As a result, an agonistic ethos of respect may emerge between otherwise divergent citizens (Connolly, 1995). I find this ethos and framing opens up opportunities for activists to discuss potential common ground across difference, even if in limited ways or by agreeing to disagree. As they voluntarily participate in deliberations, they can seek to redirect or exit the discussions. Critically important to activists is retaining their primary identities to remain legitimate to their side of the aisle.
Mouffe's interest in agonism stems from her critiques of the theory of communicative rationality, which she argues privileges consensus and speech practices devoid of emotions. This situation in turn stifles passionate debate and excludes dissenting views. Some planning scholars consider agonism an antidote to communicative planning theory, which they argue masks power dynamics and reinforces existing societal inequities (for a summary of debates, see Bond, 2011; see also Innes & Booher, 2015). In divided cities, for example, planning scholars look to agonism in tandem with other strategies as a way forward for transitioning city actors to living with difference (Bollens, 2012, p. 239; Gaffikin & Morrisey, 2011). Other scholars argue that agonism and communicative practices can co-exist as planning processes evolve (Fougère & Bond, 2016; Inch, 2015; Legacy, 2016).
One way to set the stage for agonistic engagement and inform community negotiations would be for activists and planners to jointly conduct analyses that examine, for example, the range of potential property rights impacts (Jacobs & Paulsen, 2009) and the full lifecycle costs of projects and plans. These analyses might underscore and/or uncover critical issues that warrant further attention and, thus, bolster continued activist involvement. Planning-related policy efforts and legislation could recommend that such analyses be undertaken as part of larger processes that include public engagement. To aid deliberations and mutual understanding, these recommendations could include independent mediators trained in conflict negotiation and resolution, as well as other techniques including in-depth interviews with key stakeholders and non-traditional activities such as site visits and walking tours beyond standard public meetings (e.g., Forester, 2009). Public agency planners and elected officials' participation is critical if they or proposed plans seem likely to come under attack, be it from conservative or from progressive and environmental activists. While an agonistic ethos might emerge between stakeholders, the gatekeepers of plan making (public agency officials) could elect to dismiss or not incorporate mutual understandings stemming from such activities unless they engage in reframing their enemy Other into at least an adversary.
Pilot funding through public or other sources could be provided to implement agonistic processes and examine their strengths and weaknesses. Pilots and evaluation would be worth the cost if agonistic relations between divergent actors can be fostered and community engagement is improved-potentially paying dividends by also laying the groundwork for activist relations on other planning endeavors.
In sum, it is worthwhile to establish the long-term objective of transitioning from highly antagonistic, counterproductive encounters to interactions of agonistic debate. Such an objective-with its focus on convergence among opposing parties-may serve states, regions and localities well as they assess their public participation and planning requirements, particularly those related to contentious issues like sustainability and climate change. | 2018-12-12T18:55:53.840Z | 2017-10-31T00:00:00.000 | {
"year": 2017,
"sha1": "87ae308c74d0d25f40d1d1ab3e66ecb064c879fd",
"oa_license": "CCBY",
"oa_url": "https://www.cogitatiopress.com/urbanplanning/article/download/1181/1181",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "87ae308c74d0d25f40d1d1ab3e66ecb064c879fd",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
158878467 | pes2o/s2orc | v3-fos-license | Development of an Iron-Based Fischer–Tropsch Catalyst with High Attrition Resistance and Stability for Industrial Application
In order to develop an iron-based catalyst with high attrition resistance and stability for Fischer–Tropsch synthesis (FTS), a series of experiments were carried out to investigate the effects of SiO2 and its hydroxyl content and a boron promoter on the attrition resistance and catalytic behavior of spray-dried precipitated Fe/Cu/K/SiO2 catalysts. The catalysts were characterized by means of N2 physisorption, nuclear magnetic resonance (NMR), X-ray diffraction (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), H2-thermogravimetric analysis (H2-TGA), temperature-programmed reduction and hydrogenation (TPR and TPH), and scanning and transmission electron microscopy (SEM and TEM). The FTS performance of the catalysts was tested in a slurry-phase continuously stirred tank reactor (CSTR), while the attrition resistance study included a physical test with the standard method and a chemical attrition test under simulated reaction conditions. The results indicated that increasing the SiO2 content enhances the catalysts' attrition resistance and FTS stability but decreases activity due to the suppression of further reduction of the catalysts. Moreover, the attrition resistance of catalysts with the same silica content was greatly improved as the hydroxyl number of the silica source increased, as were the FTS activity and stability to some degree. Furthermore, the boron element was found to remarkably promote FTS stability, and the promotion mechanism was discussed with regard to probable interactions between Fe and B, K and B, and SiO2 and B, etc. An optimized catalyst based on the results of this study was finalized, scaled up, and successfully applied in a megaton industrial slurry bubble FTS unit, exhibiting excellent FTS performance.
Introduction
Fischer-Tropsch synthesis (FTS) is the major route for converting syngas (CO+H2) made from coal or natural gas into a wide variety of hydrocarbons. The products are further processed to obtain fuels and chemicals. Increasingly stringent environmental regulations are pushing the drive for clean fuels (low-sulfur, low-aromatics), and concerns about the huge consumption of liquid fuel make FTS an environmentally friendly and promising route for coal- or gas-rich regions.
The industrial catalysts for FTS are mainly iron- and cobalt-based catalysts [1]. Iron-based catalysts are relatively inexpensive, possess reasonable activity for FTS, and have lower sensitivity towards poisons and excellent water gas shift (WGS) activity compared with cobalt catalysts, which makes iron-based catalysts the preferred catalysts for hydrocarbon production via FTS using coal-derived syngas [1][2][3][4]. Iron-based catalysts can be divided into precipitated iron catalysts for low-temperature FTS and molten iron catalysts for high-temperature FTS, according to the different synthetic routes. Precipitated catalysts can be utilized at 230-280 °C, which is suitable for producing diesel oil and wax, and molten catalysts can be used to produce low-carbon olefins and gasoline at 320-340 °C [1]. The application of a slurry bubble column reactor (SBCR) for liquid-phase FTS using a precipitated iron-based catalyst is advantageous because of the excellent control of the highly exothermic reaction heat in large-scale industrial operation. Nevertheless, the use of iron catalysts in the most economical SBCR has been limited by their high rate of attrition to ultrafine particles, leading to catalyst loss, difficulty in wax/catalyst separation, and product contamination, resulting in mass transfer limitations and downstream processing unit shutdown [5,6]. Several literature works have reported that operational difficulties caused by catalyst breakup (attrition) were encountered during an F-T demonstration run in an SBCR at LaPorte, Texas. Using a precipitated iron catalyst, the filtering system was plugged after one day of operation, and the external catalyst/wax separation in a settling tank was inefficient, resulting in gradual loss of the catalyst from the reactor [7][8][9]. The problems were attributed to breakup of the original catalyst particles into fine particles. Also, researchers at Sasol in South Africa reported that an Fe F-T catalyst, used in fixed-bed reactors at Sasol, may be structurally too weak for use in the SBCR and that solid/wax separation was a major developmental challenge in the commissioning of a semi-commercial-scale SBCR (2500 barrels of liquid product per day) [10,11]. It is believed that the attrition process of iron catalysts for F-T synthesis includes both fracture (the fragmentation of particles) and abrasion/erosion (the process by which particle surface layers or corners are removed) [12,13]. Particle erosion is particularly serious as it generates more fine particles, because the catalysts undergo various stresses during the F-T reaction, such as collision, friction, pressure, and thermal and chemical stresses. Therefore, the attrition properties of the F-T catalyst must be improved before its industrial application. In order to overcome this obstacle, numerous researchers have launched studies on catalyst optimization through fundamental structure design and novel preparation technology, etc.
SiO2 is used as a binder to improve the strength of iron-based coprecipitated catalysts for slurry beds and to protect iron grains from sintering during the F-T reaction [1,14]. Many researchers have studied the effect of SiO2 on the attrition resistance of catalysts [15]. Goodwin et al. investigated the SiO2 addition process and the amount added, and found that the attrition resistance of a catalyst prepared by adding binder SiO2 after precipitation was better than that prepared by adding precipitated SiO2 during precipitation [16]. The authors also found that the higher the particle density, the better the attrition resistance [17]. The attrition resistance therefore worsened with higher SiO2 content, as SiO2 lowers the particle density of the catalyst. However, precipitated SiO2 can be used in the preparation of attrition-resistant spray-dried iron catalysts when present in a suitable amount of less than 12 wt%, as long as appropriate precipitation and spray drying techniques are employed. Bukur et al. [18] investigated the effect of precipitated SiO2 and binder SiO2 on attrition resistance, as well as the effect of different SiO2 sources (silica sol, ethyl orthosilicate, potassium silicate). The increase in the fraction of particles smaller than 10 µm for the three catalysts after ~300 h of FTS reaction was 0.7%, −3.4%, and −0.3%, respectively. The CO+H2 conversion after 150 h of FTS reaction was 72%, 74-78%, and 72%, while the CH4 selectivity was 3.4%, 2.6%, and 2.0-2.4%, respectively. The results showed that the catalyst prepared from silica sol had the best attrition resistance but the worst activity and selectivity. Hou et al. [19] observed the morphologies of catalysts with different SiO2 contents before and after reaction by SEM, and found that the attrition resistance improved as the SiO2 content was increased from 100Fe/5Cu/4.2K/15SiO2 to 100Fe/5Cu/4.2K/40SiO2, while the reduction and carbonization of the fresh catalysts were inhibited. For 100Fe/5Cu/4.2K/15SiO2, the CO conversion at 300 h and 500 h was 48.41% and 47.82%, respectively, while for 100Fe/5Cu/4.2K/40SiO2 it was 31.68% and 32.55%, respectively. Chang Hai et al. [20] studied the effect of SiO2 addition parameters (temperature, pH value, and aging time) on the structure and performance of an Fe/Cu/K/SiO2 catalyst prepared by a coprecipitation method. That work showed that 55 °C is the best addition temperature for the lowest deactivation rate as well as the best selectivity, and that prolonging the aging time up to 150 min is beneficial to attrition resistance. With decreasing pH, the attrition resistance is strengthened, whereas the reduction of the catalyst becomes more difficult.
Researchers have mainly studied the effects of the SiO2 addition method, content, source type, and process conditions on the attrition resistance of precipitated iron catalysts and their activity and selectivity. However, the relationship between attrition resistance and catalyst reaction stability has rarely been reported.
In addition to attrition resistance, the FTS reaction stability of the precipitated iron catalyst is also key to the stable operation of an industrial plant. Generally, the deactivation of Fischer-Tropsch catalysts is attributed to poisoning [21], oxidation [22], sintering [23], and coking [24]. Deactivation from S poisoning is mainly due to insufficient purity of the syngas and can be solved by purification [1]. For the other types of deactivation, many researchers have avoided iron grain sintering and oxidation by adjusting the interaction between Fe and Si or by adding Mn, Zn, and Zr additives [25][26][27][28][29]. However, regarding catalyst deactivation from coking, although some research has shown that coke is formed on the catalyst surface through the Boudouard reaction and that the addition of a K promoter makes coking more pronounced [30][31][32], little work has been reported on the effect of promoters on anti-coking during the FTS reaction, or on further improving catalyst stability on that basis.
It is suggested that a suitable iron catalyst for SBCR application requires excellent resistance not only to physical fracture/abrasion but also to chemical erosion, which occurs with coking and causes deactivation under reaction conditions. An integrated study of catalyst physical attrition resistance and reaction stability is presented herein. In this paper, the effects of SiO2 content and the silanol content of the silica sources on the attrition resistance, FTS stability, and performance of the as-prepared iron catalyst were investigated. Based on the proposition that CO adsorption and dissociation can be controlled and the Boudouard reaction restrained by adjusting the electron density on the catalyst surface, a novel catalyst design with a novel promoter was developed to restrain coking, and it demonstrated excellent FTS stability for industrial application.
Effect of Silica Content on Attrition Resistance and Stability of Catalyst
Based on the conclusions of previous research works, an increase in silica content in the catalyst improves its attrition resistance and stability [19]. As a primary comparison in the present research, a group of model precipitated iron catalysts were prepared (see Table 1) with the traditional formula 100Fe2O3 : 4.4CuO : 3.5K2O : 17.5SiO2 as the base and various SiO2 contents at a fixed Fe/Cu/K ratio. The correlation between the silica content and the attrition index, stability, and reaction performance was investigated. As can be seen from Figure 1, the attrition index of the catalyst decreases with increasing silica content, indicating improved attrition resistance. Moreover, the FTS deactivation rate of the catalyst decreases with increasing silica content, i.e., the FTS stability is enhanced. Furthermore, the FTS reaction performance data (Table 2) showed that the CO conversion of the catalyst decreased with increasing silica content, while the methane selectivity increased, consistent with the trend obtained by Cheng-Hua Zhang et al. [33].
For further understanding, H2-TG tests were carried out on the above catalysts with different SiO2 contents. The results (Table 3) demonstrated that the reduction process under H2 atmosphere is basically divided into two stages: the first stage is the reduction of Fe2O3 to Fe3O4, and the second stage is from Fe3O4 to FeO or Fe. The results show that the reduction degree in the first stage differs insignificantly between samples, from 30% to 35%, but differs significantly in the second stage. The reduction degree of the whole process was found to decrease with increasing silica content in the catalyst. This can be explained as follows: the increase in silica content results in more Fe-O-Si within the catalyst, strengthening the interaction between Fe and Si and making the catalyst more difficult to reduce and activate. For example, the catalyst SFT-4 with 20% SiO2 content has a reduction degree of 66.66% under H2 from 120 °C to 1200 °C, and as a result it shows lower activity. Therefore, although the catalytic stability is improved by increasing the SiO2 content, the negative effect is a sacrifice of FTS activity related to the lowered reduction degree. It is suggested that a suitable SiO2 content for the optimized catalyst be found by balancing FTS performance, stability, and attrition resistance. Among the above catalysts (Table 1), SFT-3 with a silica content of 17.5% shows the best integrated FTS performance. However, its deactivation rate of 0.24‰/h predicts that the CO conversion would be 24 percentage points lower after 1000 h of reaction, and such a significant activity loss is far from meeting the requirements of industrial application. As a comparison, an attrition test was carried out on the SFT-3 sample under simulated FTS reaction conditions in a CSTR at extra-high stirring speed. The results (Table 4 and Figure 2) indicate that the SFT-3 sample, with an attrition index of 6.15 wt%/h, is still very fragile in the simulated reaction attrition test.
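As an illustration of how such reduction degrees can be derived from TG data, the sketch below assumes (since the paper's exact definition is not stated) that the reduction degree is the oxygen mass removed relative to the oxygen removable on full reduction of Fe2O3 to metallic Fe, with hypothetical sample masses; it also checks the arithmetic behind the quoted 0.24‰/h deactivation figure.

```python
# Minimal sketch: reduction degree from H2-TG mass loss, taking full
# reduction of Fe2O3 to metallic Fe as the reference state (assumed; the
# paper's exact definition is not given). Promoter/support mass is ignored.
M_Fe, M_O = 55.845, 15.999
M_Fe2O3 = 2 * M_Fe + 3 * M_O  # ~159.69 g/mol

def reduction_degree(m_initial_mg, m_final_mg, fe2o3_fraction=1.0):
    """Oxygen removed / oxygen removable on full reduction to Fe."""
    removable_o = m_initial_mg * fe2o3_fraction * (3 * M_O / M_Fe2O3)
    return (m_initial_mg - m_final_mg) / removable_o

# Hypothetical TG masses: a 100 mg Fe2O3 sample losing 20 mg of oxygen
# gives ~66.5% reduction, on the order of the 66.66% reported for SFT-4.
print(f"{100 * reduction_degree(100.0, 80.0):.1f} %")

# Consistency check on the quoted deactivation rate: 0.24 per-mille of CO
# conversion lost per hour over 1000 h gives the 24-point drop cited.
print(f"{0.24e-3 * 1000 * 100:.0f} percentage points")
```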
Effect of Silanol Content on Attrition Resistance and Stability of Catalyst
As revealed by the above results, increasing the silica content in the catalyst improves its physical attrition resistance and FTS stability. Proposing that the attrition resistance and stability are mainly attributed to the silica content in the catalyst, the content of silanol groups that interact with Fe increases with silica content, and so does the Fe-Si interaction. A catalyst with high attrition resistance and stability could therefore be obtained by increasing the silanol content of the silica source without changing the silica content in the catalyst. A series of experiments were launched, starting from the preparation of model catalysts with the traditional formula 100Fe2O3 : 4.4CuO : 3.5K2O : 17.5SiO2, using five types of silica source with different silanol contents. The silanol content of the different silica sources was calculated from 29Si NMR tests (Figure 3 and Table 5). The physical attrition index was tested by the standard method, and the FTS performance of the catalysts was investigated.
The five peaks from left to right of each spectral line in Figure 3 represent Si with four hydroxyl groups, three hydroxyl groups, two hydroxyl groups, a single hydroxyl group, and no hydroxyl groups, denoted as Q4, Q3, Q2, Q1, and Q0, respectively [34]. The hydroxyl content of these silica sources obtained by fitting is shown in Table 5. Based on the analysis data, the molar number of hydroxyls per 100 mol Si in the five silica sources is 142, 117, 111, 103, and 44, respectively. Tables 6 and 7 show the FTS performance and structural parameters of the catalysts prepared with the different SiO2 sources. Figure 4 shows the attrition index and deactivation rate of the as-prepared catalysts. The attrition index decreased as the silanol content of the silica source used in the preparation increased, indicating that a higher silanol content is beneficial to attrition resistance. In comparison with SFT-3, prepared from KSi-5 with the fewest silanol groups, the deactivation rates of the catalysts prepared from KSi-1 to KSi-4 remained stable at a lower attrition index. This contradicts the results shown in Figure 1, indicating that there is no significant correlation between attrition index and reaction stability once the attrition index is below a certain value, although enhancement of attrition resistance is expected to improve reaction stability (Figure 1). The textural properties of the catalysts shown in Table 6 reveal that the difference in silanol content causes little change in the catalyst texture. The XRD spectra of the five catalysts prepared from the five different silica sources are shown in Figure 5. The peaks of these spectra are relatively diffuse, indicating fine crystal grains. TEM characterization of the Cat-1 and SFT-3 catalysts, prepared from KSi-1 and KSi-5, is shown in Figure 6. The catalyst prepared with KSi-1 has a smaller grain size and uniform distribution, while the catalyst prepared with KSi-5 shows a larger mean grain size and wider distribution. According to the Ostwald ripening rule, better uniformity of catalyst grains is conducive to inhibiting metal particle sintering and improving the stability of catalysts [35].
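For illustration, the hydroxyl-content figures can be derived from the fitted peak-area fractions as a weighted average; the sketch below is keyed directly by the number of OH groups per Si to stay consistent with the labeling above, and the example fractions are hypothetical, since the fitted Table 5 values are not reproduced here.

```python
# Sketch: hydroxyl content per 100 mol Si from fitted 29Si NMR peak-area
# fractions, computed as an area-weighted average of OH groups per Si.
def oh_per_100_si(q_fractions):
    """q_fractions: dict mapping OH count per Si (0-4) to area fraction."""
    total = sum(q_fractions.values())
    return 100 * sum(n_oh * frac for n_oh, frac in q_fractions.items()) / total

# Hypothetical fit (fractions sum to 1.0); this example gives 118 OH per
# 100 Si, close to the 117 reported for KSi-2.
example = {4: 0.02, 3: 0.10, 2: 0.25, 1: 0.30, 0: 0.33}
print(f"{oh_per_100_si(example):.0f} OH per 100 Si")
```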
The results indicated that a silica source with more silanol groups helps form more Fe-O-Si bonds and build a stronger skeleton, hence improving the attrition resistance of the catalyst. The enhanced Fe-Si interaction promotes the uniform distribution of iron oxide particles in the silica matrix and good dispersion of Fe grains, further improving the stability of the catalyst [33].
Cat-1 possessed high attrition resistance and FTS stability, and its performance was further tested under FTS conditions in a high-speed stirred CSTR. The results (Table 8 and Figure 7) show that the catalyst still maintained perfect morphology after 200 h of testing, and the content of fine powder caused by catalyst attrition in liquid phase was significantly reduced from 627 ppm to 22 ppm in comparison with SFT-3 (see Table 4). Table 8. Fe content in liquid products of Cat-1.
Sample | Fe content after settling for 30 min (ppm) | Fe content after settling for 60 min (ppm)
Cat-1  | 22 | 20

Figure 7. SEM image of Cat-1 after the attrition test in the slurry bed reactor.

Figure 8 shows the FTS reaction performance of the five prepared catalysts as a function of reaction time. The trends indicate gradual deactivation over roughly 500-600 h on stream: CO conversion decreased to below 60%, while CO2 and CH4 selectivities increased to above 20% and 3.5%, respectively.
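The deactivation rates quoted in this work (e.g., 0.012‰/h for Cat-1B later in the text) can be estimated from a CO-conversion time series by a linear fit over the operating window. A minimal sketch, assuming the rate is defined as the relative loss of CO conversion per hour (the exact definition is not stated in the text, and the data below are hypothetical):

```python
import numpy as np

# Hypothetical time series: time on stream (h) and CO conversion (%).
t = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
x_co = np.array([68.0, 67.2, 66.5, 65.9, 65.1])

# Linear fit of conversion vs. time; slope in %-points per hour.
slope, intercept = np.polyfit(t, x_co, 1)

# Relative deactivation rate in per mille of the initial conversion per hour
# (assumed convention).
rate_permille_per_h = -slope / x_co[0] * 1000.0
print(f"deactivation rate ≈ {rate_permille_per_h:.3f} ‰/h")
```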
Anti-Carbon Deposition Formula Design
Raman spectroscopy was performed on catalyst Cat-1 after 200 h and 500 h of operation in the slurry bed, as shown in Figure 9. The scattering peaks at 1360 cm−1 and 1580 cm−1 indicate a certain amount of carbon deposition on the catalyst. Deposited carbon tends to cover the active sites on the catalyst surface, deactivating the catalyst, while its growth weakens the cohesion between catalyst grains. For exothermic reactions such as FTS, the poor thermal conductivity of deposited carbon increases the thermal inhomogeneity of the catalyst particles, producing thermal stress and poor catalyst stability. Carbon deposition on Fischer-Tropsch synthesis catalysts is generally believed to form through the CO Boudouard reaction. The presence of K intensifies carbon deposition: potassium donates electrons to iron, which favors the chemisorption of CO and weakens the dissociative adsorption of H2 on the catalyst surface, so CO dissociates and adsorbs more readily and carbon deposits form more easily [30-32]. One way to mitigate carbon deposition is to add strongly electronegative non-metallic elements that adjust the electron density of the active phases. Alternatively, the interaction between such non-metallic oxides and the SiO2 in the catalyst can be tuned to indirectly affect the electron-donating capacity of K. This is regarded as a novel way to improve the anti-coking capability of the catalyst.
XPS tests were carried out on the prepared catalysts. Figure 10a shows that the B 1s binding energy in Cat-1B is around 191.8 eV, corresponding to B3+ in B2O3 [36]. Figure 10b compares the Si 2p binding energies of Cat-1 and Cat-1B: the peak at 103.5 eV corresponds to Si in Si-O-Si, and that at 101.9 eV to Si influenced by Fe3+ ions. The proportion of the 101.9 eV peak decreases markedly after B addition, indicating that B2O3 combines with SiO2 and that the electron-withdrawing ability of B raises the binding energy of SiO2; the overall Si 2p peak also shifts toward higher binding energy after B addition [37,38]. The peaks at 711.2 eV and 724.4 eV correspond to the Fe3+ binding energies of Fe 2p3/2 and Fe 2p1/2, respectively. Both Fe 2p peaks of Cat-1B are shifted to higher binding energy relative to Cat-1, indicating an interaction between B2O3 and Fe in which electron withdrawal by B strengthens the binding energy of Fe3+ in the 2p orbitals [36]. In Figure 10c, the peaks at 293.1 eV and 295.9 eV correspond to K 2p3/2 and K 2p1/2, respectively. The K 2p peak of Cat-1B, shown in Figure 10d, shifts toward higher binding energy relative to that of Cat-1, indicating that B2O3 also withdraws electrons from K [39].

An in situ XRD test was carried out to follow the crystal-phase changes of the catalysts at 265 °C under a reducing syngas atmosphere with H2/CO = 20/1, collecting a pattern every hour for 9 h. Figure 11 shows that reduction/carburization bands appeared in the 2θ range of 39-47° for both catalysts during the first hour of reduction, while the majority of the sample was still hydrated iron oxide. After 3 h of reduction, distinct iron carbide crystal phases appeared; comparison with the standard card (PDF 89-2544) showed that the carbide formed is mainly Fe5C2. The grain size at different reduction times was obtained from the (−112) reflection at 2θ = 40.8° and is listed in Table 9. From the fourth hour onward, the Fe5C2 grain size in Cat-1 is larger than in Cat-1B. After 5 h, the grain size changed little through the end of the 9 h reduction, with final grain sizes of 21.7 nm for Cat-1 and 15.5 nm for Cat-1B, indicating that reduction/carburization was essentially complete within 5 h under these conditions. The grain size evolution during reduction (Table 9) points to an interaction between the B promoter and Fe, consistent with the XPS results above; this interaction is thus expected to stabilize smaller iron carbide grains.
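Grain sizes from a single reflection are conventionally obtained with the Scherrer equation; a minimal sketch of such an estimate for the (−112) Fe5C2 peak at 2θ = 40.8° follows. The text does not state the exact procedure (shape factor, instrumental-broadening correction), so the FWHM values below are hypothetical and chosen only for illustration:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from peak broadening via the Scherrer equation.
    two_theta_deg: peak position (degrees 2-theta)
    fwhm_deg: full width at half maximum (degrees 2-theta), assumed
              already corrected for instrumental broadening
    wavelength_nm: Cu K-alpha by default
    k: shape factor (0.9 is a common assumption)"""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical FWHM values for the (-112) Fe5C2 peak at 40.8 deg 2-theta:
print(f"Cat-1 : {scherrer_size_nm(40.8, 0.40):.1f} nm")  # ~21 nm
print(f"Cat-1B: {scherrer_size_nm(40.8, 0.56):.1f} nm")  # ~15 nm
```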
For further verification, an H2-TPR test was carried out on Cat-1 and Cat-1B, as illustrated in Figure 12. The characteristic peaks for the reduction of CuO to Cu and of hydrated iron oxide to Fe3O4, which usually occur around 290 °C, partially overlap with the Fe3O4→FeO reduction peak, producing a peak pattern with a forward-extended shoulder. The pronounced reduction peak in the range 320-450 °C corresponds to the Fe3O4→FeO process, and above 450 °C the reduction peak can be divided into two parts: the peak at 450-750 °C is attributed to FeO→α-Fe, while the reduction of Fe3+/Fe2+ oxides interacting with SiO2 occurs above 750 °C. The overall H2 reduction behavior of the catalyst is consistent with previously reported studies [19,40]. Comparing Cat-1 and Cat-1B, the reduction temperatures shift higher after B addition, demonstrating the Fe-B interaction already indicated by the XPS and in situ XRD results.

Temperature-programmed hydrogenation (TPH) was carried out on the used Cat-1 and Cat-1B catalysts to compare their resistance to carbon deposition; the fitted TPH curves are shown in Figure 13. The peak at 270-390 °C is the α-carbon release peak, and the peak at 420-455 °C is the hydrogenation peak of β carbon; these two species can be regarded as reactive surface carbon. The peak at 480-688 °C is γ carbon, the hydrogenation peak of carbon in iron carbide, and the peak at 600-750 °C is δ carbon, the hydrogenation peak of carbon deposited on the catalyst surface [41]. The calculated contents of the different carbon species are shown in Table 10: Cat-1B has less δ carbon and more reactive carbon species than Cat-1, again indicating better reaction performance.

Figure 13. TPH of (a) Cat-1 and (b) Cat-1B after reaction. Green lines, from left to right, are the peaks of α, β, γ, and δ carbon, respectively; the black line is the total fitted curve.

The textural properties of the two catalysts are listed in Table 11, and the reaction performance of Cat-1 and Cat-1B in the CSTR is shown in Figure 14 and Table 12. From Table 11, the addition of B has no significant impact on the attrition resistance of the catalyst, but the specific surface area decreases and the average pore size increases, while catalyst stability improves markedly, with the deactivation rate reduced to 0.012‰/h. The improved stability of Cat-1B stems from the B addition: the XPS data show that the B-Fe interaction reduces the electron density of the Fe active phase; the TPH results show less deposited carbon after reaction than on the B-free catalyst, indicating that B improves the anti-coking ability by regulating the electron density of the active Fe phase; and the in situ XRD results show that B helps stabilize smaller iron carbide grains. All of these effects benefit catalyst stability. The decrease in specific surface area and increase in average pore size arise because the B-SiO2 interaction modifies the properties of SiO2 and thereby the texture of the catalyst, even though B has no significant effect on its attrition resistance.
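The carbon-species contents in Table 10 follow from the areas of the fitted TPH peaks. A minimal sketch of such a deconvolution, assuming Gaussian peak shapes and illustrative (not actual) peak parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(t, *params):
    """Sum of Gaussians; params = (amplitude, center, width) per peak."""
    y = np.zeros_like(t)
    for i in range(0, len(params), 3):
        a, c, w = params[i:i + 3]
        y += a * np.exp(-((t - c) ** 2) / (2.0 * w ** 2))
    return y

# Hypothetical TPH signal (temperature in C vs. CH4 signal, arbitrary units).
np.random.seed(0)
T = np.linspace(200, 800, 601)
true = gaussians(T, 1.0, 330, 30, 0.6, 440, 15, 1.4, 560, 50, 0.8, 680, 40)
signal = true + np.random.normal(0, 0.01, T.size)

# Fit four peaks (alpha, beta, gamma, delta) from rough initial guesses.
p0 = [1, 330, 30, 0.5, 440, 20, 1, 560, 50, 0.5, 680, 40]
popt, _ = curve_fit(gaussians, T, signal, p0=p0)

# Gaussian area = amplitude * width * sqrt(2*pi); report as fractions.
areas = [popt[i] * popt[i + 2] * np.sqrt(2 * np.pi) for i in range(0, 12, 3)]
for name, a in zip("alpha beta gamma delta".split(), areas):
    print(f"{name}: {100 * a / sum(areas):.1f}% of total carbon")
```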
Industrial Application of the Catalyst with High Attrition Resistance and Stability
Based on sample Cat-1B, a number of steps were executed, including formula optimization, preparation process scale-up, and long-term stability testing. The finalized iron-based FTS catalyst (designated CNFT-1) was successfully manufactured (Figure 15) and evaluated at industrial trial scale. The ton-scale product has a uniform particle size distribution and ideal morphology, and its FTS reaction performance was consistent with that of the laboratory catalyst.
Subsequently, the CNFT-1 catalyst was applied in a megaton-scale FTS plant. Its excellent attrition resistance was verified in the industrial unit, as illustrated by the lower iron content of the wax product and the much cleaner oil and wax products (Figure 16). The plant data confirmed that the CNFT-1 catalyst has excellent industrial application performance (see Table 13), with high attrition resistance and activity, low CH4 selectivity, and a high wax-to-oil ratio.
Catalyst Preparation
Catalysts were prepared by a patented method (CN101602000). In brief, a solution of Fe(NO3)3·9H2O and Cu(NO3)2·3H2O at the appropriate Fe/Cu ratio was co-precipitated with sodium carbonate solution at pH = 7 and T = 70 °C. The precipitate was centrifuged and washed thoroughly. After the cake was reslurried, an appropriate amount of K2SiO3 was added to the slurry and the pH value was adjusted. After centrifugation, the cake was reslurried with the desired amount of K2CO3 and de-ionized water, and the mixture was spray-dried at 200 °C. Finally, the catalyst was dried at 100 °C overnight and calcined at 500 °C for 6 h.
Nitrogen Adsorption/Desorption
BET surface area, pore volume, and average pore diameter were determined by nitrogen isothermal physisorption at liquid nitrogen temperature using a Micromeritics ASAP 3020. Before the adsorption measurements, samples were degassed under vacuum at 90 °C for 1 h and then at 350 °C for 3 h.

29Si NMR

29Si NMR of liquid K2SiO3 samples placed in 10 mm PTFE tubes was conducted with an Si-free probe on an Avance III 400 MHz spectrometer (Bruker, Karlsruhe, Germany).
In Situ X-ray Diffraction (XRD)

In situ XRD was carried out on a Rigaku D/max-2600/PC apparatus (Rigaku, Tokyo, Japan) equipped with a D/teX Ultra high-speed detector and a scintillation counter. The X-ray generator consisted of a Cu rotating-anode target with a maximum power of 9 kW. All tests were operated at 40 mA and 40 kV. In situ XRD patterns were recorded in an Anton Paar XRK-900 cell fed with an H2/CO = 20/1 gas system.
Scanning and Transmission Electron Microscopy
The scanning electron microscopy (SEM) images were collected on a Nova NanoSEM 450 scanning electron microscope (FEI, Hillsboro, OR, USA). The transmission electron microscopy (TEM) images were collected on an ARM200F electron microscope (JEOL, Tokyo, Japan) operated at 200 kV.
X-ray Photoelectron Spectroscopy (XPS)
XPS measurements were recorded using a Thermo Escalab 250Xi system (Thermo Scientific, Waltham, MA, USA) at a base pressure of 1 × 10−9 mbar. Samples were excited with monochromatized Al Kα radiation (hν = 1486.6 eV). The analyzer operated in constant-pass-energy mode (20 eV). The C 1s peak of adventitious carbon (284.8 eV) was used as a reference for binding-energy calibration.
Raman Spectroscopy

Raman spectra were obtained on an HR-800 laser confocal spectrometer (Horiba, Kyoto, Japan).
Temperature-Programmed Reduction (TPR)
H2-TPR experiments were conducted using a Micromeritics AutoChem II 2920 automated adsorption apparatus (Micromeritics, Norcross, GA, USA). First, 100 mg of sample was degassed and then reduced in 10 vol% H2/Ar at a flow rate of 50 mL/min. The temperature was ramped linearly from 50 °C to 900 °C at 10 °C/min, and H2 consumption was monitored by a thermal conductivity detector (TCD) during the run.
Temperature-Programmed Hydrogenation (TPH)

TPH was conducted in a quartz tube reactor coupled to a mass spectrometer. Typically, 50 mg of sample was reduced and carburized in situ before the TPH experiment. During TPH, the temperature was increased from room temperature to 820 °C at a rate of 10 °C/min in a 20 vol% H2/Ar flow (50 mL/min total).
H2-Thermogravimetric Analysis (H2-TGA)

A NETZSCH STA449C thermal analyzer was used for the gravimetric measurements. Catalyst powder (approximately 10 mg) was placed in an alumina crucible, Ar was introduced at room temperature, and the temperature was increased to 120 °C at a ramp of 5 °C/min, purging for 1 h; an H2/Ar gaseous mixture (H2:Ar = 5:95) was then introduced and the temperature was increased to 1200 °C at the same 5 °C/min ramp.
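The H2-TGA weight loss can be converted into a degree of reduction (referred to later in the conclusions). A minimal sketch, assuming the degree of reduction is defined as the observed mass loss relative to the theoretical loss for complete Fe2O3 → Fe reduction (the text does not state its exact definition, and the numbers are hypothetical):

```python
# Molar masses (g/mol)
M_Fe2O3, M_O = 159.69, 16.00

def reduction_degree(initial_mg, final_mg, fe2o3_wt_frac=0.797):
    """Degree of reduction (%) from TGA mass loss.
    Assumes the only mass loss is oxygen removed from Fe2O3, and that
    complete reduction (Fe2O3 -> 2 Fe) removes 3 O per Fe2O3.
    fe2o3_wt_frac: Fe2O3 weight fraction of the sample (assumed ~79.7 wt%
    from the nominal formula earlier in the text)."""
    observed_loss = initial_mg - final_mg
    theoretical_loss = initial_mg * fe2o3_wt_frac * (3 * M_O / M_Fe2O3)
    return 100.0 * observed_loss / theoretical_loss

# Hypothetical run: a 10.00 mg sample ending at 8.20 mg.
print(f"reduction degree ≈ {reduction_degree(10.00, 8.20):.0f} %")  # ~75 %
```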
Attrition Index
The attrition index was measured on an air-jet-cup attrition tester. The calcined catalysts were sieved with standard 53 µm and 120 µm sieves before testing; sieving was continued until particles no longer passed through. The attrition index of the iron-based catalysts was tested by the ASTM D5757-95 method in a 3-hole attrition tester. In the jet cup test, 50 g of each sample was fluidized by an air jet at a flow rate of 10 L/min (relative humidity 60 ± 5%) at room temperature for 5 h. The fines were collected with a thimble filter at the outlet of the jet cup chamber. The attrition index was calculated as the weight of fines collected divided by the total sample weight recovered, expressed as a percentage and divided by 5 to give the hourly weight loss.
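The attrition-index arithmetic described above reduces to a one-line calculation; a minimal sketch, with hypothetical weights:

```python
def attrition_index_per_hour(fines_g, total_recovered_g, test_hours=5.0):
    """Attrition index as hourly weight-percent loss, per the procedure
    described above: fines collected over the full test, divided by the
    total sample recovered, then by the test duration."""
    return 100.0 * fines_g / total_recovered_g / test_hours

# Hypothetical example: 1.2 g of fines from 49.5 g of recovered sample.
print(f"AI = {attrition_index_per_hour(1.2, 49.5):.2f} wt%/h")  # ~0.48 wt%/h
```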
Attrition Test under Reactive FTS Conditions
Catalyst attrition strength was tested in a 1 L vigorously stirred tank reactor under reactive FTS conditions. In each run, 10 g of catalyst and 500 mL of liquid paraffin were loaded into the reactor. The syngas was mixed from pure H2 and CO, with the H2/CO ratio adjusted by multiple mass flow meters. Before the FTS tests, the catalyst was reduced with H2/CO = 5:1 syngas at 260 °C, 2.3 MPa, and a GHSV of 5000 mL/(g-cat·h) for 24 h. After reduction, the reactor temperature was adjusted to 265 °C, the H2/CO ratio was set to 3.0, the GHSV was raised to 20,000 mL/(g-cat·h), and the agitation speed was increased from 800 rpm to 2000 rpm. Liquid products were collected with a cold trap and a hot trap, and the tail gas was vented. After 200 h of reaction, the reactor was depressurized to atmospheric pressure, purged with nitrogen, and cooled to 130 °C; stirring was then stopped and the slurry was left to settle at 130 °C. After settling for 30 min and 60 min, 10 g of the upper liquid phase was sampled each time to determine the solids content of the slurry.
Fischer-Tropsch Synthesis Performance Test
The Fischer-Tropsch synthesis (FTS) performance of the catalyst was tested in a 1 L stirred-tank reactor. In each run, 10 g of catalyst and 500 mL of liquid paraffin were loaded into the reactor. The syngas was mixed from pure H2 and CO, with the H2/CO ratio adjusted by multiple mass flow meters. Before the FTS tests, the catalyst was reduced with H2/CO = 5:1 syngas at 260 °C, 2.3 MPa, and a GHSV of 5000 mL/(g-cat·h) for 24 h. After reduction, the reactor temperature was adjusted to 265 °C, the H2/CO ratio was set to 3.0, and the GHSV was raised to 20,000 mL/(g-cat·h). Liquid products were collected with a cold trap and a hot trap, and the tail gas flow was measured with a wet gas meter before being vented. CO conversion and the selectivities of the gaseous products were determined with a 7890A gas chromatograph (Agilent, Santa Clara, CA, USA). H2 and CO were separated on a Porapak N (2 m) column with Ar as the carrier gas and quantified by a TCD. CO2 and CH4 were separated on a 13X (2 m) column followed by a CHX column, with Ar as the carrier gas, and detected by the subsequent TCD. C1-C5 hydrocarbons were analyzed on an Al2O3 elastic quartz capillary column (50 m × 0.53 mm) with N2 as the carrier gas and an FID detector. The amounts of oil, wax, and aqueous products were determined by weighing.
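As an illustration of how CO conversion and CO2/CH4 selectivities are typically computed from inlet and outlet molar flows — the text does not spell out its formulas, so the definitions below are common-convention assumptions and the flows are hypothetical:

```python
def co_conversion(f_co_in, f_co_out):
    """CO conversion (%) from inlet/outlet CO molar flows (mol/h)."""
    return 100.0 * (f_co_in - f_co_out) / f_co_in

def selectivity(f_product_c, f_co_in, f_co_out):
    """Carbon-based selectivity (%) of a product: carbon in the product
    divided by the CO converted (assumed definition)."""
    return 100.0 * f_product_c / (f_co_in - f_co_out)

# Hypothetical molar flows (mol/h):
f_co_in, f_co_out = 10.0, 4.0   # 60% CO conversion
f_co2, f_ch4 = 1.4, 0.2         # product carbon flows

print(f"X_CO  = {co_conversion(f_co_in, f_co_out):.1f} %")        # 60.0 %
print(f"S_CO2 = {selectivity(f_co2, f_co_in, f_co_out):.1f} %")   # 23.3 %
print(f"S_CH4 = {selectivity(f_ch4, f_co_in, f_co_out):.1f} %")   # 3.3 %
```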
Conclusions
The present work demonstrates the development of a commercial iron FTS catalyst. First, systematic research on physical attrition resistance and reaction stability was carried out; then a novel catalyst with high attrition resistance and stability, based on binder and promoter optimization, was developed and verified at industrial scale.
It was revealed that the contents of silica and its associated hydroxyl groups have a significant influence on the attrition resistance of the catalysts. With increasing silica content, the attrition resistance and reaction stability of the catalysts are enhanced, while the activity falls, because higher silica content lowers the degree of reduction. Attrition resistance can be raised further, without loss of activity, by increasing the silanol content of the silica source. A linear relationship between reaction stability and attrition resistance holds only within a suitable range.
The boron promoter was found to be greatly beneficial to FTS stability. A series of characterization tests revealed that the boron promoter reduces carbon deposition on the catalyst surface and thereby improves FTS stability. On this basis, an iron catalyst was designed and successfully applied in an industrial FTS plant in China.
Future studies should aim to optimize FTS stability beyond 1000 h and focus intensively on selectivity through additional research routes, including in situ characterization, testing under severe conditions, and fundamental simulations of promotion. | 2021-09-09T20:49:43.171Z | 2021-07-27T00:00:00.000 | {
"year": 2021,
"sha1": "6949119cf7f123f9073139c110af4522f57469e7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4344/11/8/908/pdf?version=1627611214",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "38de7ef2ac2221f4262e7dc4aaf4d3c0c884022f",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
259008799 | pes2o/s2orc | v3-fos-license | Range expansion of Bombus (Pyrobombus) bimaculatus Cresson in Canada (Hymenoptera, Apidae)
Abstract Background The two-spotted bumble bee, Bombus bimaculatus Cresson, 1863 (Hymenoptera, Apidae), is a common species in central North America, with few published records of this species in Canada west of Ontario or east of Quebec. New information Based on recently collected specimens from Saskatchewan and confirmed records posted to iNaturalist (https://www.inaturalist.org/) in the past 10 years (i.e. since 2013), we provide evidence that this species has only recently expanded its range in Canada, westwards into the Prairies Ecozone (Manitoba, Saskatchewan) and east into the Maritime Provinces (New Brunswick, Nova Scotia, Prince Edward Island).
Introduction
Bumble bees are amongst the most familiar of insects and have been the subject of natural history investigation for centuries. There have been several historic treatments of bumble bee taxonomy and distribution in North America, but Williams et al. (2014) provide the most detailed and recent information on species distribution in North America. Since then, recent additions to the North American fauna include Bombus kluanensis Cannings, 2016 (Williams et al. 2016), a taxonomic treatment of the B. bifarius Cresson, 1878 complex resulting in the recognition of two species, B. bifarius and B. vancouverensis Cresson, 1878 (Ghisbain et al. 2020), and the recognition of B. johanseni Sladen, 1919 as a valid species (Sheffield et al. 2020), first shown by Martinet et al. (2019), who described it as a new taxon, B. interacti Martinet, Brasero and Rasmont, 2019. With respect to the bumble bees of Canada, Curry (1984), Neave (1933) and Laverty and Harder (1988) provided keys to the species of Saskatchewan, Manitoba and eastern Canada, respectively; Buckell (1951), Cannings (2011) and Sheffield and Heron (2019) summarised the species in British Columbia; Sheffield et al. (2014) and Gibbs et al. (2023) reviewed the species of the Canadian Prairie Provinces and Manitoba, respectively. Other important works with coverage of species occurring in Canada include Stephen (1957) for the west, Plath (1934) and Mitchell (1962) for the east, and the more general works of Cresson (1863), Franklin (1913), Frison (1923), Frison (1926) and Milliron (Milliron 1971, Milliron 1973a, Milliron 1973b).
Bombus bimaculatus Cresson was described in 1863 from material from Connecticut (Cresson 1863), with synonymous species described from West Virginia (Cresson 1878) and Massachusetts (Bequaert and Plath 1925). In an early comprehensive treatment of New World bumble bees, Franklin (1913) commented that B. bimaculatus was very rare in south-eastern Canada, occurring mostly in the United States from New England as far west as eastern Nebraska. Plath (1934) reviewed the species of eastern North America and depicted B. bimaculatus as more extensively distributed in Canada, ranging from eastern Ontario (including just north of Lake Superior) and southern Quebec, perhaps into New Brunswick, and throughout all of the eastern United States as far west as North Dakota, South Dakota, Nebraska and Kansas. Laverty and Harder (1988) also recorded B. bimaculatus Cresson from southern Ontario and Quebec, though not New Brunswick or Nova Scotia, despite the earlier reports of Boulanger et al. (1967) and Vander Kloet (1977), respectively, suggesting that this species was mostly known from the Mixedwood Plains Ecozone in Canada and, perhaps, bordering locations in the Boreal Ecozone. It was not known from Manitoba (Neave 1933, Turnock et al. 2006) until first reported by Sheffield et al. (2014), from specimens identified by CSS that were collected in 2009; these records were subsequently included in Williams et al. (2014). Curry (1984) did not report this species from Saskatchewan and, more recently, it was not recorded from Alberta (Prescott et al. 2019). Subsequent works in Nova Scotia (Sheffield et al. 2003, Sheffield et al. 2009, Sheffield et al. 2013) also did not include B. bimaculatus, though it was recently re-confirmed in New Brunswick (Brooks and Nocera 2020), based on iNaturalist observations, and on Cape Breton Island, Nova Scotia (Dominey 2021, Kosick 2021).
In the United States, Franklin (1913) and LaBerge and Webb (1962) also reported this species from Nebraska, though the latter indicated that it was primarily an eastern species, with relatively few records from the eastern part of the State. Other records from the western part of its range in the United States include Franklin (1913) and Frison (1923) from Kansas; Franklin (1913), Frison (1926), Medler and Carney (1963) and Colla et al. (2011) from Minnesota; and Frison (1926) from South Dakota; Koch et al. (2015) included only eight records from North Dakota, South Dakota and Nebraska. This species was not included in any of the treatments of western North America (Stephen 1957, Koch et al. 2012), but was recorded for the first time from eastern Montana in 2017 (Dolan et al. 2017). Lozier et al. (2011) and Jacobson et al. (2018) both indicated that its populations were stable to increasing in North America. In the last decade, data from both active sampling and online databases have accumulated, suggesting that this species' range has been spreading in Canada and also westwards into the United States. Our purpose is to provide documentation supporting that this species is now much more widespread in North America than recent treatments (i.e. Williams et al. (2014)) account for.
Materials and methods
A database containing 34,802 North American occurrence records determined as Bombus bimaculatus (modified to remove observations lacking latitude and longitude coordinates or dates) was downloaded from the Global Biodiversity Information Facility (GBIF), supplemented with additional specimens from the dataset of Koch et al. (2015). The GBIF data also contained "Research Grade" observations from iNaturalist (https://www.inaturalist.org/). The specimen determinations were accepted as is, though some pre-2000 specimens from the western United States (i.e. Arizona, Colorado, Montana and Oregon) are likely questionable, based on past (i.e. Titus (1902), Franklin (1913), Frison (1923), Frison (1926), Stephen (1957), Medler and Carney (1963), Thorp et al. (1983)) and more recent taxonomic treatments (Koch et al. 2012, Williams et al. 2014). No records from these States were included in the recent database of Koch et al. (2015). We acknowledge that there are likely misidentifications in these datasets, though we feel that these do not affect the overall patterns presented in this paper for documenting the range and spread of B. bimaculatus.
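As an illustration of the record-cleaning step described above (dropping occurrences without coordinates or dates), a minimal sketch using pandas on a GBIF Darwin Core occurrence download is shown below; the column names follow the standard GBIF download format, and the file name is a placeholder:

```python
import pandas as pd

# GBIF occurrence downloads are tab-separated Darwin Core files;
# the file name here is a placeholder.
df = pd.read_csv("bombus_bimaculatus_gbif.tsv", sep="\t", low_memory=False)

# Keep only records with coordinates and an event date, as described above.
df = df.dropna(subset=["decimalLatitude", "decimalLongitude", "eventDate"])

# Restrict to North America (illustrative filter on the countryCode field).
df = df[df["countryCode"].isin(["CA", "US", "MX"])]

print(f"{len(df)} usable occurrence records")
```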
Discussion
The recent spread of B. bimaculatus into western Canada, including Manitoba (Sheffield et al. 2014, Williams et al. 2014, Gibbs et al. 2023, Saskatchewan (Figs 2, 3) and the Maritime Provinces (Brooks and Nocera 2020, Dominey 2021, Kosick 2021, Kosick 2021, see Fig. 1) is seemingly by natural means as there are no confirmed records of this species being managed in any way that would promote its range expansion artificially. The oldest record from the Maritime Provinces on iNaturalist is from 2013 (i.e. just pre- Williams et al. (2014), though that work was likely in press at the time) from New Brunswick ( Fig. 1b) and, at present, about 300 observations exist supporting that the species now widespread in that Province. The first record from Nova Scotia came three years later (i.e. 2016) and within two years, it was also widespread, now totalling ca. 400 observations. Establishment on Prince Edward Island has been a little slower, with only 39 observations since it was first detected on iNaturalist in 2019. 2014)), respectively, expanding the northern edges of its range. Expansion of the range of some bumble bee species has been linked to climate change in Europe (see ), specifically warming winters Galimberti 2020, Biella et al. 2020); Ghisbain et al. (2021) recently summarised the factors that could contribute to the spread of pollinators, including bumble bees, these also being linked to the climate (including heat tolerance), but also dietary flexibility. There is at least one older report on B. bimaculatus from Nova Scotia in the 1970s (Vander Kloet 1977), though the identity has not been confirmed, but it seems dubious. At least two of the pre-2000 records from Oregon (both from 1931) were identifed by Theodore Frison in 1932 (GBIF.org 2023), though as indicated above, these are likely misidentified (and see Koch et al. (2015)). As such, it will likely be important to monitor the spread of B. bimaculatus in Canada and the United States outside of its historic range and detemine if its arrival is impacting the established bumble bee fauna (e.g. Biella and Galimberti (2020), Ghisbain et al. (2021)). Equally important will be the continued digitisation and verification of historic collections from museum collections; it is quite possible that some species may be found outside their documented ranges (e.g. Klymko and Sabine (2015)). (Gibbs et al. 2023) and Maritime Canada (Sheffield et al. 2003) and Newfoundland (Hicks et al. 2018) since the 1990s is likely due to the use of commercially available colonies in Canada (e.g. Whidden (1996), Van Westendorp and McCutcheon (2001)). Under both scenarios (i.e. natural spread and introduction), the impacts that non-native and recently-arriving native bumble bee species may have on local populations is of concern and should be monitored. | 2023-06-02T15:11:56.329Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "22881c5f7ed19b12176064d5f5746ea6f1768374",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3897/bdj.11.e104657",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31d6e642de3b4aefcf6c382b4f2178c561ff54a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |