A highly selective and sensitive chemiluminescent probe for leucine aminopeptidase detection in vitro, in vivo and in human liver cancer tissue
Leucine aminopeptidase (LAP) is involved in tumor cell proliferation, invasion, and angiogenesis, and is a well-known tumor marker. In recent years, chemiluminescence has been widely used in the field of biological imaging owing to its high sensitivity and excellent signal-to-noise ratio. Here, we report the design, synthesis, and evaluation of the first LAP-activated chemiluminescent probe for LAP detection and imaging. The probe initially showed no chemiluminescence but produced extremely strong chemiluminescence after release of the dioxetane intermediate in the presence of LAP. The probe had high selectivity over other proteases and higher signal-to-noise ratios than commercial fluorophores. Real-time imaging showed that the chemiluminescence was remarkably enhanced at the tumor sites of mice after the probe was injected. Furthermore, the chemiluminescence of this probe in the cancerous tissues of patients was markedly enhanced compared with that in normal tissues. Taken together, this study has developed the first LAP-activable chemiluminescent probe, which could potentially be used in protein detection, disease diagnosis, and drug development.
Introduction
Cancer is a major disease that seriously endangers human health and life and imposes great suffering and burden on families and society. 1 Early cancer detection, diagnosis, and treatment can therefore remarkably reduce mortality and increase cure rates for patients with cancer. 2 Enzymes are an important class of biomarkers for tumor diagnosis and prognosis. 3 Certain aminopeptidases are highly expressed in many malignant tumors compared with normal tissue. 4 Leucine aminopeptidase (LAP; EC 3.4.11.1) is one of these enzymes and belongs to the M1 and M17 peptidase families; it is an important protein that catalyzes the hydrolysis of the N-terminal leucine residue of a protein or peptide. 5 LAP is overexpressed in malignant tumor cells and is involved in tumor cell proliferation, invasion, and angiogenesis (e.g., HepG2 cells overexpress LAP). 6 In medical diagnosis, LAP can be used as a cancer-related biomarker for tumor tracking. Therefore, the development of highly sensitive and selective in situ detection methods that target LAP is of great importance for medical diagnosis and pathophysiology.
LAP levels can be detected via several methods, but the efficient tracking of LAP activity in vitro and in vivo remains a challenge. Among the methods for tracking LAP in vivo, L-leucine-p-nitroaniline ultraviolet detection is not suitable for real-time tracking because of its low sensitivity and poor stability. 7 In addition, fluorescent probes for monitoring LAP activity have greatly limited biological applications because they present disadvantages such as spectral crossover, autofluorescence background interference, photobleaching, and epidermal scattering under excitation by an external light source. 8 Therefore, chemiluminescence detection methods that can overcome these disadvantages should be developed. 9 Chemiluminescence has developed into a promising sensing and imaging tool because it does not require excitation by an external light source, is free of light scattering and autofluorescence interference, rapidly achieves extremely high signal-to-background ratios, and improves imaging sensitivity. 10 The phenoxy 1,2-dioxetane luminophore (Schaap's dioxetane) is an excellent backbone for the construction of highly sensitive chemiluminescent probes. 11 Some intriguing studies have aimed to improve and extend the use of Schaap's probes for the monitoring of various chemical and biological processes. Several activable chemiluminescent probes have been developed for the detection of small molecules 12 (e.g., H2S, H2O2, and other analytes).
Results and discussion
Design and synthesis of probes 1 and 2
The general design of the LAP-activable chemiluminescent probe is shown in Fig. 1. Probe 1 was designed by caging a LAP recognition substrate, an acryl-substituted phenoxy 1,2-dioxetane luminophore (Int 4-1), and a self-immolative linker, p-aminobenzyl alcohol (PABA). According to the relevant literature, L-leucine-containing substrates are highly specific for LAP and have been used in the design of a variety of fluorescent probes. 6 The luminophore, Int 4-1, emits intense chemiluminescence for in vivo imaging under physiological conditions. The use of PABA to link the L-leucine substrate and the Int 4-1 luminophore might help reduce the steric hindrance of the probe and facilitate interaction with the narrower and deeper LAP active site. 16 After the amide bond was hydrolyzed by LAP, probe 1 was converted into Int 3-1, accompanied by a spontaneous 1,6-elimination within PABA, which departs at physiological pH (pH 7.4), to generate Int 4-1. Subsequently, the peroxide bond in Int 4-1 was cleaved via a chemical excitation process, which generates the fluorescent product 6-1 and chemiluminescence at about 550 nm. Thus, LAP can selectively switch on the yellow-green chemiluminescence of probe 1, which allows the sensitive detection of LAP activity in vitro and in vivo. The near-infrared (NIR) region is considered superior for in vivo animal imaging. NIR-emitting chemiluminescence is preferred for in vivo imaging because of its deeper penetration and lower light scattering compared with other wavelengths. 17 We constructed a NIR-emitting chemiluminescent probe (probe 2) by applying a similar design approach. The dicyanomethyl chromone was introduced at the ortho position of the phenol as an acceptor to extend the conjugated π-electron system of the phenolic luminophore, which results in the red-shift of its emission wavelength. 18 Probe 1 was successfully synthesized and characterized (Fig. 2). Compound 5 was synthesized according to the reported method. 6 Compound 6, with a chlorine at the C2 position and an electron-withdrawing acrylic substituent at the C6 position, was synthesized according to the method reported by Shabat's group. 13 The direct reaction between compounds 5 and 6 affords compound 7. The protecting groups of compound 7 were subsequently removed using ZnBr2 to prepare compound 8. Compound 8 was oxidized by singlet oxygen (1O2) to prepare probe 1. Then, probe 2 was successfully synthesized and characterized (see ESI, Scheme S1†).
Spectroscopic responses of probes 1 and 2 to LAP in vitro
The responses of probes 1 and 2 (10 μM) to LAP (100 U L⁻¹) were first investigated. After probe 1 was incubated with LAP, a new UV absorption peak was observed at approximately 400 nm, accompanied by strong fluorescence emission at about 560 nm. These UV and fluorescence spectra were identical to those of the fluorescent product 6-1 (Fig. 3A). The subsequent high-performance liquid chromatography (HPLC) analysis (1 h incubation of probe with LAP, Fig. 3B) showed that probe 1 (TR = 5.13 min, Fig. S1†) was mostly converted into the fluorescent product 6-1 (TR = 4.3 min, Fig. S2 and S3†). The synthetic route to product 6-1 is illustrated in Scheme S2.† Probe 1 did not initially show chemiluminescence but displayed clear chemiluminescence at 550 nm after incubation with LAP at 37 °C for 10 min (Fig. 3C, inset). The chemiluminescence kinetic curve of probe 1 after incubation with LAP shows a rapid increase in the signal, which reached the maximum at approximately 20 min (Fig. 3D). However, the absorption, fluorescence, and chemiluminescence spectra of probe 2 showed no change before and after incubation with LAP (Fig. S4† and 3C).
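A rise-and-fall kinetic profile of this kind is what two consecutive first-order steps would produce: enzymatic uncaging of the probe followed by chemiexcitation-driven decomposition of the dioxetane. The sketch below is a minimal illustrative model, not a fit to the data reported here; both rate constants are assumptions, chosen only so that the simulated peak lands near the observed ~20 min.

```python
import numpy as np

# Toy model: caged probe --k1--> emissive dioxetane --k2--> spent dye + photon.
# For unit starting probe, the emissive intermediate follows
#   B(t) = k1/(k2 - k1) * (exp(-k1*t) - exp(-k2*t)),
# and the instantaneous light output is proportional to k2 * B(t).
k1 = 0.08  # min^-1, illustrative uncaging rate (assumption, not fitted)
k2 = 0.03  # min^-1, illustrative emissive decay rate (assumption, not fitted)

t = np.linspace(0.0, 120.0, 1201)  # minutes
b = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
light = k2 * b  # relative chemiluminescence intensity over time

t_max = np.log(k2 / k1) / (k2 - k1)  # analytic position of the peak
print(f"modelled signal peaks at ~{t_max:.0f} min")   # ~20 min with these rates
print(f"peak relative intensity: {light.max():.3f}")
```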
The computer docking simulation results showed that the probe 1 molecule easily reached the Zn ion coordination center through the open hydrophobic cavity of LAP, whereas probe 2 cannot fit into the catalytic active center because of steric hindrance (Fig. 3E). In addition, probe 1 forms three hydrogen bonds with LAP amino acid residues (Lys264, Met272, Asp275), and the amide bonds are potentially hydrolyzed under the catalysis of these amino acids and the Zn ion, which finally releases a bright chemiluminescence signal. 19 Therefore, LAP could effectively activate probe 1 to produce significant chemiluminescence, which was investigated subsequently. The analytical conditions for the LAP assay, including pH and temperature, were studied. The hydrolysis of probe 1 by LAP at different temperatures was investigated first. As shown in Fig. S5,† the chemiluminescence at 550 nm was remarkably enhanced as the temperature increased from 25 °C to 37 °C in the presence of LAP. This result indicates that, within the experimental temperature range, LAP shows greater activity and stronger catalytic hydrolysis of probe 1 at higher temperatures. The effect of pH on the LAP activation of probe 1 was also investigated. Results showed that the maximum chemiluminescence was observed when probe 1 was incubated with LAP at pH 6.0 to 8.0, which is close to the pH of the extracellular environment (6.5 to 7.4, Fig. S6†). Therefore, probe 1 is suitable for detecting endogenous LAP under biologically relevant conditions.
After the chemiluminescence response of probe 1 toward LAP was confirmed, the sensitivity of the probe to LAP was further determined. We compared the sensitivity of probe 1 to that of a commercially available fluorogenic probe, Leu-AMC (the synthetic route is illustrated in Scheme S3†), which fluoresces at 450 nm after cleavage by LAP. Different LAP concentrations were used to plot the signal-to-noise (S/N) ratio against the enzyme concentration on a logarithmic scale for the direct comparison of the sensitivity of probe 1 and Leu-AMC (Fig. 4A). Remarkably, probe 1 exhibited a limit of detection (LOD) of 0.008 U L⁻¹, whereas Leu-AMC detected LAP with a LOD of 0.22 U L⁻¹ (Fig. S7†). The enhanced sensitivity of probe 1 (27.5-fold) clearly demonstrates the advantage of our chemiluminescent substrate over currently existing fluorescent substrates for LAP detection assays. Additionally, the LOD of probe 1 was much lower than those of other reported fluorescent probes (Table S1†). The chemiluminescence images of probe 1 incubated with different LAP concentrations were acquired using the IVIS Lumina XR III system (Fig. 4B). The chemiluminescence images became brighter as the LAP concentration increased.
Fig. 3 (A) Absorption (dashed line) and fluorescence (solid line) emission spectra of probe 1. Probe 1 (10 μM) was incubated with or without LAP (100 U L⁻¹) in enzyme assay buffer at 37 °C for 6 h. (B) HPLC analysis of probe 1 (10 μM) before (black) and after (red) incubation with LAP (100 U L⁻¹) in enzyme assay buffer (pH = 7.4) at 37 °C for 1 h. (C) Chemiluminescence spectra and images (inset) of probes 1 and 2 (10 μM). The probes were incubated with or without LAP (100 U L⁻¹) at 37 °C for 10 min. (D) Chemiluminescence kinetic profiles of probes 1 and 2 (10 μM) after incubation with or without LAP (100 U L⁻¹) at 37 °C for 6000 s. (E) Low-energy binding models of probes 1 (cyan) and 2 (green) bound to the LAP (PDB: 2hc9) interface.
The S/N ratios of probe 1 and Leu-AMC after reaction with LAP for 1 h were measured. Probe 1 had higher S/N ratios than Leu-AMC and achieved S/N values of approximately 1260 in less than 20 min (Fig. 4C), whereas Leu-AMC had S/N ratios consistently under 43. The superior sensitivity and high S/N ratios (more than 30-fold) of probe 1 clearly prove the advantage of the chemiluminescence modality over the fluorescence modality for diagnostic assays.
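As a rough illustration of how an LOD of the kind quoted above can be derived from a calibration curve, the sketch below applies the common 3σ/slope convention to hypothetical data. The paper plots S/N against enzyme concentration and does not state its exact LOD criterion here, so both the criterion and every number in this sketch are assumptions for illustration only.

```python
import numpy as np

# Hypothetical calibration of chemiluminescence signal vs LAP activity;
# none of these values come from the paper.
lap_activity = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])              # U L^-1
signal       = np.array([12.0, 160.0, 310.0, 600.0, 1500.0, 3000.0])  # a.u.

slope, intercept = np.polyfit(lap_activity, signal, 1)  # linear calibration

# Spread of replicate blank readings (also hypothetical) sets the noise floor.
blanks = np.array([11.0, 13.0, 12.0, 10.0, 14.0])
sigma_blank = blanks.std(ddof=1)

lod = 3.0 * sigma_blank / slope  # 3*sigma/slope convention
print(f"estimated LOD ~ {lod:.3f} U L^-1")
```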
The specific identification of target analytes is one of the important indicators used to evaluate the usability of probe 1. Next, the specific response of probe 1 to LAP was tested. When probe 1 (10 μM) was incubated with amino acids (e.g., Hcy, L-Cys, Glu), inorganic salts (e.g., KCl, Na2S, Al2(SO4)3), and enzymes (β-galactosidase, cathepsin B, alkaline phosphatase, pyroglutamyl aminopeptidase I, LAP), only incubation with LAP resulted in chemiluminescence (Fig. 4D, S8†). Ubenimex (Ube) is a LAP inhibitor that inhibits enzyme activity by specifically binding to the Zn ion coordination catalytic center of LAP. 5 Ube treatment dramatically suppressed the chemiluminescence intensity, which indicates that probe 1 is specific for LAP.
Detection of LAP activity in living cells
Considering that probe 1 shows an ideal optical response for LAP detection in vitro and low toxicity to HepG2 and LO2 liver cells (Fig. S9†), probe 1 was applied to distinguish normal cells from liver cancer cells by cell imaging. HepG2 cancer cells incubated with probe 1 showed gradually enhanced chemiluminescence, and the maximum signals were achieved at around 30 min. By contrast, the chemiluminescence remained very weak over the course of incubation when cells were pretreated with Ube (Fig. S10A†). Probe 1 was incubated with different amounts of LO2 and HepG2 cells for 30 min, and the chemiluminescence images of these wells were acquired (Fig. 5A). The chemiluminescence images became brighter with increasing cell number, and the images of HepG2 tumor cells were much brighter than those of LO2 cells (Fig. 5B). Notably, a good linear relationship was observed between the chemiluminescence intensity and cell number from 2500 to 40,000 cells per well (Fig. S10B†). These results show that probe 1 can detect LAP activity in living cells in real time and distinguish tumor cells from normal cells through sensitive chemiluminescence imaging.
Probe 1 was incubated with LO2 cells, HepG2 cells, and HepG2 cells pretreated with Ube (40 μM) for 1 h. Results showed that the fluorescence intensity in HepG2 cells was remarkably enhanced compared with those in LO2 and Ube-pretreated HepG2 cells (Fig. 6 and S11†). Therefore, the change in fluorescence intensity is caused by endogenous LAP in cells, which indicates that LAP expression in HepG2 cells is higher than that in LO2 cells. Moreover, Ube can inhibit LAP activity in HepG2 cells, confirming the LAP-specific activation of probe 1 at the cellular level.
A subcellular localization experiment was performed to study where probe 1 mainly accumulates in cells after enzyme activation. The results showed that in HepG2 cells, the fluorescence signal activated by probe 1 overlaps with the lysosome dye signal (Fig. S12†). This pattern is similar to the fluorescence distribution of the fluorescent product 6-1 directly incubated with HepG2 cells. A possible explanation is that the fluorescent product 6-1 released by LAP has exposed hydroxyl groups and interacts with the weakly acidic environment of lysosomes, which results in the accumulation of a large amount of the hydrolyzed product 6-1 in the lysosomes.
LAP detection in liver cancer model
In vivo real-time imaging offers a powerful tool for accurately diagnosing disease and suspicious lesions with valuable spatiotemporal precision. Current fluorescent probes for imaging LAP activity are not suitable for in vivo experiments because they are disturbed by intrinsic autofluorescence background signals. Accordingly, considering the prominent performance of the LAP probe in cellular chemiluminescence imaging, we examined the applicability of probe 1 for the in vivo real-time visualization of endogenous LAP activity in HepG2 tumor-bearing mice. Chemiluminescence images were collected at different times after the in situ injection of PBS, probe 1, or probe 1 with Ube pre-injection. The experimental results showed that the chemiluminescence signal responded rapidly to LAP activity and reached the maximum 10 min after probe 1 was injected into the tumor region (Fig. 5C and S13†). The region treated with Ube exhibited a low chemiluminescence signal under the same conditions. Only a very weak chemiluminescence signal was observed in the PBS control group. Therefore, probe 1 can be used for LAP imaging in living tumors and the real-time monitoring of LAP activity in situ.
Tissue samples from tumor-bearing mice were prepared and analyzed using probe 1 to further confirm its usability in complex biological systems. The results in Fig. 5D are consistent with the measurement trend of Leu-AMC. A higher chemiluminescence enhancement, which indicates a higher LAP level, was obtained in the serum, liver, and tumor tissues than in other tissues. These results show that probe 1 has excellent chemiluminescence performance, accuracy, and sensitivity for LAP analysis.
LAP detection in human tissue samples
Probe 1 was used to detect LAP in human tissue samples to determine whether it can distinguish normal tissue from liver cancer tissue. Chemiluminescence images were acquired after probe 1 was incubated with different concentrations of the supernatant (10% tissue homogenate) for 10 min at 37 °C in enzyme reaction buffer (Fig. 7). As shown in Fig. 7A, an intense chemiluminescence signal was observed after probe 1 was incubated with the supernatant of 10% liver cancer tissue homogenate, whereas only weak chemiluminescence was detected in normal tissues. LAP activity in human tissue samples (normal and liver cancer tissues) was then quantified using probe 1. The results are consistent with the measurement trend of Leu-AMC, which indicates that probe 1 is a reliable detection method that can be used to distinguish normal tissue from liver cancer tissue and detect LAP activity (Fig. 7B).
Conclusions
In summary, we designed and synthesized a LAP-activable chemiluminescent probe and proved its mechanism and specificity for LAP detection. The sensitivity of the chemiluminescent probe for LAP detection was greatly improved compared with traditional fluorescent probes. Furthermore, our study shows that probe 1 can detect LAP-related cells in vitro and can be used to image LAP in living tumors and monitor its activity in situ in real time. In addition, the chemiluminescence of the probe in liver cancer tissues was remarkably enhanced compared with that in normal tissues, which enables the differentiation of liver cancer tissues from normal tissue through endogenous LAP detection. Therefore, the successful application of probe 1 makes it a valuable imaging tool for LAP-related physiological and pathological processes and medicine. We anticipate that the design strategy proposed in this study could be broadly used for the development of other protease probes for protein detection, disease diagnosis, and drug discovery.
Data availability
All experimental supporting data and procedures are available in the ESI.†
Conflicts of interest
There are no conflicts to declare.
Factors associated with weak positive SARS-CoV-2 diagnosis by reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR)
During the COVID-19 pandemic, the reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) assay has been the primary method of diagnosis of SARS-CoV-2 infection. However, RT-qPCR assay interpretation can be ambiguous, with no universal absolute cut-off value to determine sample positivity, which particularly complicates the analysis of samples with high Ct values, or weak positives. Therefore, we sought to analyse factors associated with weak positive SARS-CoV-2 diagnosis. We analysed sample data associated with all positive SARS-CoV-2 RT-qPCR diagnostic tests performed by the Victorian Infectious Diseases Reference Laboratory (VIDRL) in Melbourne, Australia, during the Victorian first wave (22 January 2020–30 May 2020). A subset of samples was screened for the presence of host DNA and RNA using qPCR assays for CCR5 and 18S, respectively. Assays targeting the viral RNA-dependent RNA polymerase (RdRP) had higher Ct values than assays targeting the viral N and E genes. Weak positives were not associated with the age or sex of the individuals sampled, nor with reduced levels of host DNA and RNA. We observed a relationship between Ct value and time post-SARS-CoV-2 diagnosis. Weak positive (high Ct value) SARS-CoV-2 results were not associated with any particular bias, including poor biological sampling.
INTRODUCTION
Undoubtedly, the reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) assay is the frontline diagnostic test for COVID-19. Even though it is extensively used, the qPCR diagnostic test has several drawbacks. Firstly, viral load decreases as the disease progresses, and thus its detectability via the RT-qPCR test also decreases. 1 This complicates result interpretation in later stages of the disease. Additionally, the interpretation of RT-qPCR results is ambiguous, as there is no universal absolute cycle threshold (Ct) cut-off value to determine whether a sample is positive or negative. The background is assay dependent, and thus the cut-off value differs between assays. A comprehensive list of SARS-CoV-2 RT-qPCR-based diagnostic assays is provided in Supplementary Table 1 (Appendix A). Therefore, different tests can interpret the same sample differently when the Ct value is high. Despite this, several studies have investigated the use of Ct values as a proxy for severity of disease and infectiousness. Correlations have been reported between Ct values (indicative of viral load) and severity of disease and mortality. 2-4 Importantly, RT-qPCR cannot distinguish between the presence of actively replicating infectious virus particles and the non-infectious nucleic acid remnants of dead virus, complicating the interpretation of RT-qPCR results. Detecting replication-competent virus requires culturing the virus; however, the method is cumbersome and requires biosafety level III facilities, which are not widely available. Therefore, alternate measures need to be considered to determine the infectious status of a person. Bullard et al. reported that infectious virus could not be cultured from samples with higher Ct values or longer times since symptom onset, suggesting that Ct values may help infer infectiousness. Another inaccuracy associated with RT-qPCR assays is the occurrence of false negative results. In addition to the analytical sensitivity of the diagnostic assay, several biological factors may contribute to false negatives, such as the timing of sampling, infection stage, presence of PCR inhibitors, inappropriate sample type, suboptimal biological sampling, low viral load, and variability in viral shedding. 9-14 Whether these same factors play a role in weak positives remains unknown. Therefore, understanding the factors behind weak positives in SARS-CoV-2 diagnostic tests and determining the infectious status of such samples is critical for determining the significance of a weak positive diagnosis. To that end, we analysed all positive diagnostic tests performed by the Victorian Infectious Diseases Reference Laboratory (VIDRL) over the period of 22 January 2020 to 30 May 2020 for the incidence of weak positives and potential factors associated with weak positive testing.
METHODS
Source of data and comparison of gene target sensitivity
COVID-19 screening data, from 22 January 2020 to 30 May 2020, were obtained from VIDRL as a large database file that outlined several aspects such as the age, sex, outcome, gene target(s) tested, Ct values, and additional comments such as recent travel history. The majority of sample types were classified as 'nose and throat swabs' or 'nasopharyngeal swabs' (Supplementary Table 2, Appendix A). An in-house RT-qPCR assay was used to determine the presence of the SARS-CoV-2 RdRP, N or E gene. 15 Bovine viral diarrhoea virus RNA was spiked into each sample to control for nucleic acid extraction, reverse transcription and PCR inhibition. A sample was declared positive when two assays, testing for different gene targets (the RdRP gene target was the primary screen and the N or E gene was used as a confirmatory screen), yielded a positive result (with a Ct value below 45). 16
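A minimal sketch of the two-target reporting rule just described, with hypothetical function names; the Ct cut-off of 45 comes from the text, and a missing Ct value is treated as no amplification.

```python
from typing import Optional

CT_CUTOFF = 45.0  # reporting cut-off described in the text

def target_detected(ct: Optional[float]) -> bool:
    """A gene target counts as detected when a Ct below the cut-off was
    recorded; None means no amplification was observed."""
    return ct is not None and ct < CT_CUTOFF

def call_sample(rdrp_ct: Optional[float],
                n_ct: Optional[float] = None,
                e_ct: Optional[float] = None) -> str:
    """Two-target rule: RdRP is the primary screen; the N or E gene is the
    confirmatory screen. Both must be detected to report a positive."""
    primary = target_detected(rdrp_ct)
    confirmatory = target_detected(n_ct) or target_detected(e_ct)
    return "positive" if primary and confirmatory else "not detected"

print(call_sample(rdrp_ct=33.2, e_ct=31.8))  # positive
print(call_sample(rdrp_ct=44.1))             # not detected (no confirmation)
```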
Analysis of weak positives
For the analysis of weak positive samples, three Ct cut-off values were chosen based on a review of several RT-qPCR-based assays that have been authorised by the US Food and Drug Administration (FDA) for emergency use (Supplementary Table 1, Appendix A). These cut-offs were Ct values of 36, 38, and 40. Based on the three Ct cut-off values, the data were sorted and evaluated for the incidence of weak positive samples and different parameters such as age, sex, and sample type (for the RdRP gene target). Weak positives were defined as samples with an observed Ct value greater than the chosen Ct cut-off.
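The following sketch illustrates this definition by tabulating weak positives at each of the three cut-offs; the Ct values in the array are hypothetical examples, not study data.

```python
import numpy as np

# Hypothetical RdRP Ct values for positive samples (illustration only).
rdrp_cts = np.array([18.3, 25.1, 31.0, 33.3, 36.4, 37.2, 38.9, 39.5, 40.6])
cutoffs = (36, 38, 40)

for cutoff in cutoffs:
    weak = rdrp_cts > cutoff  # weak positive: observed Ct above the cut-off
    print(f"Ct cut-off {cutoff}: {weak.sum()}/{rdrp_cts.size} weak positives "
          f"({100 * weak.mean():.1f}%)")
```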
Longitudinal sampling
Samples were included for longitudinal sampling if a minimum of three independent tests were performed after a positive test in the same individual. Follow-up tests were only included if they were nasopharyngeal samples or nose and throat swabs.
Quantification of CCR5 and 18S copies
Sample quality was assessed by quantifying host RNA and DNA via the 18S gene and the CCR5 gene, respectively. Quantification of 18S RNA copies and CCR5 DNA copies was performed by qPCR as we have described previously. 17-20 All samples were run in triplicate wells and an average was taken.
Statistical analysis
Paired analysis was conducted between different gene targets (RdRP gene versus E gene, RdRP gene versus N gene, and E gene versus N gene) using the Wilcoxon matched-pair signed-rank test to compare the sensitivities of these gene targets. Change in Ct value over time after the first positive test was assessed via linear regression analysis. Comparison of Ct values before and after the first negative test for discordant testing was made using a paired Student's t-test. The comparison of the average number of 18S copies for weak positives and the other positive samples, for each Ct cut-off, was performed using an unpaired t-test (Welch's t-test). The Mann-Whitney U test was performed to compare the average number of CCR5 copies for weak positives and the other positive samples. Spearman correlation analysis was conducted to determine the correlation between the average number of 18S and CCR5 copies and the Ct value. Visualisation of data and the data analysis were performed in GraphPad Prism (Version 8.4.3, GraphPad Software, USA).
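The study used GraphPad Prism; the sketch below simply mirrors the named tests with SciPy on synthetic stand-in data, to make the analysis pipeline concrete. All arrays, sample sizes, and effect sizes here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measurements described above.
rdrp = rng.normal(30.0, 4.0, 100)           # RdRP Ct values
e_gene = rdrp - rng.normal(1.5, 1.0, 100)   # paired E-gene Cts, slightly lower

# Paired non-parametric comparison of gene-target sensitivities.
_, wilcoxon_p = stats.wilcoxon(rdrp, e_gene)

# Welch's unpaired t-test: 18S copies, weak positives vs other positives.
weak_18s = rng.lognormal(6.0, 0.5, 30)
other_18s = rng.lognormal(5.5, 0.5, 70)
_, welch_p = stats.ttest_ind(weak_18s, other_18s, equal_var=False)

# Mann-Whitney U test: CCR5 copies, weak positives vs other positives.
_, mwu_p = stats.mannwhitneyu(rng.lognormal(4.0, 0.6, 30),
                              rng.lognormal(4.0, 0.6, 70))

# Spearman correlation: host-gene copies vs Ct value.
rho, rho_p = stats.spearmanr(rng.lognormal(6.0, 0.5, 100), rdrp)

print(f"Wilcoxon p={wilcoxon_p:.3g}, Welch p={welch_p:.3g}, "
      f"Mann-Whitney p={mwu_p:.3g}, Spearman rho={rho:.2f} (p={rho_p:.3g})")
```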
RESULTS
Analysis of SARS-CoV-2 diagnostic testing
VIDRL conducted a total of 77,650 RT-qPCR tests between 22 January and 30 May 2020. Of these, 1792 tests (2.31%) were positive for SARS-CoV-2 based on detection of either RdRP, N, or E gene wherein the RdRP RT-qPCR assay was used as the primary screen and the N or E gene assay was used as the confirmatory screen. We compared the Ct values for RdRP, E, and N genes from positive samples using a paired non-parametric test and observed that the RdRP assay had significantly higher Ct values than assays for the E gene (p<0.0001) and for the N gene (p<0.0001) (Fig. 1). The Ct values for the E and N gene assays were not statistically significantly different (p=0.5989, Fig. 1). These data suggest that the N gene and E gene RT-qPCRs may be more sensitive than the RdRP gene assay.
Using Ct cut-offs of 36, 38, and 40, which are within the range of the suggested cut-offs for several different FDA-authorised COVID-19 RT-qPCR assays (Supplementary Table 1, Appendix A), we determined that the RdRP gene assay showed the highest number of weak positives (observed Ct > Ct cut-off value) compared with the N and E genes (Table 1). Therefore, further analysis of weak positive samples focused on the RdRP gene, which offered the largest available sample size of the three genes.
Factors associated with high Ct values (weak positives)
Based on the three Ct cut-off values, the data for weak positive samples were evaluated for a range of different parameters. When we compared weak positive samples and the other positive samples, we did not detect any significant differences in the sex (Supplementary Fig. 1, Appendix A) or age (Supplementary Fig. 2, Appendix A) of the individuals tested, suggesting these factors were not relevant to weak positives. We then sought to determine whether the quality of the sample was associated with a weak positive. We utilised host gene DNA and RNA quantities as a measure of sampling depth. 12 The amount of host RNA was determined by ribosomal RNA 18S quantification, and the amount of host DNA was determined by CCR5 quantification. 15,21 We observed that there were significantly more 18S copies in the weak positives compared with the other positive samples for each Ct cut-off value (p<0.05) (Fig. 2A-C). We observed no difference between weak positives and other positive samples in the number of CCR5 copies for each Ct cut-off value (Fig. 2D-F). Furthermore, we observed no association between the quantity of 18S copies and Ct value (Fig. 2G), or between the quantity of CCR5 copies and Ct value (Fig. 2H). Taken together, these results suggest that poor sampling is not a factor in whether a positive sample has a low or high Ct value.
Longitudinal sampling
The COVID-19 screening data had several instances where the same individual (n=42 individuals) was tested multiple times over a period of days. We used this data set to determine whether the time since the first positive test was a factor in the Ct value of a positive test. We indeed found a relationship between time post first positive test and Ct value, with the Ct value increasing over time (p=0.001 and r²=0.6550, Fig. 3).
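A minimal sketch of this kind of regression using SciPy; the day and Ct values below are hypothetical and merely illustrate the upward trend the study reports, not its actual r² of 0.6550.

```python
import numpy as np
from scipy import stats

# Hypothetical follow-up data: days since first positive test vs Ct value.
days = np.array([0, 2, 4, 6, 8, 10, 13, 16, 20])
ct = np.array([24.1, 26.0, 28.4, 30.2, 31.9, 34.0, 36.3, 38.1, 39.4])

fit = stats.linregress(days, ct)  # ordinary least-squares linear regression
print(f"slope = {fit.slope:.2f} Ct/day, r^2 = {fit.rvalue**2:.3f}, "
      f"p = {fit.pvalue:.2g}")
```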
In addition, in a subset of individuals (n=11/42, 26%), we observed 'blipping' of positivity, where two positive tests were separated by a negative test (Supplementary Fig. 3, Appendix A). 21 The Ct values of the positive tests that bracketed the negative were typically high (Supplementary Fig. 4, Appendix A). Prior to the negative test, the Ct value of the positive test tended to be comparatively lower (mean=36.08, SD=2.906) than after the negative test (mean=38.17, SD=3.070); however, the increase was not statistically significant (p=0.1529) (Supplementary Fig. 4, Appendix A). These data suggest that time, and likely viral clearance, contributes to high Ct value positive tests.
DISCUSSION
Instances of weakly positive samples with high Ct values are an important consideration in the diagnosis of SARS-CoV-2 via PCR. To assess factors associated with weak positive tests, we analysed SARS-CoV-2 diagnostic testing performed by VIDRL. The proportion of weak positive results (based on arbitrary Ct cut-off values) ranged from 5.44% to 19.61% for the RdRP gene, 2.54% to 24.71% for the N gene, and 4.57% to 14.46% for the E gene. Aside from an association between higher Ct value and increased time from the first positive test, we were unable to find any factors associated with weak positive SARS-CoV-2 diagnosis.
We observed no correlation between cellular RNA or DNA input and the Ct value of the SARS-CoV-2 diagnostic assay. This indicates that sub-optimal biological sampling, or the amount of cellular material in the sample, does not dictate Ct value and therefore is not a determinant of weak positive results. Interestingly, this contrasts with another study which showed that sub-optimal sampling may contribute to false-negative assays. 12 Further, our data are inconsistent with other studies that have seen associations between the Ct value of an internal control (p30 subunit of ribonuclease P, RPP30) and the Ct value for SARS-CoV-2 gene targets. 22,23 This may reflect differences in host gene expression in the respiratory tract and warrants further study. The majority of diagnostic kits use RPP30 as the host internal control (Supplementary Table 1, Appendix A), although others have used 18S, suggesting its utility in assessing biological sampling of the respiratory tract. 24,25 It remains unclear whether markers of host DNA or RNA are surrogate measures of sampling quality, as the transcriptome in the nose and throat may change during a SARS-CoV-2 infection. Several studies have reported differential expression of some host genes in the nasopharyngeal swabs of COVID-19 positive individuals compared with healthy individuals. 26-29 Amati et al. analysed the expression profiles of several SARS-CoV-2 host invasion genes and found overexpression of several host genes, specifically ACE2 and DPP4, in nasopharyngeal and oropharyngeal swabs. 29 In another study, Biji et al. reported a consistent upregulation in nasal swabs of S100 family genes (S100A6, S100A8, S100A9, and S100P), which are known to be involved in the differentiation of myeloid cells to dendritic cells and macrophages. 28,30 Studies have also shown differential expression of some host genes across infection status, age, sex, and the type of sample. 26,27 This indicates the potential of such host genes to be used as surrogate markers to distinguish between individuals with SARS-CoV-2 infection and healthy individuals. Interestingly, we observed more 18S RNA in weakly positive samples, which may indicate increased host cell turnover following a resolving infection or may be a result of immune cell infiltration.
The Ct values for samples tested for two different genes were compared, and our data suggest that the E gene and N gene assays were more sensitive than the frontline RdRP assay, while the N and E gene assays had comparable sensitivities. This finding is supported by several studies that reported a higher sensitivity of N and E gene targets over the RdRP gene. 31,32 Vogels et al. evaluated the analytical sensitivities of several primer-probe sets, including those from the China Centre for Disease Control (CDC), USA CDC, Hong Kong University (HKU), and the Charité Institute of Virology. 33 They reported that the sensitivities of these primer-probe sets were comparable; however, the analytical sensitivity for the N gene was higher than for an ORF1 gene target for the HKU and China CDC primer-probe sets, and the sensitivity of the E gene was significantly higher than the RdRP gene for the Charité primer-probe set.
In several individuals with multiple tests, we observed viral 'blipping', where a negative test was sandwiched between positive tests. The Ct values of the positive tests on both sides of the negative test tended to be high (Ct>36), indicating that the RT-qPCR test might be detecting the remnants of dead virus or cleared virally infected cells rather than viral recrudescence. Wolfel et al. demonstrated that, in the case of swabs, the viral RNA load decreased significantly after day 5 of symptoms. Furthermore, they also reported that viral sub-genomic mRNAs, found only in infected cells, were detectable only up to day 5 after the onset of symptoms. 21 In a study by Sohn et al., patient samples (nasopharyngeal swabs and saliva) with prolonged positive RT-qPCR results and rebounding Ct values were analysed to determine the presence of actively replicating SARS-CoV-2. For these samples, they reported a mean Ct value of >30 and failed to isolate actively replicating virus. 34 In another study, Manzulli et al. analysed 84 patient samples for the presence of SARS-CoV-2 using an RT-qPCR assay at the time of hospitalisation and three days after resolution of symptoms, followed by in vitro cell culture. 35 All patients were reported positive for the SARS-CoV-2 N gene after the clearance of symptoms. However, 83 of the 84 patients returned a negative cell culture result, indicating the lack of viable virus. These studies further support our finding that, in individuals where 'blipping' is observed, the RT-qPCR result is unlikely to be an indication of viral recrudescence. Perhaps RT-qPCR assays that detect subgenomic species of SARS-CoV-2 may better distinguish individuals who are shedding virus, providing greater clarity as to when individuals are infectious. The importance of understanding weakly positive SARS-CoV-2 diagnosis is further emphasised in the era of pre-existing immunity, as studies have shown that breakthrough infections in vaccinated individuals and individuals with prior infection had higher Ct values compared with primary infections. 36,37 Further, other studies have shown associations between Ct value and markers of immunity, including neutralising antibody titres. 36,38 Our study is limited by the fact that samples that tested negative were not retested; this may confound our findings, particularly in the case of viral blipping, where positive tests tended to have high Cts near the limit of detection. It is possible some of these negatives may have become positive with multiple replicate tests or the use of other gene targets.
CONCLUSIONS
A range of tests have been used for the detection of SARS-CoV-2 and, amongst them, RT-qPCR has been the most extensively used. The occurrence of, and factors behind, weak positives with high Ct values are of direct relevance to diagnosis by PCR. Our data suggest that weak positives are not the result of any particular bias, including poor biological sampling.
Informed Learning: a catalyst for change in theological libraries
This paper discusses the potential interest of informed learning as a catalyst for change in theological libraries. Informed learning is a label for the relational approach to information literacy and information literacy education. It was created to highlight the importance of simultaneous attention to both information and learning when we consider people's experiences in their information-rich lives. The paper explores the idea of informed learning, suggesting that serious attention to informed learning experiences may challenge our thinking about our role as information professionals and the ways in which we serve our clients. The paper then moves to explore our current understandings of informed learning in faith communities and suggests some ways in which theological librarians can work to build informed communities.
Introduction
Who are the members of the communities served by theological libraries? What is their experience of information literacy or informed learning? What do they use information for, and how do they use it? What do they consider to be information? For each of us the responses may be somewhat different. We may be serving academic researchers and teachers who are heavily concerned with their discipline and its norms; we may be serving students who are struggling with their learning or who have a desire to go beyond the learning that is offered to them. We may be serving those who are interested in the mission field and wish to understand the people they will be working with, whether in a church context or some other setting.
What do we know about how people within the college context and in the faith communities beyond, use information? How can we use this knowledge to help people develop their information literacy experience, and become aware of others as information users also? This paper will develop these ideas through looking at the key concepts associated with the idea of informed learning, discussing research findings about informed learning in faith communities, and suggesting ways in which information professionals can build informed communities. Central to the position of the paper is the idea that librarians have a vital role to play in revitalizing and refreshing their clients' experiences of information use, as well as enhancing clients' awareness of their own information experiences and the experiences of the wider community of faith. Figure 1 is a simple illustration of the broadening perspective that can be adopted. While critical aspects of the information literacy experience in the broader academic, workplace and community contexts are also relevant, I focus here on the faith community context as one which holds distinctive characteristics which may usefully be understood not only by librarians but also by other members of the community.
The experience of effective information use: from information literacy to informed learning
Since the mid-1990s, the idea of information literacy has been interpreted as ways of experiencing effective information use, and this interpretation is the cornerstone of the relational approach to information literacy and information literacy education (Bruce 1997, 2008). Information use from a relational perspective is always for something, or some purpose, a purpose which typically involves changing one's experience or awareness of the world in some way. For example, looking at a bus timetable may change (by way of confirming or modifying) one's experience of the public transport system. Even in becoming aware of arrival or departure times, our experience is modified. Such change in awareness may be understood as learning (Marton and Booth 1997), where learning is coming to experience or be aware of the world in different, usually more complex ways. Thus the experience of information use has come to be expressed as 'using information to learn' or 'informed learning'.
Continuing this example, it may be expected that the experience of reading the bus timetable will vary between different people; some may have a positive and others a frustrating experience. These different experiences may result from focusing on different aspects of the timetable, correctly or incorrectly, or from attending to other aspects of the wider situation. What constitutes information may thus differ… for one individual information may be the arrival and departure times, for another it may be the poor quality of the web site the timetable is available on, or for another it may be the significant difference between what is stated on the timetable and what actually happens. And the 'information' may be used in different ways. In each case the information may also be experienced as objective (an unchanging fact), subjective (subject to varied interpretation) or transformational (impacting, changing the life of the person involved). These ideas, the variation in ways of experiencing information use for a particular purpose, the variation in what constitutes information and the variation in the way that information is experienced, are all central to the idea of informed learning.
We, as information professionals and educators, can transform and ground our understanding of information literacy by focussing on people's experience of using information in different contexts. People's experience of information literacy may be said to be an "intricately woven fabric, revealing different patterns of meaning depending on the nature of the light cast upon it" (Bruce 1997, 151). This brings us back to the kinds of questions with which I opened this paper:
• What does the community we serve experience as information?
• What informs them as they go about their work, study or spiritual development?
• What does the experience of information literacy look like in our communities?
• What information practices are important? How do people experience information use within these practices?
• How do people experience information use in the context of the college and beyond its walls? In the taking up of their vocation? In the wider community of faith?
[Figure 1: the broadening perspective, from the college, to the church or other institution, to the community of faith.]
In attempting to answer these questions we are exploring: what does it mean for people to be informed learners in theological, religious or faith communities?
We may of course begin to answer the question by looking at the typical experience of information literacy or informed learning in the wider academic community and exploring how this might relate to the theological college community. Figure 2 below identifies the seven key experiences of informed learning, across academic, workplace and community contexts, all of which may presumably have applicability in a theological college context. We would clearly want members of the college community, students, educators, researchers and others to experience using information to learn as:
1) using technology for information awareness and communication,
2) using sources of different kinds, including information professional support,
3) adopting information processes based on personal heuristics,
4) identifying links between information encountered and projects of interest,
5) building a knowledge base in an unfamiliar area,
6) gaining insights through drawing on intuition and the personal knowledge base, and
7) using information wisely for the benefit of others (Bruce, 2008).
Such generic insights, however, do not reveal the context-specific understandings that would be useful for a deeper appreciation of religious and faith communities.
Why talk about informed learning and why explore it in faith communities?
In the previous section of this paper I explained the initial derivation of the term informed learning. An extended description of informed learning, derived from interpretive research into informed learning over the last fifteen years, looks like this: informed learning is using information creatively, reflectively, effectively and ethically in order to learn in any of life's paths. It is learning that is grounded in the effective information practices of professional, community and academic life (Bruce 2008, viii). The label 'informed learning' has proven attractive to some people as an alternative to the phrase 'information literacy'. Informed learning as a phrase draws attention to those interpretations of information literacy that focus more on actual engagement with information for learning than on the learning of information, bibliographic or library skills. The phrase also reflects the etymology of inform (Bruce, Hughes and Somerville 2011), where inform suggests giving form or shape to…, giving life to…, giving organizing power to…, moulding; essentially, information forms learning. The use of that language is intended to reposition our thinking beyond skills to the experience of using information to learn, and to encourage us to simultaneously attend to that which is learned, as well as the experience of information use in the learning process.
Why explore informed learning in faith communities? Firstly, as suggested above, to provide contextualized insights into the experience of informed learning within such communities. Secondly, because there appear to be interesting differences between at least some faith communities and our more common interpretations of academic, workplace and civic settings. Of particular interest in faith communities is the way in which professional, community and academic life intermingle for many people. If we take the church community as an example, it is not so easy for students to separate their study from their own faith development in the church community, or for ministers to separate their professional work from academic scholarship. There seems to be an interesting integration across what have traditionally been separated academic, community and workplace contexts in our thinking about librarianship and information literacy. Thirdly, the nature of information use experiences appears significantly different from the nature of experiences of information literacy or informed learning in settings that have traditionally been explored.
For us as librarians and information researchers this reinforces the potential value of moving beyond thinking about providing training in library skills, information skills or research skills, to becoming involved in the wider arena of making people aware of how they and others use information to learn, both for their own development and also for the enrichment of those they serve or will serve.
Understanding information use in faith communities
We come now to the question of how we can develop deeper understandings of information use in faith communities.
In seeking to understand information use in the wider community, there is a wide range of information behaviour and information literacy research available. Attention to the relational model of information literacy is more focused, and interested readers should look especially for titles by Mandy Lupton (2004, 2008), Sylvia Edwards (2006) and Susie Andretta (2007a, 2007b) for important contributions. Mandy Lupton (2004) focuses on effective information use in the context of academic essay writing and later on students' experiences of the relationship between information literacy and learning (2008). Sylvia Edwards draws attention to the experiences of learning to search the internet in the academic context. Susie Andretta makes available key papers developing the relational approach and explains the application of the theory to practice in higher education.
Recently Margaret Blackmore has used the relational framework as a platform for understanding threshold concepts associated with information literacy in academic settings (Blackmore, 2010).
The importance of information and religion for research and practice has recently been highlighted by the establishment of a Center for the Study of Information and Religion at Kent State University. This Center has key goals which are vital to our understanding of information use in faith communities; see Figure 3 below.
Figure 3. Key goals of the Center for the Study of Information and Religion:
• To investigate the importance of information in the religious world
• To understand the relationship between the information seeking behaviour of clergy and the knowledge that supports them
• To advance understanding of the role of information in religious practice
Informed learning in one faith community
In addition to the wider body of knowledge around information seeking and behaviour in religious contexts, we have been privileged within the QUT Information Studies Group to be able to conduct some pilot research into the experience of informed learning in one faith community, a church community in the Uniting Church in Australia. The experiences of informed learning which we have described reveal a range of information and learning experiences of church community members. I highlight here the essential features of the experiences which may be of particular value to librarians, educators and students of theology, as well as perhaps to the wider community of church leadership and lay people. Summaries of these experiences, presented here, have recently been submitted for publication. Details are available in Gunton, Bruce and Stoodley (2011).
Informed learning in the church community is experienced as using information in five different ways. Each of these ways involves a particular meaning being associated with the informed learning experience. Each also involves a depiction of what constitutes information in that experience and an indication of how the information is used in learning.
In each of the categories below, the original manuscript notes a shift in the focus of awareness across categories one to five, from 1) God and faith, 2) relationships with people, 3) the business or operations of the church, and 4) service within the community, to 5) service outside the community. This shifting focus is an important aspect of understanding the variation in informed learning experiences. Similarly, those aspects which are not considered, or which we say are in the margin of awareness, widen across categories from 1) relationships, 2) management issues, 3) own service role in community, and 4) issues outside the immediate community, to 5) proactivity and interest in other faiths. Also, across the categories, information is experienced differently: from 1) received and personalised, 2) embedded in relationship and shared, 3) corporate and systematic, and 4) personalised and responsive, to 5) personalised and applied beyond the community. Learning is variously experienced as 1) reflective, 2) communal, 3) evidence-based, 4) kinaesthetic, and 5) kinaesthetic and responsive. These variations can be tabulated (Table 1).
Table 1. Variation in informed learning experiences across the five categories.
Category | Focus | Margin of awareness | Information experienced as | Learning experienced as
1 | God and faith | Relationships | Received and personalised | Reflective
2 | Relationships with people | Management issues | Embedded in relationship and shared | Communal
3 | Business or operations of the church | Own service role in community | Corporate and systematic | Evidence-based
4 | Service within the community | Issues outside the immediate community | Personalised and responsive | Kinaesthetic
5 | Service outside the community | Proactivity and interest in other faiths | Personalised and applied beyond the community | Kinaesthetic and responsive
Category 1: Informed learning is experienced as using information in a variety of forms to grow personal faith.
Forms of information (what constitutes information): the Bible (text), artistic (visual) expression and narrative, craft, stories, drama and song around the same message.
Learning experiences (how information is used): personal reflection and study, small group/peer discussion, informal conversation, formal education in the form of workshops, seminars and lectures, and learning by doing.
Focus: God and personal faith
While text is a common point of departure for exploring faith (for example, biblical commentary, academic research, theological treatises, personal reflections), other forms of information, such as artistic expression through art, music and drama, are significant for growing faith. People's experiences suggest that using information in the form of artistic expression, such as music, improves learning by increasing interest and improving recollection. Visual information also appears to be vital to spiritual development, complementing other forms of information and supporting a wide range of learning approaches. Relating text (for example, God's Word) to everyday life is also enhanced through narrative, stories and parables.
Category 2: Informed learning is experienced as using information generated through social and pastoral interactions to grow relationships.
Forms of information: church notices, sharing of stories and personal experiences, sharing of beliefs and faith journeys, and artistic expression through stories, music, song, drama, poetry, etc.
Learning experiences: community activities, engagement in informal and social interactions; sharing with the wider community in worship services; supporting one another to cope with life experiences; emerging use of social media.
Focus: Relationships
This experience of using information usually involves two or more members of the community engaging in face-to-face interaction. Social and pastoral relationships are developed and strengthened through sharing information, which may be personal or confidential.
Category 3: Informed learning is experienced as using collaborative approaches to engage with corporate information to develop administrative functions.
Forms of information: a broad range of documentation, shared in print, in digital format via email, or in audio as presented during meetings or conversations; minutes, quotes, invoices, reports, statistics.
Learning experiences: in groups, where the sharing of information is interactive, rather than solitary learning experiences such as reading or listening.
Focus: Business operations
Information in this experience may be accessed from external sources, for example quotes or invoices for property development or maintenance, or the documentation detailing council requirements for a church property. It may also be sourced internally, for example committee reports, the informal information shared by one minister with another, or statistical data. Information is used to manage the church, as decreed by the Word of God and implemented by the people of God. Members of the church community prefer to learn in groups, engaging in interactive information sharing rather than solitary learning.
Category 4: Informed learning is experienced as using personal interpretations of gifts and talents in response to needs within the community.
Forms of information: text, such as church notices, and verbal information distributed in face-to-face interactions, such as worship services or committee meetings, including digital information.
Learning experiences: learning by doing, using kinaesthetic styles, putting learning into practice; engaging in acts of service.
Focus: Service within the community
In this experience members of the church community use a combination of spiritual, operational and relational information. They use spiritual and introspective information to determine how they might serve. External information indicates where help or service is required.
Category 5: Informed learning is experienced as using personal interpretations of gifts and talents in response to needs beyond the church community.
Forms of information: controlled, published, printed formats. Materials provided by charities and other mission organisations. Materials prepared by the church about outreach programs. Commonly received during face-to-face interactions, such as a worship service.
Learning experiences: learning by doing, using kinaesthetic styles, putting learning into practice; engaging in acts of service.
Focus: Service outside the community
In this experience information is usually controlled and in a published, printed format. Materials received from charities and missions are acted upon.
Theological librarians building informed communities
Make building an informed community a goal. An informed community might comprise informed scholars, students and researchers; informed workers in ministry and mission; and informed members of the community of faith. An informed community engages in a wide range of information experiences and is aware of how they, and those they serve, use information to learn.
Building an informed community is building an information literate community: one in which people are empowered to engage with information for learning, where they are aware of the different kinds of information use experiences they might engage in, and where they are aware of others' experiences of information use and are able to work with those preferences.
Building an informed community in a theological college might fundamentally involve a) expanding our own awareness of peoples' experiences of using information to learn, b) helping staff and students expand their awareness of their own experiences and those of relevant community or organisational groups, and c) making it possible for people to engage with the experiences most effective for them.
a) expanding our own awareness of peoples' experiences of using information to learn
Our own experiences of information literacy will influence how we think about informed learning. Research done in public libraries indicates that, for librarians, information literacy may be experienced as the acquisition of technical skills. This might call for a significant refocus, beyond the skills themselves to the experience of learning and of using information to learn.
Others may understand information literacy as experienced organically through the process of being part of a learning community. This kind of view is perhaps more easily developed into a focus on the experience of information use (Demasson, Partridge and Bruce 2010). As we widen our own experience of the use of information for learning, we also need to consider the information experiences of members of our college communities, as well as those of the wider faith community and beyond; this may often involve including the experiences of less represented or marginalised groups. Such understanding may come from reading, observation, or talking with others, individually or in groups, according to our own preferences.
Engaging in research, individually or in groups, is also an interesting option for raising awareness of people's information experiences in our communities. An excellent handbook for those wishing to become involved in such evidence based practice is Exploring methods in information literacy research (Lipu, Williamson and Lloyd 2007).
b) helping staff and students expand their awareness of their own experiences and those of relevant community or organisational groups
Draw attention to experiences of using information to learn in the wider academic community, in faith communities and in other relevant contexts. Help people become aware of how information may be used to grow faith, develop relationships and manage the church, in order to support the spiritual wellness of the community and the cultivation of lifelong learning in faith contexts (Gunton 2011).
Focus attention on how information is used in learning experiences in the church community. Understanding these experiences, and becoming interested in people's experience of using information in the church community, may assist members of the college community both with their own information use and with their use of information in the service of others.
Use resources such as Informed Learning (Bruce 2008) to identify simple strategies for helping students become informed learners: exploring their experiences of information use as students, exploring the experience of information use in relevant communities, designing experiences of effective information use into student learning opportunities, and encouraging reflection on information use. Informed Learning also provides a range of Tips and Tricks, such as:
• Sample engagements blending information interaction with content learning
• Questions to ask yourself and academic colleagues
• Prompts to build into student assignments
• Suggestions for possible learning tasks
When we see our role as supporting learning, rather than helping people use the library, we may become interested in partnering with or leading college thinking around other aspects of supporting student learning. The QUT library, for example, has developed an integrated literacies program which brings together a focus on a wide range of student needs under the auspices of the library, including study skills, team-work skills, etc.; for the theological library I would suggest that a focus on the spiritual wellness of the college community would also be appropriate.
In this paper and elsewhere we have seen how people use information in different ways in their faith walk (Gunton 2011). Librarians can bring this kind of knowledge to the table and partner or indeed lead the process of building spiritual wellness in the college community and beyond.
In thinking about what you might be able to do, consider something small, something focused, something targeted at a key group that might benefit, and something that might involve those with responsibility for funding as well as for the well-being of the college and its library.
Patients’ well-being during the transition period after psychiatric hospitalization to school: insights from an intensive longitudinal assessment of patient–parent–teacher triads
Background The transition period after psychiatric hospitalization back to school is accompanied by various challenges, including a substantial risk for rehospitalization. Self-efficacy and self-control, as transdiagnostic variables and important predictors of coping with school demands, should be crucial factors for successful adaptation processes as well as an overall high well-being during school reentry. The present study therefore investigates how patients' well-being develops during this period, and how it is related to patients' self-control and academic self-efficacy, as well as parents' and teachers' self-efficacy in dealing with the patient. Methods In an intensive longitudinal design, daily ambulatory assessment measures via smartphone were collected with self-reports from the triadic perspective of 25 patients (M_age = 10.58 years), 24 parents, and 20 teachers on 50 consecutive school days, starting 2 weeks before discharge from a psychiatric day hospital (mean compliance rate: 71% for patients, 72% for parents, and 43% for teachers). Patients answered daily questions between five and nine o'clock in the evening about their well-being, self-control, and academic self-efficacy, and about positive and negative events at school; parents and teachers answered daily questions about their self-efficacy in dealing with the patient. Results Multilevel modeling revealed that on average, patients' well-being and self-control decreased during the transition period, with trends over time differing significantly between patients. While patients' academic self-efficacy did not systematically decrease over time, it did show considerable intra-individual fluctuation. Importantly, patients experienced higher well-being on days with higher self-control and academic self-efficacy as well as with higher parental self-efficacy. Daily teacher self-efficacy did not show a significant within-person relationship to daily patients' well-being. Conclusions Well-being in the transition period is related to self-control and self-efficacy of patients and their parents. Thus, addressing patients' self-control and academic self-efficacy, as well as parental self-efficacy, seems promising to enhance and stabilize well-being of patients during transition after psychiatric hospitalization. Trial registration Not applicable, as no health care intervention was conducted. Supplementary Information The online version contains supplementary material available at 10.1186/s40359-023-01197-0.
Background
Children and adolescents with mental health problems are psychiatrically hospitalized if outpatient treatment is not sufficient for reducing symptomatology. Inpatient hospitalizations in child and adolescent psychiatry usually last numerous weeks and are generally associated with considerable health improvement [1]. However, between 14 and 38% of psychiatrically hospitalized children and adolescents experience a rehospitalization within 12 months after discharge [2-5], with most readmissions occurring within the first 3 months after discharge [4, 6, 7]. During this period of high risk for rehospitalization, patients face the challenge of adjusting to their different post-discharge environments [8], and many demands of the transition after discharge from psychiatric hospitalization are school-related [9, 10]. The need to examine the post-discharge phase closely and to develop interventions that reduce rehospitalizations is accordingly great [11-13]. As reintegration refers to the transition after psychiatric hospitalization to school, the terms reintegration and transition are used synonymously.
Stressors during transition
Stressors during the transition after psychiatric hospitalization are consistently found in the academic, social and emotional domains, all adding to potentially preexisting difficulties [10, 14-16]. Academic stressors concern the risk of patients falling behind at school due to hospitalization [14]. This situation potentially initiates a vicious circle: the need to catch up on the missed work leads to stress, in turn worsening the symptoms, which again impacts learning, and so on [14-16]. The academic situation must be considered specifically in view of mental disorders coming along with a heightened risk for school drop-out and lower educational attainment over time [17, 18]. In addition to academic stressors, there are social stressors, such as patients reporting problems with peers, bullying, and losing friendships [16, 19]. Social stressors further include not knowing how to handle social situations, being insecure about explaining the personal absence, and concerns about stigmatization [15, 16]. Patients report being overstrained by social situations, potentially leading to withdrawn behavior and social isolation [14, 16, 19, 20]. Beyond academic and social stressors, emotional stressors exist, and even though mental health usually improves during psychiatric hospitalization, the transition can be a setback in terms of patients' emotional experiences [14]. Residual symptoms often persist or reappear after discharge, and patients must deal with transition-related anxiety, emotional instability, and nervousness [8, 16, 21]. Altogether, patients frequently report feeling emotionally overwhelmed by reentering school, which even leads to some patients not fully returning to school [14-16].
In consideration of the stressors in the academic, social, and emotional domains, it is likely that the transition experience is determined by the ability to meet transition demands and to buffer against those stressors [14]. Succeeding at school, having positive relationships and social interactions, and experiencing less emotional strain should reduce the amount of stress and thereby positively influence post-discharge adjustment and enhance well-being. Those assumptions are in line with research on primary to secondary school transitions, which shows that good school attendance and increased academic engagement, the ability to build positive and stable peer relationships, as well as control of negative emotions contribute to smooth transitions [22].
Well-being, self-control, and academic self-efficacy as important variables during transition
Well-being is the affective and cognitive judgment about how well one's life is going [23-25]. Following the World Health Organization, mental health is defined not only as the absence of symptoms of mental disorders, but also as the presence of well-being [26]. Correspondingly, patient-reported outcomes of subjective health-related quality of life have gained in importance alongside clinical indicators of specific symptoms to assess health care outcomes [27-29]. Low levels of well-being pose a risk factor for future psychopathological symptoms [30, 31], while subjective well-being and psychopathology are predictive of school functioning [32].
Children transitioning from primary to secondary school exhibit a decline in self-control over time, with less decline coming along with better post-transition adjustment [33]. Self-control denotes the capacity to regulate behavior, thoughts, and emotions, allowing one to overcome or change dominant response tendencies, and is related to a variety of beneficial outcomes [34]. Higher levels of self-control predict better functioning in the academic, social, and emotional domains [33], areas in which patients experience vast demands during the transition from psychiatric hospital to school. In the academic domain, higher levels of self-control come along with better attainment at school; in the social domain, with fewer peer problems and better interpersonal relationships; and in the emotional domain, with fewer emotional problems, better coping with stress, and overall better psychological adjustment [33, 35-37].
Regarding the risk of the vicious circle (symptoms exacerbated by the stress of catching up on missed schoolwork, which in turn impairs learning, itself a further stressor), academic self-efficacy likely prevents sliding into this circle or helps to break out of it. Academic self-efficacy is the belief about one's own capability to execute the actions necessary to reach a desired academic goal [38, 39]. The feeling of competence to master the respective academic situation determines how well a student engages and persists in learning [39]. High academic self-efficacy can be a resource by improving motivation and academic achievement [40, 41], but the transition from middle to high school comes along with a large decrease in academic self-efficacy [42].
Self-efficacy and self-control should be crucial factors for successful adaptation processes as well as an overall high well-being during reintegration after psychiatric hospitalization. The aim of the present study was to examine patients' well-being during reintegration after psychiatric hospitalization to school as well as patients' self-control and academic self-efficacy as transdiagnostic variables that may enhance patients' well-being and their ability to cope better with the transition-related stressors.
Importance of the triadic perspective
Since the individual is not independent of its context [43, 44], the social environments in which patients are embedded should be considered in addition to individual characteristics of the patient when investigating the reintegration period. The family and school environments are usually the two most important areas of life for children and adolescents, and mental health problems are often not limited to family settings but also manifest themselves in the school context. From the patient's perspective, their parents and teachers are the most important attachment figures who provide support during reintegration [45].
However, many parents experience high levels of strain during their child's inpatient hospitalization and express concerns about their parenting skills regarding the transition period after discharge [46]. Parental self-efficacy, which is the expectation about one's own ability to parent successfully, is associated with parental competence and plays a relevant role for academic, social, and emotional outcomes of a child and its adjustment to school [47-49]. In that light, parental self-efficacy can be an important resource for patients' recovery and well-being [48], as parents can help their children in coping with demands during reintegration [50]. Parents with lower self-efficacy tend to show less parental involvement [47, 48], which in turn is associated with a smaller decrease of symptomatology post-discharge [51] and an increased risk for rehospitalization for the child [6].
Teachers express that they need more knowledge and skills to deal with students returning to school after psychiatric hospitalization, as most students still show problematic behavior in the school context [21]. They report not feeling confident in their ability to manage mental health problems in class [52, 53]. Yet teachers' understanding and support are needed by patients during and after psychiatric hospitalization [16, 50]. Teacher self-efficacy is defined as the belief about one's own capability to influence and support a student's learning [54]. A high level of self-efficacy among teachers promotes a supportive environment in class, which in turn increases student motivation and academic achievement [55]. Further, teachers with higher levels of teacher self-efficacy are more likely to work persistently and engage positively with challenging students [56, 57]. Additionally, teacher self-efficacy is associated with positive teacher-student relationships [58], with the quality of these relationships being particularly important for students who are academically at risk [59]. The present study therefore complements patients' perspectives with the perspectives of parents and teachers, investigating this triad and examining self-efficacy among parents and teachers in dealing with the patient as facilitating factors for patients' transition process after psychiatric hospitalization.
Present study: investigating between- and within-person processes of patient-parent-teacher triads
Despite a few existing studies [see 10], data on school reintegration after psychiatric hospitalization are limited, particularly for younger children and children outside the United States of America [10]. The current study is, to the best of our knowledge, the first quantitative, intensive longitudinal study applying an ambulatory assessment design to examine patients' transition to school following psychiatric hospitalization, focusing on patient-parent-teacher triads. It examines patients' well-being, self-control, and academic self-efficacy, as well as parents' and teachers' self-efficacy, as important diagnosis-independent variables during the transition.
Smartphone-based data of triads were assessed in an intensive longitudinal ambulatory assessment on 50 consecutive school days starting 2 weeks before discharge. Assessing variables in patients' daily lives enhances generalizability and increases ecological validity while preventing retrospective biases [60-62]. The resulting time series data allow us to investigate within-person processes (variation within an individual) in addition to between-person differences (variation between individuals). Those within-person processes cannot be inferred from the predominant between-person designs, as evidence from the between-person level cannot validly be transferred to the within-person level [e.g., 63]. The two levels can only be separated, and hence within-person processes depicted, with an intensive longitudinal study design.
However, it is difficult to control for confounding variables as, for instance, the situations in which questionnaires are answered may differ between assessments [60]. We aimed to attenuate this by assessing and controlling for daily negative and positive events (one item each, with a dichotomous response format), as they have been found to influence daily well-being [64]. We further followed the prediction-based approach [65] and assessed patients' pre-discharge well-being, self-control, and academic self-efficacy at baseline to explore whether those variables can predict beneficial outcomes during transition. It is plausible that patients with more favorable characteristics have more latitude to deteriorate over time, but at the same time may have more resources buffering against stressors. All those considerations are reflected in our hypotheses formulated below.
Hypothesis 1 Time trend effects.
We expected patients' well-being (H1a), self-control (H1b), and academic self-efficacy (H1c) to decline on average over time during the transition period. We further expected patients to differ significantly in this trend over time. We aimed to explore whether the individual-specific time effect for each variable is associated with the respective extent of well-being (H1aa), self-control (H1ba), and academic self-efficacy (H1ca) at baseline, whereby either an attenuating or a boosting effect of higher baseline levels is conceivable.
Hypothesis 2 Between-person effects of the patients and triads.
We expected patients generally showing higher self-control (H2a) and academic self-efficacy (H2b) to also show higher levels of well-being. Further, we expected patients with parents (H2c) and teachers (H2d) of generally higher parental or teacher self-efficacy regarding the child to show higher levels of well-being.
Hypothesis 3 Within-person effects of the patients.
We expected higher well-being on days with higher self-control (H3a) and higher academic self-efficacy (H3b). That is, we expected structural dependence on the within-person level of patients' well-being and self-control as well as academic self-efficacy.
Hypothesis 4 Within-person effects of the triads.
We expected higher patient well-being on days with higher parental self-efficacy (H4a) and on days with higher teacher self-efficacy (H4b). That is, we expected structural dependence on the within-person level of patients' well-being and parents' parental self-efficacy as well as teachers' teacher self-efficacy.
Design and aim
The present study is an intensive longitudinal study applying an ambulatory assessment design to examine patients' transition to school after psychiatric hospitalization, focusing on patient-parent-teacher triads. It aims to examine patients' well-being, self-control, and academic self-efficacy, as well as parents' and teachers' self-efficacy as important diagnosis-independent variables during the transition.
Participants
Participants were recruited from 2016 to 2018 at the psychiatric day hospital for children in the Department of Child and Adolescent Psychiatry, Psychosomatics, and Psychotherapy of the University Hospital Tuebingen, Germany. During their treatment, clinic employees informed them about the study. Exclusion criteria were inability to attend school, a profound developmental disorder without language development, or a psychotic disorder. A total of 27 children, together with 24 parents and 20 teachers, were recruited. Two children dropped out during the study, resulting in a final sample of 25 children, ranging in age from 7 to 13 years (M = 10.58, SD = 1.62). The total sample description is displayed in Table 1, with primary diagnoses based on the German edition of the 10th Revision of the Classification of Mental and Behavioural Disorders (ICD-10, [66]). Patients and parents received cinema coupons as compensation, with the monetary amount ranging from 10 Euro (participation on the first 20 days) through 20 Euro (first 40 days) to 25 Euro (50 days). Teachers were compensated with a book.
Material
The present study is part of a larger one that examined environment-related predictors of successful day hospital treatment [67]. As we focused on the ambulatory assessment data related to the formulated hypotheses in the present study, only the measures used for these analyses are presented in detail.
Baseline measures
Well-being For the assessment of well-being, we used the emotional well-being subscale of the revised, self-report version for children aged 7 to 13 of the questionnaire for measuring health-related quality of life (Kid-KINDL-R, [68, 69]). The subscale consists of four items asking for the frequency of events during the last week, with response categories ranging from "never" to "all the time" on a five-point Likert scale (e.g., "During the past week, I had fun and laughed a lot."), with literature evidence of satisfying reliability (Cronbach's α = 0.68) and validity [68, 69]. For the current study, it also resulted in a still acceptable reliability (Cronbach's α = 0.50).
Self-control For the assessment of self-control, we used the brief German parent-rating version of the Self-Control Scale (SCS-K-D, [70]). We adapted the 13 items by simplifying the language so that they were suited as a self-report questionnaire for children, resulting in satisfying internal consistency for the current study (Cronbach's α = 0.84). Patients had to rate the extent to which the statements apply to them in general, from "not at all" to "totally true" on a five-point Likert scale (e.g., "I do nothing that I will regret later.").
Academic self-efficacy For the assessment of academic self-efficacy, we used an established scale consisting of seven items (WIRKSCHUL, [71]). We adapted the items by simplifying the language for children, resulting in satisfying internal consistency for the current study (Cronbach's α = 0.67). Patients had to rate the extent to which the statements apply to them in general, from "not at all" to "totally true" on a five-point Likert scale (e.g., "I can solve even complex tasks in class when I make an effort.").
App-based daily measures on the smartphone
The application used for the ambulatory assessment was movisensXS [72], running on a NEXUS 5 smartphone by LG Electronics, provided to all participants. For a complete overview of all daily items, see Additional file 1 - Daily Measures.
Well-being For the daily assessment of well-being, we developed one item globally asking how patients were overall that day ("How were you overall today?"). This is in accordance with only slightly differing single items asking for well-being, which exhibit satisfying validity and reliability in the literature [24, 73-75]. Possible answers ranged from "very bad" to "very good" on a five-point Likert scale.
Self-control and academic self-efficacy For the daily assessment of self-control and academic self-efficacy, we shortened the scales used at baseline (i.e., an adaptation of the brief German parent-rating version of the Self-Control Scale, SCS-K-D, [70], and the scale WIRKSCHUL, [71]) to four and three items, respectively. For an overview of all items see Additional file 1 - Daily Measures (e.g., self-control: "Today, I have done something I regretted later"; academic self-efficacy: "Today, I was able to solve even complex tasks in class when I made an effort"). Further, we changed the referring time frame from general to the present day. The reliabilities (Cronbach's α and McDonald's ω) for the current study are reported in Table 3 in the results section.
Negative and positive events To control for daily negative and positive events, children were asked what happened at school, with one item each ("Did something happen today at school that you thought was bad/good?") and a dichotomous response format (yes/no). Following a positive response, participants were able to indicate what exactly happened to them in a free answer format ("If something like that happened today, what was it?").
Parental and teacher self-efficacy For the assessment of parental and teacher self-efficacy, we used three items of an established scale (WIRKLEHR, [76]). We changed the referring time frame from general to the present day. Items were further adapted to fit the reference to the specific child (i.e., the/my child; e.g., parent: "Today I was able to guide my child also in problematic situations."; teacher: "Today I was convinced that I can teach the child the subject material also in problematic situations."). The reliabilities (Cronbach's α and McDonald's ω) for the current study are reported in Table 3 in the results section.
Procedure
An overview of the procedure can be seen in Fig. 1.
During the patients' stay at the psychiatric hospital, the patients and parents were informed about the study by an employee of the day hospital. In case of general interest in participation, the patients and their parents, as well as a teacher of the patient, were invited to the study. All subsequent study appointments were held by study employees of the psychiatric hospital. Verbal and written study information, including informed consent forms, was provided for parents and patients after a regular appointment at the psychiatric hospital. For teachers, this information appointment took place either by phone and post or at the hospital. After the participants gave written informed consent, an introduction appointment was made, separately for the patients with parents and for the teacher, one or two days before the school pilot period started. All participants filled out baseline questionnaires (only patients' well-being was assessed at discharge, as part of the standard diagnostics, to minimize patients' burden). A smartphone was delivered to all participants, and they were introduced to the application. A smartphone-based measurement burst of 50 consecutive school days was conducted, starting with the first day of regular school attendance of the school pilot period. The school pilot period is a phase of around 2 weeks before discharge, during which patients attend their regular school but continue to attend the psychiatric hospital afterwards. If the school pilot period proceeds successfully, patients are discharged.
Participants were prompted with an acoustic signal at half past six in the evening to answer the smartphone-based measurement questions. It was possible to work on the questions self-initiated between five and nine o'clock in the evening. Introduction and items were presented audiovisually, so that children who were not yet confident readers could have the items read aloud. Questions with a free answer format could be answered and saved by voice recording. The daily ambulatory assessment took about five minutes per day. Data were transferred from the smartphones and safely stored on the servers of the university and university hospital during the routine appointments at follow-up. The final appointment was further used for all participants to fill out follow-up questionnaires [67].
Data analyses
Multilevel modeling analyses with repeated measurements (Level 1) nested within patients (Level 2) were calculated to account for the multilevel structure of ambulatory assessment data [77]. Data were analyzed with the statistical software R [78]. We calculated multilevel models with the "nlme" package [79]. Multilevel reliability was calculated with the "reliability" function of the "semTools" package [80]. We centered the predictors on the grand mean for between-person effects and on the personal mean for within-person effects. An overview of all equations is depicted in Additional file 2 - Mixed models equations. We depict the most complex model below to describe the model building; simpler models arise by leaving out the respective predictors. That is, for the depicted model, we investigated well-being_{ij} as outcome measure for individual i (i = 1, ..., N) on day j (j = 1, ..., n_i). We built stepwise on the unconditional random-intercept-only model, first introducing the two dummy-coded (0 = no, 1 = yes), fixed time-varying Level-1 predictors of negative (γ_{10}) and positive events (γ_{20}). Then we added the fixed time-varying Level-1 predictor day (γ_{30}), which was next allowed to vary between participants (μ_{3i}). We then introduced the fixed time-varying, person-mean centered Level-1 predictors (γ_{40}, γ_{50}), giving insight into within-person relationships, which were allowed to vary randomly (μ_{4i}, μ_{5i}). After specifying all Level-1 predictors, we added the fixed time-constant, grand-mean centered Level-2 predictors (γ_{01}, γ_{02}), giving insight into between-person relationships. Hence, the intercept (β_{0i}) of each patient is modeled as a function of the mean intercept, the Level-2 predictors, and random error (μ_{0i}), and the slopes (β_{1i}, β_{2i}, β_{3i}, β_{4i}, β_{5i}) are modeled as a function of the mean slopes (γ_{10}, γ_{20}, γ_{30}, γ_{40}, γ_{50}) and, where applicable, random errors (μ_{3i}, μ_{4i}, μ_{5i}) [77]. All variables except the control variables were retained only if significant.
Altogether, model 1 includes the trend of well-being (H1a) and its relationship with patients' self-control and academic self-efficacy as within- and between-person effects, controlled for daily events. Models 2 and 3 include the trends of self-control and academic self-efficacy of the patient. Models 4 and 5 include the relationship between patients' well-being and parental and teacher self-efficacy, respectively, as within- and between-person effects, controlled for time and daily events. We separated the models for the patients' and the triads' variables due to large differences in compliance between patients, parents, and teachers, resulting in different amounts of missing data, which would cause model non-convergence if the models were not separated.
Equation for m1:
Level 1:
well-being_{ij} = β_{0i} + β_{1i}(negative event_{ij}) + β_{2i}(positive event_{ij}) + β_{3i}(day_{ij}) + β_{4i}(within self-control_{ij}) + β_{5i}(within academic self-efficacy_{ij}) + r_{ij}
Level 2:
β_{0i} = γ_{00} + γ_{01}(between self-control_{i}) + γ_{02}(between academic self-efficacy_{i}) + μ_{0i}
β_{1i} = γ_{10}
β_{2i} = γ_{20}
β_{3i} = γ_{30} + μ_{3i}
β_{4i} = γ_{40} + μ_{4i}
β_{5i} = γ_{50} + μ_{5i}
Correlations between random effects were calculated, except when the model did not reach convergence. Fixed and random effects were tested with likelihood ratio tests comparing the appropriate (nested) models, and when found to be significant, predictors were added to the model. To account for the correlation of adjacent time points, we applied a first-order autoregressive error structure (AR(1), [81]). In case of randomly varying time slopes, we calculated bivariate correlations using Pearson's r between the individual-specific trend over time and the corresponding (grand-mean centered) baseline measure, to find out whether there is a relationship between baseline measures and individual slopes. All tests assumed a significance level of α = 0.05.
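As an illustration of the centering scheme and model-building logic described above, a simplified version of model m1 is sketched below in Python. This is not the authors' code: the actual analysis used R's nlme package, and the sketch omits the AR(1) residual structure, which statsmodels' MixedLM does not support. All variable names and the synthetic data are hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for the 25-patient x 50-day design
# (all names and numbers are hypothetical).
rng = np.random.default_rng(0)
n_id, n_day = 25, 50
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_id), n_day),
    "day": np.tile(np.arange(n_day, dtype=float), n_id),
})
df["neg_event"] = rng.integers(0, 2, len(df))   # dummy-coded daily events
df["pos_event"] = rng.integers(0, 2, len(df))
df["selfcontrol"] = rng.normal(3.0, 0.5, len(df))
df["acad_se"] = rng.normal(3.0, 0.5, len(df))
df["wellbeing"] = (3.0 - 0.01 * df["day"]
                   + 0.2 * (df["selfcontrol"] - 3.0)
                   + 0.2 * (df["acad_se"] - 3.0)
                   + rng.normal(0.0, 0.5, len(df)))

# Person means carry the between-person information.
pm = df.groupby("id")[["selfcontrol", "acad_se"]].transform("mean")

# Between-person predictors: person mean, centered on the grand mean.
df["sc_between"] = pm["selfcontrol"] - df["selfcontrol"].mean()
df["se_between"] = pm["acad_se"] - df["acad_se"].mean()

# Within-person predictors: daily deviation from the person's own mean.
df["sc_within"] = df["selfcontrol"] - pm["selfcontrol"]
df["se_within"] = df["acad_se"] - pm["acad_se"]

# Random intercept plus random slopes for day and the within-person
# predictors, mirroring m1 (without AR(1) residuals).
model = smf.mixedlm(
    "wellbeing ~ neg_event + pos_event + day + sc_within + se_within"
    " + sc_between + se_between",
    data=df, groups="id",
    re_formula="~ day + sc_within + se_within",
)
result = model.fit()
print(result.summary())
```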
Compliance
We calculated compliance rates as the percentage of prompts responded to, averaged over participants.
Descriptive statistics
Descriptive statistics for the baseline measures are provided in Table 2. We computed reliability at each level by using multilevel confirmatory factor analysis [82], with Cronbach's alpha (α) and McDonald's omega (ω) depicted in Table 3. Intraclass correlations (ICCs) were computed, that is, the ratio of the between-person variance to the total variance, indicating whether multilevel analysis is adequate [77]. Within-person variability is indicated by the intra-individual standard deviation (ISD), a measure of the amplitude of fluctuation [83]. Due to the longitudinal design, we further considered the mean square successive difference (MSSD), which additionally takes temporal dependency into account, with higher scores indicating higher degrees of instability [60, 83, 84]. This is important, as individuals with the same ISDs can exhibit different MSSDs due to different amounts of temporal dependency [83]. Descriptive statistics as well as ICCs, ISDs, and MSSDs are depicted in Table 4. The answers over time for the patient variables can be seen in Fig. 2.
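To make these variability measures concrete, a minimal computation is sketched below, continuing the hypothetical data frame from the earlier sketch. The ISD is simply the within-person standard deviation, and the MSSD is the mean of squared differences between successive days; note that the paper's ICCs came from multilevel models, whereas this sketch uses a simple moment-based approximation.

```python
import pandas as pd

def isd(x: pd.Series) -> float:
    """Intra-individual standard deviation: amplitude of fluctuation."""
    return x.std(ddof=1)

def mssd(x: pd.Series) -> float:
    """Mean square successive difference: instability over time.
    Assumes consecutive rows are consecutive days; real data with
    missing days would need gap handling before differencing."""
    d = x.diff().dropna()
    return float((d ** 2).mean())

# Per-person ISD and MSSD for daily well-being, then averaged over persons.
per_person = (df.sort_values(["id", "day"])
                .groupby("id")["wellbeing"]
                .agg([isd, mssd]))
print(per_person.mean())

# ICC: between-person variance over total variance (moment-based sketch).
person_means = df.groupby("id")["wellbeing"].mean()
between_var = person_means.var(ddof=1)
within_var = df.groupby("id")["wellbeing"].var(ddof=1).mean()
print("ICC approx.:", between_var / (between_var + within_var))
```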
Multilevel analyses
To examine the assumption that data are missing at random, we investigated the relationship between missingness and baseline measures, average daily measures, gender, age, and IQ. There were no such relationships; hence, the missingness of data can be ignored [85].
Final models of the multilevel analyses are depicted in Tables 5, 6 and 7. A complete overview of all models is given in Additional file 3 - Multilevel analyses. The models m1, m4, and m5 all include well-being as the dependent variable and time as a predictor; for clarity, this time effect is reported below only for the first of these models, which is justified as the effect did not differ in significance and only slightly in magnitude across models.
Hypothesis 1: Time trend effects. We expected patients' well-being (H1a), self-control (H1b), and academic self-efficacy (H1c) to decline on average over time during the transition period. We further expected patients to differ significantly in this trend over time. We aimed to explore if the individual specific timeeffect for each variable is associated with the respective extent of well-being (H1aa), self-control (H1ba), and academic self-efficacy (H1ca) at baseline, whereby both an attenuating or boosting effect coming along with higher levels is conceivable.
There was neither a significant correlation between baseline well-being and individual-specific trends over time of well-being (r(23) = 0.20, p = 0.344; H1aa), nor between baseline self-control and individual-specific trends over time of self-control (r(21) = − 0.19, p = 0.398; H1ba), nor between baseline academic self-efficacy and individual-specific trends over time of academic self-efficacy (r(21) = − 0.19, p = 0.398; H1ca).
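Continuing the hypothetical model fit sketched in the data analyses section, such a correlation between individual-specific time trends and baseline scores might be computed as follows; the baseline column here is a stand-in, as the real baseline came from separate questionnaires.

```python
from scipy.stats import pearsonr

# Individual-specific time trend: fixed day effect plus each patient's
# random slope deviation (using `result` from the earlier sketch).
day_slopes = {
    pid: result.fe_params["day"] + re["day"]
    for pid, re in result.random_effects.items()
}

# Hypothetical per-patient baseline scores, grand-mean centered.
baseline = df.groupby("id")["wellbeing"].first()  # stand-in for a baseline scale
baseline_c = baseline - baseline.mean()

ids = sorted(day_slopes)
r, p = pearsonr([day_slopes[i] for i in ids], baseline_c.loc[ids])
print(f"r = {r:.2f}, p = {p:.3f}")
```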
Hypothesis 2: Between-person effects of the patients and triads. We expected patients generally showing higher self-control (H2a) and academic self-efficacy (H2b) to also show higher levels of well-being. Further, we expected patients with parents (H2c) and teachers (H2d) of generally higher parental or teacher self-efficacy regarding the child to show higher levels of well-being.
On the between-person level, there was a significant positive relationship between patients' well-being and self-control (m1: γ_{01} = 0.602, SE = 0.165; H2a), meaning that a child with one point more self-control than the average child experiences well-being about 0.602 points higher than the average child. There was no significant relationship between patients' well-being and academic self-efficacy (m1: γ_{02} = 0.119, SE = 0.108; H2b), parental self-efficacy (m4: γ_{01} = 0.063, SE = 0.145; H2c), or teacher self-efficacy (m5: γ_{01} = 0.376, SE = 0.215; H2d).
Hypothesis 3: Within-person effects of the patients. We expected higher well-being on days with higher self-control (H3a) and higher academic self-efficacy (H3b). That is, we expected structural dependence on the within-person level of patients' well-being and self-control as well as academic self-efficacy.
There was a significant positive within-person relationship between well-being and self-control (m1: γ_{40} = 0.178, SE = 0.064; H3a), meaning that on days with an increase of one point in self-control compared to the personal mean, well-being increased by about 0.178 points. The relationship differed significantly between patients (m1: σ_{μ4j} = 0.211; Fig. 4). There was a significant positive within-person relationship between well-being and academic self-efficacy (m1: γ_{50} = 0.207, SE = 0.057; H3b), meaning that on days with an increase of one point in academic self-efficacy compared to the personal mean, well-being increased by about 0.207 points. This relationship also differed significantly between patients (m1: σ_{μ5j} = 0.206; Fig. 4).
Hypothesis 4: Within-person effects of the triads. We expected higher patient well-being on days with higher parental self-efficacy (H4a) and on days with higher teacher self-efficacy (H4b). That is, we expected structural dependence on the within-person level of patients' well-being and parents' parental self-efficacy as well as teachers' teacher self-efficacy.
There was a significant positive within-person relationship between patients' well-being and parental self-efficacy (m4: γ_{40} = 0.138, SE = 0.051; H4a), meaning that on days with an increase of one point in parental self-efficacy compared to the personal mean, patients' well-being increased by about 0.138 points. This relationship did not differ between patients (m4: σ_{μ4j} = 0.192; Fig. 5). There was no within-person relationship between patients' well-being and teacher self-efficacy (m5: γ_{40} = 0.146, SE = 0.096; H4b), and it further did not differ between patients (m5: σ_{μ4j} = 0.001).
Qualitative reports of daily events
The statement whether a positive or negative event occurred served as a control variable with a dichotomous response format (yes/no). In case of a yes-response, the event could be specified in a free answer format. Although these events were not the focus of the present study, the qualitative reports provide insights, and some exemplary responses on positive and negative events are therefore given in the following. Among negative events, patients reported, for example, not being able to be attentive, having difficult exercises and too much homework, receiving bad grades, having social conflicts with peers and teachers, experiencing bullying, and somatic symptoms such as abdominal pain or headache. Among positive events, patients reported, for example, less difficult tasks and less homework, receiving good grades, being able to participate in class, having positive social contacts with peers, and being praised by teachers.
Discussion
The present study investigated between- and within-person processes of patient-parent-teacher triads during the transition from psychiatric hospitalization to school. The results show that over 50 consecutive school days, patients' well-being and self-control decreased on average over time (H1a, H1b). Even though patients' academic self-efficacy did not decrease on average (H1c), patients experienced instability and fluctuation of their self-efficacy over time. There was no significant relationship between the individual-specific trends over time and the extent of the same variables before the transition (H1aa, H1ba, H1ca). At the between-person level, patients with generally higher self-control experienced generally higher well-being (H2a). However, there was no significant between-person relationship between patients' well-being and patients' academic self-efficacy (H2b), parents' (H2c) or teachers' self-efficacy (H2d). Importantly, at the within-person level, patients experienced higher well-being on days with higher self-control and academic self-efficacy (H3a, H3b). Further, patients experienced higher well-being on days with higher parental self-efficacy (H4a), but not teacher self-efficacy (H4b).
The present results, especially the decrease of patients' well-being and self-control in the weeks following discharge, emphasize that reintegration can be very challenging for patients. Moreover, lower well-being increases the risk for psychopathological symptoms [30, 31] and, in adults, decreases the chance of recovery [86]. The present findings are therefore consistent with studies showing that the period immediately after discharge is associated with a heightened risk for rehospitalization [4, 6, 7]. It seems reasonable to assume that the causes for the decrease of patients' well-being are comparable to the challenges patients have reported in other studies during reintegration, such as having to make up schoolwork or experiences of exclusion [e.g., 14, 16, 19]. Evidence for these challenges can also be found in the present qualitative results regarding negative events at school. For example, in the present study, patients reported academic difficulties such as poor performance as well as social difficulties such as bullying and conflicts with peers during the transition period after discharge. Patients' self-control also declined over the course of the psychiatric hospital to school transition, as already shown in the context of school-to-school transitions [33]. Transition-related stressors are again a likely explanation for the decline in patients' self-control, as an increase in stress over middle childhood prospectively predicts a decrease in self-control [87]. Interestingly, patients' academic self-efficacy did not exhibit a decline over time. As most patients attended primary school, academic demands may have been easier to meet, thereby perhaps not undermining academic self-efficacy as much. Nevertheless, it is very positive that patients' academic self-efficacy did not decline during transition; it may form a resource that patients can draw on to meet the accompanying academic demands.
Beyond the average developments, patients differed meaningfully concerning the individual trend in well-being, self-control, and academic self-efficacy over time, in accordance with individually different reports concerning school reentry [16]. A few patients in the present study even showed a positive development during reintegration, as can be seen in the individual-specific effects of well-being showing positive slopes in Fig. 3 in the results section. This makes it conceivable that factors exist which promote a successful transition. Identifying such factors by specifically looking at patients with positive transition developments could yield insights for promising future interventions to provide greater assistance to patients at risk for negative developments. For example, as concerns about emotions when considering returning to school, as well as psychological and emotional difficulties pre-discharge, come along with a less favorable post-discharge experience [16], these factors should be considered in future studies. One aim of the present study was to explain heterogeneity with pre-discharge factors predicting individual-specific trends over time. However, patients' baseline well-being, self-control, and academic self-efficacy did not turn out to be relevant predictors. Our daily measurements show that these constructs are subject to considerable fluctuations. Hence, a single measurement at baseline, being only an extract, can be a reason for the missing effect, underlining the importance of repeated longitudinal measurements.
In the present study, it turned out that self-control and academic self-efficacy can be important strengths for patients to draw upon, as on days with higher levels of self-control and academic self-efficacy, patients experienced higher well-being. This relationship holds beyond the influence of daily events. We cannot claim definitive answers about causality, but it seems plausible that on days with higher self-control and academic self-efficacy, patients are more capable of coping with academic, social, and emotional demands in the transition period. That is, patients may be more likely to make up the missed schoolwork [39], have positive interactions with others, and show better psychological adjustment [35]. This should facilitate adaptation to post-discharge environments. It is in line with findings from school-to-school transitions showing that the ability to control negative emotions and good school attendance can be protective factors against negative impacts of the transition [22].
The present study further showed that on days with higher parental self-efficacy, patients experienced higher well-being. It is very likely that, on days with higher parental self-efficacy, parents are able to support the child more consistently and help buffer against the number of stressors the child is faced with. Further, supportive, engaged, and responsive parents are found to be a resource for positive school-to-school transitions [22]. As teacher self-efficacy [see 55 for a review] was assumed to be important for patients during transition, it is surprising that it was not a significant predictor of patients' well-being. However, as teachers' compliance rate was rather low, and a trend in the expected direction was evident, we cannot make a concluding statement, and more research is needed on this matter.
Looking at the comparison between the within- and between-person levels, we found that patients with generally higher self-control generally report higher well-being. Except for this effect, we did not find a significant relationship on the between-person level for well-being and academic, parental, and teacher self-efficacy. This contrasts with the positive within-person effects between well-being and academic and parental self-efficacy. Those results underline that aggregated data do not represent the individual, and effects on the group and individual levels are not implicitly related [88]. Hence, research based on the group level informing practical implications for treatment can be problematic, as between-person findings cannot be generalized to the individual [89]. By solely looking at between-person relationships, we would not conclude that academic self-efficacy and parental self-efficacy are important for the transition period. However, the within-person processes reveal that these two are promising avenues to support patients.
Implications
Implications of the present results include different ways patients can be supported in the transition period, aiming at not only high but also stable levels of well-being. This means, for instance, strengthening patients' self-control [see e.g. 90, 91] and academic self-efficacy [see e.g. 92, 93], but also parents' self-efficacy [see e.g. 47, 94-96], through interventions during treatment and aftercare. Well-evaluated aftercare programs are needed to help patients, as well as their relevant attachment figures, to cope with stressors and occurring problems for a successful reintegration and to stabilize their well-being. As the relationships between well-being and self-control or academic self-efficacy differ between patients, with a few also showing negative relationships, future studies should aim at gaining a deeper understanding by investigating possible moderators of those relationships. It is very likely that well-being is fueled by multiple sources, with the ones assessed in the present study being important ones among others. We aimed to control for daily events to find interrelations of well-being and other variables independent of external events. However, as daily events are significantly related to daily well-being [64], specifying them may reveal important contributors to well-being. That is, good grades, receiving praise from the teacher, and being able to participate in class came along with an increase in daily well-being. On the other hand, not being able to concentrate, numerous and complex exercises, and receiving bad grades came along with a decrease in daily well-being. Hence, the ability to cope with academic situations seems to have a major impact on patients' well-being, as also evidenced by the positive within-person relationship between academic self-efficacy and well-being. Thus, attachment figures or mental health professionals talking with the patient about negative events and promoting positive ones seems helpful, emphasizing the importance of supportive accompaniment and aftercare during the transition from psychiatric hospitalization to school. Furthermore, future studies may expand the number of perspectives by including patients' peers, as patients are concerned about reactions of their peers to their return to school as well as effects of their absence on friendships [8, 14]. In the light of the COVID-19 pandemic, it can additionally be surmised that the frequent transitions between schooling contexts from home to school could be associated with partially similar challenges, such as concerns about friendships due to social distancing, making up schoolwork, or emotional readjustment at school.
Limitations
For the app-based daily measures on smartphones, established scales had to be shortened to minimize the study's burden on families. This resulted, however, in rather low reliability for the daily self-control scale and underlines the need to develop psychometrically sound short questionnaires, which are even more important for strained samples. For the daily assessment of self-efficacy, we decided to use items both for self-efficacy and for self-efficacy-related experiences, expecting experiences to fluctuate more from day to day. Even though this approach is not established, it is warranted by the internal consistency of the resulting scale. Further, it is an advantage of ambulatory assessment to ask about situations of the present day, as the way individuals master single situations constitutes one of the most important sources of self-efficacy [97-100]. However, the operational definition of this study, asking about past situations of the present day, therefore shows a discrepancy with the theoretical definition. This should be considered when interpreting the results.
Further, well-being was assessed as part of the standard diagnostics at discharge and thus approximately 2 weeks later than the other two baseline measures of self-control and self-efficacy. However, as none of the variables proved to be a relevant predictor of the individual-specific trends over time, this temporal offset at baseline seems negligible.
Despite the satisfying compliance rates, which are comparable to other studies [see 101, 102 for reviews], not all children treated at the day hospital during the 2-year recruitment period participated in the present study, which must be considered when making statements about the generalizability of the present results. For all participants, answering daily questions was rather time-consuming, which was probably a barrier to study participation. Reasons for the still satisfying compliance rates, particularly regarding the 50-day study length, may be the graduated reward, the feeling of being taken seriously with one's situation through daily questioning, and the automatic repeated prompts, which increase compliance rates especially in clinical samples [102]. However, teachers' compliance rates were much lower than those of patients and parents. This may be because rewards for teachers were not graduated, daily questioning took place in the evening at the end of the workday, and teachers were not affected by the patients' transition to the same extent. The low compliance rates among teachers may be one reason that teacher self-efficacy was not a significant predictor of patients' well-being.
The occurrence of a positive or negative event served only as a control variable with a dichotomous response format. The level of distress caused by a negative event could, however, be predictive of daily well-being. Since we had to limit the number of daily questions so that the burden on patients did not become too high, and since this was only a control variable, distress was not assessed additionally. Future studies can possibly consider this aspect further. Future studies can also further examine between- and within-person processes during reintegration using additional methods of analysis (e.g., diary analysis using hierarchical linear modeling or latent growth curve analysis), for which the present sample size was too small.
Conclusions
To the best of our knowledge, the current study was the first quantitative, intensive longitudinal study applying an ambulatory assessment design to investigate the transition period of patients after psychiatric hospitalization to school, focusing on patient-parent-teacher triads. As successful transition to school is assumed to be crucial for post-discharge adjustment [8], we aimed to gain a better understanding by investigating patients' well-being on school days. We further pursued the identification of transdiagnostic variables beneficial for patients' well-being, allowing interventions to address them and hence alleviate problems during the reintegration period.
The present results show that patients exhibited a decline in well-being and self-control during the weeks following discharge. Further, patients' well-being, self-control, and academic self-efficacy were subject to considerable fluctuation over time. Meaningful differences between patients on the within-person level underpin the importance of ambulatory assessment studies and multilevel modeling and advocate an individualized analysis of the needs and strengths of patients and their environments. Flexible interventions tailored to individual needs are indicated, as a general solution fitting all may not exist [10]. On average, to positively influence the transition process and patients' well-being, it seems beneficial to support patients' self-control and academic self-efficacy, as well as the self-efficacy of relevant attachment figures such as parents. The present study indicates with a multiperspective view how complex and challenging the transitional period after a psychiatric hospitalization is and how important it is to offer accompaniment and support to all parties involved during this period.
Effects of Thermal Diffusivity Analysis after Irradiation
Diffusion calculations give a vivid understanding of what happens in the SiC-cladded material. Molecular dynamics (MD) and molecular statics are employed to study the diffusion coefficient phenomena. The MD simulations in this study are built on the ZBL potential. We first applied MD minimization within the temperature range of 1000-3000 K; the MOX fuel was then used to assess ion radiation damage at burnup temperatures as well. Various chemical states develop depending on the condition of the fuel: within the fuel lattice, the O atoms break bonds with the U-Pu atoms at higher temperature. Only very short diffusion lengths were measured for the uranium atoms over the course of this 300 ps simulation.
Introduction
Like fossil fuels, nuclear fuel raw materials come from the earth. These materials generate radioactive by-products at the extraction, enrichment, fabrication, and consumption stages of the fuel cycle [1]. When spent fuel is taken out of the reactor, it produces considerable heat as well as radiation. Previous metallurgical studies aimed at enhancing mechanical properties have shown that adding small quantities of several oxides to ceramic bodies has large effects on the mechanical performance and microstructure of the ceramic [2][3][4]. The inclusion of minor actinides such as CmO2, NpO2, and AmO2 in UO2 or ThO2 is desirable so that these species can undergo transmutation in a reactor [5]. Computational models can be used to analyze nuclear materials and gain a better understanding of their behavior, which can help to increase their efficiency and stability.
Silicon carbide (SiC) has excellent thermo-mechanical properties that make it a subject of interest for high-temperature mechanical applications. Several SiC phases with high thermal conductivity, including SiC nanoparticles [6][7] and nanotubes [8], improve the thermal conductivity of UO2 pellets when added to them. However, there are two main conflicting effects when introducing SiC into UO2. On the one hand, as a substance with high thermal conductivity, SiC enhances the thermal conductivity of UO2. On the other hand, as a strongly covalent compound, SiC increases the sintering difficulty, reduces the density, and consequently weakens the thermal conductivity. There should therefore be a critical sintering condition at which the enhancing effect counteracts the weakening effect. In the present work, additional calculations and MD simulations were employed to evaluate diffusion in the microstructure-based (SiC, U-Pu)O2 system. This is very significant for understanding the non-uniform lattice changes of the mixed oxides from zero to high burnup, and it supports the view that the diffusion mechanisms depend exclusively on point defects. Emphasis is placed on comparing the excess oxygen diffusion coefficient with that of the mixed oxide fuel, so that predictions can begin to be made for actinide oxides.
Interatomic Potential Model
Molecular dynamics simulation provides unique insight into the atomic-scale processes relevant to problems in materials science. We performed our simulations using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code from Sandia National Laboratories [9]. Molecular dynamics simulation uses forces computed from empirical potentials, an approach usually known as classical molecular dynamics.
The interatomic potential energy $V(\mathbf{r})$ of a system of $n$ atoms with coordinates $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_n$ can be written in terms of one-, two-, and many-body terms as

$$V(\mathbf{r}) = \sum_{i} V_1(\mathbf{r}_i) + \sum_{i<j} V_2(\mathbf{r}_i, \mathbf{r}_j) + \sum_{i<j<k} V_3(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k) + \cdots \qquad (1)$$

Furthermore, a repulsive potential describing short interatomic distances is customary in collision cascade simulations, as a result of the high kinetic energy of the atoms. The geometry optimization was carried out in LAMMPS using the Ziegler-Biersack-Littmark (ZBL) empirical potential of the nuclear fuel model [10]. This effective potential can be decomposed into the long-range Coulombic interaction of the ionic particles and a short-range portion,

$$V(r_{ij}) = V_{\mathrm{Coul}}(r_{ij}) + V_{\mathrm{short}}(r_{ij}). \qquad (2)$$

For charges $q_i$ and $q_j$ separated by a distance $r_{ij}$, the long-range portion is

$$V_{\mathrm{Coul}}(r_{ij}) = \frac{1}{4\pi\varepsilon_0} \frac{q_i q_j}{r_{ij}}. \qquad (3)$$

The interactions of the U-Pu and O species with each other typically produce only short-range repulsion and dispersion interactions [11]. The short-range (screened nuclear repulsion) portion can be written as

$$V_{\mathrm{short}}(r_{ij}) = \frac{1}{4\pi\varepsilon_0} \frac{Z_i Z_j e^2}{r_{ij}} \, \phi\!\left(\frac{r_{ij}}{a}\right), \qquad a = \frac{0.8854\, a_0}{Z_i^{0.23} + Z_j^{0.23}},$$

with $Z_i$, $Z_j$ representing the atomic numbers and $a_0$ the Bohr radius.
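As a concrete illustration of the short-range term above, the sketch below evaluates the ZBL screened repulsion in Python using the standard universal screening function coefficients; the function names and the example U-O pair are illustrative additions, not part of the original simulation scripts.

```python
import numpy as np

KE = 14.399645   # e^2 / (4*pi*eps0) in eV*Angstrom
A0 = 0.529177    # Bohr radius a_0 in Angstrom

def zbl_screening(x):
    """ZBL universal screening function phi(x), standard coefficients."""
    return (0.18175 * np.exp(-3.19980 * x)
            + 0.50986 * np.exp(-0.94229 * x)
            + 0.28022 * np.exp(-0.40290 * x)
            + 0.02817 * np.exp(-0.20162 * x))

def zbl_potential(r, z1, z2):
    """Screened short-range repulsion V_short(r) in eV, r in Angstrom."""
    a = 0.8854 * A0 / (z1 ** 0.23 + z2 ** 0.23)   # ZBL screening length
    return KE * z1 * z2 / r * zbl_screening(r / a)

# Illustrative U-O pair (Z_U = 92, Z_O = 8) at a 1.0 Angstrom separation
print(zbl_potential(1.0, 92, 8))   # strong repulsion, on the order of 100 eV
```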
At every time step of a molecular dynamics (MD) run, the shells are optimized to their zero-force positions for each atomic configuration. The short-range interatomic potential acts on the shell particles, which reduces some of the computational work in the non-bonded interactions. No electrostatic interaction exists between a core and its own shell. Shell models give a better representation of the equilibrium properties, including the elastic constants, in static calculations [12].
Density functional theory (DFT) data were obtained for the selected MOX properties of interest, such as the melting temperature, oxygen vacancy migration energy, and diffusion coefficients, using molecular statics calculations. Interaction potentials valid for all values of $r_{ij}$ have been developed for (SiC, U-Pu)O2 by fitting to a database of energies of different structures calculated using DFT [13]. This was done to reduce any uncertainty produced by the selected functional form and the active range of the spline.
Results and Discussion
The simulation results revealed several important aspects of the diffusion coefficients for large and small (SiC, U-Pu)O2 clusters. The results seem to trail off towards lower temperatures, which is coherent, since error or noise can result in an overestimation of the diffusion coefficient when the number of displacements is small. Fig. 1 shows the derived diffusion coefficient for the small 324-ion (SiC, U-Pu)O2 cluster. The fitted curves correspond to activation energies for diffusion of 0.53 eV and 0.31 eV.
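For context, diffusion coefficients and activation energies of this kind are typically extracted from MD trajectories via the Einstein relation (D from the slope of the mean-squared displacement) followed by an Arrhenius fit of ln D against 1/T. The Python sketch below illustrates this generic post-processing; the function names and data layout are assumptions, not the authors' actual scripts.

```python
import numpy as np

KB = 8.617333e-5   # Boltzmann constant in eV/K

def diffusion_coefficient(t, msd):
    """Einstein relation in 3D: MSD(t) ~ 6*D*t, so D is the fitted slope / 6."""
    slope, _ = np.polyfit(t, msd, 1)
    return slope / 6.0

def activation_energy(temperatures, d_values):
    """Arrhenius fit ln D = ln D0 - Ea/(kB*T); returns (Ea in eV, D0)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temperatures),
                                  np.log(d_values), 1)
    return -slope * KB, np.exp(intercept)

# Hypothetical usage: D at several temperatures, then Ea from the fit
# temps = [1000, 1500, 2000, 2500, 3000]   # K, matching the simulated range
# ds = [diffusion_coefficient(t, msd) for (t, msd) in msd_runs]
# ea, d0 = activation_energy(temps, ds)
```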
Fig. 2 shows a representative plot of the oxygen diffusion coefficients derived from the 3129-ion (SiC, U-Pu)O2 cluster results. Here the surface migration results are clearly invalid. The bulk oxygen results show behavior similar to the 324-ion cluster, in that the diffusion coefficient seems higher than expected at lower temperatures. The diffusion length of uranium is almost certainly too short to perform any reliable analysis of the derived diffusion coefficients, which are presented in Fig. 3; what is actually calculated is the widening of the distributions due to the thermal motion of the ions.
Conclusion
In this work, a pair potential based on the interatomic ZBL function was used for the simulated structure. With the ZBL potential, we plotted the temperature dependence of the LAMMPS-calculated diffusion coefficients of the MOX. From these graphs it can be seen that the lattice predictions for the pre-exponential factor were all in the range of 10^-12 to 10^-10 m^2/s. A crystal of 3129 ions of the (SiC, U-Pu)O2 cluster was simulated at various temperatures. The crystallite used in this simulation was cut along the stable (111) plane in order to obtain pure stoichiometric MOX. The influence of SiC is more significant in the MOX, and the results further verify the developed formula. The model presented can be incorporated into fuel recycling models to improve calculations of fuel accident conditions and thermal conductivity as well.
Drought tolerance test of various potential local rice genotypes in Aceh West-South Region at vegetative stages
Aceh has many local rice genotypes cultivated in the West-South Region of Aceh. The potential of these local rices as a source of drought tolerance genes has not been evaluated and identified. Abiotic stresses such as drought seriously affect plant productivity. This study aimed to determine the drought tolerance of several potential local rice genotypes in the West-South Region of Aceh as parents (P1), in order to build the base population for creating new high-yielding varieties that are resistant to drought. The study was carried out in a Randomized Block Design (RBD) with 3 replications. The observed variables were plant height and number of tillers per clump at 10, 20, 30, and 40 days after planting, and root length, number of roots, and wet and dry weight of roots at 40 days after planting. The study found that the drought stress treatment significantly affected plant height and number of tillers, with the best result found in the Rangan Lango genotype. Based on the research results, it can be concluded that 3 local genotypes of the West-South Aceh region are potentially resistant to drought stress in the vegetative stage, namely the Lango, Arias, and Pade Manggeng genotypes.
Introduction
Aceh has many rice genotypes that are cultivated locally, especially in the West-South Region of Aceh. The West-South region of Aceh is one of the areas rich in local rice diversity, which needs to be identified and utilized as a source of genes in the assembly of new superior varieties to support food security and sustainable agriculture. [1] reported that Aceh local rice varieties are very diverse.
In plant breeding programs, local rice as a source of genes needs to be evaluated and its tolerance levels to abiotic stress identified, in order to obtain new potential genes that confer resistance to abiotic stress and high yields. One of the main problems of rice production is limited water. Drought stress is one of the abiotic stresses that can decrease the yield and quality of rice [2; 3]. Furthermore, rice cultivation in various regions of Indonesia suffers from a lack of sufficient irrigation facilities, which leads to a decrease in rice crop production [4]. [1] stated that abiotic stresses have negative effects on survival, biomass production, and crop yield. Drought stress is a serious threat to plant productivity.
Long-lasting drought stress can increase cuticle thickness and disrupt plant metabolism. The thickening of the cuticle can make it less permeable to water. This condition causes delays in the growth of stems and leaves, especially in the vegetative stage [5]. The results of drought tolerance tests on several Aceh local rice genotypes will provide information that can be useful in breeding programs aimed at increasing rice production, especially in Aceh. This research aims to determine the drought tolerance of several potential genotypes of local rice in the West-South Region of Aceh as parents (P1), in order to build the base population for creating new high-yielding varieties that are resistant to drought.
Materials and Methods
This study was conducted at the experimental farm of the Faculty of Agriculture, Teuku Umar University, from October to December 2019. The research materials were seeds of West-South Aceh local rice genotypes with potential drought tolerance (Sigupai, Sirende, Pade Manggeng, Pade Geudok, Sambai, Tinggong, Sikleng, Arias, Sijane, Rangan Lango, Borayek, Rasi Singki, Ramos, and Inpago 5 as a control), soil, fertilizer (Urea, SP-36, KCl), and plant pots. The study was carried out in a Randomized Block Design (RBD) with 3 replications. The observed variables were plant height and number of tillers per clump at 10, 20, 30, and 40 days after planting, and root length, number of roots, and wet and dry weight of roots at 40 days after planting.
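As an aside for readers unfamiliar with this kind of analysis, the Python sketch below illustrates one standard way to analyze a randomized block design and compute the LSD used in the tables that follow; the file name and column names are hypothetical, not the authors' actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

# Hypothetical long-format data: one row per pot, with columns
# 'genotype', 'block' (replication 1-3), and 'height' (cm at 40 DAP).
df = pd.read_csv("rice_rcbd.csv")   # placeholder file name

# RCBD model: response explained by genotype and block, no interaction.
model = smf.ols("height ~ C(genotype) + C(block)", data=df).fit()
print(anova_lm(model))              # F tests for genotype and block effects

# LSD at alpha = 0.05: t(alpha/2, df_error) * sqrt(2 * MSE / r)
r = df["block"].nunique()           # number of replications
lsd = stats.t.ppf(0.975, model.df_resid) * np.sqrt(2 * model.mse_resid / r)
print(f"LSD(0.05) = {lsd:.2f} cm")  # means differing by more are significant
```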
Seed treatment and planting
The seeds were soaked in clean water for 24 hours and then germinated for 48 hours wrapped in a damp cloth. After germination, the seeds were transferred to the prepared nursery container. The planting medium was alluvial soil and manure in a 2:1 ratio, placed in 8 kg pots, watered, and stirred by hand until evenly mixed.
Drought stress treatment
Drought stress was applied three weeks after planting, at 7 DAP, after which the plants were regularly irrigated for a week to observe the level of tolerance. As a control, plants were grown under optimum conditions in which water was available in sufficient quantities.
Plant height
The results showed that genotype had a very significant effect on plant height at 40 DAP and a significant effect at 30 DAP, but no significant effect on plant height at 10 and 20 DAP. The average plant height of the local rice genotypes at 10, 20, 30, and 40 DAP after the LSD 0.05 test is shown in Table 1.
The LSD test showed that the greatest plant height at 10 and 20 DAP was found in the Rangan Lango genotype (62.83 cm), although the differences were not statistically significant. The greatest plant height at 30 DAP was also found in Rangan Lango (91.50 cm), which did not differ significantly from Sigupai (85.33 cm), Rasi Singki (82.83 cm), Pinang Geudok (81.83 cm), Sijane (81.67 cm), and Sambai (76.67 cm), but differed significantly from the other genotypes. Likewise, the greatest plant height at 40 DAP was found in Rangan Lango (104.33 cm), which did not differ significantly from Rasi Singki (100.83 cm) and Sigupai (96.83 cm), but differed significantly from the other genotypes. These results suggest that plant height growth among the tested genotypes is influenced by genetic traits, as the genotypes have different growth characters. The drought stress treatment also affected plant growth, since plants need sufficient water for growth and development. Lack of water can affect vegetative growth [6]; moreover, it affects all metabolic processes in the plant and disrupts plant growth [6]. Elongation of the crown, i.e., plant height, is one of the avoidance mechanisms by which plants survive drought stress, but crown length is not used as the main selection criterion [7]. Tall growth by itself does not determine production, because greater height does not guarantee greater yields [8]. Plant height is, however, one of the components that affect lodging: along with the absorption of N nutrients, the taller a plant grows, the higher the possibility of lodging [9].
Number of tillers per clump
The results showed that genotype had a very significant effect on the number of tillers at 30 and 40 DAP and a significant effect at 20 DAP, but no effect on the number of tillers at 10 DAP. The average number of tillers of the local rice genotypes at 10, 20, 30, and 40 DAP after the LSD 0.05 test is shown in Table 2.
The highest number of tillers at 10 DAP was found in the Pade Manggeng genotype (2.50), but it did not differ significantly from the other genotypes. The highest numbers of tillers at 20 DAP were found in Borayek (4.67 tillers), Pade Manggeng (4.33), Arias (4.33), and Sirende (4.17); these did not differ significantly from Sijane (3.50) but differed significantly from the other genotypes. The highest numbers of tillers at 30 DAP were found in Sirende (7.00), Arias (7.00), Pade Manggeng (5.17), and Borayek (5.00), which did not differ significantly from Tinggong (4.33) and Sijane (4.33). The highest number of tillers at 40 DAP was found in Arias (8.50), which did not differ significantly from Rasi Singki (8.00) but differed significantly from the other genotypes. The differences in the number of tillers are thought to be due to genetic factors; this is in line with [10], who stated that genetic analysis showed the vertical-horizontal type of tiller growth to be controlled by the recessive gene la-1. Furthermore, the dwt1 gene controls the uniformity of rice tillers, and these genes can be inherited [11].
Besides genetics, environmental growing conditions also affect plant growth. Drought during the vegetative phase can inhibit the growth of tillers; this is in accordance with [6], who noted that water plays an important role in the translocation of nutrients from the roots to all parts of the plant, so that lack of water results in inhibition of plant development. The results showed that the Arias and Rasi Singki genotypes have potential drought resistance: they were more resistant to stress, as indicated by a higher number of tillers compared to the other genotypes.
Root length, number of roots, root wet weight, and root dry weight
The study found that the local rice genotypes from the West-South Aceh region did not differ significantly in root length, number of roots, root wet weight, and root dry weight (Table 3). However, the root length of the Sigupai genotype tended to be longer than that of the other genotypes, while the highest number of roots, root wet weight, and root dry weight tended to be found in the Pade Manggeng genotype.
Conclusion
Based on the research results, it can be concluded that 3 local genotypes of the West-South Aceh region are potentially resistant to drought stress in the vegetative stage, namely the Lango, Arias, and Pade Manggeng genotypes.
Study of the Optimal Waveforms for Non-Destructive Spectral Analysis of Aqueous Solutions by Means of Audible Sound and Optimization Algorithms
: Acoustic analysis of materials is a common non-destructive technique, but most efforts are focused on the ultrasonic range. In the audible range, such studies are generally devoted to audio engineering applications. Ultrasonic sound has evident advantages, but also severe limitations, like penetration depth and the use of coupling gels. We propose a biomimetic approach in the audible range to overcome some of these limitations. A total of 364 samples of water and fructose solutions with 28 concentrations between 0 g/L and 9 g/L have been analyzed inside an anechoic chamber using audible sound configurations. The spectral information from the scattered sound is used to identify and discriminate the concentration with the help of an improved grouping genetic algorithm that extracts a set of frequencies as a classifier. The fitness function of the optimization algorithm implements an extreme learning machine. The classifier obtained with this new technique is composed only by nine frequencies in the (3–15) kHz range. The results have been obtained over 20,000 independent random iterations, achieving an average classification accuracy of 98.65% for concentrations with a difference of ± 0.01 g/L.
Introduction
Acoustic spectroscopy is one of the most promising techniques for nondestructive testing of many materials. This work shows that acoustic spectroscopy in the audible range is also well suited to the study of liquid solutions. No method can claim superiority, but sound-based sensing of liquids has several advantages over optical techniques and can be easily combined with other methods, such as electroacoustic measurements, as discussed in [1]. A review describing the advantages and limitations of acoustic spectroscopy, with a particular focus on pharmaceutical applications, can be seen in [2].
However, most studies on this research topic are devoted to ultrasound techniques and devices, due to their higher energy and bandwidth compared with sounds in the audible range. This can be seen in the monographs devoted to this topic, like [3] and [4]. The latter is very interesting because a study by Contreras et al. [5], page 51, describes the ultrasonic measurement of different sugar concentrations with an accuracy of 0.2% in water volume for pure sugar solutions. They measured the velocity of ultrasound and the density in solutions of D-glucose, D-fructose, and sucrose at various concentrations (0-40% w/v) and temperatures (10-30 °C).
This conversion of acoustic data to sound velocity is the norm in most ultrasound studies of liquids. The calculation of sound velocities introduces some important problems and uncertainties due to the need to use statistical or theoretical models and the existence of other processes, as explained in several publications, for example Dzida et al.'s excellent review on the determination of the speed of sound in ionic liquids [6]. A description of a low-cost system for the measurement of sound velocity in liquids can be found in [7].
The use of ultrasound for the study of pure liquids and solutions is limited, compared to its application to colloids, suspensions, and emulsions, as reviewed in [8]. However, there have been remarkable advances in recent years, as can be seen, for example, in [9][10][11][12]. The acoustic research of aqueous electrolytes was performed by Pal and Roy in [13] using the Fourier spectrum pulse-echo technique, which is discussed in detail in [14].
The number of publications is too large for an exhaustive literature survey, so only a handful of representative examples are shown here. For a detailed review of ultrasound spectroscopy for particle size determination see [15]. In [16], Silva et al. studied polydisperse emulsions by means of acoustic spectroscopy within the frequency range of (6-14) MHz in order to measure the droplet size distribution of water-in-sunflower oil emulsions for a volume fraction range from 10 to 50%. They concluded that the methodology was suitable for polydisperse particle size characterization for moderate concentrations up to 20%, and the results were in good agreement with those obtained by laser diffraction analysis. Another interesting application to food analysis can be seen in [17], where the mechanism of rehydration of milk protein concentrate powders is studied by means of broadband acoustic resonance dissolution spectroscopy. Moreover, ref. [18] describes the use of an ultrasonic pulse echo system for the characterization of vegetable oils.
Good reviews of high-resolution ultrasound spectroscopy can be found in [19,20]. All measurements are based on the previous determination of the speed of sound and attenuation in the samples. A number of advantages and applications of this technique are clearly described; for example, samples with very small volumes can be analyzed using different ranges of pressure and temperature. As explained in [19], at frequencies below 100 MHz, which is clearly the case for audible frequencies, the contribution of scattering to attenuation can be neglected for nano-sized dispersions or solutions. Thus, attenuation in this long-wavelength regime is determined by the thermal and the shear (visco-inertial) effects. In spite of this, we show that audible acoustic spectroscopy can achieve impressive accuracy in the determination of fructose concentration in water.
Another interesting application of ultrasound spectroscopy is the monitoring of biocatalysis in solutions and complex dispersions, even in real-time, reviewed in detail by Buckin and Caras in [21]. The information that can be extracted from ultrasound data is impressive: substrate concentrations along the entire course of the reaction, time profile analysis of the degree of polymerization, reaction rate evolutions, kinetic mechanism evaluation, kinetic and equilibrium constant measurements, and real-time traceability of structural changes in the medium associated with chemical reactions, among others.
Finally, a fascinating application of audible acoustic measurements can be found in [22,23]. Both deal with the determination of Martian rock properties using the microphone of the recent NASA Perseverance rover. This microphone is used to record the sounds associated with the microcrater-forming shots of the laser-induced breakdown spectroscopy device.
Additionally, artificial intelligence (AI) algorithms have been incorporated into many engineering applications in recent years. When integrated into research, they consistently provide a remarkable improvement in performance and efficiency. The use of these algorithms is enhanced by continuously increasing computing power and massive data collection. Although they do not always offer the optimal solution, they approach it with a very acceptable balance of cost and accuracy. Moreover, in many applications there is no unique solution, but rather several solutions under conflicting criteria. Recent studies on the application of AI in different fields of engineering can be found in: computer engineering [24][25][26][27], electrical engineering [28,29], petroleum engineering [30], fluid mechanics engineering [31,32], energy engineering [33][34][35][36], and acoustic engineering [37].
In this work a direct application of audible acoustic spectroscopy to the determination of fructose concentrations in distilled water is presented. It is shown that no data conversion to speeds of sound is necessary, hence eliminating the source of some uncertainty, and most importantly, accuracies of the order of 1 part in 100,000 (0.001%) in weight can be achieved. The use of audible sound has some advantages over ultrasound, mainly the low cost of the measuring equipment and the noncontact nature of the measurements. In order to optimize the technique, the results from a series of different pulses and noises were previously compared and the best sound was selected for the final determination. This is a clear improvement over our previous technique based on resonant vibrations of the sample, which involved direct contact [38].
In this work, 364 samples of different concentrations of high purity fructose in distilled water were used for the study of the best pulse characteristics for acoustic chemical analysis. A constant volume of 150 mL for all samples was selected. The container was a simple cylindrical glass. A small anechoic chamber was used to place the samples and to make the sound recordings. The microphone was placed vertically over the surface of the liquid. The sound source was one earpiece placed parallel to the microphone over the liquid surface.
Different sound configurations were explored: chirp, square pulses, white noise, and maximum length sequence (MLS). In the end, MLS produced the best results in our preliminary studies and was selected for the final analysis. The samples were excited by these sounds during 30 s intervals and the reflected sound was recorded. These recordings were divided into 2-s samples whose spectra were calculated by means of the Praat program [39]. The resulting spectra were processed by means of a grouping genetic algorithm (GGA) taking a training set of 80% and a test set of 20%. This algorithm provided a classifier with more than 98.5% classification accuracy, even for concentrations with a difference of ±0.01 g/L.
Materials and Methods
The experimental system was composed of three main parts: the anechoic chamber, the sound system, and the samples. The liquid sample was placed inside a small handmade anechoic chamber with exterior dimensions (width, height, depth) of 80 × 72 × 56 cm. Its interior was isolated using 2-cm thick foam and a frequency-dependent absorbent pyramidal material, 4 cm at the base and 6 cm high. Thus, the interior volume of the chamber is 58 × 61 × 40 cm. A cylindrical glass with a volume of 200 mL, filled with 150 mL of a water and fructose solution, was placed at the center of the chamber. The glass mass was 123 g, with a diameter of 8 cm. Figure 1 shows the schematic diagram of the experimental setup.
The experimental system was composed of three main parts: the anechoic chamber, the sound system, and the samples. The liquid sample was placed inside a small handmade anechoic chamber of exterior dimensions (width, high, depth) 80 × 72 × 56, in centimeters. Its interior was isolated using 2-cm thick foam and a frequency-dependent absorbent pyramidal material of 4 cm in the base and 6 cm high. Thus, the interior volume of the chamber is 58 × 61 × 40 cm. A cylindrical glass with a volume of 200 mL filled with 150 mL of a water and fructose solution was placed at the center of the chamber. The glass mass was 123 g, with a diameter of 8 cm. Figure 1 shows the schematic diagram of the experimental setup. The proposed method uses differential measurements and the acoustic performance of the chamber and the environment is sufficient for this purpose. Measurements of the chamber performance were made by means of a Brüel and Kjaer 2250 acoustic analyzer, resulting in 28.2 dBA of background noise and a mean reverberation time of 0.17 s. The frequency response of the chamber is represented in Figure 2.
The proposed method uses differential measurements and the acoustic performance of the chamber and the environment is sufficient for this purpose. Measurements of the chamber performance were made by means of a Brüel and Kjaer 2250 acoustic analyzer, resulting in 28.2 dBA of background noise and a mean reverberation time of 0.17 s. The frequency response of the chamber is represented in Figure 2. The used microphone was the model ECM-TL3 of Sony, an electret capacitor with omnidirectional pattern, frequency response range (20 Hz-20 kHz) with sensitivity of −35 dB that was placed vertically 2.5 cm over the liquid surface, 1.5 cm from the center. In parallel, in a symmetric position to the center of the glass, one earpiece model Sony MDRXB50APB.CE7 was used as the sound source, with a frequency response range of (4-24) kHz, a sensitivity of 106 dB/mW, and an impedance of 40 ohms (1 kHz). The test signals were generated by a computer while the recordings were made by another computer and an external audio card.
The measuring system, background sound, and noise sound generated by the sound card is represented in Figure 3. A maximum level of 0.0271 is measured against the levels near 1 (to full scale) of the signals. The used microphone was the model ECM-TL3 of Sony, an electret capacitor with omnidirectional pattern, frequency response range (20 Hz-20 kHz) with sensitivity of −35 dB that was placed vertically 2.5 cm over the liquid surface, 1.5 cm from the center. In parallel, in a symmetric position to the center of the glass, one earpiece model Sony MDRXB50APB.CE7 was used as the sound source, with a frequency response range of (4-24) kHz, a sensitivity of 106 dB/mW, and an impedance of 40 ohms (1 kHz). The test signals were generated by a computer while the recordings were made by another computer and an external audio card.
The measuring system, background sound, and noise sound generated by the sound card is represented in Figure 3. A maximum level of 0.0271 is measured against the levels near 1 (to full scale) of the signals.
of the chamber and the environment is sufficient for this purpose. Measurements of the chamber performance were made by means of a Brüel and Kjaer 2250 acoustic analyzer, resulting in 28.2 dBA of background noise and a mean reverberation time of 0.17 s. The frequency response of the chamber is represented in Figure 2. The used microphone was the model ECM-TL3 of Sony, an electret capacitor with omnidirectional pattern, frequency response range (20 Hz-20 kHz) with sensitivity of −35 dB that was placed vertically 2.5 cm over the liquid surface, 1.5 cm from the center. In parallel, in a symmetric position to the center of the glass, one earpiece model Sony MDRXB50APB.CE7 was used as the sound source, with a frequency response range of (4-24) kHz, a sensitivity of 106 dB/mW, and an impedance of 40 ohms (1 kHz). The test signals were generated by a computer while the recordings were made by another computer and an external audio card.
The measuring system, background sound, and noise sound generated by the sound card is represented in Figure 3. A maximum level of 0.0271 is measured against the levels near 1 (to full scale) of the signals. The microphone was connected to a PC sound card MAudio Fast Track Ultra 8R. An amplification factor of 70% for the channel was used to avoid adding internal noise from the card. The measurements were taken with a recording rate of 44.1 kHz by means of the free Audacity software [40]. Sound amplitude was kept below the 70% of the maximum level in order to avoid saturation effects. Some test signals were generated by MATLAB [41] and Audacity software: MLS signal: a signal generated by MATLAB, taking into account that the maximum length is 30 s. The amplitude is 60% of the full scale (FS); 2.
White noise: a signal generated by Audacity, with an amplitude of 60% of the FS; 3.
A set of chirp signals generated by Audacity, with a duration of 1 s each, from 150 Hz to 15 kHz; 4.
Square pulses with a period of 250 ms and 50% of duty cycle.
Each audio recording had a duration of 30 s, more than enough to ensure the precision and stability of the measurements. Later analyses showed that the recordings were stable enough to allow their partition into several 2-s intervals in order to increase the number of recordings for the classification algorithm. Changes among different spectra from the same sample were so small as to be unmeasurable.
The experiments were performed with a set of 364 samples of water solutions with different concentrations of fructose (see Tables 1-3). The volume of each sample was 150 mL. Distilled water was used as the solvent, and food-grade pure fructose (>99%) was used for the liquid samples. The concentrations of fructose ranged from 0 to 9 g/L. A more detailed study was done between 2 g/L and 3 g/L in increments of ±0.1 g/L, and between 2.01 g/L and 2.09 g/L in increments of ±0.01 g/L, in order to explore the performance of the system. The mass of fructose was measured by means of an analytical balance, a Homgeek TL-Series balance (50 g/0.001 g). In total, a set of 130 samples of water and fructose solutions with 10 concentrations between 0 g/L and 9 g/L, 117 samples with 9 concentrations between 2 g/L and 3 g/L, and 117 samples with 9 concentrations between 2.0 g/L and 2.1 g/L were analyzed inside the anechoic chamber using audible sound configurations.

Table 1. Number of samples and their composition used in the experiment. A total of 130 samples of distilled water with different concentrations of fructose, in the range of 0 g/L to 9 g/L, were analyzed.
Samples were numbered, and visual inspection was used to ensure that complete dissolution was achieved and no bubbles were formed. The samples were handled carefully to avoid the formation of bubbles or wall drops. Each measurement took 30 s, and the measurements were taken consecutively.
Each measurement was divided into 2-s intervals, after verifying that this duration was more than enough for accurate and precise spectral information. The input data of the classification algorithm are the spectra of the 2-s audio measurements.
The experiment was carried out with one audio measurement of each of the 28 different concentrations, giving a total of 364 audio samples. The power spectrum of every interval was computed using the default Praat 6.0.40 options, as in our previous work [38]. Similarly, a cepstral smoothing of 100 Hz and a decimation procedure were applied: the 65,537 spectral points were averaged in order to reduce their number to a reasonable size (655 points) without losing the main peak structure of the spectra. In summary, the classification algorithm processed 364 input data, each of them being a spectrum defined by 655 values in the frequency range (20 Hz-22.05 kHz).
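For orientation, the same kind of spectral pre-processing can be approximated outside Praat with a few lines of NumPy. The sketch below is only a rough stand-in (simple block averaging replaces Praat's cepstral smoothing, and the function name is an invention for illustration):

```python
import numpy as np

def processed_spectrum(signal, fs=44100, n_out=655):
    """Rough stand-in for the Praat pipeline: power spectrum of a 2-s
    recording, reduced to n_out points by block averaging (which here
    replaces the 100 Hz cepstral smoothing and decimation steps)."""
    power = np.abs(np.fft.rfft(signal)) ** 2        # raw power spectrum
    level = 10.0 * np.log10(power + 1e-12)          # dB scale
    edges = np.linspace(0, level.size, n_out + 1, dtype=int)
    out = np.array([level[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    freqs = np.linspace(0.0, fs / 2.0, n_out)       # approximate bin centers
    return freqs, out
```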
Algorithm for Clustering Problem
The spectral response of the liquid samples to the vibrational stimulation of the MLS sounds was used as the input data to a grouping genetic algorithm (GGA) that classifies the liquid mixtures according to their fructose concentration. Since the nature of the samples is not affected, this is a non-destructive method. The GGA is itself a genetic algorithm (GA) explicitly modified for solving clustering problems. A brief description of GAs and GGAs is given in this section.
From Genetic Algorithm to Grouping Genetic Algorithm
GA is a bio-inspired algorithm based on the theory of the evolution of species by natural selection. A population of individuals competes for the resources needed to survive. Each individual represents an encoded solution of the optimization problem; GA is therefore an evolutionary optimization algorithm. The optimization strategy is usually applied to problems where it is almost impossible to find the optimal solution and there are several solutions with opposing criteria. The objective is to find one or more solutions close enough to the optimum, with a very acceptable balance between cost and accuracy. "Evolutionary" means that the algorithm computes the solutions through successive generations, undergoing an evolutionary process that drives an overall improvement in the fitness of the majority. Individuals with better fitness values are likely to survive longer than individuals with worse fitness. Over successive generations, individuals appear that are fitter than others, and fitness progressively improves. Each generation of individuals undergoes changes through recombination, mutation, and selection functions. These functions maintain the diversity of individuals and therefore the exploration of the solution space. The execution of the evolutionary algorithm is completed when it reaches a stopping condition. The most popular stopping conditions are a maximum number of generations and population convergence; the latter is reached when there is no progress in improving the fitness of individuals over several consecutive generations. A more extensive introduction can be found in [42].
The GGA is a modification of the GA oriented to solving grouping and clustering problems [43][44][45][46]. The fundamental difference of a GGA versus a GA lies in the encoding of the solution and in the search operators that manage this encoding. The encoding is key to ensuring high performance in the execution of the algorithm [47]. A solution in the GGA is composed of two sections: the assignment part and the grouping part. The grouping part labels all the groups involved in the solution. The assignment part associates each element with a single group; the value stored in the assignment part is the group assigned to each element. The information about the grouping is in the content of the solution itself and in its length: the total length of the solution is the number of elements to be classified plus the number of groups considered in the solution.
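A minimal sketch of this encoding, assuming 655 features and an arbitrary illustrative number of groups (the function name and default sizes are not from the paper):

```python
import random

def random_individual(n_features=655, n_groups=8):
    """GGA-style solution: an assignment part (one group label per feature)
    followed by a grouping part (the labels of the groups in the solution).
    Total length = number of elements + number of groups, as in the text."""
    groups = list(range(n_groups))                     # grouping part
    assignment = [random.choice(groups) for _ in range(n_features)]
    return assignment + groups

individual = random_individual()
print(len(individual))   # 655 + 8 = 663
```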
The Fitness Function: The Extreme Learning Machine
The fitness function numerically characterizes an individual and allows the individuals of a population to be ranked from best to worst aptitude. The fitness function used in the GGA is the extreme learning machine (ELM). It is a relatively simple machine learning algorithm that generalizes a single hidden layer feedforward network (SLFN), used for regression, binary classification, and multi-class classification [48][49][50][51][52][53]. The input layer takes the input values for a given set of features from the data; the feature set can include all the features of the data or a subset of them. The output layer provides a classification of the data according to the fixed feature set. The single intermediate layer is adjusted by the training of the network. After training, the classification accuracy of the ELM is calculated according to the defined feature set. ELM has demonstrated good performance with extremely high speed [54][55][56]. This last property is fundamental for its integration in the GGA, since an extremely high number of ELM evaluations will be executed during each generation of the evolutionary algorithm.
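A minimal ELM of this kind can be written in a few lines of NumPy: a fixed random hidden layer followed by a closed-form least-squares output layer. The sketch below is illustrative; the class name, activation, and hidden-layer size are assumptions rather than the paper's exact configuration.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: fixed random hidden layer,
    least-squares output weights (hidden-layer size is illustrative)."""

    def __init__(self, n_hidden=11, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # y is assumed to hold integer class labels 0..C-1
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                  # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random feature map
        self.beta = np.linalg.pinv(H) @ T         # closed-form output layer
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```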
The fitness function is applied to an individual by calculating, with the ELM, the classification accuracy of each of the groups considered in the solution. The best rate is assigned as the fitness value of the individual, and the classification rates of the rest of the groups are then discarded. The best classification accuracy corresponds to the group of features that classifies the data with the best accuracy among all the groups considered in the solution; the rest of the groups are not relevant to the solution.
Metaheuristic GGA+ELM Algorithm Application for Spectral Analysis
As already mentioned, the spectral data have 655 values in the frequency range (20 Hz-22.05 kHz). Obviously, 655 features is far too many for classification purposes. The target of the optimization algorithm is to reduce the number of features useful for classifying the liquid samples. This amounts to wrapper feature selection [57], where the GGA maximizes the classification accuracy. Each solution is composed of a collection of features varying in length and composition. The set of features extracted by the GGA from the 655 in total constitutes the classifier applicable to the spectra of the liquid samples. The challenge is not to exceed 10 features while achieving a classification accuracy of more than 95%.
The training and testing data sets are disjoint sets randomly selected from the total of samples. The usual ratio is 80/20, with the training set having the largest number (80%) and the test data the remaining 20%. The population size usually used in the literature varies between 20 and 100 [58,59]. The pair composition of individuals for the crossover operation is randomized. This method has also provided good results in previous research [60,61]. The crossover operation generates a population increase of 50% (a single offspring from each pair), on which a 10% mutation is applied [47,59]. This percentage is higher than usual in genetic algorithms, with the purpose of quickly exploring multiple areas of the solution space. The survival population for the next generation is composed of the winners of pairwise tournaments among the total population. The matches are chosen randomly. The fitness function value of the fighters determines the winner of each tournament.
As already described, an individual is coded as a set of groups, where each group is a collection of features that can be a valid classifier of the input data. Not all groups of an individual are useful for classification, but only those with better accuracy. Note that considering a specific individual, each feature of the 655 is only present in a single group. In the GGA fitness function, the ELM algorithm is applied over each group of the individual to classify the testing set data from the knowledge of the training data. The group with the best classification accuracy is selected as a candidate classifier. The fitness of the individual takes the value of the classification accuracy of this highlighted group, which is the best accuracy obtained among all the groups of the individual.
The stopping condition employed in the optimization is the maximum number of generations. To ensure that high-quality solutions are found within a reasonable computation time, the maximum number of generations considered is Gmax = 50.
Results and Discussion
The spectral analysis was performed on a total of 364 spectra from 28 different fructose contents, with 13 samples of each concentration. The 28 concentrations have been grouped into three data tables with their respective fructose concentration increments: ±1 g/L in Table 1, ±0.1 g/L in Table 2, and ±0.01 g/L in Table 3. The algorithm was run on the three sample collections. One purpose of this work is to obtain a limited set of frequencies able to satisfactorily classify the samples according to their concentration. The main objective is to determine the degree of discrimination of the classifier on fructose concentration using this method. It is expected that the classification accuracy for the samples in Table 3 will be lower than for the samples in Table 1, as the concentration increment in Table 3 is much smaller than in Table 1. It is also desired to know whether the classification accuracy for concentrations with a difference of ±0.01 g/L is acceptable or not. Figure 4 shows the averaged spectra for each concentration from Table 1. The spectra are defined by the sound pressure level (dB/Hz) over the audible frequency range (20 Hz-22.05 kHz). The sound pressure level is normalized in all the curves in Figure 4, with values in the range (−1, 1).
Acoustic Response Spectrum
Each spectrum can be characterized by particular markers associated with the chemical composition of the mixture. Manually selecting a group of frequencies as a classifier of the liquid mixtures by their concentration is a tedious and complex task because of the large number of spectral lines (the algorithm handles the audible frequency range as 655 spectral lines).
The average spectra obtained with the samples from Tables 2-3 also show a high complexity. Among the mean spectra from Table 3, with increments of 0.01 g/L, some of them are relatively similar to each other, and it may be necessary to select more focused frequency ranges to discriminate one concentration from another.
The optimization algorithm was run separately for each sample collection (Tables 1-3), providing several solutions. Each solution is composed of a set of frequencies that classify the samples with high accuracy. Two of these solutions were then taken to compose a combined decision system. This combined classifier works as a single, common classifier over all the samples used. In the following lines, the performance of this classifier on samples with concentrations from Tables 1-3 is analyzed.
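The exact voting rule of the combined decision system is not spelled out in the text, so the sketch below shows only one plausible implementation (an assumption), built on the ELM sketch above: each member scores a sample on its own frequency subset, and the label with the stronger output activation wins. Both the combination rule and the function signature are hypothetical.

```python
import numpy as np

def combined_predict(elm1, cols1, elm2, cols2, X):
    """Hypothetical two-member decision system over one spectrum matrix X.

    cols1/cols2 are the frequency (column) indices selected for each
    classifier; elm1/elm2 are fitted ELM models (see the ELM sketch above).
    The label whose output activation is strongest across the two members
    is returned -- an assumed tie-breaking rule, not the authors' exact one.
    """
    s1 = np.tanh(X[:, cols1] @ elm1.W + elm1.b) @ elm1.beta
    s2 = np.tanh(X[:, cols2] @ elm2.W + elm2.b) @ elm2.beta
    use1 = s1.max(axis=1) >= s2.max(axis=1)
    return np.where(use1, s1.argmax(axis=1), s2.argmax(axis=1))
```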
Feature Extraction
The GGA+ELM algorithm performs feature extraction by optimizing the classification accuracy of the samples according to their fructose concentration. The genetic algorithm is fed with spectral information as shown in Figure 4; each feature is a spectral line. The optimal solution, if it exists, is unknown. The algorithm delivers two of the best solutions found in the execution. Each solution is composed of a set of frequencies (the extracted features) that classify with high accuracy. Not all solutions have the same number of frequencies.
A 2.7 GHz Intel Core i7 processor was used. The specific parameter values of the GGA and ELM are summarized below (ELM = 11 in Tables 2 and 3). Five independent simulations were performed for each of the three sets of samples. The time for each simulation was approximately 1 h, so the total computation time was about 15 h.
No information on which frequencies should be tested first was given to the algorithm. Table 4 lists the frequencies (kHz) of the independent classifiers. Classifier 1 consists of four frequencies in the range (8-15) kHz, and Classifier 2 selects five frequencies in the range (3-15) kHz. The two classifiers are combined into a single classification system. It is noteworthy that only nine features suffice to characterize the 28 concentrations in Tables 1-3.

Table 4. Characteristics of the two classifiers provided by GGA+ELM to discriminate the fructose concentrations of Tables 1-3. The classifiers make a decision according to the value of the average energy density at specific frequencies in the acoustic response spectrum.
Discussion
A total of 20 M random and independent iterations was run for the two independent classifiers and the combined classifier on random test sets. The results are reported in Table 5. For each set of concentrations (±1 g/L, ±0.1 g/L, and ±0.01 g/L), the average value and standard deviation of the classification accuracy are given.

Table 5. Performance of the two classifiers provided by GGA+ELM and the voting system classifier to discriminate the fructose concentrations referred to in Tables 1-3. The average values and standard deviation of the accuracy were estimated from 20 M independent and random iterations.

Overall, it is observed that the combined classifier is valid for all concentrations in Tables 1-3 (with a minimum average accuracy of 98.65% over the 20 M iterations). The average classification accuracy decreases as the difference between sample concentrations becomes smaller: 99.82% at ±1 g/L (Table 1), 98.98% at ±0.1 g/L (Table 2), and 98.65% at ±0.01 g/L (Table 3). The standard deviation also increases in this direction: 0.0123 with ±1 g/L (Table 1), 0.0266 with ±0.1 g/L (Table 2), and 0.0272 with ±0.01 g/L (Table 3). This pattern meets the expected results: the difficulty of discrimination rises with higher class similarity. In the following lines, we elaborate on the results for each set of classes (fructose concentrations), analyzing Tables 1-3 separately.
The classification of the samples of Table 1 (0-9 g/L) gives very satisfactory results with all three classifiers: Classifiers 1 and 2 of Table 4 and their combination. With the three classifiers, an average accuracy of more than 97% over 20 M random iterations is obtained. It is very remarkable that Classifier 1 can characterize, with only four spectral lines, up to ten concentrations with an average accuracy of 99.71%. Combining the two decision-makers in a single classifier achieves an average accuracy better than 99.8%.
The nine frequencies of the combined classifier are located in the range (3-15) kHz. These frequencies have been highlighted in Figure 5 on the spectral information of the vibrational absorption bands for each concentration of Table 1. Note that the combination of these spectral lines allows the ten classes to be differentiated. Not all frequencies are equally important in the classification operation; some frequencies are more decisive in discriminating among several classes. There may be other sets of frequencies that classify the samples with similar accuracy: the optimization algorithm offers solutions close to the optimal one, without guaranteeing that the solution is unique.
Figure 6 shows ten different patterns for the concentration classification of Table 1. Each pattern is constructed with the average normalized energy-density value of each of the nine frequencies at a concentration from Table 1. The similarity between some patterns is interesting. For example, the curve characterizing the 1 g/L concentration is similar to the 5 g/L pattern; the same happens with the 4 g/L and 9 g/L patterns. This phenomenon was reproduced in all the experiments performed, and we believe that it may be associated with the resonance of the container used.
The application of the combined classifier on 143 samples from Table 2 (concentrations between 2.0 g/L and 3.0 g/L with increments of ±0.1 g/L) provided satisfactory results. As in the previous case, a sequence of 20 M random and independent iterations was carried out. As shown in Table 5, Classifiers 1 and 2 obtained an average accuracy higher than 85%, whereas their combined classifier improves the average classification accuracy to 98.98% with a standard deviation of 0.0266. This result is very satisfactory, although worse than that obtained for the ±1 g/L concentrations. The greater the similarity among classes, the more complex the classification and the lower the precision.
Figure 7 presents the patterns for the concentrations between 2.1 g/L and 2.9 g/L. These were generated from the average normalized energy-density values for the nine frequencies of the combined classifier. Very similar values are observed in general, since the variation in concentration is only ±0.1 g/L. Extreme closeness is seen in some cases, such as the 2.3 g/L and 2.4 g/L concentrations, and also the 2.8 g/L and 2.9 g/L concentrations.
For classes with a concentration difference of ±0.01 g/L in Table 3, the 143 samples were also analyzed by Classifiers 1 and 2 and the combined system. The last columns of Table 5 show the results obtained by each classifier over 20 M random and independent iterations. Classifier 1, with four frequencies in the (8-15) kHz range and an average classification accuracy of 98.65%, was much more effective than Classifier 2, whose accuracy decreased to 80.78%.
The combination of both classifiers did not improve the accuracy of Classifier 1, so the combined system considers only the decision of the first classifier, ignoring the decision of the second one. As discussed at the beginning of this section for the concentrations of Table 1, not all frequencies contribute equally to the classification. For the highly similar concentrations of Table 3, the frequencies of Classifier 2 add no new information to the decision process beyond that already provided by Classifier 1. As a result, all the samples were classified with an average accuracy of 98.65% and a standard deviation of 0.0272.
The allocation of the four frequencies of Classifier 1 in the middle band of the spectrum, between 8 and 15 kHz, is understandable: it is the band with high mean energy-density values. Figure 8 shows the position of these frequencies in the mean spectra of the nine concentrations between 2.01 g/L and 2.09 g/L. Figure 9 shows the patterns generated by Classifier 1 from the average normalized energy density at the four frequencies. As in the previous cases, there are very similar patterns, for example the 2.01 g/L and 2.02 g/L patterns; there is also close similarity between the 2.03 g/L, 2.05 g/L, and 2.09 g/L patterns.
Conclusions
We have described a new low-cost, non-invasive method based on the spectral analysis of audible scattered sound to determine the concentration of liquid mixtures according to their chemical composition. The spectral information was analyzed by a metaheuristic algorithm: an ELM implements the fitness function of the optimization algorithm (GGA), which extracts a reduced set of frequencies acting as a classifier. The acoustic response spectrum of the samples to MLS sounds was used after a prior comparison with other excitation signals, such as chirps, square pulses, and white noise. It was sufficient to examine the spectral response at a few frequencies instead of analyzing the whole audible range.
The experiments were carried out with 364 measurements from 28 samples of distilled water and fructose mixtures (150 mL), with fructose concentrations varying between 0 and 9 g/L. The 28 concentrations were grouped into three sets of increasing difficulty: ten concentrations between 0 g/L and 9 g/L with ±1 g/L increments, nine between 2.1 g/L and 2.9 g/L with ±0.1 g/L increments, and nine between 2.01 g/L and 2.09 g/L with ±0.01 g/L increments.
This work has allowed us to reduce the problem to a set of only nine frequencies in the (3-15) kHz band, able to classify samples with concentrations from any of the three sets described.
In the most complex case, the proposed classifier was able to discriminate fructose concentrations with variations of ±0.01 g/L with an average accuracy of 98.65%. The higher the concentration difference, the better the classification accuracy. For samples with increments of ±0.1 g/L the average accuracy is 98.98%, and when the concentration increments are ±1 g/L, the average accuracy rises to 99.82%.
The optimization algorithm returned different solutions with similar performance; the solution to the problem is not unique. It is important to note that changing the number of classes is likely to change the set of frequencies selected in the solutions. Each frequency had a different weight in the classification process. Future research will focus on analyzing other chemicals and more complex mixtures, and on improving the accuracy of this sensing method.
Effect of Droplet Morphology on Growth Dynamics and Heat Transfer during Condensation on Superhydrophobic Nanostructured Surfaces
Condensation on superhydrophobic nanostructured surfaces offers new opportunities for enhanced energy conversion, efficient water harvesting, and high-performance thermal management. These surfaces are designed to be Cassie stable and favor the formation of suspended droplets on top of the nanostructures, as compared to partially wetting droplets which locally wet the base of the nanostructures. These suspended droplets promise minimal contact-line pinning and promote passive droplet shedding at sizes smaller than the characteristic capillary length. However, the gas films underneath such droplets may significantly hinder the overall heat and mass transfer performance, which has not been considered previously. In this work, we investigated droplet growth dynamics on superhydrophobic nanostructured surfaces to elucidate the importance of droplet morphology on heat and mass transfer. By taking advantage of well-controlled functionalized silicon nanopillars, we observed the growth and shedding behavior of both suspended and partially wetting droplets on the same surface during condensation. Environmental scanning electron microscopy was used to demonstrate that initial droplet growth rates of partially wetting droplets were 6× larger than those of suspended droplets. We subsequently developed a droplet growth model to explain the experimental results and showed that partially wetting droplets had 4-6× higher heat transfer rates than suspended droplets. Based on these findings, the overall performance enhancement created by surface nanostructuring was examined in comparison to a flat hydrophobic surface. We showed these nanostructured surfaces had 56% heat flux enhancement for partially wetting (PW) droplet morphologies, and 71% heat flux degradation for suspended (S) morphologies, in comparison to flat hydrophobic surfaces. This study provides insights into the previously unidentified role of droplet wetting morphology on growth rate, as well as the need to design Cassie stable nanostructured surfaces with tailored droplet morphologies to achieve enhanced heat and mass transfer during dropwise condensation.
The experimental results showed that while both S and PW droplets ejected at identical length scales, the growth rate of PW droplets was 6× larger compared to that of S droplets. This effect was further highlighted by experiments demonstrating S-to-PW droplet transitions, which showed a 2.8× increase in growth rate due to the change in wetting morphology. Accordingly, the heat transfer of the PW droplet was 4-6× higher than that of the S droplet. Based on these results, we compared the overall surface heat and mass transfer performance enhancement created by surface structuring with that of a flat hydrophobic surface. We showed these nanostructured surfaces had 56% heat flux enhancement for PW droplet morphologies, and 71% heat flux degradation for S morphologies, in comparison to flat hydrophobic surfaces. In contrast to previous studies, we show that designing Cassie stable superhydrophobic nanostructured surfaces is not the only requirement for efficient dropwise condensation and that the droplet morphology prior to shedding must be carefully considered to achieve enhanced heat and mass transfer.
RESULTS AND DISCUSSION
To study the effects of droplet wetting morphology on growth rate and overall heat transfer, we fabricated silicon nanopillar surfaces (Figure 1A) with diameters of d = 300 nm, heights of h = 6.1 μm, and center-to-center spacings of l = 2 μm (solid fraction φ = πd²/4l² = 0.018 and roughness factor r = 1 + πdh′/l² = 3.26) using e-beam lithography and deep reactive ion etching (DRIE). The DRIE fabrication process was used to create nanoscale roughness (scallops) on the sides of the pillars. The surfaces were subsequently functionalized using chemical vapor deposition of (tridecafluoro-1,1,2,2-tetrahydrooctyl)-1-trichlorosilane to create Cassie stable superhydrophobic surfaces (see Methods section for details).
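As a quick sanity check on the quoted geometric parameters, the following Python sketch evaluates the solid fraction from the stated pillar dimensions and back-solves the effective (scallop-corrected) pillar height h′ implied by the quoted roughness factor; the back-solved h′ is an inference here, not a value taken from the paper.

```python
import math

d = 300e-9   # pillar diameter [m]
l = 2e-6     # center-to-center spacing [m]

# Solid fraction of the pillar array (fraction of basal area covered by pillar tops).
phi = math.pi * d**2 / (4 * l**2)
print(f"phi = {phi:.4f}")          # ~0.018, matching the stated value

# Roughness factor r = 1 + pi*d*h'/l^2; with r = 3.26 as quoted,
# the implied effective (scallop-corrected) pillar height h' is:
r = 3.26
h_eff = (r - 1) * l**2 / (math.pi * d)
print(f"h' = {h_eff*1e6:.1f} um")  # effective height implied by r
```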
Droplet growth on the surfaces was characterized using ESEM at a water vapor pressure P = 1200 ± 12 Pa and substrate temperature T s = 9 ± 1.5 °C (see Methods section for details). Figure 1B shows the two distinct droplet morphologies, PW and S, on the structured surface. PW droplets nucleated within a unit cell (area between 4 pillars) and, while growing beyond the confines of the unit cell, their apparent contact angle increased and they spread across the tops of the pillars in the shape of a balloon with a liquid bridge at the base of the pillars. Before coalescence with neighboring droplets, an increasing proportion of the droplet contact area was in the composite state and demonstrated an apparent contact angle of θ PW = 164 ± 4º for 〈R〉 > 15 µm. S droplets nucleated and grew on the tops of the pillars in a spherical shape with a constant apparent contact angle of θ S = 164 ± 6º. At these droplet sizes (〈R〉 ~ l), the S wetting configuration is typically energetically unfavorable due to a Laplace pressure instability mechanism, 31 but is attributed here to the presence of the nanoscale scallop features on the pillar sides that pin the contact line (see Supporting Information, sections S3 and S4). Figure 1C shows time lapse images of both PW and S droplets, which highlights the drastic difference in droplet morphology and growth rates on the surface (see Supporting Information, VideoS2). As the droplets grew and began to interact with each other, removal via coalescence-induced droplet ejection 12,13,15 was observed for both S and PW droplets. The results suggest that the contact line pinning force for both morphologies is in fact below the critical threshold for ejection (see Supporting Information, section S5 and VideoS1).
The experimentally obtained average droplet diameters as a function of time for the PW and S morphologies are shown in Figures 2A and B, respectively. The growth rate of the S droplet was initially 6× lower than that of the PW droplet for 〈R〉 < 6 μm. As the droplets reached radii 〈R〉 > 6 μm, the growth rates of both morphologies became comparable, which suggests a similar mechanism limiting droplet growth at the later stages.
To provide insight into the experimental results and capture the growth dynamics related to the different droplet morphologies, we developed a thermal-resistance-based droplet growth model. The model, which accounts for the presence of hydrophobic pillar structures, is an important extension of a previous model suitable for dropwise condensation on flat hydrophobic surfaces. 10 Figure 2C shows schematics of the PW and S droplets with the associated parameters used in the growth model. Heat is first transferred from the saturated vapor to the liquid-vapor interface through resistances associated with the droplet curvature (R_c) and liquid-vapor interface (R_i). Heat is then conducted through the droplet and the pillars to the substrate through resistances associated with the droplet (R_d), hydrophobic coating (R_hc), pillars (R_p), and gap (R_g). The individual droplet heat transfer rate is then

q = [ΔT − 2T_sat σ/(h_fg ρ_w R)] / R_tot,
R_tot = 1/(2πR²h_i(1 − cosθ)) + θ/(4πRk_w sinθ) + (δ_HC k_P + hk_HC)/(πR² sin²θ k_HC k_P φ),   (1)

where R_tot is the total thermal resistance through the droplet, R is the droplet radius, ρ_w is the liquid water density, h_fg is the latent heat of vaporization, T_sat is the vapor saturation temperature, σ is the water surface tension, ΔT is the temperature difference between the saturated vapor and substrate (T_sat − T_s), δ_HC and h are the hydrophobic coating thickness (~1 nm) and pillar height, respectively, k_HC, k_w, and k_P are the hydrophobic coating, water, and pillar thermal conductivities, respectively, and h_i is the interfacial condensation heat transfer coefficient. 34 The first, second, and third terms in the denominator represent the liquid-vapor interface (R_i), droplet conduction (R_d), and pillar-coating-gap (P-C-G) thermal resistances (R_p, R_hc, R_g), respectively (Figure 2C). The heat transfer rate is related to the droplet growth rate dR/dt through an energy balance on the condensing spherical-cap droplet,

q = ρ_w h_fg πR²(2 − 3cosθ + cos³θ) dR/dt.   (2)

During early stages of growth (R < 6 μm), the conduction resistance (R_d) is negligible compared to the other thermal resistances. Therefore, for the PW droplet, the pillar (R_p + R_hc) and liquid-bridge (R_g + R_hc) resistances dominate the heat and mass transfer process. However, for the S droplet, the only conduction path is through the pillars (R_p + R_hc), which results in a higher total thermal resistance and the observed 6× lower initial growth rate.
Note that the pillar (R_p), coating (R_hc), and gap (R_g) thermal resistances are not the only reasons for the divergent growth behavior of the two droplet morphologies. The higher initial contact angle of the S morphology (see Supporting Information, section S3) contributes to its slower growth rate due to a lower droplet basal contact area. As both droplet morphologies reach a critical radius, R_cd ≈ 6 μm, the conduction resistance (R_d) begins to dominate and limit the growth rate in both cases. 32 A theoretical estimate of R_cd was obtained by balancing the conduction resistance through the droplet, R_d = Rθ/(4πR²k_w sinθ), with the interfacial, R_i = 1/(2πR²h_i(1 − cosθ)), and P-C-G, R_P-C-G ≈ (δ_HC k_P + hk_HC)/(πR² sin²θ k_HC k_P φ), thermal resistances. 35 The interfacial and conduction resistances become equivalent at a radius R_cd = 4k_w sinθ(R_i + R_P-C-G)/θ ≈ 6 μm, which is in good agreement with our experiments.
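To see how the three resistances trade off, the following sketch evaluates them over the droplet radius and locates the crossover where droplet conduction overtakes the interfacial plus P-C-G terms. The interfacial coefficient h_i is an assumed order-of-magnitude value (not quoted in this section), so the printed crossover is only indicative of the ~6 μm scale reported in the text.

```python
import numpy as np

# Geometry and properties quoted in the text (SI units).
theta = np.deg2rad(164.0)    # apparent contact angle
phi, h, d_hc = 0.0177, 6.1e-6, 1e-9
k_w, k_p, k_hc = 0.6, 150.0, 0.2
h_i = 4e5                    # ASSUMED interfacial coefficient [W/m^2 K]

R = np.logspace(-7, -4, 400)  # droplet radius sweep, 0.1 um to 100 um
R_i = 1 / (2 * np.pi * R**2 * h_i * (1 - np.cos(theta)))             # interfacial
R_d = theta / (4 * np.pi * R * k_w * np.sin(theta))                   # droplet conduction
R_pcg = (d_hc*k_p + h*k_hc) / (np.pi * R**2 * np.sin(theta)**2
                               * k_hc * k_p * phi)                    # pillar-coating-gap

# Radius at which conduction overtakes the sum of the other two resistances.
cross = R[np.argmin(np.abs(R_d - (R_i + R_pcg)))]
print(f"R_cd ~ {cross*1e6:.1f} um")   # indicative of the ~6 um scale in the text
```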
The results from the model (red lines) are also shown in Figures 2A and B and are in excellent agreement with the experiments (black circles). Model solutions were obtained for ΔT = 0.12 K where ΔT was chosen based on the best fit between the model and experimental growth rate data. The approximate value of ΔT from the experiments was ΔT = T sat (P = 1200 Pa) -282.15 ± 1.5 K = 0.65 ± 1.5 K. Therefore, the value used in the model is within the error of the experimental apparatus. In addition, the small value of ΔT is consistent with the assumption that only molecules near the substrate contribute to the phase change process, i.e., the local vapor pressure is lower than the measured bulk vapor pressure. 28 In order to gain further insight, we compared the experimental results with the power law exponent model. 13,[36][37][38][39][40][41][42] When droplet dimensions are larger than the surface pattern length scales (〈R〉 > 2 μm), droplets grow as breath figures on a surface with an expected average droplet radius of 〈R〉 = ρt α where α, the power law exponent, ranges from 0 to 1 depending on the droplet, substrate dimensions and growth limiting conditions.
Power-law exponents were extracted for the PW and S drops, respectively. Both values were within the range of 0 to 1, but differed from the expected 1/3 power law. 40 This result indicates that vapor diffusion to the droplet interface was not the limiting growth mechanism; instead, a kinetic barrier was formed due to the low ESEM pressures (P = 1200 Pa). 28 When the average droplet diameter 〈2R〉 reached the coalescence length, both morphologies grew with a power-law exponent of α_PW = α_S = 0.05 ± 0.15, as expected, i.e., the average diameter was constant due to coalescence-induced droplet ejection. 13
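Extracting such an exponent is a one-line regression in log-log space; the sketch below (with made-up radius/time arrays standing in for the ESEM measurements) fits α and the prefactor ρ of 〈R〉 = ρt^α by least squares.

```python
import numpy as np

def fit_power_law(t, R):
    """Fit <R> = rho * t**alpha by linear least squares in log-log space."""
    alpha, log_rho = np.polyfit(np.log(t), np.log(R), 1)
    return alpha, np.exp(log_rho)

# Illustrative stand-in data (not the paper's measurements):
t = np.linspace(10, 300, 50)                                          # time [s]
R = 0.4 * t**0.75 + np.random.default_rng(0).normal(0, 0.5, t.size)   # radius [um]
alpha, rho = fit_power_law(t, R)
print(f"alpha = {alpha:.2f}, rho = {rho:.2f}")
```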
Transitioning Droplets
In certain cases when the nanoscale scallop features on the pillars could not pin the droplet contact line, we observed S droplets transitioning to PW droplets ( Figure 3A) (see Supporting Information, VideoS3). This phenomenon further demonstrated the importance of the droplet wetting morphology on growth rate. Figure 3B shows the growth rate of three distinct S droplets, two of which underwent transition into the PW state. Upon transition, a liquid bridge formed between the droplet and substrate and the apparent contact angle decreased.
The growth rate of these droplets increased by 2.8× compared to the S droplet immediately after transition. The transitioned growth rate (dR/dt = 0.34 µm/s) exceeded the steady growth rate of a comparably sized PW droplet (dR/dt = 0.18 µm/s), indicating that the driving potential for growth was larger. The increased rate was attributed to a larger substrate-vapor temperature difference (T_sat − T_s) due to additional subcooling from the constriction resistance at the base of the pillars (T_s − T_s′). 35 By determining the average temperature at the base between pillars using a spatial conduction resistance and incorporating the additional surface subcooling into the droplet growth model, the theoretical results show excellent agreement with the experiments (Figure 3B) (see section S7 of Supporting Information). Note that at these transitioning length scales (~10⁻⁶ m), surface diffusion growth due to adsorbed atoms on the substrate is negligible and cannot account for the rapid increase in growth. 43-45
Implications to Heat Transfer
Based on the understanding developed for individual droplet growth rates, we investigated the heat and mass transfer performance of the two distinct droplet morphologies. To quantify the difference in performance prior to coalescence-induced ejection, the total heat removed Q by an individual droplet growing from the critical nucleation radius R* to the coalescence radius l_c/2 was determined from the condensed spherical-cap volume,

Q = ρ_w h_fg (π/3)[(l_c/2)³ − R*³](2 − 3cosθ + cos³θ),   (3)

where l_c is the coalescence length or, alternatively, can be considered the coalescing droplet diameter when droplets merge and shed from the surface. 13 R* is the critical droplet radius for nucleation, which is approximated as zero due to its small magnitude (~10 nm). The ratio of the average heat transfer rates for individual PW and S droplets, q_PW/q_S, is therefore approximated by

q_PW/q_S ≈ [τ_S(2 − 3cosθ_PW + cos³θ_PW)] / [τ_PW(2 − 3cosθ_S + cos³θ_S)],   (4)

where θ_PW and θ_S are the PW and S contact angles at coalescence, respectively, and τ_PW and τ_S are the PW and S droplet coalescence times (times at which coalescence occurs) corresponding to a coalescence length l_c, respectively. The coalescence times for the experimental and modeling results in Figure 4 were obtained from the growth rates in Figures 2 and 3. The higher error at lower coalescence lengths is due to the larger deviation between experimental and model growth rates for the S morphology, as well as the larger experimental error associated with ESEM measurements of small droplet sizes. Figure 4 shows the heat transfer ratio model overlaid with experiments, where a 4-6× droplet heat transfer increase during dropwise condensation was demonstrated for PW compared to S droplets. As expected, the increased thermal resistance associated with the S droplet morphology decreases the growth rate and, as a result, severely limits individual S droplet heat transfer when compared to its PW counterpart. The heat transfer enhancement diminishes at larger coalescence lengths due to the increasing droplet conduction thermal resistance for both droplet morphologies, resulting in similar growth rates. Figure 4 indicates that meeting the criteria for Cassie stable surfaces is not the only requirement for heat and mass transfer enhancement. In fact, preferential formation of Cassie droplets with the S morphology can even degrade total surface heat and mass transfer performance when compared to a flat (non-nanostructured) hydrophobic surface, which is investigated in the next section.
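Equation 4 reduces to a simple function of the two contact angles and coalescence times; the sketch below evaluates it for illustrative inputs (the τ values are placeholders, not the measured coalescence times from Figures 2 and 3).

```python
import numpy as np

def heat_rate_ratio(theta_pw_deg, theta_s_deg, tau_pw, tau_s):
    """q_PW / q_S per Eq. 4: equal condensed volumes at coalescence, so the
    ratio is set by the spherical-cap geometry and the coalescence times."""
    g = lambda th: 2 - 3*np.cos(th) + np.cos(th)**3   # spherical-cap volume factor
    th_pw, th_s = np.deg2rad(theta_pw_deg), np.deg2rad(theta_s_deg)
    return (tau_s * g(th_pw)) / (tau_pw * g(th_s))

# With both apparent angles ~164 deg, the ratio is dominated by tau_s/tau_pw.
print(heat_rate_ratio(164, 164, tau_pw=30.0, tau_s=150.0))  # placeholder times -> 5.0
```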
Comparison to a Flat Hydrophobic Surface
The insights gained regarding individual droplet wetting morphology led to an investigation of the overall performance enhancement created by nanostructuring compared to a flat (no surface structuring) hydrophobic surface. Specifically, we aimed to address whether the benefit of droplet departure below the characteristic capillary length created by nanostructuring outweighs the disadvantage of reduced growth rates due to the increased thermal resistance associated with the S droplet morphology.
Additional ESEM droplet growth studies were performed on a flat hydrophobic surface for comparison (see section S8 of Supporting Information). The flat surface sample consisted of a silicon substrate, functionalized by CVD as described above. Droplet growth on the flat surface was characterized using identical condensation conditions as the nanostructured surfaces and also showed good agreement with the thermal resistance model.
To compare the theoretical surface heat and mass transfer performance of the flat and nanostructured surfaces, we combined droplet size distribution theory, which accounts for the fraction of droplets of a given radius R on the surface, with the developed droplet growth model. For small droplets (R ≤ R_e), the size distribution n(R) is determined from population-balance theory (Equation 5), 10 where Ȓ is the average maximum droplet radius (departure radius), τ is the droplet sweeping period, and R_e is the radius at which droplets growing by direct vapor addition begin to merge and grow by droplet coalescence. For large droplets (R > R_e) growing mainly by coalescence, the droplet distribution N(R) was determined from the classical self-similar form 17

N(R) = (1/(3πR²Ȓ))(R/Ȓ)^(−2/3).   (12)

The total surface steady-state condensation heat flux, q″, was obtained by incorporating the individual droplet heat transfer rate (Equation 1) with the droplet size distributions (Equations 5 and 12). For droplets growing on the flat surface (F), Ȓ was assumed to be 2 mm, 10 and l_c = 2R_e = 28 ± 7 µm.
Droplet growth on the structured surface above the coalescence length for both PW and S morphologies was neglected because most droplets coalesced and ejected from the surface. 15 In addition, the sweeping time τ was assumed to be infinite on the nanostructured surface due to the coalescence induced ejection departure mechanism, and l c = 2R e = 2Ȓ = 10 ± 2 µm. Figure 5 shows the total surface heat flux, q", as a function of the difference between the wall and saturation temperature, ΔT, for these surfaces with the three identified wetting morphologies (PW, S, and F). As expected, the structured surface with the PW wetting morphology showed a 56% heat flux enhancement when compared to that of the flat surface. Meanwhile, a 71% heat flux degradation was shown for the surface with the S wetting morphology which indicated the increased thermal resistance and the slower growth rate prior to coalescence outweighed the benefits of droplet ejection. Figure 5 indicates that meeting the criteria of Cassie stability is not the only requirement for heat and mass transfer enhancement via nanostructuring.
This comparison ( Figure 5) assumed only PW or S droplet morphologies existed exclusively on the structured surfaces. In actuality, approximately the same number of PW and S wetting morphologies were observed on the nanostructured surface in this work, resulting in a total surface heat flux degradation of 12% when compared to the flat hydrophobic surface. It is important to note that the difference in observed coalescence lengths between the flat and structured surfaces contributed to the heat and mass transfer performance. To control for this parameter, we investigated the hypothetical case where the coalescence length for all three droplet morphologies is equivalent, l c,PW = l c,S = l c,F = 10 ± 2 µm. For the hypothetical case, the PW and S wetting morphologies showed an 11% enhancement and an 80% degradation compared to the flat surface, respectively. As expected, the PW enhancement decreased and S degradation increased due to the higher heat and mass transfer of the F morphology associated with the increased population of droplets with radii below the coalescence length. 1,16,17 To gain a broader understanding of the P-C-G thermal resistance, the developed model was used to investigate the effect of pillar height (h) and coalescence length (l c ) on the PW to F heat flux ratio (q" PW / q" F ) ( Figure 6). This comparison assumed l c = 2R e = 2Ȓ for the PW surface, l c = 2R e = 28 ± 7 µm for the F surface, and that scaling down the pillar height does not affect the PW surface wetting state or contact angle behavior.
As expected, the results show that the heat flux ratio increases as h decreases due to the smaller P-C-G thermal resistance. In addition, a reduction in l c acts to increase the heat transfer ratio due to earlier droplet removal from the surface and a higher population of smaller droplets. 15 The results of these analyses further emphasize the conclusion that structured surface droplet wetting morphology needs to be carefully controlled to realize enhanced condensation heat and mass transfer. Furthermore, the analysis suggests the importance of minimizing the thermal resistance of the PW morphology (i.e., by reducing pillar height), while ensuring Cassie stability to achieve dropwise condensation heat and mass transfer enhancement via surface structuring.
CONCLUSIONS
In summary, we demonstrated the importance of droplet wetting morphology on condensation growth rates for Cassie stable surfaces via an in situ ESEM study of S and PW droplet morphologies on superhydrophobic nanostructured surfaces. While both droplet morphologies demonstrated coalescence-induced droplet ejection at identical length scales, the initial growth rate of the PW morphology was 6× higher than that of the S morphology due to the increased contact with the substrate. Additionally, transitioning S-to-PW droplets showed a rapid 2.8× increase in growth rate due to the change in wetting morphology and surface subcooling. The experimental results were corroborated with a thermal-resistance-based droplet growth model and showed PW droplets had a 4-6× higher heat transfer rate than S droplets for the observed coalescence lengths. Based on these results, which showed the importance of droplet wetting morphology on individual droplet heat and mass transfer, we investigated the overall performance of the structured surface compared to a flat hydrophobic surface. Using droplet distribution theory combined with the droplet growth model, we showed that these nanostructured surfaces with PW morphologies had 56% total surface heat flux enhancement, while S morphologies had 71% heat flux degradation when compared to a flat hydrophobic surface. These results shed light on the previously unidentified importance of droplet wetting morphology for dropwise condensation heat and mass transfer on superhydrophobic nanostructured surfaces, as well as the importance of designing Cassie stable nanostructured surfaces with tailored droplet morphologies to achieve enhanced heat and mass transfer during dropwise condensation.
METHODS
Fabrication Procedure of Silicon Nanopillars. Silicon nanopillar surfaces (Figure 1A) with diameters of d = 300 nm, heights of h = 6.1 μm, and center-to-center spacings of l = 2 μm (solid fraction φ = πd²/4l² = 0.0177 and roughness factor r = 1 + πdh′/l² = 3.26) were fabricated using e-beam lithography and deep reactive ion etching. Chemical vapor deposition (CVD) of (tridecafluoro-1,1,2,2-tetrahydrooctyl)-1-trichlorosilane was used to functionalize the pillars and create Cassie stable superhydrophobic surfaces (see section S2 of Supporting Information). The samples were first cleaned in a plasma cleaner (Harrick Plasma) for 20 minutes, then immediately placed in a vacuum chamber containing an open container of the silane at room temperature and held at 17.5 kPa for 30 minutes. Upon removal from the chamber, the samples were rinsed in ethanol and DI water, and then dried with N₂. Goniometric measurements on a smooth silanated silicon surface showed advancing and receding contact angles of θ_a = 119.2° ± 1.3° and θ_r = 86.1° ± 1.3°, respectively.
ESEM Imaging Procedure. Condensation nucleation and growth were studied on the fabricated surfaces using an environmental scanning electron microscope (EVO 55 ESEM, Zeiss). Back-scatter detection mode was used with a high gain. The water vapor pressure in the ESEM chamber was 1200 ± 12 Pa. Typical image capture was obtained with a beam potential of 20 kV and a variable probe current depending on the stage inclination angle. To limit droplet heating effects, 26 probe currents were maintained below 1.9 nA and the view area was kept above 400 μm × 300 μm. A 500 μm lower aperture was used in series with a 1000 μm variable-pressure upper aperture to obtain greater detail. The sample temperature was initially set to 10 ± 1.5 °C using a cold stage and allowed to equilibrate for 5 minutes. The surface temperature was subsequently decreased to 9 ± 1.5 °C, resulting in nucleation of water droplets on the sample surface from the saturated water vapor. Images and recordings were obtained at an inclination angle of 70 to 80 degrees from the horizontal to observe growth dynamics and wetting morphologies close to the droplet base. Recordings were obtained at 2.5 s time increments, corresponding to 0.4 fps. Copper tape was used for mounting the sample to the cold stage to ensure good thermal contact.
Figure 6. Theoretical heat flux ratio (q″_PW/q″_F) of a surface favoring PW droplet formation (q″_PW) compared to a flat hydrophobic surface (q″_F) as a function of coalescence length (l_c) and pillar height (h). l_c = 2R_e = 2Ȓ for the PW surface, and l_c = 2R_e = 28 ± 7 µm for the F surface. As expected, the heat flux ratio increases as h decreases due to the diminishing P-C-G thermal resistance. In addition, reducing l_c acts to increase the heat transfer ratio due to earlier droplet removal from the surface and a higher population of small droplets. 23 Inset: heat flux ratio (q″_PW/q″_F) as a function of h for the experimentally measured coalescence length, l_c = 10 ± 2 μm.
Data Collection:
The average droplet radius (Figures 2 and 3) is defined as the radius measured in each video frame during the condensation process. Because droplets vary in initial size once condensation begins, the growth data was normalized with respect to the droplet radius. The droplet radius as a function of time (each frame) was recorded for all clearly visible droplets during condensation (13 PW droplets and 16 S droplets).
Once the radius as a function of time was obtained, the droplets were ordered and averaged in terms of size; i.e., a droplet that began growth with an initial radius of 5 µm was only averaged with other droplets once they reached a radius of 5 µm and above. The growth rate of newly nucleating droplets (once above radii of 5 µm) showed good agreement with the growth rate of droplets growing from initial radii of 5 µm, which indicates that this method is appropriate.
S2. ENERGETICALLY FAVORED WETTING STATE
The distinct growth behavior in Figure 1B can be explained using an energy approach. The relevant energy barrier dictating whether or not the contact line will de-pin is approximated by considering the energy required for the liquid to advance through a unit cell of the structured surface. 1 The result of such an analysis is

E* = −1/(r cosθ_a).   (S1)

When E* ≤ 1, the contact line near the base of the pillars can overcome the energy barrier and de-pin, and a Wenzel droplet is formed. If E* > 1, complete de-pinning is not possible and the droplet spreads over the top of the pillar array, forming a nominally Cassie droplet when R ≫ l. This interpretation is consistent with the behavior observed in Figure 1C where, after accounting for the scallop features on the pillar sides via the effective pillar height h′, 3 E* ≈ 0.63.
S3. DROPLET WETTING MORPHOLOGY AND CONTACT ANGLES
Wetting Morphology
To confirm the wetting state of the S and PW wetting morphologies, higher magnification ESEM imaging was performed. The beam potential was kept constant at 20 kV while the probe current was reduced to 1.2 nA to minimize electron beam heating effects. 4 The results of the imaging ( Figure S1) indeed show PW droplets form a liquid bridge connecting the base of the droplet and substrate, whereas S droplets form on the tips of pillars and are not substantially impaled by the pillars.
The droplet does not substantially penetrate into the pillars 5 for the S morphology due to the flat tips and scallop features on the sides created by the DRIE process (see Figure 1A) that act to pin the contact line. Figure S1. High magnification ESEM images of the S and PW droplet wetting morphologies performed at a beam potential of 20 kV and probe current of 1.2 nA. Condensation conditions: P = 1200 ± 12 Pa, T s = 282 ± 1.5 K. a) Initial formation of an S droplet. The droplet forms on a pillar tip, and grows across neighboring pillars remaining in the suspended state throughout. Adjacent pillars bend slightly due to surface tension 6-9 created by droplet receding (due to ESEM beam induced droplet evaporation). b) High magnification image of a PW droplet. A liquid bridge connecting the droplet to the substrate is observed below the droplet base. c) Adjacent S and PW droplet at very early stages of growth. The pillar tips are visible in the center of the image, as well as at the contact line of the S droplet, indicating the droplet is indeed suspended on the pillar tips, and is not appreciably impaled by the pillars.
Contact Angles
ESEM images of water droplets show high topographic contrast such that reliable contact angle measurements can be made. 10 Droplet contact angles were determined from frame-by-frame analysis of condensation videos (including VideoS1 and S2). Contact angles were determined by fitting a circle to each individual droplet (spherical approximation) and determining the slope of the tangent where the droplet neck intersects the fitted circle. This approach led to larger errors for S droplets due to the difficulty in determining where the base of the droplet intersects the fitted circle. Figure S2 shows the resulting apparent contact angles as a function of droplet radius for the two morphologies.
S4. SUSPENDED DROPLET PINNING DUE TO PILLAR SCALLOPS
To validate the idea that S droplet formation is due to the presence of scallop features on the pillar sides that act to pin the contact line, additional ESEM condensation experiments were performed on samples having smooth pillars. In contrast to the scalloped pillar samples, droplet transitioning on smooth pillars should occur more readily due to the lack of pinning points created by the scallops. Condensation on the smooth pillar surface resulted in similar (randomly distributed) nucleation behavior as the scalloped pillar surface. Condensing droplets formed both S and PW morphologies. In contrast to the scalloped pillar surface, the number of droplets that underwent transition from the S to PW wetting morphology greatly increased ( Figures S3 and S4). This result supports our assumption that the pillar scallops play an important role in pinning the S droplet contact line and hinder transition to the PW wetting morphology. Figure S3. ESEM image of droplet growth on a smooth nanopillar surface for two consecutive image frames (a) t = 0 seconds, and (b) t = 80 seconds. Red dashed circles show S droplets prior to transition. Blue dashed circles show S droplets that underwent successful transition to the PW state. The frequency of transition for the smooth-pillar surface is higher than that of the scalloped-pillar surface, supporting the assumption that the pillar scallops play an important role in pinning the S droplet contact line and hinder transition to the PW wetting morphology. Figure S4. Close up ESEM images of droplet growth on a smooth nanopillar surface for three consecutive image frames (a) t = 0 seconds, (b) t = 80 seconds and (c) t = 160 seconds. Red dashed circles show S droplets prior to transition, and blue dashed circles show S droplets that underwent transition to the PW wetting morphology. As in the scalloped pillar case, droplets nucleating on the tops of pillars remain pinned in the S state throughout. Ensuring smoothness of the nanostructure does not guarantee droplet transitioning for all S drops. However, it does indicate that the energy barrier for transition is reduced.
S5. DROPLET COALESCENCE AND REMOVAL
Nanostructured Surface
During the condensation process, droplet removal via coalescence-induced jumping 13,14 was observed (see VideoS1). The spontaneous out-of-plane droplet motion occurs due to the surface energy released during droplet coalescence (Figure S5). Analysis of ESEM data showed a nucleation density of N = 3.14×10⁹ m⁻². The measured average coalescence length was 11.2 µm with a standard deviation of 2.94 µm. The rms length predicted by a randomly distributed Poisson distribution, 15,16 l_c = 1/(πN)^0.5 ≈ 1/(4N)^0.5, was 10.07 µm, which is within one standard deviation of the experimentally measured average. Therefore, the average droplet coalescence diameter at steady state was assumed to be l_c = 10 ± 2 μm, which is 30× smaller than the droplet capillary length. Both PW and S droplets were observed to undergo coalescence-induced jumping, which suggests that the contact line pinning force for both droplet morphologies is below the critical threshold for jumping.
Figure S5. Coalescence-induced droplet shedding at three separate locations. Images a), c) and e) show the condensing droplet surfaces prior to coalescence, while images b), d) and f) show the corresponding surfaces after coalescence and ejection. Labels A and B denote the coalescing droplets. For clarity and ease of observation, the three cases shown are all large-droplet coalescence events exceeding the average coalescence diameter of 10 ± 2 μm.
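The nearest-neighbor estimate used here is easy to reproduce; the sketch below evaluates l_c = 1/√(πN) for the two measured nucleation densities (the second value comes from the flat-surface subsection that follows).

```python
import math

def coalescence_length(N):
    """Mean nearest-neighbor spacing for randomly (Poisson) distributed
    nucleation sites of areal density N [m^-2]."""
    return 1.0 / math.sqrt(math.pi * N)

for label, N in [("nanostructured", 3.14e9), ("flat", 3.42e8)]:
    print(f"{label}: l_c = {coalescence_length(N)*1e6:.2f} um")
# -> ~10.07 um and ~30.51 um, matching the values quoted in the text.
```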
Flat Surface
During the condensation process on the smooth surface, droplet removal via coalescence-induced jumping 13,14 was not observed (see VideoS4); instead, growth due to direct accommodation of vapor molecules and coalescence dominated (Figure S6). Analysis of ESEM data showed a nucleation density of N = 3.42×10⁸ m⁻².
The measured average coalescence length was 28.7 µm with a standard deviation of 7.06 µm. The large error associated with the measured initial coalescence length on the flat surface was due to the non-uniformity from spot to spot on the flat sample. The rms length predicted by a random Poisson distribution, 15,16 l_c = 1/(πN)^0.5 ≈ 1/(4N)^0.5, was 30.51 µm, which is within one standard deviation of the experimentally measured average.
Therefore, the average droplet interaction diameter at steady state was assumed to be l c = 2R e = 28 ± 7 μm.
S6. DROPLET GROWTH MODELING
The predicted growth of each droplet (PW and S) was obtained by modifying the model originally developed by Umur and Griffith 17 to account for the pillar geometry and the details of the surface wetting. At the scales considered in this work (~10⁻⁶ m), the dominant mode of droplet growth is the direct accommodation of vapor molecules at the droplet interface. 18 For a droplet with radius R(t) on a structured superhydrophobic surface, as shown in Figure S7(a), the droplet contact angle θ varies with the droplet radius according to the fit given in Figure S2. The local vapor (T_sat) and surface (T_s) temperatures are assumed to be constant throughout the growth process. The droplet heat transfer, q, is determined by considering all thermal resistances from the saturated vapor through the condensing droplet to the substrate (Figure S7(b)). All thermal resistances associated with the droplet are presented in terms of individual temperature drops: the liquid-vapor interfacial resistance due to direct vapor molecule accommodation at the droplet interface (ΔT_i), the conduction resistance through the droplet (ΔT_d), the conduction resistance through the pillars (ΔT_P,S) or liquid bridge and pillars (ΔT_P,PW), the hydrophobic coating resistance (ΔT_HC), and the resistance due to the curvature of the droplet (ΔT_C).
Internal droplet convection was neglected in the model, since the droplets were sufficiently small that conduction is the primary mode of heat transfer through the droplet. 19,20 This assumption was validated by calculating the characteristic Rayleigh number, Ra = gβΔTR³/(να) ≪ 1. The temperature drop due to droplet curvature (ΔT_C) is given by 21

ΔT_C = 2T_sat σ/(h_fg ρ_w R),

where T_sat is the water vapor saturation temperature, σ is the water surface tension, h_fg is the latent heat of vaporization, and ρ_w is the liquid water density.
The temperature drop between the saturated vapor and the liquid interface (ΔT_i) is given by

ΔT_i = q/(2πR²h_i(1 − cosθ)),

where q is the heat transfer rate through the droplet and h_i is the condensation interfacial heat transfer coefficient, given by 17,22

h_i = (2α/(2 − α))·(1/√(2πR_g T_sat))·(h_fg²/(ν_g T_sat)),

where R_g is the specific gas constant and ν_g is the water vapor specific volume. The condensation coefficient, α, is the ratio of vapor molecules that will be captured by the liquid phase to the total number of vapor molecules reaching the liquid surface (ranging from 0 to 1). We assume α = 0.9, which is appropriate for clean environments such as the ESEM. 21 The conduction temperature drop through the droplet, from the liquid-vapor interface to the droplet base at temperature T_b1, is

ΔT_d = qθ/(4πRk_w sinθ),

where k_w is the condensed water thermal conductivity. The temperature drop due to the hydrophobic coating is calculated using a conduction resistance,

ΔT_HC = qδ_HC/(k_HC φπR² sin²θ),

where T_b2 is the temperature of the silicon pillars beneath the hydrophobic coating, δ_HC is the hydrophobic coating thickness (δ_HC = 1 nm), φ is the structured surface solid fraction (φ = 0.0177), and k_HC is the coating thermal conductivity (k_HC = 0.2 W/mK).
The conduction resistance through the pillars depends on the wetting morphology of the droplet. For the S morphology, the temperature drop associated with the conduction resistance is given by

ΔT_P,S = qh/(k_P φπR² sin²θ),

where T_s is the substrate temperature, h is the pillar height (h = 6.1 μm), and k_P is the pillar thermal conductivity (k_P = 150 W/mK).
Figure S7. Heat transfer resistance network in the droplet and pillar structure. The schematic outlines the parallel path of heat flowing through i) the hydrophobic coating (R_hc) followed by the pillar (R_p) and ii) the liquid bridge (R_g) followed by the hydrophobic coating (R_hc). Schematic is not to scale.
For PW droplets, the conduction temperature drop through the pillar and coating structure is calculated by considering parallel heat transfer pathways from the base of the droplet to the substrate surface (Figure S8): the coating-pillar path acting over the solid fraction φ of the droplet basal area, in parallel with the liquid-bridge-coating path acting over the remaining fraction (1 − φ),

ΔT_P,PW = (q/(πR² sin²θ))·[φ/(δ_HC/k_HC + h/k_P) + (1 − φ)/(h/k_w + δ_HC/k_HC)]⁻¹.   (S10)

Equating the heat flow through the resistance network with the latent heat released by the condensing droplet, the growth rate is

dR/dt = [ΔT − 2T_sat σ/(h_fg ρ_w R)] / [ρ_w h_fg πR²(2 − 3cosθ + cos³θ)R_tot].   (S11)
Equation S11 was numerically discretized to obtain the droplet radius as a function of time,

R_{i+1} = R_i + (dR/dt)|_{R_i} Δt.   (S12)

To obtain sufficient accuracy and resolution, the time step used in the numerical simulation was Δt = 0.01 s.
Material properties were obtained using NIST software (REFPROP) such that all input parameters used were temperature dependent.
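A minimal sketch of such an integration, assuming the resistance expressions above and constant water properties (rather than the temperature-dependent REFPROP values used by the authors); the interfacial coefficient and subcooling are illustrative inputs, not fitted values.

```python
import numpy as np

# Illustrative constant properties (the authors used temperature-dependent REFPROP data).
rho_w, h_fg, sigma, T_sat = 998.0, 2.48e6, 0.072, 283.0
k_w, k_p, k_hc = 0.6, 150.0, 0.2
phi, h, d_hc = 0.0177, 6.1e-6, 1e-9
theta = np.deg2rad(164.0)
h_i, dT = 4e5, 0.12          # ASSUMED interfacial coefficient; subcooling [K]

def R_tot(R):                 # total thermal resistance for the S morphology [K/W]
    R_i = 1 / (2*np.pi*R**2*h_i*(1 - np.cos(theta)))
    R_d = theta / (4*np.pi*R*k_w*np.sin(theta))
    R_pcg = (d_hc*k_p + h*k_hc) / (np.pi*R**2*np.sin(theta)**2*k_hc*k_p*phi)
    return R_i + R_d + R_pcg

def grow(R0=1e-6, dt=0.01, t_end=100.0):
    """Forward-Euler integration of Eqs. S11/S12 from an initial radius R0."""
    R, out = R0, []
    for _ in range(int(t_end/dt)):
        drive = dT - 2*T_sat*sigma/(h_fg*rho_w*R)        # curvature-corrected subcooling
        dRdt = drive / (rho_w*h_fg*np.pi*R**2
                        * (2 - 3*np.cos(theta) + np.cos(theta)**3) * R_tot(R))
        R += dRdt*dt
        out.append(R)
    return np.array(out)

print(f"R(100 s) ~ {grow()[-1]*1e6:.2f} um")
```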
It is important to note that while θ_PW varies as a function of time t (i.e., θ_PW is a function of R(t)), in this study it is treated as a constant in the volume derivative dV/dt due to the slowly varying nature of dθ_PW/dR. The temperature difference between the base of the pillar (T_1) and the subcooled region (T_2) between adjacent pillars was obtained by determining the spreading resistance, 29 which is a function of the substrate thermal conductivity k_p and the geometric parameters a, b, and c (Figure S9). Figure S10 shows the calculated temperature distribution between the pillars. The average subcooling was determined by an area-weighted integral of the temperature difference distribution ΔT(x) over a circle of radius R = 1 μm, where ΔT(x′) is the temperature difference as a function of the transformed coordinate x′ (x′ = 1400 nm − r) and r is the radial coordinate originating from the center of a pillar unit cell (Figure S9(b)).
Figure S10. Temperature difference between the pillars as a function of location x. The maximum temperature difference between the base of the pillar and the midpoint between two diagonal pillars was determined to be 0.048 K. The average temperature difference was approximated as the area-weighted integral of the distribution (Eq. S15) and was calculated to be 0.044 K. Inset: schematic temperature distribution of the pillar/substrate with the spreading resistance.
S8. FLAT SURFACE ESEM GROWTH
Droplets on the flat hydrophobic sample nucleated randomly on the surface and grew with an approximately constant contact angle of 120°, which is in good agreement with the macroscopically measured advancing contact angle θ_a = 119.2° ± 1.3° (Figure S11(a)). The experimentally obtained average droplet diameters as a function of time for the PW, S, and F morphologies prior to coalescence are shown in Figure S11(b). The growth rate of F droplets was higher than that of the PW or S morphologies due to the lower effective contact angle and lower droplet conduction resistance. Additionally, the P-C-G thermal resistance is not present on the flat surface (Figure S11(c)). However, F droplets had higher droplet contact line pinning and, as a result, a larger (gravity-dependent) droplet removal size. 14 The average coalescence length of F droplets was found to be l_c = 2R_e = 28 ± 7 μm (see section S5).
Figure S11. Droplet growth on the flat hydrophobic surface. The growth rate of F droplets was higher than that of the PW or S morphologies due to a lower effective contact angle and lower droplet conduction resistance; additionally, the P-C-G thermal resistance is not present on the flat surface. Experimental data (black circles) were obtained from ESEM video (P = 1200 ± 12 Pa, T_s = 282 ± 1.5 K) (see Supporting Information, VideoS1, VideoS2, and VideoS4). The theoretical prediction (red line) was obtained from the thermal resistance model (for model derivation and parameters see section S6). PW and S droplet growth data are identical to the data presented in Figures 2a and b.
Chinese Herbal Medicine Suppresses Invasion-Promoting Capacity of Cancer-Associated Fibroblasts in Pancreatic Cancer
Pancreatic cancer remains one of the leading causes of cancer-related deaths due to aggressive growth, high metastatic rates during the early stage, and the lack of an effective therapeutic approach. We previously showed that Qingyihuaji (QYHJ), a seven-herb Chinese medicine formula, exhibited significant anti-cancer effects in pancreatic cancer, associated with modifications in the tumor microenvironment, particularly the inhibition of cancer-associated fibroblast (CAF) activation. In the present study, we generated CAF and paired normal fibroblast (NF) cultures from resected human pancreatic cancer tissues. We observed that CAFs exhibited an enhanced capacity for inducing pancreatic cancer cell migration and invasion compared with NFs, while QYHJ-treated CAFs exhibited decreased migration- and invasion-promoting capacities in vitro. Further analyses indicated that, compared with NFs, CAFs exhibit increased CXCL1, 2 and 8 expression, contributing to the enhanced invasion-promoting capacities of these cells, while QYHJ treatment significantly suppressed CAF proliferation and the production of CAF-derived CXCL1, 2 and 8. These in vitro observations were confirmed in mouse models of human pancreatic cancer. Taken together, these results suggest that suppressing the tumor-promoting capacity of CAFs through Chinese herbal medicine attenuates pancreatic cancer cell invasion.
Introduction
Due to aggressive growth and a high metastatic rate during the early stage, pancreatic cancer remains a highly lethal malignant disease [1], and only approximately 10-20% of pancreatic cancers are resectable at the time of diagnosis [2]. Gemcitabine has been the standard treatment for advanced pancreatic cancer; however, the median survival is 5-6 months, with frequent development of chemo-resistance during treatment [3]. Thus, pancreatic cancer remains a dreadful disease, and there is an urgent need for further studies to reveal the molecular mechanisms of tumor invasion and metastasis and to develop an effective therapeutic approach for the prevention and/or treatment of pancreatic cancer.
Cancer-associated fibroblasts (CAFs), predominant components of the tumor stroma, have been extracted from several invasive human carcinomas, including pancreatic cancer [4,5,6]. Pancreatic ductal adenocarcinoma is characterized by an extensive stromal response called desmoplasia [7]. Within the tumor stroma, CAFs are the primary cell type and play an important role in tumor progression [5]. CAFs secrete multiple factors, including CXC and CC chemokines and other inflammatory mediators, that promote the proliferation, invasion, and metastasis of cancer cells. Moreover, accumulating evidence has demonstrated that CAFs play a key role in the acquisition of drug resistance during tumor therapy [8], which negatively impacts clinical outcomes [9,10]. Therefore, inhibiting the activation of CAFs might represent a potential therapeutic approach for pancreatic cancer treatment.
QYHJ, a seven-herb Chinese medicinal formula used for treating pancreatic cancer in China, inhibits both tumor growth and metastasis in nude mice models of pancreatic cancer [11,12,13]. In addition, the combined use of QYHJ with conventional Western medicine prolongs survival time in patients with liver metastases from pancreatic cancer [14]. However, the underlying molecular mechanism remains unclear.
Here, we demonstrated that CAFs exhibited an enhanced capacity for inducing pancreatic cancer cell migration and invasion compared with NFs, while QYHJ-treated CAFs exhibited decreased migration- and invasion-promoting capacities in vitro. In addition, we showed that, compared with NFs, CAFs express high levels of CXCL1, 2 and 8, contributing to the enhanced invasion-promoting capacity of these cells, and that QYHJ treatment suppressed the proliferation and CXCL1, 2 and 8 expression of CAFs. Taken together, these results suggest that suppressing the tumor-promoting capacity of CAFs with Chinese herbal medicine attenuates pancreatic cancer cell invasion.
Materials and Methods

Ethics Statement
All animal experiments were conducted in accordance with the guidelines of the National Institutes of Health for the Care and Use of Laboratory Animals. The study protocol was also approved by the Committee on the Use of Live Animals in Teaching and Research, Fudan University, Shanghai. At the end of the study, all animals were decapitated through spinal cord separation at the nape to minimize suffering. The fibroblasts were isolated from resected tissues of two patients with pancreatic ductal adenocarcinoma (PDAC). Before sample collection, written informed consent was obtained from each patient in accordance with institutional guidelines, and the study was approved by the committees for the ethical review of research at the Fudan University Shanghai Cancer Center.
Cell Lines and Mice
The human pancreatic cancer cell lines Capan1 and BxPC3 were obtained from the American Type Culture Collection and cultured in Dulbecco's modified Eagle medium (DMEM; Gibco, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS; Gibco, Carlsbad, CA) at 37 °C with 5% CO₂. The morphology of the Capan1 cell line was regularly assessed, and the cells were tested for the absence of mycoplasma contamination (MycoAlert, Lonza, Rockland, ME, USA). The Capan1 and BxPC3 cell lines were used at passages 30-40 in our study.
Female BALB/c-nu/nu nude mice aged 4-6 weeks were obtained from the Shanghai Institute of Materia Medica, Chinese Academy of Sciences (Shanghai, China), housed in laminar flow cabinets under specific pathogen-free conditions and provided food and water ad libitum.
Isolation of CAFs and paired NFs
Stromal fibroblasts were isolated as previously described [15]. Briefly, surgically resected pancreatic cancer tissues were obtained from two patients with pancreatic ductal adenocarcinoma. Written informed consent was obtained before tissue collection. The fresh pancreatic tumor tissue and adjacent normal tissue (at least 2 cm from the outer tumor margin) were minced into 1-3 mm³ fragments and digested with 0.25% trypsin at 37 °C for 30 min. The resulting fragments were centrifuged at 600 × g for 5 min and washed once with DMEM containing 10% fetal bovine serum. The tissue fragments were subsequently plated and incubated at 37 °C. The culture medium was changed twice a week for 3-4 weeks. Under these conditions, fibroblasts were explanted from the tissue fragments while other cells were mostly retained in the tissue. The fibroblasts formed multi-layer colonies spreading on the culture dish. After 3-4 weeks, the cultured cells were trypsinized and re-plated into T25 culture flasks (passage one). The fibroblasts were then sub-cultured for another 2-3 passages until the cultures were free of contaminating epithelial cells and were subsequently maintained in DMEM supplemented with 10% fetal bovine serum and 2% penicillin/streptomycin (Invitrogen, Carlsbad, CA, USA). The cells were grown at 37 °C in a humidified atmosphere containing 5% CO₂. NF and CAF strains were used at passages 4-5.
Immunohistochemistry and immunofluorescence
Immunohistochemistry (IHC) was performed as previously described [16]. Briefly, the tumor tissue samples were fixed in 10% formalin and embedded in paraffin wax. Unstained 3-μm sections were subsequently cut from the paraffin blocks for IHC analysis. The sections were stained with the following antibodies: rabbit anti-vimentin (1:200), rabbit anti-α-SMA (1:100), rabbit anti-CXCL1 (1:100), rabbit anti-CXCL2 (1:200), and rabbit anti-CXCL8 (1:25) at 4 °C overnight. The sections were incubated with secondary antibodies, and the avidin-biotin peroxidase complex was used according to the manufacturer's instructions (Vector Laboratories, CA, USA). An immunoglobulin-negative control was used to rule out non-specific binding. Two independent investigators and one pathologist, all of whom were blinded to the model/treatment type for the series of specimens, performed all procedures. To quantitatively evaluate the CAF-induced proliferative activity in each group, we calculated the ratio of the area positive for vimentin and α-SMA staining to the total area in histological sections from ten fields under light microscopy (×200) [11].
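For illustration, the positive-area quantification above is, at its core, a per-field pixel fraction. The sketch below shows that calculation on thresholded grayscale fields; the intensity threshold, image size, and synthetic data are all assumptions, as the paper does not describe its segmentation settings:

```python
import numpy as np

def positive_area_ratio(field, threshold=120):
    """Fraction of pixels in one field above a staining-intensity threshold.

    The 8-bit threshold of 120 is a hypothetical value; a real analysis
    would calibrate it against each staining batch.
    """
    mask = field > threshold
    return mask.sum() / mask.size

# Stand-ins for ten fields of a vimentin/alpha-SMA-stained section.
rng = np.random.default_rng(0)
fields = [rng.integers(0, 256, size=(512, 512)) for _ in range(10)]
ratios = [positive_area_ratio(f) for f in fields]
print(f"mean positive-area ratio over 10 fields: {np.mean(ratios):.1%}")
```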
Drugs and Reagents
QYHJ, a seven-herb Chinese medicinal formula comprising Scutellaria barbata (Ban zhi lian), Hedyotis diffusa (Bai hua she she cao), Amorphophallus kiusianus (She liu gu), Coix lacryma-jobi (Yi ren), Gynostemma pentaphyllum (Jiao gu lan), Ganoderma lucidum (Ling zhi) and Amomum cardamomum (Bai dou kou), was prepared as previously described [11,12,13]. Briefly, QYHJ powder was obtained from Jiang-yin Tianjiang Pharmaceutical Co., Ltd. To ensure standardization and maintain the inter-batch reliability of QYHJ, a high-performance liquid chromatography (HPLC) chromatographic fingerprint was developed for quality control. The fingerprint chromatograms of the QYHJ formula are shown in our previous report [17]. The final decoction of QYHJ was prepared by dissolving the herbal powder in distilled water to the required concentration. The daily dosage for rabbits and nude mice was 15 g/kg and 18 g/kg, respectively, calculated according to the following human-rabbit or human-mouse transfer formula: Db = Da × (Rb/Ra) × (Wb/Wa)^(2/3), where D, R, and W represent dosage, shape coefficient, and body weight, respectively, and a and b represent human and mouse or rabbit, respectively. Human recombinant CXCL1, 2, and 8 were obtained from PeproTech (Rocky Hill, NJ, USA). The anti-CXCR1 blocking antibody was obtained from R&D Systems (Minneapolis, MN, USA). The CXCR2 antagonist SB 225002 was obtained commercially.
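The human-to-animal dose conversion above is plain body-surface-area arithmetic; a minimal sketch follows. The shape coefficients and the human reference dose below are hypothetical placeholders (the paper does not list them), and D is treated here as the total daily dose:

```python
def transfer_dose(dose_a, r_a, r_b, w_a, w_b):
    """Db = Da * (Rb/Ra) * (Wb/Wa)**(2/3).

    D: total dosage, R: shape coefficient, W: body weight (kg);
    a: human, b: mouse or rabbit.
    """
    return dose_a * (r_b / r_a) * (w_b / w_a) ** (2.0 / 3.0)

# Hypothetical illustration: 60 g/day total human dose for a 60-kg human,
# scaled to a 20-g mouse; shape coefficients are invented placeholders.
db = transfer_dose(dose_a=60.0, r_a=0.11, r_b=0.06, w_a=60.0, w_b=0.02)
print(f"total mouse dose: {db:.3f} g/day ({db / 0.02:.1f} g/kg/day)")
```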
Preparation of QYHJ-Containing Serum
QYHJ-containing serum was prepared as previously described [18]. Briefly, 20 female rabbits, weighing 1800-2200 g, were randomly divided into two groups: the QYHJ-containing serum group and the vehicle control group. The QYHJ-containing serum group was gavaged with intragastric QYHJ once a day for 7 consecutive days (12.5 g/kg body weight per dose), and the vehicle control group was gavaged with intragastric deionized water. Blood was collected from the carotid artery within 2 h after the last administration and incubated at room temperature for 2 h. The serum was separated from the whole blood by centrifugation (4,000 r/min for 10 min) and inactivated in a 56 °C water bath for 30 min. The serum was stored at −80 °C, and repeated freezing and thawing were avoided.
Migration and invasion assay
Cell migration and invasion were determined using transwell cell migration plates (Corning, NY, USA) and Matrigel invasion chambers (Matrigel-coated membrane; BD Biosciences, San Jose, CA, USA) as previously described [19]. Briefly, the cells (1.0 × 10⁴) were seeded in serum-free medium into the upper chamber and allowed to invade toward 10% FCS in the lower chamber as a chemoattractant. After 12 h (for migration assays without Matrigel coating) or 48 h (for invasion assays with Matrigel coating), the cells that had invaded through the membrane and adhered to its underside were counted as previously described [16].
Western Blot Analysis
Total protein was extracted from the cultured cells and quantitated using a bicinchoninic acid assay kit (Pierce, Rockford, IL, USA). Western blotting was performed according to our previous report [16]. Equal amounts of protein from different samples were separated by 10% SDS-polyacrylamide gel electrophoresis and transferred to a polyvinylidene fluoride membrane. Blocking buffer containing 5% non-fat milk was used to block the membranes, which were then incubated with primary antibodies. The expression of the target proteins was examined using an enhanced chemiluminescence (ECL) kit (Amersham Pharmacia Biotech, Uppsala, Sweden), and quantitative analysis was performed using ImageJ software.
RNA isolation and real-time PCR
Total RNA was isolated from the cells using TRIzol Reagent (Invitrogen, San Diego, CA, USA), and reverse transcription PCR (RT-PCR) was performed according to the manufacturer's instructions (TakaRa). Quantitative real-time PCR measurements were performed using SYBR green I (Roche Diagnostics, Branchburg, NJ, USA). The primer sequences for CXCL1, 2, and 8 are listed in Table 1. GAPDH was used as an internal control. All experiments were performed in triplicate and repeated twice.
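The paper reports CXCL transcript levels relative to the GAPDH internal control; a common way to express such data is the 2^(−ΔΔCt) method, sketched below. This quantification model and all Ct values are assumptions for illustration, since the authors do not state which model they used:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^(-ddCt) method (assumed model)."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# CXCL1 in CAFs vs. NFs with invented Ct values:
fold = relative_expression(ct_target=22.0, ct_ref=16.0,            # CAF
                           ct_target_ctrl=25.0, ct_ref_ctrl=16.0)  # NF
print(f"CXCL1 fold change (CAF vs. NF): {fold:.1f}x")  # -> 8.0x
```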
Conditioned medium (CM) preparation
CM from CAFs and NFs was obtained as previously described, with slight modifications [20]. Briefly, the cells were maintained in DMEM at 37 °C for 48 h, and the conditioned medium from the CAFs or NFs was harvested. The conditioned medium was cleared by centrifugation and diluted 1:1 with defined medium prior to treating the Capan1 cells; the conditioned medium was used fresh without storage.
To obtain CM from QYHJ-treated CAFs, isolated CAFs were plated onto T25 flasks (1 × 10⁶ cells) and cultured in DMEM containing 10% QYHJ-containing serum or control serum. Twenty-four hours later, the medium was removed and the cells were washed twice with PBS. The cells were subsequently incubated with 5 ml fresh DMEM for an additional 48 h. The CM was then harvested as described above.
Enzyme-linked immunosorbent assay (ELISA)
The serum CXCL1, 2, and 8 concentrations were measured using ELISA as previously described [17]. The blood samples were kept at room temperature for 30 min, centrifuged (12,000 × g) for 15 min, and cryopreserved at −80 °C. The concentrations were measured using a sandwich ELISA kit (DuoSet; R&D Systems, Minneapolis, MN, USA). To detect cytokine secretion in the culture medium, the cells were plated onto 6-well plates and treated as described above. After 48 h, the media were collected and stored at −80 °C until further use.
Cell proliferation assay
The cell proliferation assay was performed as previously described [16]. Approximately 5 × 10³ cells in 0.1 ml were plated in duplicate wells of 96-well plates. After overnight incubation, the medium was removed and replaced with the different conditioned media. Subsequently, the indices of cell proliferation were assessed daily using the Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Inc., Gaithersburg, MD), and growth curves were drawn from the OD values obtained in the CCK-8 assay.
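Because the growth curves are drawn from daily OD readings of duplicate wells, the summary step reduces to a mean ± SD per day; a minimal sketch with fabricated OD450 values:

```python
import statistics

# Daily CCK-8 OD450 readings from duplicate wells (hypothetical values).
od_by_day = {1: [0.21, 0.23], 2: [0.38, 0.41], 3: [0.66, 0.70], 4: [1.02, 1.08]}

for day, ods in sorted(od_by_day.items()):
    print(f"day {day}: OD450 = {statistics.mean(ods):.2f} "
          f"± {statistics.stdev(ods):.2f}")
```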
Establishment of xenograft tumor models
Xenograft tumor models were established as previously described [16]. Briefly, Capan1 cells (2 × 10⁶ in 0.2 ml) in logarithmic phase were injected subcutaneously into the right axilla of nude mice. The length and width of the tumors (in mm) were measured weekly using calipers. The tumor volume was calculated using the formula (a × b²) × 0.5, where a and b are the long and short dimensions, respectively. The mice were sacrificed when the tumors reached 1.5 cm in diameter. The tumors were removed and weighed. Each group consisted of at least six mice.
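The caliper-based volume estimate is the ellipsoid approximation stated above; as a quick worked example:

```python
def tumor_volume_mm3(a_mm, b_mm):
    """V = 0.5 * a * b^2, with a = long and b = short dimension (mm)."""
    return 0.5 * a_mm * b_mm ** 2

# Example measurement (hypothetical): 12 mm x 8 mm tumor.
print(f"{tumor_volume_mm3(12.0, 8.0):.0f} mm^3")  # -> 384 mm^3
```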
Statistical analysis
The data are expressed as the means ± standard deviation (SD). Statistical analyses were performed using analysis of variance (ANOVA) and Student's t-test. A P-value of <0.05 was considered statistically significant. All statistical analyses were performed using the SPSS 15.0 software package.
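The two tests named above can be reproduced on raw counts with standard libraries; a sketch with invented per-field cell counts (using SciPy rather than SPSS, purely for illustration):

```python
from scipy import stats

# Hypothetical invaded-cell counts per field for three groups.
nf       = [40, 38, 45, 42, 41]
ctrl_caf = [85, 92, 78, 88, 90]
qyhj_caf = [52, 47, 60, 55, 49]

t, p = stats.ttest_ind(ctrl_caf, qyhj_caf)           # two-group comparison
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

f, p_anova = stats.f_oneway(nf, ctrl_caf, qyhj_caf)  # three-group ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
```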
Results

Generation of cancer-associated fibroblast (CAF) cultures
We extracted fibroblasts from the resected specimens of two pancreatic ductal adenocarcinomas. The tumor masses were dissociated, and the various cell types were separated to obtain populations of CAFs. We also isolated a second population of fibroblasts from a noncancerous region of the pancreas at least 2 cm from the outer tumor margin in each of the same two patients. We termed these cells "normal fibroblasts (NFs)". All experiments were performed through comparisons between CAFs and the corresponding NFs, thereby avoiding bias due to interindividual differences. The purity of the cultured fibroblasts was verified through immunostaining for vimentin and PDGFR-β (mesenchymal cell markers) and E-cadherin and cytokeratin (CK) (epithelial cell markers). In addition, CD31 and D2-40 were used to exclude an endothelial origin (Figure 1A and Figure S1). All cell cultures were vimentin- and PDGFR-β-positive and 91% to 100% E-cadherin-, cytokeratin-, CD31- and D2-40-negative. Compared with NFs, the established CAFs expressed high levels of alpha smooth muscle actin (α-SMA), a marker of activated fibroblasts typically expressed in CAFs but not in normal quiescent fibroblasts. These observations were further confirmed through Western blot analysis (Figure 1B). Taken together, our results indicated that we successfully established paired CAF and NF cultures from human PDAC.
Pancreatic cancer cells treated with conditioned media from QYHJ-treated CAFs showed reduced invasion
CAFs contribute to the invasive and metastatic process in human pancreatic cancer [5]. In this study, we examined the migration- and invasion-inducing effects of CAFs on human pancreatic cancer cells using transwell chambers with or without Matrigel coating. The transwell assays demonstrated that the conditioned medium (CM) from control-treated CAFs exhibited an enhanced capacity for inducing pancreatic cancer cell migration and invasion (Figure 2 and Figure S2). Next, to evaluate the effects of QYHJ on CAF-induced cell migration and invasion, CM from QYHJ- or control-treated CAFs was harvested, and the effects of this medium on pancreatic cancer cell migration and invasion were evaluated. We observed that CM from QYHJ-treated CAFs exhibited decreased migration- and invasion-promoting capacities compared with that of control-treated CAFs (Figure 2 and Figure S2). These results suggested that QYHJ treatment attenuates the migration- and invasion-promoting capacity of CAFs.
QYHJ inhibits the secretion of CXCL1, 2 and 8 from CAFs
CAFs directly stimulate tumor cell proliferation through various growth factors, hormones and cytokines in a context-dependent manner [5]. We showed above that CAFs promote the migration and invasion of Capan1 cells in vitro and that this property is inhibited by QYHJ treatment. We next attempted to identify the potential molecules that mediate the interaction between fibroblasts and cancer cells. Previous studies have shown that members of the CXC chemokine subfamily, including GRO1 (CXCL1), GRO2 (CXCL2) and IL-8 (CXCL8), are among the most highly up-regulated genes in CAFs compared with NFs in pancreatic cancer [21]. Therefore, we examined the expression of these factors in CAFs and paired NFs. We observed that CAFs exhibit increased CXCL1, 2 and 8 mRNA and protein expression compared with NFs. In addition, we observed that QYHJ treatment significantly suppressed the expression of CXCL1, 2 and 8 in both CAF cell lines (Figure 3A-B). ELISA also confirmed that CAFs treated with QYHJ secreted lower levels of CXCL1, 2 and 8 into the medium (Figure 3C). Thus, these results suggested that QYHJ inhibits the expression of CXCL1, 2 and 8 in CAFs.
QYHJ inhibited CAF proliferation in vitro
Previous studies have indicated that the number of CAFs in the stroma is significantly associated with the poor differentiation and prognosis of cancers [22,23,24], and reduced CAF numbers were also observed when the tumor was treated [11]. We therefore hypothesized that QYHJ affects CAF proliferation. The in vitro proliferation assay indicated that QYHJ treatment inhibited the proliferation of CAFs in both cell lines (Figure 4), thereby contributing to the suppressed CXC chemokine secretion from CAFs.
CXCLs increased the migration and invasion of human pancreatic cancer cells
CXC chemokines have been associated with cancer cell invasion in many types of human cancers [25]. As we observed that CAFs exhibited increased CXC chemokine expression and secretion, we examined whether the up-regulated secretion of CXC chemokines contributes to the enhanced capacity for inducing pancreatic cancer cell migration and invasion. We tested the effects of CXC chemokines on cell migration and invasion using transwell chambers with or without Matrigel coating. The addition of recombinant human CXCL1, 2 and 8 increased both the migration and invasion of Capan1 cells in the transwell assays. Furthermore, the inhibition of CXC chemokine signaling using receptor antagonists significantly blocked CXC chemokine-induced cell migration and invasion (Figure 5 and Figure S3).
QYHJ treatment inhibits tumorigenesis and the expression of CXCL1, 2, and 8 in vivo
To further confirm the effects of QYHJ on CAF proliferation and CXCL production in vivo, we established mouse xenograft models using Capan1 cells. The mice were randomly divided into QYHJ and control groups and treated daily by gavage with QYHJ (0.2 ml) or normal saline, respectively. We observed that QYHJ treatment resulted in reduced tumor growth (Figure 6A). Consistent with the results of the in vitro analyses, QYHJ reduced CAF proliferation in tumors (Figure 6B). Moreover, IHC confirmed that tumors treated with QYHJ exhibited reduced expression of CXCL1, 2, and 8 (Figure 6D). Therefore, our results suggested that QYHJ treatment inhibited CAF proliferation and the expression of CXCL1, 2, and 8, potentially underlying the anti-cancer effects of QYHJ in pancreatic cancer.
Discussion
In this study, we showed that QYHJ inhibits pancreatic cancer cell invasion and metastasis by targeting CAFs, particularly the production of CXCL1, 2, and 8. These findings further confirm our previous speculation that cells in the tumor microenvironment might serve as pivotal targets for Chinese herbal medicine [11].
Traditional Chinese medicine (TCM) is based on a unique theory formed through long-term practical experience. For the last thousand years, TCM has been widely practiced in China, and more than 90% of modern Chinese cancer patients receive TCM therapy during treatment [26]. Recently, TCM has spread abroad and is well accepted in many countries, particularly in oncology [27]. TCM is based on the concept of holism, considering the interrelationship of the human body and the surrounding environment on the macro level. At the microscopic scale, we consider the holistic relationship between cancer cells and the microenvironment. Indeed, recent studies have confirmed that tumor cells do not act in isolation, but rather subsist in a rich microenvironment provided by resident fibroblasts, inflammatory cells, endothelial cells, pericytes, leukocytes, and the extracellular matrix [28]. As the cancer progresses, the surrounding microenvironment is activated, coevolving through continuous paracrine communication to support carcinogenesis [29]. Pancreatic cancer is characterized by an extensive stromal response called desmoplasia [7]. CAFs are the primary cell type in the tumor stroma, and the importance of their role in tumor progression is well accepted [5]. Therefore, as cancer is no longer considered a discrete entity defined only by the traits of the cancer cells within the tumor, but rather a disease that eventually affects the entire organism, TCM offers a holistic approach to regulate the integrity of all body functions and the interaction between humans and the surrounding environment. Targeting the tumor microenvironment might thus represent a potential therapeutic approach for pancreatic cancer treatment.
QYHJ is a seven-herb Chinese formula used in the treatment of pancreatic cancer in China. We previously showed that QYHJ inhibits both tumor growth and metastasis in nude mouse models of pancreatic cancer [12,13]. The combination of QYHJ treatment with conventional Western medicine prolongs survival in patients with pancreatic cancer liver metastases [14]. The exact mechanism underlying the effects of QYHJ in pancreatic cancer treatment remains unclear. Recent studies have indicated that QYHJ treatment dramatically alters the tumor microenvironment, as observed through decreased CAF proliferation [11]. Therefore, in this study, we further evaluated the effects of QYHJ on CAF proliferation and the production of CAF-derived chemokines. We observed that QYHJ inhibited CAF proliferation both in vitro and in vivo. In addition, the inhibition of CXCL production through QYHJ treatment resulted in reduced invasion of pancreatic cancer cells. Thus, this study is the first to identify a new target of Chinese herbal medicine in pancreatic cancer. CXCL1, 2, and 8, produced primarily by mononuclear cells and macrophages and, to a smaller extent, by fibroblasts, endothelial cells, T and B lymphocytes, chondrocytes and amnion cells, are pleiotropic cytokines that induce tumor formation, promote tumor proliferation and facilitate tumor metastasis [30,31]. Increasing evidence has shown that CXCL1, 2, and 8 are frequently elevated in many types of human cancers [30,32,33,34], including pancreatic cancer [31]. In addition, therapies targeting CXCL1, 2, and 8 in the treatment of cancers have been reported [30,35,36], and the down-regulation of CXCL1, 2, and 8 inhibited the invasion of tumor cells [32,35]. Using in vitro functional assays, we demonstrated that CAFs exhibit increased CXCL1, 2 and 8 expression in pancreatic cancer, contributing to the enhanced invasion-promoting capacity of these cells. Therefore, targeting CXC chemokine signaling between CAFs and cancer cells through pharmacological inhibition might provide a promising therapy for pancreatic cancer. The results obtained in the present study showed that the Chinese herbal medicine QYHJ could significantly suppress the production of CAF-derived CXCL1, 2 and 8, thereby preventing pancreatic cancer cell invasion.
Thus, in this study, we have demonstrated that CAFs exhibited an enhanced capacity for inducing pancreatic cancer cell migration and invasion compared with NFs, while QYHJ-treated CAFs exhibited decreased migration- and invasion-promoting capacities in vitro. In addition, we showed that QYHJ significantly suppressed CAF proliferation and the production of CAF-derived CXCL1, 2 and 8. Taken together, these results suggest that suppressing the tumor-promoting capacity of CAFs through Chinese herbal medicine attenuates pancreatic cancer cell invasion.

Supporting Information

Figure S2 Pancreatic cancer cells (BxPC3) treated with conditioned media from QYHJ-treated CAFs showed reduced invasion. The migration and invasion capacities of BxPC3 cells treated with conditioned media (CM) from NFs, control-treated CAFs and QYHJ-treated CAFs were compared using transwell chambers with or without Matrigel coating. The numbers of cells that traveled through the membrane were counted in 10 fields under a ×20 objective lens. Original magnification, ×200. The results represent the means ± SD of the values obtained in three independent experiments. Statistical significance was calculated using ANOVA. *P < 0.05. (TIF)

Figure S3 CXCLs increased the migration and invasion of human pancreatic cancer cells (BxPC3). Migration and invasion assays were performed on BxPC3 cells treated with vehicle, 100 ng/ml CXCL1, 2, and 8, or their antagonists (20 μg/ml anti-CXCR1 antibody and 400 nM SB 225002) as indicated, using transwell cell chambers. The number of cells that invaded through the membrane was counted in 10 fields under a ×20 objective lens. Original magnification, ×200. The results are presented as the means ± SD of values obtained in three independent experiments. Statistical significance was calculated using ANOVA. *P < 0.05. (TIF)
JEAP, a Novel Component of Tight Junctions in Exocrine Cells
Tight junctions (TJs) consist of transmembrane proteins and many peripheral membrane proteins. To further characterize the molecular organization of TJs, we attempted here to screen for novel TJ proteins by the fluorescence localization-based expression cloning method. We identified a novel peripheral membrane protein at TJs and named it junction-enriched and -associated protein (JEAP). JEAP consists of 882 amino acids with a calculated molecular weight of 98,444. JEAP contained a polyglutamic acid repeat at the N-terminal region, a coiled-coil domain at the middle region, and a consensus motif for binding to PDZ domains at the C-terminal region. Exogenously expressed JEAP co-localized with ZO-1 and occludin at TJs in polarized Madin-Darby canine kidney cells, but not with claudin-1, JAM, or ZO-1 in L cells. Endogenous JEAP localized at TJs of exocrine cells including pancreas, submandibular gland, lacrimal gland, parotid gland, and sublingual gland, but not at TJs of epithelial cells of small intestine or endothelial cells of blood vessels. The present results indicate that JEAP is a novel component of TJs, which is specifically expressed in exocrine cells.
To further characterize the molecular organization of intercellular junctions, we attempted here to screen for novel proteins localized at cell-cell junctions by the fluorescence localization-based expression cloning method in which cDNAs can be isolated based on the subcellular localization of their GFP fusion protein products (42). We have cloned several novel cDNA fragments, one of which specifically localizes at TJs. We named this protein junction-enriched and -associated protein (JEAP) and characterized it.
EXPERIMENTAL PROCEDURES
Cell Culture, Plasmids, and Transfection-An endothelial cell line, MS-1, was obtained from the American Type Culture Collection and cultured in DMEM with 5% fetal calf serum. MDCK cells were kindly supplied by Dr. W. Birchmeier (Max-Delbrück-Center for Molecular Medicine, Berlin, Germany) and cultured in DMEM with 10% fetal calf serum. For generation of ecotropic retrovirus-competent MDCK cells (MDCK/EcoVR), the ecotropic virus receptor (EcoVR) cDNA was inserted into pCAGGS-puro (43) and transfected into MDCK cells using LipofectAMINE reagent (Invitrogen). The cells were then cultured for 24 h, replated, and selected with 5 μg/ml puromycin (Invitrogen). Each clone was isolated and infected with pMXII-EGFPN recombinant retrovirus (44), and a clone highly competent for infection was used for the following studies. EcoVR cDNA (45) and pMX (46) were kindly provided by Dr. T. Kitamura (Tokyo University, Tokyo, Japan). Claudin-L cells and JAM-L cells were kindly supplied by Dr. Sh. Tsukita (Kyoto University, Kyoto, Japan). L cells were transfected using LipofectAMINE.
Construction and Screening of the cDNA Library-To identify novel cDNAs that encode proteins localized at cellular junctions, we created a cDNA-GFP fusion library from MS-1, a mouse endothelial cell line, as described previously (42). The resulting library contained 3 × 10⁵ independent clones with an average cDNA insert size of 1,500 bp. The expression library was then co-transfected into 293/EBNA-1 cells (Invitrogen) with a packaging vector, pCL-Eco (Imgenex, San Diego, CA), and converted to retroviruses. From a pilot experiment using VE-cadherin-EGFP fusion retrovirus with various cell lines, we found that MDCK/EcoVR cells were suitable for visual screening of junctional proteins because these cells are cuboid and showed bright signals. MDCK/EcoVR cells were, thus, infected with 2 ml of variously diluted retrovirus supernatants to obtain singly infected cells. The initial frequency of EGFP-positive cells was ~4%, as determined by fluorescence-activated cell sorting (FACS) analysis. After expansion, EGFP-positive cells were sorted and cultured at 50 cells per well in 96-well plates. When the wells became confluent, we screened 10 plates under the fluorescence microscope and selected 6 wells containing cells with junctional staining patterns. These cells were replated into 10-cm dishes, and single clones were obtained. Each clone was expanded in 24-well plates, and the integrated cDNA was recovered by PCR from the genomic DNA. Among the cDNAs showing the junctional staining pattern, we obtained a novel cDNA clone and analyzed it. Full-length JEAP cDNAs were obtained from an MS-1 cDNA library constructed with the pMXII vector (44).
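The throughput of this visual screen follows directly from the numbers in the text; a small bookkeeping sketch (treating each plated cell as one sampled clone, which is a simplification):

```python
# Screening-coverage arithmetic from the figures given in the text.
library_clones = 3e5                      # independent clones in the library
plates, wells, cells_per_well = 10, 96, 50
cells_screened = plates * wells * cells_per_well
print(f"cells screened: {cells_screened:,}")                      # 48,000
print(f"fraction of library sampled: {cells_screened / library_clones:.0%}")
```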
Immunofluorescence Microscopy-Immunofluorescence microscopy was performed as described (47)(48)(49). Briefly, MDCK/EcoVR cells were fixed with 3.7% formaldehyde in PBS at room temperature for 15 min. The fixed sample was treated with 0.2% Triton X-100 in PBS for 15 min and washed with PBS three times. After the sample was soaked in PBS containing 1% bovine serum albumin, the sample was incubated with various combinations of the anti-E-cadherin, anti-occludin, anti-claudin, anti-ZO-1, and anti-JEAP Abs and washed with PBS, followed by incubation with fluorescein isothiocyanate-, Cy3-, or Cy5-labeled secondary pAbs (Jackson). After incubation, the sample was washed with PBS, embedded in PBS containing 50% glycerol and 0.1% 1,4-diazabicyclo[2.2.2]octane (DABCO), and analyzed with an LSM 510 confocal laser scanning microscope (Zeiss).
Ca²⁺ Switch Experiments-Ca²⁺ switch experiments using JEAP-expressing MDCK cells were performed as previously described (26,50). Briefly, the pMXII JEAP IRES-EGFP expression vector was constructed by inserting the full-length JEAP coding region, amplified by PCR from the pMXII JEAP cDNA, into pMXII IRES-EGFP (44). The expression vector was transfected into 293/EBNA-1 cells with a packaging vector, converted to retroviruses, and infected into MDCK/EcoVR cells. JEAP-expressing MDCK cells were washed with PBS and cultured in DMEM without serum for 1 h. The cells were then transferred to DMEM with 5 mM EGTA and cultured for 2 h. After the culture, the cells were washed with PBS and cultured either in DMEM without serum for 2 h or in DMEM with 100 nM TPA for 1 h.
RESULTS
Identification of JEAP-To identify novel cDNAs that encode proteins localized at cell-cell junctions, we created a cDNA-GFP fusion library from MS-1, a mouse endothelial cell line (Fig. 1A). The expression library was then co-transfected into 293/EBNA-1 cells with a packaging vector and converted to retroviruses. MDCK/EcoVR cells were infected with 2 ml of variously diluted retrovirus supernatants to obtain singly infected cells. We performed screening and selected 6 wells containing cells with junctional staining patterns. These cells were replated into 10-cm dishes, and a single clone was obtained. The integrated cDNA was recovered by PCR from the genomic DNA. Among the cDNAs showing the junctional staining pattern, we obtained a novel cDNA clone and further analyzed it.
The full-length cDNA clone was isolated from the MS-1 cDNA library. We named this protein JEAP because it localized at TJs as described below. The full-length clone of the JEAP cDNA encoded a protein with 882 amino acids with a calculated molecular weight of 98,444 ( Fig. 2A). JEAP contained a polyglutamic acid repeat at the N-terminal region, a coiled-coil domain at the middle region (51,52), and a consensus motif for binding to PDZ domains at the C-terminal region (53) (Fig. 2B).
To confirm whether the isolated clone encodes the entire coding region of JEAP, 293/EBNA-1 cells were transfected with the JEAP cDNA, and the expressed protein was detected by Western blotting. A single band of about 105 kDa was detected in the extract from the 293/EBNA-1 cells transfected with JEAP, but not in that from nontransfected 293/EBNA-1 cells (Fig. 2C). The size of the expressed protein was similar to that of endogenous JEAP in MS-1 cells. Therefore, we concluded that the isolated cDNA encodes the full-length of JEAP.
To first confirm the junctional localization of the full-length protein, full-length JEAP was expressed in MDCK/EcoVR cells as an EGFP fusion protein. The fusion protein localized at the apical region of the lateral membrane of polarized MDCK cells (Fig. 3A). The distribution pattern of the fusion protein was similar to that of ZO-1, which localized at TJs in polarized epithelial cells (15,54), but different from that of E-cadherin which showed broad distribution along the lateral membrane. To confirm the co-localization of JEAP and ZO-1, full-length JEAP was stably expressed in MDCK/EcoVR cells, and the localization of the expressed protein was compared with that of endogenous ZO-1. Exogenously expressed JEAP co-localized with ZO-1 (Fig. 3B). These results suggest that JEAP localizes at TJs.
Localization of JEAP at TJs of Exocrine Glands-We next examined tissue distribution of JEAP in various mouse tissues including liver, brain, lung, kidney, spleen, testis, ovary, and heart by Western blotting, but did not detect JEAP in any of these tissues (data not shown). We then examined tissue distribution immunohistochemically. JEAP was detected specifically in exocrine glands including pancreas, submandibular gland, lacrimal gland (Fig. 4A), parotid gland, and sublingual gland, but not in brain, heart, liver, kidney, spleen, gall bladder, or duodenum (data not shown). In exocrine glands, JEAP was expressed around the terminal portion of serous glands. In the terminal portion, JEAP showed a similar staining pattern to that of ZO-1. Although MS-1 cells expressed JEAP, we detected no staining signal for JEAP in endothelial cells in any organs. Immunoelectron microscopy revealed that JEAP indeed localized at TJs, but not at AJs or desmosomes, in the lacrimal gland (Fig. 4B).
Incorporation of JEAP into Cell-Cell Junctions along with Other Components of TJs-We next monitored the behavior of JEAP and other AJ and TJ components during the disruption and reformation of cell-cell junctions. For this purpose, we used an MDCK cell line stably expressing JEAP, because it has been shown that when MDCK cells are cultured at 2 μM Ca²⁺ for 2 h, AJs and TJs are disrupted and the staining of the AJ and TJ components, except nectin, afadin, and ZO-1, disappears from the plasma membrane, and that when the cells are recultured at 2 mM Ca²⁺ for 2 h, AJs and TJs are reformed, where all the AJ and TJ components reconcentrate (26,50). At 2 mM Ca²⁺, JEAP co-localized with ZO-1 at cell-cell junctions as described above (Figs. 3B and 5). When the cells were cultured at 2 μM Ca²⁺ for 2 h, the immunofluorescence signal for JEAP disappeared, whereas a sparse signal for ZO-1 was detected near the plasma membrane. After reculture of the cells at 2 mM Ca²⁺ for 2 h, JEAP as well as ZO-1 accumulated at the cell-cell junctions (Fig. 5). It has been shown that when MDCK cells, precultured at 2 μM Ca²⁺ for 2 h, are cultured with TPA at 2 μM Ca²⁺ for 1 h, a TJ-like structure is formed, although AJs are not formed (26,50). We have shown that claudin, occludin, JAM, nectin, ZO-1, and afadin, but not E-cadherin, α- or β-catenin, are concentrated there (26,50). JEAP was also recruited to the TPA-induced TJ-like structure (Fig. 5). Similarly, JEAP colocalized with occludin and claudin-1 at cell-cell junctions (data not shown). These results indicate that JEAP is incorporated into TJs along with other components of TJs.
No Recruitment of JEAP to Claudin-based or JAM-based Cell-Cell Contact Sites-We examined whether JEAP directly interacts with claudin or JAM. For this purpose, cadherin-deficient L cells stably expressing claudin-1 or JAM (claudin-L and JAM-L cells, respectively) (8,39) were transiently transfected with pMXII JEAP IRES-EGFP. In claudin-L cells, ZO-1 was concentrated at cell-cell contact sites, but transiently expressed JEAP was not concentrated there (Fig. 6). In JAM-L cells, ZO-1 was concentrated at cell-cell contact sites, but JEAP was not concentrated there (Fig. 6). Although we have not examined the in vitro binding of JEAP with claudin, JAM, or ZO-1, these results suggest that JEAP does not directly interact with any of these proteins.

DISCUSSION

In the present study, we have identified a novel component of TJs by visual screening with cDNA-EGFP fusion proteins expressed in living cells. This strategy is attractive for cloning junctional proteins that are difficult to obtain by cell fractionation techniques (e.g., components may be lost during biochemical purification, or may dynamically shuttle among subcellular compartments and be restricted to a small fraction of organelles, cell types, or tissues). In fact, we have isolated a novel TJ protein, JEAP, which is expressed in very restricted compartments, the terminal portions of exocrine glands.
JEAP is a novel type of TJ component consisting of a coiled-coil domain and a consensus motif for binding to PDZ domains. Coiled-coil domains have been identified in a variety of cytoskeletal proteins and are involved in inter- or intramolecular protein-protein interactions (51,52). The coiled-coil domain of JEAP shows weak similarity to the conserved domains of intermediate filament proteins and the myosin tail, as estimated by CD search (www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml). Among TJ proteins, occludin and cingulin also have similar conserved coiled-coil domains (10,29,41,55). Occludin interacts with ZO-1 via the coiled-coil domain (29). Another feature of JEAP is the C-terminal consensus motif for binding to PDZ domains (53). The integral membrane TJ proteins claudin and JAM possess a PDZ-binding motif and interact with membrane-associated guanylate kinases (MAGUKs), including ZO-1, -2, or -3, via the PDZ domain (6,7). Although JEAP contains these potential protein-protein interaction domains, the mechanism of its specific localization at TJs remains unknown. The present results, however, suggest that JEAP does not directly interact with claudin-1, JAM, or ZO-1. JEAP may interact with other known or still unidentified molecule(s) and localize at TJs through interaction with these protein(s). Identification of such a molecule would be important for our understanding of the mechanism of the specific localization of JEAP at TJs.
At present, the function of JEAP in the exocrine glands remains unknown. However, the presence of cell type- and tissue-specific peripheral membrane proteins at TJs suggests that specialized epithelial and/or endothelial cells possess unique junctional complexes depending on their functions. In this context, the membrane proteins at TJs, claudins and JAMs, are differentially expressed in various combinations in epithelial and/or endothelial cells (6,7). Recently, it was reported that claudin-2 controls the paracellular permeability of MDCK cells (56). It is tempting to speculate that JEAP also regulates the assembly or integrity of TJs of exocrine terminal portions, in which TJs function as a barrier to prevent spilling of exocrine juice into organs.
A database search has revealed two related proteins, KIAA0989 and angiomotin, suggesting that JEAP comprises a family. KIAA0989 has not been characterized, but angiomotin is a 72-kDa protein that is expressed selectively in capillary endothelial cells as well as in actively angiogenic tissues, such as placenta and solid tumors (57). It localizes at the lamellipodia of the leading edge of migrating endothelial cells and is implicated in the angiostatin-mediated regulation of cell motility and capillary formation. The primary structure, subcellular localization, and tissue distribution of angiomotin are, however, different from those of JEAP, suggesting that each member of the JEAP family has unique characteristics in terms of function and subcellular distribution.
UDP-sulfoquinovose formation by Sulfolobus acidocaldarius
The UDP-sulfoquinovose synthase Agl3 from Sulfolobus acidocaldarius converts UDP-d-glucose and sulfite to UDP-sulfoquinovose, the activated form of sulfoquinovose required for its incorporation into glycoconjugates. Based on the amino acid sequence, Agl3 belongs to the short-chain dehydrogenase/reductase enzyme superfamily, together with SQD1 from Arabidopsis thaliana, the only UDP-sulfoquinovose synthase with a known crystal structure. By comparison of the sequence and structure of Agl3 and SQD1, putative catalytic amino acids of Agl3 were selected for mutational analysis. The obtained data suggest a modified dehydratase reaction mechanism for Agl3. We propose that in vitro biosynthesis of UDP-sulfoquinovose occurs through an NAD⁺-dependent oxidation/dehydration/enolization/sulfite-addition process. In the absence of a sulfur donor, UDP-d-glucose is converted via UDP-4-keto-d-glucose to UDP-d-glucose-5,6-ene, the structure of which was determined by ¹H and ¹³C NMR spectroscopy. During the redox reaction, the cofactor remains tightly bound to Agl3 and participates in the reaction in a concentration-dependent manner. For the first time, the rapid initial electron transfer between UDP-d-glucose and NAD⁺ could be monitored in a UDP-sulfoquinovose synthase. Deuterium labeling confirmed that dehydration of UDP-d-glucose occurs only from the enol form of UDP-4-keto-glucose. The obtained functional data are compared with those from other UDP-sulfoquinovose synthases. A divergent evolution of Agl3 from S. acidocaldarius is suggested.
Introduction
UDP-sulfoquinovose is the nucleotide-activated form of sulfoquinovose (6-deoxy-6-C-sulfo-d-glucopyranose, Qui6S) and is required for the incorporation of sulfoquinovose into glycoconjugates. Among those is, for instance, the sulfolipid sulfoquinovosyl diacylglycerol which is found in the chloroplast membrane of plants and in cyanobacteria (Benning et al. 1993;Benning 1998;Riekhof et al. 2003;Sato et al. 2003;Shimojima 2011;Denger et al. 2014), with cyanobacterial strains of the genus Spirulina having recently gained interest because of the anti-HIV properties of some of their sulfoquinovose-containing sulfolipids (Kwei et al. 2011). In the hyperthermophilic archaeon Sulfolobus acidocaldarius sulfoquinovose is either a component of the glycosylated, membrane-associated cytochrome b complex (Zähringer et al. 2000), the major surface (S-) layer protein SlaA (Peyfoon et al. 2010), or the subunit of the archaellum filament FlaB (Meyer et al. 2013). Further reports on sulfoquinovose in archaea concern a so far uncharacterized oligosaccharide modifying the S-layer protein of Haloferax volcanii (Eichler 2013;Parente et al. 2014) and an operon encoding a sulfoquinovose synthase (SqdB) in the haloarchaeon Haloquadratum walsbyi (Bolhuis et al. 2006).
Work on the sulfolipid biosynthesis in Arabidopsis thaliana identified SQD1 as the biosynthesis enzyme for UDP-sulfoquinovose (Benning et al. 1993; Benning 1998), with SQD1 currently being the best-investigated UDP-sulfoquinovose synthase. It shows high sequence similarity to sulfolipid biosynthesis enzymes of different organisms and to sugar nucleotide modifying enzymes such as the UDP-glucose epimerase GalE (Thoden et al. 1996a; Liu et al. 1997) and dTDP-glucose-4,6-dehydratase (Gross et al. 2000; Allard et al. 2002). In a three-dimensional model of SQD1, which is based on the 1.8-Å crystallographic structure of UDP-glucose 4-epimerase (Liu et al. 1997) as a template, an NAD⁺ binding site and active site interactions were predicted. The proposed reaction mechanism of SQD1 was confirmed after its crystallization at 1.6-Å resolution in a complex with NAD⁺ and the putative substrate UDP-d-glucose (Mulichak et al. 1999). The SQD1 protein has a bi-domain structure with a Rossmann fold for NAD⁺ binding, revealing high structural similarity with the GalE enzyme (Thoden et al. 1996b). It is a member of the SDR family of enzymes (Kavanagh et al. 2008), with its structure showing conservation of the catalytic SDR amino acid residues. The Rossmann-fold fingerprint sequence at the pyrophosphate binding site of SDR enzymes is replaced by a G-XX-G-XX-G sequence in SQD1 (Mulichak et al. 1999), while the characteristic Y-XXX-K motif and a Ser/Thr residue are located at the active site of SQD1, forming the catalytic triad of SDR enzymes (Kavanagh et al. 2008) with Thr145, Tyr182, and Lys186 (Mulichak et al. 1999).
The proposed mechanism for SQD1 catalysis suggests that, in the absence of a sulfur donor, the reaction continues to the UDP-4-keto-glucose-5,6-ene product. At the active site of the enzyme, UDP-d-glucose and NAD⁺ are bound, with the latter in the oxidized state. In a subsequent step, a sulfur donor would transfer sulfite to UDP-4-keto-glucose-5,6-ene by a nucleophilic addition across the double bond, followed by reduction of the 4-keto group and regeneration of NAD⁺ (Mulichak et al. 1999).
Characterization of SQD1 from A. thaliana (Sanda et al. 2001) showed that the highly purified enzyme exists as a complex with ferredoxin-dependent glutamate synthase (Shimojima et al. 2005), and the crystal structure of SQD1 showed that the NAD⁺ cofactor is tightly bound to the N-terminal domain of the enzyme (Mulichak et al. 1999). The main bottleneck for fully elucidating the mechanism of SQD1 was its low in vitro activity. Recombinant SQD1 expressed in Escherichia coli showed low in vitro activity as well (Sanda et al. 2001). Thus, the in vivo mechanism of the sulfite transfer to C-6 of UDP-d-glucose by UDP-sulfoquinovose synthases is still unknown.
To elucidate the biosynthesis of UDP-sulfoquinovose in S. acidocaldarius, its genome was scanned for the presence of homologues of the bacterial sqdB or eukaryal sqd1 genes known to encode UDP-sulfoquinovose synthases (Meyer et al. 2011). The scan revealed Saci0423 with ~40 % sequence identity to SQD1 of A. thaliana. As shown for SQD1, Agl3 is a member of the SDR superfamily of enzymes (Field and Naismith 2003; Kavanagh et al. 2008). In a recent study, the agl3 (saci0423) gene was confirmed to code for the UDP-sulfoquinovose synthase involved in the biosynthesis of the S. acidocaldarius S-layer N-glycan. Targeted deletion of agl3 impaired UDP-sulfoquinovose synthesis and resulted in a mutant lacking sulfoquinovose in its S-layer glycan (Meyer et al. 2011). In addition, the lack of agl3 resulted in a reduced molecular mass of FlaB, indicating that FlaB is also modified with the N-glycan containing sulfoquinovose (Meyer et al. 2013). In close vicinity of agl3, several genes are localized which are predicted to code for carbohydrate-active enzymes linked to the UDP-sulfoquinovose metabolism. These include agl4 (saci0424), annotated as a glucokinase, predicted to provide glucose 1-phosphate as a substrate for the NDP-glucose pyrophosphorylase Agl2, encoded by agl2 (saci0422), which would generate UDP-glucose, serving as the immediate substrate for Agl3. Eventually, agl1 (saci0421), coding for a membrane-bound glycosyltransferase, would be involved in the transfer of sulfoquinovose from UDP-sulfoquinovose to the N-acetylglucosamine residue of the N-glycan (Meyer et al. 2011).
In the present study, we investigate the in vitro activity of Agl3 and propose that different reaction products can be formed depending on the type and amount of substrate. A complex reaction cycle is followed, including oxidation, dehydration, enolization, and reprotonation of UDP-d-glucose. Point mutation studies have been performed by targeting amino acids known to be crucial for the enzymatic activity of enzymes from the SDR superfamily (Field and Naismith 2003; Kavanagh et al. 2008). This study provides new insights into the reaction mechanism of Agl3 of S. acidocaldarius and additionally unveils possible evolutionary differences between the plant enzyme SQD1 and the prokaryotic UDP-sulfoquinovose synthases.
The phylogenetic analysis using ClustalW2 divides the UDP-sulfoquinovose synthases into four clusters (Fig. 1b). The (cyano)bacterial UDP-sulfoquinovose synthases are distributed in clusters I and II, with UDP-sulfoquinovose synthases from plants mainly localized in cluster II; SQD1 from A. thaliana, however, is localized in cluster I. In cluster III, Agl3 of S. acidocaldarius and UDP-sulfoquinovose synthases from other Crenarchaeota and hyperthermoacidophilic Euryarchaeota are localized. Apart from them are the UDP-sulfoquinovose synthases from halophilic Euryarchaeota (cluster IV). This cluster is more distant from the other three clusters, showing only 36 identical amino acids with the other UDP-sulfoquinovose synthases. The enzyme of the halophilic archaea obviously evolved rather distantly from those of plants, hyperthermoacidophilic archaea, and (cyano)bacteria. Among bacteria, the cyanobacterial UDP-sulfoquinovose synthases are predominant, but other (non-photosynthetic) bacteria also contain genes coding for this enzyme. The phylogenetic analysis showed that the occurrence of UDP-sulfoquinovose synthases in all domains of life obviously results from divergent evolution (Fig. 1b). Whether the categorization of the UDP-sulfoquinovose synthases into different clusters also reflects different reaction mechanisms is currently unknown but might be supported by observations made in the course of this work.
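Pairwise percent identity, the quantity underlying such clustering, can be sketched for pre-aligned sequences as follows; the two fragments below are toy data, not real Agl3 or SQD1 sequences (the actual analysis used ClustalW2 on the full-length proteins):

```python
def percent_identity(seq_a, seq_b):
    """Percent identity of two pre-aligned sequences (gaps as '-')."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(f"{percent_identity('MTKVLVTGGAGFIG', 'MSKVIVTGGAGYIG'):.1f} %")  # 78.6 %
```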
Since no crystal structure of Agl3 is available, we chose an approach for the prediction of functional epitopes similar to that used for SQD1 from A. thaliana ) before a high-resolution crystal structure of this protein became available (Mulichak et al. 1999). The obtained model clearly shows high similarity of the overall fold of both proteins (for details see Supplemental Information, Figure S1).
Point mutation studies of predicted critical amino acids in Agl3
Upon comparison of the amino acid sequence of Agl3 with that of other UDP-sulfoquinovose synthases (Fig. 1a), including SQD1 (Fig. 2a) (Mulichak et al. 1999), potential residues of Agl3 were selected that could be involved in catalysis. Very similar to SQD1, our Agl3 model (Fig. 2b) comprises an N-terminal NAD⁺-binding domain with a highly conserved Rossmann fold and a C-terminal UDP-d-glucose-binding domain. In SQD1, the catalytic residues were determined to be Thr145, Tyr182 and Lys186, with the latter two being present within the Y-XXX-K motif (Mulichak et al. 1999; Kavanagh et al. 2008). The corresponding amino acids in Agl3 are Thr144, Tyr182 and Lys186. Of the respective alanine mutants (Tables 1, 2), T144A fully retained activity, whereas Ala replacement of His95, Arg101, Met145, Tyr182 and Lys186 (Fig. 2b) rendered the enzyme inactive. Both the S180A and T185A mutant proteins showed impaired conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene (Tables 1, 2).
Since most of the selected mutants were inactive and did not allow any prediction of a possible reaction pathway, additional mutations were introduced to test the possibility of a dehydratase mechanism for Agl3 (Hegeman et al. 2001). It was previously shown that in the dehydration reactions of dTDP-d-glucose-4,6-dehydratase (Allard et al. 2002) and GDP-d-mannose-4,6-dehydratase (Somoza et al. 2000), two active site amino acids, tyrosine and glutamic acid, are crucial for the acid-catalyzed release of the hydroxyl group at C-6 of the substrate. According to our active site model of Agl3 (Fig. 2b), these amino acids could be represented by Tyr148 and Glu147 in Agl3, supporting a possible UDP-d-glucose dehydratase activity for UDP-sulfoquinovose biosynthesis. The respective alanine mutants (Tables 1, 2) showed strongly impaired conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene (Fig. 2c) in the absence of sulfite, supporting their important roles in the catalytic activity of Agl3 and the assumed dehydratase-like function. To ensure that the observed decrease in activity was not a result of protein instability, far-UV circular dichroism spectra of wild-type Agl3 and the variants E147A and Y148A were recorded. Figure 3 demonstrates the mainly α-helical structure of Agl3 as well as the almost identical overall secondary structure composition of all three proteins.

Fig. 2 Partial amino acid comparison, close-up of the active site, and reactivity of wild-type and specific mutants of Agl3. a Selected region of the sequence alignment of Agl3 and SQD1, demonstrating a high degree of amino acid similarity of both enzymes. The positions of E147, Y148, and the characteristic Y-XXX-K motif of SDR proteins are indicated (Kavanagh et al. 2008). b Homology model of the active site of Agl3 with the critical amino acids drawn in green sticks, the NAD cofactor in pink and the substrate in yellow. For clarity, the backbone of residues 96-108 is not shown (the figure was generated with PyMOL version 1.2r1). c In vitro conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene (arrow) in the absence of sulfite by wild-type Agl3 (graph 1), the Y148A mutant (graph 2), and the E147A mutant (graph 3)
In the homology model of the putative active site of Agl3, the residues Glu147 and Tyr148 are at a distance of >5 Å away from UDP-d-glucose (Fig. 2b). This suggests that either a conformational change of the G-XX-G-XX-G motif must occur to facilitate the reaction or UDP-d-glucose needs to change its orientation to favorably interact with these residues. Overall, UDP-sulfoquinovose synthases contain several highly conserved glycine and proline residues, suggesting a highly conserved structural mode of switching Agl3 from an inactive to an active conformation.
Catalytic conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene in the absence of a sulfur donor

Previously, we demonstrated that recombinant Agl3 from S. acidocaldarius is active in vitro, converting UDP-d-glucose and sulfite to UDP-sulfoquinovose in a yield of approximately 10 % (Meyer et al. 2011). Here, we show that in the absence of sulfite, the in vitro conversion of UDP-d-glucose results in the formation of UDP-d-glucose-5,6-ene (Fig. 4). This new reaction product eluted at a retention time of 3.8 min in the RP-HPLC experiment (Fig. 4a). The Agl3-catalyzed conversion of UDP-d-glucose was further investigated by NMR spectroscopy and ESI-MS (Fig. 4b, c). Supplementation of the reaction mixture with NAD⁺, NADH or FAD, to a final concentration of 1 mM each, did not result in a higher catalytic conversion of UDP-d-glucose, as was concluded from the yield and retention time of the reaction product upon RP-HPLC separation (not shown).

Table 1 Alanine mutations of selected amino acids of Agl3: used primers, primer sequence, target mutation and overall mutant enzyme activity. Columns: primer name; primer sequence; target mutation; mutant enzyme activity (e.g., Sqsyn1-Forward-H95A, 3′-GCCATAGTGGCTTTCGCTGAG-5′, histidine 95). a Qualitative determination: ++ strongly active, + active, (+) lesser active, (±) less active, − inactive protein

The ¹H NMR spectrum of that sample revealed the signals of non-reacted UDP-d-glucose (verified by using a UDP-d-glucose standard) and of a second compound in a ratio of 2.8:1, as was deduced from the integrated signals of the respective anomeric protons (at 5.5 and 5.6 ppm, respectively). The minor, slightly low-field shifted anomeric proton of the second compound displayed a heteronuclear coupling to ³¹P, and the presence of an intact diphosphate unit was confirmed by the chemical shifts in the ³¹P NMR spectrum (Fig. 4b, inset). Despite overlapping signals from the UDP component of UDP-d-glucose and the reaction product, the remaining signals could be fully assigned using COSY, TOCSY, edHSQC and HMBC data. Thus, structural proof for the presence of an exocyclic double bond was obtained from the edited HSQC spectrum (Fig. 4b). Two low-field shifted ¹H NMR signals were observed at 4.81 and 4.78 ppm with a correlation to a ¹³C NMR signal at 98.5 ppm. Both protons displayed small values for the geminal coupling constant as well as allylic spin-spin couplings to H-4″. The latter signal revealed a large coupling constant confirming a trans-orientation relative to H-3″, in agreement with a gluco-configuration.
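The 2.8:1 ratio of the integrated anomeric-proton signals translates directly into a conversion yield; a one-line check of that arithmetic:

```python
# Integrated anomeric 1H signals: UDP-D-glucose : product = 2.8 : 1.
substrate, product = 2.8, 1.0
conversion = product / (substrate + product)
print(f"conversion to UDP-D-glucose-5,6-ene: {conversion:.1%}")  # ~26.3%
```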
Final support for the proposed UDP-d-glucose-5,6-ene structure was derived from HMBC correlations of H-4″ as well as H-1″ to a ¹³C NMR signal at 155.3 ppm, which are in full agreement with the partial structure of an exocyclic enol ether (Table 3). Based on these data, the structure of the enzymatic reaction product could be unambiguously determined as UDP-6-deoxy-d-xylo-hex-5-enose (UDP-d-glucose-5,6-ene). Signals for keto or hydrated keto groups as well as signals for deuterated UDP-d-glucose in the product mixture could not be detected within the detection limit of the NMR instrument. Also, in situ monitoring of the enzymatic reaction in an NMR tube for 5 h at 333 K (60 °C) revealed a slow formation of the UDP-d-glucose-5,6-ene product but did not provide direct evidence of a 4-keto intermediate (Li et al. 2014).
The MS spectrum of the reaction mixture of the UDP-d-glucose conversion showed a major peak at 565.0 u (corresponding to the substrate), accompanied by two smaller peaks at 566.0 and 567.0 u, respectively, and one further, much weaker peak at 547.0 u, showing the same pattern of accompanying peaks, each one unit apart, as seen for the major peak (Fig. 4c). The mass difference of 18 u relative to the substrate is consistent with the loss of water expected upon formation of UDP-d-glucose-5,6-ene.
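These mass assignments can be checked by simple arithmetic. The following minimal Python sketch uses only the nominal masses quoted above; it is an illustration of the bookkeeping, not a re-analysis of the spectra:

# Nominal masses (u) read off the ESI-MS spectra reported in the text
udp_glucose = 565.0      # [M-H]- of UDP-d-glucose
ene_product = 547.0      # [M-H]- of UDP-d-glucose-5,6-ene
keto_candidate = 563.0   # [M-H]- of the putative oxidized (4-keto) compound

water = 18.0             # nominal mass of H2O
two_hydrogens = 2.0      # nominal mass lost upon oxidation (2 x H)

# Dehydration: UDP-d-glucose -> UDP-d-glucose-5,6-ene + H2O
assert udp_glucose - ene_product == water

# Oxidation at C-4: UDP-d-glucose -> UDP-4-keto-glucose + 2 H
assert udp_glucose - keto_candidate == two_hydrogens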
[Fig. 3 Far-UV CD spectra of 7 μM wild-type Agl3 and the mutants E147A and Y148A in 20 mM phosphate buffer, pH 7.0]

Formation of UDP-d-glucose-5,6-ene in the course of UDP-sulfoquinovose biosynthesis

To investigate the formation of UDP-d-glucose-5,6-ene (Fig. 5) in detail, the RP-HPLC profile of the conversion of 3 mM UDP-d-glucose to UDP-sulfoquinovose was compared at high (3 mM; Fig. 5a, graph 1) and low sulfite concentration (0.1 mM; Fig. 5a, graph 2). At 3 mM sulfite, Agl3 rapidly converted UDP-d-glucose to UDP-sulfoquinovose (Fig. 5a, graph 1), with complete product formation as indicated by the lack of detectable UDP-d-glucose-5,6-ene. In contrast, at 0.1 mM sulfite, a considerable accumulation of UDP-d-glucose-5,6-ene occurred in the reaction mixture (Fig. 5b, graph 2). Under sulfite-saturated conditions, UDP-d-glucose is first converted to UDP-d-glucose-5,6-ene, followed by conversion into UDP-sulfoquinovose (Fig. 5c). Based on the kinetics of UDP-sulfoquinovose biosynthesis, in combination with the accumulation of UDP-d-glucose-5,6-ene in the absence of sulfite, we conclude that the conversion of UDP-d-glucose to UDP-sulfoquinovose by Agl3 is a two-step reaction with UDP-d-glucose-5,6-ene being a crucial reaction intermediate of this pathway (Figs. 4, 5). However, with the analytical tools available in our laboratory, we were only able to isolate and unambiguously characterize the starting and end points of these reactions. The appearance of UDP-d-glucose-5,6-ene is taken as an additional indication of a dehydration step being included in the complex Agl3 reaction mechanism.
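The contrasting HPLC profiles at high and low sulfite are exactly what consecutive first-order kinetics, A → B → C, predict for an intermediate. A minimal kinetic sketch follows; the rate constants are arbitrary illustrative values, not fitted Agl3 parameters, and the sulfite dependence is folded into an effective k2:

import numpy as np
from scipy.integrate import odeint

# A = UDP-d-glucose, B = UDP-d-glucose-5,6-ene, C = UDP-sulfoquinovose
# k1: dehydration step; k2: sulfite addition, taken proportional to [sulfite]
def two_step(y, t, k1, k2):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

t = np.linspace(0, 60, 200)       # arbitrary time axis (min)
y0 = [3.0, 0.0, 0.0]              # 3 mM UDP-d-glucose, as in the assay

high_sulfite = odeint(two_step, y0, t, args=(0.2, 2.0))    # k2 >> k1
low_sulfite  = odeint(two_step, y0, t, args=(0.2, 0.02))   # k2 << k1

# At saturating sulfite the intermediate B stays near zero;
# at low sulfite it accumulates, mimicking Fig. 5.
print("max [B] at high sulfite: %.2f mM" % high_sulfite[:, 1].max())
print("max [B] at low  sulfite: %.2f mM" % low_sulfite[:, 1].max())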
Preliminary investigations of the dehydration mechanism of Agl3 and role of the NAD-cofactor

Monitoring of the conversion of UDP-d-glucose is challenging because of the low in vitro activity of Agl3 and the fact that the absorption spectra of UDP-d-glucose, UDP-d-glucose-5,6-ene and UDP-sulfoquinovose are identical. Supplementing the reaction mixture with NAD+ during Agl3 activity measurement did not lead to accumulation of NADH (monitored at 340 nm). This is consistent with our observation that supplementing with NAD+ does not increase the catalytic activity of Agl3 (data not shown) and indicative of a tight binding of NAD+ to the polypeptide matrix of Agl3 with no exchange with the medium.

[Fig. 4 Conversion of UDP-d-glucose by Agl3 in the absence of sulfite. a RP-HPLC analysis of the conversion of UDP-d-glucose to the UDP-d-glucose-5,6-ene intermediate (arrow at 3.8 min) in the presence of Agl3 (graph 1); in the negative control experiment (absence of Agl3; graph 2), no conversion was observed. b NMR expansion plot of an edHSQC spectrum of the mixture of UDP-d-glucose (major component) and UDP-d-glucose-5,6-ene (minor component from the enzymatic conversion); the inset shows an expansion plot of the 31P NMR spectrum of the mixture displaying the intact diphosphate linkages of both compounds. c ESI-MS analysis of UDP-d-glucose-5,6-ene]

The nature of the tightly bound prosthetic group in Agl3 was investigated after chloroform extraction (Fig. 6a). RP-HPLC analysis confirmed the presence of NAD+ in Agl3 by comparison with authentic NAD+ reference material (Fig. 6a, graph 2). Fractionation and spectrophotometric analysis of the extracted cofactor showed a typical NAD+ absorption spectrum in the oxidized state (Fig. 6b, spectrum 1). Subsequently, the nature of the extracted NAD-cofactor was confirmed as NAD+ in an in vitro biocatalytic experiment (Fig. 6b, spectra 2 and 3) by adding trace amounts of NAD(P)-dependent glucose dehydrogenase and glucose (1 mM final concentration) to the extracted cofactor. After 2 min of incubation at room temperature, NAD+ was reduced, as evidenced by an increase in absorbance (Fig. 6b, spectrum 2), which was even more pronounced after 5 min of incubation (Fig. 6b, spectrum 3), since glucose dehydrogenase converted glucose to d-glucono-1,5-lactone, thereby transferring the electrons to NAD+ and yielding NADH.
Agl3 was, in any case, tested as a holoenzyme, because all attempts to prepare NAD-free apo-Agl3 failed. Extensive dialysis (2 days) did not remove the bound NAD-cofactor; the absorption spectrum of Agl3 at 340 nm still showed the presence of the cofactor inside the enzyme. Longer dialysis led to precipitation of Agl3 and complete loss of activity (not shown).
Since the NAD-dependent activity of Agl3 could not be monitored under steady-state conditions, the pre-steady-state kinetics of the reaction was investigated by stopped-flow experiments. We probed whether the bound cofactor is able to accept electrons from UDP-d-glucose at 65 °C, mimicking the natural growth temperature of S. acidocaldarius. As depicted in Fig. 6c (spectrum 1), in the resting state of recombinant Agl3 (2 µM), the spectrum of the pyridine nucleotide cofactor suggests the occurrence of a mixture of NAD+ and NADH. Upon addition of 50 µM UDP-d-glucose there was a rapid formation of fully reduced NADH (Fig. 6c, spectrum 2), followed by a slow re-oxidation to the mixed NAD+/NADH state. The corresponding time traces of this biphasic reaction are shown in Fig. 6d for 5 and 50 µM UDP-d-glucose, respectively. The first rapid absorbance increase at 320 nm is dependent on the concentration of the electron donor. Upon fitting this phase by a single-exponential function, k_obs values could be estimated. From the slope of the linear plot of the k_obs values versus the concentration of UDP-d-glucose, an apparent bimolecular rate constant of ~9.3 × 10⁴ M⁻¹ s⁻¹ could be estimated at 65 °C and pH 7.0. The high intercept of 8.8 s⁻¹ suggests that transiently formed NADH becomes re-oxidized. After about 3 s, the mixed NAD+/NADH state of the enzyme sample was established again (Fig. 6d). The second, slower phase (i.e., the absorbance decrease at 320 nm) did not depend on the sugar concentration. Consistent with the literature (Mulichak et al. 1999, 2002; Hegeman et al. 2001), we propose that the conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene by Agl3 is initiated by oxidation of the hydroxyl group at C-4 of the substrate by NAD+, leading to NADH. This is the first demonstration of rapid electron transfer between UDP-d-glucose and NAD+ in a UDP-sulfoquinovose synthase (Fig. 6c, d).
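Operationally, this pre-steady-state analysis reduces to two standard fits: a single-exponential fit of each stopped-flow trace to obtain k_obs, and a linear fit of k_obs versus substrate concentration, whose slope gives the apparent bimolecular rate constant and whose intercept reflects the re-oxidation. The sketch below illustrates the procedure on synthetic traces generated from the constants quoted above; it is not the measured data:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def single_exp(t, amp, k_obs, offset):
    # exponential absorbance rise: offset + amp * (1 - exp(-k_obs * t))
    return offset + amp * (1.0 - np.exp(-k_obs * t))

# Illustrative inputs: k_on ~ 9.3e4 /M/s and intercept ~ 8.8 /s from the text
k_on, k_off = 9.3e4, 8.8
concs = np.array([0.5e-6, 5e-6, 25e-6, 50e-6])   # M UDP-d-glucose
t = np.linspace(1.5e-3, 0.5, 400)                # s, first point at 1.5 ms

rng = np.random.default_rng(0)
k_obs_fitted = []
for c in concs:
    k_true = k_on * c + k_off
    trace = single_exp(t, 0.05, k_true, 0.1) + rng.normal(0, 1e-4, t.size)
    popt, _ = curve_fit(single_exp, t, trace, p0=(0.05, 10.0, 0.1))
    k_obs_fitted.append(popt[1])

fit = linregress(concs, k_obs_fitted)
print("slope (apparent k_on): %.3g /M/s" % fit.slope)
print("intercept            : %.3g /s" % fit.intercept)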
The oxidation of UDP-d-glucose was accomplished at high enzyme concentration (2.7 mg ml⁻¹). Chloroform was directly added to the reaction mixture, resulting in an instant unfolding of Agl3 and release of the reaction product into the water phase. RP-HPLC analysis of the latter showed the presence of a new compound at 3.4 min retention time (~40 pmol) (Fig. 6e), eluting between the substrate UDP-d-glucose (3.2 min) and UDP-d-glucose-5,6-ene (3.8 min). MS analysis of the new reaction product yielded a molecular mass of 563.084 u (ESI-MS analysis in the negative mode; Fig. 6e). This value corresponds to the molecular mass of oxidized UDP-d-glucose (two hydrogen atoms less than the substrate). The accumulation of the new product was only detectable at high Agl3 concentration. The putative keto compound remained tightly associated with the active site of Agl3 during substrate turnover and could be extracted only by instantaneous chloroform-induced unfolding of Agl3 during the catalytic conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene (Sporty et al. 2008). Because of the fast rate of conversion, the accumulation of the putative keto compound is very low, thus preventing structural analysis by NMR. The existence of a 4-keto product in the reaction pathway of different SDR enzymes was shown in plants (Pugh et al. 1995; Mulichak et al. 1999) and for the model enzymes GalE (Thoden et al. 1996a, b) and RmlB (Allard et al. 2002). Currently, it is not possible to continuously monitor every step of the formation of UDP-sulfoquinovose and thus the overall mechanism of this bi-substrate enzyme is not yet fully understood.

[Fig. 6 Cofactor analysis and kinetics of NAD+ reduction by UDP-d-glucose. a RP-HPLC analysis of NAD+ extracted from Agl3 in oxidized form (graph 1) compared to an NAD+ reference (graph 2). b Absorption spectra of the extracted cofactor in the fully oxidized (NAD+; spectrum 1) and reduced (NADH) state; reduction was mediated by glucose dehydrogenase and glucose, and spectra were recorded after 2 min (spectrum 2) and 5 min (spectrum 3). c Absorption spectrum of Agl3 in the resting state (spectrum 1), suggesting a mixed oxidation state of the tightly bound cofactor; spectrum 2 is formed immediately after addition of 50 μM UDP-d-glucose. d Stopped-flow analysis of the reduction of the bound NAD-cofactor by UDP-d-glucose; two representative time traces for the reaction between 2 μM Agl3 and 5 or 50 μM UDP-d-glucose are depicted; the first rapid increase at 320 nm depends on the UDP-d-glucose concentration; the inset shows the corresponding plot of the k_obs values for this reaction versus the concentration of the electron donor. e RP-HPLC analysis and MS analysis of a new reaction product at 3.4 min retention time (HPLC graph) and a molecular mass of 563 u (mass spectrum), extracted by chloroform during the conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene in the absence of sulfite and at an Agl3 concentration of 2.7 mg ml⁻¹]
Conversion of UDP-d-glucose by Agl3, analyzed after deuterium labeling
The conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene at an excess of D2O (Fig. 7) indicated enolization and locking of deuterium at C-5 of UDP-d-glucose (Gross et al. 2000). For this approach, purified Agl3 was dialyzed against 20 mM phosphate buffer of pD 6.4. Substrate conversion in D2O-phosphate buffer was repeated several times to obtain sufficient material for the subsequent ESI-MS analysis. As a result, 15 % of UDP-d-glucose was labeled with one deuterium atom, whereas the isotope distribution of UDP-d-glucose-5,6-ene after enzymatic conversion in D2O-phosphate buffer remained unchanged compared to the natural isotope distribution (Fig. 7; Table 4). This indicated that the altered isotope distribution was specific for UDP-d-glucose and not the result of a deuterium contamination. Additionally, the stability of UDP-d-glucose in D2O-phosphate buffer was tested, confirming that in the absence of Agl3, UDP-d-glucose remained stable and unlabeled (not shown). Interestingly, the catalytic conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene in D2O-phosphate buffer increased to nearly 50 % compared to the conversion in H2O (~10 %) (Fig. 7a, b).

[Fig. 7 Conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene by Agl3 at an excess of deuterium. a Conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene reaching an equilibrium at steady-state level. b MS analysis of the conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene in H2O-phosphate buffer compared to (c) its conversion in D2O-phosphate buffer. d Natural isotope distribution pattern of UDP-d-glucose measured by ESI-MS in the negative mode in H2O-phosphate buffer]

We attempted to demonstrate the assumed keto-enol tautomeric equilibrium following oxidation of the OH group at C-4 of UDP-d-glucose by deuterium labeling of the substrate at the active site of Agl3 (Gross et al. 2000) (Fig. 8). This inter-conversion requires the release of the hydrogen at C-5 of the putative UDP-4-keto-glucose (Fig. 8a, structure 3) and subsequent formation of a double bond between C-4 and C-5 (Fig. 8a, structure 4). The inter-conversion of the putative UDP-4-keto-glucose and its enol form directly at the active site of Agl3 was traced by locking a deuterium atom at the C-5 position of the substrate through the enolization process (Fig. 8a, structures 4, 5), and by labeling of UDP-d-glucose with deuterium when the conversion of UDP-d-glucose by Agl3 was performed at an excess of D2O (Fig. 8a, structure 6). Labeling of C-5 with deuterium (e.g., C-5 of dTDP-d-glucose of RmlB) has been proposed for nucleotide-activated glucose intermediates involving keto formation at C-4 followed by enolization between C-4 and C-5 (Gross et al. 2000). The observed labeling of UDP-d-glucose with deuterium (Fig. 8a, structures 6, 7) also suggests that dehydration of UDP-d-glucose occurs only from the enol form of UDP-4-keto-glucose (Fig. 8a, structure 4). This explains why UDP-d-glucose-5,6-ene could not be labeled with deuterium at C-5 (Fig. 8b; Table 4). Deuterium is cleaved off from UDP-4-keto-glucose at the active site of Agl3 prior to its dehydration (Fig. 8a, inter-conversion of structure 5 to 4). Thus, dehydration of UDP-4-keto-glucose is proposed to occur through the enol form (Fig. 8b), followed by the transfer of hydride to C-4. The assumed hydride transfer diverges here from the common dehydratase pathway, where hydride transfer to C-6 has been proposed (Field and Naismith 2003).
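The 15 % labeling figure reported above can be estimated from the shift of the M+1/M intensity ratio between the H2O control and the D2O incubation, after correcting for natural isotope abundance. A minimal sketch (the two intensity ratios are hypothetical placeholders chosen to reproduce a 15 % labeling; the H2O spectrum itself serves as the natural-abundance reference):

# Estimate the singly deuterated fraction f of UDP-d-glucose from
# the M+1/M intensity ratios of the ESI-MS isotope patterns.
# R_h: natural-abundance ratio (H2O control); R_d: ratio after D2O turnover.

def labeled_fraction(R_h, R_d):
    # Unlabeled species: M = (1-f), M+1 = (1-f)*R_h
    # Labeled species: its monoisotopic peak falls on M+1 (adds f)
    # Observed R_d = ((1-f)*R_h + f) / (1-f)
    # => f = (R_d - R_h) / (1 + R_d - R_h)
    return (R_d - R_h) / (1.0 + R_d - R_h)

R_h = 0.19             # hypothetical M+1/M for unlabeled UDP-d-glucose
R_d = R_h + 0.1765     # hypothetical ratio after conversion in D2O buffer
print("labeled fraction: %.1f %%" % (100 * labeled_fraction(R_h, R_d)))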
This step might be accomplished without the suggested sugar rotation while still keeping C-4 in an optimal position for hydride transfer. It would allow sulfite addition to an enone for subsequent UDP-sulfoquinovose biosynthesis (Fig. 9), but we cannot structurally prove this proposal because a crystal structure of Agl3 is currently lacking.
An interesting observation concerns the nearly 50 % increase of the catalytic conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene in D2O-phosphate buffer when compared to the conversion in H2O-phosphate buffer (Fig. 8). There is sufficient volume for a significant number of water molecules to be present at the active site of Agl3 (see supplemental Figure S1), which, through H-bridging, stabilize the interaction between the crucial residues of Agl3 and UDP-d-glucose. D2O has a stronger dipole moment than H2O. Therefore, stronger H-bridge interactions inside Agl3 in D2O-phosphate buffer most likely lead to increased stability of the active site of Agl3 and increased catalytic activity of the enzyme.

[Fig. 8 Proposed mechanism for the conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene. a NAD-dependent oxidation of UDP-d-glucose at C-4 (structure 3), enolization of UDP-4-keto-glucose (structure 4) and locking of deuterium at C-5 by the enolization process at the active site of Agl3 (structures 5-7). b Proposed NAD-dependent dehydration of the enol conformation of UDP-4-keto-glucose, eventually resulting in UDP-d-glucose-5,6-ene]
Evolutionary relationship between UDP-sulfoquinovose synthases, including Agl3

UDP-sulfoquinovose synthases belong to the SDR subfamily of oxidoreductases containing a conserved cofactor-binding Rossmann-fold domain (Kavanagh et al. 2008). In the current study, we confirmed that the reaction mechanism of Agl3 of S. acidocaldarius follows the general mechanism of this subfamily. This proof is particularly important because of the possible distant evolutionary relationship between Agl3 and SQD1 from A. thaliana (Fig. 1a).
If the reaction mechanisms of Agl3 and SQD1 are indeed different, crucial amino acids other than those identified for SQD1 should be involved in the catalytic reaction. This presumably implicates a dehydratase-type reaction in Agl3 (Allard et al. 2002; Field and Naismith 2003). The respective catalytic tyrosine residue in Agl3 could be Tyr148 (Figs. 1, 2a, b). It might function as an active-site base and accomplish, in concert with Glu147, the dehydration step (Allard et al. 2002). For the RmlB enzyme from Salmonella enterica sv. Typhimurium, it was shown that C-4 of the substrate is at the optimal position for the initial hydride abstraction (Allard et al. 2002). The results obtained with Agl3 are in agreement with this reaction step (Fig. 9), in which NAD+ initially oxidizes the glucosyl C-4 of dTDP-glucose to dTDP-4-keto-glucose, leaving the NAD-cofactor reduced. Next, water is eliminated between C-5 and C-6 of dTDP-4-keto-glucose to form dTDP-4-keto-glucose-5,6-ene. Hydride transfer from NADH to C-6 of dTDP-4-keto-glucose-5,6-ene regenerates NAD+ and produces the product dTDP-4-keto-6-deoxyglucose (Gross et al. 2000). It was proposed that water remains bound to the protein and that sugar rotation around the glycosidic linkage creates no steric clash but moves C-6 to an appropriate location for accepting the hydride (Field and Naismith 2003).

[Fig. 9 Comparison of the reaction mechanisms of classical 4,6-dehydratases (Field and Naismith 2003) and the proposed reaction mechanisms of Agl3 in the absence and presence of sulfite, resulting either in the formation of UDP-d-glucose-5,6-ene or UDP-sulfoquinovose]
The reaction pathway of the UDP-sulfoquinovose synthase Agl3 of S. acidocaldarius obviously follows the general pathway described for dehydratases (Field and Naismith 2003) but appears to be modified at the final step (Fig. 9). Based on our data, we assume the final hydride transfer to occur to C-4 rather than C-6 of the enone form of UDP-4-keto-glucose, leaving this carbon accessible for the subsequent addition of sulfite (Fig. 9). Conclusive information about the actual reaction mechanism of Agl3 is expected from 3D crystallization experiments, which are currently in progress.
Agl3 expression and point mutation analysis
Recombinant production and nickel-affinity purification of hexa-histidine-tagged Agl3 from S. acidocaldarius were performed as described previously (Meyer et al. 2011). Point mutations were introduced by overlap-extension PCR. Ten amino acid positions in Agl3 were selected on the basis of critical amino acids from either epimerases or dehydratases (Mulichak et al. 1999; Field and Naismith 2003). The targeted amino acids in Agl3, along with the primers carrying the alanine codon for replacement of the selected codon, are shown in Tables 1 and 2. First, agl3 was cloned into pET30a (Novagen), using the primer pair 3′-CCCCCCCATATGAGGATTCTAGTACTAGGAATT-5′/3′-CCCCCCCTCGAGACCACCCGCACCACCTCTTACTCTTTTAACGTATTGTGGTTT-5′. The first phase of the overlap-extension PCR was performed with 7 ng of pET30_Agl3 as a template, one unit of Phusion polymerase (Fermentas), 10 mM of dNTP mix and the following amplification conditions: one cycle at 94 °C for 4 min, followed by 30 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and extension/elongation at 72 °C for 90 s, with one final elongation cycle at 72 °C for 5 min. The amplification products were purified using the gel extraction kit from Fermentas and used as templates (approximately 5 ng each) for the second phase of the overlap-extension PCR under identical amplification conditions. The amplification products were purified (see above) and cloned into pET30a via the NdeI and XhoI restriction sites (Meyer et al. 2011). The point mutations were confirmed by single-run sequencing (LGC Genomics); the Agl3 proteins carrying the different alanine mutations were expressed in E. coli BL21 DE3 cells (Stratagene) after induction with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) and purified as described for Agl3 (Meyer et al. 2011).
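For illustration, the codon replacement underlying such alanine-scanning primers is easily scripted. The sketch below builds a forward mutagenic primer and its reverse complement for the two-phase overlap-extension PCR; the gene fragment, the 9-nt flanks and the GCT alanine codon are illustrative assumptions, not the actual agl3 sequence or the primer design rules used here:

# Sketch of alanine-codon replacement for overlap-extension mutagenesis.
# The gene fragment below is a made-up example, not the agl3 sequence.

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMP)[::-1]

def alanine_primers(gene, codon_index, flank=9, ala_codon="GCT"):
    """Return (forward, reverse) mutagenic primers centred on the target codon.
    codon_index is 0-based; flank is the number of matching nt on each side."""
    start = codon_index * 3
    fwd = gene[start - flank:start] + ala_codon + gene[start + 3:start + 3 + flank]
    return fwd, reverse_complement(fwd)

gene = "ATGAGGATTCTAGTACTAGGAATTCACGCTGAAGTTAGACCATTTGCA"  # illustrative only
# codon 8 ("CAC", histidine) is replaced by alanine, mirroring a His->Ala mutant
fwd, rev = alanine_primers(gene, codon_index=8)
print("forward:", fwd)
print("reverse:", rev)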
Agl3 in vitro assay and saturation kinetics measurements
The conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene catalyzed by Agl3 was performed in 20 mM 1,3-bis(tris(hydroxymethyl)methylamino)propane (Bis-Tris propane) buffer, pH 6.5, containing UDP-d-glucose at a final concentration of 0.1, 0.2, 0.5, 1.0, 2.0, 4.0, or 10 mM, and 20 µg of purified Agl3 in a total volume of 40 µl of reaction mixture. The reaction was carried out for 30 min at 70 °C. Subsequently, the reaction mixture was rapidly cooled on ice, and both the substrate and the reaction product were extracted by the addition of 40 µl of chloroform followed by vortexing the mixture for 1 min. Phase separation was performed using a table centrifuge (Eppendorf 5804R), and the water phase was analyzed by RP-HPLC (Thermo Scientific/Dionex; Ultimate 3000 Standard LC System) on a Nucleosil 120-3 C18 column (Macherey-Nagel) with a flow rate of 0.6 ml min⁻¹ and 0.4 M phosphate (pH 6.1) (Meyer et al. 2011). The formation of UDP-d-glucose-5,6-ene was investigated by PGC-ESI-MS(MS) analysis after pre-purification of the intermediate on a porous graphitized carbon (PGC) cartridge (Thermo Scientific) (Pabst et al. 2010). The conversion of UDP-d-glucose and sulfite to UDP-sulfoquinovose by Agl3 was measured under the same conditions by varying UDP-d-glucose concentrations of 0.1, 0.3, 1, 3, 10 and 30 mM against sodium sulfite concentrations of 1.0, 3.0, 10, 30, and 100 mM.
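Saturation series of this kind are conventionally evaluated by fitting the Michaelis-Menten equation, v = Vmax [S]/(Km + [S]), to the initial rates. A minimal sketch with placeholder rates (the actual kinetic constants of Agl3 are not restated in this section, so the numbers below are illustrative):

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Substrate series used in the assay (mM) and hypothetical initial rates
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0, 10.0])
v = np.array([0.08, 0.15, 0.31, 0.48, 0.66, 0.79, 0.90])  # illustrative

popt, pcov = curve_fit(michaelis_menten, s, v, p0=(1.0, 1.0))
vmax, km = popt
print("Vmax = %.2f (a.u.), Km = %.2f mM" % (vmax, km))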
PGC purification and desalting of UDP-activated sugars
Prior to analysis of the in vitro reaction products of Agl3 by NMR and ESI-MS(MS), the UDP-bound intermediates from the reaction mixture were made protein-free and desalted using PGC spin-prep columns (Thermo Scientific). The columns were pre-activated with 500 µl of 100 % acetonitrile and washed with 500 µl of MilliQ water prior to sample application. Bound UDP-sugars were eluted with 100 % acetonitrile and lyophilized on a SpeedVac vacuum concentrator (Pabst et al. 2010). For ESI-MS(MS) analysis, the UDP-sugars were re-dissolved in 0.3 M ammonium formate, pH 9, containing 50 % acetonitrile. The samples were measured via direct infusion on a Bruker maXis 4G mass spectrometer in the negative-ion MS scan mode. Specific values were set to: spectra rate 1.0 Hz, low mass 300 m/z, ion transfer time 85 µs, pre-pulse storage 10.0 µs.

NMR analysis of the Agl3 product

Spectra were recorded at 297 K in 99.9 % D2O (0.6 ml) on an Avance III 600 spectrometer (Bruker; 1H at 600.13 MHz, 13C at 150.9 MHz, 31P at 242.9 MHz), using standard Bruker NMR software. 1H NMR spectra were referenced to 2,2-dimethyl-2-silapentane-5-sulfonic acid (δ 0.0), 13C NMR spectra were referenced to external dioxane (δ 67.40), and 31P spectra were referenced to external ortho-phosphoric acid (δ 0.0) for solutions in D2O. Gradient-selected 1H,1H total correlation spectroscopy (TOCSY, mixing time 120 ms) and COSY experiments were recorded using the programs mlevph and cosygpqf, respectively, with 2048 × 256 data points and 16 and 8 scans, respectively, per t1-increment. The multiplicity-edited heteronuclear single quantum coherence (HSQC) spectra (Schleucher et al. 1994) were measured using the program hsqcedetgp with 1024 × 128 data points and 128 scans per t1-increment. Heteronuclear multiple bond correlation (HMBC) spectra (Bax and Summers 1986) were acquired using the pulse program hmbcgpndqf with 4096 × 64 data points and 1600 scans per t1-increment and spectral widths of 7.7 ppm for 1H and 222 ppm for 13C to check for any carbonyl-correlated signals.
Labeling with deuterium oxide
The conversion of UDP-d-glucose to UDP-d-glucose-5,6-ene was carried out in 20 mM phosphate buffer, prepared in D2O, at a pD value of 6.4 (Gabriel and Lindquist 1968). Briefly, an acidic and a basic phosphate stock solution in D2O were prepared by dissolving 0.037 g of Na2HPO4 and 0.035 g of NaH2PO4 in 10 ml of D2O each, at a final concentration of 20 mM, and the pD values of the D2O-prepared phosphate solutions were determined using a pH meter (Mettler MP-220); the approximate values were 5.3 and 8.8 for the acidic and the basic phosphate stock solution, respectively. The acidic and basic phosphate stock solutions were mixed at a ratio of 3:1 (v/v) to obtain a 20 mM phosphate buffer in D2O in the range of pD 6.4. For maximum removal of protons from the phosphate solutions, the solvents were dried using the SpeedVac concentrator and either phosphate salt was re-dissolved in 1 ml of D2O. Agl3 was prepared in D2O as follows: 2 ml of Agl3 solution (1 mg protein), purified by nickel-affinity chromatography and dialyzed against 10 mM phosphate buffer, pH 6.5, were concentrated to 100 µl using the SpeedVac concentrator and supplemented with D2O to a final volume of 1 ml, corresponding to a protein concentration of 1 mg ml⁻¹.
Cofactor extraction and spectrophotometric analysis
Purified Agl3 (5.6 mg of protein in 10 ml of 5 mM phosphate buffer, pH 6.5) was added to 5 ml of chloroform and mixed for 1 h at room temperature (22 °C). Phase separation was done using a table centrifuge and the water phase was transferred to a new tube. Washing of the chloroform phase was repeated with 2 ml of MilliQ water. The combined water phases were concentrated to 1 ml (SpeedVac) and analyzed for the presence of the NAD-cofactor by RP-HPLC (as described above) (Meyer et al. 2011). The absorption spectrum of fractions containing NAD+ was measured on a diode array photometer (Agilent) in 100-µl quartz cuvettes. The in vitro reduction of extracted NAD+ was performed qualitatively by supplementing trace amounts of glucose dehydrogenase (Amano) and glucose (1 mM final concentration) to the NAD-cofactor and incubating the mixture at room temperature for 2 and 5 min, respectively.
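The NADH formed in this coupled read-out can be quantified from the absorbance increase at 340 nm via the Beer-Lambert law, using the standard molar absorptivity of NADH. A minimal sketch (the absorbance value and the 1 cm path length are assumptions for illustration):

# Beer-Lambert estimate of NADH formed in the glucose dehydrogenase assay.
EPSILON_NADH_340 = 6220.0   # M^-1 cm^-1, standard value for NADH at 340 nm
PATH_LENGTH_CM = 1.0        # assumed path length of the cuvette

def nadh_concentration_uM(delta_a340):
    """Concentration (uM) from the absorbance increase at 340 nm."""
    return delta_a340 / (EPSILON_NADH_340 * PATH_LENGTH_CM) * 1e6

print("dA340 = 0.12  ->  %.1f uM NADH" % nadh_concentration_uM(0.12))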
Conventional stopped-flow spectroscopy

Transient-state measurements were made using the SX.18 MV stopped-flow spectrophotometer (Applied Photophysics, Leatherhead, Surrey, UK), equipped with a 1 cm observation cell. In a typical experiment, UDP-sulfoquinovose synthase was mixed with UDP-d-glucose at 65 °C and the first data points were recorded starting at 1.5 ms. Final concentrations were 2 µM of Agl3 and 0.5, 5, 25 and 50 µM of UDP-d-glucose, respectively. The reduction of NAD+ was followed at 320 nm. Calculation of pseudo-first-order rate constants (k_obs) from the experimental time traces was performed with a SpectraKinetic workstation (Version 4.38) interfaced to the instrument. The second-order rate constant was calculated from the slope of the linear plot of the pseudo-first-order rate constants versus substrate concentration. To follow spectral transitions, a Model PD.1 photodiode array accessory (Applied Photophysics) connected to the stopped-flow machine, together with XScan diode array scanning software (Version 1.07), was utilized.
Circular dichroism (CD) measurements
Electronic circular dichroism (ECD) spectroscopy was performed using a Chirascan instrument (Applied Photophysics, Leatherhead, UK). First, the instrument was flushed with nitrogen at a flow rate of 5 l min⁻¹. Then, ECD spectra were recorded at room temperature in the far-UV region (180-260 nm). The path length was 1 mm, the spectral bandwidth 3 nm, and the scan time per point 10 s.
Interleukin 3-receptor targeted exosomes inhibit in vitro and in vivo Chronic Myelogenous Leukemia cell growth
Although Imatinib (IM), a selective inhibitor of Bcr-Abl, has led to improved prognosis in Chronic Myeloid Leukemia (CML) patients, acquired resistance and long-term adverse effects are still encountered. There is, therefore, an urgent need to develop alternative strategies to overcome drug resistance. Depending on the molecules expressed on their surface, exosomes can target specific cells. Exosomes can also be loaded with a variety of molecules, thereby acting as vehicles for the delivery of therapeutic agents. In this study, we engineered HEK293T cells to express the exosomal protein Lamp2b fused to a fragment of Interleukin 3 (IL3). The IL3 receptor (IL3-R) is overexpressed in CML blasts compared to normal hematopoietic cells and is thus able to act as a receptor target in a cancer drug delivery system. Here we show that IL3L exosomes, loaded with Imatinib or with BCR-ABL siRNA, are able to target CML cells and inhibit in vitro and in vivo cancer cell growth.
within 110 min and then to 60% within 15 min; afterwards, phase B is further increased to 95% within 2 min. Phase B is maintained at 95% for 10 min to rinse the column. Finally, B is lowered to 2% over 2 min and the column re-equilibrated for 21 min (170 min total run time). The eluting peptides were sprayed on-line into the Triple TOF 5600 Plus mass spectrometer, which is controlled by Analyst TF 1.7 software (AB SCIEX, Toronto, Canada).
Each of the two samples used to generate the SWATH-MS spectral library was subjected to four DDA runs. For these experiments, the mass range for the MS scan was set to m/z 400-1250 and the MS/MS scan mass range was set to m/z 230-1500. A 0.25 s survey scan (MS) was performed, and the top 50 ions were selected for subsequent MS/MS experiments, employing an accumulation time of 0.065 s per MS/MS experiment for a total cycle time of 3.5485 s. Precursor ions were selected in high-resolution mode (>30,000); tandem mass spectra were recorded in high-sensitivity mode (resolution >15,000). The selection criteria for parent ions included an intensity greater than 500 cps and a charge state ranging from +2 to +5. A 15 s dynamic exclusion was used. The ions were fragmented in the collision cell using rolling collision energy, and CES was set to 5.
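As a consistency check, the stated total cycle time follows from the survey and MS/MS settings plus a small interscan overhead; the overhead below is inferred as the remainder, not an instrument specification:

# DDA cycle: one 0.25 s survey scan + 50 MS/MS experiments of 0.065 s each
survey, top_n, msms = 0.25, 50, 0.065
nominal = survey + top_n * msms
print("nominal cycle time : %.4f s" % nominal)             # 3.5000 s
print("stated  cycle time : 3.5485 s")
print("implied overhead   : %.4f s" % (3.5485 - nominal))  # 0.0485 s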
The eight DDA MS raw files were combined and subjected to a single database search using ProteinPilot™ 4.5 software (AB SCIEX; Framingham, US) with the Paragon algorithm. The samples were input as unlabeled samples with the following parameters: iodoacetamide cysteine alkylation, digestion by trypsin, and no special factors. Searches were conducted against a UniProt Swiss-Prot database (downloaded in July 2014, with 137,216 protein sequence entries) containing all Homo sapiens proteins. A false discovery rate analysis was also performed.
SWATH-MS analysis and targeted data extraction. Two samples (2 µg each) were subjected to cyclic data-independent acquisition (DIA) of mass spectra. Data were acquired by repeatedly cycling through 40 consecutive 15-Da precursor isolation windows (swaths). For these experiments, the mass spectrometer was operated using a 0.05 s survey scan (MS). The subsequent MS/MS experiments were performed across the mass range of 100 to 1600 m/z on all precursors in a cyclic manner, using an accumulation time of 0.03 s per SWATH window for a total cycle time of 1.2990 s. Ions were fragmented for each MS/MS experiment in the collision cell using rolling collision energy, and CES was set to 15. Spectral alignment and targeted data extraction of the DIA samples were performed using PeakView v.2.2 (AB SCIEX; Framingham, US) with the reference spectral library. All DIA files were loaded and exported together in .txt format using an extraction window of 15 min and the following parameters: three hundred peptides/protein, seven transitions/peptide, peptide confidence level of 90%, shared and modified peptides excluded, and XIC width set at 75 ppm. This export procedure generated three distinct files containing the quantitative output for (1) the peak area under the intensity curve for individual ions, (2) the summed intensity of individual ions for a given peptide, and (3) the summed intensity of peptides for a given protein. For each protein, seven individual ion intensities were summed to give a peptide intensity, and ten peptide intensities were summed to give the protein intensity. Means of all technical replicates were used to compare the proteins of the two exosome populations.
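The hierarchical roll-up described for the PeakView export (fragment ions summed to peptides, peptides summed to proteins, then averaging of technical replicates) can be reproduced with a short aggregation script. The sketch below assumes a generic long-format table; the column names and values are placeholders, not the literal PeakView .txt headers:

import pandas as pd

# Long-format quantitative export: one row per fragment-ion measurement.
df = pd.DataFrame({
    "protein":   ["P1"] * 8,
    "peptide":   ["pepA"] * 4 + ["pepB"] * 4,
    "replicate": [1, 2, 1, 2, 1, 2, 1, 2],
    "ion_area":  [100.0, 110.0, 90.0, 95.0, 200.0, 210.0, 190.0, 205.0],
})

# (1) sum ion areas to peptide intensity per replicate
peptides = df.groupby(["protein", "peptide", "replicate"])["ion_area"].sum()
# (2) sum peptide intensities to protein intensity per replicate
proteins = peptides.groupby(["protein", "replicate"]).sum()
# (3) mean over technical replicates, used to compare the two exosome populations
print(proteins.groupby("protein").mean())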
Briefly, CML cells were seeded at a density of 1 × 10⁵ in a 96-well plate and exposed to 0.1, 0.5, 1 and 10 μg/ml of Imatinib-loaded exosomes for 24 and 48 hours, and to 0.5 μM of Imatinib as a positive control. Similarly, Imatinib-sensitive and -resistant CML cells were exposed to 1 μg/ml of siRNA-loaded exosomes for 24, 48 and 72 hours, and to 0.5 μM of Imatinib. The absorbance was measured at 540 nm.
Dynamic light scattering (DLS)
Exosome size distribution was determined by dynamic light scattering (DLS) experiments. The collected nanovesicle samples were diluted to avoid inter-particle interactions and placed at 20 °C in the thermostatted cell compartment of a Brookhaven Instruments BI200-SM goniometer equipped with a solid-state laser tuned at 532 nm. Scattered-intensity autocorrelation functions were measured using a Brookhaven BI-9000 correlator and analyzed to determine the size distribution. The size at the maximum of the distribution (the mode) is reported as a representative average size.
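For orientation, the size read-out of DLS rests on converting the measured decay rate of the autocorrelation function into a hydrodynamic diameter via the Stokes-Einstein relation. The sketch below shows this single-exponential (cumulant-style) conversion; the decay rate, scattering angle and refractive index are assumed illustrative values, whereas the instrument software performs a full distribution analysis:

import numpy as np

# Stokes-Einstein conversion of a DLS decay rate to a hydrodynamic diameter.
kB = 1.380649e-23        # J/K
T = 293.15               # K (20 C, as in the measurement)
eta = 1.002e-3           # Pa s, viscosity of water at 20 C
lam = 532e-9             # m, laser wavelength used here
n = 1.33                 # refractive index of water (assumed)
theta = np.deg2rad(90.0) # scattering angle (assumed)

q = 4 * np.pi * n / lam * np.sin(theta / 2)   # scattering vector (1/m)

gamma = 2.1e3            # 1/s, illustrative decay rate from a cumulant fit
D = gamma / q**2         # translational diffusion coefficient (m^2/s)
d_h = kB * T / (3 * np.pi * eta * D)          # hydrodynamic diameter (m)
print("hydrodynamic diameter: %.0f nm" % (d_h * 1e9))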
RNA extraction and Real-Time PCR
Imatinib-sensitive and -resistant CML cells were treated for 24 and 48 hours with 1 μg/ml of exosomes derived from transfected HEK293T cells, containing BCR-ABL-specific siRNA or scrambled siRNA.
Western Blotting
Imatinib-sensitive CML cells were treated for 24 hours with 0.5 μM of Imatinib (as a positive control), or with 1 or 10 μg/ml of exosomes derived from HEK293T cells, transfected or not, and treated with Imatinib.
Proposal for all-electrical spin manipulation and detection for a single molecule on boron-substituted graphene
All-electrical writing and reading of spin states attract considerable attention for their promising applications in energy-efficient spintronics devices. Here we show, based on rigorous first-principles calculations, that the spin properties can be manipulated and detected in molecular spinterfaces, where an iron tetraphenyl porphyrin (FeTPP) molecule is deposited on boron-substituted graphene (B-G). Notably, a reversible spin switching between the $S=1$ and $S=3/2$ states is achieved by a gate electrode. We can trace the origin to a strong hybridization between the Fe-$d_{z^2}$ and B-$p_z$ orbitals. Combining density functional theory with nonequilibrium Green's function formalism, we propose an experimentally feasible 3-terminal setup to probe the spin state. Furthermore, we show how the in-plane quantum transport for the B-G, which is non-spin polarized, can be modified by FeTPP, yielding a significant transport spin polarization near the Fermi energy ($>10\%$ for typical coverage). Our work paves the way to realize all-electrical spintronics devices using molecular spinterfaces.
Achieving size-compact and energy-efficient control and detection of magnetism is paramount for the development of future spintronic devices. Using single molecules as quantum units opens a new pathway to reach the physical limits of miniaturization. Currently, spintronics devices are mainly operated via either an external magnetic field (e.g., tunnel magnetoresistance devices 1 ) or electric currents (e.g., spin-transfer torque devices 2 ), which are both highly power-consuming. More recently, electric-field manipulation of magnetism has been proposed 3,4 and has been extensively studied in bulk and 2D materials 5,6 . However, fully electrical programmable reading and writing of magnetism at the single-molecule level are still unsolved problems.
It is now well known that in molecular spintronics most of the phenomena are driven by the interface, which leads to the concept of the spinterface [7][8][9]. Ideally, one aims at control and detection 10,11 at the individual-molecule limit. Therefore, single molecules adsorbed on surfaces have become an ideal testbed to study the interaction of molecules with surfaces, the surrounding environment, and responses to external chemical stimuli. In particular, controlling molecular spin states by chemical functionalization of the surface allows for creating molecular devices with novel functionalities 12 . During the last decade, particular attention has been focused on the substitution of carbon atoms in the graphene lattice by heteroatoms, leading to new physical and chemical properties [13][14][15][16]. In nitrogen-substituted graphene (N-G), scanning tunneling microscopy (STM) 17 showed a dramatic change of the local electronic structure around the nitrogen depending on its surroundings. This suggests that N-G may be used to tune the properties of adsorbed molecules, and indeed, adsorption on top of the N site of N-G can modify the molecular levels by shifts 18 , charge transfer and level splitting 19 , or a change of spin state 20 .
Alternatively, boron (B) is also suitable for direct incorporation into the graphene honeycomb lattice, resulting in at least an effective p-doping 21-25 . However, unlike N-G, B-substituted graphene-based molecular interfaces have been much less investigated in both theory and experiment. Therefore, detailed insight into the interaction between a molecule and B-graphene (B-G) at the atomic scale is currently lacking. In this Letter, we propose the B-G substrate as an ideal spinterface for molecular magnets: We demonstrate, using density functional theory (DFT), the electrical tuning and probing of spin states in a single-molecule device adsorbed on B-G. We choose iron tetraphenyl porphyrin (FeTPP), which has different magnetic ground states on graphene and Au surfaces 26,27 . We find that a single FeTPP molecule on B-G allows for a reversible spin transition between S = 1 and S = 3/2 controlled by an external electrical gate. This effect is driven by a strong and tunable hybridization between FeTPP and B-G. Combining DFT with Keldysh Green's function techniques, we further propose an experimentally feasible 3-terminal transport setup to probe the transport spin polarization (TSP). In contrast to pristine graphene and N-G substrates, the in-plane spin transport for B-G is significantly modified by the FeTPP, with a TSP of more than 10% for a typical coverage. Our work shows a promising route to all-electrical writing and reading of magnetization states in molecular spintronics devices.
The transport setup (see Fig. 1a) consists of graphene with two contacts (L, R), a charge plane mimicking the back-gate underneath, and an Au tip electrode above the FeTPP. This setup allows for two possible current flows: the in-plane transport from the left (L) to the right (R) graphene electrode (I∥), and the out-of-plane transport from L/R to the tip (I⊥). This is feasible in state-of-the-art STM 28,29 , where the back-gate charge is capacitively controlled by a gate voltage. The gate charge enables "writing", while I∥ or I⊥ provide a "reading" of the FeTPP spin state. The electronic structure calculations were performed using SIESTA 30 within the DFT + U scheme, and checked by comparing to plane-wave calculations 31 . The transport was studied using the TRANSIESTA 30,32,33 code, which employs the non-equilibrium Green's function (NEGF) formalism combined with DFT, and the "post-processing" codes TBTRANS and SISL 34 . We refer to Supplemental Material I 35 for computational details. The electronic properties of the isolated FeTPP have been thoroughly explored both experimentally and theoretically, revealing the dependence of the spectroscopic state on the environment of the Fe atom, which gives a ground state either in a low (S = 0), high (S = 2), or intermediate (S = 1) spin state. In general, the ground state of the free FeTPP is the S = 1 state with 3A2g symmetry, with a 3d-shell occupancy in which the last two orbitals are degenerate as a consequence of the molecular symmetry 27,36 . Figure 1b shows schematic sketches of the spin-polarized projected densities of states (PDOS) of FeTPP on doped graphene (without the STM tip). It has been demonstrated through combined STM experiments and DFT calculations that the molecule keeps the same electronic structure as in the gas phase after being deposited on pristine graphene, due to weak molecule-substrate coupling, and the HOMO state originates essentially from the Fe d_{z^2} spin-down orbital 27 . In the case of N-G, the molecule remains S = 1 with the same spin configuration, while a clear downshift of the electronic spectrum is observed, in good agreement with experiments 27 . Surprisingly, a significant change happens when the molecule is attached to B-G 37 . The Mulliken charge analysis shows that upon adsorption 0.7 electrons are transferred from Fe to B-G. In particular, the spin-down channel of Fe-d_{z^2}, which was initially occupied, becomes an empty state just above E_F due to strong hybridization between Fe-d_{z^2} and B-p_z enabled by perfect orbital symmetry matching, as shown in Fig. 2a. Such strong hybridization is also reflected in the charge density difference plotted in Fig. 2c. Furthermore, the strong interaction also introduces a small spin polarization of the B atom and the carbon atoms at the spinterface (see Supplemental Material II 35 ).
A method to achieve fully electrical writing of magnetization states at the single-molecule level, corresponding to a reversible spin manipulation, is attractive and promising. The coupling between FeTPP and B-G at the interface leads to an empty d_{z^2} orbital very close to the Fermi energy. It turns out that external stimuli may easily tune it: We apply a gate plane placed 15 Å underneath the graphene, as shown in Fig. 1a. The gate carries a charge density of n = g × 10¹³ e/cm², where g defines the gating level, with g < 0 (g > 0) corresponding to n (p)-type doping 38 . Here, the FeTPP molecule and the second-nearest-neighbor C atoms to the B atom were fully relaxed when applying a gate charge. Figure 2a shows the corresponding DOS projected onto the Fe d-orbitals for g = 0 and g = −1. The variation of the PDOS for both spin channels is significant. The d_xy orbital shifts to higher energy for g = −1, although this has no direct influence on the total magnetic moment. In contrast, the d_{z^2} spin-down channel becomes occupied, leading to a spin switching from S = 3/2 to S = 1, as shown in Fig. 2b. The real-space charge density difference in Fig. 2c also indicates that the molecule retrieves the d_{z^2} electron at g = −1, compared to g = 0. The atomic structures in Fig. 2c clearly show the bond elongation/weakening between the relaxed molecule and B-G at g = −1. For the unrelaxed, ungated structure subject to the g = −1 gate, the gate-induced forces increasing the Fe-B distance are substantial, 0.2 nN and 0.6 nN on the B and Fe atoms, respectively, pushing the structure towards the weakly bonded situation found for pristine graphene. For completeness, we note that for the weak-coupling configuration at g = −1 we also find a competing S = 1 solution, (d_xy)² (d_{z^2})¹ (d_xz)² (d_yz)¹, only 10 meV higher in energy; see Supplemental Material II 35 .
In order to understand the gate control of the spin states of the FeTPP molecule, we have performed test calculations at g = −1 for B-G without FeTPP. When g = −1, the B-G substrate becomes charged with extra electrons and thus loses its ability to attract the FeTPP. In other words, electronic spin writing in the molecular spinterface can be achieved by a gate due to the tunability of the interaction between FeTPP and B-G. Increasing the gate charge to g = −2 yields little difference compared to g = −1, while the spin state is not tunable for g > 0, where it remains S = 3/2. Although the case of FeTPP may seem specific to B-G, the proposed mechanism is quite generic, and a similar approach should be possible whenever the frontier molecular orbital is close to E_F.
For the electrical read-out of the single-molecule spin states, we first consider the out-of-plane spin transport from the L/R graphene electrode to the Au tip electrode. Figures 3a and b show the spin-dependent transmission functions at g = 0 and g = −1. The transmissions exhibit a dip near the Fermi level due to the vanishing DOS of graphene at g = 0, and this dip shifts to below E_F when g = −1. We find an almost fully spin-polarized current near E_F for both g = 0 and g = −1, as shown in Fig. 3c and d. Scanning tunneling spectroscopy (the dI/dV curve), probing the energy dependence of T⊥, should easily distinguish the two different spin states at g = 0 and g = −1, while shot-noise measurements 39,40 , yielding TSP⊥, would not. The STM tip may furthermore be used actively to manipulate the molecules and control their spin 41 .
Next, we consider the in-plane transport, where the electrical current runs through the graphene xy plane (from L to R). Since we use pristine graphene as the L/R electrodes, the Fermi energy is positioned at the Dirac point for g = 0. Without FeTPP, the N-G and B-G systems retain the pristine sp² hybridization and conjugated planar structure, leading to non-spin-polarized behavior. This is in contrast to the spin-polarized current reported in the case of B-substituted graphene nanoribbons 42 . To understand how the transport properties of the substrates are modified through the molecular interfacial hybridization effects, we plot in Fig. 3e-f the corresponding transmission functions with FeTPP adsorbed on B-G at g = 0 and g = −1. Interestingly, our calculations show that at g = 0 the spin-up and spin-down molecular orbitals hybridize very differently with the substrate, resulting in clearly spin-dependent transport behavior (Fig. 3g). For the spin-down channel (red lines), the transmission becomes almost linear at E < E_F (similar to pristine B-G), which is significantly different from the rather broadened feature without FeTPP. On the other hand, the transmission for the spin-up channel (black lines) resembles the B-G transport without the molecule. As a result, we obtain a large transport spin polarization, TSP∥ = (T↑ − T↓)/(T↑ + T↓), near the Fermi energy, where it furthermore changes sign. For the applied y-periodic transport cell, TSP∥ reaches values beyond 10%, which is significant considering the corresponding inter-molecular distance of ∼16 Å, below typical coverages 27 .
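The TSP curve itself is a one-line post-processing of the spin-resolved transmissions. A minimal numpy sketch (the transmission arrays are synthetic stand-ins for the TBTRANS output, shaped only to mimic a gap-like background with broken spin symmetry):

import numpy as np

# Transport spin polarization from spin-resolved transmissions.
# T_up / T_dn stand in for calculated output on an energy grid around E_F.
E = np.linspace(-0.5, 0.5, 201)            # eV, relative to E_F
T_up = 0.8 * np.abs(E) + 0.05              # synthetic spin-up transmission
T_dn = 0.8 * np.abs(E) + 0.05 + 0.1 * E    # synthetic, broken spin symmetry

TSP = (T_up - T_dn) / (T_up + T_dn)        # changes sign at E = E_F
i = np.argmax(np.abs(TSP))
print("max |TSP| = %.1f %% at E - E_F = %.2f eV" % (100 * abs(TSP[i]), E[i]))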
On the other hand, the spin-up and spin-down transmission functions are almost degenerate at g = −1, resulting in the absence of TSP∥ (Fig. 3h). Such an on-off switching of TSP∥ via gating could be probed, e.g., in shot-noise experiments 39,40 . It should be noted that the concentration of FeTPP in typical experiments will be higher than in our quasi-single-molecule setup, which further increases the TSP. In addition, for pristine graphene and N-G, the transmission remains almost the same as without FeTPP due to the weak coupling between molecule and substrate (i.e., physisorption), leading to a non-spin-polarized current (see Supplemental Material III 35 ).
In summary, we propose a molecular spinterface device based on FeTPP on a B-substituted graphene substrate. Our calculations demonstrate all-electrical writing and reading of magnetization states at the single-molecule level. The spin state of FeTPP can be switched reversibly between S = 3/2 and S = 1, an effect traced to the strong hybridization between the Fe-d_{z^2} and B-p_z orbitals. We further propose a 3-terminal transport setup to probe the magnetization states by measuring the spin polarization, which is feasible using current state-of-the-art STM techniques 39,43 . Surprisingly, the in-plane quantum transport of B-G, which is non-spin-polarized, can be made spin-polarized by depositing FeTPP, with a TSP of more than 10% for typical coverages near the Fermi energy. This large, electrically controlled TSP can be detected, for instance, in a spin-valve setup 44 . These results open an attractive route for the design of fully electrical writing and reading techniques in molecule/2D-material heterostructures.
Exploring nurses’ perception about the care needs of patients with COVID-19: a qualitative study
Background COVID-19 is a new disease affecting and killing a large number of people across the world every day. One way to improve health care for these patients is to recognize their needs. Nurses, as a large population of health care staff, can be rich sources of information and experience on patients' care needs. Therefore, the aim of this study was to explore nurses' perception about the care needs of patients with COVID-19. Methods The present qualitative research was performed using the conventional content analysis approach in Iran from March to May 2020. The participants of this study included nurses caring for patients with COVID-19, recruited by the purposive sampling method. The data was collected through 20 telephone interviews and analyzed based on the method proposed by Lundman and Graneheim. Results Qualitative data analysis revealed six main categories: need for psychological consulting, need for quality improvement of services, need for upgrading of information, need for improvement of social support, need for spiritual care and need for social welfare. Conclusion The data showed that patients with COVID-19 were psychologically, physically, socially, economically, and spiritually affected by the disease. Therefore, they should be comprehensively supported by health care staff and other supportive systems.
Background COVID-19 is a newly emerged infectious disease, first reported in Wuhan, China on December 31, 2019 [1]. After a rapid spread that inflicted many countries across the world, the disease was declared a pandemic by the World Health Organization (WHO) on March 11, 2020 [2]. Up until May 25, 2020, the global number of people contracting COVID-19 and the death toll had reached 5,131,810 and 331,108, and in Iran, these numbers were 129,341 and 7,249, respectively [3]. Around 20% of patients with the infection may experience severe symptoms requiring oxygen therapy or other inpatient interventions, and only 5% of these will require hospitalization in the intensive care unit (ICU) [4].
Studies on patients with COVID-19 indicate that they may experience various symptoms such as fever, dyspnea, muscle ache, headache, fear, diarrhea, nausea, vomiting, increased systolic blood pressure, and hemoptysis, which require invasive and non-invasive therapeutic support during the acute course of the disease [5,6]. The mortality rate of COVID-19 has been estimated at 1 to 5%, but this varies based on patients' age groups and the presence or absence of underlying diseases [7]. Previous experiences from the SARS crisis indicated that patients may face many problems such as fear, loneliness, boredom, anger, anxiety, insomnia, and a feeling of being taboo. Patients may also have concerns regarding the effects of quarantine on their psychological well-being and the risk of infecting family members and friends [8]. COVID-19, as a SARS-related emerging disease [9], has many unknown dimensions in various clinical care areas.
For comprehensive patient care, patients' needs should be identified. Among health care providers, nurses are at the forefront of fighting against COVID-19. They are in close and constant contact with patients from admission to discharge. Nurses are also valuable resources for recognizing patients' needs, the clinical manifestations of the disease, evidence-based care practices, nursing management problems, and prognostic factors during the COVID-19 crisis [10]. Explaining nurses' perception of COVID-19 patients' needs can help to improve the quality of patient care. Few studies have been carried out on nurses' experiences of the caring needs of patients with COVID-19. Because of the uncertainties about the diverse aspects of the disease and the caring needs of patients, and because the authors are proficient in qualitative research methodology and closely engaged in caring for patients with COVID-19, the aim of this study was to use a qualitative research approach to explore nurses' perception about the care needs of COVID-19 patients. The results of this study can be helpful in improving the quality of care for patients with COVID-19.
Methods
This qualitative study was performed with a conventional content analysis approach.
Participants
The study population included the nurses working in the inpatient COVID-19 wards of general hospitals affiliated to Lorestan University of Medical Sciences. The participants were selected using the purposeful sampling method based on the length of work experience in COVID-19 wards, total years of work experience, the wards where the nurses had worked before the COVID-19 crisis, and the participants' age and marital status. The inclusion criteria were being engaged in caring for COVID-19 patients, willingness to participate in the study, and having at least 2 weeks of working experience in COVID-19 wards. The exclusion criterion was withdrawal from the study for any reason.
Collecting data
Given the need for urgent data collection to improve the quality of patient care, as well as the restrictions on face-to-face interviews, the data was gathered through in-depth semi-structured telephone interviews from March to May 2020. The characteristics of the nurses working in COVID-19 wards were initially obtained by referring to the nursing officials of the hospitals providing care for these patients. Then the participants were selected based on a snowball sampling approach. After explaining the objectives of the study and acquiring verbal consent for participation, an appropriate time was agreed for the interview. All the interviews were recorded by an electronic device. The main questions of the study were: "How would you describe a day of caring for hospitalized patients with COVID-19?" and "What types of care do these patients need?". Then the interview continued based on the participants' answers with more detailed questions such as "Would you please explain more about this?". Of course, the questions differed somewhat based on the position of the participant (i.e., head nurse, care provider, etc.) and the unit where the nurse worked (i.e., general or critical care unit). Using probing questions during the interview, the interviewer guided the process to achieve the study's objectives.
Data analysis
Data analysis was conducted simultaneously with the interviews, based on the method proposed by Lundman and Graneheim [11]. Details of the data analysis are provided in another article published from this study [12].
Trustworthiness
To ensure the accuracy and reliability of the data, the criteria of credibility, dependability, transferability, and confirmability were used, as proposed by Lincoln and Guba [13]. Details of the trustworthiness of the data are provided in another article published from this study [12].
Ethical considerations
Details of the ethical considerations are provided in another article published from this study [12].
Results
In this study, a total of 20 nurses, including 5 men and 15 women, with an average age of 31.95 ± 6.64 years and a mean work experience of 7.25 ± 5.9 years were enrolled (Table 1). Data analysis in this study led to the emergence of six categories and 12 sub-categories (Table 2).
Need for psychological consulting
Qualitative content analysis of the data showed that COVID-19 patients might suffer from many mental disorders and experience considerable fear and panic during and after the disease. Therefore, they particularly need psychological consulting. Within this category, there were four sub-categories: death anxiety, social stigma, hopelessness, and separation anxiety.
Death anxiety
Data analysis showed that the patients equated being infected with COVID-19 with death and were therefore highly afraid of it. A participant noted that "... the atmosphere was like that the patients really had the perception that they would die of the disease, and there was no returning back ... " (p 17 ). The results also showed that the patients were horrified by sudden death due to the failure of vital organs. Accordingly, one participant stated: ".... there might be nothing notable in the patient's clinical status, but after a while, he/she would die after a reduction in O2 saturation ..." (p 15 ). Based on the participants' experiences, one of the reasons for the great fear of death was the restricted burial ceremony for the victims. In this regard, one of the participants said: "... well, I say that the condition is not good because they are not buried now, and they will not be buried on good terms ..." (p 14 ).
Social stigma
According to the data analysis, one of the causes of patients' anxiety was the impression of having a taboo disease. Data analysis showed that patients with COVID-19 might perceive the disease as a social stigma and be concerned about being rejected by society (p 15).
Hopelessness
Data analysis revealed a feeling of frustration in patients with COVID-19. One of their basic needs was being given hope for life and the future. According to the participants, emotional support is one of the patients' primary needs. The participants also believed that patients in good spirits would successfully recover from the acute phase of the disease. One participant, referring to the patients' needs based on Maslow's pyramid, said: "... I think the first need of ill patients is oxygen, ... but those who are in better condition need affection, or as Maslow said, the feeling of belonging ..." (p 14). Another participant shared this experience: "... for example, we had a COVID-19 patient, a 23-year-old leukemic girl. We gave her hope as much as we could ... to the extent that she could really defeat the virus ..." (p 20).
Separation anxiety
Data analysis showed that COVID-19 patients experienced a difficult time during isolation due to physical problems, loneliness, separation from the family, and the lack of a definitive treatment for the disease. The participants' experiences indicated that the patients had a hard time because they felt abandoned by their families, which led to separation anxiety. One participant said: "... being away from the family is hard for them ..." (p 14). The participants' experiences also showed that nurses' communication with these patients could reduce their social isolation problems, anxiety, and stress. As mentioned by one of the participants: "... they need a strong connection ..., in fact, social isolation greatly torments them ..." (p 15).
Need for quality improvement of services
Data analysis showed that the patients needed to receive high-quality care services from health staff, and it highlighted their need for physical support during the disease course. Patients with underlying disorders needed more attention and special equipment. This category comprised four sub-categories: physical care, necessity for nutritional therapy, orientation, and the necessity for isolating critically ill patients.
Physical care
Data analysis showed that the patients, depending on disease severity, needed special attention and support from the health care team. Meeting these needs requires equipment such as intubation devices, thermometers, and medications, as well as health care procedures such as suctioning of secretions, catheterization, and other physical care. The participants noted that patients with underlying diseases, who should be cared for more rigorously, required special attention from the treatment staff. One participant mentioned: "... for example, patients with tracheae have problems in coughing ... they should be suctioned regularly, and on the other hand, their lungs have inadequate function due to the coronavirus ..." (p 17), and another participant mentioned: "... they need to be suctioned ... those with catheters need additional care to prevent urinary tract infections ... it is also needed to change their position every two hours ..." (p 20).
Necessity for nutritional therapy
Data analysis showed that one of the important needs of COVID-19 patients was attention to their nutrition. The participants' experiences showed that these patients develop anorexia due to anxiety, stress, dyspnea, and coughing. Considering the nature of the disease, they need a competent immune system and therefore a rich diet and counseling with nutritionists. One participant stated: "... their nutrition is really important and should be rich ..." (p 20), and another participant, referring to patients' dehydration, said: "... I see that dehydration agonizes these patients ..." (p 15).
Orientation
The analysis of the participants' experiences showed that COVID-19 patients should become familiar with the hospital environment and be fully informed of their condition within the early hours of arrival to the ward. Since nurses' protective clothing makes them unfamiliar to patients, nurses should be made recognizable by writing their names on the clothing or posting their photos. The data showed that familiarizing patients with the hospital environment and simply explaining the function of medical equipment can encourage them to follow therapeutic instructions. The participants noted that fear was a major obstacle to treatment, but patients who were familiarized with the environment and equipment complied well with the instructions. In this regard, one of the participants mentioned: "... protective clothes surely have an effect on the quality of health care ... most patients don't even know if I'm a man or a woman ..." (p 15). Another participant stressed the importance of familiarizing patients with equipment and its effect on their compliance with the instructions: "... when I explained to the patient how the monitor worked, he had a constant eye on it, and whenever his saturation would rise above 90, he would feel calm ..." (p 16).
Necessity for isolating critically ill patients
Data analysis showed that witnessing the death or deteriorating condition of others caused fear and anxiety and disrupted other patients' hemodynamic status. The participants' experiences indicated that critically ill patients should be separated from patients with moderate to severe disease. One of the participants explained his experience regarding the adverse effects of one patient's death on the others' spirits: "... when one of the patients expired, others were frightened, thinking that they would be the next ... seeing the death of another patient, some patients experienced fluctuations in blood pressure or a reduction in blood sugar ..." (p 17).
Need for upgrading information
Data analysis showed that most COVID-19 patients were unaware of the dimensions of the disease and did not follow the principles of disease prevention. This category was divided into two sub-categories: "necessity for improving awareness and fighting superstition" and "institutionalizing a disease prevention culture".
Necessity for improving awareness and fighting superstition
Analysis of the data suggested that people have inadequate knowledge about COVID-19 and, in some cases, hold misleading and even superstitious beliefs. The participants noted that patients may delay presenting to hospitals due to a lack of awareness and fear of the disease. One of the participants quoted a patient: "... is it true that if one is infected by the virus, they inject a drug to kill him? ... these superstitious beliefs are present among some people ..." (p 14).
Institutionalizing a disease prevention culture
Data analysis indicated that some patients still did not believe in preventive actions or follow the disease prevention protocols. In this regard, one of the participants, with respect to patients' cultural beliefs and the necessity of observing social distancing, said: "... I have a feeling that our social culture is relatively weak in this area. We go to a patient's bedside and tell him to put on his mask ... this would make him upset ... there should be a culture of self-awareness and self-care ..." (p 15).
Need for improving social support
Humans are social beings who want to live in a community and communicate with others. Data analysis showed that caring for COVID-19 patients requires special attention to social support so that the patients feel less homesick and lonely during this period. This category comprised two sub-categories: provision of familial communications and provision of personal accessories.
Provision of familial communications
Based on the data analysis, one of the patients' problems was the lack of familial support. The patients needed to communicate with their families and relatives during isolation and hospitalization. The participants confirmed that phone or video communication between the patients and their family members created psychological peace for them and positively affected their recovery process. One of the participants said: "... they need so much psychological support ... for example, an old mother felt very well as soon as she saw her son from a distance ..." (p 17).
Referring to the patients' reluctance toward treatment, one participant said: "... most of our patients whose families did not come were almost disappointed ... for example, they would not take their pills and resisted treatment ..." (p 16).
Provision of personal accessories
An analysis of the participants' experiences revealed that COVID-19 patients would like to use their personal belongings during hospitalization. The nurses noted that providing patients with their personal belongings could bring them psychological and mental calm. One of the participants said: "... we had a patient who said she would like to drink tea from her own flask ... she was very anxious and fastidious and had sleep problems on the nights before ... when I brought her the flask, she barely drank half a cup of tea, felt calm, and slept all night ..." (p 16).
Need for spiritual care
The data analysis showed that one of the patients' needs was attention to their spiritual dimension. The participants' experiences showed that listening to prayers gave the patients mental peace and a pleasant feeling. The participants also recalled that the patients were moved by verses from the Holy Quran and prayed for themselves and other patients. Regarding the necessity of paying attention to the patients' spiritual dimension, one of the participants said: "... patients were asking us to pray for them. Sometimes, we were praying together ..." (p 17).
Need for social welfare
Data analysis showed that economic problems were among the main concerns of COVID-19 patients. The participants mentioned that some patients were constantly thinking about economic issues, which created a great deal of stress and affected the course of their illness. One of the participants said: "... we had an economically poor patient who was constantly worried about his household and financial issues ... what is my family doing now? The income that I was responsible for has actually been cut ... all these issues bothered him a lot ..." (p 15).
Discussion
The current study was conducted to explore nurses' perceptions of the care needs of patients with COVID-19.
The data analysis showed that COVID-19 patients were psychologically, physically, socially, economically, and spiritually affected by the disease. Therefore, they should be comprehensively supported by medical staff and other supporting systems.
In the present study, the fear of death was reported to be a stressful and distressing factor for the patients. Fear can lead to behavioral disorders and severe psychological reactions, including suicide [14]. It has been estimated that the suicide rate due to the fear of COVID-19 will surge in the following year, and for this reason an interventional plan has been implemented in the United States [15]. The fear of death not only predicts COVID-19 anxiety but also plays a causal role in various mental health conditions. It has therefore been noted that mental health programs should focus on directly addressing death anxiety in these patients. Cognitive behavioral therapy appears to reduce death anxiety [16], and it is recommended that this approach be considered for COVID-19 patients. Further research is essential to determine whether treatment for death anxiety improves long-term outcomes and prevents further disorders in vulnerable populations [16].
In this study, the findings showed that COVID-19 patients were concerned about society's view of them and feared being socially rejected. Indeed, rejection by society and social stigma have been reported among the major concerns of patients during epidemics, particularly during the COVID-19 pandemic [17]. During the outbreak of the novel coronavirus, lockdowns and communication limitations were implemented with the aid of military forces in many countries across the world. This phenomenon can strengthen the impression of social stigma and exacerbate social inequalities [18]. The capacity for social adaptation reveals itself during an epidemic [19]. Improving awareness, preventing fake information, and implementing social equality policies (such as equal access to diagnostic and therapeutic facilities) are among the measures that can help reduce social stigma [20,21]. Knowing that people who die of COVID-19 are buried without formal funerals aggravates the anxiety and fear of the disease. These findings clearly highlight the need of COVID-19 patients for psychological interventions, which should be taken seriously and incorporated into hospitals' therapeutic protocols. In this regard, the findings of other studies indicate that focusing on the etiology and epidemiological characteristics of the disease in social networks generally aggravates public concerns and changes people's knowledge of and attitudes toward COVID-19 [22]. The level of misinformation on Twitter has been alarming, and interventional measures are needed to resolve this issue [17].
The present study showed that frustration was a major problem in COVID-19 patients and that they needed emotional support as a primary requirement. This finding is consistent with evidence showing a surge in fear and anxiety during disease outbreaks [23]. Lee et al. (2020) found that people with anxiety and stress were more likely to experience frustration, mental crisis, suicidal ideation, and alcohol and substance abuse [24]. Given the importance of emotional needs and the consequences of psychological despair, it is recommended not only to focus on clinical symptoms but also to consider psychological counseling and screening programs to identify patients at the early stages of anxiety and despair.
In this study, the findings showed that COVID-19 patients needed to receive high-quality health services. The patients were also found to have special care needs, such as suctioning of pulmonary secretions, frequent checking of vital signs, and ventilation. Patients with COVID-19 present with symptoms such as cough, dyspnea, fever, sore throat, and sometimes other nonspecific presentations; nevertheless, only 5% of these patients will require ventilation [25,26]. In patients with severe symptoms, in addition to ventilation, monitoring and maintaining the function of several vital organs, such as the heart and kidneys, as well as the extremities (legs and fingers), are very important. Patient-specific medical decisions and interventions are necessary in these scenarios [27]. In line with the findings of this study, other researchers have also emphasized promoting and monitoring the quality of intensive care [25] and regularly updating health care guidelines [28] for these patients.
The findings of this study showed that patients with COVID-19 may develop anorexia (loss of appetite) due to the fear and anxiety associated with the disease. There is therefore a critical need to pay attention to the nutritional status of these patients. The importance of this observation has been reiterated in several studies [29][30][31]. Improper nutrition can weaken the immune system, promote chronic inflammation, and ultimately disrupt the host's defense against viruses. COVID-19-induced inflammation and neurological dysfunction may deteriorate and progress to long-term consequences such as dementia and neurological diseases in those with unhealthy diets [30]. In addition to providing a healthy diet for these patients, educating the public should also be one of the main priorities of health systems, encouraging individuals to adopt healthy eating habits and choose appropriate regimens to prevent the long-term effects of COVID-19.
In line with the findings of the current study, another report showed that anxiety and feelings of loneliness, major consequences of COVID-19, were relieved after communicating with friends and family members via social media [32]. The findings of other studies also showed that social support is an important factor in reducing stress during outbreaks [33,34]. COVID-19 has multiple unknown dimensions, and in the present study, we noted an immediate need to boost patients' awareness of the disease. In addition to inadequate knowledge, improper perceptions of the disease are among the issues that should be addressed. Similarly, misinformation about some aspects of the disease has been noted as a point of concern in other studies [35]. In fact, misinformation about the coronavirus outbreak has become a global crisis, to the point that the popularity of unconfirmed information sources has surpassed that of the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) [36]. This phenomenon can predispose individuals to psychosocial disorders; it is therefore essential to increase public health literacy, monitor social media, and activate public health organizations in social networks to create transparency and boost confidence in governmental and non-governmental agencies.
Our findings showed that one of the needs of COVID-19 patients was consideration of their spiritual dimension. Spiritual care can effectively reduce stress and enhance feelings of wellness and integrity, as well as interpersonal relationships among patients [37]. Spiritual care appears to be one of the missing items in caring protocols for these patients. As such programs can be very helpful, it is recommended to involve a team of psychologists and religious experts in order to provide comprehensive care for these patients [38].
The preoccupation of hospitalized COVID-19 patients with economic issues was another finding of the present study. Reduced financial trade and monetary activity [39], declining manufacturing output, and recessions in the tourism, food, education, and oil industries are inevitable during pandemics. A reduction in the workforce is another major consequence amid the novel coronavirus outbreak, raising concerns about a global economic crisis [40]. Immediate measures are therefore needed to alleviate the economic burden of the disease (e.g., lost jobs) on vulnerable social groups.
One limitation of this study was its exclusive focus on nurses' perspectives; the views of patients with COVID-19 could have provided richer information. It is therefore suggested that future research explore COVID-19 patients' perspectives on the disease. Given the urgency of the situation and the fact that the data needed to be collected in the shortest time possible, the performance of some validation procedures was compromised; however, the validity of the data was ensured using alternative methods. Another limitation was that the interviews were conducted via phone calls to reduce the risk of disease transmission for both the interviewee and the interviewer. This may have hindered a deep understanding of the phenomenon; nevertheless, the researchers tried to make the phone calls as deep and effective as possible.
Conclusion
The aim of this study was to explore nurses' perceptions of the care needs of COVID-19 patients. The data showed that these patients were psychologically, physically, socially, economically, and spiritually affected by the disease, highlighting the need for comprehensive care by medical staff and other supporting systems. Factors such as death anxiety, the perception of a taboo disease, frustration, and social isolation cause stress and anxiety in the patients, which can be alleviated by continuous psychological counseling and spiritual care during the disease course, from the onset of symptoms to a few days after recovery. COVID-19 patients experience various physical symptoms, such as pain, fever, dyspnea, and cardiovascular and nutritional problems, all of which need to be addressed by medical teams. The lack of knowledge about the various dimensions of the disease, superstitious beliefs, and low compliance with preventive measures indicate an urgent need to improve public knowledge about the disease. Considering the economic problems and the ensuing global recession, governments, along with non-governmental organizations (NGOs) and charities, should identify poor and vulnerable patients and endeavor to reduce their problems.
Abbreviations
NGOs: Non-governmental Organizations; CDC: Centers for Disease Control and Prevention; WHO: World Health Organization
Apatinib with doxorubicin and ifosfamide as neoadjuvant therapy for high-risk soft tissue sarcomas: a retrospective cohort study
Background There is a need to establish an effective neoadjuvant therapy for soft tissue sarcomas (STSs). We previously showed that apatinib, administered in combination with doxorubicin-based chemotherapy, improves the efficacy of treatment. This study aimed to clarify the effectiveness and safety of apatinib combined with doxorubicin and ifosfamide (AI) neoadjuvant chemotherapy for STSs. Methods This retrospective study included patients with STS who received neoadjuvant therapy and surgery between January 2016 and January 2019. The patients were divided into two treatment groups: AI + apatinib group and AI group (doxorubicin + ifosfamide). Results The study included 74 patients (AI + apatinib: 26, AI: 48) with STS. There were significant between-group differences in objective response rates (53.85% vs. 29.17%, p = 0.047) and the average change in target lesion size from baseline (-40.46 ± 40.30 vs. -16.31 ± 34.32, p = 0.008). The R0 rate (84.62% vs. 68.75%; p = 0.170) and 2-year disease-free survival (73.08% vs. 62.50%, p = 0.343) were similar across groups. Finally, the rates of neoadjuvant therapy-related adverse effects and postoperative complications were similar in both groups (p > 0.05). Conclusion Apatinib plus doxorubicin and ifosfamide regimen is safe and effective as neoadjuvant therapy for patients with STS. However, the significantly improved preoperative ORR observed after neoadjuvant therapy did not translate into a significantly improved R0 rate and 2-year DFS. Prospective, well-powered studies are warranted to determine the long-term efficacy and optimal application of these protocols.
Background
There are over 70 subtypes of soft tissue sarcomas (STSs) [1]. Although rare, STS accounts for approximately 40,000 new diagnoses in China each year [2]. The standard treatment for localized STS is surgical resection [3]. Despite achieving optimal local control, over 50% of patients with localized STS succumb to metastatic disease [4]. The first-line treatment for advanced (locally unresectable or metastatic) STS is chemotherapy with doxorubicin [3]. The overall response rate (ORR) to this treatment for advanced STS is approximately 20% [5], and the 5-year survival rate among patients with advanced STS treated with a combination regimen is < 10% [6]. These findings suggest the need for an approach that may help reduce the rates of recurrence and metastasis in patients with early- and mid-stage STS. Neoadjuvant chemotherapy (preoperative adjuvant chemotherapy) is a candidate approach in this context [7].
Despite this need, the efficacy of neoadjuvant chemotherapy for STS remains controversial as evidence from clinical trials has failed to convincingly demonstrate the effectiveness of neoadjuvant chemotherapy for STS [8][9][10]. Due to the ongoing debate over the efficacy of neoadjuvant chemotherapy, the STS research community worldwide is examining ways to improve the efficacy of neoadjuvant therapy [10][11][12][13]. This improvement can be achieved by using more sensitive treatment methods or implementing individualized therapy based on sarcoma subtypes. Determining an effective neoadjuvant therapy remains an ongoing research priority.
Apatinib is a multi-target tyrosine kinase inhibitor (TKI), marketed in China, that effectively treats some types of STS [14,15]. As a leading sarcoma treatment center in central China, we have treated many patients with STS with apatinib [14,16]. In fact, we previously showed that apatinib combined with doxorubicin was more effective than doxorubicin alone in reducing the size of target lesions in patients with STS [17]. This finding suggests that the use of apatinib combined with doxorubicin-based chemotherapy may improve the efficacy of neoadjuvant therapy. Based on this evidence, we treated some STS patients with apatinib combined with doxorubicin and ifosfamide (AI) neoadjuvant chemotherapy over the past few years. In this study, we retrospectively examined these patients' clinical data to clarify the effectiveness and safety of apatinib combined with neoadjuvant chemotherapy for treating STS. The present findings may provide a reference for clinical treatment decision-making and future clinical trial design.
Patients and eligibility criteria
This retrospective study included patients with STS treated at the Affiliated Cancer Hospital of Zhengzhou University between January 2016 and January 2019. Patients were included in the present study if they: 1) had pathologically confirmed STS, 2) were identified as high-risk patients without evidence of distant metastasis [18], 3) received two cycles of AI or AI + apatinib neoadjuvant therapy, 4) underwent resection of the primary lesion, and 5) had complete follow-up data.
This study was approved by the Ethics Committee of the Affiliated Cancer Hospital of Zhengzhou University. Included patients provided written informed consent for their participation. The study complied with the Declaration of Helsinki guidelines and any other relevant reporting or ethical guidelines.
Treatment protocol
Patients were divided into the AI + apatinib and AI groups based on the type of neoadjuvant therapy they received. In the AI + apatinib group, patients were administered 37.5 mg/m² of doxorubicin per day as a short infusion on days 1 and 2, and 2 g/m² of ifosfamide per day as an intravenous bolus on days 1-3. The treatment procedure was repeated on day 21. In parallel, patients received 500 mg of apatinib once daily, starting on day 1. Apatinib was discontinued on day 35.
In the AI group, patients were administered 37.5 mg/m² of doxorubicin per day as a short infusion on days 1 and 2, and 2 g/m² of ifosfamide per day as an intravenous bolus on days 1-3. The treatment procedure was repeated on day 21.
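Both cytotoxic agents above are dosed per square meter of body surface area (BSA), whereas apatinib is given as a fixed oral dose. As an illustration only, the sketch below converts the stated per-m² doses into per-day amounts for an individual patient; it uses the Mosteller formula for BSA, which is one common choice (the source does not state which formula was used), and the example height and weight are hypothetical.

```python
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) via the Mosteller formula."""
    return sqrt(height_cm * weight_kg / 3600.0)

def per_day_doses(height_cm: float, weight_kg: float) -> dict:
    """Per-day drug amounts for the AI + apatinib regimen described above.

    Doxorubicin: 37.5 mg/m^2/day (days 1-2, short infusion).
    Ifosfamide:  2 g/m^2/day (days 1-3, intravenous bolus).
    Apatinib:    fixed 500 mg once daily (days 1-35), independent of BSA.
    """
    bsa = bsa_mosteller(height_cm, weight_kg)
    return {
        "bsa_m2": round(bsa, 2),
        "doxorubicin_mg_per_day": round(37.5 * bsa, 1),
        "ifosfamide_g_per_day": round(2.0 * bsa, 2),
        "apatinib_mg_per_day": 500,
    }

# Hypothetical 170 cm, 65 kg patient:
print(per_day_doses(170, 65))
# {'bsa_m2': 1.75, 'doxorubicin_mg_per_day': 65.7, 'ifosfamide_g_per_day': 3.5, 'apatinib_mg_per_day': 500}
```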
Patients were assessed for signs of toxicity, according to the National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0. In cases of severe toxicity, treatment with apatinib and doxorubicin was delayed until patient recovery, for a maximum of 14 days.
Surgical resection
Extensive resection of the primary lesion was performed on days 35-45. Patients were confirmed to be free of grade 3-4 adverse events (AEs) at the time of surgery. All surgeries were performed by an experienced STS surgical team. Each surgery aimed to achieve macroscopically complete resection of the tumor mass based on preoperative assessment and intraoperative findings. All operations were routine open procedures rather than minimally invasive ones. No patient received further apatinib or chemotherapy after surgery. All patients received adjuvant radiotherapy after surgery.
Evaluation
The effectiveness of neoadjuvant therapy was evaluated preoperatively with enhanced magnetic resonance imaging and computed tomography scans, according to the Response Evaluation Criteria in Solid Tumors (version 1.1). Between-group differences in the ORR, changes in target lesion diameter from baseline, R0 rate, and 2-year disease-free survival (DFS) were assessed. DFS was defined as the time from surgical resection to signs of recurrence or metastasis or disease-related death, whichever occurred first. The rates of neoadjuvant therapy-related and surgical resection-related AEs were compared between the groups. Surgical resection-related AEs were graded using the Clavien-Dindo grading system.
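As defined above, DFS is a time-to-event quantity: the clock starts at surgical resection and stops at the first of recurrence, metastasis, or disease-related death, with event-free patients censored at last follow-up. A minimal sketch of that bookkeeping, using hypothetical dates and field names:

```python
from datetime import date
from typing import Optional, Tuple

def dfs_days(surgery: date,
             recurrence: Optional[date],
             metastasis: Optional[date],
             disease_death: Optional[date],
             last_follow_up: date) -> Tuple[int, bool]:
    """Return (time in days, event observed) for disease-free survival.

    The event time is the earliest of recurrence, metastasis, or
    disease-related death, whichever occurred first; patients with
    no event are censored at the last follow-up visit.
    """
    event_dates = [d for d in (recurrence, metastasis, disease_death) if d is not None]
    if event_dates:
        return (min(event_dates) - surgery).days, True
    return (last_follow_up - surgery).days, False

# Hypothetical patient: surgery in March 2017, recurrence about 14 months later
days, event = dfs_days(date(2017, 3, 1), date(2018, 5, 10), None, None, date(2019, 6, 1))
print(days, event)  # 435 True -> an event occurred before the 2-year mark
```

Note that the 2-year DFS rates reported below are simple proportions of patients who remained event-free at two years; given the group sizes, 73.08% and 62.50% correspond to 19/26 and 30/48 patients, respectively.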
Statistical analysis
All statistical analyses were performed using SPSS 21.0 software for Windows. Data are presented as medians (ranges) or counts (percentages). The Wilcoxon rank-sum test with continuity correction was used to analyze continuous variables, and Fisher's exact test was used for categorical variables. All statistical tests were two-sided, and p-values < 0.05 were considered indicative of a statistically significant difference. This was a descriptive analysis.
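For illustration, the sketch below reproduces the two named tests with SciPy rather than SPSS; mannwhitneyu with use_continuity=True corresponds to the Wilcoxon rank-sum test with continuity correction, and all of the arrays and counts are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact

# Hypothetical continuous outcome (e.g., % change in target lesion size)
group_a = np.array([-82.0, -55.3, -40.1, -33.7, -12.6, 5.0])
group_b = np.array([-30.1, -20.4, -9.8, 0.0, 3.3, 14.2])

# Wilcoxon rank-sum (Mann-Whitney U) test with continuity correction
u_stat, p_cont = mannwhitneyu(group_a, group_b,
                              use_continuity=True, alternative="two-sided")

# Hypothetical 2x2 table for a categorical outcome:
# rows = treatment groups, columns = (responders, non-responders)
table = [[10, 6],
         [5, 11]]
odds_ratio, p_cat = fisher_exact(table, alternative="two-sided")

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_cont:.3f}")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_cat:.3f}")
```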
Patients' characteristics
A total of 74 patients with STS met the eligibility criteria for this study and were assigned to the AI + apatinib (n = 26) and AI (n = 48) groups. The patients' baseline characteristics were similar between the groups and are presented in Table 1. Both groups included more females than males. The mean ages of patients in the AI + apatinib and AI groups were 42.04 ± 14.84 and 44.52 ± 13.34 years, respectively. Eastern Cooperative Oncology Group Performance Status scores ranged from 0 to 1. The primary lesions were most commonly located in the extremities, followed by the trunk and the head and neck. The distribution of histological subtypes in the AI + apatinib group was as follows: undifferentiated sarcoma (n = 7), synovial sarcoma (n = 6), leiomyosarcoma (n = 4), angiosarcoma (n = 4), fibrosarcoma (n = 3), rhabdomyosarcoma (n = 1), and malignant peripheral nerve sheath tumor (MPNST) (n = 1). The distribution in the AI group was as follows: undifferentiated sarcoma (n = 9), synovial sarcoma (n = 12), leiomyosarcoma (n = 11), angiosarcoma (n = 3), fibrosarcoma (n = 4), rhabdomyosarcoma (n = 5), MPNST (n = 2), and liposarcoma (n = 2). The mean diameters of the primary lesions in the AI + apatinib and AI groups were 10.13 ± 5.21 and 9.89 ± 4.36 cm, respectively.
Effectiveness of the treatment
The effectiveness of neoadjuvant therapy was evaluated preoperatively. In the AI + apatinib group, one patient with undifferentiated sarcoma and another with synovial sarcoma achieved a complete response (CR) (Fig. 1); no patient in the AI group achieved CR. There were significant between-group differences in the ORR (53.85% vs. 29.17%, p = 0.047; Table 2) and the average change in target lesion size from baseline (-40.46 ± 40.30 vs. -16.31 ± 34.32, p = 0.008; Table 2 and Fig. 1).
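The reported ORRs, applied to the stated group sizes, imply 14 of 26 responders in the AI + apatinib group and 14 of 48 in the AI group (53.85% and 29.17%, respectively). As a quick consistency check under that inference, a two-sided Fisher's exact test on the implied 2x2 table reproduces the reported p-value:

```python
from scipy.stats import fisher_exact

# Counts inferred from the reported percentages and group sizes:
# 53.85% of 26 = 14 responders; 29.17% of 48 = 14 responders
table = [[14, 26 - 14],   # AI + apatinib: responders, non-responders
         [14, 48 - 14]]   # AI alone
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # OR = 2.83, p = 0.047
```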
Ancillary analysis
To investigate the effect of sarcoma histological subtype on neoadjuvant therapy outcomes, we evaluated treatment outcomes after excluding patients with undifferentiated sarcoma. As shown in Table 3 and Fig. 2, we found a significant between-group difference in 2-year DFS (84.20% vs. 61.51%, p = 0.047).
Toxicity evaluation
The major neoadjuvant therapy-related AEs observed in the groups are presented in Table 4. Neoadjuvant therapy-related AEs were more common in the AI + apatinib group than in the AI group; however, this difference did not rise to the level of statistical significance (p > 0.05, Table 4). Most patients experienced grade 1 or 2 AEs, and a few patients experienced grade 3 or 4 AEs. No drug-related deaths occurred.
Postoperative complications per treatment group are presented in Table 5. Grade IV (Clavien-Dindo) complications, namely cardiac failure and deep venous thrombosis, occurred once each in the AI + apatinib and AI groups. No perioperative deaths occurred. There was no statistically significant between-group difference in the incidence of postoperative complications (p > 0.05, Table 5).
Discussion
Previous studies have demonstrated that combinations of multi-target TKIs and cytotoxic chemotherapy can overcome chemoresistance [15,19]. Apatinib may act as an effective chemotherapy sensitizer that reduces doxorubicin-induced chemoresistance [20], and the present findings support this conclusion. The ORR in this study was higher than in our previous study [17], likely due to the addition of ifosfamide to the chemotherapy regimen.

The aim of neoadjuvant therapy is to reduce the diameter of the target lesion, thus simplifying surgery (Fig. 3). Nevertheless, neoadjuvant therapy may increase the risk of disease progression, rendering surgery impossible. This suggests that an intensive neoadjuvant regimen may be required to concurrently minimize the risk of disease progression and reduce the target lesion size. Doxorubicin plus ifosfamide reduces the diameter of target lesions more than doxorubicin alone [21]; on this basis, we used the AI regimen as neoadjuvant chemotherapy. Compared with AI, AI + apatinib was associated with a significantly improved ORR in patients with STS, and similar findings were observed for target lesion shrinkage. Moreover, the R0 rate and 2-year DFS in the AI + apatinib group were higher than in the comparison group, although these differences were not statistically significant. Error caused by the small sample size may be one reason for the non-significant between-group differences in R0 rate and 2-year DFS. In addition, R0 rate and 2-year DFS differed significantly between the two groups after patients with undifferentiated sarcoma were excluded, suggesting that the difference in histological subtypes between the groups is another reason why the significant preoperative ORR did not translate into a significantly improved postoperative R0 rate and DFS. Some patients with other histological subtypes responded well preoperatively but developed recurrence or metastasis shortly after surgery (Fig. 1), suggesting that, for some sarcoma subtypes, a higher preoperative ORR does not translate into prolonged postoperative DFS.

Aside from sarcoma subtype, other factors contribute to preoperative neoadjuvant therapy failure. One such factor involves errors in image evaluation. For example, one patient was judged to have achieved CR based on imaging findings during the evaluation of neoadjuvant therapy, but postoperative pathology confirmed the presence of residual tumor (Fig. 3). There are many other methods to evaluate the perioperative efficacy of neoadjuvant therapy, including imaging, pathological necrosis rate, and R0 assessment; however, it is unclear which method most accurately predicts DFS [4,22,23]. Well-powered prospective studies are required to answer these questions. In conclusion, although better perioperative outcomes of neoadjuvant therapy do not always translate into better postoperative DFS, better postoperative DFS requires better perioperative outcomes. Based on these findings, we can conclude that AI + apatinib achieves superior perioperative effectiveness compared to AI alone.

The safety of neoadjuvant therapy is also important, as treatment-related complications can delay surgery and prolong the overall treatment time. In the present study, the incidence of neoadjuvant treatment-related AEs was similar between the groups, as was the incidence of postoperative complications. However, we did not rigorously screen patients ahead of enrollment.
Patients in better overall condition were inadvertently more likely to receive combination therapy than their counterparts; this should be considered when reviewing this study's safety assessment. Nevertheless, the present safety-related findings are consistent with those of previous studies that used TKIs in combination with chemotherapy for STS [24,25]. These findings suggest the safety of AI + apatinib as neoadjuvant therapy for STS.
This study had some limitations. It was a retrospective study with a small sample size, resulting in low-level evidence. These limitations notwithstanding, our findings suggest that AI + apatinib may be a promising neoadjuvant therapy for STS. It remains unclear whether the perioperative evaluation of neoadjuvant therapy efficacy can support patient prognostication, and different histological subtypes may have different outcomes. Long-term, prospective studies are required to evaluate these considerations.
Conclusions
In conclusion, the apatinib plus doxorubicin and ifosfamide regimen is safe and effective as neoadjuvant therapy for STS. However, the significantly improved preoperative ORR observed after neoadjuvant therapy did not translate into a significantly improved R0 rate and 2-year DFS. Prospective, well-powered studies are warranted to determine the long-term efficacy and optimal application of these protocols.
Open notes in psychotherapy: An exploratory mixed methods survey of psychotherapy students in Switzerland
Background In a growing number of countries, patients are offered access to their full online clinical records, including the narrative reports written by clinicians (the latter referred to as "open notes"). Even in countries with mature patient online record access, access to psychotherapy notes is not mandatory. To date, no research has explored the views of psychotherapy trainees about open notes. Objective This study aimed to explore the opinions of psychotherapy trainees in Switzerland about patients' access to psychotherapists' free-text summaries. Methods We administered a web-based mixed methods survey to 201 psychotherapy trainees to explore their familiarity with and opinions about the impact on patients and psychotherapy practice of offering patients online access to their psychotherapy notes. Descriptive statistics were used to analyze the 42-item survey, and qualitative descriptive analysis was employed to examine written responses to four open-ended questions. Results Seventy-two (35.8%) trainees completed the survey. Quantitative results revealed mixed views about open notes: 75% agreed that, in general, open notes were a good idea, and 94.1% agreed that education about open notes should be part of psychotherapy training. When considering the impact on patients and psychotherapy, four themes emerged: (a) negative impact on therapy; (b) positive impact on therapy; (c) impact on patients; and (d) documentation. Students identified concerns related to increased workload, harm to the psychotherapeutic relationship, and compromised quality of records. They also identified many potential benefits, including better patient communication and informed consent processes. In describing the impact on different therapy types, students believed that open notes might have a differential impact depending on the psychotherapy approach. Conclusions Sharing psychotherapy notes is not routine but is likely to expand. This mixed methods study provides timely insights into the views of psychotherapy trainees regarding the impact of open notes on patient care and psychotherapy practice.
Introduction
In the past decade, health institutions in around 30 countries have begun to provide patients with online access to their medical records via secure portals and apps.1 Access includes test results, lists of medications, and even the narrative reports written by clinicians (the latter often referred to as "open notes"). Open notes are associated with a range of benefits for patients, including enhanced engagement and recall of their care plans.2,3 In some countries, such as Sweden and the US, the practice is advanced, with most patients offered full, prompt online access to most of their clinical records.4,5 In Switzerland, organizational networks of health professionals and their institutions (e.g., hospitals, nursing homes, birth houses, doctors' practices, pharmacies, Spitex services, and rehabilitation clinics or therapists) have also begun to provide online record access (ORA) (https://www.patientendossier.ch). In Switzerland, these records are usually called electronic patient files ("elektronisches Patientendossier"; EPD).
Despite the advances associated with open notes, many countries, including those with digitalized health records, have not implemented patient access. In Canada and Germany, for example, open notes are available to some patients but are not offered universally.1 In China, some hospitals provide inpatients with ORA.6 In South Korea, it is expected that by 2024 all health records data, including open notes, will be integrated into the MyHealthWay app.7 Elsewhere in the EU, in Bulgaria, online patient access to the health record was launched at the end of 2022, with prospective access to open notes.8 In Switzerland, a country with 8.927 million inhabitants, fewer than 20,000 EPD dossiers have been opened throughout the country so far.9

Psychotherapy is a collective term for varying methods of providing mental healthcare via so-called talk therapies. It can be delivered in inpatient, outpatient, remote, and ambulatory settings. In this study, psychotherapy notes refer to the qualitative clinical documentation from a psychotherapy session. The sharing of psychotherapy notes remains controversial. For example, in the US from April 2021, the 21st Century Cures Act mandated that providers offer patients access to their online clinical records without charge; however, psychotherapy notes are exempt from this ruling.10 In Sweden, the Swedish National Regulatory Framework states that patients must be able to access their health information, including their notes, regardless of whether they were produced in mental healthcare or general practice.11,12 Some exceptions apply and are related to the safety of the patient. In practice, due to the decentralization of healthcare in Sweden, each region decides how to interpret the regulations; in 2021, five of the 21 regions did not routinely provide access to psychiatric notes.11 In Norway, the national provider of ORA makes no distinction between psychotherapy notes and other documentation in the medical record.13 Consequently, in regions that have implemented ORA, patients have access to both their structured documentation, such as referrals and discharge notes, and their free-form narrative documentation ("open notes"). Although not mandated by law, most public healthcare institutions provide this access, while private healthcare providers usually do not. In Switzerland, inpatient psychiatric clinics are obliged to provide EPDs, including open notes14; in contrast, participation is still voluntary for ambulatory psychiatric services and psychotherapists.15

The controversies around sharing psychotherapy notes are understandable: offering patients access to open notes can be reformulated as a dilemma balancing patient autonomy against the possible risks of harm from reading the in-depth documentation written by therapists.16,17 More generally, where studies exist, the findings are mixed.24-29 For example, in the US, in a survey conducted within the Department of Veterans Affairs (VA), the nationwide health system that provides all enrolled veterans with access to their mental health notes, around half of the surveyed mental health clinicians reported they would be "pleased" if open notes were discontinued.29 In Sweden, nearly two in three clinical psychologists and a third of psychiatrists reported being less candid in their clinical notes as a result of the implementation of the practice.28
There is also concern about parallel, more complete records being kept without the patient's knowledge (so-called shadow records). In Norway, healthcare personnel in an outpatient mental health setting reported that their documentation practices had changed over time, but they were not sure whether to attribute this to patients having access to their notes.30 In another study comparing mental and somatic healthcare (the latter referring to physical healthcare needs), a higher proportion of healthcare personnel in mental healthcare than in somatic healthcare reported having changed their writing after the implementation of ORA.31 A study from Norway found that up to a third of healthcare professionals in psychiatry underreport information in the patient record, compared to a fifth of their colleagues in somatic care, and almost 1 in 10 psychiatry healthcare professionals kept a shadow record.32

Preliminary research suggests that open notes may benefit patients in mental health settings. For example, in the US, a small pilot study conducted at a psychiatric outpatient clinic found that, after 20 months, most patients reported an increased understanding of their mental health and better awareness of the potential side effects of medications.33 Other, larger surveys in the US support the finding that patients with mental health diagnoses report better understanding of their medications, and doing a better job taking prescribed medications, as a result of open notes.34

While mental health care in general has received some attention, research specifically devoted to isolating experiences of open notes in psychotherapy settings is very limited.22,25 In the US, a pilot qualitative study conducted at one academic center showed mixed findings: surveyed patients felt more in control of their care, and access was extremely important for trusting their provider, remembering what they were working on in therapy, and feeling engaged; however, some patients perceived notes as inaccurate, disrespectful, or judgmental, and strain was more likely if patients reported surprises in the notes, or incongruencies between what was communicated face-to-face and what was documented.25 A pilot study of therapists' experiences at the same center reported that participants who agreed to share their notes were generally positive about the innovation.22 Notably, however, both studies were limited by small sample sizes and excluded patients with serious mental illnesses, and therapists with serious misgivings about open notes.23
Further limiting the generalizability of these findings, the studies may have been biased in favor of more engaged therapists and patients, perhaps leading to responder biases with more favorable reporting about the practice. Beyond the limited explorations of clinicians' and patients' views on sharing open notes, there are scarcely any published investigations into clinical students' opinions about the practice. Considering that patients' access to their own health data is unlikely to abate, including in mental healthcare, in this study we aimed to explore views about open notes among psychotherapy trainees, which makes the study original in its primacy. We chose to administer the survey in Switzerland, a country with a strong tradition of psychotherapy in mental healthcare but where open notes are still only nascent. We identified three research questions: (1) Are psychotherapy trainees familiar with the concept of open notes? (2) What potential impact do trainees foresee open notes to have on psychotherapy patients? (3) What potential impact do they foresee open notes to have on psychotherapists?

Participants

Participants were students enrolled in psychotherapy training programs in Switzerland. Those students practicing psychotherapy were mainly employed on an hourly basis in a center for psychotherapy and were also under regular supervision. All potential participants were emailed by the study team with a link to the online survey. Apart from being enrolled as students in the target programs, there were no other inclusion or exclusion criteria for participation.
Survey
The study aims were investigated via an online survey comprising four sections: one section on demographic information, titled (a) "Demographic information," and three sections focusing on open notes in different areas: (b) "Psychotherapy & Patients," (c) "Psychotherapists," and (d) "Familiarity with open notes" (see Figure 1). The survey included questions adapted from recent publications on clinicians' views of open notes in mental healthcare,27-29 as well as novel items specifically tailored to the field of psychotherapy (see Supplementary Material 1). The survey encompassed 42 items: 38 closed-ended questions asking for a single- or multiple-choice selection, and four optional open-ended questions asking for a brief free-text comment. The survey was conducted in LimeSurvey (limesurvey.org). It was administered in English, but both English and German free-text responses were accepted.
Data analyses
To answer the first research question, "Are psychotherapy trainees familiar with the concept of open notes?", we analyzed the items in Section D on familiarity and previous experience with open notes. To answer the second research question, "What potential impact do trainees foresee open notes to have on psychotherapy patients?", we analyzed the closed-ended items from Section B and conducted a qualitative analysis of the free-text comments left in response to Item B3. For the last research question, "What potential impact do they foresee open notes to have on psychotherapists?", we analyzed the closed-ended items in Section C and conducted a qualitative analysis of the free-text comments left in response to Items C2 and C3. Items B3 and C2 were combined for coding and further analysis due to a large overlap in responses.
Quantitative data were analyzed using descriptive statistics, including averages, standard deviations, absolute values, and percentages. The numbers and frequencies of survey responses were prepared in Excel (v 16.61), and descriptive statistics and analyses were carried out in RStudio (v 1.2.5003). For the qualitative analysis, all comments were included. Any comments in German were translated into English by Berfin Bakis and SB. An inductive, thematic, data-driven approach was then employed to analyze the comments. SB and AK were the main coders who applied the initial codes; AB reviewed the codes independently and created the initial categories. The final categories and themes were adjusted by AK, AB, and SB. The impact of AK, SB, and AB's preunderstanding and prior experiences on the analytical process was reflected upon (see Supplementary Material 2). Initial coding was conducted using QCAmap (https://www.qcamap.org), and the frequency statistics for the final categories and themes were calculated in Excel (v 16.61). To maintain participant anonymity when providing direct quotes, participants were assigned a random individual numerical identifier. The qualitative data were reported following the Standards for Reporting Qualitative Research guideline (Supplementary Material 3).35
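The article's table notes state that theme percentages were calculated against the total number of coded passages, while category percentages were calculated within each theme. A minimal sketch of that two-level computation, using hypothetical coded passages rather than study data:

```python
from collections import Counter

# Hypothetical (theme, category) pairs assigned during qualitative coding
passages = [
    ("Negative impact on therapy", "Increased workload"),
    ("Negative impact on therapy", "Increased workload"),
    ("Negative impact on therapy", "Harm to the alliance"),
    ("Positive impact on therapy", "More transparency"),
    ("Impact on patients", "Anxiety or confusion"),
]

theme_counts = Counter(theme for theme, _ in passages)
category_counts = Counter(passages)
total = len(passages)

for (theme, category), n in sorted(category_counts.items()):
    theme_pct = 100 * theme_counts[theme] / total        # share of all coded passages
    category_pct = 100 * n / theme_counts[theme]         # share within the theme
    print(f"{theme} ({theme_pct:.0f}%) / {category}: {n} ({category_pct:.0f}%)")
```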
Ethical considerations
The study received ethical approval prior to data collection from the Ethics Committee of the Faculty of Psychology, University of Basel, Switzerland (#014-20-3). Informed consent was obtained online at the start of the survey. Participants were informed that there was no obligation to participate in the study and that they could choose not to take part without any penalty.
Respondents
Of the 201 contacted students, 72 (35.8%) completed the survey. Respondents were aged between 21 and 60 years, with a mean age of 29.2 years (see Table 1).
Quantitative analyses
Impact on psychotherapy. In general, most students tended to endorse a positive outlook on the impact of open notes on psychotherapy (see Figure 2).

Impact on psychotherapy patients. Participants predicted a diverse range of effects of open notes on patients (see Figure 3).
Impact on psychotherapists. Participants forecast divergent, and often negative, effects of open notes on psychotherapists' work and documentation practices (see Figure 4). Almost all respondents somewhat agreed or agreed that education about open notes should be part of psychotherapy training (94.1%).
Qualitative analyses
Respondents left a total of 199 comments. The qualitative analysis of Items B3 and C2 yielded 242 coded passages, which gave rise to 23 categories and four major themes (see Table 2). The biggest theme was Negative impact on therapy (32%), comprising a variety of predictions about the adverse effects of open notes on therapists' work as well as on the therapeutic process and the patient-therapist alliance. In this theme, over a third of comments predicted an increase in workload, both because of more time spent on documentation and because of anticipated impacts on therapy sessions (e.g., "Therapy sessions may take longer" [Participant #30, female, 29]). Trainees also expected that therapists would feel constrained, as they would have "less freedom in writing down thoughts" [Participant #17, female, 29] and "less intuition" [Participant #38, female, 29]. Some comments suggested open notes could hinder the therapy process by diverting attention from the patient or harming the course of therapy or the therapeutic alliance itself, for example: "Even though therapists are trained to establish and maintain a trustful and honest relationship with their clients, I think the point about honest notes and spending more time to edit notes so that the patient doesn't get offended might disturb this relationship" [Participant #45, male, 24].
In contrast, a quarter of all coded passages pointed to a Positive impact on therapy (23%). In this theme, most comments identified a variety of potential benefits of open notes for the therapy process (39.3%), e.g., "it could make the process more structured" [Participant #71, female, 25] or "it may be helpful to reflect on the progress made so far" [Participant #25, female, 24]. Specific predicted benefits, such as an increase in trust and transparency, were common. However, fewer comments expected improvements to the therapeutic relationship (14.3%), and only a few hypothesized a reduction in workload (5.3%).
Impact on the patients themselves was discussed in a quarter of all coded passages (Impact on patients, 25%). This theme contained more negative predictions than neutral or positive ones. For example, almost a third of passages anticipated an increase in anxiety or confusion among patients (31.3%). This was often linked to the anticipation that patients would be unable to understand the written notes due to technical jargon (e.g., "... patients will worry about the content, because the professional language is something else than how we talk to clients" [Participant #36, female, 30]). Participants also predicted that patients would feel more misunderstood or offended after reading their notes (e.g., therapists "would be more often confronted with offended patients or offended relatives of the patient").

Impact on psychotherapy types. The qualitative analysis of Item C3 yielded 48 coded passages, which were analyzed into three major themes with nine categories (see Table 3). One category, Therapist attitude (2 passages), was illustrated by the comment: "I'd say it is not primarily about the type of psychotherapy, but the attitude of the psychotherapist."
Discussion

Main findings
In general, this mixed-methods exploratory study of psychotherapy trainees' views of open notes revealed a varied picture. Participants anticipated that open notes could have negative effects on patients and on the practice of therapy. For example, more than eight in 10 participants somewhat agreed or agreed that patients would contact therapists more often with questions about their notes and that therapists would need to spend more time writing documentation. Similarly, eight in 10 somewhat agreed or agreed that therapists would be less candid in their notes knowing patients could read them, and around six in 10 somewhat agreed or agreed that patients would find significant errors in their notes.
At the same time, participants perceived benefits: many somewhat agreed or agreed that open notes would benefit informed consent processes and patient communication, and three quarters of those surveyed somewhat agreed or agreed that making open notes available to psychotherapy patients is a good idea. Similarly, around three in four participants somewhat agreed or agreed that patients would trust their therapist more and feel more in control of their own healthcare because of access.
Opinions on the effects on patients and therapists were further elucidated by the open comments. Four broad themes emerged: (a) negative impact on therapy; (b) positive impact on therapy; (c) impact on patients; and (d) documentation. Trainees identified concerns around increased workload, increased pressures on therapists, and harms to the therapy process and therapeutic relationship. They expressed concerns about patients feeling confused or anxious, misunderstood, judged, or offended. However, some trainees identified benefits with respect to the psychotherapy process, including identifying shared goals, strengthened patient autonomy, and increased transparency. Respondents also described a variety of challenges related to documentation, including changes in note taking and the need for "shadow records" after implementation, whereby therapists would curate a separate, private record written specifically for themselves. Qualitative analysis was applied to explore trainees' perceptions of the impact of open notes on different types of therapy. A major theme was that the innovation would not be implemented the same way for distinctive modalities, with some trainees anticipating that the approach would be more difficult to implement for some therapies, such as psychoanalysis. In addition, participants reported that open notes would differ depending on patient capabilities and needs and could prove more challenging depending on the diagnosis or among patients with acute illnesses.
Comparison with previous work
The findings of this study are consistent with those of other surveys undertaken on open notes in mental healthcare. Similar to other surveyed mental health clinicians,[19][20][21],29 psychotherapy trainees were sometimes positive about open notes, believing transparency and trust could be strengthened ("In principle, it is a good thing but…"22,36). However, as in the only qualitative study into psychotherapy professionals' views on the practice, attitudes were tempered by concerns about the risk of patient confusion and offense, and of compromising the psychotherapy relationship after access.22 In the Chimowitz study, participants were generally positive about sharing access, suggesting it could improve candor in clinical sessions. They reported few disruptions to their workload but admitted reluctance about discussing notes with patients during sessions. As the study authors noted, participants also tended to refer to open therapy notes mostly in hypothetical language, believing most patients would not read what they wrote. Like the opinions of trainees in our study, their experiences suggested a lack of confidence about using the notes in psychotherapy processes.
While the Chimowitz study explored the views of therapists who used psychodynamic and cognitive behavioral therapy approaches, it did not examine differential perspectives on these approaches to documentation. Our study is the first to identify potential challenges, including in documentation practices, of using open notes across distinct talking-therapy modalities.
In our study, trainees identified the potential need for a shadow record to document their psychotherapy notes. This echoes findings among other mental health clinicians who report keeping a shadow record to ensure that they preserve the necessary detail in notes without risking offense or confusion. In a Norwegian study, healthcare professionals noted they kept shadow records on paper or their own computer.32 Beyond mental health contexts, although some clinician surveys have been undertaken,29,37 very little is understood about how ORA might objectively modify the nature of records, including the detail, the documenting of differential diagnoses, or the use of accessible and sensitive language in notes.38 Previous research in psychotherapy ethics has highlighted the problems associated with informed consent in psychotherapy,[39][40][41][42][43] including the routine failure to provide pertinent and accessible information about psychotherapy processes and the value of specific treatment techniques. Recently, it has been proposed that open notes might provide a new tool in psychotherapy to help strengthen patient autonomy by providing a platform to extend opportunities to explain treatment techniques and processes.44 Despite identifying the potential for confusion associated with technical language or jargon in psychotherapy documentation, trainees in our study also anticipated that open notes could be a useful vehicle for strengthening patient autonomy, transparency, and informed consent processes.
More than six in 10 trainees believed that a majority of patients might find significant errors in their psychotherapy notes.45 This prediction is consistent with other studies which suggest that mental health clinicians and general practitioners also believe that patients will identify errors and omissions in their notes.19,37 In a study by Bell et al. in the USA, of more than 22,000 out-patients who read their notes at three diverse health systems, around one in five reported finding an error, with 40 percent perceiving the mistake as serious.46 Failure to correct errors, or to address potential omissions, means both patients and therapists may be relying on inadequate tools,1 which may further exacerbate the potential for misinterpretations and clinical errors.47 Trainees in our study also suggested that it might not be appropriate to open notes to some psychotherapy patients, depending on their condition or other patient factors. Participants' uncertainties about when it is appropriate to share access, including whether there are occasions when access might be unsuitable, echo concerns identified by other clinicians. In recent qualitative work exploring open notes in mental healthcare which surveyed patients, clinicians, and researchers, respondents agreed that failure to offer access to some patients could exacerbate stigmatization, creating its own harms, or lead to inappropriate decisions about whom to offer access.19,20 Currently, there is a lack of evidence-based policy and a lack of research focused exclusively on examining the experiences of sharing notes with patients with severe mental illnesses, addiction disorders, personality disorders, or those in hospitalized psychiatric care.23,24

Strengths and limitations

This is the first survey of psychotherapy trainees' familiarity with, and opinions about, the use of open notes in psychotherapy practice. A particular strength of the study is the diversity of psychotherapy modalities which participant trainees reported intending to practice.
This study has several limitations. Although the response rate was reasonable for an online survey (35.82%), the restricted sample size and the restriction to one academic center limit generalizations about trainees' views. The survey was administered during the COVID-19 pandemic, and it is unknown whether this affected response rates or response biases among our respondents. Comments to free-text questions were brief (only one or two sentences, or written in bullet points), restricting a more in-depth understanding of respondents' opinions. The free-text responses were more negative than the structured items, a predictable pattern in surveys where the methods are combined.48 Because the survey was administered online, it was not possible to obtain a more in-depth exploration of participants' views, which focus groups or interviews might have provided. Our participants were drawn from various levels of psychotherapy training, and disaggregating their responses could potentially have revealed differences in opinion; however, owing to the small sample size this was not practicable.
The majority of our participants had not used open notes in the past. Given the increasing international spread of ORA, exploring the awareness and opinions of tomorrow's psychotherapists is pertinent. The rapid expansion of telemedicine and digital interventions in psychotherapy suggests patients and therapists will increasingly expect asynchronous web-based tools to support treatment techniques.
Implications and future directions
We recommend that much more concentrated efforts are needed to solicit the views and experiences of open notes among practicing psychotherapists and clinical psychologists, including those who use different kinds of talking therapies. Experiences and best practices should also be summarized and included in the curriculum for the benefit of trainees, to ensure that they are prepared for writing notes.49,50 Previous research suggests that faculty's views on open notes might shape trainees' opinions.51 In tandem, given that clinicians' perceptions of patients' experiences with ORA are often at odds with patients' actual experiences,1 much more work is needed to explore the views of patients on accessing their psychotherapy notes online, including the benefits and harms of reading their documentation. Finally, advances in generative artificial intelligence, including chatbots powered by large language models (LLMs) such as ChatGPT, are set to change the frontiers of clinical documentation,52,53 including in mental health contexts.54 These chatbots have strengths in summarizing complex information and impressive abilities to attune writing styles and tone for different readerships, making them highly relevant for therapists writing open notes. However, LLMs also carry limitations: they tend to make things up ("hallucinations") and can embed biases and unwanted stereotyping.53 Therefore, we strongly suggest that future research examine how psychotherapy intersects with LLM-powered chatbots, including the opinions and experiences of therapists in using these tools.
Conclusions
This mixed-methods study provides exploratory insights into the views of 72 psychotherapy trainees in Switzerland regarding the impact of open notes on patient care and psychotherapy practice. In general, trainees expressed mixed views about open notes. They identified many potential benefits, including for patient communication, education, and informed consent processes. However, they also identified concerns related to the potential for access to increase workload, harm the psychotherapeutic relationship, compromise the quality of records, and increase the risk of shadow records.
Sharing psychotherapy notes is not routine in countries which have begun opening ORA to patients, including Switzerland. However, even in countries where access to therapy notes is not mandatory, many social workers and therapists report opening access to patients. Still, given the nature of treatment processes and techniques in psychotherapy contexts, open notes may invite unique challenges with respect to documentation, including potential risks of harm or offense, balanced against respect for patient transparency and autonomy, and more focused work is now needed to understand the particular challenges in this domain. Psychotherapists working with different therapeutic approaches will also need advice and guidance, including formal training, to become more comfortable writing and talking about documentation that patients may read, including how to manage disagreements, perceived errors, and feedback.
Figure 2. Distribution of responses to statements about the potential impact of open notes on psychotherapy. Ordered from highest agreement (combined answers "somewhat agree" and "agree") to lowest. Note: Item 5 is worded in a negative direction, which may have affected its rating.
Figure 3. Distribution of responses to statements about the potential impact of open notes on psychotherapy patients. Ordered from highest agreement (combined answers "somewhat agree" and "agree") to lowest.
Figure 4. Distribution of responses to statements about the potential impact of open notes on psychotherapists. Ordered from highest agreement (combined answers "somewhat agree" and "agree") to lowest.
Table 2. Themes and categories from qualitative analysis of comments about benefits and harms of open notes to patients and impact on psychotherapists. Note: Percentages for themes were calculated based on the total number of coded passages; percentages for categories were calculated based on the number of coded passages in that theme.
Table 3. Themes and categories from qualitative analysis of comments about impact of open notes on types of psychotherapy. Note: Percentages for themes were calculated based on the total number of coded passages; percentages for categories were calculated based on the number of coded passages in that theme.
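For readers replicating the coding summary, the percentage convention used in Tables 2 and 3 can be reproduced in a few lines. The sketch below uses hypothetical theme and category counts purely for illustration; none of the numbers are taken from the study.

```python
# Percentage convention from Tables 2 and 3:
#   theme %    = passages coded to the theme / all coded passages
#   category % = passages coded to the category / passages in its theme
# All counts below are hypothetical illustrations, not study data.
themes = {
    "Negative impact on therapy": {"Increased workload": 25, "Harm to alliance": 18},
    "Documentation": {"Changed note taking": 30, "Shadow records": 8},
}

total = sum(sum(categories.values()) for categories in themes.values())
for theme, categories in themes.items():
    theme_n = sum(categories.values())
    print(f"{theme}: {theme_n / total:.0%} of all coded passages")
    for name, n in categories.items():
        print(f"  {name}: {n / theme_n:.0%} of this theme")
```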
Rutin alleviated acrolein-induced cytotoxicity in Caco-2 and GES-1 cells by forming a cyclic hemiacetal product
Acrolein (ACR), an α, β-unsaturated aldehyde, is a toxic compound formed during food processing, and the use of phenolics derived from dietary materials to scavenge ACR is an active research area. In this study, rutin, a polyphenol widely present in various dietary materials, was used to investigate its capacity to scavenge ACR. It was shown that more than 98% of ACR was eliminated under the conditions of a reaction time of 2 h, a temperature of 80 °C, and a molar ratio of rutin/ACR of 2/1. Further structural characterization of the formed adduct revealed that the addition of rutin to ACR to form a cyclic hemiacetal compound (RAC) was the main scavenging mechanism. Besides, the stability of RAC during simulated in vitro digestion was evaluated, which showed that more than 83.61% of RAC remained. Furthermore, the cytotoxicity of RAC against Caco-2 and GES-1 cells was significantly reduced compared with ACR, where the IC50 values of ACR were both below 20 μM while those of RAC were both above 140 μM. Alleviation of the loss of mitochondrial membrane potential (MMP) by RAC might be one of the detoxification pathways. The present study indicated that rutin is one of the potential ACR scavengers among natural polyphenols.
Introduction
Acrolein (ACR), an α, β-unsaturated aldehyde, is one of the typical active carbonyl compounds (1). ACR can be produced endogenously by enzyme-mediated metabolism of threonine and polyamines, metabolism of the anticancer drug cyclophosphamide, and oxidation of unsaturated fatty acids on the cell membrane (2)(3)(4)(5). Due to the presence of the olefinic double bond and carbonyl group, ACR can react with biological nucleophiles such as DNA and proteins, resulting in dysfunction of biomacromolecules. Previous work has indicated that it is the meta-phenol structure of polyphenols that plays an essential role in their reaction with ACR (26). Inspired by the previous literature, we paid more attention to potential ACR scavengers derived from dietary materials.
Previously, a study by Zamora et al. (27) showed that ACR was trapped by quercetin to form an equimolar adduct, which was detected not only in an experimental model of frying onions with fresh rapeseed oil but also in commercial crispy fried onions. While quercetin is the dominant flavonol in onions, it usually exists in the form of glycosides, such as rutin (quercetin 3-O-rhamnosylglucoside) (28,29). The literature also reports that rutin is the second most abundant flavonoid in dry red onion skins after quercetin (30). Besides, rutin is also widely present in asparagus, buckwheat, and peppers, making it an important dietary constituent (31)(32)(33). In addition to its culinary virtues, substantial evidence suggests that rutin is involved in a variety of biological activities, including antioxidant, antiallergic, anti-inflammatory, and cardioprotective effects (34,35). Chemically, rutin also possesses the typical meta-phenol structure as a reaction site for ACR, but the scavenging effect of rutin on ACR has seldom been discussed.
In this work, a simulated system was established to investigate the effect of three factors on the scavenging of ACR by rutin. Subsequently, the adduct between ACR and rutin was identified, which also helped to elucidate the reaction mechanism. Considering that the adduct might be formed and ingested after cooking, a three-stage simulated in vitro digestion of the adduct was conducted. Furthermore, the cytotoxicity of RAC against the human intestinal epithelial cells (Caco-2) and gastric epithelial cells (GES-1) was determined with ACR as the control, followed by the measurement of mitochondrial membrane potential (MMP).
The influence of temperature, reaction time, and molar ratio on the elimination of ACR by rutin

In the present study, three variables were taken into consideration, namely temperature (40-100 °C), reaction duration (0.5-8 h), and the molar ratio of rutin to ACR (from 1/2 to 4/1), and the remaining level of ACR was determined. A mixture of rutin, ACR, and 20 mL of PBS was prepared in a 50 mL three-necked flask and reacted under light-proof conditions. The different variables were studied separately while keeping the other conditions constant as follows: temperature of 80 °C, reaction time of 2 h, and molar ratio of rutin/ACR of 1/1. At the end of each reaction, ACR was derivatized with DNPH, and its content was determined as below.
Qualitative and quantitative determination of ACR
The stock solution, equivalent to an ACR concentration of approximately 100 µg/mL, was prepared by dissolving 10.7 mg of ACR-DNPH standard in 25 mL of acetonitrile (ACN). The ACR-DNPH standard curve (2 to 100 µg/mL) was established by serial dilutions of the stock solution. The HPLC conditions were as follows. An Agilent 1260 Infinity II HPLC system (Agilent Technologies, Palo Alto, CA) equipped with a DAD and a Zorbax SB-C18 column (250 × 4.6 mm; 5 µm; Agilent Technologies) was utilized for the determination of ACR-DNPH. The separation was conducted under isocratic conditions with 50% A (water) and 50% B (ACN) at a flow rate of 1.0 mL/min for 15 min. The column temperature was 35 °C, the injection volume was 20 µL, and the detection wavelength was set at 365 nm. Based on the retention time as well as the peak area, the standard curve of the ACR-DNPH derivative was obtained.
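The calibration workflow described above (serial dilutions, peak areas, a least-squares line, back-calculation of unknowns) can be sketched as follows. This is a minimal illustration: the concentration levels and peak areas are hypothetical placeholders, not the measured values.

```python
import numpy as np

# Hypothetical ACR-DNPH standards (µg/mL) and their HPLC peak areas;
# in practice these come from the serial dilutions described above.
conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([6.1e5, 1.58e6, 3.2e6, 8.1e6, 1.62e7, 3.25e7])

# Least-squares calibration line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
r_squared = np.corrcoef(conc, area)[0, 1] ** 2

# Back-calculate the ACR concentration of an unknown from its peak area.
unknown_area = 2.4e6
unknown_conc = (unknown_area - intercept) / slope
print(f"r^2 = {r_squared:.4f}; unknown = {unknown_conc:.1f} µg/mL")
```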
DNPH was recrystallized before use as follows: excess DNPH was added to 200 mL of methanol (MeOH) and heated for 1 h; the supernatant was collected and further heated at 60 °C to remove 95% of the solvent; the crystals were then washed twice with 100 mL of MeOH, transferred to another clean beaker, and co-heated with 200 mL of MeOH; and the resulting crystals were stored under seal. The 0.1 M DNPH solution was prepared by dissolving 5.0548 g of the purified DNPH in 250 mL of ACN containing 5 mL of 10% phosphoric acid solution.
Derivatization of ACR with DNPH was conducted in accordance with Reilly et al. (36). In short, one milliliter of each sample was incubated with 1 mL of diluted DNPH solution (0.01 M) for 40 min at 40 °C, followed by extraction with 10 mL of MeOH/water (75:25, v/v). After centrifugation (6,000 rpm, 10 min), the supernatant was collected and further extracted with 10 mL of dichloromethane to obtain the DNPH derivatives. Prior to HPLC analysis as above, the DNPH derivatives were redissolved in 1 mL of HPLC grade ACN and passed through a 0.45 µm filter (Bie & Berntsen, Rødovre, Denmark).
Preparation and purification of the rutin-acrolein adduct (RAC)
Based on the above results, rutin (2 mmol) and ACR (1 mmol) dissolved in 25 mL of PBS were mixed in a three-necked flask and reacted at 80 °C for 2 h in the dark. After cooling to room temperature, the supernatant was obtained by vacuum filtration and concentrated to 2 mL in a rotary evaporator at 45 °C before loading onto a Sephadex LH-20 column (105 × 2.2 cm) for purification. MeOH was applied as the eluent and the flow rate was set at 0.6 mL/min. As a result, the purified adduct of rutin and ACR, namely RAC, was obtained.
Structural characterization of the adduct
The reaction progress was tracked by a Shimadzu LC-20 system (Tokyo, Japan) equipped with a diode array detector (DAD). The HPLC conditions were as follows: the mobile phase was composed of MeOH (A) and 0.1% aqueous acetic acid (B), with gradient elution from 10% A to 100% A within 30 min at a flow rate of 1.0 mL/min. With an injection volume of 10 µL, a Zorbax SB-C18 column (150 × 4.6 mm; 5 µm; Agilent Technologies) was used, and the UV spectra were recorded in the range of 190-400 nm.
The mass spectrometry (MS) data were obtained in the negative mode by direct injection into an LCMS-8045 triple quadrupole mass spectrometer (Shimadzu Corporation, Tokyo, Japan) equipped with electrospray as the ionization source. The scan range was from m/z 50 to m/z 1000. Other parameters for MS and MS/MS measurements were set with reference to the method of Qi et al. (37).
Additionally, 0.5 mL of DMSO-d6 and 15 mg of the sample were added into a 5-mm nuclear magnetic resonance (NMR) tube, and the spectra, including the 1H NMR, 13C NMR, and 2D NMR (1H-1H COSY and HMBC) spectra, were recorded.
In vitro simulated digestion
The simulated oral, gastric, and intestinal digestion experiments of RAC were carried out according to Mamone et al. (39) with minor adjustments. One milliliter of RAC (1 mg/mL) was mixed with 4 mL of SSF and stirred at 170 rpm for 2 min under light-proof conditions to mimic the oral phase of digestion. Building on the oral stage, 10 mL of SGF was added and reacted for another 2 h under the same conditions to imitate the gastric phase. To investigate the intestinal phase, 20 mL of SIF was further added, and the mixture was stirred for another 2 h after simulated gastric digestion. For each stage, the same volume of deionized water was used in place of the corresponding digestive fluid as the control. At the end of each stage, 0.8 mL of the mixture was withdrawn and diluted with MeOH to 1.0 mL for HPLC analysis. The standard curve of RAC was prepared as follows: one milligram of RAC was dissolved in MeOH to a final concentration of 1 mg/mL. This solution was further used to prepare a series of RAC concentrations at 0.02, 0.04, 0.08, 0.2, and 1.0 mg/mL, respectively. The HPLC conditions for the determination of the content of RAC in the digestion mixture and for the preparation of the standard curve were the same as those for the analysis of RAC, except that the detection wavelength was 358 nm.
Cell culture
The Caco-2 cell line and GES-1 cell line were obtained from iCell Bioscience (Shanghai, China). As described by Chai et al., both cell lines were cultured in DMEM supplemented with 10% FBS at 37 °C.
Cell survival
The effects of ACR and RAC on cell survival (in both Caco-2 and GES-1 cells) were measured by the CCK-8 assay (41). Briefly, cells (1 × 10^4 cells/well) were seeded into a 96-well plate and cultivated overnight. Then, 100 µL of sample solutions at different concentrations were added into the wells, followed by incubation at 37 °C for another 48 h. After the cells were rinsed twice with PBS and resuspended in 100 µL of DMEM supplemented with 10% FBS, 10 µL of CCK-8 solution was added into each well, and the plate was maintained for 1 h in the incubator. The plate was read at 450 nm on a microplate reader (BioTek, Epoch 2, USA), and all experiments were performed thrice. Cell viability was expressed as a percentage of the control group as shown below: cell viability (%) = (A_sample/A_control) × 100%, where A_sample was the absorbance (Abs) of the cells incubated with different concentrations of sample solutions, and A_control was the Abs of the cells incubated without sample solution.
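A minimal sketch of the viability formula, together with a crude log-linear interpolation for an IC50 estimate, is given below. The absorbance readings and concentrations are hypothetical, and the interpolation step is an illustrative add-on rather than the method used in the study, which reported IC50 values only as bounds.

```python
import numpy as np

def viability(abs_sample, abs_control):
    """Cell viability (%) = A_sample / A_control x 100."""
    return abs_sample / abs_control * 100.0

# Hypothetical dose-response data: concentrations (µM) and absorbances.
conc = np.array([20.0, 40.0, 60.0, 100.0, 140.0])
abs_sample = np.array([0.62, 0.45, 0.30, 0.18, 0.10])
abs_control = 0.80

viab = viability(abs_sample, abs_control)

# Crude IC50 estimate: interpolate log10(conc) at 50% viability.
# Valid only if viability actually crosses 50% within the tested range.
ic50 = 10 ** np.interp(50.0, viab[::-1], np.log10(conc)[::-1])
print(viab.round(1), f"IC50 approx. {ic50:.1f} µM")
```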
Apoptosis assay
Cell apoptosis assays were conducted with reference to Abas et al. (42) with modifications. In short, cells at a density of 1 × 10^4 cells/well were seeded in 96-well plates and incubated overnight, followed by exposure to different concentrations of ACR and RAC for another 24 h. After that, cells were rinsed twice with PBS, resuspended in 200 µL of Annexin-binding buffer, and finally stained with Annexin V-FITC and PI (5 µL each) for 10 min at 37 °C in darkness. After adding 400 µL of Annexin-binding buffer, the signal intensity was measured using a FACSCalibur flow cytometer (Becton Dickinson, CA, USA) within 1 h, and the data were analyzed with FlowJo software (Ashland, OR, USA).
JC-1 staining
After incubation with different concentrations of samples, the changes of MMP in Caco-2 and GES-1 cells were measured in accordance with Liu et al., with slight modifications (43). In brief, cells (1 × 10^4 cells/well) were exposed to different concentrations of ACR and RAC for 24 h, respectively. Then, 500 µL of JC-1 staining solution was added into each well, and the plate was further incubated at 37 °C for 15 min. Cells were then rinsed twice with 1 mL of staining buffer and resuspended in 500 µL of staining buffer prior to MMP determination by flow cytometry.
Statistical analysis
All experiments were carried out in triplicate, and the data were expressed as mean ± standard deviation. For statistical analysis, analysis of variance (ANOVA) was performed with GraphPad Prism version 9.0.0 (GraphPad Software Inc., San Diego, CA). Differences were considered significant if p < 0.05.
Results and discussion

Effects of different parameters on the elimination of ACR
Based on the calibration curve (y = 326781x − 183802; r² = 0.9999; x was the concentration of each sample and y was the corresponding peak area), the residual content of ACR was depicted in Figure 1. After incubation at 40, 60, 80, 90, and 100 °C (Figure 1A), the residual content of ACR decreased as the temperature increased. Similar phenomena could be seen with regard to the effect of reaction time on the capture of ACR. As displayed in Figure 1B, ACR was eliminated by rutin in a time-dependent manner, with more than 94% being removed after 8 h of reaction. However, the content of ACR did not decrease significantly after 4 h of reaction (p > 0.05). Figure 1C shows the effect of the molar ratio of rutin to ACR on the elimination of ACR. With the rise of the molar ratio of rutin/ACR, the content of residual ACR exhibited an overall downward trend, from 18.54 µmol at 1/2 to 6.69 µmol at 4/1. Additionally, it could be observed that the content of ACR decreased significantly between the molar ratios of 1/1 and 2/1 (p < 0.001).
In addition to rutin, other phenolic compounds with a meta-phenol configuration in the A ring have also been found to capture ACR. According to Zhu et al. (21), after polyphenols (1.0 mM) were incubated with ACR (0.5 mM) for 1.5 h at 37 °C, their ability to remove ACR, in decreasing order, was phloretin, epigallocatechin-3-gallate, epicatechin-3-gallate, epicatechin, epigallocatechin, theaflavin-3,3'-digallate, theaflavin, cyanomaclurin, and phloridzin. Additionally, in the study by Wang et al. (25), resveratrol and hesperetin could eliminate 93.6 and 94.87% of ACR, respectively, after incubation with equal concentrations of ACR for 12 h at 37 °C. Combining the above results, it was found that not only the specific structure of polyphenols but also the elevated temperature contributed to the higher scavenging efficiency of ACR. As a result, more RAC was prepared under the reaction conditions of temperature at 80 °C, reaction time of 2 h, and molar ratio of rutin/ACR of 2/1 for subsequent experiments.
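As a worked illustration of the quantitation step, the calibration equation reported above (peak area = 326781 × concentration − 183802) can be inverted to obtain residual ACR and the percentage scavenged; the peak areas below are placeholders, not measured data.

```python
# Reported calibration: peak_area = 326781 * conc - 183802,
# where conc is the ACR concentration of the sample (units as calibrated).
SLOPE, INTERCEPT = 326781.0, -183802.0

def acr_concentration(peak_area):
    """Invert the calibration line to recover residual ACR."""
    return (peak_area - INTERCEPT) / SLOPE

def percent_scavenged(residual, control):
    """Elimination efficiency relative to an untreated control."""
    return (1.0 - residual / control) * 100.0

# Hypothetical peak areas for a control and a rutin-treated sample.
control_conc = acr_concentration(3.1e6)
treated_conc = acr_concentration(1.2e5)
print(f"{percent_scavenged(treated_conc, control_conc):.1f}% of ACR scavenged")
```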
Structural characterization of RAC
Compared with rutin [retention time (Rt), 16.73 min], a new chromatographic peak (Rt = 18.46 min) could be seen after 0.5 h of reaction (Figure 2A). This compound was obtained after purification with a chromatographic purity of 99% and named RAC. Its spectroscopic data are displayed in Figure 2B and summarized in Table 1 (NMR measured at 600 (1H) and 150 (13C) MHz in DMSO-d6, with chemical shifts expressed in parts per million (ppm); "G" is the abbreviation of "glucose" and "R" of "rhamnose").
Since the 1D NMR data of rutin and RAC exhibited high similarity, comprehensive interpretation of the 2D NMR data of RAC mainly focused on the newly appearing signals. Its 1H-1H COSY spectrum (Supplementary Figure S5) and HMBC spectrum (Supplementary Figure S6) showed correlations from H-11 to C-12/C-13, indicating the fragment C(13)-C(12)-C(11). Additionally, HMBC correlations from H-11 to C-8/C-9/C-7 indicated the linkage of C-11 to C-8, and the correlation from H-13 to C-7 indicated the fragment C(13)-O-C(7). Therefore, the chemical structure of RAC, a novel compound, was established as shown in Figure 3. The reaction mechanism of ACR and polyphenols with meta-phenol structures to form the corresponding hemiacetals had been put forward previously (20, 26). Therefore, the chemical rationale for the interaction between ACR and rutin was proposed as follows (Figure 4): the C=C of ACR adducted to C-8 of rutin through electrophilic addition, followed by intramolecular nucleophilic addition between the contiguous hydroxyl at C-7 and the -CHO of ACR to form a more stable cyclic hemiacetal structure.

Figure 4. The proposed reaction mechanism of acrolein and rutin to form the hemiacetal adduct RAC.
Simulated in vitro digestion
The content of RAC was calculated based on its calibration curve (y = 34274493x − 100977; r² = 0.9998; x was the concentration of each sample and y was the corresponding peak area). As depicted in Figure 5 and Supplementary Table S1, after simulated oral digestion, the content of RAC was 0.595 ± 0.0018 mg/mL, and a decrease of only 0.33% indicated that RAC remained essentially unchanged at this stage (p > 0.05). When the adduct was further subjected to simulated gastric digestion for 120 min, a reduction of 24.41% was observed, from 0.167 ± 0.0042 mg/mL to 0.151 ± 0.0021 mg/mL. Surprisingly, a significant increase in the amount of RAC after 30 min of simulated intestinal digestion was observed, compared with that of RAC after 120 min of simulated gastric digestion (p < 0.001). In addition, the amount of RAC after 120 min of simulated intestinal digestion (2.50 ± 0.01 mg) and that after 30 min of simulated gastric digestion (2.50 ± 0.06 mg) were almost at the same level (p > 0.05). Overall, 83.61% of RAC remained after in vitro simulated oral, gastric, and intestinal digestion.
It has been reported that hemiacetals are stable under alkaline conditions but can be reversibly hydrolyzed back to the starting aldehyde and alcohol in acidic aqueous solution (44,45). Therefore, the negligible hydrolysis of RAC during simulated oral and intestinal digestion was attributed to the weakly acidic environment (pH 6.8) (p > 0.05), while the low pH (1.5) during simulated gastric digestion led to the sharp decline of RAC. Furthermore, after 30 min of simulated gastric digestion (Supplementary Figure S7), new peaks could be observed by HPLC analysis, and their peak areas increased with time. However, owing to the low content of the newly formed peaks, detailed information about them was hard to obtain in this study. Based on their UV spectra, it was tentatively presumed that they had a structure similar to RAC, with absorbances at 260, 270, and 359 nm.
Cytotoxicity of the adduct against Caco-2 and GES-1 cells with acrolein as the control
Given that ingested ACR mainly affects the gastrointestinal system (46), the cytotoxicity of ACR and RAC against the GES-1 and Caco-2 cell lines was examined. The CCK-8 assay is an effective method to quantify viable cells, as the formation of orange, water-soluble formazan is positively correlated with the number of living cells (47). As demonstrated in Figures 6A,B, ACR inhibited the viability of both Caco-2 and GES-1 cells in a concentration-dependent manner. Specifically, after incubation with ACR for 24 h, the viable cell number of Caco-2 cells decreased by 93.06%, from 31.83% (20 µM) to 6.97% (140 µM), and that of GES-1 cells decreased by 88.78%, from 49.57% (20 µM) to 11.22% (140 µM). In comparison, RAC-induced cell death was relatively modest. More than 75% of both Caco-2 and GES-1 cells survived after exposure to 140 µM of RAC for the same duration. It was also observed that the viability of Caco-2 and GES-1 cells treated with RAC was 10- and 6.5-fold higher, respectively, than that of ACR-treated Caco-2 and GES-1 cells. Furthermore, the IC50 values of ACR for Caco-2 and GES-1 cells were both below 20 µM, while those of RAC for both cell lines were above 140 µM. Annexin V has high affinity and specificity for phosphatidylserine, which can appear on the outer side of the plasma membrane as a signal of apoptosis, while PI is capable of permeating incomplete membranes and binding to DNA (48). Therefore, Annexin V and PI double staining was further used to determine apoptosis. It could be seen that ACR concentration-dependently induced apoptosis in Caco-2 and GES-1 cells as well (Figures 6C,D). After Caco-2 cells were treated with 20, 40, and 60 µM ACR for 24 h, the proportion of viable cells declined to 87.5, 71.3, and 40.0%, respectively, while that of apoptotic cells increased to 12.19, 28.59, and 59.50%, respectively. In contrast, Caco-2 cells treated with different concentrations of RAC exhibited a negligible increase in apoptosis (p > 0.05), from 8.3% (20 µM) to 10.81% (60 µM). Furthermore, the apoptotic rate of cells exposed to 60 µM of ACR was 5.5 times higher than that of cells exposed to the same concentration of RAC. The above data demonstrated that ACR-induced apoptosis was significantly alleviated after ACR was captured by rutin to form RAC. Similar results could be seen in GES-1 cells. To be specific, the apoptotic rate of ACR-treated GES-1 cells increased from 16.28% (20 µM) to 68.10% (60 µM). However, even exposure to 60 µM of RAC caused only 12.59% of GES-1 cells to undergo apoptosis, 5.4-fold lower than the 60 µM ACR-treated group. Consequently, it could be concluded that the formation of RAC attenuated ACR-induced cytotoxicity against Caco-2 and GES-1 cells.
Determination of MMP in Caco-2 and GES-1 cells
The cytotoxic effects of ACR against Caco-2 and GES-1 cells have also been studied in other literature, and there is accumulating evidence suggesting a link between mitochondrial dysfunction and apoptosis (9,49). Hence, changes of MMP in cells were further measured with JC-1, as a decrease of MMP is a hallmark of early apoptotic cells (50). JC-1 is a lipophilic fluorescent probe capable of entering the mitochondrial matrix, where it aggregates to form polymers with red fluorescence under high membrane potential, while existing as monomers with green fluorescence under low membrane potential. In addition, the enhancement of green fluorescence along with the reduction of red fluorescence correlates with the loss of MMP (51).
As depicted in Figure 7A, ACR induced the loss of MMP in Caco-2 cells dose-dependently, with an increase of green fluorescence and a decrease of red fluorescence. The ratio of red/green fluorescence intensity in the ACR-treated group significantly decreased from 26.54 (control, data not shown) to 1.97 (60 µM) (Supplementary Table S2) (p < 0.001), strongly suggesting depolarization of the MMP in Caco-2 cells. However, RAC attenuated the transformation from red to green fluorescence (Figure 7A), and the ratio of red/green fluorescence intensity of the 60 µM RAC-treated group (9.47) was even higher than that of the 20 µM ACR-treated group (5.32) (Supplementary Table S2). A similar phenomenon was observed in GES-1 cells incubated with ACR and RAC (Figure 7B). With increasing concentrations of ACR, the red fluorescence was reduced while the green fluorescence was increased. Besides, the ratio of red/green fluorescence intensity in the ACR-treated group significantly decreased from 17.71 (control, data not shown) to 2.13 (60 µM) (p < 0.001), further indicating the disruption of MMP in GES-1 cells (Supplementary Table S3). However, the reduction of MMP in GES-1 cells was alleviated when incubated with RAC (Figure 7B), where the ratio of red/green fluorescence intensity of the 60 µM RAC-treated group (6.72) was also higher than that of the 20 µM ACR-treated group (5.68) (Supplementary Table S3). As a result, it could be tentatively concluded that RAC attenuated ACR-induced cytotoxicity in Caco-2 and GES-1 cells by ameliorating the loss of MMP in mitochondria.
Conclusion
ACR is not only a foodborne pollutant but also a trigger of several serious diseases. In this study, the capacity of rutin, a dietary polyphenol found in several frequently consumed ingredients such as onions and peppers, to scavenge ACR was investigated. It was shown that rutin could scavenge more than 98% of ACR under the set conditions (temperature, 80 °C; reaction time, 2 h; and molar ratio of rutin/ACR, 2/1). Besides, the results indicated that rutin scavenged ACR through the formation of a hemiacetal adduct (RAC), whose structure was identified for the first time; C-8 and the hydroxyl at C-7 of rutin were the reaction sites for ACR. Due to the presence of the pH-sensitive hemiacetal structure, RAC was partially degraded after the three-stage simulated digestion, with 83.61% remaining. Furthermore, the results revealed that RAC ameliorated ACR-induced cytotoxicity against Caco-2 and GES-1 cells through alleviation of the loss of MMP. Overall, the above observations suggest that rutin could be one of the potential ACR scavengers, and rutin-enriched dietary materials might help limit the ACR released during domestic cooking.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary materials, further inquiries can be directed to the corresponding authors.
The Association between Skull Bone Fractures and the Mortality Outcomes of Patients with Traumatic Brain Injury
Introduction Skull fractures are often found in patients with traumatic brain injury (TBI). Although skull fractures may indicate greater force impact and are associated with local or diffuse brain injuries, the prognostic value of skull fractures remains unclear. This retrospective study aimed to assess the association between skull fractures and mortality in patients with TBI. Methods This study included 5,430 TBI patients registered in the trauma registry system from January 2009 to December 2018. Clinical and demographic data, including age, sex, trauma mechanisms, comorbidities, Glasgow Coma Scale (GCS) score, abbreviated injury score (AIS)-head, injury severity score (ISS), and in-hospital mortality, were acquired. Multiple logistic regression and propensity score matching were used to elucidate the effect of skull fractures on the mortality outcomes of TBI patients. Results Compared to TBI patients without skull fracture, patients with skull fractures were predominantly male, younger, had lower GCS upon arrival at the emergency room, and had higher AIS-head, ISS, and in-hospital mortality. The patients with skull fracture had 1.7-fold adjusted odds of mortality (95% confidence interval (CI): 1.27–2.25; p < 0.001) compared with those without skull fracture, controlling for age, sex, comorbidities, and AIS-head. Additionally, the propensity score-matched analysis of 1,023 selected paired patients revealed that skull fracture was significantly associated with a 1.4-fold increased odds of mortality (95% CI: 1.02–1.88; p=0.036). Conclusions Using a propensity score-matched cohort to attenuate the confounding effect of age, comorbidities, and injury severity, skull fracture was identified as a significant independent risk factor for mortality in patients with TBI.
Introduction
Traumatic brain injury (TBI) is one of the leading causes of morbidity and mortality associated with road traffic accidents [1]. Currently, computed tomography (CT) has become the standard for the initial evaluation of patients with suspected TBI, and skull fractures are often found in patients with TBI. It has been estimated that out of 4,660 patients with TBI, 28% had skull fractures. In addition, skull fractures are found in 25% of patients with fatal head injuries at autopsy [2]. The clinical importance of skull fractures has been reported in the literature; however, the prognostic value of skull fractures remains unclear. Simulation data showed that skull fractures could reduce the risk of diffuse brain injury but increase the risk of brain contusion [3]. Skull fractures have been reported to contribute to unfavorable outcomes in moderate [4] or severe TBI [5] and to increase the risk of leakage of cerebrospinal fluid [6]. Additionally, skull fractures have been associated with local or diffuse injuries of the brain, including cranial nerve injury, seizures, and intracranial hemorrhage [2]. It has also been reported that associated neurologic deficits and complications are more common in patients with skull fractures than in patients without skull fractures [7].
Of note, the comorbidities and demographic features of patients, as well as trauma severity, may confound assessment of the effect of skull fracture on mortality in patients with TBI. For example, aging is associated with a decrease in skull bone stiffness and may increase the occurrence of skull fractures [8]. Furthermore, women have been suggested to have favorable outcomes with better recovery than men [9], an effect suggested to result from higher levels of circulating estrogen and progesterone [10][11][12][13][14]. In patients with liver cirrhosis, the risk of sustaining a skull fracture was found to be increased 1.75-fold [15]. Meanwhile, the comorbidities of patients and an associated higher injury severity score (ISS) were also associated with increased mortality in patients with TBI [16].
To assess the effect of skull fracture on the mortality of patients with TBI, the present study was designed to investigate the relationship via a propensity score-matched cohort analysis of the registered data to attenuate the confounding effects of the associated comorbidities, demographic features, and injury severity of patients.
Ethics Statement.
This study was approved by the Institutional Review Board (IRB) of Chang Gung Memorial Hospital (approval number 202000057B0). Because the study was designed as a retrospective analysis of the registered database, the need for informed consent was waived according to IRB regulations.
Patient Population and Retrieved Information.
We collected the medical data of 35,154 patients (Figure 1) between January 2009 and December 2018 from the trauma registry system of a level I trauma center in Southern Taiwan [17][18][19][20]. Only hospitalized adult patients (age ≥20 years) with TBI were included in this study. The abbreviated injury score (AIS) was used to evaluate injury severity in the following body regions: head/neck, face, chest, abdomen, extremities (including pelvis), and external region [21]. The AIS is a simplified, expert-based anatomical scale for the severity of bodily injuries, including traumatic brain injury [22,23], with AIS = 1-6 points assigned to injuries based on mortality probability. An injury with AIS = 1 is never fatal, while an injury with AIS = 6 is almost certainly fatal. The ISS was calculated by summing the squares of the three highest AIS scores in each body region [24,25] and was categorized into groups of 1-15 (mild to moderate), 16-24 (severe), and >24 (critical). Patients with multiple trauma (AIS ≥3 in body regions other than the head) (n = 1,457), aged less than 20 years (n = 683), with burn injury (n = 2), or with incomplete data (n = 0) were excluded. The retrieved patient information included age, sex, comorbidities (cerebrovascular accident (CVA), hypertension (HTN), coronary artery disease (CAD), congestive heart failure (CHF), diabetes mellitus (DM), and end-stage renal disease (ESRD)), trauma mechanisms, Glasgow Coma Scale (GCS) score upon arrival at the emergency department, AIS, ISS, hospital length of stay (LOS), and in-hospital mortality. According to the GCS, the severity of TBI was categorized into mild (13-15), moderate (9-12), and severe (<8) injuries [26].
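The ISS definition quoted above, the sum of the squares of the three highest AIS scores across body regions, together with the severity banding used in this study, can be expressed as a short function; the example patient below is hypothetical.

```python
def injury_severity_score(ais_by_region):
    """ISS = sum of the squares of the three highest AIS scores,
    using one AIS score (1-6) per body region."""
    top_three = sorted(ais_by_region.values(), reverse=True)[:3]
    return sum(score * score for score in top_three)

def iss_group(iss):
    """Severity grouping used in this study."""
    if iss <= 15:
        return "mild to moderate (1-15)"
    if iss <= 24:
        return "severe (16-24)"
    return "critical (>24)"

# Hypothetical patient: head-dominant injury with minor other injuries.
ais = {"head/neck": 4, "face": 1, "chest": 2, "abdomen": 0,
       "extremities": 2, "external": 1}
iss = injury_severity_score(ais)
print(iss, "->", iss_group(iss))  # 24 -> severe (16-24)
```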
Statistical Analysis.
Patient characteristics are summarized as mean ± standard deviation, median with interquartile range (GCS and ISS), or frequency (%), as appropriate. Demographic traits and clinical variables were compared between the two groups of patients (those with skull fracture versus those without skull fracture) using the chi-square test. In this study, the primary outcome measure was in-hospital mortality. The adjusted odds ratio of mortality was calculated using logistic regression, controlling for age, sex, comorbidities, and AIS-head. Independent risk factors for mortality were evaluated via univariate and multivariate logistic regressions, the latter of which included parameters that were significant in the univariate model. In addition, a selected cohort was studied with propensity score matching of parameters with significance in the multivariate logistic regression to evaluate the effect of skull fracture on mortality. All analyses were performed using the SPSS software (IBM, version 23). A 1:1 propensity score-matched study population was created by the greedy method using the R software (version 3.5.0; package: MatchIt, function: matchit) with a 0.2 caliper width to attenuate the influence of confounding variables on the outcome assessment. A p value of <0.05 was set to determine statistically significant group differences.
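The matching procedure was performed in R (MatchIt, greedy method, 0.2 caliper); a rough Python analogue, estimating propensity scores by logistic regression and greedily pairing on the logit with a caliper of 0.2 standard deviations, might look like the sketch below. The column names and synthetic data are hypothetical and for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def greedy_match(df, treatment, covariates, caliper_sd=0.2):
    """1:1 greedy nearest-neighbor matching on the logit of a
    logistic-regression propensity score, with a caliper expressed
    in standard deviations of the logit."""
    X = df[covariates].to_numpy(dtype=float)
    y = df[treatment].to_numpy(dtype=int)
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps))
    caliper = caliper_sd * logit.std()

    treated = np.flatnonzero(y == 1)
    controls = set(np.flatnonzero(y == 0).tolist())
    pairs = []
    for t in treated:
        if not controls:
            break
        c = min(controls, key=lambda j: abs(logit[j] - logit[t]))
        if abs(logit[c] - logit[t]) <= caliper:  # enforce the caliper
            pairs.append((t, c))
            controls.remove(c)
    return pairs

# Synthetic demonstration with hypothetical covariates.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "skull_fracture": rng.integers(0, 2, 200),
    "age": rng.normal(55, 19, 200),
    "male": rng.integers(0, 2, 200),
    "ais_head": rng.integers(2, 6, 200),
})
pairs = greedy_match(demo, "skull_fracture", ["age", "male", "ais_head"])
print(len(pairs), "matched pairs")
```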
Results
As given in Table 1, a total of 5,430 patients who were brought to our emergency room, including 3,279 men (60.4%) and 2,151 women (39.6%), were included in this study. The mean age at the time of the accident was 55.1 ± 19.6 years. The most commonly encountered trauma mechanisms were motorcycle accidents (n = 2,844, 52.4%), followed by fall accidents (n = 1,732, 31.9%). Of these patients, 1,058 (19.5%) had skull fractures according to radiographic reports. HTN and DM were the first and second most common comorbidities, respectively, of these patients. Most patients presented with mild TBI with a GCS score of 13-15 (75.9%) and sustained an ISS <25 (90.2%). Of the patients with TBI, the median head AIS score was 4, and the overall in-hospital mortality rate was 6.5%. With male sex predominance, the average age was significantly lower in the skull fracture group than in the non-skull fracture group (Table 1). Significant differences in comorbidities and trauma mechanisms were also observed between patients with and without skull fractures. The GCS upon arrival at the emergency room was significantly lower in the skull fracture group than in the non-skull fracture group (median (Q1-Q3): 14 (9-15) vs. 15 (13-15); p < 0.001). Patients with skull fracture were also associated with higher ISS (16 (13-20) vs. 14 (9-16); p < 0.001), AIS-head (4 (3-4) vs. 3 (2-4); p < 0.001), mortality (10.3% vs. 5.6%; p < 0.001), and hospital stay (12.3 days vs. 10.4 days; p < 0.001) than those without skull fractures. The patients with skull fracture had 1.7-fold adjusted odds of mortality (95% CI: 1.27-2.25; p < 0.001) compared with those without skull fracture, under conditions controlled for age, sex, comorbidities, and AIS-head. Table 2 provides the regression analysis of the associated risk of mortality by the presence of skull fracture, sex, age, comorbidities, AIS-head = 4, AIS-head = 5, and ISS. In univariate analysis, skull fracture was significantly associated with mortality (odds ratio, 1.9; 95% CI: 1.52-2.44; p < 0.001). Age, CVA, HTN, CAD, and ESRD were also significantly associated with mortality in patients with TBI. AIS-head (OR (95% CI): 10.0 (8.06-12.35); p < 0.001) and ISS (1.3 (1.24-1.29); p < 0.001) were also significant risk factors for mortality. These parameters affecting mortality were included in further multivariate analyses to clarify their independent effects on mortality in patients with TBI. Skull fracture had a significant effect on the increased mortality rate (1.8 (1.35-2.48); p < 0.001). In addition, age (1.0 (1.01-1.02); p = 0.002), CAD (2.1 (1.32-3.31); p = 0.002), and ESRD (4.2 (2.46-7.04); p < 0.001), but not CVA and HTN, were identified as independent risk factors for mortality. No trauma mechanisms were identified as independent risk factors for mortality. AIS-head = 4 and 5 were also associated with a significantly higher mortality rate (AIS-head = 4: 4.5 (1.76-11.51); p = 0.002 and AIS-head = 5: 88.4 (27.28-286.26); p < 0.001, respectively). In contrast, AIS-head = 3 and ISS were not found to be significant risk factors for mortality in patients with TBI.
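Adjusted odds ratios and 95% confidence intervals of the kind reported in Table 2 are obtained by exponentiating logistic-regression coefficients and their confidence bounds. A generic sketch (with hypothetical column names, not the registry's actual fields) is shown below.

```python
import numpy as np
import statsmodels.api as sm

def adjusted_odds_ratios(df, outcome, predictors):
    """Fit a multivariate logistic regression and return odds ratios
    with 95% confidence intervals (exponentiated coefficients)."""
    X = sm.add_constant(df[predictors].astype(float))
    fit = sm.Logit(df[outcome].astype(int), X).fit(disp=False)
    odds_ratios = np.exp(fit.params).drop("const")
    conf_int = np.exp(fit.conf_int()).drop("const")
    return odds_ratios, conf_int

# Usage (column names are hypothetical placeholders):
# ors, ci = adjusted_odds_ratios(df, "mortality",
#                                ["skull_fracture", "age", "cad", "esrd"])
```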
To clarify the importance of skull fractures in the mortality of patients with TBI, 1:1 propensity score-matched patient cohorts with the same number of patients (n = 1,023 for each group) were created (Table 3) to attenuate the influence of confounding variables on the outcome assessment. In the matched patient cohort, there were no significant differences in the matched covariates between the two groups; skull fracture remained significantly associated with increased odds of mortality (OR 1.4; 95% CI: 1.02-1.88; p = 0.036).
Discussion
This study revealed that various factors were associated with skull fracture, including sex, age, comorbidities, GCS, ISS, and AIS-head. Multivariate analysis revealed that skull fracture, age, CAD, ESRD, and AIS-head were independent risk factors for mortality in patients with TBI. Notably, many factors contribute to the mortality of patients with TBI [27]. Therefore, a propensity score-matched cohort, attenuating the confounding effect of the above variables, was created for the outcome assessment in this study. We found that skull fracture was still significantly associated with a 1.4-fold increase in mortality risk in patients with TBI.
In this study, females accounted for a small proportion (25.9%) of skull fracture patients, which is consistent with the results of a previous study that revealed gender differences in head trauma [5,28]. Our previous studies also reported that more males than females sustained TBI in road accidents [18,29]. The study results did not identify gender as a significant risk factor for mortality, which is in accordance with reports from other studies [10][11][12][13][14]. However, the complex physiological and social factors that may have contributed to the differences between males and females in terms of skull fractures were not explored in this study; hence, further work on this topic is encouraged. Furthermore, the study results revealed that age was an independent factor for mortality in patients with TBI. This result is consistent with many reports demonstrating that age is an important risk factor for mortality at any given level of GCS and AIS-head [30,31]. Notably, in this study, patients with skull fractures were younger than those without skull fractures. It has been reported that aging may lead to a decrease in the stiffness of cranial bones [8]; therefore, older individuals are more prone to fractures. However, since the impact force sustained by each patient during the accident was unknown, the association between age and the occurrence of skull fracture is not conclusive.
Comorbidities of patients with TBI are important factors that contribute to alterations in the clinical course and influence the short-term and long-term outcomes of patients [32][33][34]. This study found an association between CAD and ESRD and increased mortality in patients with TBI, a phenomenon that has been supported by many prior studies [6,16,[35][36][37]. Furthermore, in this study, we found that AIS-head, but not ISS, is an independent factor for mortality in patients with TBI. ISS was significantly associated with mortality only in the univariate regression, but not in the multivariate regression. Although ISS reflects the severity of multiple traumas in an injured person, the input of AIS-head into the regression may, to a large extent, explain the mortality outcome [38] and lessen the influence of ISS on the mortality outcome. Similar reports have shown that multiple traumas play no role in the mortality of patients with severe head injury [39,40] and that the mortality of patients depends on the severity of the intracranial pathology, regardless of ISS [41].
This study has some limitations. First, the analysis was limited to data from a level I regional trauma center, and the conclusions may not be generalizable to other regions or countries. Second, different skull fracture types, such as linear/nondepressed/depressed/compound fractures, may be associated with different trauma mechanisms [42] and prognoses [43]. However, the skull fracture types and their association with local hematoma or parenchymal injury were not recorded in the trauma registry system, which may lead to bias in the outcome assessment. Third, this study was a retrospective study based on a trauma registry database, which could have led to selection bias. The parameters that could be selected from the registered database for outcome analysis were still limited, considering the complex interaction of various factors leading to mortality in patients with TBI. Fourth, some bias may exist considering that the CT characteristics of patients, which may also affect the prognosis of traumatic brain injury, were not studied as a parameter. Fifth, the use of the propensity score as the matching method to attenuate the nonrandomized assignment of the study population in the outcome assessment relies on a correct model fit of the relationship between the propensity score and the outcome [44,45]. The goodness-of-fit of the propensity score model may have an impact on the outcome evaluation [44,45]. Furthermore, only short-term in-hospital mortality was measured, and long-term mortality was not included; thus, a selection bias may exist in the outcome analysis.
Conclusions
Using a propensity score-matched cohort to attenuate the confounding effects of age, comorbidities, and injury severity, we identified skull fracture as a significant independent risk factor for mortality in patients with TBI.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
TRAIT AND STATE ANXIETY AS FACTORS OF THRESHOLD AND TOLERANCE TO EXPERIMENTALLY INDUCED PAIN
Pain is an experience that has physical, psychological, and social aspects. Sensitivity to pain is individual and depends on psychological factors. Studies have shown that anxiety is associated with the perception of experimentally induced pain. PURPOSE: The purpose of the present study is to examine the relationship between anxiety, threshold, and tolerance to experimentally induced pain in healthy persons. METHODS: 35 healthy persons aged 19 to 39 years, 20 women and 15 men, were examined. Methods: Spielberger's questionnaire, cold pressor test, Visual Analog Scale for Pain, descriptive statistics, correlation analysis, Mann-Whitney test. RESULTS: Significant differences in tolerance to pain were identified depending on the levels of state anxiety (U = 12.5, P = 0.037). State anxiety was strongly related to the intensity of the pain experienced (Spearman rho = 0.49, P = 0.008). Significant differences were not found in threshold, tolerance, and intensity of pain depending on the levels of trait anxiety in the examined people. CONCLUSIONS: The increased levels of state anxiety in healthy persons exposed to experimentally induced pain suggest weaker endurance to pain and a perception of it as stronger.
INTRODUCTION
Pain is associated with illness or physical trauma and is the most common symptom in patients seeking medical care.
It is a cognitive-affective condition including "a sensory and emotional experience of discomfort" (1). Individual sensitivity to pain may vary among the examined persons. In patients having the same illness, the sense of pain covers the whole range from "no pain" to "the most terrible pain we can imagine". All this necessitates the study of various types of factors in order to identify the processes involved in the occurrence, maintenance, and reduction of the pain response. Understanding the causes of individual differences in sensitivity to pain can be crucial to its prevention and treatment and useful in the diagnostic process.
Anxiety is related to fears of impending harm and expectations of an indefinite or uncontrollable threat. It has cognitive, emotional, and locomotor (including vegetative and physiological) components. Anxiety is associated with a state of tension and vegetative excitement (for example, an increase in heart rate), an experience of uncertainty, helplessness, readiness to respond to danger, and a tendency to avoidance behavior (2). Spielberger and Krasner (1972) distinguished anxiety as a state and a trait. State anxiety is the present level of anxiety of the individual, characterized by physiological excitement and a feeling of tension. It is influenced by situational factors and varies in intensity and duration. Trait anxiety is an individual's sustainable tendency to be restless, or "a tendency to uneasiness". Trait anxiety is associated with state anxiety and the attention paid to threatening stimuli. People with higher trait anxiety are inclined to perceive most situations as threatening and experience a stronger sudden anxiety than people with lower trait anxiety (3).
Studies with experimentally induced pain show that anxiety influences pain perception: increased state anxiety is associated with higher intensity (4), a lower threshold (5), and lower tolerance to pain (6). The pain threshold refers to the lowest intensity of the stimulus perceived as painful, and the tolerance to pain to the maximum intensity of pain that a person can endure. Methods for experimentally induced pain in healthy people provide a way to identify the effect of anxiety on pain perception, which is difficult to achieve with a clinical group.
PURPOSE OF THE STUDY
The purpose of the present study is to examine the relationship between anxiety, threshold and tolerance to experimentally induced pain in healthy people.
EXAMINED PERSONS
35 healthy persons aged 19 to 39 years, 20 women and 15 men, were examined. In the course of the study, we strictly observed the rules of the local ethical committee at Trakia University and the principles of the Declaration of Helsinki (1964). An informed consent was obtained from all participants before initiation of the experimental procedures. They were informed that they could discontinue the study whenever they wanted and without giving any reason for their decision. At the beginning of the study, the participants filled in an anonymous questionnaire with socio-demographic information as well as a physiological questionnaire, and after that they underwent the cold pressor test. After the cold pressor test, the examined people evaluated the severity of the pain experienced by means of the VAS.
Questionnaire of Spielberger
Adapted for Bulgarian conditions for persons over 13 years of age (7), the questionnaire consists of 40 statements in two subscales: state anxiety and trait anxiety. Each statement is evaluated on a 4-degree Likert-type scale.
Cold pressor test (CPT) is an experimental technique for inducing a painful experience.
The examined people put their hand in a water container with floating ice cubes and report the parameters of their sensory experience: the appearance of pain (pain threshold) and of intolerable pain (tolerance), after which the test is stopped (8).
Visual Analog Scale for Pain (VAS Pain)
The pain VAS is a continuous scale comprising a horizontal line, usually 10 centimeters in length. For pain intensity, the scale is most commonly anchored by "no pain" (score of 0) and "pain as bad as it could be" or "worst imaginable pain". The pain VAS is a single-item scale with individual scores in mm (9).
Statistical methods
Descriptive statistics, correlation analysis, and the Mann-Whitney test were used. The data from the empirical study were statistically processed with IBM SPSS Statistics, V.19.0.
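As an illustration, the two key tests reported in the next section could be computed as follows with SciPy rather than SPSS; the arrays are placeholder values, not the study's data.

```python
from scipy.stats import mannwhitneyu, spearmanr

# Cold pressor tolerance (seconds) by state-anxiety group: placeholder data.
tolerance_low_anxiety = [115, 98, 120, 87, 140, 102]
tolerance_high_anxiety = [45, 60, 38, 72, 55]
u_stat, p_val = mannwhitneyu(tolerance_low_anxiety, tolerance_high_anxiety,
                             alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_val:.3f}")

# Correlation between state anxiety scores and VAS pain intensity.
state_anxiety = [32, 45, 51, 38, 60, 41, 55, 47, 36, 58, 43]
vas_pain = [40, 55, 70, 35, 80, 50, 65, 60, 30, 75, 45]
rho, p_corr = spearmanr(state_anxiety, vas_pain)
print(f"Spearman rho = {rho:.2f}, P = {p_corr:.3f}")
```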
RESULTS AND DISCUSSION
In the present study, statistically significant differences were found in the pain tolerance of the examined people depending on the level of their state anxiety (Mann-Whitney U = 12.5, P = 0.037), with no differences in the pain threshold (P > 0.05). The persons with lower state anxiety had higher tolerance to experimental pain compared to those with higher anxiety. In a study of 80 healthy students with a cold pressor test, Jones and Zachariae (23) also found that state anxiety considerably influences tolerance to experimental pain. This effect of anxiety, however, was found in men only. Men with lower levels of anxiety had a considerably higher tolerance to pain compared to those with high levels of anxiety as well as compared to women (10).
Based on the correlation analysis, state anxiety was found to correlate moderately and positively with the severity of the pain as measured with the VAS (Spearman rho = 0.49, P = 0.008). In an experimental study of 32 healthy persons with electrical pain stimuli, Tang and Gibson (2005) confirmed the relationship between state anxiety and pain intensity but did not find such a relationship with the pain threshold (11). One possible explanation of this opposite result is the use of consecutive electrical stimuli with increasing intensity, which has been reported to excite all afferent paths in an unnatural, synchronized way and could have changed the pain threshold (12). Bement and co-authors (2010) also confirmed the relation of state anxiety with pain intensity, but unlike our study they found a relation with the pain threshold. The authors examined 22 healthy students with experimentally induced mechanical pain; some of them (an experimental group) were exposed to an additional stress influence (stress session), while the others (a control group) were not (free session). The pain perception was measured before the beginning and at the end of the sessions. It was found that during the initial exposure to experimental pain, the starting level of state anxiety correlated positively with the intensity and negatively with the pain threshold in both groups (control and experimental). The individuals with higher levels of anxiety were more inclined to report stronger pain and had higher sensitivity to it compared to those with lower levels of anxiety. Anxiety correlated not only with the initial pain threshold and intensity but also with changes occurring in them during repeated exposure to pain. As a consequence of the stress session with the experimental group, an increase in the anxiety levels was observed. The increased anxiety of the experimental group was significantly associated with a reduced pain threshold and an increased intensity of the reported pain compared to their initial levels (13). In conclusion, these studies showed that anxiety is a psychological factor related to increased pain sensitivity in otherwise healthy people and to their more negative emotional experiences during experimental procedures causing pain.
Anxiety has also been found to have a negative impact on pain perception in the clinical environment. Diagnostic and operative medical interventions increase the levels of state anxiety in patients (14). In turn, increased anxiety in patients before an intervention relates positively to the strength of the reported pain and distress (15) and significantly predicts the reporting of more severe pain not only during the manipulation but also after it (16,17). These results show that controlling the anxiety state before manipulations causing pain could increase patients' endurance to pain and reduce the judged severity of the pain and of the distress caused by it.
In the analysis with the Mann-Whitney test, no statistically significant differences were found in pain threshold and tolerance depending on trait anxiety, nor was there a significant correlation between the evaluation of experimental pain intensity and trait anxiety (p > 0.05).
Literature provides convincing evidence that lower levels of anxiety in patients are associated with a judgement of lower pain severity, with decreased distress, and with an increased pain threshold (18), but experimental studies of healthy people do not firmly confirm this effect. For example, similarly to our result, other studies of healthy people have ascertained that trait anxiety is not related to threshold and tolerance to cold pressor pain (19,20) or to pain from thermal stimuli (20). In a study of 32 healthy persons exposed experimentally to electrical pain stimuli, Tang and Gibson (2005) also did not find differences in the pain threshold depending on anxiety but found a considerable impact of anxiety on pain intensity. The participants with higher trait anxiety reported more severe pain during the experiment (11). Contrary to these results, James and Hardardottir (2002) found that tolerance to cold pressor pain is higher in people with lower trait anxiety compared to those with higher trait anxiety. Furthermore, distraction from the pain during the experimental procedure increased the pain tolerance in people with lower anxiety but not in those with higher anxiety (21). More pronounced anxiety, in addition to reducing endurance to pain, also increased the tendency of the examined people to direct their attention to possible negative effects of the pain after the study ceased. Some studies highlight that anxiety is associated with pain perception in men but not in women. In a study of 140 healthy persons with a cold pressor test, Jones and co-authors (2003) ascertained that people with higher levels of anxiety had considerably lower tolerance to pain and higher pain severity compared to those with lower levels of anxiety; however, these differences were found only in men (22).
In our study, statistically significant differences by gender were not found in threshold and tolerance to experimental pain, even though there is evidence in the literature that women have a lower threshold to cold pressor, thermal, electrical, and mechanical pain (23) and lower tolerance to experimental pain (24) compared to men. Other studies, however, have ascertained that gender does not influence the perception of experimental pain but is a significant moderator of the effect of anxiety (10,22), i.e., there are differences in the way anxiety impacts the perception of pain in men and women. Further studies are necessary to examine this relation, as well as the psychosocial and psychological mechanisms that could explain these differences.

CONCLUSIONS

1. State anxiety in healthy people is related to intensity of and tolerance to experimentally induced pain but not to pain threshold.
2. Increased levels of state anxiety suggest reduced endurance to pain and perceiving it as stronger.
3. Trait anxiety in healthy people does not have an impact on whether a particular sensation is perceived as pain or on how long the pain can be endured.
High Risk of Deep Neck Infection in Patients with Type 1 Diabetes Mellitus: A Nationwide Population-Based Cohort Study
Objective: To investigate the risk of deep neck infection (DNI) in patients with type 1 diabetes mellitus (T1DM). Methods: The database of the Registry for Catastrophic Illness Patients, affiliated to the Taiwan National Health Insurance Research Database, was used to conduct a retrospective cohort study. In total, 5741 patients with T1DM and 22,964 matched patients without diabetes mellitus (DM) were enrolled between 2000 and 2010. The patients were followed up until death or the end of the study period (31 December 2013). The primary outcome was the occurrence of DNI. Results: Patients with T1DM exhibited a significantly higher cumulative incidence of DNI than did those without DM (p < 0.001). The Cox proportional hazards model showed that T1DM was significantly associated with a higher incidence of DNI (adjusted hazard ratio, 10.71; 95% confidence interval, 6.02–19.05; p < 0.001). The sensitivity test and subgroup analysis revealed a stable effect of T1DM on DNI risk. The therapeutic methods (surgical or nonsurgical) did not differ significantly between the T1DM and non-DM cohorts. Patients with T1DM required significantly longer hospitalization for DNI than did those without DM (9.0 ± 6.2 vs. 4.1 ± 2.0 days, p < 0.001). Furthermore, the patients with T1DM were predisposed to DNI at a younger age than were those without DM. Conclusions: T1DM is an independent risk factor for DNI and is associated with a 10-fold increase in DNI risk. The patients with T1DM require longer hospitalizations for DNI and are younger than those without DM.
Introduction
Deep neck infection (DNI) is a common infectious disease involving the deep neck space; DNI usually requires intensive care and aggressive treatment [1]. The easy availability of antibiotics, improvements in diagnostic technology, and the concept of early surgical debridement have significantly reduced the morbidity and mortality of DNI [2,3]. However, DNI remains a potentially life-threatening disease when lethal complications, such as descending necrotizing mediastinitis, develop [4,5].
A study reported that patients with diabetes mellitus (DM) are at a 1.4-fold higher risk of DNI than those without DM [6]. DNI can cause higher morbidity and mortality among patients with systemic diseases such as DM, end-stage renal disease, liver cirrhosis, and autoimmune diseases [1,4,[7][8][9]. However, the pathogenesis of type 1 DM (T1DM) is different from that of type 2 DM (T2DM). T1DM is characterized by an immune-mediated depletion of beta cells, which causes a lifelong dependence on exogenous insulin [10]. Patients with T1DM, considered to have an immunocompromised status, are expected to be more vulnerable to complicated infection and have a higher infection-related mortality risk than patients with T2DM [11]. Studies investigating the effect of T1DM on DNI are not currently available in the literature. This study investigated the effect of T1DM on DNI occurrence, treatment, and prognosis.
Data Source
The government of Taiwan established the National Health Insurance Research Database (NHIRD), which covered 99.6% of Taiwan's population in 2017 [12,13]. The NHIRD provides all medical claims data of all beneficiaries, including disease diagnoses during clinic visits and hospitalization, prescription drugs and doses, examinations, procedures, surgery, payments, resident locations, and income levels, generated during reimbursement for insurance in an electronic format. The diagnostic codes in the NHIRD are based on the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). This study was exempted from obtaining informed consent from the participants because the data were deidentified. All information of the insurants was unidentifiable, and this study did not violate their rights or adversely affect their welfare. The study was approved by the Institutional Review Board of Chang Gung Memorial Hospital (IRB Number: 201601249B1).
Study Cohort
In Taiwan, T1DM is categorized as a "catastrophic illness" in the NHIRD. Patients with T1DM are certified by the government and included in the Registry for Catastrophic Illness Patients (RFCIP). Therefore, they can avail themselves of considerable discounts on medical expenses. The certification process requires critical evaluation of medical records and serological and pathological reports by experts [14]. Therefore, the T1DM diagnosis of the enrolled patients was highly accurate and reliable.
Data regarding patients who received new diagnoses of T1DM between January 2000 and December 2010 in Taiwan were retrieved from the RFCIP (Figure 1). The patients who received T1DM diagnoses in or after 2011 were not included, to ensure a follow-up period of at least 3 years. We used the T1DM-associated ICD-9-CM codes defined for the RFCIP [14]. In addition, patients who received DNI diagnoses before T1DM were excluded. Finally, 6201 patients with T1DM were enrolled in the study cohort.
Comparison Cohort
The Longitudinal Health Insurance Database 2000 (LHID2000), a subset database of the NHIRD, consists of 1,000,000 insurants who were randomly statistically selected from all insurants in Taiwan in 2000. Age distribution, sex distribution, or health care costs did not differ significantly between the LHID2000 sample group and all enrollees in the NHIRD, according to a report by the National Health Research Institutes [13]. The LHID2000 has been used in several population-based studies [15,16]. We used the LHID2000 to generate a comparison cohort, which consisted of patients without DM.
Matching Process
For each patient with T1DM, four patients without DM were randomly selected from the LHID2000 database, matched for sex, age, urbanization level, and income level to form a comparison cohort. The index date of the study cohort was the date of registry in the RFCIP for patients with T1DM, and an index date matching that of patients with T1DM was created for the comparison cohort. After the matching process, 5741 T1DM and 22,964 non-DM patients were enrolled in the study.
Main Outcome
The main outcome of this study was the occurrence of DNI, which is defined as hospitalization with the following ICD-9 codes: 528.3 (cellulitis and abscess of oral soft tissues; Ludwig angina), 478.22 (parapharyngeal abscess), 478.24 (retropharyngeal abscess), and 682.1 (cellulitis and abscess of neck) [1,17]. The follow-up period was from the index date to the diagnosis of DNI, death, or the end of 2013.
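As a sketch, flagging DNI hospitalizations by these ICD-9 codes in a claims-style table could look as follows; the file and column names are assumptions, not the NHIRD schema.

```python
import pandas as pd

DNI_CODES = {"528.3", "478.22", "478.24", "682.1"}

# Hypothetical claims extract; keep ICD-9 codes as strings.
claims = pd.read_csv("inpatient_claims.csv", dtype={"icd9_code": str})
claims["is_dni"] = claims["icd9_code"].isin(DNI_CODES)

# The first DNI hospitalization per patient defines the event date.
dni_events = (claims[claims["is_dni"]]
              .sort_values("admission_date")
              .groupby("patient_id", as_index=False)
              .first())
```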
Comorbidities
Comorbidities were defined using ICD-9-CM codes recorded in the claims data [1,17-19]. Medical comorbidities were included if they appeared at least once in inpatient diagnoses or at least three times in outpatient diagnoses.
Treatment Modalities
The treatment methods were divided into two subgroups: "surgical" and "nonsurgical." The patients who received surgical intervention were included in the "surgical" subgroup, whereas those who received antibiotics or abscess aspiration without surgery were included in the "nonsurgical" subgroup [1].
Prognosis Evaluation
For evaluating prognosis, we analyzed the duration of hospitalization, care in intensive care units (ICUs), performance of tracheostomy, and mediastinal complications, which were defined according to the receipt of mediastinal surgery during hospitalization or the diagnostic codes of mediastinitis (ICD-9-CM codes: 510, 513, and 519.2) [1]. Mortality and mediastinitis-related mortality were also investigated in both cohorts. Mortality was defined as death occurring during DNI treatment. Mediastinitis-related mortality was defined as death during DNI treatment accompanied by the diagnosis of mediastinitis [1]. In addition, we analyzed the age distribution of the patients with DNI identified in the T1DM and non-DM cohorts.
Statistical Analysis
The demographic characteristics and comorbidities of the T1DM and non-DM cohorts were compared using the Pearson chi-square test for categorical variables and the unpaired Student t-test for continuous variables. Control variables, such as age, sex, urbanization level, income level, and comorbidities (HTN, CVA, CAD, CKD, SADs, and LC), were included as covariates in the univariate model. Variables in the univariate analysis that showed p < 0.1 were included in the multivariate analysis. Kaplan-Meier analysis was used to estimate the cumulative incidence in the two cohorts, and the differences were determined using a two-tailed log-rank test. Multivariable Cox proportional hazards regression models were used to measure the hazard ratio (HR) and 95% confidence interval (CI) of DNI incidence between the T1DM and non-DM cohorts. In addition, the stability of the HR was examined using sensitivity testing and subgroup analysis if the interaction effects between the comorbidities and T1DM on DNI were significant. All analyses were performed using SAS software, version 9.4 (SAS Institute, Cary, NC, USA), and the level of statistical significance was set at p < 0.05.

Results

Table 1 illustrates the distribution of sociodemographic characteristics, DNIs, and comorbidities identified in the T1DM and non-DM cohorts. The T1DM cohort exhibited a significantly higher prevalence of DNI, HTN, CVA, CAD, CKD, SADs, and LC. Among the 5741 patients with T1DM, 42 (0.7%) patients with DNI were identified, and the incidence rate was 92.4 per 100,000 person-years in a mean follow-up period of 7.91 ± 2.41 years. By contrast, among the 22,964 controls, 16 (0.1%) patients with DNI were identified in a mean observation period of 8.08 ± 2.29 years, and the incidence rate was 8.6 per 100,000 person-years. The incidence rate ratio was 10.73 with a 95% CI of 6.03-19.08. The incidence of DNI was significantly higher in the T1DM cohort than in the non-DM cohort (p < 0.001).

Results of the Kaplan-Meier analysis revealed the cumulative incidence of DNI in both cohorts over a 10-year observation period. The T1DM cohort exhibited a significantly higher incidence of DNI than the non-DM cohort did (log-rank test p < 0.001, Figure 2). The Cox proportional hazards model revealed that T1DM was associated with a 10-fold higher risk of DNI (adjusted HR: 10.71, 95% CI: 6.02-19.05, p < 0.001, Table 2). In addition, the sensitivity test showed a stable effect of T1DM on DNI risk in the study cohort in the main model with each additional covariate. The results of subgroup analysis showed that T1DM is a risk factor for DNI in all the subgroups.

Table 3 presents the treatment modalities and prognosis of DNI in the patients in both cohorts. Although the percentage of patients requiring surgical treatment for DNI was higher in the T1DM cohort than in the non-DM cohort, the difference was not significant (T1DM vs. non-DM cohorts: 33.3% vs. 18.8%, p = 0.276). DNI in the patients in the T1DM cohort required longer hospitalization durations than in the non-DM cohort (T1DM vs. non-DM cohorts: 9.0 ± 6.2 vs. 4.1 ± 2.0 days, p < 0.001). Furthermore, ICU care and mediastinal complications were only identified in patients with T1DM and DNI (ICU: 6/42, 14.3%; mediastinitis: 1/42, 2.4%). DNI-related mortality was observed in the T1DM cohort (mortality: 2/42, 4.8%) but not in the non-DM cohort.
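A minimal sketch of the survival analyses described above (log-rank test and multivariable Cox model), using the Python lifelines package instead of SAS; column names are illustrative assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("dni_cohort.csv")  # hypothetical cohort extract
t1dm, ctrl = df[df["t1dm"] == 1], df[df["t1dm"] == 0]

# Kaplan-Meier fits (use .plot() to draw the cumulative incidence curves)
# and a two-tailed log-rank test for DNI incidence.
km_t1dm = KaplanMeierFitter().fit(t1dm["followup_years"], t1dm["dni"], label="T1DM")
km_ctrl = KaplanMeierFitter().fit(ctrl["followup_years"], ctrl["dni"], label="non-DM")
result = logrank_test(t1dm["followup_years"], ctrl["followup_years"],
                      event_observed_A=t1dm["dni"], event_observed_B=ctrl["dni"])
print(f"log-rank p = {result.p_value:.4f}")

# Multivariable Cox model: exp(coef) gives the adjusted HR for T1DM.
cph = CoxPHFitter()
cph.fit(df[["followup_years", "dni", "t1dm", "age", "sex", "htn", "ckd"]],
        duration_col="followup_years", event_col="dni")
cph.print_summary()
```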
Figure 3 presents the age distribution of DNI identified in the T1DM and non-DM cohorts. We divided age into the following four groups: <10, 10-20, 21-40, and >40 years. Accordingly, the proportions of DNI in the two cohorts (T1DM vs. non-DM) were 2.38% vs. 18.75% (<10 years), 40.47% vs. 18.75% (10-20 years), 42.86% vs. 50% (21-40 years), and 14.28% vs. 12.5% (>40 years). In this study, the peak age of DNI occurrence in the non-DM cohort was 21-40 years, while the T1DM cohort exhibited two peak ages, namely 10-20 and 21-40 years.
Discussion
Our nationwide study is the first to examine the influence of T1DM on DNI. Our study demonstrated that T1DM is a definite risk factor for DNI. Our results revealed that patients with T1DM are at a 10-fold higher risk of DNI than are those without DM. The higher frequency of infections in patients with T1DM is attributable to hyperglycemia, which results in immune dysfunction, including disrupted neutrophil function, depression of the antioxidant system and humoral immunity, micro- and macroangiopathies, neuropathy, decrease in the antibacterial activity of urine, gastrointestinal and urinary dysmotility, and the need for medical intervention in these patients [20].
Patients with T1DM are more likely to have complicated infections, such as pneumonia, septicemia, and osteomyelitis, than are those without DM [21]. Simonsen et al. reported that the incidence of bacterial infections was significantly higher in patients with T1DM than in those without DM [22]. Muller et al. reported that patients with T1DM and T2DM have an increased risk of infections of the lower respiratory tract, urinary tract, and skin and mucous membranes [23]. In addition, an Australian diabetes register-based study revealed that patients with T1DM exhibited significantly higher infection-related mortality (pneumonia, septicemia, and osteomyelitis) than did those with T2DM [11]. Therefore, T1DM is a risk factor for complicated infections, and it might be associated with higher incidence and severity of infection than T2DM.
Previous studies have reported that surgical treatment was used in 55-80% of patients with DNI [4,9,24-28]. In our study, few patients received surgical treatment in both cohorts (T1DM: 33.3% and non-DM: 18.7%). This difference in percentage may result from previous studies being conducted in medical centers or tertiary hospitals, which receive and treat patients with severe DNI [4,8,9,24-27]. Hence, patients with severe DNI were more likely to accept surgical interventions. However, we enrolled patients from all hospitals in our nationwide study. The distribution of patients with DNI ranged from primary to tertiary hospitals, and patients with low DNI severity were also included; thus, our study provided a complete spectrum of DNI treatment and prognosis [1]. In general, the use of surgical interventions to treat a DNI indicates that the infection is more severe and life-threatening. In our study, the percentage of surgical treatment for DNI was higher in the T1DM cohort than in the non-DM cohort; however, the difference was not statistically significant.
DNIs in patients with DM have been reported to be associated with long hospitalization durations and numerous complications [4,28-30]. In our study, the duration of hospitalization for DNI was significantly longer in the T1DM than in the non-DM cohort, and this result was consistent with previously reported findings. Patients with T1DM and DNI were reported to exhibit a higher frequency of lethal complications, such as mediastinitis (2.7-10.0%), and higher mortality (1.6-7.5%) than those without DM [4,28,29]. A higher rate of ICU care for DNI was noted in patients with T1DM than in those without DM [1]. In our study, the occurrence of ICU care for DNI, mediastinitis, and DNI-related mortality was higher in the T1DM cohort than in the non-DM cohort, and these results were consistent with those of previous studies.
We analyzed the age distribution of DNI in the T1DM and non-DM cohorts in our study. In the non-DM cohort, the peak age of DNI occurrence was 21-40 years, while in the T1DM cohort, the two peak ages of DNI occurrence were 10-20 and 21-40 years. Thus, in the T1DM cohort, DNI already developed at age 10-20 years, whereas in general the incidence of DNI is higher at age 20-40 years. Patients with diabetes have been reported to have a late onset of DNI [24,25,28,31]. However, T1DM is characterized by diagnosis at a young age; according to Magliano's report, infection (pneumonia, septicemia, and osteomyelitis) at a young age was more likely to occur in patients with T1DM than in patients with T2DM (T1DM vs. T2DM = 26.9 vs. 60.4 years) [11]. In summary, we believe that patients with T1DM tend to develop DNI at a younger age (10-40 years) than do patients without DM (21-40 years) and those with T2DM (>40 years).
Our study has several strengths, including a large number of patients with T1DM representing a nationwide population and a 10-year observation period. In addition, the diagnosis of T1DM was based on data from the RFCIP, a highly accurate and reliable database affiliated with the NHIRD. Nevertheless, the study has some limitations. The diagnoses were based on ICD-9-CM codes and not on original medical records; therefore, the study lacked blood sugar levels, laboratory data, data from imaging studies, surgical records, and pathologic reports, which are necessary for evaluating disease severity. The bacterial spectrum and drug sensitivity of T1DM-related DNI, and their differences from non-DM DNI, are important information for clinical management and the prescription of antibiotics. However, our database did not contain that information. The effects of the factors omitted in this study on T1DM and DNI should be investigated in future studies. In addition, ICU care, mediastinitis, and mortality were observed only in patients with T1DM; however, the number of patients was insufficient to develop a statistical conclusion. Additional studies including detailed medical records and a large sample size of patients with DNI are needed.
Conclusions
This nationwide population-based study was the first to investigate the epidemiological data of DNI development and prognosis in patients with T1DM. We concluded that T1DM is a predisposing factor for DNI. The duration of hospitalization for DNI is longer in patients with T1DM than in those without DM. In addition, patients with T1DM are predisposed to developing DNI at a younger age than are those without DM.
SIRUS: Making Random Forests Interpretable
State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as "black-boxes" because of the high number and complexity of operations involved in their prediction mechanism. This lack of interpretability is a strong limitation for applications involving critical decisions, typically the analysis of production processes in the manufacturing industry. In such critical contexts, models have to be interpretable, i.e., simple, stable, and predictive. To address this issue, we design SIRUS (Stable and Interpretable RUle Set), a new classification algorithm based on random forests, which takes the form of a short list of rules. While simple models are usually unstable with respect to data perturbation, SIRUS achieves a remarkable stability improvement over cutting-edge methods. Furthermore, SIRUS inherits a predictive accuracy close to random forests, combined with the simplicity of decision trees. These properties are assessed both from a theoretical and empirical point of view, through extensive numerical experiments based on our R/C++ software implementation sirus, available from CRAN.
Introduction
Industrial context In the manufacturing industry, production processes involve complex physical and chemical phenomena, whose control and efficiency are of critical importance. In practice, data is collected along the manufacturing line, describing both the production environment and its conformity. The retrieved information enables us to infer a link between the manufacturing conditions and the resulting quality at the end of line, and then to increase the process efficiency. State-of-the-art supervised learning algorithms can successfully catch patterns of such complex physical phenomena, characterized by nonlinear effects and low-order interactions between parameters. However, any decision impacting the production process has long-term and heavy consequences, and therefore cannot simply rely on blind stochastic modelling. As a matter of fact, a deep physical understanding of the forces in action is required, and this makes black-box algorithms inappropriate. In a word, models have to be interpretable, i.e., provide an understanding of the internal mechanisms that build a relation between inputs and outputs, to provide insights to guide the physical analysis. This is typically the case, for example, in the aeronautics industry, where the manufacturing of engine parts involves sensitive casting and forging processes. Interpretable models allow us to gain knowledge on the behavior of such production processes, which can lead, for instance, to identifying or fine-tuning critical parameters, improving measurement and control, optimizing maintenance, or deepening the understanding of physical phenomena.
Interpretability As stated in Rüping (2006), Lipton (2016), Doshi-Velez and Kim (2017), or Murdoch et al. (2019), to date, there is no agreement in the statistics and machine learning communities about a rigorous definition of interpretability. There are multiple concepts behind it, many different types of methods, and a strong dependence on the area of application and the audience. Here, we focus on intrinsically interpretable models, which directly provide insights on how inputs and outputs are related. In that case, we argue that it is possible to define minimum requirements for interpretability through the triptych "simplicity, stability, and predictivity", in line with a framework recently proposed in the literature. Indeed, in order to grasp how inputs and outputs are related, the structure of the model has to be simple. The notion of simplicity is implied whenever interpretability is invoked (e.g., Rüping, 2006; Freitas, 2014; Letham, 2015; Lipton, 2016; Ribeiro et al., 2016; Murdoch et al., 2019) and essentially refers to the model size, complexity, or the number of operations performed in the prediction mechanism. Yu (2013) defines stability as another fundamental requirement for interpretability: conclusions of a statistical analysis have to be robust to small data perturbations to be meaningful. Finally, if the predictive accuracy of an interpretable model is significantly lower than that of a state-of-the-art black-box algorithm, it clearly misses some patterns in the data and will therefore be useless, as explained in Breiman (2001b). For example, the trivial model that outputs the empirical mean of the observations for any input is simple, stable, but brings in most cases no useful information. Thus, we add good predictivity as an essential requirement for interpretability.
Decision trees Decision trees are a class of supervised learning algorithms that recursively partition the input space and make local decisions in the cells of the resulting partition (Breiman et al., 1984). Trees can model highly nonlinear patterns while having a simple structure, and are therefore good candidates when interpretability is required. However, as explained in Breiman (2001b), trees are unstable to small data perturbations, which is a strong limitation to their practical use. In an operational context, as a new batch of data is collected from a stationary production process, the conclusions can drastically change, and such unstable models provide us with a partial and arbitrary analysis of the underlying phenomena.
A widespread method to stabilize decision trees is bagging (Breiman, 1996), in which multiple trees are grown on perturbed data and aggregated together. Random forests is an algorithm developed by Breiman (2001a) that improves over bagging by randomizing the tree construction. Predictions are stable, accuracy is increased, but the final model is unfortunately a black-box. Thus, the simplicity of trees is lost, and some post-treatment mechanisms are needed to understand how random forests make their decisions. Nonetheless, even if they are useful, such treatments only provide partial information and can be difficult to operationalize for critical decisions (Rudin, 2018). For example, variable importance (Breiman, 2001a, 2003a) identifies variables that have a strong impact on the output, but not which input values are associated with output values of interest. Similarly, local approximation methods such as LIME (Ribeiro et al., 2016) do not provide insights on the global relation.
SIRUS In line with the above, we design in the present paper a new supervised classification algorithm that we call SIRUS (Stable and Interpretable RUle Set). SIRUS inherits the accuracy of random forests and the simplicity of decision trees, while having a stable structure for problems with low-order interaction effects. The core aggregation principle of random forests is kept, but instead of aggregating predictions, SIRUS focuses on the probability that a given hyperrectangle (i.e., a node) is contained in a randomized tree. The nodes with the highest probability are robust to data perturbation and represent strong patterns. They are therefore selected to form a stable rule ensemble model.
In Section 4 we illustrate SIRUS on a real and open dataset, SECOM (Dua and Graff, 2017), from a semi-conductor manufacturing process. Data is collected from 590 sensors and process measurement points (X^(1), X^(2), . . . , X^(590)) to monitor the production. At the end of the line, each of the 1567 produced entities is associated with a pass/fail label, with an average failure rate of p_f = 6.6%. SIRUS outputs a simple set of 6 rules, each of the form "if X^(60) < 5.51, then the estimated failure rate takes one value, else another", to be compared with the average failure rate p_f = 6.6%. The model is stable: when a 10-fold cross-validation is run to simulate data perturbation, 4 to 5 rules are consistent across two folds on average. The predictive accuracy of SIRUS is similar to that of random forests, whereas the CART tree performs no better than a random classifier on this dataset, as we will see.
Section 2 is devoted to the detailed description of SIRUS. In Section 3, we establish the consistency and the stability of the rule selection procedure. These results allow us to derive empirical guidelines for parameter tuning, gathered in Section 4, which is critical for good practical performance. One of the main contributions of this work is the development of a software implementation of SIRUS, via the R package sirus available from CRAN, based on ranger, a high-performance random forest implementation in R and C++ (Wright and Ziegler, 2017). We illustrate, in Section 4, the efficiency of our procedure sirus through numerical experiments on real datasets.
SIRUS description
Within the general framework of supervised (binary) classification, we assume to be given an i.i.d. sample D_n = {(X_i, Y_i), i = 1, . . . , n}. Each (X_i, Y_i) is distributed as the generic pair (X, Y) independent of D_n, where X = (X^(1), . . . , X^(p)) is a random vector taking values in R^p and Y ∈ {0, 1} is a binary response. Throughout the document, the distribution of (X, Y) is assumed to be unknown, and is denoted by P_{X,Y}. For x ∈ R^p, our goal is to accurately estimate the conditional probability η(x) = P(Y = 1 | X = x) with a few simple and stable rules.
To tackle this problem, SIRUS first builds a (slightly modified) random forest with trees of depth 2 (i.e., interactions of order 2). Next, each hyperrectangle of each tree of the forest is turned into a simple decision rule, and the collection of these elementary rules is ranked based on their frequency of appearance in the forest. Finally, the most significant rules are retained and are averaged together to form an ensemble model. To present SIRUS, we first describe how individual rules are created in Subsection 2.1, and then show how to select and aggregate the individual rules to obtain a more robust classifier in Subsection 2.2.
Basic elements
Random forests SIRUS uses at its core the random forest method (Breiman, 2001a), slightly modified for our purpose. As in the original procedure, each single tree in the forest is grown with a greedy heuristic that recursively partitions the input space using a random variable Θ. The essential difference between our approach and Breiman's one is that, prior to all tree constructions, the empirical q-quantiles of the marginal distributions over the whole dataset are computed: in each node of each tree, the best split can be selected among these empirical quantiles only. This constraint helps to stabilize the forest structure while keeping the predictive accuracy almost intact, provided q is not too small (typically of the order of 10; see the experimental Subsection 4.2). Also, because the targeted applications involve low-order interactions, the depth of the individual trees is limited to d = 2 (so, each tree has at most four terminal leaves). This produces shallow and simple trees, unlike traditional forests which use trees of maximal depth. Apart from these differences, the tree growing is similar to Breiman's original procedure. The tree randomization Θ is independent of the sample and has two independent components, denoted by Θ^(S) and Θ^(V), which are respectively used for the subsampling mechanism and the randomization of the split directions. More precisely, we let Θ^(S) ∈ {1, . . . , n}^{a_n} be the indexes of the observations in D_n sampled with replacement to build the tree, where a_n ∈ {1, . . . , n} is the number of sampled observations (it is a parameter of SIRUS). As for Θ^(V), since the tree depth is limited to 2, it takes the form Θ^(V) = (Θ^(V,0), Θ^(V,L), Θ^(V,R)), where Θ^(V,0) (resp., Θ^(V,L) and Θ^(V,R)) is the set of coordinates selected to split the root node (resp., its left and right children). As in the original forests, each of these components is a set of mtry distinct coordinates, drawn uniformly at random from {1, . . . , p}. Throughout the manuscript, for a given integer q ≥ 2 and r ∈ {1, . . . , q − 1}, we let

q̂^(j)_{n,r} denote the empirical r-th q-quantile of X_1^(j), . . . , X_n^(j).  (2.1)

The construction of the individual trees is summarized in Algorithm 1 below and is illustrated in Figure 1.
Algorithm 1 Tree construction
1: Parameters: Number of quantiles q, number of subsampled observations a_n, number of eligible directions for splitting mtry.
2: Compute the empirical q-quantiles for each marginal distribution over the whole dataset.
3: Subsample with replacement a_n observations, indexed by Θ^(S). Only these observations are used to build the tree.
4: Initialize s = 0 (the root of the tree).
5: Draw the set Θ^(V,s) of mtry eligible split directions for cell s.
6: For all j ∈ Θ^(V,s), compute the CART-splitting criterion at all empirical q-quantiles of X^(j) that split the cell s into two non-empty cells.
7: Choose the split that maximizes the CART-splitting criterion.
8: Repeat lines 5-7 for the two resulting cells (i.e., s = L and s = R).

Figure 1: Illustration of the tree construction, where Θ^(V,0) (resp., Θ^(V,L) and Θ^(V,R)) is the set of coordinates selected to split the root node (resp., its left and right children).
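As a minimal sketch (not the sirus package itself), the quantile restriction at the heart of Algorithm 1 can be written as follows: the split candidates for every feature are its empirical q-quantiles, computed once on the full dataset before any tree is grown.

```python
import numpy as np

def quantile_split_candidates(X, q=10):
    """For each feature, return its empirical r/q-quantiles, r = 1, ..., q-1."""
    probs = np.arange(1, q) / q
    return [np.quantile(X[:, j], probs) for j in range(X.shape[1])]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
candidates = quantile_split_candidates(X, q=10)
# In each node of each tree, the best split is searched among these
# precomputed values only, which stabilizes tree structures across resamples.
```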
In our context of binary classification, where the output Y ∈ {0, 1}, maximizing the so-called empirical CART-splitting criterion is equivalent to maximizing the criterion based on Gini impurity (see, e.g., Biau and Scornet, 2016). More precisely, at node H and for a cut performed along the j-th coordinate at the empirical r-th q-quantile q̂^(j)_{n,r}, this criterion reads:

L_n(H, j, r) = (1/N_n(H)) Σ_{i=1}^n (Y_i − Ȳ_H)² 1_{X_i ∈ H} − (1/N_n(H)) Σ_{i=1}^n (Y_i − Ȳ_{H_L} 1_{X_i^(j) < q̂^(j)_{n,r}} − Ȳ_{H_R} 1_{X_i^(j) ≥ q̂^(j)_{n,r}})² 1_{X_i ∈ H},  (2.2)

where H_L = {x ∈ H : x^(j) < q̂^(j)_{n,r}} and H_R = {x ∈ H : x^(j) ≥ q̂^(j)_{n,r}} are the two child cells, Ȳ_H is the average of the Y_i's such that X_i ∈ H, and N_n(H) is the number of data points X_i falling into H. Note that, for the ease of reading, (2.2) is defined for a tree built with the entire dataset D_n without resampling.
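For binary labels, the criterion above reduces to the decrease in Gini impurity; a small sketch makes the computation explicit (the notation differs slightly from (2.2), but the maximizing split is the same).

```python
import numpy as np

def cart_criterion(x_j, y, t):
    """Impurity decrease for the split x_j < t, with binary y in {0, 1}."""
    left = x_j < t
    n, n_left, n_right = len(y), left.sum(), (~left).sum()
    if n_left == 0 or n_right == 0:
        return 0.0  # the cut must produce two non-empty cells
    gini = lambda p: p * (1.0 - p)
    return (gini(y.mean())
            - (n_left / n) * gini(y[left].mean())
            - (n_right / n) * gini(y[~left].mean()))
```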
Following the construction of Algorithm 1, SIRUS grows M randomized trees, where the extra randomness used to build the ℓ-th tree is denoted by Θ_ℓ. The random variables Θ_1, . . . , Θ_M are generated as i.i.d. copies of the generic variable Θ = (Θ^(S), Θ^(V,0), Θ^(V,L), Θ^(V,R)), so that tree structures are independent conditional on the dataset D_n.
Path representation In order to go further in the presentation of SIRUS, we still need to introduce a useful notation, which describes the paths that go from the root of the tree to a given node. To this aim, we follow the example shown in Figure 2 with a tree of depth 2 partitioning the input space R², as we will only consider trees of depth 2 throughout the document.

Figure 2: Example of a root node R² partitioned by a randomized tree of depth 2: the tree on the right side, the associated paths and hyperrectangles of length d = 2 on the left side.

For instance, let us consider the node P_6 defined by the sequence of two splits X^(2) ≥ q̂^(2)_{n,4} and X^(1) ≥ q̂^(1)_{n,7}. The first split is symbolized by the triplet (2, 4, R), whose components respectively stand for the variable index 2, the quantile index 4, and the right side R of the split. Similarly, for the second split we cut coordinate 1 at quantile index 7, and pass to the right. Thus, the path to the considered node is defined by P_6 = {(2, 4, R), (1, 7, R)}. Of course, this generalizes to each path P of length d = 1 or d = 2 under the symbolic compact form

P = {(j_k, r_k, s_k), k = 1, . . . , d},

where, for k ∈ {1, . . . , d} (d ∈ {1, 2}), the triplet (j_k, r_k, s_k) describes how to move from level (k − 1) to level k, with a split using the coordinate j_k ∈ {1, . . . , p}, the index r_k ∈ {1, . . . , q − 1} of the corresponding quantile, and a side s_k = L if we go to the left and s_k = R if we go to the right. The set of all possible such paths is denoted by Π. It is important to note that Π is in fact a deterministic (that is, non-random) quantity, which only depends upon the dimension p and the order q of the quantiles; an easy calculation shows that Π is a finite set of cardinality 2p(q − 1) + p(4p − 1)(q − 1)². On the other hand, a Θ-random tree of depth 2 generates (at most) 6 paths in Π, one for each internal and terminal node. In the sequel, we let T(Θ, D_n) be the list of such extracted paths, which is therefore a random subset of Π. Note that, in very specific cases, we can have less than 6 paths in T(Θ, D_n), typically if one of the two child nodes does not have any possible splits in the selected directions.
Elementary rule Of course, given a path P ∈ Π, one can recover the hyperrectangle (i.e., the tree node) Ĥ_n(P) associated with P and the entire dataset D_n via the correspondence

Ĥ_n(P) = {x ∈ R^p : for all k ∈ {1, . . . , d}, x^(j_k) < q̂^(j_k)_{n,r_k} if s_k = L, and x^(j_k) ≥ q̂^(j_k)_{n,r_k} if s_k = R}.

Thus, for each path P ∈ Π, we logically define the companion elementary rule ĝ_{n,P} by

ĝ_{n,P}(x) = (1/N_n(Ĥ_n(P))) Σ_{i: X_i ∈ Ĥ_n(P)} Y_i if x ∈ Ĥ_n(P), and
ĝ_{n,P}(x) = (1/(n − N_n(Ĥ_n(P)))) Σ_{i: X_i ∉ Ĥ_n(P)} Y_i otherwise,

with the convention 0/0 = 0. For x ∈ R^p, the elementary rule ĝ_{n,P}(x) is an estimate of the probability that x is of class 1, depending on whether x falls in Ĥ_n(P) or not. We note that such a rule depends on the dataset D_n and the particular path P. One small word of caution: here, the term "rule" does not stand for "classification rule" but refers, as is traditional in the rule learning literature, to a piecewise constant estimate that can take two different values and simply reads "if conditions on x, then response, else default response".
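A small sketch of the path/rule correspondence may help: a path is stored as a tuple of (coordinate, threshold, side) triplets, and its elementary rule returns the in-cell mean of y inside the associated hyperrectangle and the out-of-cell mean otherwise. This is an illustrative implementation, not the sirus package code.

```python
import numpy as np

def in_cell(X, path):
    """Boolean mask of the rows of X falling in the hyperrectangle of `path`."""
    mask = np.ones(len(X), dtype=bool)
    for j, t, side in path:  # side "L": x_j < t; side "R": x_j >= t
        mask &= (X[:, j] < t) if side == "L" else (X[:, j] >= t)
    return mask

def elementary_rule(X, y, path):
    """Build g_{n,P}: in-cell mean inside the cell, out-of-cell mean outside."""
    mask = in_cell(X, path)
    p_in = y[mask].mean() if mask.any() else 0.0     # convention 0/0 = 0
    p_out = y[~mask].mean() if (~mask).any() else 0.0
    return lambda x: p_in if in_cell(x[None, :], path)[0] else p_out
```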
The elementary rules ĝ_{n,P} will serve as building blocks for SIRUS, which will learn from a collection of such rules. Since each Θ-random tree generates (at most) 6 rules through the path extraction, we can then generate a wide collection of rules using our modified random forest. The next subsection describes how we select and aggregate the most important rules of the forest to form a compact, stable, and predictive rule ensemble model.
SIRUS
Rule selection Using our modified random forest algorithm, we are able to generate a large number M of trees (typically M = 10,000), randomized by Θ_1, . . . , Θ_M. Since we are interested in selecting the most important rules, i.e., those which represent strong patterns between the inputs and the output, we select rules that are shared by a large portion of trees. As described above, for each Θ_ℓ-random tree, we extract 6 rules through the associated paths. To make this selection procedure explicit, we let p_n(P) be the probability that a Θ-random tree of the forest contains a particular path P ∈ Π, that is,

p_n(P) = P(P ∈ T(Θ, D_n) | D_n).

The Monte-Carlo estimate p̂_{M,n}(P) of p_n(P), which can be directly computed using the random forest, takes the form

p̂_{M,n}(P) = (1/M) Σ_{ℓ=1}^M 1_{P ∈ T(Θ_ℓ, D_n)}.  (2.3)

Clearly, p̂_{M,n}(P) is a good estimate of p_n(P) when M is large since, by the law of large numbers, conditional on D_n, lim_{M→∞} p̂_{M,n}(P) = p_n(P) a.s.
We also see that p̂_{M,n}(P) is unbiased in the sense that E[p̂_{M,n}(P) | D_n] = p_n(P). Now, let p_0 ∈ (0, 1) be a fixed parameter, to be selected later on. As a general strategy, once the modified random forest has been built, we draw the list of all paths that appear in the forest and only retain those that occur with a frequency larger than p_0. We are thus interested in the set

P̂_{M,n,p_0} = {P ∈ Π : p̂_{M,n}(P) > p_0}.  (2.4)
We see that if M is large enough, then P̂_{M,n,p_0} is a good estimate of the set

P_{n,p_0} = {P ∈ Π : p_n(P) > p_0}.

By construction, there is some redundancy in the list of rules generated by the set of distinct paths P̂_{M,n,p_0}. The hyperrectangles associated with the 6 paths extracted from a Θ-random tree overlap, and so the corresponding rules are linearly dependent. Therefore a post-treatment to filter P̂_{M,n,p_0} is needed to make the method operational. The general idea is straightforward: if the rule associated with the path P ∈ P̂_{M,n,p_0} is a linear combination of rules associated with paths with a higher frequency in the forest, then P is removed from P̂_{M,n,p_0}. The post-treatment mechanism is fully described and illustrated in Appendix A. Note that the theoretical properties of SIRUS will only be stated for P̂_{M,n,p_0} without post-treatment. However, since the post-treatment is deterministic, all subsequent results still hold when P̂_{M,n,p_0} is post-treated (except the second part of Theorem 2; see Remark 1).
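The selection step itself is little more than frequency counting; here is a sketch in which each path is a hashable tuple of triplets (the linear-dependence post-treatment of Appendix A is deliberately omitted).

```python
from collections import Counter

def select_paths(forest_paths, p0):
    """forest_paths: one list per tree of the (at most 6) paths it contains."""
    M = len(forest_paths)
    # A path is counted at most once per tree, matching the indicator in (2.3).
    counts = Counter(p for tree in forest_paths for p in set(tree))
    return {path for path, c in counts.items() if c / M > p0}
```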
Rule aggregation Recall that our objective is to estimate the conditional probability η(x) = P(Y = 1 | X = x) with a few simple and stable rules. To reach this goal, we propose to simply average the set of elementary rules {ĝ_{n,P} : P ∈ P̂_{M,n,p_0}} that have been selected in the first step of SIRUS. The aggregated estimate η̂_{M,n,p_0} is thus defined by

η̂_{M,n,p_0}(x) = (1/|P̂_{M,n,p_0}|) Σ_{P ∈ P̂_{M,n,p_0}} ĝ_{n,P}(x).  (2.5)

Finally, the classification procedure assigns class 1 to an input x if the aggregated estimate η̂_{M,n,p_0}(x) is above a given threshold, and class 0 otherwise. In the introduction, we presented an example of a list of 6 rules for the SECOM dataset. In this case, for a new input x, η̂_{M,n,p_0}(x) is simply the average of the output p_f over the 6 selected rules.
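The final estimate is then a plain average over the selected rules, thresholded for classification; a sketch under the same illustrative conventions as above:

```python
import numpy as np

def sirus_predict(x, rules, threshold=0.5):
    """Average the selected elementary rules, as in (2.5), and threshold."""
    eta = np.mean([rule(x) for rule in rules])  # rules built as in the sketch above
    return int(eta > threshold)
```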
In past works on rule ensemble models, such as RuleFit (Friedman et al., 2008) and Node harvest (Meinshausen, 2010), rules are also extracted from a tree ensemble and then combined through a regularized linear model. In our case, it turns out that the parameter p_0 alone is enough to control sparsity. Indeed, in our experiments, we observe that adding such a linear model to our aggregation method hardly increases the accuracy and hardly reduces the size of the final rule set, while it can significantly reduce stability, adds a set of coefficients that makes the model less straightforward to interpret, and requires more intensive computations. We refer to the experiments in Appendix A for a comparison between η̂_{M,n,p_0} defined as the simple average (2.5) and its counterpart defined through a logistic regression.
Theoretical properties
The construction of the rule ensemble model essentially relies on the path selection and on the estimates p̂_{M,n}(P), P ∈ Π. Therefore, our theoretical analysis first focuses on the asymptotic properties of those estimates in Theorem 1. Among the three minimum requirements for interpretability defined in Section 1, simplicity and predictivity are quite easily met for rule models (Cohen and Singer, 1999; Meinshausen, 2010). On the other hand, as recalled above, building a stable rule ensemble is challenging. In the second part of the section, we provide a definition of stability in the context of rule models, introduce relevant metrics, and prove the asymptotic stability of SIRUS. Let us start by defining all theoretical counterparts of the empirical quantities involved in SIRUS, which do not depend on D_n but only on the unknown distribution P_{X,Y} of (X, Y). For a given integer q ≥ 2 and r ∈ {1, ..., q−1}, the theoretical q-quantiles are defined by

$$q_r^{*(j)} = \inf\{x \in \mathbb{R} : \mathbb{P}(X^{(j)} \le x) \ge r/q\},$$

i.e., the population version of q̂^(j)_{n,r} defined in (2.1). Similarly, for a given hyperrectangle H ⊆ R^p, we let the theoretical CART-splitting criterion be the population version of the empirical criterion L_n defined in (2.2), that is, the decrease in the conditional variance of Y over H induced by a split at the theoretical quantile q*_r^(j). Based on this criterion, we denote by T*(Θ) the list of all paths contained in the theoretical tree built with randomness Θ, where splits are chosen to maximize the theoretical criterion L* instead of the empirical one L_n. We stress again that the list T*(Θ) does not depend upon D_n but only upon the unknown distribution of (X, Y). Next, we let p*(P) be the theoretical counterpart of p_n(P), that is, p*(P) = P(P ∈ T*(Θ)), and finally define the theoretical set of selected paths P*_{p_0} = {P ∈ Π : p*(P) > p_0} (with the same post-treatment as for the empirical procedure; see Section 2). Notice that, in the case where multiple splits have the same value of the theoretical CART-splitting criterion, one of them is randomly selected.
As is often the case in the theoretical analysis of random forests, we assume throughout this section that the subsampling of a_n observations to build each tree is done without replacement, to alleviate the mathematical analysis. Note however that Theorem 2 is valid for subsampling with or without replacement.
Consistency of the path selection
Our consistency results hold under conditions on the subsampling rate a_n and the number of trees M_n, together with some assumptions on the distribution of the random vector X. They are given below.
(A1) The subsampling rate a_n satisfies lim_{n→∞} a_n = ∞ and lim_{n→∞} a_n/n = 0.

(A2) The number of trees M_n satisfies lim_{n→∞} M_n = ∞.

(A3) X has a strictly positive density f with respect to the Lebesgue measure. Furthermore, for all j ∈ {1, ..., p}, the marginal density f^(j) of X^(j) is continuous, bounded, and strictly positive.
We are now in a position to state the main result of this section.

Theorem 1. If Assumptions (A1)-(A3) are satisfied, then, for all P ∈ Π,

$$\lim_{n\to\infty} \hat{p}_{M_n,n}(P) = p^*(P) \quad \text{in probability.}$$
The proof of Theorem 1 can be found in the Supplementary Material A. It is however interesting to give a sketch of the proof here. The consistency is obtained by showing that p̂_{M_n,n}(P) is asymptotically unbiased with a vanishing variance. The result for the variance is quite straightforward, since the variance of p̂_{M_n,n}(P) can be broken into two terms: the variance generated by the Monte-Carlo randomization, which goes to 0 as the number of trees increases (Assumption (A2)), and the variance of p_n(P). Following Mentch and Hooker (2016), since p_n(P) is a bagged estimate, it can be seen as an infinite-order U-statistic, and a classic bound on the variance of U-statistics gives that V[p_n(P)] converges to 0 if lim_{n→∞} a_n/n = 0, which is true by Assumption (A1). Next, proving that p̂_{M_n,n}(P) is asymptotically unbiased requires diving into the internal mechanisms of the random forest algorithm. To do this, we have to show that the CART-splitting criterion is consistent (Lemma 3) and asymptotically normal (Lemma 4) when cuts are limited to empirical quantiles (estimated on the same dataset) and the number of trees grows with n. When cuts are performed on the theoretical quantiles, the law of large numbers and the central limit theorem can be directly applied, so that the proof of Lemmas 3 and 4 boils down to showing that the difference between the empirical CART-splitting criterion evaluated at empirical and theoretical quantiles converges to 0 in probability fast enough. This is done in Lemma 2, thanks to Assumption (A3).
The only source of randomness in the selection of the rules lies in the estimates p̂_{M_n,n}(P). Since Theorem 1 states the consistency of this estimation, the path selection consistency follows, as formalized in Corollary 1, for all threshold values p_0 that do not belong to the finite set U* = {p*(P) : P ∈ Π} of all theoretical probabilities of appearance of each path P. Indeed, if p_0 = p*(P) for some P ∈ Π, then P(p̂_{M_n,n}(P) > p_0) does not necessarily converge to 0 and the path selection can be inconsistent.

Corollary 1. If Assumptions (A1)-(A3) are satisfied, then, for all p_0 ∈ (0, 1) \ U*,

$$\lim_{n\to\infty} \mathbb{P}\big(\hat{\mathcal{P}}_{M_n,n,p_0} = \mathcal{P}^*_{p_0}\big) = 1.$$
Corollary 1 is a stability result, and thus a first step towards our objective of designing a stable rule ensemble algorithm. However, such an asymptotic result does not guarantee stability for finite samples. Metrics to quantify stability in that case are introduced in the next subsection.
Stability
In statistical learning theory, stability refers to the stability of predictions (e.g., Vapnik, 1998). In particular, Rogers and Wagner (1978), Devroye and Wagner (1979), and Bousquet and Elisseeff (2002) show that stability and predictive accuracy are closely connected. In our case, we are more concerned with the stability of the internal structure of the model and, to our knowledge, no general definition exists. So, we state the following tentative definition: a rule learning algorithm is stable if two independent estimations based on two independent samples result in two similar lists of rules. Thus, given a new sample D'_n independent of D_n, we define p̂'_{M,n}(P) and the corresponding set of paths P̂'_{M,n,p_0} based on a modified random forest drawn with a parameter Θ' independent of Θ. We take advantage of a dissimilarity measure between two sets, the so-called Dice-Sorensen index, often used to assess the stability of variable selection methods (Chao et al., 2006; Zucknick et al., 2008; Boulesteix and Slawski, 2009; He and Yu, 2010; Alelyani et al., 2011). This index is defined by

$$\hat{S}_{M,n,p_0} = \frac{2\,\big|\hat{\mathcal{P}}_{M,n,p_0} \cap \hat{\mathcal{P}}'_{M,n,p_0}\big|}{\big|\hat{\mathcal{P}}_{M,n,p_0}\big| + \big|\hat{\mathcal{P}}'_{M,n,p_0}\big|}, \qquad (3.1)$$

with the convention 0/0 = 1. This is a measure of stability taking values between 0 and 1: if the intersection between P̂_{M,n,p_0} and P̂'_{M,n,p_0} is empty, then Ŝ_{M,n,p_0} = 0, while if P̂_{M,n,p_0} = P̂'_{M,n,p_0}, then Ŝ_{M,n,p_0} = 1. We also define S_{n,p_0}, the population counterpart of Ŝ_{M,n,p_0} based on P_{n,p_0} and P'_{n,p_0}, as

$$S_{n,p_0} = \frac{2\,\big|\mathcal{P}_{n,p_0} \cap \mathcal{P}'_{n,p_0}\big|}{\big|\mathcal{P}_{n,p_0}\big| + \big|\mathcal{P}'_{n,p_0}\big|}.$$
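The index (3.1) is straightforward to compute from two lists of selected paths, as in the following sketch:

```python
def dice_sorensen(paths_1, paths_2):
    """Dice-Sorensen stability index between two sets of paths, with 0/0 := 1."""
    paths_1, paths_2 = set(paths_1), set(paths_2)
    if not paths_1 and not paths_2:
        return 1.0
    return 2 * len(paths_1 & paths_2) / (len(paths_1) + len(paths_2))
```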
Corollary 2. If Assumptions (A1)-(A3) are satisfied, then, for all p_0 ∈ (0, 1) \ U*, lim_{n→∞} Ŝ_{M_n,n,p_0} = 1 in probability; the case of S_{n,p_0} is similar.
Corollary 2 shows that the rule ensemble is asymptotically stable for both the infinite and finite forests, respectively corresponding to S_{n,p_0} and Ŝ_{M_n,n,p_0}. Of course, the latter case is of greater interest since only a finite forest is grown in practice. Nevertheless, an important stability requirement for SIRUS is to output the same set of rules when fitted multiple times on the same dataset D_n, for a fixed sample size n and a given p_0. This means that, conditionally on D_n and with D'_n = D_n, Ŝ_{M,n,p_0} should be close to 1. The first statement of Theorem 2 asserts that this is indeed the case: conditional on D_n (that is, with D'_n = D_n), lim_{M→∞} Ŝ_{M,n,p_0} = 1 almost surely for all p_0 ∈ (0, 1) \ U_n, where U_n := {p_n(P) : P ∈ Π} is the empirical counterpart of U*. Theorem 2 also provides an asymptotic approximation of E[Ŝ_{M,n,p_0} | D_n] for large values of the number of trees M, which quantifies the influence of M on the mean stability, conditional on D_n. This approximation is expressed through Φ(Mp_0, M, p_n(P)), the cdf of a binomial distribution with parameter p_n(P) and M trials, evaluated at Mp_0, and through, for all P, P' ∈ Π,

$$\sigma_n(P) = \sqrt{p_n(P)(1 - p_n(P))}, \qquad \rho_n(P, P') = \frac{\mathrm{Cov}\big(\mathbb{1}_{P \in T(\Theta, D_n)}, \mathbb{1}_{P' \in T(\Theta, D_n)} \mid D_n\big)}{\sigma_n(P)\,\sigma_n(P')}.$$
The proof of Theorem 2 can be found in the Supplementary Material A. Despite its apparent complexity, the asymptotic approximation of 1 − E[Ŝ_{M,n,p_0} | D_n] can be easily estimated, and plays an essential role in stopping the growing of the forest at an optimal number of trees M, as illustrated in the next section.
Remark 1. As mentioned in Section 2, the equivalent provided in Theorem 2 is stated for the case where the sets of rules P̂_{M,n,p_0} and P̂'_{M,n,p_0} are not post-treated. This considerably simplifies the analysis of the asymptotic behavior of E[Ŝ_{M,n,p_0} | D_n]. Since the post-treatment is deterministic, this operation is not an additional source of instability. Hence, if the estimation of the rule set without post-treatment is stable, so is the post-treated one. Therefore, an efficient stopping criterion for the number of trees can be derived from Theorem 2.
Tuning and experiments
We recall that our objective is to design simple, stable, and predictive rule models, with an acceptable computational cost. In practice, for a finite sample D_n, SIRUS relies on two hyperparameters: the number of trees M and the selection threshold p_0. This section provides a procedure to set optimal values for M and p_0, and illustrates the good performance of SIRUS on real datasets.
Tuning of SIRUS
Throughout this section, we should keep in mind that, in SIRUS, the random forest is only involved in the selection of the paths. Conditionally on D_n, the set of selected paths P̂_{M,n,p_0} = {P ∈ Π : p̂_{M,n}(P) > p_0} is a good estimate of its population counterpart P_{n,p_0} when M is large.
Tuning of M to maximize stability. As explained in Section 3, an important stability requirement is that SIRUS outputs the same set of rules when fitted multiple times on a given dataset D_n. This is quantified by the mean stability E[Ŝ_{M,n,p_0} | D_n], which measures the expected proportion of rules shared by two fits of SIRUS on D_n, for fixed n (sample size), p_0 (threshold), and M (number of trees). Since the computational cost increases linearly with M, we propose to stop the growing of the forest when the mean stability is close enough to 1, typically with a gap smaller than α = 0.05. Thus, the stopping criterion takes the form 1 − E[Ŝ_{M,n,p_0} | D_n] < α.
There are two obstacles to operationalizing this stopping criterion: its estimation and its dependence on p_0. We make two approximations to overcome these limitations and give empirical evidence of their good practical behavior. First, Theorem 2 provides an asymptotic equivalent of 1 − E[Ŝ_{M,n,p_0} | D_n], which we simply estimate by

$$\hat{\varepsilon}_{M,n,p_0} = \frac{\sum_{P \in \Pi} \Phi\big(Mp_0, M, \hat{p}_{M,n}(P)\big)\big(1 - \Phi(Mp_0, M, \hat{p}_{M,n}(P))\big)}{\sum_{P \in \Pi} \big(1 - \Phi(Mp_0, M, \hat{p}_{M,n}(P))\big)}.$$
Secondly, ε̂_{M,n,p_0} depends on p_0, whose optimal value is unknown in the first step of SIRUS, when trees are grown. It turns out however that ε̂_{M,n,p_0} is not very sensitive to p_0, as shown by the experiments of Figure 7 in Appendix A. Consequently, our strategy is to simply average ε̂_{M,n,p_0} over a set V̂_{M,n} of many possible values of p_0 (see Appendix A for a precise definition) and use the resulting average as a gauge. Thus, in the experiments, we use the following criterion to stop the growing of the forest, with typically α = 0.05:

$$\frac{1}{|\hat{V}_{M,n}|}\sum_{p_0 \in \hat{V}_{M,n}} \hat{\varepsilon}_{M,n,p_0} < \alpha. \qquad (4.1)$$

Experiments showing the good empirical performance of this criterion are presented in Appendix A.
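Both the estimate ε̂_{M,n,p_0} and the stopping rule (4.1) are cheap to evaluate from the path frequencies, for instance as follows (a sketch; scipy's binomial cdf plays the role of Φ):

```python
import numpy as np
from scipy.stats import binom

def epsilon_hat(M, p0, p_hat):
    """Estimate of 1 - E[S_{M,n,p0} | D_n], with p_hat an array of the
    frequencies p_hat_{M,n}(P) of the paths observed in the forest."""
    phi = binom.cdf(M * p0, M, p_hat)
    return np.sum(phi * (1.0 - phi)) / np.sum(1.0 - phi)

def stop_growing(M, p_hat, p0_grid, alpha=0.05):
    """Stopping rule (4.1): stop once the mean of epsilon_hat over the p0 grid
    falls below alpha."""
    return np.mean([epsilon_hat(M, p0, p_hat) for p0 in p0_grid]) < alpha
```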
Remark 2. We emphasize that growing more trees does not improve predictive accuracy or stability with respect to data perturbation for a fixed sample size n. Indeed, the instability of the rule selection is generated by the variance of the estimates p̂_{M,n}(P), P ∈ Π. Upon noting that we have two sources of randomness, Θ and D_n, the law of total variance shows that V[p̂_{M,n}(P)] can be broken down into two terms: the variance generated by the Monte-Carlo randomness Θ on the one hand, and the sampling variance on the other hand. In fact, equation (1.3) in the proof of Theorem 1 (Supplementary Material A) reveals that

$$\mathbb{V}[\hat{p}_{M,n}(P)] = \frac{1}{M}\,\mathbb{E}\big[\mathbb{V}[\mathbb{1}_{P \in T(\Theta, D_n)} \mid D_n]\big] + \mathbb{V}[p_n(P)].$$

The stopping criterion (4.1) ensures that the first term becomes negligible as M → ∞, so that V[p̂_{M,n}(P)] reduces to the sampling variance V[p_n(P)], which is independent of M. Therefore, stability with respect to data perturbation cannot be further improved by increasing the number of trees. Additionally, the trees are only involved in the selection of the paths. For a given set of paths P̂_{M,n,p_0}, the construction of the final aggregated estimate η̂_{M,n,p_0} (see (2.5)) is independent of the forest. Thus, if further increasing the number of trees does not impact the path selection, neither does it improve the predictive accuracy.
Tuning of p_0 to maximize accuracy. The parameter p_0 is a threshold involved in the definition of P̂_{M,n,p_0} to filter the most important rules, and therefore determines the complexity of the model. It should be set to optimize a tradeoff between the number of rules, stability, and accuracy. In practice, it is difficult to settle on such a criterion, and we choose to optimize p_0 to maximize the predictive accuracy with the smallest possible set of rules. To achieve this goal, we proceed as follows. The quantity 1-AUC is estimated by 10-fold cross-validation for a fine grid of p_0 values, defined such that |P̂_{M,n,p_0}| varies from 1 to 25 rules. (We let 25 be an arbitrary upper bound on the maximum number of rules, considering that a bigger set is no longer readable.) The randomization introduced by the partition of the dataset into the 10 folds of the cross-validation process has a significant impact on the variability of the size of the final model. Therefore, in order to get a robust estimation of p_0, the cross-validation is repeated multiple times (typically 30) and the results are averaged. The standard deviation of the mean of 1-AUC is computed over these repetitions for each p_0 of the grid search. We consider that all models within 2 standard deviations of the minimum of 1-AUC are not significantly less predictive than the optimal one. Thus, among these models, the one with the smallest number of rules is selected, i.e., the optimal p_0 is shifted towards higher values to reduce the model size without decreasing predictivity; see Figures 3 and 4 for examples, and the sketch below for the selection rule itself.
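A minimal sketch of this selection rule, assuming the cross-validated 1-AUC means and standard deviations have already been computed for each candidate p_0:

```python
import numpy as np

def select_p0(p0_grid, mean_err, sd_err, n_rules):
    """Sparsest model within 2 standard deviations of the minimum 1-AUC.
    mean_err, sd_err: cross-validated mean and sd of 1-AUC per p0 value;
    n_rules: corresponding numbers of selected rules."""
    mean_err, sd_err, n_rules = map(np.asarray, (mean_err, sd_err, n_rules))
    best = np.argmin(mean_err)
    admissible = np.flatnonzero(mean_err <= mean_err[best] + 2 * sd_err[best])
    # among admissible models, keep the one with the fewest rules
    return p0_grid[admissible[np.argmin(n_rules[admissible])]]
```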
Experiments
We have conducted experiments on 9 diverse public datasets from the UCI repository (Dua and Graff, 2017; the data is described in Table 1), as well as on the SECOM data, collected from a semi-conductor manufacturing process. The first batch of experiments aims at illustrating the good behavior of SIRUS in various settings. In particular, we observe that the restrictions in the forest growing (cut values on quantiles and a tree depth of two) are not strong limitations, and that SIRUS provides a substantial improvement in stability compared to state-of-the-art rule algorithms. On the other hand, the SECOM dataset is an example of a manufacturing process problem. Typically, the data is unbalanced (since most of the production is valid), hundreds of parameters are collected along the production line, many of them noisy, and the order of interaction between these parameters is low. We use the R/C++ software implementation sirus (available from CRAN), adapted from ranger, a fast random forest implementation (Wright and Ziegler, 2017). The hyperparameters M and p_0 are tuned as explained earlier, and we set mtry = ⌊p/3⌋ and q = 10 quantiles. Bootstrap is used for the resampling mechanism, i.e., resampling is done with replacement and a_n = n. Finally, categorical variables are transformed into multiple binary variables.
Performance metrics. As we have seen several times, an interpretable classifier is based on three essential features: simplicity, stability, and predictive accuracy. We introduce relevant metrics to assess these properties in the experiments. By definition, the size (i.e., the simplicity) of the rule ensemble is the number of selected rules, i.e., |P̂_{M,n,p_0}|. To measure the predictive accuracy, 1-AUC is used and estimated by 10-fold cross-validation (repeated 30 times for robustness). With respect to stability, an independent dataset is not available for real data to compute Ŝ_{M,n,p_0} as defined in Corollary 2 in Subsection 3.2. Nonetheless, we can take advantage of the cross-validation process to compute a stability metric: the proportion of rules shared by two models built during the cross-validation, averaged over all possible pairs.

UCI datasets. SIRUS is now run on the 9 selected UCI datasets. Figure 3 provides an example, for the dataset "Credit German", of the dependence between predictivity and the number of rules when p_0 varies. In that case, the minimum of 1-AUC is about 0.26 for SIRUS, 0.21 for Breiman's forests, and 0.31 for a CART tree. For the chosen p_0, SIRUS returns a compact set of 18 rules and its stability is 0.66, i.e., about 12 rules are consistent between two different models built in a 10-fold cross-validation. Thus, the final model is simple (a set of only 18 rules), is quite robust to data perturbation, and has a predictive accuracy close to random forests. Figure 4 provides another example of the good practical performance of SIRUS with the "Heart Statlog" dataset. Here, the predictivity of random forests is reached with 11 rules, with a stability of 0.69. We also evaluated the main competitors: CART, RuleFit, Node harvest, and BRL, using the available R implementations, respectively rpart (Therneau et al., 2018), pre (Fokkema, 2017), nodeharvest (Meinshausen, 2015), and sbrl (Yang et al., 2017). All algorithms were run with their default settings (CART trees are pruned, RuleFit is limited to rule predictors). To compare the stability of the different methods, the data is binned with 10-quantiles, so that the possible rules are the same for all algorithms, and the same stability metric is used. Experimental results are gathered in Table 2 for model size, Table 3 for stability, and Table 4 for predictive accuracy.
Clearly, SIRUS is more stable than its competitors. We see that BRL exhibits comparable stability for a few datasets and generates shorter sets of rules, but at the price of a weaker predictive accuracy. RuleFit and Node harvest have a slightly better predictive accuracy than SIRUS, but they are unstable and generate longer sets of rules. Overall, the general conclusion of this first batch of experiments is that SIRUS improves stability with a predictive accuracy comparable to state-of-the-art methods.

Manufacturing process data. In this second batch of experiments, SIRUS is run on a real manufacturing process of semi-conductors, the SECOM dataset (Dua and Graff, 2017). Data is collected from sensors and process measurement points to monitor the production line, resulting in 590 numeric variables. Each of the 1567 data points represents a single production entity associated with a pass/fail (0/1) label for in-house line testing. As is always the case for a production process, the dataset is unbalanced and contains 104 fails, i.e., a failure rate p_f of 6.6%. We proceed to a simple pre-processing of the data: missing values (about 5% of the total) are replaced by the median. The threshold p_0 and the number of trees are tuned as previously explained. Figure 5 displays predictivity versus the number of rules when p_0 varies. The 1-AUC value is 0.30 for SIRUS (for the optimal p_0 = 0.04), 0.29 for Breiman's random forests, and 0.48 for a pruned CART tree. Thus, in that case, the CART tree predicts no better than the random classifier, whereas SIRUS has an accuracy similar to random forests. The final model has 6 rules and a stability of 0.74, i.e., on average 4 to 5 rules are shared by 2 models built in a 10-fold cross-validation process, simulating data perturbation. By comparison, Node harvest outputs 34 rules with a value of 0.31 for 1-AUC.
Finally, the output of SIRUS may be displayed in the simple and interpretable form of Figure 6. Such a rule model makes it possible to grasp immediately how the most relevant variables impact failures. Among the 590 variables, 5 are enough to build a model as predictive as random forests, and this selection is quite robust. Other rules alone may also be informative, but they do not add information to the model, since the error (1-AUC) is already minimal with the 6 selected rules. Production engineers should therefore first focus on those 6 rules to investigate an improved parameter setting.
A Additional experiments and settings
This appendix specifies computational settings and provides additional experiments on the nine UCI datasets used in Section 4; see Table 1.
Rule set post-treatment. As explained in Section 2, there is some redundancy in the list of rules generated by the set of distinct paths P̂_{M,n,p_0}, and a post-treatment to filter P̂_{M,n,p_0} is needed to make the method operational. The general principle is straightforward: if the rule associated with the path P ∈ P̂_{M,n,p_0} is a linear combination of rules associated with paths with a higher frequency in the forest, then P is removed from P̂_{M,n,p_0}.
To illustrate the post-treatment, let the tree of Figure 2 be the Θ_1-random tree grown in the forest. Since the paths of the first level of the tree, P_1 and P_2, always occur in the same trees, we have p̂_{M,n}(P_1) = p̂_{M,n}(P_2). If we assume these quantities to be greater than p_0, then P_1 and P_2 belong to P̂_{M,n,p_0}. However, by construction, P_1 and P_2 are associated with the same rule, and we therefore enforce SIRUS to keep only P_1 in P̂_{M,n,p_0}. Each of the paths of the second level of the tree, P_3, P_4, P_5, and P_6, can occur in many different trees, and their associated p̂_{M,n} are distinct (except in very specific cases). Assume, for example, that p̂_{M,n}(P_1) > p̂_{M,n}(P_4) > p̂_{M,n}(P_5) > p̂_{M,n}(P_3) > p̂_{M,n}(P_6) > p_0. Since ĝ_{n,P_3} is a linear combination of ĝ_{n,P_4} and ĝ_{n,P_1}, P_3 is removed. Similarly, P_6 is redundant with P_1 and P_5, and it is therefore removed. Finally, among the six paths of the tree, only P_1, P_4, and P_5 are kept in the list P̂_{M,n,p_0}.

Random forest accuracy. As described in Section 2, in the forest construction of SIRUS, the splits at each node of each tree are limited to the empirical q-quantiles of each component of X. We first check that this modification of the forest alone has little impact on its accuracy. Using the R package ranger, 1-AUC is estimated for each dataset with 10-fold cross-validation for q = 10. Results are averaged over 10 repetitions of the cross-validation; the standard deviation is displayed in parentheses in Table 5.
Definition of V̂_{M,n}. To design the stopping criterion (4.1) for the number of trees, ε̂_{M,n,p_0} is averaged across a set V̂_{M,n} of diverse p_0 values. These p_0 values are chosen to scan all possible path sets P̂_{M,n,p_0}, of size ranging from 1 to 50 paths. When a set of 50 paths is post-treated, its size reduces to around 25 paths. Thus, as explained in Section 4, 25 is an arbitrary threshold on the maximum number of rules above which a rule model is no longer readable. In order to generate path sets of such sizes, p_0 values are chosen halfway between two distinct consecutive values of p̂_{M,n}(P), P ∈ Π, restricted to the 50 highest values.
Number of trees
We run experiments on the UCI datasets to assess the quality of the stopping criterion (4.1). Recall that the goal of this criterion is to determine the minimum number of trees M ensuring that two independent fits of SIRUS on the same dataset result in two lists of rules with an average overlap of 95%. This is checked in a first batch of experiments; see the next paragraph. Secondly, the stopping criterion (4.1) does not consider the optimal p_0, which is unknown when trees are grown in the first step of SIRUS. Another batch of experiments is therefore run to show that the stability approximation 1 − ε̂_{M,n,p_0} is quite insensitive to p_0. Finally, a last batch of experiments provides examples of the number of trees grown when SIRUS is fit.
Experiments 1. For each dataset, the following procedure is applied. SIRUS is run a first time using criterion (4.1) to stop the growing of the forest. This initial run provides the optimal number of trees M as well as the set V̂_{M,n} of possible p_0 values. Then, SIRUS is fit twice independently using the precomputed number of trees M. For each p_0 ∈ V̂_{M,n}, the stability metric Ŝ_{M,n,p_0} (with D'_n = D_n) is computed over the two resulting lists of rules. Finally, Ŝ_{M,n,p_0} is averaged across all p_0 values in V̂_{M,n}. This procedure is repeated 10 times: results are averaged and presented in Table 6, with standard deviations in parentheses. Across the considered datasets, the resulting values range from 0.941 to 0.955, and are thus close to 0.95, as expected by construction of criterion (4.1).

Experiments 2. The second type of experiments illustrates that ε̂_{M,n,p_0} is quite insensitive to p_0 when M is set with criterion (4.1). For the "Credit German" dataset, we fit SIRUS and then compute 1 − ε̂_{M,n,p_0} for each p_0 ∈ V̂_{M,n}. Results are displayed in Figure 7. The quantity 1 − ε̂_{M,n,p_0} ranges from 0.90 to 1, where the extreme values are reached for p_0 values corresponding to very small numbers of rules, which are not of interest when p_0 is selected to maximize predictive accuracy. Thus, 1 − ε̂_{M,n,p_0} is quite concentrated around 0.95 when p_0 varies.

Experiments 3. Finally, we display in Table 7 the optimal number of trees when the growing of SIRUS is stopped using criterion (4.1). It ranges from 4220 to 20 650 trees. In Breiman's forests, the number of trees above which the accuracy cannot be significantly improved is typically 10 times lower. However, SIRUS grows shallow trees, and is thus not computationally more demanding than random forests overall.
Logistic regression. In Section 2, η̂_{M,n,p_0}(x) (2.5) is defined as a simple average of the set of selected rules. An alternative is to aggregate the rules through a logistic regression, i.e., to define η̂_{M,n,p_0} by

$$\mathrm{logit}\big(\hat{\eta}_{M,n,p_0}(x)\big) = \beta_0 + \sum_{P \in \hat{\mathcal{P}}_{M,n,p_0}} \beta_P\, \hat{g}_{n,P}(x), \qquad (A.2)$$

where the coefficients β_P have to be estimated. To illustrate the performance of the logistic regression (A.2), we consider again the UCI dataset "Credit German". We augment the previous results from Figure 3 (in Section 4) with the logistic regression error in Figure 8. One can observe that the predictive accuracy is slightly improved, but this comes at the price of an additional set of coefficients that can be hard to interpret (some can be negative), and an increased computational cost.
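For completeness, a sketch of the logistic variant using scikit-learn; the design matrix collects the outputs of the selected rules at the training points, and the fitted coefficients play the role of the β_P in (A.2):

```python
from sklearn.linear_model import LogisticRegression

def fit_rule_logistic(rule_matrix, y):
    """rule_matrix: n x K array whose K columns are the rule outputs
    g_{n,P}(X_i) for the selected paths; y: binary labels."""
    return LogisticRegression(max_iter=1000).fit(rule_matrix, y)

# usage sketch: class-1 probabilities for new inputs, given their rule outputs
# proba = fit_rule_logistic(R_train, y_train).predict_proba(R_new)[:, 1]
```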
Supplementary Material
Proofs of Theorems 1 and 2 are available in Supplementary Material for: SIRUS: Making random forests interpretable.
APPROACHES TO ANTIBIOTIC THERAPY IN PATIENTS WITH CALCULOUS PYELONEPHRITIS, UNDERGOING IN-PATIENT TREATMENT IN THE DEPARTMENT OF UROLOGY
Urolithiasis is one of the most common urologic diseases; it is found in more than 3% of the population of Russia and is complicated by calculous pyelonephritis in 43-81% and up to 100% of cases. Knowledge of the main bacteria usually involved in patients with calculous pyelonephritis, and of their antimicrobial susceptibility, is necessary for appropriate empirical therapy and for preventing the emergence of antibiotic resistance. The main pathogen in patients with calculous pyelonephritis undergoing treatment in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2013-2015 was Escherichia coli, present in 36.8% of the isolates, followed by Klebsiella species in 18.1%, Enterobacter species in 16.9%, and Proteus species in 8.8% of the isolates. All isolates were susceptible to carbapenems. Sensitivity to cephalosporins ranged from 48.5% to 41.8% of the cases, to fluoroquinolones from 32.4% to 24.5% of the cases, and to co-trimoxazole from 27.9% to 30.84% of the cases in 2013-2015. An increase in aminoglycoside activity was found: strains sensitive to amikacin were isolated in 67.6%, 86.1%, and 84.7% of the cases, and strains sensitive to gentamicin in 44.3%, 53.5%, and 55.2% of the cases in 2013, 2014, and 2015, respectively. A highly effective agent was fosfomycin, which showed activity in 79.3%, 84.4%, and 80.4% of the cases in 2013, 2014, and 2015, respectively. The obtained data show that amikacin, fosfomycin, piperacillin/tazobactam, cefoperazone/sulbactam, and carbapenems can be used for empirical therapy in patients with calculous pyelonephritis undergoing treatment in the department of urology of St. Joseph Belgorod Regional Clinical Hospital.
Urolithiasis is one of the most common urologic diseases and is found in more than 3% of the population of Russia [1]; it is complicated by calculous pyelonephritis in 43-81% and up to 100% of cases [2]. Currently, despite the relatively well-studied etiological structure of the causative agents of pyelonephritis, the treatment of this disease remains a pressing issue, which is associated with the rapid growth of pathogens resistant to antimicrobial agents. This has rendered previously rational treatment regimens ineffective [3,4,5].
The irrational choice of antibacterial agents in patients with urinary tract infections leads not only to serious medical (increased incidence, recurrence, complications) and economic (growth of health care costs, increased duration of temporary disability) consequences, but also to social (deterioration of quality of life) and environmental (growth of antibiotic resistance of microorganisms) ones [6]. This makes it necessary to conduct epidemiological studies to define the structure of isolated microorganisms and to determine their sensitivity in patients with calculous pyelonephritis.
Objectives of the study:
1. To study the structure of pathogens and their sensitivity to antibiotics in patients with calculous pyelonephritis undergoing treatment in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2013-2015.
2. To determine the antibacterial chemotherapeutic agents for initial and etiological antibiotic therapy of patients with calculous pyelonephritis in the department of urology of St. Joseph Belgorod Regional Clinical Hospital.
Materials and methods:
This was a retrospective pharmacoepidemiological analysis of medical records. Medical histories of male and female patients over 18 years of age with a diagnosis of "calculous pyelonephritis" were included when samples showed significant growth according to the criteria recommended by the Russian Society of Urology [7].
The medical histories of 736 patients (42.2% men and 57.8% women; mean age 56.38 ± 6.8 years) with calculous pyelonephritis, treated in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2013-2015, were analyzed.
All patients underwent standard clinical examination, with mandatory bacteriological urine analysis and ultrasound examination of the kidneys. The clinical material for the study was a midstream portion of morning urine, or urine obtained after drainage of a catheter-stent, a draining ureteral catheter, or nephrostomy drainage. Samples that showed growth of more than one type of organism, or had evidence of perineal contamination, were not included in the analysis.
Microorganisms were identified from the urine of patients with calculous pyelonephritis treated in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2013-2015. Susceptibility testing was done by the disk diffusion method and interpreted according to the EUCAST criteria. Statistical analyses were performed using the "Statistica 10.0" applied statistical software package.
Results and discussion.
During the period 2013-2015, a total of 497 strains were detected in the urine of patients with calculous pyelonephritis treated in the department of urology of St. Joseph Belgorod Regional Clinical Hospital (Table 1).
It was found that the most frequently isolated bacteria were Gram-negative, represented by strains of Enterobacteriaceae. The most common uropathogen was Escherichia coli, present in 36.8% of the cases, followed by Klebsiella species in 18.1% of the cases and Enterobacter species in 16.9% of the cases. Proteus species were detected in 8.8% of the cases and Citrobacter species in 5.1% of the cases. Pseudomonas species were detected in 5.1% of the cases and Acinetobacter species in 2.6% of the cases (Table 1). Among Gram-positive bacteria, the most frequent pathogen was Enterococcus spp., identified in 6.5% of the total number of strains isolated from the urine of patients with calculous pyelonephritis treated in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2013-2015, which is less than the level obtained by other Russian researchers [8].
Due to the dominance of E. coli and other Enterobacteriaceae strains in the etiological structure of pathogens in calculous pyelonephritis, of greatest interest are the data on the overall sensitivity of all isolated Enterobacteriaceae and the separate sensitivity data for E. coli, Klebsiella spp., and Enterobacter spp.
Large proportions of E. coli were found to be resistant to ampicillin and amoxicillin-clavulanate in 2013-2015. An increase in sensitivity to aminoglycosides was revealed: in 2013, sensitivity to amikacin and gentamicin amounted to 55.6% and 51.8% of cases, respectively. In 2014 and 2015, susceptibility to amikacin amounted to 86.4% and 87.5% of the cases, respectively. The activity of gentamicin was slightly lower: it was identified in 63.6% and 70.8% of the strains in 2014 and 2015, respectively.
A low level of susceptibility to fluoroquinolones was registered, ranging from 31.8% to 37.1% of the cases for ciprofloxacin and from 29.1% to 37.1% of the cases for levofloxacin in 2013-2015. The sensitivity of E. coli to co-trimoxazole amounted to 37.1%, 36.4%, and 33.3% of the isolates in 2013, 2014, and 2015, respectively. Fosfomycin showed a high level of activity, reaching 81.5%, 86.3%, and 79.2% of the isolates in 2013, 2014, and 2015, respectively.
The findings suggest a rise in the level of resistance of E. coli to penicillins, cephalosporins, and fluoroquinolones, which is consistent with data obtained by domestic authors for 2009-2013. However, E. coli isolated from patients treated at the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2014-2015 retained a substantially high sensitivity to amikacin, carbapenems, and fosfomycin [8]. Comparison of the results obtained in the National Multicenter Surveillance Study «MARATHON» with the susceptibility data from the department of urology of St. Joseph Belgorod Regional Clinical Hospital revealed higher susceptibility to ampicillin, amoxicillin/clavulanate, cephalosporins, carbapenems, and aminoglycosides, a slightly higher susceptibility to fluoroquinolones and co-trimoxazole, and a lower sensitivity to fosfomycin [9].
Compared with the data of foreign researchers, the level of resistance of E. coli strains registered in patients with urinary tract infections during inpatient treatment is below the level of resistance identified by researchers at the Veterans Hospital in Boston (USA) and at the Al Zahra hospital (Iran), but exceeds the level of resistance reported in a clinical hospital in Dublin (Ireland) [10,11,12]. Analysis of the susceptibility of extended-spectrum β-lactamase-producing E. coli isolated from the urine of patients with calculous pyelonephritis treated in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2015 registered sensitivity of isolates to carbapenems in 100% of the cases, to gentamicin in 40% of the cases, to amikacin in 90% of the cases, to ciprofloxacin in 10% of the cases, to levofloxacin in 10% of the cases, and to co-trimoxazole in 10% of the cases. Korean researchers have identified a slightly higher sensitivity of E. coli to these antibiotics: sensitivity to ciprofloxacin was identified in 20.7% of the strains, to levofloxacin in 22.7% of the strains, to amikacin in 94.1% of the strains, to co-trimoxazole in 34.3% of the strains, and to fosfomycin in 87.7% of the strains [14].
Isolates of Klebsiella spp. revealed a high level of resistance to penicillins and cephalosporins. All of the isolates were resistant to ampicillin. Sensitivity to amoxicillin/clavulanate decreased from 26.3% of susceptible strains in 2013 to 0% in 2015. The number of strains of Klebsiella spp. sensitive to cephalosporins was 36.8% in 2013; in 2014-2015, a decrease in sensitivity to 27.3-28.5% was noted. All isolates of Klebsiella spp. were sensitive to carbapenems. Low sensitivity to fluoroquinolones and co-trimoxazole was registered: 26.3%, 18.2%, and 28.5% of the strains were sensitive to ciprofloxacin and levofloxacin in 2013, 2014, and 2015, respectively. The level of sensitivity to co-trimoxazole was 16.6-28.5% of susceptible strains in 2013-2015, unlike sensitivity to fosfomycin, which was 73.7%, 81.8%, and 71.4% in 2013, 2014, and 2015, respectively. Low sensitivity to gentamicin was shown, varying from 36.8% to 45.5%, in contrast to amikacin, to which sensitivity was 73.7%, 90.9%, and 85.7% in 2013, 2014, and 2015, respectively. This is consistent with the data of researchers from the United States [15], but below the level of resistance identified by Indian researchers: it has been shown that Klebsiella spp. isolated from urine samples of patients hospitalized in a medical college hospital in Bangalore (India) in 2012 registered a lower sensitivity to ampicillin, cephalosporins, aminoglycosides, and fluoroquinolones, and the level of sensitivity of Klebsiella spp. strains to carbapenems was 67.9% [16].
In comparison with the results obtained in the National Multicenter Surveillance Study «MARATHON», we found a higher sensitivity of the isolated strains of Klebsiella spp. to cefotaxime, ceftazidime, cefepime, gentamicin, and amikacin, with comparable susceptibility to carbapenems, fluoroquinolones, and fosfomycin, in the department of urology of St. Joseph Belgorod Regional Clinical Hospital [9].
The frequency of extended-spectrum β-lactamase-producing Klebsiella spp. isolated from the urine of patients with calculous pyelonephritis in the urology department of St. Joseph Belgorod Regional Clinical Hospital in 2015 reached 71.4%, and the isolated strains showed a low susceptibility to the tested antibiotics compared with the registered strains of extended-spectrum β-lactamase-producing E. coli. Sensitivity of Klebsiella spp. isolates was registered to carbapenems in 100% of the cases, to gentamicin in 40% of the cases, to amikacin in 90% of the cases, to ciprofloxacin in 10% of the cases, to levofloxacin in 10% of the cases, to co-trimoxazole in 10% of the cases, and to fosfomycin in 70% of the cases. These data are consistent with those of Korean researchers [14].
Large proportions of the isolated strains of Enterobacter spp. were found to be resistant to penicillins and cephalosporins. All strains of Enterobacter spp. isolated in 2013-2015 were resistant to ampicillin and amoxicillin/clavulanate. An increase in the level of resistance of Enterobacter spp. was registered: sensitivity to cephalosporins was recorded in 46.6%, 16.6%, and 21.4% of the strains in 2013, 2014, and 2015, respectively. Strains of Enterobacter spp. isolated in 2013-2015 showed a higher level of resistance to cephalosporins compared with the isolated strains of E. coli and Klebsiella spp. All isolates were susceptible to carbapenems. Analysis of the sensitivity of Enterobacter spp. established low sensitivity to fluoroquinolones and co-trimoxazole. Amikacin was active in 73.3%, 66.6%, and 73.3% of the cases, and gentamicin in 33.3%, 33.3%, and 28.5% of the cases in 2013, 2014, and 2015, respectively. Susceptibility to ciprofloxacin did not exceed 26.2% of the strains, and to levofloxacin, 20.0% of the strains. The isolated strains of Enterobacter spp. were susceptible to co-trimoxazole in 26.2%, 16.6%, and 35.7% of the cases in 2013, 2014, and 2015, respectively. Fosfomycin showed high activity, in 86.6%, 66.6%, and 83.3% of the isolates in 2013, 2014, and 2015, respectively.
The findings of a high level of resistance of Enterobacter spp. are consistent with those of Russian and foreign authors [9,17].
The identified strains of Enterobacter spp. showed the highest level of resistance among the Enterobacteriaceae. Out of all strains of Enterobacter spp., 78.6% were registered as extended-spectrum β-lactamase-producing strains in patients with calculous pyelonephritis in the department of urology of St. Joseph Belgorod Regional Clinical Hospital in 2015. The most active antibacterial agents were carbapenems, with a sensitivity of 100% of the cases. Susceptibility to amoxicillin/clavulanate was registered in 0% of the cases, to gentamicin in 27.2% of the cases, to amikacin in 63.6% of the cases, to ciprofloxacin in 18.2% of the cases, to levofloxacin in 9.1% of the cases, to co-trimoxazole in 0% of the cases, and to fosfomycin in 72.7% of the cases. These data exceed the level of resistance identified by other researchers studying the spectrum of pathogens and their antibiotic sensitivity.
Consumer Preference towards Branded and Unbranded Honey in Tamil Nadu, India
Introduction
Honey is a sweet, flavorful liquid obtained from the nectar of flowers collected by bees; it has high nutritional value and many health benefits. Earlier, honey was collected from wild forests, but owing to its wide range of potential uses and huge demand, honey production has now been domesticated.
The honey industry can be classified as organized and unorganized; the organized industry has well-established brands with large-scale production and marketing. Unorganized honey is usually produced by a local beekeeper or vendor who produces it in small quantities and sells it locally, usually under the shop name or the product name rather than a company name. The Indian honey market is estimated at around 2000 crores, of which the branded honey market is estimated at around 700-800 crores (www.fnbnews.com). China is the topmost producer of honey in the world, and India ranks sixth in global production. The honeybee species in India include Apis mellifera (European or Italian bee) and Apis cerana indica (Indian hive bee), and exotic species have also been introduced. The major honey producing states in India are Uttar Pradesh, West Bengal, Punjab, Bihar, Rajasthan, Himachal Pradesh, Kerala, etc. Tamil Nadu is one of the largest honey producers in India, with an annual production of about 1820 MT (Source: www.indiastat.com). The majority of honey production takes place in Jamunamaruthur in Thiruvannamalai district and Marthandam in Kanyakumari district. The major sources for honey production in Tamil Nadu are Cardamom, Cashew, Tamarind, Rubber, Forest flora, Jamun, etc.
Some of the health benefits of honey are that it helps in delaying ageing and wound healing, acts as an energizer, helps overcome fatigue, and has antibacterial and antioxidant properties [3]. Honey has been used throughout the world since ancient times, as it is safe and suggestive of good health for all age groups.

The quality of honey varies with the type and variety of plants and also according to the climatic conditions. Some of the medicinal uses of honey are [2]:
- Treats gastroenteritis
- Cures gastric ulcers by reducing the secretion of gastric acid
- Heals wounds
- Remedy for stomach disorders
Honey is also widely used in sweeteners, food additives, and cosmetics, and in the preparation of bakery products, the preservation of fruits and vegetables, etc. The nutritional value of honey is provided in Table 1, which indicates that the main components of honey are carbohydrates, sugars, and water.
Materials and Methods
Primary data for the study were collected from consumers through an online survey with a structured and detailed questionnaire; the total sample size of the study was 301 consumers. The collected information was analyzed using percentage analysis, as sketched below.
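As an illustration of the percentage analysis, the following sketch reproduces the honey-type shares reported in the results; the respondent counts are reconstructed from the published percentages and the sample size of 301, and are therefore approximate:

```python
import pandas as pd

# hypothetical raw responses: one row per respondent
responses = pd.DataFrame({
    "honey_type": ["Branded"] * 124 + ["Wild/Tribal"] * 104
                  + ["Unbranded"] * 54 + ["Imported"] * 19
})
shares = responses["honey_type"].value_counts(normalize=True).mul(100).round(1)
print(shares)  # Branded 41.2, Wild/Tribal 34.6, Unbranded 17.9, Imported 6.3
```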
Results and Discussion
The study was undertaken with the primary objective of understanding the preference for different brands of honey among the sample consumers. The types of honey available in the market include branded honey, unbranded honey, imported honey, and wild/tribal honey.
The respondents were asked to choose the type of honey purchased and the brands preferred. Table 2 shows that the majority of the respondents, about 41.2 per cent, preferred branded honey; 17.9 per cent preferred non-branded honey; 6.3 per cent preferred imported honey; and 34.6 per cent preferred wild/tribal honey.
Consumers buy honey from a wide range of brands such as Dabur honey, Lion honey, Himalaya forest honey, Patanjali honey, 24 Mantra honey, Vibis honey, local branded honey, Zandu pure honey, Apis Himalaya honey, etc. Table 3 shows that the largest share of respondents, about 20.3 per cent, preferred Dabur honey, followed by Patanjali honey (13.6 per cent) and Lion honey (13.3 per cent); 6.6 per cent of the respondents preferred other locally available brands. Around 12.3 per cent did not prefer branded honey, and 19.6 per cent of the respondents did not have any specific brand preference. Beekeeping's untapped potential for increasing opportunities, steady employment, and income in rural areas is yet to be utilized. From the study it was clear that the majority of the respondents preferred branded honey, followed by wild/tribal honey, and most of them preferred Dabur honey from the available brands. Presently, consumers have become health conscious and prefer quality products. Farmers can explore ways to enhance their income through honey production and market their produce by forming farmer producer companies.
Studies on Incorporation of Mg in Zr-Based AB2 Metal Hydride Alloys
Mg, the A-site atom in C14 (MgZn2), C15 (MgCu2), and C36 (MgNi2) Laves phase alloys, was added to the Zr-based AB2 metal hydride (MH) alloy during induction melting. Due to the high melting temperature of the host alloy (>1500 °C) and the high volatility of Mg in the melt, the Mg content of the final ingot is limited to 0.8 at%. A new Mg-rich cubic phase was found in the Mg-containing alloys with a small phase abundance, which contributes to a significant increase in hydrogen storage capacities, the degree of disorder (DOD) in the hydride, the high-rate dischargeability (HRD), and the charge-transfer resistances at both room temperature (RT) and −40 °C. This phase also facilitates the activation process in the measurement of electrochemical discharge capacity. Moreover, through a correlation study, the Ni content was found to be detrimental to the storage capacities, while the Ti content was found to be more influential in HRD and charge-transfer resistance in this group of AB2 metal hydride (MH) alloys.
Introduction
Laves phase-based AB2 metal hydride (MH) alloy is one of the high-capacity negative electrode materials used in nickel/metal hydride (Ni/MH) batteries. Its reversible hydrogen storage capacity can be as high as 3 wt% [1], which is equivalent to an electrochemical capacity of 804 mAh g−1.
The measured electrochemical discharge capacity can reach up to 436 mAh g−1 [2], which is about 25% higher than that of the conventional AB5 MH alloys based on rare earth metals (330 mAh g−1) [3,4]. Early in their development, AB2 MH alloys suffered from more difficult activation and a shorter cycle life when compared to AB5 MH alloys [5-8]. With composition and process refinement, the activation and cycle stability of AB2 MH alloys as negative electrode active material improved substantially [9]. However, the high-rate dischargeability (HRD) of the AB2 MH alloys, especially at low temperature, is still significantly inferior to the AB5 MH alloys because of the relatively low nickel content in the AB2 alloy [10]. Various additives, including transition metals (Al [11], Cr [12], Co [13], Cu [14,15], Fe [16], Mo [17,18], Zn [19], Pt [20], Pd [21,22]), rare earth metals (Y [23], Ce [24], La [25,26], and Nd [27]), and others such as Si [28] and B [29], have been used to reduce the surface charge-transfer resistance and increase the HRD of AB2 MH alloys. In this paper, we summarize our findings regarding the use of one of the alkaline earth elements, Mg, as an additive in AB2 MH alloys.
Mg can form various Laves phase alloys with different transition metals, such as C14 (MgZn2), C15 (MgCu2), and C36 (MgNi2) [30]. Mg-containing Mg2Ni, with an hP18 hexagonal structure (a derivative of the AlB2 type [31]), is an important MH alloy that typically works in a temperature range of 200-250 °C. When the crystalline size decreases to the nanoscale or an amorphous state, Mg2Ni can be used as the negative electrode material in Ni/MH batteries [32-41]. A more suitable stoichiometry for the Mg-Ni system is MgNi (1:1) but, unfortunately, it is not possible to obtain this material through conventional melt-and-cast processing, according to the phase diagram [42]. Amorphous MgNi prepared by a combination of melt-spinning and mechanical alloying can achieve an electrochemical capacity of 720 mAh g−1 for the first cycle. However, it has very poor cycle stability [43] and is, therefore, the subject of a DoE-funded project [44,45]. Reports of Mg use as a modifier in adjusting the hydrogen storage properties of AB2 MH alloys are very scarce [46], which is very different from Mg-containing superlattice-based MH alloys (reviewed in [47]). Although Mg alone can form Laves phases, Mg is only slightly soluble in Zr(Ti)-based AB2 phases (0.3 at%) and segregates into a Mg2Ni phase [46]. The Mg2Ni secondary phase reduces the surface reaction current but increases the charge retention [46]. In addition to adding Mg to the AB2 alloy, we also investigated the role of Mg-addition (9.5 at%) in Zr8Ni21 and found that the Mg-added alloy segregates into a Zr7Ni10 matrix, consisting of Zr2Ni7 grains with occasional Mg2Ni inclusions, with Mg having a solubility of about 1.5 at% in the Zr2Ni7 phase [48]. Additionally, Mg added to the Zr8Ni21 alloy hindered the formation process but increased the surface reaction exchange current [49].
Experimental Setup
An induction melting process involving an MgAl2O4 crucible, an alumina tundish, a 2-kg furnace under argon atmosphere, and a pancake-shaped steel mold was used to prepare the ingot samples. MgNi2 alloy was used as the Mg source and was added in the final melting step. A 50% excess of Mg was added to compensate for evaporation loss. The ingots were first hydrided/dehydrided to increase their brittleness and then crushed and ground into −200 mesh powder. The chemical compositions of the ingots were analyzed using a Varian Liberty 100 inductively coupled plasma-optical emission spectrometer (ICP-OES, Agilent Technologies, Santa Clara, CA, USA). A Philips X'Pert Pro X-ray diffractometer (XRD, Amsterdam, The Netherlands) was used to study the phase components. A JEOL-JSM6320F scanning electron microscope (SEM, Tokyo, Japan) with energy dispersive spectroscopy (EDS) was applied in investigating the phase distribution and composition. The hydrogen storage properties were measured using a Suzuki-Shokan multi-channel pressure-concentration-temperature system (PCT, Tokyo, Japan). In the PCT analysis, each sample was first activated by a 2-h thermal cycle between room temperature (RT) and 300 °C under 2.5 MPa H2 pressure, and then measured at 30, 60, and 90 °C. Details of the electrode and cell preparations, as well as the measurement methods, were previously reported [50,51]. AC impedance measurements were conducted using a Solartron 1250 Frequency Response Analyzer (Solartron Analytical, Leicester, UK) with a sine wave amplitude of 10 mV and a frequency range of 0.5 mHz to 10 kHz. Prior to the experiments, electrodes were subjected to one full charge/discharge cycle at a rate of 0.1C using a Solartron 1470 Cell Test galvanostat, discharged to 80% state-of-charge, and then cooled to −40 °C. Magnetic susceptibility was measured using a Digital Measurement Systems Model 880 vibrating sample magnetometer (MicroSense, Lowell, MA, USA).
Results and Discussion
Six alloys were prepared by the induction melting technique. The design compositions, together with the ICP results, are summarized in Table 1. The Mg-free base alloy, Mg0, has been used numerous times in previous comparison works [15-19]. In the design, the Mg content was varied from 0 to 5 at% in alloys Mg0-Mg5, respectively. However, due to the strong rejection from the major phase, the Mg content in the final alloys is in the range of only 0.6-0.8 at%. To compensate for the increase in Mg content in the design, both the Ti and Ni contents were reduced. While the reduction in Ti content is clearly observed in the ICP results, the reduction in Ni content is not obvious in alloys Mg1-Mg3, and the average Ni content actually increases in alloys Mg4 and Mg5 because of the Mg loss. The average electron density (e/a), a strong factor in determining the C14/C15 phase abundance ratio [30,52], decreases monotonically in the design, but stabilizes in the beginning and then increases in the ICP results, mirroring the evolution of the Ni content. The B/A ratio, defined as the ratio of the atomic percentage of B-site atoms (elements other than Zr, Ti, and Mg) to that of A-site atoms (Ti, Zr, and Mg), decreases in the design (hypo-stoichiometry), but stabilizes and then increases (hyper-stoichiometry) in the ICP results due to the increase in Ni content. The impact of stoichiometry on the performance of AB2 MH alloys has been previously studied [53,54]. In general, hypo-stoichiometry promotes the C14 phase, lowers the PCT plateau pressure, and decreases HRD.
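The two composition descriptors can be computed directly from an atomic-percent breakdown, as in the sketch below; the electron counts per element are illustrative values of the kind commonly used for this family of alloys, and the A-site assignment (Ti, Zr, Mg) follows the definition in the text:

```python
# illustrative outer-electron counts used to compute e/a
ELECTRONS = {"Ti": 4, "Zr": 4, "V": 5, "Cr": 6, "Mn": 7, "Co": 9, "Ni": 10,
             "Al": 3, "Sn": 4, "Mg": 2}

def e_over_a(at_pct):
    """Average electron density e/a; at_pct maps element -> atomic percent."""
    total = sum(at_pct.values())
    return sum(ELECTRONS[el] * x for el, x in at_pct.items()) / total

def b_over_a(at_pct, a_site=("Ti", "Zr", "Mg")):
    """B/A ratio: B-site atomic percent (all other elements) over A-site."""
    a = sum(x for el, x in at_pct.items() if el in a_site)
    return (sum(at_pct.values()) - a) / a
```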
X-Ray Diffractometer Analysis
XRD analysis is an important tool to study the multi-phase nature of Laves phase MH alloys [55][56][57][58]. The XRD patterns for alloys Mg0-Mg5 are shown in Figure 1. Peaks from the C15 cubic phase overlap with some of those from the C14 hexagonal phase. The TiNi phase with a B2 cubic structure can be seen in most of the XRD patterns. In addition to the C14, C15, and TiNi phases, one more cubic phase was observed in the Mg-containing alloys, and it is believed to relate to the Mg addition in the alloy. As the alloy number increases (Mg0 → Mg5), the intensities of the C14-only peaks (for example, the one near 39.5°) decrease, and the main C14/C15 peak (around 42.8°) first shifts to the left (larger unit cell) and then shifts to the right (smaller unit cell), as indicated by the blue vertical line in Figure 1. The lattice constants of the four phases obtained from the XRD analysis are listed in Table 2 with the crystallite size of the main C14 phase. With the increase in alloy number, the lattice constants of the C14 phase first increase and then decrease. The changes are very isotropic, as seen from the nearly unchanged a/c ratio. The crystallite size of the C14 phase decreases, and the lattice constants of C15 and TiNi follow the same trend as observed in the C14 main phase. The phase abundances of the four constituent phases, obtained from a Rietveld refinement of the XRD patterns, are listed in Table 3. In general, as the alloy number increases, the C14 phase is replaced by the C15 phase, and the TiNi phase abundance first increases and then decreases, while the phase abundance of the Mg-related cubic phase remains unchanged. The evolution of the C14/C15 phase agrees with the changes in e/a and B/A (Table 1), because the C14/C15 phase determination threshold of e/a is approximately 6.9 in this case [52].
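The crystallite sizes in Table 2 come from peak-broadening analysis of this kind. A minimal sketch of the Scherrer estimate is shown below; the shape factor, the instrumental broadening, and the peak parameters are illustrative assumptions, not values fitted to the patterns in Figure 1.

```python
import math

WAVELENGTH = 1.5406  # Cu-Kalpha wavelength in angstroms

def scherrer_size(fwhm_deg, two_theta_deg, k=0.9, instrumental_deg=0.05):
    """Crystallite size (angstroms) from a peak FWHM given in degrees 2-theta.
    A simple linear instrumental correction is used for illustration."""
    beta = math.radians(fwhm_deg - instrumental_deg)  # corrected broadening, rad
    theta = math.radians(two_theta_deg / 2.0)
    return k * WAVELENGTH / (beta * math.cos(theta))

# Example: main C14/C15 peak near 42.8 degrees with an assumed 0.35 degree FWHM.
print(f"{scherrer_size(0.35, 42.8):.0f} A")
```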
Scanning Electron Microscope/Energy Dispersive Spectroscopy Analysis
SEM back-scattering electron images (BEI) for alloys Mg1-Mg5 are shown in Figure 2. EDS was used to study the compositions of representative spots with different contrasts on the BEI micrographs, and the results are summarized in Table 4. EDS is a semi-quantitative analysis, and the results are used for comparison purposes only. The microstructure of the Mg-free alloy (Mg0) was published before (as alloy Mo0 in [17]) and is composed of C14 and TiNi phases. In the microstructures of the Mg-containing alloys, a C15 phase (judging from its relatively higher e/a value) with a slightly brighter contrast and an Mg-predominant phase with a darker contrast start to appear. The TiNi phase is usually surrounded by the C15 phase, since the cooling sequence is C14-C15-TiNi [59,60]. According to the EDS results shown in Table 4, the Mg-content in the C14 phase is very small (0.2-0.3 at%), while that of the C15 phase is slightly higher (0.3-0.7 at%). In addition, the B/A ratios in the C14 phase are higher than those in the C15 phase. The initial increase followed by a decrease in the C14 lattice parameters found through XRD analysis can be explained by the balance between the decrease in the content of the relatively small Ti (larger C14 unit cell) and the increase in B/A ratio (smaller C14 unit cell [61]). The B/A ratio in the TiNi phase is calculated based on the assumption of V occupying the A-site [27], and the results are still higher than 1, which indicates the possibility of other ZrxNiy secondary phases with higher B/A ratios. The nature of the ZrxNiy phase was studied before by transmission electron microscopy [62]. The Mg-rich phase (the fourth phase in each BEI micrograph) has an Mg-content from 45.4 at% to 82.1 at%. It is difficult to link this phase to the cubic phase found by XRD, since all alloys in the Mg-Ni phase diagram are hexagonal (Mg, Mg2Ni, and MgNi2). An MgIn2 intermetallic alloy with a cubic structure and a lattice constant of 4.60 Å [63] was the prototype used in our XRD analysis. The phase with the bright contrast (the fifth phase in each BEI micrograph) has a relatively higher Zr-content and a B/A ratio close to 1.0, and is therefore identified as the ZrNi phase. It cannot be a mixture of Zr metal and the neighboring Laves phase because of its relatively low V-content (similar to the case of the TiNi phase). The corresponding XRD peak of this ZrNi phase cannot be identified due to its low abundance. The Mg-contents in the TiNi and ZrNi phases are slightly higher than those in the C14 and C15 phases.
Pressure-Concentration-Temperature Analysis
PCT measurement has been used extensively in the study of Laves phase MH alloys reacting with hydrogen gas [64][65][66][67][68][69]. PCT isotherms measured at 30 °C and 60 °C for alloys Mg0-Mg5 are compared in Figure 3. These isotherms lacking noticeable plateaus are commonly observed in highly-disordered AB2 MH alloys. The multi-phase nature of this group of alloys lowers the critical temperature (Tc) at which the pressure plateau starts to disappear [70][71][72]. Some hydrogen storage properties extracted from the PCT isotherms are listed in Table 5. Both the maximum and reversible hydrogen storage capacities first increase and then decrease as the alloy number increases. Due to the lack of an obvious plateau pressure, the desorption pressure at 0.75 wt% of hydrogen storage capacity was used for the comparison of equilibrium pressures and the calculation of hysteresis, heat of hydride formation (ΔHh), and change in entropy (ΔSh). In the Mg-containing alloys, the equilibrium pressure first decreases and then increases as the alloy number increases, which complies with the general rule that a higher metal-hydrogen bond strength yields a lower plateau pressure and a higher hydrogen storage capability [50]. The slope factor (SF) indicates the degree of disorder (DOD) in an alloy. SF is defined as the ratio of the storage capacity between 0.01 MPa and 0.5 MPa to the total capacity in the desorption isotherm [2,19,51]. An alloy with a large SF has a flatter plateau and less DOD (fewer components or less variation among the components). As the alloy number increases, the SF decreases, indicating an increase in alloy homogeneity with the addition of Mg. The hysteresis of the PCT isotherm is defined as ln(Pa/Pd), where Pa and Pd are the absorption and desorption equilibrium pressures, respectively, at 0.75 wt% H-storage. The irreversible energy loss during plastic deformation of the hydride phase in the alloy matrix is a common explanation for PCT hysteresis [73][74][75], and was linked to the a/c ratio and pulverization rate of the alloy [76]. In this study, the addition of Mg does not significantly change the PCT hysteresis and should have no impact on the pulverization rate of the alloy during cycling. The desorption equilibrium pressures at the midpoint capacity measured at 30, 60, and 90 °C (P) were used to estimate the changes in enthalpy (ΔHh) and entropy (ΔSh) using the equation:

ln(P) = ΔHh/(RT) − ΔSh/R

where R is the ideal gas constant and T is the absolute temperature. Since the hydrogenation reaction is exothermic, the heat of hydride formation (ΔHh) is negative. Both ΔHh and ΔSh decrease (become more negative) and then increase with the increase in alloy number. The evolution in the −ΔHh value correlates to hydrogen storage capacity and agrees with the strength of the hydrogen-metal bond assumption described earlier. ΔSh is an indicator showing the DOD in the hydride relative to a completely ordered solid (e.g., solid hydrogen). The difference between ΔSh and −130.7 J·mol−1·K−1 (the entropy of H2 gas at 300 K and 0.1 MPa [77]) can be interpreted as the DOD for hydrogen in the hydride form (β-phase). In this study, the trend of |ΔSh| increases and then decreases with the increase in the alloy number, which is similar to that of SF (the indicator for the DOD in the host metal alloy). The same correlation between the DOD of the hydride and the DOD of the occupied hydrogen was previously reported [19].
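With desorption pressures at the three temperatures, ΔHh and ΔSh follow from a linear fit of ln(P) against 1/T (slope = ΔHh/R, intercept = −ΔSh/R). A minimal sketch is given below; the pressures are illustrative placeholders expressed relative to a 1 MPa reference, not measurements from this study.

```python
import numpy as np

R = 8.314  # ideal gas constant, J/(mol K)
T = np.array([303.15, 333.15, 363.15])   # 30, 60, and 90 C in kelvin
P = np.array([0.020, 0.080, 0.260])      # desorption pressure / 1 MPa (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
dH = slope * R        # J/mol, negative for the exothermic hydride formation
dS = -intercept * R   # J/(mol K)
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```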
Electrochemical Analysis
Discharge capacities measured at a discharge current of 4 mA·g−1 for the first 13 cycles for each alloy in this study are plotted in Figure 4a to show the activation behavior at full capacity. The activation of the Mg-containing alloys appeared to be slightly easier than that of the Mg-free Mg0 alloy. The results of the discharge capacities, together with other electrochemical tests, are listed in Table 6. For the Mg-containing alloys, the discharge capacities measured at both the 4 mA·g−1 and 50 mA·g−1 rates increase and then decrease with the corresponding increase in alloy number. The maximum capacities are obtained from alloy Mg2, which demonstrated the lowest e/a value and B/A ratio. Mg2 also has the highest hydrogen storage capacity among the alloys in this study. The HRD values, defined as the ratio of the tenth-cycle capacities measured at the 50 mA·g−1 and 4 mA·g−1 rates, are listed in Table 6 and demonstrate an increasing trend correlating with alloy number. The HRD values obtained from the first 13 cycles are plotted in Figure 4b, which exhibits easier activation in HRD with higher alloy numbers. From Table 6, the addition of Mg is shown to be effective in improving both activation and HRD. The improvement in HRD with Mg was further investigated by electrochemically measuring the bulk diffusion constant (D) and surface exchange current (Io). Method details for these two parameters were previously reported [25], and the results are listed in Table 6. In general, as the alloy number increases, D first decreases and then increases, while Io shows the opposite trend. The increase in HRD with alloy number is related to both the bulk and surface properties of the alloys. The addition of Mg decreases the D value (bulk), except for alloys with very high Ni content (Mg4 and Mg5), and increases the Io value (surface) of the alloys. The contribution of Mg to faster activation is similar to that of La in AB2 MH alloys [78][79][80][81], where the new Mg-containing phase may absorb a larger amount of hydrogen, causing surface cracking and an increase in the surface area.
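The two quantities read off Figure 4 can be stated compactly: the HRD ratio defined above, and a simple activation measure such as the first cycle reaching a fixed fraction of the maximum capacity. The sketch below uses hypothetical capacities, not data from Table 6.

```python
def hrd(cap_50mA, cap_4mA):
    """HRD (%) as the ratio of tenth-cycle capacities at 50 and 4 mA/g."""
    return 100.0 * cap_50mA / cap_4mA

def activation_cycle(capacities, threshold=0.95):
    """1-based index of the first cycle reaching threshold * max capacity."""
    target = threshold * max(capacities)
    return next(i for i, c in enumerate(capacities, start=1) if c >= target)

cycles = [210, 290, 330, 350, 358, 362, 364, 365, 365, 365, 364, 365, 364]
print(f"HRD = {hrd(310.0, 365.0):.1f}%, activated at cycle {activation_cycle(cycles)}")
```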
Low-temperature performance is a very important parameter in propulsion applications, especially in start-stop type micro-hybrid vehicles. The conventional AB5 MH alloy performs poorly below −25 °C, but the Co-doped A2B7 superlattice MH alloy can provide significant improvements [82]. Additions of La, Nd, and Si in AB2 MH alloys can lower the surface charge-transfer resistance (R) at −40 °C to a level comparable with AB5 MH alloys [23,25,27]. In order to study the effect of Mg addition on the low-temperature performance of AB2 MH alloys, AC impedance at −40 °C was measured, and R and the double-layer capacitance (C, closely related to the surface reaction area) were calculated from the obtained Cole-Cole plot. The R and C values for the alloys in this study are listed in Table 7. The addition of Mg into AB2 MH alloys increases the R value and decreases C. Therefore, unlike rare earth elements and Si, the addition of alkaline earth elements in AB2 MH alloys should not be considered when an improvement in low-temperature performance is needed.
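For a simple parallel RC surface element, R and C can be read off the Cole-Cole plot directly: R is the width of the semicircle on the real axis, and at the apex frequency f* the relation 2πf*RC = 1 holds. The sketch below demonstrates this on a synthetic spectrum; the component values are assumptions, not fits to the data in Table 7.

```python
import numpy as np

f = np.logspace(-3, 4, 200)                  # frequency sweep, Hz
R_true, C_true, R_series = 2.0, 0.5, 0.1     # assumed ohm, farad, ohm
Z = R_series + R_true / (1 + 1j * 2 * np.pi * f * R_true * C_true)

R_est = Z.real.max() - Z.real.min()          # semicircle width on the real axis
f_apex = f[np.argmax(-Z.imag)]               # frequency at the semicircle apex
C_est = 1.0 / (2 * np.pi * f_apex * R_est)   # from 2*pi*f_apex*R*C = 1
print(f"R = {R_est:.2f} ohm, C = {C_est:.2f} F")
```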
Magnetic Properties
Magnetic susceptibility was used to characterize the nature of the metallic nickel particles present in the surface layer of the alloy following an alkaline activation treatment [10]. Details on the background and experimental methods were reported earlier [83]. Metallic Ni is an active catalyst for the water splitting and recombination reactions, which contributes to the Io in this electrochemical system. This technique allows us to obtain the saturated magnetic susceptibility (Ms), a quantification of the amount of surface metallic Ni (the product of preferential oxidation), and the magnetic field strength at one-half of the Ms value (H1/2), a measurement of the averaged reciprocal number of Ni atoms in a metallic cluster (Figure 5a). The magnetic susceptibility graphs for the alloys in this study are shown in Figure 5b, and the calculated Ms and H1/2 values are listed in Table 6. Ms decreases and then increases with the increase in alloy number, which is the opposite of the trend observed for Io. The increase in Io for Mg2 is therefore not from the metallic nickel particles embedded in the surface oxide and may be due to the high content of the TiNi phase, which was reported as a catalytic phase for the electrochemical reaction [84,85]. The H1/2 values listed in Table 6 indicate that the size of the metallic nickel clusters decreases and then increases as the alloy number increases. In general, the Mg-containing alloys have smaller metallic nickel clusters in the surface layer than the Mg-free Mg0.
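The procedure illustrated in Figure 5a reduces to a few steps: remove the paramagnetic component as the high-field linear slope, take the remaining saturated value as Ms, and locate the field at one-half of Ms. The sketch below applies this to a synthetic magnetization curve; all parameter values are assumptions for illustration.

```python
import numpy as np

H = np.linspace(1, 10000, 500)                  # applied field, Oe
Ms_true, H_half_true, chi = 0.8, 300.0, 2e-5    # assumed curve parameters
M = Ms_true * (H / H_half_true) / (1 + H / H_half_true) + chi * H

slope = np.polyfit(H[-50:], M[-50:], 1)[0]      # paramagnetic high-field slope
Mc = M - slope * H                              # ferromagnetic component
Ms = Mc[-1]                                     # saturated magnetic susceptibility
H_half = H[np.argmax(Mc >= Ms / 2.0)]           # first field reaching Ms / 2
print(f"Ms = {Ms:.2f} emu/g, H1/2 = {H_half:.0f} Oe")
```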
Correlations
Due to the limited solubility of Mg in AB2 MH alloys, the ICP results show some deviations from the original design of Zr21.5Ti12−0.6xV10Cr7.5Mn8.1Co8Ni32.2−0.4xMgxSn0.3Al0.4, where x = 0, 1, 2, 3, 4, and 5, especially in the Mg and Ni contents. As the alloy number increases, the Mg content remains at approximately the same level, the Ni content remains unchanged and then increases for the last two alloys, Mg4 and Mg5, while the Ti content decreases monotonically. In order to study the influences of the different compositions on various alloy properties, the correlation factor (R²) was calculated between composition (Ni content, Ti content, e/a, and B/A) and properties. The comparison results are listed in Table 8. The findings of the correlation can be summarized as demonstrating that, compared to the Ti content, the Ni content has more influence on the B/A ratio, C14 unit cell volume, hydrogen storage properties, low-rate electrochemical capacity, bulk diffusion, and size of the metallic Ni embedded in the surface oxide layer. The hydrogen capacities obtained from PCT (converted into electrochemical capacity by 1 wt% = 268 mAh·g−1) and half-cell tests are plotted against Ni content in Figure 6a, showing a decrease in capacity with an increase in Ni content. The Ti content is more closely related to the C14 phase crystallite size, C14 phase abundance, HRD, and the R and C measured at −40 °C. The last three characteristics are plotted against Ti content in Figure 6b, showing that as the Ti content increases, the R at −40 °C decreases and the C at −40 °C increases; however, the RT HRD decreases. The decrease in R is consistent with the increase in C (reaction surface area), but should increase the RT HRD. The RT R and C follow the trend of the −40 °C R and C (Table 7) and, therefore, the discrepancy is not due to different temperatures and requires further investigation. Both the C14 and TiNi phase abundances are correlated to various properties. The results show that the C14 main phase abundance is more influential for the hydrogen storage capacities, both in the solid state and in electrochemistry, while the TiNi minor phase abundance has high impacts on D, Io, H1/2, and the C14 cell volume. The correlations of TiNi phase abundance with D and Io are plotted in Figure 6c. A high amount of the TiNi phase increases the surface reactivity (Io) but hinders the bulk diffusion of hydrogen (D). While the connections between the TiNi phase abundance and H1/2 may be true and require future validation, the connection with the C14 cell volume is most likely a coincidence. Both the B/A ratio and the e/a value are not as influential as the other factors in this comparison.
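Each entry in Table 8 is the R² of a simple linear regression between one compositional variable and one property across the six alloys, as sketched below; the two data series are hypothetical placeholders, not the measured values.

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a straight-line fit of y against x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_hat = np.polyval(np.polyfit(x, y, 1), x)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

ni = [32.2, 32.0, 31.8, 31.9, 32.6, 33.1]   # at% Ni (placeholder values)
cap = [345, 352, 360, 350, 330, 318]        # capacity, mAh/g (placeholder values)
print(f"R^2 = {r_squared(ni, cap):.2f}")
```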
Conclusions
The influence of Mg addition (approximately 0.7 at%) on the structural, solid-state, and electrochemical properties of a series of Laves phase based AB2 MH alloys with different Ti and Ni contents was investigated. In general, the addition of Mg does not lower the surface charge-transfer resistance, unlike other non-transition metals, such as Si, Y, La, and Nd. Interestingly, one of the alloys, Mg2 (Zr22.5Ti10.9V9.9Cr7.5Mn8.2Co8.0Ni31.8Mg0.8Sn0.3Al0.4), with a phase distribution of 88.3% C14, 7.9% C15, 3.5% TiNi, and 0.3% of an Mg-rich cubic phase, shows improved hydrogen capacities in both the solid state and electrochemistry and a higher surface exchange current, but a lower bulk diffusion coefficient.
Figure 1. X-ray diffractometer (XRD) patterns using Cu-Kα as the radiation source for alloys: (a) Mg0; (b) Mg1; (c) Mg2; (d) Mg3; (e) Mg4; and (f) Mg5. In addition to the two Laves phases, two cubic phases can also be identified. The vertical line indicates the main C14/C15 peak shifting to lower and then higher angles with increasing alloy number.
Figure 4. Activation behavior observed from (a) half-cell capacity measured at 4 mA·g−1 and (b) half-cell high-rate dischargeability (HRD) for the first 13 electrochemical cycles.
Figure 5. (a) Representative magnetic susceptibility measurement results showing the saturated magnetic susceptibility (Ms) with the paramagnetic component removed, and H1/2 defined as the magnitude of the applied field corresponding to half of the maximum magnetic susceptibility (1/2 Ms); (b) the magnetic susceptibility of the alloys in this study.
Figure 6. Examples of correlations found in this study showing: (a) a high Ni content decreases capacities; (b) a higher Ti content decreases the HRD and R, while C increases; and (c) a higher TiNi phase abundance increases Io (surface), but decreases D (bulk).
Table 1. Design compositions and inductively coupled plasma (ICP) results in at%. e/a is the average electron density. B/A is the atomic ratio of B-atoms (elements other than Ti, Zr, and Mg) to A-atoms (Ti, Zr, and Mg).
Table 2. Lattice constants a and c, a/c ratio, unit cell volume, and crystallite size of the main C14 phase of alloys Mg0-Mg5 from XRD analysis. ND denotes non-detectable.
Table 4. Summary of EDS results. All compositions are in at%. Compositions of the main C14 and C15 phases are in bold and italic, respectively.
Table 5. Summary of the solid state (gaseous phase) hydrogen storage properties of the Mg-containing AB2 alloys. SF: slope factor.
Table 6. Summary of the room temperature (RT) electrochemical and magnetic results (capacity, rate, D, Io, Ms, and H1/2) of the Mg-containing AB2 alloys.
Table 7. Summary of the electrochemical results from AC impedance measurements (R: charge-transfer resistance; C: double-layer capacitance at −40 °C and RT) of the Mg-containing AB2 alloys.
Table 8. Correlation factors (R²) between the composition and hydrogen storage properties.
Patterns of Young Children's Number Sense Development as Assessed by the How Many Hidden Game
ABSTRACT
The data for this paper are drawn from a larger study of the usefulness of a games-based approach for assessing children's abilities in mathematics. The focus of this paper is on the How many hidden game, which has a focus on number sense. This game originated in Australia, where it was known as the Gumnut game and used readily available nuts from eucalyptus trees as playing pieces. However, researchers in Malaysian Borneo modified this game and named it How many hidden. Any small objects can be used, and in Sabah, plastic counters were used as the playing pieces. The development of children's prior-to-school number sense is important, as Aunola et al. (2004), Claessens et al. (2009), and Romano et al. (2010) have shown that number sense proficiency is beneficial to a child's school mathematics learning. Davydov (1976) too, in his work on quantities and their relationships, while eschewing number, points out that the basic relationships between quantities are a foundation for problem solving in the later years of children's mathematics learning.
INTRODUCTION
The data for this paper are drawn from a larger study of the usefulness of a games-based approach for assessing children's abilities in mathematics. The focus of this paper is on the How many hidden game, which has a focus on number sense. This game originated in Australia, where it was known as the Gumnut game and used readily available nuts from eucalyptus trees as playing pieces. However, researchers in Malaysian Borneo modified this game and named it How many hidden. Any small objects can be used, and in Sabah, plastic counters were used as the playing pieces. The development of children's prior-to-school number sense is important, as Aunola et al. (2004), Claessens et al. (2009), and Romano et al. (2010) have shown that number sense proficiency is beneficial to a child's school mathematics learning. Davydov (1976) too, in his work on quantities and their relationships, while eschewing number, points out that the basic relationships between quantities are a foundation for problem solving in the later years of children's mathematics learning. This paper outlines an innovative approach to assessing young children's informal mathematical abilities through the use of games designed as assessment instruments in a non-threatening environment.
METHODOLOGY
Altogether, 74 children, aged 4 to 6 years, played the How many hidden game with one of their teachers. None of the children were yet in formal schooling, and all had developed their quantity knowledge and understandings informally.
Procedure
Children were asked if they would like to play a game, the How many hidden game, with one of their teachers. Those who agreed then played the game, and the highest quantity for which they responded correctly was recorded as their 'score'. This does not mean that a child who played with a maximum quantity of 5 knew all the possible combinations that make 5 (see Table 1 below for these). However, each child had the opportunity to repeat the game for that number.
The How Many Hidden Game
The How many hidden game is a game for a teacher, parent, or other numerate person, and a child. The game focuses on the child's ability to conceptualise a quantity and its component parts (sub-quantities). That is to say, the game is about playing with number sense, and it allows an assessment of the child's number sense capabilities. The How many hidden game develops practice in number sense. For example, the starting quantity might be 5, and the game requires the child to provide one of the two sub-quantities of 5, for all possible combinations. Additionally, only one of the sub-quantities is visible, so that physical counting is not available to aid the child's response. For the quantity 5, this is represented in Table 1 below.
The visible sub-quantity lies in front of the child, while the hidden sub-quantity remains out of the child's view. Once the child has named the hidden sub-quantity, they get to see whether they were correct or not, thus allowing them to see the previously hidden sub-quantity and correct themselves if necessary.
Note that the hidden sub-quantity has to be recovered from an internalised splitting of the initial quantity (5 in the example above) in order to play the game successfully.
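The logic of one round can be sketched in a few lines of code, shown here for the quantity 5 used in Table 1; the function and its prompt wording are illustrative, not part of the original game materials.

```python
import random

def play_round(total):
    """One round: show one sub-quantity, ask for the hidden complement."""
    visible = random.randint(0, total)
    hidden = total - visible
    answer = int(input(f"{visible} counters are showing. How many are hidden? "))
    print("Correct!" if answer == hidden else f"Not quite - {hidden} were hidden.")

play_round(5)
```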
RESULTS
The children's scores (their highest game number) were grouped by age and are described in the three figures below and by descriptive statistics. In order to make the data clearer in the figures, children's responses have been ordered from lowest to highest response number. The figures are presented below. Note that, in Figure 3, there appears to be a 'barrier' for many children in the sample at their age. However, we have found, as shown in Figure 1, that this is not true for all children, as six children (27%) were able to work with quantities beyond 5, and one child was able to work with the quantity 10. Anecdotal evidence appears to support that, for the younger child, quantity understanding is aligned with age: that is, 2-year-olds are capable with a quantity of at least 2, 3-year-olds with a quantity of at least 3, and so on. In our sample of 4-year-olds, 4 children (18%) were only able to work with quantities less than 4, while 10 children (45%) could work with the quantity 4. Two children were unable to start the game and were credited with a quantity of 0. The mean quantity for the 4-year-old group was 4.27, with a Standard Deviation of 2.53.
Figure 4 shows that, by the age of 5 years, the majority of our sample had broken free of any restriction to their age. The majority of children could work with quantities greater than 5, some up to 10, and a few beyond 10. Only 4 of the 24 in the 5-year-old sample (16.7%) were working below 5, while 2 (8.3%) were at a quantity of 5, and of the remaining 18 children, 15 (62.5%) were able to work with quantities between 5 and 10. The 3 (12.5%) highest quantities, between 10 and 20, were unexpected of children so young and unschooled. Note that this provides a warning to educators that such a range, 3 to 20, may exist within their classes, and the needs of children at the extremes are very different. The Mean quantity for the 5-year-old group was 7.5, and the Standard Deviation was 3.
Figure 5 shows that by age 6, all children in the sample were working with quantities greater than 5, and most of them with quantities greater than 10, with some beyond 15 or 20.
Seven (25%) of the 6-year-olds were capable of working with quantities between 5 and 10, while another 7 (25%) were working within the range 11 to 15. Unexpectedly, a further 12 (42.9%) were working with quantities between 16 and 20, and the last 2 (7.1%) were working at 25 or 26. The Mean quantity for the 6-year-olds was 15.04, and the Standard Deviation was 5.3.
Finally, Figure 6 shows all three groups represented on a single graph. Due to the different numbers of children in each sample age group, the last six columns have no 4-year-old data, and the last four columns have only data from the 6-year-old sample.
DISCUSSION
While the small sample size may make generalizations difficult, the results demonstrate clearly that this approach to assessing young children's informal mathematical understanding is feasible and informative. While in this paper we have described only one of a range of assessment games, we believe that we have demonstrated that this approach is worth pursuing further. Assessment that describes development is a powerful tool to put into the hands of educators of young children.
Figure 6: Composite of all How many hidden game performances (N=74)
Interactions between lexical and syntactic L1-L2 overlap: Effects of gender congruency on L2 sentence processing in L1 Spanish-L2 German speakers
Abstract
Bringing together lines of research from sentence processing and lexical access, this empirical study investigates the interplay between lexical (grammatical gender) and syntactic (word order) cross-linguistic overlap in L2 German. Eighty-six L1 Spanish-L2 German and thirty-six monolingual German adults completed a German self-paced reading task with noun phrases (NPs) manipulated by L1-L2 gender congruency (congruent, incongruent, neuter) and L1-L2 adjective-noun word order (pre- vs. postnominal adjectives). The study examines the effects of gender congruency, the type of L1-L2 gender mapping (i.e., presence vs. absence of each class in L1 and L2), and L2 proficiency level. Results show that the detection of ungrammatical word order in L2 German interacts with gender congruency, in that L2 speakers are only sensitive to word order violations for sentences with gender-congruent nouns. The detection of ungrammaticality for sentences containing gender-incongruent nouns only emerges at higher L2 proficiency levels. These findings underscore the role of cross-linguistic lexical overlap in syntactic processing.
Introduction
Research on the bilingual mental lexicon shows that bilinguals activate both languages when speaking and understanding the L1 or the L2 (for a review, see Tokowicz, 2015). Such "fundamental permeability" (Kroll, 2015) between languages, evident in cross-linguistic influence, is often considered a hallmark of bilingual language processing. Permeability extends into grammar, with learners, for instance, being affected by the grammatical gender of nouns in one language even when processing sentences in another (gender congruency effect; Morales et al., 2016). At the same time, there is mixed evidence of cross-linguistic effects at the sentence level. While work on L2 sentence production (mostly from cross-linguistic priming studies) suggests a high degree of interactivity between the L1 and L2 also at a structural level (e.g., Hartsuiker & Bernolet, 2017), research on L2 sentence comprehension often reports that L2 comprehenders demonstrate analogous processing patterns despite L1 differences (see Hopp, 2022 for a review). Such absence of cross-linguistic effects has been interpreted as reflecting an overall tendency among adult L2 learners to underuse syntactic information in L2 real-time comprehension (e.g., Clahsen & Felser, 2006, 2018). This general observation of cross-linguistic influence at the lexical level and its comparative absence at the syntactic level, together with a general underreliance on syntax, has spurred research to consider how lexical and syntactic processing may interact in L2 adults. According to the Shared Syntax model for language production (Hartsuiker & Bernolet, 2017), L2 grammatical representations are initially lexically specific (i.e., tied to particular lexical items), and learners only gradually abstract grammatical structure. At a higher proficiency, learners connect abstract structural properties across languages, which leads to cross-linguistically shared syntax. In this way, lexical acquisition paves the way to L2 syntax. Other approaches have focused on how lexical processing can impede target sentence processing. For instance, the Lexical Bottleneck Hypothesis (Hopp, 2018) stipulates that incomplete parsing arises partially from greater demands on lexical processing in bilinguals, as evidenced by slowdowns in lexical access (e.g., Hopp, 2016; Miller, 2014) and cross-linguistic influence in lexical processing (Hopp & Lemmerth, 2018).
The present study investigates the interaction between syntactic and lexical gender congruency in Spanish as the L1 and German as the L2, that is, two languages with a different number of gender classes. We developed a self-paced reading task in German in which we manipulated lexical and syntactic cross-linguistic overlaps between Spanish and German, with a particular focus on attributive adjectives. To investigate syntactic congruency, attributive adjectives appeared either pre- or postnominally within the noun phrase (NP). While postnominal adjectives are grammatical in Spanish, they are ungrammatical in German, as the language requires attributive adjectives to appear prenominally. To investigate lexical congruency, the grammatical gender of the nouns in the NPs was either congruent between Spanish and German, incongruent (e.g., Spanish masculine-German feminine), or neuter in German. We examine (a) how lexical gender and syntactic overlap interact during L2 sentence processing; (b) whether the type of L1-L2 mapping of gender affects L2 sentence processing (i.e., whether the gender has an analogous value in the L1); and (c) whether L2 proficiency plays a role. The findings demonstrate that lexical gender congruency interacts with incremental sensitivity to syntactic ungrammaticality, in that L1 Spanish learners of German only show slowdowns for sentences with ungrammatical Spanish-like postnominal attributive adjectives if the gender of the nouns in the NPs is congruent with Spanish. For gender-incongruent nouns, only higher proficiency learners come to be sensitive to the ungrammaticality of Spanish NP word order in German.
Background
Interactions between lexical and syntactic processing
When reading or listening to sentences in the L2, late learners often demonstrate attenuated, slower, or absent reflexes of syntactic structure building compared to native speakers. These are manifested in difficulties or delays in the resolution of syntactic ambiguities, or in lower and delayed sensitivity to ungrammatical syntax during real-time comprehension (for a review, see Roberts, 2013).
Many approaches relate these difficulties in processing L2 morphosyntax to lower degrees of availability of grammar to L2 learners as compared to L1 speakers (Clahsen & Felser, 2006, 2018), stronger interference from competing information (e.g., Cunnings, 2017), or lower degrees of integration of morphosyntactic information in a late-learnt L2 (e.g., Jiang, 2007). Yet others have raised the possibility that some of these difficulties may be natural consequences of bilingualism, that is, the distribution of language use across two languages and the concurrent activation of all languages during bilingual language processing (see Hopp, 2018 for a review).
On the one hand, bilinguals divide their time across two languages such that they have less experience with either language than a monolingual speaker of that language. Consequently, they encounter words and sentences in each language less frequently. As argued by the Weaker Links (or frequency lag) hypothesis by Gollan and colleagues (e.g., Gollan et al., 2008), less use translates into larger frequency effects in lexical retrieval, with bilinguals suffering delays in word recognition, particularly for lower-frequency words. Several studies have examined potential consequences of slower lexical processing for sentence comprehension.
One line of work manipulates lexical processing by item-level factors, such as the lexical frequency of words in the sentences. As in L1 processing (Tily et al., 2010), L2 readers show earlier evidence of structure building with high-frequency than with low(er)-frequency nouns (Hopp, 2017; Miller, 2014). Another line of work relates individual differences in lexical processing at the participant level to differences in sentence processing. For instance, L2 learners with faster lexical decoding skills demonstrate more target-like processing of ambiguous sentences, while less efficient lexical decoders are not sensitive to L1-L2 structural differences in sentence processing (Cheng et al., 2021; Hopp, 2014). Such findings extend to the processing of grammatical gender (Hopp, 2013), suggesting that efficient lexical access is a prerequisite for target grammatical processing. In turn, when lexical processing is slower and more taxing in an L2 compared to an L1, it may have knock-on effects on syntactic structure-building operations in real time. These operations can only be successfully executed once the lexical items incorporated in the parse have been processed to some degree.
On the other hand, the integrated (or non-selective) nature of lexical access in bilinguals can impact sentence processing. As bilinguals automatically activate all of the languages in their lexicon, including grammatical information such as grammatical gender, this spreading activation can lead to different, more diffuse, or delayed use of grammatical information in sentence comprehension. In a series of studies with Russian-German bilinguals, Hopp and Lemmerth (2018; Lemmerth & Hopp, 2019) examined how the predictive processing of grammatical gender agreement in an L2 is affected by gender congruency with the L1. In a visual-world eye-tracking study, they studied whether Russian-German bilinguals could use grammatical gender marked on articles (e.g., der (M)/die (F)/das (N)) or adjectives (blauer (M)/blaues (N) "blue") to anticipate a following noun (e.g., Tisch (M)/Lampe (F)/Haus (N)). Of note, the study contrasted the gender congruency of the German nouns with Russian such that nouns and their translation equivalents were either gender-congruent (Lampe (F)-lampa (F) "lamp") or gender-incongruent (Haus (N)-dom (M) "house"). In addition, the conditions varied as to whether they were syntactically congruent (i.e., both German and Russian have gender marking on prenominal adjectives) or syntactically incongruent (i.e., only German marks gender on articles, since Russian does not have articles). The results show that, for intermediate adult L2 learners, lexical and syntactic congruency interacted: they could only use gender for predictive agreement processing in the syntactically incongruent condition (articles) if the nouns were gender-congruent between German and Russian, even when target knowledge of the nouns and their genders was controlled (see also Weber & Paris, 2004). For successive bilingual Russian-German children, Lemmerth and Hopp (2019) also found that gender congruency was a prerequisite for target syntactic prediction according to gender agreement. These findings indicate that lexical gender congruency effects observed in predictive processing interact with syntactic processing, leading to delayed and less robust referent identification when the gender of the nouns implicated in the agreement relation does not overlap between the L1 and L2.
This body of evidence led to the formulation of the Lexical Bottleneck hypothesis (LBH; Hopp, 2018), arguing that incomplete parsing in an L2 can partially arise from slowdowns at the lexical level and the non-language-selective nature of lexical access in bilinguals. Lexical processing constitutes a bottleneck, in that lexical retrieval consumes time and resources that then cut short the subsequent target computation of syntax or lead to differences in the activation of grammatical information in bilinguals. In this paper, we test the scope of the Lexical Bottleneck hypothesis in the processing of word order violations in NPs containing L1-L2 gender-congruent, incongruent, or neuter nouns. In this way, we investigate whether the effects of lexical gender congruency observed in predictive L2 processing of gender agreement, where incongruency in gender leads to less successful referent identification, can also be seen in slower processing of non-target syntax in the L2 when the latter corresponds to licit word orders in the L1.
Interplay between L1-L2 gender mapping and lexical congruency effects
Previous work on individual word production and processing has further shown that it is not simply a matter of whether gender is cross-linguistically congruent or incongruent, but that the nature of the gender mapping impacts L2 lexical access. Most of the studies that have examined gender interactions in asymmetric systems, where there is no straightforward mapping for one of the gender classes (e.g., German neuter for Spanish-German bilinguals), yield different findings for incongruent versus asymmetric gender classes.
Testing bilinguals with a three-gender L1 and a two-gender L2, Manolescu and Jarema (2015) conducted an L2 picture naming task and a timed L2 translation task with L1 Romanian-highly proficient L2 French adults. In naming, reaction times (RTs) were significantly faster for both congruent and neuter nouns (i.e., Romanian neuter and masculine/feminine in French) compared to incongruent (masculine-feminine mismatched) nouns. The same pattern of results was found in the translation task, though in this case neuter did not differ statistically from incongruent. In a similar vein, Paolieri and colleagues (2019) examined gender representation in L1 Russian-highly proficient L2 Spanish adults using two timed L2 translation tasks. Bare noun translation revealed significantly faster RTs for congruent than incongruent, for neuter than incongruent, and also for congruent than neuter nouns (with only the latter differing from the findings with L1 Romanian-L2 French speakers). NP (article-noun) translation offered a similar pattern of results, though the difference between incongruent and neuter nouns was not significant.
Focusing on bilinguals with a two-gender L1 and a three-gender L2, Klassen (2016b) conducted a timed binary choice NP (article-noun) grammaticality judgment task with L1 French-intermediate L2 German adults. Results showed significantly faster RTs for congruent nouns compared to incongruent ones, with no significant difference between incongruent and neuter or between congruent and neuter. Finally, in the most relevant previous study, L1 Spanish-intermediate L2 German adults completed an L2 picture naming task (Klassen, 2016a). RTs were significantly faster with congruent than incongruent and with neuter than incongruent nouns, while there was no significant difference between congruent and neuter nouns.
Across studies, there is a trend for faster RTs for neuter nouns than incongruent ones, although technically both of these conditions consist of gender mismatches between the L1 and L2. Klassen (2016a,b) argues that the asymmetry between neuter and incongruent nouns can be accounted for by the language-specific nature of the neuter gender node in a system where symmetric genders across the L1 and L2 have a shared representation. Within this L1-L2 integrated representation (Salamoura & Williams, 2007), the activation of masculine and feminine gender nodes that are common to both the L1 and the L2 (as is the case with incongruent nouns) creates interference in the response. This interference arises due to the competition for selection between the shared nodes, as the mismatching genders of the lexical items conflict symmetrically. In other words, the masculine gender of a word presented in the task interferes with lexical selection, as its translation equivalent in the other language has feminine gender. In contrast, with neuter nouns, neuter gender does not interfere with lexical selection, since it does not compete with a node available in the other language. Compared to congruent nouns (in which selection is facilitated by the L1 and L2 both activating the same shared gender node) and incongruent nouns (where selection is symmetrically inhibited), neuter nouns lead to asymmetric, one-sided interference only, which generates lower levels of response interference due to the gender class that is unique to the L2 (Figure 1). To examine whether this pattern of results emerging from lexical access studies extends beyond words produced and processed in isolation, this study also examines the role of neuter nouns in sentence processing in Spanish-German learners.
Gender and the NP in Spanish and German
Both Spanish and German instantiate grammatical gender, with Spanish making a two-way distinction between masculine and feminine, while German displays the three-way distinction among masculine, feminine, and neuter. Approximately half of the nouns in each of the languages are assigned masculine gender, with the remaining half constituting either feminine or feminine and neuter (for Spanish: 52% masculine, 45% feminine (Bull, 1965); for German: 50% masculine, 30% feminine, 20% neuter (Bauch, 1971)).
Spanish is considered to have a largely transparent gender system: approximately two-thirds of nouns end in -o or -a (corresponding to masculine or feminine, respectively, in more than 96% of instances; Teschner, 1987), while the remaining third end in -e or a consonant (Harris, 1991). Gender is similarly transparently marked on articles and adjectives. In contrast, German gender marking is rather opaque and non-predictable, offering only some probabilistic semantic or morphophonological regularities (e.g., Köpcke & Zubin, 1996). However, there are so many exceptions that L2 speakers must typically learn each noun's gender individually. For this reason, gender is only clearly marked on articles and attributive adjectives, even though the transparency of gender marking is compromised by high levels of syncretism among case, number, and gender.
Of particular relevance to the present study are indefinite articles and attributive adjectives in the nominative case. Spanish marks gender on these elements transparently and with unique entries in each instance, while German displays syncretism between masculine and neuter with indefinite articles, but unique adjectival endings by gender. These paradigms are illustrated in Table 1.
In addition, Spanish and German differ syntactically in the realization of attributive adjectives with respect to the relative order of the noun and adjective within the NP. Canonically, attributive adjectives appear postnominally in Spanish (Bosque & Picallo, 1996) but prenominally in German (Behaghel, 1923). As seen in Table 1, there is no overlap in the syntactic realization of adjectives between Spanish and German, while there can be lexical overlap in the gender of nouns for gender-congruent nouns. In addition, nouns can differ cross-linguistically with respect to gender in two ways: on the one hand, German nouns can have the opposite gender of their translation equivalent in Spanish; on the other hand, they can have an asymmetric (neuter) gender which is not available in Spanish.
The present study
Research questions
Against this backdrop, the present study investigates possible connections between lexical and syntactic processing. Specifically, we focus on the following research questions:
RQ1: How do lexical gender and syntactic congruency interact during sentence processing in an L2?
As suggested by the Lexical Bottleneck Hypothesis, we expect to find that interactions of lexical and syntactic processing extend to the processing of ungrammatical sentences during reading, to the extent that less taxing lexical processing will facilitate target syntactic processing. Specifically, non-target syntax, that is, postnominal attributive adjectives in German, should be easier to detect when the nouns are congruent in lexical gender class.
RQ2: Does the type of L1-L2 mappings of gender affect sentence processing?
Given the results emerging from studies on words processed in isolation (e.g., Klassen, 2016a,b), we expect that neuter nouns, for which there is no analogue in Spanish, will be processed differently from feminine and masculine incongruent nouns, given the contrast between the nature of cross-linguistic competition between masculine and feminine and the competition between masculine/feminine and the asymmetric gender with no representation in the L1 (neuter). Hence, ungrammaticality detection should be easier for neuter nouns than for incongruent nouns.
RQ3: How does L2 proficiency affect the processing of cross-linguistic overlap?
On the basis of previous research on word recognition and sentence processing, we expect that learners at lower proficiency levels will show greater congruency effects (e.g., Hopp & Lemmerth, 2018;Sá-Leite et al., 2020), since the L1 affects L2 processing to a greater extent at lower proficiency. Larger effects of gender congruency will attenuate or delay their sensitivity to ungrammaticality during sentence processing (e.g., Hopp, 2006;Jackson, 2008). As a consequence, we explore effects of proficiency in the present study by adding proficiency as a continuous predictor variable.
Design
To address these research questions, we developed a self-paced reading task in German in which lexical and syntactic cross-linguistic overlaps between Spanish (L1) and German (L2) were manipulated. In the experimental items, target manipulations focused on the gender of nouns at the lexical level and the relative order of attributive adjectives and nouns in NPs at the syntactic level. The manipulations at the level of syntactic overlap resulted in grammatical and ungrammatical sentences in German. Attributive adjectives appear prenominally in German, but postnominally in Spanish, with few exceptions. Thus, sentences that were grammatical in German (Adj-N) would be ungrammatical in Spanish, and those that were ungrammatical in German (N-Adj) would be grammatical in Spanish.
With respect to lexical gender overlap, nouns were either congruent between German and Spanish (masculine or feminine in both L1 and L2; MM & FF), incongruent (masculine-feminine mismatches between L1 and L2; MF & FM), or neuter (masculine or feminine in L1 and neuter in L2; MN & FN).
Participants
In total, 122 adults participated in this study: L1 Spanish speakers who were L2 learners of German (n=86) and L1 German speakers as the control group (n=36). Participants were recruited through German-teaching colleagues in Spain and via the authors' networks in both Spain and Germany. Due to travel restrictions at the time of testing, all participants completed the study over the internet, using the Gorilla Experiment Builder platform (Anwyl-Irvine et al., 2019).
All L2 German participants were adult L2 learners with no significant exposure to another language with grammatical gender. Prior to data analysis, we excluded L2 participants who had L1s in addition to Spanish (n=3), who did not complete the post-task (n=15), and who did not adhere to the instructions in the experiment (n=2). Table 2 shows the age and gender information for all remaining participants, as well as the proficiency means for the L2 group. Proficiency was assessed by a standardized 30-item written placement test of German (Goethe-Institut, 2010).
All speakers in the L1 control group were living in Germany at the time of testing and did not have any knowledge of Spanish. From the 36 native speakers, we excluded one participant who had an L1 in addition to German.
Materials
For the reading study, we created 48 pairs of experimental items, as in (1), which contained a complex NP as a subject in an embedded clause. The main clause always consisted of a predicative adjective, followed by the conjunction denn ("because"), the NP, a copula verb, and two prepositional phrases (PPs), serving as spillover regions 1 . The order of nouns and adjectives was either grammatical (prenominal adjectives as in (1a)) or ungrammatical (postnominal adjectives as in (1b)) in German. The target nouns were matched as closely as possible across conditions by frequency (Jakubíček et al., 2013), number of letters, and number of syllables. In addition, 48 adjectives were selected and paired with the target nouns, also matching them as closely as possible by frequency, number of letters, and number of syllables across conditions (Appendix A). None of the adjectives selected were typically prenominal in Spanish. Subsequently, a total of 48 experimental sentences were created from the noun-adjective pairings, each with a grammatical (1a) and ungrammatical (1b) version.
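As a quick illustration of how such matching can be verified, the following minimal sketch in R compares condition means; the stimulus table and its column names (nouns, condition, freq, n_letters, n_syllables) are illustrative assumptions, not the authors' materials.

```r
library(dplyr)

# Compare mean frequency, word length in letters, and length in syllables
# across conditions to confirm that the stimuli are matched as described.
nouns %>%
  group_by(condition) %>%
  summarise(across(c(freq, n_letters, n_syllables), mean))
```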
The task also included 16 grammatical filler sentences containing grammatical (postnominal) predicative adjectives (2) that served to mitigate possible task effects created by the ungrammaticality of postnominal attributive adjectives in the experimental items. There were a further 8 grammatical and 8 ungrammatical sentences containing negation. In all, the task consisted of 80 sentences, 48 (60%) of which were grammatical and 32 ungrammatical (40%) (Appendix B).
(2) Ich bin besorgt, denn die Wolke ist dunkel und gewaltig.
    I am worried, because the cloud is dark and huge
    "I am worried because the cloud is dark and huge."

To ensure that the L2 speakers had sufficient knowledge of German gender, a gender assignment task including all German nouns in the experimental items was carried out following the reading task. In this 48-item post-task, participants selected the correct nominative definite article form (masculine der, feminine die, or neuter das) for each of the target nouns.
Procedure
Two lists were created such that each participant saw either the grammatical or the ungrammatical version of each experimental sentence (1a vs 1b), in addition to the grammatical and ungrammatical filler items. Each participant was thus presented with a total of 48 grammatical and 32 ungrammatical sentences, preceded by five practice sentences to familiarize participants with the task. Sentences were segmented by phrases as in (3) 2 and presented as a noncumulative, moving-window self-paced reading task programmed using Gorilla Experiment Builder (Anwyl-Irvine et al., 2019). All sentences were presented in 18-pt Open Sans font.
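A minimal sketch in R of this two-list counterbalancing, under the assumption that items are assigned by alternation (the original scripts are not published with the text, so all object names here are illustrative):

```r
# Each of the 48 item pairs contributes its grammatical version to one list
# and its ungrammatical version to the other, so that every participant sees
# each item exactly once while both versions are tested across participants.
items <- data.frame(item = 1:48)

list_A <- transform(items,
                    version = ifelse(item %% 2 == 1, "grammatical", "ungrammatical"))
list_B <- transform(items,
                    version = ifelse(item %% 2 == 1, "ungrammatical", "grammatical"))

# Both lists are then combined with the 32 fillers and randomized per participant.
```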
Each trial began with a fixation cross, and the first segment appeared after 500ms, with participants using the spacebar to advance through the segments. Following each sentence, either a yes/no comprehension question targeting the first PP (segment 5) appeared (for 28 out of 80 trials) or a blank screen was displayed for 1000ms. All sentences were randomized for each participant by the software, and participants were offered a break at the midpoint of the task.
The experimental session was completed via the internet by each participant using their personal computer. Technical (timeouts) and attention (intermittent instructions to click a specific button) checkpoints were included in the programming in order to ensure the quality of remotely collected data. Participants who scored less than 75% on the attention checks prior to the reading task as well as those who took less than 3 or more than 30 minutes to complete the proficiency task were automatically prevented from continuing the experiment. 3 Participants provided informed consent, completed the proficiency test, the self-paced reading task, and the gender assignment task. Finally, they filled out a language background questionnaire. The entire session lasted approximately 45 minutes, and participants were sent 20 Euro gift cards upon completion.
Results
We excluded participants in the L2 group who had fewer than two correct gender assignments in at least one condition in the post-task (n=6). We used the post-task data to exclude participants based on the number of data points in each condition (participants had to have at least 2 correct gender assignments per condition) rather than excluding specific sentences for which an individual participant did not assign the correct gender, since target assignment is not relevant for gender congruency effects. L2 learners can link whatever gender of the noun is given in the input (i.e., the target German gender) to the gender of the Spanish translation equivalent, irrespective of whether they have target knowledge of German gender. Since the L2 participants are native Spanish speakers, we can be confident that they know the gender of the Spanish translation equivalents. So if they see a German noun with feminine gender, they can link it to the translation equivalent in Spanish, which is either congruent or incongruent in gender. The congruency effects arising from this mapping are independent of whether the participants have a target representation of German gender. Data from the remaining 60 adult L2 learners of German and the 35 native speakers were analyzed.
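The exclusion criterion just described amounts to a per-condition count of correct responses; here is a hedged R sketch, with assumed data frame and column names (post_task, rt_data, subj, condition, correct) that are not taken from the original analysis code:

```r
library(dplyr)

# Count correct gender assignments per participant and condition, then flag
# participants with fewer than 2 correct responses in any condition.
excluded <- post_task %>%
  group_by(subj, condition) %>%
  summarise(n_correct = sum(correct), .groups = "drop") %>%
  group_by(subj) %>%
  summarise(min_correct = min(n_correct)) %>%
  filter(min_correct < 2) %>%
  pull(subj)

rt_data <- filter(rt_data, !subj %in% excluded)
```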
In the gender assignment task, gender accuracy among the L2 speakers varied across gender classes, with feminine nouns having higher accuracy (77%) than masculine (67%) and neuter (65%) nouns. Gender accuracy did not vary by gender congruency, with congruent nouns showing similar accuracy (68%) to incongruent nouns, including neuter items (70%). In all, the L2 group demonstrated considerably above-chance (> 33%) gender knowledge of the critical nouns used in the experimental sentences.
For the analysis of the reading times, we excluded all segments with reading times below 200 ms or above 5000 ms. In total, these exclusions removed less than 7.2% of the data. We then log-transformed the reading times (natural logarithm) to adjust for the skewness of their distribution, since the Box-Cox procedure (Box & Cox, 1964) confirmed that a log transformation was appropriate. Table 3 shows the mean reading times by condition and group.
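These trimming and transformation steps can be summarized in a short R sketch; the column names are illustrative, and MASS::boxcox is one standard implementation of the Box-Cox check cited above, not necessarily the one the authors used.

```r
library(dplyr)
library(MASS)

rt_data <- rt_data %>%
  filter(rt >= 200, rt <= 5000) %>%  # drop implausibly fast or slow segments
  mutate(log_rt = log(rt))           # natural-log transform to reduce skew

# Box-Cox check (Box & Cox, 1964): a profile-likelihood peak for lambda
# near 0 indicates that a log transformation is appropriate.
bc <- boxcox(rt ~ 1, data = rt_data, plotit = FALSE)
bc$x[which.max(bc$y)]  # estimated lambda
```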
To address RQ1 about effects of gender congruency, we first compared reading times across the grammaticality and congruency conditions by group. We fitted linear mixed-effects models per segment with Grammaticality, Congruency, and Group and their interactions as fixed effects, adding word frequency (Jakubíček et al., 2013) as a scaled fixed effect. Since reading times in self-paced reading are subject to spillover from one segment to the next, we also added the RTs of the previous segment as a fixed effect. Initially, we estimated models with a maximal random-effects structure containing random slopes for Grammaticality, Congruency, and Frequency and their interactions on the random intercept of participants, and random slopes for Grammaticality and Group and their interaction on the item intercept. We then used the "order" command in the buildmer package (version 2.3; Voeten, 2021) to obtain the maximal models that converged. The final models are reported by segment in Table 4.

On top of main effects of reading times on the previous segment, frequency, group, and grammaticality, the model returned significant interactions of group and grammaticality in segments 3 and 4, with this interaction being qualified by a three-way interaction between group, grammaticality, and congruency in segment 4. In light of the interactions with group, we performed separate analyses of segments 3 and 4 for the L1 group and the L2 group. For the analyses of the L2 group, we added the proficiency score as a scaled fixed effect, including its interactions with grammaticality and congruency.

For segment 3, the analyses by group revealed that the L1 speakers demonstrated a highly significant main effect of grammaticality (β = −0.069; SE = 0.012; z = −5.611; p < .001), while the L2 group demonstrated neither a main effect of grammaticality (β = −0.013; SE = 0.011; z = −1.211; p = .226) nor any interactions with it. No further effects or interactions reached significance in segment 3.

For segment 4, Table 5 lists the models by group and, for the L2 group, by congruency. The L1 group did not show any significant effects beyond the main effect of grammaticality, while the L2 group demonstrated a main effect of grammaticality qualified by an interaction with congruency. As the subsequent comparisons by congruency show, the L2 group evinced a main effect of grammaticality only for sentences with congruent NPs; there was no significant effect of grammaticality for sentences with incongruent NPs. For the latter, there was a trend for more highly proficient learners to differentiate between grammatical and ungrammatical sentences, as suggested by the marginally significant interaction of grammaticality and proficiency. In sum, the analyses suggest that the L2 group is less sensitive than the L1 group to the difference between grammatical and ungrammatical sentences, in that the L2 group shows effects of grammaticality on segment 4 only for sentences with gender-congruent NPs.

In order to answer RQ2 regarding the type of L1-L2 mappings, we compared sentences in the neuter condition with those containing incongruent nouns, on the basis of our hypothesis that neuter nouns behave differently from incongruent nouns for L2 learners. Figure 4 graphs the reading times for neuter and incongruent nouns for the L1 group, and Figure 5 does so for the L2 group. As can be seen, the reading time differences for the L2 group are largely similar for neuter and incongruent nouns.
We fitted the same omnibus model as above to the data, the only difference being that the fixed effect of Congruency was defined as neuter (0.5) vs incongruent (−0.5). Table 6 shows the model output.
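To make the modeling pipeline concrete, here is a hedged R sketch of the omnibus specification and the recoded congruency contrast, using the buildmer package cited above (version 2.3; Voeten, 2021); all variable names (seg_data, log_rt, prev_rt, gram, cong, cong_type, group, freq, subj, item) are illustrative placeholders rather than the authors' actual code.

```r
library(buildmer)

# RQ2 recoding: the congruency contrast compares neuter (0.5) with
# incongruent (-0.5) nouns; congruent items are set aside here.
d <- subset(seg_data, cong_type %in% c("neuter", "incongruent"))
d$cong <- ifelse(d$cong_type == "neuter", 0.5, -0.5)

# Maximal specification as described in the text: fixed effects plus random
# slopes for Grammaticality, Congruency, and Frequency (and interactions)
# by participant, and for Grammaticality and Group by item.
f <- log_rt ~ prev_rt + scale(freq) + gram * cong * group +
  (1 + gram * cong * scale(freq) | subj) +
  (1 + gram * group | item)

# buildmer's "order" step retains the largest random-effects structure
# that still converges.
m <- buildmer(f, data = d,
              buildmerControl = buildmerControl(direction = "order"))
summary(m)
```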
Beyond main effects of reading times on the previous segment, frequency, congruency, grammaticality, and group, the models returned two-way interactions between grammaticality and group on segments 3 and 4. Subsequent by-group models demonstrated that the main effect of grammaticality was significant for the L1 group in both segments (S3: β = −0.080; SE = 0.013; z = −6.183; p < .001; S4: β = −0.066; SE = 0.0149; z = −4.445; p < .001). In contrast, the L2 group showed no effect of grammaticality in segment 3 (β = −0.010; SE = 0.013; z = −0.777; p = .437) and only a trend towards a main effect of grammaticality in segment 4 (β = −0.013; SE = 0.007; z = −1.791; p = .0733), which was qualified by an interaction with proficiency (β = −0.016; SE = 0.007; z = −2.237; p = .0253). Figure 6 plots the model output for the interaction between proficiency and the grammaticality effect in the L2 group across neuter and incongruent items in segment 4. It shows that more proficient L2 learners begin to differentiate between grammatical and ungrammatical sentences, while lower-proficiency learners are not sensitive to grammaticality in sentences containing incongruent and neuter nouns.
As regards RQ3, then, the study finds an interaction between grammaticality and proficiency for sentences with incongruent and neuter NPs. This interaction illustrates that more highly proficient L2 learners tend to demonstrate longer reading times on segment 4 for ungrammatical versus grammatical sentences containing NPs whose nouns differ in gender between Spanish and German, whether of opposite or of asymmetrical (neuter) gender cross-linguistically.
Discussion
The aim of the current study was to investigate lexical gender effects in L2 syntactic processing. We manipulated the type of gender mapping between German and Spanish at the lexical level, as well as the syntactic grammaticality in the relative order of the noun and adjectives. We found that gender congruency interacts with the detection of syntactic ungrammaticality. L2 learners demonstrated grammaticality effects in processing sentences containing congruent nouns compared to those comprising incongruent nouns (RQ1). Second, there were no significant differences in the processing of neuter nouns compared to incongruent nouns (RQ2). For both incongruent and neuter nouns, grammaticality effects emerged only at higher L2 proficiency levels (RQ3).

With respect to RQ1, the study found that L2 learners demonstrate earlier and stronger slowdowns for ungrammatical sentences comprising congruent as compared to incongruent nouns, while there was no such effect for the monolingual controls. Hence, the difference between conditions likely reflects effects of the L1, in that the activation of the same gender node by both Spanish and German appeared to speed up the integration and detection of the syntactic ungrammaticality of postnominal attributive adjectives in German. For gender-congruent nouns, learners demonstrated slowdowns associated with ungrammatical syntax on the segment following the NP, unlike the native control group that evinced the effects on both the NP and the following segment. Such a delay in incremental reading effects is common among L2 learners, especially at less than near-native proficiency levels (e.g., Hopp, 2010). Overall, this finding of congruency effects suggests that L2 learners are more sensitive to mismatches between L1 and L2 word order during L2 sentence comprehension when the nouns embedded within the ungrammatical segment match in gender between the L1 and the L2.
In principle, these lexical effects of gender are consistent with gender congruency effects reported in word recognition and production (e.g., Lemhöfer et al., 2008;Paolieri et al., 2010) and extend these findings to sentence contexts. Crucially, in this study, lexical congruency interacts with syntactic grammaticality detection. These results underscore that gender congruency has consequences for syntactic processing.
This pattern of results cannot be exhaustively accommodated by gender-integrated approaches to the bilingual mental lexicon. If gender congruency were limited to lexical co-activation of both languages due to non-selective access to the lexicon, congruent nouns should lead to globally faster reading times compared to incongruent nouns, irrespective of grammaticality differences. While compatible with gender-integrated models, the finding that gender congruency had specific effects on the detection of ungrammaticality requires additional explanation, such as that provided by the Lexical Bottleneck Hypothesis (LBH). According to the LBH, fast lexical access leaves more time and resources for syntactic computations in real-time sentence processing, while greater demands on lexical processing can delay or attenuate syntactic structure building in real time among L2 learners. The present findings are compatible with this account, in that gender-congruent nouns are processed with greater ease, and thus readers show sensitivity to syntactic ungrammaticality. Gender-incongruent nouns, in contrast, consume greater lexical processing resources, as learners need to inhibit the concurrent activation of the mismatching gender node in L1 Spanish as they process the German noun. As a consequence, reading slowdowns for ungrammatical versus grammatical word orders are attenuated or absent compared to congruent nouns, giving rise to non-native syntactic processing.
In this respect, the findings are broadly similar to the ones reported for predictive gender processing among L1 Russian learners of German by Hopp and Lemmerth (2018). In their study, high-intermediate L2 learners showed predictive use of gender only for lexically congruent nouns in the syntactically incongruent (article) condition, while there were no effects of lexical congruency for gender agreement processing in syntactically congruent contexts, that is, gender marking on prenominal adjectives. The authors account for this asymmetry by arguing that L2 learners' use of inflection is limited when the syntactic realizations of agreement differ between the L1 and L2 (see also Tokowicz & MacWhinney, 2005). However, lexical gender congruency can ease agreement processing by lowering the processing demands associated with the non-overlapping syntax.
The present findings suggest that these facilitative effects of lexical congruency attested for the predictive use of gender information in syntactic agreement processing extend to identifying ungrammaticalities in syntactic processing, even if these do not involve mismatches in grammatical gender agreement between the L1 and L2. Instead, gender congruency at the lexical level facilitates the sensitivity to word order differences between the L1 and L2. From a broader perspective, the study adds to previous findings that L1-L2 lexical overlap, for example, by virtue of cognates, leads to more target-like L2 sentence processing (e.g., Miller, 2014) and reduces the interference of the L1 syntax (Hopp, 2017). As such, it highlights interdependencies between lexical and syntactic processing during L2 sentence comprehension.
As regards RQ2, this study goes beyond previous research on L2 sentence processing by addressing the question of how the presence (or absence) of an analogous L1 gender for a gender class in the L2 affects syntactic processing. Masculine and feminine are present in both Spanish and German and can be straightforwardly mapped onto each other, while there is no clear mapping for the German neuter given the absence of a third gender class in Spanish. 4 In this study, there were no differences in reading times for sentences with neuter nouns compared to those with incongruent nouns. For both incongruent and neuter nouns, effects of grammaticality detection only surfaced at higher proficiency levels, which suggests that the lexical bottleneck posed by lexical incongruency in gender class widens with more experience and greater facility in the L2. As can be seen in Figure 6, as proficiency rises, the reading times for grammatical sentences tend to decrease. This seems to indicate that, as proficiency rises, the lexical competition between different gender classes in the L2 and L1 can be resolved more easily, which in turn allows the parser to integrate syntactic ungrammaticality signals incrementally. When the effects of lexical competition abate, evidence of target syntactic processing emerges.
The finding that neuter nouns pattern with incongruent nouns is inconsistent with Klassen's (2016a,b) findings for words processed in isolation. In an L2 picture naming task, L2 German learners of Spanish showed significantly less response interference arising from the co-activation of the Spanish gender when asked to produce bare nouns or NPs containing neuter nouns compared to incongruent feminine or masculine nouns. To the best of our knowledge, the present study is the first to examine the effect of such an asymmetry in the L1 and L2 gender systems at the level of L2 sentence processing. Unlike isolated picture naming, in the present study, processing does not differ for neuter and incongruent nouns in sentence contexts in terms of affecting L2 learners' ability to detect syntactic violations. This could be due to the fact that more processes are involved in the sentences presented to participants in this reading experiment as compared to picture naming, which is largely limited to lexical retrieval. Further research is required to determine the precise locus of the asymmetric neuter effect in word recognition and its possible replication with other asymmetric gender language pairings.
Overall, the interactions between lexical and syntactic processing observed in the present study attest that the creation of syntactic structure in real-time L2 comprehension is sensitive to the lexical properties of the elements involved therein, even when these properties are not relevant for building the structure as such (see also Hopp, 2016;Miller, 2014; for L1 processing, see Tily et al., 2010). In L2 processing, lexical aspects unique to bilingualism, for example, cross-linguistic gender congruency effects, impact the time-course and degree of ungrammaticality detection in word order. In this respect, they also underscore that non-native-like processing signatures in syntax are not necessarily rooted in problems with using syntax in real time (e.g., Clahsen & Felser, 2006), but may be caused by difficulties at processing stages that precede and subserve syntactic processing. Accordingly, the present findings highlight the need for an integrated study of the bilingual language system, encompassing both the lexicon and the grammar, in order to delineate aspects of adult L2 grammatical processing that are natural consequences of bilingualism from those that may reflect maturational constraints on late L2 acquisition.
A potential limitation of this study is the collection of data via the internet with participants using their personal computers. While recent analyses of online versus laboratory data collection have shown reliable results with both methods (e.g., Anwyl-Irvine et al., 2020), we further minimized the risks associated with unsupervised participation by controlling the type of device permitted by the software (computers only), introducing technical and attentional checkpoints into the programming, and imposing strict inclusion criteria on the data in the analyses (see the Participants and Results sections for details). Even after discarding data from 27 participants, our study included a large sample (n = 95) and found robust effects comparable to laboratory-based experiments.
Another potential limitation lies in the fact that we only tested one bilingual group and that we therefore cannot definitively tie the findings to effects coming from L1 Spanish. However, given that the native German control group did not show congruency effects, noun and adjective frequency were controlled across conditions, and the congruent and incongruent nouns were indistinguishable in gender assignment accuracy in the post-task, there is no reason to believe that the different pattern of results across the conditions reflects item-level differences. Hence, we confidently relate the differential results between congruent, incongruent, and neuter nouns to L1 effects. Nevertheless, a further study could test a different L1 group whose L1 has prenominal adjectives to explore if similar effects of gender congruency on syntactic ungrammaticality detection can be observed, even if the ungrammatical word order in the L2 does not map to a licit word order in the L1.
Conclusion
In sum, this study reveals lexical gender effects in bilinguals' detection of ungrammatical word order in L2 sentence processing. It suggests that lexical bottlenecks in L2 sentence processing reduce sensitivity to ungrammatical syntax. These findings not only advance the state of the art but also open new avenues for further research at the crossroads between lexical gender and syntactic congruency in bilinguals.
1. Denn was chosen in order to avoid verb-final word order in the embedded clause.
2. The mean number of letters and syllables in critical and spillover regions was balanced across sentences according to gender congruency and grammaticality conditions.
3. The minimum time to complete the proficiency test was based on the fastest native-speaker completion times in piloting, and a maximum was established in order to prevent participants from stopping frequently to search for answers on the internet.
4. Considerable evidence from 2L1 children's code-switches between the article and the noun shows that pre-school-age children draw connections between masculine and feminine across Romance and Germanic languages. Findings such as those in Cantone and Müller (2008) and Eichler, Hager and Müller (2012) illustrate that in spontaneous speech, 2L1 Spanish-German, Italian-German, and French-German children have a general tendency to produce code-switched NPs in which there is cross-linguistic gender agreement between the article and the noun (e.g., una [SP-fem] Schlange [GER-fem] "a snake").
Appendix A

A1. Frequency data for nouns (means) by language and condition. [Table: frequency per million (log10) in German and Spanish, by condition.]
Probiotics, Their Extracellular Vesicles and Infectious Diseases
Probiotics have been shown to be effective against infectious diseases in clinical trials, with either intestinal or extraintestinal health benefits. Even though probiotic effects are strain-specific, some "widespread effects" include: pathogen inhibition, enhancement of barrier integrity, and regulation of immune responses. The mechanisms involved in the health benefits of probiotics are not completely understood, but these effects can be mediated, at least in part, by probiotic-derived extracellular vesicles (EVs). However, to date, there are no clinical trials examining the health benefits of probiotic-derived EVs against infectious diseases. There is still a long way to go to bridge the gap between basic research and clinical practice. This review attempts to summarize the current knowledge about EVs released by probiotic bacteria to understand their possible role in the prevention and/or treatment of infectious diseases. A better understanding of the mechanisms whereby EVs package their cargo and of the processes involved in communication with host cells (inter-kingdom communication) would allow further advances in this field. In addition, we comment on the potential use of EVs as therapeutic agents (postbiotics) against infectious diseases and on the knowledge still missing. Future research on probiotic-derived EVs is needed to open new avenues for the encapsulation of bioactives inside EVs from GRAS (Generally Regarded as Safe) bacteria. This could be a scientific novelty with applications in the functional food and pharmaceutical industries.
INTRODUCTION
Infectious diseases are disorders caused by organisms such as viruses, bacteria, fungi, or parasites. Could probiotics help deal with infectious diseases? Numerous clinical trials have brought this question to the forefront, reporting positive effects on the prevention and/or treatment of infectious diseases. In this review, we conducted a search for probiotic bacteria utilized for the treatment of infectious diseases and then discussed the current knowledge about extracellular vesicles (EVs) released by these probiotic species. It is important to highlight that, according to the generally accepted definition of a probiotic, probiotic effects are strain-specific. However, various effects of probiotics can be ascribed to the species level (Hill et al., 2014). Moreover, the study of EVs released by probiotic strains is still in its infancy. For these two reasons, to collect the current evidence on probiotic-derived EVs, we decided to extrapolate our search to the species level. In this line, EV-producing strains were shown to mediate beneficial effects in both in vitro and in vivo models, but human trials (a requirement for a probiotic claim) are still pending.
PROBIOTICS

Definition
Probiotics are defined as "live microorganisms that, when administered in adequate amounts, confer a health benefit on the host" (Hill et al., 2014). In a position statement, an expert panel of the International Scientific Association for Probiotics and Prebiotics (ISAPP) set four minimum criteria for probiotic claims (Binda et al., 2020). Probiotics must: (1) be identified to the genus, species, and strain level; (2) be safe for the intended use; (3) have demonstrated health benefits in at least one clinical trial; and (4) have a suitable viable count at the end of shelf life.
Before clinical trials are conducted, potential probiotics must be selected by a comprehensive approach including multiple steps. According to the "Guidelines for the Evaluation of Probiotics in Food" (FAO/WHO, 2002), candidate strains are suggested to be assessed for their stress tolerance, antimicrobial properties, epithelium adhesion ability, and safety. At the same time, in vitro and in vivo experiments should be performed to evaluate probiotic effects (de Melo Pereira et al., 2018;Santos et al., 2020).
As stated above, for validation of treatment safety and efficacy, probiotics must be subjected to at least one clinical trial, which must be conducted based on generally accepted scientific standards (Binda et al., 2020). In general, the weight ascribed to a trial result is higher when sources of bias are avoided (Higgins et al., 2019), and therefore randomized controlled trials are usually considered the most appropriate methodology for validating a probiotic health claim (Tamayo, 2008). In the last decades, there has been a rapid growth in the number of clinical trials for the use of probiotics for prophylactic and/or therapeutic applications in various fields: infectious diseases, cancer, depression and obesity (Zommiti et al., 2020).
Even though probiotics must be identified to the strain level, various meta-analyses indicate that "shared benefits" are achieved by many different strains of the same species, due to similar biological pathways (Sanders et al., 2018). In that regard, the ISAPP panel considered that well-studied beneficial species may be considered "probiotics" even in the absence of randomized controlled trials that support this claim (Hill et al., 2014). Although clinical trials rarely compare different strains of the same species, certain health effects, such as immunomodulation, have been ascribed to many strains of the same species.
Many probiotic lactic acid bacteria have long been used in dairy products and have been awarded GRAS status (Ghosh et al., 2019). The global probiotics market is projected to grow at a compound annual growth rate of 7.2% from 2021 to 2028 (Grand View Research, 2021). The popularity of probiotic use has increased dramatically in the last decades, not only for clinical use but also among healthy individuals wishing to maintain a healthy gut microbiota (Fleming et al., 2016; Su et al., 2020).
Probiotics and Microbiota: Inter-Kingdom Communication
Our bodies are composed of human cells and the microbiota, which comprises viruses, bacteria, fungi, and parasites (Cao and Mortha, 2020). These complex and dynamic populations of microorganisms are crucial for maintaining health and play a decisive defensive role against pathogens (Sokol, 2019). Different microbiotas exist within the human body according to their localization: skin, lung, urethra, vagina, etc. In the last decade, organs previously considered sterile have been hypothesized to harbor a microbiota. For example, although it was long thought that the fetus developed under sterile conditions, recent data suggest the presence of microorganisms in the uterus and placenta (Agostinis et al., 2019; Tang et al., 2020). Moreover, contrary to a long-held dogma, today we know that human milk is not sterile (McGuire and McGuire, 2017). One hypothesis for how bacteria from the maternal gastrointestinal tract (GIT) are translocated to human milk is that dendrites from dendritic cells (DCs) cross the gut epithelium and transport gut lumen bacteria to the mammary gland through the lymphoid system (Olivares et al., 2015; Demmelmair et al., 2020).
Nowadays, the gut microbiota is considered a new "vital organ" of the human body and is connected with other organs through different axes via neural, endocrine, and immune interactions (Ding et al., 2019; Ahlawat et al., 2021). In this line, the consumption of probiotics has been reported to have beneficial effects on the gut-brain axis, the gut-skin axis, etc. (Banfi et al., 2021; Park et al., 2021). In addition, it has recently been demonstrated that the consumption of probiotics can also modulate other microbiotas, e.g., the vaginal microbiota (Silvia Ventimiglia et al., 2021).
Fermented foods and probiotics (two terms that should not be confused) increase gut microbiota diversity, with benefits for human health (Aljutaily et al., 2020; Vinderola and Pérez-Marc, 2021). Frequently, a disruption in microbiota composition results in a less diverse or less "rich" microbiota, which is often linked to leaky gut syndrome, higher gut inflammation, and more oxidative stress. This microbiota imbalance is linked to various diseases, including obesity, diabetes, irritable bowel syndrome, inflammatory bowel disease, depression, and cardiovascular disease (Hills et al., 2019).
It is important to emphasize that many probiotic strains do not colonize the gut and are no longer recoverable in stool 1-4 weeks after their consumption stops. For example, the probiotic-containing fermented milk Activia did not change the bacterial composition of the gut, but instead altered gene expression patterns relevant to carbohydrate metabolism in the gut microbiota. These changes in gut function were confined to the period of probiotic consumption (Maguire and Maguire, 2019).
Probiotics for Infectious Diseases
In the context of probiotics against infectious diseases, widespread effects or "shared benefits" of probiotics include mechanisms that act directly by inhibiting pathogens and indirectly by reinforcing the host epithelial barrier function and immune responses (Lebeer et al., 2010; Sassone-Corsi and Raffatellu, 2016; Raheem et al., 2021). Even though probiotic effects are strain-specific, in this review we collected a series of clinical trials in which the benefits of probiotic species were assessed against infectious diseases (Figure 1). According to our search, nearly 50% of these species were reported to release EVs (Table 1). Seven out of 24 strains released EVs that had beneficial effects against pathogens in in vitro, ex vivo, or in vivo models (Table 1, indicated by asterisks). In fact, some of these strains are well-known probiotics, e.g., Escherichia coli Nissle 1917 and Lacticaseibacillus rhamnosus GG. We limited our search to bacteria, although some fungi are also considered to be probiotics.
Probiotic bacteria with successful results against infectious diseases mainly include bifidobacteria and lactobacilli, which represent the most studied probiotics (Stavropoulou and Bezirtzoglou, 2020), and other Gram (+) bacteria belonging to the genera Streptococcus, Bacillus, Propionibacterium and Clostridium. On the other hand, to our knowledge the only Gram (−) bacterial strain that was found to be effective in clinical trials is E. coli Nissle 1917. E. coli Nissle 1917 has been considered a probiotic for over a century and used to treat intestinal diseases. However, the strain contains a pathogenicity island (pks) that codes for colibactin, a genotoxin that mediates anti-inflammatory effects (Olier et al., 2012) and is now linked to causative mutations found in human colorectal cancer (Nougayrède et al., 2021).
Probiotics are commonly consumed in food or supplements (Hill et al., 2014; Figure 2). Oral administration, the most usual route of administration of probiotics, has yielded satisfactory outcomes in clinical trials, even when the beneficial effect occurred at extraintestinal sites (Maldonado-Lobón et al., 2015; Panigrahi et al., 2017; Vladareanu et al., 2018; Lazou Ahrén et al., 2021). Possible mechanisms by which oral administration of probiotics may have extraintestinal and systemic effects on the host will be discussed in the following sections. However, there are many other possible routes of administration of probiotics, such as mouth rinses and lozenges for periodontal disease (Tsubura et al., 2009; Invernici et al., 2018), vaginal suppositories for trichomoniasis, bacterial vaginosis, and recurrent urinary tract infections (Stapleton et al., 2011; Sgibnev and Kremleva, 2020), intranasal administration for upper respiratory tract infections (Passali et al., 2019), and topical application for skin wounds (Peral et al., 2009). In the case of respiratory and skin infections, although topical administration could be advantageous (Lopes et al., 2017; Spacova et al., 2021), it is currently underrepresented in clinical trials.
It is important to note that many clinical trials examine the use of probiotics as a supplement to conventional therapy against infectious diseases, such as antibiotic and antifungal agents (Shi et al., 2019; Joseph et al., 2021). In general terms, probiotics have shown effectiveness in preventing infectious diseases in different organ systems, from the respiratory and gastrointestinal tracts to the female urogenital system, among others. As regards gastrointestinal diseases, probiotics were effective in reducing the frequency and duration of diarrhea (Francavilla et al., 2012; Park et al., 2017; Sharifi-Rad et al., 2020), reducing symptoms of gastroenteritis and H. pylori gastritis (Shafaghi et al., 2016; Shin et al., 2020), and preventing necrotizing enterocolitis (Chang et al., 2017). With respect to respiratory diseases, the benefit of probiotics has been mostly associated with the prevention of infections, especially in the upper respiratory tract (Aryayev et al., 2018; Anaya-Loyola et al., 2019; Lazou Ahrén et al., 2021). Finally, certain probiotics were successful in reducing the symptoms and frequency of recurrent vulvovaginal candidiasis, bacterial vaginosis, and urinary tract infections (Laue et al., 2018; Russo et al., 2019; Sgibnev and Kremleva, 2020), possibly mainly by restoring the normal vaginal microbiota.

FIGURE 2 | Schematic representation of the interactions between probiotics, pathogens and the host. Probiotics in various dosage forms were shown to exert beneficial effects on different human organ systems for the prevention or treatment of infectious diseases. These effects are exerted indirectly or directly through pathogen inhibition and may be mediated, at least in part, by probiotic-derived EVs.

Other benefits of probiotics demonstrated in clinical trials involve further organ systems, such as the skin and the nervous system (Kotzampassi et al., 2015; Xia et al., 2018). Further high-quality clinical trials and meta-analyses should be undertaken to provide stronger evidence for the therapeutic use of probiotics (Stavropoulou and Bezirtzoglou, 2020).
EXTRACELLULAR VESICLES

Bacterial Extracellular Vesicles
Probiotics seem to act through a wide repertoire of mechanisms, but the specific pathways and key regulatory molecules underlying their beneficial effects are largely unknown (Plaza-Diaz et al., 2019). In this line, EVs have been associated with diverse functions in cell-to-cell communication and appear to be a common language between kingdoms (i.e., bacteria and eukaryotic cells) (Ñahui Palomino et al., 2021). Extracellular vesicles are produced by all domains of life (archaea, bacteria, and eukarya) and, to date, appear to be released by all cell types of all studied organisms. All EVs are composed of a lipid bilayer with membrane proteins and contain DNA, RNA, and proteins (Théry et al., 2018). The level of knowledge about bacterial EVs is lower than that about eukaryotic EVs, but the number of studies is continuously increasing (Ñahui Palomino et al., 2021). In particular, EVs from Gram (+) bacteria have been less studied, and our understanding of their biogenesis and interaction with host cells is only beginning to emerge (Briaud and Carroll, 2020).
Bacterial EVs are nanoscale structures (below 500 nm), and their release has been related to bacterial physiology, including probiotic and pathogenic effects. In Gram (+) bacteria, EVs are called membrane vesicles (MVs), and their lipid bilayer encloses cytosolic material. In contrast, in Gram (−) bacteria, EVs are called outer-membrane vesicles (OMVs), and the lipid bilayer encloses periplasmic material. Gram (+) and Gram (−) bacterial EVs also differ in their surface composition, for example in the presence of lipopolysaccharide (LPS). The diversity of cargo molecules contained in EVs might explain the variety of described roles, ranging from decoys for viral and antibiotic attack and quorum sensing to the regulation of host immune defense (Kaparakis-Liaskos and Kufer, 2020).

TABLE 1 (fragment) | Reported effects of EVs released by probiotic species:
• Lacticaseibacillus paracasei: EVs decreased NF-κB levels and mRNA levels of TNFα, IL-1α, IL-1β, and IL-2, and increased mRNA levels of TGFβ and IL-10 in LPS-induced inflammation in human intestinal epithelial cells (HT-29), and reduced inflammation symptoms of dextran sulfate sodium-induced colitis in mice.
• Lactiplantibacillus plantarum APsulloc 331261: EVs increased IL-10, IL-1β, and GM-CSF levels in ex vivo human skin cultures, and induced monocyte-to-macrophage transition and polarization to M2b in human monocytic cells (THP-1).
• Lactiplantibacillus plantarum BGAN8: EVs were endocytosed in a clathrin-dependent manner by human intestinal epithelial cells (HT29).
• Lactiplantibacillus plantarum KCTC 11401BP: EVs decreased IL-6 levels and protected cell viability against treatment with S. aureus EVs in human epidermal keratinocytes (HaCaT), and reduced skin inflammation in S. aureus EV-induced atopic dermatitis in mice.
• Lacticaseibacillus rhamnosus JB-1: EVs appeared in blood 2.5 h after oral consumption and contained bacteriophage DNA.
EVs that have had a beneficial effect against pathogens in in vitro, ex vivo, or in vivo models are indicated by asterisks in the original table.
Postbiotics, a New Concept
As mentioned before, probiotics comprise live microorganisms that confer a health benefit on the host when administered in adequate amounts. At the same time, there is increasing evidence of the health effects of non-viable microorganisms and their bioactive compounds (metabolites that they can produce by fermentation or by their action on food components) (Collado et al., 2019). An expert panel of ISAPP defined a postbiotic as a "preparation of inanimate microorganisms and/or their components that confers a health benefit on the host" (Salminen et al., 2021). In this line, EVs are secretory components associated with the health benefits of probiotic bacteria and consequently could be considered postbiotics (Wegh et al., 2019). Extracellular vesicles play a central role in many physiological and pathological processes due to their capacity to transport biologically active macromolecules that can effectively alter the biological properties of target cells. Owing to this property, they can be considered novel agents with different therapeutic applications. There are many clinical trials investigating the use of human EVs for various therapeutic approaches, including pathogen vaccination, anti-tumor therapy, regenerative therapies, and drug delivery (Lener et al., 2015; Théry et al., 2018). In the case of EVs against infectious diseases, two different strategies exist: the evaluation of EVs released naturally by the pathogen or by infected cells, and EVs from in vitro antigen-pulsed DCs (Wahlund et al., 2017; Riley and Blanton, 2018; Santos and Almeida, 2021). However, to our knowledge, there are no clinical trials related to the use of EVs from probiotic bacteria for the prevention and/or treatment of any infectious disease.
EXTRACELLULAR VESICLES FROM PROBIOTIC BACTERIA AND INFECTIOUS DISEASES
In order to organize the information, we divided the current evidence on the role of EVs as mediators of probiotic beneficial effects into six categories. The first category addresses the role of EVs against pathogens. The second and third categories relate to their function in the host immune system, which can be divided into three lines of defense: physical and chemical barriers, innate immunity, and adaptive immunity. The remaining categories describe EV composition, how EVs are taken up and transported across human cells, and other functions.
Pathogen Inhibition
Probiotics can inhibit pathogens through production of antimicrobial agents and through competitive exclusion of pathogens by competing for adhesion or nutrients in the GIT (Surendran Nair et al., 2017;van Zyl et al., 2020;Raheem et al., 2021).
Antimicrobial agents mainly include reactive oxygen species, lactic acid, and bacteriocins (Rajilić-Stojanović, 2013). Bacteriocins are peptides with antimicrobial activity that have been shown to inhibit not only bacteria, but also viruses, fungi, and parasites (Dicks and Grobbelaar, 2021; Huang et al., 2021). Furthermore, bacteriocins might be an interesting alternative to antibiotics for infectious diseases caused by antibiotic-resistant bacteria, due to their high potency and low toxicity (Gradisteanu Pircalabioru et al., 2021).
Recent studies show that EVs released by L. acidophilus ATCC 53544 can deliver bacteriocins and thus kill other bacteria (Dean et al., 2019, 2020). Proteomic analyses revealed that bacteriocins are enriched in EVs. Even though the bacteriocins investigated by these authors are directed against an L. delbrueckii strain (Dean et al., 2020), other bacteriocins synthesized by probiotics are able to inhibit or kill pathogens, such as Listeria monocytogenes, Staphylococcus aureus, Acinetobacter baumannii, Gardnerella vaginalis, Streptococcus agalactiae, and Pseudomonas aeruginosa, in both in vitro and in vivo models (Gaspar et al., 2018; van Zyl et al., 2019; Hassan et al., 2020). It is noteworthy that EVs may protect bacteriocins from proteases and inactivating molecules that are normally present in the intestine. Whether EVs from probiotics can deliver bacteriocins to pathogens is still unknown and holds great potential for future research.
Several clinical trials have shown that probiotics improved vaginal microbiota composition (Ho et al., 2016;Laue et al., 2018;Vladareanu et al., 2018) and it has been demonstrated that a vaginal microbiota dominated by lactobacilli prevents infections caused by various pathogens, including HIV-1 (Chee et al., 2020). A possible relevant mechanism of EVs related to pathogen inhibition is their ability to prevent pathogen interaction with host cells. It has been demonstrated that some L. crispatus and L. gasseri EVs reduced HIV-1 attachment to host cells and in this way prevented infection in human cell lines and tissues (Ñahui Palomino et al., 2019). This effect was associated with the reduced accessibility of gp120 (a viral envelope protein) to host target cells after incubating HIV-1 virions with EVs.
Regarding competitive exclusion, probiotics can compete with enteric pathogens for adhesion sites on the mucus layer or on intestinal epithelial cells, and hence prevent pathogen colonization and infection (van Zyl et al., 2020). Competitive exclusion of pathogens has been demonstrated in in vitro models (Singh et al., 2017;Tuo et al., 2018), and possibly takes place not only in the GIT but also in the oral cavity and urogenital tract. Numerous authors have investigated the role of pathogenic bacteria EVs in transporting virulence factors and toxins into host cells (Macion et al., 2021). On the other hand, to the best of our knowledge, the only existing report of EVs from probiotics mediating the competition between pathogenic and probiotic bacteria was published by Kim et al. (2018). In this study, it was shown that EVs from L. plantarum prevented skin inflammation in a murine model of S. aureus EV-induced atopic dermatitis. Concerning the GIT, EVs from probiotics expose adhesion proteins that may interact with the mucus layer and human cells. Although it is likely that this interaction may affect viral and bacterial attachment, there remains a need for in vitro and in vivo studies addressing this question.
Barrier Function: Physical and Chemical Defense
The intestinal epithelial barrier acts as the first line of defense by preventing the entry of antigens and pathogens (Barbara et al., 2021). Alteration of the gut microbiota is the most important factor that disrupts the integrity of the intestinal epithelial barrier, leading to intestinal inflammation and disease (Gareau et al., 2010). In this context, probiotics, as transient constituents of the microbiota, are able to improve barrier function through surface components and secreted factors (postbiotics), among them EVs. Since exposure to infection can lead to the loss of epithelial integrity (König et al., 2016; Invernizzi et al., 2020), the participation of probiotic EVs in the improvement of barrier function is an important reason to regard them as potential prophylactic or therapeutic agents against infections.
As for the physical barrier, in vitro and in vivo experiments have demonstrated that EVs released by E. coli Nissle 1917 can mediate anti-inflammatory effects and protect intestinal epithelial barrier function (Alvarez et al., 2016; Fábrega et al., 2016, 2017). A key role in the maintenance of intestinal epithelial barrier integrity is played by tight junctions, which are composed of a network of proteins that regulate paracellular permeability, such as claudins, zonula occludens (ZO), and occludin (Barbara et al., 2021). EVs released by E. coli Nissle 1917 have been shown to upregulate ZO-1 and claudin-14, downregulate claudin-2 (a gene that codes for a "leaky" protein), and in turn improve epithelial barrier function in in vitro intestinal epithelium models (T-84 and Caco-2 cell lines) (Alvarez et al., 2016). This function of EVs has also been reported in these same cell lines infected with enteropathogenic E. coli (EPEC), an enteric pathogen that disrupts tight junctions as a way to increase invasion. In this work, EVs released by E. coli Nissle 1917 were able to counteract EPEC-altered mRNA levels of claudin-14 and occludin, preserve the subcellular localization of ZO-1 and occludin, and maintain F-actin at the intercellular junctions. Barrier integrity restoration was further confirmed by measuring transepithelial electrical resistance (TEER) and the flux of FITC-dextran (Alvarez et al., 2019). In addition, restoration of epithelial integrity by EVs has been observed in an in vivo model of experimental colitis (Fábrega et al., 2017). In this regard, these authors demonstrated that oral administration of EVs from E. coli Nissle 1917 increased mRNA levels of trefoil factor 3 (TFF-3), a marker of intestinal barrier function, and decreased mRNA levels of MMP-9, a protein involved in tissue injury.
Regarding the intestinal chemical defense, antimicrobial peptides and the mucus layer (mainly produced by goblet cells) are further key factors that maintain intestinal barrier integrity by protecting epithelial cells from bacteria and other challenges (Hansson, 2020; Yong et al., 2020; Barbara et al., 2021; Fusco et al., 2021). In an in vivo model of experimental colitis, treatment with EVs from E. coli Nissle 1917 resulted in the restoration of the mucin content of goblet cells and in a smaller ulceration surface, as evidence of barrier integrity (Fábrega et al., 2017). On the other hand, a recent study conducted by Gu and colleagues showed that EVs from L. rhamnosus GG increased nuclear factor erythroid 2-related factor 2 (Nrf2) expression and, in turn, increased the levels of tight junction proteins and of the antimicrobial peptide Reg3, which is involved in the prevention of Listeria monocytogenes and Salmonella enteritidis infections (Loonen et al., 2014; Gu et al., 2021). Furthermore, Reg3 mRNA levels increased after incubation of Caco-2 cells with EVs from L. plantarum WCFS1 (Li et al., 2017). In an in vivo model, the administration of these EVs prolonged the survival of Caenorhabditis elegans infected with vancomycin-resistant enterococci. In this line, EV-mediated protection against antimicrobial-resistant pathogens could be useful to limit the development of antibiotic resistance that results from the widespread use of antibiotics.
Innate and Adaptive Immunity
As mentioned before, intestinal epithelial cells provide a physical barrier that separates the host from the external environment, but they are not merely static barriers: they engage in a complex, dynamic crosstalk between the microbiota and the intestinal immune system (Takiishi et al., 2017). Both bacteria- and host-derived EVs are key players in such inter-kingdom crosstalk. There is now an accumulating body of evidence that bacterial EVs regulate the innate and adaptive immune systems of the host. Consequently, EVs released by the gut microbiota may have a great influence on human health and disease. EVs also carry a set of molecules known as microbe-associated molecular patterns (MAMPs) that are recognized by specific receptors expressed by host epithelial and immune cells. These pattern recognition receptors (PRRs), such as TLR2 and NOD1, are key components of innate immunity and mediate host responses (Lebeer et al., 2010; Díaz-Garrido et al., 2021).
Maintaining the proper balance of immune responses at mucosal surfaces is critical for maintaining homeostasis and successfully clearing pathogens. Epithelial cells have been identified as key players in the development of elaborate immune responses that discriminate between non-pathogenic and pathogenic microorganisms. In this regard, intestinal epithelial cells contribute to delaying and dampening infections by initiating the development of an immune response and attracting immune cells to the infection site (Pellon et al., 2020). Among intestinal epithelial cells, enterocytes are the most abundant cell type, representing approximately 90% of the total number; the remaining 10% consist of mucus-producing goblet cells, enteroendocrine cells, antimicrobial peptide-producing Paneth cells, and others (Jochems et al., 2018). To study absorption and immune responses, different in vitro cell-line models are available: Caco-2, HT-29, and T-84.
In vitro and in vivo experiments with intestinal epithelial cells have demonstrated that EVs released by E. coli, L. casei, L. paracasei, P. freudenreichii, and L. rhamnosus can modulate NF-κB levels (Cañas et al., 2018; Bäuerl et al., 2020; Choi et al., 2020; Vargoorani et al., 2020; Tong et al., 2021). NF-κB is a family of transcription factors with an essential role in a variety of aspects related to human health, including the development of both innate and adaptive immunity. EVs from L. paracasei and P. freudenreichii decreased NF-κB levels in LPS-induced inflammation in the HT-29 cell line (Choi et al., 2020; Rodovalho et al., 2020). At the same time, L. rhamnosus EVs had the same effect in an in vivo model of dextran sulfate sodium-induced colitis in mice (Tong et al., 2021). On the other hand, EVs from E. coli and L. casei increased NF-κB levels per se in Caco-2 and HT-29 cell lines (Cañas et al., 2018; Bäuerl et al., 2020). This opposite modulation of NF-κB levels, in the presence or absence of LPS, was also observed for pro-inflammatory cytokines like IL-8 in both Caco-2 and HT-29 cell lines, and in ex vivo human colonic explants (Choi et al., 2020; Vargoorani et al., 2020).
In contrast, in the presence or absence of LPS, L. casei and L. paracasei EVs consistently increase the levels of anti-inflammatory cytokines like IL-10. The inhibition of the NF-κB pathway and the increase of IL-10 by EVs have been extensively reported for probiotic bacteria in both in vitro and in vivo models of infection and/or inflammation (Bhardwaj et al., 2020). Moreover, Fábrega and colleagues demonstrated in an in vivo model of dextran sulfate sodium-induced colitis in mice that E. coli Nissle 1917 EVs decreased mRNA levels of COX-2 and iNOS, which encode important inducible enzymes for the synthesis of prostaglandins and nitric oxide, respectively. This decrease in COX-2 and iNOS levels correlated with reduced expression of the pro-inflammatory cytokines TNF-α and IFN-γ, and with lower colon inflammation and tissue damage in EV-treated mice (Fábrega et al., 2016, 2017). This evidence suggests that EVs could mediate, at least in part, the beneficial effect of probiotics against infectious diseases. With regard to immune cells, EVs from different species increase per se the levels of pro-inflammatory cytokines like TNF-α and IL-6 (Hu et al., 2020; Gu et al., 2021; Morishita et al., 2021) and, at the same time, increase the levels of anti-inflammatory cytokines like IL-10 and IL-22 produced by macrophages, DCs and peripheral blood mononuclear cells (PBMC) (López et al., 2012; Al-Nedawi et al., 2015; Fábrega et al., 2016; Hu et al., 2020). In agreement with this effect, it has been reported that different probiotic bacteria stimulate pro-inflammatory and/or anti-inflammatory cytokines in different immune cells (Ren et al., 2016; Cristofori et al., 2021).
Modulation of the immune system by bacterial EVs has also been studied against pathogens in in vitro models. EVs from L. rhamnosus GG and L. reuteri DSM 17938 decreased inflammatory mediators like IFN-γ and IL-17A in S. aureus-stimulated human PBMC (Mata Forsberg et al., 2019), while EVs from the probiotic strain E. coli Nissle 1917 improved the antibacterial activity of macrophages against three bacterial pathogenic strains of E. coli, S. typhimurium, and S. aureus (Hu et al., 2020).
Regarding macrophage differentiation, L. plantarum APsulloc 331261 EVs induced monocyte-to-macrophage transition and polarization to M2b in human THP-1 cells. M2b, a subtype of M2 macrophages, has attracted increasing attention due to its strong immunoregulatory and anti-inflammatory effects (Wang et al., 2019). Probiotic bacteria are reported to have a beneficial effect on the host immune status through their ability to modulate macrophage polarization. Some probiotic strains are reported to activate macrophages to the M1 phenotype to kill intracellular pathogens, while other probiotics can induce M2 macrophages to exert an anti-inflammatory effect. Similarly, another strain of the same species (L. plantarum CLP-0611) ameliorated colitis in mice by polarizing M1 to M2-like mouse peritoneal macrophages (Jang et al., 2014).
In line with the anti-inflammatory effects of bacterial EVs, L. paracasei and L. reuteri BBC3 EVs increased mRNA levels of TGF-β in a model of LPS-induced inflammation in human intestinal epithelial cells (HT-29) and in chicken jejunum tissue (Choi et al., 2020; Hu et al., 2020). TGF-β plays a critical role in the development of Treg cells (Zhao et al., 2017). At the same time, incubation of B. bifidum LMG13195 and L. rhamnosus JB-1 EVs with human DCs induced differentiation to Treg cells and increased IL-10 levels (López et al., 2012), and L. rhamnosus EVs increased the number of Treg cells in Peyer's patches from mice (Al-Nedawi et al., 2015). While in some instances Treg cells appear to limit the efficiency of antiviral protective immunity, in other cases they reduce the level of tissue damage caused by a virus infection (Veiga-Parga et al., 2013).
Regarding adaptive immunity, vaccination of mice with engineered EVs from the probiotic strain E. coli Nissle 1917 increased the levels of IgG against a recombinant antigen to levels comparable to the "gold standard" adjuvant (alum) (Rosenthal et al., 2014). This strong adjuvant capability of EVs from probiotic strains provides evidence that engineered EVs could be a useful vaccine platform in humans. On the other hand, it is interesting to note that L. johnsonii N6.2 EVs are recognized by IgA and IgG from the plasma of individuals who had consumed the probiotic. In particular, the increase of IgA occurs as a result of a specific response to the EV components Sdp_SH3b2 and Sdp_SH3b6 (Harrison et al., 2021). Although the function of bacterial SH3b domains is not completely known, they are proposed to be cell wall binding domains in prokaryotes. In a previous work, the authors had shown that L. johnsonii N6.2 increased circulating levels of IgA (Marcial et al., 2017). Moreover, it has been demonstrated that L. sakei EVs enhanced IgA production by murine Peyer's patch cells (Yamasaki-Yashiki et al., 2019). A similar study found that commensal bacteria increase serum levels of IgA, providing a protective effect against polymicrobial sepsis (Wilmore et al., 2018). Therefore, serum IgA concentrations depend on the interaction with the gut microbiota, and these effects could be mediated, at least in part, by EVs.
It is interesting to note that L. plantarum KCTC 11401BP EVs decreased IL-6 levels, protected the viability of human epidermal keratinocytes (HaCaT) incubated with S. aureus EVs, and reduced skin inflammation in S. aureus EV-induced atopic dermatitis in mice (Kim et al., 2018). Moreover, L. plantarum EVs increased IL-10 and granulocyte-macrophage colony-stimulating factor (GM-CSF) levels in ex vivo human skin cultures. These findings suggest that oral administration of bacteria could have a preventive effect on skin inflammation and that these effects could be mediated by EVs. As mentioned below in Section "Uptake and Transport," L. rhamnosus JB-1 EVs appeared in blood after oral consumption, and consequently the presence of EVs in the bloodstream could in part explain the benefit of probiotics in extraintestinal tissues and organs (Stentz et al., 2018; Champagne-Jorgensen et al., 2021a).
Composition
Throughout the years, it has been shown that the supernatant from probiotic bacteria exerts beneficial effects in both in vitro and in vivo models (De Marco et al., 2018; Mantziari et al., 2020). For instance, the culture supernatant from L. rhamnosus GG induces resistance to Escherichia coli K1 infection by enhancing intestinal defense in neonatal rats (He et al., 2017). In recent years, with the discovery of EVs from probiotics, we can speculate that at least part of these beneficial effects could be mediated by EV components.
As far as we know, EVs differ from the parent bacterial cell in their metabolite, nucleic acid and protein content (Briaud and Carroll, 2020). The relative abundance of certain components suggests not only a possible sorting mechanism to package EV cargo, but also a special biological role for EVs (Kim et al., 2018; Huang et al., 2021). For example, EVs from L. rhamnosus GG contain high levels of tryptophan metabolites that lead to improved barrier function (Gu et al., 2021).
In the last few years, "omics" approaches, such as proteomics, transcriptomics and metabolomics, have enabled a comprehensive characterization of probiotics and their EVs, allowing us to gain a deeper understanding of their mechanisms of action (Cunningham et al., 2021). Proteomic analyses showed that EVs from the Lacticaseibacillus genus (including the L. casei and L. rhamnosus species) contain p40 and p75, two proteins associated with probiotic effects (Domínguez Rubio et al., 2017; Dean et al., 2019; Gu et al., 2021). In particular, p40, when administered in early life, increased TGF-β levels in mice and consequently prevented intestinal inflammation in adulthood (Gu et al., 2021). These proteins, p40 and p75, are able to induce the phosphorylation of the epidermal growth factor receptor (EGFR), and thus have anti-apoptotic effects, as demonstrated in intestinal epithelial cells. For p40, this effect was also observed in a murine model of colitis (Yan et al., 2011). EGFR activation can also be triggered by L. casei EVs, which expose p40 and p75 at the surface (Bäuerl et al., 2020). Intriguingly, EVs from L. rhamnosus GG were shown to have apoptotic effects in hepatic cancer cells via the intrinsic pathway of apoptosis (Behzadi et al., 2017). Therefore, apoptotic effects seem to depend on the dose of EVs and on the model used. On the other hand, p40 and p75 were able to prevent the disruption of tight junctions by protein kinase C (PKC)-dependent mechanisms in Caco-2 cell monolayers (Seth et al., 2008). Anti-apoptotic effects and protection of tight junctions in intestinal epithelial cells are related to an enhancement of intestinal epithelial integrity, a key factor in the maintenance of barrier function, the first line of defense. In addition, p40 was proven to increase IgA levels. As mentioned in Section "Innate and Adaptive Immunity," IgA further contributes to the protection of the host against infections (Ho et al., 2016; Wang and Jeffery, 2016).
It has been shown that EVs from probiotics contain proteins that could mediate pathogen inhibition, and in this way could possibly compete with pathogens for colonization of the intestine (Domínguez Rubio et al., 2017; Bajic et al., 2020; Bäuerl et al., 2020; Nishiyama et al., 2020). Proteomic analyses of EVs from three different lactobacilli strains showed that the protein composition of EVs can be very different among species (Dean et al., 2019). Interestingly, antimicrobial bacteriocins are enriched in EVs from L. acidophilus ATCC 53544. These EVs can fuse with other bacteria and thus may constitute a useful platform for the delivery of antimicrobial compounds (Dean et al., 2020). On the other hand, it would be interesting to investigate the occurrence of moonlighting proteins in EVs. Moonlighting proteins are proteins that have different functions according to their cellular location (Wang and Jeffery, 2016; Jeffery, 2018). For example, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), a well-known cytoplasmic metabolic protein, is exposed at the surface of the bacterial cell, where it performs adhesion functions. Further analyses are necessary to confirm the localization of these proteins within EVs to better understand their multiple functions.
Indeed, EV composition is relevant to understanding their biological function, even in the context of infections. EVs from L. crispatus BC3 and L. gasseri BC12, but not EVs from other strains, were capable of protecting vaginal tissues from HIV-1 infection ex vivo, suggesting that virus inhibition was due to the presence of specific EV components (Ñahui Palomino et al., 2019). Regarding the immunomodulatory effects of EVs from probiotics, EVs from Propionibacterium freudenreichii contain surface-layer protein B (SlpB), which effectively mitigated NF-κB activation (Rodovalho et al., 2020).
Lipoteichoic acid (LTA) has been found on the surface of EVs from L. gasseri JCM 1131, L. casei BL23 and L. rhamnosus JB-1 (Shiraishi et al., 2018; Champagne-Jorgensen et al., 2021b). LTA is a ligand for TLR2 in a heterodimer with TLR6, and it seems to induce immune tolerance in intestinal epithelial cells (Lebeer et al., 2010). In agreement with this, EVs from L. rhamnosus JB-1 expose LTA, which was responsible for TLR2 activation and the increase of IL-10 production by bone marrow-derived DCs (Champagne-Jorgensen et al., 2021b). LTA from probiotics could play a role in attenuating infections. In this regard, it has been shown that L. plantarum LTA inhibits virus-induced inflammatory responses in porcine intestinal epithelial cells and reduced Enterococcus faecalis biofilm formation in vitro (Kim et al., 2017). As mentioned before, peptidoglycan contained in EVs from Gram (+) and Gram (−) probiotics is also an important factor in the enhancement of innate immunity and the maintenance of intestinal homeostasis (Cañas et al., 2018; Morishita et al., 2021). In fact, EVs from Bifidobacterium longum, Clostridium butyricum, and L. plantarum WCFS1 have been proposed as a novel immunotherapy formulation that would be advantageous over bacterial lysates due to the protection of bioactives from degradation within EVs (Morishita et al., 2021).
As mentioned above, EVs from the Gram (−) probiotic strain E. coli Nissle 1917 were shown to have a strong adjuvant capability. The authors ascribed this result to the LPS, protein and glycosyl composition (Rosenthal et al., 2014). The presence of LPS and other MAMPs, such as flagellin and mannose, may be responsible for the strong immune response when applying these EVs as vaccine platforms.
Previous work has established that bacterial EVs contain DNA and RNA (Koeppen et al., 2016; Bitto et al., 2017; Li and Stanton, 2021). Regarding EVs from probiotics, little is known about their nucleic acid cargo. Even though DNA and RNA were found in EVs from L. reuteri BBC3 and L. casei BL23 (Domínguez Rubio et al., 2017; Hu et al., 2021), the characterization of nucleic acids from probiotics remains to be studied. Small RNA contained in EVs from probiotics might possibly regulate gene expression in host cells, as is the case for EVs from pathogenic bacteria, and this interaction could have implications for preventing and treating infections (Lee, 2019; Munhoz da Rocha et al., 2020). Extracellular vesicles from probiotics have been shown to contain phage nucleic acids (Domínguez Rubio et al., 2017; Champagne-Jorgensen et al., 2021b; Gu et al., 2021) and phage proteins (Domínguez Rubio et al., 2017; Gu et al., 2021). EVs can even transmit phage receptors to phage-resistant bacteria, which in turn become phage-sensitive (Tzipilevich et al., 2017). Both phage nucleic acid and phage-receptor transmission would lead to a broadened phage host range, with potential applications in the treatment of infections.
Uptake and Transport
The communication between bacteria and the host could in part occur through bacterial EVs and other soluble factors (postbiotics). EVs are able to transport diverse bioactive molecules to host cells and trigger different effects, such as the modulation of immune responses. It is generally accepted that, due to their nanosize, bacterial EVs can overcome epithelial barriers and migrate long distances in the human body (Macion et al., 2021). In fact, bacterial EVs have been demonstrated to enter host cells by several routes, including clathrin-, caveolin- or lipid raft-mediated endocytosis, and membrane fusion (O'Donoghue and Krachler, 2016). Even though much research in the last decades has focused on the uptake and transport of EVs released by pathogenic bacteria (Bielaszewska et al., 2017; Bitto et al., 2017), few researchers have addressed these issues for EVs released by non-pathogenic bacteria. However, in recent years there has been an increase in research aiming to understand how EVs from probiotics are internalized by host cells or, even more, transported through the intestinal barrier and delivered to different tissues and organs.
Before being taken up by intestinal cells, EVs must also diffuse through the mucus layer. In this regard, EVs from E. coli Nissle 1917 were able to diffuse through the mucus layer in the mucin-producing HT29-MTX cell line. Although there is no direct evidence of EV diffusion through the mucus layer in vivo, this event can be assumed from the fact that EVs can reach the bloodstream after oral administration (Champagne-Jorgensen et al., 2021a).
Extracellular vesicles from probiotics were proven to be internalized by intestinal epithelial cells in several studies (Bajic et al., 2020; Domínguez Rubio et al., 2020; Champagne-Jorgensen et al., 2021b). Although there are several routes of entry for EVs from pathogens into epithelial cells, clathrin-mediated endocytosis has been the most widely reported route among EVs from probiotics so far. Inhibitors of clathrin-mediated endocytosis, such as chlorpromazine and the dynamin inhibitor dynasore, blocked the uptake of EVs by intestinal cells cultivated in vitro (Bajic et al., 2020; Champagne-Jorgensen et al., 2021b). Additionally, EVs from L. rhamnosus JB-1 were shown to be internalized by intestinal epithelial cells in an in vivo model within 2 h after oral consumption (Champagne-Jorgensen et al., 2021a). It is likely that EVs are internalized simultaneously by different endocytic pathways depending on their size (El-Sayed and Harashima, 2013).
With respect to intracellular trafficking, colocalization analyses showed that EVs from E. coli Nissle 1917 are present in early endosomes and, once inside the cell, EV peptidoglycan interacts with NOD1, which leads to the activation of the immune system (Cañas et al., 2018; Fernández-García et al., 2021). Moreover, EVs can also fuse with lysosomes. On the other hand, it was demonstrated that EVs from a pathogenic E. coli strain can deliver toxins to other subcellular compartments, including the cytosol, nucleus and mitochondria (Bielaszewska et al., 2017). Bacterial EVs can deliver DNA or RNA to host cells (Bitto et al., 2017; Lécrivain and Beckmann, 2020), and there is evidence that the nucleic acid cargo of EVs from pathogens may enter the nucleus of eukaryotic host cells (Blenkiron et al., 2016; Bitto et al., 2017). Furthermore, EVs from pathogens contain small RNA that might regulate gene expression in host cells (Koeppen et al., 2016). Although little studied to date, these mechanisms may also be applicable to EVs from probiotics.
While a portion of EVs may act in intestinal cells, another portion is possibly transported through the intestinal epithelium, either by paracellular or transcellular transport, to finally reach extraintestinal tissues and organs (Jang et al., 2015; Stentz et al., 2018; Jones et al., 2020). Park et al. (2017) revealed that EVs from intestinal bacteria reach the bloodstream in a mouse model, where blood EV diversity was directly linked to intestinal microbiota diversity. Regarding probiotics, a proportion of CFSE-labeled EVs from Bacillus subtilis were transported through a monolayer of polarized epithelial cells in a transwell system. Transcellular transport resulted in the detection of intact EVs in the lower chamber within 60-120 min (Domínguez Rubio et al., 2020). Alternatively, EVs could possibly be transported through the intestinal epithelium via DCs, goblet cells or M-cells. On the other hand, microbiota EV transport through the epithelium can occur by paracellular transport when intestinal epithelial barrier integrity is compromised (Chronopoulos and Kalluri, 2020).
The transport of EVs across the intestinal epithelium implies that EVs could reach the lamina propria, where they are able to interact with immune cells. EV uptake by immune cells has been described in a few studies. In vivo studies showed that EVs from L. rhamnosus JB-1 were taken up by DCs in the lamina propria (Champagne-Jorgensen et al., 2021b). This internalization was thought to occur via clathrin-mediated endocytosis, as it was prevented by dynasore, even though phagocytosis cannot be ruled out, since dynamin is required for this process. In another study, probiotic-derived EVs were taken up by mouse macrophage-like cells and DCs via clathrin-mediated endocytosis and macropinocytosis, as demonstrated in the presence of endocytosis inhibitors (Morishita et al., 2021).
Different studies support EV distribution and delivery to distal body sites. For example, EVs from L. rhamnosus JB-1 were present in the bloodstream of mice fed with the bacteria (Champagne-Jorgensen et al., 2021a), as demonstrated by the detection of DNA from prophages in EVs. Moreover, oral administration of EVs from L. plantarum reduced skin inflammation in mice with S. aureus EV-induced atopic dermatitis (Kim et al., 2018). In humans, microbiota-derived EVs were able to reach the urine. In fact, urinary EVs have been proposed as a useful method for assessing microbiota profiles (Li et al., 2017). In another study, intraperitoneally injected EVs from L. plantarum increased brain-derived neurotrophic factor (BDNF) mRNA levels in the hippocampus of mice and produced antidepressant effects (Choi et al., 2019). This increase in gene expression in the brain suggests that EVs might possibly cross the blood-brain barrier. Indeed, EV transport could be one of the reasons why probiotic consumption exerts not only local but also systemic effects, since it is likely that EVs are released by probiotics in the GIT after the consumption of these bacteria.
Other Biological Effects
As can be inferred from probiotic beneficial effects, modulation of symptoms is one important factor that explains the clinical efficacy of probiotics in the treatment of infectious diseases. It is often observed that EVs mimic the effect of the parent bacteria. For example, the clinical efficacy of L. reuteri DSM 17938 has been demonstrated for the treatment of colic, diarrhea and constipation (Coccorullo et al., 2010; Chau et al., 2015; Dinleyici et al., 2015). Accordingly, EVs from this strain could reproduce the beneficial effects of the bacteria on gut motility in jejunum and colon explants from mice (West et al., 2020). Therefore, EV release could be one mechanism whereby probiotics mediate their beneficial effects.
In relation to stress and immunity, chronic stress leads to constantly high corticosteroid levels in blood, impaired immune function, and an increased susceptibility to infections and other health disorders (Bae et al., 2019). At the same time, exposure to stress can cause a decrease in the expression of BDNF in humans, a molecule with antidepressant-like effects (Yang et al., 2015). Some probiotics have been shown to have antidepressant effects in patients and animal models, and even though the gut-brain axis is involved in this effect, the mechanisms of action are not completely understood (Yong et al., 2020). EVs might come into play here. In this regard, EVs from L. plantarum KCTC 11401BP counteracted the decreased levels of BDNF mRNA in the hippocampus of corticosteroid-treated mice and also blocked the decrease in BDNF mRNA levels in corticosteroid post-treated mice, which was further evidenced by antidepressant-like behavior in the mice (Choi et al., 2019). If the antidepressant effects of EVs are proven, they could possibly participate in preventing and/or treating infections, given that immune function may be impaired in patients with depression (Andersson et al., 2016).
Potential Use of Extracellular Vesicles
The use of EVs as delivery systems could provide several advantages, including their nanosize, their biocompatibility in comparison to synthetic drug delivery systems (low toxicity), their ability to cross biological barriers, their ability to protect their cargo from unfavorable environmental conditions (pH, enzymes, oxidative stress), and the possibility of engineering parent cells to modify EV composition (Figure 3). There is still a huge gap between basic research and clinical trials as far as bacterial EVs are concerned.
Postbiotics are a novel clinical strategy to consider for the treatment of infections in the absence of cells. For example, in diabetic foot ulcers, the skin barrier is impaired and thus administration of live bacteria is not a safe approach (Nam et al., 2021). This is where administration of probiotic-derived EVs could come into play, reproducing probiotic beneficial effects like pathogen inhibition and immunomodulation.
To prevent infectious diseases, only Gram (−) pathogenic bacterial EVs have been used as vaccines up to now, proving safe and efficacious on several occasions, while others are under evaluation (Behrens et al., 2021). For example, there are clinically available EV-based vaccines against Neisseria meningitidis, a causative agent of meningitis. The development of EV-based vaccines is a promising field for the prevention of infections. However, the isolation of EVs from several pathogenic microorganisms for vaccine design may have limitations. For example, many pathogens like bacteria, fungi and parasites cannot be cultured in the laboratory (Li et al., 2014; Roig et al., 2018). In the case of viruses, which do not produce EVs, cell cultures are necessary for the design of EV-based vaccines (Shehata et al., 2019; Yang et al., 2021). In this line, vaccination with engineered EVs from probiotic bacteria could be a useful platform to express pathogen antigens to be used as vaccines without toxicity in humans. To our knowledge, E. coli Nissle 1917 was the only strain assessed for this application in an animal model (Rosenthal et al., 2014). Further studies comparing Gram (−) and Gram (+) probiotic EVs would be necessary to elucidate whether the presence of certain components like LPS or LTA on the surface is important for the enhancement of the immune response. It is important to highlight that different chemical compositions of LPS and LTA induce differential inflammatory responses, and this must be taken into account to enhance EV immunogenicity (Migale et al., 2015; Jastrząb et al., 2021).

FIGURE 3 | Biological advantages and potential use of probiotic-derived EVs.
On the other hand, to treat infectious diseases, genetic engineering could be exploited for pathogen inhibition by increasing the expression of antimicrobial peptides and their further encapsulation in EVs (Dean et al., 2020). Bacteriocins are potent small antimicrobial peptides synthesized by certain bacteria that have been proposed as alternatives to traditional antibiotics (Gradisteanu Pircalabioru et al., 2021). Bacteriocins within EVs turn them into potential candidates against infections, including those caused by antimicrobial-resistant pathogens. According to the WHO, antimicrobial resistance continues to be a global health and development threat (World Health Organization, 2021). Indeed, an important advantage of probiotic administration is the reduction in the use of strong anti-inflammatory agents and/or antibiotics, which can be unfavorable in the long term (Kasatpibal et al., 2017; Guo and Leung, 2020; Raheem et al., 2021). In this regard, the indiscriminate use of antimicrobials leads not only to the development of antimicrobial resistance in pathogens, but also to the loss of our microbiota. The latter increases susceptibility to infections such as vaginal candidiasis (Xu et al., 2008). Administration of probiotic EVs could thus not only treat and/or prevent infections, but also decrease antimicrobial use.
By taking advantage of EV versatility, other genetic engineering approaches can be applied to modify EV cargo or surface for the delivery of drugs to target cells. Genetic engineering enables the overexpression of proteins or the synthesis of small RNA that could silence target host genes (Fantappiè et al., 2014; Koeppen et al., 2016). EV cargo could be protected from harsh environmental conditions, and additionally, surface molecules could direct EVs to target host cells. This strategy could be relevant for the delivery of two or more synergistic drugs and/or the delivery of compounds that have difficulties in crossing the cell membrane.
Missing Knowledge and Challenges
As documented in several studies, probiotic-derived EVs could be involved in the prevention and treatment of infectious diseases. However, the protective capacity of probiotic bacterial EVs against pathogen infections has only been studied against one virus (HIV-1) and a few bacteria (S. aureus, S. typhimurium and E. coli) (Mata Forsberg et al., 2019; Ñahui Palomino et al., 2019; Hu et al., 2020). Therefore, there is still no information on their beneficial effects against fungal and parasitic infections.
To date, there are many unknowns regarding the use of probiotic EVs as pharmaceutical agents. Current challenges are the lack of standardized and cost-effective methods for EV isolation, purification, characterization and upscale processing (Gurunathan et al., 2021). Unlike human EV markers, specific bacterial EV markers remain mostly unidentified (Ñahui Palomino et al., 2021). Identifying these molecular markers could not only optimize current characterization techniques, but also improve our understanding of EV physiology and possible future biomedical applications. For example, the probiotic B. subtilis achieved S. aureus intestinal decolonization by inhibiting the pathogen's quorum sensing, thereby producing general decolonization (including the nose) (Piewngam and Otto, 2020). It would be interesting to study whether probiotic EV components can mediate the inhibition of quorum sensing among pathogenic bacteria. Moreover, advances in the understanding of the role of EVs in inter-kingdom communication will almost certainly provide valuable insights into the development of novel therapies against pathogens.
Regarding the use of probiotic bacteria to create engineered EVs for vaccination purposes, the expression of antigenic proteins from non-culturable eukaryotic pathogens (fungi and parasites) has some limitations related to the limited ability of bacteria to make post-translational modifications. In this case, expression of antigens in eukaryotic probiotic organisms like yeasts would be a better and lower-cost option.
One alternative to administering isolated EVs, which remains to be evaluated, is to administer functional food with probiotics as a platform for EV delivery. As far as we know, EVs are constantly secreted by metabolically active bacteria (Brown et al., 2015; Liu et al., 2018). In a bacterial culture, EV release can vary depending on the growth conditions, including pH, oxygen presence, and agitation rate (Müller et al., 2021). For example, at pH 5, L. plantarum released fewer EVs than at pH 7. On the other hand, there is recent evidence that L. rhamnosus JB-1 EVs can reach the bloodstream of mice after oral administration of the probiotic (Champagne-Jorgensen et al., 2021a). This outcome strongly suggests in situ EV release in the GIT. Whole cells would withstand the conditions during storage and transit through the GIT better than EVs. In the case of spore-producing probiotics (e.g., B. subtilis), spore administration would be a cost-effective option. In this way, problems concerning EV stability would be avoided. Another strategy to consider is the microencapsulation of probiotics contained in food matrices to improve their viability during storage and in the GIT (Qi et al., 2020). Besides, if the encapsulating agent is mucoadhesive, a longer residence time in the GIT may allow a sustained release of EVs over time (Yao et al., 2020). Another microparticle-based delivery system could be a particle with EVs coupled on the surface to achieve high EV concentrations, maximizing EV effects, as demonstrated in in vitro models (Kuhn et al., 2020).
CONCLUSION
The new era of postbiotics has brought a new point of view on the beneficial effects of probiotics. Probiotic-derived EVs may mediate, at least in part, the beneficial effects of probiotics against infectious diseases via inhibition of pathogens, enhancement of epithelial barrier function and modulation of the immune system. Remarkably, EVs can reach the bloodstream and consequently be delivered to extraintestinal organs, where probiotics have been shown to have beneficial effects. Future studies should focus on the characterization of EV active components and their interaction with the host. Novel EV-based technologies are promising for the design of therapies and/or vaccines against infections. Moreover, probiotics contained in food matrices could be used as EV-releasing devices in the GIT, with potential applications in the functional food industry.
AUTHOR CONTRIBUTIONS
APDR and CLD conceived the idea, collected the literature data, created the tables and figures, and wrote the manuscript.
MP and OP reviewed and approved the final version of the manuscript. All authors contributed to the article and approved the submitted version.
Magnetic phases of spin-3/2 fermions on a spatially anisotropic square lattice
We study the magnetic phase diagram of spin-3/2 fermions in a spatially anisotropic square optical lattice at quarter filling (corresponding to one particle per lattice site). In the limit of large on-site repulsion, the system can be mapped to the so-called Sp(N) Heisenberg spin model with N=4. We analyze the Sp(N) spin model with the help of the large-N field-theoretical approach and show that the effective theory corresponds to the Sp(N) extension of the CP^{N-1} model, with the Lorentz invariance generically broken. We obtain the renormalization flow of the model couplings and show that although the Sp(N) terms are seemingly irrelevant, their presence leads to a renormalization of the CP^{N-1} part of the action, driving a phase transition. We further consider the influence of an external magnetic field (the quadratic Zeeman effect) and present a qualitative analysis of the ground state phase diagram.
I. INTRODUCTION
The extraordinary controllability of ultracold gases allows highly accurate modeling and study of problems originating in condensed matter physics. Frustrated magnetic systems occupy an important position on the list of intriguing problems that could be studied in multicomponent ultracold gases. Recently, multicomponent ultracold Fermi gases have attracted much attention [1,2], motivated by the growing availability of hyperfine-degenerate fermionic atoms, such as ^6Li [3-5], ^40K [6], ^135Ba and ^137Ba [7], and ^173Yb [8]. Realization of unconventional phases of SU(N) internally frustrated antiferromagnets [9] has recently been suggested [10,11] in alkaline-earth atoms with nuclear spin as large as 9/2 in ^87Sr. Among multicomponent ultracold gases, spin-3/2 alkaline fermions stand out by their rich physics, characterized by an enlarged Sp(4) symmetry, which is naturally present in the system without fine-tuning of any parameters [12]. By tuning the ratio of the scattering lengths in the two allowed spin-0 and spin-2 channels, the even larger SU(4) symmetry may be achieved [12-15]. Experiments with ultracold atoms are usually done in the presence of magnetic fields. For atoms with hyperfine spins F ≥ 1, spin-changing collisions redistribute the populations of the components with different spin projection F_z while keeping the total magnetization M = Σ_j F^z_j fixed. Therefore, the usual linear Zeeman effect induced by an external magnetic field does not play any role for a state with a fixed initially prepared M, and the main influence of an external magnetic field (apart from the change of scattering lengths due to the Feshbach resonance phenomenon) is contained in the quadratic Zeeman effect (QZE). The quadratic Zeeman field q couples to Σ_j (F^z_j)^2 and thus introduces a difference in chemical potentials for components with different |F_z|. A peculiar property of spin-3/2 fermions is the fact that even in the presence of the quadratic Zeeman field, the high Sp(4) symmetry is lowered only to SU(2) × SU(2) and thus remains quite high [16].

In this work, we study the magnetic phase diagram of spin-3/2 fermions at quarter filling, in the limit of strong on-site repulsion, on an anisotropic square lattice with hopping amplitudes in the two spatial directions differing by a factor λ ∈ [0, 1], as depicted in Fig. 1. We construct the effective field theory describing the low-energy properties of the system, which has the form of a Sp(N) extension of the CP^{N−1} model, with generically broken Lorentz invariance. For this field theory, the analysis of the one-loop renormalization group equations shows that the Sp(N) terms are dangerously irrelevant: their presence leads to a considerable renormalization of the CP^{N−1} part of the action. As a result, by changing the ratio of the scattering lengths or the lattice anisotropy parameter λ one can drive a phase transition between the long-range-ordered Néel state and the valence-bond-solid (VBS) state.
We also include in our consideration the quadratic Zeeman coupling and study the evolution of the ground state under the QZE. Since the QZE preserves the SU(2) symmetry [16], the ground state at large values of the quadratic Zeeman field q corresponds to the long-range ordered (Néel) phase of the isotropic spin-1/2 Heisenberg antiferromagnet (HAFM), for any nonzero value of the lattice anisotropy parameter λ ≠ 0. We show that, depending on the value of the anisotropy λ, when the field q is decreased, this state either adiabatically evolves into the Néel phase of 4-component fermions or undergoes a phase transition into the VBS state.
The structure of the paper is as follows: in Sect. II we present the derivation of the low-energy effective field theory for a system of spin-3/2 fermions at quarter filling in the regime of a Mott insulator (in other words, in the regime of the Sp(4) Heisenberg model). In Sect. III we analyze the renormalization group flow of the derived model and sketch the phase diagram of the system in dimensions one and two. In Sect. IV we study the effect of an external quadratic Zeeman field; finally, Sect. V contains the summary and discussion of the results.
II. EFFECTIVE LOW-ENERGY FIELD THEORY FOR THE Sp(N) HEISENBERG MODEL
Consider a system of spin-3/2 fermions on a two-dimensional anisotropic square lattice. In the s-wave scattering approximation, this system can be described by the following Hamiltonian [12]:

$$\mathcal{H} = -\sum_{\langle ij\rangle,\sigma} t_{ij}\left(c^{\dagger}_{\sigma,i} c_{\sigma,j} + \mathrm{h.c.}\right) + \sum_{i}\Big( U_0\, P^{0\,\dagger}_{0,i} P^{0}_{0,i} + U_2 \sum_{m=-2}^{2} P^{2\,\dagger}_{m,i} P^{2}_{m,i} \Big), \tag{1}$$

where c_{σ,i} are the spin-3/2 fermionic operators at the lattice site i, t_{ij} are the effective hopping amplitudes between two neighboring sites, $P^{F}_{m,i} = \sum_{\sigma\sigma'} \langle F m | \tfrac{3}{2}\sigma, \tfrac{3}{2}\sigma' \rangle\, c_{\sigma,i} c_{\sigma',i}$ are the operators describing an on-site pair with total spin F, and the positive interaction constants U_0, U_2 are proportional to the scattering lengths in the F = 0 and F = 2 channels, respectively. The hopping is assumed to be generally anisotropic in the two spatial directions, i.e., t_{ij} = t along the horizontal bonds and t_{ij} = λt along the vertical bonds (see Fig. 1). Although our main interest will be in the behavior of the two-dimensional model, we will also make a few comments about the one-dimensional case, which formally corresponds to λ = 0. We will also be interested in the effect of an external magnetic field. Since the total magnetization in cold-atom experiments has a very long relaxation time, the primary effect of the external field is given by the quadratic Zeeman term

$$\mathcal{H}_{\rm QZE} = q \sum_{j} \left(F^{z}_{j}\right)^{2}. \tag{3}$$

At quarter filling (one particle per site), and in the limit of strong on-site repulsion t_{ij} ≪ U_0, U_2, the charge degrees of freedom are strongly gapped and the system can be approximately described by an effective spin Hamiltonian, Eq. (4).

FIG. 1. Square lattice with the anisotropic hopping considered in this paper. The hopping amplitudes are t and λt along the horizontal and vertical bonds, respectively.
In the second order of perturbation theory in t, the exchange constants J_{1,2} are given by second-order combinations of t²/U_0 and t²/U_2. The operators Γ^a, Γ^{ab} can be expressed in terms of four Schwinger bosons b_α, 1 ≤ α ≤ 4, satisfying the constraint Σ_α b†_α b_α = 1 at each site: effectively, one can simply replace the operators c_{σ,i} by b_{α,i} in the definition of Γ in (4). A convenient choice of the Γ matrices is

$$\Gamma^{a} = \sigma^{a} \otimes \sigma^{z}, \quad a = 1, 2, 3, \qquad \Gamma^{4} = \mathbb{1} \otimes \sigma^{x},$$

where σ^a are the Pauli matrices. Doing so, one arrives at the Hamiltonian of the form (6) [17]. The Sp(4) group may be viewed as the group of unitary 4 × 4 matrices U that satisfy the condition $U^{T} \mathcal{J} U = \mathcal{J}$. The couplings J, J′ in (6) are expressed through J_{1,2} and are assumed to be positive; in terms of the atomic spin-3/2 system this corresponds to an assumption on the interaction constants U_{0,2}. The Hamiltonian in the form (6) can be easily generalized to the case of an even number N of bosonic flavors b_α, α ∈ [1, N], and the local hardcore constraint for the Schwinger bosons is generalized to Σ_α b†_α b_α = n_c, with the number n_c playing the role of the "spin magnitude" [18]. The Sp(N) symmetry of the Hamiltonian is enlarged to SU(N) at the point J = 0, where the two-site Hamiltonian becomes a permutation operator of two N-component objects. Since the lattice is bipartite, the enlargement of symmetry to SU(N) happens also at the point J′ = 0 [12-15,17]. The latter point, J′ = 0, corresponds to an SU(N) antiferromagnet where spins transform according to the fundamental representation of SU(N) on sublattice A and according to the conjugate representation on sublattice B. In the following we will refer to this point as the staggered SU(N) antiferromagnetic point. The other SU(N) point, J = 0, where spins are in the fundamental representation of SU(N) on each site, corresponds to the exactly solvable Uimin-Lai-Sutherland model in one dimension [19], and we will call it the uniform SU(N) antiferromagnetic point.
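To make the Γ-matrix algebra concrete, the sketch below builds the matrices quoted above as Kronecker products of Pauli matrices and verifies the Clifford-algebra relation {Γ^a, Γ^b} = 2δ_{ab}. The fifth matrix, Γ^5 = 1 ⊗ σ^y, used here to complete the standard set of five anticommuting 4 × 4 matrices, is our assumption and is not given in the truncated text above.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma matrices as given in the text: Gamma^a = sigma^a (x) sigma^z (a = 1, 2, 3),
# Gamma^4 = 1 (x) sigma^x; Gamma^5 = 1 (x) sigma^y is an assumed completion.
gammas = [np.kron(s, sz) for s in (sx, sy, sz)] + [np.kron(s0, sx), np.kron(s0, sy)]

# Verify the Clifford algebra {Gamma^a, Gamma^b} = 2 * delta_ab * identity
for a, Ga in enumerate(gammas):
    for b, Gb in enumerate(gammas):
        anti = Ga @ Gb + Gb @ Ga
        expected = 2 * np.eye(4) if a == b else np.zeros((4, 4))
        assert np.allclose(anti, expected)
print("All five 4x4 Gamma matrices satisfy the Clifford algebra.")
```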
Our strategy will be to use the staggered SU(N) antiferromagnetic point J′ = 0 (i.e., U_2 → ∞) as the starting point to construct the effective low-energy field theory. First of all, we make a unitary transformation [15,17] on one sublattice, which effectively interchanges the operators Q and K in the Hamiltonian (6). Further, we use the usual coherent-state path-integral formalism, passing from the bosonic operators b_{α,n} to the corresponding c-number lattice variables b_n. It is easy to see that at the mean-field level both terms in the Hamiltonian (6) are simultaneously minimized for a uniform distribution of b_n, provided that J, J′ > 0. Thus, in the parameter region J, J′ > 0 in (6) one may expect the physics to be rather different from that found in Sp(N) models describing geometrically frustrated systems [20]. In terms of b_n, the Euclidean action on the lattice takes the form $\mathcal{A}_{\rm lat} = \int d\tau\, \mathcal{L}_{\rm lat}$, with the Lagrangian given by Eq. (10). In a standard manner, we then split the field b into smooth and staggered components z_n, ψ_n, where the staggered part enters with the alternating sign η_n = ±1 on the A and B sublattices, respectively, and the corresponding constraints on z_n and ψ_n are implied. One can expect that the magnitude of the staggered component ψ, which corresponds to ferromagnetic fluctuations, will be much smaller than that of z.
The unitary transformation (9) is a necessary step: for positive J′ one cannot start from the uniform SU(N) antiferromagnetic point J = 0, since no reasonable choice of smooth fields is possible there. It has to be remarked, however, that our choice of smooth fields becomes poor in the vicinity of the point J = 0, which has a much higher degeneracy of the mean-field ground state. One may thus expect that the resulting effective field theory is not reliable close to the uniform SU(N) antiferromagnetic point.
Passing to the continuum and making the gradient expansion, while retaining terms only up to quadratic order in ψ and neglecting its derivatives, one readily obtains the Euclidean action in the form $\mathcal{A} = \mathcal{A}_0 + \mathcal{A}_{\rm int} + \mathcal{A}_{\rm top}$, where $\mathcal{A}_0$ corresponds to J′ = 0. The term $\mathcal{A}_{\rm top}$ contains the topological phase contribution [18,21,22], and the perturbation $\mathcal{A}_{\rm int}$, Eq. (15), is determined by the deviation J′ from the staggered antiferromagnetic SU(4) point; here and throughout the paper the same shorthand notation is used. In the above expressions, d = 1 or 2 is the spatial dimension, the index k = 1, ..., d labels the spatial coordinates, and the lattice constant a and the Planck constant ℏ have been set to unity for convenience. The factor √λ comes from rescaling one of the coordinates to compensate for the anisotropy of the interactions (for d = 1 one has to set effectively λ = 1), and μ_{1,2} are the Lagrange multipliers ensuring the constraints.
Integrating out the staggered component ψ is easily performed (see Appendix A for details), and one arrives at the effective action (17) for the complex unit vector field z. Here Λ is the ultraviolet momentum cutoff, x_0 = 2Jτ√(d − 1 + λ) is the rescaled imaginary-time coordinate, D_μ = ∂_μ − iA_μ is the gauge-covariant derivative, and A_μ = −i(z*·∂_μ z) is the gauge field. It is worth noting that z·D_μ z ≡ z·∂_μ z, since z·z = 0. The bare values of the coupling constants in (17) are given by Eq. (18); in particular, note the different signs of the bare values of the perturbation couplings g̃_1 and g̃_2. The first two terms in the action (17) constitute the familiar action of the CP^{N−1} model [23-27], which has been extensively studied as an effective theory for SU(N) antiferromagnets [18,22].
The third and fourth terms are proportional to J′ and thus represent the perturbation caused by the deviation from the SU(N) staggered antiferromagnetic point. The presence of those terms was noticed by Qi and Xu [17], but they were neglected there since they seem to be irrelevant. In the next section, we will show that those Sp(N) terms are generally only marginally irrelevant and can drive a phase transition.
Here a remark is in order: in the action above, for the 2d case we have effectively removed the lattice anisotropy by rescaling one of the coordinates. Due to the Sp(4) symmetry of the problem, the remaining perturbations that break the 90-degree rotation symmetry of the lattice appear only in higher orders in the field derivatives. Such terms are less relevant than those taken into account in the action (17) and thus will be neglected.
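As a quick consistency check on the gauge structure (a standard property of the CP^{N−1} construction rather than a result specific to this model), under a local phase rotation of z the composite gauge field shifts like an abelian connection, so the covariant derivative transforms homogeneously:

```latex
% Local U(1) redundancy of the CP^{N-1} parametrization (uses z^* \cdot z = 1)
z \to e^{i\theta(x)} z, \qquad
A_\mu = -i\,(z^{*}\!\cdot\partial_\mu z) \to A_\mu + \partial_\mu\theta, \qquad
D_\mu z = (\partial_\mu - i A_\mu)\,z \to e^{i\theta(x)}\, D_\mu z .
```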
The properties of the CP^{N−1} model are well understood: in the absence of the topological term $\mathcal{A}_{\rm top}$ given by (14), it is always disordered in d = 1, and in d = 2 long-range order appears below a certain critical value of the effective coupling [25,26]. This critical value depends on N, and from the numerical work [28-30] it is known that the two-dimensional CP^{N−1} model on a square lattice is disordered for N/n_c ≥ 5.
In the disordered phase the field z acquires a finite mass, and a kinetic term for the gauge field is dynamically generated [26]. It is also well known that the topological term becomes crucially important in the disordered phase [18,21,22]; in particular, it leads to a spontaneous dimerization in d = 1 for odd n_c (except for N = 2, which is special: in that case the system remains gapless and translationally invariant in a wide range of g [31-34]), and in d = 2 the disordered phase gets spontaneously dimerized in different patterns depending on the value of (n_c mod 4). The "disordered" phase thus acquires valence bond solid (VBS) order connected to the broken translational invariance. For the Sp(4) case, parent Hamiltonians with exact ground states of the VBS type have been constructed recently [35]. An effective theory for the Sp(N) Heisenberg model in a form similar to (17) has been obtained by Kataoka et al. [36]. However, our result differs from that of Ref. [36] in one important respect: the last two terms in (17), proportional to the "perturbation" J′, explicitly break the Lorentz invariance, while in the theory of Ref. [36] the Lorentz invariance is retained even in the presence of the "perturbation". A simple classical linear excitation analysis of the initial lattice action (10) yields (N − 1) Goldstone modes ("spin waves") with the velocities v_{1,2} (see Appendix B for details). Our effective theory (17) yields (N − 2) modes with the velocity u_1 = √γ and one mode with the velocity

$$u_2 = u_1\,\frac{1+\rho}{1-\kappa}\,, \qquad \gamma = g_1/g_2\,, \quad \rho = \tilde g_2/g_2\,, \quad \kappa = -\tilde g_1/g_1\,.$$

After substituting the bare values of Eq. (18), this is in perfect agreement with the spin-wave calculation, while the theory of Ref. [36] yields the same velocity v = 1 for all three modes.
For N = 4, the contribution of the quadratic Zeeman field (3) takes the form of a mass term, Eq. (19), which makes two of the four components of z massive, with the bare "mass" value controlled by q. For any finite q, the symmetry of the model is reduced from Sp(4) to SU(2) × SU(2) [16]. If q is large enough, the effective theory reduces to that of a two-component complex field, i.e., to the CP¹ model.
III. ONE-LOOP RENORMALIZATION GROUP ANALYSIS AT ZERO FIELD
Consider first the case when the external field is absent. To understand the role of the Sp(N) terms in the action (17), we have to analyze their behavior under renormalization. Renormalization group (RG) equations for spin liquids described by a Lorentz-invariant low-energy field theory with SU(N) and Sp(N) symmetries have been studied recently [37] by means of the fermionic large-N formulation. Our effective theory (17) does not possess the Lorentz invariance. We write down one-loop RG equations for the model (17), using Polyakov's background field method [38]; the details of the derivation can be found in Appendix C. It is convenient to define the rescaled couplings y, γ, ρ, and κ of Eq. (21), where C_d = πS_d/(2π)^{d+1} and S_d = 2π^{d/2}/Γ(d/2) is the surface area of a d-dimensional sphere. The minus sign has been introduced in κ to compensate for the negative initial value of g̃_1 in (18). The physical meaning of the couplings (21) will be clarified below; their bare values are given in Eq. (22), with γ^{(0)} = 1. The resulting RG equations take the form of Eq. (23), where a dot denotes the derivative d/dl = −Λ(d/dΛ) with respect to the scale variable. For the staggered antiferromagnetic SU(N) point J′ = 0 we have κ^{(0)} = 0, ρ^{(0)} = 0, and the above equations reduce to the single equation for the coupling g = y/(2C_d) of the CP^{N−1} model in d spatial dimensions. For d = 2 this model is ordered (g renormalizes to zero) if the ratio n_c/N is above a certain critical value [18]; equations (23) estimate this critical value, Eq. (24). On the isotropic (λ = 1) square lattice, this yields (n_c/N)_{cr} = (2π²)^{−1/2} ≃ 0.225, which agrees qualitatively with the value of 0.19 obtained by a mean-field large-N solution [39], and with the numerical studies [28-30] suggesting that the system has no Néel order for N/n_c ≥ 5.
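The numbers quoted above are easy to cross-check; the snippet below (which assumes only the values stated in the text) compares the one-loop estimate on the isotropic lattice with the mean-field large-N value and with the threshold implied by the numerical result that the model is disordered for N/n_c ≥ 5:

```python
import math

one_loop = 1.0 / math.sqrt(2.0 * math.pi ** 2)  # (n_c/N)_cr = (2*pi^2)^(-1/2)
mean_field = 0.19                               # large-N mean-field value (Ref. [39])
numerics = 1.0 / 5.0                            # disordered for N/n_c >= 5

print(f"one-loop RG estimate : {one_loop:.3f}")   # ~0.225
print(f"mean-field large-N   : {mean_field:.3f}")
print(f"numerical threshold  : {numerics:.3f}")
```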
For nonzero J′, we have studied equations (23) numerically for different values of the lattice anisotropy λ and the Sp(4) perturbation J′. In the two-dimensional case (d = 2), they exhibit two different characteristic flow patterns. In type I, y flows to zero as l → ∞, while the other couplings flow to constant values. This type of flow corresponds to the Néel-ordered phase, and the Lorentz invariance remains broken: there are two different velocities u_1 = √γ and u_2 = u_1(1 + ρ)/(1 − κ). In type II, y flows to infinity at a certain scale l = l_0 as y ~ 1/(l_0 − l), while γ flows to a constant, and both ρ and κ flow to zero as ρ, κ ~ (l_0 − l)^{1−3/(2N)}. This behavior corresponds to a disordered phase; the Lorentz invariance is restored (the perturbation terms in (17) flow to zero), so in the disordered phase the system is again effectively described by the CP^{N−1} model, albeit with a renormalized velocity u = √γ and the effective coupling g = y/(2C_{d=2}). In this phase, the presence of the topological term induces dimerization [18], so this regime in fact corresponds to a valence bond solid (VBS) state.
It is worth noting that when y flows to infinity in d = 2, it exhibits a two-stage "U-turn" behavior, as shown in Fig. 2: at the initial stage of the flow, up to a certain scale l = l_*, g decreases, and only later does it start growing until it explodes at l = l_0. The scale l_* increases in the vicinity of the phase transition line (see below). This behavior is reminiscent of the double-scale behavior observed in the SO(3) model [40] as well as in the CP^{N−1} model with a massive gauge field [41]. If at J′ = 0 we are in the VBS phase (i.e., n_c/N is below the critical value (24)), then increasing κ^{(0)} ∝ J′ beyond some threshold κ_c leads to a transition to the Néel phase. On the anisotropic (rectangular) lattice, with increasing anisotropy (deviation of λ from 1), the transition point κ_c shifts towards higher values. The resulting phase diagram is shown in Fig. 3. Thus, in two dimensions the Sp(N) perturbation terms in (17) are dangerously irrelevant and can drive the phase transition between the disordered (VBS) and Néel phases.
In one dimension, y always flows to infinity, indicating that the system is dimerized in the entire range 0 ≤ κ < 1, in line with the numerical results [16]. Curiously, in the close vicinity of the uniform SU(N) point κ = 1 (J′ ≫ J), the coupling y again exhibits the "U-turn" behavior described above, and the intermediate scale l_* seems to diverge as J/J′ → 0. This agrees with the fact that the uniform SU(N) antiferromagnet at κ = 1 is gapless in d = 1 (it corresponds to the exactly solvable Uimin-Lai-Sutherland (ULS) model [19]).
With the present approach, we are not able to detect any tendency towards a transition to the VBS phase with increasing J′/J for the case of the isotropic square lattice (the line λ = 1). One has to keep in mind that our construction of smooth fields becomes increasingly inadequate as J → 0; however, one can argue that the theory still remains valid at energy scales below the order of J. Several numerical results, using exact diagonalization on small 2d clusters [42], series expansions [43], and the density matrix renormalization group (DMRG) on a ladder [44], suggest that the uniform SU(4) antiferromagnet (J = 0, λ = 1) is in a VBS phase with plaquette-type dimerization order. At the same time, theoretical studies advocate different scenarios for the uniform SU(N) antiferromagnetic point: in a recent work based on the Majorana fermion representation of spin-orbital operators [45], the existence of a Z_2 spin-orbital liquid state with emergent nodal fermions has been proposed for N = 4; other studies based on Schwinger-boson representations [46,47] and exact diagonalization for the SU(3) case [47] suggest that at this point the ground state has Néel-type N-sublattice order, which may be viewed as order at the wavevector (2π/N, 2π/N). The question about the correct ground state around the point λ = 1, J = 0 is thus still open. Further, our result shown in Fig. 3 indicates that the VBS phase present at small λ has the tendency to shrink with increasing J′/J. This makes it plausible to assume that, even if the point λ = 1, J = 0 is in a VBS state, this phase should be different from the VBS phase at small λ. Another argument in favor of this scenario is the following: consider the point J = 0, λ = 0, which describes uncoupled ULS chains. Each chain is gapless, with zero gap at the wave vectors k = 2πm/N, −N/2 < m ≤ N/2. Switching on a weak interaction λ between the chains may be expected to lead to an immediate ordering at those wave vectors, while switching on a weak J leads to a VBS state.
IV. THE EFFECT OF THE QUADRATIC ZEEMAN FIELD IN THE Sp(4) HEISENBERG MODEL
Let us now add the quadratic Zeeman field term (19) to the effective model (17) with N = 4. Now the first and the fourth field components become massive and can be integrated out at once. We decompose the field into a background part ϕ and a "fast" massive part χ, Eq. (25), where ϕ*·ϕ = 1. One can straightforwardly show that the identity (26) holds for the two-component unit complex vector field. Thus, integrating out χ, one obtains the familiar CP¹ model, which is equivalent to the O(3) nonlinear sigma model (NLSM) and has been extensively used for the description of Heisenberg spin systems. The topological term (14) is retained. The resulting action takes the form of Eq. (27). The renormalized couplings g*_{1,2} are given by Eqs. (28) and (29); in one and two dimensions one has, respectively, the expressions (30). The CP¹ model with the topological phase angle θ = π has a disordering transition into a gapped dimerized (VBS) phase above a certain critical value g_c of the effective coupling g_{eff} = g*_1 g*_2, both in one and two spatial dimensions. For g_{eff} < g_c, the model is gapless: it is long-range ordered in d = 2 and has quasi-long-range order in d = 1. Thus, the line of the phase transition between the Néel and VBS phases is given by the condition g_{eff} = g_c, Eq. (31). Although the exact value of g_c is not known, one may expect that Eq. (31) will qualitatively reproduce the transition line. Fig. 4 shows the result for the case of two dimensions, where we have used g_c = π (which is just the extrapolation of the large-N result g_c = 2π/N down to the case N = 2). One can see that the curvature of the phase boundary agrees with the results of the previous section obtained in the absence of the field. Again, we cannot see any tendency toward VBS order at the uniform SU(N) antiferromagnetic point on the square lattice (J = 0, λ = 1).
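Operationally, the transition line of Eq. (31) is found by solving g_eff(κ, λ, q) = g_c for q at fixed (κ, λ). Since the explicit expressions for g*_{1,2} did not survive in the text above, the sketch below uses a placeholder g_eff, monotonically decreasing in q, purely to illustrate the root-finding step; g_eff_placeholder and its functional form are our invention, not the paper's formulas:

```python
import math
from scipy.optimize import brentq

G_C = math.pi  # critical coupling used in the text for d = 2 (g_c = 2*pi/N at N = 2)

def g_eff_placeholder(q: float, kappa: float, lam: float) -> float:
    # Stand-in for the renormalized effective coupling entering Eq. (31):
    # decreasing in q, since a large quadratic Zeeman field drives the
    # system toward the ordered regime of the effective CP^1 model.
    return (4.0 - kappa) / (lam * (1.0 + q))

def q_transition(kappa: float, lam: float) -> float:
    """Solve g_eff(q) = g_c for the critical Zeeman field at fixed (kappa, lambda)."""
    return brentq(lambda q: g_eff_placeholder(q, kappa, lam) - G_C, 0.0, 1e3)

print(q_transition(kappa=0.2, lam=0.5))
```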
In the one-dimensional case, one can do better and extract the value of g_c from a comparison with the transition in an antiferromagnetic spin-1/2 zigzag chain. Rodriguez et al. 16 have used a direct mapping of the original fermionic model (1) onto an effective spin-1/2 chain with nearest- and next-nearest-neighbor exchange couplings j_1 and j_2, respectively. The constants j_{1,2} were obtained as series in a perturbation parameter 1/Q defined in that work. Further, by using the value j_2/j_1 ≃ 0.24 for the transition point into the dimerized phase, which is known from numerical studies, 48 an estimate for the transition line in the (1 − κ, Q) plane was obtained in Ref. 16 that compared very well with the numerical results for the original spin-3/2 fermionic model. In our approach, we can try to fix the value of g_c (which is the sole fitting parameter in our theory for the entire line of transition points in the (κ, q) space) by comparing the output of our Eq. (31) to the results of Ref. 16. Fig. 5 shows the transition line obtained from Eq. (31) for d = 1 with g_c = 8.5 in comparison to the curve obtained in Ref. 16. One can see that the above value of g_c yields good agreement close to the SU(N) antiferromagnetic point κ = 0. So, as a byproduct of our studies of the Sp(4) Heisenberg model, we obtain an independent estimate of the critical coupling of the 1d CP^1 model (or, alternatively, the O(3) NLSM) at the topological angle θ = π, namely g_c ≈ 8.5. In the vicinity of the Sutherland point J = 0 our description breaks down; this can be seen already from the fact that the transition line goes to a finite value of q_0 = 2q/J at J → 0 (see the inset of Fig. 5), while the main scale in this limit is J′ ≫ J, so the critical value of q_0 should diverge as (1 − κ)^{−1} in this limit.
V. SUMMARY
We have considered the model (1) describing spin-3/2 fermions in a spatially anisotropic optical lattice, shown in Fig. 1, at quarter filling in the Mott limit where the on-site repulsion constants U_{0,2} are much larger than the hopping amplitudes t. In this limit the charge degrees of freedom have a large gap, and the system can be mapped onto the so-called Sp(4) Heisenberg spin model.
We have studied its large-N generalization, the Sp(N) spin model, with the help of a field-theoretical approach constructed in the vicinity of the staggered SU(N) antiferromagnetic point. It is shown that the effective field theory corresponds to the Sp(N) extension of the CP^{N−1} model, with the Lorentz invariance generically broken by the Sp(N) terms that break the SU(N) symmetry. For this effective field theory, we have obtained the renormalization group equations to one-loop order and have shown that, although in the vicinity of the staggered SU(N) antiferromagnetic point the Sp(N) terms are seemingly irrelevant, their presence leads to a considerable renormalization of the SU(N) part of the action, thus driving the transition between the phase with long-range Néel-type order and the magnetically disordered valence bond solid (VBS) phase. We would like to note that solutions of the renormalization group equations in the disordered phase exhibit a characteristic double-scale behavior close to the Néel-VBS transition boundary. Such behavior is reminiscent of that encountered in other frustrated models, 40,41 and is also expected 49,50 in the framework of the deconfined criticality conjecture. 51 In addition to the Sp(N) perturbation, we have also analyzed the effect of an external magnetic field (the quadratic Zeeman effect) and established the qualitative form of the phase diagrams in one and two spatial dimensions. For the physical case N = 4, at large values of the quadratic Zeeman field the effective theory reduces to the CP^1 model describing an isotropic Heisenberg antiferromagnet with pseudospin 1/2. Its ground state in two dimensions is always in the long-range ordered (Néel) phase for pseudospin 1/2, and when the field is decreased, this state either adiabatically evolves into the Néel phase of spin-3/2 fermions (with the reduced Néel order 29) or undergoes a phase transition into the VBS state. In one dimension, there is a phase transition of the Berezinskii-Kosterlitz-Thouless type that corresponds to the spontaneous dimerization transition in a frustrated spin-1/2 chain with next-nearest-neighbor exchange. 16 As a byproduct, by fitting our results to the available numerical data, 16 we have obtained an estimate for the critical coupling of the CP^1 model with the θ = π topological term in (1 + 1) dimensions.
One last word of caution is in order: since our effective theory is constructed around the staggered SU(N) antiferromagnetic point J′ = 0, it is not expected to work well close to the other, uniform antiferromagnetic SU(N) point J = 0. For that reason, we cannot exclude the presence of another phase transition to the VBS phase in some region around the uniform SU(N) point on the isotropic lattice (J = 0, λ = 1), as suggested in Ref. 17. Constructing an effective field theory describing the vicinity of the uniform antiferromagnetic SU(N) point remains a challenge for future work.

Appendix B: Spin-wave velocities

Expanding the action to quadratic order in the fluctuations, we obtain a Lagrangian in which Z = 2d is the lattice coordination number; for the sake of clarity, we have switched back to real time t and set the lattice to be spatially isotropic (λ = 1). After the standard Fourier transform, the equations of motion take the form i∂_t ϕ_a(k) + F_a(k) ϕ_a(k) = 0, coupled to their counterparts at k + π, where the functions F_a(k) involve, up to coupling-dependent prefactors, the combination Σ_μ (1 + cos k_μ) for a = 1, . . . , N − 2. The dispersions ω_a(k) of the linear modes ("spin waves") are then determined simply by the relation ω_a²(k) = F_a(k) F_a(k + π), which in the limit k → 0 yields the spin-wave velocities. Those velocities, obtained by a spin-wave-type lattice calculation, perfectly agree with the velocities obtained from our effective continuum action (17). As a side remark, it is worth noting that in the presence of the quadratic Zeeman field q (see (3)) the spin-wave velocities do not change with increasing q, contrary to the common wisdom that the spin-wave velocity is linearly proportional to the spin magnitude S (and S effectively decreases from 3/2 at q = 0 to S = 1/2 at q = ∞ for the physical case N = 4). For N = 4, the effect of the QZE is to make two of the three spin-wave modes massive, but it does not touch the velocities (which, for the gapped modes, take on the meaning of limiting velocities). On the other hand, when one changes the spin-2 channel interaction U_2 from +∞ to U_0 (which corresponds to the path from the staggered to the uniform antiferromagnetic SU(4) point), the velocities v_{1,2} decrease and tend to zero as U_2 → U_0, while the remaining velocity v_3 increases. In particular, v_3 is twice as large at the uniform SU(4) point as at the staggered SU(4) one: v_3(U_2 = U_0)/v_3(U_2 = ∞) = 2. Physically, the softening of v_{1,2} reflects the increase of frustration on the way from the staggered to the uniform antiferromagnetic SU(4) point.
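As a small numerical illustration of the dispersion relation above, the sketch below (Python; the overall prefactor in F_a is an assumption chosen purely for demonstration, since the full coupling-dependent coefficients are not reproduced here) evaluates ω(k) = sqrt(F(k) F(k + π)) on a 2d lattice and extracts the k → 0 velocity:

```python
import numpy as np

def F(k, J=1.0):
    """Illustrative F_a(k) proportional to sum_mu (1 + cos k_mu).

    The prefactor J is an assumption for demonstration; in the paper
    F_a carries coupling-dependent coefficients.
    """
    return J * np.sum(1.0 + np.cos(k))

def omega(k):
    """Spin-wave dispersion: omega^2(k) = F(k) * F(k + pi)."""
    k = np.asarray(k, dtype=float)
    return np.sqrt(F(k) * F(k + np.pi))

# Extract the k -> 0 spin-wave velocity v = lim omega(k)/|k| numerically,
# here along the diagonal direction of the 2d Brillouin zone.
for eps in (1e-2, 1e-3, 1e-4):
    k = np.array([eps, eps]) / np.sqrt(2.0)  # |k| = eps
    print(f"|k|={eps:.0e}  omega/|k| = {omega(k) / eps:.6f}")
```

For small k one has F(k) ≈ 2dJ and F(k + π) ≈ J|k|²/2, so the printed ratio converges to J√d (≈ 1.414 for d = 2, J = 1), mirroring how a finite velocity emerges from the product form of the dispersion.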
Appendix C: RG equations for the Sp(N) model
We derive here the RG equations for our Sp(N) effective action (17), which lacks Lorentz invariance, using the Polyakov background field method. 38 We start by splitting the fields z_α and A_μ into the background ("slow") fields ϕ_α, A_μ and the fluctuation ("fast") parts χ_a, a_μ, where {ϕ, e_a} form a set of mutually orthogonal complex unit vectors. Since the "tilded" slow field ϕ̃ = Jϕ satisfies the condition ϕ̃ · ϕ = 0, we can expand it in this basis. We will not need any explicit expansion of the "tilded" fast field χ̃, because we will be able to avoid its presence by using the identities x̃ · y = −(x · ỹ) and x̃ · x̃ = x · x. We will use the notation D_μ = ∂_μ − iA_μ, together with the analogous covariant derivative built from the background gauge field. The derivatives of {e_a(x)} can be written in the form ∂_μ e_a = B_μ^{ab} e_b + B_μ^{a0} ϕ and ∂_μ ϕ = B_μ^{0a} e_a + B_μ^{00} ϕ, where B_μ^{αβ} = −(B_μ^{βα})* = e*_β · ∂_μ e_α and e_0 ≡ ϕ. The quantity B_μ^{00} = ϕ* · ∂_μ ϕ can eventually be identified with A_μ. There is substantial freedom in the choice of the local basis {e_a(x)}, which one can use in order to eliminate B_μ^{ab} (but not B_μ^{a0}). Indeed, under a local unitary rotation e_a → U_{ab} e_b, the (N − 1) × (N − 1) matrix B_μ = {B_μ^{ab}} transforms as B_μ → (∂_μU)U† + U B_μ U†. Thus, to eliminate B_μ, one has to solve the equation B_μ = (∂_μU†)U. Comparing this to (C3), it is easy to see that setting U_{ab} = (e*_b)_a does the desired job. Finally, multiplying the rotation matrix by a phase factor, U → U exp(−i∫^x A_μ(x′) dx′_μ), we can eliminate the U(1) gauge field A_μ from the expression for D_μχ as well, so that one effectively replaces D_μχ by ∂_μχ_a e_a + χ_a B_μ^{a0} ϕ. We substitute the ansatz (C1) into the action (17) and use the trick described above to simplify D_μχ. The "fast" component of the gauge field a_μ enters the action quadratically; integrating it out and plugging the resulting expression back into the action, one obtains after some algebra the new effective action (C5). Here, for the sake of brevity, we have introduced the following notation: g_μ = g_2 + (g_1 − g_2)δ_{0μ} and g̃_μ = g̃_2 + (g̃_1 − g̃_2)δ_{0μ} (C6), so that g_μ = g_1, g̃_μ = g̃_1 for μ = 0, and g_μ = g_2, g̃_μ = g̃_2 for μ ≠ 0.
To do the final step of integrating out the fluctuation field χ, it is convenient to make use of the following fact: for a matrix of the form (C7), where ϕ*_a ϕ_a = 1, the inverse can be written down explicitly, Eq. (C8). With the help of this identity, the "fast" field χ, containing the Fourier components with momenta in the shell [Λ, Λ(1 + dl)], can easily be integrated out. A typical integral over the momentum (k_0, k) has the form (C9), where we have denoted C_d = πS_d/(2π)^{d+1}, and S_d = 2π^{d/2}/Γ(d/2) is the surface area of a d-dimensional sphere. The correction ΔA to the action A_eff[ϕ], coming from the fluctuations, then follows.
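As a quick numerical check of the phase-space constant defined above, a minimal sketch (Python; the choice of dimensions is illustrative):

```python
import math

def sphere_surface(d: int) -> float:
    """Surface area of the unit sphere in d dimensions: S_d = 2*pi^(d/2)/Gamma(d/2)."""
    return 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)

def c_d(d: int) -> float:
    """Momentum-shell constant C_d = pi * S_d / (2*pi)^(d+1) from the appendix."""
    return math.pi * sphere_surface(d) / (2.0 * math.pi) ** (d + 1)

for d in (1, 2, 3):
    print(f"d={d}:  S_d={sphere_surface(d):.6f}  C_d={c_d(d):.6f}")
```

For example, S_1 = 2 gives C_1 = 1/(2π) ≈ 0.1592, and S_2 = 2π gives C_2 = 1/(4π) ≈ 0.0796, the familiar one-loop shell factors in one and two spatial dimensions.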
Early genome duplications in conifers and other seed plants
A new phylogenomic approach reveals that conifer genomes are duplicated despite rare polyploidy among extant species.
INTRODUCTION
Polyploidy, or whole genome duplication (WGD), is one of the most important forces in vascular plant evolution. Nearly 25% of vascular plants are recent polyploids (1), with approximately 15% of angiosperm and 31% of fern speciation events due to genome duplication (2). Ancient polyploidy is found in the ancestry of all extant seed and flowering plants (3), and many angiosperm lineages have experienced additional rounds of genome duplication (4–10). Changes in the rates of molecular evolution and turnover in genome content following polyploidy may have provided novel genetic variation that was important for the evolution of plant diversity (3, 8, 11–16).
Despite the prevalence of polyploidy in the history of flowering plants, the role of polyploidy in gymnosperm evolution is less clear. The extant gymnosperms appear to be the sister clade of angiosperms (17), and they diverged from their most recent common ancestor (MRCA) as much as 310 million years ago (18). Most evidence indicates that polyploid speciation is relatively rare among extant gymnosperms (2), although in some genera (for example, Ephedra) polyploidy is prevalent (19, 20). Previous analyses of conifer genome sizes and chromosomes suggested that paleopolyploidy occurred in Pinaceae (19, 21). Although there was evidence of an ancient polyploidy shared by all seed plants (3), no evidence of a gymnosperm or conifer ancient polyploidy was found in the genome of Norway spruce (Picea abies), the first published gymnosperm genome. However, this conclusion was based on only a single plot of the relative ages of duplicate genes, presumably because the genome assembly was not of high enough quality (N50 = 4.87 kb) for syntenic analyses. Based on the pattern of accumulation of paralogs seen in this plot, the authors of the spruce genome study suggested that the large genomes of conifers originated by mechanisms other than WGD, in particular through proliferation of long terminal repeat retrotransposons (LTR-RTs). Given that paleopolyploidy has been repeatedly observed among flowering plants and has also been hypothesized to occur among conifers (19, 21), our goal was to test more thoroughly for evidence of ancient polyploidy in gymnosperms, using a phylogenetically diverse data set and a new phylogenomic method for determining the phylogenetic placement of WGDs.
We assembled transcriptomes for 24 gymnosperms and 3 outgroup species, including representatives of all major gymnosperm and vascular plant clades (table S1). Three of these transcriptomes (Ophioglossum petiolatum, Gnetum gnemon, and Ephedra frustillata) were newly sequenced to cover phylogenetic gaps in our data set. For each transcriptome, we used our DupPipe bioinformatic pipeline to generate age distributions of paralogs to identify shared bursts of gene duplication that are indicative of ancient WGD (7, 22, 23). We also introduce a newly developed algorithm, Multi-tAxon Paleopolyploidy Search (MAPS), to place inferred paleopolyploid events in phylogenetic context. For each node in a phylogeny, MAPS evaluates the percentage of gene duplications shared by all taxa descended from that node. Ancient WGDs are identified and located as peaks in plots of duplication events shared among a set of species (Materials and Methods; figs. S1 and S2). We used MAPS to confirm and locate genome duplication events in the history of the gymnosperms and seed plants.
Gene age distributions for a few taxa did not contain clear evidence of the putative seed plant WGD, perhaps due to elevated substitution or gene birth/death rates among these species.
To place this ancient WGD in the vascular plant phylogeny, we implemented a new multispecies paleopolyploid search tool, MAPS. Previous analyses by Jiao et al. found evidence for an ancient polyploidy in the ancestry of all extant seed plants (3). However, a major clade of vascular plants, the monilophytes (ferns), was not included in that analysis. It was therefore unclear whether this WGD is shared among all euphyllophytes (seed plants and monilophytes) or restricted to seed plants only. To better place this WGD in the vascular plant phylogeny, we analyzed new transcriptome data from the eusporangiate fern Ophioglossum together with data from Araucaria (gymnosperm), Ginkgo (gymnosperm), Amborella (angiosperm), and Selaginella (lycophyte, the sister lineage to euphyllophytes). Gene trees were constructed for 3235 gene families with at least one gene copy present in each species. Among these gene families, MAPS identified 544 subtrees that included the MRCA of Araucaria, Ginkgo, and Amborella and were consistent with the species tree. Nearly 64% of these subtrees contained evidence for a shared duplication in the MRCA of the seed plants that was not shared with Ophioglossum (Fig. 1A, fig. S4A, and table S2). This result demonstrates that the previously ambiguously placed euphyllophyte genome duplication (3) is indeed limited to seed plants and not shared with ferns and other vascular plants (Fig. 2).
Independent paleopolyploidies in Pinaceae and Cupressaceae
Most gymnosperm lineages contained evidence for only a single, ancient WGD, but some species showed multiple signals. The Ks plots for most of the conifers contained a younger peak consistent with a WGD postdating the seed plant genome duplication (fig. S3). Among Pinaceae, we observed a younger peak with a median Ks = 0.2 to 0.4 for each taxon in our data set. Similarly, gene age distributions for taxa in Cephalotaxaceae, Cupressaceae, and Taxaceae contained a younger peak with a median Ks = 0.2 to 0.5. Araucaria was the only conifer in our data set without an unambiguous younger peak. Thus, the Ks plots suggest that there may have been one shared conifer WGD or independent WGDs in the history of different conifer families.
We conducted two different MAPS analyses to resolve the placement and number of WGDs among the conifers. For one analysis, we selected the transcriptomes of Pinus, Larix, and Cedrus to represent Pinaceae and the transcriptome of Taxus to represent Taxaceae; we chose Ginkgo, Ophioglossum, and Selaginella as outgroups. We recovered 2175 gene family phylogenies with at least one gene copy from each taxon. MAPS identified 625 subtrees among these gene family phylogenies that included the MRCA of Pinaceae. More than 52% of the subtrees supported a shared duplication in the ancestry of Pinaceae (Fig. 1B, fig. S4B, and table S3). In contrast, only 9% of 535 subtrees supported a gene duplication shared between Pinaceae and Taxaceae. In the second analysis, we selected Taxus (Taxaceae), Cephalotaxus (Cephalotaxaceae), Cryptomeria (Cupressaceae), and Pinus (Pinaceae), with Ginkgo, Ophioglossum, and Selaginella as outgroups. Among 1886 gene family phylogenies for these taxa, MAPS identified 469 subtrees that included the MRCA of the cupressophytes. More than 42% of the subtrees supported a shared gene duplication in the MRCA of Cupressaceae and Taxaceae (Fig. 1C, fig. S4C, and table S4). Only 10% of the subtrees supported a duplication event shared by Pinaceae, Cupressaceae, and Taxaceae. We found similar results with MAPS using only gene trees with >50% bootstrap support for all branches (table S5). These results suggest that there are two ancient WGDs in the conifers: one shared by Cupressaceae and Taxaceae (the cupressophytes) and one in the ancestry of Pinaceae (Fig. 2).
Analyses of ortholog divergence corroborated our MAPS results and supported independent WGDs among the conifers. We identified 3266 orthologs by reciprocal best BLAST hit (22) from representatives of Pinaceae and Cupressaceae, Picea glauca and Cryptomeria japonica. Excluding poorly aligned orthologs with Ks > 5, the median orthologous divergence between P. glauca and C. japonica was Ks = 0.78. In contrast, their most recent WGDs occurred at median Ks = 0.35 and 0.24, respectively (Fig. 3), much later than the divergence of their lineages. Orthologous divergence and phylogenomic approaches both support independent WGDs in Pinaceae and cupressophytes. Consistent with this interpretation is an absence of evidence for these WGDs in Araucariaceae (fig. S3). Overall, these results are consistent with previous analyses of chromosomes and genome sizes that hypothesized no paleopolyploidy in Araucariaceae, but likely ancient WGD in Pinaceae (19, 21).
DISCUSSION
In contrast to the recently published study of the Norway spruce genome (24), our analyses find evidence for at least two independent WGDs in the ancestry of major conifer clades. Why did analyses of the spruce genome not recover similar evidence of these WGDs? Visual inspection of the paralog age distribution reported for the spruce genome (24) suggests that there is in fact a peak consistent with a WGD near Ks ≈ 0.25, similar to our results. Although it is not clear why this result was overlooked, the spruce genome results do appear to be fully consistent with our analyses. Our more extensive phylogenetic sampling provides additional support that this peak likely represents a WGD, because more than 50% of gene families in multiple Pinaceae species have paralogs from this event (Fig. 1, B and C, and fig. S4, B and C).
What are the implications of these results for our understanding of conifer genome evolution? First, Nystedt et al. (24) proposed a model of conifer genome evolution that must be revised in light of our results. Their model suggests that in the absence of polyploidy, 12 ancestral conifer chromosomes expanded at a slow and steady rate owing solely to the activity of a diverse set of LTR transposable elements. Although conifer chromosome numbers cluster near n = 12 (25), our discovery of WGDs in the ancestry of two major conifer clades (Pinaceae and cupressophytes) indicates that these numbers must have fluctuated rather than remained completely static over time. Our analyses do not contradict evidence that the expansion of repetitive DNA is the major contributor to conifer genome size evolution. However, the dynamics of conifer genome evolution clearly did involve WGDs, and genome duplication events have played a role in generating some of the largest genomes among conifers (for example, Pinaceae). It is notable that the genome sizes of paleopolyploid Cupressaceae and Taxaceae are not substantially larger on average than that of non-paleopolyploid Araucariaceae (26,27). This finding suggests that an insight from angiosperm genome evolution also holds true for gymnosperms; differences in turnover rates of genome content likely contribute more to genome size variation than a single paleopolyploidy (12,28,29).
Nystedt et al. (24) also suggest that conserved synteny across Pinaceae (30) results from an absence of paleopolyploidy. Analyses of angiosperm genomes indicate that the degree of synteny conservation following paleopolyploidy varies widely (12, 31–33). The composition of parental genomes, in particular differences in transposon load, may establish genome dominance that leads to the biased retention and loss of genes (33). If most fractionation and genome rearrangements occur quickly after polyploidy, descendant polyploids may also inherit a largely common synteny (34, 35). The lack of reciprocal genome rearrangements following WGDs, such as in Poaceae (36), would also reduce syntenic diversity in descendant lineages. For decades, the broad ancestry of polyploidy in the flowering plants went undetected in linkage mapping studies. Thus, relatively conserved synteny, especially from linkage map data, is not evidence against paleopolyploidy in Pinaceae.
One of the most intriguing evolutionary questions raised by our analyses is, why are there so few polyploid species among extant conifers and other gymnosperms? Our analyses indicate that polyploid speciation contributed to their diversity. Perhaps these ancient polyploid lineages thrived at a climatically favorable time for polyploid species, as was proposed to explain the apparent clustering of angiosperm WGDs near the K-Pg mass extinction event (37). Based on our phylogenetic placements of the WGDs and existing estimates for the ages of gymnosperm lineages (38), the conifer WGDs occurred ca. 210 to 275 million years ago (Cupressaceae + Taxaceae) and ca. 200 to 342 million years ago (Pinaceae). Many major events in Earth's history occurred during this time frame, including Earth's most severe mass extinction event, the Permian-Triassic extinction. Did polyploid conifers survive the end-Permian event better than their diploid contemporaries? Given that many of these conifer clades originated during this period, these WGDs may have uniquely contributed to the morphological and biological diversity of these lineages. Polyploidy may differentially influence the evolution of dosage-sensitive genes and pathways (16, 39–41) or generate novelty by sub- or neofunctionalization (42). Examining further data sets to more precisely pinpoint these WGDs in the conifer phylogeny and to explore the effects of duplication on specific gene families will be critical for further understanding how polyploidy has contributed to conifer evolution.
Sampling and sequencing
Leaf material of O. petiolatum (PRJNA257107), G. gnemon (PRJNA283231), and E. frustillata (PRJNA283230) was collected in liquid nitrogen from the University of British Columbia (UBC) Botanical Gardens and Greenhouse and then stored in a −80°C freezer (table S1). We extracted total RNA using the TRIzol reagent (Invitrogen)/RNeasy (Qiagen) approach as described by Lai et al. (43). For 454 sequencing (454 Life Sciences), we used modified oligo-dT primers for complementary DNA (cDNA) synthesis to reduce the length of mononucleotide runs associated with the polyadenylate [poly(A)] tail of mRNA. We used a "broken chain" short oligo-dT primer to prime the poly(A) tail of mRNA during first-strand cDNA synthesis (44). cDNA was amplified and normalized with the TRIMMER-DIRECT cDNA Normalization Kit. After normalization, we fragmented the cDNA to 500- to 800-base pair fragments by either sonication or nebulization and removed small fragments through size selection using AMPure SPRI beads (Agencourt). The fragmented ends were then polished and ligated with adaptors. The optimal ligation products were selectively amplified and subjected to two rounds of size selection by gel electrophoresis and AMPure SPRI bead purification (45). Normalized cDNA was prepared for sequencing following the standard genomic DNA shotgun protocol recommended by 454 Life Sciences.

Fig. 3. Pinaceae-Cupressaceae ortholog divergence and independent WGDs. Combined Ks plot of the gene age distributions of P. glauca (Pinaceae; green) and C. japonica (Cupressaceae; orange), and their ortholog divergences (blue). The median peaks for these plots are highlighted. Analyses of ortholog divergence indicated that these two taxa diverged before their most recent WGDs.
Additional data sets were downloaded from the GenBank Sequence Read Archive (SRA) (table S1). These included Sanger and Illumina data from 22 species. Data sets were selected to provide broad phylogenetic coverage of the gymnosperms. We also obtained the annotated coding DNA sequences of Amborella trichopoda (46) and Selaginella moellendorffii (47) from Phytozome (www.phytozome.net/).
Transcriptome assembly
Raw read quality filtering and trimming were performed with SnoWhite (48) before assembly. Three different assembly strategies were used for our three different data types. Sanger expressed sequence tags (ESTs) were cleaned using the SeqClean pipeline and assembled using TGICL. For 454 data, we used a combination of MIRA and CAP3 to assemble contigs. We ran MIRA version 3.2.1 (49) in the "accurate.est.denovo.454" assembly mode. Because MIRA may split high-coverage contigs into multiple contigs, we used CAP3 at 94% identity to further assemble the MIRA contigs and singletons (50). SOAPdenovo-Trans (51) was used to assemble Illumina-sequenced transcriptomes using a k-mer of about 2/3 of the read length. All other parameters were set to default. Assembly statistics for the 26 assemblies are given in table S1.
Age distribution of paralogs
For each species data set, we used our DupPipe pipeline to construct gene families and estimate the ages of gene duplications (7, 22, 23, 47, 52). Translations and reading frames were estimated by Genewise alignment to the best-hit protein from a collection of proteins from 25 plant genomes on Phytozome. As in other DupPipe runs, we used protein-guided DNA alignments to align the nucleic acid sequences while maintaining the reading frame. For each node in our gene family phylogenies, we estimated synonymous divergence (Ks) using PAML with the F3X4 model (53). Summary plots of the age distribution of gene duplications were evaluated for each gymnosperm species for peaks of gene duplication as evidence of ancient WGDs. Taxa with peaks suggesting ancient WGDs were further analyzed using a multispecies approach (described below) to assess what fraction of gene families show a shared gene duplication and simultaneously place potential WGDs in phylogenetic context.
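To make the downstream summary step concrete, here is a minimal sketch (Python; the input file name, the Ks > 5 filter, and the bin settings are illustrative assumptions, not part of the actual DupPipe pipeline) of turning node-wise Ks estimates into an age distribution and locating a candidate WGD peak:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: one Ks estimate per duplication node, e.g. exported
# from a PAML-based pipeline as a single-column text file.
ks = np.loadtxt("paralog_ks.txt")

# Common practice: drop saturated or poorly aligned pairs (here Ks > 5).
ks = ks[(ks > 0) & (ks <= 5)]

counts, edges = np.histogram(ks, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

# A crude peak indicator: the mode of the distribution. A burst of
# duplications from a WGD appears as a peak on top of the L-shaped
# background of continuous small-scale duplication.
print(f"modal Ks bin: {centers[np.argmax(counts)]:.2f}")

plt.bar(centers, counts, width=edges[1] - edges[0])
plt.xlabel("Ks (synonymous divergence)")
plt.ylabel("number of duplication nodes")
plt.savefig("ks_age_distribution.png", dpi=150)
```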
Estimating the orthologous divergence of Pinaceae and Cupressaceae
To estimate the average ortholog divergence of conifer taxa and compare it to the observed paleopolyploid peaks, we used our previously described RBH Ortholog pipeline (22). Briefly, we identified orthologs as reciprocal best BLAST hits in the transcriptomes of P. glauca (Pinaceae) and C. japonica (Cupressaceae). Using protein-guided DNA alignments, we estimated the pairwise synonymous (Ks) divergence for each pair of orthologs using PAML with the F3X4 model (53). We plotted the distribution of ortholog divergences and compared the median divergence against the synonymous divergence of paralogs from inferred WGDs in these lineages.
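A minimal sketch of the reciprocal-best-hit idea (Python; the file names and the assumption of BLAST tabular output, -outfmt 6, are illustrative rather than the authors' exact pipeline):

```python
import csv

def best_hits(blast_tabular_path):
    """Best hit per query from BLAST tabular output (-outfmt 6),
    picking the alignment with the highest bit score (column 12)."""
    best = {}
    with open(blast_tabular_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject, bitscore = row[0], row[1], float(row[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

# a_vs_b.tsv: species A queried against species B; b_vs_a.tsv: the reverse.
a_to_b = best_hits("a_vs_b.tsv")
b_to_a = best_hits("b_vs_a.tsv")

# Orthologs = pairs that are each other's best hit in both directions.
orthologs = [(a, b) for a, b in a_to_b.items() if b_to_a.get(b) == a]
print(f"{len(orthologs)} reciprocal best hit ortholog pairs")
```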
Inference of gene family phylogenies
Each transcriptome was translated into amino acid sequences using the TransPipe pipeline (22). We performed reciprocal protein BLAST (blastp) searches of the selected transcriptomes with an e-value of 10^-5 as a cutoff. Gene families were clustered from these BLAST results using OrthoMCL v2.0 with default parameters (54). Using a custom Perl script, we filtered for gene families that contained at least one gene copy from each taxon and discarded the remaining OrthoMCL clusters. SATé was used for automatic alignment and phylogeny reconstruction of gene families (55). For each gene family phylogeny, we ran SATé until five iterations passed without an improvement in score, using a centroid breaking strategy. MAFFT was used for alignments (56), Opal for mergers (57), and RAxML for tree estimation (58). The best SATé tree for each gene family was used to infer and locate WGDs with our MAPS algorithm.
Multi-tAxon Paleopolyploidy Search (MAPS)
To infer and locate ancient WGDs in our data sets, we developed a gene tree sorting and counting algorithm, MAPS. This algorithm uses a given species tree to filter for subtrees within complex gene trees that are consistent with the relationships at each node of the species tree. For each node of the species tree, MAPS parses the species tree into subtrees with a sister species and an outgroup, for example, ((A,B),C). MAPS iteratively searches for each of these subtrees in the gene tree and ignores subtrees that do not have the expected relationship. In-paralogs are collapsed by MAPS to simplify the search. We filter for these subtrees, rather than filtering on entire topologies, because ancient WGDs may yield phylogenies with many nested and/or orthologous clades. Filtering for a simple gene tree that matches the species tree would eliminate many of the trees that support WGDs. By filtering for subtrees of the species tree, MAPS captures the evidence for polyploidy in complex gene family topologies. Using this filtered set of gene trees, MAPS records the number of subtrees that support a gene duplication at a particular node in the species tree (fig. S1). To infer and locate a potential WGD in the species tree, we plot the percentage of gene duplications shared by descendant taxa at each node (fig. S2). A WGD will produce a large burst of shared duplications across taxa and gene trees. This burst of duplication will appear as an increase in the percentage of shared gene duplications in our MAPS analyses.
To evaluate whether a WGD occurred before the divergence of taxa A and B, MAPS requires gene trees with at least the sister group A and B and an outgroup C (fig. S1). The basic algorithm of MAPS has two steps. In step 1, MAPS collapses in-paralogs that evolved after the divergence of A and B to a single copy in each gene tree (fig. S1). In step 2, MAPS counts subtrees from all gene trees that are consistent with a duplication event in the MRCA of A and B. In our ABC example, subtrees with a topology consistent with duplication before the divergence of A and B [for example, (((A,B),(A,B)),C)] will be recorded as a duplication at their MRCA node (fig. S1, 1.6). Additionally, subtrees with a topology consistent with duplication before the divergence of A and B followed by independent gene loss [for example, (((A,~),(A,B)),C) or (((A,B),(~,B)),C)] will also be recorded as a duplication at their MRCA node (fig. S1, 1.7 to 1.10). If gene trees do not have a topology consistent with any gene duplication among the ingroup taxa, then no duplications will be recorded at the internal nodes (fig. S1, 1.1 to 1.5). When searching for ancient WGDs in a collection of gene trees that contain more than three taxa, MAPS repeats the same algorithm on each node of the tree (fig. S2). WGDs are inferred by searching for evidence of a large number of shared duplications at a particular node(s) of the species tree (fig. S2); a simplified sketch of the counting step is given below.
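The counting step can be illustrated with a short sketch (Python with the ete3 library; the three-taxon case and the "SPECIES_geneID" leaf-naming convention are simplifying assumptions, not the actual MAPS implementation):

```python
from ete3 import Tree

def species_of(leaf_name):
    # Assumed naming convention: leaves look like "A_gene17".
    return leaf_name.split("_")[0]

def supports_shared_duplication(gene_tree, ingroup=("A", "B")):
    """True if some node has two children whose ingroup species sets
    overlap -- the signature of a duplication at the MRCA of A and B,
    as in (((A,B),(A,B)),C) or loss variants like (((A,~),(A,B)),C).
    Assumes in-paralogs were already collapsed (MAPS step 1)."""
    ingroup = set(ingroup)
    for node in gene_tree.traverse():
        if len(node.children) != 2:
            continue
        left = {species_of(l) for l in node.children[0].get_leaf_names()} & ingroup
        right = {species_of(l) for l in node.children[1].get_leaf_names()} & ingroup
        if left & right:
            return True
    return False

trees = [
    Tree("(((A_1,B_1),(A_2,B_2)),C_1);"),  # duplication before the A/B split
    Tree("(((A_1,B_1),A_2),C_1);"),        # same duplication, one B copy lost
    Tree("((A_1,B_1),C_1);"),              # no shared duplication
]
n_dup = sum(supports_shared_duplication(t) for t in trees)
print(f"{n_dup}/{len(trees)} gene trees support a duplication in the MRCA of A and B")
```

The "overlap" test is what distinguishes a duplication node (the same species appears on both sides of a split) from an ordinary speciation node, which is why loss variants such as the second example are still counted.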
To evaluate the phylogenetic placement of the putative "seed plant" WGD, we used MAPS to analyze gene families from representatives of each vascular plant lineage (Fig. 1A and fig. S4A). We selected Araucaria angustifolia and Ginkgo biloba to represent gymnosperms because our Ks plots suggest that they only experienced the seed plant WGD. We also analyzed the Amborella genome to represent angiosperms (46). The newly sequenced O. petiolatum transcriptome and the S. moellendorffii genome (47) were chosen to represent ferns and lycophytes, respectively.
We conducted two MAPS analyses to evaluate numbers and placements of WGDs among conifers (Fig. 1, B and C, and fig. S4, B and C). Two analyses were conducted instead of one because the MAPS algorithm works best with simple, ladderized species trees. To maximize the numbers of gene trees in the MAPS analysis and have good coverage of the Pinaceae phylogeny, we selected the transcriptomes of Pinus monticola, Larix gmelinii, and Cedrus atlantica to represent Pinaceae. We also selected Taxus mairei to represent the cupressophytes. Likewise, we chose T. mairei, Cephalotaxus hainanensis, and C. japonica to represent cupressophytes, and P. monticola to represent Pinaceae. For both Pinaceae and cupressophyte analyses, the transcriptomes of G. biloba and O. petiolatum as well as the S. moellendorffii genome were selected as outgroups.
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/1/10/e1501084/DC1
Fig. S1. Example topologies processed by MAPS to identify a gene duplication (red star) or not (black dot) in a given gene family phylogeny.
Fig. S2. Example MAPS summary results for a four-taxon phylogeny.
Fig. S3. Histograms of the age distribution of gene duplications from 24 gymnosperm transcriptomes.
Fig. S4. Numerical summary of MAPS results.
Table S1. Assembly statistics and accession numbers for 25 transcriptomes and 2 genomes.
Table S2. Number of gene subtrees that fit the expected species tree and support a shared duplication in the seed plant analysis.
Table S3. Number of gene subtrees that fit the expected species tree and support a shared duplication in the Pinaceae analysis.
Table S4. Number of gene subtrees that fit the expected species tree and support a shared duplication in the cupressophyte analysis.
Table S5. Number of gene subtrees that fit the expected species tree and support a shared duplication in the cupressophyte analysis using only trees with >50% bootstrap support for each branch.
Comparative Proteomic Analysis of Buffalo Oocytes Matured in vitro Using iTRAQ Technique
To investigate the protein profiles of buffalo oocytes at the germinal vesicle (GV) stage and metaphase II (MII) stage, an iTRAQ-based strategy was applied. A total of 3,763 proteins were identified, representing the largest buffalo oocyte proteome dataset to date. Among the identified proteins, 173 were differentially expressed between GV oocytes and competent MII oocytes, and 146 were differentially abundant between competent and incompetent matured oocytes. Functional and KEGG pathway analysis revealed that the proteins up-regulated in competent MII oocytes, in comparison with GV and incompetent MII oocytes, were related to chromosome segregation, microtubule-based processes, protein transport, oxidation reduction, the ribosome, oxidative phosphorylation, etc. This is the first proteomic report on buffalo oocytes from different maturation stages and developmental competence states. These data will provide valuable information for understanding the molecular mechanisms underlying buffalo oocyte maturation, and these proteins may potentially act as markers to predict the developmental competence of buffalo oocytes during in vitro maturation.
Proteomics has been applied in research on mammalian oocytes and embryos, including mouse 8-12, bovine 13-15, and pig 6,16-18 studies, and a large amount of proteomic data has been obtained. These studies mainly focused on identifying the protein expression profiles of embryos at different developmental stages and the maternal proteins in oocytes. In addition, substantial work has been performed to reveal signal transduction pathways during oocyte maturation and important transcription factors related to reprogramming and chromosome reconstruction in oocytes. Although much valuable protein information about the growth and development of mammalian oocytes/embryos has been obtained from proteomic data, some problems remain to be solved. First, many proteins, especially those of low molecular weight and low abundance, are difficult to identify because of the limitations of the 2-DE platforms used in those studies, resulting in limited proteome coverage. Second, classic proteomic quantification methods, including 2-DE and label-free approaches, are not well suited for accurate quantification between samples, especially for proteins with small fold-change ratios between groups.
Isobaric tags for relative and absolute quantitation (iTRAQ) is a widely used stable isotope-based approach for quantitative proteomics that allows simultaneous identification and quantification. The same peptide from different samples displays a single peak in MS scans, thus reducing the complexity of the parent ion spectra. Quantification is performed via reporter ion intensities in the low mass range at the MS/MS level, which improves the accuracy of quantitation. In addition, this method can simultaneously analyze up to eight different samples in one experiment.
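As an illustration of reporter-ion-based quantification, here is a minimal sketch (Python; the intensity values, the tag-to-sample mapping, and the median roll-up are illustrative assumptions rather than the software actually used in the study):

```python
import numpy as np

# Hypothetical reporter ion intensities for the PSMs of one protein:
# columns are iTRAQ tags 113, 114 (GV) and 115, 116 (MII G, MII B).
psm_intensities = np.array([
    [1.0e5, 1.1e5, 2.3e5, 0.9e5],
    [0.8e5, 0.9e5, 1.9e5, 0.7e5],
    [1.2e5, 1.0e5, 2.6e5, 1.0e5],
])

# Normalize each PSM to its own total so PSMs contribute comparably,
# then roll up to the protein level with the median.
fractions = psm_intensities / psm_intensities.sum(axis=1, keepdims=True)
protein = np.median(fractions, axis=0)

tags = ["113 (GV)", "114 (GV)", "115 (MII G)", "116 (MII B)"]
log2_ratio = np.log2(protein[2] / protein[0])  # MII G vs GV, tag 115/113
print(dict(zip(tags, protein.round(3))))
print(f"log2(MII G / GV) = {log2_ratio:.2f}")
```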
Buffalo (Bubalus bubalis) is an important domestic animal distributed in tropical and subtropical regions, providing high-quality milk, meat, and draft power 19. However, there have been very few reports on dynamic protein changes during buffalo oocyte maturation using proteomic techniques, partly because of the animal's restricted geographic distribution. The efficiency of buffalo blastocyst production in vitro is reported to be low in comparison with that of bovine blastocysts produced in vitro. Thus, the present study was undertaken to investigate the protein expression profile of buffalo oocytes during IVM and to identify the differentially expressed proteins in oocytes at the GV and MII stages with different competence using iTRAQ quantitative proteomics technology. This research will provide useful information for understanding the changes in the protein profile of buffalo oocytes during maturation and lay a foundation for further exploring the molecular mechanisms of buffalo oocyte maturation in vitro.
Results
Quantitative proteomics analysis of buffalo oocytes. To identify the differentially expressed proteins in buffalo oocytes before and after maturation, GV and MII stage oocytes were selected for quantitative proteomics analysis by iTRAQ. The experimental workflow is depicted in Fig. 1B. After separation by SDS-PAGE (Fig. 1C) and in-gel digestion with trypsin, the peptides were labeled with iTRAQ reagents. Labeled peptides were then pooled and separated into 20 fractions by high-pH reverse-phase high-performance liquid chromatography, followed by nano-UPLC-MS/MS analysis on an LTQ-Orbitrap Velos mass spectrometer. Two biological experiments were carried out, and the LC-MS/MS identification was repeated twice for each biological replicate. Representative mass spectrometry identification results are shown in Supplementary Information Figure S1. A total of 3,763 proteins (FDR < 1%) were identified from the labeled samples, among which 2,461 proteins were found in both biological replicates (65% of the proteome) (Fig. 2A). Among the identified proteins, 3,166 (84%) were quantified, of which 2,050 were found in both biological replicates (Fig. 2B). The complete list of all peptide and protein identifications in buffalo oocytes is given in Supplementary Information Table S1. Among the identified proteins, 17%, 11%, 7%, 6%, 5%, and 54% were identified by 1, 2, 3, 4, 5, and at least 6 unique peptides, respectively (Fig. 2C).

Figure 1 (B and C). (B) GV, competent, and incompetent metaphase II (MII) buffalo oocytes were collected in two biological replicates. Similar amounts of protein were digested into peptides using trypsin; the resulting peptides were extracted and desalted. All samples were pooled after iTRAQ labeling, separated by RP-HPLC, and then analyzed by LC-MS/MS (tags 113 and 114 for GV oocytes; 115 and 116 for competent and incompetent MII oocytes, respectively). (C) SDS-PAGE of buffalo oocyte proteins: proteins were separated on a 10% SDS-PAGE gel and stained with Coomassie brilliant blue.
Comparison of mammalian oocyte/embryo proteomics datasets. To explore the similarities and differences in protein expression between oocytes/embryos of different species, our iTRAQ quantification results were compared with published bovine proteome datasets, including bovine GV stage oocytes, cumulus cells (Burgess et al.), and embryos (Deutsch et al.). Proteome differences between the bovine and buffalo oocytes are shown as a Venn diagram of absolute protein numbers (Fig. 3). Seven hundred and ten proteins were found in both Deutsch's dataset and ours, whereas only 90 proteins were identified in common between Burgess's dataset and ours. Only 28 proteins were present in both the Deutsch and Burgess results. We speculate that these differences may be caused by the samples used: oocytes and cumulus cells in Deutsch's study versus MII oocytes and embryos in Burgess's study. It is well known that cumulus cells express many specific proteins to support oocyte growth and maturation, whereas oocytes do not. The development of bovine embryos before genomic activation at the 8-cell stage depends on the maternal proteins stored in oocytes during growth and maturation. Thus, oocytes and early embryos may have similar protein expression patterns.
Gene Ontology (GO) categorization analysis of buffalo oocyte proteins.
To understand the biological functions of the proteins identified in buffalo oocytes, GO categorization analysis was performed using DAVID Bioinformatics Resources. Of the 3,763 proteins identified in buffalo oocytes, 3,184 were annotated with DAVID GO terms, yielding 277 categorization groups. According to the GO analysis results, proteins related to the biological processes of precursor metabolite generation, energy metabolism, translation, oxidation reduction, and structure were significantly enriched in buffalo oocytes (Supplementary Information Figure S2 and Table S2).
Statistical analysis of mass spectrometry data. The distribution of log2 ratios between two technical replicates of samples in one biological experiment was a normal distribution with a standard deviation of 0.16 (Fig. 4A), indicating that the large majority of identified proteins were unchanged and that the quantitative accuracy of the experiment was high. To evaluate the quantification reproducibility of the iTRAQ experiment, a linear regression analysis of proteins in the two replicates was performed. The slope of the linear regression fit to the technical replicates was 1.0428. iTRAQ reporter ion intensities between the two technical replicates showed a high correlation (Pearson R² = 0.9956, Fig. 4B), demonstrating good reproducibility. In the current study, equal amounts of peptides from the four samples were used and mixed in an equal ratio. We compared the log-transformed ratios in a box-plot analysis; the result derived from one biological replicate is shown in Fig. 4C. The ratios were calculated from two randomly chosen tags. The ratios of each group matched the expected values, indicating that the samples were mixed in equal amounts. Next, we evaluated the technical variation and determined the threshold for differentially expressed proteins. Around 90% of the commonly identified proteins fell within 30% variation in the LC-MS/MS identification replicates (Fig. 4D). Thus, the cutoff for differentially expressed proteins in our study was set at a fold change of ≥2 or ≤0.5. Furthermore, iTRAQ ratios were also required to have a P value of less than 0.05 (95% confidence limit for proteins considered changed). A minimum of one unique peptide was required to identify and relatively quantify a protein.
Based on these screening criteria, a total of 173 significantly differentially expressed proteins were found in competent MII oocytes (MII G) compared to GV stage oocytes (GVO). Among these differentially expressed proteins, 108 were up-regulated and 65 were down-regulated. When MII G oocytes were compared with incompetent MII oocytes (MII B), 146 differentially expressed proteins (111 up- and 35 down-regulated) were found to meet the cut-off criteria (Table 1, Fig. 5). The complete list of differentially expressed proteins is given in Supplementary Information Table S3.
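A minimal sketch of this screening step (Python with pandas; the table layout, column names, and values are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical protein-level quantification table with iTRAQ ratios
# (e.g., MII G vs GVO) and the associated P values.
df = pd.DataFrame({
    "protein": ["P1", "P2", "P3", "P4"],
    "ratio_MIIG_vs_GVO": [2.6, 0.4, 1.3, 2.1],
    "p_value": [0.01, 0.02, 0.40, 0.20],
})

# Cut-off used in the study: fold change >= 2 or <= 0.5, and P < 0.05.
fold_ok = (df["ratio_MIIG_vs_GVO"] >= 2.0) | (df["ratio_MIIG_vs_GVO"] <= 0.5)
sig = df[fold_ok & (df["p_value"] < 0.05)].copy()
sig["direction"] = np.where(sig["ratio_MIIG_vs_GVO"] >= 2.0, "up", "down")
print(sig)
```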
Hierarchical clustering analysis of differentially expressed proteins. To understand the dynamic changes of the differentially expressed proteins during buffalo oocyte maturation, hierarchical clustering was performed. The clustered proteins were those differentially expressed in at least one of the two pairwise comparisons. As shown in Fig. 6, a total of 265 proteins were classified into five expression clusters. Proteins in each cluster were then subjected to gene ontology (GO) annotation using DAVID software. Cluster 1 contained 73 proteins enriched for biological processes related to the electron transport chain, oxidation reduction, protein transport, oxidative phosphorylation, etc. Cluster 2 included 21 proteins involved in angiogenesis and blood vessel morphogenesis. Cluster 3 (57 proteins) was related to heterocycle biosynthetic processes, pigment biosynthetic processes, macromolecular complex assembly, etc. The 54 proteins of Cluster 4 were related to microtubule-based processes, nuclear division, mitosis, chromosome segregation, etc. Cluster 5 (60 proteins) was enriched for proteins involved in oxidation reduction, transmembrane transport, protein localization, etc. Details of the GO annotation of differentially expressed proteins in the five clusters are listed in Supplementary Information Table S4.
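For illustration, a minimal sketch of such a clustering step (Python with SciPy; the random expression matrix, the linkage method, and the choice of five flat clusters are illustrative assumptions, not the study's exact settings):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical matrix: rows = differentially expressed proteins,
# columns = log2 ratios in the two comparisons (MII G/GVO, MII G/MII B).
log2_ratios = rng.normal(size=(265, 2))

# Average-linkage hierarchical clustering on Euclidean distances,
# cut into five flat clusters as in the analysis described above.
Z = linkage(log2_ratios, method="average", metric="euclidean")
clusters = fcluster(Z, t=5, criterion="maxclust")

for c in range(1, 6):
    print(f"cluster {c}: {np.sum(clusters == c)} proteins")
```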
Figure 5. Scatterplots of log2-transformed iTRAQ ratio data (left: MII G vs. GVO; right: MII G vs. MII B). The x-axis shows the log10 of the protein intensity; the y-axis shows the log2 ratios between MII G and GVO or between MII G and MII B, respectively. Red and green dots represent up-regulated and down-regulated proteins, respectively; gray dots represent proteins without significant change.

Analysis of KEGG pathways related to differentially expressed proteins. To further reveal the signaling pathways related to the maturation of buffalo oocytes, the KEGG pathways of the differentially expressed proteins were analyzed. As shown in Fig. 7, the 173 proteins differentially expressed between MII G and GVO were related to fructose and mannose metabolism, oxidative phosphorylation, the cell cycle, and tight junctions. The 146 proteins differentially expressed between MII G and MII B were involved in oxidative phosphorylation, the ribosome, and valine, leucine, and isoleucine degradation. Thus, oxidative phosphorylation was the common pathway, suggesting that highly expressed proteins related to the oxidative phosphorylation pathway may play an important role during the in vitro maturation of buffalo oocytes.
Analysis of gene expression by quantitative RT-PCR.
To further examine the differentially expressed proteins in buffalo oocytes during in vitro maturation, quantitative RT-PCR was performed to check the expression of five genes (KIF20A, KIF2C, MYH10, MYH9, and DYNLL2). As shown in Supplementary Information Figure S3, the relative expression patterns of the five genes in GVO and MII G oocytes were not in accordance with the results of the proteomics analysis, suggesting that post-transcriptional mechanisms may be involved in regulating the expression of these proteins.
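For illustration, a minimal sketch of relative expression by the common 2^(−ΔΔCt) method (Python; the method choice, reference gene, and Ct values are assumptions — the study does not state its exact calculation):

```python
import numpy as np

# Hypothetical Ct values (target gene and a reference gene) in the two groups.
ct = {
    "GVO":   {"KIF2C": 26.1, "reference": 18.0},
    "MII_G": {"KIF2C": 24.9, "reference": 18.2},
}

# Delta Ct = Ct(target) - Ct(reference); Delta-Delta Ct vs. the GVO group.
d_ct = {g: v["KIF2C"] - v["reference"] for g, v in ct.items()}
dd_ct = d_ct["MII_G"] - d_ct["GVO"]

fold_change = 2.0 ** (-dd_ct)
print(f"relative KIF2C expression, MII G vs GVO: {fold_change:.2f}-fold")
```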
Discussion
In vitro maturation of oocytes is an important technology for providing matured oocytes that are utilized in IVF, in vitro production (IVP), and somatic cell nuclear transfer. Over the past decade, although many efforts have been made to improve the efficiency of IVM, the developmental competence of oocytes matured in vitro remains lower than that of oocytes matured in vivo 1,2. Elucidation of the molecular mechanisms regulating oocyte maturation and identification of potential predictors of oocyte developmental competence will help to improve the quality of oocytes matured in vitro. Proteomic approaches allow us to monitor dynamic changes at the protein expression level and to identify proteins that are functionally associated with a particular cell or tissue phenotype. Knowledge of the protein expression changes that occur during oocyte maturation will provide new insights into the molecular mechanisms regulating oocyte maturation.
In the present study, we applied an iTRAQ-based quantitative proteomic strategy to study the protein expression profile of buffalo oocytes during in vitro maturation; a total of 3,763 proteins were identified, representing the largest buffalo oocyte proteome dataset so far. Many of the identified proteins are known to be essential for oogenesis and embryo development and were previously detected in other species, such as NLRP5/MATER, OOEP/FLOPED, PADI6, PRDX1, GDF9, NPM2, and TLE6. Novel proteins were also identified in buffalo oocytes, such as ZAR1, BMP15, DNMT1, and PTPN1 (undetected in ref. 15). However, some oocyte-specific proteins (STELLA, SMARCA4, DPPA3, and PMS2) reported in bovine 13-15 were not detected in the current study. A total of 1,264 proteins (34%) were annotated as "uncharacterized proteins" in our dataset.
As shown in Figure S2, more than half of the significantly enriched categories in buffalo oocytes were related to metabolic pathways, including glycolysis, oxidative phosphorylation, the tricarboxylic acid (TCA) cycle, fatty acid metabolism, lipid biosynthesis, and steroid metabolism. These results indicate that oocytes may require different metabolites (such as amino acids, purines, and fatty acids) to support their growth and maturation. During the maturation process, oocytes synthesize and store large amounts of mRNA and proteins in the cytoplasm, and these materials are utilized later in embryo development until the embryonic genome is activated 20. Although transcription decreases in oocytes after GVBD, polyadenylated RNA synthesis is still observed in fully grown mouse oocytes 21,22. In this study, a large number of proteins were involved in mRNA processing, indicating that these proteins may be essential for maintaining growth and genome activation in buffalo oocytes. In addition, large numbers of proteins were involved in the cell cycle, consistent with the growth and meiotic maturation occurring during maturation. These results indicate that cell cycle progression in buffalo oocytes may be driven by the regulation of protein expression.
An important purpose of this work was to investigate the differences in protein expression between buffalo oocytes of different maturity or quality. A total of 173 differentially expressed proteins were identified between immature and mature oocytes, and 146 differentially expressed proteins were identified between competent and incompetent matured oocytes. Among these differentially expressed proteins, those involved in the oxidative phosphorylation pathway and enriched in mitochondria were up-regulated in MII G oocytes compared to GVO and MII B oocytes. Oxidative phosphorylation (OXPHOS), a main pathway occurring in mitochondria, is an important physiological process for generating ATP 23. An increase in transcripts/proteins related to the OXPHOS pathway would result in higher ATP synthesis. During oocyte maturation, mitochondria produce ATP mainly through the OXPHOS pathway, and this ATP is used for spindle organization, chromosomal segregation, organelle redistribution, protein transport, and other cellular processes 24,25, which are essential for oocyte maturation 26. The ATP content of morphologically normal or in vivo-derived oocytes is significantly higher than that of poor-quality or in vitro-matured oocytes 27,28. Moreover, oocytes with high ATP content can give rise to higher rates of morula and blastocyst development 27,29. In addition, the transcripts and encoded proteins associated with OXPHOS were down-regulated during oocyte maturation, which may reflect a decrease in energy production and utilization in MII oocytes 30. Thus, the level of OXPHOS may be related to oocyte quality, and mitochondrial activity may have a role in regulating the IVM of buffalo oocytes.
In the present study, a large group of proteins associated with protein transport and transmembrane transport were also found to be up-regulated in MII G oocytes compared to GVO and MII B oocytes, indicating that substrate transport is necessary for buffalo oocyte meiotic maturation. The RAB family proteins included RAB2A, RAB3A, and RAB21, which are associated with signal transduction and intracellular vesicular transport 31. RAB3A has been detected in mouse oocytes during meiotic maturation and is implicated in the regulation of cortical granule migration, polarity establishment, and asymmetric division 32. SEC61α1 and SEC61α2 are the two subtypes of SEC61α, which localize to the ER and the ER-Golgi intermediate compartment. SEC61α, together with the two other subunits SEC61β and SEC61γ, comprises the SEC61 complex, which functions in protein translocation across the endoplasmic reticulum (ER) membrane 33. In addition, the members of the solute-carrier (SLC) superfamily, SLC12A6, SLC35B2, and SLC25A15, are membrane-bound transporters that play essential roles in transporting a variety of substrates (such as amino acids, glucose, sugars, and inorganic cations and anions) across cell membranes 34.
The three major classes of molecular motors are kinesins, dyneins, and myosins 35. They are required for a series of cellular events, including chromosome segregation, spindle assembly, migration and anchoring, redistribution of cytoplasmic organelles, mRNA positioning, and cortical reorganization 36. In the present study, several molecular motor proteins (KIF20A, KIF2C, MYH10, MYH9, and DYNLL2) were found to be up-regulated in MII oocytes compared with GV oocytes, suggesting that they may have potentially important roles in the maturation of buffalo oocytes. For example, KIF2C (a member of the kinesin-13 family) is an ATP-dependent microtubule depolymerase involved in the resolution of incorrect microtubule attachments in mitosis 37. Studies in mouse oocytes showed that knockdown of KIF2C led to a delay in chromosome congression and to meiosis I arrest but did not prevent bipolar spindle assembly 37. Similarly, KIF20A (a kinesin-6 family member, also named MKlp2) was found to be involved in cytokinesis 38. Moreover, KIF20A was shown to localize to oocyte microtubules and to be involved in polar body extrusion during mouse oocyte maturation 39. Inhibition of KIF20A in porcine oocytes led to failure of polar body extrusion but did not affect spindle morphology 38. The myosin superfamily members MYH10 and MYH9 were also found to be involved in cell migration, adhesion, vesicle movement, and cytokinesis 40. Recently, Simerly 41 found that MYH10 and MYH9 are crucial factors for meiotic maturation, fertilization, and mitosis in mouse oocytes and embryos. Inactivation of MYH10 or MYH9 led to mouse embryonic lethality 42.
DYNLL1 is one of the two cytoplasmic dynein light chains, which engage in various cellular processes, such as mitosis 43, chromosome segregation 44, mRNA positioning 45,46, and vesicle transport 43,47. Racedo 48 revealed that higher mRNA expression of DYNLL1 was related to the developmental competence of bovine oocytes. Yao 49 revealed that dynein light chain is a regulatory gene related to follicular development and the developmental competence of bovine oocytes. Therefore, all of these motor proteins may have crucial roles in maintaining proper nuclear and cytoplasmic maturation of oocytes.
Ribosomes are ribonucleoprotein complexes comprising RNA and proteins, and their major function is protein synthesis. Protein synthesis is essential for oocyte meiotic maturation and subsequent embryo development 50 . The high developmental capacity of oocytes is related to their high rates of protein synthesis 51 . A previous study indicated that differentially expressed genes engaged in protein biosynthesis were more abundant in competent oocytes 52 . In this study, a large number of proteins (RPL30, RPL18A, RPL13A, RPL34, RPL26, RPS10, and RPL4) enriched in the ribosome were found to be up-regulated in MII G oocytes compared to MII B oocytes, indicating that protein synthesis is more active in MII G oocytes and that the level of protein synthesis in oocytes might be related to their developmental competence.
Furthermore, several proteins (UHRF1, UBE2C, USP28, UBE2H, UBE2L3, and UBE2K) related to the ubiquitin-proteasome proteolytic pathway were found to be up-regulated in MII G oocytes in the present study. The ubiquitin-proteasome proteolytic pathway (UPP) is the main route for intracellular protein degradation in eukaryotic cells 53 . Ubiquitin attaches to substrate proteins, which are subsequently degraded by the 26S proteasome complex 53 . Numerous meiotic proteins involved in the regulation of the cell cycle are degraded by the UPP, such as cyclin B1, Cdc20, Cdc25, Mos, and securin 54 . In rat oocytes, proteasomal catalytic activity is essential for the inactivation of MPF and the completion of the first meiosis 55 . Moreover, Huo et al. demonstrated that inhibition of the UPP prevented cyclin B1 degradation and inhibited PB2 extrusion and pronuclear formation 56 . Degradation of cyclin B1 and securin mediated by the UPP is required for the disjunction of pairs of homologous chromosomes during the first meiotic division in mouse oocytes 57 . Therefore, the UPP may play an essential role in the regulation of buffalo oocyte meiotic maturation.
Conclusions
In conclusion, the expression levels of proteins in buffalo oocytes are related to their physiological states, and the proteins up-regulated in competent MII oocytes compared to GV and incompetent MII oocytes are related to chromosome segregation, microtubule-based processes, protein transport, the ribosome, the UPP, and OXPHOS. Oxidative phosphorylation activity may be important for meiotic resumption and competence acquisition of buffalo oocytes during maturation.
Methods
Oocyte collection. Buffalo ovaries were collected from the local slaughterhouse and transported to the laboratory in physiological saline at 25 °C. After washing in saline solution, cumulus-oocyte complexes (COCs) were aspirated from 2 to 6 mm follicles, and COCs with compact cumulus cell layers were selected for in vitro maturation (IVM). COCs were then cultured in droplets of TCM 199 medium supplemented with 10% fetal calf serum and antibiotics at 38.5 °C in an atmosphere of 5% CO2 for 22-24 h. GV and MII oocytes, from which cumulus cells were removed by vortexing and pipetting, were washed three times in PBS buffer and stored at −80 °C until use. According to morphological evaluation, oocytes were divided into three groups: I, immature oocytes with an intact germinal vesicle and multilayered, compacted cumulus (GVO); II, competent matured oocytes with homogeneous cytoplasm, at least three cumulus layers, and the first polar body (MII G); III, incompetent matured oocytes with heterogeneous cytoplasm, incompact and heterogeneously pigmented, and surrounded by few cumulus cells (MII B).
Protein extraction and separation. GV, competent MII and incompetent MII oocytes were lysed in 20 μL of lysis buffer (8 M urea, 50 mM IAA, and 1% (v/v) protease inhibitor cocktail). The samples were vortexed for 1 min and incubated on ice for 30 s, for a total of 15 cycles. The lysates were centrifuged at 13,800 × g for 3 min and the supernatants were collected. After addition of SDS buffer, the proteins of the oocyte lysates (20 μg of each sample) were separated by 10% SDS-PAGE and stained with Coomassie brilliant blue. The gels were scanned with a Scanjet imaging system (HP Scanjet G4050), and the gel images were analyzed with Scion Image (http://rsb.info.nih.gov/nihimage/).
In-gel tryptic digestion. Gel lanes containing protein were sliced into 1 mm³ pieces. Gel pieces were washed with 50 mM NH4HCO3, 30% ACN and dried in a SpeedVac, followed by in-gel digestion with trypsin at 37 °C for 16 h. NH4HCO3 was added to a final concentration of 50 mM to stop the digestion. Peptides were extracted from the gel pieces and desalted as described. Eluted peptides were dried in a SpeedVac and stored at −80 °C until use.
Peptide labeling with iTRAQ reagents. The peptides from the two samples being compared were resuspended and labeled with iTRAQ tags 116 and 117, respectively. Tubes were incubated for 2 h at room temperature. The reaction was stopped by adding 120 μL H2O, followed by centrifugation at 13,800 × g for 1 min. The samples were then pooled into one fresh tube and dried in the SpeedVac.
Peptide identification by nano UPLC-MS/MS. Peptide fractions were suspended in buffer A (0.1% FA, 2% ACN) and analyzed on an LTQ-Orbitrap Velos mass spectrometer (Thermo Fisher Scientific, San Jose, CA). Peptide mixtures were injected into a capillary column (75 μm × 15 cm) packed with 3 μm C18 material and separated on a NanoAcquity ultra-performance liquid chromatography (UPLC) system (Waters, Milford, MA). Peptides were eluted with a linear gradient of 8-40% buffer B (0.1% FA in ACN) at a flow rate of 0.3 μL/min over 60 min. The mass spectrometer was operated in positive ion mode (source voltage 2 kV) in a data-dependent manner. Full MS scans were performed in the Orbitrap over the range of 400-1,800 m/z at a resolution of 30,000. For MS/MS scans, the 10 most abundant ions with multiple charge states were selected for higher-energy collisional dissociation (HCD) fragmentation following each MS full scan. The isolation window was set to 2.0 m/z, the dynamic exclusion to 35 s, and the normalized collision energy to 40%.
Data processing and protein identification. The MS/MS spectra were searched with MaxQuant (version 1.5.1.2, Martinsried, Germany) against the UniProt Bos taurus protein database (version 13122013, 24,210 protein sequences). The false discovery rate (FDR) was estimated using a target-decoy strategy, and proteins and peptides were filtered at FDR < 0.01. The enzyme parameter was limited to semi-tryptic peptides with a maximum of 2 missed cleavages. Peptides with at least six amino acids were accepted, and at least one identified peptide was required for protein identification. The peptide precursor mass tolerance was 20 ppm, and the fragment mass tolerance was 0.1 Da. Oxidation of methionine (+15.9949 Da) was selected as a variable modification; carbamidomethylation of cysteine (+57.0215 Da), iTRAQ 4-plex (K) and iTRAQ 4-plex (N-term) were set as fixed modifications.
Protein quantification. Proteins with at least one unique peptide were validated and selected for quantitative analysis. Only peptide-spectrum matches (PSMs) with complete reporter ion series were allowed, and the reporter ion intensities were extracted from the MGF files. Isotopic correction was applied to the reporter ion intensities according to the correction matrix provided by the manufacturer (Applied Biosystems). Peptide intensities were calculated by averaging the intensities of all high-confidence PSMs of the peptide. The ratio of a protein was computed as the geometric mean of the ratios of the unique peptides belonging to that protein. The Significance B approach was used to determine p-values. Proteins with ratios greater than or equal to 2 or less than 0.5 (p-value less than 0.05, with at least two unique peptides) were considered differentially expressed.
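The quantification step described above can be sketched as a short script. The data layout, intensity values and helper names below are illustrative assumptions, not part of the original pipeline; only the geometric-mean rule and the fold-change cutoff come from the text.

```python
import math

# Assumed layout: each unique peptide carries the averaged reporter-ion
# intensities of its high-confidence PSMs for the two iTRAQ channels.
peptides = [
    {"protein": "RPL30", "intensity_116": 1.8e5, "intensity_117": 4.1e5},
    {"protein": "RPL30", "intensity_116": 2.2e5, "intensity_117": 4.6e5},
    {"protein": "KIF2C", "intensity_116": 9.0e4, "intensity_117": 1.1e5},
]

def protein_ratios(peptides):
    """Protein ratio = geometric mean of the ratios of its unique peptides."""
    per_protein = {}
    for pep in peptides:
        ratio = pep["intensity_117"] / pep["intensity_116"]
        per_protein.setdefault(pep["protein"], []).append(ratio)
    return {
        prot: math.exp(sum(math.log(r) for r in ratios) / len(ratios))
        for prot, ratios in per_protein.items()
    }

for prot, r in protein_ratios(peptides).items():
    # Fold-change cutoff used in the paper: >= 2 or < 0.5
    flag = "differential" if (r >= 2.0 or r < 0.5) else "unchanged"
    print(f"{prot}: ratio = {r:.2f} ({flag})")
```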
Real-time quantitative RT-PCR. Total RNA was extracted from five oocytes at each stage (GVO, MII G) using the Cells-to-cDNA II Kit (Ambion Co., USA). cDNAs were reverse-transcribed using the Takara RNA PCR kit (Takara) according to the manufacturer's instructions. Quantitative real-time RT-PCR analysis was performed on an ABI 7500 PRISM system (Applied Biosystems, Singapore). The reaction system (20 μL) consisted of 1 μL cDNA, 10 μL FastStart Universal SYBR Green Master (ROX) Mix, 0.5 μL each of forward and reverse primers (10 nM) and 8.5 μL ddH2O. The relative expression levels of target genes were calculated using the 2^−ΔΔCt method. Three replicates were carried out for each gene using different sets of oocytes. The primer sequences used for the qRT-PCR analysis are shown in Supplementary Information Table S5.
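For reference, a minimal sketch of the 2^−ΔΔCt calculation follows; the Ct values in the example are invented placeholders, not measurements from this study.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative expression of a target gene (sample vs. control),
    normalized to a reference gene, by the 2^-ΔΔCt method."""
    delta_ct_sample = ct_target_sample - ct_ref_sample     # ΔCt of sample (e.g. MII G)
    delta_ct_control = ct_target_control - ct_ref_control  # ΔCt of control (e.g. GVO)
    delta_delta_ct = delta_ct_sample - delta_ct_control    # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical example: target gene in MII G oocytes relative to GV oocytes
print(relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                          ct_target_control=26.3, ct_ref_control=18.2))  # 4.0
```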
Bioinformatics and statistical analysis. GProX 58 was used for hierarchical clustering analysis. Gene Ontology and KEGG pathway analyses were performed with DAVID 6.7 (http://david.abcc.ncifcrf.gov/). SPSS 17.0 was used to evaluate the statistical significance of mean values. Probability values less than 0.05 were considered statistically significant.
CHANGES OF THE OPTICAL PROPERTIES OF TOP-GRADE FLOUR (SEMOLINA) FROM DURUM WHEAT DURING ITS RIPENING
Using a rapid method of digital image analysis on the developed scanning flour analyzer, the optical properties of flour (semolina) obtained under laboratory and production conditions from durum wheat of three harvest years (2017–2019), namely the «yellowness» indicator and the color characteristic in the blue part of the spectrum, were determined. The semolina color was also evaluated with a Konica Minolta CR-410 colorimeter. It was established that the «yellowness» indicator and the color characteristic in the blue part of the spectrum did not change within the first 5 to 6 days after grinding. For all samples, these indicators changed in the period from 6 to 20 days after grinding: the «yellowness» indicator decreased by 25 to 40 relative units, while the color characteristic in the blue part of the spectrum increased on average by 133.75 relative units. Over the next three months, there was no change of color (by either indicator). The experimental work showed that the change of the optical properties of flour (semolina) during ripening depends on its carotenoid content. The correlation between the «yellowness» indicator of flour (semolina) and its carotenoid content is characterized by a high approximation coefficient. The dependence of the color characteristic of flour in the blue part of the spectrum on the carotenoid content is characterized by an approximation coefficient of 0.9358 and is described by a polynomial equation; it shows that, at a low carotenoid content, this characteristic is higher by an average of 1100 relative units compared to the color of samples with a carotenoid content from 0.70 to 1.21 mcg/g. Moreover, during storage the optical properties of the flour variety with the lowest carotenoid content remained practically the same. During 78 days of storage, starting from the eleventh day after grinding, there was no significant change of the color characteristics of the industrial flour samples studied; the maximum variation was only 5–8 times the average repeatability of the measurement results.
Introduction
The requirements for the design and implementation of a traceability system in the feed and food production chain are regulated by GOST R ISO 22005-2009 [1], which came into force in January 2011. Ensuring the quality and safety of food products [2] is one of the main tasks of implementing a traceability system. Standards for implementing the system in the production of certain types of food products have already been developed, for example, in the confectionery and fish production chains. At KorolevPharm LLC, the introduction of traceability contributes not only to the release of high-quality biological food additives, but also to their safety. One of the goals of implementing traceability in this company is to meet customer requirements.
The development and implementation of a traceability system is relevant in the production of group A pasta, as it meets the priority areas for the development of science in terms of the introduction of digital control methods. It is aimed at the production of high-quality pasta that meets the concept of state policy in the field of healthy nutrition of the population, and at the exclusion of counterfeit products. The insufficient volume of durum wheat production and its price [3,4,5] are the reasons for falsification.
The main requirement for creating a traceability system is the ability to receive data quickly and accurately throughout the supply chain. An express method of product quality control based on digital technologies will provide a solution to the problem of supplying the population with healthy food products.
In order to maintain the quality of group A pasta that meets the requirements of consumers and to exclude adulterated products, an instrumental method has been developed for monitoring durum wheat flour for the presence of soft wheat flour impurities by the optical properties of the flour. The method is based on obtaining optical characteristics from the results of mathematical analysis of a digital image of the studied flour and comparing them with the optical characteristics of standards with a fixed content of soft wheat flour. A patent was received for the method [6,7,8]. The analysis time is 3 to 4 minutes, and there are no analogues [9,10,11]. A method for analyzing images of durum wheat grain (Triticum durum) without destroying its structure, used to assess the morphological properties of the grain, was described in [11].
Studies conducted in 2018 [12] to determine the feasibility and effectiveness of introducing optical properties (color characteristics) determined by the digital image method into the evaluation of products at individual elements of the traceability chain in the production of group A pasta in three regions of Russia established the following: a confounding factor for the introduction of an effective system for assessing the quality of group A pasta by its optical properties is the process of semolina ripening, which changes its quality as reflected in its optical properties, namely the «yellowness» indicator and the color characteristic in the blue part of the spectrum.
The goal of this work is to study the change in the optical properties of flour (semolina) obtained by laboratory grinding of durum wheat, as well as flour produced industrially, during its ripening. According to the literature, the color of flour becomes lighter during ripening in storage [13]. The reason for this color change is the oxidation of the carotenoids contained in the flour. Carotenoids are substances colored yellow or orange that belong to the «pigments» group [14,15]. These pigments oxidize in the presence of a large amount of oxygen and transform into oxidized, colorless forms. According to the literature [15,16], the duration of flour ripening depends on the storage conditions and on the quality of the grain itself. The longer the period between harvesting and grinding of the grain, the faster the flour ripens. According to L. Ya. Auerman [17], wheat flour at a temperature of (20 ± 5) °C ripens within 1.5-2.0 months.
Objects and methods
The objects of study are flour (semolina) samples obtained under laboratory conditions by grinding durum wheat grain of three harvest years (2017-2019), as well as samples from production grinding (2018). The durum wheat flour (semolina) for pasta obtained by laboratory grinding meets, in its physico-chemical parameters, the requirements for durum wheat flour regulated by the standard. The grindings were carried out on durum wheat whose quality indicators are presented in Table 1. In its main quality indicators, the grain met the requirements established by the standards.
The optical properties (color characteristics) of flour (semolina), namely the «yellowness» index calculated from the basic colors and the color characteristic in the blue part of the spectrum, are determined from a digital image of the flour. The measurements were carried out on an experimental sample of a scanning analyzer (CAM) designed to obtain a digital image of the studied sample [6,7,8]. A standard flatbed scanner of the Epson Perfection type with a CCD sensor was used as the main unit. The experimental sample was developed at the VNIIZ in conjunction with the Scientific Research Center «Intelligent Scanning Systems».
Flour samples were prepared for measurement on the CAM according to a special procedure using a cuvette of original design. The measurement is carried out as follows: the cuvette with flour is mounted on a template (a latch placed on the surface of the scanner's exposure glass), a digital image is created and then transmitted to the computer. The digital image of the flour is processed using specially created software (SSW) for calculating the color characteristics.
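A minimal sketch of how such color characteristics could be computed from a scanned image is given below. The exact formulas implemented in the SSW are not published here, so the yellowness formula in this sketch (mean of the red and green channels minus the blue channel) is an assumption for illustration only.

```python
from PIL import Image
import numpy as np

def color_characteristics(image_path):
    """Compute two illustrative color characteristics from a scanned flour
    image: a 'yellowness' index from the basic colors and the mean intensity
    of the blue channel. Both formulas are assumptions, not the SSW's."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()
    yellowness = (r + g) / 2.0 - b   # high for yellow, low for bleached flour
    blue_characteristic = b          # grows as carotenoids oxidize
    return yellowness, blue_characteristic
```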
The determination of the main quality indicators of the grain and semolina, as well as of the carotenoid pigment content of the semolina, was carried out according to the methods regulated by the standards. Laboratory grinding of the durum wheat grain was carried out at the VNIIZ laboratory stand according to an extensive technological scheme in accordance with the «Rules for the organization and conduct of the technological process».
Results and discussion
As mentioned above, the duration of the ripening process depends on many factors: the period between harvesting and grinding of the grain, the carotenoid content, the storage conditions, and the grain quality. These factors served as the basis for planning and conducting an experiment to study the influence of ripening on the optical properties of top-grade flour (semolina).
The patterns of change of the optical properties (the «yellowness» index and the color index in the blue part of the spectrum) of durum wheat flour (semolina) obtained by laboratory grinding of 1st class durum wheat grain of the 2017 harvest, carried out in 2018, are shown in Figure 1 and Figure 2.
Analysis of the curves showed the following: the color during the first 5-6 days remains practically unchanged (the indicators lie within the measurement error); over the next 6 to 12 days, a sharp decrease of the «yellowness» indicator is observed for all analyzed samples, i.e. the color becomes less yellow. The change of the color characteristic in the blue part of the spectrum for all 4 samples averaged 133.75 rel. units; it increased, and it is precisely this characteristic by which the lightening of the samples, i.e. the transition of carotenoid pigments to oxidized colorless forms, can be detected. Over three months of further observation, the color change (by both characteristics) remained within the established measurement errors. These time frames served as a guide for the 2019 experiment on top-grade flour (semolina) formed from grain of the first and second quality grades obtained by laboratory grinding; the grain characteristics are presented in Table 1. The color characteristics of the top-grade flour (semolina) are given in Table 2. The semolina obtained by grinding the grain of sample № 2z, with the lowest yield among the studied semolina samples, has the lowest «yellowness» indicator. Analyzing the data in the table, we see that the flour samples differ in yield and ash content, despite the fact that the durum wheat grain is almost identical in quality (vitreousness, test weight) and the grinding of the grain samples was carried out according to one scheme.
For a proper analysis of the ripening process, top-grade flour (semolina) with a yield of 60% was formed for each grind of durum wheat. The formation of the flour varieties (semolina) with a yield of 60% and the measurement of the color characteristics by digital image analysis were carried out on the 6th day after grinding of each grain sample. The carotenoid content of the flour (semolina) was determined on the 10th day. The carotenoid pigments contained in flour (semolina) impart to pasta the desired amber-yellow color; this is why the research of individual breeding institutes is devoted to the selection of durum wheat with a high content of carotenoid pigments in the grain [19,20].
The dependence of the «yellowness» indicator of flour (semolina) on its carotenoid content is presented in Figure 3.
The interrelation between the «yellowness» indicator of top-grade flour (semolina) with a yield of 60% and its content of carotenoid pigments is characterized by a high approximation coefficient.
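A sketch of how such a dependence and its approximation coefficient (R²) can be computed is shown below. The (carotenoid, yellowness) pairs are hypothetical placeholders for illustration; the real measurements are those behind Figure 3.

```python
import numpy as np

# Hypothetical (carotenoid content, yellowness) pairs, for illustration only
carotenoids = np.array([0.55, 0.70, 0.85, 1.00, 1.10, 1.21])  # mcg/g
yellowness  = np.array([38.0, 45.0, 52.0, 58.5, 63.0, 68.0])  # rel. units

# Linear fit, matching the linear dependence reported for «yellowness»
coef = np.polyfit(carotenoids, yellowness, deg=1)
pred = np.polyval(coef, carotenoids)
ss_res = ((yellowness - pred) ** 2).sum()
ss_tot = ((yellowness - yellowness.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot   # the "approximation coefficient"
print(f"slope={coef[0]:.1f}, intercept={coef[1]:.1f}, R^2={r_squared:.4f}")
```

For the color characteristic in the blue part of the spectrum, the same call with deg=2 would give the polynomial fit described in the abstract.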
After 10 days of storage, both color and pigment content were measured on the same day. The results of the experiment are presented in Table 3.
The table shows that the carotenoid content of the studied samples decreased on average by 0.14 μg/g (with a repeatability of the measurement results of 0.03 μg/g); the color characteristics of the flour (semolina) grade with the lowest carotenoid content remained almost the same during storage (within the repeatability range); and the decrease of the «yellowness» indicator over 10 days of storage is insignificant, lying in the range from 7.5 to 10. In 2018, an additional experiment was conducted to identify changes in the color characteristics of top-grade flour (semolina) provided to us by individual enterprises with an exact date of flour production. Samples of flour were received on day 10 after production. With regard to the presence of soft wheat flour, all 4 samples met the requirements regulated by GOST 9353-2016, according to which up to 15% of soft wheat is allowed in durum wheat processing. Using the instrumental determination method (patent), 10% of soft wheat flour was determined in the samples (based on comparison with flour standards with different soft wheat contents). The studied durum wheat flour (semolina) for pasta meets, in all respects, the requirements for durum wheat flour regulated by the standard (Table 4). On the 11th day after production, all 4 samples were checked for color characteristics. It was found that 3 of the studied samples did not meet the color standards (at least 47.0 rel. units) developed by VNIIZ [21] for top-grade flour (semolina) of industrial grinding. The calculation is based on the interrelation between the readings of the CR-410 colorimeter and the scanning flour analyzer, which we use to obtain a digital image of the flour, described by the equation y = 3.378x - 55.78, where y is the «yellowness» indicator of the scanning analyzer and x is the CR-410 reading [21]. At present, some Russian enterprises producing flour from durum wheat use a device of foreign manufacture, the CR-410 Konica Minolta colorimeter, for flour color evaluation. The colorimeter is manufactured by Konica Minolta Sensing, a Japanese company that is a leader in the development and manufacture of precision measuring equipment for color determination and control. In 2008 the CR-400 and CR-410 colorimeters were included in the State register of measuring equipment of the Russian Federation. The CR-410 Konica Minolta colorimeter allows user-defined formulas to be entered for evaluating and calculating the color of any object. For the purpose of evaluating the color of durum wheat flour for pasta production, flour manufacturers were advised to determine this color by the «yellowness» indicator. For this indicator on the colorimeter, VNIIZ has developed color standards (draft) for top-grade flour (semolina) obtained from durum wheat grain (durum) and intended for pasta production.
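The VNIIZ conversion equation quoted above can be applied directly; a minimal sketch follows (the example reading is hypothetical).

```python
def yellowness_from_cr410(x):
    """Convert a CR-410 colorimeter reading x to the «yellowness» indicator y
    of the scanning flour analyzer, using the interrelation reported by
    VNIIZ [21]: y = 3.378x - 55.78."""
    return 3.378 * x - 55.78

# Hypothetical example against the 47.0 rel. units standard for top-grade semolina
reading = 31.0
y = yellowness_from_cr410(reading)
print(f"yellowness = {y:.1f} rel. units -> {'pass' if y >= 47.0 else 'fail'}")
```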
The results of the changes of the optical properties (color characteristics) of flour produced at flour mills over three months are shown in Table 5.
Statistical processing of the results presented in Table 5 showed that the change of the color characteristics of the flour (semolina) over 78 days of storage is insignificant for both the «yellowness» indicator and the color characteristic in the blue part of the spectrum. The maximum variation is 5 times the average repeatability of the measurement results (7.5) for the blue part of the spectrum, and 8 times the average repeatability for the «yellowness» indicator, which is 1.13 rel. units. The results of the experiment conducted on the industrial flour samples, whose color characteristics were measured only from the 11th day after production, confirmed that flour ripening affects the color in the first 6-12 days (Figures 1, 2).
Conclusions
As a result of the studies conducted to reveal the influence of the ripening process on the optical properties of top-grade flour (semolina) obtained under laboratory and industrial conditions from durum wheat grain of various quality from three harvest years, the following data were collected: experimental data on the physicochemical properties of the durum wheat grain, the analysis of which showed that the studied grain corresponds to the requirements regulated by the standard; experimental data on the optical properties (the «yellowness» indicator and the color characteristic in the blue part of the spectrum) of the formed varieties of top-grade flour (semolina) with a yield of 60%, as well as of the industrial flour, determined by the rapid method of digital image analysis using the developed scanning analyzer and by the Konica Minolta CR-410 colorimeter; experimental data on the carotenoid pigment content of the formed varieties of top-grade flour (semolina) with a yield of 60%; and patterns of change of the optical properties of flour (semolina) depending on its ripening process. From the analysis of the obtained data the following conclusions can be drawn. The change of the optical properties of top-grade flour (semolina) obtained by grinding grain of the 2017 harvest starts on the 6th day and lasts until the 20th day; there was no change over the next three months. The simulation of the flour ripening process conducted with the formed varieties of top-grade flour (semolina) with a yield of 60%, obtained by laboratory grinding of the 2019 harvest grain, established the following: the dependences between the color characteristics of the flour (semolina) and its carotenoid content are characterized by high approximation coefficients, the dependence being linear for the «yellowness» indicator and described by a polynomial equation for the color characteristic in the blue part of the spectrum; after 10 days of storage, the color characteristics of the flour (semolina) with the lowest carotenoid content remained practically the same; after 10 days of storage, the decrease of the carotenoid content in the studied samples exceeded the repeatability of the carotenoid determination results by a factor of 5; and monitoring of the color characteristics of the studied samples with a carotenoid content of 0.9-1.0 μg/g over four months showed that the ripening process was still continuing. The change of the color characteristics of the industrial flour samples during 78 days of storage, starting from the 11th day after grinding, is insignificant, with a maximum variation of 5-8 times the average repeatability of the measurement results.
Uncertainty Quantification and Deep Ensembles
Deep Learning methods are known to suffer from calibration issues: they typically produce over-confident estimates. These problems are exacerbated in the low data regime. Although the calibration of probabilistic models is well studied, calibrating extremely over-parametrized models in the low-data regime presents unique challenges. We show that deep-ensembles do not necessarily lead to improved calibration properties. In fact, we show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models. This text examines the interplay between three of the most simple and commonly used approaches to leverage deep learning when data is scarce: data-augmentation, ensembling, and post-processing calibration methods. Although standard ensembling techniques certainly help boost accuracy, we demonstrate that the calibration of deep ensembles relies on subtle trade-offs. We also find that calibration methods such as temperature scaling need to be slightly tweaked when used with deep-ensembles and, crucially, need to be executed after the averaging process. Our simulations indicate that this simple strategy can halve the Expected Calibration Error (ECE) on a range of benchmark classification problems compared to standard deep-ensembles in the low data regime.
Introduction
Overparametrized deep models can memorize datasets with entirely randomized labels [48]. It is consequently not entirely clear why such extremely flexible models are able to generalize well on unseen data when trained with algorithms as simple as stochastic gradient descent, although a lot of progress on these questions has recently been reported [8,19,2,31,39,10].
The high capacity of neural network models, and their ability to easily overfit complex datasets, makes them especially vulnerable to calibration issues. In many situations, standard deep-learning approaches are known to produce probabilistic forecasts that are over-confident [16]. In this text, we consider the regime where the size of the training sets is very small, which typically amplifies these issues. This can lead to problematic behaviors when deep neural networks are deployed in scenarios where a proper quantification of the uncertainty is necessary. Indeed, a host of methods [22,30,40,12,37] have been proposed to mitigate these calibration issues, even though no gold standard has so far emerged. Many different forms of regularization techniques [35,48,50] have been shown to reduce overfitting in deep neural networks. Importantly, practical implementations and approximations of Bayesian methodologies [30,44,3,14,27,38,28] have demonstrated their worth in several settings. However, some of these techniques are not entirely straightforward to implement in practice. Ensembling approaches such as drop-out [12] have been widely adopted, largely due to their ease of implementation. Recently, [1] provided a study of different ensembling techniques and described pitfalls of certain metrics for in-domain uncertainty quantification. Subsequent to our work, several articles have also studied the interaction between data-augmentation and calibration issues: the CAMixup approach is proposed as a promising solution in [42], and [47] analyzes the under-confidence of ensembles due to augmentations from a theoretical perspective. In this text, we investigate the practical use of Deep-Ensembles [22,4,25,41,9,16], a straightforward approach that leads to state-of-the-art performances in most regimes. Although deep-ensembles can be difficult to implement when training datasets are large (calibration issues are less pronounced in that regime anyway), the focus of this text is the data-scarce setting, where the computational burden associated with deep-ensembles is not a significant problem.
Contributions:
We study the interaction between three of the most simple and widely used methods for adapting deep learning to the low-data regime: ensembling, temperature scaling, and mixup data augmentation.
• Despite the widely-held belief that model averaging improves calibration properties, we show that, in general, standard ensembling practices do not lead to better-calibrated models. Instead, we show that averaging the predictions of a set of neural networks generally leads to less confident predictions: this is beneficial only in the oft-encountered regime where each network is over-confident. Although our results are based on Deep Ensembles, our empirical analysis extends to any class of model averaging, including sampling-based Bayesian deep learning methods.
• We empirically demonstrate that networks trained with the mixup data-augmentation scheme, a widespread practice in computer vision, are typically under-confident. Consequently, subtle interactions between ensembling techniques and modern data-augmentation pipelines have to be considered for proper uncertainty quantification. The typical distributional shift induced by the mixup data-augmentation strategy influences the calibration properties of the resulting trained neural networks. In these settings, a standard ensembling approach typically worsens the calibration issues.
• Post-processing techniques such as temperature scaling are sometimes regarded as competing methods when comparing the performance of many modern model-averaging techniques. Instead, to mitigate the under-confidence of model averaging, temperature scaling should be used in conjunction with deep-ensembling methods. More importantly, the order in which the aggregation and the calibration procedures are carried out greatly influences the resulting uncertainty quantification. These findings lead us to formulate the straightforward Pool-Then-Calibrate strategy for post-processing deep-ensembles: (1) in a first stage, separately train K deep models; (2) in a second stage, fit a single temperature parameter by minimizing a proper scoring rule (e.g. cross-entropy) on a validation set. In the low-data regime, this simple procedure can halve the Expected Calibration Error (ECE) on a range of benchmark classification problems when compared to standard deep-ensembles. Although straightforward to implement, to the best of our knowledge this strategy had not been investigated in the literature prior to our work.
Background
Consider a classification task with C ≥ 2 possible classes Y ≡ {1, . . . , C}. For a sample x ∈ X, the quantity p(x) ∈ ∆_C = {p ∈ R_+^C : p_1 + . . . + p_C = 1} represents a probabilistic prediction, often obtained as p(x) = σ_SM[f_w(x)] for a neural network f_w : X → R^C with weights w ∈ R^D and softmax function σ_SM : R^C → ∆_C. We set ŷ(x) ≡ arg max_c p_c(x) and p̂(x) ≡ max_c p_c(x).
Augmentation: Consider a training dataset D ≡ {x_i, y_i}_{i=1}^N and denote by y ∈ ∆_C the one-hot encoded version of the label y ∈ Y. A stochastic augmentation process Aug : X × ∆_C → X × ∆_C maps a pair (x, y) ∈ X × ∆_C to another, augmented pair (x', y'). In computer vision, standard augmentation strategies include rotations, translations, and brightness and contrast manipulations. In this text, in addition to these standard augmentations, we also make use of the more recently proposed mixup augmentation strategy [49], which has proven beneficial in several settings. For a pair (x, y) ∈ X × ∆_C, its mixup-augmented version (x', y') is defined as

x' = γ x + (1 − γ) x_J,   y' = γ y + (1 − γ) y_J,

for a random coefficient γ ∈ (0, 1) drawn from a fixed mixing distribution, often chosen as Beta(α, α), and a random index J drawn uniformly from {1, . . . , N}.
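A minimal NumPy sketch of this mixup transformation follows; array shapes and seeds are illustrative assumptions.

```python
import numpy as np

def mixup(x, y_onehot, alpha=1.0, seed=0):
    """Mixup augmentation: convex combination of each (x_i, y_i) with a
    uniformly drawn partner (x_J, y_J), with gamma ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    gamma = rng.beta(alpha, alpha, size=n)
    j = rng.integers(0, n, size=n)                  # random partner indices J
    g_x = gamma.reshape(n, *([1] * (x.ndim - 1)))   # broadcast over features
    g_y = gamma.reshape(n, 1)
    x_aug = g_x * x + (1 - g_x) * x[j]
    y_aug = g_y * y_onehot + (1 - g_y) * y_onehot[j]
    return x_aug, y_aug

# Tiny usage example: 4 samples, 8 features, 3 classes
x = np.random.default_rng(1).normal(size=(4, 8))
y = np.eye(3)[[0, 1, 2, 0]]
x_aug, y_aug = mixup(x, y, alpha=0.2)
```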
Model averaging: Ensembling methods leverage a set of models by combining them into an aggregated model. In the context of deep learning, Bayesian averaging consists of weighting the predictions according to the Bayesian posterior π(dw | D_train) on the neural weights. Instead of finding an optimal set of weights by minimizing a loss function, predictions are averaged. Denoting by p_w(x) ∈ ∆_C the probabilistic prediction associated with sample x ∈ X and neural weights w, the Bayesian approach advocates considering

p̄(x) = ∫ p_w(x) π(dw | D_train).     (1)

Designing sensible prior distributions is still an active area of research, and data-augmentation schemes, crucial in practice, are not entirely straightforward to fit into this framework. Furthermore, the high-dimensional integral (1) is (extremely) intractable: the posterior distribution π(dw | D_train) is multi-modal, high-dimensional, concentrated along low-dimensional structures, and any local exploration algorithm (e.g. MCMC, Langevin dynamics and their variations) is bound to explore only a tiny fraction of the state space. Because of the typically large number of degrees of symmetry, many of these local modes correspond to essentially similar predictions, indicating that it is likely not necessary to explore all the modes in order to approximate (1) well. A detailed understanding of the geometric properties of the posterior distribution in Bayesian neural networks is still lacking, although a lot of recent progress has been made. Indeed, variational approximations have been reported to improve, in some settings, over standard empirical risk minimization procedures. Deep-ensembles can be understood as crude, but practical, approximations of the integral in Equation (1): the high-dimensional integral can be approximated by a simple non-weighted average over several modes w_1, . . . , w_K of the posterior distribution, found by minimizing the negative log-posterior, or some approximation of it, with standard optimization techniques:

p̄(x) ≈ (1/K) Σ_{k=1}^K p_{w_k}(x).     (2)

We refer the interested reader to [34,29,45,3] for different perspectives on Bayesian neural networks. Although simple and not yet well understood, deep-ensembles have been shown to provide highly robust uncertainty quantification when compared to more sophisticated approaches [22,4,25,41].
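A minimal sketch of the deep-ensemble approximation (2); names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def deep_ensemble_predict(logits_per_model):
    """Equation (2): non-weighted average of the K member predictions.
    logits_per_model has shape (K, N, C)."""
    probs = softmax(np.asarray(logits_per_model))  # (K, N, C)
    return probs.mean(axis=0)                      # (N, C)

# Usage: K = 5 member models, N = 10 samples, C = 3 classes
logits = np.random.default_rng(0).normal(size=(5, 10, 3))
p_bar = deep_ensemble_predict(logits)  # each row sums to 1
```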
Post-processing Calibration Methods: The article [16] proposes a class of post-processing calibration methods that extend the more standard Platt Scaling approach [36]. Temperature Scaling, the simplest of these methods, transforms the probabilistic output p(x) ∈ ∆_C into a tempered version Scale[p(x), τ] ∈ ∆_C defined through the scaling function

Scale[p, τ]_c = p_c^{1/τ} / Z,     (3)

for a temperature parameter τ > 0 and normalization constant Z = Σ_{c=1}^C p_c^{1/τ} > 0. The optimal parameter τ > 0 is usually found by minimizing a proper scoring rule [13], often chosen as the negative log-likelihood, on a validation dataset. Crucially, during this post-processing step, the parameters of the probabilistic model are kept fixed: the only parameter being optimized is the temperature τ > 0. In the low-data regime, the validation set being also extremely small, we have empirically observed that the more sophisticated Vector and Matrix scaling post-processing calibration methods [16] do not offer any significant advantage over the temperature scaling approach and in fact overfit the extremely small validation datasets used in our setup.
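A minimal sketch of the scaling function (3), equivalently σ_SM(log p / τ):

```python
import numpy as np

def scale(p, tau):
    """Temperature scaling, Eq. (3): Scale[p, tau]_c = p_c^{1/tau} / Z.
    tau > 1 softens the prediction, tau < 1 sharpens it."""
    logp = np.log(np.clip(p, 1e-12, None)) / tau
    logp -= logp.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(logp)
    return e / e.sum(axis=-1, keepdims=True)

p = np.array([0.7, 0.2, 0.1])
print(scale(p, tau=2.0))   # less confident
print(scale(p, tau=0.5))   # more confident
```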
Calibration Metrics: The Expected Calibration Error (ECE) measures the discrepancy between prediction confidence and empirical accuracy. For a partition 0 = c_0 < . . . < c_M = 1 of the unit interval and a labelled set {(x_i, y_i)}_{i=1}^N, define the bins B_m ≡ {i : p̂(x_i) ∈ (c_{m−1}, c_m]} as well as

acc_m = (1/|B_m|) Σ_{i∈B_m} 1[ŷ(x_i) = y_i]   and   conf_m = (1/|B_m|) Σ_{i∈B_m} p̂(x_i).

The quantity ECE is then defined as

ECE = Σ_{m=1}^M (|B_m| / N) |acc_m − conf_m|.     (4)

A model is calibrated if acc_m ≈ conf_m for all 1 ≤ m ≤ M. It is often instructive to display the associated reliability curve, i.e. the curve with conf_m on the x-axis and the difference (acc_m − conf_m) on the y-axis. Figure 1 displays examples of such reliability curves. A perfectly calibrated model is flat (i.e. acc_m − conf_m = 0), while the reliability curve associated with an under-confident (resp. over-confident) model lies prominently above (resp. below) the flat line acc_m − conf_m = 0. We sometimes also report the value of the Brier score [5], defined as (1/N) Σ_{i=1}^N ||p(x_i) − y_i||². Setup and implementation details: For our experiments, we use standard neural architectures. For CIFAR10/100 [21] we use ResNet18, ResNet34 [17] for Imagenette/Imagewoof [18], and for the Diabetic Retinopathy dataset [7], similarly to [26], we use the architecture (not containing any residual connection) from the 5th place solution of the associated Kaggle challenge. We also include results for LeNet [23] trained on the MNIST [24] dataset in the appendix. A very low number of training examples (CIFAR10: 1000, CIFAR100: 5000, Image{nette, woof}: 5000, MNIST: 500) was used for all the datasets. However, we also show that our observations extend to full-data setups in Section 4. The validation dataset is drawn from the leftover training data. The test dataset is kept as the original and is hidden during both training and validation.
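For concreteness, a minimal sketch of the ECE computation (4) with equally spaced bins:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE of Eq. (4): sum_m (|B_m|/N) |acc_m - conf_m|.
    probs: (N, C) probabilistic predictions; labels: (N,) integer classes."""
    conf = probs.max(axis=1)                  # p-hat(x_i)
    pred = probs.argmax(axis=1)               # y-hat(x_i)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(labels), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            acc_m = correct[in_bin].mean()
            conf_m = conf[in_bin].mean()
            ece += in_bin.sum() / n * abs(acc_m - conf_m)
    return ece
```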
Empirical Observations
Linear pooling: It has been observed in several studies that averaging the probabilistic predictions of a set of independently trained neural networks, i.e. deep-ensembles, often leads to more accurate and better-calibrated forecasts [22,4,25,41,9]. Figure 1 displays the reliability curves, across three different datasets, of a set of K = 30 independently trained neural networks, as well as the reliability curves of the aggregated forecasts obtained by simply linearly averaging the K = 30 individual probabilistic predictions. These results suggest that deep-ensembles consistently lead to predictions that are less confident than those of their individual constituents. This can indeed be beneficial in the often encountered situation where each individual neural network is over-confident. Nevertheless, this phenomenon should not be mistaken for an intrinsic property of deep-ensembles to produce better-calibrated forecasts. For example, and as discussed further in Section 4, networks trained with the popular mixup data-augmentation are typically under-confident; ensembling such a set of individual networks typically leads to predictions that are even more under-confident.
[Figure 1: Reliability curves of the K = 30 individual networks (across three datasets, including Imagewoof), as well as of the pooled estimates (red) obtained by averaging the K individual predictions. Linear averaging leads to consistently less confident predictions (i.e. higher values of (acc_m − conf_m)). It is only beneficial to calibration when each network is over-confident; it is typically detrimental to calibration when the individual networks are already calibrated, or under-confident.]
[Figure: Other model-averaging techniques: reliability curves of twenty individual models and of the ensembles of SWAG [30] and MC-Dropout [11], trained with mixup augmentation on the full CIFAR{10,100} datasets. The ensemble is less calibrated than the individual models.]
In order to gain some insight into this phenomenon, recall the definition of the entropy functional,

H[p] = − Σ_{c=1}^C p_c log p_c.

Furthermore, tempering a probability distribution p leads to an increased entropy if τ > 1, as can be proved by examining the derivative of the function τ → H[p^{1/τ}]. The entropy functional is consequently a natural surrogate measure of (lack of) confidence. The concavity of the entropy functional shows that ensembling a set of K individual networks leads to predictions whose entropies are higher than the average of the entropies of the individual predictions. In order to obtain a more quantitative understanding of this phenomenon, consider a binary classification framework. For a pair of random variables (X, Y), with X ∈ X and Y ∈ {−1, 1}, and a classification rule p : X → [0, 1] that approximates the conditional probability p_x ≈ P(Y = 1 | X = x), define the Deviation from Calibration score as

DC(p) = E[(p_X − 1[Y = 1])²] − E[p_X (1 − p_X)].     (5)

The first term is the Brier score of the classification rule p, and the quantity E[p_X (1 − p_X)] is an entropic term (i.e. large for predictions close to uniform). Note that DC can take both positive and negative values, and DC(p) = 0 for a well-calibrated classification rule, i.e. p_x = P(Y = 1 | X = x) for all x ∈ X. Furthermore, among a set of classification rules with the same Brier score, those with less confident predictions (i.e. larger entropy) have a lower DC score. In summary, the DC score is a measure of confidence that vanishes for well-calibrated classification rules, and that is low (resp. high) for under-confident (resp. over-confident) classification rules. Contrarily to the entropy functional, the DC score is extremely tractable. Algebraic manipulations readily show that, for a set of K ≥ 2 classification rules p^(1), . . . , p^(K) and non-negative weights ω_1 + . . . + ω_K = 1, the linearly averaged classification rule p̄ = ω_1 p^(1) + . . . + ω_K p^(K) satisfies

DC(p̄) = Σ_{k=1}^K ω_k DC(p^(k)) − Σ_{1≤i<j≤K} 2 ω_i ω_j E[(p_X^(i) − p_X^(j))²].     (6)

Equation (6) shows that averaging classification rules decreases the DC score (i.e. the aggregated estimates are less confident). Furthermore, the more dissimilar the individual classification rules, the larger the decrease. Even if each individual model is well-calibrated, i.e. DC(p^(i)) = 0 for 1 ≤ i ≤ K, the averaged model is not well-calibrated as soon as at least two of them are not identical.
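The identity (6) can be checked numerically; the sketch below uses a synthetic binary task with invented perturbation levels, and verifies that the pooled DC score matches the right-hand side exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 5

# Synthetic binary task: p_true is the ground-truth conditional probability
p_true = rng.uniform(size=n)
y = (rng.uniform(size=n) < p_true).astype(float)

def dc(p, y):
    """Deviation from Calibration, Eq. (5): Brier term minus entropic term."""
    return np.mean((p - y) ** 2) - np.mean(p * (1 - p))

# K perturbed (hence miscalibrated) classification rules, uniform weights
members = [np.clip(p_true + rng.normal(0, 0.1, n), 0, 1) for _ in range(k)]
w = np.full(k, 1.0 / k)
pooled = sum(wi * pi for wi, pi in zip(w, members))

lhs = dc(pooled, y)
rhs = sum(wi * dc(pi, y) for wi, pi in zip(w, members)) - sum(
    2 * w[i] * w[j] * np.mean((members[i] - members[j]) ** 2)
    for i in range(k) for j in range(i + 1, k)
)
print(np.isclose(lhs, rhs))  # True: pooling strictly decreases the DC score
```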
Distance to the training set: In order to gain some additional insight into the calibration properties of neural networks trained on small datasets, as well as into the influence of the popular mixup augmentation strategy, we examine several metrics (Accuracy, Reliability, Negative Log-Likelihood (NLL), and Entropy) as a function of the distance to the (small) training set D_train. The second column of Figure 2 displays the mean Reliability (i.e. acc − conf) as a function of the distance percentiles. We focus on the CIFAR10 dataset and train our networks on a balanced subset of N = 1000 training examples.
Since there is no straightforward and semantically meaningful distance between images, we first use an unsupervised method (i.e., labels were not used) for learning a low-dimensional and semantically meaningful representation. For these experiments, we obtained a mapping Φ : R^{32,32} → S^{128}, where S^{128} ⊂ R^{128} denotes the unit sphere in R^{128}, with the SimCLR method [6]. We used the distance d(x, y) = ||Φ(x) − Φ(y)||_2, which in this case is equivalent to the cosine distance between the 128-dimensional representations of the CIFAR10 images x and y. The distance of a test image x to the training dataset is defined as min{d(x, y_i) : y_i ∈ D_train}. We computed the distances to the training set for each image contained in the standard CIFAR10 test set (last column of Figure 2). Not surprisingly, we note that the average Entropy, Negative Log-Likelihood, and Error Rate all increase for test samples further away from the training set.
• Over-confidence: The second column represents the Reliability curve, but with bins (x-axis) as distance percentile, rather than confidence. The predictions associated with samples chosen further away from the training set have a lower value of acc − conf. This indicates that the over-confidence of the predictions increases (esp. lower mixup α) with the distance to the training set. In other words, even if the entropy increases as the distance increases (as it should), calibration issues do not vanish as the distance to the training set increases. This phenomenon is irrespective of the amount of mixup used for training the network.
• Effect of mixup-augmentation: The first row of Figure 2 shows that increasing the amount of mixup augmentation generally leads to an increase in entropy, decrease in over-confidence, as well as more accurate predictions (lower NLL and higher accuracy). Additionally, the effect is less pronounced for α ≥ 0.2. This is confirmed in Figure 3 that displays more generally the effect of the mixup-augmentation on the reliability curves over four different datasets. In the appendix we provide more analysis on this.
• Temperature Scaling: Importantly, the second row of Figure 2 indicates that a post-processing temperature scaling step for the individual models almost washes out all the differences due to the mixup-augmentation scheme. For this experiment, an ensemble of K = 30 networks is considered: before averaging the predictions, each network has been individually temperature scaled by fitting a temperature parameter (through negative log-likelihood minimization) on a validation set of size N_val = 50.
Calibrating Deep Ensembles
In order to calibrate deep ensembles, several methodologies can be considered: (A) do nothing and hope that the averaging process intrinsically leads to better calibration; (B) calibrate each individual network on a validation set before pooling the predictions; (C) jointly pool and calibrate, i.e. learn a single temperature concurrently with the aggregation procedure; (D) pool the predictions first, then calibrate the aggregated model. Simple pooling/aggregation rules that do not require a large number of tuning parameters are usually preferred, especially when training data is scarce [20,46]. Such rules are usually robust, conceptually easy to understand, and straightforward to implement and optimize. The standard and most commonly used average pooling of a set p_1:K of K ≥ 2 probabilistic predictions p^(1), . . . , p^(K) ∈ ∆_C ⊂ R^C is defined as

Agg_avg(p_1:K) = (1/K) Σ_{k=1}^K p^(k).     (7)

Replacing the averaging with the median operation leads to the median pooling strategy, where the median is taken component-wise and then normalized to obtain the final probability prediction. Alternatively, the trimmed linear pooling strategy removes a pre-defined percentage of outlier predictions before performing the average in (7).
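A minimal sketch of the three pooling rules follows; the component-wise trimming rule in agg_trim is an assumption, since the exact outlier-removal scheme is not specified here.

```python
import numpy as np

def agg_avg(p):
    """Eq. (7): linear average of member predictions; p has shape (K, N, C)."""
    return p.mean(axis=0)

def agg_med(p):
    """Component-wise median, renormalized to lie in the simplex."""
    m = np.median(p, axis=0)
    return m / m.sum(axis=-1, keepdims=True)

def agg_trim(p, trim_frac=0.1):
    """Trimmed linear pooling: drop a fraction of extreme member predictions
    per component before averaging (component-wise trimming is an assumption)."""
    k = p.shape[0]
    n_trim = int(k * trim_frac)
    p_sorted = np.sort(p, axis=0)
    kept = p_sorted[n_trim : k - n_trim] if n_trim > 0 else p_sorted
    m = kept.mean(axis=0)
    return m / m.sum(axis=-1, keepdims=True)
```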
Pool-Then-Calibrate (D): any of the aforementioned aggregation procedures can be used as a pooling strategy before fitting a temperature τ* by minimizing a proper scoring rule on a validation set. In all our experiments, we minimized the negative log-likelihood (i.e., cross-entropy). For a given set p_1:K of K ≥ 2 probabilistic forecasts, the final prediction is defined as

p_final(x) = Scale( Agg[p_1:K(x)], τ* ),     (8)

where Scale(p, τ) ≡ σ_SM(log p / τ). Note that the aggregation procedure can be carried out entirely independently from the fitting of the optimal temperature τ*.
Joint Pool-and-Calibrate (C): there are several situations where the so-called end-to-end training strategy, consisting in jointly optimizing several components of a composite system, leads to increased performance [33,32,15]. In our setting, this means learning the optimal temperature τ* concurrently with the aggregation procedure. The optimal temperature τ* is found by minimizing a proper scoring rule Score(·) on a validation set {(x_i, y_i)}_{i=1}^{N_val},

τ* = arg min_{τ>0} Σ_{i=1}^{N_val} Score(p_i^τ, y_i),     (9)

where p_i^τ = Agg[Scale(p_1:K(x_i), τ)] ∈ ∆_C denotes the aggregated probabilistic prediction for sample x_i. In all our experiments, we have found it computationally more efficient and robust to use a simple grid search for finding the optimal temperature; we used n = 100 temperatures equally spaced on a logarithmic scale between τ_min = 10^−2 and τ_max = 10. More formally, the group [B] of methods obtains for each individual model 1 ≤ k ≤ K an optimal temperature τ^(k) > 0 as the solution of the optimization procedure

τ^(k) = arg min_{τ>0} Σ_{i=1}^{N_val} Score( Scale(p_i^k, τ), y_i ),

where p_i^k ∈ ∆_C denotes the probabilistic output of the k-th model for the i-th example in the validation dataset.
[Figure 4: The light blue calibration curves correspond to the outputs Scale(p^k, τ^(k)) for the K different models; the deep blue calibration curve corresponds to the linear pooling of the individually scaled predictions. For the group [C] of methods, a single common temperature τ* > 0 is obtained as the solution of the optimization procedure described in Equation (9); the orange calibration curves are generated using the predictions Scale(p^k, τ*), and the red curve corresponds to the prediction Agg[Scale(p_1:K, τ*)]. Notice that, when scaled separately (by τ^(k)), each of the individual models (light blue) is close to being calibrated, but the resulting pooled model (deep blue) is under-confident. When scaled by a common temperature, the optimization chooses a temperature τ* that makes the individual models (orange) slightly over-confident, so that the resulting pooled model (red) is nearly calibrated. This reinforces the justifications in Section 3 and shows the importance of the order of pooling and scaling.]
Figure 5 compares the four methodologies A-B-C-D identified at the start of this section, with the three different pooling approaches Agg_avg, Agg_med and Agg_trim. These methods are compared to a baseline approach (dashed red line) consisting of fitting a single network trained with the same amount α = 1 of mixup augmentation before being temperature scaled. All the experiments are executed 50 times, on the same training set, but with 50 different validation sets of size N_val = 50 for CIFAR10, Imagenette and Imagewoof, N_val = 300 for CIFAR100, and N_val = 500 for the Diabetic Retinopathy dataset. The results indicate that, on most metrics and datasets, the (naive) method (A), consisting of simply averaging predictions, is not competitive. Secondly, and as explained in the previous section, the method (B), consisting in first calibrating the individual networks before pooling the predictions, is less efficient across metrics than the last two methods (C-D). Finally, the two methods (C-D) perform comparably, the method (D) (i.e. pool-then-calibrate) being slightly more straightforward to implement. With regard to the pooling methods, the intuitive robustness of the median and trimmed-averaging approaches does not seem to lead to any consistent gain across metrics and datasets.
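A minimal sketch of the Pool-Then-Calibrate procedure (D), using the grid search described above (100 temperatures, log-spaced between 10^−2 and 10); variable names are illustrative assumptions.

```python
import numpy as np

def scale(p, tau):
    """Temperature scaling of Eq. (3), applied row-wise."""
    logp = np.log(np.clip(p, 1e-12, None)) / tau
    logp -= logp.max(axis=-1, keepdims=True)
    e = np.exp(logp)
    return e / e.sum(axis=-1, keepdims=True)

def nll(p, labels):
    """Negative log-likelihood (cross-entropy), the proper scoring rule used."""
    return -np.mean(np.log(np.clip(p[np.arange(len(labels)), labels], 1e-12, None)))

def pool_then_calibrate(member_probs, val_labels, n_grid=100):
    """(D): pool first (linear averaging over models), then fit a single
    temperature on the validation set by grid search.
    member_probs: (K, N_val, C) validation predictions of the K models."""
    pooled = np.asarray(member_probs).mean(axis=0)   # (N_val, C)
    grid = np.logspace(-2, 1, n_grid)                # tau in [1e-2, 10]
    losses = [nll(scale(pooled, t), val_labels) for t in grid]
    return grid[int(np.argmin(losses))]

# At test time, the final prediction of Eq. (8) is scale(agg(test_probs), tau_star).
```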
Note that ensembling a set of K = 30 networks (without any form of post-processing) does lead to a very significant improvement in NLL and Brier score, but also to a serious deterioration of the ECE. Pool-Then-Calibrate preserves the gains in NLL and Brier score, without compromising calibration.
Importance of the validation set: it would be practically useful to be able to fit the temperature without relying on a validation set. We report that using the training set instead (unsurprisingly) does not lead to better-calibrated models. We have also tried to use a different amount of mixup-augmentation (and other types of augmentation) on the training set for fitting the temperature parameter, but have not been able to obtain satisfying results.
[Figure 5: The total datasets (training + validation) were of size N = 1000 for CIFAR10, Imagenette and Imagewoof, and N = 5000 for CIFAR100 and Diabetic Retinopathy. Experiments were executed 50 times on the same training data but different validation sets. The dashed red line represents the baseline performance of a single model trained with mixup augmentation (α = 1) and post-processed with temperature scaling.]
Role and effect of mixup-augmentation: the mixup augmentation strategy is popular and straightforward to implement. As empirically described in Section 3, increasing the amount of mixup-augmentation typically leads to a decrease in the confidence and an increase in the entropy of the predictions. This can be beneficial in some situations, but also indicates that this approach should be employed with care when producing calibrated probabilistic predictions. Contrarily to geometric data-augmentation transformations such as image flipping, rotations, and dilatations, the mixup strategy produces non-realistic images that lie outside the data-manifold of natural images, leading to a large distributional shift. Mixup relies on a subtle trade-off between the increase in training data diversity, which can help mitigate over-fitting problems, and the distributional shift, which can be detrimental to the calibration properties of the resulting method. Figure 6 compares the performance of the Pool-Then-Calibrate approach when applied to a deep-ensemble of K = 30 networks trained with different amounts of mixup-α. The results are compared to the same approach (i.e. Pool-Then-Calibrate with K = 30 networks) without mixup-augmentation. The results indicate a clear benefit in using the mixup-augmentation in conjunction with temperature scaling.
Extension to the full-data setting: Although classification accuracy is usually not an issue when data is plentiful, a lack of calibration can still be present when models are trained with aggressive data-augmentation strategies (as is common nowadays): the distributional shift between (data-augmented) training samples and (non-augmented) test samples encountered when models are used in production can lead to significant calibration issues. Although we mainly focus on the low-data setting, Table 2 below shows that our conclusions extend to the full-data setting as well. We investigated the CIFAR100 full-dataset setting (ResNet architecture, no mixup) under varying conditions.
Table 2: In line with our discussion in Section 3, linear pooling (A) appears to help with calibration (2nd row) when the individual models are mildly over-confident (1st row), but performs worse (4th row) than the individual models, even in the full-data setting (CIFAR100, 50K training samples), when the individual models are near-calibrated (3rd row). Our proposed Pool-Then-Calibrate (D) has the best performance (5th row).
The first row reports the performance of individual models trained without mixup: the individual models are over-confident, but not extremely so (presumably because of the large number of samples). When these models are pooled into an ensemble (second row), the pooled model is better calibrated. This is the setup studied in almost every early article investigating the properties of deep-ensembles, hence the common conclusion that deep-ensembling inherently brings calibration. When we instead calibrate the individual models (3rd row), here via temperature scaling (a similar effect can also result from more aggressive data-augmentation schemes), the individual calibration naturally improves significantly. Nevertheless, when we pool these calibrated models into an ensemble, the pooled model suffers from extreme under-confidence (4th row). Our proposed Pool-Then-Calibrate method (5th row) performs well even in the full-data setting.
Out-of-distribution performance: we compare the out-of-distribution detection performance of our method to vanilla ensembling when the ensembles are trained on CIFAR10 and tested on a subset of CIFAR100 classes that are visually distinct from CIFAR10. Table 3 reports the metric: the difference between the medians of the in-class and out-of-class prediction entropies (higher is better).
Pool-Then-Calibrate separates the predictions for in-class and out-of-class observations significantly better than the vanilla ensemble (45% more separation in terms of the distance between medians). Table 4 additionally reports the performance when inference is run on the CIFAR10-C dataset (Gaussian noise) after training the ensemble on 1,000 samples of CIFAR10 with mixup-α 1.0. As expected, vanilla ensembling with linear pooling (A) has worse calibration than the single models, while Pool-Then-Calibrate (D) improves the scores across the board. A sketch of the entropy-separation metric is given below.
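The separation metric of Table 3 is straightforward to compute from the pooled predictive probabilities. A minimal sketch (variable names are ours, not from the paper):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # probs: array (n_examples, n_classes) of predictive probabilities
    return -(probs * np.log(probs + eps)).sum(axis=1)

def median_entropy_separation(probs_in, probs_out):
    """Difference between the median out-of-class and in-class entropies
    (higher means the ensemble is more uncertain on OOD inputs)."""
    return np.median(predictive_entropy(probs_out)) - \
           np.median(predictive_entropy(probs_in))
```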
Additional experiments: in the appendix, we report further experiments on the effect of the number of models in the ensemble, detailed numerical results for all datasets (including MNIST), an ablation study, and the effect of different mixup levels on all the metrics.
Cold posteriors: the article [43] reports gains in several metrics when fitting Bayesian neural networks to a tempered posterior of the form π_τ(θ) ∝ π(θ)^(1/τ), where π(θ) is the standard Bayesian posterior, for temperatures τ smaller than one. Although that setting is not identical to ours, it should be noted that in all our experiments the optimal temperature τ was consistently smaller than one; in our setting, this is because simply averaging predictions leads to under-confident results. We postulate that related mechanisms are responsible for the observations reported in [43].
Discussion
The problem of calibrating deep-ensembles has received surprisingly little attention in the literature.
In this text, we examined the interaction between three of the most simple and widely used methods for adapting deep learning to the low-data regime: ensembling, temperature scaling, and mixup data augmentation. We highlight that ensembling in itself does not lead to better-calibrated predictions, that the mixup augmentation strategy is practically important but relies on non-trivial trade-offs, and that these methods interact subtly with one another. Crucially, we demonstrate that the order in which the pooling and temperature-scaling procedures are executed matters for obtaining calibrated deep-ensembles. We advocate the Pool-Then-Calibrate approach: first pool the individual neural network predictions together, then post-process the result with a simple and robust temperature-scaling step, as sketched below.
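A minimal sketch of the two-step procedure follows. Note that applying a temperature to an already-pooled probability vector requires choosing a parametrization; the one below (scaling the log of the pooled probabilities and renormalizing) is an assumption on our part rather than a detail fixed by the text:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def temper(probs, tau, eps=1e-12):
    # Temperature-scale a probability matrix via its log and renormalize.
    logits = np.log(probs + eps) / tau
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def pool_then_calibrate(member_probs, val_labels):
    """member_probs: array (K, n_val, n_classes) of member predictions
    on a held-out validation set; val_labels: int array (n_val,)."""
    pooled = member_probs.mean(axis=0)          # step 1: linear pooling
    idx = np.arange(len(val_labels))

    def nll(tau):                               # validation NLL at temperature tau
        p = temper(pooled, tau)
        return -np.mean(np.log(p[idx, val_labels] + 1e-12))

    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return res.x                                # step 2: fitted temperature

# At test time: temper(test_member_probs.mean(axis=0), tau_fitted)
```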
Broader Impact
Producing well-calibrated probabilistic predictions is crucial to risk management, and to situations in which decisions that rely on the outputs of probabilistic models have to be trusted. Furthermore, designing well-calibrated models is crucial to the adoption of machine-learning methods by the general public, especially in the field of AI-driven medical diagnosis, since it is intimately related to the issue of trust in new technologies.

Figure 7: Size of the ensembles.

Table 6: Ablation study performed on CIFAR10 with 1,000 samples. For ensemble temperature scaling, we use 950 training samples and a validation set of size 50. For setups with variation, we report the metric mean and standard deviation.
A Additional experiments
Ablation study: we focus on the CIFAR10 dataset with N_train = 1000 fixed training examples and 100 different validation sets of size N_val = 50; Table 6 reports the means and standard deviations across these experiments. For setups involving the training of a single model, we report the mean and standard deviation of the metric across 30 independently trained models.
Plasmodesmata of brown algae
Plasmodesmata (PD) are intercellular connections in plants that play roles in various developmental processes. They are also found in brown algae, a group of eukaryotes possessing complex multicellularity, as do green plants. Recently, we conducted an ultrastructural study of PD in several species of brown algae. PD in brown algae are commonly straight plasma membrane-lined channels with a diameter of 10–20 nm and, in contrast to green plants, they lack a desmotubule. Moreover, branched PD were not observed in brown algae. In the brown alga Dictyota dichotoma, PD are produced during cytokinesis through the formation of precursor structures (pre-plasmodesmata, PPD). Clustering of PD in a structure termed a "pit field" was recognized in several species having a complex multicellular thallus structure, but not in those with a uniseriate filamentous or multiseriate one. The pit fields might control cell-to-cell communication and contribute to the establishment of the complex multicellular thallus. In this review, we discuss fundamental morphological aspects of brown algal PD and present questions that remain open.
Intercellular connections allow cell-to-cell communication through the transport of various molecules and contribute to the elaboration of complex multicellularity (Bloemendal and Kück 2013).
In land plants, PD are plasma membrane-lined tubular channels with a diameter of 30-50 nm, creating symplastic continuity across the cell wall. Endoplasmic reticulum (ER), in the form of the desmotubule, characteristically passes through the PD lumen. PD of land plants are categorized into two types: unbranched "simple PD" and branched "complex PD". Molecules transported via PD include ions, small compounds, proteins and RNA (Kim 2005). Transport of these materials via PD is highly regulated and is involved in a number of developmental processes in land plants (Burch-Smith et al. 2011). The structure of PD differs greatly from that of the gap junctions of animals: these 2-4 nm wide proteinaceous channels facilitate the transport of small molecules up to about 1 kDa, such as ions, secondary signaling messengers, nucleotides and metabolites (Hervé and Derangeon 2013; Maeda and Tsukihara 2011). Septal pores of fungi are 50-500 nm wide plasma membrane-lined pores co-localized with peroxisome-derived vesicles or ER-derived septal pore caps; they contribute to cellular differentiation (Bauer et al. 2006; Reichle and Alexander 1965; van Peer et al. 2010). Pit plugs of red algae consist of a proteinaceous plug core occluding a plasma membrane-lined pore in the cell wall, with cap membranes covering both sides of the plug core (Pueschel and Cole 1982). The structure of the pit plug provides significant taxonomic information (Pueschel and Cole 1982).
In brown algae, studies on intercellular transport have focused on the sieve elements of kelps. In some laminarialean algae, the differentiation of tissues consisting of epidermal, cortex and medullary cells is conspicuous, and the medullary cells (sieve elements) are functionally analogous to those of land plants. The sieve elements of brown algae are continuous with cortex cells via a complex filamentous cell network (Schmitz 1984). Monitoring the transport of isotopes (14C, 32P, 125I) showed that long-distance transport of photosynthetic products and iodine occurs through the sieve elements (Amat and Srivastava 1985; Srivastava 1975, 1979). The cross walls of sieve elements are perforated by numerous pores linking adjacent sieve elements. The diameter of the pores ranges from 37.5 nm to 2.6 µm (Schmitz 1990). Although the smaller pores can be regarded as PD, the larger pores are thought to be specialized derivatives of PD formed by enzymatic digestion of the cross wall (Schmitz and Srivastava 1974; Marchant 1976; Schmitz 1981, 1990). The details of the structure and function of PD in other cell types and algal species remain obscure. Considering the distant evolutionary relationship between brown algae and green plants, they must have evolved PD independently (Raven 2008), and the similarity of molecular components between brown algal PD and those of green plants might be low (Cock et al. 2010; Salmon and Bayer 2013). Structural and functional analyses of PD will give insights into how brown algae independently established complex multicellularity. Recently, we carried out ultrastructural observations of PD in the brown alga Dictyota dichotoma (Terauchi et al. 2012) and characterized their detailed structure and formation during cytokinesis. In this review, we summarize our current knowledge of the structure of brown algal PD and compare them with those of green plants.
Ultrastructure of brown algal PD and its relationship to molecular traffic

All brown algae are multicellular, with species organized as branched uniseriate or multiseriate filamentous thalli or as complex multicellular thalli. For example, Dictyota dichotoma forms a macroscopic complex multiseriate thallus (Fig. 1a), Sphacelaria rigidula forms a filamentous multiseriate thallus (Fig. 1b), and Ectocarpus siliculosus forms a filamentous uniseriate thallus (Fig. 1c). Transmission electron microscopic (TEM) observations showed that vegetative cells of all species examined had ER (desmotubule)-free PD with an inner diameter ranging from 10 to 20 nm and a length from one hundred to several hundred nm (Fig. 1d-f). Branched complex PD were never observed in brown algae, in contrast to land plants. Although one published figure of pores in the sieve element of Laminaria groenlandica (Fig. 18 of Schmitz and Srivastava 1974) has been reported to show ER within the pores, the existence of a desmotubule in brown algal PD has never been described elsewhere. The occurrence of complex PD and desmotubules has been described in some members of the bryophytes and charophycean algae (Cook et al. 1997; Franceschi et al. 1994) but not in other green algae (Fraser and Gunning 1969). It has been argued that these specializations of PD arose during the evolution toward land plants in the green lineage (Cook et al. 1997). Ultrastructural observations of brown algal PD in vegetative cells showed that PD have a similar form from simple uniseriate to complex multicellular species, whereas PD (diameter 10-20 nm) and the pores of sieve elements (diameter 37.5 nm-2.6 µm) differ significantly in diameter. In the Fucales and Laminariales, the number and size of the pores vary among species, among sieve elements from different parts or ages of the thallus, and even within one cross wall (Moss 1983; Srivastava 1974, 1976). Although the observed structure may differ depending on the tissue fixation method used (chemical fixation or cryofixation), regulation of the diameter of PD and pores may be the main determinant of molecular transport conductance in brown algae, as in green plants. PD with a large diameter in the sieve elements can be regarded as pores specialized for long-distance transport. In land plants, it has been reported that PD determine the upper limit of the molecular weight of cargo macromolecules (the size exclusion limit, SEL) (Christensen et al. 2009; Zambryski 2004). Degradation and synthesis of callose (β-1,3-glucan) at the neck region of PD is one of the molecular mechanisms controlling the SEL: when the cell receives endogenous (e.g. developmental) or exogenous (e.g. pathogen infection, cell injury) signals, callose is deposited at the neck region of PD, decreasing the diameter of the PD and the SEL, while callose degradation reverses the effect (Zavaliev et al. 2011). The SEL varies among species, tissues, and developmental stages. In a study of Elodea canadensis leaf cells microinjected with fluorescently labeled peptides, the SEL was estimated to be less than 1 kDa (Goodwin 1983). In an analysis of tobacco leaf cells expressing green fluorescent protein (GFP) fusion proteins, the SEL was estimated to be around 50 kDa in sink leaves but greatly reduced in source leaves (Oparka et al. 1999).
In Zea mays, microinjection of GFP and fluorescently labeled dextran into coleoptile epidermal cells showed that intercellular movement of the dextran probe (4.4 kDa) was limited, while movement of GFP was observed to some extent (Wymer et al. 2001). The SEL in these cells was predicted to be around 4.4 kDa. Theoretically, a globular protein with an approximate diameter of 9 nm corresponds to a molecular weight of 45 kDa (Lucas and Wolf 1993), which suggests that macromolecular transport via PD in brown algae may be possible in terms of their diameter (i.e. 10-20 nm). FITC-dextran (10 kDa) microinjected into early developmental stage zygotes of the brown alga Fucus spiralis was transported throughout the young sporophyte, suggesting that PD provide a functional symplastic route for the intercellular transport of molecules in brown algae (Bouget et al. 1998). However, it is still unclear whether temporary or permanent alteration of the PD diameter regulates the SEL in brown algae.
Formation of brown algal PD
In land plants, two types of PD exist: primary PD, formed during cytokinesis, and secondary PD, which have a post-cytokinetic origin. Primary PD are generated by the physical obstruction of ER incorporated into the cell plate, as shown by TEM observations (Hepler 1982). Secondary PD are synthesized de novo by local degradation of the cell wall and protrusion of the plasma membrane and desmotubule, or by the addition of branches to primary PD (Lucas and Wolf 1993; Faulkner et al. 2008). In several brown algae, PD-like structures have been observed in the cell partition membrane during cytokinesis (La Claire 1981; Katsaros et al. 2009). Recently, we confirmed by electron tomographic analysis that precursor structures of brown algal PD, pre-plasmodesmata (PPD), appear during cytokinesis in D. dichotoma (Terauchi et al. 2012). In the cytokinesis of brown algae, Golgi vesicles and flat cisternae take part in the formation of the cell partition membrane (Nagasato and Motomura 2002, 2009; Katsaros et al. 2009; Nagasato et al. 2010, 2014). Similarly, in D. dichotoma, cytokinesis proceeds by the expansion of patches of membranous sacs formed by the fusion of Golgi vesicles and flat cisternae (Fig. 2a). In the developing cell partition membrane, tubular membranous structures (PPD) are recognized (Fig. 2b, c). It was suggested that PPD derive from invagination of the membranous sac in D. dichotoma (Fig. 3a). Their inner diameter ranges from 10 to 20 nm, as in mature PD. They are evenly distributed in specific areas of the membranous sacs and persist after completion of the cell partition membrane (Fig. 3b). Mature PD are clustered in parts of the cell wall called "pit fields"; in epidermal cells, these clusters of PD correspond to the deeper side of the cell, near the underlying medullary cells (see the next section for a detailed description of pit fields). PPD are preferentially formed in the restricted region of the cell wall where the mature PD will form, indicating that the PPD distribution in the newly forming cell partition membrane matches the sites of the "future" pit fields (Fig. 3c). These data confirm that brown algae have primary PD that are produced in a manner different from that of land plants. It remains unclear how PPD are constructed at the molecular level and how their position in the cell partition membrane is determined.
Are secondary PD present in brown algae? In previous studies of the first and second cytokinesis of zygotes of Scytosiphon lomentaria (Nagasato and Motomura 2002) and Silvetia babingtonii (Nagasato et al. 2010, 2014), PPD were not observed in the first or second cell partition membranes. In the mature thalli of S. lomentaria and S. babingtonii, however, we observed dense PD in the cell wall. Additionally, at later stages of S. babingtonii zygote development, PD are present in the newly formed cell wall (unpublished data). It is therefore highly possible that brown algae can generate secondary PD.
PD distribution and their implication for the body plan in brown algae
The frequency and distribution of PD in the cell wall should play important roles in cell-to-cell communication in brown algae. In D. dichotoma, PD are clustered in pit fields and localized in the thin cell wall; this clustering appears during cytokinesis (Terauchi et al. 2012). In our study, pit fields were observed in many of the species examined (Dictyotales, Laminariales, Fucales, Desmarestiales, Scytosiphonales) (Table 1; Figs. 4, 5). The pit fields were present in the thin cell wall in the Dictyotales, Laminariales and Fucales, and in the relatively uniform cell wall in the other species (Desmarestiales and Scytosiphonales). Pit fields are round or oval shaped, and their number, mean area and PD frequency in the cell wall vary between species (Table 1; Fig. 4a, b). Pit fields could not be identified in Sphacelariales species (Fig. 4c) or E. siliculosus (Fig. 4d). In these species, PD are dispersed over the cell wall and the distance between PD (the distance between the centers of adjacent plasmodesmata, averaging about 250 nm) is much greater than the distance between PD in pit fields (average 60-120 nm) (Table 1). The presence or absence of pit fields also differs between generations of the life cycle. S. japonica undergoes alternation between heteromorphic generations: a macroscopic complex multicellular sporophyte and a microscopic filamentous gametophyte. The thallus of a well-developed sporophyte is composed of several layers of epidermal, cortex and medullary cells (Fig. 5a). PD traverse the cell wall, forming pit fields at places where the cell wall thickness is reduced to about 0.1 µm (Fig. 5b). Tubular and other membranous structures in the cytoplasm are localized just beneath the pit field (arrowheads in Fig. 5b). Multiple round- or oval-shaped pit fields are present in the central part of the cell wall (Fig. 5c, arrowheads in Fig. 5d, e) or at its periphery. Similar features of PD have been reported in other laminarialean species (Schmitz and Kühn 1982). The gametophyte of S. japonica has a low PD frequency and lacks pit fields (Fig. 5f, g), although a few PD are often located close together (arrowheads in Fig. 5g). PD frequencies have been reported in the latter study: in the cortex cells of Saccharina latissima (Laminaria saccharina), the frequency is 100-200 PD µm⁻² (Schmitz and Kühn 1982), and the same authors calculated PD frequencies from published micrographs of other brown algae: 168 PD µm⁻² in Himanthalia lorea (Fucales) and 132-172 PD µm⁻² in Egregia menziesii (Laminariales). The PD frequency in S. japonica (Table 1) correlates well with these calculated frequencies.

Table 1: Comparison of PD distribution in the cell wall in several species of brown algae. Columns: organism; pit field (a: + pit fields observed, − no pit field observed); area of pit field (µm², mean ± SD, n); no. of pit fields per wall; PD frequency (µm⁻², mean ± SD, n); no. of PD per pit field; distance between PD (nm, mean ± SD, n). PD frequency per 1 µm² was calculated using the absolute frequency. b-d: where the measured areas were smaller than 1 µm², the absolute frequency is expressed as the number of PD per 0.5 µm × 0.5 µm, per 0.4 µm × 0.4 µm, or per 0.25 µm × 0.25 µm. e: calculated using the mean area of the pit field (µm²) and the mean PD frequency (no. PD µm⁻²).
In the cross walls of sieve elements, the pore frequency is 0.03-100 pores µm⁻² in the Laminariales (Schmitz and Srivastava 1974, 1975, 1976; Schmitz 1981, 1990), 50-60 pores µm⁻² in the Fucales, and 2,000-3,000 pores µm⁻² in Dictyopteris membranacea (Dictyotales; Katsaros and Galatis 1988). It has been suggested that the pore frequency greatly depends on the pore diameter (Schmitz 1990). The distances between PD in the pit fields (Table 1) indicate that they are arranged at almost regular intervals. An even pore distribution was observed in the sieve elements of Fucus vesiculosus (Fielding et al. 1987) and D. membranacea (Katsaros and Galatis 1988). In D. dichotoma, the distance between PPD in the newly formed cell partition membrane was 81 ± 17 nm before, and 74 ± 11 nm after, initial cell wall development (Table 1), meaning that PPD are also arranged at almost regular intervals in the cell partition membrane, similar to the mature pit fields. The process of pit field formation in land plants differs from that in brown algae. In four plant species (Trifolium repens, Raphanus sativus, Zea mays, and Sorghum vulgare), comparison of the PD distribution between root meristem cells and elongating cells provided evidence that the clustering of PD and secondary PD formation take place during cell wall expansion (Seagull 1983). As a result of cell wall expansion, there is a general shift from dispersed to clustered PD, and the PD frequency is maintained even after cell wall expansion owing to secondary PD formation (Seagull 1983). Observation of PD in the basal cell walls of trichomes during leaf development in tobacco demonstrated a shift from randomly distributed simple PD to pit fields containing many paired PD during cell wall expansion; land plants thus possess a system that inserts secondary PD in the vicinity of primary PD, giving rise to pit fields composed of complex PD (Faulkner et al. 2008). The pit fields of land plants therefore have a post-cytokinetic origin, and the arrangement of PD can change during cell wall expansion. The pit fields of brown algae and land plants differ greatly in (1) the timing and process of their formation, (2) the presence or absence of branched complex PD within the pit fields, and (3) the arrangement of PD within the pit fields. Secondary PD in brown algae might be inserted around primary PD at regular intervals, increasing the surface area of the pit fields. There are so far no experimental data explaining how pit fields participate in the establishment of the complex multicellular system. One possibility is that, since pit fields contain a large number of PD (Table 1), the increase in the total number of PD per cell wall interface could lead to a higher flux rate of molecular transport and more active cell-to-cell communication. The PD frequency in the septum of the filamentous gametophyte of S. japonica is quite low. Conversely, while the PD frequency is much lower in E. siliculosus than in other complex multicellular species, the total number of PD in the septum is estimated to be quite high: the area of the septum is about 300 µm² when the diameter of the cylindrical cell is 20 µm, and since the PD of E. siliculosus are dispersed over the septum (average 13 PD µm⁻²), the total number of PD in the septum will be about 4,000 (the arithmetic is made explicit below). This number is much higher than that of any other species examined. Therefore, the total number of PD between cells is not an absolute determinant of a complex multicellular thallus structure.
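For concreteness, the arithmetic behind this estimate (using the stated cell diameter and mean PD density) is:

```latex
\[
A_{\text{septum}} \approx \pi r^{2} = \pi \times (10\,\mu\text{m})^{2}
\approx 314\,\mu\text{m}^{2} \approx 300\,\mu\text{m}^{2},
\qquad
N_{\text{PD}} \approx 300\,\mu\text{m}^{2} \times 13\,\text{PD}\,\mu\text{m}^{-2}
\approx 4000\,\text{PD}.
\]
```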
The area, number and position of pit fields may together influence the pattern of molecular transport. This idea is supported by reports in laminarialean species that these properties of the pit fields differ between the anticlinal and periclinal cell walls of epidermal and cortex cells, which determines the transport pattern of photosynthetic products from the epidermis toward the medulla (sieve elements) (Schmitz and Srivastava 1975; Schmitz 1981; Schmitz and Kühn 1982). One hypothesis is that cargo molecules and the components that mediate traffic via PD are gathered into the pit fields, thereby allowing effective and synchronized regulation of the molecular flux rate, direction and cargo selection through each plasmodesma. If this mechanism exists, it could achieve a more dynamic and strict regulation of intercellular molecular traffic via the pit fields than via dispersed PD.
Concluding remarks and future perspective
We have investigated the morphology of brown algal PD, but many aspects remain unclear and require further investigation: (1) the correlation between the formation of pit fields and the establishment of the complex multicellular body plan needs further validation; (2) the molecules that are transported via PD need to be inventoried; and (3) the proteins that make up PD should be identified. Moreover, it is also important to establish whether the PD distribution in the cell wall is fixed during cytokinesis or flexibly adjusted during development. Brown algae probably have secondary PD that are added to pre-existing pit fields or to other cell walls. It is unknown whether de novo post-cytokinetic insertion of pit fields into PD-free cell walls takes place in brown algae. In land plants, pre-existing PD can be removed from the cell wall during development. For example, it is well known that the cell walls of guard cells lose PD and become symplastically isolated from surrounding cells during maturation (Wille and Lucas 1984). It is not known whether the elimination of pre-existing PD occurs in brown algae. From reports of the absence of PD in the first cell partition membrane of zygotes of several brown algae (Nagasato and Motomura 2002; Nagasato et al. 2010, 2014), we can infer that some vegetative cells of the developing thallus, as well as zygotes, might undergo PPD-free cytokinesis. However, cells of the mature thalli of all species examined had PD; complete symplastic isolation through the absence of PD may be rare in brown algae. In land plants, it has been reported that the local grouping of cells by the SEL of PD, called a "symplastic field", is a fundamental mechanism for creating positional information and achieving cell and tissue differentiation (Kim et al. 2004). In brown algae, the existence of symplastic fields is unknown. In the early developmental stages of zygotes of D. dichotoma, F. distichus and S. japonica, pit fields were not observed and the PD frequency was low (unpublished data). The onset of pit field formation might be regulated according to the developmental schedule. Brown algal PD still leave many puzzles to be solved. The data presented here could serve as a framework for detailed functional analyses of brown algal PD.

Fig. 5 e-g (caption): e Transverse view of one pit field; note that the distance between PD is almost constant. f Overview of the male gametophyte thallus; TEM samples were prepared by rapid freezing/freeze substitution using culture strains; the gametophyte has a uniseriate filamentous thallus. g Transverse view of PD; the PD frequency is much lower than that of the sporophyte, and PD do not gather into pit fields but are often located near one another (arrowheads). Abbreviations: cw, cell wall. Scale bars: 10 µm (a, f), 2 µm (c, d), 100 nm (b, e, g).
Inhibition of phase-1 biotransformation and cytostatic effects of diphenyleneiodonium on hepatoblastoma cell line HepG2 and a CYP3A4-overexpressing HepG2 cell clone
Cell-based in vitro liver models are an important tool in the development and evaluation of new drugs in pharmacological and toxicological drug assessment. Hepatic microsomal enzyme complexes, consisting of cytochrome P450 oxidoreductase (CPR) and cytochrome P450 monooxygenases (CYPs), play a decisive role in catalysing the phase-1 biotransformation of pharmaceuticals and xenobiotics. For a comprehensive understanding of the phase-1 biotransformation of drugs, the availability of well-characterized substances for the targeted modulation of in vitro liver models is essential. In this study, we investigated diphenyleneiodonium (DPI) for its ability to inhibit phase-1 enzyme activity, and further its toxicological profile, in an in vitro HepG2 cell model with and without recombinant expression of CYP3A4, the most important drug-metabolizing enzyme. The aim of the study was to identify effective DPI concentrations for CPR/CYP activity modulation and potentially associated dose- and time-dependent hepatotoxic effects. The cells were treated with DPI doses of up to 5,000 nM (versus vehicle control) for a maximum of 48 h and subsequently examined for CYP3A4 activity as well as various toxicologically relevant parameters such as cell morphology, integrity and viability, intracellular ATP level, and proliferation. In conclusion, the experiments revealed a time- and concentration-dependent, DPI-mediated partial to complete inhibition of CYP3A4 activity in CYP3A4-overexpressing HepG2 cells (HepG2-CYP3A4). Other cell functions, including ATP synthesis and consequently proliferation, were negatively affected in both in vitro cell models. Since neither cell integrity nor cell viability was reduced, the effect of DPI on HepG2 can be assessed as cytostatic rather than cytotoxic.
Introduction
In humans, the liver is the main organ for the metabolization and elimination of pharmaceuticals and xenobiotics, owing to the high expression of phase-1 and -2 enzymes in hepatocytes [1]. For this reason, hepatocytes are the subject of intensive research efforts, and in vitro systems based on these cells are often used in the context of drug development, diagnostics and therapeutics, for example to clarify and reduce drug side effects at an early stage [2,3].
In the context of phase-1 biotransformation, microsomal enzyme complexes in hepatocytes, consisting of cytochrome P450 oxidoreductase (CPR) and cytochrome P450 monooxygenases (CYPs), are essential components for a large number of oxidative metabolic conversions of pharmaceuticals and xenobiotics [4,5]. Despite the large number of different CYPs expressed in the human organism (57 are known to date), only a few, mostly from CYP families 1, 2, and 3, are responsible for the oxidative metabolization of more than 75% of all clinically approved drugs [2,3,6,7]. The microsomal flavoprotein CPR shows significantly lower diversity than the CYPs, with only one individually expressed polymorphic variant [8-10]. As the obligatory electron donor for CYPs, CPR is essential for liver-mediated phase-1 metabolism. Furthermore, CPR plays a vital role both in oxidative processes catalysed by several oxygenase enzymes and in the biosynthesis and metabolism of various endogenous substances of hormone and fat metabolism [9,11]. During phase-1 biotransformation, several successive oxidative reactions take place in which electrons and activated oxygen are transferred to a substrate in a nicotinamide adenine dinucleotide phosphate (NADPH)-dependent process [12,13]. In detail, two electrons are initially transferred from NADPH to flavin adenine dinucleotide (FAD), a prosthetic group of CPR, and then to flavin mononucleotide (FMN), another cofactor of CPR, by interflavin electron transfer. Sequential electron transfer follows via redox cycling to a heme-bearing microsomal CYP, which catalyses the oxidative conversion of a substrate [14-16]. For the prediction of the pharmacokinetics of new drug candidates, including relevant metabolites and hepatotoxicity, a clear understanding of the interplay of enzymatic phase-1 and -2 reactions in the liver is crucial. In this context, preclinical drug screening with regard to biotransformation and toxicology is mostly based on physiologically relevant, sensitive, reliable and, in particular, adaptable in vitro metabolism models of human hepatocytes [17-20]. Research into specific scientific questions also requires the availability of substances for targeted modulation. Plenty of CYP inducers and inhibitors are known for targeted phase-1 activity modification [9]. However, the range of agents that modulate phase-1 activity at the level of CPR alone, or at the level of both CPR and the CYPs, is limited. Such inhibitors are an important tool in drug studies, e.g. to elucidate side reactions that are not catalysed by phase-1 biotransformation or to monitor CPR/CYP-dependent pro-drug activation. In this study, diphenyleneiodonium (DPI) was investigated as an inhibitor candidate for CPR/CYP enzyme activity. In addition, the toxicological profile of DPI was analyzed in an in vitro hepatocyte model based on the human hepatoblastoma cell line HepG2 and a HepG2 cell clone overexpressing CYP3A4. CYP3A4 was chosen because enzymes of the CYP3A family are involved in the metabolism of more than 50% of drugs approved for human use, and CYP3A4 is the most important representative of the CYP3A family for drug metabolism in the adult human liver [7,11,21].
DPI, a member of the diaryliodonium salts, is an aromatic heterocyclic cation. Owing to their electron-deficient properties at the iodine center, diaryliodonium salts are frequently used as aromatic electrophiles in aryl transfer processes [22]. Its chemical nature makes DPI a potent inhibitor of flavin-bearing oxidoreductases, which are generally integral elements of electron transport chains. DPI has a wide spectrum of known cellular targets, including CPR [13,15,23], NADPH oxidase (NOX) [24-31], mitochondrial respiratory chain complex I (NADH-ubiquinone oxidoreductase) [28,32-34], and different types of nitric oxide synthase [13,35]. It is assumed that DPI inhibition is achieved by covalent modification of flavin and/or heme prosthetic groups within enzymes, based on radical formation. NADPH-dependent inhibition of CPR by DPI occurs via irreversible modification of reduced FMN, which effectively prevents electron transfer to its physiological targets [13,15,36-38]. In these studies, DPI was shown to be an effective CPR inhibitor in recombinantly expressed protein isolates, rat and human liver microsomes, and several in vitro cell models. Likewise, it was found that DPI-mediated CPR inhibition prevented electron flow to CYPs, leading to inhibition of their monooxygenase activity [13,39]. In further studies, DPI was also shown to irreversibly modify the heme porphyrin of microsomal CYPs. Since both the CPR flavins and the heme of CYPs are targets for DPI, CYP-dependent monooxygenase activity is inhibited at two levels, with CYPs being significantly more sensitive to DPI than CPR [13].
In the past, the inhibitory effects of DPI were investigated with regard to potential therapeutic applications, i.e. as an antibiotic [29,40,41], anti-cancer [31,42,43], anti-inflammatory [26,30] and/or vasodilatory agent [23]. For the analysis of phase-1 biotransformation inhibition, studies were mostly performed in less complex model systems with recombinantly expressed and purified proteins, or with microsomal fractions, in order to clarify the size and range of DPI effects and the mechanism of action. Ex vivo and especially in vivo studies are scarce. For example, the influence of DPI on CPR-mediated NO formation from glyceryl trinitrate has been investigated both ex vivo in microsomal fractions from rat aorta and in vivo with regard to its influence on vasodilation in a rat model [23]. Owing to its ability to inhibit phase-1 reactions both at the level of CPR electron transport and at the level of CYP monooxygenase activity itself, DPI promises to be an interesting tool for blocking overall biotransformation activity. However, the data available on the application of DPI in more complex in vitro cell models for pharmacological/toxicological biotransformation studies are still limited. Since DPI also influences other physiologically relevant processes such as the mitochondrial respiratory chain, it is of great importance to investigate its effects in a complex in vitro cell model. Therefore, the aim of our study was to investigate DPI as an inhibitor of phase-1 activity via CPR/CYP inhibition in an in vitro hepatocyte model with elevated CYP3A4 activity. The focus was on determining effective DPI concentrations for CPR/CYP activity manipulation and potentially associated dose- and time-dependent toxic effects on HepG2.
Cell culture
Commercially available human hepatocellular carcinoma (HepG2) cells (HB-8065, ATCC, Manassas, VA, USA), as well as genetically modified HepG2 cells with stable recombinant overexpression of CYP3A4 (HepG2-CYP3A4), generated and kindly provided by the "Molecular Cell Biology" group of the BTU Cottbus-Senftenberg [44], were cultured under standard conditions (37 °C, 5% CO2) in polystyrene-based tissue culture flasks (SARSTEDT AG & Co. KG, Nümbrecht, Germany) in Dulbecco's minimal essential medium (D-MEM) supplemented with 10% fetal bovine serum (FBS) superior, 6 mM L-alanyl-L-glutamine and 49.2 g/L NaHCO3, all purchased from Biochrom GmbH (Berlin, Germany). During standard cell culture, the culture medium was replaced every second day. Prior to the inhibition studies with diphenyleneiodonium (DPI), the HepG2-CYP3A4 cell line was post-selected by adding 3 µg/mL Blasticidin (AppliChem GmbH, Darmstadt, Germany) to the culture medium over a period of two weeks [45]. No Blasticidin was present in the culture medium during the experiments with DPI. For either cell passaging or experimental seeding, hepatocytes were harvested by trypsin/EDTA treatment (0.05% v/v trypsin and 0.02% v/v EDTA in water; Biochrom GmbH, Berlin, Germany).
CPR/CYP inhibition studies with diphenyleneiodonium (study design)
The presented study was divided into three consecutive parts. For the assessment of DPI-mediated influences on CYP3A4 monooxygenase activity and on toxicologically relevant parameters in hepatocytes, HepG2 and HepG2-CYP3A4 cells were seeded in all study parts at a density of 62,500 cells/cm² into either 96-well or 24-well plates (SARSTEDT AG & Co. KG, Nümbrecht, Germany) 24 h prior to DPI treatment. The first study part aimed to determine the concentration range of an effective DPI-mediated inhibition of phase-1 biotransformation in the in vitro model system used. For this purpose, HepG2 cells with recombinant CYP3A4 activity were treated with DPI over a wide concentration range of 2.5-5,000 nM for a short, 30-min period, followed by analysis of parameters such as cell morphology and CYP3A4 activity, including cell-number normalisation via the intracellular ATP level. DPI dilutions (1:10 or 1:100) in cell culture medium were prepared from a 1 mM diphenyleneiodonium chloride stock solution in CPR assay buffer (both purchased from BioVision Inc., Milpitas, CA, USA) plus 10% DMSO (AppliChem GmbH, Darmstadt, Germany) and applied by medium change directly before treatment. The vehicle and the untreated parental cell line were always included as controls. Data on monooxygenase activity and intracellular ATP levels were generated in triplicate in two independent experiments (n = 6 in sum). Before and after each DPI treatment, morphological evaluation of the hepatocytes was performed using an Olympus CKX41 inverted microscope (Olympus Corporation, Tokyo, Japan); pictures were documented at various magnifications in phase-contrast mode. In this part of the study, CYP3A4 activity and intracellular ATP levels were determined directly after DPI treatment as described below (see Section 2.3).
Based on the findings from the first study part regarding effective DPI concentrations and the DPI-related influence on the intracellular ATP level, and anticipating the experimental planning of future metabolization studies of substrates/drugs (for which conversion times of up to 48 h are often required), the following study parts were performed with an extended setup to elucidate possible time-dependent and toxic DPI effects on the HepG2-based in vitro model systems. In the second part of the study, cells were seeded according to the protocol described above in culture vessels suitable for the respective experiments. 24 h after seeding, the cells were treated with different DPI concentrations in the range of 50-5,000 nM over a period of 48 h. In the third part of the study, the cells were treated with higher DPI concentrations of 1,000, 2,500 and 5,000 nM (known to cause effective CPR/CYP inhibition) for only 30 min before switching to DPI-free medium and 48 h of cultivation, to investigate a possible recovery of phase-1 activity over time. After the 48 h incubation under cell culture conditions, various parameters, including cell morphology, CYP3A4 monooxygenase activity, intracellular ATP, cell integrity, viability and proliferation, were analyzed in the second and third study parts with both cell lines as described below.
Determination of CYP3A4 enzyme activity and intracellular ATP level
For the assessment of DPI-induced inhibition of CYP3A4 monooxygenase activity in hepatocytes, HepG2 and HepG2-CYP3A4 cells were analyzed with the P450-Glo™ CYP3A4 induction/inhibition assay (Promega, Madison, WI, USA), used according to the manufacturer's instructions. Briefly, after DPI treatment, cells were incubated with 50 µl of the CYP3A4 substrate Luciferin-IPA diluted in culture medium at 37 °C and 5 vol% CO2 for 60 min. Subsequently, 25 µl of the supernatants were transferred into a white-walled 96-well plate (SARSTEDT AG & Co. KG, Nümbrecht, Germany), an equal volume of luciferin detection reagent was added, and the plate was incubated for 20 min at room temperature in the dark. Luminescence was measured with a FLUOstar Omega microplate reader (software version 3.00 R2, BMG LABTECH GmbH, Ortenberg, Germany), followed by data analysis with the MARS Data Analysis Software (version 2.41). In addition, the cells and the 25 µl of substrate solution remaining in the initial 96-well plate were mixed with 25 µl of the ATP reagent solution of the CellTiter-Glo® 2.0 assay (Promega, Madison, WI, USA) and incubated for 10 min in the dark. The ATP level was detected by measuring luminescence with the FLUOstar Omega microplate reader, allowing normalization to the effective cell number and assessment of DPI-mediated influences on the intracellular ATP level; a sketch of this normalization step is given below.
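The normalization itself is simple arithmetic. The sketch below is our own illustration with hypothetical variable names; it assumes background-corrected luminescence readouts and is not code from the study:

```python
import numpy as np

def relative_cyp3a4_activity(cyp_lum, atp_lum, cyp_lum_ctrl, atp_lum_ctrl):
    """Normalize CYP3A4 luminescence to the ATP signal (a proxy for the
    effective cell number) and express the result as % of vehicle control."""
    per_cell = np.asarray(cyp_lum) / np.asarray(atp_lum)
    per_cell_ctrl = np.mean(np.asarray(cyp_lum_ctrl) / np.asarray(atp_lum_ctrl))
    return 100.0 * per_cell / per_cell_ctrl
```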
Determination of cell integrity by LDH assay
To determine a possible concentration- and/or time-dependent influence of DPI on cell integrity, the amount of lactate dehydrogenase (LDH) released from the cytoplasm into the cell culture supernatant was determined in the second and third study parts. For this purpose, the LDH Cytotoxicity Colorimetric Assay Kit II (Biovision GmbH, Ilmenau, Germany) was used according to the manufacturer's instructions. The experiments were performed in 96-well format (SARSTEDT AG & Co. KG, Nümbrecht, Germany) with both cell lines, using triplicates in two independent experiments (n = 6 in sum). The cells were treated either with ascending DPI concentrations (50, 100, 250, 500, 1,000, 2,500, 5,000 nM) for a period of 48 h in the second part of the study, or, in the third part of the study, with higher DPI concentrations (1,000, 2,500, 5,000 nM) for only 30 min before switching to DPI-free medium. After 48 h of cultivation, the amount of cell-released LDH in the supernatant was determined. Completely lysed cells (high control), an LDH preparation from the kit (positive control) and a vehicle control were always included. High-control cell lysis was achieved by adding the cell lysis solution contained in the kit and incubating for 10 min under cell culture conditions. After addition of the reagents described in the manual for LDH detection, the LDH released from the cells was measured with the FLUOstar Omega microplate reader after 45 min of development at OD 450 nm (reference: OD 650 nm).
Viability and cell density determination by FDA/PI fluorescent staining
DPI-induced changes in proliferation behaviour and cell viability were determined by live/dead staining of the cells with fluorescein diacetate (FDA) and propidium iodide (PI), both purchased from Sigma-Aldrich (St. Louis, MO, USA). FDA, a cell-permeant esterase substrate, served as a vitality probe: it is hydrolysed into its fluorescent form by intact and metabolically active cells. PI was used to detect dead cells, as it is a DNA-intercalating fluorescent dye that is not cell-permeant. Viability staining was performed in 24-well format (SARSTEDT AG & Co. KG, Nümbrecht, Germany) with both cell lines, HepG2 and HepG2-CYP3A4, in two independent experiments with n = 2 wells per experimental condition. Cells were seeded and treated with DPI analogously to the procedure described in the study design (see Section 2.2). Briefly, for the 48 h treatment in the second part of the study, the cells were exposed to DPI concentrations of 50, 100, 250, 500 and 1,000 nM. For the third study part, the cells were exposed to higher DPI concentrations (1,000, 2,500, 5,000 nM) for 30 min before switching to DPI-free medium. After 48 h of incubation under cell culture conditions, the medium was replaced with fresh medium containing FDA (1 µg/mL) and PI (2.5 µg/mL). Vital and dead cells were detected with an LSM800 confocal laser scanning microscope and ZEN software for picture post-processing (Carl Zeiss Microscopy GmbH, Jena, Germany) by taking 3 high-resolution pictures of 2 × 2 tiles (n = 6 in sum from two independent experiments; covered area per picture ∼1.5 mm²) from different areas of each well at 10-fold primary magnification. For vitality and proliferation assessment, the cell-covered area was calculated from the pictures using ImageJ software (version 1.53c, National Institutes of Health, Bethesda, MD, USA).
Statistical analysis
For statistical analysis, one-way ANOVA with Tukey's multiple comparisons test was used to calculate differences between groups, using Prism 8 software (GraphPad Software, San Diego, CA, USA). Probabilities lower than 0.05 were considered statistically significant; an equivalent open-source analysis is sketched below.
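For readers reproducing the analysis outside Prism, the same test is available in the Python scientific stack. The sketch below uses made-up placeholder numbers, not data from this study:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder replicate values (e.g. CYP3A4 activity as % of vehicle control)
vehicle = np.array([100.2, 98.7, 101.1, 99.5, 100.8, 99.7])
dpi_50 = np.array([62.1, 58.9, 60.4, 61.7, 59.2, 60.8])
dpi_500 = np.array([21.3, 19.8, 22.6, 20.1, 18.9, 21.0])

f_stat, p_value = stats.f_oneway(vehicle, dpi_50, dpi_500)  # one-way ANOVA

values = np.concatenate([vehicle, dpi_50, dpi_500])
groups = ["vehicle"] * 6 + ["50 nM DPI"] * 6 + ["500 nM DPI"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's post-hoc test
```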
Short-term exposure to high-dose DPI completely inhibits CYP3A4 activity and slightly affects ATP levels
For the experiments with DPI, parental HepG2 and HepG2-CYP3A4 cells with recombinant CYP3A4 overexpression (described previously [44]) were used as cell models. Initially, the main focus was to determine the DPI concentration range showing an inhibitory effect on phase-1 monooxygenase activity after a 30-min treatment. CYP3A4 activity in the HepG2-CYP3A4 cell line appeared slightly decreased already at 5 nM DPI (Fig. 1). Starting at a concentration of 50 nM, DPI caused a significant reduction of CYP3A4 activity (p = 0.0004). When the cells were treated with DPI concentrations of 500 nM and above, a decrease in intracellular ATP levels also became evident, reaching significance at 5,000 nM DPI (p = 0.0015). In this initial part of the study, the parental HepG2 cell line served as a negative control with no detectable CYP3A4 activity. There was no difference in the ATP levels of the two cell lines in the untreated state. No morphological alterations were observed when HepG2-CYP3A4 cells were treated for 30 min with increasing DPI concentrations.
Long-term exposure to DPI inhibits CYP3A4 activity and affects ATP levels and proliferation but not cell integrity
Next, we performed DPI treatments of HepG2 and HepG2-CYP3A4 over a longer period (48 h). In addition, we were interested to see whether CYP3A4 activity and intracellular ATP levels could recover after short-term DPI treatment. For this, cells were treated with DPI concentrations between 1,000 and 5,000 nM for 30 min, followed by 48 h of cultivation in DPI-free culture medium. As before, the morphology of DPI-treated cells was analyzed and CYP3A4 activity as well as intracellular ATP levels were measured. Moreover, a potential cytotoxic effect of DPI on cell integrity was investigated by LDH assay, and cellular viability was analyzed with FDA/PI fluorescent staining.
As with the short-term treatment, DPI showed a concentration-dependent inhibitory effect on the CYP3A4 activity of HepG2-CYP3A4 after 48 h of treatment (Fig. 2). A DPI concentration of 50 nM led to a significant reduction of CYP3A4 activity to about 60% (p = 0.0160); 500 nM was sufficient for almost complete inhibition of CYP3A4 activity. Recovery experiments showed that HepG2-CYP3A4 cells treated with 1,000 nM DPI for 30 min recovered about 30% of CYP3A4 activity during a subsequent 48 h period in DPI-free medium. The recovery capacity was reduced to below 10% with 2,500 and 5,000 nM. The intracellular ATP level was significantly reduced by treatment with high DPI concentrations of 1,000 to 5,000 nM. There were no significant differences between a 30-min and a 48-h DPI treatment; only at 1,000 nM DPI was a tendency towards slight recovery visible. No significant differences could be detected between the two setups or between the two HepG2 cell lines. The experiments further revealed that, despite some DPI effects on ATP levels, the cell integrity of both cell lines was apparently not negatively affected by DPI at any time (Fig. 3). The release of LDH was even slightly higher in the untreated cells and the vehicle controls (significant in HepG2 for all DPI concentrations). Direct comparison of the two cell lines showed only minor differences; only untreated HepG2 and its vehicle control tended to show an increased LDH release compared to HepG2-CYP3A4.
The situation is different for the area covered by vital cells, which was used as a further evaluation parameter. In both cell lines, a comparable reduction of the covered area with increasing DPI concentration was observed. The area covered by vital cells decreased significantly to about 80% after 48 h of treatment with 100 nM DPI in HepG2 (p < 0.0001), whereas in HepG2-CYP3A4 only a slight tendency was observed (p = 0.2710). At higher DPI doses in the range of 250-1,000 nM, a more extensive reduction of cell density to ∼50%, significant in all samples (all p < 0.0001), was visible after 48 h of treatment. The recovery experiments with high DPI doses (1,000-5,000 nM) revealed a concentration dependency, whereby higher DPI doses led to lower cell density. Here, 1,000 nM DPI led to a significant reduction of the hepatocyte-covered area to about 80% (p HepG2 = 0.0018; p HepG2-CYP3A4 < 0.0001). The lowest cell density (∼40%) was observed with 5,000 nM DPI (p < 0.0001 in both cell lines). In none of the experiments could an increased incidence of dead cells caused by DPI be detected.
Discussion
We were interested in evaluating the potential of diphenyleneiodonium (DPI) for the targeted modification of phase-1 monooxygenase activity in cell-based in vitro systems, based on previous results from other groups [13,15,23,39]. HepG2 cells as well as recombinant CYP3A4-overexpressing HepG2 cells were used as hepatocyte model systems for functional and toxicological studies [17,46-50]. HepG2 exhibit low basal CYP activity in vitro and are therefore well suited for recombinant modification with specific CYP activities [44,51]. In the present study, we investigated concentration- and time-dependent effects of DPI both on phase-1 biotransformation and on cell viability; the latter might be detrimental to, or interfere with, HepG2-based in vitro biotransformation studies.
In the first part of the study, we did not find any DPI effects on cell morphology, as analyzed by phase-contrast microscopy. However, the strong CYP3A4 enzyme activity in the HepG2-CYP3A4 model could be significantly inhibited by DPI in a concentration-dependent manner. For a relevant inhibition to approximately 20% of the original CYP3A4 activity of the HepG2-CYP3A4 cells, DPI concentrations of at least 500 nM were required. However, a negative effect on the intracellular ATP level was detectable at higher DPI concentrations, which could have a serious impact on the energy balance and metabolism of hepatocytes. The aim of our study was to investigate not only a concentration-dependent but also a possible time-dependent effect of DPI on phase-1 activity. In addition, toxicological parameters such as cell integrity, viability and proliferation were analyzed to determine to what extent HepG2-CYP3A4 can regenerate phase-1 activity after a short 30-min DPI treatment, and to what extent toxicologically relevant effects emanate from DPI under these conditions.
With regard to the inhibition of CYP activity, there was no time dependence of the DPI effect at 50 nM: after both 30 min and 48 h of DPI treatment, the residual CYP3A4 activity was ∼60% compared to untreated HepG2-CYP3A4. The situation was different at higher DPI concentrations from 500 nM onwards, where, compared to the 30-min treatment (∼20% residual activity), an almost complete inhibition of CYP3A4 activity was achieved after 48 h of DPI treatment. Precisely in this concentration range, DPI mediated significant effects on intracellular ATP levels, meaning that a substantial inhibition of phase-1 activity by DPI might have a negative impact on ATP synthesis. Higher concentrations of DPI did not further reduce the intracellular ATP level after 48 h of treatment. This could indicate that, under the chosen experimental conditions, 500 nM DPI was sufficient for maximum inhibition of CYP3A4 activity and of the respiratory chain of the in vitro cell system used, and that saturation of the corresponding DPI targets was achieved. The data collected on cell integrity, vitality and cell density provide further insight. In the second and third parts of the study, no significant difference between the two cell lines could be detected for any of these parameters, indicating that the genetic modification for recombinant overexpression of CYP3A4 does not significantly affect the DPI mechanism of action or its effect in HepG2. There was a tendency for ATP levels to be slightly increased in HepG2-CYP3A4 compared to the parental cell line when the cells were treated with higher DPI concentrations. Evidently, cell integrity was not altered even by the highest DPI concentrations used, as no increase in LDH activity was detectable in the cell supernatants. This is in agreement with previous studies in which even higher DPI doses were well tolerated over prolonged periods in various in vitro and in vivo models. DPI was even shown to have anti-inflammatory effects by inhibiting NF-κB-mediated free radical formation via NADPH oxidase [26,29,30]. The slight reduction in released LDH at higher DPI concentrations in both cell lines correlates with the reduced cell density induced by DPI. In line with these data, the viability of HepG2 and HepG2-CYP3A4 does not seem to be negatively affected by DPI, as no increased occurrence of PI-positive cells with increasing DPI concentrations could be determined in either cell line. Nevertheless, an effect of DPI on cell viability cannot be completely ruled out, as some of the dead cells might have been lost during the medium change immediately before detection in the FDA/PI assay. However, the results of the LDH assay, in which no increased LDH release could be detected over the 48-h DPI treatment without medium change, contradict this. An indication that even lower DPI concentrations may be sufficient for the abovementioned saturation, and thus complete inhibition of phase-1 activity, is provided by the decreasing cell density with increasing DPI concentrations. The cell density was used as an analytical parameter for the toxicological evaluation of DPI, as no quantification of single cells was possible owing to the HepG2 morphology and the high confluence of untreated cells at the end of the incubation period. It was shown that a 48-h treatment with 250 nM DPI already led to the maximum detected reduction of cell density, to ∼50% of that of untreated cells.
With regard to the detected reduction of the intracellular ATP level after DPI treatment, experimental limitations result in ambiguities in the interpretation of the data. The decreasing intracellular ATP level with increasing DPI concentrations is probably partly due to the lower cell number after DPI treatment: a direct comparison of ATP levels between untreated and treated cells requires a comparable cell number. According to our cell density data, this is no longer given after 48 h of treatment, at least from 100 nM DPI onwards, nor in the case of the short treatment followed by 48 h of cultivation in the third study part at higher DPI concentrations, as the cell density is already substantially lower. Since only the total ATP amount in a complete well could be detected after 48 h, it is conceivable that the influence of DPI on the energy metabolism of the individual cell is less than suggested by the detected ATP level per well. However, it has already been shown that DPI has an inhibitory influence on complex I of the respiratory chain [42], where the FAD cofactor of the mitochondrially localised NADH-ubiquinone oxidoreductase is a target for DPI [23]. In view of these findings on mitochondrial function in different cell types, as well as the observations from our experiments, it is clear that ATP synthesis is directly linked to proliferation [52-54]. The resulting conclusion is that DPI reduces the ATP level within a short period of time, which negatively affects proliferation and results in a reduced cell density after 48 h.
In our studies, a partial recovery of CYP3A4 activity of up to 30% could also be observed after 48 h of cultivation under DPI-free conditions, following an almost complete inhibition by 30 min treatment with 1,000 nM DPI. These observations do not necessarily contradict findings by others concerning irreversible inhibition of DPI targets [13,15], as those measurements were made with protein isolates or microsomes. A cell-based system may be able to resynthesize enzymes and thus restore enzyme activity over time. At 2,500 and 5,000 nM DPI no recovery could be observed: residual phase-1 activity remained below 10% after the 30 min treatment followed by DPI-free cultivation, and ATP levels and cell density were comparable to those of cells treated for 48 h.
Conclusion
The objective of the study was to investigate the potential of DPI as an inhibitor of phase-1 monooxygenase activity for in vitro drug and toxicity studies. Based on the HepG2 and HepG2-CYP3A4 in vitro model systems used, the results show that DPI-mediated inhibition of phase-1 biotransformation can be achieved. DPI can be used as an inhibitor of CYP3A4 activity at concentrations up to 50 nM without inducing any morphological or toxic effects on the cells. At concentrations > 50 nM, cytostatic effects on HepG2 or HepG2-CYP3A4 are to be expected, so that influences on, or interactions with, activity determinations cannot be excluded and must be taken into account accordingly.
Modulation of human telomerase reverse transcriptase in hepatocellular carcinoma
AIM: Most cancer cells acquire immortal capability by telomerase activation. The human telomerase reverse transcriptase gene (hTERT) is considered to be the major determinant of the enzymatic activity of human telomerase, and the hTERT promoter contains several c-Myc binding sites that mediate hTERT transcriptional activation. Few studies
INTRODUCTION
Telomerase is a ribonucleoprotein enzyme that synthesizes G-rich telomeric repeats using its complementary RNA sequence as a template [1,2]. Telomerase is expressed in most human cancers and immortal cell lines but is inactive in normal somatic cell lines or tissue [3][4][5]. Recent reports support the concept that activation of telomerase may be an important and obligate step in the development of most malignant tumors [6,7], including human hepatocellular carcinoma (HCC) [8]. The human telomerase catalytic subunit (hTERT) has been shown to be a rate-limiting determinant of the enzymatic activity of human telomerase [9,10]. Takakura et al identified the proximal 181-bp core promoter region essential for transactivation of hTERT [11]. Their findings suggest that hTERT expression is strictly regulated at the level of the transcription machinery, and that the proximal core promoter, containing an E-box which binds Myc/Max, as well as the 3'-region containing the GC-box which binds Sp1, is required for transactivation of hTERT [12]. Their findings further indicate that c-Myc and Sp1 cooperatively function as the major determinants of hTERT expression, and that the switching functions of Myc/Max and Mad/Max might also play roles in telomerase regulation. Wang et al added further support by showing that Myc induces telomerase both in normal human mammary epithelial cells and in normal human diploid fibroblasts upon introduction of the HPV-16 E6 protein into these cells [13]. Their findings suggest that the ability of c-Myc to activate telomerase may contribute to its ability to promote tumor formation. Furthermore, telomerase activity in estrogen receptor-positive MCF-7 cells was upregulated by treatment with 17β-estradiol [14]. Kyo et al reported that estrogen activated c-Myc expression in MCF-7 cells, and that E-boxes in the hTERT promoter that bind c-Myc/Max played additional roles in estrogen-induced transactivation of hTERT.
By using TRAP assay, we previously measured telomerase activity in surgically resected specimens from 25 cases of hepatocellular and adjacent healthy tissues [15] . Telomerase activity was detected in 21 of the 25 HCC specimens from 25 different cases. This telomerase activity was correlated with human telomerase reverse transcriptase (hTERT) mRNA isoform expression but was poorly related to c-Myc expression in the hepatoma cell line J5 [16] . However, the role of c-Myc in hTERT expression in HCC remains unresolved.
In this study, we explored the relationship between hTERT mRNA regulation and c-Myc expression by RNA in situ hybridization and immunohistochemical staining, respectively. In situ hybridization and immunohistochemistry are semiquantitative methods that can also determine localization. In addition, to determine the cis-elements essential for transcriptional activation of hTERT, luciferase assays were performed with reporter plasmids carrying serial deletions or mutations of the core promoter in the hepatoma cell line J5. The results provide evidence for a role of c-Myc in the regulation of hTERT in hepatoma cells.
Cell lines
All culture media including fetal bovine serum were purchased from Gibco Laboratories (Grand Island, NY). L-glutamine and penicillin/streptomycin were obtained from Sigma (St. Louis, MO).
WI38 cells (normal human fibroblasts) were obtained from the American Type Culture Collection and grown in DMEM containing 2 mmol/L L-glutamine, 50 U/mL penicillin, 50 mg streptomycin, and 100 mL/L fetal bovine serum. J5 [16] was maintained in RPMI 1640 medium containing 3 g/L L-glutamine and penicillin/streptomycin. All cell lines were cultivated in an atmosphere of 50 mL/L CO2 at 37 °C.
Preparation of RNA probes
Total RNA was obtained from the HT29 cells (ATCC, Rockville, MD) by addition of TRIZOL reagent (Life Technologies, Rockville, MD) according to the manufacturer's instructions.
The sense and antisense riboprobes were synthesized from Bam HI-and Eco RV-linearized PCRII/hTERT-145 according to the manufacturer's instructions using T7 and SP6 RNA polymerase, respectively, and labeled with digoxigenin-UTP (DIG RNA Labeling Kit, SP6/T7, Roche Molecular Biochemicals, Mannheim, Germany). Moreover, the housekeeping gene GAPDH was used to confirm the presence of intact RNA within the slides from each sample used for ISH.
RNA in situ hybridization
Formalin-fixed, paraffin-embedded tissue sections (4-µm thick) were deparaffinized with two 10 min washes with xylene and a graded series of alcohols for 3 min each. The deparaffinized tissues were then pretreated with 20 µg/mL proteinase K (Sigma) and 40 µg/mL pronase (Roche Molecular Biochemicals) at room temperature for 30 min. The tissues were then fixed with 40 g/L paraformaldehyde (Sigma) in phosphate-buffered saline at room temperature for 10 min and then acetylated with 2.5 mL/L acetic anhydride in 0.1 mmol/L triethanolamine-HCl (pH 8.0) at room temperature for 10 min.
The slides were then incubated in a moist chamber at 50 °C for 16 h with the hybridization solution containing 0.1 to 0.5 µg/mL digoxigenin-labeled RNA probe. The slides were subsequently washed twice with 50% formamide-2× SSC at 50 °C for 30 min, twice with 2× SSC at room temperature for 15 min, and twice with 0.2× SSC at room temperature for 15 min. The slides were then equilibrated with 1× washing solution for 2 min and incubated with 10 mL/L blocking solution (DIG wash and block buffer set, Roche Molecular Biochemicals) for 10 min. The tissues were incubated with a sheep monoclonal anti-digoxigenin antibody (Roche Molecular Biochemicals) diluted 1:100 in 10 mL/L blocking solution at room temperature for 2 h. After three washes with 1× washing solution (Roche Molecular Biochemicals), the color reaction was carried out by incubation with 1× nitroblue tetrazolium (NBT)/5-bromo-4-chloro-3-indolyl phosphate solution (Roche Molecular Biochemicals) at room temperature overnight. The slides were then counterstained with nuclear fast red for 5 min and mounted with Crystal Mounting reagent (DAKO, Glostrup, Denmark).
Two independent observers evaluated the signal intensity of hTERT expression, which was semiquantitated as strong, moderate, weak, or no staining. Sense and antisense probes were applied to paired serial slides, and the noncoding strand detected by sense probes was used as a negative control.
Immunohistochemistry
Immunohistochemical staining was performed to determine the expression of c-Myc. The immunostaining procedure was performed using the labeled streptavidin-biotin method (LSAB-2 Kit, DAKO). Briefly, after deparaffinization and rehydration as previously described, the tissue was placed in boiling citrate buffer (pH 6; ChemMate, DAKO) twice for 5 min in a microwave oven at 750 W. Quenching of the endogenous peroxidase activity by incubation with 30 mL/L hydrogen peroxide for 10 min at room temperature was followed by incubation with mouse monoclonal antibody NCL-cMYC (Clone 9E11, Novocastra Laboratories Ltd., Newcastle-upon-Tyne, UK) diluted 1:200 at room temperature for 2 h. After washing with Tris-buffered saline containing 1 g/L Tween-20, the specimens were sequentially incubated for 10 to 30 min with biotinylated anti-mouse immunoglobulins and peroxidase-labeled streptavidin. Staining was performed after 10 min of incubation with a freshly prepared substrate-chromogen solution containing 3% 3-amino-9-ethylcarbazole and hydrogen peroxide. Finally, the slides were lightly counterstained with hematoxylin, washed with water, and mounted. Two independent observers assessed the sections. Because the extent of the c-Myc labeling index was heterogeneous, the scoring system included both the staining intensity and the percentage of stained cells [17]. Staining intensity was graded as no staining (0), weak (1), moderate (2), or strong (3). The percentage of tumor cells with c-Myc staining was scored as follows: 1, <5%; 2, 5-20%; 3, 21-50%; 4, >50%. The multiplication values were then grouped into 4 scores as 0 (multiplication values 0, 1), 1 (multiplication values 2, 3), 2 (multiplication values 4, 6), or 3 (multiplication values 8, 9, 12).
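For clarity, the composite score amounts to a simple binning rule. The short Python sketch below (a hypothetical helper, not code from the study) bins the intensity-times-percentage product exactly as described above.

```python
# Composite c-Myc IHC score: staining intensity (0-3) is multiplied by the
# percentage score (1-4), and the product is binned into four final scores.

def cmyc_ihc_score(intensity: int, percent_positive: float) -> int:
    """Return the composite c-Myc IHC score (0-3)."""
    if not 0 <= intensity <= 3:
        raise ValueError("intensity must be graded 0-3")
    # Percentage of stained tumor cells -> percentage score 1-4.
    if percent_positive < 5:
        pct_score = 1
    elif percent_positive <= 20:
        pct_score = 2
    elif percent_positive <= 50:
        pct_score = 3
    else:
        pct_score = 4
    product = intensity * pct_score
    # Grouping of multiplication values used above:
    # 0 or 1 -> 0; 2 or 3 -> 1; 4 or 6 -> 2; 8, 9 or 12 -> 3.
    if product <= 1:
        return 0
    if product <= 3:
        return 1
    if product <= 6:
        return 2
    return 3

# Example: moderate staining (2) in 30% of tumor cells -> product 6 -> score 2.
assert cmyc_ihc_score(2, 30) == 2
```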
PCR amplification and mutation screening of hTERT promoter
One microliter of genomic DNA was obtained as DNA template for use in PCR amplification of the hTERT promoter. The forward primer 5'-CCC ACG CGT GCA TTC GTG GTG CCC GGA GC-3' and the reverse primer 5'-CCC AGA TCT ATC GCG GGG GTG GCC GGG GCC AGG-3' were designed on the basis of a published hTERT promoter sequence [11]. The PCR product was amplified in the presence of 1 µmol primers with Taq DNA polymerase (Takara Shuzo Company, Shiga, Japan) for 35 cycles of 1 min at 95 °C, 1 min at 56 °C, and 1 min at 72 °C. DNA sequencing using the reverse primer was performed directly from the gel-purified PCR product or from individual PCR products subcloned into the pCRII-TOPO vector. The DNA sequences were compared with the wild-type sequence.
5'-CCC AGA TCT ATC GCG GGG GTG GCC GGG GCC AGG GCT TC-3' with the PCR conditions provided by Satoru Kyo (Department of Obstetrics and Gynecology, Kanazawa University, School of Medicine, Ishikawa, Japan) [11]. The PCR product was amplified in the presence of 1 µL primers with TaKaRa Taq DNA polymerase (Takara Shuzo Company, Shiga, Japan) for 30 cycles of 30 s at 96 °C, 45 s at 62 °C, and 7 min at 72 °C, followed by 30 min at 72 °C. The products were confirmed to have correct sequences by nucleotide sequencing, and their quantity and quality were routinely checked by agarose gel electrophoresis. All plasmid DNAs were purified with the QIAquick gel extraction kit (QIAGEN, Hilden, Germany).
Transfection luciferase assay
Transient transfection of luciferase reporter plasmids was performed using LipofectAMINE 2000 (LF2000, Invitrogen), according to the protocol recommended by the manufacturer. In brief, 5 × 10^4 cells were seeded on 24-well plates, cultured overnight, and exposed to transfection mixtures containing 2 µg luciferase reporter plasmid for 4 h at 37 °C. Then, 0.5 mL growth medium was added and the cells were harvested 48 h after transfection. Luciferase assays were performed with the dual-luciferase reporter assay system (Promega) according to the manufacturer's protocols. The pGL3-control plasmid (1 µg/well, Promega) was also transfected into each cell line for better comparison among cell lines with different transfection efficiencies. pRL-SV40 (1 ng/well, Promega), containing the Renilla reniformis luciferase gene, was cotransfected with the hTERT promoter-luciferase constructs (1 µg/well) for normalization of the luciferase activity in each transfection. An MLX microtiter plate luminometer (Dynex Technologies, Chantilly, VA) was used to detect luciferase activity. All experiments were performed at least 3 times for each plasmid, and the average relative luciferase activity is reported.
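The normalization step amounts to dividing each firefly reading by its Renilla co-transfection control and then averaging replicates; the sketch below (hypothetical placeholder values, not data from this study) illustrates the bookkeeping.

```python
# Minimal sketch of dual-luciferase normalization: firefly counts from the
# hTERT promoter construct are divided well-by-well by the Renilla counts
# from the cotransfected pRL-SV40 control, then replicates are averaged.
# All numbers below are made-up placeholders.

firefly = [15200.0, 14100.0, 16050.0]   # hTERT-promoter firefly luciferase (RLU)
renilla = [505.0, 480.0, 530.0]         # pRL-SV40 Renilla luciferase (RLU)

ratios = [f / r for f, r in zip(firefly, renilla)]
relative_activity = sum(ratios) / len(ratios)
print(f"mean relative luciferase activity: {relative_activity:.2f}")
```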
Statistical analysis
To evaluate the relationships among paired groups, the Fisher exact test was performed using SPSS 10.0 software. Additionally, the correlation of paired groups was analyzed using the chi-square test with the SPSS program. A P value < 0.05 was considered statistically significant.
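As an aside, the same test is easy to reproduce outside SPSS; the snippet below (illustrative counts only, not the study's data) runs a Fisher exact test on a 2 × 2 contingency table with SciPy.

```python
# Fisher exact test on a 2x2 contingency table (illustrative counts only).
from scipy.stats import fisher_exact

# Rows: marker A positive/negative; columns: marker B positive/negative.
table = [[12, 5],
         [7, 15]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, P = {p_value:.3f}")
```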
RESULTS
To investigate hTERT expression in HCC tissue, in situ hybridization was applied, and immunohistochemical staining was used to observe c-Myc expression and its relationship with hTERT mRNA. Forty-seven of 57 cases showed weak to strong hTERT mRNA expression. The expression of hTERT mRNA was not related to tumor differentiation (P = 0.815) (Table 1). Forty-three of 57 cases showed c-Myc expression, without relation to tumor differentiation (P = 0.348) (Table 2). In 30 of 57 cases (52%), hTERT mRNA expression was associated with c-Myc protein expression. However, 16 of 57 cases (28%) showed strong hTERT mRNA detection with no c-Myc protein expression, whereas 11 of 57 cases (19%) showed weak hTERT mRNA expression with strong c-Myc detection (P = 0.079) (Figure 1).
Three DNA fragments of different lengths (−1375, −776, and −100 bp) encompassing the hTERT promoter were placed upstream of the luciferase reporter gene, as was an hTERT promoter-Luc construct containing 2 c-Myc mutations (pGL-181 MycMT, a gift from Kyo et al). All constructs were transiently transfected into the HCC cell line J5 for the luciferase study. Luciferase activity decreased between upstream −1375 and −776 bp, but there was no significant difference in luciferase activity between upstream −776 and −100 bp or with the 2 c-Myc mutations (Figure 2).
DISCUSSION
In a previous report, we demonstrated that telomerase activity in the HCC cell line J5 was not related to c-Myc expression [16]. To our knowledge, this is the first study to determine the role of c-Myc in hTERT regulation in HCC. According to the in situ hybridization and immunohistochemistry analyses, only half of the cases with hTERT mRNA expression also showed c-Myc protein expression. Twenty-eight percent of HCC tissue samples had strong hTERT mRNA detection with no or weak c-Myc protein expression, and 19% of HCC tissue samples had no or weak hTERT mRNA expression with strong c-Myc expression. However, several studies reported that Myc expression could transactivate hTERT via 2 E-boxes in cooperation with the Sp1 motif [12]. One of our constructs (a gift from Kyo), which encompassed 4 Sp1 and 2 c-Myc mutations, showed high luciferase activity in the HCC cell line.
This contrasts with the data of Kyo et al [14], in which pGL3-181MycMT, a double c-Myc mutant, exhibited 50% lower luciferase activity than the wild-type pGL3-181 when transfected into the MCF-7 breast cell line. These results implicate c-Myc as a positive regulator of hTERT, though other, as yet undetermined, regulatory elements of hTERT in HCC may exist. For example, the hepatitis B virus pre-S2/S gene has been found to be a cis-activator of the hTERT promoter [19]. Upon transfection of the HBx gene into the HepG2 cell line, telomerase activity and apoptosis were decreased [20]. Further investigation of non-c-Myc regulatory proteins in hepatoma is required in the future.
In our 16 HCC tissue specimens and 1 J5 cell line, hTERT promoter cis-element sequencing was performed. There was a polymorphism site (an A-to-T transversion) just 3 bp away from the distal E-box, which might have affected the binding affinity of c-Myc [21]. This effect might explain why the two E-box mutations still yielded high telomerase activity. However, more evidence is required to support a role for this polymorphic nucleotide in the 2 E-box mutation construct.
Furthermore, the presence of a large CpG island with a dense CG-rich content suggests that DNA methylation and chromatin structure may play roles in the regulation of hTERT expression. Devereux et al demonstrated that the promoter of one hTERT-negative fibroblast cell line, SUSM-1, was methylated at all sites examined [22]. Treatment of SUSM-1 cells with a demethylating agent induced the cells to express hTERT, suggesting a potential role for DNA methylation in negative regulation. This epigenetic mechanism could explain why 19% of HCC samples showed strong c-Myc detection with no or weak hTERT mRNA expression. The role of CpG island methylation in the regulation of hTERT expression merits further study.
In the Kyo et al report, the cis-acting effect of E-boxes and the Myc or Max requirement for transactivation varied among different cell types [12]. Deletion and mutation of the E-box resulted in a significant loss of transcriptional activity in C33A cells, but not in SiHa cells. In C33A cells, expression of ...

In summary, in the present hepatoma tissue study, 50% of hepatomas showed c-Myc overexpression with hTERT transcript upregulation. Other regulatory elements and epigenetic mechanisms may be involved in hTERT transcript regulation, and the proximal c-Myc motif plays a minor role in hTERT gene regulation. The results of the immunohistochemistry and promoter-construct luciferase analyses suggest that, in HCC, hTERT regulation is not restricted to c-Myc and involves other mechanisms.
Celestial fields on the string and the Schwarzian action
This paper describes the motion of a classical Nambu-Goto string in three-dimensional anti-de Sitter spacetime in terms of two 'celestial' fields on the worldsheet. The fields correspond to retarded and advanced boundary times at which null rays emanating from the string reach the boundary. The formalism allows for a simple derivation of the Schwarzian action for near-AdS2 embeddings.
Introduction
This paper is concerned with classical Nambu-Goto strings in three-dimensional anti-de Sitter (AdS) spacetime. The system is interesting for many reasons. In general curved spacetimes, the worldsheet conformal field theory is too complicated to solve. Maximally symmetric spacetimes provide an interesting middle ground between such theories and the free theory in flat target space. Perturbed long strings in AdS spacetime are described by non-linear equations and thus they provide a simple laboratory for studying non-linear phenomena such as wave turbulence and energy cascades [1,2].
Strings in AdS naturally show up in the context of the celebrated gauge/gravity duality [3,4,5]. According to the correspondence, a long string on the gravity side ending on the boundary is nothing but the dual of a flux tube stretching between external quarks in the boundary field theory. Long strings have been studied to calculate, for instance, the drag force on an external quark moving in a thermal plasma [6,7,8].
In a remarkable paper, Maldacena, Shenker, and Stanford discovered a bound on the rate of growth of chaos in thermal quantum systems [9]. This universal "chaos bound" on the Lyapunov exponent is λ L ≤ 2πT where T is the temperature. In holographic theories with classical gravity duals, black holes saturate this bound [10,11] giving support to the conjecture that they are the fastest scramblers in Nature. The papers [12,13] investigated a Brownian particle coupled to a thermal ensemble in a holographic system. The holographic dual object is a string hanging from the boundary of a three-dimensional BTZ black hole geometry. The other endpoint of the string is beyond the event horizon. The Lyapunov exponent can be extracted from out-of-time order four-point functions and it also saturates the chaos bound. Hence, the (sub-)system provides an example for fast scramblers which contain no gravitational degrees of freedom.
Even though the worldsheet theory contains no ordinary gravitational degrees of freedom, it shares other interesting features with theories of quantum gravity, e.g. there are no local off-shell observables and there exist toy versions of black holes. Thus the system has been called the "simplest theory of quantum gravity" [14].
In this paper we will be concerned with the classical string. The equation of motion of the sigma model can be re-written as a generalized sinh-Gordon equation and thus the system is integrable [15,16,17] (see also the reviews [18,19] and related papers [20,21,22,23,24,25,26,27,28,29,30,31]). Integrability allows for an exact discretization of the string equation of motion [32,33,34,35,36,37,38]. The corresponding embeddings are segmented strings which are exact solutions even at finite lattice-spacing. Smooth strings can be approximated by segmented strings to arbitrary accuracy (by increasing the number of segments and by choosing appropriate initial positions and velocities for the individual string segments). Segmented strings generalize piecewise linear strings in flat space [39,40]. The points where the segments join together move with the speed of light. This condition is necessary, otherwise the arising forces would deform the string and it would not be piecewise linear after a while.
A discrete equation of motion has been derived by the author in [37]. Precisely the same equations are satisfied when the worldsheet is coupled to a background two-form whose field strength is proportional to the volume form of AdS 3 [38]. (A certain value for the coupling gives the SL(2) WZW model.) This non-linear discrete equation has appeared in the context of exact discretization in the mathematics literature [41].
In this paper, we will take a smooth limit of segmented strings: we will recast the string equations of motion in terms of two fields b(z − , z + ) and w(z − , z + ) (or black and white) where z ± are null coordinates on the worldsheet. We will see how the b and w fields are related to the sinh-Gordon field. An advantage of this approach is that the string embedding can be obtained directly without solving an auxiliary scattering problem. The b and w fields are R 2,2 analogs of coordinates on the so-called celestial sphere which is the sphere at null infinity in Minkowski spacetime [42] (see also the recent paper [43] for the scattering problem in (2, 2) signature). Hence, we will sometimes refer to these fields as celestial variables.
The paper is organized as follows. Section 2 discusses the Nambu-Goto string in AdS 3 spacetime and describes the discrete equation of motion that is based on boundary time fields. Section 3 introduces the new formalism by taking a continuum limit. It is shown how to compute the string embedding from the new variables and the relationship to the generalized sinh-Gordon equation is clarified. The section also discusses possible actions from which the equations of motion can be derived. Section 4 discusses a few concrete examples where the string embeddings contain cusps. Section 5 derives the celebrated Schwarzian action using the new approach. By putting either of the b or w fields on-shell, an exact worldsheet action is derived which contains the Schwarzian derivative of the other field. The paper ends with a summary of the results and an appendix in which the celestial fields are related to the spinor solutions of an auxiliary scattering problem.
String in AdS 3
Three-dimensional anti-de Sitter spacetime can be immersed into the R^{2,2} linear ambient space. AdS is then the universal cover of the hyperboloid −Y_{−1}² − Y_0² + Y_1² + Y_2² = −1. (1) Global AdS time is the angle on the (Y_{−1}, Y_0) plane. This coordinate has to be "unwrapped" to avoid closed time-like curves. A part of global AdS_3 is covered by the Poincaré patch, whose metric is ds² = (−dt² + dx² + dy²)/y². The coordinates t, x, y are related to Y via a transformation which can be inverted on the hyperboloid. The spatial boundary of AdS lies at y = 0. In terms of the Y coordinate, the boundary is the set of points that satisfy Y² = 0, with the identification Y ≅ cY (where c ∈ R⁺).
The string can be mapped into AdS_3 by first taking the target space to be R^{2,2} and then forcing the string to lie on the hyperboloid (1) by means of a Lagrange multiplier λ. We will work in conformal gauge. In the action, T is the string tension and Y(τ, σ) ∈ R^{2,2} is the embedding function (2). In the non-linear equations of motion (3), z− = ½(τ − σ) and z+ = ½(τ + σ) are the two null coordinates on the worldsheet and ∂− ≡ ∂_{z−}, ∂+ ≡ ∂_{z+}. Due to the gauge choice, the equations are supplemented by the Virasoro constraints.
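For reference, a standard form of the conformal-gauge action and equations for a string on the AdS_3 hyperboloid is the following sketch; the signs and overall normalization are conventional assumptions, not fixed by the text.

```latex
% Standard conformal-gauge sketch (overall normalization assumed):
S = -\frac{T}{2}\int dz^+\, dz^-\,
    \Big[\, \partial_+ Y \cdot \partial_- Y \;+\; \lambda\,\big(Y^2 + 1\big) \Big],
\qquad Y \in \mathbb{R}^{2,2},
\\[4pt]
% Eliminating the Lagrange multiplier gives the equation of motion
\partial_+ \partial_- Y \;-\; \big(\partial_+ Y \cdot \partial_- Y\big)\, Y \;=\; 0 ,
\\[4pt]
% supplemented by the Virasoro constraints of conformal gauge:
\partial_+ Y \cdot \partial_+ Y \;=\; \partial_- Y \cdot \partial_- Y \;=\; 0 .
```

This is consistent with the later statement that Y ∝ ∂−∂+Y on-shell.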
The discrete equation of motion
The motion of the string can be equivalently described using segmented strings [32,33]. The classical theory is integrable which allows for the exact discretization of the equations.
A discrete evolution equation for the normal vectors (or, equivalently, the kink collision points) has been found [32,33] and can be used to build segmented string solutions. In [37] I showed that segmented strings in AdS_3 move according to the discrete equation of motion (4). Here i and j are integer indices labeling lattice points on the string worldsheet. As illustrated in FIG. 1, kink worldlines pass through each of the lattice points (black and white dots). Kinks move with the speed of light both in target space and on the worldsheet. The dots are colored alternatingly, depending on which way the kink moves.
In FIG. 1, the variable a_ij is expressed using the components of the difference vector V ≡ P − Q ∈ R^{2,2}. If we define the function a : R^{2,2} → R as a(X) = (X_{−1} + X_2)/(X_0 + X_1), (5) then we simply have a_ij = a(V), (6) and similarly the other a variables are computed from their respective difference vectors. We will see shortly that they correspond to (advanced or retarded) boundary times.
Note that the equation of motion (4) is invariant under Möbius transformations, which is due to the left SL(2) factor in the AdS_3 isometry group SO(2,2) = SL(2) × SL(2). The a field does not completely specify the string embedding: the right SL(2) group only acts on the "right-handed" variables, which will be denoted by a tilde, ã. The ã field satisfies the same equation as a in (4).
What is the meaning of the a field? The kink difference vectors (i.e. V above) are null and therefore correspond to points where the rays hit the boundary of AdS_3. The difference vector can be expressed in terms of the boundary Poincaré coordinates t and x. If we now consider an embedding of AdS_2 ⊂ AdS_3 with x = 0 on the Poincaré patch, we see that the a field gives the retarded and advanced times at which kink null rays hit the boundary.
Celestial fields
In this section we derive partial differential equations by taking an appropriate continuum limit of the discrete equation of motion (4).
Continuum limit
In general there are different ways to take a continuum limit of discrete equations. In the following we investigate the case where the a variables over black dots and white dots (see FIG. 1) converge to two distinct fields. We will denote them by b(z−, z+) and w(z−, z+) and call them black and white celestial fields, respectively.
Let us consider two adjacent patches as in FIG. 2 and set the a variables according to (7).

By taking the ε → 0 small-lattice-spacing limit, the discrete equation (4) becomes a differential equation for the white field, and similarly for the black field; these are the equations of motion (8) and (9) below. In the continuum limit, the two fields can be computed from the embedding function Y in (2). Note that the black and white fields transform as scalars under worldsheet conformal transformations.
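As a hedged sketch of these equations (not quoted verbatim from the text; the signs are an assumption, fixed by requiring consistency with the AdS_2 gauge b = z+, w = z− used in the Schwarzian section and with the action discussed later), the continuum equations of motion take the form:

```latex
% Hedged sketch of the celestial equations of motion (signs assumed,
% fixed by consistency with the AdS_2 gauge b = z^+, w = z^-):
\partial_+\partial_- b \;=\; \frac{2\,\partial_+ b\,\partial_- b}{\,b - w\,},
\qquad
\partial_+\partial_- w \;=\; \frac{2\,\partial_+ w\,\partial_- w}{\,w - b\,}.
```

Note the structural similarity to the torus-compactification equation quoted in footnote 3 of the Discussion; there, a Wick rotation relates the two forms, so the relative signs differ.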
Area densities e α and e β
Let us define the worldsheet area density e^α and the dual area density e^β. We can determine the area of an elementary patch of a segmented string [37]; for instance, the area of patch #1 in FIG. 1 can be computed in closed form. If we set the a variables according to (7), then in the ε → 0 limit we obtain α. This is an important field since, as we will see, it satisfies the generalized sinh-Gordon equation, and its exponential is the Nambu-Goto Lagrangian. Note that e^α is always non-negative, whereas e^β can be negative in certain regions on the worldsheet.
Target space vs. dual space
Let us consider two adjacent AdS_2 patches as in FIG. 2. If the normal vector N_1 and the a variables are known, then the vertices can be computed as follows. Let us define the antisymmetric matrix M(a, b), where a and b are two parameters and indices are raised by η = diag(−1, −1, +1, +1).

The vertices are then obtained by acting with this matrix, and similarly for the other vertices: the two parameters of the M matrix are the a values sitting on the black and white dots near the vertex (see FIG. 1). Since M^{µν} M_{µκ} = −δ^ν_κ and N_1² = 1, the vertices will be on the AdS hyperboloid, i.e. V_{00}² = V_{01}² = −1. Furthermore, since M is antisymmetric, we will also have V_{00} · N_1 = V_{01} · N_1 = 0, which means that they lie on the AdS_2 patch defined by N_1. Finally, it is easily shown that a(V_{01} − V_{00}) = a_{10}. This means that the expressions for the vertices in (12) are indeed correct.
It can further be shown that, if the a variables are known, the M matrix can be used to switch between target space and dual space.

One can define an analogous antisymmetric matrix which performs the same "reflection" operation, but this time using the tilde variables. In the continuum limit, the a variables converge to the b and w fields, and the construction carries over.
The auxiliary u and v fields
Let us define the auxiliary quantities u and v. We need to find properly discretized versions of their defining equations. If we set the a_ij variables as in (7) and take the ε → 0 limit, we obtain two equations that are in fact equivalent to (8) and (9).
The generalized sinh-Gordon equation
By plugging in the expressions for α, u, and v, it is easy to see that α satisfies the generalized sinh-Gordon equation ∂−∂+α + e^α − uv e^{−α} = 0. (14) This equation was first derived in [15] (see also [16]). Using e^{α+β} = uv, the sinh-Gordon equation can be re-written in terms of α and β. The dual field β satisfies the same equation with α and β exchanged. In a worldsheet region where uv > 0, one can perform a conformal transformation and (locally) set u(z−) = v(z+) = 1. We will call these balanced coordinates on the worldsheet.
In these coordinates we have α = −β. Note that these coordinates typically do not cover the entire worldsheet since uv can change sign.
Finally, note that α can be expressed from b and u, or from w and v. These formulas simplify even further in balanced coordinates, where u = v = 1.
Constraints
The discussion has so far focused on the continuum limit of the a_ij field while neglecting the ã_ij variables and the corresponding b̃ and w̃ fields.
The a and ã fields are not independent, which can be seen as follows. Let us consider a single AdS_2 patch (see patch #1 in FIG. 2). The patch is bounded by four kink worldlines. These are null vectors in R^{2,2} which can be constructed from the a and ã variables. It is easy to check that, e.g., a(p^{(1)}) = a_{00} and ã(p^{(1)}) = ã_{00}, and similarly for the other vectors.

Finally, for the other two vertices we get V_{01} = Ṽ_{01} and V_{11} = Ṽ_{11} if and only if the determinant (17) vanishes.
There is an analogous dual constraint for the difference vectors computed from four adjacent normal vectors. It is easy to see that in the continuum limit (17) and the dual constraint are tantamount to α = α̃ and β = β̃, where α̃ and β̃ are computed from the b̃ and w̃ fields. Although it is not a separate constraint, one can derive an equation for ũ and ṽ as well. The string equation of motion (3) says that Y ∝ ∂−∂+Y. The normal vector satisfies an analogous equation of motion (with a plus sign); thus we have (19). In order to express the right-hand side, similarly to (16), let us now take ansätze with arbitrary proportionality factors λ(z−, z+) and κ(z−, z+). If we plug these expressions into (19), we get 2λκ(∂−w ∂+w̃ + ∂+w ∂−w̃) = 0.

An analogous equation containing b and b̃ can be derived if we start instead with a similar ansatz for ∂−Y and ∂+N. Combining these equations with (15), we get the constraints. In addition to the above constraints, the area density must also be non-negative, which imposes a further condition on the black and white fields.
String embedding
Given a solution of the sinh-Gordon equation, the string embedding can be computed by solving an auxiliary Dirac equation in which α appears as a potential. If the b, b̃, w, w̃ fields are known, there is a simpler, direct way to compute the embedding.

Similarly to (16), we can take an ansatz for ∂+Y with a proportionality factor λ(z−, z+), which can be determined as follows. From the string equation of motion (3), α can be expressed using b and w as in (10). If we plug in the z-derivative of (22), we can compute the norm of the position vector; terms containing partial derivatives of λ drop out. Since the string must lie on the AdS_3 hyperboloid, we have Y² = −1, which determines λ and therefore also the string embedding. However, the resulting expression is complicated and contains second derivatives of b and w. A simpler formula can be obtained by taking the continuum limit of (18) and then converting it into a target space vector via (12), with an appropriate normalization factor.
The action
Consider the Nambu-Goto action (after setting the prefactor to one). If we plug in the expression (10) for e^α, then we obtain an action whose Euler-Lagrange equations for b and w give the equations of motion (8) and (9). Note that only those solutions are allowed that satisfy (21).

The same equations of motion are obtained from a "dual" action. S also appears in expressions for the regularized area of the worldsheet, which equals the expectation value of the dual Wilson loop at strong coupling (see e.g. Appendix B of [44]).
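A consistency check suggests a concrete candidate for the action density. If one assumes e^α = 4 ∂+b ∂−w/(b − w)² (an assumption, consistent with the AdS_2 gauge b = z+, w = z− of Section 5, where it reduces to the AdS_2 conformal factor 4/(z+ − z−)²), the action sketch reads:

```latex
% Hedged sketch: candidate area density and action for the celestial
% fields, assuming e^alpha = 4 d_+b d_-w / (b-w)^2, which reduces to
% the AdS_2 conformal factor 4/(z^+ - z^-)^2 in the gauge b=z^+, w=z^-:
e^{\alpha} \;=\; \frac{4\,\partial_+ b\;\partial_- w}{(b-w)^2},
\qquad
S \;=\; \int dz^+\, dz^-\;
\frac{4\,\partial_+ b\;\partial_- w}{(b-w)^2}.
```

Varying this S with respect to b and w indeed reproduces the equations of motion sketched earlier, which supports the assumed form.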
Various limits
3.9.1 Flat space limit

In the flat space limit, the string worldsheet is mapped into an infinitesimal volume of AdS_3. Thus, the area density e^α vanishes and the generalized sinh-Gordon equation (14) degenerates into the Liouville equation, which can be explicitly solved in terms of two functions f(z−) and g(z+). This solution can be obtained using the black and white fields if we consider the limit ε → 0. Here b_0 and w_0 are arbitrary functions, whereas b_1 and w_1 are determined from the equations of motion. Then, α can be determined from (10); comparing this with (23) gives f = b_0 and g = w_0.
AdS 2 limit
In the flat space limit one had X(z−, z+) ≈ X_0 for a fixed X_0 ∈ R^{2,2} vector in target space. One can consider a similar limit in the dual space of normal vectors, i.e. N(z−, z+) ≈ N_0. In this case, the black and white fields take a special form; note that the dependence of b and w on the worldsheet coordinates is swapped compared to the flat space limit.
Non-linear waves moving in one direction
One can consider non-linear waves moving in one direction on the string in AdS_3. In [45] Mikhailov gave an explicit expression for such solutions in terms of the position function of the string endpoint on the boundary. On the Poincaré patch the solution is written in terms of x_0(τ), which specifies the endpoint of the string as a function of the retarded time τ. The induced metric is locally AdS_2 everywhere. The corresponding black and white fields can be computed explicitly; we see that only w has a special form. These solutions describe right-moving waves. For left-moving waves one has the analogous expressions. Finally, one can consider a "dual" non-linear wave limit.
Examples
The generalized sinh-Gordon equation in balanced coordinates gives the ordinary sinh-Gordon equation. This equation has singular soliton and antisoliton solutions.
There are explicit formulas for solutions containing solitons. The corresponding string embeddings have also been calculated, see [21,25,26,28]. The string has a cusp whenever the e α area density vanishes. This happens precisely at the location of a soliton.
In this section, we compute the black and white fields for a few examples.
Rotating string
The embedding and the corresponding auxiliary fields are given in [20,25].
String with two cusps
The embedding with two cusps is given via two complex functions [25], where γ = 1/√(1 − v²) is the Lorentz factor, T = 2vγτ, and X = 2γσ. Finally, v is the asymptotic speed of the two cusps on the worldsheet in the center-of-mass frame.

The black and white fields can be computed in closed form; the expressions for b̃ and w̃ are too large to be presented here.
The Schwarzian action
The Sachdev-Ye-Kitaev model [46,47], the 2d gravity model of Jackiw and Teitelboim [48,49] and certain other gravity models [50] can be described at low energies in terms of a degree of freedom which is a function of one variable and describes reparametrizations of (imaginary) time. The aim of this section is to derive this effective action for a string worldsheet which is an AdS 2 slice of the AdS 3 target space.
Let us therefore consider a string with a constant normal vector. The induced metric on the worldsheet is AdS_2. Small perturbations around the static embedding behave like a matter field with conformal dimension ∆ = 1. "Conformal symmetry" on the boundary is the reparametrization symmetry b → f(b) and w → f(w). (26) This symmetry is spontaneously broken down to SL(2), i.e. simultaneous Möbius transformations, which still preserve the form of the equations of motion (8) and (9). There is an analogous symmetry breaking associated to the right-handed fields b̃ and w̃.
One can introduce an explicit conformal symmetry breaking by cutting out a piece of AdS_2 in the UV region, as in FIG. 4. Under a conformal transformation (26) this boundary changes, and thus the value of the action also changes. Let us use worldsheet coordinates such that b = z+ and w = z−. Then the change in the action can be computed using (10). In FIG. 4, the original AdS_2 boundary is the vertical black line. On the boundary, retarded and advanced times are equal, i.e. b = w; in our gauge this is at z− = z+. Let us now introduce the UV cutoff (red line in the figure) just inside the boundary, where we have included a varying coupling ϕ(z+). After performing the z− integral between −∞ and the UV cutoff, we obtain an expression which, for small ε, can be Taylor expanded. Assuming f(+∞) = f(−∞), we arrive at the Schwarzian action [47,52,53,51,54]. Related expressions for the renormalized string area in the Euclidean case were obtained in [44] (see also [55,56,57,58]). In the Lorentzian case the above Schwarzian action for the string was computed in [59].
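For orientation, the Schwarzian derivative and the generic form that such boundary actions take are recalled below; the overall normalization is an assumption, not taken from the text.

```latex
% The Schwarzian derivative of f:
\{f(z), z\} \;=\; \frac{f'''(z)}{f'(z)}
              \;-\; \frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2},
\\[4pt]
% Generic form of the resulting boundary action
% (overall normalization assumed):
S_{\rm Sch} \;\propto\; \epsilon \int dz^{+}\; \phi(z^{+})\,\{f(z^{+}),\, z^{+}\}.
```

The Schwarzian vanishes precisely for Möbius transformations f, which is how the residual SL(2) of the broken reparametrization symmetry shows up.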
An exact Schwarzian on the worldsheet
The Euler-Lagrange equations for the Schwarzian action produce a fourth-order differential equation. Note that if w is expressed in terms of b using (9), then (8) gives a fourth-order equation for b.
Let us compute the change in the action if w is kept on-shell and b is transformed as b → f(b). It is interesting to see the appearance of the Schwarzian again. In this case it is integrated over the two-dimensional worldsheet. Note, however, that this expression for the action is exact and it does not require the embedding to be special (e.g. close to an AdS_2 slice).
Discussion
In this paper we have considered a Nambu-Goto string in AdS_3 spacetime. We have mainly used the embedding into ambient space: Y(z−, z+) ∈ AdS_3 ⊂ R^{2,2} (here z± are null coordinates on the Lorentzian worldsheet). By taking a continuum limit of the discrete equation of motion of segmented strings, we have obtained differential equations for a smooth string in terms of two fields, where a(X) = (X_{−1} + X_2)/(X_0 + X_1). Our main results are the corresponding equations of motion (see footnote 3). We define the following auxiliary fields: the area density e^α, the area density in the space of normal vectors e^β, and the u and v fields. Using the equations of motion, it is easy to see that the auxiliary fields satisfy the generalized sinh-Gordon equations [15], ∂−∂+α + e^α − uv e^{−α} = 0, together with the dual equation. There are analogous equations for the right-handed fields that we denote with a tilde. Instead of a, these are defined using ã(X) = (X_{−1} + X_2)/(−X_0 + X_1). The right-handed fields are not independent, because the constraints must be satisfied by the b, w, b̃, w̃ fields. In particular, α = α̃ means that the right-handed variables must give the same induced metric on the worldsheet as the left-handed ones.

3. Note that these equations are the Lorentzian analog of those coming from a torus compactification, i.e. ∂∂̄τ + 2 ∂τ ∂̄τ/(τ − τ̄) = 0, where τ(z, z̄) ∈ C is the complex structure parameter of the torus fiber and z = x_1 + ix_2 is the two-dimensional Euclidean base [60]. In the case of the string in AdS, the base is the Lorentzian worldsheet, which enjoys a conformal symmetry. Furthermore, τ_2 has to be Wick-rotated to purely imaginary values.
A potential application of the celestial variables developed in [37,38] and in the current paper is an alternative calculation of the spectral curve. Preliminary results show that in certain cases it is possible to write down explicit expressions for the quasimomenta without the use of transcendental functions [61].
The theory on the string worldsheet shares many features with theories of quantum gravity [14]. The b and w fields are in a certain sense holographic: they describe the string embedding using coordinates at null infinity. One might hope that the string worldsheet has a holographic description and that the celestial fields play a role in this.
In the flat space (or the dual near-AdS_2) limit the generalized sinh-Gordon equation degenerates and becomes the Liouville equation. This can be solved explicitly in terms of two arbitrary functions (see section 3.9.1). As one can see from the expression for e^α in (10), the celestial fields provide a generalization of the Liouville solution which is applicable to the sinh-Gordon equation. Although the description contains two fields, there is only one physical degree of freedom: the transverse position of the string in the AdS_3 target space.
The string embedding can be computed from a sinh-Gordon solution if one solves an associated Dirac scattering problem, see e.g. [62]. In the Appendix we discuss how the black and white celestial fields can be expressed as ratios of the spinor components of the solution.
In this paper we have used Poincaré coordinates. In global AdS coordinates the form of the equations changes. This can be seen by replacing b → tan b and w → tan w. The new celestial fields now correspond to retarded and advanced boundary times in global AdS coordinates at which null rays emanating from the string reach the boundary. Due to the nature of the immersion of AdS_3 into R^{2,2} in (1), the time coordinates are periodic; therefore b and w are coordinates on a torus. By plugging the tangent functions into (8) and (9), it is easy to see that they satisfy modified equations.
Similar equations are also expected to be valid when the target space is de Sitter spacetime [15,35]. The string worldsheet can also be coupled to a background two-form whose field strength is the volume form of AdS_3 (i.e. the coupling does not destroy any of the continuous symmetries of the system [36]). Independently of the value of the coupling, the discrete equation of motion in (4) remains the same [38] and thus the b and w fields will satisfy the same equations of motion. Finally, a higher-dimensional generalization may be possible since the spinor-helicity formalism has been extended to higher dimensions (see e.g. [63]). In AdS_3, left- and right-handed celestial fields (b, w and b̃, w̃, respectively) decouple. Thus one can concentrate on either pair without having to deal with constraints between them. In general dimensions one does not expect such a decoupling. This makes AdS_3 special.
Acknowledgments
I thank Martin Kruczenski, Douglas Stanford, and the referee for valuable comments on the manuscript. The author is supported by the STFC Ernest Rutherford grant ST/P004334/1.
Appendix
The string embedding can be computed from a sinh-Gordon solution if one solves an associated scattering problem of a spinor field. In this Appendix we discuss how the black and white celestial fields can be expressed as ratios of the spinor components of the solution. The SL(2) Lax matrices are given in [62]; ψ^L_α and ψ^R_α̇ are two-component spinors (α, α̇ ∈ {0, 1}). Each of these systems has two linearly independent solutions, denoted by ψ^L_{αa} and ψ^R_{α̇ȧ} (here a, ȧ ∈ {0, 1} label the independent solutions). We will normalize them so that ε^{βα} ψ^L_{αa} ψ^L_{βb} = ε_{ab} and ε^{β̇α̇} ψ^R_{α̇ȧ} ψ^R_{β̇ḃ} = ε_{ȧḃ}, (28) where ε is the 2 × 2 Levi-Civita tensor. Using the spinor solutions, one can compute the matrix W [62], where Y(z−, z+) is the embedding function and N(z−, z+) is the normal vector function. Let α, α̇ denote the indices of the 2 × 2 matrix, and use the equivalence of SO(2,2) and SL(2) × SL(2) to decompose spacetime indices into spinor indices a, ȧ. The result is a quantity with four spinor indices W = W_{αα̇,aȧ}. The trace computes the string embedding, where M_1 = diag(1, 1). Similarly, one can compute e^{−α/2} ∂±Y by replacing M_1 with another matrix. By taking ratios one can obtain the b and w fields.

One obtains the other field similarly; note that the right-handed spinors have dropped out from the final expressions.
One can also compute the 'right-handed' celestial fields
These formulas relate the celestial fields to the solutions of the linear problem.
Interaction between Enrofloxacin and Three Essential Oils (Cinnamon Bark, Clove Bud and Lavender Flower)—A Study on Multidrug-Resistant Escherichia coli Strains Isolated from 1-Day-Old Broiler Chickens
Avian pathogenic Escherichia coli (APEC) causes a variety of infections outside the intestine. The treatment of these infections is becoming increasingly difficult due to the emergence of multi-drug resistant (MDR) strains, which can also be a direct or indirect threat to humans as consumers of poultry products. Therefore, alternative antimicrobial agents are being sought, which could be essential oils, either administered individually or in interaction with antibiotics. Sixteen field isolates of E. coli (originating from 1-day-old broilers) and the ATCC 25922 reference strain were tested. Commercial cinnamon bark, clove bud and lavender flower essential oils (EOs) and enrofloxacin were selected to assess the sensitivity of the selected E. coli strains to antimicrobial agents. The checkerboard method was used to estimate the individual minimum inhibitory concentration (MIC) for each antimicrobial agent as well as to determine the interactions between the selected essential oil and enrofloxacin. In the case of enrofloxacin, ten isolates were resistant at MIC ≥ 2 μg/mL, three were classified as intermediate (0.5-1 μg/mL) and three as sensitive at ≤0.25 μg/mL. Regardless of the sensitivity to enrofloxacin, the MIC for cinnamon EO was 0.25% v/v and for clove EO was 0.125% v/v. All MDR strains had MIC values for lavender EO of 1% v/v, while drug-sensitive isolates had MIC of 0.5% v/v. Synergism between enrofloxacin and EO was noted more frequently in lavender EO (82.35%), followed by cinnamon EO (64.7%), than in clove EO (47.1%). The remaining cases exhibited additive effects. Owing to synergy, the isolates became susceptible to enrofloxacin at an MIC of ≤8 µg/mL. A time-kill study supports these observations. Cinnamon and clove EOs required up to 1 h, and lavender EO up to 4 h, to completely kill a multidrug-resistant strain as well as the ATCC 25922 reference strain of E. coli. Through synergistic or additive effects, blends containing sub-MIC concentrations of enrofloxacin mixed with a lower EO content required 6 ± 2 h to achieve a similar effect.
Introduction
Poultry are constantly exposed to microorganisms. Several important factors contribute to the relatively high level of microbial contamination in poultry farms, the most important of which are environmental factors (e.g., high temperature, dustiness, excessive moisture), poor hygienic quality of water and feed and a lack of proper bio-assurance [1].
Escherichia coli, a Gram-negative facultative anaerobic rod-shaped bacterium, is part of the natural bacterial flora of the gastrointestinal tract of humans and animals and is therefore considered an important indicator of faecal contamination of water and food [2,3]. However, poultry colibacteriosis can develop as a primary or secondary infection, alongside other viral or bacterial infections [4]. Generally, about 10-15% of E. coli in the gastrointestinal tract of birds belong to the Avian Pathogenic Escherichia coli (APEC) serotypes [5]. The most common form of colibacteriosis in chickens occurs between 3 and 10 weeks of age, with a variety of symptoms, such as navel and yolk sac inflammation, acute sepsis, respiratory and reproductive colibacteriosis, cellulitis, arthritis and osteoarthritis syndrome [6]. Moreover, E. coli can penetrate the eggshell and spread to chicks during hatching, mainly causing yolk sac inflammation and acute septic colibacteriosis, resulting in early high mortality [7].
The treatment or prevention of colibacteriosis is mainly based on antibiotic therapy and autovaccines, but numerous studies indicate that multidrug-resistant strains of E. coli are common [8], and autovaccines are less effective and have not been widely used to date, mainly because APEC strains are very heterogeneous [9]. Imported one-day-old chicks (especially from various hatcheries) may be a source of new serotypes/strains with unknown antibiotic resistance and can be a potential source of dissemination of resistant bacteria in poultry production. In addition, the restriction on the use of antibiotics introduced by EU Parliament and Council Regulation No. 2019/6 [10], also known as the "new veterinary regulation", has forced the search for new ways to improve the level of biosafety.
As an alternative, essential oils (EOs) can be used either alone or in combination with common antimicrobial agents. Essential oils consist of approximately 20-60 volatile components, which are secondary metabolites produced by aromatic plants [11]. These volatile compounds (generally of low molecular weight, below 500 g/mol) belong to various chemical classes, including terpenes, aldehydes, alcohols, ethers, ketones, esters, amines, amides and phenols [12]. The mechanism of the antibacterial and antifungal actions of most of these components is not well established. The most popular opinion is that the interaction of hydrophobic components with lipids present in the cell membrane of microorganisms results in cell death [13]. The combination of antibiotics with essential oils against resistant bacteria may expand the antimicrobial spectrum, reduce the emergence of resistant variants and minimize the use of a single antibiotic [14].
To select essential oils that are effective against E. coli, especially multidrug-resistant strains, a checkerboard study was performed, including three selected commercial essential oils from spices (cinnamon, clove) and flowers (lavender), while simultaneously taking into account the positive interaction with one of the antimicrobial agents most commonly used in poultry, enrofloxacin.
Results
The MIC results for enrofloxacin were in agreement with the disk diffusion method from the official test reports for the selected isolates (Table 1, as well as Table S1 in the "Supplementary Materials" section). According to the recommendations in VET01S, 5th ed. [15], the MIC interpretative criteria for enrofloxacin and E. coli in poultry (expressed in µg/mL) indicate that an isolate is resistant at an MIC ≥ 2 µg/mL and sensitive at ≤0.25 µg/mL of enrofloxacin (0.5-1 µg/mL is classified as intermediate). The results of the analysis are presented in Table 1 and are summarised in a gradient from the isolates most resistant to enrofloxacin to those that are sensitive. MICi: individual minimum inhibitory concentration; MICc: MIC in combination, i.e. the minimum inhibitory concentration of enrofloxacin in the presence of essential oil, or the minimum inhibitory concentration of essential oil in the presence of enrofloxacin; FIC: fractional inhibitory concentration; FICi: FIC index; non-APEC: E. coli serotype other than O1, O2, O18 or O78. Type of interaction: green, synergy (SYN); yellow, additive effect (ADD).
In the case of essential oils, each has its own, usually constant, individual MIC. Cinnamon bark EO was always effective at a concentration of 0.25% v/v (corresponding to 2.56 mg/mL; density 1.025 g/mL at 25 °C). Clove bud EO was always effective at a two-fold lower concentration than cinnamon, 0.125% v/v (corresponding to 1.31 mg/mL at a density of 1.05 g/mL). It is worth emphasising that the activity of these two oils was independent of the level of resistance of E. coli to enrofloxacin. A certain division was observed for the lavender flower essential oil. All multidrug-resistant isolates (MDR-1 to MDR-10) always had MICs for lavender of 1% v/v (equivalent to 8.79 mg/mL at a measured density of 0.879 g/mL), whereas isolates with resistance to single antibiotic groups (SDR-1 to SDR-3) and drug-sensitive isolates (SENS-1 to SENS-3 and ATCC 25922) had MICs of 0.5% v/v (4.4 mg/mL of lavender EO).
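The mg/mL equivalents follow directly from the volume fraction and the oil density; the short Python check below (a hypothetical helper, using the densities quoted above) reproduces the stated values.

```python
# Convert an essential-oil concentration in % v/v to mg/mL using the oil
# density: (percent/100) volume fraction * density [g/mL] * 1000 [mg/g].
def vv_percent_to_mg_per_ml(percent_vv: float, density_g_per_ml: float) -> float:
    return percent_vv / 100.0 * density_g_per_ml * 1000.0

print(vv_percent_to_mg_per_ml(0.25, 1.025))   # cinnamon: 2.5625 ~ 2.56 mg/mL
print(vv_percent_to_mg_per_ml(0.125, 1.05))   # clove:    1.3125 ~ 1.31 mg/mL
print(vv_percent_to_mg_per_ml(1.0, 0.879))    # lavender: 8.79 mg/mL
```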
Among the 153 checkerboards performed (17 E. coli strains × 3 combinations × 3 replications), no antagonism or even neutral interaction was found. The results are presented in Table 1 (best example of checkerboards; n = 51). The vast majority (64.7%) showed synergy between EOs and enrofloxacin; the remaining 35.3% of cases had additive effects. Cinnamon and clove EOs interacted similarly with enrofloxacin, with synergism noted more frequently for cinnamon EO (64.7%) than for clove EO (47.1%). The most common FIC index was 0.5 (further referred to as 'weak' synergy). It is characterised by a "stair-step" pattern on the plate, where the effective amount of both antimicrobials was reduced four times (referred to as 1/4 MIC of enrofloxacin and 1/4 MIC of EO) relative to their individual MICs. A more detailed description is given in Figure S1 in the "Supplementary Materials" section. In contrast, lavender EO was much more prone to interact with enrofloxacin, especially as a strong synergy (82.35% of cases; FICi = 0.155-0.375, and rarely 0.5). An example of strong lavender/enrofloxacin synergy is included (with a description) in the "Supplementary Materials" section (Figure S2). The exceptions were the MDR-3 and SENS-3 isolates, as well as the ATCC 25922 reference strain, for which only additive effects were always recorded, regardless of the essential oil used.
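The FIC index underlying these classifications is a simple ratio sum; the sketch below (a hypothetical helper following the usual checkerboard convention of FICi = FIC_A + FIC_B, with synergy at FICi ≤ 0.5 and additivity at 0.5 < FICi ≤ 1) illustrates the calculation.

```python
# Fractional inhibitory concentration (FIC) index from checkerboard MICs,
# following the usual convention: FIC_A = MIC_A(combination)/MIC_A(alone),
# FICi = FIC_A + FIC_B; synergy if FICi <= 0.5, additive if 0.5 < FICi <= 1.
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(fici: float) -> str:
    if fici <= 0.5:
        return "synergy"
    if fici <= 1.0:
        return "additive"
    if fici <= 4.0:
        return "indifferent"
    return "antagonism"

# 'Weak' synergy example: both agents reduced to 1/4 of their individual MIC.
fici = fic_index(mic_a_alone=2.0, mic_a_combo=0.5,   # enrofloxacin, ug/mL
                 mic_b_alone=1.0, mic_b_combo=0.25)  # lavender EO, % v/v
print(fici, classify(fici))  # 0.5 synergy
```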
Unfortunately, even with synergy between enrofloxacin and EO, high resistance to enrofloxacin (MIC > 16 µg/mL) resulted in a situation in which these isolates still remained insensitive to this antimicrobial agent. Only at MICs corresponding to a minor degree of enrofloxacin resistance (2-8 µg/mL) did such isolates become intermediate or susceptible to enrofloxacin. Due to synergy, strains intermediate to enrofloxacin (SDR-1 to SDR-3) may become susceptible, while the effective concentration of enrofloxacin can also be significantly reduced among the susceptible isolates (SENS-1 and SENS-2). However, as the sensitivity to enrofloxacin increased, the importance of this interaction decreased. This is well demonstrated by the identical results for the SENS-3 isolate and the reference strain ATCC 25922, where only a halving of the effective enrofloxacin concentration was observed (i.e., a reduction from 0.016 µg/mL to 0.008 µg/mL).
Among the serotypes, the highest number of APEC O78 isolates (56.25%) was identified. Other APEC isolates were also identified: two isolates of O1 (12.5%) and one isolate each of O2 and O18 (6.25% each). However, a large number of isolates had unknown serotypes (18.75%).
A time-kill assay was used to study the activity of selected antimicrobial agents (cinnamon, clove and lavender EOs and enrofloxacin, alone and in combination) against two bacterial strains (MDR-9 and ATCC 25922) to determine the bactericidal or bacteriostatic activity of an agent over time. The MDR-9 strain of E. coli was chosen because of its common drug resistance observed in studies of 1-day-old chicks (own experience; see also Table S1 in the "Supplementary Materials" section) and common results of interaction between all EOs under study and enrofloxacin: "weak" synergy (FICi = 0.5) for cinnamon and clove and "strong" synergy for lavender (FICi = 0.375). Figure 1 shows the time-kill results for the MDR-9 strain (expressed as mean viable CFU/mL over time) for each EO and enrofloxacin at the MIC level (continuous lines (1) to (4)) compared with the controls (MHB and MHB with 5% ACN; continuous lines (13) and (14)), and for each synergistic combination (short-dashed lines (5) to (7)) compared with their components applied alone (long-dashed two-pointed lines (8) to (12)).
All samples were inoculated with a shared suspension of the MDR-9 strain (1.68 × 10⁶ CFU/mL; 0 h time point). Cinnamon EO at 0.25% v/v and clove EO at 0.125% v/v completely inactivated the MDR-9 strain within one hour (after 30 min, MDR-9 was barely detectable at 1.2 × 10³ CFU/mL and 2.8 × 10³ CFU/mL, respectively). In contrast, 1% v/v lavender EO required four times as long (i.e., 4 h) to reach this state (viable cells were still detectable after 2 h, although only in trace amounts: 5.03 × 10² CFU/mL). Enrofloxacin at an MIC of 2 µg/mL, after an initially strong reduction in viable cells of the MDR-9 strain to 3.2 × 10⁵ CFU/mL within 1 h, was unable to kill this strain for a long time (on average, 2.2 × 10⁴ CFU/mL were still detected after 12 h). However, after 24 h of incubation, viable cells were no longer detectable.
Among the three EO × enrofloxacin combinations studied, the most effective was the combination of lavender EO (1/4 MIC) and enrofloxacin (1/8 MIC), which killed this enrofloxacin-resistant strain within 4 h. The other two combinations (1/4 MIC of cinnamon or 1/4 MIC of clove plus 1/4 MIC of enrofloxacin) required 8 h. Nevertheless, the efficacy of all combinations derived from the checkerboard method was confirmed, especially when compared to that of slow-acting enrofloxacin administered alone. It is also noteworthy that all the components included in the synergistic blends, when administered alone (lines (8) to (12)), had no bactericidal activity. In addition, visible turbidity of the culture appeared at the end of the incubation period. Bacterial growth without any antimicrobial agents (control samples 13 and 14) reached more than 1.0 × 10¹⁰ CFU/mL at 24 h, which was manifested by the high turbidity of the sample in the tube. The same graph, but on a full logarithmic scale, is available as Figure S3 in the "Supplementary Materials" section.
Similar but significantly faster effects of the antimicrobial agents were recorded for the E. coli ATCC 25922 reference strain (Figure 2; inoculation level: 1.80 × 10⁶ CFU/mL). Once again, cinnamon and clove EOs were the most effective against E. coli because they killed most rapidly, that is, within 30 min after inoculation (after 15 min, only 1.8-2.0 × 10⁵ CFU/mL were recorded). In addition, lavender flower EO at 0.5% v/v killed quickly, with a complete reduction noted after 1 h of incubation. Similar efficacy, although with lower dynamics, was found for a blend consisting of cinnamon EO at 0.0625% v/v (1/4 MIC) and enrofloxacin at 0.008 µg/mL (1/2 MIC); an additive effect in the checkerboard method was noted for this blend. Enrofloxacin at the MIC (0.016 µg/mL) required 4 h to completely eliminate E. coli ATCC 25922. The second blend (enrofloxacin at 1/2 MIC mixed with only 1/8 MIC of lavender EO) was characterised by identical time-to-kill values and similar dynamics. The last blend, consisting of 1/2 MIC of enrofloxacin and 1/4 MIC of clove, had the slowest activity; it required 6 h to completely inactivate this strain. Single components of the three blends (lines (8) to (11)), administered individually, were characterised by bacteriostatic activity in the first 6 h of incubation, followed by the initiation of logarithmic growth typical of E. coli. In contrast, the control samples (lines (12) and (13)) were characterised only by logarithmic growth (up to 2.85 × 10¹⁰ CFU/mL). The same graph, but on a full logarithmic scale, is available as Figure S4 in the "Supplementary Materials" section.
Discussion
Natural plant products (e.g., essential oils) are important sources of novel therapeutic molecules and have various applications; however, they are mainly used in the cosmetic and food industries [16]. Moreover, these small molecules, alone and in combination, have powerful antiseptic, anti-inflammatory, antibacterial, antioxidative and immune-boosting properties [17]. Usually, the major component (determining the chemotype of the EO) reflects the biophysical and biological features of the essential oil from which it was isolated. In addition, their mode of action depends on their concentration and on whether they are tested alone or in combination with other antimicrobial agents [18]. Unfortunately, new classes of antibiotics have appeared only sporadically over the past 10 years, and most large pharmaceutical companies have left the field of new antibiotics and other antimicrobial agents. This task is now chiefly undertaken by academic laboratories and small-to-medium-sized companies [19].
The effects of cinnamon and clove EOs are independent of the degree of antibiotic resistance in E. coli. Because of their large number of constituents, EOs, in contrast to antibiotics, seem to have no single specific cellular target; instead, they "attack" comprehensively, destroying the structure of the cell membrane, causing general leakage of the bacterial cell contents and reducing the expression of certain genes [20]. However, the question is whether their biological effects are the result of the synergism of all molecules or reflect only those of the major molecules present at the highest levels. In the case of lavender EO, a relationship was observed between the decrease in antibiotic resistance and the two-fold stronger effect of this EO, which is in agreement with previous observations by Adaszyńska-Skwirzyńska et al. [21].
The genus Cinnamomum comprises hundreds of species belonging to the Lauraceae family, which are distributed throughout Asia and Australia. Cinnamomum zeylanicum Blume (also known as Cinnamomum verum J. Presl) is an indigenous tree of Sri Lanka (Ceylon), the true source of cinnamon bark and leaf essential oils [22]. Several studies have reported that (E)-cinnamaldehyde (also known as trans-cinnamaldehyde) is the major chemical compound of C. zeylanicum bark essential oil (55-78%), which contains only approximately 1-5% eugenol (as well as 1-5% each of other significant compounds such as linalool, cinnamyl acetate, β-caryophyllene or 1,8-cineole), whereas eugenol (60-80%) is the main compound in the EO extracted from the leaves [23,24]; thus, cinnamon leaf EO mimics the clove bud EO that is also included in this manuscript. The Plant Therapy® EO of chemotype cinnamaldehyde (73.6%) tested in the present study met all the criteria for cinnamon bark essential oil. The spicy taste and fragrance of cinnamon are due to the presence of cinnamaldehyde, which is produced by the absorption of oxygen. As cinnamon bark "matures", it darkens, improving the resinous compounds [25]. In addition to being used as a spice and flavouring agent, cinnamon has been used as an anti-inflammatory, nematocidal, larvicidal, insecticidal, antimycotic and anticancer agent [26]. Due to current restrictions on antibiotics in chicken production, the poultry industry has looked towards novel alternatives. Dietary supplementation of poultry feed with cinnamon as a natural feed additive has beneficial effects on nutrient digestibility, immunity, the blood biochemical profile and particularly on gut health, alleviating the impact of disease and heat stress by maintaining water and electrolyte balance and feed intake [27]. In addition, cinnamon essential oil resulted in an acceptable level of virulence gene downregulation in poultry respiratory bacterial agents, including the Escherichia coli stx1 gene [28]. Unfortunately, this oil belongs to the group of "hot" EOs. Therefore, the maximum content should not exceed 0.1% for topical application (manufacturer's recommendations = 1.03 mg/mL). However, oral administration has not yet been well described, especially in poultry. A study performed by Chowdhury et al. [29] suggested that cinnamon EO at 0.3 g/kg of broiler diet could lower pathogenic bacteria (E. coli and Clostridium sp.) in the intestine and improve gut morphology along with an improved immune response. Pure trans-cinnamaldehyde administered to broilers in drinking water at 0.06% fully inactivated Salmonella after 24 h [30]. The in vitro antibacterial activity of cinnamon bark EO against E. coli has also been investigated. Alizadeh Behbahani et al. [31] used a hydrodistillation extraction technique to obtain cinnamon oil from the dried bark ((E)-cinnamaldehyde, 71.50%, with linalool, 7.00%, β-caryophyllene, 6.40%, eucalyptol, 5.40%, and eugenol, 4.60%, as the main components). The MIC for E. coli ATCC 25922 was 6.25 mg/mL (for comparison, 2.56 mg/mL in our study). Stronger effects of this oil on Gram-positive bacteria have been reported. Results similar to our study (MIC 2.5 mg/mL) were obtained by Raeisi et al. [32], using an essential oil obtained by hydrodistillation of the bark of local cinnamon (E. coli ATCC 43894; main components: cinnamaldehyde, 79.74%; trans-calamenene, 2.62%; benzaldehyde, 1.71%; borneol, 1.73%; cinnamyl acetate, 1.58%), and by Ebani et al. [33], using a commercial FLORA® EO (Pisa, Italy; 56.4% (E)-cinnamaldehyde and 10.3% β-caryophyllene) and an E. coli strain isolated in a case of poultry colibacillosis.
However, other studies have reported a lower MIC for cinnamon EO: 1 mg/mL (92.4% cinnamaldehyde; E. coli ATCC 25922) [34] and 0.625-2.5 mg/mL (chemical composition unknown; E. coli O157:H7) [35]. Nemattalab et al. [36] reported that the MIC of cinnamon EO (97.44% (E)-cinnamaldehyde; origin of the EO unknown) ranged between 155 and 165 µg/mL, which is in agreement with the results of Lu et al. [37] (MIC of 100-400 µg/mL). Surprisingly, El Atki et al. [38] reported a much lower MIC (only 4.88 µg/mL) for cinnamon EO from Chinese cinnamon bark (Cinnamomum cassia) against E. coli ATCC 25922. Only a few studies have investigated the activity of cinnamon oil against APEC serotypes in poultry. A total of 117 E. coli APEC strains (serotypes O78 and O2, as well as O128 and O139) and a commercial cinnamon EO (Erba Vita, San Marino; 88.2% cinnamaldehyde) were used for the analysis by Casalino et al. [39]. Treatment with ≥1 mg/mL of cinnamon EO was effective against all APEC serotypes, regardless of the bacterial cell density used in the experiments (up to 10⁸ CFU/mL). Identical results were obtained by Van et al. [40] for 10 field isolates (commercial Heber EO, Vietnam; 91.9% cinnamaldehyde). Cui et al. [41] also tested a cinnamon oil (bought from J.E International, France) on E. coli ATCC 25922. The main components were eugenol (75.5%) and eugenyl acetate (4.4%). Both components suggest that the EO was derived from cinnamon leaves (not bark). Therefore, their MIC (0.05% v/v) should correspond to the activity of clove oil (in our study, 0.125% v/v) rather than to conventional cinnamon bark EO (in our study, 0.25% v/v).
The Lamiaceae family and the Lavandula genus contain many aromatic and medicinal plants, of which Lavandula angustifolia Mill. is the best-known source of lavender essential oils. L. angustifolia is extensively cultivated in some countries, especially Bulgaria, France, Greece, the United Kingdom, Spain and Morocco. True lavender EO is highly valued because of its attractive fragrance in comparison to spike oil (from L. latifolia) and lavandin oil (from L. x intermedia) [56]. True lavender EO is characterised by a high content of linalool and linalyl acetate (both at a similar level, approx. 20-45%), a moderate amount (0.5-8%) of lavandulyl acetate, lavandulol and terpinen-4-ol, and variable levels of eucalyptol and camphor [57-59]. Once again, the Plant Therapy® EO of chemotype linalyl acetate (31.74%)/linalool (27.62%) tested in the present study met all the criteria for lavender essential oil. Lavender essential oil belongs to the group of weak allergens, and according to manufacturers' recommendations, the maximum content should not exceed 2% (optimum) to 5% (maximum) for topical application. Although lavender oil is quite popular, there are no studies on its activity against poultry isolates. One of the limited studies in this area is the research by Adaszyńska-Skwirzyńska et al. [21], using a commercial Avicenna EO (Wrocław, Poland). The main ingredients were linalool (35.17%) and linalyl acetate (46.25%). Similar to our study, for the five broiler field isolates (serotypes were not specified), the MIC was 1% v/v, and for the reference strain E. coli ATCC 25922 it was 0.5% v/v. The reference strain E. coli ATCC 25922 has been tested more frequently. For this strain, Puvača et al. [60] reported a lower MIC of 2.1 mg/mL (4.4 mg/mL in our study).
However, this commercially available essential oil, purchased from a local distributor in Novi Sad (Serbia), had an atypical composition: carbitol (13.05%) and α-terpinyl acetate (10.93%), followed by linalool (10.71%) and linalyl acetate (9.6%). The Lavandula angustifolia Sevastopolis EO (Romania) from the study by Predoi et al. [61] also had a rather unusual composition: a high linalool content (47.55%) with low levels of linalyl acetate (only 3.75%; there was more camphor (9.67%), 1,8-cineole (8.6%), borneol (8.52%) and terpinen-4-ol (3.8%)). E. coli ATCC 25922 and E. coli ESBL 4493 were susceptible, with MIC values ranging from 0.1% v/v to 0.19% v/v, indicating strong antimicrobial activity of this lavender EO. E. coli O157:H7 cells treated for 2 h with pure linalool and observed by scanning electron microscopy showed significant structural changes [62]. Linalool, after contact with a bacterial cell, first acts on the cell membrane, reducing the membrane potential and disrupting its structure, followed by intracellular leakage of macromolecules (DNA, RNA and proteins). In addition, it inhibits energy-related pathways and the activity of key enzymes, as comprehensively described in a review by Mączka et al. [63]. Linalyl acetate also showed antimicrobial properties against E. coli ATCC 15221, with an MIC of 5.0 mg/mL [64]. There is increasing evidence that the antimicrobial activity of lavender oil depends on the country of origin and the chemotype; for example, the Bulgarian type (51.9% linalool, 9.5% linalyl acetate) was effective against 23 of 25 bacteria, whereas the French type (43.2% linalyl acetate, 29.1% linalool) was effective against only 13 bacteria [65]. This suggests that linalool (alone or, more likely, in synergy with other ingredients) rather than linalyl acetate determines the activity of lavender essential oil.
It is known that serotypes O78, as well as O1 and O2, are commonly associated with infections in chickens (more than 80% of cases) [66]. In our study, similar results were obtained (75% of the isolates). Broiler chicks undergo stress from hatching to their placement on the farm. Each placement of chicks may result in the introduction of different APEC serotypes with unknown drug susceptibilities. Undiagnosed treatment is often ineffective. Such APEC serotypes can survive until the end of rearing and can be detected in broiler meat after slaughter. Moreover, resistance to antibiotics may increase over time. In a study by van der Horst et al. [67], the acquisition of resistance to amoxicillin, tetracycline and enrofloxacin by E. coli was tested by exposing living cells to constant or stepwise increasing concentrations of these compounds. The MIC for enrofloxacin increased from 0.25 µg/mL (the upper sensitivity limit) to a maximum of 512 µg/mL (significantly higher than our extremely resistant MDR-1 isolate, with an MIC of 64 µg/mL) after two weeks of exposure to low concentrations of enrofloxacin. The origin of the resistance of the MDR-3 serotype O1 isolate is well known to the first author of the present study. After the first reported mortalities, one of the broiler breeder flocks started treatment with enrofloxacin, but without success. Doxycycline was administered after a further two weeks, followed by amoxicillin with clavulanic acid. On each occasion, swabs were taken from the internal organs, and E. coli with increasing resistance were isolated. As it later turned out after real-time PCR testing, the primary cause of the problems in the flock was infectious bronchitis virus (IBV). The consequence of this situation was an outbreak of the MDR-3 isolate in 1-day-old chicks originating from this flock. The use of herbs, spices and, increasingly in the light of our research, essential oils during rearing can prevent this complex phenomenon. Recent studies regarding protection against pathogenic E. coli by EOs, with a major focus on the inhibition of toxins and proliferation in food, are well described in the review by Munekata et al. [68]. The ability to disrupt the membrane of E. coli cells and facilitate intracellular compound leakage is well documented in scanning electron microscopy images for both cinnamon EO [31,41] and clove EO [45,52]. Unfortunately, the use of EOs has some disadvantages. First, an intense specific fragrance (even at a dilution of 0.1% v/v), a bitter taste (e.g., eugenol), poor water solubility, high volatility and low stability (e.g., linalool from lavender) may limit the possibility of their use on the farm (in drinking water, in feed or as aromatherapy). Second, the intense scent/taste may be a direct consequence of the MIC, which is usually up to 1000× higher than the MIC for antibiotics, making antibiotics easier and more cost-effective to administer at an effective dose compared to EOs. Therefore, it is preferable to exploit the synergy between EOs and antibiotics. However, there is still relatively limited research in this field. El Atki et al. [38] reported synergy of cinnamon EO with chloramphenicol (FICi = 0.5) and an additive effect with streptomycin (FICi = 1.0) against E. coli ATCC 25922. Similar to our study, Adaszyńska-Skwirzyńska et al. [21] suggested a high potency of lavender EO to interact with enrofloxacin: an additive effect for E. coli ATCC 25922 and for susceptible and intermediate strains (FIC index between 0.56 and 1.0), and synergy with regard to enrofloxacin-resistant field strains.
In our study, we also found that additional synergy is possible between enrofloxacin and cinnamon EO or clove EO. To our knowledge, this is the first study to analyse the effect of cinnamon and clove EOs on enrofloxacin used to control APEC strains in poultry. The synergy between other antibiotics and essential oils has been well documented [69-71].
Enrofloxacin is a chemotherapeutic agent (not an antibiotic sensu stricto) belonging to the fluoroquinolone group. It was synthesised in 1983 from nalidixic acid; the first product was released in 1991 as an oral drug for poultry under the trade name Baytril® (Bayer, Germany). The molecular targets of enrofloxacin are the enzymes that control DNA topology: gyrase and topoisomerase IV [72]. The natural consequence of this process is the inhibition of bacterial DNA replication. Moreover, enrofloxacin is not approved for use as a drug in humans. Antibiotic resistance can be acquired through three main mechanisms: (1) transfer of resistance genes from resistant to susceptible microorganisms; (2) genetic adaptation; and (3) phenotypic adaptation, which primarily increases the expression of existing cellular machinery such as efflux pumps [73]. Multidrug-resistant APEC strains present in poultry products (meat, eggs, etc.) may potentiate the first mechanism, because bacteria can share their genes with each other in a process called horizontal gene transfer. This can occur between bacteria of the same species or between different species via conjugation, transduction or transformation [73]. It can affect not only E. coli but also other enteric pathogens causing food poisoning, such as Salmonella spp., Staphylococcus aureus and Campylobacter sp., or human pathogens that acquire resistance genes. This can be a risk to consumers of poultry products if the products are not properly processed.
The positive interaction between enrofloxacin and essential oils (synergy or an additive effect) has not yet been sufficiently explained. The development of resistance to fluoroquinolones occurs in several ways. The first is the presence of different quinolone resistance (qnr) genes on E. coli plasmids [74], which are capable of protecting the target gyrase and topoisomerase. Leakage of macromolecules (including plasmids) after a cinnamon, clove or lavender "strike" may reduce this protective potential, and enrofloxacin becomes more active than its individual MIC might suggest. Second, efflux pump systems are present. The efflux pump system decreases the intracellular concentration of fluoroquinolones by transporting, for example, enrofloxacin from the cell to the environment [72]. As mentioned earlier, all the essential oils studied in this manuscript significantly damage the structure of the membrane and thus could substantially inactivate the pumps. Enrofloxacin can then act more effectively against E. coli than the initial MIC implies. Third, there is the presence of a gene encoding the aminoglycoside acetyltransferase AAC(6′)-Ib-cr (also plasmid-borne), an enzyme that modifies fluoroquinolones by acetylation [75]. Once again, this resistance is conditioned by the presence of plasmids, making it vulnerable to the changes in cell structure induced by essential oils. Finally, mutations appear in the quinolone resistance-determining region (QRDR) within the subunits forming topoisomerases II and IV. Some of these mutations lead to an abnormal conformation of the subunits and a reduced binding affinity of, for example, enrofloxacin to the DNA-gyrase or DNA-topoisomerase IV complex [72]. Chromosome-borne mechanisms are probably the most resistant to essential oil activity. This may explain the observation that isolates with the highest MIC for enrofloxacin (MIC > 16 µg/mL; MDR-1 to MDR-3) are still classified as resistant to this antimicrobial despite the observed interaction. Unfortunately, the genetic basis of enrofloxacin resistance in the isolates under study has not been determined.
It is important to emphasize that the administration of an essential oil with enrofloxacin does not have a strictly therapeutic purpose, because that is still the role of the antibiotic. The EO is intended to initiate damage to bacterial cells, facilitating the activity of enrofloxacin and, consequently, preventing the emergence of increasing resistance to this antimicrobial agent. The effectiveness of the EOs was confirmed by time-kill curve analysis.
The dynamics of essential oil and/or enrofloxacin activity cannot be assessed during checkerboard incubation, where visual reading occurs only at the end of the incubation period (up to 24 h). Time-kill studies showed an extremely fast activity of cinnamon and clove oils (up to 1 h), as well as a fast effect of lavender oil (up to 4 h). Blends with a lower-than-MIC concentration of enrofloxacin mixed with a lower EO content (usually 1/4 MIC) required 6 ± 2 h to achieve a similar effect. Information from other similar studies is very limited, especially for APEC strains.
The aim of a study by Iseppi et al. [76] was to assess the efficacy and synergistic potential of two essential oils (cinnamon and clove) traditionally used in the food industry to control food-borne pathogens in fresh-cut fruits (including E. coli ATCC 25922). Both the single oils (MIC of 8 µg/mL for cinnamon and 4 µg/mL for clove) and a blend of these oils reduced viable E. coli ATCC 25922 cells by about 2 log CFU/g after 24 h. At the end of the trial (8 days), the EO/EO combination had the best results (a reduction of 7.7 log CFU/g in viable E. coli cells), followed by the single EOs (reductions of approx. 6 log CFU/g). It should be noted, however, that the initial number of bacteria was higher than in our experiment (approx. 10⁸ CFU/g) and the EO content was many times lower than our MICs. A study by Yap et al. [11] investigated the mechanism of action of cinnamon bark EO (MIC of only 0.02% v/v), used singly and in combination with piperacillin, for its antimicrobial and synergistic activity against the well-described β-lactamase TEM-1 plasmid-conferred Escherichia coli J53 R1 strain. Similar to our study, the single components of the blend were ineffective, and the cultures proceeded to unlimited logarithmic growth of viable cells over 4-8 h of incubation; however, the blend itself was bactericidal after 20 h of incubation, meaning that synergy was confirmed. Time-kill curve assays have also revealed bactericidal synergism in combinations of C. zeylanicum bark EO (0.25 mg/mL; 1/10 of our MIC) with rosemary [77]. At this very low concentration of cinnamon EO, a bacteriostatic effect of single cinnamon EO on E. coli ATCC 25922 was noted for the first 12 h and a bactericidal effect after 24 h. In addition, after 24 h of incubation, the synergistic effect of cinnamon bark EO (MIC of 0.8 µg/mL) or cinnamaldehyde (MIC of 0.15 mg/mL) with gentamicin against ESBL-producing E. coli isolates and the ATCC 25922 reference strain was confirmed by time-kill curve experiments [78]. Again, the individual components were ineffective when used individually. Pure eugenol (MIC 0.25 mg/mL) was sufficient to fully eradicate E. coli strain 128 MR within 2 h, whereas 1/2 MIC had only a bacteriostatic effect [79]. In the study by Wang et al. [80], for most rare clinical colistin-resistant and native colistin-sensitive E. coli strains, as well as the ATCC 25922 reference strain, eugenol exhibited a synergistic effect (FICi from 0.375 to 0.5) or an additive effect (FICi = 0.625) with colistin, and a bactericidal effect within 2 h was noted in the time-kill assay. The mode of action of lavender EO, used singly and in combination with piperacillin, against the multidrug-resistant Escherichia coli J53 R1 strain (carrying a plasmid encoding the β-lactamase TEM-1) was studied by Yap et al. [81]. In their time-kill analysis, complete killing of this bacterium was observed within 4 h when lavender EO (0.5% v/v; an MIC similar to that in our study) was combined with piperacillin. Lavender EO and piperacillin administered alone at sub-MIC concentrations did not show a complete killing profile within the time of the study.
Escherichia coli Strains
Sixteen field isolates of Escherichia coli (isolated from the hearts and yolk sacs of 1-day-old broilers that died during transport between 2017 and 2021) were tested. All field isolates were retrieved from a frozen strain bank (live animals were not included in the experiment). The strain collection included ten multidrug-resistant strains (resistant to enrofloxacin and more than three other antibiotic groups but susceptible to colistin; labelled MDR-1 to MDR-10), three strains resistant to various antimicrobial groups (≤3) with known intermediate resistance to enrofloxacin (SDR-1 to SDR-3) and three E. coli strains sensitive to all antimicrobials tested (SENS-1 to SENS-3). Additionally, a non-APEC, antibiotic-sensitive reference strain of E. coli, ATCC 25922 (WDCM 00013; serotype O6; KWIK-STIK™ Plus, Microbiologics, St. Cloud, MN, USA), was used. ATCC 25922, originally isolated from a human clinical sample in the USA, is the recommended reference strain for antibiotic susceptibility and media testing. APEC affinity was tested using diagnostic sera (Sifin Diagnostics GmbH, Berlin, Germany) according to the manufacturer's recommendations. A positive result was confirmed by the Widal reaction (microtitre plate confirmation test) to exclude the effects of any parallel nonspecific agglutination. The following diagnostic sera were used: polyspecific Anti-coli A (preliminary recognition of APEC) and monospecific sera (O1, O2, O18, O78). The ability of the isolates to produce toxins is unknown. Drug resistance results were compiled from official test reports. Before inoculation on a checkerboard, each strain was revived on Columbia agar with 5% sheep blood (Graso, Starogard Gdański, Poland) and incubated for 24 h at +37 °C ± 1 °C.
Antimicrobial Agents
Cinnamon bark oil (from Cinnamomum zeylanicum; origin: Sri Lanka), clove bud oil (from Eugenia caryophyllus; origin: Indonesia) and lavender flower oil (from Lavandula angustifolia; origin: Greece) were used to assess the sensitivity of the above-mentioned E. coli strains to essential oils. The essential oils were purchased from Plant Therapy® (Twin Falls, ID, USA). The density of each essential oil was assessed by weighing 1 mL of oil (mean of 10 replicates). The manufacturer provides gas chromatography-mass spectrometry (GC-MS) reports for individual lots of essential oils (on the official website or on request). In addition, enrofloxacin (100 mg/mL; Medoxil Oral, Medivet S.A., Śrem, Poland), an antimicrobial agent belonging to the fluoroquinolone group, was used. This ready-to-use solution can be administered to chickens in drinking water. It also contains benzyl alcohol (7.5 mg/mL) as an auxiliary substance. The essential oils were diluted in acetonitrile (LiChrosolv®, Supelco, Merck KGaA, Darmstadt, Germany) to create a gradient ranging from 10% to 0.01% v/v. Enrofloxacin was pre-diluted in sterile 0.9% saline (Ecotainer®, B. Braun Medical AG, Sempach, Switzerland), creating a gradient from 5.12 mg/mL to 0.01 µg/mL. This allowed the selection of appropriate serial dilutions for resistant, intermediate and sensitive strains during the construction of the checkerboards.
Checkerboards
The creation of so-called checkerboards enabled the simultaneous estimation of the individual minimum inhibitory concentration (MIC) of each antimicrobial agent as well as the determination of interactions between the selected essential oil and enrofloxacin (three possible combinations per bacterium: cinnamon × enrofloxacin, clove × enrofloxacin and lavender × enrofloxacin). Checkerboards were prepared in 96-well plates with covers (Wuxi Nest Biotechnology, Wuxi, China). As the growth medium, 170 µL of Mueller-Hinton broth (MHB) (GRASO, Gdansk, Poland) was initially added to each well. Horizontal gradients of enrofloxacin were then created (20 µL of each of ten consecutive two-fold dilutions, one per column; dilution in the well 1:10). The eleventh column did not contain enrofloxacin, only saline in identical proportions. The complementary gradient of essential oil was created vertically (10 µL of each of seven consecutive two-fold dilutions; the last, eighth row did not contain EO, only acetonitrile in analogous proportions; dilution in the well 1:20). As high concentrations of acetonitrile (≥10%) can inhibit the growth of Gram-negative rods (own observations), it is important that the final concentration of acetonitrile in the well is not higher than 5%. Such a situation occurs, inter alia, in the last, twelfth column, reserved for controls. In this area of the checkerboard, the wells contained only MHB, saline and acetonitrile (purity/negative control), while bacteria were added to only half of them (growth-positive control). Finally, a bacterial suspension with a final concentration of approximately 1.5 × 10⁶ colony-forming units (CFU) per well (derived from 0.5 McFarland) was added simultaneously, at a ratio of 1:10, to the 92 wells (excluding the four wells intended as negative controls). To prevent the transfer of bacteria between wells during incubation and the loss of some of the culture volume (caused, among other things, by evaporation), the entire plate was tightly covered with a protective breathable film (Axygen™, Thermo Fisher Scientific, Waltham, MA, USA). The plates were incubated for 18 h at +37 °C ± 1 °C. Each checkerboard test was performed in triplicate.
After incubation, false results may be obtained owing to the possibility of false turbidity or sediment formation originating from essential oil at higher concentrations (≥0.1%). To detect the presence of viable bacterial cells, 20 µL of 0.01% resazurin (POL-AURA, Olsztyn, Poland) was added to each well after incubation; the plate was then resealed and incubated for an additional 6 h (maintaining sterility is crucial). Resazurin is dark blue but changes colour to various shades of pink in the presence of live cells. The intensity of the pink colour is directly proportional to the number of live cells that originally survived the first incubation in the presence of a single antimicrobial agent or both. In this case, the MIC was the lowest concentration of an antibacterial agent, expressed in µg/mL (enrofloxacin) or % v/v (essential oils), which completely prevented the colour change (i.e., the last well whose blue colour remained intact).
Interaction between Essential Oils and Enrofloxacin
To determine possible interactions between the three essential oils (cinnamon, clove and lavender) and enrofloxacin, the fractional inhibitory concentration (FIC) was calculated according to van Vuuren and Viljoen [82] using the following formulas: FIC(ENR×EO) = MIC(ENR×EO)/MIC(ENR) (reading in columns) and FIC(EO×ENR) = MIC(EO×ENR)/MIC(EO) (reading in rows), where ENR×EO is the MIC of enrofloxacin in the presence of essential oil; EO×ENR is the MIC of essential oil in the presence of enrofloxacin; EO is the essential oil acting independently; and ENR is enrofloxacin acting independently. The FIC index (FICi) was then calculated for each bacterial strain as the sum of the FICs: FICi = FIC(ENR×EO) + FIC(EO×ENR). The FIC index expresses the interaction of two antimicrobial agents, where the concentration of each test agent in combination is expressed as a fraction of the concentration (corresponding to 1/2 MIC, 1/4 MIC, 1/8 MIC, etc.) that would produce the same effect when used independently. The interpretation of possible in vitro interactions between enrofloxacin and the other antimicrobial agents (cinnamon, clove and lavender essential oils) was as follows: synergistic (FICi ≤ 0.5), additive (0.5 < FICi ≤ 1.0), non-interactive (1.0 < FICi ≤ 4.0) or antagonistic (FICi > 4.0).
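To make the arithmetic concrete, the sketch below (a minimal illustration with hypothetical values, not the software used in this study) computes the FIC values and the FIC index for a single checkerboard and classifies the interaction according to the thresholds above. The example numbers reproduce the FICi = 0.375 reported for lavender EO with a multidrug-resistant isolate.

```python
# Illustrative FIC-index calculation for one checkerboard (hypothetical values).
def fic_index(mic_enr, mic_eo, mic_enr_combo, mic_eo_combo):
    """Return (FIC_enr, FIC_eo, FICi) per van Vuuren and Viljoen."""
    fic_enr = mic_enr_combo / mic_enr  # MIC of enrofloxacin with EO / MIC alone
    fic_eo = mic_eo_combo / mic_eo     # MIC of EO with enrofloxacin / MIC alone
    return fic_enr, fic_eo, fic_enr + fic_eo

def interpret(fici):
    if fici <= 0.5:
        return "synergy"
    if fici <= 1.0:
        return "additive"
    if fici <= 4.0:
        return "non-interactive"
    return "antagonism"

# Example: enrofloxacin MIC 2 ug/mL, lavender EO MIC 1% v/v; in combination,
# 0.25 ug/mL enrofloxacin (1/8 MIC) and 0.25% v/v EO (1/4 MIC).
_, _, fici = fic_index(2.0, 1.0, 0.25, 0.25)
print(round(fici, 3), interpret(fici))   # 0.375 synergy
```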
Time-Kill Analysis
E. coli ATCC 25922 and MDR-9 were challenged with essential oils and enrofloxacin at various concentrations, and bacterial viability was determined at different time points during the incubation period. The final concentrations of essential oils and enrofloxacin in MHB were as follows: cinnamon bark EO, 0.25% v/v (MIC) and 0.0625% v/v (1/4 MIC), identical for both strains; clove bud EO, 0.125% v/v (MIC) and 0.03125% v/v (1/4 MIC), identical for both strains; lavender EO, 1% v/v (MIC) and 0.25% v/v (1/4 MIC) for MDR-9, and 0.5% v/v (MIC) and 0.0625% v/v (1/8 MIC) for ATCC 25922; enrofloxacin, 2 µg/mL (MIC), 0.5 µg/mL (1/4 MIC) and 0.25 µg/mL (1/8 MIC) for MDR-9, and 0.016 µg/mL (MIC) and 0.008 µg/mL (1/2 MIC) for ATCC 25922. In addition, to mimic synergy (MDR-9) or additive-effect (ATCC 25922) conditions, three blends of enrofloxacin with the respective essential oils were created per strain, as shown in Table 1. As controls, pure MHB and MHB with 5% acetonitrile were also tested. Each test tube contained a final volume of 10 mL. Viable cell counts were performed on 100 µL samples collected at 0 min (inoculation), 15 min, 0.5 h, 1 h, 2 h, 4 h, 6 h, 8 h, 12 h and 24 h. To quantify viable cells, the horizontal method for determining the number of E. coli according to the ISO 4833-1:2013 standard [83] was used, with minor modifications. Briefly, immediately after collection, ten-fold serial dilutions of each sample were performed with 0.9% saline (Ecotainer®, B. Braun Medical AG, Sempach, Switzerland) on ice, and 1 mL of each dilution was transferred to a Petri dish. Liquid Mueller-Hinton agar (Graso, Starogard Gdański, Poland) was then added and mixed gently. After complete solidification, the plates were incubated at +30 °C for 72 h, after which the colonies were counted manually. Counts were calculated according to ISO 7218 [84]. The experiment was performed in triplicate.
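As an illustration of the plate-count arithmetic behind the time-kill curves, the short sketch below (assumed helper names; not the laboratory's actual software) converts colony counts from a given ten-fold dilution into CFU/mL and expresses killing as a log10 reduction, using numbers of the same magnitude as those reported for MDR-9.

```python
import math

# Pour-plate arithmetic: colonies counted from 1 mL of a 10^-d dilution
# correspond to CFU/mL = count * 10^d (illustrative helper, simplified
# relative to the weighted-mean rule of ISO 7218).
def cfu_per_ml(colony_count, dilution_exponent):
    return colony_count * 10 ** dilution_exponent

def log10_reduction(cfu_t0, cfu_t):
    if cfu_t <= 0:               # below the detection limit
        return float("inf")
    return math.log10(cfu_t0 / cfu_t)

inoculum = cfu_per_ml(168, 4)    # 1.68 x 10^6 CFU/mL at 0 h
after_30min = cfu_per_ml(120, 1) # 1.2 x 10^3 CFU/mL (cinnamon EO, 30 min)
print(log10_reduction(inoculum, after_30min))  # ~3.15 log reduction
```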
Conclusions
An in vitro study of the antibacterial activity of essential oils showed that cinnamon (MIC of 0.25% v/v), clove (MIC of 0.125% v/v) and lavender EOs (MIC of 0.5-1% v/v) had acceptable antibacterial activity against E. coli isolated from broilers (including multidrug-resistant APEC strains), which makes these antimicrobial agents potential candidates for the treatment of E. coli infections. Lavender oil had the highest percentage of synergy cases with enrofloxacin (82.35%), although cinnamon and clove oils also had this desirable potential (synergy in 64.7% and 47.1% of cases, respectively). In light of our time-kill study and other studies, it can be concluded that long-term administration of essential oils is possible at multiple doses lower than those that would follow directly from the MICs. At the same time, these EOs have a high potential for synergism with antibiotics applied over several days to control APEC strains in chicks. These combinations can be used as alternative therapeutic applications, which could decrease the minimum effective dose of the drugs, thus reducing their possible adverse effects and the costs of treatment. It is also important to consider whether, by analogy with antibiotics, the long-term use of essential oils will result in the acquisition of stepwise resistance.
Figure 1. Time-kill analysis of cinnamon bark, clove bud and lavender flower essential oils administered alone and in synergistic combination with enrofloxacin (Escherichia coli MDR-9 strain).
Figure 2. Time-kill analysis of cinnamon bark, clove bud and lavender flower essential oils administered alone and in combination with enrofloxacin (additive effect; Escherichia coli ATCC 25922 reference strain).
Table 1. Escherichia coli susceptibility test results for enrofloxacin and three essential oils (cinnamon, clove and lavender) and the estimation of interactions (best match within triplicates).
The Mathematical Theory of Molecular Motor Movement and Chemomechanical Energy Transduction
The mathematical formulation of the model for molecular movement of single motor proteins driven by cyclic biochemical reactions in an aqueous environment leads to a drifted Brownian motion characterized by coupled diffusion equations. In this article, we introduce the basic notion for the continuous model and review some asymptotic solutions for the problem. Stochastic, nonequilibrium thermodynamic interpretations of the mathematical equations and their solutions are presented. Some relevant mathematics, mainly in the field of stochastic processes, are discussed.
Introduction
One of the fascinating aspects of protein molecules in the biological world is their ability to perform various, almost "magic-like" tasks [16]. A particular class of proteins known as molecular motors can move linearly along its designated track against an external force by utilizing the biochemical energy source, adenosine triphosphate (ATP). In this manner, the motor proteins act as miniature engines converting chemical energy to mechanical work. Movement of single protein molecules inside a cell, however, has to experience thermal agitation from the aqueous environment in the cytosol. The movement is therefore a Brownian motion with drift (convective diffusion) [6].
Such movement provides the molecular basis for muscle contraction and various cellular transport processes [24,25]. The motor protein kinesin is known to carry out intracellular vesicle transport along microtubules. Various polymerases move along their corresponding templates. All these processes are essential to a living cell. In a muscle cell, the motor protein is called myosin, and its designated track is called an actin filament. The actin filament has a periodic structure of ∼36 nm. Therefore, without loss of generality, we assume that a myosin molecule moves in a force field with a periodic potential energy function U(x): U(x + L) = U(x), where L is 36 nm for actin.
Treating the center of mass of the motor protein as a Brownian particle in the presence of a periodic energy potential, its movement can be modeled by the Smoluchowski equation [63]

$$\frac{\partial P(x,t)}{\partial t} = -\frac{\partial J(x,t)}{\partial x} = \frac{\partial}{\partial x}\left(D\frac{\partial P(x,t)}{\partial x} - \frac{F(x)}{\beta}P(x,t)\right), \tag{1}$$

where D and β are, respectively, the diffusion and frictional coefficients. F(x) = −dU(x)/dx is the force of the potential U, representing the molecular interaction between the motor protein and its track. P(x,t) is the probability density function of the motor protein at position x at time t. The first equality in Eqn. (1) is a continuity equation in which

$$J(x,t) = -D\frac{\partial P(x,t)}{\partial x} + \frac{F(x)}{\beta}P(x,t)$$

is a probability flux. The first term on the right-hand side is associated with the diffusion flux according to Fick's law. The second term is due to the convection associated with an overdamped Newtonian motion: βẋ = F(x). In fact, Eqn. (1) is mathematically equivalent to an overdamped Newtonian motion with a white random force f(t) representing the incessant collisions between the motor protein and the water molecules: βẋ = F(x) + f(t) [63]. The probability density P(x,t), as the solution to Eqn. (1), gives the mean position of the motor,

$$\langle x(t)\rangle = \int x\,P(x,t)\,dx. \tag{2}$$

Moreover, its velocity is related to the flux J(x,t):

$$\frac{d\langle x(t)\rangle}{dt} = \int J(x,t)\,dx. \tag{3}$$

When the motion of a motor protein becomes steady and if we are only interested in the mean velocity, we need only consider the steady-state solution for x ∈ [0, L] with periodic boundary conditions. With this setting, the steady-state velocity of the motor protein movement is v = LJ. Therefore, studies on the steady-state movement focus on the flux J. To fix our terminology, we will refer to a stationary solution as a steady-state, but a stationary solution with zero flux as an equilibrium.
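As a numerical illustration of this equivalence, the following sketch (illustrative parameters; Euler-Maruyama discretization) integrates the overdamped Langevin equation βẋ = F(x) + f(t) for a purely periodic potential. The sample-mean displacement stays near zero, in line with the zero-flux result discussed next.

```python
import numpy as np

# Euler-Maruyama sketch of beta*dx/dt = F(x) + f(t), equivalent to Eqn. (1).
# Units with k_B*T = 1, so D = 1/beta (Einstein relation); parameters illustrative.
rng = np.random.default_rng(0)
L, D, beta, dt, nsteps, npaths = 1.0, 1.0, 1.0, 1e-4, 20000, 500

def F(x):
    # Periodic force F = -dU/dx with U(x) = cos(2*pi*x/L)
    return (2 * np.pi / L) * np.sin(2 * np.pi * x / L)

x = np.zeros(npaths)
for _ in range(nsteps):
    x += (F(x) / beta) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(npaths)

print(x.mean())   # ~0: no net drift in a purely periodic potential
```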
A little mathematical analysis immediately shows that with U(0) = U(L), i.e., $\int_0^L F(x)\,dx = 0$, the stationary solution of (1) allows only zero flux (J = 0). Therefore, in a one-dimensional periodic structure, there is no driving force to bias an inert Brownian particle to move in either direction. The simplest model, given by Eqn. (1), fails to capture the essence of motor protein movement.
The driving force for a motor protein comes from a very important biochemical reaction, occurring inside the protein, called ATP hydrolysis:

$$\text{ATP} + \text{H}_2\text{O} \rightleftharpoons \text{ADP} + \text{Pi}, \tag{4}$$

where H₂O is water, ADP is adenosine diphosphate, and Pi is phosphate. Chemical reactions like this are well characterized by a two-state Markov process (or, more generally, m discrete states)

$$A \ \underset{g}{\overset{f}{\rightleftharpoons}} \ B,$$

where the non-negative f and g are rate constants for the reaction [35]. The deeply insightful work of Sir Andrew Huxley in 1957 was to introduce internal conformational states to the Brownian particle and to couple the (bio)chemical reaction in (4) with the motor protein movement in (1) [25]. This leads to the following equations [3,43,46,26], known as a coupled diffusion system in mathematics [53], for the movement of a Brownian particle with internal structures and dynamics:

$$\frac{\partial P(x,+,t)}{\partial t} = \frac{\partial}{\partial x}\left(D_+\frac{\partial P(x,+,t)}{\partial x} - \frac{F_+(x)}{\beta_+}P(x,+,t)\right) + f(x)P(x,-,t) - g(x)P(x,+,t),$$
$$\frac{\partial P(x,-,t)}{\partial t} = \frac{\partial}{\partial x}\left(D_-\frac{\partial P(x,-,t)}{\partial x} - \frac{F_-(x)}{\beta_-}P(x,-,t)\right) - f(x)P(x,-,t) + g(x)P(x,+,t), \tag{5}$$

where f(x) and g(x) are non-negative periodic functions. In terms of this augmented Huxley model, the motor protein can be either attached to (+) or detached from (−) the filament, with respective interaction energy functions U₊(x) and U₋(x) (in the original Huxley model, U₋(x) = 0), and F_±(x) = −dU_±(x)/dx. D_± (β_±) are the diffusion (frictional) coefficients of the motor protein in the attached and detached states, respectively. The attach-detach transition is coupled to the ATP hydrolysis. Therefore, in (5) the transition rates f(x) and g(x) implicitly carry the chemical driving of reaction (4). At this point, it is fascinating to read the now classic work of Huxley on the theory of muscle contraction, which was written decades before the discovery of the motor protein molecule in its individual form. The Huxley model works as follows (p. 281 of [25]). Initially, the myosin (M) and actin filament (A) are detached; M oscillates (fluctuates) back and forth about its equilibrium position O as a result of thermal agitation (with diffusion coefficient D₋ for the Brownian motion). If A happens to be within the range of positions where f, the rate of association, is not zero, there is a chance that combination will take place (this event happens with a probability characterized by the Markov rate process); when this has happened, the tension in the elastic element (F, i.e., the molecular interaction between M and A) will be exerted on the actin thread by M (and, conversely, the actin will exert the same force with opposite sign on M). As one can see, Eqn. (5) is the mathematical formulation of this model described in words. Readers who check Huxley's paper will find several differences between his original equation and (5). These differences arise because we have formulated a microscopic model for single motor proteins, while a model for muscle contraction has to deal with a large number of myosin molecules. It can be shown that, when stringing many motor proteins into a rigid chain, Huxley's original equation can be derived from Eqn. (5) [26].
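A minimal stochastic sketch of (5) is given below (all rates, forces and numbers are illustrative, and a common D is used in both states for simplicity): a particle alternates between an asymmetric attached potential and free diffusion, with x-dependent attachment localized near the potential minima. Because such f(x) and g(x) need not satisfy detailed balance with respect to U_±, the simulation typically exhibits a nonzero mean velocity.

```python
import numpy as np

# Two-state switching-diffusion sketch of the coupled system (5):
# attached (+): force F_plus from an asymmetric sawtooth-like periodic potential;
# detached (-): U_- = 0, free diffusion. All parameters are illustrative.
rng = np.random.default_rng(1)
L, D, dt, nsteps, npaths = 1.0, 1.0, 1e-4, 50000, 300

def F_plus(x):
    # Steep rise on 20% of the period, gentle fall on the rest (mean force = 0)
    return np.where((x % L) < 0.2 * L, -8.0, 2.0)

def f_rate(x):
    # Attachment rate peaked near the potential minima (x ~ 0 mod L)
    return 200.0 * np.cos(np.pi * (x % L) / L) ** 2

g_rate = 50.0   # detachment rate, constant for simplicity

x = rng.uniform(0, L, npaths)
x0 = x.copy()
attached = np.zeros(npaths, dtype=bool)
for _ in range(nsteps):
    force = np.where(attached, F_plus(x), 0.0)
    x += force * dt + np.sqrt(2 * D * dt) * rng.standard_normal(npaths)
    on = (~attached) & (rng.random(npaths) < f_rate(x) * dt)
    off = attached & (rng.random(npaths) < g_rate * dt)
    attached = (attached | on) & ~off

print((x - x0).mean() / (nsteps * dt))  # typically a nonzero mean velocity
```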
Since the work of Huxley, there have been many investigations following and expanding the basic notion of the Huxley equation. Most notable are the works of Hill [22,23], who provided the Huxley equation with a sound thermodynamic basis, and of Astumian and Bier, and Peskin et al., who arrived at (5) from the Langevin dynamics (stochastic differential equation) point of view [2,43]. There is also a body of literature on Brownian ratchets, whose basic movement is characterized by equations identical to (5) [1,7,11,42,64]. In a recent paper, Bentil [5] applied the Huxley model in conjunction with Langevin dynamics to simulate single-myosin experiments. Qian [46,47,49] established a relationship between coupled diffusions like (5) and the circulation of a Markov process and its entropy production [27,52].
Eqn. (5) is certainly an oversimplified model for any realistic biological system. However, it captures the essence of a theory which unifies microscopic motor protein movement and macroscopic muscle contraction; thereby it provides a concrete model for chemomechanical energy transduction in living organisms. Hence it deserves further detailed investigation as a topic in biophysics and physical chemistry. The mathematical treatment of (5) has mainly been in terms of differential equations. However, the nature of the Brownian motion of the motor protein also calls for a treatment of (5) in terms of stochastic processes. We will review some of the pertinent mathematics in Section 4.
Mathematical Analyses of Several Limiting Cases of Augmented Huxley Equation
While the augmented Huxley Eqn. (5) is difficult to solve in general due to the non-local (non-equilibrium) nature of the steady-state, particular limiting cases can be analyzed to gain insights into the theoretical model. In this section, we present some known and also some new results.
Limit of rapid biochemical cycling
One particularly interesting limiting case is when the biochemical reactions are rapid with respect to the diffusion. Analysis of this limiting case clearly demonstrates how the internal biochemical reaction can give rise to a unidirectional motor protein movement. Hence it demonstrates the validity of mathematical models for motor proteins in terms of coupled diffusion equations.
Consider Eqn. (5). Rapid biochemical reaction means we have the conditional probabilities²

$$\frac{P(x,+)}{P(x)} = \frac{f(x)}{f(x)+g(x)}, \qquad \frac{P(x,-)}{P(x)} = \frac{g(x)}{f(x)+g(x)},$$

and thus P(x) = P(x,−) + P(x,+) satisfies (assuming, for simplicity, D₊ = D₋ = D and β₊ = β₋ = β)

$$\frac{\partial P(x,t)}{\partial t} = \frac{\partial}{\partial x}\left(D\frac{\partial P(x,t)}{\partial x} - \frac{\bar F(x)}{\beta}P(x,t)\right), \tag{6}$$

in which

$$\bar F(x) = \frac{f(x)F_+(x) + g(x)F_-(x)}{f(x)+g(x)}.$$

Note that Eqn. (6) is similar to Eqn. (1) but with one crucial difference: the mean force function now satisfies $\int_0^L \bar F(x)\,dx \neq 0$ even though both F_±(x) satisfy $\int_0^L F_\pm(x)\,dx = 0$. The potential of the average of forces of periodic L potentials is in general not periodic. This indicates that the biochemical reaction (4) provides a drift for the motor protein movement in a periodic system. Finally, the transport flux can be obtained by solving (6):

$$J = \frac{D\left(1 - e^{\psi(L)}\right)}{\int_0^L\int_x^{x+L} e^{\psi(y)-\psi(x)}\,dy\,dx}, \qquad \psi(x) = -\frac{1}{D\beta}\int_0^x \bar F(z)\,dz,$$

where ψ is extended beyond [0, L] by ψ(y + L) = ψ(y) + ψ(L).

²Readers who are familiar with the method of singular perturbation will identify this problem. Since we are seeking a nontrivial solution for a homogeneous equation, the solution is not unique. The functions F_± have at least two zeros at which boundary layers might be expected. For more detail on this type of equation, see [39].
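The quadrature above is straightforward to evaluate numerically; the sketch below (illustrative mean force F̄, with D = β = 1 so that Dβ = 1) discretizes ψ on a grid and returns the steady-state flux J, which is positive whenever the mean force carries a positive bias.

```python
import numpy as np

# Numerical evaluation of the flux quadrature for Eqn. (6); illustrative F-bar.
L, D, n = 1.0, 1.0, 2000
x = np.linspace(0, L, n, endpoint=False)
dx = L / n

Fbar = 1.0 + 2 * np.pi * np.sin(2 * np.pi * x / L)   # tilted periodic mean force
psi_L = -np.sum(Fbar) * dx / D                       # psi(L), the tilt per period
psi = np.concatenate([[0.0], -np.cumsum(Fbar[:-1]) * dx / D])
psi2 = np.concatenate([psi, psi + psi_L])            # extend psi over [0, 2L)

inner = np.array([np.sum(np.exp(psi2[i:i + n] - psi2[i])) * dx for i in range(n)])
J = D * (1 - np.exp(psi_L)) / (np.sum(inner) * dx)
print(J)   # > 0: the mean force's positive bias drives a forward flux
```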
Limit of rapid diffusion
Another limiting case, which has been nicely analyzed by Peskin et al. [43], is when the diffusion is very rapid in comparison to the Markovian transitions (the Brownian ratchet): there are rapid equilibria for P(x,−) and P(x,+).
The mathematical problem is framed as follows. Let us consider the stationary coupled diffusion

$$0 = \frac{d}{dx}\left(D_+\frac{dP(x,+)}{dx} - \frac{F_+(x)}{\beta_+}P(x,+)\right) + \epsilon\big(f(x)P(x,-) - g(x)P(x,+)\big),$$
$$0 = \frac{d}{dx}\left(D_-\frac{dP(x,-)}{dx} - \frac{F_-(x)}{\beta_-}P(x,-)\right) - \epsilon\big(f(x)P(x,-) - g(x)P(x,+)\big),$$

where f(x), g(x), U_±(x) are periodic functions. When the regular perturbation parameter ε = 0, this system of uncoupled diffusions has zero transport flux Φ: each P(x,±) is then a Boltzmann distribution proportional to $e^{-U_\pm(x)/k_BT}$. For small ε, we therefore have Φ = εφ + … and the asymptotics can be obtained by the method of regular perturbation [4]. At this point, it is important to notice the other flux, the circular flux Π, which moves the motor protein forward in the (+)-state but moves it backward in the (−)-state. Therefore, Π does not contribute to the net transport but only generates heat. This type of flux is known as a futile cycle in muscle biochemistry [45]. Perturbation calculations show that [43] both φ and Π are given by quadratures of f, g and the Boltzmann factors $e^{-U_\pm/k_BT}$; in particular, φ = Π = 0 when

$$\frac{f(x)}{g(x)} = c\,e^{\left(U_-(x)-U_+(x)\right)/k_BT}$$

for some constant c > 0 (detailed balance); then the system is reversible and the steady-state is in fact a (thermal) equilibrium. In mathematical terms, the Markov process is symmetric [62]. In applied mathematics, the symmetry leads to the Grasman-Matkowsky variational method [30]. More recent progress can be found in [31].
Limit of the original Huxley model
In the original Huxley model, the interaction between the track and the motor in the detached state is assumed to be zero; hence U₋(x) = 0 in Eqn. (5). Furthermore, it is also generally accepted that D₊ ≪ D₋, i.e., the Brownian motion of the motor protein in the attached state is negligible. The solution of (5) when D₊ → 0 is a problem of singular perturbations [30,36]. Note that, because of the periodic U₊(x), F₊(x) has zeros on [0, L]. Thus the singular perturbation problem has at least two linear turning points [28,38]. The reduced equations when D₊ = 0 are:

$$0 = -\frac{d}{dx}\big(F_+(x)P(x,+)\big) + f(x)P(x,-) - g(x)P(x,+),$$
$$0 = \frac{d^2P(x,-)}{dx^2} - f(x)P(x,-) + g(x)P(x,+), \tag{11}$$

in which we have set D₋ = β₊ = 1 for simplicity. We are particularly interested in finding a condition for the existence of a solution corresponding to unidirectional motion. In the steady state, the total transport flux of the system is a constant:

$$\Phi = F_+(x)P(x,+) - \frac{dP(x,-)}{dx}. \tag{12}$$

Using Eqn. (12) and eliminating P(x,+) from Eq. (11), we then have

$$\frac{d^2P(x,-)}{dx^2} + \frac{g(x)}{F_+(x)}\frac{dP(x,-)}{dx} - f(x)P(x,-) = -\frac{g(x)}{F_+(x)}\Phi, \tag{13}$$

where the inhomogeneous term Φ on the rhs is to be determined by the normalization condition $\int_0^L [P(x,+)+P(x,-)]\,dx = 1$. The boundary conditions for Eq. (13) are again periodic.
Eqn. (13) has singular points at the zeros of F₊(x). A simple local analysis shows that for nonzero Φ and a physically meaningful P(x,−) ≥ 0, the solution to Eqn. (13) has to be nonanalytic at these singularities. This nonanalytic behavior, however, is expected to be obviated in an asymptotic study of the full equation (5) with small D₊.
Entropy Production in Nonequilibrium Steady-State
We now give a brief discussion of the nonequilibrium thermodynamics in terms of Eqn. (5). Hill [22,23] has given an extensive account of this subject. We only discuss some recent developments in connection with the notion of entropy production [40]. The concept of entropy production rate (e.p.r.) can be easily introduced, mathematically, in terms of Eqn. (1). The validity of this novel thermodynamics of nonequilibrium steady-state (NESS), however, remains to be experimentally tested. For more discussion see [46,47,54,56].
Associated with (1) is a functional A[P(x)], called the Helmholtz free energy in thermal physics [58], which in units of k_BT (T is temperature and k_B is the Boltzmann constant) is defined as

$$A[P] = \int P(x,t)\big(U(x) + \ln P(x,t)\big)\,dx. \tag{14}$$

When P(x,t) changes with time according to Eqn. (1), the production rate of total entropy is the rate of decrease of A of the system,⁴ which can be computed:

$$-\frac{dA}{dt} = -\int \big(U(x) + \ln P + 1\big)\frac{\partial P}{\partial t}\,dx = \int \big(U(x) + \ln P + 1\big)\frac{\partial J}{\partial x}\,dx = \int \frac{J^2(x,t)}{D\,P(x,t)}\,dx \;\ge\; 0. \tag{15}$$

The last step used the definition of J given in Eq. (1). This is the second law of thermodynamics in terms of the Smoluchowski equation, which is the nonequilibrium counterpart of a canonical ensemble in statistical mechanics. Eqn. (15) can be generalized to calculate the entropy production rate (e.p.r.), as well as the heat dissipation rate, in a NESS in which S_tot continues to increase [50]. To see this, we note that the entropy of the system is defined as $S = -\int P\ln P\,dx$, so that

$$\frac{dS_{tot}}{dt} \equiv \text{e.p.r.} = \int F(x)J(x,t)\,dx + \frac{dS}{dt},$$

where F = −dU/dx, and the first term on the right-hand side is the heat dissipation rate (h.d.r.). Therefore, in a NESS (dS/dt = 0), the e.p.r. is equal to the h.d.r. Recent work in mathematical physics on entropy production in nonequilibrium systems [19] also focuses on appropriately setting up the nonequilibrium steady-state with an external force and a thermostat simultaneously acting on a Hamiltonian system. The force supplies energy while the thermostat removes heat in order to keep the system in a steady-state with bounded energy. This leads to a random dynamical system in which entropy production is cogently defined [60]. The Smoluchowski approach we adopt has a quite similar setting: a driving force due to chemical reaction (rather than mechanical force) and an implicit thermostat: the Smoluchowski equation is a consequence of an overdamped Newtonian system with a Maxwellian distribution for the velocity of the particles [63]. In fact, the diffusion coefficient D and frictional coefficient β in (1) define the temperature of the thermostat: T = βD/k_B. The interesting mathematical questions are when these random dynamical systems become diffusion processes and whether the entropy production proposed in these studies is equivalent to Eqn. (15) for diffusion processes [54]. The recent work by Lebowitz and Spohn [32] has provided some insights on this problem [52]. In a different approach, to weak random perturbations of Hamiltonian systems in the plane, Freidlin and Wentzell map the system to a diffusion process on a graph, which consists of vertices corresponding to the stationary states and edges corresponding to energy basins [18]. However, this approach remains to be generalized to higher-dimensional Hamiltonian systems, and its relationship to the (Kramers') transition-state rate theory in theoretical chemistry [21] also remains to be elucidated.

⁴From the thermodynamics standpoint, a macromolecule is an isothermal system in contact with a thermal environment (i.e., an aqueous solution) with temperature T. There is energy (chemical and heat), but no material, exchange between the system and its environment (clamped ATP and ADP concentrations in the aqueous solution, and a heat bath). The system and its environment as a whole form an isolated system with a constant total energy, and this holds approximately also for a sufficiently large heat bath. In mathematical terms, we have dA/dt = dE/dt − T dS/dt, where E is the internal energy of the system and S is the entropy of the system. With respect to the system and the environment together as a whole, dA = (dE_tot − dE_env) − T(dS_tot − dS_env) = −T dS_tot + (T dS_env − dE_env) ≈ −T dS_tot. For an isolated system (microcanonical ensemble), ∂E_env/∂S_env = T [58].
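The identity (15) can be checked numerically; the sketch below (illustrative grid, potential and time step; a simple central-difference scheme, not a production solver) evolves Eqn. (1) and compares the decrease of A per step with the integral ∫ J²/(DP) dx.

```python
import numpy as np

# Finite-difference check of Eqn. (15); units with k_B*T = 1, D = beta = 1.
L, D, n, dt = 1.0, 1.0, 200, 1e-6
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
U = np.cos(2 * np.pi * x / L)
F = (2 * np.pi / L) * np.sin(2 * np.pi * x / L)      # F = -dU/dx

P = np.exp(-((x - 0.5) ** 2) / 0.02)
P /= P.sum() * dx                                    # normalized non-equilibrium start

def flux(P):
    dPdx = (np.roll(P, -1) - np.roll(P, 1)) / (2 * dx)   # periodic central difference
    return -D * dPdx + F * P                             # J of Eqn. (1), beta = 1

def A(P):
    return np.sum(P * (U + np.log(P))) * dx              # free energy, Eqn. (14)

for _ in range(5000):
    J = flux(P)
    A_before = A(P)
    P = P - dt * (np.roll(J, -1) - np.roll(J, 1)) / (2 * dx)   # dP/dt = -dJ/dx

# -dA/dt over the last step vs. the entropy production integral of Eqn. (15):
print((A_before - A(P)) / dt, np.sum(J ** 2 / (D * P)) * dx)
```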
It is interesting to note that in a steady state, the flux J is a constant and all the entropy produced becomes dissipated heat. Hence

e.p.r. = (J²/D) ∫₀^L P⁻¹(x) dx ≥ J²L²/D = βJ²L²,

where the last equality is due to the Einstein relation Dβ = 1 in k_B T units, and the inequality follows from the Cauchy–Schwarz inequality together with the normalization ∫₀^L P(x) dx = 1. The βJ²L² term is the energy dissipation due to a deterministic motion with velocity v = LJ and frictional coefficient β in a continuous medium. The inequality indicates the additional dissipation due to random motion. It also indicates that when P(x) = constant, the e.p.r. is at its minimum, i.e., the chemomechanical energy transduction is at its maximal efficiency.
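As a quick numerical illustration of this bound, the following minimal sketch (the steady-state density shape and all parameter values are illustrative assumptions) checks that (J²/D) ∫₀^L P⁻¹(x) dx ≥ βJ²L², with equality approached as P becomes uniform.

import numpy as np

# Illustrative parameters (assumptions; k_B T = 1 so that D * beta = 1)
D, J, L = 1.0, 0.3, 1.0
beta = 1.0 / D
x = np.linspace(0.0, L, 10001)

trap = lambda f: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule

# Any normalized density works; a tilted-cosine steady-state shape is assumed here.
P = 1.0 + 0.5 * np.cos(2.0 * np.pi * x / L)
P /= trap(P)                       # enforce the normalization  ∫ P dx = 1

epr = (J**2 / D) * trap(1.0 / P)   # entropy production rate in the NESS
bound = beta * J**2 * L**2         # dissipation of the equivalent deterministic motion

print(f"e.p.r. = {epr:.4f} >= {bound:.4f}")  # equality holds iff P is constant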
To generalize the concept of entropy production to Eqn. (5) is mathematically straightforward. This yields a novel thermodynamic theory for NESS, which is particularly relevant to motor proteins. The importance of the theory is that it relates the e.p.r. to the heat production of a working motor, a quantity that can be experimentally measured. With some simple algebra, one obtains an e.p.r. that is manifestly non-negative and equal to zero if and only if detailed balance holds [26,49]. In a NESS without external load, h.d.r. = e.p.r. If there is an external load F_ext, then Eqn. (18) can be further decomposed; hence the free energy from ATP hydrolysis is exactly equal to the sum of the work done against the external load, F_ext v, and a positive heat dissipation, Eqn. (19).
Some Relevant Mathematics on Coupled Diffusion
While P(x, t) in Eqn. (1) characterizes a stochastic process X_t in terms of its probability density at each time t, P(x, t)dx = Prob{x ≤ X_t ≤ x + dx}, there is an alternative view of a stochastic process in terms of its trajectories. In this approach, all possible trajectories {X_t | t ≥ 0} form a function space Ω on which a probability density (a measure) is defined. This naturally leads to the notion of a "propagator" (a semi-group), which is formally defined by

P(·, t) = P(·, 0) e^{Lt},

where the exponential operator e^{Lt} acts on the distribution P(x, 0) as a "row vector". The operator L satisfies the backward Kolmogorov equation

∂u/∂t = Lu,

and its conjugate L* satisfies the forward Kolmogorov (or Fokker–Planck) equation

∂P/∂t = L*P.

For a symmetric operator, L* = L. The symbolic relationship between the operator L and the propagator e^{Lt} has been made rigorous in terms of linear operators in a Banach space and is now known as the Hille–Yosida theorem.
Hence the modern theory of Brownian motion has brought several mathematical disciplines to bear [15,29]: partial differential equations, linear operators on function spaces, and harmonic analysis.
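The semi-group picture is easy to probe numerically: discretizing the forward generator L* on a grid turns e^{L*t} into a matrix exponential. The sketch below (pure diffusion on a periodic grid; grid size and parameters are assumptions) propagates an initial distribution and checks that probability is conserved.

import numpy as np
from scipy.linalg import expm

D, n, length = 1.0, 200, 1.0       # diffusion constant and grid (assumed values)
dx = length / n
x = np.arange(n) * dx

# Forward generator L* = D d^2/dx^2, discretized with periodic boundaries
I = np.eye(n)
Lstar = D * (np.roll(I, 1, axis=0) - 2.0 * I + np.roll(I, -1, axis=0)) / dx**2

P0 = np.exp(-((x - 0.5) ** 2) / 0.002)   # narrow initial distribution
P0 /= P0.sum() * dx

Pt = expm(0.05 * Lstar) @ P0             # P(., t) = e^{L* t} P(., 0)
print(f"total probability at t = 0.05: {Pt.sum() * dx:.6f}")   # stays 1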
Feynman-Kac formula
A major result in this area is a relationship between the solution of a boundary value problem (BVP) and the mean exit time (first passage time) of a diffusion process. This relation also has the potential for devising numerical methods for solving BVPs. Let us consider a Brownian motion in a domain D with the diffusion equation (23) and Dirichlet boundary conditions on ∂D. Differentiating Eq. (23) with respect to t, multiplying by t, and integrating over t ∈ (0, ∞) yields Eq. (24). If one interprets P(x, t | x₀) as the probability density for the Brownian particle to be at x at time t having started at x₀ at t = 0, then

T(x₀) = ∫₀^∞ t [ −(d/dt) ∫_D P(x, t | x₀) dx ] dt

is the mean time for the particle started at x₀ to exit D, and the left-hand side of (24) can be simplified accordingly. If we now multiply (24) by a function φ(x), which satisfies Lφ(x) = ψ(x), and integrate over x ∈ D, we find that the quantity

u(x₀) = E_{x₀}[ ∫₀^τ ψ(X_t) dt ],

in which X_t is the probabilistic Brownian motion, τ its exit time from D, and E_{x₀}[·] the average along the paths of X_t started at x₀, satisfies the inhomogeneous ODE

L u(x) = −ψ(x), with u = 0 on ∂D.

This is the well-known Feynman–Kac formula [37]. When ψ(x) ≡ 1, u(x) is the mean exit time of the Brownian motion X_t.
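A minimal Monte Carlo sketch of the ψ ≡ 1 case: for L = D d²/dx² on the interval (0, L), the BVP D u″ = −1, u(0) = u(L) = 0 has the exact solution u(x) = x(L − x)/(2D), which the path average below reproduces. Step size and sample count are assumptions, and the small discretization bias at the boundary is ignored.

import numpy as np

rng = np.random.default_rng(0)
D, L, x0 = 1.0, 1.0, 0.3
dt, n_paths = 1e-4, 2000          # assumed discretization and sample size
step = np.sqrt(2.0 * D * dt)      # dX = sqrt(2D) dW has generator D d^2/dx^2

exit_times = np.empty(n_paths)
for i in range(n_paths):
    x, t = x0, 0.0
    while 0.0 < x < L:            # run each path until it leaves (0, L)
        x += step * rng.standard_normal()
        t += dt
    exit_times[i] = t

print(f"Monte Carlo mean exit time: {exit_times.mean():.4f}")
print(f"exact  x0 (L - x0) / (2 D): {x0 * (L - x0) / (2 * D):.4f}")   # 0.105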
Random evolution
While a deterministic dynamic equation coupled to a white noise is called a stochastic differential equation and leads to Brownian motion [37], a deterministic dynamic (evolution) equation coupled to a Markov process is called a random evolution [44]. This is a class of stochastic models characterized by a system of equations like

∂P₊/∂t = −∂[F₊(x) P₊]/∂x − f P₊ + g P₋,
∂P₋/∂t = −∂[F₋(x) P₋]/∂x + f P₊ − g P₋.    (26)

There is no diffusive motion in the movement. A particle follows the deterministic ODEs ẋ = F_±(x) and jumps between the (+) and (−) states with rates f and g. Equations like (26) have wide applications in chemistry and biology. For example, the stochastic averaging problem in nuclear magnetic resonance spectroscopy is precisely such a problem [41,51]. For a recent work, see [14]. For large f and g, the motion is approximately ẋ = [g F₊(x) + f F₋(x)]/(f + g). For extremely large f and g, the Markovian process approaches a rapidly varying white noise and (26) again approaches a diffusion equation [51]. In spectroscopy, this corresponds to two distinct spectral lines merging into a single broad peak.
For small f and g, if both F₋(x) and F₊(x) have zeros, then the motion of the particle is still qualitatively simple: the particle will stay at a (+) fixed point, jump to (−), relax to a (−) fixed point, and stay there until jumping back to (+) and relaxing either to the original (+) fixed point or to another (+) fixed point. If the two F's are arranged appropriately, the particle can be continuously, unidirectionally transported, step by step, as demonstrated in [2] and in the simulation sketch below.
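The stepping mechanism is easy to reproduce numerically. In the sketch below (all force fields, rates and step sizes are illustrative assumptions), each internal state carries an asymmetric sawtooth force whose stable zeros are arranged so that every (+) → (−) → (+) cycle advances the particle by one period, with no diffusion anywhere in the dynamics.

import numpy as np

rng = np.random.default_rng(1)

def saw_force(u):
    # Force -V'(u) of an asymmetric sawtooth potential with period 1:
    # stable zero at u = 0 (mod 1), unstable zero at u = 0.45 (mod 1).
    u = u % 1.0
    return -1.0 / 0.45 if u < 0.45 else 1.0 / 0.55

F = {+1: lambda x: saw_force(x),
     -1: lambda x: saw_force(x - 0.5)}    # (-) landscape shifted by half a period

x, sigma = 0.0, +1
dt, rate, n_steps = 1e-3, 0.5, 1_000_000  # jump rates f = g = 0.5 (assumed)
for _ in range(n_steps):
    x += F[sigma](x) * dt                 # purely deterministic drift
    if rng.random() < rate * dt:          # Markov jump between internal states
        sigma = -sigma

print(f"position after t = {n_steps * dt:.0f}: x = {x:.1f}")  # advances ~ +1 per cycle

Because the basins of attraction of the two force fields overlap in only one direction, an interrupted relaxation merely stalls the particle; it never steps backwards, so the motion is strictly one-directional in the slow-switching limit.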
One insight from this discussion is that, though Eqn. (11) has no appropriate stationary solution, the time-dependent solution should be well behaved. This suggests that one should study the time-dependent version of (11) rather than its stationary solution (Eqn. 13).
Small diffusion and the theory of large deviations
Is the dynamics of the degenerate equation (26) the limiting behavior of Eqn. (5) when D_± → 0? This is clearly an important mathematical question, which also has significant relevance to the modeling of muscle contraction. As we have stated, one way to address this question is to develop a complete singular perturbation theory for (5). There is, however, also a stochastic approach, called the theory of large deviations [13], which in recent years has thrown much light on the problem. A combination of both approaches is undoubtedly desirable. This technically very demanding task has been carried out on several occasions, for example in [34]. For a review, see [61].
Deterministic vs. Stochastic Motion of Molecular Motors
While there is no doubt that motor protein movement is a drifted Brownian motion, the extent of the randomness in the motion can be quantitatively characterized according to the mathematical model. Let us again consider Eqn. (5), in which there is no force in the detached state (−). As we have discussed above, in the limit of both D_± = 0, the motion will be trapped at the zeros of F(x). This indicates the importance of a nonzero D₋ for the motor movement in this model, as has been repeatedly pointed out by Peskin et al. [42,43]. On the other hand, in a random evolution model with appropriately arranged forces in both states, a motor protein can move strictly in one direction; diffusion plays no role in this mechanism. These two different modes of movement correspond nicely to the "Brownian ratchet" and the "power stroke" of the biochemical literature. Whether a motor protein in fact moves back and forth with a drift, or almost unidirectionally in consecutive steps, can be quantitatively analyzed. Until now, there has been no quantitative means to differentiate these two types of motion; the present theory offers a quantitative method to address this issue. Taking Eqn. (1) as an example, we can introduce a path functional measuring the total movement of the protein, in which the first term is associated with the Brownian motion and the second term is associated with the unidirectional movement. Hence their ratio quantitatively characterizes the mode of the motor movement. This integral is known as the action in the theory of large deviations [18].
Future Work
While the detailed mathematical analyses remain to be carried out for models of single motor movement, the mathematical analysis of a chain of motor proteins is largely unexplored, except for the completely rigid chain of motors (the Huxley model). The general theory can be developed by connecting N motor proteins by springs. Such a "bead-and-spring" model has been the theoretical foundation of polymer physics [12,48]; the late Professor P. J. Flory was awarded the Nobel Prize in chemistry in 1974 for his contribution to this theory. The difference, however, is that a polymer is an equilibrium system, while a chain of motors is a "living creature". Let us denote the positions of the N motor proteins by x₁, x₂, ..., x_N, and the corresponding internal states by σ₁, σ₂, ..., σ_N, where σ_k = 0, 1 for the detached and attached states of the kth motor. We therefore have a dynamic equation, Eqn. (28), for the probability P(x₁, σ₁, x₂, σ₂, ..., x_N, σ_N, t), in which neighbouring motors are coupled through a spring constant η. A computational analysis of such a model can be found in [9]. In a recent mathematical analysis, a deterministic counterpart of this system, a chain of beads and springs in a periodic force field, has been shown to exhibit globally phase-locked motion [55,57]. Eqn. (28) is an N-particle system which can be subjected to a mean-field treatment, as for the N-particle Schrödinger equation. As in the genesis of the nonlinear Schrödinger equation [59], such a treatment will lead to a nonlinear diffusion equation [20], opening a possible new mathematical approach to the problem of muscle contraction. In connection with the theory of probability, this is an interacting particle system with a nonequilibrium (Gibbsian) stationary state [33], and is a natural application for the theory of large deviations [10].
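A Langevin-level sketch of such a chain (trajectories rather than the full probability evolution of Eqn. (28)) is easy to write down. All parameter values, the tilted periodic force field, and the use of identical mobilities below are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(2)
N, eta, D, dt, a = 10, 5.0, 0.05, 1e-3, 1.0      # beads, spring constant, noise, rest length

x = np.arange(N) * a                             # chain initially at rest spacing
force = lambda y: 0.4 - np.sin(2.0 * np.pi * y)  # tilted periodic force field (assumed)

for _ in range(100_000):                         # overdamped Euler-Maruyama steps
    spring = np.zeros(N)
    spring[1:]  += eta * (x[:-1] - x[1:] + a)    # pull from the left neighbour
    spring[:-1] += eta * (x[1:] - x[:-1] - a)    # pull from the right neighbour
    x += (force(x) + spring) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)

print(f"centre of mass after t = 100: {x.mean():.2f}")   # collective drift of the chain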
Effect of Direct Education on Breast Self Examination Awareness and Practice among Women in Bolu, Turkey
Breast cancer (BC) affects women across the world. It accounts for 14% of cancer deaths and 23% of new cancer cases. Sixty percent of cancer deaths occur in developing countries (Juan et al., 2004; Ozmen, 2011; Jamal et al., 2011). According to national data from the Ministry of Health of the government of Turkey in 2008, BC is now the most common cancer among women, with a frequency of 41.6 cases per 100,000 individuals (Republic of Turkey, 2011). Screening programs allow for early diagnosis of cancer and are crucial for better prognosis and long-term survival. The Ministry of Health recommends breast self examination (BSE) and clinical breast examination (CBE) for all women beginning at age 20 years. For at-risk women between the ages of 40 and 49 years, screening intervals are determined by the treating physician. Screening is recommended biannually for women over the age of 50 years regardless of the presence of risk factors (Republic of Turkey, 2008). However, insufficient data are available on the approach to breast health, information, behavior, and attitude of Turkish women. We evaluated the effectiveness of the designed training program by determining its effect on the BSE practices of women in our family practice office.
Introduction
Breast cancer (BC) affects women across the world. It accounts for 14% of cancer deaths and 23% of new cancer cases. Sixty percent of cancer deaths occur in developing countries (Juan et al., 2004; Ozmen, 2011; Jamal et al., 2011). According to national data from the Ministry of Health of the government of Turkey in 2008, BC is now the most common cancer among women, with a frequency of 41.6 cases per 100,000 individuals (Republic of Turkey, 2011).
Screening programs allow for early diagnosis of cancer and are crucial for better prognosis and long-term survival. The Ministry of Health recommends breast self examination (BSE) and clinical breast examination (CBE) for all women beginning at age 20 years. For at-risk women between the ages of 40 and 49 years, screening intervals are determined by the treating physician. Screening is recommended biannually for women over the age of 50 years regardless of the presence of risk factors (Republic of Turkey, 2008). However, insufficient data are available on the approach to breast health, information, behavior, and attitude of Turkish women.
We evaluated the effectiveness of the designed training program by determining its effect on the BSE practices of women in our family practice office.
Participants
Bolu is a city with a population of approximately 140,000. The healthcare system of our city adopted a family practice model on October 16, 2006, and it is among the cities in which the practice is carried out systematically.
We invited women between the ages of 20 and 49 years registered in our family practice clinic to participate in our case-control study. Potential participants were contacted by telephone between December 2012 and July 2013. Those who chose to participate were enrolled following completion of an informed consent document. Based on their respective order of enrollment, the female subjects were randomized into either the control or the test group.
Pregnant and breastfeeding women were excluded from the study due to concern that physical distress would affect their compliance with the scheduled appointments and thus the overall outcome. Patients who missed their scheduled appointment were given one phone call and reminded of their appointment. Patients who did not complete all appointments were excluded from the study. The study was completed with 144 and 112 qualifying subjects in the test and control groups, respectively.
Procedure
The questionnaire was developed in a preliminary study at our clinic and re-reviewed with patients in our clinic to test the questions for clarity. The questionnaire, containing 22 questions, was completed for each participant by the physician at our clinic during the initial face-to-face interview (the first of three conducted to evaluate BSE practices). It consisted of two sections. The first contained questions addressing socio-demographic information and BC risks. The second section gathered information on breast health screening behaviors and the frequency of screening among respondents.
BSE training for participants was conducted in two ways: via brochures and instruction by a healthcare professional.
A 12-line BSE form was used to score interviews (in terms of evaluation performance) based on a training brochure normally given to women in our clinic (Republic of Turkey, 2008). For each item, a score of two was given for full performance, one for partial performance, and zero for non-performance. The radial examination pattern was determined to be the evaluation method preferred by our participants, and was therefore selected and described.
First interview
After completing the survey during the first interview, both groups were educated on the importance of breast health, factors causing BC, BC symptoms, and the importance of BSE. The control group was asked to perform BSE under the supervision of a physician. Then the participants were given a leaflet about BSE prepared by the Ministry of Health for review. The subjects were scheduled for a return visit in 2 months and dismissed. The test group was also asked to perform BSE under the supervision of a physician. The subjects were educated by the physician on the proper technique for each BSE step. Sections not understood by the participants were repeated. Participants were scheduled for a return visit in 2 months once they were fully capable of performing the evaluation.
Second interview
Two months after the first interview, the test group was asked to perform the BSE steps they remembered from the previous interview. This exam was scored, and information was given a second time for reinforcement. The control group was also asked to perform a BSE and was scored on performing and describing the details they had read in the leaflet. Participants who had lost their leaflet for any reason were given another. Both groups were scheduled for a follow-up visit in 2 months.
Third interview
During the third appointment, held 4 months after the first, both groups were scored as in the second interview. Because it was the last appointment, the control group also received a breast evaluation by the doctor. All participants were informed about the function of our provincial Early Cancer Diagnosis and Treatment Center. Depending on the individual steps involved, the duration of the interviews varied between 15 and 45 min.
Patients for whom pathology was identified in either self-evaluation or our examinations were referred to the medical center and excluded from the study to avoid inconsistency with the controls.
Statistical analyses
Data were evaluated using the Statistical Package for the Social Sciences (SPSS 20) software. A Mann–Whitney U test was used to test differences across groups for non-normally distributed variables. A Kruskal–Wallis H test with Bonferroni correction was employed for non-normally distributed variables in more than two groups. The Wilcoxon signed-rank test was used for non-normally distributed variables when testing the score differences across visits within each group. Between-groups differences were analyzed using 95% confidence intervals.
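For readers who wish to reproduce this kind of analysis outside SPSS, the sketch below shows the equivalent tests in Python with SciPy. The score arrays are hypothetical stand-ins, not the study data.

import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(3)

# Hypothetical third-visit scores (0-24 scale of the 12-item form)
test_v3    = rng.integers(10, 25, size=144)
control_v3 = rng.integers(8, 22, size=112)

# Between-groups comparison for non-normally distributed scores
u_stat, p_between = mannwhitneyu(test_v3, control_v3)

# Paired within-group comparison between two visits
test_v1 = rng.integers(5, 15, size=144)
w_stat, p_within = wilcoxon(test_v3 - test_v1)

print(f"Mann-Whitney U: p = {p_between:.3f}")
print(f"Wilcoxon signed-rank: p = {p_within:.3f}")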
Results
The average age of the women was 34.97 ± 7.17 and 35.03 ± 7.71 years in the test and control groups, respectively. The socio-demographic information of the participants is shown in Table 1. Among all participants, 7.1% of controls and 2.8% of the test group had a family history of BC. The percentages of participants with a family history of other cancers were 16.1% and 14.1%, respectively. Having a family member diagnosed with BC or any cancer type had no impact on BSE (p>0.05) or CBE (p>0.05) behaviors.
While 39.5% of the participants had obtained prior information on BSE from healthcare professionals, 25.8% of participants had no knowledge of BSE prior to enrollment in this study. The percentage of subjects who had performed a BSE during the past year was 47.3% in the control and 33.3% in the test group, with 9.4% and 8.3% within each group performing regular BSEs. The status of and reasons for breast examination among participants within the past year are shown in Table 2.
None of the study participants had a history of prior breast tissue radiation exposure. The body mass index of the subjects was greater than 30 for 18.1% of the test and 29.5% of the control group. When questioned about exercise, 47.9% of subjects in the test and 29.5% in the control group said they never exercised. In the test group, 30% had been smokers for more than 10 years, compared to 24% in the control group.
Growing older increased the BSE ratio by a factor of 1.4 and the CBE ratio by a factor of 1.5 among the women. A higher education level increased the BSE and CBE ratios by 3.4 times. The BSE performance ratio was 6.3 times higher in married women than in single/widowed women, and that of CBE was 3 times higher in married women.
While prior BSE performance during the past year was associated with a significant difference between scores at each interview, the scores for the first and second interviews of patients who had previously performed BSE were significantly higher (p<0.05) than those of patients who had not performed BSE prior to study enrollment and had no previous knowledge of BSE.
Scores increased significantly (p<0.05) with each subsequent interview. No statistically significant difference was found between the control and test groups in terms of the scores for the first and second visits. The scores for the final visit were higher for the test group than for the control group (p<0.05) (Table 3). Scores were higher among women who had performed BSE prior to study enrollment at each of the three interviews (p<0.05) (Table 4). The scores for the final visit were higher in subjects who performed regular BSEs throughout the study (p<0.05). BSE was performed between interviews by 63.6% of participants who retained the leaflet provided during their first and second interviews, compared to only 34.6% of those who did not retain the leaflet, a significant difference (p<0.05). Likewise, evaluation scores for the last interview were significantly higher for participants who retained the leaflet compared to those who did not (p<0.05).
Scores for all interviews were significantly higher in individuals who were educated about BSE by healthcare professionals or hospital awareness programs compared to participants who obtained information about BSE through other sources such as television, radio, and the Internet (p<0.05).
Discussion
The early diagnosis of BC is among the most important factors for reducing morbidity and mortality. Early diagnosis is only possible with proper screening methods. The majority of studies on screening programs have demonstrated that screening can detect BC at an early stage and that the stage and histopathological grade of cancers in women who received early screening are lower compared to the normal population (Andersson and Janzon, 1997; Chu et al., 1988).
The selection of convenient, cost-effective methods for increasing BC awareness, screening, and diagnosis is particularly important in developing countries (Mittra et al., 2000). The low survival rates for BC in underdeveloped countries are associated with advanced-stage diagnosis of the disease, mainly due to the lack of early diagnostic programs (Gupta, 2009).
BSE is a free, easily applicable method of early BC screening. Individuals who perform BSE tend to be more knowledgeable about BC (Dündar et al., 2006). However, many women refrain from using this technique due to a lack of self-confidence, shortage of time, and embarrassment associated with manipulation of the breast (Lierman et al., 1994; Stillman, 1997). Regularly performed BSEs provide reference information on the breast, enabling a woman to know her breast tissue and notice any potential changes. The lack of BC awareness among young women results in BC diagnosis at advanced stages. This, again, leads to further increased mortality rates (Anders, 2008).
In our study, groups were trained using practical educational methods on BSE, and the methods were evaluated for their potential benefits.
In rural areas and developing countries with little to no access to healthcare services, where CBE and mammography are less available, BSE should be given importance and encouraged (Dişcigil, 2007). Although BSE awareness is 90% among women in developed countries, only 15-40% actually conduct the exams (Friedman, 1994). Tavafian et al. (2009) found that 31.7% of women had performed BSE once, but only 7.1% were practicing it on a regular basis. Al-Dubai et al. (2012) reported that about 55.4% of respondents had performed a BSE, but only 28.5% performed the examination monthly. A study conducted in the Tekirdağ province (Gürdal et al., 2012) showed that 27.4% of women performed BSE regularly, while 68.5% had performed at least one BSE. Donmez et al. (2012) reported that 61.3% of subjects were unaware of breast evaluation and, while 49.2% performed BSE, only 15.4% did so once each month. In the present study, 47.3% of participants in the control group and 33.3% of participants in the test group had performed a BSE within the past year. Only 9.4% and 8.3% of these participants were performing BSE on a regular basis. The very low number of subjects regularly performing BSE indicates that a health behavior model regarding BSE could not be established among the women in our region.
Studies have shown that effective training increases BC awareness, knowledge of BC risk factors, and BSE practice (Kuhns-Hastings et al., 1993; Wood et al., 2002). Not all women can perform a BSE with equal quality, and thus further training programs are required. In addition to improving levels of information among women, such training programs also increase BSE practice (Ozturk et al., 2000), and different training methods improve the quality of BSE practices (Oliver-Vázquez, 2002; Rao et al., 2005). Ozaras et al. (2010) found that scores were higher after BSE training. CBE may be associated with the education status of women (Juan et al., 2004; Achat et al., 2005; David et al., 2005). Indeed, in the present study, information scores significantly improved with the frequency of visits, and exams improved with increased education level. These results demonstrate the role of education in increasing the level of awareness and the practice of BSE among women.
Media, the Internet, hospitals, primary healthcare clinics, and friends and acquaintances all assume an important role in the BSE education of society (Thomas et al., 2002), with the Internet, television, hospitals, and primary healthcare institutions being the most common sources of information concerning BSE (Gürdal et al., 2012). Karayurt et al. (2008) found that media was the primary source of information on BC for 48.6% of participants. Yoo et al. (2012) found that only 17.2% of subjects obtained BSE information from a physician or nurse. In the present study, 39.5% of the subjects obtained their information on BSE from healthcare professionals. We found that scores during all visits were significantly higher in individuals educated on proper BSE techniques by healthcare professionals and hospital programs compared to individuals who obtained this information from sources such as television, radio, and the Internet. This indicates that face-to-face patient education significantly increases BSE awareness. As a consequence, access to information provided by primary care professionals, who are in close personal contact with female patients, is essential for breast health.
Our study was conducted with women in our own service population. In our national family practice program, information is supplied via leaflets handed to patients who apply for family planning or any other reason. It is usually not possible to determine patient understanding of the information contained within these leaflets or to monitor the practice of this information by patients. There are diverse opinions on what constitutes effective BSE training. One of the restrictive aspects of our study was the selection of handouts and the breast evaluation methods taught by healthcare professionals, which were thought to be feasible for realistic practice in a polyclinic setting with a very busy patient population.
Considering that our study is among the first conducted at a family medical center, we believe that our results can make a difference for further studies and training programs.
In conclusion, a patient can often notice a change in her breast herself. BSE establishes a reference point for an individual's knowledge of her own breast and can be key for early diagnosis. BSE information should be provided by healthcare professionals during the evaluation of female patients at family practices, where healthcare services are acquired on a frequent basis. Especially in developing countries, such as our own, training courses addressing individual requirements should be organized, and the effects of such courses should be assessed through feedback. We believe that such efforts will result in increased BSE practice and BC awareness, thereby improving early diagnosis and treatment rates.
Table 2 . CBE Status of Study Subjects
*CBE:Clinical breast examination
Colorectal Injuries in Minimal Invasive Urologic Surgery
Dear Editor,
Today, most urologic procedures, such as adrenalectomy, nephrectomy, nephrolithotomy, pyeloplasty, and prostatectomy, are performed using minimally invasive methods such as laparoscopic and robotic techniques. Colorectal injuries have always been a great concern during percutaneous and laparoscopic urological surgeries. Thorough preoperative evaluation and preventive measures, as well as adequate surgical experience, are crucial for reducing the rate of such injuries. Here, we briefly review the causes, diagnosis, and management of possible colorectal injuries during percutaneous, laparoscopic, and robotic urologic surgeries.
Colon Injury in Percutaneous Renal Surgery
Performing percutaneous renal surgeries (PRS) such as nephrolithotomy (the most common), nephrostomy insertion, endopyelotomy, and tumor resection in the prone position may lead to colon injury at a rate of about 1%. Because of the anatomic relationship, the left colon is injured about twice as often as the right (1). Specific characteristics of the kidney, such as renal fusion anomaly (e.g., horseshoe kidney), renal ectopia, and previous kidney surgery, may increase the rate of colon injuries during PRS. Furthermore, thin women and patients with advanced age, kyphosis, or mobile kidneys are at a higher risk of such complications (2). A retrorenal colon, which is more prevalent in some circumstances such as horseshoe kidney, also increases the risk (2). Colon injuries are more likely to occur when PRS is performed in the prone position rather than supine (2). In patients at risk of colon injury, thorough preoperative computerized tomography (CT) can be useful to detect the anatomic positions of the colon and kidneys. Renal access under ultrasound or CT guidance can reduce the risk of colon injury in these challenging circumstances (3-5).
In case of colon injury, early detection and immediate management are vital to prevent fatal infectious sequelae. When the surgeon suspects any kind of colon injury during PRS, injection of contrast media through the nephrostomy tract can confirm the injury. Provided the colon injury is extraperitoneal, conservative management, including placement of the nephrostomy tube into the colon lumen as a colostomy, ureteral stenting, and bladder drainage, seems sufficient. A low-residue diet and broad-spectrum antibiotics should be given for five to seven days. With this conservative strategy, the medial wall of the colon as well as the calyceal system will usually heal successfully. When the integrity of the collecting system and the colon is confirmed by a colostogram and retrograde pyelogram, the colostomy tube can be withdrawn into the retroperitoneum as an external drain for a further 2 to 3 days, which allows the lateral wall of the colon to heal. In case of intraperitoneal perforation, or the presence of signs and symptoms of peritonitis, urgent abdominal exploration and colostomy are mandatory (1-5).
Colorectal Injuries in Laparoscopic and Robotic Urologic Surgery
The overall incidence of bowel injury in urologic laparoscopic surgeries is 0.2-0.7%. Rectal injury can occur in approximately 0.5% of laparoscopic radical prostatectomies (6). Intestinal injury seems to be more common in operative laparoscopy (0.3-0.5%) compared to diagnostic laparoscopy (0.06-0.5%) (7). In another report, the incidence of rectal injury during laparoscopic radical prostatectomy ranged from 0.5% to 9% (8). Bowel complications are mostly caused by the trocar or the Veress needle (41.8%) (9). The rate of bowel injury is equal in closed and open access techniques for trocar insertion, but with the open access technique there is a higher chance of immediate diagnosis of bowel injury. The second most common cause of intraoperative bowel injury is electrocautery (25.6%). The small intestine is the portion of the bowel most commonly injured by electrocautery (9). When multiple structures are injured, the most frequent combination is a vascular structure and bowel (7-9). Bowel injury can be thermal or mechanical. Thermal injury occurs via four mechanisms: a directly activated instrument, coupling to another instrument, capacitive coupling, and insulation failure.
Mechanical damage can be caused by a wide variety of sharp and blunt instruments (graspers, scissors, and retractors) (7-9). In approximately half of all large and small bowel injuries during laparoscopy, the diagnosis is delayed beyond 24 hours. Delayed bowel injuries are more likely to be fatal than major retroperitoneal vessel injuries (10). Bowel injury is a potentially debilitating and deadly complication if left unrecognized during the operation, leading to an acute abdomen and sepsis. Injuries related to thermal damage usually go unrecognized during the operation. These patients typically present days after the operation with signs of sepsis and an acute abdomen. The presentation may be quite subtle, with pain at trocar sites, leukopenia, fever, and chills. However, the patient's deterioration can be rapid, with a mortality rate of 21% (9). Therefore, close surveillance is necessary to save the patient's life. Rectal injury during laparoscopic radical prostatectomy can lead to severe postoperative complications (10). In a review of 1311 laparoscopic radical prostatectomy cases, three rectal injuries were found that required temporary colostomy (8).
Diagnosis of bowel injury during laparoscopy can be confirmed with abdominal CT. Extravasation of contrast media from the bowel and/or the presence of free air are diagnostic. Other imaging modalities, such as a gastrografin enema, can be used to diagnose rectal injury. Perhaps the most prevalent bowel injury in urological laparoscopic surgery is rectal injury, which can occur during robotic or laparoscopic prostatectomy. Most of these complications are ultimately managed successfully (9). The management of rectal injury remains debatable regarding interposition of healthy tissue between the rectal repair and the vesicourethral anastomosis, and the necessity of a diverting colostomy (10). Bowel injuries recognized at the time of operation can be repaired by the same techniques as in open surgery, using intracorporeal suturing. Early diagnosis and repair of bowel injury reduce patient morbidity (10).
On the possibility of using X-ray Compton scattering to study magnetoelectrical properties of crystals
The possibility of using X-ray Compton scattering to reveal antisymmetric components of the electron momentum density, as a fingerprint of magnetoelectric sample properties, is investigated experimentally and theoretically by studying the polar ferromagnet GaFeO3.
This paper discusses the possibility of using Compton scattering - an inelastic X-ray scattering process that yields a projection of the electron momentum density - to probe magnetoelectrical properties. It is shown that an antisymmetric component of the momentum density is a unique fingerprint of such time- and parity-odd physics. It is argued that polar ferromagnets are ideal candidates to demonstrate this phenomenon and the first experimental results are shown, on a single-domain crystal of GaFeO3. The measured antisymmetric Compton profile is very small (≈ 10^-5 of the symmetric part) and of the same order of magnitude as the statistical errors. Relativistic first-principles simulations of the antisymmetric Compton profile are presented and it is shown that, while the effect is indeed predicted by theory, and scales with the size of the valence spin-orbit interaction, its magnitude is significantly overestimated. The paper outlines some important constraints on the properties of the antisymmetric Compton profile arising from the underlying crystallographic symmetry of the sample.
Background
Compton scattering provides a projection of the electron momentum distribution in a target material (Cooper, 1985). While the exact relativistic form of the differential scattering cross section is complex, the momentum density derived from every measurement, and calculated by every theory, to date, has been symmetric. We argue that this is because all materials investigated so far have been symmetric with respect to time reversal or spatial inversion. Materials whose orbitals possess neither symmetry are said to be magnetoelectric as they play a major role in magnetoelectric phenomena. Of particular interest are toroidal moments, corresponding to time- and parity-odd vectors, that not only play a vital role in magnetoelectric phenomena (Spaldin et al., 2008) but have been suggested to be implicated in high-T_c superconductivity (Scagnoli et al., 2011).
It is therefore of considerable interest to identify novel experimental probes of such moments. We show that the antisymmetric Compton profile is a unique signature of magnetoelectric properties and should therefore provide a very sensitive probe of the underlying orbitals, that can be compared in detail to electronic structure calculations to elucidate the underlying physics. In this article, we outline the principles behind this phenomenon, examine the possibility of observing such an effect in the polar ferromagnetic crystal GaFeO3, describe an experiment to measure the antisymmetric Compton profile, and compare the results with relativistic first-principles calculations.
Compton scattering and the electron momentum density
X-ray Compton scattering is an inelastic scattering process whereby the energy loss is an almost linear function of a projection of the electron momentum density ρ(p):

J(p_z) = ∫∫ ρ(p) dp_x dp_y,    (2)

where J(p_z) is called the Compton profile (Cooper, 1985). Here, z lies (almost) parallel to the momentum transfer q′ − q and p_z = p·ẑ is the z projection of the electron momentum. The momentum density ρ(p) and Compton profile J(p_z) are historically considered to be symmetric with respect to reversal of the momentum variable p → −p (or p_z → −p_z). We suggest that this need not be the case. Let us first discuss the conditions under which the momentum density is symmetric. Since momentum is a function of both space and time (classically, p ∝ dr/dt), we find that either inverting r (parity inversion) or t (time reversal) inverts p (p → −p). Consequently, any parity-even (centrosymmetric) or time-even (non-magnetic) material must satisfy ρ(p) = ρ(−p). Compton profiles of this dominant class of materials are always symmetric.
However, no such constraint applies to materials that lack both time and inversion symmetry. Moreover, such systems form an interesting and important class of materials that often exhibit magnetoelectric phenomena such as linear magnetoelectric coupling, destined to play a key role in future technologies (Spaldin & Fiebig, 2005). We are therefore alerted to the possibility of using Compton scattering as a probe of time- and parity-odd magnetoelectric phenomena.
It is worth noting that while the breakdown of the impulse approximation (IA) can lead to an asymmetry in the measured Compton profile (Huotari et al., 2001), such effects need not be considered for the current analysis. This is partly because our measurements are not of the asymmetry in the energy spectrum directly, but rather of the intensity difference that is caused by an asymmetry in the electron momentum distribution. Moreover, the orbitals that are expected to contribute to the effect discussed here are relatively low-energy valence states, whereas the breakdown of the IA is expected to affect mainly tightly bound core electrons.
It is convenient to write the total momentum density as the sum of a symmetric component (with respect to p → −p) and an antisymmetric part:

ρ(p) = ρ_S(p) + ρ_A(p),

giving

J_{S(A)}(p_z) = ∫∫ ρ_{S(A)}(p) dp_x dp_y.

The quantities ρ_A(p) and J_A(p_z) represent time- and parity-odd properties. Since Compton scattering is an incoherent process, these objects are averages over all the constituent orbitals and are therefore governed by the crystal (magnetic) point-group symmetry.
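On sampled data this decomposition is a one-liner. The sketch below splits a profile sampled on a symmetric grid into its symmetric and antisymmetric parts; the model profile (a Gaussian plus a tiny odd admixture) is an illustrative assumption.

import numpy as np

pz = np.linspace(-5.0, 5.0, 1001)                # symmetric momentum grid
J = np.exp(-pz**2) + 1e-5 * pz * np.exp(-pz**2)  # assumed model: symmetric + tiny odd part

J_S = 0.5 * (J + J[::-1])                        # J_S(p_z) = [J(p_z) + J(-p_z)] / 2
J_A = 0.5 * (J - J[::-1])                        # J_A(p_z) = [J(p_z) - J(-p_z)] / 2

print(f"max |J_A| / max J_S = {np.abs(J_A).max() / J_S.max():.1e}")  # ~ 1e-5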
Zero-sum rule
If there is no net flow of electrons in the sample, the integral of the flow along the positive and negative z directions must cancel, i.e.

∫_{−∞}^{∞} p_z J(p_z) dp_z = 0.    (6)

While this is satisfied trivially for the symmetric Compton profile, it imposes a useful constraint on each half of the antisymmetric profile:

∫_0^{∞} p_z J_A(p_z) dp_z = 0,    (7)

thus providing a 'zero-sum rule' that can be used to verify experimental results and model calculations. Most trivially, the zero-sum rule dictates that, for each half of the asymmetric profile, the existence of a positive contribution implies the existence of a negative one, and vice versa.

Figure 1. A gedankenexperiment to demonstrate the possibility of a non-symmetric momentum density in a classical orbital. An observer sits in the plane of a highly eccentric planetary orbit, perpendicular to its major axis, and with the planet moving towards the observer at perihelion. The observer measures the amount of time each projection of the planet's momentum is observed for (perhaps via a Doppler-shift measurement) during a complete orbit, observing a large positive momentum projection for a small amount of time, and a small negative momentum for a long time as the planet orbits furthest from the star. The largest positive momentum has no negative counterpart and so the momentum density is clearly asymmetric. It is clear that this function is reversed under reversal of time (i.e. the planet orbits in reverse), and also under spatial inversion, realized (in this two-dimensional system) by a rotation of π of the orbital within the orbital plane.

We can employ a simple thought experiment to see that such an asymmetry is present in classical orbitals. Consider a highly elliptical planetary orbit, observed from within the orbital plane, as shown in Fig. 1. The orbiting body would be seen to have a very large positive momentum projection (towards the observer) for a short period of time, when the orbiting 'planet' is closest to the 'star' that it orbits. Conversely, it would exhibit a small negative momentum projection for a long period of time when it is far from the star and moving slowly. The large positive momentum has no negative counterpart and so the momentum projection distribution (analogous to the Compton profile) must be asymmetric. Note also that such an orbit is asymmetric with respect to time reversal and spatial inversion: reversing time would reverse the direction of the orbit, and spatial inversion would reverse its eccentricity.
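The thought experiment is easy to make quantitative with a short orbit integration. The sketch below (unit mass, GM = 1, eccentricity 0.8; all values are illustrative assumptions) records the momentum projection toward an in-plane observer along y: the projection reaches ≈ +3 only briefly near perihelion, spends most of the period near ≈ −1/3, and its time average over one closed orbit vanishes, in accordance with the zero-sum rule (7).

import numpy as np

GM, a_sm, e = 1.0, 1.0, 0.8                     # gravitational parameter, semi-major axis, eccentricity
r = np.array([a_sm * (1.0 - e), 0.0])           # start at perihelion on the x axis
v = np.array([0.0, np.sqrt(GM * (1.0 + e) / (a_sm * (1.0 - e)))])  # perihelion speed = 3
dt, T = 1e-4, 2.0 * np.pi * a_sm**1.5           # time step and Kepler period

proj = []
for _ in range(int(T / dt)):                    # velocity-Verlet integration of one orbit
    acc = -GM * r / np.linalg.norm(r)**3
    v = v + 0.5 * dt * acc
    r = r + dt * v
    v = v + 0.5 * dt * (-GM * r / np.linalg.norm(r)**3)
    proj.append(v[1])                           # momentum projection toward the observer (+y)

proj = np.array(proj)
print(f"largest +projection: {proj.max():.2f}, most negative: {proj.min():.2f}")
print(f"time average over one period: {proj.mean():.1e}")   # ~ 0: the zero-sum rule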
Momentum density and rotational properties
The electron momentum density is a real-valued function of the momentum, p, or (equivalently) of its magnitude, p, and direction, p̂. We can expand this density in terms of a complete set of angular functions and prefactors that depend on p. For example,

ρ(p) = Σ_{K,Q} T^K_Q(p) Y^K_Q(p̂),

where the Y^K_Q are real spherical harmonics (also referred to as multipoles or spherical tensors) of rank K and projection Q (Lovesey et al., 2005), and the T^K_Q(p) are the corresponding tensor components, which are functions of p. The merit of such an expansion lies in the fact that each non-vanishing multipole Y^K_Q must be consistent with the symmetry of the physical system. For example, an isotropic system allows only a single term in the expansion and we find

ρ(p) = I(p)/(4π p²),

where I(p) is the radial momentum distribution (Cooper, 1985). For the present study we are primarily interested in the antisymmetric momentum density:

ρ_A(p) = Σ_{K,Q} A^K_Q(p) Y^K_Q(p̂),

where the A^K_Q(p) are the corresponding tensor components. Since reversal of the momentum vector, p → −p, is equivalent to carrying out the rotations Y^K_Q(θ, φ) → Y^K_Q(π − θ, π + φ), and all K = odd (even) real spherical harmonics are odd (even) under this transformation, we conclude that the antisymmetric momentum density contains only K = odd terms, ruling out contributions from magnetic monopoles or quadrupoles. (Conversely, the symmetric density contains only even multipoles, including the scalar K = 0 term.) We can therefore write

ρ_A(p) = Σ_{K odd, Q} A^K_Q(p) Y^K_Q(p̂).

Importantly, the lowest-order allowed component has K = 1 and is therefore associated with a parity-odd, time-odd vector, i.e. a toroidal moment (Lovesey et al., 2005). A non-vanishing antisymmetric Compton profile therefore requires a material whose point-group symmetry permits the existence of odd-rank time- and parity-odd multipoles. One might expect systems that allow the lowest (K = 1) rank multipole to be most favourable. If we assume that the antisymmetric momentum density is dominated by this term then we have

ρ_A(p) ≈ Σ_Q A^1_Q(p) Y^1_Q(p̂).

Furthermore, if the direction of the toroidal moment is fixed by symmetry (i.e. the same for all momentum magnitudes) then the radial and angular parts can be factorized:

A^1_Q(p) = a(p) a^1_Q

(the a^1_Q are now constants), which can be written in Cartesian form as

ρ_A(p) = a(p) (T̂ · p̂),

where T̂ is the toroidal moment direction. The Compton profile of such a momentum density can be written:

J_A(p_z) = ∫∫ a(p) (T̂_x p_x + T̂_y p_y + T̂_z p_z)/p dp_x dp_y.

The first two terms vanish as they involve integrals over functions that are odd with respect to the integration variable. This leaves

J_A(p_z) = T̂_z ∫∫ a(p) (p_z/p) dp_x dp_y.

While the integral is not straightforward to compute, there are two noteworthy aspects of this result. First, the Compton profile changes sign with p_z → −p_z, as expected. More interestingly, the Compton profile scales with the z projection of the toroidal moment unit vector. The latter gives a complete description of the angular dependence of the antisymmetric Compton profile in the case where multipoles of K ≥ 3 are negligible: the signal is maximum when the toroidal moment is along z, of equal and opposite magnitude when antiparallel, and zero when perpendicular. Armed with this approximate form, and the exact 'zero-sum rule' in equation (7), a clear picture begins to emerge of what the antisymmetric Compton profile should look like.
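The final expression is straightforward to evaluate on a grid. The sketch below assumes a model radial weight a(p) = (1 − p²/2) e^{−p²}, chosen so that ∫₀^∞ a(p) p³ dp = 0 as the zero-sum rule requires, and T̂ ∥ ẑ; it confirms both the antisymmetry of J_A(p_z) and the zero-sum rule (7).

import numpy as np

a = lambda p: (1.0 - p**2 / 2.0) * np.exp(-p**2)   # model radial weight (assumed)

px = py = np.linspace(-5.0, 5.0, 201)
PX, PY = np.meshgrid(px, py)
dA = (px[1] - px[0]) ** 2

def J_A(pz, Tz=1.0):
    # J_A(p_z) = T_z * ∫∫ a(p) p_z / p  dp_x dp_y, on a truncated grid
    p = np.maximum(np.sqrt(PX**2 + PY**2 + pz**2), 1e-12)
    return Tz * np.sum(a(p) * pz / p) * dA

pz = np.linspace(-5.0, 5.0, 401)
JA = np.array([J_A(z) for z in pz])

print("antisymmetric:", np.allclose(JA, -JA[::-1]))
half = pz > 0
dz = pz[1] - pz[0]
print("zero-sum integral:", np.sum(pz[half] * JA[half]) * dz)   # ≈ 0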
Experimental verification -suitable materials
It is likely that the antisymmetric part of the Compton profile is small, as it depends on a subtle aspect of the anisotropy of the orbital polarization. An ideal experiment to test these ideas is therefore one where the antisymmetric part can be reversed simply and rapidly, inducing the smallest possible systematic error and allowing a sensitive 'difference' measurement to be performed. Reversal of the toroidal vector in many magnetoelectric materials requires simultaneous application of electric and magnetic fields, typically applied during 'field cooling'. An interesting class of materials in which the toroidal moments are more easily manipulated are polar ferromagnets. One such material, which has been studied with X-rays for its directional dichroism (Kubota et al., 2004) and magnetoelectric multipoles (Staub et al., 2010), is GaFeO3. Large polar single crystals are available and the ferromagnetic moment can be reversed with a modest applied field. GaFeO3 orders magnetically at the relatively high temperature of T_C ≈ 210 K. We therefore selected GaFeO3 as a potentially suitable test material and consider next the implications of crystal symmetry on the observable physical phenomena.
GaFeO 3 : symmetry and allowed tensor components
GaFeO3 (space group No. 33, Pc2_1 n) is a polar ferromagnet. It possesses both a magnetic (axial, time-odd) and a polar (time-even) vector moment. We have discussed the need for time- and parity-odd multipoles in the context of antisymmetric Compton profiles, and the desirability of possessing a toroidal (polar, time-odd) moment. We now apply the magnetic crystal symmetry to all four permutations of time/parity odd/even vectors in order to find out (i) if they can exist and (ii) in which direction(s) they may point. The crystal point-group symmetry in the high-temperature paramagnetic phase is mm2. We can denote the symmetry group as {1, 2_y, m_x, m_z} (the identity, a twofold rotation about y, and mirrors normal to x and z). There are several possible magnetic groups that are consistent with this point group, which are formed by taking each spatial symmetry operator and either applying time reversal or not. Four such groups can be generated, with each placing specific constraints on the directions of the possible vectors, or rendering them absent. For GaFeO3, it is known that the magnetic easy axis lies along x (the crystal a axis). However, the anisotropy is not strong and it is informative to consider the properties of all possible magnetic symmetry groups. These are shown in Table 1. The procedure for analysing the properties of the various tensors permitted by crystal symmetry is described in Collins & Bombardi (2010). Briefly, the process involves generating a random vector, odd or even under T (time) and P (parity), generating a transformed vector for each of the four symmetry operations in the magnetic group, and adding the four resulting vectors. We find that the resultant, due to the high symmetry of the system, is always either zero or lies parallel to one of the three Cartesian/crystal axes.
Several interesting points emerge from this symmetrization procedure. First, we see that in all cases the time-even polar vector lies along y. This makes sense, as only the magnetic symmetry is changed between the four groups. We see that there is no time-even axial vector. The four magnetic groups support a magnetic vector along x, y and z for the first three, with the fourth group not supporting a net magnetic moment.

Table 1. Allowed vectors and their directions, for various magnetic symmetry groups (+/− indicates the absence/presence of the time-reversal operator) and vector symmetry with respect to time (T) and parity (P). The Cartesian axes x, y, z are parallel to the crystal axes a, b, c. The third of the four listed groups, m′m2′ (conventionally written m′2′m), corresponds to the symmetry of GaFeO3.
Of particular interest are the toroidal vectors. For the first and third groups (the homomorphic mm′2′ and m′m2′ groups), the toroidal vector is perpendicular to the magnetic and polar vectors, consistent with the sketch in Fig. 1. The second group (m′m′2) does not allow a toroidal vector. Interestingly, a toroidal vector is allowed by the fourth group (mm2), despite the absence of a magnetic moment, and it is parallel to the polar vector. This slightly counter-intuitive scenario can be visualized as the sum of two such classical orbitals, resembling a butterfly (Fig. 2). All four symmetry groups support K = 3 and higher (odd) rank multipoles. For the present case, where the magnetic field is applied along the magnetic easy axis (the first magnetic group in the table), the presence of a toroidal moment suggests that GaFeO3 is a suitable material for observing an antisymmetric Compton profile, and that the c-axis toroidal moment should be directed along the momentum transfer (z axis), with the polar b axis and magnetic a axis both perpendicular, as shown in Fig. 3.
Experiment on GaFeO 3
Experiments were carried out on beamline I12 (Diamond Light Source), using a linearly polarized monochromatic incident X-ray beam of energy 125 keV and bandwidth 0.6 keV, selected by controlled bending of a double Laue monochromator (Drakopoulos et al., 2015). Compton scattering was detected close to back-scattering (2θ ≈ 169°) by a 23-element germanium solid-state detector. The orientations of the crystal (a single polar domain - see Appendix A), X-ray beams and magnetic field were as shown in Fig. 3, and the sample was maintained at a temperature of 100 K with a nitrogen gas-jet cooler. As the aim of the experiment was to observe a small (antisymmetric) difference in the Compton profiles measured with two opposite magnetic field directions, the (0.3 T) field was flipped rapidly and repeatedly (1 s counting time for each direction) while data were accumulated for around 48 h.
The experimental Compton scattering results are shown in Fig. 4. The total Compton scattering (electron momentum density) is shown in blue, with the magnetic 'difference' signal indicated by red bars. The difference data have been multiplied by 10 4 , indicating that the difference, and any competing systematic and random errors, are extremely small. While the difference profile gives a hint of the anticipated antisymmetric shape, the effect is of the same order as the statistical errors (black bars). As such, the experimental results do not show conclusive evidence of the predicted asymmetry but give a clear indication of the maximum magnitude of such an effect.
First-principles calculations on GaFeO 3
To confirm the occurrence of an antisymmetric Compton profile in polar ferromagnets, density functional theory (DFT)-based theoretical investigations have been performed. The Compton profile was calculated from first principles using the Korringa-Kohn-Rostoker (KKR) Green's function method. This implies that the electronic Green's function G(r, r′, E) is represented by means of the multiple scattering formalism, as given below.

Figure 3. A schematic depiction of the experimental setup designed for measuring the antisymmetric Compton profile in a crystal of the polar ferromagnet GaFeO3. A reversible magnetic field was applied along the crystal c axis, with the polar b axis vertical. The toroidal a axis was aligned close to the direction of photon momentum transfer, which defines the projection direction for the momentum density. The sample was held below its magnetic ordering temperature by a nitrogen gas-jet cooler, and scattered photons were detected by a multi-element germanium solid-state detector array (not shown), close to back-scattering.
Figure 4. Experimental Compton scattering results from GaFeO3. The total electron momentum distribution (Compton profile) is shown in blue, normalized such that the integral is the total number of electrons in the unit cell (eight GaFeO3 formula units). The red bars show the antisymmetric Compton profile derived from the difference in Compton profiles measured with opposite magnetic field directions. Error bars are shown in black. Also shown on the plot is the calculated antisymmetric profile, convoluted with a Gaussian of width 0.8 a.u. to mimic the experimental momentum resolution. Note that, although the calculated and measured line shapes look similar, the experimental differences are of the same order as the statistical errors (error bars), and that the measurements are scaled by an order of magnitude compared to the calculations.

In the multiple scattering formalism the Green's function takes the standard KKR form

G(r, r′, E) = Σ_{ΛΛ′} Z^q_Λ(r, E) τ^{qq′}_{ΛΛ′}(E) Z^{q′×}_{Λ′}(r′, E) − δ_{qq′} Σ_Λ [Z^q_Λ(r, E) J^{q×}_Λ(r′, E) Θ(r′ − r) + J^q_Λ(r, E) Z^{q×}_Λ(r′, E) Θ(r − r′)].

Here τ^{qq′}_{ΛΛ′}(E) is the scattering path operator, with the combined index Λ = (κ, μ) representing the spin-orbit and magnetic quantum numbers κ and μ, respectively (Rose, 1961), and Z^q_Λ and J^q_Λ are the four-component regular and irregular solutions, respectively, of the single-site Dirac equation for the atomic site q (Ebert et al., 2011). The superscript × indicates the left-hand-side solution of the Dirac equation. The electron momentum density ρ(p) = ρ^↑(p) + ρ^↓(p) is decomposed into its spin-projected components ρ^{↑(↓)}(p), which are given by the Green's function represented in momentum space, where m_s represents the spin character. G^{m_s}(p, p′, E) is expressed in terms of the real-space Green's function G(r, r′, E) via the momentum eigenfunctions; here Ω is the volume of the unit cell and the Φ_{p m_s} are the eigenfunctions of the momentum operator, which can be written as Φ_{p m_s} = U_{p m_s} exp(ip·r), where U_{p m_s} is a four-component spinor satisfying the equation (Rose, 1961)

(c α·p + β m c²) U_{p m_s} = E_p U_{p m_s}.

Using a Rayleigh-like expression, one obtains the angular momentum expansion for the eigenfunctions (Benea et al., 2006), in which the C^{m_s}_Λ are Clebsch-Gordan coefficients, the Y^{m_l}_l are complex spherical harmonics, the χ_Λ(r̂) are spin-angular functions and the j_l(pr) are spherical Bessel functions.
The electronic structure calculations have been performed using the fully relativistic multiple scattering KKR Green's function method (Ebert et al., 2011, 2012), adopting the atomic sphere approximation (ASA). Exchange and correlation were treated within the framework of the local spin density approximation (LSDA) using the parametrization of Vosko, Wilk and Nusair (Vosko et al., 1980). Chemical disorder due to intermixing between the Fe and Ga sublattices in the system is treated by means of the coherent potential approximation (CPA) alloy theory (Soven, 1967; Ebert et al., 2011). For the angular momentum expansion of the Green's function [see equation (17)] a cutoff of l_max = 3 was applied.
As a first step in the investigations of the occurrence of the antisymmetric Compton profile, the calculations have been performed for the non-centrosymmetric compounds MnGe and FeGe, with the B20 structure (space group P2_1 3). FeGe is ferromagnetically ordered at ambient pressure with a Curie temperature T_C = 278.2 K (Wilhelm et al., 2012), while MnGe can be synthesized under high pressure and exhibits antiferromagnetic (AFM) order below T_N = 197 K with a saturated magnetic moment of about 1.9 μ_B/Mn at 5 K (Kanazawa et al., 2011). Despite this, the calculations for both compounds have been performed for a ferromagnetic (FM) alignment of the magnetic moments, to fulfil the precondition for the observation of an antisymmetric Compton profile. The calculated Mn magnetic moment in MnGe, of 2.1 μ_B, fits the experimental results rather well. In line with the experimental setup, the orientation of the magnetization was taken to be perpendicular to the sample threefold axis as well as to the momentum transfer vector q. For both systems the antisymmetric part of the calculated Compton profile is very weak (but still significant), as demonstrated in Fig. 5, which shows the results for MnGe.
These results are in line with measurements on MnSi, which has the B20 structure, performed in the same geometry; those measurements indicate that the magnitude of the antisymmetric Compton profile lies beyond the current accuracy of the experiment.
In the case of the GaFeO3 system, the calculations were performed taking the occupation numbers 0.…

Figure 5: Calculated total valence electron (green) and antisymmetric (black) Compton profiles of the chiral magnet (parity- and time-odd) MnGe, broadened by 0.2 a.u. Calculations are done for the ferromagnetically ordered system with a magnetic field direction parallel to [1, −1, 0] and momentum transfer vector q ∥ [1, 1, −2].
The calculations of the antisymmetric Compton profile have been performed for the geometry used in the experiment. This implies that the photon momentum transfer is along the toroidal axis a, while the magnetic field lies parallel to the crystal c axis. Assuming that the total magnetic moment follows the direction of the magnetic field, the contribution of the valence electrons to the Compton profile has been calculated for two opposite orientations (±) of the magnetization. Accordingly, the antisymmetric Compton profile J_A(p_z) is defined as the difference J_A(p_z) = J_+(p_z) − J_−(p_z). Restricting the calculations to the valence electrons implies that contributions to J_A(p_z) by core electrons are negligible; this simplification is very well justified. The total Compton profile due to the valence states is shown in Fig. 6 together with the antisymmetric Compton profile, calculated using a momentum broadening of 0.2 and 0.8 a.u. The latter value corresponds to the experimental momentum broadening. As can be seen, the amplitude of the antisymmetric profile is about three orders of magnitude smaller than that of the total Compton profile. In the experiment this difference is even more pronounced, as can be seen in Fig. 4. To account for the rather low experimental momentum resolution of about 0.8 a.u., a corresponding momentum broadening has been applied to the calculated Compton profiles shown in Fig. 4 (black line). This results in particular in a substantial decrease of the amplitude, bringing the theoretical results closer to the experiment. Another source of the apparent overestimation of the antisymmetric part of the Compton profile in the calculations is the finite temperature of the measurements (T = 100 K). Taking into account the rather small critical temperature T_c ≈ 200 K, one can expect a rather pronounced temperature-induced magnetic disorder in the system, which should lead to a smearing of the electronic states and, as a result, to a decrease of the magnitude of the antisymmetric Compton profiles.
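The broadening and differencing described above are straightforward to reproduce numerically. The following Python sketch forms the antisymmetric profile J_A(p_z) = J_+(p_z) − J_−(p_z) and convolves it with a Gaussian resolution function; the input profiles are placeholder Gaussians rather than the calculated GaFeO3 data, and treating the quoted widths (0.2 and 0.8 a.u.) as FWHM values is an assumption.

```python
import numpy as np

def gaussian_broaden(pz, profile, width):
    """Convolve a Compton profile with a Gaussian resolution function.

    pz      -- equally spaced momentum grid (a.u.)
    profile -- profile values J(p_z) on that grid
    width   -- Gaussian width (a.u.); interpreted here as FWHM (assumption)
    """
    sigma = width / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    dp = pz[1] - pz[0]
    half = int(np.ceil(4.0 * sigma / dp))   # kernel reach of four sigma
    kp = np.arange(-half, half + 1) * dp
    kernel = np.exp(-0.5 * (kp / sigma) ** 2)
    kernel /= kernel.sum()                  # preserve the profile's integral
    return np.convolve(profile, kernel, mode="same")

# Placeholder profiles standing in for the calculated J_+ and J_- curves.
pz = np.linspace(-10.0, 10.0, 2001)
j_plus = np.exp(-0.5 * (pz / 2.0) ** 2)
j_minus = np.exp(-0.5 * ((pz + 0.01) / 2.0) ** 2)

# Antisymmetric profile at the experimental resolution of about 0.8 a.u.
j_anti = gaussian_broaden(pz, j_plus - j_minus, width=0.8)
```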
As the antisymmetric Compton profile is a consequence of the anisotropy of the orbital polarization, and is accordingly first of all a relativistic effect, it should depend on the strength of spin-orbit coupling (SOC) in the system. To demonstrate this, the calculations have been performed with the SOC scaled. In Fig. 6 the antisymmetric profile obtained using a SOC scaling factor of 0.1 is plotted together with that obtained without any SOC scaling. One can clearly see a decrease of the amplitude of the profile by nearly one order of magnitude due to the SOC scaling. A further decrease of the scaling factor leads to a collapse of the antisymmetric part of the Compton profile.
Note also that the spin-orbit interaction has a rather pronounced effect on the shape of the magnetic Compton profile (MCP), i.e. the spin-projected momentum density. Fig. 7 gives the MCP for GaFeO3 calculated without (green line) and with (red line) SOC. As one can see, neglect of the SOC results in an increase of the amplitude at p_z = 0 a.u., as well as a more pronounced oscillatory momentum dependence.
Conclusions and future prospects
We propose a new class of Compton scattering experiment with the potential to provide the antisymmetric part of the electron momentum density in materials. We show that the antisymmetric Compton profile is a unique fingerprint of time- and parity-odd properties of the underlying orbitals, and thus a sensitive probe of magnetoelectric phenomena. Initial experiments on the polar ferromagnet GaFeO3 demonstrate that our experimental technique is extremely sensitive, leading to very small systematic errors. Our results show that the magnitude of the antisymmetric momentum density, after broadening with the experimental momentum resolution of 0.79 a.u., is not larger than around 10^−5 of the peak in the symmetric part. While the optimistic eye might be tempted to pick out an antisymmetric difference signal (Fig. 4) above the statistical noise, we cannot claim that the present results are conclusive in this respect.

Figure 6: Calculated GaFeO3 total valence electron (green) and antisymmetric (black) Compton profiles, broadened by 0.2 a.u. (a quarter of the momentum resolution of the present work), indicating the potential benefit of performing measurements with improved resolution. Also shown is the same antisymmetric profile, but calculated with the spin-orbit coupling (SOC) reduced by a factor of ten. The antisymmetric profile is reduced by very nearly the same factor, showing that SOC plays an essential role in the underlying physics.

Figure 7: Magnetic Compton profile for GaFeO3: red line with full SOC and green line with SOC suppressed. A momentum broadening of 0.2 a.u. has been applied. Both profiles have been normalized to the total magnetic moment of 0.27 μ_B per formula unit.
The main scientific motivation for these experiments is to provide a sensitive and stringent test of first-principles electronic structure calculations. To this end, we have performed calculations of the antisymmetric Compton profile using the KKR Green's function method. The results of these calculations suggest that the antisymmetric profile should be far larger than was observed in the measurements, thus proving that the antisymmetric Compton profile is indeed a highly challenging and stringent test of competing theories. Moreover, even without comparison to experimental data, deficiencies in the theory are evident from the fact that the calculated profiles clearly (visually) violate our zero-sum rule.
Deficiencies in the experimental determination of this effect are even more dramatic than those of the theory. The X-ray detection efficiency, determined by the total detector solid angle, is approximately 6 × 10^−3. Moreover, the difference signal is inevitably washed out by convolution with the momentum resolution, determined by the incident photon beam bandwidth and the detector energy and angular resolution. Technological developments, especially high-resolution high-energy photon detector arrays, are likely to improve the quality of experimental data by a very significant factor, rendering such studies straightforward in the future.
To conclude, we have shown that antisymmetric Compton scattering should exist in an interesting and topical class of materials. First attempts to measure this have shown that it is extremely small and close to the limit of statistical uncertainty, but differs sufficiently from the predictions of state-of-the-art first-principles calculations to provide a very sensitive test of the microscopic origins of magnetoelectric phenomena. We expect that future improvements in experimental technology will make such measurements more straightforward. The present study focuses on a polar ferromagnet. While such a system affords simple control over magnetoelectric polarization, via magnetic field flipping, we note that this is by no means necessary. Antisymmetric components of the electron momentum density should be observable in materials that are odd under time and parity reversal separately, but even under the combination of the two, such as an antiferromagnetic/antiferroelectric crystal.
Finally, it is perhaps worth noting that Compton scattering has the potential to probe other exotic polarization-dependent properties. For example, one could envisage a study of surface states in a topological insulator, using the spin sensitivity of magnetic Compton scattering (Cooper, 1985) to probe the correlation between the momentum vector of the wavefunction and its spin direction. Such experiments would be significantly more challenging than the present one, due to the required surface sensitivity and the reduced cross section for magnetic Compton scattering, but might be feasible in the future.
APPENDIX A: Verification of the single polar domain state in the GaFeO3 crystal
Since the antisymmetric Compton profile would be expected to vanish if the X-ray beam sampled an equal population of opposite polar domains, the experiment hinged on being confident that the entire crystal, of dimensions approximately 3 × 3 × 3 mm, consisted of a single polar domain.
As the crystal was far too large to establish its polar properties by conventional X-ray diffraction techniques, we employed a novel approach, described by Fabrizi et al. (2015), whereby spatial diffraction maps are made of opposite sample faces at two photon energies, just above and below the Fe K edge.
First, the detailed energy dependence of the (0 2 0) reflection was collected between 7.06 and 7.… keV. It is expected, given the geometry of the system, that reversing the face of the sample equates to reversing the sign of the polar vector, if the crystal is in a monodomain state. This is confirmed by the difference in the two energy profiles (Fig. 8). The contrast between polar states is provided by the resonant contribution to the X-ray diffraction, which is enhanced in the proximity of an atomic absorption edge.

Figure 8: Left: a map of the intensity ratio between photon energies 7.10 and 7.16 keV (below and above the Fe K edge), on the (0 2 0) reflection, for one sample face. The red arrow indicates the intensity ratio for the opposite face, in the central position of the map. The values 2.78 and 3.71, highlighted in the colour bar, represent the expected ratios of a monodomain state for the two faces, respectively. Right: the complete energy profiles of the reflection, measured at the centre of the map on the two opposite faces (red and blue). The dots represent experimental data (integrated rocking curves); the lines are simulated from a simple model of resonant diffraction from isolated Fe3+ ions.
To verify that the monodomain state extends to the whole sample, a spatial map was collected on one of the faces by measuring the intensity ratio between two suitably chosen photon energies (7.10 and 7.16 keV). This provides a fingerprint of the domain composition, irrespective of the overall scattering power of the specific portion of the crystal illuminated by the X-rays.
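This ratio test is simple to apply point by point. The Python sketch below flags map positions whose below/above-edge intensity ratio is consistent with the expected monodomain value (2.78 or 3.71 for the two faces); the intensities and the tolerance are illustrative placeholders, not data from the experiment.

```python
import numpy as np

def monodomain_mask(i_below, i_above, expected_ratio, tol=0.15):
    """Flag map points consistent with a single polar domain.

    i_below, i_above -- integrated (0 2 0) intensities at 7.10 and
                        7.16 keV for each point of the spatial map
    expected_ratio   -- monodomain intensity ratio for the face under
                        study (2.78 or 3.71 in the experiment above)
    tol              -- fractional tolerance on the ratio (assumed value)
    """
    ratio = np.asarray(i_below, dtype=float) / np.asarray(i_above, dtype=float)
    return np.abs(ratio / expected_ratio - 1.0) < tol

# Toy 2x2 map with intensities constructed to match the 2.78 ratio exactly.
i_710 = np.array([[556.0, 560.0], [548.0, 552.0]])
i_716 = i_710 / 2.78
print(monodomain_mask(i_710, i_716, expected_ratio=2.78))  # all True
```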
These measurements confirmed that each face exhibited a single polar domain, and that the polar vector reversed between opposite faces. It therefore seems extremely likely that the entire crystal is formed from a single polar domain.
Hybrid Hemp Particles as Functional Fillers for the Manufacturing of Hydrophobic and Anti-icing Epoxy Composite Coatings
The development of hydrophobic composite coatings is of great interest for several applications in the aerospace industry. Functionalized microparticles can be obtained from waste fabrics and employed as fillers to prepare sustainable hydrophobic epoxy-based coatings. Following a waste-to-wealth approach, a novel hydrophobic epoxy-based composite including hemp microparticles (HMPs) functionalized with waterglass solution, 3-aminopropyl triethoxysilane, polypropylene-graft-maleic anhydride, and either hexadecyltrimethoxysilane or 1H,1H,2H,2H-perfluorooctyltriethoxysilane is presented. The resulting epoxy coatings based on hydrophobic HMPs were cast on aeronautical carbon fiber-reinforced panels to improve their anti-icing performance. Wettability and anti-icing behavior of the prepared composites were investigated at 25 °C and −30 °C (complete icing time), respectively. Samples cast with the composite coating can achieve up to 30 °C higher water contact angle and doubled icing time than aeronautical panels treated with unfilled epoxy resin. A low content (2 wt %) of tailored HMPs causes an increase of ∼26% in the glass transition temperature of the coatings compared to pristine resin, confirming the good interaction between the hemp filler and epoxy matrix at the interphase. Finally, atomic force microscopy reveals that the HMPs can induce the formation of a hierarchical structure on the surface of casted panels. This rough morphology, combined with the silane activity, allows the preparation of aeronautical substrates with enhanced hydrophobicity, anti-icing capability, and thermal stability.
INTRODUCTION
In the last decades, superhydrophobic surfaces have drawn great attention both in academia and in industry due to their self-cleaning, 1 anti-icing, 2 antifouling, 3 antifogging, 4 and anticorrosive 5 properties. In particular, superhydrophobic surfaces can be useful in many industrial fields, such as automotive, 6 aerospace, 7 textiles, 8 medical diagnostics, and sensing, 9 and they can be combined with smart functionalities derived from nanoparticles. 10−12 The surface wettability is determined by the strength of the cohesive forces within the water molecules and the interaction of the water with the surface itself. When water droplets roll off, keeping the surface dry, the cohesive forces within the water molecules are greater than those toward the surface. 13 A hydrophobic surface provides a water contact angle (CA) larger than 90°, while a superhydrophobic surface exhibits a water CA larger than 150° and a contact angle hysteresis (CAH) lower than 10°. 14 Low surface energy and micro-/nano-hierarchical roughness of the solid substrate are the two fundamental parameters for designing a superhydrophobic surface. 13 Inorganic oxide nanoparticles (e.g., silica, zinc oxide, titania) can provide rough surfaces with a microscale/nanoscale architecture. 15−18 For the development of hydrophobic/superhydrophobic coatings, different fabrication techniques can be utilized, for example, electrochemical deposition, 19 electro-spinning and electro-spraying, 20,21 chemical vapor deposition, 22 layer-by-layer deposition, 23 sol−gel processing, and solution casting. 24,25 Among these, the sol−gel method and the waterglass route are widely used, as they operate under mild and low-cost conditions to produce a variety of nanostructured materials 26,27 and superhydrophobic layers on various substrates. 24,28 Superhydrophobic coatings derived from silica sols and hydrophobic compounds have been widely investigated. Silica can be functionalized by alkyl silanes and fluorinated alkyl silanes through its surface hydroxyl groups, yielding hydrophobic and superhydrophobic coatings with low surface free energy. 29−32 Among hydrophobic compounds, the fluorinated ones exhibit high effectiveness; however, they have some drawbacks, including high costs and risks for human health and the environment. 33,34 The US Environmental Protection Agency and the European Chemicals Agency are considering the restriction of several long-chain linear per- and polyfluoroalkyl substances (PFAS). 35,36 Nevertheless, PFAS are still widely used as functional additives to prepare hydrophobic products. Hence, the use of non-fluorinated silanes, such as long-chain alkyl silanes, is encouraged for fabricating sustainable coatings owing to their lower toxicity and low costs.
Epoxy resins have been widely used as a matrix to prepare hydrophobic nano-/micro-composite coatings 37−39 due to their easy processing, strong adhesion to many substrates, and excellent chemical resistance. 40 The high surface energy of epoxy resins strongly limits their use as the polymer matrix in the manufacturing of water-proof coatings. To overcome such limitations, it is possible to disperse hydrophobic nanoparticles into the epoxy matrix, 41−43 before applying the coating onto a substrate (e.g., carbon fiber panels, plastic films, glass slides, metals, etc.). 44−47 Recently, natural fibers have become more attractive as potential sustainable and eco-friendly reinforcement for epoxy composites, encouraging the use of surplus cellulose fibers derived from agricultural waste. 48,49 However, due to their hydrophilic chemical nature, the use of cellulose-based particles as platforms and functional fillers for the manufacturing of hydrophobic epoxy-based coatings has not yet been investigated. The few works on the wettability of epoxy composites containing natural microfibers reported an unaltered 50 or decreased 51 hydrophobic character. Moreover, the treatment of jute fibers with an epoxy resin solution caused only a slight increase in water CA (from about 67° to 74°), 52 confirming that offsetting the marked hydrophilicity of these materials is a challenging task. In the context of the circular economy, the production of modified hemp particles would represent a sustainable and easy solution for the recovery of waste hemp rugs, as epoxy resins and fabrics are largely used in the aerospace sector for the fabrication of multifunctional fiber-reinforced composites. 53,54 In this perspective, it would be highly desirable to exploit properly functionalized hemp-based particles as fillers for manufacturing composite coatings with enhanced hydrophobicity and anti-icing performances, without any detrimental impact on the viscoelastic behavior and the thermal stability of the polymer matrix.
In the present study, hydrophobic composite epoxy coatings were cast onto typical aeronautical panels, based on carbon-fiber-reinforced (CFR) polymers, to assess their feasibility in aerospace applications. In particular, the possibility of obtaining innovative fiber-reinforced epoxy coatings reusing waste hemp as particles, functionalized with waterglass (i.e., sodium metasilicate) solution, 3-aminopropyl triethoxysilane (APTES), polypropylene-graft-maleic anhydride (PPgMA), and silanes, either hexadecyltrimethoxysilane (HDTMS) or 1H,1H,2H,2H-perfluorooctyltriethoxysilane (PFOTES), was deeply investigated, and the influence of the molecular structure of these silanes on the wettability and anti-icing performances of treated aeronautical panels was evaluated. Through this approach, the hemp particles' surface chemistry and morphological characteristics can improve the interphase between the filler and the polymer matrix and confer a rough morphology to the surface, leading to a higher glass transition temperature and reduced wettability compared to the virgin epoxy coating.
2.1. Materials. A commercial epoxy system, consisting of a modified diglycidyl ether of bisphenol-A (DGEBA) resin and isophorone diamine (IDA), a cycloaliphatic diamine hardener, purchased from MATES S.r.l. (Milan, Italy), was used for the preparation of the polymer matrix in the composite coatings.
2.2. Methods. 2.2.1. Synthesis and Functionalization of Hemp Fabric Microparticles. Hemp fabric rugs are generally used in the manufacturing of fiber-reinforced epoxy composites and are one of the most abundant waste materials resulting from the fabrication process. In Scheme 1, the overall procedure for the synthesis of hydrophobic hemp particles is shown.
Starting from waste hemp fabric rugs, HMPs were obtained through a sol−gel route previously established. 55,56 Briefly, hemp fabric rugs were submitted to iterated soaking−drying cycles (see stage 1 of Scheme 1) in a waterglass (Na2SiO3, 0.01 M) solution acidified to pH = 2.5 with hydrochloric acid (HCl). The sol−gel methodology allows for the deposition of a silica gel on the surface of the hemp rugs, which makes them brittle after drying. 56 Then, the dry fabrics could be reduced to silica-coated hemp powder (HEMPSi) particles by a low-power (350 W) mixer (IMETEC S.p.A., Azzano San Paolo, Bergamo, Italy). In the second step (see stage 2 of Scheme 1), the functionalization with amino groups was performed by immersing HEMPSi particles in an EtOH/water solution acidified to pH = 5 with acetic acid and containing APTES. The final product (HEMPN) was washed with an EtOH/water solution through centrifugation cycles. 55 In the present study, to confer a hydrophobic character to the HEMPN microparticles, the following strategy was adopted. The primary amino groups were left to react with the maleic anhydride molecules grafted along the polypropylene chains of PPgMA (see stage 3 of Scheme 1). For a typical synthesis, this reaction was performed as follows: 0.25 g of PPgMA was added into a vessel containing 50 mL of xylene solution at 80 °C and left stirring until the complete dissolution of the polypropylene-based compatibilizer. Then, 1 g of HEMPN particles, fully dried overnight at 60 °C, was added to the vessel and left reacting under reflux for 3 h at 80 °C. The final product (HEMPP) was washed with pure xylene by three centrifugation cycles to remove unreacted PPgMA and finally with a solution of EtOH/water (80:20 v/v %). 57
2.2.2. Manufacturing of Fiber-Reinforced Epoxy Coatings. The bulk sample of blank epoxy (EPO) was manufactured by mixing a certain amount of unfilled DGEBA with an amine hardener (26 wt % of the epoxy resin). After stirring at room temperature, the mixture was poured into a silicone rubber mold (5 × 5 × 0.3 cm³), to a thickness of 0.1 cm, cured at 60 °C for 24 h, and then post-cured at 80 °C for 4 h. Following a similar procedure, functionalized HMPs were used to fabricate DGEBA-based epoxy composite coatings with the same dimensions (5 × 5 × 0.1 cm³). In the preparation of a typical formulation, a specific amount of DGEBA resin was added with 2 wt % of HEMPP particle loading (this content was selected on the basis of tan δ measurements previously performed on a similar system by some of the authors 55 ), 10 wt % of acetone, and 1 wt % of HDTMS or PFOTES (see stage 4 of Scheme 1) as the silanizing agent. These percentage values all refer to the epoxy matrix cured with 26 wt % of hardener. These bulk composite coatings were cured under the same conditions as the EPO sample.
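As a worked example of the dosing just described, the short Python sketch below computes component masses for an arbitrary resin batch; the batch size is illustrative, and the assumption that the 2/10/1 wt % values are taken on the combined mass of resin plus hardener (the cured epoxy matrix) follows from the statement above.

```python
def formulation_masses(resin_g):
    """Component masses for the composite coating formulation.

    Assumes the filler/acetone/silane percentages refer to the mass of
    the epoxy matrix (resin + 26 wt % hardener), as stated in the text.
    """
    hardener_g = 0.26 * resin_g      # amine hardener, 26 wt % of the resin
    matrix_g = resin_g + hardener_g  # cured epoxy matrix mass basis
    return {
        "DGEBA resin (g)": resin_g,
        "hardener (g)": hardener_g,
        "HEMPP filler (g)": 0.02 * matrix_g,   # 2 wt % particle loading
        "acetone (g)": 0.10 * matrix_g,        # 10 wt % processing solvent
        "HDTMS/PFOTES (g)": 0.01 * matrix_g,   # 1 wt % silanizing agent
    }

# Illustrative 100 g resin batch (not a quantity from the paper).
print(formulation_masses(100.0))
```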
The unfilled epoxy coating and composite ones were also deposited by the drop-casting method from one-pot formulations on aeronautical panels based on the carbon fiber-reinforced material. Before the deposition, the formulations were strongly stirred to obtain homogeneous systems and then weakly sonicated to remove the bubbles. The final products were cast on CFR panels (7.5 × 2.5 × 0.1 cm 3 ) previously cleaned with ethanol. The coated substrates were cured at 60°C for 24 h and then post-cured at 80°C for 4 h. Untreated CFR panels and coated ones are shown in Figure S1. It can be observed that the deposition of the coatings does not alter the chromatic characteristics of the substrates, confirming a uniform distribution of the filler at a macroscopic scale. In the next sections, the epoxy coating samples containing HEMPP particles modified with HDTMS and PFOTES will be named EH_HDTMS and EH_PFOTES, respectively.
2.2.3. Wettability and Anti-icing Tests.
The wettability of the composite coatings was evaluated by measuring the CA and the CAH. The CA measurement allows the evaluation of the surface interaction of the investigated system among three phases (solid, liquid, and vapor). The CA is the angle arising from the intersection of the liquid−vapor and solid−liquid interfaces (see Figure S2a). 58 This angle forms when a liquid drop lies on a horizontal plane. The CA of distilled water was evaluated by a high-resolution camera (iPhone 13 Pro Max-12 MP, f/1.5, 26 mm-12 MP, f/2.8, 77 mm-12 MP, f/1.8, 13 mm-LiDAR ToF 3D, Apple Inc.) at room temperature using the sessile drop method. 58 The CA was measured with the open-source image processing software ImageJ (v.1.52t, NIH, 2020). The CAH was evaluated as follows: a distilled water drop was placed on the substrate, firmly anchored to an inclined plane, and the CAs (advancing and receding) were collected when the droplet started sliding down (Figure S2b). The difference between the advancing (θ_A) and receding (θ_R) contact angles corresponds to the CAH (eq 1):

CAH = θ_A − θ_R (1)
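The classification thresholds quoted in the Introduction and the CAH definition of eq 1 can be wrapped in a few lines of Python; this is a minimal bookkeeping sketch, and the example angles are hypothetical, not values from this work.

```python
def classify_wettability(static_ca, theta_adv=None, theta_rec=None):
    """Classify a surface from sessile-drop contact angle data.

    static_ca -- static water contact angle (degrees)
    theta_adv, theta_rec -- advancing/receding angles from the inclined
        plane; CAH = theta_adv - theta_rec (eq 1)
    Thresholds: CA > 90 deg is hydrophobic; CA > 150 deg with CAH < 10 deg
    is superhydrophobic.
    """
    cah = None
    if theta_adv is not None and theta_rec is not None:
        cah = theta_adv - theta_rec
    if static_ca > 150 and cah is not None and cah < 10:
        label = "superhydrophobic"
    elif static_ca > 90:
        label = "hydrophobic"
    else:
        label = "hydrophilic"
    return label, cah

# Hypothetical advancing/receding angles, for illustration only.
print(classify_wettability(115, theta_adv=121, theta_rec=109))
# -> ('hydrophobic', 12)
```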
All the CAs were estimated by placing water drops of volume in the range of 30−40 μL on the coating surface with a micropipette.
The anti-icing test was performed by dripping water droplets (30−40 μL) on the surfaces of overcooled samples (−30°C). 2 First, the samples were placed in a refrigerator at a temperature of −30°C for 10 min. Then, the deionized water droplet was dripped onto the coating's surface. Meanwhile, a high-speed camera (pco. dimax cs1, PCO) was used for recording the freezing process. Figure S3 shows the apparatus used for the anti-icing tests.
2.2.4. Structural, Morphological, and Thermal Analysis. 2.2.4.1. Attenuated Total Reflectance−Fourier Transform Infra-Red. Attenuated total reflectance−Fourier transform infra-red (ATR−FTIR) analysis was performed on the functionalized HMPs and epoxy coatings by using a Nicolet 5700 spectrometer (Thermo Fisher, Waltham, MA, USA) with a single-reflection ATR accessory. The instrument has a resolution of 4 cm⁻¹, and each collected spectrum was the result of 32 scans. The analysis software used was the Thermo Scientific OMNIC Software Suite (v7.2, Thermo Fisher, Waltham, MA, USA, 2005).
2.2.4.2. Atomic Force Microscopy. Atomic force microscopy (AFM) analysis was performed by means of a Bruker NanoScope V multimode AFM (Digital Instruments, Santa Barbara, CA, USA) apparatus to quantify the surface roughness parameters and analyze the nanoscale surface morphology of samples EPO, EH_HDTMS, and EH_PFOTES. Topographic height images were acquired at room temperature and processed using the Bruker software Nanoscope Analysis 1.80 (Build R1.126200). The measurements were performed in tapping mode, in which the sharp tip of the probe scans the sample surface intermittently by oscillating up and down as the cantilever is vibrated near its resonance frequency. The tip is characterized by a radius of 5−10 nm, a nominal spring constant of 20−100 N/m, and resonance frequencies of 200−400 kHz. For each analyzed sample, several AFM images were acquired at different locations to evaluate the trend of the roughness parameters and verify that these are reproducible on different scanned areas of the samples. The scanning rate was 0.500 Hz per scan line, with 512 pixels per line. In order to evaluate the surface roughness, different roughness parameters are generally estimated and applied. The magnification of the scanned area during the AFM acquisition greatly influences the roughness parameters: the roughness value measured for a large section of the surface will be very different from that calculated for a smaller section. Therefore, AFM images having the same scan size were compared for the three samples EPO, EH_HDTMS, and EH_PFOTES in order to quantify the roughness values consistently. In this work, to derive the quantitative roughness, two of the most relevant height parameters, namely, the roughness average (R_a) and the root mean square roughness (R_q), have been considered. More precisely, R_a represents the arithmetic mean of the absolute values of the height of the surface profile, and R_q is analogous to R_a, with the difference that R_q is more sensitive to peaks and valleys, due to the squaring of the amplitude in its calculation. These amplitude parameters, which characterize the surface based on the vertical deviations of the roughness map from the mean surface, are extensively used in the literature. 59,60 For all the samples, the R_a and R_q parameters were evaluated according to eqs 2 and 3:

R_a = (1/l_r) ∫₀^{l_r} |z(x)| dx (2)

R_q = [ (1/l_r) ∫₀^{l_r} z²(x) dx ]^{1/2} (3)

where l_r is the length of the line, z is the height, and x is the position.
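Eqs 2 and 3 translate directly into discrete sums over a sampled scan line. The Python sketch below is a minimal implementation; the toy profile is a placeholder, not AFM data from this study.

```python
import numpy as np

def roughness_parameters(z):
    """R_a and R_q from a discretized AFM line profile (eqs 2 and 3).

    z -- heights sampled at equal spacing along one scan line; heights
         are referenced to the mean line so that the discrete means
         approximate the line integrals over the profile length l_r.
    """
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                 # deviation from the mean surface
    r_a = np.mean(np.abs(z))         # arithmetic mean of |z|
    r_q = np.sqrt(np.mean(z ** 2))   # RMS, more sensitive to peaks/valleys
    return r_a, r_q

# Toy profile in nm; a real input would be one 512-pixel AFM scan line.
line = np.array([12.0, 18.5, 15.2, 22.1, 9.8, 16.4, 20.3, 14.7])
print(roughness_parameters(line))
```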
The surface morphology of the functionalized HMPs and coated aeronautical panels was observed via scanning electron microscopy (SEM) using a Leica Stereoscan 440 microscope (20 kV) (Cambridge Ltd., Cambridge, UK), coupled with an energy-dispersive X-ray (EDX) analytical system (Inca Energy 200…).
3.1. Structural Analysis of Functionalized Hemp Particles and Epoxy Composite Coatings.
It is known that the hydrophobicity of a surface depends on two main effects: the texture morphology of the surface and its chemistry. A polymer coating exhibiting a micro-structured surface is more hydrophobic than a flat one. 61 Hence, to make hemp particles a suitable functional filler for lowering the wettability of epoxy-based coatings, their surface chemistry needs to be modified to increase their hydrophobicity and affinity toward the polymer matrix. The HEMPSi powders were composed of microparticles with diameters ranging from tens of nanometers to tens of microns, as shown by SEM images (Figure 1a). 55 Such a morphology derives from the web-like structure formed by the fibrils and microfibrils of hemp. 57,62 The analysis of the ATR−FTIR spectrum of the HEMPSi sample reveals the presence of a layer of silica gel rich in silanol groups, as attested by the broad bands at about 3340 cm⁻¹ (ν O−H) and 1100 cm⁻¹ (ν Si−O) (Figure S4). The formation of a silicon-based layer is confirmed by EDX analysis (Table 1). Silanol groups on the surface of HMPs can react through condensation with APTES, a well-known coupling agent for epoxy systems. 63,64 Figure 1b reports SEM images of amino-functionalized hemp particles (HEMPN). The presence of primary amino groups on these fibers is clearly shown by the appearance of IR bands at 1554 cm⁻¹ (δ N−H) and 1407 cm⁻¹ (ν C−N) (see Figure S4) and is confirmed by EDX measurements (Table 1), which reveal a significant amount of nitrogen on the hemp surface. The amino groups on HEMPN particles can react with the oxirane rings of the epoxy chain; therefore, the resulting material can be used to prepare fiber-reinforced epoxy composites with a tailored interphase. However, as observed in a previous work, 55 HEMPN particles tend to segregate in the polymer matrix, as their surface chemistry limits the migration of such fillers to the surface of epoxy coatings. Conversely, the PPgMA-functionalized (HEMPP) particles show a non-polar character and are not able to react with oxirane rings. SEM micrographs at different resolutions (Figure 1c) reveal that HEMPP particles still exhibit the characteristic irregular rugged morphology, 57,62 and their surface appears slightly waxy and different compared to those of the HEMPSi and HEMPN samples. This may be ascribed to a thin layer of PP covering the primary wall of the hemp particles. The presence of this polymer coating is further supported by the higher amount of carbon recorded by EDX investigation for HEMPP particles with respect to HEMPSi (Table 1). It also makes nitrogen undetectable by EDX, probably because the nitrogen signal is covered by PPgMA after the reaction of the primary amino groups with the maleic anhydride moieties. 57,62 The successful grafting of PPgMA on the surface of the final product (HEMPP) is attested by the characteristic bands attributed to C−H bending at 1376 cm⁻¹ (ρ CH3) and 1460 cm⁻¹ (δs CH2) and by the increase of the C−H stretching bands in the region 2900−3000 cm⁻¹ (Figure S4).
The functionalization of hemp particles by PPgMA completely changes the chemical characteristics of their surface, enhancing their hydrophobicity and making them much more similar to the epoxy resin from a chemical point of view. Thus, due to their chemical and morphological characteristics, HEMPP particles appear highly suitable for incorporation into epoxy systems to manufacture polymer coatings with hierarchical micro-structured surface textures. To combine this aspect with an even more hydrophobic chemistry, the HEMPP particles were additionally functionalized with two silanes bearing non-polar chains, HDTMS and PFOTES (see Scheme 1). As shown in Table 1, HEMPP particles display a residual amount of silicon on their surface due to some free silanol groups that have remained unreacted. These silanol groups can condense with HDTMS and PFOTES (see Scheme 1) during the curing process (cure: 60 °C/24 h; post-cure: 80 °C/4 h) to form tailored hemp particles with a rugged morphology, high hydrophobicity, and enhanced compatibility with the epoxy matrix. 65,66 The chemical structure of the pristine epoxy resin (EPO) and of the composites containing the functionalized hemp particles (EH_HDTMS and EH_PFOTES) was investigated by ATR−FTIR spectra (Figure S5). All the samples appear completely cured, as attested by the disappearance of the oxirane ring vibration band at 913 cm⁻¹ and by the rise of the O−H stretching band between 3500 and 3100 cm⁻¹. 67−69 It is known that the position of this band is very sensitive to the strength of the H-bonding in which the OH groups are involved, shifting toward lower frequency with increasing magnitude of H-bonding. 70 For pure epoxy, two features at about 3470 and 3322 cm⁻¹ are seen, which can be related to free and H-bonded hydroxyls, respectively. For the EH_HDTMS and EH_PFOTES composites, the O−H stretching band shows a single feature at about 3380 cm⁻¹, indicating weaker H-bonding interactions between the epoxy matrix and the functional groups of the hemp particles. 68,71 Furthermore, in the spectra of EH_HDTMS and EH_PFOTES, despite the low concentration of both HEMPP particles and silanes, some of their main functional groups can be detected, for example, the Si−O bonds, whose stretching lies between 1030 and 1080 cm⁻¹, while other features overlap with the several characteristic vibrations of the polymer matrix.
3.2. Wettability and Anti-icing Properties.
The water CA and CAH values of the uncoated and treated aeronautical panels are displayed in Table 2 and Figure 2. The intrinsic water CA value of the pristine substrate was 62° ± 3°, which denotes a hydrophilic behavior of the carbon fiber-reinforced panel. In the coated samples, in the absence of the functionalized hemp filler, the deposition of EPO does not change the hydrophilic character (water CA = (84° ± 2°) < 90°). This hydrophilicity can be explained by the presence of hydroxyl groups, arising from the curing of the resin, which can form H-bonds with water molecules, 72 as confirmed by the ATR−FTIR investigation. A slight hydrophobicity (water CA = (96° ± 3°) > 90°) is observed for the EH_HDTMS coating. As previously demonstrated, HEMPP particles still exhibit free silanol groups, making them amphiphilic and able to migrate to the surface of the epoxy matrix, establishing a micro-structured and rough texture (see Section 3.4). The functionalization of hemp particles with PPgMA plays a key role in the above-described migration phenomenon and, at the same time, increases the hydrophobicity of the surface and the compatibility with the polymer matrix. The anchoring of HDTMS causes the exposure of alkyl chains at the solid−air interface, which, combined with the suitable morphology, provides a higher CA and lower CAH than EPO (see Table 2). Concerning the panel coated with EH_PFOTES, the presence of fluorinated alkyl chains boosts the hydrophobic response of the surface (water CA = 115° ± 3°; see Table 2 and Figure 2), due to the well-known function of fluorine-based silanes in lowering the wettability of coatings. 13 Figure 2 shows the progressive increase in CA and the relative decrease in CAH moving from EPO to EH_PFOTES, highlighting the effects of the structural and chemical contributions on the surface wettability. The rough morphology and non-polar character of the functionalized hemp particles exert a synergistic action in enhancing the hydrophobicity of the epoxy composite coatings, even using a low content of the waste-derived filler (2 wt %) in a simple one-pot procedure with mild operating conditions. The potential effectiveness of the epoxy composite coatings as protective layers against ice formation was evaluated by measuring the freezing time of water droplets deposited on the surfaces of the aeronautical panels at −30 °C. Figure 3 shows the shape of the droplets as deposited at room temperature and their change after the freezing process on pristine CFR panels and on substrates coated with pure epoxy (EPO), EH_HDTMS, and EH_PFOTES.
The water droplets on the CFR panel and on EPO were semicircular in shape and froze within 35 and 54 s, respectively. In contrast, the water droplets on the EH_HDTMS and EH_PFOTES coatings remained more spherical due to the higher hydrophobicity, and the freezing time was delayed up to 113 s (see Table 2 and Figure 3). These results can be explained by the reduction of the contact area between the water droplet and the hydrophobic surface, which leads to a lower heat transfer rate. 73−75 The increase of the CA and the reduction of the CAH values caused by the presence of the silanized hemp particles in the epoxy coatings (EH_HDTMS and EH_PFOTES) are clearly related to the anti-icing performances, as the observed variation of these parameters agrees with a reduced ice adhesion strength on the coated panels. 76,77 Based on the values of the collected CAs, the formation of ice on the EH_HDTMS and EH_PFOTES coatings occurs by a heterogeneous nucleation mechanism. 78 This low adhesion, combined with the decreased contact area, increases the water droplet freezing time. Although PFOTES appears to be the most effective functionalizing agent, HDTMS represents a valuable, greener alternative that still improves the wettability and anti-icing properties.
3.3. Thermal Analysis. DSC analysis was performed to investigate the thermal behavior of the epoxy composite coatings (EPO, EH_HDTMS, and EH_PFOTES). Figure S6 shows the DSC curves collected at the second heating run, and Figure 4a reports the estimated glass transition temperature (T_g) values for each sample. The absence of exothermic phenomena in the DSC thermograms proves that all the coatings are fully cured; thus, the addition of the functionalized hemp particles does not impair the crosslinking process. These results agree with the ATR−FTIR measurements. Despite the low amount (2 wt %) of hemp particles, EH_HDTMS and EH_PFOTES exhibit higher T_g values (up to ∼26%) than EPO. The positive effect on the T_g of the epoxy coatings is probably due to the chemical functionalization of the hemp particles, which makes the interphase between the polymer matrix and the filler well tailored. The non-polar character of the silanized hemp particles guarantees their uniform distribution in the matrix, while their surface functional groups allow for the establishment of secondary interactions (e.g., hydrogen bonds) with the epoxy chains, increasing the rigidity of the polymer network. 68 These effects have already been observed with other kinds of fillers, where the establishment of strong non-covalent bonds led to higher T_g values compared to the virgin polymeric systems. 79,80 These results suggest that silanized hemp particles could be promising for manufacturing fiber-reinforced epoxy coatings with good mechanical behavior.
The thermo-oxidative stability of the epoxy composite coatings was evaluated by TGA. The TGA and DTG profiles are shown in Figure 4b,c, and the temperatures at the main degradation steps are reported in Table S1. The decomposition of the EPO, EH_HDTMS, and EH_PFOTES coatings occurs in two main stages, namely, a first step between 350 °C and 400 °C and a second one around 550 °C, in agreement with the thermal behavior of DGEBA-based systems cured with amine hardeners. 81,82 The presence of physical interactions between the epoxy matrix and the chemically modified hemp fillers increases the initial degradation temperature of the resin by ∼20 °C, as highlighted in the inset of Figure 4b. This result further confirms the formation of secondary bonds in the polymer network, 68 in agreement with the ATR−FTIR analysis and the T_g values. Besides, EH_HDTMS and EH_PFOTES undergo the second degradation step at temperatures (T_max2) up to 30 °C higher than EPO. The T_max2 values, together with the higher residual masses of the composite samples, prove that the addition of silanized hemp particles into the epoxy matrix allows the production of a char more resistant to oxidative decomposition and, consequently, an enhanced thermal stability in air.
3.4. Surface Roughness Study.
Knowledge of the surface texture represents a crucial aspect of understanding the nature of the material's surface, and it is essential to verify the effectiveness of the adhesion at the interface between the polymer matrix and the filler particles. Hydrophobicity is a property that depends on the microstructure of the surface and, more precisely, on its roughness. 83 AFM was used to evaluate the surface roughness of the CFR panels coated with the developed epoxy composites and their nanoscale surface morphology. In particular, the measurement of the surface roughness allows assessing the differences in the roughness of each coated sample, even if the deposition method is the same for all investigated samples. AFM measurements also shed light on the influence of the support roughness on the texture of the protective coating. In this work, AFM analysis was used to prove the relationship between the surface roughness of the samples and their hydrophobicity, as demonstrated through the wettability and anti-icing tests (Section 3.2). Figure 5 shows representative AFM 2D and 3D topographic pictures corresponding to the EPO, EH_HDTMS, and EH_PFOTES coatings, where a surface of 15 μm × 15 μm was scanned in 512 lines. The AFM images show that the nanoscale roughness of the EPO sample is relatively low, as attested by the evaluated roughness parameters R_a = 16.0 nm and R_q = 19.9 nm (Figure 5a). By contrast, both the EH_HDTMS and EH_PFOTES samples have a much higher roughness, exhibiting values of R_a = 31.4 nm and R_q = 36.1 nm (Figure 5b) and R_a = 55.9 nm and R_q = 76.1 nm (Figure 5c), respectively. These results agree very well with the wettability and anti-icing properties, as the CA increases and the CAH decreases with increasing surface roughness. The higher nanoscale roughness shown by the EH_HDTMS and EH_PFOTES samples is attributable to the presence of hydrophobic hemp fibers characterized by a rough, irregular morphology of the primary wall surface. With respect to EPO and EH_HDTMS, the EH_PFOTES sample presents a rougher texture due to the higher capability of the HEMPP particles functionalized with PFOTES to migrate toward the surface and thus provide it with a more pronounced hierarchical structure. In summary, the roughness parameters corroborate the results from the morphological and wettability analyses, evidencing that the proposed methodology allows employing a small amount of functionalized waste-derived filler to obtain rough, hydrophobic coating surfaces.
CONCLUSIONS
The current study focuses on the possibility of using functionalized particles based on waste hemp to obtain hydrophobic epoxy coatings for aeronautical carbon fiber-reinforced laminates. The proposed chemical modification procedure allows turning an intrinsically hydrophilic material, such as hemp, into hydrophobic particles suitable as fillers in a non-polar polymer matrix, giving rise to surfaces with reduced wettability. It was found that, by embedding the functionalized HMPs within the epoxy matrix, a great improvement in terms of thermal behavior, hydrophobicity, and anti-icing performance could be achieved. In particular, the incorporation of only 2 wt % of silane-modified hemp particles into the epoxy resin resulted in both an increased glass transition temperature (up to ∼26%) and an enhanced thermo-oxidative stability, because of the good chemical affinity between the functionalized fiber particles and the polymer matrix. Aeronautical panels treated with the composite epoxy coatings exhibited a water CA up to 115° and a freezing time of 1.8 min, unlike samples cast with pristine resin, which showed ∼84° and 0.9 min, respectively. AFM analysis of the cast aeronautical laminates proved that the presence of the hemp fillers in the epoxy coating led to a rougher surface morphology with a hierarchical structure. The boosted hydrophobicity and anti-icing properties are due to the synergistic effects of the chemical and morphological features of the microparticles with anchored silanes. Furthermore, these properties can be tuned by using either the high-performing fluorinated silane (PFOTES) or the greener and cheaper alkyl silane (HDTMS). This research may inspire the design and development of sustainable multifunctional composite epoxy coatings containing waste-derived fibers as fillers for aircraft components.
ASSOCIATED CONTENT
Supporting Information: Photographs of CFR panels, bare and treated with coatings of pristine resin (EPO), EH_HDTMS, and EH_PFOTES; schematic representation of a water drop on a plane; schematic of the apparatus used for the anti-icing tests; ATR−FTIR spectra; results of TGA in air atmosphere for pristine resin and epoxy composites; and DSC curves for the samples EPO, EH_HDTMS, and EH_PFOTES (PDF)
Oncogenic FGFR Fusions Produce Centrosome and Cilia Defects by Ectopic Signaling
A single primary cilium projects from most vertebrate cells to guide cell fate decisions. A growing list of signaling molecules is found to function through cilia and control ciliogenesis, including the fibroblast growth factor receptors (FGFR). Aberrant FGFR activity produces abnormal cilia with deregulated signaling, which contributes to pathogenesis of the FGFR-mediated genetic disorders. FGFR lesions are also found in cancer, raising a possibility of cilia involvement in the neoplastic transformation and tumor progression. Here, we focus on FGFR gene fusions, and discuss the possible mechanisms by which they function as oncogenic drivers. We show that a substantial portion of the FGFR fusion partners are proteins associated with the centrosome cycle, including organization of the mitotic spindle and ciliogenesis. The functions of centrosome proteins are often lost with the gene fusion, leading to haploinsufficiency that induces cilia loss and deregulated cell division. We speculate that this complements the ectopic FGFR activity and drives the FGFR fusion cancers.
Primary Cilium and Its Role in Cancer Development
A majority of the vertebrate cells are capable of forming a primary cilium, a microtubulebased organelle that projects from the centrosome to integrate signaling pathways and mediate cell-to-cell communication. Mutations in genes that control cilia structure or function produce a growing list of diseases called ciliopathies. To this day, at least 35 ciliopathies exist, and more than 400 candidate proteins have been identified [1]. Virtually all annotated ciliopathies are genetic developmental disorders; however, function of cilia in the tissue homeostasis is also beginning to emerge [2].
During cell division, the centrosomes need to function in the mitotic apparatus. Therefore, the cilium is typically disassembled during mitosis, even though cilia rudiments may be preserved [3,4]. The presence of a primary cilium is, therefore, tightly coupled with the cell cycle. In the majority of the cilia-competent cells, the primary cilium is formed during the G0/G1 phase of the cell cycle and resorbs before the S phase [5,6]. Several mitotic kinases, Aurora A [7,8], polo-like kinase 1 (PLK1) [9] and NIMA-related kinase 2 (NEK2) [10], were shown to block assembly and induce disassembly of the primary cilium, and upregulated activity of these kinases is frequently found in cancer [11][12][13][14][15][16][17][18][19]. Inhibition of the cilia disassembly signaling using small chemical inhibitors restored ciliogenesis and suppressed tumor growth in cholangiocarcinoma [20] or chondrosarcoma [21].
It is mainly the loss of primary cilia, as well as of their regulatory function in cellular signaling and cell division, that has been associated with neoplastic transformation and tumor progression [22][23][24][25]. In glioblastoma, disruption of ciliogenesis was observed at all stages, starting at early tumor lesions [26]. In a mouse model of Kirsten rat sarcoma virus
FGFR Gene Fusions in Cancer
Deregulated FGFR signaling, mostly caused by increased FGFR activity, has been implicated mainly in tumor progression, through poorly understood mechanisms involving accelerated proliferation, resistance to apoptosis and enhanced angiogenesis [93, [149][150][151][152]. Among the 4853 tumor samples analyzed by next generation sequencing, a FGFR aberration was found in 7.1% of all cases [153]. The most frequent lesion was gene amplification, accounting for 66% of FGFR aberrations [153], and typically resulting in FGFR overexpression and increased activity [154][155][156][157][158]. FGFR mutations were less frequent, covering 26% of the identified aberrations [153]. More than 200 distinct FGFR point mutations have been identified in cancer, targeting the extracellular, transmembrane and kinase domains of all four FGFRs [133, [159][160][161]. The majority of the mutations lead to ligand-independent FGFR dimerization and increased pathway activity [162][163][164][165]. Interestingly, somatic mutations found in cancer frequently overlap with those causing developmental disorders (extensively reviewed in [133]); however, increased incidence of tumors has not been reported in these disorders. This can be exemplified by activating FGFR3-K650E/M mutation, causing thanatophoric dysplasia type II and SADDAN (severe achondroplasia with developmental delay and acanthosis nigricans), respectively [128,129,166,167]. Although this mutation has been detected in aggressive cancers, it failed to induce neoplastic transformation in mice. Additional mutation, involving deletion of the tumor suppressor PTEN (phosphatase and tensin homolog) or activating KRAS mutation were required to induce the FGFR3 cancerogenesis [168,169]. These data suggest that FGFR missense mutations are not likely to initiate the neoplastic transformation, but rather occur later to promote tumor progression and metastasis.
A gene fusion originates from a chromosomal rearrangement involving two genes, and results in a fusion protein capable of neoplastic transformation and oncogene addiction [170,171]. FGFR fusions are relatively rare, accounting for 8% of all FGFR aberrations found in cancer [153,172]. Additional missense mutations are sporadic [172], suggesting that the FGFR fusion protein holds sufficient oncogenic properties on its own. In type I fusions, typically driving the hematological malignancies [173], the FGFR extracellular and transmembrane domains are excluded, and the fusion occurs at the N-terminus of the FGFR kinase domain (Figure 1). In type II fusions, which are mostly found in solid tumors [173], the breakpoint usually occurs between exons 17 and 19, affecting only a varying part of the C-terminal region of FGFR [133]. In both types of fusion, the partner typically contains domains that facilitate dimerization, such as the coiled-coil domain, the sterile alpha motif, the leucine rich repeat or the leucine zipper, leading to ligand-independent FGFR dimerization and signaling activity. The FGFR fusion protein may also be sequestered to an alternate subcellular location, through features gained via the fusion partner, which can result in misplaced and deregulated activity. Finally, a substantial part of the fusion partner is typically lost during the chromosomal rearrangement, producing haploinsufficiency or gaining a novel function that may contribute to neoplastic transformation.
A substantial portion of the FGFR fusion partners are proteins associated with centrosome functions, including spindle organization and ciliogenesis (8 of 14 recurrent FGFR fusions with at least partially characterized signaling properties; based on a PubMed search in April 2021). This led us to speculate that disruption of the centrosome cycle may drive the pathogenesis of the FGFR fusion cancers. In the following sections, we review the current knowledge of such oncogenic FGFR fusions, and discuss the possible involvement of both fusion partners in cancerogenesis. For a complete reference, the recurrent and characterized, yet not included, fusions comprise FGFR2-CCDC6 [149,174]…
FGFR3-TACC3
Gene fusion involving FGFR3 and the transforming acidic coiled-coil containing protein 3 (TACC3) is one of the recurrent gene fusions, found in glioblastoma (29 of 103), non-small-cell lung carcinoma (28 of 103), head and neck squamous cell carcinoma (11 of 103), bladder cancer (10 of 103), and other types of cancer (Table 1) [133,149,153,179,185-200]. FGFR3-TACC3 transformed NIH3T3 and Rat1A fibroblasts [179,187,201,202], and xenografted astrocytes or glioblastoma cells stably expressing FGFR3-TACC3 gave rise to gliomas [187,203]. Mice with hippocampal cells transduced with FGFR3-TACC3 developed invasive, rapidly growing high-grade gliomas [187], proposing FGFR3-TACC3 as an oncogenic driver.

Figure 1: The wild-type FGFR comprises the extracellular immunoglobulin-like domain, responsible for ligand binding, the transmembrane region, and the intracellular part that is responsible for binding and activation of the signal transducers including PLCγ (binding site indicated in green). The type II fusions lose a variable part of the C-terminal region of FGFR, frequently involving the PLCγ binding site, and attach a truncated C-terminal part of the fusion partner. In type I fusions, the FGFR extracellular and transmembrane parts are excluded, and the truncated fusion partner joins in just before the FGFR kinase domain. In both types of FGFR fusion, the partner possesses domains that facilitate dimerization: the coiled-coil domain, the sterile alpha motif, the leucine rich repeat or the leucine zipper. The positions of the fusion breakpoints are indicated.

The chromosomal rearrangement produces loss of the FGFR3 3′ UTR containing miR-99a, which normally regulates the FGFR3 levels; this leads to overexpression of FGFR3-TACC3 [203] and abundant transactivation of the FGFR3 residues [201]. Similar to the majority of the type II FGFR fusions, the FGFR3-TACC3 protein lacks the C-terminus of FGFR3 that is necessary for phospholipase C γ (PLCγ) binding (Figure 1), leading to silencing of this signaling branch [179,249]. Conversely, the ERK MAP kinase and STAT (signal transducer and activator of transcription) signaling is increased in FGFR3-TACC3 expressing cells [201,203], and silencing of these pathways was partially successful in targeting the oncogene-driven growth of cell lines and xenografts [36,149,186,187,202,203,250-252].
TACC3 is an important component of the mitotic spindles, ensuring proper attachment of chromosomes to the microtubules [253,254]. During mitosis, FGFR3-TACC3 mislocalizes to the spindle poles while sequestering also the endogenous TACC3 from the mitotic spindle, through interaction of their coiled-coil domains [188, 255,256]. This delays mitotic progression, and induces chromosome segregation defects and aneuploidy that increases by greater than 2.5 fold [187]. Interestingly, targeting TACC3 proved a viable strategy in TACC3-overexpressing cancers, likely by inducing abundant multipolar spindles, which led to mitotic arrest and apoptosis [257][258][259]. Elevated cellular levels of TACC3 were shown to induce loss of primary cilia through Aurora A induction and disruption of the transmembrane protein 67 (TMEM67)-filamin A complex [260,261], and promoted oncogenic transformation and shortened survival of the patients with prostate cancer [262]. Knockdown of TACC3 rescued ciliogenesis, reduced transformation and inhibited xenograft growth [262]. Taken together, FGFR3-TACC3 could lead to neoplastic transformation partly through induction of cilia disassembly and deregulated cell division, which are both druggable targets.
FGFR1-TACC1
The FGFR1 fusion with transforming acidic coiled-coil containing protein 1 (TACC1) was found in various types of tumors arising within the central nervous system (14 of 15; Table 1) [263], and xenografted astrocytes stably expressing FGFR1-TACC1 gave rise to gliomas [187]. The biological and oncogenic functions of FGFR1-TACC1 appear similar to those assigned to FGFR3-TACC3 [187]. TACC1 has a coiled-coil domain at the C-terminus that is preserved in the fusion protein (Figure 1) and that mediates localization to the mitotic spindle [264-266]. FGFR1-TACC1 expression increased the rate of errors in chromosomal segregation about five-fold [187], likely through mislocalization and sequestration of endogenous TACC1; similar spindle defects were observed in HeLa cells depleted of TACC1 [266]. TACC1 interacts with Aurora A, which appears critical for spindle formation, and the expression levels of the two proteins seem to correlate in cancers [266]. This suggests that TACC1 overexpression caused by the FGFR1-TACC1 fusion could participate in neoplastic transformation through deciliation caused by increased Aurora A activity and deregulated cell division, similar to FGFR3-TACC3 cancers.
FGFR2-BICC1

As a consequence of the chromosomal rearrangement, the FGFR2 3′ UTR is truncated, which results in upregulation of the FGFR2-BICC1 fusion protein [214]. FGFR2-BICC1 dimerizes likely via the sterile alpha motifs of BICC1 [268], leading to ligand-independent dimerization [149] and activation of ERK MAP kinase, but not STAT3 or AKT, signaling [175,212,267]. FGFR inhibitors were partially successful in targeting the oncogene-driven growth of cell lines, xenografts and patients' tumors [175,215,269,270]; acquired resistance through the gatekeeper FGFR2-V564F mutation was also reported [270]. The FGFR2-V564F-BICC1 cells showed oncogene addiction that was fully inhibited by a synergistic effect of FGFR and ERK MAP kinase pathway inhibitors [267].
BICC1 is a conserved RNA-binding protein that represses translation of selected mRNAs to control development [271-275]; the domains responsible for RNA binding are, however, partly lost during the chromosomal rearrangement, suggesting that this function is lost in the FGFR2-BICC1 fusion. Deletion of BICC1 leads to classical ciliopathy features, including randomization of left-right asymmetry and cystic development in the kidney, liver and pancreas [276-283]. Loss of BICC1 disrupted the alignment of motile cilia and the establishment of cilia-driven fluid flow in the mouse embryonic node and Xenopus gastrocoel [279], producing laterality defects. This may be due to a disrupted protein synthesis machinery at the centrosome, which appears important for the adjacent cilia [284,285]. In humans, mutations in BICC1 were identified in patients with kidney dysplasia, likely caused by ectopic Wingless-related integration site (WNT)/β-catenin signaling [286]. Decreased levels of BICC1, or loss of some of the three RNA-binding domains that are also relevant for the FGFR2-BICC1 fusion, also upregulated WNT/β-catenin signaling [275,279,287-289]. Taken together, the FGFR2-BICC1 fusion is likely to produce a BICC1 haploinsufficiency that leads to disrupted ciliogenesis and cilia-associated signaling, which may contribute to cancerogenesis.
FGFR2-NDC80
A cholangiocarcinoma patient was described with a fusion comprising FGFR2 and NDC80 (or HEC1, highly expressed in cancer 1) [216]. FGFR2-NDC80 was overexpressed in the tumor cells, and activated ERK MAP kinase, PLCγ, and STAT3 signaling [216]. Considering that the PLCγ binding site is lost with the fusion (Figure 1), it is possible that FGFR2-NDC80 activates this pathway through heterodimerization with the endogenous FGFR. The fusion protein retains the kinetochore microtubule binding region of NDC80 [290], suggesting possible mislocalization that was, however, not experimentally addressed; within the tumor samples, FGFR2-NDC80 localized predominantly to the cell membrane [216].
NDC80 localizes to the centrosomes and mitotic spindles, where it is necessary for assembly and stabilization of the kinetochore microtubules (reviewed in [290]). High NDC80 levels were found in cancers [291-294], and overexpression of NDC80 in mice led to abnormal spindle formation, hyperactivation of the mitotic checkpoint and initiation of tumorigenic events [295]. Depletion or inhibition of NDC80 induced mitotic arrest, and suppressed xenograft tumor growth [294,296-298]. Taken together, these data suggest a possible involvement of mitotic defects in FGFR2-NDC80 cancerogenesis, through ectopic FGFR and NDC80 activity.
FGFR2-OFD1
Fusions involving FGFR2 and the oral-facial-digital type 1 (OFD1) gene were reported in thyroid and endometrial cancer [149,219] (Table 1). FGFR2-OFD1 induced transformation of RK3E cells, which was abolished by FGFR kinase inhibitors [316]. Dimerization of the fusion protein likely occurs through the coiled-coil domains of OFD1 [149], which are preserved in the fusion protein (Figure 1), leading to transactivation of the FGFR2 kinase domain and activation of ERK MAP kinase signaling [316].
OFD1 localizes to the centrosome [317], where it is required for centriole maturation and primary ciliogenesis [318,319]. This localization requires the N-terminal part of OFD1 [320], which is, however, lost in the FGFR2-OFD1 fusion. Heterozygous loss-of-function mutations in OFD1 produce the OFD1 syndrome, an X-linked dominant disorder, lethal in males, that is characterized by systemic ciliopathy features [306,321-324]. The Ofd1+/− female mice reproduced the main patient phenotypes [318,325], suggesting haploinsufficiency in the heterozygous animals. The cilia were severely disrupted or lost, producing defects in laterality and Hh-dependent tissue patterning [318,326]. The zebrafish ofd1 morphants also displayed laterality defects, due to cilia abnormalities in the Kupffer's vesicle, as well as additional ciliopathy features [327]. These data suggest that decreased levels of endogenous, centrosome-competent OFD1 in FGFR2-OFD1 cancers may lead to deregulated ciliogenesis and cilia signaling, potentially contributing to neoplastic transformation.
FOP-FGFR1

FOP haploinsufficiency may contribute to FOP-FGFR1 cancerogenesis, as reduced FOP levels were shown to disrupt the centrosome structure and inhibit ciliogenesis [341-343], and similar defects were observed in FOP-FGFR1 expressing cells [227,340]. Although hematopoietic cells do not produce cilia [344,345], centrosome defects have also been associated with other myeloproliferative neoplasms [340,346], suggesting a common pathogenesis.
CEP110-FGFR1
The fusion of FGFR1 with the centrosomal protein 110 (CEP110) drives expansion of the hematopoietic stem cell population, and causes malignancies that frequently progress to AML [221] (Table 1). When expressed in cells, CEP110-FGFR1 likely dimerizes through the leucine zippers in CEP110 (Figure 1), which drives constitutive autophosphorylation of the FGFR1 kinase domains [247]. CEP110-FGFR1 induced oncogene addiction in Ba/F3 cells [241,347,348] that could be targeted by tyrosine kinase inhibitors [241,348]. Transplantation of murine bone marrow or human CD34+ cord blood cells transduced with CEP110-FGFR1 produced AML in the recipient mice [347], further supporting the role of CEP110-FGFR1 as an oncogenic driver.
Pluripotent stem cells derived from an AML CEP110-FGFR1 patient showed aberrant hematopoietic differentiation, which was restored by tyrosine kinase inhibitors; growth inhibition was also achieved with isolated primary AML CEP110-FGFR1 cells [240]. This is in sharp contrast with the clinical observations, as patients with CEP110-FGFR1 disease do not respond to tyrosine kinase inhibitors and have a particularly poor prognosis; allogeneic hematopoietic stem cell transplantation appears to be the only viable option [238,349]. These data suggest that inhibition of the ectopic FGFR1 kinase activity in CEP110-FGFR1 cancers [241,350] does not bring clinical benefits, and that additional mechanisms perhaps contribute to the disease pathogenesis.
CEP110 is a structural protein of the centrosome [351,352]; its centrosomal localization requires a 170-amino-acid region in the C-terminus that is retained in the CEP110-FGFR1 fusion (Figure 1) [247]. The centrosome localization of the fusion may therefore interfere with centrosome maturation, likely due to a combination of the steric effects of the fusion and its ectopic kinase activity, which in turn produces centrosomal and spindle abnormalities and drives the oncogenesis [351,353,354].
Conclusions and Perspectives
The FGFR fusion proteins are oncogenic drivers; therefore, patients typically show a good initial response to targeted therapy using FGFR tyrosine kinase inhibitors [171,186,215,219,269,270,355]. However, secondary gatekeeper mutations arise during therapy [270,356], and inhibition of effectors downstream of the FGFR oncogene has not delivered strong clinical benefit; therefore, alternative approaches are being developed. One such strategy takes advantage of the general overexpression of type II FGFR fusion proteins [268], which makes them a good target for cytotoxic conjugates that specifically bind FGFR. For example, FGF2 conjugated with auristatin induced endocytosis of the FGFR1-FGF2/auristatin complexes, which released auristatin and produced a strong cytotoxic effect on cancer cells overexpressing FGFR1 [357]. Similarly, FGFR-specific antibodies or antibody fragments conjugated to a cytotoxic molecule enter the cells via endocytosis and induce cell death [358,359]. Clinical trials evaluating cytotoxic conjugates in FGFR fusion-driven cancers are yet to emerge.
Another possibility is to specifically target the fusion protein. For example, no therapy protocol is available for FOP-FGFR1-driven cancers, which are very aggressive [221,222,328,331]. FOP-FGFR1 saturates at the centrosome, which appears critical for oncogenic transformation [329,331]. Adeno-associated virus-mediated delivery of an interfering RNA, a peptide or a coding sequence specifically targeting the FOP-FGFR1 fusion or its interaction interface with the centrosome therefore represents an attractive therapeutic possibility [360-362].
Finally, the ectopic activity of the FGFR fusion protein, together with decreased levels of the endogenous fusion partner, may contribute to neoplastic transformation through loss of primary cilia and deregulated cell division. Restoration of ciliogenesis and/or cilia function is, therefore, an attractive and so far unappreciated strategy to attenuate tumor growth. NSC12, an orally available analog of the naturally occurring FGF ligand trap pentraxin 3 (PTX3), was developed to target FGF-driven pathologies [363]. NSC12 rescued ciliogenesis defects in three FGFR-driven cancer cell lines and a xenograft, and inhibited tumor growth [363]. Clinical studies evaluating cilia targeting as a cancer therapy are, however, yet to emerge.
Chemical Alarm Cues Are Conserved within the Coral Reef Fish Family Pomacentridae
Fishes are known to use chemical alarm cues from both conspecifics and heterospecifics to assess local predation risks and enhance predator detection. Yet it is unknown how recognition of heterospecific cues arises for coral reef fishes. Here, we test if naïve juvenile fish have an innate recognition of heterospecific alarm cues. We also examine if there is a relationship between the intensity of the antipredator response to these cues and the degree to which species are related to each other. Naïve juvenile anemone fish, Amphiprion percula, were tested to see if they displayed antipredator responses to chemical alarm cues from four closely related heterospecific species (family Pomacentridae), a distantly related sympatric species (Asterropteryx semipunctatus) and a saltwater control. Juveniles displayed significant reductions in foraging rate when exposed to all four confamilial heterospecific species, but they did not respond to the distantly related sympatric species or the saltwater control. There was also a strong relationship between the intensity of the antipredator response and the extent to which species were related, with responses weakening as species became more distantly related. These findings demonstrate that chemical alarm cues are conserved within the pomacentrid family, providing juveniles with an innate recognition of heterospecific alarm cues as predicted by the phylogenetic relatedness hypothesis.
Introduction
Accurate assessment of predation risk is vital to the success of any individual, as early detection of a predator enhances the chances of prey survival [1,2]. However, to be successful, antipredator defences must be balanced with other fitness-enhancing behaviours (e.g. feeding and reproduction) [1]. This leads to a selective pressure on individuals to acquire information about current predation risks within their environment, in order to modify their antipredator behaviour to reflect their current level of risk. Such a strategy should optimise the trade-off between predator avoidance and other fitness-enhancing behaviours [3,4]. Individuals that are able to detect and respond to alarm cues from heterospecific species that share a common predator will also have a fitness advantage [5,6]. The use of heterospecific alarm signals in risk assessment is common across multiple taxa: birds [7], mammals [8], freshwater fishes [9], amphibians [10,11], insects [12] and crustaceans [13]. Furthermore, information from heterospecific individuals may be more valuable than that from conspecifics, as heterospecific species may impose a lower competitive cost than a conspecific [14].
In aquatic systems, chemical cues along with visual cues are the primary sources of information for assessment of predation risk [15]. Released from specialised cells in the epidermis, following mechanical damage during a predation event, chemical alarm cues provide early warning of potential danger for other individuals within the local area [16], enhancing chances of survival [9,17,18]. The importance of chemical cues is highlighted by the simultaneous evolution of chemical alarm cues within most aquatic taxa found in both freshwater and marine environments (reviewed in [19]). They are of particular importance in complex or turbid habitats where visual cues are reduced [15,20]. Unsurprisingly, prey also use chemical alarm cues derived from heterospecifics to gain information about local predation risks [9,10,21].
Responses to heterospecific alarm cues may arise from one of two non-exclusive mechanisms: 1) individuals may possess an innate recognition of alarm cues common to closely related species (the 'phylogenetic relatedness hypothesis') [10,11,22]; or 2) individuals may acquire recognition of relevant alarm cues through experience (the 'ecological coexistence hypothesis') [6,22,23]. The phylogenetic relatedness hypothesis proposes that alarm cues are conserved within taxonomic groups and thus individuals are able to generalise the recognition of their own alarm cue to those of closely related heterospecific species, as the composition of both alarm cues should be similar, having been derived from a recent common ancestor [10,11]. Individuals should therefore display a stronger antipredator response to closely related species and a weaker response to species that are more distantly related, irrespective of whether the species are allopatric or sympatric [11,24,25]. Strong evidence supporting the phylogenetic conservation of alarm cues is provided for grey tree frog tadpoles, Hyla versicolor [11], and Ostariophysan fishes [26].
In contrast, the ecological coexistence hypothesis suggests that responses to alarm cues from heterospecific species arise due to individuals co-existing with species that are part of the same prey guild [11,23]. As both species share a common predator, it is beneficial to respond to each other's alarm cues, as this will enhance early detection of a predator. Such responses may arise through learning as individuals gain experience with the predator-prey community in their local environment [21,27], or they may be innately fixed through co-habitation with sympatric prey guild members over several generations [11]. Support for this hypothesis is often confounded by the use of wild-caught individuals, as it is not possible to control for experience prior to collection. Consequently, it is not possible to make definitive conclusions about how observed responses to heterospecific alarm cues arose. Interestingly, two studies suggest that ecological coexistence may play an important role in modifying phylogenetically conserved responses to heterospecific alarm cues [6,23].
For fishes, how responses to heterospecific cues arise is still open to debate. The ability to acquire learnt recognition of heterospecific cues has been demonstrated across a wide range of fish taxa: minnows [28], sticklebacks [9], cichlids [29], gobies [30] and salmonids [24]. This suggests that ecological coexistence plays a significant role in acquiring recognition of heterospecifics at the individual level. However, support for the conservation of chemical alarm cues within taxonomic groups varies greatly. Of the taxonomic groups tested to date, alarm cues appear to be highly conserved within the superorder Ostariophysi (where the putative chemical alarm cue hypoxanthine-3-N-oxide has been identified [26]) and the salmonid family [24]. Other studies, on wild darters, genus Etheostoma [31-33], and two species of coral reef gobies, Asterropteryx semipunctatus and Brachygobius sabanus [30], provide inconclusive support for either hypothesis. Indeed, a more rigorous empirical assessment is still needed to address the phylogenetic relatedness hypothesis among fishes and other vertebrates, and the extent to which phylogeny determines the magnitude of antipredator responses. The answers to these questions are particularly important in understanding antipredator behaviour in species-rich habitats such as coral reefs.
Recent studies have highlighted the importance of chemical alarm cues in predator-prey dynamics for coral reef fishes, particularly for newly settled recruits [18,34]. Recruits are exposed to a period of extremely high predation following settlement [35] and must rapidly learn to recognise predators to survive. During this period, chemical alarm cues play a crucial role in predator recognition [34,36] and survival [18]. Given that many species recruit to reefs around the new moon period and are likely exposed to a similar suite of predators, the ability to access information from heterospecifics will facilitate the rapid acquisition of predator identities and increase an individual's chances of surviving, particularly if they have an innate recognition of alarm cues from heterospecific species that share a common predator. However, to date only a goby, Asterropteryx semipunctatus, has been shown to be able to display antipredator responses to heterospecific alarm cues.
The aim of this study was, firstly, to test whether a common coral reef fish has an innate knowledge of heterospecific alarm cues at the time of settlement and, secondly, to assess whether there was a relationship between the intensity of response to heterospecific cues and the extent to which the species were related to each other, indicative of a phylogenetically conserved alarm cue. To do this we tested naïve juvenile anemone fish, Amphiprion percula (family: Pomacentridae), for an innate antipredator response to a range of chemical alarm cues from four heterospecific species within the pomacentrid family. They were also tested for their response to an alarm cue from a distantly related prey guild member, Asterropteryx semipunctatus, and a saltwater control. We then compared the intensity of the response to the heterospecific alarm cues to the time of divergence from the nearest common ancestor shared between A. percula and each of the heterospecific species.
Ethics Statement
Research was carried out under approval of the James Cook University animal ethics committee (permit: A1067) and according to the University's animal ethics guidelines. Fish collections around Orpheus Island, Great Barrier Reef were carried out with permission of the Great Barrier Reef Marine Park Authority (permit: G10/33239.1) and the Queensland Government Department of Primary Industry and Fisheries (permit: 103256).
Study Species
Amphiprion percula is a member of the highly diverse and abundant family Pomacentridae, which inhabits coral reefs throughout the tropics. While it is found in the same general habitat as the heterospecific species in this study, it displays distinct micro-habitat differences due to its symbiotic relationship with certain anemone species [37-41]. Consequently, the extent to which the species are exposed to each other's alarm cues should be similar for all species. Additionally, all species in this study are targeted by a similar range of predators [42-44]. The heterospecific species were chosen based on their phylogenetic relationship to A. percula: Amphiprion melanopus is a closely related congeneric species; Pomacentrus moluccensis and Acanthochromis polyacanthus are both from different genera within the Pomacentrinae sub-family; Chromis atripectoralis is from the Chrominae sub-family, one of the most basal groups within the Pomacentridae [45]; and Asterropteryx semipunctatus is from the distantly related Gobiidae family. All species are known to possess chemical alarm cues ([20,30,36], Mitchell unpublished data).
Collection and Maintenance
A. percula juveniles were captive bred and reared to settlement at the James Cook University aquarium facility, following the methods outlined in Dixon et al. [46]. Juveniles were maintained in three 40-l flow-through aquaria and fed 2/4 NRD marine food pellets (Spectrum Aquaculture) until they reached ~20 mm in length, at which point they were large enough to use in the experiments. Captive breeding ensured that the fish were completely naïve to the alarm cues of other species.
The five donor species were either taken from captive-bred stocks or collected from the wild. A. melanopus and Ac. polyacanthus were captive bred at the university aquarium facility and reared to the same size as A. percula. All other species (P. moluccensis, C. atripectoralis and As. semipunctatus) were collected from coral reefs around Orpheus Island, Great Barrier Reef, Australia. Juveniles of each species were collected using hand nets and the anaesthetic clove oil. All fishes were maintained in separate 40-l flow-through aquaria and fed ad libitum twice a day with 2/4 NRD marine food pellets (Spectrum Aquaculture).
Stimulus Preparation
Alarm cues were prepared fresh immediately before each trial. One individual per treatment was sacrificed by a quick blow to the head and placed in a disposable Petri dish. Using a clean scalpel blade, 15 superficial cuts were made along each flank of the fish. Fish were rinsed with 15 ml of seawater and the solution was filtered through filter paper to remove any solid material.
Observation Tanks
Conditioning and recognition trials were conducted in 11-l flow-through aquaria (30 × 20 × 15 cm). Each tank had a 2-cm layer of gravel, a small terracotta pot (5-cm diameter) for shelter at one end and an air stone at the opposite end. An injection tube was attached to the air stone tube to allow food and odours to be introduced with minimal disturbance to the fish. A 3 × 6 grid (4 × 5 cm cells) was drawn onto the front of each tank. Each tank was surrounded on three sides with black plastic to visually isolate the fish, and a black plastic curtain was hung in front of the tanks to create an observation blind.
Recognition Trials
Individual A. percula were placed into test aquaria and left to acclimate for two days. On the morning of testing, fish were fed 30 ml of Artemia solution (containing ~200 individuals per ml) and left for at least 1 h before testing began. Trials were conducted between 0800 h and 1600 h each day. Prior to the start of trials, the flow-through system was turned off, 10 ml of seawater were withdrawn and discarded from the tube to remove any stagnant water, and a further 20 ml were withdrawn and retained for flushing. Trials consisted of an initial 2-min feeding period, a 5-min pre-stimulus observation period and a 5-min post-stimulus period. At the start of the 2-min feeding period, 30 ml of Artemia were injected into the tank, followed by 10 ml of seawater to flush the tubing, and feeding rates were allowed to stabilise. Once feeding rates had stabilised, the 5-min pre-stimulus observation commenced. At the end of the observation period, 15 ml of stimulus odour were injected, followed by 10 ml of seawater for flushing, and the post-stimulus observation period began 1 min later. The stimuli consisted of one of the six skin extracts or a saltwater control. Stimuli were assigned randomly to the tanks. Individuals were tested for their response to one skin extract only. A total of 150 fish were tested (18-20 individuals per treatment).

Figure 1. The phylogeny of Pomacentridae study species and antipredator response to heterospecific alarm cues. The phylogenetic relationship and antipredator response of Amphiprion percula to heterospecific family members (Amphiprion melanopus, Pomacentrus moluccensis, Acanthochromis polyacanthus and Chromis atripectoralis), a distantly related sympatric prey guild member (Asterropteryx semipunctatus) or a saltwater control. a) A chronogram (modified from [45]) displaying the divergence times of the MRCA of the focal species, A. percula, and each of the heterospecific donor lineages within the family Pomacentridae. Ages are calibrated to millions of years before present. b) The mean change in foraging rate (±SE) of juvenile A. percula exposed to the chemical alarm cues of five heterospecific species and a saltwater control. Fishes are ordered with respect to their relatedness to A. percula. Letters below bars indicate Tukey's groupings. doi:10.1371/journal.pone.0047428.g001
The behaviour of the focal fish was observed during the pre- and post-stimulus observation periods. We quantified two response variables: foraging rate and distance from shelter. Decreased foraging rate and distance from shelter are well-known antipredator responses displayed by a number of prey species, including coral reef fishes [19,36,47]. Foraging rate included all feeding strikes, irrespective of whether they were successful at capturing prey. For distance from shelter, the horizontal and vertical locations of the fish in the tank were recorded every 15 s, using the grid drawn on the side of the tank. The position of the fish in the tank was then converted into a linear distance from shelter using the dimensions of the grid squares and Pythagoras's theorem.
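As an illustration only (this code is not from the original study), the grid-to-distance conversion can be written in a few lines of R; placing the shelter at the grid origin, and the column/row argument names, are assumptions made here:

```r
# Convert a recorded grid cell (column, row) into a linear distance (cm)
# from the shelter, using the 4 x 5 cm cell dimensions and Pythagoras's
# theorem. The shelter is placed at the origin for illustration.
cell_w <- 4   # horizontal cell size (cm)
cell_h <- 5   # vertical cell size (cm)

distance_from_shelter <- function(col, row) {
  sqrt((col * cell_w)^2 + (row * cell_h)^2)
}

distance_from_shelter(col = 3, row = 2)  # 3 cells across, 2 up: ~15.6 cm
```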
Identification of Phylogenetic Relatedness
To assess if the magnitude of an antipredator response to a heterospecific alarm cue is regulated by the phylogenetic relatedness of the focal species to the heterospecific species, we used the 'time of divergence' of our focal species (A. percula) and the most recent common ancestor (MRCA) of the heterospecific lineage in question. We made use of a recently published chronogram (time-calibrated phylogeny) of the family Pomacentridae [45] to find the divergence time of the MRCA of A. percula and the heterospecific alarm cue donors (Table 1). The pomacentrid chronogram was reconstructed using Bayesian age estimation methods and fossil calibration techniques (see methods in [45]). It includes all of the pomacentrid taxa used in this study and all major lineages of the family Pomacentridae. The timing of divergence (TD) of each pomacentrid heterospecific from A. percula was taken as the age of the MRCA of both lineages (TMRCA) minus the age of the node representing the origin of A. percula (TAp), i.e., TD = TMRCA - TAp (Fig. 1a; Table 1). This correction for the age of the A. percula lineage standardises the MRCA age to a metric that is specific to an ancestor node of A. percula.
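In code form the correction is a simple subtraction; the ages below are placeholders, not the values from Table 1 (which is not reproduced here):

```r
# Standardised divergence time T_D = T_MRCA - T_Ap for each donor lineage.
# Ages in millions of years; all numbers are hypothetical.
t_ap   <- 10
t_mrca <- c(A.melanopus = 12, P.moluccensis = 25,
            Ac.polyacanthus = 30, C.atripectoralis = 45)
t_d <- t_mrca - t_ap   # predictor used in the regression described below
t_d
```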
Statistical Analysis
The proportional differences between pre- and post-stimulus behavioural observations were calculated and used as the raw data. The effects of test odour (the six fish alarm cues and saltwater) on foraging rate and distance from shelter of A. percula were analysed using individual 1-factor ANOVAs. To account for ANOVAs being run on two potentially interrelated variables, a Bonferroni adjustment was employed (adjusted α = 0.025). The ANOVAs revealed that only foraging rate was affected by test odour; the subsequent analyses were therefore done on the foraging variable only. Tukey's HSD post-hoc analysis was used to identify significant differences between responses to the test odours.
The relationship between the foraging response of individuals to the different pomacentrid chemical alarm cues and the divergence time between the different pomacentrid species and A. percula (TD) was investigated using a linear regression. Divergence time was used as the predictive variable and mean change in foraging rate was used as the response variable. For both analyses, the data were checked for outliers, and residual analyses revealed that all data met the assumptions of homogeneity of variance and normality.
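A sketch of these two analyses in R, assuming a hypothetical per-fish data frame `trials` with columns `pre` and `post` (feeding strikes) and `odour` (test stimulus), and a hypothetical species-level frame `by_species` with mean responses (`mean_change`) and divergence times (`t_d`); none of these names come from the original study:

```r
# Proportional change in foraging rate per fish, then a 1-factor ANOVA
# with Tukey's HSD post-hoc test (Bonferroni-adjusted alpha = 0.025).
trials$prop_change <- (trials$post - trials$pre) / trials$pre
fit <- aov(prop_change ~ odour, data = trials)
summary(fit)
TukeyHSD(fit)

# Linear regression of mean change in foraging rate on divergence time.
lm_fit <- lm(mean_change ~ t_d, data = by_species)
summary(lm_fit)   # reports the r-squared quoted in the Results
```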
Results
Test odour had a significant influence on A. percula foraging rate (F6,111 = 18.78, p < 0.0001). Post-hoc tests showed that individuals displayed a significant reduction in foraging rate when exposed to alarm cues from conspecific A. percula and the heterospecifics A. melanopus, P. moluccensis and Ac. polyacanthus, compared to the saltwater control and the As. semipunctatus control (Fig. 1b). Individuals also showed a significant reduction in foraging rate when exposed to C. atripectoralis compared to the saltwater control, but not compared to the As. semipunctatus outgroup control. There was no difference in foraging rate between the saltwater control and As. semipunctatus, with feeding rate remaining constant throughout the trials (see Fig. S1 for mean pre- and post-exposure foraging rates). The 1-factor ANOVA on distance from shelter revealed there was no significant effect of test odour on A. percula (F6,111 = 1.38, p = 0.23).
There was a significant relationship between the response to pomacentrid chemical alarm cues and the timing of divergence of the MRCA of the donor species and A. percula, which accounted for 66% of the variation in the intensity of the antipredator response (r2 = 0.66, F1,88 = 16.72, p < 0.001; Fig. 2). The greatest reduction in foraging rate was displayed by individuals exposed to alarm cues from conspecifics and A. melanopus; the intensity of response then decreased as the donor species became more distantly related (Fig. 2).
Discussion
Our results demonstrate that juvenile reef fish are able to detect and respond to heterospecific chemical alarm cues and that chemical alarm cues are conserved within the Pomacentridae family. Naïve juvenile A. percula displayed a significant reduction in foraging rate when exposed to alarm cues from conspecific and heterospecific family members, but not to alarm cues from the distantly related sympatric As. semipunctatus or the saltwater control. Additionally, the intensity of antipredator responses to heterospecific alarm cues diminished as the timing of divergence between the heterospecific cue donor and A. percula increased. These results support the findings of similar studies on salmonids [24] and invertebrates [10,11,23]. However, this is the first study to demonstrate a strong relationship between phylogenetic relatedness and response intensity to heterospecific chemical alarm cues for a vertebrate species. This strong relationship suggests that the innate recognition of heterospecific cues by A. percula resulted from phylogenetic conservation of alarm cues, as predicted by the 'phylogenetic relatedness hypothesis'.
The ability to recognise and respond to heterospecific alarm cues will confer a significant survival advantage for reef fish throughout their lives, but particularly during critical ontogenetic life history changes. Following an initial pelagic stage, larval reef fish recruit to reefs in pulses around the new moons throughout summer [48]. During this transition to the reef they enter an environment rich in generalist, opportunistic predators [49] and are subject to extremely high mortality (up to 60% in the first 2 days [35]). Several studies have shown that coral reef fishes lack innate antipredator responses to predator odours with regards to short-term changes in risk perception [36,47,50], although Vail and McCormick [51] and Dixon et al. [46] suggest there may be some level of innate recognition of certain predators. In the absence of innate predator recognition, there will be a strong selection pressure to rapidly gain information about potential predators, risky habitats or time periods in respect to predation. Consequently, individuals that are able to detect and respond to heterospecific alarm cues will increase their chances of detecting an active predator in their local vicinity and enhance their chances of surviving any subsequent attack. The finding that A. percula responded to all the heterospecific alarm cues but not to As. semipunctatus (a prey guild member) demonstrates that alarm cues are conserved within the pomacentrid family. There was a strong relationship between the intensity of the antipredator response and the time since each heterospecific species diverged from its common ancestor with A. percula. These results support the predictions of the phylogenetic relatedness hypothesis, matching the findings of a number of previous studies on salmonids [24] and invertebrates [10,11,23]. In contrast, other studies investigating antipredator responses to heterospecific alarm cues found that responses were highly variable, with no apparent support for the phylogenetic relatedness hypothesis and only tentative support for the ecological coexistence hypothesis [13,30,52]. For example, while As. semipunctatus responded to both conspecific cues and heterospecific cues from Gnatholepis anjerensis, G. anjerensis responded to only conspecific cues [30]. Similarly, studies on freshwater darters [33] and sea urchins [52] found inconsistent patterns in responses to both conspecific and heterospecific cues. However, the previously mentioned studies were confounded by the fact that they used wild-caught individuals rather than naïve individuals. Consequently, any innate responses to phylogenetically conserved alarm cues (if present) may have been modified through experience with coexisting prey guild members, masking any response patterns indicative of phylogenetically conserved cues.
While there is the potential that ecological coexistence could have influenced the innate patterns of response observed here, we suggest it is unlikely to have caused the responses observed. The heterospecific species in this study were selected based on the consistency of overlap in habitat preference and exposure to common predators between A. percula and the heterospecific donor species. Given this, if ecological coexistence were causing the innate response to heterospecific alarm cues, we would have expected the responses to heterospecific cues to be uniform, irrespective of the time of divergence from their common ancestor with A. percula. Additionally, we would have expected individuals to respond to As. semipunctatus as well. However, as we were unable to include any allopatric pomacentrid species, there is the possibility that ecological coexistence might have influenced the responses observed. Dalesman et al. [23] and Dalesman and Rundle [6] demonstrated that ecological coexistence with heterospecific species can modify responses to phylogenetically conserved cues in snails, both at the population level, through coexistence over several generations, and at the individual level, through short-term changes in prey guild community structure. Ecological coexistence may therefore play a secondary role in determining responses to phylogenetically conserved cues.
The capacity of any species to use heterospecific cues may depend on a number of intrinsic (e.g. the ability to detect heterospecific alarm cues) and extrinsic factors (e.g. how the individual interprets the relevance of the information once detected). The ability to detect heterospecific cues is dependent on them being sufficiently similar to the focal species' own cues for recognition to occur. As demonstrated here, the intensity of response to heterospecific cues is directly related to the time of divergence from the most recent common ancestor. Species may not recognise heterospecific cues simply because the time since the two species diverged from their common ancestor was sufficient for the cues to become unrecognisable. Similarly, the rate at which such changes to the chemical cues occur will determine recognition patterns. For example, it is thought that chemical alarm cues play a significant role in immune system function for fishes [53,54]. The composition of the alarm cues may therefore be affected in part by the need to maintain immune system functioning. Consequently, changes in the composition of alarm cues may be driven by changes in the environment (and immune challenges) to which the individual species is exposed. Such drivers may cause a rapid change in the chemical alarm cue of species that have moved into a markedly different ecological niche.
Extrinsic factors, such as the prey species' ecology and life history, or the composition and foraging strategies of the predator community to which they are exposed, may also influence how they respond to heterospecific alarm cues. The diversity of predatory species and their preferred foraging mode will likely influence responses of prey to heterospecific cues. Prey exposed to generalist predators (abundant on coral reefs), which target a broad range of species within a prey's guild, will benefit from responding to heterospecific cues. Conversely, prey individuals exposed to specialist predators that target discrete types of species (or ontogenetic stages) within the prey guild may not gain benefits from responding to heterospecific cues, especially if the focal prey is rarely targeted by that predator [23]. Furthermore, life history strategies have the potential to strongly influence responses to heterospecific cues. Hazlett and McLay [13] suggested that the extent to which various crayfish responded to heterospecific cues did not depend on phylogenetic relatedness, but rather on whether they evolved in speciose regions and had the ability to disperse widely. The dispersive pelagic larval phase of reef fish may help to maintain a prey fish's responsiveness to heterospecifics, through the necessity for conservative risk assessment when settling into an environment that is highly patchy and unpredictable.

Figure 2. Relationship between antipredator responses and divergence times. The relationship between divergence time from the most recent common ancestor and the intensity of antipredator response of juvenile Amphiprion percula exposed to chemical alarm cues from various heterospecific species within the family Pomacentridae. Circles represent the mean change in foraging rate (±SE) of A. percula to chemical alarm cues of each heterospecific species. doi:10.1371/journal.pone.0047428.g002
This study demonstrates that juvenile A. percula have an innate ability to recognise and respond to chemical alarm cues from closely related heterospecifics. The patterns of response strongly suggest that responses to heterospecific alarm cues result from a conserved chemical alarm cue within the Pomacentridae family, as predicted by the phylogenetic relatedness hypothesis. Given the similarities between early life histories within reef fish, such baseline knowledge will enhance their capacity to detect risky situations and learn about the predators present in their new environment during a critical period in their life history. However, these innate patterns of response may not be permanently fixed.
Previous studies have shown that responses to alarm cues can change throughout development, particularly in regards to how individuals perceive heterospecific cues [55]. As prey grow, not only does perception of risk change with experience [56], but they also move into new prey guilds composed of different prey species and are exposed to different predators. Consequently, the patterns of responses to heterospecific cues will change throughout their lives to suit their current situation, incorporating new prey guild members and modifying innate responses as the perceived value of the information changes. To further understand the complexities of the predator-prey interactions that affect community composition and diversity on coral reefs, future studies need to look at how perception of risk alters with development and experience.

Figure S1. The mean foraging rates (±SE) of juvenile Amphiprion percula before (shaded bars) and after (open bars) being exposed to the chemical alarm cues from conspecifics, five heterospecific species and a saltwater control. A one-factor ANOVA revealed there was no significant difference in foraging rate between treatments before exposure to one of the odours (F7,140 = 1.77, p = 0.097). (PDF)
NUGGET EFFECT INFLUENCE ON SPATIAL VARIABILITY OF AGRICULTURAL DATA
Spatial variability description of soil chemical properties by thematic maps depends substantially on suitable geostatistical models. One of the parameters composing a geostatistical model is the nugget effect. This study aimed to evaluate the simultaneous influence of the nugget effect and the sampling design on geostatistical model estimation and on the estimation of soil chemical properties at unsampled sites, considering simulated data. Our results serve as a scientific basis for spatial variability analyses of soil chemical properties in agricultural areas. Given the simulation results and the agricultural data, we concluded that the high nugget effect values obtained here reduced spatial estimation efficiency. Moreover, the systematic sampling design produced the least accurate estimates of the geostatistical model and of values at non-sampled sites. Despite that, these nugget effect estimates should be kept in the analysis. However, further studies will be needed to investigate which factors are responsible for such high nugget effect values.
INTRODUCTION
Spatial analysis of a georeferenced variable using geostatistical models enables measuring the degree of spatial dependence among samples within a given area, thus describing its spatial dependence structure (Guedes et al., 2018). The spatial dependence analysis, mainly of soil chemical properties in farmlands, makes it possible to estimate their values in subregions (management zones) within the area of interest, which, in turn, enables the application of agricultural inputs at specific points (Gazolla-Neto et al., 2016).
The spatial dependence structure of a certain georeferenced variable should be described considering a stochastic process, whose data are expressed by Z(s1), Z(s2), …, Z(sn), which are known at n sites si (i = 1, …, n), where si = (xi, yi)^T is a two-dimensional vector. The georeferenced variable can be expressed by a Gaussian spatial linear model: Z(si) = µ(si) + ε(si), in which µ(si) = µ is the deterministic term, µ is a constant, and ε(si) represents the stochastic term with mean zero, i.e., E[ε(si)] = 0. The variation between points in space separated by a Euclidean distance hij = ||h||, with h = si - sj (i, j = 1, …, n), is determined by a covariance function C(hij) = cov[ε(si), ε(sj)] = σij, which depends only on h. Moreover, a covariance matrix Σ = [(σij)] is obtained from the covariance function (Guedes et al., 2011).
One of the functions used to describe a spatial dependence structure is the semivariance function γ(hij). It measures the dissimilarity between values sampled at sites separated by a distance hij, for stationary and isotropic processes, and is related to the covariance function by γ(hij) = C(0) - C(hij).
In the literature, different theoretical models have been developed to define the spatial dependence structure (semivariance function), and several methods have been used to estimate these models (Cressie, 2015; Monego et al., 2015; Cortés-D et al., 2016). The model estimated for the semivariance function has the following parameters: range (a), partial sill (C1), nugget effect (C0), and sill (C0 + C1). The range is the longest distance over which sampling sites remain spatially correlated within an area. The sill is the semivariance value when the distance equals the range, corresponding to the variance of the georeferenced variable (Kestring et al., 2015).
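As a concrete illustration (not part of the original analysis), the exponential model used later in this study can be written as a short R function; here c0, c1 and phi denote the nugget effect, partial sill and scale parameter, and the practical range of the exponential model is approximately 3 × phi:

```r
# Exponential semivariance model: gamma(h) = C0 + C1 * (1 - exp(-h/phi))
# for h > 0, with gamma(0) = 0. Practical range ~ 3 * phi.
semivar_exp <- function(h, c0, c1, phi) {
  ifelse(h == 0, 0, c0 + c1 * (1 - exp(-h / phi)))
}

# Example: practical range 60 m (phi = 20), sill 10, nugget 4.
h <- seq(0, 100, by = 10)
semivar_exp(h, c0 = 4, c1 = 6, phi = 20)
```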
The semivariance function has a minimum distance (hmin) within which its semivariance value (γ(hij)) is calculated. When this value is high, the phenomenon has high variability over a small range of distances. In these situations, the semivariance function has a parameter called the nugget effect (Peng & Wu, 2014; Cressie, 2015; Genton & Kleiber, 2015). Small-scale variability may be associated with features of the studied process and/or measurement errors. It may also occur due to data heterogeneity or to sampling size and scheme (Bossew et al., 2014; Seidel & Oliveira, 2014; Lark & Marchant, 2018; Wadoux et al., 2019). Vallejos & Osorio (2014), Cressie (2015) and Bassani et al. (2018) suggested a relationship between the nugget effect and (a) features related to spatial prediction, (b) the structure that describes spatial dependence and (c) the sample design. Chipeta et al. (2016) described that when the nugget effect is not zero, a sampling design with closer pairs of sampling points should be considered to improve geostatistical model estimation and spatial prediction.
However, there is a gap in the literature regarding the implications of the nugget effect and the sample design on geostatistical model estimation and spatial prediction (expressed by thematic maps), especially when their influence is evaluated simultaneously.
Thus, the goals of this study were: 1) to evaluate the influence of the nugget effect on geostatistical model estimation and on the estimation of a georeferenced variable at non-sampled sites, using Monte Carlo simulated data and considering different sample designs (random, systematic, and lattice plus close pairs); and 2) to analyze the spatial variability of the following soil chemical properties: carbon, calcium, magnesium, and pH, considering sampling data from an area whose sampling design was a lattice plus close pairs.
MATERIAL AND METHODS
Simulated datasets originated from stochastic processes, assuming stationary variables, with an isotropic Gaussian linear model. Sampling designs with 100 sampling points arranged in a regular area with a maximum coordinate limit of 100 m were considered. Three sample designs were simulated: a systematic (lattice) 10 × 10 grid; a random design; and a 9 × 9 lattice grid plus 19 randomly chosen nearby points (lattice plus close pairs). The sample size and the latter sampling design were chosen to match the design used in the study area.
Twelve trials were considered for each sample design, totaling 36 simulation sets. For each set, 100 simulations were run, totaling 3,600 simulations across all trials. Each simulation was generated considering an exponential model for the semivariance function, with fixed parameters: a practical range equal to 60 m and a sill equal to 10. The nugget effect was the parameter varied between trials; the 12 trials considered the following nugget effect values: 0, 1, 2, 2.5, 3, 4, 5, 6, 7, 7.5, 8, and 9.
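A minimal sketch of one such simulation trial, using the grf() simulator from the geoR package named later in the Methods; the exponential practical range of 60 m corresponds to a scale parameter phi = 60/3 = 20, and C0 = 4 is shown as an example trial:

```r
library(geoR)

set.seed(1)
c0   <- 4          # nugget effect for this example trial
sill <- 10         # C0 + C1 fixed at 10
phi  <- 60 / 3     # practical range of 60 m implies phi = 20

# One realization on a systematic 10 x 10 grid over a 100 m x 100 m area.
sim <- grf(100, grid = "reg", xlims = c(0, 100), ylims = c(0, 100),
           cov.model = "exponential",
           cov.pars = c(sill - c0, phi),  # (partial sill C1, phi)
           nugget = c0)
```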
Semivariance function parameters were estimated for each simulation by maximum likelihood, and the respective asymptotic standard errors were calculated (De Bastiani et al., 2012). Moreover, the following measures that quantify the intensity of spatial dependence were estimated: the relative nugget effect (RNE) (Equation 1) and the spatial dependence index (SDI) (Equation 2):

RNE = [C0 / (C0 + C1)] × 100                         (1)

SDI = MF × [C1 / (C0 + C1)] × [a / (q.MD)] × 100     (2)

in which: MF is the model factor and reflects the spatial dependence strength of each model (for the exponential, spherical, and Gaussian models, MF values are 0.317, 0.375, and 0.504, respectively); C0 is the nugget effect; C0 + C1 is the sill; a is the practical range; and q.MD is the fraction (q) of the maximum distance (MD) between sample points. In this study, we assumed q as 50% of the maximum distance. RNE was proposed by Cambardella et al. (1994) and describes the proportion of the sill represented by the nugget effect. SDI, in turn, was proposed by Seidel & Oliveira (2014) and includes a greater amount of information in its calculation than RNE (nugget effect, sill, practical range, and semivariance function model).
The scales of the RNE and SDI indexes differ, as does their interpretation. While RNE ranges from 0 to 100% for all semivariance function models, the SDI range depends on the semivariance function model. According to Seidel & Oliveira (2014), for the exponential, spherical, and Gaussian functions, SDI varies from 0 to 31.7%, from 0 to 37.5%, and from 0 to 50.4%, respectively. Regardless of the model describing the semivariance function, the closer the SDI is to its maximum value, the greater the spatial dependence of the variable under study.
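A direct transcription of eqs. (1)-(2) in R; capping a/(q·MD) at 1, so that SDI never exceeds its model-specific maximum of MF × 100, is our reading of the index and should be treated as an assumption:

```r
# Relative nugget effect (eq. 1) and spatial dependence index (eq. 2).
rne <- function(c0, c1) 100 * c0 / (c0 + c1)

sdi <- function(c0, c1, a, md, mf = 0.317, q = 0.5) {
  # mf: 0.317 (exponential), 0.375 (spherical), 0.504 (gaussian).
  # a / (q * md) capped at 1 -- an assumption, see lead-in text.
  100 * mf * (c1 / (c0 + c1)) * min(a / (q * md), 1)
}

rne(c0 = 4, c1 = 6)                    # 40%: moderate (25% < RNE <= 75%)
sdi(c0 = 4, c1 = 6, a = 60, md = 141)  # exponential model, q = 0.5
```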
Considering the estimated geostatistical models, the values of the georeferenced variable at unsampled sites were estimated using the kriging method. The mean kriging variance was calculated, since it is a measure of estimation efficiency: the lower its value, the better the spatial estimation efficiency (Cressie, 2015; Kleijnen & Mehdad, 2016).
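Continuing the grf() sketch above (the object sim), model estimation and kriging could look as follows in geoR; the prediction grid spacing is arbitrary here:

```r
# Maximum likelihood estimation of the exponential model parameters.
fit <- likfit(sim, ini.cov.pars = c(6, 20), nugget = 4,
              fix.nugget = FALSE, cov.model = "exponential")

# Ordinary kriging on a prediction grid; the mean of the kriging
# variances summarises spatial estimation efficiency.
pred_grid <- expand.grid(x = seq(0, 100, by = 2), y = seq(0, 100, by = 2))
kc <- krige.conv(sim, locations = pred_grid,
                 krige = krige.control(obj.model = fit))
mean(kc$krige.var)   # lower average kriging variance = more efficient
```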
Simulations using nugget effect values from 0 to 8 were compared to those with a nugget effect equal to 9. This comparison employed the following measures: the sum of squared differences (SSD) between spatial estimates, and an accuracy measure known as overall accuracy (OA) (De Bastiani et al., 2012). These methods were chosen to compare georeferenced spatial estimates with those generated under a lower degree of spatial dependence.
We also analyzed the spatial variability of a set of real data from a commercial grain area of 167.35 ha. The area is located in the city of Cascavel, western Paraná State, Brazil, at 24.95º S latitude, 53.37º W longitude, and 650 m above sea level. The local soil is classified as an Oxisol, with clayey texture and deep layers of good water storage capacity, porosity, and permeability (De Bastiani et al., 2012). The local climate is very wet and classified as mesothermal, Cfa (Köppen), with an average annual temperature of 21 ºC (Kestring et al., 2015).
A lattice plus close pairs sampling was performed, with a maximum distance of 141 m between sampling sites. At some sites, random samples were taken at shorter distances of 75 and 50 m between sites, resulting in a total of 102 sampling points. All samples were georeferenced and located with the aid of a GEOEXPLORE 3 Global Positioning System (GPS) signal receiver, in the UTM spatial coordinate system.
Soybean has been grown in the area under a no-till system since 1994. We used data from the 2010/2011 crop season related to the following soil chemical properties: carbon (C, g dm⁻³), calcium (Ca, cmol dm⁻³), magnesium (Mg, cmol dm⁻³), and pH. The dataset was acquired by routine chemical analysis: at each marked point, a soil sample was composed of five subsamples collected from the 0.0 to 0.2 m depth range close to the marked point, which were mixed and placed in plastic bags of about 500 g, forming a representative sample of the plot. These samples were sent to the Laboratory of Soil Analysis of the Central Cooperative for Agricultural Research (COODETEC) for routine chemical analyses.
The best model was fitted to the semivariance function for each variable under study, according to cross-validation criteria (Lu et al., 2012; Robinson et al., 2013). Asymptotic standard errors, RNE (Equation 1) and SDI (Equation 2) were calculated for each model. Moreover, using the kriging method, thematic maps of the spatial variability of the variables were created for the area under study. The simulated data sets and the statistical and geostatistical analyses were produced with the R software (R Development Core Team, 2018), using the geoR package (Ribeiro Junior & Diggle, 2001).
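A sketch of the model-selection step with geoR's xvalid(); soil is a hypothetical geodata object holding one soil property, and the initial covariance parameters are placeholders:

```r
# Fit candidate semivariance models and compare them by leave-one-out
# cross-validation; the model with the smallest mean squared prediction
# error would be retained.
models <- c("exponential", "spherical", "gaussian")
fits <- lapply(models, function(m)
  likfit(soil, ini.cov.pars = c(1, 200), cov.model = m))
cv <- lapply(fits, function(f) xvalid(soil, model = f))
setNames(sapply(cv, function(v) mean(v$error^2)), models)
```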
Simulated Data Analysis
The nugget effect was the parameter most influenced by changes in sample design in all simulations. On average, the worst results were obtained with systematic sampling, where the estimated values were farthest from the nominal value of this parameter (Table 1). Moreover, the standard error estimates of C0 were higher (Figure 1-a) than those of the random design and the lattice plus close pairs (Figures 1-b and 1-c).
The nugget effect was overestimated in simulations using lower values of this parameter, but underestimated when higher values were used. The best estimates were achieved with the random design: nugget effect estimates were on average close to the nominal value, showing less variability (Table 1) and lower standard errors (Figure 1-b). Lattice plus close pairs was the second-best design in terms of nugget effect estimation, as it also produced estimates that were on average close to the nominal value (Table 1) with low standard error estimates (Figure 1-c); unlike the random design, however, it showed a greater variability of estimates.
All sampling designs in this study generated similar range and sill estimates. Hence, the relevant differences in C0 estimates directly influenced the RNE calculation and inversely influenced the SDI calculation. Considering the RNE and SDI indexes across all simulations, the lowest spatial dependence was observed for systematic sampling, compared with the random design and lattice plus close pairs. The estimates of the RNE and SDI indexes closest to the nominal values were obtained with the random design.
These results corroborate the conclusions of Kestring et al. (2015), Zhao et al. (2016), and Bussel et al. (2016), who stated that sample design and size influence geostatistical model estimation. Table 2 also presents a descriptive summary of the average kriging variance for the georeferenced variable at non-sampled sites, under the different nugget effect values and sampling designs. The average kriging variance results were similar for all sampling designs and increased as the nugget effect was raised.
According to Cressie (2015), georeferenced variables can be decomposed into two random terms: a second-order stationary process and a white-noise measurement process. In this case, when interpolation is done at an unsampled point, the variance estimate exceeds the stationary variance by the amount of the white-noise measurement variance, corresponding to the nugget effect (Burgess & Webster, 2019). Therefore, the nugget effect and the kriging variance are directly related. The average kriging variance shows how efficient the spatial estimation at unsampled sites was: the smaller it is, the more efficient the estimation. These results (Table 2) showed that the higher the nugget effect (i.e., the lower the degree of spatial dependence of the georeferenced variable), the lower the efficiency of spatial estimation by kriging.
The kriging spatial estimation results at unsampled sites for simulations with C0 between 0 and 8 were compared to those obtained for the simulation with C0 equal to 9, using the sum of squared differences (SSD) (Table 2 and Figure 2). In all sample designs, as the nugget effect increased, the SSD between spatial estimates decreased.
This result indicates that the closer the nugget effect values of two geostatistical models, the more similar their spatial estimates, regardless of the sampling design. The model used for comparison (C0 = 9) represents a georeferenced variable with a pure nugget effect, that is, without spatial dependence. Thus, the SSD results (Table 2) showed that the stronger the spatial dependence of a georeferenced variable (the lower its C0 value), the greater its dissimilarity, in terms of spatial prediction, from a variable without spatial dependence.
Kriging equations depend on the semivariance function, and especially on the nugget effect. Higher values of this parameter assign greater kriging weight to distant samples, which in turn produces smoother thematic maps (Bassani et al., 2018).
These results evidence the importance of accurately modeling the nugget effect, given its association with kriging estimation and sampling design. For high nugget effect values (relative to the sill), it is therefore recommended to adopt a sampling design with many short-distance sites (Chipeta et al., 2016) in order to minimize uncertainty in the semivariance function parameters and in the kriging estimates (Wadoux et al., 2019). Figure 2 shows box plots, for each sampling design, of the similarity measure OA obtained by comparing spatial estimates: the estimates from data simulated with a nugget effect equal to 9 served as the reference map, and the spatial estimates from data simulated with the other nugget effect values served as the model maps.
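A minimal Python sketch of the two map-comparison measures used here, SSD and OA, on hypothetical kriged grids; the quantile-class reading of OA is our interpretation of the overall-accuracy measure of De Bastiani et al. (2012), not the paper's exact formula.

```python
import numpy as np

def ssd(map_a, map_b):
    """Sum of squared differences between two kriged surfaces on one grid."""
    return float(np.sum((np.asarray(map_a) - np.asarray(map_b)) ** 2))

def overall_accuracy(map_a, map_b, n_classes=4):
    """Share of grid cells falling in the same quantile class on both maps."""
    edges = np.quantile(map_a, np.linspace(0, 1, n_classes + 1)[1:-1])
    return float(np.mean(np.digitize(map_a, edges) == np.digitize(map_b, edges)))

reference = np.random.default_rng(0).normal(size=(50, 50))     # e.g., map for C0 = 9
model_map = reference + np.random.default_rng(1).normal(scale=0.3, size=(50, 50))
print(f"SSD = {ssd(reference, model_map):.1f}, "
      f"OA = {overall_accuracy(reference, model_map):.2f}")
```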
For systematic sampling (Figure 2-a), the findings described above hold for all simulations with nugget effect values that characterize the georeferenced variable as having weak spatial dependence, and for most nugget effect values that characterize it as having moderate spatial dependence.
Systematic sampling showed the highest accuracy measurements among the designs, especially for C0 above 5. For C0 = 8, which belongs to the same spatial dependence intensity class as the data simulated with a nugget effect equal to 9, the proportion of simulations with OA ≥ 0.85 increased by 20%, indicating high similarity between the spatial estimates carried out at non-sampled sites (De Bastiani et al., 2012).
Based on these results, the systematic design showed lower sensitivity of the spatial estimation process to changes in the nugget effect value when compared with random sampling and lattice plus close pairs. This is due to the poor quality of nugget effect estimation in systematic sampling (Table 1 and Figure 1), which in turn produced low-quality kriging estimates.
SPATIAL VARIABILITY ANALYSIS OF SOIL CHEMICAL PROPERTIES
Descriptive statistics for carbon (C), calcium (Ca), magnesium (Mg), and pH are given in Table 3. These values show that all parameters presented homogeneous data with low dispersion. According to cross-validation criteria, the Gaussian model was the best-estimated semivariance function model for C, Ca, and Mg, while for pH the best-estimated model was the exponential one. The spatial dependence radii for C, Ca, Mg, and pH were 254.90, 639.90, 685.47, and 300 m, respectively. All parameters showed nugget effect values higher than their sill values. Associating this with the simulation results, these high nugget effect values imply a loss of efficiency in spatial estimation by kriging for all parameters. However, as reported by Webster & Oliver (2007), such values should not be disregarded: the model must be correctly estimated and must incorporate the estimated nugget effect value. Furthermore, regarding the degree of spatial dependence, the parameters C, Ca, and pH are classified as moderate (25% < RNE ≤ 75%), while Mg is classified as weak (RNE > 75%) (Cambardella et al., 1994).
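As a one-line illustration of the Cambardella et al. (1994) classes just cited (thresholds from the text above; the function name is ours):

```python
def classify_spatial_dependence(rne_percent: float) -> str:
    """Cambardella et al. (1994) classes from the relative nugget effect (%)."""
    if rne_percent <= 25:
        return "strong"
    if rne_percent <= 75:
        return "moderate"
    return "weak"

# e.g., Mg with RNE > 75% -> "weak"; C, Ca, and pH -> "moderate"
print(classify_spatial_dependence(80))  # weak
```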
Based on quartiles, the same analysis was performed for the SDI, which is the strictest index for evaluating the degree of spatial dependence. This approach ranked C, Ca, and Mg as having weak spatial dependence and pH as having a moderate degree (Seidel & Oliveira, 2014). Comparing the two indices, we found a difference for C and Ca, which is related to the range value and the estimated model. Figure 3 displays the thematic maps of estimates for each factor. The thematic maps for carbon (Figure 3-a) and pH (Figure 3-d) showed less smoothing in the distribution of estimates over the area, whereas calcium (Figure 3-b) and magnesium (Figure 3-c) had more smoothed maps. Calcium and magnesium showed weak spatial dependence and nugget effect values higher than their sill values (Table 3). This finding emphasizes the influence of the nugget effect on the spatial estimation of georeferenced parameters at non-sampled sites and, as a result, the greater thematic-map smoothing for high nugget effect values.
According to Webster & Oliver (2007) and Hofmann et al. (2010), increasing nugget effect values produce a more even distribution of weights in spatial estimation, which generates smoother thematic maps. The nugget effect can be reduced by shortening the gaps between samples, i.e., by increasing sample density (Kestring et al., 2015); for soil properties, however, this is usually unfeasible because of the cost involved.
The spatial variability map for carbon (Figure 3-a) showed that lower carbon contents occur in the midwestern part of the area, while high levels occur mainly in the northern region. The thematic maps for calcium (Figure 3-b) and pH (Figure 3-d) showed reduced levels mainly in the southern region. Figure 3-c presents the thematic map for magnesium, in which the northern and southern regions show the lowest magnesium contents of the study area.
CONCLUSIONS
The near-origin behavior of the semivariogram, described by the nugget effect, strongly affects kriging estimation of georeferenced parameters at unsampled sites. The nugget effect negatively influences the stability of this type of modeling: the higher the nugget effect, the lower the efficiency of kriging spatial prediction. The nugget effect was also directly related to the smoothing of kriging estimates.
Systematic sampling exhibited the least accurate nugget effect estimation, the worst efficiency of spatial estimation, and the lowest sensitivity to spatial changes as the nugget effect changed. Across the range of nugget effect values, the best geostatistical model parameter estimates and estimates at unsampled sites were obtained with the random design, followed by lattice plus close pairs.
High nugget effect values were observed for all soil chemical properties. Low carbon contents were found in the midwestern region, while high contents were found in the northern region. Calcium and pH presented their lowest values in the southern region, and the lowest magnesium levels were observed in the northern and southern regions.
A Biometric Identification Based Scheme for Secured E-Payment
Means of electronic payment take diverse forms, some of which are mostly extended versions of existing offline methods. Much attention has not been paid to proper authentication of users, non-repudiation from the merchant to the users, and adequate protection from unauthorized use of payment data at the merchant's or Payment Service Provider (PSP)'s end. The quest for an additional security measure in the identification process of online e-payment systems has brought about the need for the development of an improved e-payment system. This paper presents a biometric identification scheme for an electronic payment system, tailored towards the payment of fees in tertiary institutions, that addresses the challenges emanating from present e-payment systems.
INTRODUCTION
An electronic payment (e-payment) system refers to the automated process of exchanging monetary value among parties in business transactions and transmitting this value over information and communication technology networks [2]. Examples of electronic forms of payment are scratch cards, electronic cheques, electronic cash, smart cards, and so on, which have been used for the exchange of goods and services over the internet.
The advent of electronic payment systems, regardless of the adopted system, opened the door to many risks, because the identification process mostly requires relaying personal information across the internet. These forms of identification sometimes include the entering of PINs, account numbers, and the name and address of payers [15]. The risks attached to this kind of authentication are too numerous to be overlooked. Account details can be intercepted or eavesdropped on, the amount paid can be compromised, and intercepted information can be used to obtain loans and make withdrawals from the account holder. Fake PINs can be generated to fool the electronic medium into acknowledging payments when none has been made; PINs can also be forgotten or stolen, thereby locking individuals out of their rightful accounts. Online databases can be breached, as happened to "CD Universe" (a web-based CD store) when a hacker broke through its security system and gained access to 300,000 credit card account numbers, resulting in the web site being shut down [7].
The problems attending identification for e-payment have raised the alert level for security in the course of e-payment. This has led to several security initiatives, such as the Secure Electronic Transaction (SET) protocol and the Secure Socket Layer (SSL). These have helped considerably, but they have been found to fail in several ways, at times where the culprits are employees of the financial organizations handling payment instructions, who are able to hack or intercept payment instructions and identification processes, clone similar e-payment portals, or generate fake PINs [12]. Also, identification for e-payment deals mainly with security at the point of entry, that is, identifying the user who is about to gain entry, whereas the above-mentioned security initiatives focus more on protecting data integrity, that is, what is sent over the internet. Therefore, a more robust authentication system is needed for e-payment, in which additional identification is based on "who you are" rather than on information that can be intercepted, faked, or compromised. This in turn informs a shift in focus, in the quest for online payment security, to biometric identification.
Biometric systems identify people by measuring some aspect of individual anatomy or physiology (such as hand geometry or a fingerprint), some deeply ingrained skill or other behavioral characteristic (such as a handwritten signature), or something that is a combination of the two [5]. This implies that even if an employee of an organization has access to a client's record, no fraud can be perpetrated without the presence of the client, because the biometric trait captured from the individual for authentication is not present to complete the authentication process, especially with the incorporation of a "liveness factor" into present-day biometric technology, which can determine whether the biometric trait used for authentication comes from a live person [10]. However, biometric identification takes many forms, and care must be taken to choose one with wide acceptance, low enrolment difficulty, and cost effectiveness.
REVIEW OF EXISTING E-PAYMENT METHODS

Online electronic credit-debit card payment system
In this e-payment system, credit card holders are granted a revolving credit line which enables the holder to make purchases and/or cash advances up to a prearranged limit [13]. The online credit card payment system extends the functionality of the existing credit card as an online shopping payment tool, as shown in Figure 1 below. It is the most popular method of payment, especially in retail markets. Here, data transfer is protected by the Secure Socket Layer (SSL) protocol, whose major duty is to ensure the encryption and integrity of the transferred message [11].
In credit card payment systems, merchants can operate in two versions: with or without an intermediary. The version without an intermediary assures message encryption and integrity but exposes both parties to other risks. As a customer communicates their card number and expiry date directly to a merchant, the card number can be taken from an insufficiently protected merchant server or illegally reused; moreover, the existence of the merchant is not assured. The merchant in turn has no guarantee that the buyer exists and will not repudiate the purchase afterwards [14]. The version with an intermediary assumes the participation of a trusted third party, which guarantees the existence of the vendor and denies the vendor access to the buyer's card data. It increases security on the customer side by assuring merchant authentication and data confidentiality. Nonetheless, the merchant is still not able to identify the buyer. This asymmetry can be eliminated by integrating an electronic signature system into the technology; the electronic signature allows authentication of the buyer. As for debit cards, with the debit approach the buyer needs a positive account balance before payment, as the amount is deducted immediately ("pay now"; e.g., Visa debit cards, ATMs). With the credit approach, charges are posted against the buyer's account and billed to the buyer later (Visa credit cards, generally).
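A minimal sketch of the intermediary idea described above, in which the merchant handles only an opaque token and never the card data. All names are hypothetical, and real schemes such as SET are far more elaborate; this only illustrates the information flow.

```python
import uuid

# Hypothetical trusted intermediary: the merchant only ever sees an opaque
# token, never the card number (the "version with an intermediary" above).
_CARD_VAULT: dict[str, str] = {}

def tokenize_card(card_number: str) -> str:
    """Customer -> intermediary: exchange card data for a one-use token."""
    token = uuid.uuid4().hex
    _CARD_VAULT[token] = card_number
    return token

def authorize(token: str, amount: float) -> bool:
    """Merchant -> intermediary: charge against a token; single use only."""
    card = _CARD_VAULT.pop(token, None)   # token is consumed either way
    return card is not None and amount > 0

t = tokenize_card("4111-0000-0000-0001")
print(authorize(t, 19.99), authorize(t, 19.99))  # True, then False (token spent)
```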
Figure 1: Credit-Debit E-payment System

At present, two emerging solutions seem quite interesting: dynamic e-cards and payment via sound waves. Dynamic e-cards allow banks to generate a one-use card number, cryptogram, and expiry date every time the card user buys online [1]. This solution does not require any additional applications and significantly minimizes transaction risk, but it sacrifices convenience. The second solution facilitates the identification and authentication of a card user via unique sound waves generated by the card; however, this system is still in the development phase. Figure 2 is an example of the online version of the credit card form, which is not very different from the debit card form; it shows the lack of proper authentication on the part of both the buyer and the merchant, especially when the buyer is paying for services or downloadable products, i.e., digital products, where the billing address cannot be corroborated.

Online electronic cash payment system

Initially, electronic money included three types of payment systems: virtual money, the electronic wallet, and the virtual wallet. However, the methods based on virtual money (digital currencies) were abandoned after a short trial period; nowadays, only two of them are in use [1]. The electronic wallet is based on smart card technology, which is used to store data about the customer's funds. Money is loaded into the e-wallet by transfer from the cardholder's account; in this way, the bank is not involved in the transaction at the moment of purchase. Smart cards mainly target the market of micro-payments. At present, they can be used at points of sale, vending machines, parking meters and ticket machines, public payphones, set-top boxes for interactive television, for online transactions, and so on. The integration of this system into internet payments requires installing smart card readers on the customer side. Smart cards are credit-card-sized plastic cards with an embedded chip offering microprocessor and memory capabilities. In e-payment, smart cards are used either as storage devices for much more information than credit cards, with inbuilt transaction processing capability [13], or to enhance e-payment security.
To use a smart card offline, it is necessary to have a smart card reader, a hardware device that communicates with the chip on the smart card [11]. The reader can be attached to PCs, electronic cash registers, automated teller machines (ATMs), and so on. Smart cards used for storing money are actually variations of debit cards that substitute for the previous magnetic-strip-based debit card. These are stored-value cards in which prepayment or currency values are electronically stored on the card chip. First, the card has to be loaded with a specific amount of money.
This can be done by downloading cash from the bank account or exchanging cash for tokens which can then be used to pay the merchant. The card can then be reloaded with more digital cash when the previous money is used up [14]. The card also contains a kind of encrypted key that is compared to a secret key contained on the user's processor. Some smart cards allow users to enter a Personal Identification Number (PIN) code. The simplest and most realistic way to achieve this is to build such readers into mobile phones. Such solutions can accelerate the development of 'pay-as-you-use' services such as online games, music ticketing, or mass transit systems [1]. Systems based on the virtual wallet are quite similar to electronic wallets; the only difference is that money is stored in software using tokens instead of on a smart card [14]. Such a system is usually managed by a bank or a bank card issuer. Having created an account, the buyer only has to enter their ID and password at the moment of transaction. A smart card can hold a hundred times more data, including multiple credit card numbers and information regarding health insurance, transportation, personal identification, bank accounts, and loyalty programs, such as frequent flyer accounts [6]. However, these days smart card technology is being used for debit cards too, e.g., ATM cards. The virtual wallet is used for micro-payments via the internet. Nowadays, electronic cash has been broadened to include dedicated-account scratch cards [3]. Figure 3 depicts a typical online electronic cash system. The dedicated-account scratch cards and debit cards under the online electronic cash system are widely accepted in Nigeria, especially in tertiary institutions, for the purchase of application forms and the payment of fees electronically. Ladoke Akintola University of Technology (LAUTECH), Ogbomoso, Nigeria, whose electronic payment of fees is taken as our case study, uses this method of e-payment.
Description and Challenges of the Present E-Payment System
The existing method is payment by scratch card: the student pays at the bank and obtains a scratch card for the amount paid. Armed with the scratch card, which contains a covered panel hiding a secret PIN and an uncovered serial number, both of which were uploaded to the server prior to the time of purchase, the student visits the school's portal online to make payments as requested. At the school's portal, the student enters the PIN revealed under the panel; the server compares the PIN and serial number with the preloaded ones (a validation sketch follows the list below), and if they tally, the tuition fee button is highlighted for payment and the student can proceed to registration. This system, a subset of the e-money class of e-payment systems, operates on trust, because the PIN (the number under the covered panel) is generated and uploaded together with the serial number by a human operator. This payment method has the following shortcomings:
a) physical card costs, which take their toll either on the students, who pay extra, or on the service provider;
b) the retailer's commission paid by the school;
c) limited interoperability, since students or their parents must travel to the school to purchase the card before going online to enter it at the school's portal;
d) proneness to fraud through the generation of fake PINs, whereby PINs are entered without any payment having been made.
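A minimal Python sketch of the serial/PIN check just described. All names and values are hypothetical, and the hashing step is our own illustrative hardening; the actual portal stores operator-uploaded PINs.

```python
import hashlib

def _digest(pin: str) -> str:
    return hashlib.sha256(pin.encode()).hexdigest()

# Hypothetical preloaded records: serial number -> [PIN digest, used flag].
PRELOADED = {"SN-000123": [_digest("492817"), False]}

def validate_scratch_card(serial: str, pin: str) -> bool:
    """True only when the serial/PIN pair matches an unused preloaded card."""
    record = PRELOADED.get(serial)
    if record is None or record[1]:      # unknown serial or card already used
        return False
    if record[0] != _digest(pin):        # PIN does not tally with the serial
        return False
    record[1] = True                     # mark the card as spent
    return True

print(validate_scratch_card("SN-000123", "492817"))  # True; False if retried
```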
METHODOLOGY

Design Methodology
The developed scheme is based on fingerprints as the authentication technique right from the bank: the student is enrolled when opening and funding a new account with the bank, or when upgrading an existing account by adding fingerprint templates and funding it. With a preloaded ATM card (debit card), the student proceeds to the school's portal online to pay the acceptance fee as a new student, using the fingerprint and the ATM card number, which has been linked to the account number, as directed on the portal. This enables the student to proceed to the payment of the tuition fee, after which registration can be processed. For a returning student, after the step of entering the ATM card number and authorizing with the fingerprint, the system skips the acceptance fee module and moves directly to the tuition fee payment process described above, after which the registration process begins. The fingerprint serves as the authorization factor and shows that the user is the card owner.
On acceptance of the fingerprint, a deduction is made from the student's account, and this enables the student to proceed to the payment of the tuition fee, after which the registration can be completed online. A sketch of this authorization-and-deduction flow is given below.
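This is a minimal Python sketch of the flow; all class and function names are hypothetical, and the byte-equality matcher merely stands in for a real fingerprint SDK. The authors' implementation used C#/.NET, MS SQL Server 2008, and the GrFinger SDK, so this fragment only illustrates the logic.

```python
from dataclasses import dataclass

@dataclass
class Account:
    card_number: str          # ATM (debit) card number linked to the account
    balance: float
    fingerprint_template: bytes

def match_fingerprint(probe: bytes, template: bytes) -> bool:
    """Stand-in for an SDK matcher (real systems compare minutiae against a
    similarity threshold); byte equality is used only for illustration."""
    return probe == template

def pay_fee(account: Account, card_number: str, probe: bytes, fee: float) -> bool:
    """Authorize with card number plus live fingerprint, then deduct the fee."""
    if account.card_number != card_number:
        return False                      # card not linked to this account
    if not match_fingerprint(probe, account.fingerprint_template):
        return False                      # user is not the card owner
    if account.balance < fee:
        return False                      # insufficient funds
    account.balance -= fee                # deduction from the student's account
    return True

acct = Account("5399-0000-0000-0001", 50_000.0, b"enrolled-template")
print(pay_fee(acct, "5399-0000-0000-0001", b"enrolled-template", 25_000.0))  # True
```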
System Architecture
The framework for biometric identification for e-payment is shown in Figure 4. The developed biometric e-payment system consists of two parts: the server and the web clients. One server houses the account details, transactions, and biometric details of the users at the various banks that the user may choose to transact with (BIES SERVER 1). Since the system will be accessed by the students of Ladoke Akintola University of Technology, Ogbomoso, Nigeria, for payment of all necessary fees to the institution, another server houses the details of users who are known as students, both new entrants and returning students (BIES SERVER 2). The multiple "biometric bank readers" indicate that the system is independent of any particular bank (more than one bank can access the system). The web clients are the points at which students access the school's portal and, in the process of payment, also access the bank with their ATM cards.
Implementation and Results
Hypertext Markup Language was employed in the Microsoft Visual Studio integrated development environment. The overall system was developed on the Microsoft .NET framework using Visual Studio .NET (Visual C#) and MS SQL Server 2008. The third-party software used was the GrFinger SDK. The system has two parts, namely the bank server side and the web client side for the school portal. Some of the graphical user interfaces of the developed system are depicted in Figures 5-8. The developed system was evaluated based on user assessment by a computer network administrator and fifty students. Three metrics were used for evaluation: System Ease of Usage (SEU), System Novelty Index (SNI), and System Degree of Relevance (SDR). The response means of the SEU, SNI, and SDR were 3.89, 3.96, and 3.86, respectively, on a rating scale of 1 to 5, as depicted in Figure 9. This shows that users found the system relatively easy to use, as the technical know-how required to use the system is minimal; it also shows that the system has an appreciable degree of integrity and is relevant to the delivery of secure and credible electronic payments.
CONCLUSION
The evolution of means of payment, through the various forms of the offline method to the online methods, has presented problems of proper authentication for users, non-repudiation from merchants to users, and adequate protection from unauthorized use of payment data at the merchant's or Payment Service Provider's end. Consequently, there is a need to proffer a solution to this attendant problem of e-payment systems. This paper has detailed the development of a biometric identification scheme for electronic payment. It is believed that the developed scheme will greatly reduce fraudulent practices in the payment of fees in tertiary institutions.
Figure 2: Credit Card Payment Form Sample
Figure 3: Online Electronic Cash Payment System
Figure 4: Developed Architectural Framework for Biometric Identification for Secure E-payment
Figure 5: Customer information interface
Figure 8: Validation of Payment Account Interface
Figure 9: Users' evaluation response means for SEU, SNI, and SDR
Fatigue in Post-Acute Sequelae of Coronavirus Disease 2019
Fatigue from post-acute sequelae of coronavirus disease 2019 is a complex constellation of symptoms that could be driven by a wide spectrum of underlying etiologies. Despite this, there seems to be hope for treatment plans that focus on addressing possible etiologies and creating a path to improving quality of life and a paced return to activity.
This article reviews fatigue as a PASC and provides clinicians with guidance regarding the evaluation and treatment of this complex symptom.
Pathophysiology of Fatigue/Post-Exertional Fatigue Related to Coronavirus Disease 2019
There are currently limited data regarding the pathophysiology of fatigue as a PASC. Mackay 22 posited that SARS-CoV-2 infection, like other triggers of ME/CFS such as vaccination, severe emotional distress, and viral infection, may act as a severe physiological stressor that could lead to fatigue by inducing hypothalamic dysfunction, spurring systemic inflammation via cytokine storm, and causing chronic damage to the pulmonary, cardiac, neurologic (including psychiatric), and myofascial systems.
Hypothalamic-pituitary axis dysfunction
Given the role of SARS-CoV-2 as a severe physiological stressor, a logical driver of COVID-related fatigue may be an altered response by the brain's stress center, the hypothalamic paraventricular nucleus (PVN). The hypothalamic PVN plays a crucial role in the body's hypothalamic-pituitary axis (HPA), a series of feedback mechanisms that regulate the release of glucocorticoids and the body's autonomic response to stimuli. Neurons within the PVN secrete the first hormone in this cascade, corticotropin-releasing hormone (CRH). These cells show significant plasticity in response to acute and chronic stress and have been suggested as a potential source of ME/CFS. 22

Proposed Diagnostic Criteria for ME/CFS
- A significant reduction or impairment in the ability to engage in premorbid activities for greater than 6 months, accompanied by new and profound fatigue that is not the result of excessive exertion and is not substantially improved with rest
- Post-exertional malaise
- Unrefreshing sleep
AND 1 of the following:
- Cognitive impairment
- Orthostatic dysfunction

The trigger of PVN dysfunction in cases of COVID-19 is not yet fully defined. In cases of ME/CFS, multiple mechanisms of damage have been described. A study by De Bellis and colleagues 23 investigated the role of an autoimmune response targeting the pituitary and hypothalamus. Their study assessed anti-pituitary antibody (APA) and anti-hypothalamic antibody (AHA) blood levels among 30 adult women diagnosed with ME/CFS using validated criteria 23 (see the box above) compared with 25 healthy controls. Patients diagnosed with ME/CFS had AHA and APA levels that were significantly higher (56% and 33%, respectively) than those of control patients. Further, important markers of an appropriate stress response, including adrenocorticotropic hormone (ACTH), cortisol, and insulin-like growth factor 1 (IGF-1), were lower in ME/CFS patients compared with controls. Studies have recently suggested that acute COVID-19 can be associated with similar autoimmune hypothalamic and pituitary involvement. Gonen and colleagues 24 performed a prospective, case-control study (49 COVID-positive patients, 28 healthy controls) which assessed the prevalence of adrenal insufficiency and HPA antibodies. Their data identified AHA in 31% of patients and APA in 51% of patients.
Adrenal insufficiency was present in 4 (8.1%) of the 49 cases. We did not find any data regarding the prevalence of these antibodies and/or adrenal insufficiency in cases of PASC.
Primary adrenal insufficiency
Some authors 25 have suggested that primary adrenal gland insufficiency specifically may be underdiagnosed among patients with PASC. COVID-19 infection may impact the adrenal glands by both direct and indirect mechanisms. At the cellular level, SARS-CoV-2 binds preferentially to angiotensin-converting enzyme 2 (ACE-2). Various studies have suggested that ACE-2 is expressed on adrenal glands and that SARS-CoV-2 may have a propensity to replicate within these cells. 25 However, there has been no direct evidence of cellular damage in these tissues. More concerning are the potential impacts of COVID-19 infection on the adrenal vasculature. Autopsy studies of patients with severe COVID-19 have found evidence of adrenal hemorrhage and infarct. 26,27 At the histopathologic level, there is evidence of coagulation sequelae, such as fibrin and microthrombi, in the adrenal vasculature. 27 Multiple studies have suggested the potential for these hypercoagulable sequelae to persist after the initial infection, even in mild cases of COVID-19. 28 With these potential impacts in mind, it is essential to consider the role of adrenal damage and insufficiency in producing common PASC, including fatigue.
Cytokine storm
Beyond autoimmune dysfunction, the body's initial inflammatory response to COVID-19 may directly impact the PVN's ability to regulate stress. Acute COVID-19 infection has been shown to cause what is known as a "cytokine storm," a dramatic, systemic immune response that involves pro-inflammatory cytokines such as interleukin (IL)-1, IL-6, and tumor necrosis factor (TNF). 29 The magnitude of the body's response to infection has been correlated with the severity of this inflammatory response. For example, dramatic surges of inflammatory markers in severe COVID-19 infection have been linked with worse outcomes, including the development of acute respiratory distress syndrome and multiorgan failure. 30 In the case of COVID-related fatigue, cytokine expression patterns may play a role. Previous work has suggested that the expression of Th1 and Th17, important factors in cell-mediated immunity, is downregulated in patients with ME/CFS. 25 Further, studies of populations following infection with Epstein-Barr virus and West Nile virus found that patients who experienced postviral fatigue had higher expression of the pro-inflammatory cytokines IL-2 and IL-6 than controls who recovered from infection without fatigue. 31,32 In patients experiencing PASC, increased levels of pro-inflammatory biomarkers such as IL-6, C-reactive protein (CRP), and D-dimer have been identified. 33 Imaging studies of COVID-19 patients with symptoms persisting more than 30 days from infection found higher levels of FDG uptake in bone marrow and blood vessels, signifying increased inflammation. 34 These prolonged pro-inflammatory states have been hypothesized as a potential cause of long-COVID-related fatigue and are being investigated as potential therapeutic targets. 35
Pulmonary dysfunction
Aside from inflammatory and endocrinologic pathways, COVID-19 has been implicated in damage to the cardiac, pulmonary, myofascial, and neurologic systems. Given its tropism for pulmonary tissue, it is expected that COVID-19 would be linked to long-term pulmonary sequelae. Patients with COVID-19 have been found to have radiological lung abnormalities, including fibrosis, and impaired pulmonary function (including diffusion capacity) months after initial infection. 10 Impaired gas exchange and pulmonary function may play a role in fatigue both directly (hypoxemia or hypercarbia) and indirectly (increased respiratory effort), in addition to causing more common pulmonary symptoms such as dyspnea and cough, which will be covered elsewhere in this series.
Cardiac dysfunction
Closely linked to COVID-19-related respiratory compromise is the cardiovascular system. SARS-CoV-2 has been linked to both microvascular damage and direct cardiac myocyte invasion. Autopsy studies of patients with COVID-19 have shown evidence of direct viral invasion of both cardiac myocytes and endothelial cells, associated with inflammation and dysfunction. 36 A cohort study by Puntmann and colleagues 37 investigated 100 COVID-19-recovered individuals at a median of 2 to 3 months post-infection and found evidence of cardiac involvement (including fibrosis) in 78%, with evidence of ongoing myocardial inflammation in 60%, independent of preexisting cardiovascular conditions. This strikingly high prevalence has been questioned by Malek, 38 who performed a similar analysis and identified similar pathology at a twofold lower frequency. Despite this controversy, there is further evidence of cardiac inflammation, including subclinical myocarditis, in healthy, recovered COVID-19 patients. 39 A cardinal symptom of cardiac dysfunction, including heart failure and myocarditis, is fatigue. This highlights the potential for cardiac damage to drive chronic fatigue symptoms in COVID-19.
Myopathy
In conjunction with fatigue, myalgia has been a commonly reported PASC. This has led to investigation of myopathy as a driver of physical fatigue during both the acute and post-acute periods of COVID-19 infection. Agergaard and colleagues 40 investigated the presence of neuropathy and myopathy in 20 patients recovered from COVID-19 (median 216 days) who experienced persistent neuromuscular symptoms (including fatigue), using nerve conduction studies and electromyography (EMG) needle examination. Nerve conduction studies were normal in all 20 patients; however, myopathic changes were present in 55% based on EMG. In the acute phase, there is more definitive evidence of structural myopathic changes. Multiple studies 41-43 performed biopsies on patients who died of severe COVID-19 and identified patterns consistent with inflammatory myopathy and critical illness myopathy. Case reports have corroborated the presence of critical illness myopathy 44 in cases of severe COVID-19. There have also been case reports of myositis during or after mild/moderate COVID-19 infection, though these data lack the power to support conclusions regarding causation. 45,46 According to a systematic review by Soares and colleagues, 47 there are some similarities between skeletal muscle changes in PASC and those in chronic fatigue syndrome; these similarities have yet to be fully investigated. In addition to structural and inflammatory muscle changes, metabolic changes may also play a role in myalgia and fatigue as PASC. For example, elevated levels of growth/differentiation factor-15 (GDF-15), an indirect marker of mitochondrial stress, have been found in a significant proportion of patients hospitalized with COVID-19. 48 Further, mitochondrial stress and skeletal muscle metabolic changes are associated with critical illness myopathy, a well-known sequela of severe COVID-19 infection. Few data exist regarding metabolic or myopathic changes in patients with mild-to-moderate COVID-19 infection who experience PASC.
Neuropsychiatric dysfunction
Fatigue can be a common manifestation of both central and peripheral neurologic dysfunction. The pathophysiology of PASC as it relates to the central and peripheral nervous systems will be discussed elsewhere in this series. Importantly, coronaviruses are known to invade neurologic tissues, potentially leading to alterations in cognition, behavior, mood, and function. 49 Recent data have emerged regarding the neuropsychiatric sequelae of COVID-19 in driving fatigue. In a systematic review, Renaud-Charest and colleagues 50 found that depressive symptoms were present in 11% to 28% of patients more than 12 weeks after initial infection, irrespective of initial infection severity. Further, Ortelli and colleagues 51 performed thorough neuropsychological assessments of recovered COVID-19 patients. They found that, compared with controls, patients had higher levels of perceived exertion and fatigue; they also exhibited apathy, executive deficits, and impaired cognition. Physiologic testing suggested that these outcomes may be related to GABA dysfunction, though further testing is needed to clarify this hypothesis. Penninx 49 suggested that the previously described altered inflammatory/immunologic pathways may contribute to the development of depression and anxiety in patients with PASC. Conversely, Stengel and colleagues 45 considered the potential for COVID-19 to trigger a functional disease or bodily distress disorder, akin to postinfectious irritable bowel syndrome, for which there is no clear pathophysiological basis.
Sleep disturbance
Dysfunctional sleep is a commonly reported symptom in both acute and chronic COVID-19 infection. [52][53][54] Estimates from Pataka and colleagues 55 suggest that 50% to 75% of all patients who suffer from COVID-19 infection experience sleep disturbance. Insomnia and fatigue often persist concurrently in patients after acute COVID-19 infection resolves. 54 Fernández-De-Las-Peñas and colleagues 56 reported data from a multicenter cohort study including individuals previously diagnosed with COVID-19, focused on reported symptoms of anxiety, depression, and sleep disturbance. They found that 33.2% of randomly selected patients experienced sleep disturbance at 6 to 10 months, a number that decreased to 27.7% in the 11- to 15-month window. These data were modeled on trajectory curves and found to decrease at a slower rate than after other major medical events such as cardiac surgery, suggesting that poor sleep quality could be a longer-lasting PASC, even compared with anxiety and depression.
Gut dysregulation
Recently, the literature has linked the gut microbiome to chronic fatigue syndrome. 57 Similar findings have started to emerge for PASC. Yeoh and colleagues 58 analyzed fecal microbiota from 100 patients with confirmed COVID-19 infection, including serial samples from 27 of those patients at least 30 days after virus resolution. They identified a significant difference in the microbiome of patients with COVID-19 compared with controls, regardless of medication use. Further, samples taken more than 30 days after infection resolution continued to exhibit lower levels of immunomodulating bacteria, potentially leading to prolonged symptoms and changes in the inflammatory cascade. Liu and colleagues 59 recently published data from a prospective cohort study suggesting that the gut microbiome profile may affect both susceptibility to PASC and the symptom profile that patients experience. In their panel of 106 COVID-19-recovered patients, fatigue was the most reported PASC.
EVALUATION
The prevalence of people suffering from symptoms of PASC required professionals to develop guidelines to establish a cohesive and standardized treatment approach. Herrera and colleagues 20 provided guidance through the American Academy of Physical Medicine and Rehabilitation (AAPM&R) Multi-Disciplinary PASC Collaborative. With the help of multiple professionals and patient representatives, they created a cohesive approach to the care of people with PASC, with a focus on fatigue. To improve access to interventions, this group recommends early evaluation, diagnosis, and treatment. With this goal in mind, assessment should begin if symptoms of PASC are not improving after 1 month from acute symptom onset, if symptoms are severe, or if symptoms significantly interfere with quality of life. Fluctuations in symptoms over the course of 1 to 2 months are common; therefore, mild fatigue that is not functionally limiting can be closely monitored without extensive management. Gathering a thorough history of the patient's experience with the initial illness can also help guide assessment. The severity of COVID-19 illness has been associated with the risk of long-term sequelae and impairment. Approximately two-thirds of outpatients diagnosed with COVID-19 return to full health by the fourth week; on the other hand, among patients diagnosed in an emergency room, two-thirds of whom were eventually hospitalized, 50.9% developed chronic symptoms. Other risk factors include increased age, number of medical comorbidities, and premorbid psychological disorders. Special consideration should be given to assessment for mental health disorders. Acute mental health disorders (55%), anxiety (4.7%), depression (2%), and post-traumatic stress disorder (PTSD; 23.5% of ward survivors, 46.9% of intensive care unit [ICU] survivors) need to be fully evaluated, as all of these can affect a patient's activity level. [60][61][62] Vulnerable populations, including pregnant women, minority racial and ethnic groups, and people of low socioeconomic status, should be considered at higher risk for post-COVID-related illnesses, as outlined by the PASC Collaborative review. A broad differential for contributors to these symptoms should be considered when evaluating these patients, including, but not limited to, critical illness myopathy and polyneuropathy, circadian rhythm disorders, and mood disorders.
Further, Herrera and colleagues 20 recommended that the assessment of patients with PASC include the following:
- Detailed impacts of functional limitations throughout the day
- The impact of activity and activity intensity on fatigue
- Fatigue's impact on activities of daily living, occupational activities, and vocational activities
- Utilization of physical function tools, such as timed walk tests and timed sit-to-stand, to guide targeted therapies
- A detailed history of premorbid health conditions and activity level
- Consideration of other etiologies that may exacerbate fatigue symptoms, including mood, sleep, nutrition, endocrine, immunologic, and cardiopulmonary factors
- A review of medication adverse effects that could exacerbate fatigue; drug classes include antihistamines, anticholinergics, pain medications, and anxiolytics
Given the similarities between PASC and ME/CFS, as previously discussed, a similar initial assessment can be considered. The CDC and the National Institute for Health and Care Excellence (NICE) recommend a minimal set of tests for patients presenting with fatigue. The CDC recommends initial evaluation with urinalysis; complete blood count; comprehensive metabolic panel; and measurement of phosphorus, thyroid-stimulating hormone, and C-reactive protein. NICE also recommends using immunoglobulin A endomysial antibodies to screen for celiac disease and, if indicated by the history or physical examination, urine drug screening, rheumatoid factor testing, and antinuclear antibody testing. Viral titers are not recommended unless the patient's history is suggestive of an infectious process, because they neither confirm nor eliminate the diagnosis of ME/CFS. The National Collaborating Centre for Primary Care identified red flag symptoms and associated conditions: chest pain (cardiac etiology), focal neurologic deficits (central nervous system pathology), shortness of breath (pulmonary etiology), inflammatory signs or joint pain (autoimmune processes), and weight loss or lymphadenopathy (malignancy). 16 It is important to note that PASC-related fatigue may be a unique diagnosis but can also be a manifestation of ME/CFS. The diagnosis can also be multifactorial, without an identifiable, singular cause.
TREATMENT
When counseling patients with diminished activity levels related to PASC, practitioners should be comfortable discussing the unknowns of PASC treatments. Most patients who present to multidisciplinary clinics report moderate to severe symptoms. 63 Overall, PASC treatment centers have noted gradual improvement in symptoms. Treatment involves a highly multidisciplinary group of specialists, including pulmonologists, physiatrists, neurologists, cardiologists, physical therapists, occupational therapists, psychologists, neuropsychologists, psychiatrists, speech therapists, infectious disease specialists, and nutritionists. Telemedicine can also be used to help guide treatment approaches, but its effectiveness has not been quantified. The CDC has developed guidelines for treating ME/CFS that have informed the treatment recommendations for PASC-related fatigue. Treatment recommendations, like the evaluation, should be tailored to the patient based on their history, comorbidities, confounders for fatigue, and activity limitations.
NICE published guidelines in 2007 to help quantify fatigue severity in the context of ME/CFS. 64 The PASC Collaborative further defined fatigue severity to help guide treatment options (Table 1). The PASC Collaborative has also developed guidelines that have helped manage symptoms in patients for whom no identifiable contributing cause has been determined. These include beginning an individualized return-to-activity program, discussing energy conservation strategies, education on and encouragement of a healthy diet and fluid intake, and treating comorbidities such as poor sleep hygiene, mood disorders, and pain with the assistance of other medical specialists.

Table 1. American Academy of Physical Medicine and Rehabilitation post-acute sequelae of COVID-19 collaborative guidance statement recommendations 20

Mild fatigue: Intact mobility. Can perform activities of daily living and do light housework (often with difficulty). Able to continue working or going to school but may have stopped other, nonessential activities. Often take time off, require modifications to their schedule, and use weekends to recover from the work week.

Moderate fatigue: Decreased community mobility. Limited in the performance of instrumental activities of daily living (particularly preparing meals, shopping, doing laundry, using transportation, and performing housework). Require frequent rest periods and naps. Have generally stopped work or school.

Severe fatigue: Mostly confined to the home. May have difficulty with activities of daily living (eating, bathing, dressing, transferring, toileting, mobility). Leaving the home is very limited and often leads to prolonged/severe after-effects.
Return to Activity Program
The goal of this program is a return to premorbid activity levels. The return should start slowly, avoiding strenuous activities such as high-intensity workouts or heavy resistance training, as these can exacerbate an individual's symptoms. Patients should be counseled on perceived exertion and educated on the metrics used to quantify it; scales such as the Borg Rating of Perceived Exertion can be used to target submaximal exertion. Recommended programs are also determined by the severity of PASC-related fatigue and gauged using the Rating of Perceived Exertion (RPE) scale (Table 2). Those with mild fatigue can continue household and community activities. A slow return to higher-intensity activity using a "rule of 10's" is recommended: increasing activity duration, frequency, and intensity by 10% every 10 days. Using the RPE scale, progression from Light (10-11) to Hard (15-16) is recommended.
Those with moderate fatigue can continue household and previously tolerated community activities. Activity or aerobic exercise should begin at Very Light to Light (RPE 9-11) and can be slowly advanced depending on patient tolerance. If acute or delayed worsening of symptoms occurs, activity should be returned to the previously tolerated level.
Those with severe fatigue can continue household activities that are tolerated without symptomatic exacerbation. Upper and lower extremity stretching with light strengthening should precede any aerobic activity. Once these are tolerated well, light aerobic activity can begin at Extremely Light to Very Light (RPE 7-9). Activity levels can be slowly advanced depending on patient tolerance; if acute or delayed worsening of symptoms occurs, activity should be returned to the previously tolerated level. A home health program can be considered for those with very limited activity tolerance. If a patient is not tolerating the return-to-activity program, consider a referral to a specialist familiar with post-COVID care (such as a physiatrist) to help guide the rehabilitation program.
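As a purely arithmetic illustration of the "rule of 10's" described above (a 10% increase every 10 days from a hypothetical starting duration), consider the following sketch; actual pacing decisions belong to the patient and care team, not to a formula.

```python
def rule_of_tens(start_minutes: float, total_days: int) -> list[float]:
    """Planned activity duration at the start of each 10-day block,
    increasing 10% per block (duration only; the rule also covers
    frequency and intensity, which are omitted here for brevity)."""
    return [round(start_minutes * 1.10 ** (day // 10), 1)
            for day in range(0, total_days, 10)]

print(rule_of_tens(20, 60))  # [20.0, 22.0, 24.2, 26.6, 29.3, 32.2]
```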
Energy Conservation Strategies
Patient education regarding energy conservation can also aid the recovery process. Remembering the "4 Ps," Pacing, Prioritizing, Positioning, and Planning, 66 can be useful for patients. Pacing refers to performing shorter-duration activities with frequent rest to avoid prolonged recovery; activities that lengthen recovery phases should be moderated and monitored. Prioritizing activities of greater weight or importance, and deferring those that can wait, can help lessen overexertion and the need for extended recovery periods. Positioning is the idea of emphasizing ergonomics and focusing on energy efficiency; an example could be using a shower chair or bench instead of standing. Planning can help patients identify when energy expenditure is optimal or suboptimal. Periods during which energy is higher, coined "energy windows," are common, and a personalized energy diary can help identify them and plan tasks for a given day. Planning can also help schedule a gradual return to activity and work. All these tools can be used to inform employers and to work with them to ensure a successful return to work; a vocational rehabilitation specialist can assist with these steps as well. A focus on quality of sleep and sleep hygiene should also be a point of emphasis to maximize recovery.
Nutritional Education
There is no universal diet prescription that helps all patients combat the immune dysfunction associated with COVID-19. Instead, nutrition guidance should account for patient preferences, allergies, and comorbidities. In general, counseling and education should encourage a well-balanced diet. Pro-inflammatory states have been linked to chronic fatigue syndromes, and clinical studies have suggested that a well-rounded diet high in "whole grains high in fibers, polyphenol-rich vegetables, and omega-3 fatty acid-rich foods might be able to improve disease-related fatigue symptoms." 67 There is currently no evidence to support supplementation with B vitamins, omega-3 fatty acids, or coenzyme Q10. Muscle atrophy associated with disuse or deconditioning can also contribute to fatigue. More evidence is needed to strengthen the association between anti-inflammatory diets and fatigue improvement, but such a diet is a safe addition to a treatment approach for PASC. An effective treatment approach for severe cases will rely on multidisciplinary care: the involvement of multiple specialties, including pulmonology, cardiology, physiatry, primary care, nutrition, psychiatry, infectious disease, speech therapy, occupational therapy, and physical therapy, should be considered to maximize recovery and optimize function for patients with PASC symptoms. A transdisciplinary approach will allow patients to receive care across the spectrum of their PASC symptoms and, ideally, to recover as quickly as possible.
DISCUSSION
Given the recency of the SARS-CoV-2 pandemic, there are clear limitations in the scientific community's ability to fully understand and manage the constellation of symptoms identified as PASC. In addition, it is difficult to provide specific treatment recommendations because of the several potential etiologies of PASC-related fatigue. The existing literature on ME/CFS and other post-viral syndromes provides the best analogs to guide interventions. This section has focused on physical or mobility-related PASC fatigue symptoms, which excludes the full constellation of PASC symptoms related to psychologic, mental, and cognitive manifestations of PASC fatigue. Further research should help delineate improved evaluation strategies and treatment plans that better target the various etiologies of PASC-related fatigue (ie, HPA, adrenal, cytokine, cardiac, pulmonary, myopathic, neuropsychiatric, sleep, and/or gut biome dysfunction).
Because of the varied possible etiologies of PASC symptoms, there does not seem to be a single laboratory evaluation, diagnostic test, or examination finding that best identifies the etiology of PASC symptoms. In addition, there is no consensus on a specific medication, therapy program, diet, infusion, or supplement that would best improve PASC symptoms.
Until the body of research grows, the clinical community will need to work from the vantage point of expert opinion and guidance from patient-led advocacy groups, PASC-focused clinical care sites, and consortiums that bring these groups together. Despite the lack of published research, it seems reasonable to model evaluation and treatment plans on existing knowledge in the ME/CFS and post-viral syndrome domains. It is reassuring that a proportion of patients with PASC symptoms are showing signs of improvement over time. Clinical care in this domain will continue to develop, and recommendations are likely to change with additional updates. This evolving landscape further complicates care for patients dealing with PASC symptoms.
SUMMARY
Fatigue from PASC is a complex constellation of symptoms that could be driven by a wide spectrum of underlying etiologies. Despite this, there seems to be hope for treatment plans that focus on addressing possible etiologies and creating a path to improving quality of life and a paced return to activity.
CLINICS CARE POINTS
Collect a detailed fatigue history that includes information on the wide differential diagnoses that can impact post-acute sequelae of COVID-19 (PASC) symptoms including the endocrinologic (hypothalamic-pituitary axis, adrenal gland), inflammatory, cardiac, pulmonary, myofascial, neuropsychiatric (including sleep), and gastrointestinal systems.
If not already completed, perform a general workup that includes a complete blood count, comprehensive metabolic panel, and measurement of phosphorus, thyroid-stimulating hormone, and C-reactive protein.
If needed, refer to clinical sites and providers who have more experience assisting patients with PASC symptoms.
DISCLOSURE
The authors have nothing to disclose.
Tate (co)homology via pinched complexes
For complexes of modules we study two new constructions, which we call the pinched tensor product and the pinched Hom. They provide new methods for computing Tate homology and Tate cohomology, which lead to conceptual proofs of balancedness of Tate (co)homology for modules over associative rings. Another application we consider is in local algebra. Under conditions of vanishing of Tate (co)homology, the pinched tensor product of two minimal complete resolutions yields a minimal complete resolution.
Introduction
Tate cohomology originated in the study of representations of finite groups. It has been generalized, through works of (in chronological order) Buchweitz [5], Avramov and Martsinkovsky [3], and Veliche [14], into a cohomology theory for modules with complete resolutions. The parallel theory of Tate homology has been treated in the same generality by Iacob [9].
While these theories function for modules over any associative ring, the central question of balancedness has yet to receive a cogent treatment. The extant literature only solves the problem for modules over special commutative rings. The issue is that if M and N are modules with appropriate complete resolutions, then there are potentially two ways of defining Tate cohomology $\widehat{\mathrm{Ext}}{}^*(M, N)$; do they yield the same theory? For Tate homology $\widehat{\mathrm{Tor}}{}_*(M, N)$ one encounters a similar situation, and one goal of this paper is to resolve these balancedness problems.
Proving balancedness of absolute (co)homology, Ext and Tor, boils down to showing that, say, $\mathrm{Tor}_*(M, N)$ can be computed from a complex constructed from resolutions of both variables M and N; namely, the tensor product of their projective resolutions. Our approach is similar, but for Tate (co)homology the standard tensor product and Hom complexes fail to do the job, so we introduce two new constructions. We call them the pinched tensor product and the pinched Hom. They resemble the usual tensor product and Hom of complexes, but they are smaller in a sense that is discussed below. The central technical results are Theorems (3.5) and (4.7), which establish that Tate (co)homology can be computed from pinched complexes. The balancedness problems are resolved in Theorems (3.7) and (5.4).
As part of our analysis of the pinched complexes, we establish "pinched versions" of standard isomorphisms for complexes, such as Hom-tensor adjunction. They allow us to give criteria, Corollaries (4.10) and (5.9), in terms of vanishing of Tate (co)homology, for when a pinched Hom complex $\mathrm{Hom}^1(T, U)$ or a pinched tensor product $T \otimes^1 U$ of complete resolutions is a complete resolution. This is of particular interest in local algebra since, if one starts with unbounded complexes of finitely generated modules, then the pinched Hom and the pinched tensor product are also complexes of finitely generated modules. Theorem (6.1) gives a criterion, in terms of vanishing of Tate (co)homology, for a tensor product of minimal complete resolutions to be a minimal complete resolution.
Standard constructions with complexes
In this paper R, R ′ , S, and S ′ are associative unital rings; they are assumed to be algebras over a common commutative unital ring k. The default k is the ring Z of integers, but in concrete settings other choices may be useful. For example, if the rings are algebras over a field k, then k = k is a natural choice. If R is commutative, and R ′ , S, and S ′ are R-algebras, then k = R is a candidate.
Modules are assumed to be unitary, and the default action of the ring is on the left. Right modules over R are hence treated as (left) modules over the opposite ring R • . By an R-S • -bimodule we mean a module over the k-algebra R ⊗ k S • . Note that every R-module has a natural R-k • -bimodule structure; in particular they are symmetric k-k • -bimodules. Modules over a commutative ring R are tacitly assumed to be symmetric R-R • -bimodules.
Complexes. An R-complex is a (homologically) graded R-module M endowed with a square-zero endomorphism $\partial^M$ of degree $-1$, which is called the differential. Here is a visualization:
$$\cdots \longrightarrow M_{i+1} \xrightarrow{\ \partial^M_{i+1}\ } M_i \xrightarrow{\ \partial^M_i\ } M_{i-1} \longrightarrow \cdots$$
A morphism of complexes M → N is a degree 0 graded homomorphism $\alpha = (\alpha_i)_{i\in\mathbb{Z}}$ of the underlying graded modules that commutes with the differentials on M and N; i.e. one has $\partial^N \alpha = \alpha \partial^M$. The category of R-complexes is denoted C(R). If the underlying graded module is an R-S•-bimodule, and the differential is a bimodule endomorphism, then the complex is called a complex of R-S•-bimodules; the category of such complexes is denoted C(R-S•).

The kernel Z(M) and the image B(M) of $\partial^M$ are graded submodules of M and, in fact, subcomplexes, as the induced differentials are trivial. A complex M is called acyclic if the homology complex H(M) = Z(M)/B(M) is the zero complex. We use the notation C(M) for the cokernel of the differential, i.e. $\mathrm{C}_i(M) = \operatorname{Coker} \partial^M_{i+1}$. The notation sup M and inf M is used for the supremum and infimum of the set $\{\,i \in \mathbb{Z} \mid \mathrm{H}_i(M) \neq 0\,\}$. The n-fold shift $\Sigma^n M$ is the complex with $(\Sigma^n M)_i = M_{i-n}$; one has $\sup(\Sigma^n M) = \sup M + n$ and $\inf(\Sigma^n M) = \inf M + n$. Let n be an integer. The hard truncation above of M at n is the complex $M_{\leqslant n}$ with $(M_{\leqslant n})_i = 0$ for $i > n$ and $\partial^{M_{\leqslant n}}_i = \partial^M_i$ for $i \leqslant n$. Similarly, $M_{\geqslant n}$ is the complex with $(M_{\geqslant n})_i = 0$ for $i < n$ and $\partial^{M_{\geqslant n}}_i = \partial^M_i$ for $i > n$. Note that $M_{\leqslant n}$ is a subcomplex of M, and $M_{\geqslant n}$ is the quotient complex $M/M_{\leqslant n-1}$. The soft truncations of M at n are the complexes
$$M_{\supset n} = \cdots \to M_{n+2} \to M_{n+1} \to \mathrm{Z}_n(M) \to 0 \quad\text{and}\quad M_{\subset n} = 0 \to \mathrm{C}_n(M) \to M_{n-1} \to M_{n-2} \to \cdots$$
A morphism of complexes that induces an isomorphism in homology is called a quasi-isomorphism and indicated by the symbol '≃'. A morphism α is a quasi-isomorphism if and only if its mapping cone, the complex Cone α, is acyclic.
The central constructions in this paper, (3.2) and (4.4), start from the standard constructions of tensor product and Hom complexes, hence we review them in detail.
Tensor product and Hom. Let M be an R•-complex and N be an R-complex. The tensor product $M \otimes_R N$ is the k-complex whose underlying graded module is given by
$$(M \otimes_R N)_v = \coprod_{i \in \mathbb{Z}} M_i \otimes_R N_{v-i},$$
and whose differential is defined by specifying its action on an elementary tensor of homogeneous elements as follows,
$$\partial^{M \otimes_R N}(x \otimes y) = \partial^M(x) \otimes y + (-1)^{|x|}\, x \otimes \partial^N(y);$$
here |x| is the degree of x in M. For a morphism of R•-complexes α : M → M′ and a morphism of R-complexes β : N → N′, the degree 0 map given on elementary tensors by $x \otimes y \mapsto \alpha(x) \otimes \beta(y)$, denoted $\alpha \otimes_R \beta$, is a morphism of k-complexes. The tensor product yields a functor, which is k-bilinear and right exact in each variable. In case M is a complex of R′-R•-bimodules and N is a complex of R-S•-bimodules, the tensor product $M \otimes_R N$ is a complex of R′-S•-bimodules, and the tensor product yields a functor on the corresponding categories of complexes of bimodules. For R-complexes M and N, the k-complex $\mathrm{Hom}_R(M, N)$ is given by
$$\mathrm{Hom}_R(M, N)_v = \prod_{i \in \mathbb{Z}} \mathrm{Hom}_R(M_i, N_{i+v})$$
with differential
$$\partial^{\mathrm{Hom}_R(M,N)}(\varphi) = \partial^N \varphi - (-1)^{|\varphi|}\, \varphi\, \partial^M$$
for a homogeneous φ in $\mathrm{Hom}_R(M, N)$. For morphisms of R-complexes α : M′ → M and β : N → N′, a morphism $\mathrm{Hom}_R(\alpha, \beta) : \mathrm{Hom}_R(M, N) \to \mathrm{Hom}_R(M', N')$ is given by $\varphi \mapsto \beta\varphi\alpha$. With these definitions, Hom yields a functor
$$\mathrm{Hom}_R(-, -) : \mathsf{C}(R)^{\mathrm{op}} \times \mathsf{C}(R) \longrightarrow \mathsf{C}(\Bbbk),$$
where the superscript 'op' signifies the opposite category; it is k-bilinear and left exact in each variable. In case M is a complex of R-R′•-bimodules and N is a complex of R-S•-bimodules, the complex $\mathrm{Hom}_R(M, N)$ is a complex of R′-S•-bimodules.

Resolutions. An R-complex P is called semi-projective if each module $P_i$ is projective, and the functor $\mathrm{Hom}_R(P, -)$ preserves quasi-isomorphisms (equivalently, it preserves acyclicity). A bounded below complex of projective R-modules is semi-projective. Similarly, an R-complex I is called semi-injective if each module $I_i$ is injective, and the functor $\mathrm{Hom}_R(-, I)$ preserves quasi-isomorphisms (equivalently, it preserves acyclicity). A bounded above complex of injective R-modules is semi-injective. Every R-complex M has a semi-projective resolution and a semi-injective resolution; that is, there are quasi-isomorphisms π : P → M and ι : M → I, where P is semi-projective and I is semi-injective; see [2] (in that paper the authors use 'DG-' in place of 'semi-'). An R-complex F is called semi-flat if each module $F_i$ is flat, and the functor $- \otimes_R F$ preserves quasi-isomorphisms (equivalently, it preserves acyclicity). Every semi-projective complex is semi-flat.
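As a consistency check on the sign convention (a verification we supply here; it is not part of the original text), the differential on $M \otimes_R N$ squares to zero on elementary tensors:

```latex
\begin{aligned}
(\partial^{M\otimes_R N})^2(x \otimes y)
  &= \partial^{M\otimes_R N}\bigl(\partial^M(x)\otimes y + (-1)^{|x|}\, x \otimes \partial^N(y)\bigr)\\
  &= (\partial^M)^2(x)\otimes y
     + (-1)^{|x|-1}\,\partial^M(x)\otimes\partial^N(y)\\
  &\qquad + (-1)^{|x|}\,\partial^M(x)\otimes\partial^N(y)
     + x\otimes(\partial^N)^2(y)\\
  &= 0,
\end{aligned}
```

where the middle terms cancel because $\partial^M(x)$ has degree $|x| - 1$, and the outer terms vanish as each differential is square-zero.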
For an R-module M, a projective (injective) resolution in the classic sense is a semi-projective (semi-injective) resolution. Thus, the definitions of homological dimensions of an R-complex in terms of such resolutions extend the classic notions for modules.
Complete resolutions and Tate homology
In this section we recall some definitions and facts from works of Iacob [9] and Veliche [14], and we establish some auxiliary results for later use.
(2.1) Complete projective resolutions. An acyclic complex T of projective R-modules is called totally acyclic, if the complex Hom R (T, Q) is acyclic for every projective R-module Q.
A complete projective resolution of an R-complex M is a diagram
$$T \xrightarrow{\ \tau\ } P \xrightarrow{\ \pi\ } M, \tag{2.1.1}$$
where π is a semi-projective resolution, T is a totally acyclic complex of projective R-modules, and $\tau_i$ is an isomorphism for $i \gg 0$.
See [14] for a proof of the following fact.
(2.2) Let $T \xrightarrow{\tau} P \xrightarrow{\pi} M$ and $T' \xrightarrow{\tau'} P' \xrightarrow{\pi'} M'$ be complete projective resolutions. For every morphism α : M → M′ there exists a morphism $\bar\alpha : P \to P'$ such that the right-hand square in the diagram
$$\begin{array}{ccccc}
T & \longrightarrow & P & \xrightarrow{\ \pi\ } & M\\
\downarrow{\scriptstyle\tilde\alpha} & & \downarrow{\scriptstyle\bar\alpha} & & \downarrow{\scriptstyle\alpha}\\
T' & \longrightarrow & P' & \xrightarrow{\ \pi'\ } & M'
\end{array}$$
is commutative up to homotopy. The morphism $\bar\alpha$ is unique up to homotopy, and for every choice of $\bar\alpha$ there exists a morphism $\tilde\alpha : T \to T'$, also unique up to homotopy, such that the left-hand square is commutative up to homotopy. Moreover, if τ′ and π′ are surjective, then $\bar\alpha$ and $\tilde\alpha$ can be chosen such that the diagram is commutative. Finally, if one has M = M′ and α is the identity map, then $\bar\alpha$ and $\tilde\alpha$ are homotopy equivalences.
(2.3) Gorenstein projectivity. An R-module G is called Gorenstein projective if there exists a totally acyclic complex T of projective R-modules with C 0 (T ) ∼ = G.
In that case, the diagram $T \to T_{\geqslant 0} \to G$ is a complete projective resolution, and for brevity we shall often say that T is a complete projective resolution of G.
The Gorenstein projective dimension of an R-complex M, written $\operatorname{Gpd}_R M$, is the least integer n such that there exists a complete projective resolution (2.1.1) where $\tau_i$ is an isomorphism for all $i \geqslant n$. In particular, $\operatorname{Gpd}_R M$ is finite if and only if M has a complete projective resolution. Notice that H(M) is bounded above if $\operatorname{Gpd}_R M$ is finite; indeed, there is an inequality $\sup \mathrm{H}(M) \leqslant \operatorname{Gpd}_R M$. If M is an R-complex of finite projective dimension, then there is a semi-projective resolution $P \xrightarrow{\simeq} M$ with P bounded above, and then 0 → P → M is a complete projective resolution; in particular, M has finite Gorenstein projective dimension.
(2.4) Tate homology. Let M be an R•-complex with a complete projective resolution T → P → M. For an R-complex N, the Tate homology of M with coefficients in N is defined as $\widehat{\mathrm{Tor}}{}^R_i(M, N) = \mathrm{H}_i(T \otimes_R N)$. It follows from (2.2) that this definition is independent (up to isomorphism) of the choice of complete projective resolution; in particular, Tate homology vanishes when M or N is a (bounded above) complex of finite projective dimension; this is the content of (2.5) and (2.7) below.
The boundedness condition on N in Lemma (2.7) is a manifestation of the fact that Tate homology $\widehat{\mathrm{Tor}}{}^R_*(M, -)$ is not a functor on the derived category D(R). Indeed, every R-complex is isomorphic in D(R) to a semi-projective complex, and for such a complex P one has $\widehat{\mathrm{Tor}}{}^R_*(M, P) = 0$ for every R•-complex M of finite Gorenstein projective dimension.
Notice, though, that if M and M′ are isomorphic in D(R•) and of finite Gorenstein projective dimension, then it follows from [2, 1.4.P] that every complete projective resolution T → P → M yields a complete resolution T → P → M′, so one has an isomorphism $\widehat{\mathrm{Tor}}{}^R_*(M, -) \cong \widehat{\mathrm{Tor}}{}^R_*(M', -)$ of functors on C(R).
We recall from works of Jensen [10, prop. 6] and Raynaud and Gruson [13, II. thm. 3.2.6] that if R has finite finitistic projective dimension-for example, R is commutative Noetherian of finite Krull dimension-then every flat R-module has finite projective dimension, and it follows that the conditions (i)-(iv ) are equivalent.
(2.6) Remark. Let T → P → M be a complete projective resolution over R•. For every semi-projective resolution $\pi' : P' \xrightarrow{\simeq} N$ over R, application of the functor $T \otimes_R -$ to the exact sequence $0 \to N \to \operatorname{Cone} \pi' \to \Sigma P' \to 0$ yields a short exact sequence
$$0 \to T \otimes_R N \to T \otimes_R \operatorname{Cone} \pi' \to T \otimes_R \Sigma P' \to 0,$$
as T is a complex of projective R•-modules. The associated exact sequence in homology yields an isomorphism
$$\mathrm{H}(T \otimes_R N) \cong \mathrm{H}(T \otimes_R \operatorname{Cone} \pi'), \tag{2.6.1}$$
as one has $\mathrm{H}(T \otimes_R P') = 0$ because P′ is semi-flat. If N is bounded above and of finite projective dimension, then one can assume that P′ and, therefore, Cone π′ is bounded above, and then [6, lem. 2.13] yields $\mathrm{H}(T \otimes_R \operatorname{Cone} \pi') = 0$. Thus, we record the following result.

(2.7) Lemma. Let M be an R•-complex with a complete projective resolution. If N is a bounded above R-complex of finite projective dimension, then one has $\widehat{\mathrm{Tor}}{}^R_*(M, N) = 0$.
(2.8) Proposition. Let M be an R•-complex with a complete projective resolution T → P → M, and let $0 \to N' \to N \to N'' \to 0$ be an exact sequence of R-complexes. There is a derived exact sequence of Tate homology modules
$$\cdots \to \widehat{\mathrm{Tor}}{}^R_i(M, N') \to \widehat{\mathrm{Tor}}{}^R_i(M, N) \to \widehat{\mathrm{Tor}}{}^R_i(M, N'') \to \widehat{\mathrm{Tor}}{}^R_{i-1}(M, N') \to \cdots$$
Moreover, if the original exact sequence is one of complexes of R-S•-bimodules, then the derived exact sequence is one of S•-modules.

Proof. Let T → P → M be a complete projective resolution. The sequence $0 \to T \otimes_R N' \to T \otimes_R N \to T \otimes_R N'' \to 0$ is exact, as T is a complex of projective R•-modules. The associated exact sequence in homology is the desired one, and the statement about additional module structures is evident.
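For reference (a standard fact we recall here; it is not spelled out in the original text), the exact sequence in homology associated with a short exact sequence of complexes 0 → A → B → C → 0 is

```latex
\cdots \longrightarrow \mathrm{H}_i(A) \longrightarrow \mathrm{H}_i(B)
       \longrightarrow \mathrm{H}_i(C) \xrightarrow{\ \delta_i\ }
       \mathrm{H}_{i-1}(A) \longrightarrow \mathrm{H}_{i-1}(B) \longrightarrow \cdots
```

so if one of the three homology complexes vanishes, the maps between the other two are isomorphisms in every degree.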
Proof. By [14, prop. 4.7] there is a commutative diagram with exact rows, and the associated sequence in homology is the desired one. The statement about additional module structures is evident.
As with absolute homology, dimension shifting is a useful technique in dealing with Tate homology.
(2.10) Lemma. Let M be an R•-complex of finite Gorenstein projective dimension and let N be an R-complex. For every complete projective resolution T → P → M, for every semi-projective resolution L of N, and for every m ∈ Z there are isomorphisms of Tate homology modules obtained by dimension shifting.

(b): We may assume that N is bounded above; otherwise the statement is void. For every $n \geqslant \sup N$ there is a quasi-isomorphism $\pi : L_{\subset n} \to N$. The acyclic complex Cone π is bounded above, so $T \otimes_R \operatorname{Cone} \pi$ is acyclic by [6, lem. 2.13]. An application of Proposition (2.8) to the exact sequence $0 \to N \to \operatorname{Cone} \pi \to \Sigma L_{\subset n} \to 0$ yields isomorphisms
$$\widehat{\mathrm{Tor}}{}^R_i(M, N) \cong \widehat{\mathrm{Tor}}{}^R_{i+1}(M, \Sigma L_{\subset n}) \cong \widehat{\mathrm{Tor}}{}^R_i(M, L_{\subset n}) \quad\text{for all } i \in \mathbb{Z}.$$
The first complex in the exact sequence $0 \to L_{\leqslant n-1} \to L_{\subset n} \to \Sigma^n \mathrm{C}_n(L) \to 0$ of R-complexes has finite projective dimension. Indeed, in the exact sequence $0 \to L_{\leqslant n-1} \to L \to L_{\geqslant n} \to 0$, the complexes L and $L_{\geqslant n}$ are semi-projective, so $L_{\leqslant n-1}$ is semi-projective and, moreover, bounded above. Now apply Lemma (2.7) and Proposition (2.8) to get
$$\widehat{\mathrm{Tor}}{}^R_i(M, L_{\subset n}) \cong \widehat{\mathrm{Tor}}{}^R_i(M, \Sigma^n \mathrm{C}_n(L)) \quad\text{for all } i \in \mathbb{Z}.$$
The desired isomorphisms follow from these last two displays.
Pinched tensor product complexes
We start by noticing that a very natural approach to the balancedness problem for Tate homology fails.
(3.1) Example. Let k be a field and consider the commutative ring R = k[x, y]/(xy). The R-module R/(x) has a complete projective resolution T with $T_i = R$ for all $i \in \mathbb{Z}$,
$$T = \cdots \longrightarrow R \xrightarrow{\ \partial^T_{i+1}\ } R \xrightarrow{\ \partial^T_i\ } R \longrightarrow \cdots,$$
where $\partial^T_i$ is multiplication by x for i odd and multiplication by y for i even. As multiplication by y on R/(x) is injective, it is immediate from the definition of Tate homology, see (2.4), that one has $\widehat{\mathrm{Tor}}{}^R_i(R/(x), R/(x)) = 0$ for i even. The complex $T \otimes_R T$, however, has non-vanishing homology in even degrees. Indeed, for each n ∈ Z the module $(T \otimes_R T)_n$ is free with basis $(e_{i,n-i})_{i\in\mathbb{Z}}$. The differential is given by
$$\partial^{T \otimes_R T}(e_{i,n-i}) =
\begin{cases}
x e_{i-1,n-i} - y e_{i,n-i-1} & n \text{ odd and } i \text{ odd},\\
y e_{i-1,n-i} + x e_{i,n-i-1} & n \text{ odd and } i \text{ even},\\
x e_{i-1,n-i} - x e_{i,n-i-1} & n \text{ even and } i \text{ odd},\\
y e_{i-1,n-i} + y e_{i,n-i-1} & n \text{ even and } i \text{ even}.
\end{cases}$$
For n even, the element $xe_{0,n}$ is a cycle, as one has $\partial^{T\otimes_R T}(xe_{0,n}) = x(ye_{-1,n} + ye_{0,n-1}) = 0$ because xy = 0 holds in R; it is, however, not a boundary. Indeed, since R is graded, the complex $T \otimes_R T$ has an internal grading, and the differential is of degree 1 with respect to this grading. Suppose that $xe_{0,n}$ is a boundary. Since it is an element of internal degree 1, a preimage $\sum_{i\in\mathbb{Z}} \alpha_{i,n+1-i}\, e_{i,n+1-i}$ of $xe_{0,n}$ under $\partial^{T\otimes_R T}$ may be assumed homogeneous of internal degree zero. That is, we may assume that $\alpha_{i,n+1-i}$ is in k for all i. Let $i_0$ and $i_1$ be, respectively, the least and the largest integer i with $\alpha_{i,n+1-i} \neq 0$. With respect to the basis $(e_{i,n-i})_{i\in\mathbb{Z}}$, comparing coefficients of $e_{i_1,n-i_1}$ forces $i_1 \leqslant 0$, while comparing coefficients of $e_{i_0-1,n+1-i_0}$ forces $i_0 = 1$; as $i_0 \leqslant i_1$, this is absurd. The isomorphism (2.6.1) shows, nevertheless, that one can compute Tate homology from a tensor product of acyclic complexes. This motivates the next construction; see also the comments before the proof of Theorem (3.5).
(3.2) Construction. Let T be an R•-complex and let A be an R-complex. Consider the graded k-module $T \otimes^1_R A$ obtained by pinching the ordinary tensor product at degree 0; it is elementary to verify that the prescription yields a square-zero differential. We refer to this k-complex as the pinched tensor product of T and A.
For morphisms α : T → T′ of R•-complexes and β : A → A′ of R-complexes, the assignment inherited from the ordinary tensor product yields a morphism of k-complexes $\alpha \otimes^1_R \beta$. For every R•-complex T and every R-complex A there are equalities (3.3.1) of k-complexes identifying $T \otimes^1_R A$, in high and in low degrees, with tensor products of hard truncations of T and A. The proof of the next proposition is standard, and we omit it.

(3.4) Proposition. The pinched tensor product defined in (3.2) yields a functor
$$- \otimes^1_R - : \mathsf{C}(R^\bullet) \times \mathsf{C}(R) \longrightarrow \mathsf{C}(\Bbbk).$$
Moreover, it is k-bilinear and right exact in each variable.

(3.5) Theorem. Let M be an R•-complex with a complete projective resolution T → P → M, and let A be an acyclic R-complex with $\mathrm{C}_0(A) \cong N$. For every i ∈ Z there is an isomorphism $\widehat{\mathrm{Tor}}{}^R_i(M, N) \cong \mathrm{H}_i(T \otimes^1_R A)$. If A is a complex of R-S•-bimodules, then the isomorphism is one of S•-modules.
Before we proceed with the proof, we point out that if N is an R-module, and A is the acyclic complex $0 \to N \xrightarrow{=} N \to 0$ with N in degrees 0 and −1, then one has $T \otimes^1_R A = T \otimes_R N$.

Proof. By definition one has $\widehat{\mathrm{Tor}}{}^R_i(M, N) = \mathrm{H}_i(T \otimes_R N)$, so the goal is to establish an isomorphism between $\mathrm{H}(T \otimes^1_R A)$ and $\mathrm{H}(T \otimes_R N)$. The quasi-isomorphisms provided by (2.2), (3.3.1), and [6, prop. 2.14] yield isomorphisms $\mathrm{H}_i(T \otimes^1_R A) \cong \mathrm{H}_i(T \otimes_R N)$ for all $i \neq 0, -1$. To establish the isomorphism in the remaining two degrees, consider the following diagram with exact columns.
The identity $\epsilon_0 \pi_0 = \sigma \partial^A_0$ shows that the twisted square is commutative. That the other two squares are commutative follows from functoriality of the tensor product.
To see that the homomorphism T 0 ⊗ R π 0 induces the desired isomorphism in homology, H 0 (T ⊗ 1 R A) ∼ = H 0 (T ⊗ R N ), notice first that it maps boundaries to boundaries, and that for by commutativity of the twisted square. As T −1 ⊗ R ǫ 0 is injective, it follows that . It is immediate from the surjectivity of T 0 ⊗ R π 0 and commutativity of the twisted square that the homomorphism H(T 0 ⊗ R π 0 ) is surjective. To see that it is injective, let x be an element in Z 0 (T ⊗ 1 R A) and assume that there is a y in ( and so x is a boundary: ∂ Similarly, for i = −1, it is evident that T −1 ⊗ R ǫ 0 maps cycles to cycles. Let x be a boundary in (T ⊗ R N ) −1 , and choose a preimage y of x in (T ⊗ R N ) 0 . By surjectivity of T 0 ⊗ R π 0 , this y has a preimage z in (T ⊗ 1 R A) 0 , and by commutativity of the twisted square one has ∂ . It follows immediately from the injectivity of T −1 ⊗ R ǫ 0 and commutativity of the twisted square that H(T −1 ⊗ R ǫ 0 ) is injective. To see that it is surjective, let x be an element in Z −1 (T ⊗ 1 R A). Then, in particular, one has
and it follows by injectivity of
The claim about S • -module structures is immediate from Construction (3.2).
(3.6) Proposition. Let T be an R • -complex and let A be an R-complex. The map is an isomorphism of k-complexes. Moreover, if T is a complex of R ′ -R • -bimodules and A is a complex of R-S • -bimodules, then ̟ is an isomorphism of complexes of R ′ -S • -bimodules.
Proof. The map ̟ is clearly an isomorphism of graded k-modules, and it is straightforward to verify that it commutes with the differentials. The assertions about additional module structures are immediate from Construction (3.2).
If M is an R•-module of finite Gorenstein projective dimension and N is an R-module of finite Gorenstein projective dimension, then one could also define Tate homology of the pair (M, N) in terms of the complete projective resolution of N. Do the two definitions agree; that is, is Tate homology balanced? This is tantamount to asking if one has $\widehat{\mathrm{Tor}}{}^R_*(M, N) \cong \widehat{\mathrm{Tor}}{}^{R^\bullet}_*(N, M)$. Iacob [9] gave a positive answer for modules over commutative Noetherian Gorenstein rings. The next theorem settles the question over any associative ring: Lemma (2.10), Theorem (3.5), and Proposition (3.6) conspire to yield the desired isomorphism.

(3.8) Remark. In [9] Iacob considers a variation of Tate homology based on complete flat resolutions. The proof of Theorem (3.5) applies, mutatis mutandis, to show that these homology groups, too, can be computed from a pinched tensor product. From a result parallel to Lemma (2.10) it therefore follows that this version of Tate homology is also balanced.
Pinched Hom complexes and Tate cohomology
Tate cohomology was studied in detail by Veliche [14]; we recall the definition.
It is elementary to verify that the prescription, for $n \geqslant 0$, yields a differential on $\mathrm{Hom}^1_R(T, A)$. We refer to this k-complex as the pinched Hom of T and A.
For morphisms α : T′ → T and β : A → A′ of R-complexes, it is elementary to verify that the assignment $\varphi \mapsto \beta\varphi\alpha$ defines a morphism of k-complexes.

The identity $\epsilon_0 \pi_0 = \partial^A_1 \varsigma$ ensures that the twisted square is commutative; the other two squares are commutative by functoriality of the Hom functor.
It follows immediately from the injectivity of $\mathrm{Hom}_R(T_0, \epsilon_0)$ and commutativity of the twisted square that $\mathrm{H}(\mathrm{Hom}_R(T_0, \epsilon_0))$ is injective. To see that it is surjective, let ζ be a cycle in $\mathrm{Hom}^1_R(T, A)_0$; one then has $0 = \partial^{\mathrm{Hom}^1_R(T,A)}_0(\zeta)$. By exactness of the second column from the right, it now follows that ζ is in the image of $\mathrm{Hom}_R(T_0, \epsilon_0)$, and by injectivity of $\mathrm{Hom}_R(T_1, \epsilon_0)$ it follows that the preimage of ζ is a cycle in $\mathrm{Hom}_R(T, N)_0$. Thus, $\mathrm{H}(\mathrm{Hom}_R(T_0, \epsilon_0))$ is an isomorphism.
The claim about S • -module structures is immediate from Construction (4.4).
The next result is a pinched version of Hom-tensor adjunction.
That is, the identity ∂ HomS (A,B)) n+1 holds for n 1. By (3.3.1) and (4.5.1) there are equalities of k-complexes Hom S (A 0 , B)). Thus, for n 0 the map ̺ n is the degree n component of the Hom-tensor adjunction isomorphism Hom To prove that ̺ is an isomorphism of k-complexes, it remains to verify the identity ∂ Hom 1 R (T,HomS (A,B)) 1 . For t ∈ T 0 and a ∈ A 0 one has (4.9) Proposition. Assume that R is commutative. Let M be an R-complex with a complete projective resolution T → P → M and let N be a Gorenstein projective R-module with complete projective resolution T ′ . For every projective R-module Q and every i ∈ Z there is an isomorphism of R-modules Hom R (N, Q)). Proof. The R-complex Hom R (T ′ , Q) is acyclic, and Hom R (N, Q) is the kernel of the differential in degree 0. The assertion now follows from Proposition (4.8) and Theorem (4.7).
(4.10) Corollary. Assume that R is commutative. Let M and N be Gorenstein projective R-modules with complete projective resolutions T and T′, respectively. If one has $\widehat{\mathrm{Tor}}{}^R_i(M, N) = 0$ for all i ∈ Z, then the complex $T \otimes^1_R T'$ of projective R-modules is acyclic, and the following conditions are equivalent.
When these conditions hold, the R-module $M \otimes_R N$ is Gorenstein projective with complete projective resolution $T \otimes^1_R T'$.
Proof. By construction the complex $T \otimes^1_R T'$ consists of projective R-modules, and one has $\mathrm{C}_0(T \otimes^1_R T') \cong M \otimes_R N$. The assumption that the Tate homology $\widehat{\mathrm{Tor}}{}^R_*(M, N)$ vanishes implies that $T \otimes^1_R T'$ is acyclic; see Theorem (3.5). The equivalence of (i) and (ii) now follows from Proposition (4.9), and the last assertion is then evident.
Tate cohomology is balanced
For R-modules M and N, a potentially different approach to Tate cohomology $\widehat{\mathrm{Ext}}{}^*_R(M, N)$ uses a resolution of the second argument N. The resulting theory, which is parallel to the one developed in [3, 5, 14], was outlined by Asadollahi and Salarian in [1]. In this section we use the pinched complexes to show that when both approaches apply, they yield the same cohomology theory.
(5.1) Complete injective resolutions. A complex U of injective R-modules is called totally acyclic if it is acyclic, and the complex Hom R (J, U ) is acyclic for every injective R-module J.
A complete injective resolution of an R-complex N is a diagram
$$N \xrightarrow{\ \iota\ } I \xrightarrow{\ \upsilon\ } U, \tag{5.1.1}$$
where ι is a semi-injective resolution, U is a totally acyclic complex of injective R-modules, and $\upsilon_i$ is an isomorphism for $i \ll 0$.
(5.2) Gorenstein injectivity. An R-module E is called Gorenstein injective if there exists a totally acyclic complex U of injective R-modules with Z 0 (U ) ∼ = E.
In that case, the diagram $E \to U_{\leqslant 0} \to U$ is a complete injective resolution, and for brevity we shall often say that U is a complete injective resolution of E. The Gorenstein injective dimension of an R-complex N, written $\operatorname{Gid}_R N$, is the least integer n such that there exists a complete injective resolution (5.1.1) where $\upsilon_i$ is an isomorphism for all $i \leqslant -n$. In particular, $\operatorname{Gid}_R N$ is finite if and only if N has a complete injective resolution. Notice that H(N) is bounded below if $\operatorname{Gid}_R N$ is finite; indeed, there is an inequality $\inf \mathrm{H}(N) \geqslant -\operatorname{Gid}_R N$. If N is an R-complex of finite injective dimension, then there is a semi-injective resolution $N \xrightarrow{\simeq} I$ with I bounded below, and then N → I → 0 is a complete injective resolution; in particular, N has finite Gorenstein injective dimension.
The identity $\epsilon_0 \pi_0 = \sigma \partial^A_0$ ensures that the twisted square is commutative; the other two squares are commutative by standard properties of the Hom functor.
To see that Hom R (ǫ 0 , U 1 ) and Hom R (π 0 , U 0 ) induce isomorphisms in homology, one proceeds as in the proof of Theorem (4.7).
If M is a Gorenstein projective R-module with complete projective resolution T, and N is a Gorenstein injective R-module with complete injective resolution U, then Theorem (4.7) and Proposition (5.3) yield an isomorphism of the cohomologies computed from the two resolutions. That is, the Tate cohomology of M with coefficients in N can be computed via a complete injective resolution of N. What follows is a balancedness statement showing that for appropriately bounded complexes, and for modules in particular, one can unambiguously extend the notion of Tate cohomology $\widehat{\mathrm{Ext}}{}^*_R(M, N)$ to the situation where N has a complete injective resolution; see Definition (5.5).

Proof. Set $n = \sup\{-\inf N, \operatorname{Gid}_R N\}$; then the module $\mathrm{Z}_{-n}(I) \cong \mathrm{Z}_{-n}(U)$ is Gorenstein injective with complete injective resolution $\mathrm{Z}_{-n}(I) \to \Sigma^n I_{\leqslant -n} \to \Sigma^n U$. Further, set $m = \operatorname{Gpd}_R M$ and let T → P → M be a complete projective resolution; then the module $\mathrm{C}_m(P) \cong \mathrm{C}_m(T)$ is Gorenstein projective with complete projective resolution $\Sigma^{-m} T \to \Sigma^{-m} P_{\geqslant m} \to \mathrm{C}_m(P)$. In the next chain of isomorphisms, the first one follows from Lemma (4.3), the second and third follow from Theorem (4.7) and Proposition (5.3), and the last one follows by dimension shifting.
Finally, an argument parallel to the one for Lemma (2.10)(b) yields the desired isomorphisms; this time it is [6, lem. 2.5] that needs to be invoked.

In [1], the notation $\mathrm{ext}^*_R(M, N)$ is used for the cohomology defined in (5.5), and it is shown to agree with the notion from [3, 5, 14], see (4.1), over commutative Noetherian local Gorenstein rings.
More generally, for a module N with a complete injective resolution, Nucinkis' [12] notion of I-complete cohomology agrees with Tate cohomology as defined in (5.5). Similarly, for a module M with a complete projective resolution, the P-complete cohomology of Benson and Carlson [4], Vogel/Goichot [8], and Mislin [11] agrees with Tate cohomology in the sense of (4.1). Nucinkis proves [12, thm. 5.2, 6.6, 7.9] that P- and I-complete cohomology agree over rings where every module has a complete projective resolution and a complete injective resolution.
The next result establishes a pinched version of the Hom swap isomorphism. It is proved in the same fashion as Proposition (4.8). Moreover, if T is a complex of R-R ′• -bimodules, and B is an S ′ -S • -bimodule, then ϑ is an isomorphism of complexes of R ′ -S ′• -bimodules.
(5.8) Proposition. Assume that R is commutative. Let M be an R-complex with a complete projective resolution T → P → M and let N be a Gorenstein injective R-module with complete injective resolution U . For every injective R-module J and every i ∈ Z there is an isomorphism of R-modules N )).
Proof. The complex $\mathrm{Hom}_R(J, U)$ is acyclic, and $\mathrm{Hom}_R(J, N)$ is the kernel of the differential in degree 0. The assertion now follows from Proposition (5.7) and Theorem (4.7).

When the conditions of Corollary (5.9) hold, the R-module $\mathrm{Hom}_R(M, N)$ is Gorenstein injective with complete injective resolution $\mathrm{Hom}^1_R(T, U)$.
Proof. By construction the complex $\mathrm{Hom}^1_R(T, U)$ consists of injective R-modules, and one has $\mathrm{Z}_0(\mathrm{Hom}^1_R(T, U)) \cong \mathrm{Hom}_R(M, N)$. The assumption that the Tate cohomology $\widehat{\mathrm{Ext}}{}^*_R(M, N)$ vanishes implies that $\mathrm{Hom}^1_R(T, U)$ is acyclic; see Theorem (4.7). The equivalence of (i) and (ii) now follows from Proposition (5.8), and the last assertion is then evident.
Local algebra
Throughout this section R denotes a commutative Noetherian local ring with maximal ideal m. Recall that every projective R-module is free. An acyclic complex T of finitely generated free R-modules is totally acyclic if and only if $\mathrm{Hom}_R(T, R)$ is acyclic. For an R-module M we use the standard notation $M^*$ for the dual module $\mathrm{Hom}_R(M, R)$. A finitely generated R-module G is Gorenstein projective if and only if one has $G \cong G^{**}$ and $\operatorname{Ext}^i_R(G, R) = 0 = \operatorname{Ext}^i_R(G^*, R)$ for all $i \geqslant 1$, see [3], and following op. cit. we use the term totally reflexive for such modules.
A complex F of finitely generated free R-modules is called minimal if one has $\partial(F) \subseteq \mathfrak{m}F$; see [3, sec. 8]. A complete projective resolution T → P → M is called minimal if T and P are minimal complexes of finitely generated free R-modules. By [3, thm. 8.4] every finitely generated R-module M of finite Gorenstein projective dimension has a minimal complete projective resolution T → P → M, and it is unique up to isomorphism. The invariants $\widehat{\beta}_n(M) = \operatorname{rank}_R T_n$ are called the stable Betti numbers of M; for $n \geqslant \operatorname{Gpd}_R M$ they agree with the usual Betti numbers.

Proof. By construction the complex $T \otimes^1_R T'$ consists of finitely generated free R-modules, and the assumption that the Tate homology $\widehat{\mathrm{Tor}}{}^R_*(M, N)$ vanishes implies that $T \otimes^1_R T'$ is acyclic; see Theorem (3.5). To prove equivalence of the three conditions it suffices, in view of Corollary (4.10), to prove the implication (iii) ⟹ (i). Assume that $\mathrm{C}_0(T \otimes^1_R T') = M \otimes_R N$ is totally reflexive. It follows immediately that the syzygies of $M \otimes_R N$, i.e. $\mathrm{C}_i(T \otimes^1_R T')$ for $i \geqslant 1$, are totally reflexive as well. For $i \leqslant -1$ it follows that $\mathrm{C}_i(T \otimes^1_R T')$ has finite Gorenstein projective dimension. The Krull dimension d of R is an upper bound for the Gorenstein projective dimension of any R-module, so $\mathrm{C}_i(T \otimes^1_R T')$ is totally reflexive, as it is the dth syzygy of $\mathrm{C}_{i-d}(T \otimes^1_R T')$; see [6, thm. 3.1]. Thus, each module $\mathrm{C}_i(T \otimes^1_R T')$ is totally reflexive, and then $T \otimes^1_R T'$ is totally acyclic by [3, lem. 2.4]. The assertions about minimality follow immediately from Construction (3.2), and so does the equality of stable Betti numbers.
(6.2) Corollary. Let R be Gorenstein and let M and N be totally reflexive R-modules with (minimal) complete projective resolutions T and T′, respectively. If one has $\widehat{\mathrm{Tor}}{}^R_i(M, N) = 0$ for all i ∈ Z, then $M \otimes_R N$ is totally reflexive with (minimal) complete resolution $T \otimes^1_R T'$. Proof. As R is Gorenstein, every acyclic complex of projective modules is totally acyclic; see [3, lem. 2.4].
For modules M and N of finite Gorenstein projective dimension, vanishing of Tate homology $\widehat{\mathrm{Tor}}{}^R_*(M, N)$ yields information about the complex $M \otimes^{\mathbf{L}}_R N$ that encodes the absolute homology $\operatorname{Tor}^R_*(M, N)$; we pursue this line of investigation in [7]. We close this paper with an interpretation of the Tate homology modules $\widehat{\mathrm{Tor}}{}^R_0(M, N)$ and $\widehat{\mathrm{Tor}}{}^R_{-1}(M, N)$ in terms of a natural homomorphism. Proof. Let T → P → M be a minimal complete projective resolution. The natural map $\theta_{FN} : F \otimes_R N \to \mathrm{Hom}_R(F^*, N)$ is an isomorphism for every finitely generated free R-module F.
A case study on the use of appropriate surrogates for antecedent moisture conditions (AMCs)
Introduction
A large number of non-linear hillslope and catchment rainfall-runoff responses have been documented around the world (e.g. Whipkey and Kirkby, 1978; Sidle et al., 1995; Buttle and Peters, 1997; Buttle et al., 2001; Van Meerveld and McDonnell, 2005; Tromp-Van Meerveld and McDonnell, 2006a; James and Roulet, 2007). Justification for such hydrological responses often lies in the temporal variability in storm size or antecedent moisture conditions (AMCs) (Longobardi et al., 2003; Mishra et al., 2005; James and Roulet, 2009) and the spatial connectivity between source areas. Soil moisture is often described as a major control on catchment response (e.g. Meyles et al., 2003; Western et al., 2004; Western et al., 2005). It is notably used to determine whether a catchment is in a dry and spatially disorganized or in a wet and connected state (Grayson et al., 1997). Catchment AMCs are most often associated with soil moisture contents over a fixed antecedent temporal window that can be defined as
$$[\,t_0 - x,\; t_0\,], \tag{1}$$
where $t_0$ is the reference time and $x$ is the amount of time to be subtracted to account for conditions observed before the reference time. Hence, AMCs are used for various purposes, from computing direct surface runoff via the Soil Conservation Service Curve Number (SCS-CN) methodology (Mishra et al., 2005) to characterizing favourable conditions for hydrologic connectivity to occur (James and Roulet, 2009).

The determination of a catchment's AMCs remains difficult given the strong spatio-temporal heterogeneity of soil moisture across any typical catchment and the relative scarcity of spatially detailed soil moisture data in comparison to rainfall or streamflow data that are more accessible. Owing to these difficulties, several practical approaches have been proposed to define surrogates or proxies for AMCs. Precipitation-based indices have received the largest attention as rainfall data are often available (Longobardi et al., 2003). We here distinguish between antecedent precipitation (AP_x) and the antecedent precipitation index (API_n). AP_x is simply the cumulative sum of rainfall recorded over any fixed antecedent temporal window as defined in Eq. (1). The API_n as put forward by Kohler and Lindsey (1951) is rather a weighted summation of daily precipitation amounts recorded since the last rainfall, as described in Eq. (2):
$$API_n = API_{n-1}\, e^{-\alpha\, \Delta t} + P_n, \tag{2}$$
where $\Delta t = t_n - t_{n-1}$ is the time (d) elapsed between the end of the previous rainfall $P_{n-1}$ and the beginning of the next one $P_n$, and $\alpha$ is a parameter equal to the inverse of the characteristic time of soil moisture depletion (d⁻¹). According to Kohler and Lindsey (1951), precipitation-based indices are universally applicable and yield good results provided that they are used in conjunction with season of the year or temperature. Basin evaporation (Longobardi et al., 2003) and the soil moisture index (SMI), which only includes potential evaporation and other climatic factors in its formulation (Mishra et al., 2005), have also been described as potential proxies for AMCs since they relate to soil moisture depletion. Given the findings that pre-event water can play a substantial role in rainfall-runoff response (e.g. Sklash and Farvolden, 1979; Pearce, 1990; Rice and Hornberger, 1998; Kirchner, 2003) and given the wide availability of streamflow data, the antecedent baseflow index (ABFI) (Mishra et al., 2005) and other measures related to discharge recorded just prior to the reference time (Kohler and Lindsey, 1951; Longobardi et al., 2003) have been proposed as surrogates for AMCs. Kohler and Lindsey (1951) have advocated that baseflow-derived indices provide reasonably good results in humid and sub-humid regions; however, like the API_n, baseflow indices are strongly dependent upon season of the year and do not necessarily reflect short-term changes in a catchment state. Several authors have emphasized the relative advantage of the ABFI in comparison to antecedent rainfall, not only because it reflects both shallow soil moisture and deeper groundwater conditions (Young and Beven, 1994) but also because it does not force the choice of an antecedent temporal window (Mishra et al., 2005) and it is a better predictor of runoff generation (Longobardi et al., 2003). Nonetheless, the ABFI is not often used in the hydrological literature (Mishra et al., 2005), with the exception of a few studies based on water table heights (e.g. James and Roulet, 2009). The number of days since the last rainfall event is another proxy for AMCs that is seldom used in catchment hydrology (Kohler and Lindsey, 1951). Several questions arise concerning the selection of a proxy for AMCs for a specific catchment. For instance, with regards to antecedent precipitation, what duration of antecedent temporal window should be used? The term "antecedent" is broadly used in the literature and refers to durations from one hour to 30 days. Antecedent temporal windows of seven days (e.g. Woods and Rowe, 1996; Inamdar and Mitchell,
The determination of a catchment AMCs remains difficult given the strong spatio-temporal heterogeneity of soil moisture across any typical catchment and the relative scarcity of spatially-detailed soil moisture data in comparison to rainfall or streamflow data that are more accessible.Owing to these difficulties, several practical approaches have been proposed to define surrogates or proxies for AMCs.Precipitationbased indices have received the largest attention as rainfall data are often available (Longobardi et al., 2003).We here distinguish between antecedent precipitation (AP x ) and the antecedent precipitation index (API n ).AP x is simply the cumulative sum of rainfall recorded over any fixed antecedent temporal window as defined in Eq. ( 1).The API n as put forward by Kohler and Lindsey (1951) is rather a weighted summation of daily precipitation amounts recorded since the last rainfall as described in Eq. ( 2): Where t = t nt n−1 is the time (d) elapsed between the end of the previous rainfall P n−1 and the beginning of the next one P n , and α is a parameter equal to the inverse of the characteristic time of soil moisture depletion (d −1 ).According to Kohler and Lindsey (1951), precipitation-based indices are universally applicable and yield good results provided that they are used in conjunction with season of the year or temperature.Basin evaporation (Longobardi et al., 2003) and the soil moisture index (SMI), which only includes potential evaporation and other climatic factors in its formulation (Mishra et al., 2005), have also been described as potential proxies for AMCs since they relate to soil moisture depletion.Given the findings that pre-event water can play a substantial role in rainfall-runoff response (e.g.Sklash and Farvolden, 1979;Pearce, 1990;Rice and Hornberger, 1998;Kirchner, 2003) and given the wide availability of streamflow data, the antecedent baseflow index (ABFI) (Mishra et al., 2005) and other measures related to discharge recorded just prior the reference time (Kohler and Lindsey, 1951;Longobardi et al., 2003) have been proposed as surrogates for AMCs.Kohler and Lindsey (1951) have advocated that baseflowderived indices provide reasonably good results in humid and sub-humid regions; however, such as the API n , baseflow indices are strongly dependent upon season of the year and do not necessarily reflect short-term changes in a catchment state.Several authors have emphasized the relative advantage of the ABFI in comparison to antecedent rainfall not only because it reflects both shallow soil moisture and deeper groundwater conditions (Young and Beven, 1994) but also because it does not force the choice of an antecedent temporal window (Mishra et al., 2005) and it is a better predictor of runoff generation (Longobardi et al., 2003).Nonetheless, the ABFI is not often used in the hydrological literature (Mishra et al., 2005), with the exception of a few studies based on water table heights (e.g.James and Roulet, 2009).The number of days since the last rainfall event is another proxy for AMCs that is seldom used in catchment hydrology (Kohler and Lindsey, 1951) Several questions arise concerning the selection of a proxy for AMCs for a specific catchment.For instance, with regards to antecedent precipitation, what duration of antecedent temporal window should be used?The term "antecedent" is broadly used in the literature and refers to durations from one hour to 30 days.Antecedent temporal windows of seven days (e.g.Woods and Rowe, 1996;Inamdar and Mitchell, 
2007; James and Roulet, 2009) and ten days (e.g. Noguchi et al., 2001; Western et al., 2004) are relatively popular in catchment hydrology. Several studies have relied on the dual use of AP_10 and AP_30 (e.g. Sidle et al., 1995; Vidon et al., 2009). The curve number (CN) method considers rainfall over a 5-day-long antecedent temporal window (SCS, 1956), an approach taken up by some hydrological modeling studies (e.g. Brocca et al., 2008). Silveira et al. (2000), however, compared the single use of 5-day antecedent rainfall with the combined use of 15-day antecedent rainfall and potential evaporation and found no significant differences between the two approaches. While working in a semi-arid environment, Frot and van Wesemael (2009) argued that the use of a 48-hour-long antecedent temporal window was not appropriate to explain the differences in runoff for events with similar precipitation characteristics and rather chose an antecedent period of 20 days. Within the antecedent window, several scenarios can occur, as there may be no rainfall, a single rainfall event, or multiple storms. These events will or will not be accounted for depending on the chosen duration (Salvadori and De Michele, 2006). Thus, Seeger et al. (2004) used a large selection of antecedent windows (i.e. 6 h, 24 h, and 3, 7, 15 and 21 days) in order to discriminate the effects of short-term AMCs from those of long-term AMCs in a small headwater catchment. The wide range of antecedent temporal windows in the literature is unavoidable, as there are no explicit guidelines available to specify the relations between soil moisture content and antecedent rainfall during a specific time period (Mishra et al., 2005). Moreover, the effectiveness of surrogate measures for AMCs may be highly dependent upon climate characteristics and the scale of observation. However, these issues have yet to be addressed if we are to decide between a universal and a regional proxy for AMCs.
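To make the precipitation-based surrogates concrete, the sketch below computes AP_x and an exponential-decay API from a daily rainfall record. It is a minimal illustration, not code from the study: the function names and data are ours, and the recursive API form is one common reading of Kohler and Lindsey (1951).

```python
import numpy as np

def antecedent_precipitation(rain, x):
    """AP_x: cumulative rainfall (mm) over the x days preceding each day
    (the reference day itself is excluded from the window)."""
    rain = np.asarray(rain, float)
    return np.array([rain[max(0, t - x):t].sum() for t in range(len(rain))])

def antecedent_precipitation_index(rain, alpha):
    """API with exponential decay of stored moisture between daily inputs;
    alpha (d-1) is the inverse of the characteristic depletion time. The
    recursive form API_t = API_{t-1} * exp(-alpha) + P_t used here is an
    assumption; the paper's exact Eq. (2) may differ in detail."""
    rain = np.asarray(rain, float)
    api = np.zeros(rain.size)
    for t, p in enumerate(rain):
        api[t] = (api[t - 1] if t > 0 else 0.0) * np.exp(-alpha) + p
    return api

rain = [0, 12, 0, 0, 5, 0, 0, 30, 0, 2]          # hypothetical daily record (mm)
print(antecedent_precipitation(rain, x=7))        # AP_7
print(antecedent_precipitation_index(rain, 0.1))  # API with alpha = 0.1 d-1
```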
One can also ask if it is reasonable to use a sole measure of AMCs for a given catchment. Several authors (e.g. Cappus, 1960; Betson, 1964; Hewlett and Hibbert, 1967; Dunne and Black, 1970; Aryal et al., 2003; Ambroise, 2004) have shown that storm runoff usually originates from consistent parts of a catchment that often represent a small fraction of the whole topographic drainage area. This has been observed in a range of climatic regimes. Soil moisture is a critical hydrological state variable whose spatiotemporal variation indicates the presence of "active" or "contributing" areas or periods (Ambroise, 2004), and this relates to hydrologic connectivity.

[Table 1. Links between surrogate measures for AMCs and hydrologically relevant observations from other studies focusing on the Hermine catchment. "××" means that a strong significant correlation was found, "×" means that rather weak correlations were found, and blanks mean that no significant correlations were found. Rows: areas of 0.85 to 1.4 ha at a depth of 15 cm, areas of 0.85 ha and less at a depth of 45 cm, and areas of less than 0.1 ha at soil depths of 5, 15, 30 and 45 cm (source: Ali et al., 2010a); relative contribution of geographic sources (e.g. riparian versus upslope throughfall, organic and mineral soil water) to streamflow (source: Ali et al., 2010b); presence of high magnitude and quick timing rainfall-runoff events (source: Ali et al., 2010c). Abbreviations: PET, mean daily evapotranspiration computed after the temperature-based Hargreaves formula (Hargreaves, 1975); DSP, number of days since the last rainfall input; DSP_10, DSP_20, DSP_30, number of days since the last rainfall intensity exceeding 10, 20 and 30 mm/d; AP_1, AP_2, AP_5, AP_7, AP_10, AP_12, AP_14, cumulative precipitation from 1, 2, 5, 7, 10, 12 and 14 days before the survey.]

Dynamic connectivity of catchment source areas is controlled by the time-changing availability of surface/subsurface storm water, not only in terms of magnitude but also in terms of frequency, duration, timing and rate (Bracken and Croke, 2007). Disconnected "active" areas involve water fluxes that do not contribute to the global output at a catchment outlet, while "contributing" areas to catchment response are composed of spatially connected "active" areas. It is generally accepted that catchment structure and morphology are the main factors controlling not only the activation of source areas but also their threshold-driven interconnectivity. From a spatially distributed point of view, the fact that all catchment areas are not "activated" at the same time may indicate that they are responsive to different antecedent conditions and/or storm event characteristics. Similarly, the non-uniform contribution of source areas to streamflow may point towards different triggering factors. In that context, should multiple proxies for AMCs be used in order not to bias our understanding of a catchment's hydrological behaviour?
This paper investigates that specific question. We examine the hydrological behaviour of a 5 ha headwater temperate humid forested system, the Hermine, for which several catchment-wide soil moisture patterns are available. The approach relies on point-scale temporal relations between actual soil moisture content values and selected meteorological-based indices so as to identify the surrogates for AMCs that are best suited to characterize the hydrological behaviour of the system. Such a statistical analysis on data from the Hermine catchment could be particularly useful, as previous studies show some inconsistencies in identifying a "universal" AMCs surrogate measure (Table 1). For instance, while characterizing the emergence of spatially coherent saturation patches in the Hermine catchment (Ali et al., 2010a), DSP_30 (i.e. the number of days elapsed since the last rainfall intensity exceeding 30 mm/d) appeared to be the most influential surrogate measure for AMCs: the smaller the value of DSP_30, the more likely the presence of 0.85-1.4 ha wide saturation patches at a depth of 15 cm. In a paper aiming to identify hydrologically representative connectivity metrics in the Hermine catchment, Ali and Roy (2010d) found that the spatial connectedness of locations whose volumetric soil moisture content exceeded 30% was rather dependent upon AP_7 (i.e. 7-day antecedent precipitation). The relative contributions of sources (i.e. organic versus mineral soil water originating from riparian or upslope areas) to streamflow were also found to be weakly correlated to AP_2 and rather strongly correlated to AP_7 (Ali et al., 2010b). The occurrence of high magnitude and quick timing rainfall-runoff events was also found to be coincident with 10-day cumulative antecedent precipitation amounts ranging from 24.5 to 40.5 mm (Ali et al., 2010c). These contrasting results have prompted the current paper, where we wish to examine with direct measures whether the use of different AMCs measures leads to different approximations of the Hermine catchment's hydrological state.
Hermine catchment
The Hermine is a 5.1 ha forested catchment located in the Lower Laurentians, 80 km north of Montréal, Québec, Canada (Fig. 1a). The total annual precipitation in the region has averaged 1150 mm (±136 mm) over the last 30 years, of which 30% falls as snow (Biron et al., 1999). The catchment has a relief of 31 m and is drained by an ephemeral stream (Fig. 1b). Soils are 1 to 2 m deep Podzols developed over a bouldery glacial till. The presence of a confining layer at a depth of approximately 75 cm in the soil restricts root penetration, slows water infiltration and thus enhances the probability of rapid lateral shallow subsurface flow. In wet conditions, catchment-scale soil moisture patterns highly depend upon the asymmetric distribution of thick organic horizons; hydrophilic regions are preferentially located on the northern, steeper hillslopes. Near-surface soil moisture is also influenced by the catchment's complex surface microtopography due to fallen tree trunks and boulders at the soil surface. Other particular features of the Hermine include intermittent rills that are activated in very wet conditions (Fig. 1b) and a wet zone located in the upstream part of the valley bottom (Fig. 1b). The forest canopy is dominated by sugar maple and other deciduous tree species. Thus, transpiration is minimal between October and April, so that changes in soil moisture and water table in that period are mostly governed by downslope drainage. The interception capacity of the forest canopy, combined with high summer potential evapotranspiration, greatly reduces the likelihood of high runoff except during heavy rainstorms or wet and cool periods. Canopy cover is, however, variable throughout the catchment, with a lower coverage density in upper parts of the southern slope near the catchment divide, for example.
Topographic and soil moisture data
A surface digital elevation model (DEM) of the Hermine was obtained by interpolating 640 elevation points collected in the field. Elevation above the catchment outlet was then extracted for 121 sampling locations defined along a 15 m by 15 m sampling grid in the catchment (Fig. 1c). The depth to the confining layer was measured at 257 points using a small hand auger that was forced vertically to refusal through the soil profile. For each sampling location, three auger-to-refusal measurements were made within a 1 m radius and checked for consistency to disregard data likely associated with the presence of individual clasts in the soil matrix instead of the targeted confining layer. Data were then interpolated into a subsurface DEM. In order to evaluate topographic influences on the spatial distribution of soil moisture, several secondary terrain attributes were derived from both the surface and the subsurface DEMs: local slope, contributing area and the topographic index (Beven and Kirkby, 1979) were computed using the D∞ algorithm (Tarboton, 1997), while the multi-resolution valley bottom flatness index (hereafter referred to as the Flatness index) was calculated after Gallant and Dowling (2003). The Flatness index is derived from an elevation map and identifies flat and low regions at a range of scales; its largest values flag the broadest and flattest low areas in the catchment. The depth to the confining layer was then extracted for each of the 121 sampling locations (Fig. 1d), together with the values of all secondary terrain attributes.
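A minimal sketch of the topographic index computation is given below, assuming the specific contributing area and slope grids have already been derived from the DEM (e.g. with a D∞ routine); the grids and function names here are hypothetical, not the study's code.

```python
import numpy as np

def topographic_index(spec_area, slope_rad, eps=1e-6):
    """Topographic index ln(a / tan(beta)) of Beven and Kirkby (1979).

    spec_area : grid of specific contributing area a (m2 m-1), e.g. from a
                D-infinity flow-accumulation routine (Tarboton, 1997)
    slope_rad : grid of local slope beta (radians)
    eps       : guards against division by zero on perfectly flat cells
    """
    tan_beta = np.maximum(np.tan(np.asarray(slope_rad, float)), eps)
    a = np.maximum(np.asarray(spec_area, float), eps)
    return np.log(a / tan_beta)

# Hypothetical 2 x 3 grids standing in for the DEM-derived rasters
a = np.array([[10.0, 40.0, 120.0], [15.0, 60.0, 300.0]])
beta = np.deg2rad([[12.0, 8.0, 3.0], [10.0, 6.0, 2.0]])
print(topographic_index(a, beta))
```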
Soil moisture contents at multiple soil depths were surveyed using a portable 40-inch-long rod equipped with a capacitance-based probe (AQUATERR Instruments & Automation) that was manually pushed into the ground to the desired depth. On 16 occasions between August 2007 and July 2008, volumetric moisture content in the top 5, 15, 30 and 45 cm of the soil profile was measured on a 0 to 60% scale along the previously defined 15 m by 15 m sampling grid, for a total of 121 sampling points. Figure 2 illustrates the contrast between surveys conducted at the Hermine in terms of measured soil moisture patterns for different AMCs and discharges at the catchment outlet (Table 2). In general, saturation patterns tend to be more pronounced at depths of 5 and 15 cm rather than 30 and 45 cm (e.g. Fig. 2). Also, in wetter conditions, spatial patterns show higher soil moisture contents on the northern slope of the catchment. The main variability in the patterns is found from sampling time to sampling time, as we observe a strong contrast between dry, transitional and wet conditions (Fig. 2).
Surrogates for AMCs and catchment response
For each of the 16 soil moisture survey dates, 12 temperature-based, precipitation-based, and soil moisture-based indices (Table 2) were derived in order to assess their potential to serve as surrogates for antecedent conditions estimated from the soil moisture measurements. Mean daily potential evapotranspiration (PET) was computed on a diurnal timescale after the temperature-based Hargreaves formula (Hargreaves, 1975). A first group of seven precipitation-based indices was used to capture the amount of rainfall added to the system over a given period x (AP_x) prior to the time of interest: AP_1, AP_2, AP_5, AP_7, AP_10, AP_12 and AP_14 were calculated as the cumulative rainfall over the 1, 2, 5, 7, 10, 12 and 14 days prior to the survey, respectively. A second group of precipitation-based indices was used to reflect the time distribution of the antecedent water inputs. DSP (i.e. days since precipitation) was computed as the number of days elapsed since the last recording at the rain gage, while the DSP_10, DSP_20 and DSP_30 indices were computed as the number of days elapsed since the last rainfall intensity exceeding 10, 20 and 30 mm d⁻¹, respectively. These indices were especially chosen for their computational simplicity and the absence of mathematical parameters (e.g. soil moisture depletion time) to be estimated. The ability of the survey mean soil moisture content (θ_mean, computed over all depths and sampling points) to represent the catchment macrostate was also evaluated. Lastly, catchment discharges recorded on survey dates (current-day discharges, hereafter referred to as Q_obs) were used to portray the integrated hydrological response at the catchment outlet.
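The sketch below illustrates how the DSP-type indices can be derived from a daily rainfall record; it is our illustration with hypothetical data, following the definitions just given, not the study's code.

```python
import numpy as np

def days_since_precipitation(rain, threshold=0.0):
    """DSP (threshold = 0) or DSP_k: for each day, the number of days elapsed
    since the last daily rainfall total exceeding `threshold` (mm d-1).
    Days before the first qualifying event are returned as NaN."""
    rain = np.asarray(rain, float)
    dsp = np.full(rain.size, np.nan)
    last = None
    for t, p in enumerate(rain):
        if p > threshold:
            last = t
        if last is not None:
            dsp[t] = t - last
    return dsp

rain = [0, 12, 0, 0, 35, 0, 0, 8, 0, 0]        # hypothetical daily record (mm)
print(days_since_precipitation(rain))          # DSP
print(days_since_precipitation(rain, 30.0))    # DSP_30
```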
[Fig. 3. Methodological flowchart. Research question 1: examine the spatial variability of hydrological behaviours between point-scale soil moisture and surrogate measures; methodology: (a) for each sampling location, identify the statistical relationship type and strength between actual soil moisture and the surrogate, (b) build a database to map relationship type and strength results to assess spatial variability (Figs. 4, 5 and 6). Research question 2: identify potential topographic influences on the spatial variability of hydrological behaviours; methodology: (a) determine statistical differences in the topography of regions with distinct hydrological behaviours (Figs. 7 and 8).]
Data analysis
Our methodology intended to answer two research questions (Fig. 3). Firstly, we aimed to determine the nature and the strength of the relationships between point-scale soil moisture (i.e. soil moisture measured at each sampling point) and each of the AMCs and catchment response surrogates previously described. We hypothesized that the identified relationships would illustrate the variety of point-scale hydrologic behaviours that can be encountered within the Hermine catchment. Secondly, we examined the spatial organization of the nature and the strength of these point-scale relationships to link them with possible topographic controls.
For the determination of point-scale relationships, data cases were soil moisture survey dates (n = 16). When the aim was to evaluate the ability of AMCs measures to describe soil moisture patterns, the independent variable was the chosen surrogate for AMCs and the dependent variable was the depth-specific, point-scale soil moisture content. In order to assess the potential of θ_mean to represent the Hermine catchment macrostate, we rather considered θ_mean to be an independent variable while point-scale soil moisture was the dependent one. Lastly, in order to identify catchment areas that might contribute to streamflow discharge, the statistical procedure detailed below was also applied using point-scale soil moisture as the dependent variable and Q_obs as the independent one. No postulate could be made on the form of the relationship between the dependent and the independent variables since no such exercise has been done before. Six regression models (i.e. linear, quadratic, cubic, exponential, logarithmic and logistic), which represent six different types of possible relationships, were fitted to the data and compared so as to select the one with the best fit. Model equations can be written as follows:

Linear model: $Y = a + bX$ (3)
Quadratic model: $Y = a + bX + cX^2$ (4)
Cubic (monotonic increasing) model: $Y = a + bX + cX^2 + dX^3$ (5)
Exponential model: $Y = a\,e^{bX}$ (6)
Logarithmic model: $Y = a + b\,\ln X$ (7)
Logistic model: $Y = a / (1 + b\,e^{-cX})$ (8)

where Y is the dependent variable, X is the independent variable, and a, b, c and d are model parameters. There was no physical basis for the choice of the six mathematical models; the aim was rather to explore the dataset using different models with various degrees of complexity. In each case, a least squares-like regression method was used for all models, which means that the fitting of each model to the data had to minimize the squared differences between observed and predicted values. Selection of the best mathematical model among the six tested was then performed in three steps.
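A sketch of how the six candidate models could be fitted by least squares is given below; the parameterizations follow the standard textbook forms assumed in the reconstruction above (the paper's exact forms are not shown), and all names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate models (Eqs. 3-8); the logarithmic model requires x > 0.
MODELS = {
    "linear":      (lambda x, a, b:        a + b * x,                        2),
    "quadratic":   (lambda x, a, b, c:     a + b * x + c * x**2,             3),
    "cubic":       (lambda x, a, b, c, d:  a + b * x + c * x**2 + d * x**3,  4),
    "exponential": (lambda x, a, b:        a * np.exp(b * x),                2),
    "logarithmic": (lambda x, a, b:        a + b * np.log(x),                2),
    "logistic":    (lambda x, a, b, c:     a / (1 + b * np.exp(-c * x)),     3),
}

def fit_all_models(x, y):
    """Least-squares fit of each candidate model.

    Returns a dict: model name -> (fitted parameters, residual sum of squares).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    results = {}
    for name, (f, n_par) in MODELS.items():
        try:
            params, _ = curve_fit(f, x, y, p0=np.ones(n_par), maxfev=10000)
            rss = float(np.sum((y - f(x, *params)) ** 2))
            results[name] = (params, rss)
        except RuntimeError:  # the optimizer failed to converge for this model
            continue
    return results
```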
First, the adjusted coefficient of determination (R²_α) was used to discard any model that would only explain a small proportion of the variance in the data. Throughout this paper, we refer to R-square (R²) as the proportion of variance in the dependent variable that is explained by the chosen regression model. It can be computed for any linear or nonlinear model:
$$R^2 = 1 - \frac{RSS}{TSS}, \tag{9}$$
where SS refers to a sum of squares: TSS is the total amount of variability in the dependent variable, while the residual RSS is the amount of variability that still cannot be accounted for after the regression model is fitted to the data. Given that the value of R² often increases when a nonlinear model is used instead of a linear relationship, the use of the R²_α is more adequate in the context of multiple model evaluation and comparison, as it assesses the goodness of fit while taking into consideration the numbers of degrees of freedom of the numerator and the denominator of R² (Legendre and Legendre, 1998):
$$R^2_\alpha = 1 - (1 - R^2)\,\frac{N - 1}{N - k}, \tag{10}$$
where N is the sample size and k is the number of parameters. Hence, R²_α "penalizes" models bearing a large number of parameters. For the current analysis, if all six models failed to produce an R²_α value exceeding 0.3, then the relationship between point-scale soil moisture and the surrogate measure being evaluated was labelled as "not significant". Otherwise, only the models with an R²_α exceeding 0.3 were kept for further consideration towards choosing the best fitting model. As a second step in the best model selection, the models with an R²_α exceeding 0.3 were ranked according to their corrected Akaike Information Criterion value. The Akaike Information Criterion or AIC (Akaike, 1974) is also a measure of the goodness of fit of a mathematical model but, contrary to the R²_α, it is not grounded in the statistical theory of hypothesis testing but rather in information theory. The AIC estimates the Kullback-Leibler information loss incurred by approximating the observed data with the fitted model (details regarding model selection using information theory can be found in Burnham and Anderson, 2002). The fit of any regression model to any dataset can be summarized by the Akaike Information Criterion (AIC) defined by the equation:
$$AIC = N \ln\!\left(\frac{RSS}{N}\right) + 2K, \tag{11}$$
where N is the number of data points and K is the number of parameters fit by the regression plus one. The definition of K as the number of parameters plus one is justified by the fact that the regression is "estimating" not only the values of the parameters but also the sum of squares. It is worth noting that the computational equation of the AIC consists of two additive terms, namely one term representing the lack of model fit to the data and another term related to the number of parameters; hence, the AIC can be seen as a measure of both the accuracy and the complexity of the chosen model.
In cases where N/K < 40, as in this study, a second-order corrected AIC, hereafter referred to as AIC_c, is used:
$$AIC_c = AIC + \frac{2K(K+1)}{N - K - 1}. \tag{12}$$
When comparing several mathematical models, it is the one with the lowest AIC_c that is the best or that is most likely to be correct. Hence, in this study, mathematical models with an R²_α exceeding 0.3 were ranked by sorting their associated AIC_c scores in ascending order, and the top-ranked model was chosen as the best one.
The third and last step in the best model selection process consisted in confirming the choice made at the end of step 2. Indeed, if the AICc scores of the top two-ranked models are very close, there is not much evidence to choose one model over the other. We therefore used the following equation to compute the probability that the top-ranked model is indeed the best one:

wᵢ = exp(−Δᵢ/2) / Σᵣ exp(−Δᵣ/2), with Δᵢ = AICcᵢ − min(AICc)

This probability, known as the Akaike weight, can thus be seen as an uncertainty measure as it expresses the likelihood that the top-ranked model is the best among the set of models being evaluated.
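A short sketch of this confirmation step using Akaike weights; the example AICc values are hypothetical.

```python
import numpy as np

def akaike_weights(aicc_scores):
    """Probability that each candidate model is the best of the evaluated set."""
    deltas = np.asarray(aicc_scores) - np.min(aicc_scores)
    likelihoods = np.exp(-0.5 * deltas)
    return likelihoods / likelihoods.sum()

# Two nearly tied models share the evidence almost evenly (about 52% vs. 48%),
# mirroring the lower end of the probabilities reported in Table 3.
print(akaike_weights([100.0, 100.2]))
```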
The possible influence of catchment topography, both surface and subsurface, was studied with regards not only to the nature (e.g. linear versus quadratic, versus cubic, etc.) but also to the strength of the point-scale relationships between actual soil moisture and surrogate measures. Nonparametric Kruskal-Wallis tests were run to assess whether the different types of point-scale relationships were spatially associated with specific topographic properties. The Kruskal-Wallis test is identical to one-way analysis of variance except that the data are replaced by their ranks. Hence, it is used to compare samples from three or more groups. The null hypothesis states that all group medians are equal, while the alternative hypothesis states that at least one group median is different from the others. In this study, each mathematical model is a group and we compare the topography underlying the locations subjected to different relationships between point-scale soil moisture and surrogate measures. When the p-value associated with the statistical test is less than 0.05, we reject the null hypothesis and suggest that the differences in relationship types can be explained by topography. Spearman correlation coefficients were also computed between the strength of the point-scale relationships (i.e. R²α values) and the values of the terrain attributes.
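The two tests can be sketched as follows with SciPy, assuming arrays holding the best-model label, a terrain attribute and the R²α value at each sampling location; the data below are placeholders, not the study measurements.

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

# Placeholder per-location data: best-model label, elevation above the outlet
# (m) and adjusted R-square of the identified relationship.
model_type = np.array(["linear", "cubic", "cubic", "logistic", "linear", "logistic"])
elevation = np.array([12.0, 21.5, 19.0, 27.0, 14.5, 26.0])
r2_adj = np.array([0.35, 0.52, 0.47, 0.61, 0.33, 0.58])

# Kruskal-Wallis: do median terrain values differ between relationship types?
groups = [elevation[model_type == m] for m in np.unique(model_type)]
h_stat, p_value = kruskal(*groups)

# Spearman rank correlation between relationship strength and the attribute.
rho, p_rho = spearmanr(r2_adj, elevation)
```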
Point-scale relationships
Each symbol in Figs. 4 and 5 represents the nature of the best-fitting relationship at a given sampling location. Following the model selection procedure previously described, it appears that on average, the models chosen as the best ones had a probability of being correct ranging from 52 to 100% (Table 3). Figures 4 and 5 illustrate the spatial heterogeneity in the Hermine when it comes to the relation between point-scale actual soil moisture measurements and any catchment-wide, meteorological-based proxy for AMCs. Figures 4 and 5 also show that the spatial patterns are highly dependent not only upon the chosen surrogate for AMCs but also upon the soil depth considered. For instance, only 10% of the soil moisture sampling sites at a 5 cm depth are related to PET (Fig. 5). The best regression model for that relationship is a quadratic one; however, R²α values do not exceed 0.38. A similar result is obtained at a depth of 15 cm where only 10% of the sampling locations are related to PET, and that proportion drops to zero when depths of 30 or 45 cm are considered.
For precipitation-based indices computed from cumulative rainfall, especially AP1, AP2 and AP5, linear relationships are mostly present at a 5 cm depth while nonlinear relationships tend to dominate from a depth of 15 cm and below (Fig. 4). Regarding the point-scale relationships between soil moisture and AP1 (or AP2), exponential models dominate at the 15 cm depth while quadratic models rather dominate at the 30 and 45 cm depths. It is at a depth of 30 cm that most locations with a significant relationship between AP1 (or AP2) and soil moisture content measurements were found, with a mean relationship strength (i.e. R²α) of 0.4. With AP5, linear and exponential models are mostly present at the 5 and 15 cm depths while cubic and quadratic relationships make up most of the patterns at depths of 30 and 45 cm. Significant relations between AP5 and point-scale soil moisture content measurements are the strongest (0.30 ≤ R²α ≤ 0.73) and the most widespread over the Hermine catchment area. On the contrary, patterns associated with AP7, AP10 and AP14 show very few, if any, significant relations. Relationships between AP12 and point-scale soil moisture content measurements are of interest not because of their magnitude but rather because they are identified at only four to six locations confined to the catchment southern slope (Fig. 4), which is opposite to the patterns associated with AP1, AP2 and AP5. Indeed, with AP12, the small cluster of locations subjected to significant relationships on the southern slope stands in opposition to the widespread presence of significant models on the northern hillslope and in the catchment upstream area when AP1, AP2 or AP5 are used as surrogates for AMCs (Fig. 4). Spatial patterns of point-scale relationships were also different depending upon the chosen rainfall intensity-based measure of AMCs. Figure 5 shows that, for all soil depths, a large proportion of sampling locations are significantly, yet weakly, related to DSP (mean R²α of 0.42 at depths of 5 and 15 cm). Exponential relationships between DSP and soil moisture dominate at 5 cm while logarithmic relationships rather dominate at 15 cm. With DSP10, cubic and logarithmic relationships were present at depths of 5 and 15 cm. With DSP20, significant logistic models were found particularly at the 30 and 45 cm depths in the headwater, upslope portion of the study area near the catchment divide (Fig. 5). As for significant relationships between DSP30 and point-scale soil moisture content measurements, they were the most obvious at the 5 cm depth with a mix of linear and exponential regression models.
For almost all sampling locations at all depths, significant relationships between soil moisture content measurements and θ mean are found (0.32 ≤ R²α ≤ 0.91). The vast majority of these relationships are linear, quadratic, cubic and exponential (Fig. 5). The proportion of sampling locations sharing nonlinear relationships with θ mean is 58% at a depth of 5 cm and reaches 66% at a depth of 15 cm and even 90% at depths of 30 and 45 cm. As far as the variable Q obs is concerned, the presence of statistically significant relations is highly dependent upon soil depth (Fig. 6). At the 5 cm depth, very few locations are characterized by a significant relationship between point-scale soil moisture and Q obs. At the three other soil depths investigated, exponential and cubic models dominate while the strength of the relationships is greater than at 5 cm (mean R²α value of 0.62, maximum R²α ranging from 0.80 to 0.90 from the 15 cm depth downwards). At 30 and 45 cm, the spatial patterns of significant relationships resemble the spatial patterns obtained with AP1, AP2 and AP5.
The relationships between θ mean and surrogate measures for AMCs and between θ mean and Q obs were also examined (Fig. 7) and compared to the point-scale relationships illustrated in Figs. 4, 5 and 6. θ mean was only found to be correlated with AP5, DSP and DSP10, given weak R²α values (≤ 0.4). Among the six mathematical models tested (i.e. linear, quadratic, cubic, exponential, logarithmic and logistic), the exponential one was best suited to explain the relationship between θ mean and AP5 while θ mean and DSP (or DSP10) were rather linked in a logarithmic manner. The exponential (or logarithmic) model was also the most widespread when relationships between point-scale soil moisture contents and AP5 (or DSP and DSP10) were examined (Figs. 4 and 5). However, the use of θ mean rather than point-scale soil moisture measurements prevented us from knowing that the strengths of the relationships were highly variable in space and were often associated with R²α values exceeding 0.4 (Figs. 4 and 5). A strong (R²α = 0.88) cubic relationship was found between θ mean and Q obs (Fig. 7); this was surprising given Fig. 6 showing that exponential relationships dominate between point-scale soil moisture contents and Q obs at all depths but 15 cm.
Topographic influences
Regardless of soil depth, no significant Spearman correlation coefficient was found between the strength (i.e. R²α) of the identified point-scale relationships and the values of any surface or subsurface terrain attribute. Nonparametric Kruskal-Wallis tests showed that the nature of the mathematical model chosen to illustrate the relationship between point-scale soil moisture contents and surrogate measures was seldom controlled by topographic variables (Tables 4 and 5).
p-values reported in Table 4 indicate that elevation above the catchment outlet can be used to infer the nature of the point-scale relationships between soil moisture and DSP20. Figure 8 shows that at depths of 15, 30 and 45 cm, relationships between DSP20 and point-scale soil moisture tend to be not significant or cubic at intermediate elevations above the catchment outlet (∼20 m) and rather logarithmic or logistic at high elevations above the outlet (>25 m), especially in the most upstream part of the catchment (see Fig. 5). For surrogate measures such as AP1, AP5, θ mean and Q obs, a clear influence of elevation above the catchment outlet on the point-scale relationships was not discernible. For θ mean, in particular, linear and nonlinear relationships expand across the whole range of elevation values (Fig. 8), thus making it difficult to discern any clear spatial pattern. As for the influence of surface elevation on the relationships between Q obs and point-scale soil moisture measurements, it is only perceptible at depths of 5 and 45 cm (Table 4, Fig. 8) given the relative location of cubic and exponential models. The influence of the surface compound topographic index (CTI) and the Flatness index of the surface and of the confining soil layer on the patterns of point-scale relationships between selected AMCs proxy variables (AP2, AP5, DSP) and soil moisture was also examined (Fig. 9). Even though some minor differences can be perceived in the nature of the point-scale relationships as a function of the values of the subsurface Flatness index (Fig. 9), these differences are not significant (refer to p-values reported in Table 5). The same conclusion applies to the nature of the point-scale relationships as a function of CTI or the surface Flatness index (Fig. 9, Table 4).
Discussion
The simple exercise conducted in this paper yielded new insight into the spatial representativity of proxy variables for AMCs or catchment response. While the relationships between actual soil moisture and several surrogate variables do exhibit strong spatial patterns (see examples in Figs. 4, 5 and 6), some others show rather poor spatial organization, thus casting doubt on the use of a single surrogate to illustrate a catchment state of wetness. Reaching such a conclusion was only possible through the use of an exhaustive soil moisture dataset that covers nearly the entire set of hydrological conditions of the Hermine catchment (Table 2), except for the winter and early spring seasons. Even though the patterns illustrated in Figs. 4, 5 and 6 only portray the spatial distribution of statistical relationships between actual soil moisture measurements and surrogate indices, they may reveal critical hydrological information. Hence, we argue that the simple statistical analyses conducted in this paper give a better understanding of the spatial heterogeneity of hydrological patterns and processes in the Hermine catchment.
It is not surprising that the 10% of the near-surface catchment area subjected to the influence of PET are located on the upper parts of the southern slope, near the catchment divide and in a few other zones (Fig. 5) where canopy density is lower. On much of the Hermine catchment area, especially near the catchment head and on the northern slope, shallow soil moisture seems to be dominantly controlled by AP2 and AP5 (Fig. 4). Statistically significant, even though weak, relations between soil moisture measurements and AP12 (Fig. 4) suggest that soil wetness is not persistent in the long-term except for a small portion of the catchment corresponding to a low-elevation wet zone and to thin soils developed over a bedrock outcrop. Linear relationships between point-scale soil moisture and AP1, AP2 and AP5 are mostly present at 5 cm but, regardless of the soil depth considered, they are outnumbered by nonlinear polynomial (i.e. quadratic and cubic) and exponential relationships (Fig. 4). This may be linked to the fact that the soil storage capacity is a function of the amount and timing of precipitation in addition to evapotranspiration (Ritcey and Wu, 1999), or simply to the transmissivity mechanisms governing the vertical drainage of water in the soil.

Table 4. Influence of catchment surface topography (rows) on the nature of the point-scale relationships between soil moisture content and various surrogates (columns). Reported p-values are significant and suggest that at least one relationship type is associated with a median value of the studied topographic variable that is significantly different from the others.
Locations for which soil moisture is strongly related with catchment discharge may be indicative of catchment areas where triggering conditions for stormflow initiation are met. In that respect, it is worth noting that at depths of 30 and 45 cm, in particular, the spatial patterns of significant relationships between actual soil moisture measurements and Q obs resemble the spatial patterns of significant relationships between actual soil moisture and AP5 (Figs. 4 and 6). Our approach makes it possible to distinguish near-surface from deeper potential "contributing" areas. It is also interesting to compare locations subjected to cubic or exponential relationships with Q obs (Fig. 6) as the two mathematical models mainly differ by their rate of increase. We could argue that the particular locations of exponential relationships hint towards rapidly enhanced subsurface water fluxes leading to the catchment outlet following water inputs to the catchment. It would be reasonable to assume that locations subjected to exponential relationships with Q obs are associated with the absence of depressions to fill in the topography of the soil-confining layer interface. It would also be reasonable to set them in opposition to the other locations which may be subjected to a soil storage threshold to exceed before any lateral water fluxes can occur (Spence and Woo, 2003; Tromp-Van Meerveld and McDonnell, 2006b; Kusumastuti et al., 2007), although this interpretation is not supported by the results reported in Tables 4 and 5. In fact, there are no statistically significant differences in subsurface terrain attributes between locations sharing exponential relationships with Q obs and locations sharing cubic relationships with Q obs. This conclusion highlights the main drawback of the purely statistical approach with regards to hypothesis testing, as the obtained regression models may not necessarily reflect causal relationships. Hence, we can only formulate hypotheses that would have to be tested against additional field data. For instance, in order to confirm or refute the influence of subsurface topographic features on the rate of increase of catchment discharge with respect to point-scale soil moisture, the fluctuations of water storage at the soil-confining layer interface could be investigated.

Table 5. Influence of the soil confining layer topography (rows) on the nature of the point-scale relationships between soil moisture content and various surrogates (columns). Reported p-values are significant and suggest that at least one relationship type is associated with a median value of the studied topographic variable that is significantly different from the others.
Concerning the results on θ mean, the important spatial extension of statistically significant relations with point-scale soil moisture content at all four depths indicates that it is a good surrogate for describing the catchment soil moisture macrostate. This is in accordance with the methodology of several previous studies (e.g. Thierfelder et al., 2003; Grant et al., 2004; James and Roulet, 2009) that relied on the use of the catchment mean shallow soil wetness for process understanding or modeling purposes. We, however, found that the deeper the soil layer considered, the more locations whose soil moisture measurements were nonlinearly related to θ mean (Fig. 5). The identified relationships between θ mean and surrogate measures for AMCs and between θ mean and Q obs fell short of capturing the heterogeneity of the point-scale mechanisms. This result calls for further investigation into how representative θ mean really is over different catchment areas and with changing depths.
It must be stressed that the sole reliance on indices often used in catchment hydrology, namely AP7 and AP10, would have led us to rely on a surrogate measure that is not related to soil moisture measurements in the Hermine. Even though soil moisture proxies based on antecedent rainfall can give good results (e.g. Kohler and Lindsey, 1951; Longobardi et al., 2003), the choice of the antecedent temporal window is crucial. In our case, AP5 is the best index to use as a surrogate for AMCs in the Hermine catchment while AP1, AP2 and AP12 yield fairly good results. Kohler and Lindsey (1951) have argued that indices simply computed from the number of days since the last rain are "obviously insensitive and should not be used if accurate results are required" (p. 2). This statement does not reflect the results obtained for the Hermine catchment, especially when not only the days since the last rain but also the rainfall intensity are considered. We suspect that Kohler and Lindsey's argument might be true in the large river basins with multiple tributaries they refer to in their paper but not in a small headwater catchment like the Hermine. Statistically significant relationships were obtained between point-scale soil moisture measurements and DSP, DSP10, DSP20 and DSP30. For DSP20, a weak yet significant topographic control was even identified, as logarithmic or logistic point-scale relationships with soil moisture were mostly present at high elevations above the catchment outlet (>25 m) (Fig. 8). It is also worth mentioning that previous-day discharges were also used as surrogates for AMCs (data not shown) but they were not involved in any significant relationship with point-scale soil moisture measurements; this result contradicts the claim of Kohler and Lindsey (1951) that baseflow-derived indices provide reasonably good results in humid and sub-humid regions.
It is interesting to compare results obtained from previous studies in the Hermine catchment (Table 1) with the conclusions of the current paper. For instance, the same soil moisture content dataset was analyzed to characterize the emergence of spatially coherent saturation patches in the Hermine catchment (Ali et al., 2010a). The importance of DSP30, in particular, was then revealed: the smaller the surrogate measure for AMCs, the more likely the presence of 0.85-1.4 ha wide saturation patches at a depth of 15 cm and the more likely the presence of saturation patches of less than 0.85 ha at a depth of 45 cm. These conclusions are consistent with the patterns illustrated in Fig. 5. Furthermore, results from Ali et al. (2010a) corroborate the fact that relations between actual soil moisture and AP7 or AP14 are very rare and can only be perceived at the scale of very small saturation patches (<0.1 ha). This comparison sheds light on the scale-dependent spatial representativity of AMCs surrogate measures. Ali et al. (2010a), however, did not identify any significant relations between soil moisture patterns and AP2, while they only captured the influence of AP5 on 0.54-0.85 ha patches and the influence of AP12 on 0.02-0.1 ha patches. These results conflict with some of the AP2 patterns illustrated in Fig. 4 and the reason for this is unclear. Ali and Roy (2010d) also found that the spatial connectedness of locations whose volumetric soil moisture content exceeded 30% was dependent upon AP7. The relationship between connectivity and AP7 then had the form of a step function, which may explain why it was not captured by any of the tested regression models in the current paper. By stating that the relative contributions of geographic sources (i.e. organic versus mineral soil water originating from riparian or upslope areas) to streamflow are strongly correlated to AP2 and rather weakly correlated to AP7, Ali et al. (2010b) echo the conclusions of the present study about the appropriateness of AP2 as a proxy for the Hermine catchment AMCs and the insignificance of AP7 in that regard. On the contrary to the current paper, Ali et al. (2010c) found that AP10 had an influence on the catchment behaviour only when the cumulative antecedent rainfall amounts lay in the range of 24.5 to 40.5 mm. There again, such a relationship between catchment discharge and AP10 can be schematized as a rectangular function that does not bear any resemblance to any of the regression models tested in this paper. Hence, these results highlight the sensitivity of the results to the nature of the relations and of the ensuing regression model that is used.
Our results are catchment specific. They pertain to a small forested watershed with relatively steep slopes in a temperate humid climate. The small scale of the headwater basin and its relief may play a role in the optimal antecedent temporal window size that has been identified (i.e. 5 days) through the analysis. The approach, however, has a general value as the simple analysis described in this paper can be repeated for several catchments under various climatic regimes and for which spatially-detailed soil moisture data are available. This will allow the hydrological community to compare findings and maybe derive guidelines regarding the choice of proxy measures of AMCs in catchments with specific climatic and topographic characteristics. Lastly, it is worth mentioning that the rationale behind our statistical analysis comes from several studies that have described soil moisture as a major control of catchment response and an indicator of the location of active subsurface flow paths (e.g. Grayson et al., 1997; Meyles et al., 2003; Western et al., 2004; Western et al., 2005). However, while Van Meerveld and McDonnell (2005) have also agreed that soil moisture may co-vary with streamflow, they have rather identified transient saturation at the soil-bedrock interface or near a soil layer of reduced permeability to be a real trigger for lateral subsurface stormflow. This was later confirmed with the fill and spill hypothesis (Tromp-Van Meerveld and McDonnell, 2006b). Tromp-Van Meerveld and McDonnell (2006c) also showed that in catchments such as the Panola study site (Georgia, USA), pre-event soil moisture variations were not the main control on the distribution of subsurface saturation during winter storms. The hypothesis according to which shallow soil moisture is a passive signal of transient saturation at the soil-confining layer interface in the Hermine catchment should therefore be verified in order to shed light on the patterns illustrated in Figs. 4, 5 and 6.
Conclusions
This paper aimed at determining whether or not multiple surrogates for AMCs had to be used in order to describe the moisture conditions within a catchment. With regards to the Hermine catchment, the answer to that question is affirmative. Without making any assumption on active processes, we computed the point-scale temporal relations between actual soil moisture measurements and commonly used meteorological-based indices so as to identify the surrogates for AMCs that are best suited to the Hermine catchment. Two principal results stood out. Firstly, it was shown that the sole reference to AMCs indices often used in catchment hydrology (i.e. AP7 or AP10) does not help predict the catchment moisture conditions when linear, quadratic, cubic, exponential, logarithmic or logistic relationships are considered. Secondly, the relationships between point-scale soil moisture measurements and surrogates for AMCs were not spatially homogeneous, thus revealing a mosaic of linear and nonlinear catchment "active" and "contributing" sources whose location was seldom controlled by surface terrain attributes or the topography of the soil-confining layer interface. These results represent a step forward for the Hermine catchment as they point towards depth-specific processes and spatially-variable triggering conditions that are not controlled by topography. Such hydrological behaviour may also exist in other catchments. The analysis also raises several questions on the use of surrogate AMCs measures and on the generalization of results obtained with a single surrogate. Further investigations are, however, necessary to establish robust, causal relationships between soil moisture and meteorological-based proxies for AMCs and then derive guidelines concerning the best surrogate choice.
Fig. 1. (A) Location of the Hermine catchment; (B) Hermine catchment particular features; (C) Elevation above the catchment outlet; and (D) Depth to the confining layer for each of the 121 soil moisture sampling locations.
Fig. 2. Sample soil moisture maps obtained after three contrasted surveys in the Hermine catchment.
Fig. 3. Methodological approach used in this paper. R²α refers to the adjusted coefficient of determination while AIC refers to the Akaike Information Criterion.
Fig. 4. Nature and strength of the relationships between point-scale soil moisture content and APx indices (x = 1, 2, 5, 7, 10, 12 or 14 days) used as surrogates for AMCs. R²α refers to the adjusted R-square.
Fig. 5. Nature and strength of the relationships between point-scale soil moisture content, PET, DSP and DSPx indices (x = 0, 10, 20 or 30 mm/d) used as surrogates for AMCs, and θ mean used as a surrogate for the Hermine catchment macro-state. R²α refers to the adjusted R-square.
Fig. 7. Relationships between the mean soil moisture content (θ mean) and surrogate measures for AMCs and catchment response. "r" refers to the Spearman correlation coefficient.
Fig. 8. Influence of surface topography (elevation above the catchment outlet) on the nature of the relationship between point-scale soil moisture (columns) and selected proxies for AMCs, catchment macrostate and catchment response (rows) in the Hermine. Grey numbers illustrate the number of data used to plot each box.
Fig. 9. Influence of various topographic properties (surface and subsurface multi-resolution valley bottom flatness and compound topographic index) on the nature of the relationship between point-scale soil moisture and selected surrogate measures for AMCs. Blue, green and red numbers illustrate the number of data used to plot each box of the same color.
Table 1. Hydrologically relevant observations in the Hermine catchment and significant correlations identified with surrogate measures for AMCs (PET, AP1, AP2, AP5, AP7, AP10, AP12, AP14, DSP, DSP10, DSP20, DSP30).
Table 2. Surrogates for AMCs, catchment macrostate and hydrologic response for 16 soil moisture surveys in the Hermine. See meaning of abbreviations in text.
Fig. 6. Nature and strength of the relationships between point-scale soil moisture content and Q obs used as a surrogate for the Hermine catchment response. R²α refers to the adjusted R-square.

Table 3. Catchment-wide average of Akaike weights or probabilities associated with the best mathematical model chosen to illustrate the relationships between point-scale soil moisture content and surrogate measures.
Posterior Pole Asymmetry Analysis in Children with Anisometropia
Objectives: The objectives of the study were to investigate the inter- and intraocular differences in posterior pole asymmetry analysis (PPAA) with optical coherence tomography (OCT) in anisometropia, and to examine the relationship between the presence of anisometropia and amblyopia and retinal thickness. Methods: Patients between the ages of 5 and 16 years with anisometropia who applied to our clinic were included in the study. Macular retinal thickness measurements were evaluated by PPAA using the posterior pole algorithm of the spectral domain OCT device. Asymmetry was analyzed both as the difference between the right and left eyes and as the difference between the superior, inferior, and mean retinal thicknesses of 64 separate quadrants in the same eye. Hemispheric and right-left eye asymmetry difference analyses were performed. Results: 118 patients were included in the study (65 females and 53 males). Group 1 consisted of anisometropic patients (n=46), Group 2 consisted of anisometropic amblyopia patients (n=40), and Group 3 consisted of the control group (n=32). The mean age of the patients was 9.72±5.6 years. The mean spherical equivalent difference between the two eyes of the patients was 1.7±0.6 D. When anisometropic eyes were compared with normal eyes, there was no significant difference in mean superior, inferior and total retinal thickness, or in right-left eye asymmetry values (for all, p>0.05). In the asymmetry evaluation performed by counting the black boxes in the PPAA, significant differences were found in some quadrants and in the right-left asymmetry analysis in anisometropic amblyopic eyes (p<0.05). Conclusion: While no difference was found between anisometropic and normal eyes in the PPAA, there were differences in some quadrants in the anisometropic amblyopic group compared to the control group, suggesting that there is involvement in the peripheral quadrants of the macula, especially in treatment-resistant amblyopic patients.
Introduction

of preterm birth, neurological diseases, or in the group of children we think is healthy. All OCT devices have an integrated normative database, which includes only individuals 18 years of age and older. To evaluate changes in retinal measurements accurately, it is first necessary to determine the range in the normal population and to quantify the accuracy, reproducibility, and repeatability of measurements made by the system. For these reasons, it is very important to have normative OCT data in healthy children with different refractive values, so that test results in these age groups can be interpreted reliably.
When we look at the recent literature, the posterior pole asymmetry analysis (PPAA) test has been a guide for the early diagnosis of glaucoma and for the question of whether localized defects truly represent glaucoma or merely a difference in symmetry between the two eyes (3,5,9,10). Our curiosity about whether this method, which was previously investigated in healthy adults with suspected glaucoma, is affected by anisometropic amblyopia or other high refractive values in childhood led us to this study.
PPAA is a novel retinal imaging technique of the spectral domain OCT (SD-OCT) device that simultaneously maps the posterior pole retinal thickness and performs asymmetry analysis between the eyes and between the hemispheres of each eye (11-14).
In 2011, Heidelberg Engineering (Spectralis, SD-OCT, Heidelberg, Germany) customized its most recent retinal thickness protocol to obtain retinal thickness measurements of the central 20° of the posterior pole. The posterior pole retinal thickness map is a color-coded map that provides a mean retinal thickness value for an 8 × 8 grid centered on the foveal pit. The grid is positioned symmetrically to the fovea-disc axis. Each cell of the grid represents a square area of 3 × 3° of the posterior pole. Concurrently, the PPAA protocol was created (15). This protocol compares retinal thickness measurements of the corresponding cells in the retinal thickness map between the eyes and between the two hemispheres within each eye. The asymmetry map is displayed as a gray-scale depiction of the difference in thickness from 0 to 30 µm.
A few studies detected inter- and intraocular retinal thickness asymmetry (RTA) in pre-perimetric glaucoma, and therefore concluded that RTA may be the first sign of glaucoma and that PPAA can thus be used in the early diagnosis and later follow-up of glaucoma (16-18).
At present, we have no data about PPAA regarding anisometropia and anisometropic amblyopia in pediatric subjects. To determine any asymmetry that may exist, our study investigated children aged between 5 and 16 years, comparing all 64 cells of the asymmetry grid in the PPAA.
Looking at previous studies, we come across only a small number of studies on children, and these generally aimed to characterize the normal asymmetry between the two eyes in OCT data from healthy children (6,17-19). All these data will guide research and new diagnostic and follow-up tests, helping to prevent late or unnecessary diagnoses. The normative database of OCT devices in cases under the age of eighteen is newly created; this study is intended to guide work on this subject and to ensure that high refractive disorders, and especially anisometropia, are considered as corrective factors in future software of these devices.
In the literature, very limited data exist on the effect of childhood refractive status, such as high refractive values, anisometropia or amblyopia, on posterior pole asymmetry analysis. There are only some studies regarding normal asymmetry analysis in healthy pediatric subjects (6,19). To the best of our knowledge, this is the first study to assess asymmetry in anisometropia using the entire PPAA protocol.
Methods
This study was conducted in accordance with the Declaration of Helsinki and was approved by the ethics committee of Başkent University (Project No: KA 21/240). Written informed consent was obtained from the legal guardian of each child involved in the study.
The subjects underwent a full ophthalmic examination including best corrected visual acuity (BCVA) tested with age-appropriate charts, cycloplegic refraction with cyclopentolate 1% or tropicamide 1% eye drops, slit-lamp biomicroscopy, intraocular pressure measurement with air-puff non-contact tonometry if possible, fundus examination with indirect ophthalmoscopy, and orthoptic examination. All patient examinations were performed by the same pediatric ophthalmologist (SAB), and all OCT measurements were done by the same expert technician.
Participants
This prospective, cross-sectional study was conducted at Başkent University Hospital between May 2020 and April 2021. We recruited 118 children aged between 5 and 16 years.
Subjects with a spherical equivalent (SE) between -1.00 and +1.00 diopters (D) and a BCVA of 20/20 or better in both eyes were enrolled as the control group (Group 3). Care was taken to ensure that the refractive error in the control group children did not exceed 1 D as a spherical value or 1.5 D as an SE. There were no cases with anisometropia of 1.25 D or above in Group 3. All subjects had to complete a set of examinations including BCVA, auto-refraction, slit-lamp examination, and OCT measurement.
OCT Imaging Acquisition
The same examiner performed all OCT measurements (Heidelberg Engineering, Heidelberg, Germany) and the PPAA. Only OCT images of good quality were used for further analysis. The PPAA screen showed the mean superior, inferior and total retinal thickness in the posterior pole region (Fig. 1). The mean RTA was calculated for all cells of the posterior pole grid between the superior and inferior hemisphere retinal thicknesses of the same eye. In this study, the central four cells of the whole 64 squares positioned around the fovea were named the central macular area (called region 1), whereas the surrounding 16 squares around region 1 were named the peri-central area (called region 2), the surrounding 20 square areas around region 2 the peri-macular area (called region 3), and the outer 28 square areas the peripheral area (called region 4). The average thickness of each region was calculated as well. All images were acquired with the Spectralis SD-OCT (version 5.6.1) after pupillary dilation using eye tracking software (TruTrack; Heidelberg Engineering). Subjects were instructed to fixate on the internal fixation target prior to each scan. The instrument has a scan speed of 40,000 A-scans per second, with a 12° diameter scan circle around the optic nerve. The scan circle diameter (mm) depends on the axial length of the eye and is typically 3.5-3.6 mm. All scans had a quality score of >25. Images with artifacts or missing parts were excluded and repeated.
Spectral OCT
All patients were scanned using the commercially available SD-OCT Spectralis HRA + OCT (Heidelberg Engineering). This instrument uses a wavelength of 820 nm in the near-infrared spectrum in the SLO mode. The light source of the SD-OCT is a super luminescent diode with a wavelength of 870 nm. Infrared images and OCT scans (40,000 A-scans/s) of the dual laser scanning systems are acquired simultaneously. The macular thickness measurements were obtained using the posterior pole asymmetry scan protocol. This scan protocol was applied to the targeted eyes in all subjects; the camera was centered on the fovea with even illumination within a 6 × 6 mm area. The retinal thickness grid overlays a 24 × 24° retinal region centered within the measured area of 30 × 25°. This grid is composed of 64 cells; each cell represents the average measured retinal thickness of a 3 × 3° area. Asymmetry analysis of the posterior pole was evaluated with the map that compares the superior to the inferior hemisphere for each eye. One hemisphere includes 32 cells, and each cell has an equivalent in the opposite hemisphere. The difference between the two equivalent cells is indicated with colors changing from white to black (Fig. 1). A black cell means that the difference in retinal thickness is ≥30 µm.
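As a rough illustration of the map logic described above, the sketch below buckets hemisphere differences of an 8 × 8 thickness grid into the shades used in this paper. The grid layout and the mirroring convention are simplifying assumptions for illustration, not the device's internal algorithm.

```python
import numpy as np

def hemisphere_asymmetry(grid):
    """grid: 8 x 8 retinal thickness values (µm); rows 0-3 superior, 4-7 inferior."""
    superior = grid[:4, :]
    inferior = grid[4:, :][::-1, :]  # mirror so each cell faces its counterpart
    diff = np.abs(superior - inferior)
    # Shade buckets used in the text: dark grey = 20-30 µm, black = >= 30 µm.
    shade = np.where(diff >= 30, "black",
                     np.where(diff >= 20, "dark grey", "lighter"))
    return diff, shade
```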
For calculating the superior-inferior (S-I) asymmetry, one eye was randomly chosen using a random number table and the inferior area values were subtracted from those of the superior area. The differences were expressed in percentiles.
Statistical Analysis
SPSS software version 21.0 for Microsoft Windows was used for statistical analysis. All data were expressed as the mean ± standard deviation. Means and standard deviations of each zone asymmetry in the anisometropic, anisometropic amblyopic, and control groups were assessed. Independent-samples t-tests and chi-square tests were used to determine significant differences between the groups for continuous and categorical variables, respectively. Results with p<0.05 were considered statistically significant. Multiple linear regression analysis was performed to see the effect of age and refraction on the interocular as well as the intraocular superior-inferior asymmetry of the OCT parameters.
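A minimal sketch of the group comparisons described above is given below; the values and counts are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder zone-asymmetry values (µm) for two of the groups.
group1 = np.array([28.0, 25.5, 30.2, 27.1, 29.4])
group2 = np.array([33.1, 35.0, 31.8, 34.2, 32.6])
t_stat, p_t = stats.ttest_ind(group1, group2)

# Chi-square test on a categorical variable (e.g., sex distribution by group);
# the contingency counts below are made up for illustration.
contingency = np.array([[30, 16],
                        [21, 19]])
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
```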
Data Analysis
The area under the receiver operating characteristic curve (AUROC) was calculated to assess the discriminative ability of the overall number of black cells. Based on the AUROC analysis, criteria that might be clinically meaningful were selected, and the sensitivity and specificity of such criteria were calculated. To detect prominent thickness differences, black cells and dark-grey cells were included in the interocular zonal comparison. Black cells indicate a mean thickness difference of >30 µm, whereas dark-grey cells indicate a mean thickness difference of between 20 µm and 30 µm.
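A hedged sketch of such an AUROC analysis follows, with the black-cell count as the score and amblyopia status as the label. The arrays and the Youden-index cutoff rule are illustrative assumptions, not necessarily the criterion used here.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Placeholder data: 1 = amblyopic eye, 0 = control; score = number of black cells.
is_amblyopic = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
black_cells = np.array([0, 1, 0, 3, 1, 4, 2, 0, 5, 3])

fpr, tpr, thresholds = roc_curve(is_amblyopic, black_cells)
auroc = auc(fpr, tpr)

# One common way to pick a clinically meaningful criterion: the cutoff that
# maximizes Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
sensitivity, specificity = tpr[best], 1.0 - fpr[best]
```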
Results
A total of 132 subjects were initially included. Fourteen subjects were excluded from the study because they were unable to undergo complete SD-OCT imaging or because of poor image quality; finally, 118 subjects completed the study.
The mean BCVA was 0.2±0.02 logMAR in the amblyopic eyes of Group 2 (anisometropic amblyopic). The refractive errors ranged from -6.00 to +6.00 D of SE. Patients with strabismus or nystagmus were not included in the study groups. Table 1 shows the mean refractive errors in both eyes, the mean difference between the two eyes, and the mean difference of OCT parameters between the right and the left eyes as well as between the superior and inferior areas of the same eye. Figure 2a-c shows one sample case from each of the anisometropic, anisometropic amblyopic, and control groups. Table 2 shows the mean posterior pole retinal thickness of the total, superior, inferior, and four sub-region fields, and the intraocular RTA. The percentile distribution of inter- and intraocular asymmetry of PPAA macular thickness parameters is shown in Table 3. The 2.5th and 97.5th percentile interocular difference tolerance limits for average total PPAA macular thickness and the intraocular superior-inferior area difference for the PPAA macular thickness were -9 to 21 µm and -32 to 38 µm, respectively. The interocular correlations between the right and the left eye were significant for all OCT parameters, as shown in Table 3.
The 2.5th and 97.5th percentile interocular difference tolerance limits for central macular thickness were 17.60 µm and 23.30 µm, respectively. In the whole group, the interocular total macular thickness asymmetry limit was 23 µm and the difference between the intraocular superior-inferior hemispheres was 19 µm. In Group 1, the interocular macular asymmetry was 28±3.2 µm and the intraocular S-I hemisphere difference was 22±4.1 µm. In Group 2, the interocular macular asymmetry was 33±5.6 µm and the intraocular S-I hemisphere difference was 25±4.3 µm (Fig. 3).
In 95% of the children, interocular differences in macular parameters were up to 23.20 µm (macular thickness) and 0.64 mm³ (macular volume). We found the least difference between the right and left eyes in parameters related to the optic disc, with differences of 0.02, 0.03, and 0.01 for rim area, disc area, and cup-to-disc area ratio, respectively.
Discussion
There is a dire need for quick, reliable, reproducible, and minimally invasive tools for objective assessment in the pediatric age groups. SD-OCT is one such diagnostic tool for assessing macular thickness. Knowledge of normal interocular and intraocular asymmetry is, therefore, essential to avoid confusion with physiological variations.
Many pathological diseases are unilateral or asymmetrical in children. Changes in OCT measurements compared with previous examinations, or interocular asymmetry exceeding normal limits, should be considered warning signs and an indication for further examination. Deviation from this difference may be deemed abnormal even if the absolute value appears to be within normal limits. There are several articles in the literature evaluating normal interocular asymmetry in children (3,6,19-24). Based on these articles, we conducted this study to understand whether high refractive values and refractive differences between the two eyes influence the evaluation of OCT results in children aged 5-16 years. In this article, we tried to investigate the criteria that should be considered as correction parameters in the follow-up of retinal or optic disc-based diseases that may develop in the future, especially in anisometropic amblyopia cases. The mechanism underlying interocular differences remains unclear. Because RNFL thickness is affected not only by the number of ganglion cell axons but also by glial and Müller cells, we cannot completely attribute the asymmetry to differences in a particular cell line. Huynh et al. (21) reported the 2.5-97.5 percentile limits of interocular asymmetry for their macular thickness parameters as -31 to 31 µm. In another study, Altemir et al. (19) reported their limits as -17.6 to -23.2 µm. They suggested that interocular differences in average RNFL and macular thickness of normal individuals should not exceed 13 µm and 23 µm, respectively, if measured with Cirrus HD-OCT (19). Differences greater than these values should be considered suggestive of pathology, such as pediatric glaucoma, optic nerve diseases, or macular diseases, according to their study (19). Amblyopia can also affect OCT parameters, especially in patients with severe anisometropia, and this group of patients may be followed via the changes on the PPAA.
In another study published by Altemir et al. (22) in the same year, one eye of each of 100 children was included, and the accuracy and reliability of repeated FD-OCT measurements in children were investigated. When inter-observer and intra-observer reproducibility were evaluated, the measurements were found to have good repeatability in childhood (22). In the study by Dave et al. (6), it was reported that refractive error did not affect the OCT measurements. They thought that this was because they did not include children with high refractive errors in their study group (6). They also stated that, as in a few previous studies, refractive errors and axial length have been shown to have minimal effects on macular thickness measurements (6,23,24).
Dave et al. (6) stated that they did not look at the effect of anisometropia on OCT measurements, which is a weakness of their study. In addition, they mentioned that severe anisometropic cases were not included in their study group because they did not include amblyopia cases (6). In our own study, on the other hand, we evaluated the macular asymmetry measurements in anisometropia of 1.50 D and above, as well as in the group of patients with anisometropia whom we followed up for amblyopia, which adds to all these studies.
Altemir et al. (19) discussed in their article that one of the limitations of their study was that they did not consider the axial length of children with high refractive disorders. At the same time, none of these studies looked at the effect of anisometropia and high refractive values in both eyes, and the study groups were not homogeneously distributed. In a study published by Hwang et al. (25) in 2014 involving a wide age group (5-80 years), no statistically significant relationship was found between age and refractive error and macular thickness. Considering the study methodology, the refractive error ranges of this group, whose mean age was 36.4 years, were between -14.13 and +5.75 D and between -14.50 and +5.75 D in the right and left eyes, respectively (25). As a result of the study, the mean spherical equivalent of the right eye was more myopic, and the macular thickness in the right eye was significantly thinner in the superior quadrant (r=0.160, p<0.001) and thicker in the temporal quadrant (r=-0.236, p<0.001), with no difference in the other quadrants (p>0.05) (25). In addition, it was observed that the difference in interocular nerve fiber layer thickness was not correlated with age or mean refractive disorder (p>0.05) (25). However, considering the study design, the comparison was made over the absolute value and mean of the refractive error, and the refractive differences between the two eyes were not classified separately (25). In our study, we also evaluated whether the refractive difference between the two eyes influences the differences in interocular macular thickness and, especially if there is amblyopia, whether there is a different effect in those cases. Hwang et al. mentioned that previous studies had included only adults or only children, and that examining a wide age group was a strength of their study (25). In their study, it was mentioned that this asymmetry, which is considered normal up to a certain cutoff value between the two eyes, may be affected developmentally by the topographical location of retinal blood vessels, retinal ganglion cell axon and glial cell density, and cyclotorsions of the eye (25). Indeed, our observation is that asymmetry between the two eyes is evident in cases with cyclotorsion, and there may even be variability in these values after strabismus surgery.
In a review of OCT studies in childhood by Banc et al. (20), published in 2021, which compiled data from 74 valuable studies, the following common conclusions were highlighted: (1) average RNFL thickness is not influenced by age, gender, or eye laterality; (2) macular thickness should be considered separately for children aged <5 and children aged >5; (3) central macular thickness has a tendency towards higher values in boys; (4) the temporal RNFL sector is thicker in the right eye; (5) the superior RNFL sector is thicker in the left eye; (6) macular thickness is not significantly different between the right and the left eye; (7) the ISNT rule is not necessarily valid; (8) RNFL thickness increases as the SE of the refractive error increases; (9) the ONH OCT parameters are not influenced by the refractive error; (10) ocular axial length can have an effect on the ocular magnification, and thus influence the lateral OCT measurements; and (11) handheld OCT devices are a good alternative for young or uncooperative children (20).
When we compared our results with all the literature data, we came to the following conclusions: (1) the differences between the two eyes have been characterized by all these studies; values above the cutoff limits should be reported to us by the test instruments, and the pediatric group should be considered a separate entity, so new software suitable for age and refractive status should come to the fore for this patient group; (2) amblyopia cases should be followed up in terms of pre- and post-treatment changes between the two eyes, just as in other optic nerve or macula pathologies, with change indices in annual follow-ups; (3) if all the factors related to increasing age from childhood and the development of refraction, axial length, cornea and lens are collected in a pool and age-appropriate nomograms are obtained, perhaps adult test data may also change.
One of the limitations of our study was that the high refractive errors could have been classified as hyperopic, myopic, or astigmatic anisometropia and the groups separated accordingly. We could not do this because it would not have been statistically meaningful given the small number of participants, but we have included it in our further study plans. Our second limitation was that some of the amblyopia cases were naive, that is, they had not received any treatment, while the others had previously received occlusion treatment. Comparing the changes over months or years in the post-treatment follow-up of naive cases who have never received treatment is something we considered as another study plan.
In our study, we investigated the effect of these factors and, for the first time, the effect of anisometropia and anisometropic amblyopia on PPAA in children. While no difference was found between anisometropic eyes and normal eyes in the PPAA, the difference in some quadrants in the anisometropic amblyopia group compared to the normal group suggests that there is involvement in the extra-central quadrants of the macula, especially in amblyopes that are refractory to occlusion therapy.
Conclusion
PPAA is a new modality that allows us to recognize the differences between the two eyes in childhood and adulthood and can give us more objective data about the prognosis of optic disc and macula diseases that are present or may develop later. More useful results will be obtained in the future with more comprehensive studies.
Disclosures
Ethics Committee Approval: This study was conducted in accordance with the Declaration of Helsinki and was approved by the ethics committee of Başkent University (Project No: KA 21/240). Written informed consent was obtained from the legal guardian of each child involved in the study.
Case Report: Solitary mastocytoma treated successfully with topical tacrolimus
Solitary mastocytoma, a rare dermatological entity, accounts for 10-15% of cutaneous mastocytosis. We report a rare case of solitary mastocytoma presenting at birth that was treated successfully with topical tacrolimus. With reassurance and strict avoidance of triggering factors, no recurrence was observed within the one-year follow-up period.
Introduction
Solitary mastocytoma, a rare dermatological entity, represents the second most common type of cutaneous mastocytosis. Solitary mastocytomas constitute 10-20% of all childhood cutaneous mastocytosis. They usually present within 2 years of age, mostly within the first 3 months (1).
We report a case of solitary mastocytoma presenting at birth that was treated successfully with topical tacrolimus, with no recurrences noted during a one-year follow-up period.
Case report
An eighteen-month-old girl presented with a solitary, itchy, dark-colored, minimally elevated lesion over her left elbow that had been evident since birth. The lesion used to itch and swell on scratching, bathing and toweling of the area. The child was otherwise healthy and no other systemic manifestations were noted. Clinical examination revealed a solitary, 3.5 × 6.5 cm, non-tender, minimally elevated plaque with central shiny skin and peripheral marginal hyperpigmentation over the left elbow. On scratching the lesion with the blunt end of a pin, the central shiny skin became edematous and itchy (positive Darier's sign) (Figure 1). Hematological and biochemical investigations were within normal limits. A 5 mm biopsy of the skin tissue obtained from the center of the lesion revealed a dense monomorphic inflammatory infiltrate consisting of round to oval cells with clear cytoplasm and centrally located nuclei in the upper and mid dermis (Figures 2a, 2b). Special staining with toluidine blue revealed metachromatic staining of the monomorphic mast cells, confirming the diagnosis of mastocytoma (Figure 3).
The child was treated with topical tacrolimus 0.03% ointment, applied to the lesion site twice daily. The child was also prescribed an oral antihistamine (levocetirizine syrup, 1.25 mg once a day). By the end of the third month, complete subsidence of the lesion was noticed, with residual hyperpigmentation, a negative Darier's sign, and no signs of atrophy. This treatment was continued for another four months, after which the resolution of the lesion persisted with residual hyperpigmentation, a negative Darier's sign, and no signs of atrophy. Treatment was then continued with only a once-daily
application of topical tacrolimus for a month after clinical resolution to prevent further recurrence (Figure 4). Reassurance and strict avoidance of triggering factors such as pressure, friction (rubbing or toweling of the lesion), extreme temperature changes, and intake of mast cell degranulating agents like aspirin, NSAIDs, morphine, and codeine (especially in the form of cough preparations) have led to no recurrence of the child's symptoms during a 1-year follow-up period.
Consent
Written informed consent for publication of the clinical details and clinical images was obtained from the father of the patient.
Author contributions
Dr. Sukesh M.S. and Dr. Ameet Dandale were involved in the clinical diagnosis, work-up, treatment and writing up of this case report. Dr. Smita Ghate contributed to the histopathologic diagnosis; Dr. Rachita Dhurat contributed to the conception and design and final approval of the paper; Dr. Ankur Sarkate contributed to the assimilation of all data and the histopathological pictures.
Competing interests
No competing interests were disclosed.
Grant information
The author(s) declared that no grants were involved in supporting this work.
Open Peer Review

The authors described an 18-month-old girl with a solitary mastocytoma, which was successfully treated with tacrolimus. This might suggest that the mastocytoma requires therapy. This is not the case. Mastocytomas are self-limited and usually do not need therapy. The most important management is the avoidance of known trigger factors. It is not clear whether the reduction is due to the self-limiting nature of the tumor or to the therapy.
Pathogenetically, and from the mechanism of action of tacrolimus, which prevents mast cell degranulation, an improvement of mastocytoma can be expected. However, it should be made clear that the indication for treatment of a solitary mastocytoma should be made very cautiously.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
No competing interests were disclosed.

Robert Sidbury

Is solitary mastocytoma really "rare"? I see an awful lot of them for such a designation, even taking into consideration referral bias. Uncommon might be a better descriptor.

Could the line "Diagnosis is by biopsy..." be modified? I do not think these absolutely must be biopsied, and in fact we almost never do. If there is a positive Darier's sign and a strong clinical suspicion, this presentation is specific enough that I do not think biopsy is mandatory. As a pediatric dermatologist I do all I can to avoid biopsying when not absolutely necessary, and I worry readers might take this line to imply that biopsy is mandatory for the diagnosis of mastocytoma. It is not.

Can we still call tacrolimus and pimecrolimus "new" given they have been available almost 15 years now? I think in fairness the authors must mention the boxed warning about this class of medications somewhere. If the authors cite the concerns for topical and systemic side effects of topical steroids, as they do, I think they must balance this by mentioning the biggest barrier to using these agents: the black box warning.

In the abstract the authors stated that solitary mastocytoma accounts for 10-15% of cutaneous mastocytosis, whereas in the introduction section it was 10-20%.
The lesion was present since birth, but the parents sought medical advice only after 18 months. Was there any particular reason for this delay? In particular, were any previous treatments prescribed for the child?
The magnification of Fig. 2 was stated as 40X, but it seems that Fig. 2A and 2B have different magnifications. The authors should also insert the magnification of Fig. 3.
As with any case report describing a treatment for a disorder known to be self-limited, it is uncertain whether the resolution is due to the applied drug or to natural spontaneous subsidence.
Exploring the nexus between natural resource depletion, renewable energy use, and environmental degradation in sub-Saharan Africa
Abstract This study explores the nexus between natural resource depletion, renewable energy use, and environmental degradation in 48 sub-Saharan African (SSA) countries over the period 2000 to 2020 using generalized panel quantile regression. The findings show that, at the 90th quantile, natural resource depletion is positively and more strongly associated with environmental degradation in SSA. This is probably attributable to countries with higher natural resource depletion, such as the Congo Republic (37.10%), Equatorial Guinea (27.60%), Angola (21.14%), Gabon (12.84%), Chad (12.19%), Burundi (8.92%), Uganda (6.16%), and the Democratic Republic of the Congo (5.24%). Furthermore, at the lower quantiles (30th and 10th), natural resource depletion negatively affects environmental degradation in SSA. This might be attributable to countries with negligible natural resource depletion, such as Cabo Verde (0.16%), the Central African Republic (0.04%), Comoros (1.17%), Eswatini (0.01%), Gambia (0.92%), Guinea-Bissau (0.33%), and Madagascar (0.07%). Moreover, the findings show that renewable energy use reduces environmental degradation and is statistically significant at almost all quantiles. Finally, the findings reveal that industrialization, trade, and economic growth all contribute to environmental degradation (i.e. carbon emissions) in SSA. The policy implication is to adopt measures that reduce poverty, which is linked to natural resource depletion, and to scale up renewable energy technologies in SSA. Policymakers should develop strategies to reduce carbon dioxide emissions and enable better use of natural resources by enforcing environmental laws. Concurrently, we propose that natural resource management be multi-sectoral and integrated into institutional structures by allocating funds to the natural resources sector for intervention programs in SSA countries.
Introduction
Natural resources are the building blocks of life on Earth (we live, produce, survive, and earn from them). Sub-Saharan Africa is endowed with abundant natural resources such as forests, oil and gas reserves, mineral deposits, and water resources. Many people in sub-Saharan Africa depend on natural resources for their livelihood. However, most natural resources (land, air, minerals, wildlife, forests, and water) are degrading at an alarming rate, raising global concerns about their long-term management (Herbst 2020; Gogoi 2013). Natural resources in Africa have been depleted due to abuse and poor management, which has resulted in environmental problems (Aluu 2019). Furthermore, Africa's power generation resources have been depleted over the last two decades as a result of deforestation, desertification, land degradation, water scarcity, and climate change (Ochola et al. 2010). This has resulted in increased natural resource scarcity and climate change in some African regions. For example, soil erosion and deforestation decrease the amount of fertile soil available for agricultural production and endanger biological diversity, which could affect climate change. In turn, extreme droughts can hinder people's ability to raise livestock and grow food crops. This means farmers and pastoralists must adapt to new water regimes in order to maintain their livelihoods and well-being (Kabede et al. 2011). Similarly, natural resources are currently under threat from ever-increasing population and economic growth needs, as well as urbanization, trade, and industrialization, all of which pollute the environment (Byaro et al. 2022; Opuala et al. 2022).
Even though sub-Saharan Africa has suffered greatly as a result of environmental degradation, some of the environmental sustainability reforms that have been implemented are ineffective and irrelevant due to the outbreak of the coronavirus, energy poverty, economic policy uncertainty, and pastoral land tenure conflicts (see Anser et al. 2021; Adedoyin et al. 2021; Basupi et al. 2017). In the modern world, human activities (such as urbanization, industrialization, population growth, and deforestation) and natural causes (such as floods, typhoons, droughts, rising temperatures, and fires) are the most important factors contributing to environmental degradation (Maurya et al. 2020). For instance, as the world becomes more urbanized, rural people are migrating to cities, resulting in the unplanned and rapid expansion of small cities and putting enormous strain on natural resources (Arsiso et al. 2018). Wassie (2020) argued that urbanization has resulted in uncontrolled degradation of land, forest, water, air, and minerals. Similarly, Fenta et al. (2020) claimed that countries in sub-Saharan Africa (SSA) have experienced changes in land cover and land degradation, altering natural ecosystems as a result of human activities. In the developing world, increased carbon dioxide emissions, oil spills and flaring, massive deforestation, and land degradation are all major environmental issues (Adedoyin et al. 2021). Furthermore, increased exploration of natural resources, through activities such as agriculture and mining, could result in higher carbon dioxide emissions and environmental damage due to deforestation (Nathaniel et al. 2021). Meanwhile, renewable energy has been more widely adopted in developed countries than in sub-Saharan Africa as an attempt to improve environmental quality (Obiakor et al. 2022).
In light of this background, our study investigates the relationship between natural resource depletion, renewable energy use, and environmental degradation in SSA. The study is useful in providing information to policy makers to formulate and implement pertinent rules and regulations regarding the exploration of natural resources. Similarly, it will act as a guide for African governments in determining how changes in natural resources can affect environmental degradation. It is also significant in the African context because population growth, trade, economic growth, industrialization, deforestation, and the depletion of natural resources (i.e. mineral depletion, forest depletion) continue to take a toll as people seek to escape poverty and improve well-being. For example, population growth and human activities such as tree cutting for agriculture and fuel have resulted in the loss of woody vegetation in the Miombo woodlands (Mitchard and Flintrop 2013). Similarly, the extraction of minerals and oil in the region has contributed to the faster depletion of natural resources. For instance, mineral depletion (as a percentage of gross national income) in sub-Saharan Africa increased from 0.2% in 2000 to 1.9% in 2020 (World Bank Development Indicators 2021). Furthermore, net forest depletion (as a percentage of gross national income) increased from 2% in 2000 to 2.3% in 2016, before falling to 1.6% in 2020 (World Bank Development Indicators 2021). Therefore, overall natural resource depletion in SSA increased from 6.3% in 2000 to 11.6% in 2008, before falling to 5.3% in 2020 (World Bank Development Indicators 2021).
Conversely, Hao (2016) argued that the ongoing development of advanced technologies and methods has enabled the huge extraction of desired natural resources such as gas, minerals, forests, and land, degrading the environment to critical levels. Along the same lines, Wassie (2020) cautioned that not all natural resource discoveries are harmful to the environment; other technologies, such as renewable energy (i.e. solar and wind), provide a constant flow of energy and appear to be inexhaustible. This suggests that renewable energy technologies support lowering global emissions from the energy sector to achieve the low-carbon development goal (Fotio et al. 2022). While African countries are increasing their use of renewable energy, uptake is still hampered by high start-up costs, limited expertise, and a lack of supporting infrastructure, resulting in continued non-renewable energy consumption and further environmental degradation (Adedoyin et al. 2021). This suggests that sub-Saharan African countries rely on carbon-emitting non-renewable energy as their primary source of energy, leading to energy poverty (Adedoyin et al. 2021; Acheampong 2018). Likewise, increases in economic growth and energy use in the region have led to greater carbon emissions (Adedoyin et al. 2021). Thus, it is also worth noting that natural resources have the potential to improve environmental quality and accelerate global sustainable development reforms when backed up by proactive economic production (Feleke et al. 2021; Anser et al. 2020).
Our study is also motivated by a number of factors. First, the choice of sub-Saharan Africa (SSA) is due to its vulnerability to climate change and the depletion of its natural resources (Asongu and Odhiambo 2020). For instance, Konya (2016) reported that about 28% of the 924.7 million people who reside in SSA live in areas that have degraded since 1980. Second, the majority of people in SSA are trapped in poverty and reliant on carbon-emitting non-renewable energy (Obiakor et al. 2022; Adedoyin et al. 2021). Third, rising population growth and fast-growing African economies, in terms of industrialization, trade, and high technology in natural resource extraction, can cause environmental problems that need further research. Fourth, with over 1.39 billion people living in Africa (Worldometer 2022), many of them engaging in poor farming practices and deforestation, natural resource depletion can lead to environmental degradation. Fifth, as more people in sub-Saharan African (SSA) countries try to alleviate poverty and improve living standards, they may find it more difficult to protect the environment, resulting in environmental degradation. Sixth, governments, decision-makers, researchers, and international organizations have all raised awareness of the urgency of improving environmental quality in order to achieve the sustainable development goals (see Obiakor et al. 2022; Adedoyin et al. 2021; Imasiku et al. 2020). Finally, statistics show that fossil fuels still account for over 80% of total energy consumption (see Goldemberg 2018).
This study seeks to determine whether there is heterogeneity in the effects of natural resource depletion and renewable energy use on environmental degradation across 48 sub-Saharan African countries. It aims to contribute to the existing literature on SSA in four ways. First, by exploring the nexus between natural resource depletion, renewable energy use, and environmental degradation using recently updated data for the period 2000 to 2020. Second, our study departs from previous literature (as shown in Table 1) by filling a gap in the African context through Powell's (2020) novel panel quantile regression method, which controls for endogeneity of variables and fixed effects in the modeling approach. Unlike previous literature that deployed ordinary least squares (OLS) based on mean estimation (see Table 1), quantile regression predicts conditional quantiles based on the median and is robust to outliers, capturing the heterogeneous effects of independent variables on the dependent variable (see Bilgili et al. 2022; Khan et al. 2020; Chen and Lei 2018). Few studies have looked at natural resource depletion as a factor in explaining environmental degradation in developing countries (see Yang et al. 2022; Ali et al. 2021). Therefore, this is the first study to examine the nexus between natural resource depletion, renewable energy use, and environmental degradation in 48 countries in SSA using the panel quantile regression technique for the period from 2000 to 2020. Third, shocks from the coronavirus (COVID-19) could have pushed back the environmental sustainability agenda, because many countries have been unable to absorb the virus, which could have serious consequences for the environment and natural resource management.
To achieve our objective, the generalized quantile regression method is used to address the endogeneity of variables (i.e. omitted variable bias, simultaneity bias) through an instrumental variables approach (see Opuku and Aluko 2021; Powell 2020). Furthermore, we apply the generalized panel quantile regression since the 48 selected SSA countries differ substantially in terms of natural resource depletion, renewable energy use, and environmental degradation.
This study aims to address the following question: Do natural resource depletion and the use of renewable energy have heterogeneous effects on environmental degradation (i.e. carbon dioxide emissions) in sub-Saharan African countries?
The rest of the paper is laid out as follows: the "Literature review" section presents the review of the literature. The methodology is presented in the "Data sources and methodology" section. The results and discussion are presented in the "Results and discussion" section, while the "Conclusion" section concludes the study.
Theoretical literature
This study examines the theoretical literature based on the interaction of human activities and the environment. It mainly focuses on theories used to analyze environmental degradation. For instance, neoclassical economists view environmental problems as one of the consequences of the production process (Fardian et al. 2021). From this scenario, environmental economics emerged as a response to the externalities that arise as a result of economic activity (Hussen 2004). On the other hand, Kuznets (1955) developed the Environmental Kuznets Curve (EKC), which states that economic growth first moves parabolically upward until it reaches its highest point before decreasing, described by an inverted U-shaped curve connecting economic growth and environmental problems. This implies that environmental problems are linked to the stages of human economic development. Subsequently, pollutants such as carbon dioxide emissions were posited as causes of environmental damage, linking emissions and income (Grossman and Krueger 2002). In this view, several factors contributing to environmental damage were considered. For example, economic activity, the depletion of natural resources used in the production process, industrialization, trade, urbanization, and financial development have been linked to GDP (see Byaro et al. 2022; Opuala et al. 2022; Ozturk and Ullah 2022; Uche and Effiom 2021). Overall, all human activities interact with the environment to cause environmental degradation, which in turn causes climate change and global warming (Fardian et al. 2021).
Empirical literature review
As shown in Table 1, previous literature has examined the nexus between natural resource depletion, renewable energy use, and environmental degradation in both developing and developed countries.
Environmental degradation has been a hot topic in relation to the sustainable development goals. Many studies, such as Dagar et al. (2022), Usman et al. (2022), Ali et al. (2021), Tenaw and Beyene (2021), and Yahaya et al. (2020), have captured the essence of environmental degradation well. These studies, on the other hand, did not pay much attention to natural resources, which are the foundation of the environment. Although environmental degradation and natural resource depletion are often used interchangeably and have similar definitions, degradation of a resource signifies a loss of value, whereas depletion signifies exhaustion and extinction. Natural resource depletion receives less attention because it is a long process with compounding effects that are more difficult to grasp intuitively (see Usman et al. 2022 and Table 1). Meanwhile, we take into account renewable energy use and other factors to see whether the depletion of natural resources is likely to result in further environmental degradation. The literature gap also reveals that no study in SSA has used quantile regression with fixed effects to investigate the relationship between natural resource depletion, renewable energy use, and environmental degradation.
On the other hand, natural resources are essential for the production and use of renewable energy. Studies by Usman et al. (2022) and Kunze and Becker (2015) have looked into renewable energy and its role in reducing carbon dioxide emissions and improving power supply stability. However, many studies did not link these outcomes to natural resources, which are the primary drivers of renewable energy. Although renewable energy is limitless, the depletion of other resources may have an impact on its accessibility, operation, and utilization. In this regard, our study employs Powell's (2020) generalized quantile regression to address the limitations of standard linear regression techniques (i.e., ARDL, FMOLS, DOLS, AMG, GMM estimator, DID and PSM approaches). The quantile regression method is based on the assumption that the influence of the explanatory variables varies along the conditional distribution of the dependent variable (Amegavi 2022; Koenker and Bassett 1978).
Data sources
We used unbalanced panel data covering 48 sub-Saharan African countries from 2000 to 2020. This was justified because data for a few selected countries were missing for some particular periods; in the case of missing data, the unbalanced panel still permits the regression estimates without any problem. The dependent variable was environmental degradation (measured as carbon dioxide emissions expressed in tons per capita), while the independent variables were natural resource depletion (measured as adjusted savings as a % of gross national income) and renewable energy consumption (% of total final energy consumption). The control variables included GDP per capita (in constant 2015 US dollars), industrialization (% of manufactured value added), and trade (% of GDP). All variables were extracted from the World Bank's Development Indicators (2021). The variables used in this study are also justified in previous literature (see Byaro et al. 2022; Opuala et al. 2022; Van Cam Thi and Le 2022; Moustapha et al. 2021; Rahman 2020).
Model estimation techniques
We apply the generalized panel quantile regression since the 48 selected SSA countries differ substantially in terms of natural resource depletion, renewable energy use, and environmental degradation. The main advantage of using quantile regression is its ability to examine the heterogeneity and asymmetry of the effects of explanatory variables on the conditional location of the dependent variable. This means the effects of the regressors on the dependent variable can be negative or positive across the quantiles. Furthermore, regardless of the data distribution (e.g. skewed), outliers and heteroskedasticity are not a serious problem, because the method is very robust, albeit computationally intensive (Bilgili et al. 2022). The method does not require the traditional OLS (ordinary least squares) assumptions of zero mean, constant variance, and normal distributions to be met (Lin and Xu 2018). For this study, panel quantile regression is also relevant when some countries have higher or lower rates of natural resource depletion, renewable energy use, and environmental degradation, as it offers flexible results. Lastly, quantile regression enables controlling for country- and time-specific confounders. In identifying different relationships at different points of the dependent variable's distribution, quantile regression (i.e. median estimates) is more flexible than other regression methods.
We build a quantile regression model as follows:

Q_{\tau}(y_{it} \mid x_{it}) = x'_{it}\,\beta_{\tau} + \varepsilon_{it},

where x'_{it} represents the vector of explanatory variables for each country i at time t, including natural resource depletion (% of GNI), renewable energy consumption, and other control variables such as trade, industrialization, and economic growth. Q_{\tau}(y_{it} \mid x_{it}) is the \tau-th conditional quantile of environmental degradation y_{it} (proxied by carbon dioxide emissions), expressed as a linear function of the explanatory variables; \beta_{\tau} is the vector of coefficients of the explanatory variables; \varepsilon_{it} is the vector of residuals; and \tau indicates the quantile.
It is clear that a linear regression model tells us only the average (mean) relationship between the explanatory variables (i.e., natural resource depletion, renewable energy consumption) and the dependent variable (environmental degradation). Its estimation relies on the dependent variable's central distribution tendency without integrating the upper and lower ranges (see Amegavi 2022). This also means that the linear regression estimation method does not take into account countries with higher or lower natural resource depletion, renewable energy use, and environmental degradation than the median countries. This can cause overestimation or underestimation of regression coefficients (Sarkodie and Strezov 2019), as not all data can be fitted to reflect reality, distorting some important information (see Amegavi 2022). For this reason, the panel quantile regression approach is used to address the limitations of the standard linear regression technique. The quantile regression method assumes that the independent variables' impact varies along the dependent variable's conditional distribution (Amegavi 2022; Koenker and Bassett 1978). Powell (2020) argued that the generalized panel quantile regression produces consistent estimates in small-T panels.
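To illustrate the point about mean- versus quantile-based estimation, here is a small, self-contained toy simulation (not based on the study's data) comparing an OLS slope with a median-regression slope under heavy-tailed noise:

```python
# Toy demonstration: with heavy-tailed noise/outliers, the mean (OLS) slope
# is pulled away from the true value, while the median (0.5-quantile) slope
# typically stays close to it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.standard_t(df=1.5, size=500) * 3   # true slope = 2, fat-tailed errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
med = sm.QuantReg(y, X).fit(q=0.5)
print(f"OLS slope:    {ols.params[1]:.3f}")
print(f"Median slope: {med.params[1]:.3f}")   # typically nearer to 2
```

Repeating the simulation with different seeds shows the OLS estimate varying far more than the median estimate, which is the robustness property the paper appeals to.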
In the estimation process, the panel quantile approach divides the data into nine different quantiles (10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th) to explore the nexus between natural resource depletion, renewable energy use, and environmental degradation while controlling for other covariates such as trade, industrialization, and economic growth. In other words, the panel quantiles partition the observations into intervals, and country performance is gauged relative to the median (50th quantile) across all countries (Amegavi 2022). Therefore, countries at quantiles lower or higher than the median (50th quantile) can be described as having worse or better performance.
Using unbalanced panel data for 48 sub-Saharan African countries over the period 2000 to 2020 (see Appendix 1), the quantile regression with fixed effects was applied to account for unobserved heterogeneity and heterogeneous covariate effects. The role of including fixed effects is to control for unobserved covariates. We then adopted Powell's (2020) generalized quantile regression (GQR) estimator, which uses the regressors' lags as instrumental variables to eliminate endogenous feedback, such as from economic growth and trade. This procedure addresses omitted variable bias. According to Powell (2020), the generalized quantile regression is used within an instrumental variable framework for generality and to estimate unconditional quantile treatment effects for both endogenous and exogenous policy variables. Models with non-additive disturbances, which are functions of both unobserved and observed factors, are included in the framework. Finally, the generalized quantile regression model is estimated using adaptive Markov Chain Monte Carlo (MCMC) sampling and numerical optimization (see Opuku and Aluko 2021; Powell 2020).
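To make the estimation strategy concrete, the sketch below shows a simplified version of the quantile loop in Python using statsmodels. It is only an illustration: it fits pooled conditional quantile regressions with one-period-lagged regressors and country dummies, rather than Powell's (2020) GQR estimator (which treats fixed effects differently and is estimated via MCMC), and the column names (co2, depletion, renewables, trade, industry, gdp) are hypothetical placeholders for the World Bank series described above.

```python
# Illustrative sketch only: pooled quantile regressions with lagged regressors
# and country dummies, as a simplified stand-in for Powell's (2020) GQR.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per country-year.
df = pd.read_csv("ssa_panel_2000_2020.csv")  # assumed columns: country, year,
                                             # co2, depletion, renewables,
                                             # trade, industry, gdp

# Lag regressors by one period within each country, mimicking the use of
# lags to mitigate endogenous feedback from growth and trade.
regs = ["depletion", "renewables", "trade", "industry", "gdp"]
df = df.sort_values(["country", "year"])
for r in regs:
    df[f"{r}_l1"] = df.groupby("country")[r].shift(1)
df = df.dropna(subset=[f"{r}_l1" for r in regs])

# Fit conditional quantile regressions across the nine quantiles.
formula = "co2 ~ " + " + ".join(f"{r}_l1" for r in regs) + " + C(country)"
for tau in [q / 10 for q in range(1, 10)]:          # 0.1, 0.2, ..., 0.9
    fit = smf.quantreg(formula, df).fit(q=tau)
    print(f"quantile {tau:.1f}:")
    print(fit.params.filter(like="_l1").round(4))   # slope estimates only
```

Country dummies here play the role of fixed effects only crudely; Powell's estimator avoids additive dummies, which is one reason results from this sketch would differ from those reported in Table 3.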
Results and discussion
Table 2 shows the summary of descriptive statistics of the variables for the 48 sub-Saharan African (SSA) countries from 2000 to 2020. The median values for carbon dioxide emissions (i.e. environmental degradation) and natural resource depletion in SSA were 0.26 tons per capita and 2.59%, respectively. Likewise, the means for natural resource depletion and carbon dioxide emissions in the region were 6.11% and 0.95 tons per capita, respectively. The median and mean levels of renewable energy use are 77.22% and 65.63%, respectively. Among the countries with high environmental degradation (i.e. carbon dioxide emissions) are South Africa (7.49 tons per capita), Seychelles (6.40 tons per capita), Equatorial Guinea (5.09 tons per capita), Botswana (3.64 tons per capita), Mauritius (3.34 tons per capita), and Gabon (2.17 tons per capita). Countries with high natural resource depletion (% of GNI) include the Congo Republic (37.10%) and Equatorial Guinea (27.60%), among others. Table 3 shows the generalized quantile regression with fixed effects for the 48 SSA countries from 2000 to 2020.
The results show heterogeneous effects of natural resource depletion on environmental degradation (i.e. carbon dioxide emissions) at the 10th, 30th, 40th, 50th, and 90th quantiles. At the 10th quantile, resource depletion has a negative impact on environmental degradation, whereas at the 40th quantile it has a positive impact. This implies that natural resource depletion has a nonlinear relationship with environmental degradation. The positive association between natural resource depletion and environmental degradation is stronger at the 90th quantile than at the median (50th) quantile. The median countries also reveal that natural resource depletion has a positive effect on environmental degradation in sub-Saharan Africa. Moreover, at the 10th and 30th quantiles, natural resource depletion negatively affects environmental degradation in SSA.
On the other hand, the findings show that using renewable energy reduces environmental degradation in the majority of quantiles and is statistically significant at the 20th to 90th quantiles. The findings also reveal that industrialization has a nonlinear relationship with environmental degradation. Industrialization increases environmental degradation in SSA at the 20th, 30th, 50th, and 90th quantiles, while reducing it at the 60th and 70th quantiles. Meanwhile, trade and economic activity (i.e. GDP) in the region continue to increase environmental degradation; this holds across all quantiles and is statistically significant.
Discussion of findings
This study examines the nexus between natural resource depletion, renewable energy, and environmental degradation in 48 sub-Saharan African countries over the period 2000 to 2020. The findings indicate that natural resource depletion increases environmental degradation (i.e. carbon dioxide emissions) at the 40th, 50th, and 90th quantiles. At the 90th quantile, the positive association between natural resource depletion and environmental degradation is stronger than at the median (50th) quantile. This is probably attributable to countries with higher natural resource depletion, such as the Congo Republic (37.10%), Angola (21.14%), Burundi (8.92%), Equatorial Guinea (27.60%), Chad (12.19%), Gabon (12.84%), Uganda (6.16%), and the Democratic Republic of the Congo (5.24%). Furthermore, at the 30th and 10th quantiles, natural resource depletion negatively affects environmental degradation in SSA. This might be attributable to countries with negligible natural resource depletion (% of gross national income), such as Cabo Verde (0.16%), the Central African Republic (0.04%), Comoros (1.17%), Eswatini (0.01%), Gambia (0.92%), Guinea-Bissau (0.33%), and Madagascar (0.07%).

Notes to Table 3: standard errors in parentheses; *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. The acceptance rate is set at 0.5 for the algorithm, which performs 1000 draws with a burn-in of 100 through MCMC diagnostics. Year dummies (time fixed effects) are included in the regression. All independent variables are lagged by one period as instrumental variables.
It is important to note that, as the human population grows and economies develop, more natural resources are used in different ways. For instance, the struggle of communities to survive and earn enough income to meet their daily needs is a major cause of resource depletion. Weiskel and Gray (1990) support this claim by stating that the more poor and vulnerable people there are, the greater the likelihood of environmental degradation and resource depletion. In sub-Saharan Africa, for example, minerals, forests, water, and fertile soils, among other natural resources, are depleted faster than nature can replenish them. During the dry season, most communities degrade more resources, such as water and forest, than they do during the rainy season, when these resources are plentiful. This suggests that resource utilization varies depending on the season, which could explain the nonlinear relationship between natural resource depletion and environmental degradation in SSA. Ibrahiem and Hanafy (2020) support the claim that income and fossil fuel consumption deplete resources, and this is especially true in Africa, where most communities rely on natural resource extraction to support their daily lives. Meanwhile, the faster depletion of natural resources will likely have an impact on sub-Saharan Africa's GDP growth. For instance, as more people rely on natural resources to support their daily lives, natural resource depletion lowers national income and creates a vicious cycle of poverty in the region. Similarly, the depletion of natural resources increases carbon dioxide emissions and contributes to global warming.
In Africa, wood remains the most common fuel. Over 90% of the population in sub-Saharan Africa relies on firewood and charcoal for energy, especially for cooking, and wood fuels account for over 80% of the primary energy supply (https://nextbillion.net, 2021). In the USA, for example, wood fuels account for only 2% of total energy consumption (https://nextbillion.net, 2021). According to these statistics, the use of firewood and charcoal in SSA is heavily influenced by limited technology and poverty (Hartely et al. 2019). Therefore, as Africa's population is expected to double by 2050, the region will face greater pressure on resource extraction and depletion (Mitchard and Flintrop 2013).
On the other hand, the findings suggest that SSA should adopt renewable energy technologies as a clean source of energy in order to reduce the use of charcoal and firewood. Renewable energy reduces environmental degradation in the majority of quantiles and is statistically significant from the 20th to the 90th quantile. Renewable energy has transformed the reduction of carbon dioxide emissions from human activities, which is essential for sustainable development (Imasiku et al. 2020). Our descriptive statistics show that renewable energy is used at an average rate of 65% in sub-Saharan Africa. By increasing the use of renewable energy to at least an average of 85%, the continent will be able to move away from its reliance on wood fuel and charcoal and preserve natural resources while maintaining sustainable development. For instance, countries with high natural resource depletion (% of gross national income, GNI) such as Angola (21.14%), Burundi (8.92%), Chad (12.19%), the Congo Republic (37.10%), Equatorial Guinea (27.60%), Gabon (12.84%), Uganda (6.16%), and the Democratic Republic of the Congo (5.24%) should immediately adopt more renewable energy technologies to avoid prolonged natural resource depletion and environmental degradation. Studies claiming that renewable energy can help to reduce environmental degradation include Usman et al. (2022), Dagar et al. (2022), Abbasi et al. (2022), Awan et al. (2022), Aziz et al. (2021), and Imasiku et al. (2020). In addition, some studies show that natural resource depletion is linked to environmental degradation (see Nathaniel et al. 2021; Ali et al. 2021).
On the other hand, industrialization is seen as a path to poverty reduction in most sub-Saharan African countries. Tanzania, Rwanda, and Kenya, for example, have undertaken numerous economic reforms aimed at promoting industrial development. However, while such reforms are well-intentioned, they are linked with the threat of environmental degradation (see Byaro et al. 2022; Kwakwa 2021; Xing and Zhang 2021). This argument is in line with the findings of our study that industrialization increases environmental degradation (i.e. carbon dioxide emissions) at the 20th, 30th, 50th, and 90th quantiles in sub-Saharan Africa, while reducing it at the 60th and 70th quantiles. These findings are also supported by Opuku and Aluko (2021), who found that industrialization increases environmental degradation in sub-Saharan Africa at the lower quantiles and reduces it at the upper quantiles. This implies that environmental degradation is minimized when industrialization is implemented effectively, with caution and with the use of renewable resources. Feleke et al. (2021), Imasiku et al. (2020), and Park (2016) all emphasized the importance of incorporating the bioeconomy and renewable energy into existing institutional frameworks.
It is also worth noting that, with globalization, trade is becoming more open, leading people to engage in productive activities in order to meet market demand and stimulate the economy. The findings show that, across all quantiles (10th to 90th), both trade and economic growth (GDP) contribute to environmental degradation (i.e. carbon dioxide emissions) in sub-Saharan Africa. The struggle to establish and expand industries and trade has resulted in increased GDP in African countries, and both trade and growth are detrimental to the environment. This claim is also supported by Byaro et al. (2022) and Yahaya et al. (2021). It is true that, in terms of economic development, sub-Saharan Africa continues to lag behind developed countries.
While economic growth, trade, and industrialization are pursued in SSA as means of eradicating poverty, these variables exacerbate environmental degradation. This serves as a reminder that not all that glitters is gold. To achieve long-term development in the region, both consumers and producers involved in trade activities should protect the environment.
The best way to reduce the environmental degradation (i.e., carbon dioxide emissions) that causes climate change in sub-Saharan Africa is to reduce poverty, which drives the high consumption of charcoal and firewood through deforestation (i.e., depleting more natural resources). It is also advisable for sub-Saharan African countries to use sustainable agriculture practices (i.e. fewer chemicals) and to scale up renewable energy technologies from the current average of 65% to 90% by 2030. The overall practical implication of our findings is to develop strategies to reduce carbon dioxide emissions and enable better use of natural resources by enforcing environmental laws. Policymakers need to formulate and implement relevant rules and regulations governing the exploration of natural resources.
Conclusion
The objective of this study is to explore the nexus between natural resource depletion, renewable energy, and environmental degradation in 48 sub-Saharan African countries from 2000 to 2020. Carbon dioxide emissions (CO2), expressed in metric tons per capita, are used as a measure of environmental degradation; adjusted savings as a percentage of gross national income is used as a measure of natural resource depletion; and renewable energy consumption is expressed as a percentage of total final energy consumption. The study also included control variables: GDP per capita to measure economic growth, the percentage of manufactured value added as a measure of industrialization, and trade measured as a percentage of GDP. The study fills gaps in the literature, specifically for sub-Saharan African countries, using the generalized panel quantile regression developed by Powell (2020).
The findings show heterogeneous effects of natural resource depletion on environmental degradation (i.e. carbon dioxide emissions) at the 10th, 30th, 40th, 50th, and 90th quantiles. At the 90th quantile, the positive association between natural resource depletion and environmental degradation in SSA is strongest. This is probably attributable to countries with higher natural resource depletion, such as the Congo Republic (37.10%), Angola (21.14%), Burundi (8.92%), Equatorial Guinea (27.60%), Chad (12.19%), Gabon (12.84%), Uganda (6.16%), and the Democratic Republic of the Congo (5.24%). Moreover, at the 30th and 10th quantiles, natural resource depletion negatively affects environmental degradation in SSA. This might be attributable to countries with negligible natural resource depletion (% of gross national income), such as Cabo Verde (0.16%), the Central African Republic (0.04%), Comoros (1.17%), Eswatini (0.01%), Gambia (0.92%), Guinea-Bissau (0.33%), and Madagascar (0.07%). The findings also show that renewable energy reduces environmental degradation in the majority of quantiles and is statistically significant at the 20th to 90th quantiles. This suggests scaling up renewable energy technologies in SSA from the recent average of 65% to 90% by 2030. Furthermore, the findings reveal that, across all quantiles (10th to 90th), both trade and economic growth (GDP) contribute to environmental degradation (i.e. carbon dioxide emissions) in sub-Saharan Africa. The findings imply that economic development is sustainable and safe with the use of renewable energy in the industrial sector.
Addressing environmental sustainability reforms is a practical implication for SSA in order to realize the sustainable development goals. Since industrialization, economic growth, trade, and natural resource depletion increase carbon emissions (i.e., environmental degradation), African governments should reduce carbon dioxide emissions by implementing carbon-efficient technologies. This suggests that investing in clean energy production and technologies is crucial in the region. SSA should increase its use of renewable energy technologies from the current average of 65% to 90% by 2030. To make these technologies available across the continent, both local and international funding is required to support the region's environmental sustainability agenda. Similarly, taxes should be charged on pollution-producing activities like mining and industrial waste.
As population growth in SSA continues to rise, the demand for natural resources will increase, resulting in more resource depletion. Therefore, governments should encourage their citizens to adopt clean technologies, such as renewable energy and the recycling of plastics and other materials. Furthermore, natural resources like fossil fuels are used to generate electricity in sub-Saharan African countries. To minimize the depletion of natural resources from fossil fuels, the policy implication is to promote the production of electricity from renewable sources like wind and sunlight. It is also worth noting that forests are rich in natural resources. Therefore, the best way to minimize their depletion is to promote sustainable forest management practices, for instance forest harvesting plans and the establishment of protected areas.
The other policy implication for SSA countries is to adopt measures that reduce poverty levels, which drive the high consumption of charcoal and firewood through deforestation (i.e. depleting more natural resources). Concurrently, we propose that natural resource management be multi-sectoral and integrated into institutional structures by allocating funds to the natural resources sector for intervention programs in SSA countries.
The current study has some limitations, such as not considering other econometric techniques based on the mean. As our findings are limited to sub-Saharan Africa, they should not be generalized; more research beyond SSA is needed before the findings can be generalized. However, the study findings are robust and support other researchers in conducting similar studies in the future using more variables, other environmental metrics, and other econometric techniques to provide more policy options.
Multiparticle amplitudes at one-loop: an algebraic/numeric approach
We discuss algebraic/numeric methods to compute one-loop corrections for multiparticle/jet production cross sections. By using efficient reduction algorithms, a compact expression for the gggγγ → 0 amplitude is obtained. Further, a numerical approach for 6-point 1-loop diagrams is presented.
INTRODUCTION
The theoretical description of multi-particle production at the one-loop level is a very challenging task, as the complexity of the Feynman diagrammatic approach grows exponentially with the number of external partons. No Standard Model process with generic 2 → 4 kinematics has been computed at the one-loop level, although this is highly relevant for many Higgs boson search channels at the LHC, like gluon fusion and weak boson fusion, where additional jets have to be tagged to improve the signal-to-background ratio. For signal reactions like PP → H + 0, 1, 2 jets, with H → γγ, WW*, τ+τ−, which are available at one-loop level, many backgrounds remain to be calculated. As an example of needed calculations, consider PP → bbbb + X, PP → γγ + 2 jets + X or PP → ZZ + γγ + X, which require the evaluation of hexagon graphs like the ones given in Fig. 1.
The computation of the related amplitudes relies on efficient methods for the evaluation of the corresponding Feynman graphs. In the next section we briefly review our reduction formalism. As an example of the efficiency of our methods, we discuss the 5-point 1-loop amplitude gg → γγg in Section 3. It seems feasible to apply the presented techniques also to 6-point processes, as long as the internal masses of the problem can be neglected. In [1,2] we have shown that in the massless Yukawa model our formalism leads to compact expressions. Going to the massive case generally leads to much more involved expressions, and in that case numerical methods seem preferable. An approach for the numerical evaluation of 6-point Feynman diagrams is outlined in Section 4.
REDUCTION FORMALISM
In the Feynman diagrammatic approach, any one-loop amplitude can be represented as a linear combination of factors which contain the group theoretical information and tensor one-loop integrals. To separate the Lorentz structure from the integrals, it is useful to express the tensor integrals in terms of scalar integrals with nontrivial numerators.
[R/2] denotes the nearest integer smaller than or equal to R/2. The bracket with the Lorentz indices as superscripts stands for the sum over all distinguishable distributions of the Lorentz indices to the metric tensors and external momenta. The separation of the kinematical information allows one to sort the amplitude into gauge-invariant subsets. The basic ingredient of the given formula is the Feynman parameter integral defined in D = n + 2m dimensions. In [3] we have derived a reduction formula for such parameter integrals for general N, R in arbitrary dimensions D. It is based on integration by parts in parameter space. The derived formula maps rank-R N-point integrals in D dimensions to rank-(R − 1) N-point integrals and higher-dimensional integrals with lower rank. As the latter are IR finite, a separation of IR-divergent and IR-finite terms can be obtained in this way, which defines an approach for a semi-numeric method. After extraction of all UV/IR poles, the remaining integrals can be treated numerically. If one wants to proceed analytically, one has to iterate the formula. In this way one can show that arbitrary N-point Feynman integrals can be expressed in terms of n = 4 − 2ǫ dimensional bubble and triangle functions and (n + 2)-dimensional boxes. More details on reduction formalisms can be found in [3,4].
THE LOOP AMPLITUDE gg → γγg
To give an example of our algebraic approach, we have considered the 5-point 1-loop amplitude gg → γγg [5]. This amplitude is indirectly known from the 1-loop 5-gluon amplitude [6] by turning gluons into photons.
We define all particles as incoming.
In hadronic collisions this amplitude is relevant for the production of photon pairs in association with a jet and, as such, contributes to the background of the Higgs boson search channel H → γγ + jet. For a phenomenological analysis see [7,8]. The colour structure of this amplitude can be written in terms of helicity amplitudes A^{λ1λ2λ3λ4λ5}, which are helicity-dependent linear combinations of scalar integrals and a constant term that is a remnant of two-point functions with coefficients of order (D − 4). Six independent helicity components exist: +++++, ++++-, -++++, --+++, +++--, and -+++-. As the amplitude is finite, one expects that all 3-point functions, which carry spurious infrared poles, cancel. The function basis of the problem is thus reduced to 2-point functions and 6-dimensional box functions.
To give an example of a compact helicity amplitude, we show here the result for A^{--+++} only. The remaining ones, which also have compact representations, can be found in [5]. The result is expressed in terms of field strength tensors F_j^{μν} = p_j^μ ε_j^ν − p_j^ν ε_j^μ, where ε_j^± are the polarization vectors of the gluons and photons.
We split the result for A^{--+++} into three pieces with indices F, B, 1, which belong to the part proportional to 6-dimensional boxes F_1, a part containing bubble graphs I_2^D, and a constant term, respectively. In the given expressions, the S_2 ⊗ S_3 symmetry under exchange of the two photons and the three gluons is manifest after taking into account the omitted colour factor. The result indicates that with our approach a compact representation of complicated loop amplitudes can indeed be obtained. The application of our approach to relevant 6-point amplitudes is presently under study.
NUMERICAL APPROACH
Due to the complexity of the analytic approach when massive particles are present, a numerical approach seems more appropriate to tackle different types of one-loop amplitudes in a unified and efficient way.
Recently, a great deal of activity in this direction, with many new ideas, can be observed [9,10,11,12].
Reduction to basic building blocks
As basic building blocks for an amplitude in our numeric approach, we choose scalar 2-point functions I_2^n and 3-point functions I_3^n, together with (n + 2)-dimensional box functions I_4^{n+2} with nontrivial numerators. The latter are infrared finite. Possible UV singularities are only contained in the 2-point functions, and their subtraction is straightforward. The (soft and collinear) IR singularities are, as a result of the reduction, only contained in 2-point functions and 3-point functions with one or two light-like legs. In this form, they are easy to isolate and to subtract from the amplitude. After reduction and separation of the divergent parts, we are left with finite integrals I_3^n(j_1, j_2, j_3) and I_4^{n+2}(j_1, j_2, j_3, j_4) with nontrivial numerators. As numerical stability problems stem entirely from the denominators, we discuss only the case of scalar integrals with trivial numerators here. Systematic methods for combining the IR divergences from the virtual corrections with their counterparts from the real emission contribution already exist ([13] and references therein).
In this section we focus on the evaluation of a finite 6-point scalar integral. As a first step we reduce the hexagon integral to box and triangle functions, which are the basic building blocks of the reduction.
Parameter representation of basic building blocks
To evaluate the box and triangle functions numerically, we first perform a sector decomposition,

1 = \sum_{l=1}^{N} \prod_{j=1,\, j \neq l}^{N} \Theta(x_l \geq x_j),

for the integration over N parameters (N = 3 for the triangle, N = 4 for the box). The step function Θ is defined as 1 if the inequality in its argument is fulfilled, and 0 otherwise. Now we carry out one parameter integration explicitly. We show the explicit expressions only for the triangle integral; the ones for the box are analogous and can be found in [14]. The resulting expressions remain valid in the case of vanishing masses or invariants, as long as the functions remain IR finite. Note that if infrared divergences are present, the triangle integrals can typically be treated analytically. The (n + 2)-dimensional box functions are infrared finite for any physically relevant kinematics.
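As a small illustration of the Feynman-parameter representation underlying these building blocks, the sketch below numerically evaluates a finite scalar triangle in n = 4 dimensions with equal internal masses at a Euclidean kinematic point, using the standard result I_3 = −∫ dx dy dz δ(1 − x − y − z)/F with F = m² − xy s_{12} − yz s_{23} − xz s_{13}. This is only an illustration of the parameter-integral technique with generic quadrature: the kinematic values are arbitrary, and it implements neither the sector decomposition nor the contour treatment needed above threshold.

```python
# Illustrative numerical evaluation of a finite one-loop scalar triangle,
#   I3 = -Int dx dy dz delta(1-x-y-z) / F,  F = m^2 - xy*s12 - yz*s23 - xz*s13,
# at an arbitrary Euclidean point (all invariants spacelike), so F > 0
# everywhere and no sector decomposition or i*epsilon prescription is needed.
from scipy import integrate

m2 = 1.0                           # internal mass squared (arbitrary units)
s12, s23, s13 = -2.0, -3.0, -1.5   # spacelike invariants (assumed values)

def integrand(y, x):
    z = 1.0 - x - y                # eliminate z via the delta function
    F = m2 - x * y * s12 - y * z * s23 - x * z * s13
    return 1.0 / F

# Integrate over the simplex 0 <= x <= 1, 0 <= y <= 1 - x.
val, err = integrate.dblquad(integrand, 0.0, 1.0,
                             lambda x: 0.0, lambda x: 1.0 - x)
print(f"I3 = {-val:.6f}  (quadrature error ~ {err:.1e})")
```

Near thresholds or for timelike invariants, F can vanish inside the integration region, which is exactly where the sector decomposition and the analysis of the Θ-function regions described below become necessary.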
Singularity structure
Starting from (4), one integration is performed explicitly. In order to analyse the singularity structure of the integrands, we then separate the imaginary and real parts. Three regions which lead to an imaginary part can be distinguished. Region I is an overlap region where the imaginary part has two contributions; in regions II and III, only one of the Θ-functions contributes. Note that the box function I_4^{D=6} has the same singularity structure [14]. As I_3^{D=4} and I_4^{D=6} are the basic building blocks, this analysis of the singularity structure is done once and for all. Knowing the critical region of integration, it is possible to map out the singularities by adequate parameter transformations.
Numerical integration
To demonstrate the practicality of our method for evaluating multi-leg integrals, we show in Fig. 2 a scan of the 2m_t = 350 GeV threshold of the 4-dimensional scalar hexagon function for a realistic kinematical configuration. For details of the integration methods see [14,15].
CONCLUSION
To make reliable phenomenological studies for collider experiments operating at the TeV scale, 1-loop calculations with many external particles are mandatory. In this talk I have outlined recent developments concerning the analytic and numeric evaluation of 1-loop Feynman diagrams. Using reduction methods, a compact result for the 3-gluon 2-photon amplitude was presented. Concerning numerical methods, we have developed an approach to successfully integrate hexagon functions numerically. Merging and applying these techniques to more challenging situations is presently under study.
The economic consequences of attention-deficit hyperactivity disorder in the Scottish prison system
Background Attention-deficit hyperactivity disorder (ADHD) is highly prevalent amongst prison inmates, and the criminal justice system (CJS) likely bears considerable costs for offenders with ADHD. We aimed to examine the relationship between ADHD and health-related quality of life (HRQoL) and quality-adjusted life years (QALYs) amongst imprisoned adults, and to estimate the annual expenditure associated with ADHD status in prison. Methods An observational study was performed in 2011–2013 at Porterfield Prison, Inverness, United Kingdom (UK). The all-male sample included 390 adult prison inmates with the capacity to consent and no history of moderate or severe intellectual disability. Participants were interviewed using the Diagnostic Interview for ADHD in Adults 2.0. The Health Utilities Index Mark 3 (HUI3) was used to measure health status, and to calculate attribute-specific HRQoL scores and QALYs. Health service utilisation was obtained through inspection of prison medical records. Inmates with ADHD were compared with inmates without ADHD. Results Inmates with ADHD had significantly lower QALYs, with a clinically significant adjusted difference of 0.13. Psychiatric co-morbidity accounted for the variation of ADHD on the HUI3 emotion domain only. Medical costs for inmates with ADHD were significantly higher, and behaviour-related prison costs were similar to those of prisoners without ADHD, reflecting a low frequency of recorded critical incidents. Conclusions ADHD may directly contribute to adverse health and quality of life through cognitive and executive function deficits, and co-morbid disorders. The extrapolation of conservative cost estimates suggests that the financial burden of medical and behaviour-related prison care for inmates with ADHD in the UK is approximately £11.7 million annually. The reported cost estimates are conservative, as there is great variability in recorded critical incidents in prisons. In turn, for some prison establishments the prison care costs associated with prisoners with ADHD may be considerably greater.
Background
Among the general population, attention-deficit hyperactivity disorder (ADHD) confers a significant financial burden [1,2], and given its disproportionate prevalence in the prison population, the criminal justice system (CJS) likely bears considerable economic consequences for offenders with ADHD.
ADHD is a childhood-onset neurodevelopmental disorder [3] often persisting into adulthood. It is one of the most common mental health disorders in children, with recent prevalence estimates ranging between 5.9 and 7.1% [4,5]. There is recent evidence suggesting a late-onset form of ADHD, which will require further research to better understand its implications [6,7]. Clinically significant symptoms persist beyond childhood in 65% of cases [8], and may affect as many as 2.8-5.3% of adults worldwide [9,10]. Its substantial burden of disease is evidenced by an increased likelihood of serious accidents [11], earlier mortality [12], substance dependence [13], criminality, incarceration, and false confessions [14]. ADHD confers significant impairment [15] and reduced quality of life [16] on those afflicted by it. It is also highly prevalent amongst prison inmates, with a meta-analytical prevalence estimate of 25.5% [17]. Prison inmates with ADHD are reported to be at significant risk of increased psychiatric co-morbidity and poorer psychosocial adjustment to the prison environment [18][19][20][21].
Health economic evaluations have become an essential part of research and provide evidence supporting health interventions [22]. ADHD is consistently linked with substantially elevated costs and with significant economic burden on education [23] and health [2]. Annual service costs linked with ADHD are reportedly £670 million in the UK [24]. Meanwhile, annual ADHD-related healthcare costs are estimated between $21 to $44 billion in the United States [23]. Furthermore, in the US costs associated with accident claims are more than three times higher in adults with ADHD [1].
Despite the disproportionate representation of ADHD within the prison population, the health-related quality of life (HRQoL) and related costs of affected prisoners remain unknown.
In this study we aim to examine the impact of ADHD amongst imprisoned adults. We set out to determine prisoners' 1) scope and extent of impaired HRQoL utility scores and quality-adjusted life years (QALY), and 2) service use and costs attributable to ADHD.
Participants and sample selection
Following approval from the Scottish Prison Service Research Access and Ethics Committee, and in accordance with the Declaration of Helsinki, written consent was obtained from prisoners who were recruited by opportunity sampling from Porterfield Prison, Inverness, Scotland, UK, over a period of 18 months in 2011-2013. Participants included 390 adult male prisoners who consented to participate. Those with moderate or severe learning difficulties, lack of fluency in the English language, or severe mental illness (as judged by prison officers) were excluded from participating.
Participants in the study were indirectly compensated. The study group deposited £20 per participant into a Prison Common Good Fund, which was managed by a group of prisoners. The fund was then used to purchase items for the common good of all prisoners to enhance prison life.
Prisoners who indicated interest attended an appointment with the researcher where they were given detailed oral and written information about the study and the consent procedures. After obtaining written consent, researchers administered a comprehensive battery of measures, which took approximately 4 h to complete (usually split across 2 or 3 sessions). The researchers received comprehensive training to administer the measures from the Maudsley Hospital Adult ADHD Service. Further details about the comprehensive battery of measures have been published elsewhere [25].
Data related to medical service use were gathered through inspection of prison medical records, and medical costs were calculated based on reference costs reported by the NHS (see details below). Data related to behavioural disturbance incidents were gathered through inspection of prison records, and related costs were calculated based on similar reference costs and reported as prison costs (see details below).
Health utilities index mark 3 (HUI3)
The HUI3 is a multi-attribute health status classification system that enables researchers to map levels in the following categories: vision, hearing, speech, ambulation, dexterity, emotion, cognition, and pain, using decision tables and coding algorithms, which can be represented in terms of attribute-specific HRQoL scores [26,27]. HRQoL refers to the value assigned to life span when considering impairments and functional states that may be affected by disease, injury, and treatment [28]. The HUI3 scoring system provides HRQoL utility scores ranging from 0.00 (dead) to 1.00 (perfect health), and meets criteria for calculating QALY [26]. Prisoners were asked to answer HUI3 questions based upon their health status in the 4 weeks prior to the interview. The HUI3 composite score was used to calculate QALY and was extrapolated to 1 year to represent the study health evaluation time frame, as previously applied in cost-effectiveness studies [29]. Estimating beyond this time frame would have introduced a very high degree of uncertainty into the estimates.
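As a minimal sketch of the calculation just described: the one-year QALY is simply the HUI3 composite utility multiplied by the duration in years. The utility values below are illustrative placeholders, not study data.

    # QALY over the one-year evaluation window: HUI3 composite utility x years lived at it.
    def qaly(hui3_composite, years=1.0):
        return hui3_composite * years

    # Hypothetical composite scores for illustration only:
    print(qaly(0.72))               # 0.72 QALY for one year at utility 0.72
    print(qaly(0.72) - qaly(0.59))  # a 0.13 utility gap equals a 0.13 QALY difference over 1 year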
ADHD diagnosis
All participants underwent a comprehensive evaluation for ADHD and were interviewed using the Diagnostic Interview for ADHD in Adults 2.0 (DIVA-2) [30]. The DIVA-2 is a validated semi-structured clinical interview used to diagnose ADHD in adults based on the 5th edition of Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria [3]; and has been used in clinical [31] and law enforcement settings [32]. Questions addressed their current and childhood (ages 5 to 12) presentation of ADHD symptoms and scope of impairment.
Participants were also questioned whether they were previously diagnosed or treated for ADHD or any other psychiatric illness.
Brief symptom inventory (BSI)
The Brief Symptom Inventory (BSI) is a brief psychological self-report scale [33]. The BSI has 9 subscales (Somatization, Obsession-compulsion, Interpersonal sensitivity, Depression, Anxiety, Hostility, Phobic anxiety, Paranoid ideation and Psychoticism), and 3 composite measures (Global Severity Index, Positive Symptom Distress Index, and Positive Symptom Total). We used the BSI depression and anxiety measures as covariates in our health evaluation models because they represent common mental health conditions.
Medical service use and costs
Detailed medical service utilization history was obtained through inspection of participants' prison medical records. Data from prisoners' medical charts (covering the 3 months prior to the appointment with the researcher) were abstracted, verified, and entered into a database for analyses. The authors chose to include 3 months of service for practical reasons, and additionally considered that this time period fairly represented the medical service use of all prisoners, given the variance in prison stays. Data included details from appointments with a general practitioner, physical health nurse, mental health nurse, addiction services nurses or any other type of nurse, psychiatrist, psychologist, podiatrist, oral health practitioner, any other type of health-related visit such as a Well-man clinic or other health clinics, and hospital outpatient visits. Medical costs for these appointments were calculated according to reference costs reported by the NHS Trust [34]. Medication costs were not explicitly collected in the study.
Prison service use and costs
Prisoners' behavioural disturbance incidents were obtained from prison records. Reports of non-attendance to prison activities, being under observation, number of adjudications, and critical incidents were collected and used to calculate the related prison costs. Prison costs were calculated based upon reference costs from the UK Ministry of Justice and HM Prison Service [35], Social Research Unit, Dartington [36], and from direct communication with Scottish Prison Service management.
All reported costs were in Pounds Sterling (£) for the years 2012-2015, and adjusted using the Consumer Price Index (CPI, 2016).
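A minimal sketch of that adjustment (the index values below are placeholders, not the official UK CPI series): a cost recorded in year Y is inflated to 2016 prices by the ratio of the two CPI levels.

    # Inflate a cost recorded in a given year to 2016 prices using CPI levels.
    # These index values are illustrative placeholders, not official UK CPI data.
    CPI = {2012: 96.4, 2013: 98.2, 2014: 99.6, 2015: 100.0, 2016: 101.0}

    def to_2016_prices(cost, year):
        return cost * CPI[2016] / CPI[year]

    print(round(to_2016_prices(590.0, 2013), 2))  # a 2013 cost of £590 expressed in 2016 prices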
Analytical strategy
Frequencies were reported for all categorical variables, and means with their standard deviations for continuous variables. The median and inter-quartile range were used for all cost-related values.
Because of the HUI3 utility scores' interval properties, we used t-tests for unadjusted analyses. To estimate the association between ADHD and HUI3 single-attribute and composite utility scores, Type I Tobit models were used in favour of traditional ordered logistic regression models. A Tobit model is designed to estimate linear relationships between variables when there are ceiling or flooring effects on the outcome [37]. Ignoring the censoring and fitting regression models by OLS would have produced estimates systematically biased toward the null hypothesis, thereby increasing type II error. HUI3 single-attribute and composite utility scores with a value of one are considered censored.
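A Type I Tobit with a ceiling is not part of the mainstream Python regression APIs, so the sketch below writes out the likelihood directly. It is a minimal illustration of the model described here (latent normal outcome, observations at the ceiling of 1.00 treated as right-censored), not the study's own code, and the variable names are hypothetical.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def tobit_right_censored(y, X, upper=1.0):
        # Type I Tobit with a ceiling at `upper` (here HUI3 utility = 1.00).
        # Latent model: y* = a + X b + e, e ~ N(0, s^2); observed y = min(y*, upper).
        y = np.asarray(y, dtype=float)
        X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
        cens = y >= upper                              # observations at the ceiling

        def negloglik(params):
            b, s = params[:-1], np.exp(params[-1])     # log-sigma keeps sigma > 0
            xb = X @ b
            ll = norm.logpdf(y[~cens], loc=xb[~cens], scale=s).sum()
            ll += norm.logsf(upper, loc=xb[cens], scale=s).sum()  # P(y* >= ceiling)
            return -ll

        start = np.append(np.zeros(X.shape[1]), 0.0)
        fit = minimize(negloglik, start, method="BFGS")
        return fit.x[:-1], np.exp(fit.x[-1])           # coefficients, sigma

    # e.g. beta, sigma = tobit_right_censored(hui3_scores, np.column_stack([adhd, age]))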
Considering the highly skewed nature of the cost variables, we used generalised linear models (GLM) with a gamma distribution and log-link function. In this way the natural log of costs is modelled, and predicted margins are then calculated in order to obtain the cost differential for those with ADHD [38]. All cost models were adjusted only for age. The HUI3 includes an emotion domain that may be sensitive to coexisting disorders (in addition to ADHD). Therefore, models for all HUI3 variables were further adjusted for co-morbid anxiety and depression standardized symptom scores.
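A minimal statsmodels sketch of this cost model is given below. The data frame `df` and its columns are hypothetical stand-ins for the study data, and the "predicted margins" step is implemented by recycled predictions (predicting everyone as ADHD versus everyone as non-ADHD and averaging).

    import statsmodels.api as sm

    # df is an assumed pandas DataFrame with columns 'cost', 'adhd' (0/1) and 'age'.
    X = sm.add_constant(df[["adhd", "age"]])
    res = sm.GLM(df["cost"], X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    # Predicted-margins cost differential: set adhd = 1 for all, then adhd = 0 for all.
    X1, X0 = X.copy(), X.copy()
    X1["adhd"], X0["adhd"] = 1, 0
    diff = res.predict(X1).mean() - res.predict(X0).mean()
    print(f"Age-adjusted cost differential for ADHD: £{diff:.0f}")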
We established a significance level at p ≤ 0.05 for all statistical tests. All analyses were performed using Stata version 13 (StataCorp) [39].
Descriptive statistics
The all-male sample was essentially Caucasian British (99.0%), with an average age of 30.3 years (sd 8.3). Prisoners with ADHD had a significantly lower mean age than those without ADHD (28.2 years (sd 7.5) vs. 31.0 years (sd 8.5), p < 0.01). 18.8% (18/96) of prisoners with ADHD reported a prior diagnosis of ADHD, and 15.6% (15/96) reported having ever received pharmacological treatment for ADHD.
Out of the total sample of 390 participants, 81 (20.8%) required assistance with reading the questionnaires. Among those diagnosed with ADHD, 31/96 (32.3%) required assistance, in contrast to 50/294 (17.0%) of the other participants. This difference is significant (χ2(1) = 10.2, p = 0.001; odds ratio = 2.3, confidence interval 1.3-3.9). Table 1 includes the mean and distribution of all HUI3 specific-attribute and composite HRQoL utility scores for all inmates. Prisoners' variability noticeably increased in the scores for emotion, cognition, pain, and HRQoL.
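The counts given above are enough to reproduce these statistics; a minimal sketch (uncorrected chi-square, as the reported value implies; small rounding differences aside):

    from scipy.stats import chi2_contingency

    # Rows: ADHD / no ADHD; columns: needed reading assistance / did not.
    table = [[31, 96 - 31],
             [50, 294 - 50]]
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    odds_ratio = (31 * (294 - 50)) / ((96 - 31) * 50)
    print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}, OR = {odds_ratio:.1f}")
    # -> chi2(1) = 10.3, p = 0.001, OR = 2.3, in line with the reported figures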
Health status
Independent sample t-tests were estimated for all utility scores and HRQoL (Table 2) comparing prisoners with ADHD with prisoners without ADHD. Inmates with ADHD had significantly lower scores in the following categories: speech (p < 0.05), ambulation (p < 0.01), emotion (p < 0.001), cognition (p < 0.001), pain (p < 0.05), and HRQoL composite (p < 0.001). Figure 1 shows the distribution of HRQoL utility scores comparing prisoners without ADHD with those with ADHD. Table 3 demonstrates that even after adjustment for age, anxiety, depression, and/or without correction for missing values, that censored Tobit models were significant in each adjusted model for vision, ambulation, emotion, cognition, and QALY. The inclusion of anxiety and depressive disorders in models 2 and 3, attenuated the associations with emotion, hearing, and pain attributes. Attenuation of the association with hearing and pain should be interpreted with caution, as their unadjusted effect sizes were small. Models 2 and 3 show that prisoners with ADHD have a significant difference in QALY of 0.13 and 0.10, compared to those without ADHD. 25% of all participants had missing values by endorsing 'Don't Know' on several questions of the HUI3. Patterns of missing values were analysed and the most plausible values were imputed using a technique developed by Naeim and colleagues specifically for HUI-3 scores [40].
All 41 questions of the HUI3 instrument allow respondents to answer 'Don't Know'. Because there are no instructions in the instrument manual on how to manage or score these answers, the 'Don't Know' category interferes with the scoring, leading to substantial amounts of missing data. Common methods for imputing data in this scenario may not be effective or can even be misleading, given that answers to other questions within the same domain (e.g. vision) often help identify a sole correct answer to those marked as 'Don't Know'. The well-cited imputation method by Naeim et al. [40] advises inspecting each possible change in attribute score for every answer to the 'Don't Know' missing value, then selecting the most plausible value accordingly.
Furthermore, we performed a sensitivity analysis based only on those with complete HUI3 data, to examine any differences in estimates before and after using the inspection-and-deduction method to account for 'Don't Know' answers. Table 4 includes all cost model inputs for the total median medical and prison costs for all inmates. Table 5 shows that, in terms of medical service utilisation, prisoners with ADHD made significantly more visits to general practitioners (p < 0.05), physical health nurses (p < 0.05), and mental health nurses (p < 0.01) in the three-month period assessed. No significant associations were observed for any other health services. Table 6 shows that age-adjusted medical costs were significantly greater among inmates with ADHD (p < 0.05), but prison costs were not. [Table 6 note: findings from generalised linear models using a gamma error distribution and log link function, adjusted for age; *p < 0.05, **p < 0.01.] Cost items were assessed over a 3-month window, then scaled to 1 year assuming similar patterns of health service utilisation and behaviour in prison. Total medical and prison costs for inmates with ADHD were £590 more per year than for inmates without ADHD.
HUI3 health attributes and QALY
To the best of our knowledge, our study is the first addressing ADHD health status using the HUI3 amongst prison inmates. Previous studies documented the relationship between symptom severity and poorer HRQoL, including somatic symptoms [41], whereas a UK cross-sectional study reported that, across most health domains, children and adolescents with ADHD had poorer scores when compared with samples of children with diabetes and a healthy comparison group [42]. These studies highlight the extent of the health impact that ADHD has on affected individuals. We analysed the role of ADHD on QALY based on a one-year horizon. Notably, the proportion of inmates with a HRQoL over 0.90 (healthy state) was vastly superior amongst those without ADHD. The final adjusted model that accounted for psychiatric co-morbidity produced a 0.13 difference in QALY, four-fold above the 0.03 clinically relevant threshold estimated by the instrument developers [26]. QALY based on inmates' one-year health utility scores were significantly lower for those with ADHD than for those without ADHD. Poorer specific health attribute scores on vision and mobility indicate that inmates with ADHD have significantly compromised health states that go beyond those more usually expected from the disorder (such as emotion and cognition). Furthermore, health utility models adjusted for psychiatric co-morbidity accounted for the variation of ADHD on the emotion attribute, but not on the cognition attribute, providing an important insight into the factors contributing to impairment amongst inmates with ADHD. The significantly poorer vision score among the ADHD group may relate to their reading difficulties: in the present study, those diagnosed with ADHD were over two times more likely to require assistance with reading the questionnaires than the other participants. With respect to mobility, the finding that prisoners with ADHD have significantly poorer ambulation may reflect that prisoners with ADHD suffer more injuries that hinder their mobility compared with non-ADHD prisoners. Data obtained from the Danish registry showed that the mortality rate is nearly three times higher for those with ADHD, and that 77.7% of unnatural deaths were accounted for by accidental injury [12]. Additionally, given the higher rates of aggression and violence in the ADHD population [43], there may be mobility problems arising from assault.
ADHD is frequently reported to be associated with a substantial reduction in the quality of life of children [44] and with increased chronic health problems in adults [45]. Study results indicate that ADHD impacts HRQoL with severe effects in emotional and social domains, and at least moderate effects in physical domains [46,47]. Adult inmates in our sample had an unadjusted HRQoL of less than 0.60. It is likely that undiagnosed and untreated ADHD has a cumulative effect and increases the risk for further health impairments, especially among imprisoned adults with coexisting mental health and social problems. There is evidence to suggest that poor HRQoL in individuals with ADHD may be driven by the existence of co-morbid conditions [48]. In our study, although co-morbidity played a role in the impact of ADHD on HRQoL, the association is not entirely explained by coexisting psychiatric symptoms of anxiety and depression. Moreover, there was no attenuation on the association with the cognitive attribute of the HUI3 on adjusted models, suggesting a domain-specific link. Cognitive dysfunction in the form of difficulties allocating attentional resources [49], response inhibition, and management of reward are hallmarks of the ADHD phenotypic expression. These results denote different paths through which ADHD may impact adverse health and quality of life, directly through cognitive deficits and via co-morbid disorders. We therefore provide evidence of domain-specific and shared contributions to impaired HRQoL in ADHD.
Service use and costs
Health economic studies on the general population report that ADHD (including symptoms of hyperactivity) is associated with significant economic burden [1,2]; however, studies focusing exclusively on the economic impact of ADHD on adult prisoners were not identified.
A US study of disability claims reported that patients with ADHD had 2.6 more medical claims than those without ADHD and that ADHD imposed a significant financial burden [1]. A recent prospective UK study reported that preschoolers with high levels of hyperactivity had a 17-fold increase in overall costs compared with non-hyperactive controls; costs were mainly driven by mental health, educational, social, and criminal justice system service use [2]. A Danish study reported that the direct medical costs of ADHD patients were relatively high, whereof mental care and inpatient hospitalizations accounted for approximately 60% of the costs and medication use accounted for 13% [50]. Results of one study demonstrated that public costs (due to mental health, school services, and the juvenile justice system) are more than double for youth with ADHD compared with those without ADHD [51].
Hospital inpatient stays are a significant driver of costs attributable to ADHD. A retrospective analysis over a 9-year period reported that median hospital inpatient, hospital outpatient, or ED admission costs for individuals with ADHD were more than double those for individuals without ADHD [52]. Pharmacotherapy costs are also a large part of medical costs attributable to ADHD: medication costs were reported to account for about 13-38% of total costs [1,2,24,50,52,53]. Psychological therapy (individual or group modalities) is often another common and important driver of costs, which was essentially not utilised by the participants of our study. Our total estimated annual cost of £590 per inmate with ADHD demonstrates that the costs attributable to ADHD are relatively high. But because our estimate did not include costs for hospital stays, medication, and/or psychological treatment, the total cost estimate represents a conservative figure.
In our study, costs associated with ADHD were driven by increased medical service use and not by behavioural disturbance incidents. This may indicate that costs related to behavioural incidents were distributed more generally across the prison sample and driven by many factors besides a diagnosis of ADHD. Service utilisation patterns were restricted to general medical and nursing services. Low endorsement of engagement with these and other services may have been a true reflection of the patterns of use in our sample, or of the Scottish prison system at large. As many resources were not used, costs remained lower than in the other studies mentioned.
Because the present study found significantly greater medical costs but not behaviour-related prison costs, the cost implication seems to fall largely on the NHS. While the assignment of prisoner medical costs based on NHS reimbursements may not perfectly represent prisoners' medical costs (which may be over- or under-estimated), it helped to estimate and interpret costs in standard, widely used and familiar terms. There may, however, be variability in the recording of critical incident data, which is a fundamental driver of prison costs, leading to an increased number of seclusions and adjudications, injury costs, and potentially staff sickness. A previous study conducted in a large prison in Aberdeen found highly significant differences in aggressive critical incidents between an ADHD and a non-ADHD group [43]. Hence, for some prison establishments, costs to the prison service may be considerably higher.
Limitations
A key strength of the study is its large sample size and a methodology in which every participant was clinically diagnosed using the DIVA-2. Nonetheless, there are several limitations.
Because of missing data, some bias may be present in our analyses of HUI3 specific attribute scores. However, our models accounted for missing data using a well established and oft cited method and the sensitivity analysis on adjusted models allowed us to have confidence in our methods.
Ethnic minority groups and females were not represented in this sample; therefore, it remains unclear whether these findings are fully applicable and generalizable to the entire prison population. The ADHD diagnosis was based on self-reported information, and we did not include informant (e.g. familial) reports. Recall bias is unaccounted for and may have been a factor in the symptom measures and service use. Nevertheless, any bias related to under-reporting was presumed to have similar effects on estimates for both the ADHD and non-ADHD groups. Other studies have reported considerably higher rates of critical incidents [43,54], and prison costs based on these would likely be considerably inflated compared with the estimates derived from the present data.
Our extrapolation method (using 3 months of data to estimate 1 year) may be limited in its accuracy. We used a one-year horizon for our HRQoL and service use estimates, and more time than this would have conferred too much uncertainty. Future research should address measuring utilities over more time, thereby providing a better foundation of QALY estimates beyond 1 year. Finally, the opportunity sampling method used may have introduced selection bias into the results, limiting their generalizability both within the prison and across other prisons.
Conclusions
Research on HRQoL and costs related to adult ADHD is limited in the general population and is virtually non-existent in the prison population. We addressed this paucity of data on HRQoL, QALY, service utilization, and costs attributable to ADHD based on 1 year in prison. We performed HRQoL and cost analyses for adult prison inmates with ADHD based on a cross-section of the Scottish prison system in the UK.
Our study provides evidence that HRQoL is considerably poorer in adult male prison inmates with ADHD, with an adjusted reduction of 0.13 QALY. The affected health attributes extend beyond emotional and cognitive deficits, suggesting chronic effects of ADHD on health over the lifespan. ADHD may contribute to adverse health and quality of life directly, through executive function and cognitive deficits, and via co-morbid disorders. Combined costs within prison were significantly higher for those with ADHD and were driven by medical expenses. Service utilisation was for the most part limited to general practitioner services and nursing staff visits.
Approximately 80% of inmates considered to have ADHD did not receive a prior diagnosis, indicating that a significant proportion of adult prison inmates are inadequately identified and treated. This has policy implications for both the National Health Service and the prison service. There is a need for the prison service to develop improved awareness of ADHD in adult prisoners, including the clinical and behavioural presentation of ADHD. There is also a need to introduce a brief and reliable screen on admission, such as the 6-item B-BAARS, which has high sensitivity and specificity [25]. Furthermore, there is a need for the NHS to address the general absence of health service provision for adults with ADHD in prisons, as prisoners continue presenting multiple times for their health problems and seem to remain mis- or undiagnosed.
In 2015 the Ministry of Justice reported a population of 77,472 adult male inmates in the UK. Given the prevalence rate of 25.5% of ADHD among prisoners [55] and our estimated annual total cost per adult inmate with ADHD of £590, we estimate a total cost for medical and behaviour-related prison care of approximately £11.7 million per year. This cost estimate, however, is conservative as it is seemingly driven by general medical expenses and not by critical incidents. There may be variability in the reporting of critical incidents in prisons, and prison care costs associated with behavioural disturbances may be much higher in other establishments.
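The extrapolation is a straightforward product of the three reported inputs:

    # National extrapolation from the reported inputs (2015 male prison population,
    # meta-analytic ADHD prevalence, and the per-inmate annual cost from this study).
    inmates, adhd_prevalence, annual_cost = 77472, 0.255, 590
    total = inmates * adhd_prevalence * annual_cost
    print(f"£{total/1e6:.1f} million per year")   # -> £11.7 million per year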
ADHD is a prevalent mental health disorder, and a known risk factor for a series of adverse health and social outcomes. Population studies report the community (and society at large) bears considerable medical costs associated with ADHD [22,23]. Although ADHD is disproportionately prevalent in prison, it is understudied and inadequately addressed in this context [55,56].
Our results provide evidence that adult prisoners with ADHD represent a unique population with unmet needs and high costs. Given the Swedish study of patients showing a 32% reduction in criminality for men and 41% for women during periods when they were receiving ADHD medication [57], effective identification and treatment of ADHD may have important cost implications.
We recommend directing efforts to increase access to effective interventions for adult inmates with ADHD. Setting up provisions for better access to early diagnosis and treatment is likely to improve inmates' HRQoL and decrease impairment related to ADHD symptoms and associated co-morbidities.
Acknowledgements
We are grateful to Mr. Gordon Morrice, the Scottish Prison Service, and staff at Porterfield Prison in Inverness for their support of the study. We thank Laura Mutch and Isabella Mallet-Lambert for data collection. This study was supported by a grant from Shire Pharmaceutical Development Limited.
Funding
The study was supported by Shire Pharmaceutical Development Limited through a restricted grant. Shire had no role in the design and conduct of the study (collection, management, analysis, and interpretation of the data) or on the preparation, review, or approval of the manuscript, and the decision to submit the manuscript for publication. The research was also supported by the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre.
Availability of data and materials
The datasets used and analysed during this current study are available from the corresponding author upon reasonable request.
Authors' contributions
SY, RG, MF, and GG led the planning and scientific input of the study. RG conducted the statistical analysis and wrote the first draft with input from SY and GG. KK critically edited the data tables, figures, and manuscript, and wrote the final draft. All authors have read and approved the final manuscript.
Ethics approval and consent to participate
Research was performed in accordance with the Declaration of Helsinki and was approved by the Scottish Prison Service Research Access and Ethics Committee (reference: 7/13/10/10). Written consent to participate was received from each prisoner.
Consent for publication
Not applicable.
Competing interests
SY has received honoraria for consultancy, travel, educational talks and/or research from Janssen, Eli Lilly, HB Pharma, and/or Shire. MF consulted for Amgen, CSL Behring, Merck, Novo Nordisk, Shire, and Vertex. GG and RG have no conflicts of interest. PH was an employee of Shire working on ADHD projects from 2009 to 2013. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. Aside from KK, none of the authors received funds for their involvement in this manuscript.
Alcohol reduces aversion to ambiguity
Several years ago, Cohen et al. (1958) demonstrated that under the influence of alcohol drivers became more risk prone, although their risk perception remained unchanged. Research shows that ambiguity aversion is to some extent positively correlated with risk aversion, though not very highly (Camerer and Weber, 1992). The question addressed by the present research is whether alcohol reduces ambiguity aversion. Our research was conducted in a natural setting (a restaurant bar), where customers with differing levels of alcohol intoxication were offered a choice between a risky and an ambiguous lottery. We found that alcohol reduced ambiguity aversion and that the effect occurred in men but not women. We interpret these findings in terms of the risk-as-value hypothesis, according to which, people in Western culture tend to value risk, and suggest that alcohol consumption triggers adherence to socially and culturally valued patterns of conduct different for men and women.
INTRODUCTION
Several years ago, Cohen et al. (1958) demonstrated that drivers became more risk prone under the influence of alcohol. Surprisingly, however, the drivers' risk perception remained unchanged. This pattern suggests that the increase of risk acceptance when under the influence of alcohol is not an effect of changes in perceptions of outcomes' probabilities, but rather is caused by a change in the evaluation of outcomes' attractiveness. We might, then, ask how an increase in the attractiveness of an outcome would occur.
Indeed, several studies suggest that it is an increase in outcome desirability, rather than an increase in the perceived feasibility of the outcome, that is responsible for the heightened propensity to take risks under the influence of alcohol. For example, Sevincer and Oettingen (2014) showed that alcohol intoxication increased the desirability (incentive value), but not the feasibility, of participants' important goals. Similarly, Lane et al. (2006) found that alcohol increased individuals' sensitivity to consequences (gains and losses), but not their expectancy updating rate. Steele and Josephs (1990) explain changes in behavior under the influence of alcohol by referring to cognitive processing impairment. They claim that alcohol leads to "myopia", i.e., a narrowing of attention and a focus on the most salient features of the situation. According to these authors, in real-life risky situations (sexual behavior, dangerous driving, etc.) the salient cues concern gains, while the likelihood of losses is less salient. However, our explanation of these results, as well as those of Cohen et al. (1958), is different: motivational rather than cognitive.
One significant reason for such a change may be that in many social contexts risk itself is considered to be of value. Indeed, in most cultures courage is considered a virtue. In line with this argument, Brown (1965) formed the hypothesis that moderate risk is valued in Western culture and that people shift toward risky decisions to gain approval from other members of their group. According to this hypothesis, people would also tend to perceive themselves to be more risk seeking than their peers. Following this hypothesis, Levinger and Schneider (1969) found that college students considered higher levels of risk to be more admirable than the levels that they had accepted in their own previous decisions. Thus, the findings of Cohen et al. (1958) could be interpreted as indicating that alcohol consumption triggers adherence to socially and culturally valued patterns of conduct, and leads to a real-life increase in willingness to take risks. Ellsberg (1961) described a phenomenon known as ambiguity aversion. Ambiguity aversion differs from risk aversion. Risk aversion refers to the preference for receiving an amount smaller than the expected value of a lottery for certain, rather than playing the lottery itself. Ambiguity aversion refers to the preference for situations containing precisely defined probabilities of possible states of nature over situations involving undefined probabilities. Research shows that ambiguity aversion is to some extent positively correlated with risk aversion, though not very highly (Einhorn and Hogarth, 1986; Camerer and Weber, 1992).
The main question addressed in the present research is whether alcohol reduces not only risk aversion but also ambiguity aversion. Based upon the previous finding that increased risk acceptance under the influence of alcohol is not an effect of a change in perceptions of probability of outcomes, but rather an effect of a change in evaluations of attractiveness of outcomes, we hypothesized that under the influence of alcohol people will not only be less risk averse, but will also be less ambiguity averse.
It should be noted that the "risk as value" theory, on which the current hypothesis is based, uses the concept of risky behavior in the colloquial sense, referring to courage in decision making under uncertainty. Contrary to its name, it does not refer specifically to situations of risk (understood as a combination of probabilities and outcomes), but rather, broadly, to choices under conditions of uncertainty, ambiguity or risk. Consequently, our hypothesis says that under the influence of alcohol people will choose the option that is less certain but more attractive in terms of potential outcomes.
Another question addressed in the present research concerns a possible gender difference in alcohol's influence on ambiguity aversion. There is substantial evidence that women and men differ in risk taking (Byrnes et al., 1999;Cross et al., 2013). Moreover, stereotypically, similarly, to competitiveness and dominance, risk taking is considered to be a masculine trait (as measured by Bem, 1974). For example, Wilson and Daly (1985) concluded from their literature review that risk taking is a central characteristic of the psychology of men. Thus, assuming that the increase of risk acceptance under the influence of alcohol results from risk being valued in Western culture, we formed the hypothesis that alcohol decreases ambiguity aversion more in men than in women.
PARTICIPANTS
One hundred participants, 46 women and 54 men, took part in the study. Their ages ranged from 18 to 43, with mean age M = 26.3 years, SD = 5.35 years. Most participants (n = 66) were educated to university degree level, 33 declared a high school education, and 2 declared lower than a high school education.
TASK AND PROCEDURE
The study was conducted individually in a restaurant which was part of a large leisure center (see Footnotes 1 and 2 at the end of this section). It was carried out in the evenings between 9 p.m. and midnight. To obtain reliable measures of people's blood alcohol levels, the time elapsed since the last drink or cigarette had to be at least 20 minutes. A precision Breathalyzer Alkohit X100 was used to measure blood alcohol levels. One of the experimenters approached a restaurant visitor and told them that he and the other experimenter represented a Research Centre and that they were conducting a study examining how accurately people estimate their own blood alcohol level. The participant was then told that, as compensation for participation in the study, they would be offered the possibility of winning free drinks. If a person expressed willingness to participate, they were invited to a separate room where the experiment was carried out. In the experimental room, the second experimenter executed the following procedure: (1) Participants provided demographic information concerning gender, age and education. They then estimated their blood alcohol level, choosing one of six intervals: 0-0.2‰, 0.2-0.5‰, 0.5-1.00‰, 1.00-1.50‰, 1.50-2.00‰, and above 2.00‰. (2) Then the experimenter gave participants a cup of water and asked them to rinse their mouth carefully (to remove any residual alcohol).
(3) Next, participants blew into the breathalyzer until it produced a sound signaling completion of the blood alcohol measurement. (4) Finally, participants completed a task in which they could win free drinks. They saw two urns, both labelled. The label on one informed them that there were 30 coupons inside, of which 15 were vouchers for one free drink to use in the bar and the other 15 were empty cards (the customer did not win anything). The label on the second informed participants that the urn contained 30 coupons, of which some were vouchers for two free drinks to use in the bar and some were empty cards (the customer did not win anything); however, the numbers of the two types of coupons were unknown to participants. The former urn was thus an unambiguous urn, offering a 50/50 chance of winning a free drink, and the latter was an ambiguous urn, offering a chance of winning a higher prize (two drinks), but with an unknown probability of success.
Footnote 1: We were looking for a naturalistic setting for our experiment. A drinking bar, where individuals decide for themselves to consume alcohol, seemed a convenient setting. Moreover, by conducting the study in a drinking bar we were able to avoid so-called "demand characteristics", i.e., participants interpreting the experiment's purpose and changing their behavior to fit that interpretation. We believe that the cover story of our procedure provided participants with a convincing justification, and thus they expressed their true preferences.
Footnote 2: The research was approved by the Commission of Ethics in Research at the Kozminski University.
Thus, we measured: subjectively estimated blood alcohol level, real (objectively measured) blood alcohol level, and the choice between risky vs. ambiguous options.
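One property of the urn design is worth spelling out. Assuming a symmetric (e.g. uniform) prior over the ambiguous urn's unknown composition, a normalization the paper does not state explicitly, the ambiguous urn actually has the higher expected prize:

    # Expected number of free drinks per draw under the stated design.
    p_risky = 15 / 30              # unambiguous urn: 15 winning coupons out of 30
    ev_risky = p_risky * 1         # one free drink on a win -> 0.5 drinks

    # Ambiguous urn: composition unknown. Under a uniform prior over 0..30 winning
    # coupons the expected win probability is 0.5, but the prize is two drinks.
    ev_ambiguous = 0.5 * 2         # -> 1.0 drink
    print(ev_risky, ev_ambiguous)  # 0.5 vs. 1.0

So a participant who avoids the ambiguous urn is forgoing expected value, which is what makes the choice a measure of ambiguity aversion rather than simple prize maximization.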
RESULTS
Real blood alcohol levels and subjectively estimated blood alcohol levels were significantly positively correlated (Spearman's rho = 0.48, p < 0.001, n = 100). Thus, participants were moderately good at estimating their real blood alcohol levels. Choices between the risky vs. ambiguous options did not differ across subjectively estimated blood alcohol levels.
Participants were divided into three groups depending on their real blood alcohol level: low (up to 0.5‰, n = 32), medium (0.51-1.00‰, n = 39), and high (above 1.00‰, n = 29). As Figure 1 shows, there was a relationship between blood alcohol level and preferences for the risky vs. ambiguous options. Those with higher blood alcohol levels chose the ambiguous option more often than those with low alcohol levels, χ2(2, n = 100) = 6.77, p = 0.03.
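The per-group choice counts are not reported in the text, so the counts below are hypothetical, chosen only to illustrate how a χ2 test on such a 3 x 2 table is computed (the paper's own table yields χ2(2) = 6.77):

    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows = blood alcohol group (low/medium/high, n = 32/39/29),
    # columns = (ambiguous choice, risky choice). Not the study's actual data.
    table = [[10, 22],
             [19, 20],
             [19, 10]]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")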
We compared preferences for the risky vs. ambiguous options as a function of blood alcohol level separately for women and men. As Figure 2 shows, men with higher blood alcohol levels chose the ambiguous option more often than those with lower alcohol levels, χ2(2, n = 54) = 7.57, p = 0.02. On the other hand, as Figure 3 shows, more women with higher than with lower blood alcohol levels chose the risky option, yet a similar number of women chose the ambiguous option regardless of blood alcohol level. Thus, blood alcohol level did not change women's attitude toward ambiguity (χ2(2, n = 46) = 0.52, p = 0.77). This difference cannot be ascribed to differing blood alcohol levels in men and women. The average blood alcohol level was indeed slightly higher in the male than in the female sample (M = 0.89‰, SD = 0.50 vs. M = 0.73‰, SD = 0.43), but the difference was not statistically significant (Mann-Whitney U test, U = 993, Z = -1.92, p = 0.055). The difference in blood alcohol level was even smaller in the group of participants with the highest blood alcohol levels: M = 1.35‰, SD = 0.31 for women and M = 1.48‰, SD = 0.37 for men (Mann-Whitney U test, U = 77, Z = 0.97, p = 0.33).
DISCUSSION
The present study yielded two findings. First, it showed that in addition to the known tendency of people to become more risk prone when they consume alcohol, alcohol also reduces ambiguity aversion. Second, we found that the reduction of ambiguity aversion under conditions of alcohol consumption is more prominent in men than in women. We interpret these findings in terms of two presumptions. First, that alcohol consumption triggers adherence to socially and culturally valued patterns of conduct. Second, that people in Western culture tend to value risk (as suggested by the risk-as-value hypothesis). In line with this, we confirmed the hypothesis that alcohol consumption leads to more positive valuation of risk and courage, and, in effect, to more risky choices.
Interestingly, somewhat analogous results concerning willingness to engage in risky behavior have been observed in research based on terror management theory. In a nutshell, according to terror management theory, people's fear of death can be regulated through the maintenance of self-esteem. This in turn can be achieved by satisfying the norms of one's culture (Pyszczynski et al., 1997). In line with this idea, Hirschberger et al. (2002) showed that mortality salience induction led men, but not women, to report high willingness to engage in risky behaviors. This finding seems parallel to ours: both consumption of alcohol and mortality salience induction reduce risk aversion in men but not in women. Both of these findings seem to be in line with the premise that in Western culture men are socialized to be more risk-oriented than women.
Furthermore, one could ask how alcohol consumption would influence the willingness of people to engage in other behaviors related to social values. For example, there is evidence, that women are socialized to be more caring (Gilligan, 1982). One can speculate, then, that alcohol consumption would result in the increase of nurturing behavior in women but not in men. Of course, this possibility needs separate examination.
On the other hand, it is likely that alcohol consumption has no influence on behaviors that are unrelated to social or cultural norms. In particular, alcohol should not influence attitude toward ambiguity that is unrelated to uncertainty of outcome occurrence. For example, Weber and Tan (2012) showed that ambiguity aversion occurs not only in the context of risk, but also in intertemporal choices (delivery of a package either in an exact time or within a range of dates). Since, to our knowledge, there is no social norm concerning the value of time inaccuracy, alcohol consumption should not reduce ambiguity aversion in intertemporal choices.
Fiscal Decentralization , Corruption and Urban-Rural Income Inequality : Evidence from China
This paper starts from the fiscal federalism proposition that a higher degree of fiscal decentralization is associated with lower corruption and income inequality, and that the dynamic relationship among fiscal decentralization, corruption and income inequality is stronger in developing countries. Based on a panel dataset from 1999 to 2012 and focusing on China, this research shows that there is no simple linear relationship among fiscal decentralization, corruption and urban-rural income inequality; instead, the relationship between fiscal decentralization and urban-rural income inequality is more in line with a specific "U" shape, while the effect of corruption on urban-rural income inequality is gradually weakened as the reform of fiscal decentralization proceeds. This extends the existing research by Mah (2013) and Lessmann (2010).
Introduction
The second-generation theory of fiscal federalism pointed out that decentralization could effectively curb corruption (Oates, 2005; Bird, 2003; Lecuna, 2012 [1]), and this has been confirmed in most empirical studies (Albornoz, 2013 [2]; Lessmann, 2002 [3]; Fisman, 2002 [4]). Expenditure decentralization can strengthen competition among local bureaucracies and efficiently reduce the incidence of corruption by boosting market-oriented reform (Zhou, 2004; Blackburn, 2009 [5]). Studies on developing countries find that bureaucratic corruption is a key driver of income inequality, with a significant positive relationship between the two (Dincer, 2012 [6]; Bin, 2013 [7]). Especially in developing countries, the institutional root of income inequality is corruption formed by predatory rule and resource-allocation policies that substitute for the market (Dobson, 2012 [8]). Although there is extensive research on the relationship between fiscal decentralization and corruption, as well as on how corruption affects income inequality, these studies fail to reach a consistent conclusion when the relationship between fiscal decentralization and income inequality is examined directly. Most cross-country studies find that expenditure decentralization can narrow domestic income inequality (Gallo, 2011 [9]; Chen, 2009 [10]). However, studies focusing on developing countries often reach the opposite conclusion, or find no relationship between the two (Nayapti, 2006 [11]; Mah, 2012 [12]). According to Fan (2012), with the rapid transformation of economic structure and the promotion of market-oriented reform, there is a nonlinear relationship between fiscal decentralization and income inequality in developing countries. This corresponds to the research by Zhang (2006) [13] on China, which found that income inequality is affected by Chinese-style decentralization through two channels: regional governance capacity and investment expansion. Specifically, if local governments attach importance to the quality of governance, income inequality is reduced through the strengthening of the social service system; if not, income inequality is enlarged by the continuous expansion of productive fiscal expenditure (mainly urban infrastructure investment).
This study explores the effect of fiscal decentralization and corruption on urban-rural income inequality in China. More interestingly, we find a significant nonlinear relationship among fiscal decentralization, corruption and urban-rural income inequality. Currently, in order to reverse the long-term trend of rising urban-rural income inequality caused by bureaucratic corruption, the Chinese authorities are devoting themselves to reshaping the decentralization system; this experience may prove valuable for the development and transition of other developing countries. The rest of the paper is structured as follows: Section 2 describes the econometric methodology, Section 3 reports the empirical results, and the final section concludes.
Data and Methodology
This study employs a static panel model and the threshold panel model proposed by Hansen (2000) [14]. Specifically, the sample includes statistics for the 31 provinces of mainland China over the period 1998-2011. The specification of the static panel model is described by Equation (1), while the specification of the threshold panel model is given by Equations (2) and (3), in which fiscal decentralization and corruption, respectively, are selected as the threshold variables to describe their influence on urban-rural income inequality.
Specifically, in Equations (1), (2) and (3), for province $i$ in year $t$, $I_{it}$ denotes the urban-rural income inequality; $X_{it}$ denotes the matrix of other independent variables affecting $I_{it}$ apart from the threshold variables; $FD_{it}$ describes fiscal decentralization; $CO_{it}$ denotes the incidence of corruption; $G(\cdot)$ denotes the indicator function; $\gamma$ is the threshold value; $\alpha_i$ and $\mu_i$ are the entity-fixed effects; and $\varepsilon_{it}$ is the random error term, which satisfies the assumptions of the classical linear regression model.
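Equations (1)-(3) themselves are not reproduced in this text. For orientation, the standard Hansen single-threshold form, written with the symbols just defined, would read as below; this is a reconstruction of the usual specification, not necessarily the authors' exact equation, and the double-threshold variants used in the estimates split the sample at two values $\gamma_1 < \gamma_2$:

\[
I_{it} = \alpha_i + X_{it}\beta + \theta_1\, FD_{it}\, G(FD_{it} \le \gamma) + \theta_2\, FD_{it}\, G(FD_{it} > \gamma) + \varepsilon_{it},
\]

with an analogous equation, with fixed effect $\mu_i$, in which $CO_{it}$ replaces $FD_{it}$ as the threshold variable.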
In the estimation of the static panel model, 11 independent variables are incorporated: the incidence of corruption (CO), the decentralization of expenditure shares (FD), the size of government (GS), GDP per capita per year, the rate of urbanization (UR), the market index (MI), the proportion of consumptive fiscal expenditure (CFE), an openness indicator (OP), the proportion of the state-owned economy in industry (SOE), the social consumption rate (CI), and the industry softening coefficient (SIC). In China, CO is commonly proxied by the number of registered corruption cases among public servants per 10,000 people per province, while government size is estimated by the number of public servants per million population (Wu, 2010). The variable I is computed as the ratio of urban residents' disposable income to rural residents' net income. As for data sources, except for the incidence of corruption, which is obtained from the "Chinese Surveillance Yearbook" from 1999 to 2012, all variables come from the "Chinese Statistical Yearbook" of the corresponding year. In the estimation, stepwise regression is applied in order to remove the adverse effect of multicollinearity. Meanwhile, to avoid spurious regression and improve the validity of the parameter estimates, a logarithmic transformation is applied to both government size and GDP per capita per year. The estimation results for the static panel model are reported in Table 1.
For China, with its rapid economic growth, the threshold panel model proposed by Hansen (1998, 2000) makes it possible to estimate economic relationships under conditions of rapid regime transition. This study explores the variation of Chinese urban-rural inequality under a decentralization transition and a corruption transition, respectively. Specifically, under the decentralization transition the independent variables are GS, GDP per capita, UR and SOE, while under the corruption transition the independent variables are GS, MI, UR and SOE.
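Hansen's estimator amounts to a grid search over candidate thresholds: for each candidate $\gamma$, fit the regression with regime-specific slopes and keep the $\gamma$ that minimizes the sum of squared residuals. A minimal sketch follows (not the authors' code; data are assumed to be already within-demeaned to absorb the fixed effects):

    import numpy as np

    def hansen_threshold(y, X, q, trim=0.15):
        # Single-threshold estimate a la Hansen (2000): choose the gamma that
        # minimizes the SSR of the regression with regime-specific slopes on q.
        # y: outcome (I), q: threshold variable (FD or CO), X: other regressors;
        # the middle (1 - 2*trim) share of sorted q values serves as the grid.
        n = len(y)
        candidates = np.sort(q)[int(trim * n): int((1 - trim) * n)]
        best_ssr, best_gamma = np.inf, None
        for g in candidates:
            low = (q <= g).astype(float)
            Z = np.column_stack([X, q * low, q * (1 - low)])
            beta = np.linalg.lstsq(Z, y, rcond=None)[0]
            ssr = ((y - Z @ beta) ** 2).sum()
            if ssr < best_ssr:
                best_ssr, best_gamma = ssr, g
        return best_gamma

For a double-threshold model, the same search is repeated conditional on the first estimated threshold.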
Empirical Results
For all five models in Table 1, the Hausman test statistic significantly exceeds the critical value, rejecting the random-effects specification. UR, OP, SOE, CI and SIC all have significantly positive effects on I. Based on the estimates of Models 2 and 4 in Table 1, fiscal decentralization and urban-rural income inequality have a significant negative relationship, while the effect of GDP per capita on urban-rural income inequality is not clear. Moreover, based on the estimates of Models 1 and 3, the expansion of government size is conducive to reducing income inequality. When decentralization is not considered, a moderate degree of corruption appears to reduce income inequality; however, under the endogenous influence of corruption, the effect of governmental consumptive fiscal expenditure in reducing income inequality is weakened. Furthermore, the results show that market-oriented resource allocation intensifies income inequality, while this Matthew effect of the reform can be offset by the income-redistribution effect of fiscal decentralization.
In conclusion, Chinese fiscal decentralization, which aims to expand expenditure shares and improve the quality of local public goods supply, is conducive to balancing the incomes of urban and rural residents. However, given the different incentive effects of decentralization on the behavior of the government bureaucracy, the effect of corruption on urban-rural income inequality is not clear-cut, indicating complex interactions between fiscal decentralization and corruption during China's transition period. Hence, it is unreliable to assume simple linear effects of Chinese fiscal decentralization and corruption on urban-rural income inequality.
Table 2 and Table 3 confirm the nonlinear effect of fiscal decentralization and corruption on urban-rural income inequality. There is a "U"-shaped relationship between fiscal decentralization and urban-rural income inequality, with fiscal decentralization having two threshold estimates ($\gamma_1 = 0.011$ and $\gamma_2 = 0.028$). Simultaneously, a positive threshold effect is obtained between corruption and income inequality, with corruption incidence also having two threshold estimates ($\gamma_1 = 3.22$ and $\gamma_2 = 3.57$). Although the signs of the parameters show a positive relationship between corruption and income inequality, this positive relationship weakens sharply with the accumulation of corruption. Finally, the variables GS, UR, GDP per capita, MI and SOE prove to be the main contributors to this nonlinear change. Specifically, the elasticity of government expansion is the factor most conducive to narrowing the income gap, while marketization forces and the nationalization trend further contribute to the divergence of urban and rural incomes.
Conclusion
This paper explores the direct relationship among the decentralization of expenditure shares, corruption and urban-rural income inequality in China. Specifically, the study confirms the existence of nonlinear effects of decentralization and corruption on urban-rural income inequality. There are two further important findings. Firstly, against the background of the rapid transition of the Chinese economic system, the expansion of the state-owned economy has become one of the factors intensifying urban-rural income inequality and market monopoly. Secondly, although corruption is a social problem common to most developing countries, in China the effect of corruption on urban-rural income inequality is gradually weakening. In the future, the relevant policies should follow the idea of fiscal federalism to improve the quality of public goods supply through decentralization. Finally, the "U"-shaped path of urban-rural income inequality can then be crossed (see Figure 1 and Figure 2).
Figure 1. LR value for the threshold panel model (the ratio of disposable income between urban and rural residents as the dependent variable).
Figure 2. Relationship between fiscal decentralization, corruption, and income inequality in China.
Table 1. OLS estimation results for the static panel model.
Table 2. Estimation results for the threshold panel model (fiscal decentralization as the threshold variable).
Table 3. Estimation results for the threshold panel model (corruption incidence as the threshold variable).
Robotic-assisted radical prostatectomy learning curve for experienced laparoscopic surgeons: does it really exist?
ABSTRACT Background Robotic-assisted radical prostatectomy (RALP) is a minimally invasive procedure that could present a reduced learning curve for surgeons unfamiliar with laparoscopy. However, there is no consensus regarding the impact of previous laparoscopic experience on the learning curve of RALP. We report a functional and perioperative outcome comparison between our initial 60 cases of RALP and our last 60 cases of laparoscopic radical prostatectomy (LRP), performed by three experienced laparoscopic surgeons with more than 200 LRP cases each. Materials and Methods Between January 2010 and September 2013, a total of 60 consecutive patients who underwent RALP were prospectively evaluated and compared to the last 60 cases of LRP. Data included demographics, operative duration, blood loss, transfusion rate, positive surgical margins, hospital stay, complications, and potency and continence rates. Results The mean operative time and blood loss were higher in RALP (236 versus 153 minutes, p<0.001, and 245.6 versus 202 mL, p<0.001). Potency rates at 6 months were higher in RALP (70% versus 50%, p=0.02). Positive surgical margins were also higher in RALP (31.6% versus 12.5%, p=0.01). Continence rates at 6 months were similar (93.3% versus 89.3%, p=0.43). Patients' age, complication rates, and length of hospital stay were similar for both groups. Conclusions Experienced laparoscopic surgeons (ELS) present a learning curve for RALP, demonstrated only by longer operative time and clinically insignificant additional blood loss. Our initial results demonstrated similar perioperative and functional outcomes for both approaches. ELS were able to achieve satisfactory oncological and functional results during the learning curve period for RALP.
INTRODUCTION
Prostate cancer is the most common non-cutaneous malignancy in men and the second leading cause of cancer-related mortality in Brazil (1).

Minimally invasive approaches for prostate cancer have evolved significantly since 2000.

Laparoscopic radical prostatectomy (LRP) demonstrated improved visualization of the pelvic anatomy, improvements in potency and urinary continence rates, and lower blood loss, while upholding the principles of oncological therapy (2)(3)(4)(5)(6). However, this technique saw limited expansion due to its steep learning curve, which requires at least 60 cases to attain proficiency (6).
Recently, robot-assisted radical prostatectomy (RALP) brought several mechanisms that may significantly shorten the learning curve for surgeons unfamiliar with laparoscopy (2). The magnification, robotic-wrist instrumentation, and increased degrees of freedom of the Da Vinci surgical system (Intuitive Surgical, Sunnyvale, California, USA), combined with 3-dimensional visualization, give surgeons an extremely detailed view of the pelvic anatomy that enables appropriate prostate extirpation (7)(8)(9). This minimally invasive technique has received widespread acceptance by physicians and patients and was established as the standard surgical treatment for localized prostate cancer in the US (10)(11)(12).
In Brazil, the Da Vinci System was introduced in 2008. However, it has been implemented in only 9 hospital centers (Albert Einstein, Sirio Libanes, Oswaldo Cruz, Nove de Julho, INCA, Samaritano, HC Porto Alegre, ICESP and Fundação Pio XII). In addition, this high-cost technology is not covered by health insurance and is mostly performed in private services, which leaves most urologists with a low volume of RALP.
The aim of this study was to report our initial experience and assess the learning curve of experienced laparoscopic surgeons in robot-assisted radical prostatectomy (RALP). We compared perioperative, functional and oncological outcomes between RALP and LRP.
MATERIALS AND METHODS
The project was approved by the Ethics Committee for Analysis of Research Projects of the involved institutions.
A retrospective review of prospectively collected data was performed from 2008 to 2013, including 120 patients with localized low- or intermediate-risk prostate cancer who were indicated for surgical treatment. All selected cases had preserved baseline urinary continence and potency. Patients with previous prostate cancer treatment or neoadjuvant or adjuvant hormonal treatment were excluded from the study. The robotic procedures were performed at a private hospital, while the LRPs were performed in public and private hospitals.
Preoperative, perioperative, oncological, and functional outcomes of the first 60 cases of robot-assisted radical prostatectomy were compared to the last 60 consecutive cases of laparoscopic radical prostatectomy. All procedures were performed by three experienced surgeons, each with more than 200 LRP cases, under the same defined protocol.
Data included demographic characteristics, operative parameters (operative time, blood loss, positive surgical margins, complications, and conversion and transfusion rates), and postoperative outcomes (early urinary continence, potency, and length of hospital stay).
Robotic-assisted laparoscopic radical prostatectomy
The RALP was performed using the S and Si da Vinci Robotic Systems (Intuitive Surgical, Sunnyvale, CA). First, the patient was positioned supine in low lithotomy with 15° of Trendelenburg. All cases were performed transperitoneally using the six-port technique described by Patel et al. (13). Non-robotic ports were placed at or above the level of the umbilicus in order to provide maximum range of motion to the assistant. The dorsal venous complex was initially isolated and ligated. The seminal vesicles were then dissected and the prostatic pedicles ligated. Nerve-sparing surgery was performed using a clip technique, without any form of thermal energy. Finally, the running vesicourethral anastomosis was performed as described by Van Velthoven et al. with conventional 3-0 barbed sutures.
Laparoscopic radical prostatectomy
Pure laparoscopic cases were performed with the five-port extraperitoneal approach described by us previously (14,15). The patient was placed in the supine position with Y-shaped abduction of the lower limbs. The optical trocar was inserted through the umbilical incision, two trocars were inserted in the external pararectal area, and two in the iliac fossa. Vascular control of the dorsal venous complex was performed using a 2-0 polyglactin suture with a CT-1 needle. The bladder neck was incised and the vasa deferentia and seminal vesicles were dissected. The posterior prostatic pedicles were clipped and incised. The dorsal vein complex and urethra were incised and the prostate released. Continuous 3-0 monocryl or 3-0 barbed sutures were used to perform the vesicourethral Van Velthoven anastomosis.
Statistical analysis
The statistical analyses were performed using SPSS software (IBM SPSS Statistics 20; SPSS, Inc., Chicago, IL, USA). The significance level was defined as 0.05 (5%). All confidence intervals used in this study were constructed with a 95% confidence level.

The paired Student t test was used to assess quantitative data and compare means (age, operative time, blood loss, PSA level). The two-sample z test was used to compare intraoperative complications, continence and potency rates, positive surgical margins, transfusion rates, Gleason score, pathologic stage, and nerve sparing between the groups.
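For readers who wish to reproduce this kind of comparison, the two-sample z test for proportions can be run in a few lines of Python. The sketch below uses the statsmodels library; the counts are hypothetical placeholders, not the study's raw data.

```python
# Sketch of the two-sample z test for proportions, as used here to compare
# rates between the RALP and LRP groups. Counts are illustrative only.
from statsmodels.stats.proportion import proportions_ztest

successes = [56, 54]  # e.g., continent patients per arm (hypothetical)
n_obs = [60, 60]      # patients per arm

stat, p_value = proportions_ztest(count=successes, nobs=n_obs)
print(f"z = {stat:.3f}, p = {p_value:.3f}")
```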
RESULTS
Patients who underwent LRP and RALP were similar in age, which ranged from 50 to 70 years (p=0.99). PSA level, Gleason score, and pathologic stage (T2, T3) were also similar between the groups (Table-2). The length of hospital stay was similar between the groups (p=0.92) and ranged from 1 to 3 days.
Functional and oncological outcomes are described in Table-3. Continence rates at six months were similar between the groups (93.3% versus 89.3%, p=0.43).
DISCUSSION
Laparoscopic radical prostatectomy was the first successful minimally invasive procedure that provided several benefits concerning potency, urinary continence, and blood loss, while upholding the principles of oncological therapy (2). However, the two-dimensional image, combined with a lower range of motion, made LRP a challenging procedure with a steep learning curve that requires nearly 70 cases to attain proficiency (6,15).
Robotic-assisted radical prostatectomy emerged as an effective alternative to LRP. The Da Vinci 3-dimensional image, magnification, multi-joint devices, and increased degrees of freedom significantly improved surgical ergonomics and therefore shortened the learning curve relative to LRP. RALP has received worldwide acceptance by urologists and is on the verge of becoming the preferred surgical treatment of localized prostate cancer (12,(16)(17)(18).
However, the high cost of this technology remains the primary obstacle to the expansion of RALP. The Da Vinci system is valued at 2 million euros, and its maintenance adds a financial burden of $2,698 per patient, given an average of 126 cases per year. Previous reports estimated that a total of 75 cases per year, with an average operative time of three hours per case, is necessary for cost-effectiveness in the United States (16,19). In Brazil, this system was introduced in 2008 and was implemented in only 9 hospital centers. INCA (Instituto Nacional do Câncer) and ICESP (Instituto do Câncer do Estado de São Paulo) were the first public services to provide the Da Vinci System in Brazil. To our knowledge, this is the first Brazilian series to analyze the learning curve of experienced laparoscopic surgeons and to compare perioperative and functional outcomes between RALP and LRP. In this preliminary report, we found both differences and similarities between the groups' outcomes.
RALP operative time was longer than that of LRP, in accordance with previous larger series, which estimated a range from 140 to 354 min (8,11,(20)(21)(22). Menon et al. reported, in an early series of RALP, a progressive decrease of operative time that is not observed in LRP (23). This finding suggests that further experience could lead to similar operative times. Estimated blood loss was higher in RALP, in accordance with previous reports of an average of 234 mL with a range of 75-500 mL (20)(21)(22). The difference (approximately 50 mL), however, was clinically insignificant, and blood transfusion was not necessary in any case. This difference could be explained by the longer operative time of RALP.
Robotic-assisted radical prostatectomy presents several potential complications. Some authors include catheterization time, symptomatic lymphocele, hematoma, and emphysema, whereas others use the Clavien grading system for short-term complications (11,21,24). In our initial experience, we observed the most common complications, and our rate was 10%, in accordance with most reports (22,24,25). Both RALP and LRP present a similarly low incidence of conversion to open surgery (10). In our experience, no procedures required conversion or transfusion. Length of hospital stay is usually associated with perioperative complications and patient well-being, and we found no differences between LRP and RALP.
The continence rate at six months was similar between our groups (93.3% versus 89.3%). This finding will be definitive only after a one-year evaluation. The meta-analysis by Ficarra et al. observed that RALP was significantly superior to LRP in terms of 12-month urinary continence recovery, although the authors concluded that the prevalence of urinary incontinence after RALP is influenced by several factors, including preoperative patient characteristics, surgeon experience, surgical technique, and data collection methods, which hinder this assessment (7).
However, potency rates were higher in RALP than in LRP (70% versus 50%). This finding is in accordance with the meta-analysis by Ficarra et al., which demonstrated a significant advantage in favor of RALP in comparison with RRP in terms of 12-month potency rates (26). In addition, this finding suggests that further experience with RALP and longer follow-up could lead to earlier recovery of potency, even for experienced laparoscopic surgeons.
Positive surgical margin rates were statistically similar between the groups (21.6% for RALP and 12.5% for LRP). This finding is consistent with previous studies, in which rates ranged from 12.3% to 17.2% for RALP and from 11% to 29% for LRP. Most series reported no statistically significant difference between LRP and RALP (16,20,23,27).
Currently, there is no consensus on the superiority of RALP or LRP in the treatment of localized prostate cancer. Several studies compared both techniques and presented different results, in favor of either RALP or LRP (2,11,16,19,(27)(28)(29)(30). We believe that the Da Vinci System is a technological evolution that provides more detailed information regarding this complex procedure. On the other hand, considering the low number of Da Vinci systems installed in Brazil over the last 7 years, most urologists will not have access to robotic surgery in Brazil for a long time, which makes LRP a feasible alternative. Additionally, LRP may be a shortcut for reducing the learning curve of RALP. We observed that surgeons who are proficient in LRP and have a low volume of RALP present a learning curve that does not jeopardize their oncological and functional outcomes. As in the USA, where massive expansion turned RALP into the established surgical treatment for localized prostate cancer, it will be natural for RALP to replace LRP in the future, when the technology and trained surgeons become widely available (10,23,27,28).
In our study, we observed that experienced laparoscopic surgeons were able to attain, in their initial cases, perioperative and functional outcomes similar to those of surgeons with greater experience in RALP. Previous experience with LRP could shorten the learning curve of RALP, mainly owing to the similarity of the surgical steps and of pelvic anatomy visualization. The learning curve would therefore be mainly related to mastering the new features of the robotic system, such as multi-joint devices and the absence of tactile feedback.
We acknowledge the limitations of our initial experience, which was performed in a low-volume center for both procedures in private hospitals. Our results aid the comparison between LRP and RALP for experienced laparoscopic surgeons; however, they should be considered indicative only. Longer oncologic and functional follow-up is still required.
Experienced laparoscopic surgeons present a learning curve when first performing RALP, demonstrated only by longer operative time. Nevertheless, our perioperative and functional outcomes were similar for both approaches and in accordance with previous reports (11,21,31). ELS were able to achieve satisfactory oncological and functional results during the learning curve period for RALP.
Diarrheal disease and DRGs
Current regulations, by which reimbursement is based on diagnosis related groups (DRGs), are changing clinical laboratories from revenue-producing centers to cost centers (25). This transition is happening at the same time as a dramatic increase in our knowledge of infectious diseases, including new understanding of the etiology of diarrheal disease. Ten years ago, the etiology of enteric disease in many patients was unknown because stool specimens were routinely screened only for Salmonella spp., Shigella spp., and perhaps intestinal parasites such as Giardia lamblia. At present, many laboratories also routinely examine stool specimens for Campylobacter spp., Yersinia enterocolitica, Clostridium difficile, Cryptosporidium, and rotavirus. Aeromonas spp. (1), Vibrio spp. (4, 5), strains of Escherichia coli (6,(17)(18)(19), Norwalk agent, enteric adenoviruses, coronaviruses, and caliciviruses (8) have also been suggested as causes of diarrheal disease; food poisoning agents such as Staphylococcus aureus and Bacillus cereus must also be considered. Faced with declining resources and expanding knowledge, what approaches should be taken to provide efficient, yet comprehensive, diagnostic services to patients with diarrheal disease?
To adequately address this question, retrospective review of laboratory results is needed to determine which agents are the ones most likely to cause diarrheal disease in a particular patient population. Table 1 lists data on the frequency with which we detected enteric pathogens during the past 3 yr at our 600-bed hospital. These data were obtained by culturing stool specimens from approximately 10,000 patients for enteric pathogens.
Selective Stool Examination
Not all stool specimens should be examined routinely for all agents listed in Table 1. For the most efficient use of laboratory facilities, as well as maximum benefit to the patient, a selective examination process must be followed based on factors such as patient age and history, as well as the clinical microbiologist's understanding of the pathogenesis and local epidemiology of diarrheal disease. Figure 1 presents an algorithm that can be used as a guideline to determine how to examine stool specimens from different patient populations. This algorithm divides patients according to age, whether those older than 3 yr are inpatients or outpatients, and whether they have received antimicrobial agents or cancer chemotherapeutics. Stools from patients who have currently or recently received antimicrobial agents or cancer chemotherapeutics should be examined only for C. difficile toxin because studies have shown that, in such patients, other enteric pathogens are rarely, if ever, present (9). Tests for detecting C. difficile toxin must be done instead of culture because up to 20% of people receiving antimicrobial agents are asymptomatic carriers of the organism (24). On the other hand, only 2% of patients have toxin in the absence of symptoms. Toxin is detected in the feces of approximately 30% of patients with antimicrobial agent-associated diarrhea and essentially all patients with pseudomembranous colitis (3). Tissue culture assay remains the standard test for toxin detection. Alternative methods such as counterimmunoelectrophoresis (12) and latex agglutination (21) lack sufficient sensitivity and specificity; ELISA tests (13), although accurate, are not yet widely available.
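The branching logic of this selective approach can be summarized in code. The sketch below is an illustrative paraphrase of the decision rules described in the text and in Figure 1, not the published chart itself; the category and test names are simplifications.

```python
# Minimal sketch of the stool-specimen triage algorithm described above.
# An illustration of the decision rules in the text, not the published chart.
def stool_workup(age_years, inpatient, on_antimicrobials_or_chemo):
    """Return the suggested initial examinations for a stool specimen."""
    if on_antimicrobials_or_chemo:
        # Other enteric pathogens are rarely present in these patients;
        # assay for toxin rather than culturing, since up to 20% of people
        # on antimicrobials are asymptomatic carriers.
        return ["C. difficile toxin assay"]
    if age_years <= 3:
        # Rotavirus ELISA first (esp. December-March); defer other tests
        # until the result is known.
        return ["rotavirus ELISA", "fecal screen if ELISA negative"]
    if inpatient:
        # Inpatient stools are usually negative; screen only for agents
        # with documented nosocomial spread.
        return ["Salmonella/Shigella screen"]
    # Outpatients symptomatic for >= 3 days: full bacterial culture
    # plus examination for intestinal parasites.
    return ["Salmonella, Shigella, Campylobacter culture", "ova and parasites"]
```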
Inpatients
Stools from inpatients who are not receiving antimicrobial therapy usually are negative for enteric pathogens. Because nosocomial outbreaks of Salmonella and Shigella have occurred (10, 16), stool specimens may be screened for these agents. In certain areas of the United States where Giardia is endemic, e.g., Colorado (11), stools from patients with appropriate signs and symptoms may be examined for this protozoan.
AIDS Patients
Patients with acquired immune deficiency syndrome (AIDS) present a special challenge because diarrhea can be a frequent, potentially life-threatening illness. Cryptosporidium, Salmonella, Shigella, and Giardia are commonly associated with this syndrome (10). The diagnosis of C. difficile enterocolitis must also be considered because these patients may receive a variety of antimicrobial agents to treat their opportunistic infections.
Outpatients
Outpatients with diarrheal disease present a greater challenge than inpatients because they can be infected by a large variety of microbial agents. Rational strategies for evaluating these patients can be developed by carefully noting their travel and food history. Stool examinations should not be performed unless the patient has been symptomatic for at least 3 days. In this way, persons with self-limited disease will be spared the expense of the culture, an especially important factor in health maintenance organizations (HMOs). Stools from patients with persistent symptoms should be cultured for Salmonella, Shigella, and Campylobacter, as well as examined for intestinal parasites, which may be found almost as commonly as bacterial pathogens (Table 1).
Specimen Examination
The complete stool work-up should include two cultures for bacteria and three examinations for parasites. Specimen collection should be spaced so that the examinations can be completed before another stool analysis is performed. If the tests are negative and the patient remains symptomatic, a search for "unusual" agents of diarrheal disease should be made, e.g., in patients with bloody diarrhea, strains of E. coli that cause hemorrhagic colitis (17). Such isolates are usually identified in reference laboratories by a combination of biochemical tests, serotyping, and the ability to produce a Shiga-like toxin (15). State health laboratories should be contacted about their ability to process specimens to detect these, as well as other, pathogenic E. coli.
The stools of patients who have either a history of travel to undeveloped countries or of ingestion of raw seafood should be examined for Vibrio spp. and enterotoxigenic E. coli. Vibrios can be cultured on TCBS agar, but specimens for enterotoxigenic E. coli should be sent to a reference laboratory. During the summer months in particular, stools from symptomatic individuals living along the Gulf of Mexico should also be screened for vibrios (4).
In patients with chronic diarrhea whose routine stool examinations are negative, the following etiologies should be considered: Cryptosporidium in immunocompetent individuals (2), C. difficile toxin in patients with a history of antibiotic therapy within the past 4 weeks, and Giardia. Giardia may not be detected by repeated stool examination, and therefore the parasite should be sought either in duodenal biopsy material or duodenal fluid.
Neonates and Infants
Most enteric pathogens are detected in neonates and infants up to 3 yr old. The epidemiology of disease in this patient population greatly influences the diagnostic approach. In temperate zones of the Northern Hemisphere, from December to March, rotavirus is the most important cause of diarrheal disease in young children (8). For diagnosis, a rotavirus ELISA test is usually the first step because the results are often available within 24 hr. The specimen can be refrigerated and further tests deferred until the rotavirus test result is known. If the test is negative, the fecal screen outlined previously can be performed. If symptoms persist for more than 5 days in rotavirus-positive patients, a stool specimen should be examined for other pathogens. Pathogens such as calicivirus and coronavirus also produce disease in this age group. Enteroadherent E. coli, which can be identified in a reference laboratory by its adherence to HEp-2 cells (6), is associated with failure-to-thrive syndrome and nursery outbreaks of diarrheal disease (18).
The role of C. difficile in diarrheal disease of children less than 3 yr old is unclear. Some investigators believe that this organism is normal flora in neonates and that toxin can be present in feces without accompanying disease. Others, however, have associated C. difficile with chronic diarrhea and failure-to-thrive syndrome (23).
The age at which children can develop antimicrobial agent-induced diarrhea and colitis requires clarification. Stark and Lee (22) suggest that the organism is normal flora for at least 9 months; in children younger than 3 yr old, the diagnosis of C. difficile-associated disease should therefore be based on strong evidence. Y. enterocolitica is frequently isolated from children, and selective media and special culture conditions may be used to enhance its recovery.
Food-Borne Outbreaks
The possibility of a food-borne outbreak should be considered when diagnosing diarrhea1 disease. In most instances outbreaks are limited to small numbers of people and the disease is mild and self-limited. Some outbreaks, however, are widespread, affect hundreds to thousands of individuals, and have potentially high morbidity and mortality. Clinical microbiologists, in conjunction with their infectious disease colleagues, should evaluate unusual patterns of recovery of enteric pathogens. For example, this past summer one of our technologists, working with our pediatric infectious disease physicians, helped to uncover an outbreak of shigellosis which occurred after a family reunion. Nine family members had stool cultures positive for Shigella. By working with public health officials, we were able to contain the outbreak to this single family.
Summary
This article has presented a rational protocol for examining stools for enteric pathogens. When modified according to a laboratory's geographical location and patient population, this approach should allow efficient, comprehensive diagnosis of diarrheal disease. Laboratories should be able to examine stools routinely for Campylobacter spp., Salmonella spp., Shigella spp., Y. enterocolitica, intestinal parasites [including Cryptosporidium (14)], rotavirus, and C. difficile toxin. When unusual organisms such as pathogenic E. coli are suspected, or tissue culture facilities for C. difficile toxin assays are unavailable, the use of reference laboratories is strongly encouraged.
Combining Pre- and Postoperative Lymphocyte–C-Reactive Protein Ratios Can Better Predict Hepatocellular Carcinoma Prognosis After Partial Hepatectomy
Background Various preoperative inflammatory indicators have been identified as potential predictors of poor prognosis in patients with hepatocellular carcinoma (HCC), but the role of postoperative inflammatory indicators remains unclear. This study aimed to explore the prognostic value of the postoperative lymphocyte–C-reactive protein ratio (PostLCR) on its own and combined with preoperative LCR (PreLCR). Methods A total of 290 patients with primary HCC were retrospectively enrolled in the study. Univariate analysis was used to identify factors significantly associated with poor disease-free survival (DFS) and overall survival (OS); multivariate analysis was then performed to identify independent prognostic indicators of poor survival. Prognostic models based on preoperative, postoperative, and both types of indicators were then constructed, and their predictive performance was evaluated using time-dependent receiver operating characteristic curves and the concordance index (C-index). Results PreLCR and PostLCR levels correlated with DFS and OS more strongly than other pre- and postoperative inflammatory indicators, respectively. Decreased PreLCR and PostLCR were independent prognostic factors for both DFS and OS, and HCC patients with decreased PreLCR and PostLCR had worse prognosis than patients with increased PreLCR and PostLCR. Patients were divided into three groups based on the cut-off values of PreLCR and PostLCR; Kaplan–Meier survival analysis indicated that HCC patients with low PreLCR and PostLCR had the worst DFS and OS. The combined model showed better predictive performance at 1 and 3 years post-surgery than the individual pre- and postoperative models, the American Joint Committee on Cancer/Tumor-Node-Metastasis (8th edition) staging system, and the Barcelona Clinic Liver Cancer system, and it demonstrated a markedly superior C-index compared with the other models for both DFS and OS. Conclusion Our study showed that PreLCR and PostLCR are independent predictors of DFS and OS in HCC patients after partial hepatectomy. Models that include both PreLCR and PostLCR can predict prognosis better than well-established clinical staging systems.
Introduction
To improve survival rates in HCC patients, robust biomarkers are needed to predict disease recurrence, identify high-risk patients, facilitate close patient follow-up, and decide on appropriate postoperative treatments. The 8th edition of the American Joint Committee on Cancer (AJCC)/Tumor-Node-Metastasis (TNM) staging system and the Barcelona Clinic Liver Cancer (BCLC) classification are commonly used for HCC risk stratification and identification of potential anticancer therapies, but their application is limited as they can incorporate only a few clinicopathological indicators. 4 Many other factors also affect tumor occurrence and progression, such as inflammation, viral infection, and the tumor macro- and microenvironment.
Unlike most other malignancies, more than 90% of HCC cases develop due to chronic inflammation. 5 The host inflammatory response has also been related to cancer progression and patient survival, 6,7 while systemic inflammation due to host-tumor interactions is currently considered a cancer hallmark. 8 Therefore, the prognostic value of various preoperative inflammatory indicators has been extensively studied, including preoperative platelet-lymphocyte ratio (PrePLR), preoperative lymphocyte-monocyte ratio (PreLMR), systemic immune inflammation index (PreSII), preoperative derived NLR (PredNLR), and preoperative neutrophil-lymphocyte ratio (PreNLR). [9][10][11][12][13] The preoperative lymphocyte-C-reactive protein ratio (PreLCR) has also recently been identified as a powerful prognostic marker in HCC. 14,15 However, the balance between immune and inflammatory responses may change after the surgical removal of HCC lesions. 16 Indeed, postoperative inflammatory indicators, such as postoperative platelet-lymphocyte ratio (PostPLR) and postoperative neutrophil-lymphocyte ratio (PostNLR), can greatly affect HCC prognosis. [17][18][19] Various postoperative inflammatory indicators have been linked to the long-term prognosis of patients with different solid tumors. 20,21 For instance, PostNLR has been identified as an independent prognostic factor of survival in patients with small HCC undergoing radiofrequency ablation. 22 Whether postoperative LCR (PostLCR) has prognostic value in HCC, analogous to PreLCR, has not yet been investigated.
In this study, we explored the prognostic value of PostLCR and compared its performance to that of models based only on PreLCR or the combination of PreLCR and PostLCR. We also compared these models against existing clinical staging systems.
Study Population
In this study, we retrospectively investigated the medical records of 290 HCC patients treated with R0 resection at the Affiliated Cancer Hospital of Guangxi Medical University in Nanning, China, between August 2014 and January 2017. Patients were enrolled if they met all the following criteria: (1) definitive HCC diagnosis based on World Health Organization criteria; (2) Child-Pugh A stage and Performance Status Test score of 0-1; (3) no prior anticancer treatment, such as transarterial chemoembolization or radiation; (4) complete clinical pathological data; and (5) underwent R0 resection, defined as complete macroscopic tumor removal, negative resection margins, and no detectable intra-or extrahepatic metastatic lesions. The study was conducted according to the principles of the Declaration of Helsinki and was approved by the Ethics Committee of the Affiliated Cancer Hospital of Guangxi Medical University. The requirement for written informed consent was waived because all patients, on admission, consented for their anonymized medical data to be analyzed and published for research purposes.
Clinicopathological Indicators
Preoperative blood samples were collected and assayed within one week before surgery. Postoperative blood samples were collected and assayed within 25-40 days after surgery (the first reexamination after surgery discharge). Laboratory measurements included alpha fetoprotein (AFP), hepatitis B virus DNA (HBV-DNA), C-reactive protein, total peripheral white blood cell count (W), total peripheral lymphocyte count (L), total peripheral platelet count (P), total peripheral monocyte count (M), and total peripheral neutrophil count (N). Inflammation biomarkers were defined as follows: NLR = N/L, PLR = P/L, LMR = L/M, SII = (P×N)/L, and dNLR = (W−N)/L. LCR was defined as the ratio of lymphocyte count (number/mL) to the level of serum C-reactive protein (mg/dL).
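These definitions translate directly into code. The following minimal Python helper simply transcribes the formulas above; the function name and argument order are ours, and inputs are assumed to use the units stated in the text.

```python
# Direct transcription of the inflammation-index definitions above.
# Inputs: total white cell (W), lymphocyte (L), platelet (P), monocyte (M),
# and neutrophil (N) counts, plus C-reactive protein (crp, mg/dL).
def inflammation_indices(W, L, P, M, N, crp):
    return {
        "NLR": N / L,
        "PLR": P / L,
        "LMR": L / M,
        "SII": (P * N) / L,
        "dNLR": (W - N) / L,   # as defined in the text
        "LCR": L / crp,        # lymphocyte count / serum C-reactive protein
    }
```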
Patient Follow-Up
After initial treatment, laboratory examinations (serum AFP, liver function, blood tests), abdominal ultrasonography, and contrast-enhanced CT were performed every three months for the first two years and every six months thereafter. The first date of follow-up was the date of the initial diagnosis of HCC, and the last day was the date of the most recent follow-up visit (June 2021) or the date of the patient's death. DFS was measured from the date of hepatectomy until tumor recurrence. Overall survival (OS) was measured between the date of hepatectomy and the date of death or the date of the last follow-up visit. Recurrence was defined as a significant increase in postoperative AFP levels or tumor lesions.
Statistical Analysis
Statistical analysis was performed with SPSS 26.0 (IBM, Chicago, IL, USA), MedCalc version 20.015 (Broekstraat 52, 9030 Mariakerke, Belgium), and R version 4.1.2 (http://www.r-project.org/). Patient characteristics were analyzed using descriptive statistics. Significant intergroup differences were determined using the chi-squared test. Kaplan-Meier survival curves were compared using the Log rank test. Receiver operating characteristic (ROC) curve analysis was used to calculate the area under the ROC curve (AUC), together with 95% confidence intervals (95% CIs). Correlation between patient characteristics and survival rates was investigated using univariate and multivariate Cox proportional hazard regression models. The optimal LCR cut-off values for DFS were determined using the X-Tile statistical package (version 3.6.1, Yale University, New Haven, CT, USA) and the highest χ² value obtained from Kaplan-Meier survival analysis and the Log rank test. 23 C-indexes were calculated using the "Hmisc" package in R, and time-dependent ROC (timeROC) analysis was performed with the "timeROC" package in R.
The ability of the models to predict DFS and OS was evaluated by 1000 bootstrapping replications, and their performance at 1 and 3 years post-surgery was assessed using calibration plots. The risk score of each patient was determined with the "nomogramFormula" package, and timeROC analysis was used to compare the predictive performance of the models at different time points. All P values were two-sided, and differences associated with P < 0.05 were considered statistically significant.
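Although the cut-off values in this study were obtained with the X-Tile package, the underlying idea — scanning candidate cut-offs and keeping the one that maximizes the log-rank χ² statistic — is straightforward to sketch. The Python fragment below uses the lifelines library with placeholder arrays and illustrates the principle; it is not a reproduction of X-Tile.

```python
# Sketch of the cut-off search behind X-Tile-style dichotomization: scan
# candidate cut-offs, keep the one maximizing the log-rank chi-squared.
# marker/time/event are placeholders for LCR, follow-up time, and events.
import numpy as np
from lifelines.statistics import logrank_test

def best_cutoff(marker, time, event, n_grid=50):
    """Return (cutoff, chi2) maximizing the log-rank statistic."""
    marker, time, event = map(np.asarray, (marker, time, event))
    best_c, best_chi2 = None, -np.inf
    for c in np.quantile(marker, np.linspace(0.10, 0.90, n_grid)):
        low = marker <= c
        if low.all() or not low.any():
            continue  # skip degenerate splits
        res = logrank_test(time[low], time[~low],
                           event_observed_A=event[low],
                           event_observed_B=event[~low])
        if res.test_statistic > best_chi2:
            best_c, best_chi2 = c, res.test_statistic
    return best_c, best_chi2
```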
Patient Characteristics and Clinical Outcomes
The study included 239 males (82.4%) and 51 females (17.6%) with a mean age of 49.7 years (range, 20-79). None of the patients received chemotherapy or radiotherapy prior to surgery, and no perioperative mortality was observed. Of the 290 patients, 135 (46.6%) showed MVI and 134 (46.2%) liver cirrhosis. In addition, 143 patients had positive AFP levels before surgery (Table 1).
Predictive Performance of PreLCR and PostLCR
To identify the inflammation biomarkers with the highest prognostic value for DFS and OS, we calculated the AUC values of preoperative and postoperative NLR, LCR, LMR, PLR, SII, and dNLR. PreLCR and PostLCR showed the highest AUCs (Figures 1 and 2 and Figures S1 and S2) and were therefore further assessed for their clinical impact and potential as biomarkers in HCC, using respective optimal cut-off values of 4600 and 4300. Kaplan-Meier survival analysis indicated that HCC patients with higher PreLCR and PostLCR had significantly better DFS and OS than those with lower PreLCR and PostLCR (Figures 3 and S3).
Predictive Performance of Combined PreLCR and PostLCR
We divided the 290 patients into three groups based on the cut-off values of PreLCR and PostLCR. Patients with low PreLCR and PostLCR were categorized into Cohort A (n=53); patients with high PreLCR and PostLCR were categorized into Cohort C (n=133); and patients with either high PreLCR or high PostLCR were categorized into Cohort B (n=104). Kaplan-Meier survival analysis indicated that HCC patients with low PreLCR and PostLCR (Cohort A) had the worst DFS and OS, whereas patients with high PreLCR and PostLCR (Cohort C) presented the best DFS and OS (Figure 4).
Prognostic Model Based on Preoperative Indicators
Univariate analysis showed that AFP, tumor size, tumor number, MVI, PreLCR, and PostLCR were significantly associated with poor DFS in patients with primary HCC after partial hepatectomy (Table 2). Multivariate analysis of PreLCR and preoperative clinicopathological indicators also showed that AFP, tumor size, tumor number, MVI, and PreLCR were independent prognostic factors of poor DFS (Table 3). These indicators were further used to construct a preoperative prognostic model for DFS (Figure 5A).
Similarly, AFP, tumor size, HBV-DNA, MVI, PreLCR, and PostLCR were found to be significantly associated with poor OS after partial hepatectomy in primary HCC patients (Table 2), while multivariate analysis indicated that AFP, tumor size, MVI, and PreLCR were independent prognostic factors of poor OS (Table 3). These indicators were then included in a preoperative prognostic model for OS (Figure S4A).
The high consistency between predicted results and actual observations was confirmed by the calibration curves for 1- and 3-year DFS (Figure 5B-C) and OS (Figure S4B and C).
Prognostic Model Based on Postoperative Indicators
Multivariate analysis of PostLCR and postoperative clinicopathological indicators showed that AFP, tumor size, tumor number, MVI, and PostLCR were independent prognostic factors of poor DFS (Table 3). These indicators were used to construct a postoperative prognostic model for DFS (Figure 6A). Similarly, AFP, tumor size, MVI, and PostLCR were identified as independent prognostic factors of poor OS (Table 3) and were included in a postoperative prognostic model for OS (Figure S5A).
The high consistency between predicted results and actual observations was confirmed by the calibration curves for 1- and 3-year DFS (Figure 6B-C) and OS (Figure S5B and C).
Prognostic Model Based on Pre-and Postoperative Indicators
Multivariate analysis of PreLCR, PostLCR, and clinicopathological indicators suggested that AFP, tumor size, tumor number, MVI, PreLCR, and PostLCR were independent prognostic factors of poor DFS and OS (Table 3). These indicators were then used to construct combined prognostic models for DFS (Figure 7A) and OS (Figure S6A).
In addition, the calibration curves for 1- and 3-year DFS (Figure 7B-C) and OS (Figure S6B and C) confirmed that the predictions of the combined model were consistent with observations. The combined model also demonstrated a markedly superior C-index compared with the other models for both DFS and OS (Table 4).
Further comparison of their prognostic efficacy at different time points by timeROC analysis revealed that the predictive performance of the combined model at 1 year (AUC = 0.690) and 3 years (AUC = 0.747) after surgery was better than that of the preoperative model, the postoperative model, the AJCC TNM (8th edition) system, and the BCLC system for DFS (Figure 8); similarly, the combined model had the best predictive performance for OS (Figure S7).
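For readers working in Python rather than R, a time-dependent AUC analogous to the timeROC analysis can be computed with scikit-survival. The sketch below is illustrative only: the paper itself used the R timeROC package, the inputs are placeholders, and re-using one dataset as both "train" and "test" yields apparent (optimistic) performance only.

```python
# Illustrative time-dependent AUC, analogous to the timeROC analysis above.
# risk_score/time/event are placeholder arrays; evaluation times must lie
# within the observed follow-up range.
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import cumulative_dynamic_auc

def time_dependent_auc(time, event, risk_score, eval_times=(12.0, 36.0)):
    # Build a structured survival array (event indicator + follow-up time).
    y = Surv.from_arrays(event=np.asarray(event, dtype=bool),
                         time=np.asarray(time))
    auc, mean_auc = cumulative_dynamic_auc(y, y, np.asarray(risk_score),
                                           np.asarray(eval_times))
    return dict(zip(eval_times, auc)), mean_auc
```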
Discussion
To date, several preoperative inflammatory indicators have been identified as potential prognostic markers for patients with HCC. However, the prognostic value of postoperative indicators has not been adequately explored. In the present study, we investigated PostLCR as a potential predictor of poor DFS and OS and assessed the performance of a combined model incorporating both PreLCR and PostLCR, comparing it to separate preoperative and postoperative models as well as to existing clinical staging systems. Our results indicate that LCR is a significantly better predictor of DFS and OS than other inflammation-based prognostic scores and that decreased PreLCR and PostLCR are independent predictors of DFS and OS in HCC patients after partial hepatectomy. In addition, we found that HCC patients with lower PreLCR and PostLCR values have worse prognosis than those with higher PreLCR and PostLCR. Systemic inflammation due to host-tumor interactions is known to promote tumor growth and metastasis in patients with various types of malignancies. 7,24 High levels of serum C-reactive protein have been associated with poor systemic inflammatory response, early HCC recurrence, and worse survival after hepatic resection. 25,26 Lymphopenia, defined as a reduced number of anti-cancer lymphocytes, 27,28 has also been identified as a marker of poor immune response and a prognostic factor in patients with malignant disease. 29,30 For this reason, low PreLCR has been associated with poor immunological response, malnutrition, and/or enhancement of the systemic inflammatory response in cancer patients, and it is a convenient prognostic marker for patients with HCC. 14,15,31 LCR may not only directly impact a patient's outcome but may also reflect a systemic inflammatory state. A low LCR indicates low immunity or a high inflammatory state; thus, in our results, patients with low PreLCR and low PostLCR (Cohort A) had the worst DFS and OS. Patients with depressed PostLCR have relative lymphocytopenia and increased CRP, indicating that the balance is tipped in favor of an inflammatory or immunosuppressive response after surgery, which is associated with poor oncologic outcomes. The survival of patients with lower or higher PreLCR can be distinguished more accurately by the change in PostLCR, which can also reflect the efficacy of treatment.
To the best of our knowledge, this study is the first to compare the prognostic efficacy of traditional clinical staging systems and a prognostic model combining PreLCR and PostLCR. Our results showed that the combined model had a better prognostic performance for 1- and 3-year DFS and OS than individual models and traditional clinical staging systems. This superior performance may reflect the fact that the combined model considers both the pre- and postsurgical phases of cancer treatment. It may also be attributable to severe postoperative inflammation that activates micrometastases and affects the microenvironment of residual liver cancer tissue, thus promoting HCC recurrence even after complete removal or ablation of the tumor tissue. 32 A recent study of postoperative inflammatory biomarkers revealed that their prognostic value stabilized at three days after liver transplantation. 33 It has also been shown that the optimal period for measuring postoperative inflammatory markers is 21-56 days after surgery, when surgery-induced inflammation is minimal. 34 Our blood samples were collected at 25-40 days postoperatively. Thus, we speculate that PreLCR and PostLCR can be used to decide whether a patient with HCC who underwent surgical resection can forego postoperative chemotherapy, although further studies are needed to confirm our hypothesis. Our study has certain limitations. First, it was a retrospective study and included patients from a single institution, although the study population was relatively large and homogeneous in terms of cancer stage. Moreover, the timing of blood sampling varied over a nearly two-fold range, which might have affected the data on inflammatory status. Therefore, our findings should be confirmed by large-scale prospective studies in which blood is sampled during a narrow window.
Conclusion
Our study showed that PreLCR and PostLCR are valuable prognostic markers of survival in patients with HCC after partial hepatectomy. Moreover, we found that the combined prognostic model performed much better than pre-or postoperative models or the well-established TNM and BCLC staging systems. Further studies of postoperative inflammatory indicators are needed in order to exploit their full prognostic potential.
Understanding the perceived psychosocial impact of father absence on adult women
Given South Africa’s social, historical, political, and economic landscape which has contributed towards a relatively high prevalence of father absence, particularly in Black families, and the risk of adverse implications for children’s psychosocial development, the issue of absent fathers is an important area for research. However, the differential impact of father absence on the girl child remains relatively under-researched. Hence, the study explored the perceived psychosocial impact of father absence during childhood and adolescence on adult women. A case study research design located within a qualitative research approach was employed and nine adult women aged 18 to 35 years were purposefully recruited from Grobler Park, Johannesburg West for participation in the study. Due to the COVID-19 pandemic, three participants were interviewed telephonically and seven face-to-face. The research was guided by Mkhize’s sociocultural psychological tradition and Erikson’s psychosocial development theory. Thematic analysis was employed to analyse the collected data. Among the key findings was that the women perceived the experience of father absence to have adversely affected their feelings of belonging and sense of identity, with some participants having suffered emotional and financial challenges. Participants acknowledged a lack of healthy relationships with other men associated with having grown up with an absent father. While most of the women adopted positive coping strategies, a small number resorted to negative coping. There was also recognition of the important roles that social fathers assume in child-rearing. These findings have important implications for promoting positive father-daughter relations.
The country's multifaceted historical, social, economic, and political processes have contributed towards father absenteeism in South Africa, particularly in Black families (Makusha et al., 2019). Land dispossession through colonialism, and Apartheid policies such as the creation of homelands and forced settlements, migratory labour policies, influx control, and pass laws all disrupted family relations (Ngcukaitobi, 2021). These policies resulted in the breakdown of the Black African family with many fathers having to leave their wives and children in the rural areas to work in the urban areas. Consequently, the children ended up being raised by grandparents, a single parent, or male members of extended families formally known as 'social fathers' (Richter & Morrell, 2006). Poverty and unemployment also made it difficult for men to be able to pay damages or lobola. In some South African cultures, inhlawulo/damages refers to monetary compensation paid to a woman's family by the father of the future child for impregnating a woman out of wedlock. Lobola is an African practice where the groom makes a payment in live cattle or cash to the bride's family before marriage (bride price). Other factors that have put further strain on family life include HIV/AIDS and substance abuse (Mokomane et al., 2019). Makusha et al. (2019) highlight the importance of research on fatherhood. They maintain that involved and caring fathers are critical in the lives of children and accessible, supportive, engaged, and responsible fathers enhance girls' self-confidence and help boys to develop healthy masculinity. Children tend to attend school longer and achieve more when fathers participate in their educational activities (Richter et al., 2011). Moreover, children have been found to have higher self-esteem while females tend to be confident in their relationships with men (Schacht et al., 2009). In contrast, lack of father involvement has been linked to a range of adverse implications for well-being such as stigmatization, substance abuse, and risky sexual behaviour as well as significant economic disadvantages (Heartlines, 2020). Richter and Morrell (2006) conducted an in-depth study on men and fatherhood in South Africa and explored the complexities that affect fatherhood in the post-apartheid era. Annual reports by Sonke Gender Justice and The Human Sciences Research Council on the State of South Africa's Fathers highlight general implications of father absence but they provide minimal information about the impact of father absence on female children.
Nevertheless, there are some studies that have explored the impact of father absence on the girl child. For example, Allen and Daly (2007) found that poor academic results, teenage pregnancy, and low self-esteem were among the implications of growing up with an absent father. Padi et al. (2014) analysed the narratives of 20 young women who had grown up with absent fathers. More recently, Kamau and Davies (2018) conducted an empirical study regarding the links between having an absent father and psychosocial development, from the subjective experiences of females. However, the impact of father absence on the girl child remains relatively under-researched. Hence, the aim of the present study was to explore the perceived psychosocial impact of father absence during childhood and adolescence on adult women. Objectives were to examine (1) the attitudes of women towards their biological father (and any social father) and whether he was physically and/ or emotionally absent; (2) their perceptions regarding the impact of having an absent father on their sense of identity and self-esteem; (3) the perceived impact of having an absent father on their attitudes towards men and their relationships with men; (4) the benefits and challenges of having an absent father; and (5) the coping strategies they adopted in response to having an absent father.
This study was guided by two theoretical frameworks, namely Erikson's (1993) psychosocial development theory and Mkhize's (2006) sociocultural psychological tradition. Erikson's psychosocial development theory was selected because it focuses on personality progression in a fixed order across the eight stages of psychosocial development. This theory assisted in exploring the women's attitudes towards their father, whether the father was physically and/or emotionally absent, and the perceived impact on identity and self-esteem. Mkhize's (2006) sociocultural psychological tradition 'which conceptualizes identity formation in social, historical, political and ideological terms' (p. 186) was adopted to complement Erikson's theory. This approach concentrates on social rather than biological fathering, which is in line with the traditional view of character and family that characterizes African families. This approach emphasizes that collective fatherhood has the capability of enhancing the child's social capital and may contribute to the child's emotional, educational, social, and cognitive development. The sociocultural psychological tradition assisted in exploring the women's relationships with their social fathers.
Participants
The study employed a case study design located within a qualitative research approach. Nine women, aged between 18 and 35, were recruited via purposive and snowball sampling. Participants were required to be Black African women between the ages of 18 and 35 years who had grown up with an absent father. The motivation behind the selection of this age group derived from Langa's (2014) contention that young adults between these ages are constantly and continually searching for their self-identity, especially within their cultural contexts, and thus seek to know both the maternal and paternal sides of their family. Given the higher prevalence of father absence among Black persons (Sonke Gender Justice and The Human Sciences Research Council, 2018), the study's focus was on Black African families only. As the first author had grown up with an absent father, she knew of other women with a similar upbringing, who in turn referred her to other potential participants. Due to the Coronavirus pandemic, all participants were approached via phone calls and WhatsApp.
In terms of the biographical profile, participants' ages ranged from 18 to 31 years. They were all Black females and their home languages included isiZulu, Sepedi, Setswana, and isiXhosa. Two participants had never met their father, while seven knew who their father was but his whereabouts were unknown.
Interview guide
An interview schedule was employed which included closed-ended questions on biographical data and open-ended items designed to explore the five objectives of the study. The interview schedule was pre-tested on two participants who met the inclusion criteria but did not participate in the study. A sample of the interview questions include (1) Was there any other male person who acted as a 'social father'? In other words, did any other man behave towards you in the manner of a father even though he was not your biological father? (2) Has having an absent father affected the way you feel about yourself, that is, your identity as a person with an absent father? If yes, please describe. (3) Do you think having an absent father may have influenced your attitudes towards men? If yes, in what way?
Procedure
Prior to the commencement of the interviews, participants were given Information Sheets explaining the purpose and procedures of the study and their rights as research participants. They were requested to consent to participation in the study and for audio-recordings of the interviews. Seven participants were interviewed face-to-face while three were interviewed telephonically due to the COVID-19 pandemic.
Ethical considerations
The following ethical considerations were observed: voluntary participation, confidentiality, the right to withdraw from the study, and the right to decline to answer any questions participants felt uncomfortable answering. Due to the sensitive nature of the research, counselling was made available to any persons who might have experienced distress as a result of the interview. The study was granted clearance by the University of Johannesburg Faculty of Humanities Research Ethics Committee (Reference number: REC-01-242-2020).
Data analysis
Data were thematically analysed following the six stages outlined by Braun and Clarke (2006), namely, familiarization with the data, generating initial codes, searching for themes, reviewing themes, defining and naming themes, and producing the report.
Results
Results are presented in accordance with the objectives of the study.
Attitudes towards biological father (and any social father) and whether he was physically and/or emotionally absent
Five themes were identified under this objective.

Theme 1: hatred for biological father due to lack of involvement in their lives. Three of the participants harboured feelings of hatred towards their biological fathers due to their fathers' lack of involvement in their lives. One participant expressed animosity towards her biological father for not providing her with protection and essential needs and felt that her father did not make any effort to develop a relationship with her. She explained,

. . . As women we are vulnerable, so you need someone to help you be strong. Someone to help show you the way. And yes, we do have a mother, but there are things that need a father that a mother cannot provide. Even how to protect myself as a woman from GBV, rape. . . . At most I hated him for not standing up for me, in terms of having a relationship with me, I felt like he respected his wife and his relationship with his other children more than me. (Participant 1)

Theme 2: no real feelings because he did not play a role in her life. Participant 3 maintained that she did not have any feelings or attitudes towards her father simply because he did not raise or provide for her; this absence therefore created an emotional disconnect between them.
Theme 3: no hatred towards father even though he was absent. Participant 5 emphasized that despite her father having left the family, she did not harbour any negative feelings towards him. 'But I do not hate him'.
Theme 4: conflict over whether to reconcile with father. Participant 4's biological father returned after many years, which resulted in her being conflicted on whether to reconcile with him.
Theme 5: appreciation for the positive roles played by social father/s in their lives. Eight of the nine participants appreciated the positive roles undertaken by their social fathers. These social fathers played two distinct roles in the participants' lives: some supported the participants both materially and emotionally, while others supported them materially only. Participant 4 reported, 'I feel like those were the only men present in my life, I would say those were my father figures to me. Because they were there, they supported me in life up to where I am now . . . '.
Perceptions regarding the impact of having an absent father on their sense of identity and self-esteem
When asked to elaborate on the perceived impact of having an absent father on their sense of identity and self-esteem, some participants felt that they did not have a sense of belonging, while others felt that they belonged to a family. Moreover, some participants felt that having an absent father had negatively impacted their self-confidence and self-esteem; while others felt that not having a father had not exerted any negative impact in this regard.
Theme 1: no sense of belonging. Participants 1 and 3 felt that having an absent father resulted in their lack of self-identity and sense of belonging. Participant 1 felt that if her father had been present, she would have developed a positive self-identity. She explained,

I feel like if he were here, I would know myself even more, I would have a sense of self even more because, you know as a father you must play a certain role in your daughter's life. So, if you are not there something is missing. So, I think if he were to be here, I would be more self-aware, be stable in myself. But now I feel like I am all over the place because I feel like I do not belong anywhere. (Participant 1)

Participant 3 expressed the same sentiment, feeling that her father's absence had left her without self-awareness and without a sense of belonging anywhere.

Theme 2: had a sense of belonging. Three participants felt that their sense of belonging was well developed as they could easily identify with their maternal family. Participant 5 stated,

Culturally, I always felt like I belong. My mom is Sotho and father is Xhosa and both my name and surname are Sotho. So, I never really felt like I should learn my father's language because I always felt like I belong.
For many of the participants, culture played a significant role in developing a sense of belonging, as expressed by participants using the father's surname. Participant 1 commented, I am culture orientated and when my mom moved on and being married, I did not change my surname and the reason behind it is because my father paid damages for me, so clearly, I am identified as part of his family. But I cannot practice some cultural things on his side due to the wife blocking me.
Theme 3: father absence impacted negatively on confidence and self-esteem. For three participants, living with an absent father affected their confidence and self-esteem, as they had to face life's challenges alone. The following verbatim quote describes some of the psychological hardships they faced: The fact that he is not here means that there is something missing that he was supposed to be doing in my life, especially when it comes to confidence, relationships, I think he was supposed to be there for me, but because he is not here, I have to figure it out on my own, which is hard as well.
I do not love myself as I should. Because I do not feel worthy of love and loving myself. Because I believe that love is taught, and I was never taught how to love myself. So, I do not know how to love myself. No, no self-confidence. I was not taught how to be confident, for me I was not taught how to confident as well. So that is why I do not have high self-confidence because I feel like I was not appreciated or loved from an early age. (Participant 2)

Theme 4: father absence did not impact negatively on confidence and self-esteem. In contrast to the previous participant, two participants felt that their self-esteem was not adversely affected by their father's absence but was enhanced because their mothers and/or grandmothers always affirmed them. Participant 5 asserted, 'I was raised by my mom and grandmother, so I felt really affirmed by them. I have never felt less than'. In a similar vein, Participant 8 reflected,

I think my mom did a very good job raising me. So, I have a very high self-confidence because of my mom but I think if he were still alive it would have been higher than it is right now.
Theme 5: father absence affects sense of identity. Two participants emphasized that their sense of identity was adversely affected: 'I also think the issue of identity is big, I think everyone deserves to know where they come from and who they identify themselves with, that gives one some sort of belonging' (Participant 9).
The perceived impact of having an absent father on attitudes towards men and relationships with men
Five themes surfaced in this regard.
Theme 1: fear of being hurt by other men. Participants 1, 3, and 6 shared the experience of how they feared being hurt by men. They further explained that they had developed trust issues and had difficulty interacting with men. Participant 1 commented, . . . I built a wall so that another man does not hurt me the way my father did. For example, the relation I had with my son's father. Like a lot of times, I was expecting him to hurt me, I was so used to hurt.
Participant 2 feared being hurt by romantic partners: 'Yes, in terms of romantic relationship I fear being hurt'.
Theme 2: fear of being abandoned. Participant 9 reflected that one of her fears was being abandoned by her romantic partners: But what I can say is that I view men and relationships with them in a way that is scary, I sometimes do not know how to interact with them or even know what to say to them. When I get into relationships, I fear that they will leave me, just like how my dad left. But I am working on that now, I can't continue living thinking that every man I meet will hurt me.
Yes, I am a kind of person who has attachment issues. I feel like if somebody comes, they will end up leaving me. I experienced them even in my past relationships. That I feel like if somebody comes, they just want to benefit from me or they want my money or resources and they are going to leave. Like I would say 'you are here but you are going to leave just like my father left me when I was only three'. (Participant 4)

Theme 3: inability to trust men. The following response from Participant 2 explains the participant's inability to trust male romantic partners:

Even though I believe that a person is genuine, and they have good intentions, I end up not believing or trusting them. So, I always push good men away because I feel like they are too good to be true.
Theme 4: negatively affected relationships with other men and choice of partners. Some participants believed that their father's abandonment had a negative impact on how they interact with men in general. Participant 3 clearly stated that she blames her father for every man that leaves her life:

I always blamed him for every man that left me in my relationships. I am always insecure around men uhm. A father needs to teach you how to phatha (behave) yourself in the society, regarding especially men, how to respect yourself as a girl towards men, you know, giving you the confidence to live in a society that has men.

Participant 7 acknowledged that she sought love and validation from the men around her, particularly romantic partners: 'I do not know how to behave around men. I have had many failed relationships because I would seek my father or fatherly love in every man I dated, which was unfair on them'.
Participant 6 contended that father absence damages a person emotionally because it can result in potentially wrong choice of partners: Well, I really think growing up without a father, damages one emotionally, and I think with us women are kind of different from guys, in a sense that having a father figure or biological father around helps you in determining the type of relationships you have with men around you, and with the type of romantic partners one chooses. So yeah, that's what I think.
However, Participant 5 changed her perspective on men after she got married: Because now I am married, before my husband I used to feel negative towards men, I used to not want to get married. And then when I met him my husband, my attitude completely changed. So yes, I think having an absent father changed my attitudes towards men. Because I always thought that's how men are 'they make children and then they leave'.
Theme 5: attachment issues. This theme emerged for two participants who explained that they developed unhealthy attachment issues in their romantic relationships in an attempt to fill the void that was left by their absent father.
Yes, I am a kind of person who has attachment issues. I feel like if somebody comes, they just want to benefit from me or they want my money or resources and they are going to leave. Like I would say 'you are here but you are going to leave just like my father left me when I was only three'. (Participant 4)

The same theme was reflected in Participant 6's response:

Yes, it has, so in my past relationships I would always seek validation from men or date older men, or even have attachment issues because I would subconsciously want to feel how it was to have a father around, but I did it unaware, and also, I don't trust men as much because I always feel they will hurt and leave me as my father did.
Benefits
Theme 1: derived no benefits from having an absent father. Five participants believed that living with an absent father did not have any benefits. One participant believed that benefits only apply to women who have present fathers in their lives. Participant 1 reflected, 'I do not think there are any benefits because I do not know what I would have gotten if I had a relationship with my father. The benefits only apply to those who had fathers'. Participant 5 reiterated that a child needs both parents. 'I feel like every child needs both parents. So, him not being in my life does not have any benefits. Because I wish like he was still alive so he can teach me a lot of things'.
Theme 2: derived benefits from having an absent father. While some participants did not benefit from having an absent father, others believed it benefitted them in other ways. These benefits varied between having the resilience to survive in life, to the belief that father absence builds emotional strength. 'I think it made me stronger, it made me be able to stand up for myself and be able to talk for myself, especially against men' (Participant 2). Participant 9 also concurred with this viewpoint by stating that the benefit she derived from father absence was emotional strength and independence: 'The benefits would be that I learnt to fend for myself, I sort of know how to protect myself emotionally against men, like I already know red flags'.
Challenges
Theme 1: financial challenges. Four participants reported that they faced financial challenges as their single mothers could not always afford to meet their needs. Participants 7 and 8 both described how their mothers had struggled to provide for them: 'The challenges were there when I was younger. My mom really struggled financially to support me' (Participant 7). However, Participant 9 was grateful to her single mother, who worked hard and managed to provide for her.
Theme 2: emotional challenges. Four participants also reported that they experienced some emotional challenges, with one participant explaining that she faced several challenges, including financial, emotional, and psychological difficulties: 'The challenges were financially and emotionally. I have low self-esteem and low self-confidence' (Participant 2). Participants 3 and 4 emphasized that lack of emotional support from their father was the main challenge they faced: 'Challenge are things like what a father must do in child's life. Support their child, be there for them emotionally' (Participant 3). '. . . emotionally when you need someone to talk to things like that' (Participant 4).
Participant 9 shared the poignant experiences that she faced: . . . emotional challenges, like sometimes I would just isolate myself in my room and ask myself why he left me. This would lead me into some sort of anxiety. I missed how I would open to him about things I was going through like with my friends or at school.
Theme 3: no challenges as mother provided financial and emotional support. In contrast to the challenges that some of the participants faced, others confirmed that their single mother was able to offer financial and emotional support to them. 'My mom really did a good job. She managed to be my mother and my father at the same time' (Participant 3). There was also appreciation and gratitude for the roles played by their mothers. 'But financially my mom really provided me with everything I needed. We never lacked anything, and I am grateful for that' (Participant 5).
Coping strategies adopted in response to having an absent father

The participants developed strategies that enabled them to better cope with the challenges caused by father absence. These strategies were categorized into two groups: positive coping and negative coping.
Theme 1: positive coping. One participant stated that her coping strategy was being independent and working to pay for her tertiary studies. Participant 2 explained that she had to overcome everything and develop self-love to cope. She learned that because she could not receive love from her father, she needed to love herself. Participant 3 reflected,

Every time a person mentions their father and good things about them. Or simple things like 'let me call my dad', it triggers me. And then I am gonna have a build-up of anger and after some time I am gonna cry it out, then I will be fine.
Both Participants 8 and 9 used acceptance as a coping mechanism. Participant 8, whose father had passed on, stated, 'I have been telling myself that he is no more and believing that my mother will do a good job. So, my coping strategy was acceptance'. Similarly, Participant 9 explained, 'I just had to accept that he started his new family, and he probably did not want anything to do with me, so that made me cope with his absence'.
Theme 2: negative coping
Two participants resorted to negative coping strategies. Participant 6 explained, 'I started hanging out with the wrong crowd, I think that's how I coped', while Participant 7 described how she constantly felt the need to numb her feelings, thus leading to her addiction to smoking and drinking: Smoking, and drinking alcohol was probably my coping mechanism, maybe for emotional, because I would want to numb my feelings. Although, I was not diagnosed or whatever, I think my addictions back then when I was a teenager were because of my absent father.
Discussion
The participants' narratives suggested that most of them harboured feelings of hatred and anger towards their absent biological father because of his lack of protection and involvement in their lives. These findings are consistent with those of Tau (2020), who found that some participants expressed feelings of hatred towards their biological father for having neglected his children. The implication of this finding is that these women could be living with unresolved feelings of resentment, which could negatively affect their other relationships and sense of well-being.
At the same time, they expressed their gratitude towards their social fathers for the physical, emotional, and academic support they provided. According to Mkhize's sociocultural psychological tradition, within African families, child-rearing is viewed as the shared obligation of the extended family and is linked to the African concept of Ubuntu. Consistent with the African saying that it takes a village to raise a child, 'the entire community is thus expected to play a vital role in raising children' (Mkhize, 2008, p. 23). Given the high prevalence of absent fathers in South Africa, shared child-rearing is a strength of African families that needs to be nurtured.
Despite the support from their social fathers, the women struggled to develop a sense of belonging and identity according to their cultural beliefs. These findings were aligned with those of Smith et al. (2014) who explored social identity in a group of young men and women aged 21 to 35 years and found that 'father absence was associated with lower self-perceptions and non-use of paternal surname with diminished sense of identity' (p. 433). These findings also have implications in terms of Erikson's psychosocial development theory as the fifth stage of his theory, namely, identity versus role confusion, suggests that failure to form a sense of identity may result in role confusion (McLeod, 2018). Moreover, in addition to a diminished sense of identity, most participants experienced low self-esteem. These findings support Rosenberg and Wilcox's (2006) view that emotional well-being plays a significant role in a child's life, inclusive of high self-esteem and confidence, in the sense that children who have present fathers tend to grow up to be emotionally secure and confident enough to positively navigate through life. In a similar vein, Krohn and Bogan's (2001) view that females tend to seek acceptance from other people due to experiencing non-acceptance from the father resonates well with the emotional challenges that some participants faced. This notion suggests that an emotionally unavailable father may cause a child to have low self-esteem and confidence, due to the lack of affirmation from the father.
The participants' responses reflected a lack of trust when it came to intimate relationships with other men, attachment issues, fear of being hurt, fear of commitment and a general inability to sustain healthy relationships. Consistent with these findings, Wilson (2006) asserts that for some women, the absence of a father figure may 'leave them with a distrust of men so deep that, in extreme cases, marriage is out of the question' (p. 33). These findings are also in line with Erikson's sixth stage of the psychosocial development theory, namely, intimacy versus isolation. Erikson highlighted the importance of immediate families -particularly fathers -in fostering positive identities thereby ensuring that young women are secure in their intimate relationships with men. These findings suggest that growing up with an absent father may adversely affect women's ability to successfully navigate the psychosocial stages of development necessary for healthy functioning.
However, despite their negative relationships with men, some participants reported deriving benefits from living with an absent father such as high levels of independence and resilience, while for others there were challenges such as emotional distress and financial difficulties. While some participants adopted positive coping strategies, others resorted to negative coping mechanisms.
In interpreting these findings, it is important to acknowledge the limitation of possible bias stemming from the first author having grown up with an absent father, although efforts were made to reduce this weakness by engaging in reflexivity. A second limitation relates to the small, nonprobability sample which precludes generalization of the findings. A third limitation involved the use of telephonic interviews necessitated by the pandemic, which made it difficult for the interviewer to pick up nonverbal cues such as body language and facial expressions, and gauge participants' emotional state.
Conclusion
This research indicated that the phenomenon of father absence had a profound impact on the psychological, emotional, and financial well-being of the women in this study. However, the participants' experiences also revealed that a child who grows up with an absent father may also develop resilience and independence due to the support offered by a single mother, suggesting that being raised by a single mother is not always a negative experience. Nevertheless, the study underlines the importance of promoting positive involvement of fathers in their daughters' lives -even if they do not live with them. A particular contribution of the study was the melding of individualistic Western psychological insights from Erikson's (1993) theory with collectivist indigenous African psychological and sociocultural traditions from Mkhize's (2006) theory, to enhance understanding of the impact of father absence on adult women. The results highlighted the value of cultural factors such as the philosophy of Ubuntu and the collective responsibility of the community for raising a child, and the importance of cultural traditions such as introducing a child to the ancestors and giving him or her one's surname.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The first author received funding from the Tessa Hochfeld Bursary administered by the Centre for Social Development in Africa at University of Johannesburg.
More landings for higher profit? Inverse demand analysis of the bluefin tuna auction price in Japan and economic incentives in global bluefin tuna fisheries management
This paper estimates the price changes in global bluefin tuna (BFT) markets in response to shifts in regional and global landings, in order to evaluate the conservation and economic incentives arising from changes in the Total Allowable Catch (TAC) managed by all three Regional Fisheries Management Organizations. A fisherman's income, and thus the financial incentive to accept management measures controlling catch levels, depends in part on how responsive price is to overall catch. Individual fishermen, with their own best interests in mind, tend to want to increase their individual landings, creating an incentive to ask for a higher industry TAC without realizing the possible revenue loss due to the resulting falling prices. To protect the value of all stakeholders' property rights, a consensus to avoid abruptly raising the TAC, without first considering the potential loss due to the market response, is needed. Alternatively, if revenue increases with a lower TAC, a positive economic incentive for conservation is created when price increases proportionately more than supply falls, with harvest profits further boosted by lower costs of production. To capture the complexity of substitution across various sources of supply and product forms, a general synthetic inverse demand system is estimated to identify the impact of overall landings on BFT prices. This system estimates price flexibilities of both fresh and frozen longline-caught sashimi-grade tunas (Pacific, Atlantic and southern bluefins, and bigeye) at the Tokyo Center Market in Japan, including the Tsukiji Market, the world's largest fish auction market, which served as the single global price leader for BFT. The resulting estimation shows that the own-quantity price flexibilities of every type of fresh and frozen BFT are less than unity in absolute value, i.e., price inflexible in their own consumption. This creates poor individual incentives for fishermen to reduce wild or farmed BFT supply, as each has a chance to increase their own revenue under the unlikely condition that total supply is fixed. However, given the rapid increases in the TAC of Eastern Atlantic bluefin tuna (EABFT) in the coming years, suppliers may not be better off: if the estimated scale flexibility is greater than one in absolute value, price will drop proportionally faster than supply rises and total revenue will fall. Based on the estimated scale flexibility of frozen BFT, which is slightly less than unity, the frozen subsector of EABFT suppliers is the only winner under the supply increases. Suppliers of frozen BFT in other regions, of fresh BFT (in the Atlantic and elsewhere), and of southern BFT and bigeye tuna will all be harmed through lower revenue. Additionally, while total revenue might stay the same for frozen BFT suppliers, fishermen will potentially receive lower profits due to the higher operating costs associated with increased landings when the supply of EABFT increases. Given the number of sectors that ultimately lose financially in the short term, and given the ecological (and production) risks accompanying an abrupt increase in fishing pressure in the long term, the global economic losses resulting from an increase in the allowable catch of Atlantic bluefin tuna will outweigh any potential increases to revenue.
Introduction
One of the primary challenges in fisheries management is to determine the right set of incentives for resource conservation and for management of the industry that depends on the resource. Fisheries managed by a Total Allowable Catch (TAC) and not covered by individual or group property rights are driven by the "race to fish." The tragedy-of-the-commons externality creates incentives among harvesters to expand the TAC and allow more catch, in pursuit of higher revenues and profits. However, is expanding the TAC always a preferred solution? Does an increase in TAC truly guarantee an increase in revenues and profits? After all, raising the TAC can also lower the resource stock and thus the long-run, sustainable supply.
Furthermore, are there other economic incentives that arise from TAC-managed fisheries under open access in addition to the "race to fish"? Are there related management options? The answers depend, in part, on the nature of the product's price response to changes in TAC. TAC, in this case, is the aggregate supply in the market in which product prices form. Depending on the responsiveness of prices to declines in quantities, reductions in TAC that favor conservation might increase prices more than they cause a fall in quantity. In this case, the accompanying revenue increase would compensate for the decrease in quantity under a reduced TAC. Access fees of coastal states can also potentially be raised [1]. A lower TAC and larger resource stock following population growth can also increase society's welfare as measured by non-market economic values, such as increased biodiversity and ecosystem services (indirect use values) and greater assurance of the continued existence of a species and richer ecosystem (existence value). In short, conservation, in circumstances when reduced overall catch increases prices, can increase economic rents and generate positive economic incentives for conservation that enhance acceptance of management measures and cooperation, i.e., conservation can be profitable.
Besides economic rent (producer surplus, economic profits), the other half of total economic benefits is consumer benefits (consumer surplus or the more preferred measures of compensating or equivalent variation). Part of the increase in producer benefits with a higher price and lower quantity comes from a transfer from consumer benefits. To the extent that the inverse demand curve estimated in this paper is an equilibrium demand curve, the welfare measures capture both consumer and producer surplus [2]. Even when consumer benefits from direct use values decline due to a rise in price and fall in quantity, consumers can gain through increased enjoyment of non-market values such as indirect use value and existence value when there is more conservation from larger resource stocks. In short, the consumer picture is more complex and consumer gains in non-market benefits can potentially outweigh any reduction in consumer benefits from the bluefin market's higher prices. Conversely, increases in TAC can almost paradoxically lower producer revenues and profits.
As early as 1696, Gregory King [3] had shown how an abundant harvest would dramatically reduce a farmer's revenue, long before Augustin Cournot [4] and Alfred Marshall [5] specified the conditions under which King's law holds, i.e., prices falling proportionately more than the proportionate increase in supply. However, the fishing industry is generally less concerned about a scenario like King's foundational example-a price flexibility of demand greater than unity in absolute value. Given the persistent challenges posed by overfishing and excess effort, fisheries managers are typically far more concerned with the consequences of catch reduction than expansion, at the current level of fish stocks [1]; [6][7][8][9].
In addition to the efficacy of an expanded TAC, two related questions can be asked: are the gains from rights-based management due to the incentives created by secure and transferable rights, or due to a more conservative or better-enforced TAC? And are there additional unrecognized-and even unused-economic incentives available to fisheries managers, aside from the TACs (regardless of whether or not there are some types of property rights)? (This issue extends to many cap-and-trade systems.) These questions have only rarely been raised or thoroughly investigated in the fisheries and property rights literature [10] [11]. While a better understanding of the price responsiveness to changes in both quantities landed and TACs cannot directly answer the second question, we can unequivocally state that price responsiveness leading to declines (increases) in revenues with reduced quantities creates incentives that counter (reinforce) the gains in positive incentives from rights-based management [12]. A related issue is the proportionate change in costs. With constant returns to scale, proportionate changes in profits or economic rent track changes in revenues. Unitary price flexibilities lead to no changes in revenues whether the TACs are expanded or contracted.
In sum, economic incentives created through product price responsiveness have substantial effects on conservation, industry profits, and society's economic rent, but remain an overlooked policy consideration. When price rises more than proportionately to a decrease in supply, thereby affecting revenue and profit more than proportionately, an additional conservation incentive is created: conservation pays. When price falls proportionately more than the increase in TAC, industry has an economic incentive to act collaboratively and discourage the increase in TAC to protect the profitability of their fishery. Evaluating these economic incentives under a TAC system (or any industry quota/cap on overall output) requires information on the product price responsiveness, calling for the estimation of a demand system. To that end, a series of choices must be made before a good model specification can be reached. The demand function can either be linear or logarithmic, ordinary or inverse, final or derived, Marshallian or Hicksian, static or dynamic, detailed or aggregated, etc. These options are not neutral and can lead to substantial differences in the empirical estimates of how the price of a product responds to a change in its own quantity ("price flexibility") and to a change in all of the combined quantities in a market ("scale flexibility"), which have important consequences for the prediction of the effects of a supply shock on market prices [13][14][15][16]. A temporary or permanent reduction in catches may not only lower costs, but also raise revenues, profits, and resource rent, depending on the proportional responsiveness of price to proportional changes in the quantities of each product available, i.e., the price flexibility for each product.
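The revenue logic invoked throughout this section follows from a standard first-order identity, stated here for reference (this is textbook arithmetic, not a result of the estimation below). With revenue $R = p(q)\,q$ and price flexibility $f \equiv \partial \ln p/\partial \ln q < 0$,

$$ d\ln R = d\ln p + d\ln q = (1 + f)\, d\ln q, $$

so revenue falls with an increase in quantity whenever $|f| > 1$ (price flexible, King's case), rises with quantity whenever $|f| < 1$ (price inflexible), and is unchanged at $|f| = 1$.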
Recent spikes in the TAC of the Eastern Atlantic bluefin tuna (EABFT) stock, and the corresponding rise in catch, may have seriously impacted the global bluefin tuna (BFT) price through the auction market in Japan, which generally acts as the global price-setter for the bluefin sashimi trade. Previous estimates of price flexibility and inverse demand for tuna products have largely been ad hoc and focused on single species [17][18][19]. Without an empirical estimation of a theoretically plausible demand system, consumption substitution possibilities among species are excluded and estimates of the price flexibility could be biased [20][21]. Depending on the price responsiveness, increases in the TAC of bluefin tuna can lead to price decreases of a greater proportion than the increase in quantity, and revenue and profit decreases will soon follow.
Understanding this underlying market force is necessary if more fundamental management and conservation questions are to be answered. Is a stabilizing quota at a conservative level for global BFT a better option for the industry, and related industries? What was the impact of the recent spike in eastern Atlantic BFT catch on the price of Atlantic BFT and its substitutes? Would boosting the eastern Atlantic BFT quota further benefit the fishing industry? And who stands to win and lose from supply increases?
This study proposes to better understand the implications of these price effects through an original estimation based on the General Synthetic Inverse Demand Systems (GSIDS) approach [22]. This family of demand systems nests several flexible specifications and gives more robust estimates than other demand models [23][24]. The estimates of own- and cross-quantity flexibility and of scale flexibility can be used to assess the impact of global quota management control and other supply shifters.
The paper is organized as follows: the marketing value chain of sashimi-grade tuna products is introduced in the second section to justify the estimation procedure motivated by the market delineation literature. The GSIDS model is presented in the third section, followed by presentation of the results. Elasticity and flexibility coefficients found in the empirical literature and their consequences on fisheries management are discussed in the last section.
Global bluefin tuna landings and its auction market in Japan
Globally, there are three species of BFT: Atlantic (the largest and most threatened [25]), Pacific, and Southern BFT, which have all been regulated by TAC in their trade, including import, export, and re-export, since the 1990s. Since 2015, while both Pacific BFT and Southern BFT have shown no signs of increasing TACs, the eastern Atlantic BFT stock has started to show signs of rebuilding with a hike in TAC by 20 percent per year between 2015 and 2017 (a 75.22% increase over three years), while the TAC for western Atlantic BFT stayed the same at 2,000 metric tons (mt). As depicted in Fig 1, with the rise of eastern Atlantic BFT quota from 13,500 mt in 2014 to 23,655 mt in 2017, the global BFT landings increased by 16.92% from 43,895 mt in 2014 to 51,322 mt in 2017.
The stock areas of the Atlantic bluefin tuna landings from 1950-2016 (Fig 1(A)) are defined by ICCAT as Western Atlantic, Mediterranean, and Eastern Atlantic, abbreviated as ATW, MED, and ATE. The projected TACs from 2017-2020 in Fig 1 are defined by this study to simulate the quota for ATE+MED increasing to 30,000 mt in 2020 under Scenario #1, or to 40,000 mt under Scenario #2.
The two populations of Atlantic BFT, the Eastern (EABFT) and Western (WABFT) stocks, mix. The majority of the Atlantic BFT landings come from the eastern population (including catch in the Mediterranean Sea) (Fig 1(A)). Before 1950, fishermen in the western Atlantic held little interest in BFT. Commercial catch was virtually nonexistent, but demand grew quickly in the following years. In 1964, for example, approximately 18,000 mt of BFT were caught in the western Atlantic, 18 times the 1960 catch level. This intense fishing pressure soon took its toll, leading to a dramatic decrease in the western Atlantic BFT population. By the end of the 1990s, the catch had fallen by 80 percent. In 1998, recognizing that the population was at a low point, the International Commission for the Conservation of Atlantic Tunas (ICCAT) implemented a recovery plan that set the TAC and instituted a stricter minimum legal size to limit fishing mortality of juvenile Atlantic BFT. In 2009, many governments and environmental organizations called for a suspension of international trade in BFT. ICCAT finally responded by reducing the TAC to scientifically recommended levels. In 2006, ICCAT (based on Recommendation Rec-06-05) adopted a 15-year rebuilding period for Eastern Atlantic and Mediterranean BFT, starting in 2007 and continuing through 2022, with the objective of recovering the stock to the biomass level that enables a fish stock to deliver the maximum sustainable yield (BMSY), with greater than 50% probability (increased to 60% probability in 2010).

Fig 1(B) shows that global landings of BFT reached 143,000 mt in 1961 and fell below 50,000 mt by the early 1990s-a 65% decrease. Landings subsequently increased to 80,000 mt but since 2010 fell again to lower than 40,000 mt. Because the FAO global tuna landings dataset is only updated through 2010, the landings from 2011 to 2015 are collected from reports provided by the three tuna Regional Fishery Management Organizations (RFMOs): ICCAT, IATTC, and CCSBT. The projection of future landings is plotted based on possible TAC scenarios set by the RFMOs from 2016 to 2020.
For example, the range of the shocks was specified to simulate an increase in the quota of EABFT to 30,000 mt (scenario #1) or 40,000 mt (scenario #2) in 2020. Under EABFT quota scenario #1, annual global BFT landings are expected to be 57,917 mt in 2020-31.94% more than the global landings in 2014. Under EABFT quota scenario #2, annual global BFT landings will reach 67,917 mt in 2020-a 54.73% increase from 2014.
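The scenario arithmetic above can be verified directly from the reported landings totals. A minimal Python sketch (variable names are ours; the figures are those quoted in this section):

```python
# Reported global BFT landings (metric tons, round weight)
landings_2014 = 43_895
landings_2017 = 51_322

# Projected global landings under the two EABFT quota scenarios for 2020
scenario_1_2020 = 57_917  # EABFT quota raised to 30,000 mt
scenario_2_2020 = 67_917  # EABFT quota raised to 40,000 mt

def pct_change(new: float, base: float) -> float:
    """Percentage change from base to new."""
    return (new - base) / base * 100

print(f"2014 -> 2017: {pct_change(landings_2017, landings_2014):.2f}%")              # 16.92%
print(f"2014 -> 2020, scenario #1: {pct_change(scenario_1_2020, landings_2014):.2f}%")  # 31.94%
print(f"2014 -> 2020, scenario #2: {pct_change(scenario_2_2020, landings_2014):.2f}%")  # 54.73%
```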
According to the latest ICCAT stock assessment in 2014, the eastern Atlantic BFT population increased dramatically. The goal of achieving BMSY (through 2022), with at least 60% probability, might already have been reached, or soon will be. Following the advice of scientists and better controlling for overcapacity and Illegal, Unreported, and Unregulated (IUU) fishing has been effective. However, there is no discussion of how ICCAT should consider adding a new phase to the current recovery plan. Nonetheless, while eastern Atlantic BFT is on the road to recovery, ICCAT scientists have repeatedly pointed out a "problematic" level of uncertainty in the assessment for eastern Atlantic BFT, which makes it difficult to determine the speed and magnitude of the recovery. As indicated by previous ICCAT Standing Committee on Research and Statistics (SCRS) reports, a long-lived species such as bluefin tuna requires some time (over 10 years) for the stock to realize the benefit of recovery. The overhauled stock assessment, which was supposed to be completed in 2015 but was delayed until July 2017, was intended to address this uncertainty.
Fifteen governments hold shares of the eastern Atlantic BFT quota, with the EU nations holding 59% of the total eastern quota. Multiple countries determine the quota, including Spain, Italy, France, Japan, and N. African Countries. In addition, the EU industry has considerable influence in ICCAT's TAC decision-making.
Although BFT have been part of the Mediterranean diet for thousands of years, the 1990s marked the explosion of the industrial-scale fishery in the eastern Atlantic. That decade brought a significant increase in the size of the purse-seine fleet-fishing boats that use large nets to surround entire schools of bluefin. Rising Japanese demand for fatty tuna to accommodate the growing appetite for high-quality sushi also led to the development of tuna "ranching." In 2015, purse-seiner fleets/countries with ranching capacity in the eastern Atlantic landed 63.39% of eastern Atlantic BFT. The gain in weight after 6 months of ranching was significant: for the same quota, ranching operations effectively sent 30-60% more weight of bluefin tuna to Japan. In this case, the negative impact upon price from an increase in TAC for the purse-seining fleet targeting juvenile EABFT for ranching will be even more severe than when viewed strictly through the lens of quota allocations. Every additional unit of TAC used to capture juvenile BFT yielded 1.3 to 1.6 times the volume when sold at auction, furthering the impact on the auction price. Ranching's ability to deliver higher volume to the market than what was directly harvested means ranching operations had more impact on price than they otherwise would have. Note that none of the western Atlantic BFT was ranched, and the majority of those catches were shipped fresh directly to Japan for auction.
Bluefin is typically consumed as sashimi. Sashimi is defined as sliced raw fish meat served on a plate with various vegetables (Miyake et al. 2010, p. 63). The belly meat of bluefin tuna is the part most demanded by consumers. Japan constitutes the largest fresh, chilled, and frozen market for tuna sashimi in the world (60 to 80% of global demand). Since the early 1990s, more than 80% of the fresh and frozen Pacific BFT, Atlantic BFT, and southern bluefin (SBT) sashimi consumed in Japan has come from imports, while Japanese domestic longline landings have steadily fallen (Fig 1(B)).
Most of the tuna auctioned in Japan is measured in terms of Dressed Weight (DWT). DWT measures the weight of the animal after it has been gilled and gutted and the head and fins have been removed. The corresponding round weight (which is reported in catch statistics) is 1.25 times the DWT, based on the ICCAT conversion factors for fish products adopted by the SCRS for major species. In some cases, landings of tunas are reported in terms of Belly Meat (BM), whose corresponding round weight is 10.28 times the BM weight.

Imports of frozen SBT into Japan increased steadily from 1997-98 due to successful cage aquaculture in Australia, resulting in a downward-trending SBT import price. The supermarket channels in Japan have attempted to attract customers by offering special discounts for farmed SBT, whose price is cheaper and supply more stable [26]. The domestic price has become more sensitive to imports. In general, BFT prices in Japan have displayed a decreasing trend since the economic recession of the 1990s (Miyake et al. 2010).
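Since landings are reported in a mix of units, aggregating supply requires these round-weight conversions. A minimal Python sketch using only the two conversion factors quoted above (the function and its names are illustrative, not taken from the paper's code):

```python
# ICCAT/SCRS conversion factors quoted above
DWT_TO_ROUND = 1.25   # round weight = 1.25 x dressed weight
BM_TO_ROUND = 10.28   # round weight = 10.28 x belly-meat weight

def round_weight(quantity_mt: float, unit: str) -> float:
    """Convert a reported quantity (metric tons) to round weight."""
    factors = {"round": 1.0, "dwt": DWT_TO_ROUND, "bm": BM_TO_ROUND}
    return quantity_mt * factors[unit]

# Example: 100 mt reported as dressed weight corresponds to 125 mt round weight
print(round_weight(100, "dwt"))  # 125.0
```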
In addition, most of the frozen tuna consumed in Japan is so-called super frozen tuna, which has been frozen at -60˚C; approximately 80% of tuna sold in the Japanese market lies in this category. This unique method of freezing tuna below -60˚C on board maintains the freshness of the catch. Tuna is frozen before rigor mortis sets in. Immediately after the tuna is caught, it is quickly cleaned, bled, processed, and flash frozen to -60˚C. This process prevents dehydration, spoilage, and bacteria growth, with very little moisture loss when thawed and no added chemicals or preservatives. It is said to be "fresher than fresh." Because of the scarcity of fresh BFT and extremely high prices in Japan, substituting different tuna species and alternating between fresh and frozen tuna is common [27][28] [20]. Japanese consumers have many substitutes for BFT, such as bigeye and yellowfin tuna. Bose and McIlgorm [20] utilized cointegration analysis to show that the substitution of yellowfin and bigeye tuna for bluefin tuna clearly exists in Japan. However, the price response to fluctuations in landings was not addressed, because only price data were used in their analysis. Sun and Wang [29] investigated the monthly fresh tuna auction market structure and the relationship between major fresh tuna auction markets in Taiwan and major fresh tuna auction markets in Tokyo. A multivariate ARMA model with a seasonal adjustment factor showed a significant relationship between Taiwanese and Japanese prices. Sun and Hsu [30] examined the price linkages of the frozen yellowfin and bigeye tuna markets for canning across three countries: Japan, Taiwan, and South Korea. Based on an error-correction model, they showed that all the price series exhibited similar linear long-run change patterns, with the law of one price prevailing internationally. However, the price response relationship they obtained applies to the frozen tuna raw-material market for canning and is not sufficient to address how the prices of bluefin tuna might be impacted by either bigeye or yellowfin tuna on the sashimi market.
With a multivariate Markov-switching error-correction model, Wu [31] demonstrated that the import prices of frozen bigeye and yellowfin tuna for the sashimi market in Japan had no significant impact on the decreasing price trend of the same species caught by the Japanese fleet during 1990-2006. The study instead found that the Japanese auction price demonstrated a structural change after the introduction of comparatively less expensive farmed bluefin tuna to Japanese grocery markets in the late 1990s.
Chiang et al. [21], examining the role of inventories on tuna auction prices in Japan and using the Rotterdam inverse demand system, found that frozen tunas of different species (bluefin, bigeye and yellowfin tuna) were likely to be close substitutes in consumption. Fresh and frozen tunas of the same species also demonstrated substitution in consumption, and inventories of frozen tunas had significant impacts on auction prices.
Consumer preference for sashimi products appeared to have changed in response to the Asian financial crisis in 1997 and 1998. After this breakpoint, Japanese demand shifted towards cheaper products, such as frozen yellowfin and bigeye, instead of the more expensive fresh/chilled bluefin tuna species [9]. Since a large majority of the premier sashimi-grade tuna is shipped daily from all over the world to the Tsukiji Market, the central metropolitan wholesale fish market in Tokyo, the BFT auction prices observed in Japan capture the tuna price response to global supply changes. Most of the BFT auctioned in the Tsukiji Market were fresh. In contrast, the majority of frozen BFT sales in Japan were direct sales to buyers without entering the auction market. The BFT auction price in the Tsukiji Market significantly drives direct-sale pricing because of the close substitution between fresh and frozen BFT. The Tsukiji Market thus creates a reference price for both fresh and frozen BFT within Japan and globally.
The frequent adjustments of the monthly bluefin auction price in Tsukiji demonstrate high responsiveness to the highly seasonal global landings. This justifies the use of an inverse demand analysis, also advocated by other analysts of global seafood markets [32] [16].
Estimates of the responsiveness of price to landings would help inform policy analysis on the economic benefit of global quota management control and the impact of changes in fishing capacity upon the value of total landings. Success of quota control relies crucially on guaranteed higher profit when the quota is managed such that the net present value of the fishery resources is maximized in the long run [33].
A general synthetic inverse demand system approach
Inverse demand system
In a study of the price formation of fish, Barten and Bettendorf (1989) developed a Hicksian inverse demand model, known as the Rotterdam inverse demand system (RIDS), using the direct utility function and the Hotelling-Wold identity. Barten [34] compared the RIDS and the almost ideal inverse demand system (AIIDS), along with two mixed models-one with Rotterdam-type price effects and AIDS-type income effects and the other with AIDS-type price effects and Rotterdam-type income effects. Barten [34] proposed a synthetic direct model that combined the features of the latter four models and allowed non-nested hypothesis tests among models. Brown et al. [22] specified a family of the general synthetic inverse demand systems (GSIDS), which included two flexible specifications: the RIDS and the AIIDS on the one hand [24], and the inverse demand system proposed by Laitinen and Theil [23] with a fourth variant on the other.
Background: Inverse demand systems
The question underlying inverse demand approaches is how, in a market with several exchanged commodities, relative variations in prices depend on relative variations in landed quantities. The Barten and Bettendorf [24] approach assumes there are $n$ goods, indexed $i, j, k$, a price vector $P = (p_i) \in \mathbb{R}^n_{+}$, a quantity vector $Q = (q_i) \in \mathbb{R}^n_{+}$, and a maximum expenditure $m \in \mathbb{R}_{+}$. Maximizing a utility function $U(Q)$ subject to the budget constraint $\sum_i p_i q_i = m$ allows each normalized price $\pi_i = p_i/m$ to be expressed as an explicit function of $Q$ and of the gradient of $U$, $D_i = \partial U/\partial q_i$ (the Hotelling-Wold identity).
The log derivatives of $P = (\pi_i)$ can be expressed as functions of the log derivatives of $Q$, of the partial derivatives of $U$, and of its Hessian $H_{ij} = \partial^2 U/\partial q_i \partial q_j$. That is, Barten and Bettendorf proceed as follows. The mathematical expression of $U$ is unknown. Putting $w_i = \pi_i q_i$ (the budget share of good $i$), they obtain an expression of the form

$$ w_i \, d\ln \pi_i = h_i \, d\ln Q + \sum_j h_{ij} \, d\ln q_j, \qquad d\ln Q = \sum_j w_j \, d\ln q_j, $$

where $h_i$ and $h_{ij}$ are complicated expressions in terms of the gradient and the Hessian of $U$. $h_i$ and $h_{ij}$ are considered characteristics of the inverse demand system: $h_i$ is the scale flexibility, and $h_{ij}$ the compensated cross-quantity flexibility. Having time series of prices $p_{i,t}$, quantities $q_{i,t}$, and expenditure $m_t$, each variable can then be expressed in finite first differences, e.g.

$$ \pi_{i,t} = p_{i,t}/m_t, \qquad \Delta \ln \pi_{i,t} = \ln \pi_{i,t} - \ln \pi_{i,t-1}, \qquad \Delta \ln q_{j,t} = \ln q_{j,t} - \ln q_{j,t-1}. $$

The unknown parameters $h_i$ and $h_{ij}$ are then estimated with a maximum likelihood method.
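As a concrete illustration of this data construction step, the budget shares, normalized prices, and log-differences can be built from monthly price and quantity series as follows (a minimal Python sketch, assuming the data sit in two pandas DataFrames with one column per product; averaged budget shares are used, as is conventional in Rotterdam-type systems, and the paper's actual estimation was carried out in TSP):

```python
import numpy as np
import pandas as pd

def build_rids_variables(prices: pd.DataFrame, quantities: pd.DataFrame):
    """Construct the differenced variables of the inverse demand system.

    prices: monthly auction prices, one column per product.
    quantities: monthly volumes, same columns; index is the month.
    """
    expenditure = (prices * quantities).sum(axis=1)            # m_t
    shares = (prices * quantities).div(expenditure, axis=0)    # w_it
    pi = prices.div(expenditure, axis=0)                       # pi_it = p_it / m_t
    dlog_pi = np.log(pi).diff()                                # Delta ln pi_it
    dlog_q = np.log(quantities).diff()                         # Delta ln q_it
    w_bar = (shares + shares.shift(1)) / 2                     # averaged budget shares
    dlog_Q = (w_bar * dlog_q).sum(axis=1)                      # Divisia volume index
    lhs = w_bar * dlog_pi                                      # dependent variables
    return lhs, dlog_Q, dlog_q
```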
To express different notions of flexibility, the considerations of Brown et al. [22], Holt [35], and Sun et al. [33][36] lead to an alternative formulation that introduces two new parameters, $\delta_1$ and $\delta_2$.
The constraints on these parameters lead to several types of models. The estimated parameters and models can be given several economic interpretations. The analysis of different inverse demand systems here can be viewed as allowing the scale flexibility and the compensated cross-quantity flexibility to be variational parameters dependent on budget shares [22]. The model specification of the General Synthetic Inverse Demand System (GSIDS) is shown in S1 Appendix.
The demand system itself is subject to several ex-ante decisions on the part of the analyst concerning the appropriate market delineation (the limits of the relevant market). Several recent studies have shown strong globalization of the tuna markets [37][38][39][40] [33] and identified two separate market chains: purse-seine/cannery-grade and longline/sashimi-grade tuna markets [41] [42] [9] [36]. Each of the two distinct markets, purse-seine/cannery-grade and longline/sashimi-grade, is highly integrated at the global level by both price and commodity flows across locations and species, making any regional change in catches important to the entire industry. The concentration of processors and traders is high, and information is rapidly transmitted from one location to another [43][44], with the leading sashimi-grade tuna market located in Japan. In response to these market dynamics, a set of demand equations including different species and products is needed for sashimi-grade tuna (fresh and frozen bluefin, southern bluefin, and bigeye tuna) on the Japanese market.
Data collection and inverse demand system parameterization
All of the data used in the model are compiled from the monthly average tuna auction prices and cumulative monthly quantities from January 2003 to December 2016, covering records with more than three transactions across more than three dealers and sellers at the Tokyo Metropolitan Central Wholesale Market (abbreviated as "Tokyo Market", http://www.shijou.metro.tokyo.jp/); the data are aggregated and contain no identifying information on individual transactions. The transactions at the Tokyo Market represent not only transactions at the Tsukiji central market but also at the Adachi and Ota markets. The monthly total value of sashimi-grade tuna auctioned at the Tokyo Market was about 6-8 billion yen during 2003 to 2016 and accounts for 80% of global bluefin tuna sashimi consumption. Although frozen bigeye tuna (BET) accounts for more than half of the volume auctioned, fresh and frozen BFT account for more than 60% of the monthly auction value. Fig 2(A) shows that fresh BFT commands the highest price at auction, with a strong seasonal pattern. In comparison, the auction price of frozen bigeye tuna remains the lowest and is unresponsive to any change in BFT supply or seasonal variation in BFT prices. These trends reflect the relatively higher demand for BFT than for bigeye and yellowfin, in addition to the higher value typically achieved by fresh tuna compared to frozen tuna.
To capture how the BFT price responds to the volatility of global BFT supply shocks, as indicated in Fig 2(B), six species/product forms are included in the inverse demand system. Table 1 shows that fresh BFT supplied by the Japanese fleet and by non-Japanese fleets, including supplies of both Pacific and Atlantic BFT, accounts for 22.25% and 9.31%, respectively, of the total sale revenue at the Tokyo Market in Japan between January 2003 and December 2016. Frozen BFT, including supplies of both Pacific and Atlantic BFT, accounts for 33.29% of the total revenue share. Aggregated revenues from BFT account for 64.85% of the total, while southern BFT and bigeye tuna account for 25.91% and 9.24%, respectively, of total sales revenue. Variability of budget shares is also much more pronounced for bluefin than for bigeye tuna. In a GSIDS estimation, supply at the seafood auction market is treated as fixed in the short run, with the prices at the auction adjusting to a point that clears the market. (In other words, an equilibrium price is reached where the quantity supplied-in this case, the entirety present at the market-and the quantity demanded are exactly equal.) Because frozen products often do not go through the Tokyo Market, the auction market dataset represents most of the fresh product supply. Almost all frozen bluefin tuna products are not auctioned at the Tokyo Market and are supplied mostly by ranched bluefin tuna (rather than closed-cycle aquaculture; to date, bluefin aquaculture is not economically viable). In ranching, live juvenile bluefin tuna are corralled into pens and fattened for a minimum of 6 months before shipment to Japan for auction, where they are mixed with "wild caught" fish (in this case, referring to those bluefin that are caught and immediately shipped to auction). Importantly, because the juveniles caught for ranching still count against the quotas and ICCAT sets the quotas for bluefin tuna, the supply of fresh bluefin tuna in the auction market is treated as exogenous.
Results of the inverse demand analysis
The estimated GSIDS gives the price responses of the six tuna products, summarized in Table 1, to quantity changes in monthly totals at the tuna auction market in Tokyo. An additional supporting information section containing the TSP program code is provided in the public repository RePEc (Research Papers in Economics) at the IDEAS working paper series no. 1901 (https://ideas.repec.org/p/nto/wpaper/1901.html), archived by the Institute of Applied Economics, National Taiwan Ocean University. The GSIDS approach is chosen because it overcomes the usual problems of specification by covering a broad range of inverse demand systems (IDS) and offering the best tradeoff between a good fit to the observations and a theoretically grounded approach. First differences of the nominal price time series are taken because differentials must be approximated by first differences for the differential inverse demand system to be estimated.
There are different assumptions for the demand parameters. The RIDS approach assumes fixed demand parameters (i.e., scale and cross-quantity, or Antonelli, coefficients), whereas the AIIDS assumes variable demand parameters in which the scale and cross-quantity coefficients are a function of budget shares. A set of 7 synthetic models and restricted versions of the IDS is estimated, and Table 2 shows the likelihood-ratio (LR) test result for each of the models. Based on the LR tests, the synthetic IDS model best fits our data, with a higher log-likelihood value than all other IDSs.
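The model-selection step is a standard nested likelihood-ratio comparison. A minimal sketch, assuming the restricted and unrestricted log-likelihood values and the number of restrictions are already available; the numbers below are placeholders, not the paper's estimates:

```python
from scipy.stats import chi2

def lr_test(loglik_restricted: float, loglik_unrestricted: float, n_restrictions: int):
    """Likelihood-ratio test for a restricted IDS nested within the synthetic IDS."""
    stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    p_value = chi2.sf(stat, n_restrictions)
    return stat, p_value

# Placeholder log-likelihoods: e.g., RIDS (restricted) vs synthetic IDS
# (unrestricted) with 2 parameter restrictions; not the paper's numbers.
stat, p = lr_test(loglik_restricted=1510.4, loglik_unrestricted=1523.9, n_restrictions=2)
print(f"LR statistic = {stat:.2f}, p-value = {p:.4f}")  # reject the restriction if p < 0.05
```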
Scale flexibility
Since the synthetic IDS best fits our data, all forthcoming estimates (scale flexibilities, cross flexibilities) come from this model. The estimated scale flexibilities of all products under the "Scale Flexibility" column in Table 3 all have the expected negative sign and are significantly different from zero at the 1% level. In reference to their standard errors, neither the scale flexibility coefficient for fresh imported bluefin (-1.004) nor that for fresh southern bluefin (-0.899) tuna is significantly different from unity in absolute value.
A summary of the scale flexibilities for all products is shown in Table 4. We now turn to the implications of the scale flexibilities for Tokyo Market prices, producer revenues, and operating profits for each product, and discuss whether the products are necessities or luxury goods. Scale flexibilities are greater than unity (in absolute value) for both Atlantic and Pacific BFT from the Japanese fleet (-1.15) and for fresh bigeye tuna (-1.22); these products are thereby scale flexible. Prices would decrease more rapidly than aggregate supply and TACs for all products increase, so that producer revenues and profits would fall, with profit losses further deepened by increases in operating costs. Conversely, conservation through lower aggregate supply and lower TACs would raise revenues and profits (boosted by lower expected operating costs). These two fresh products are necessities in the Tokyo Market (f_i < -1).
The scale flexibilities are not significantly different from unity for fresh Atlantic and Pacific BFT not supplied by the Japanese fleet (-1.00) or for fresh southern BFT (-0.90). These unitary own-quantity price flexibilities imply that prices will increase (decrease) by 1% if the total supply of all products decreases (increases) by 1%. Total sales revenue will remain constant across different catch levels, under the assumption that the worldwide demand for BFT is fixed in the short to intermediate run, although profits would fall (rise) due to lower (higher) expected operating costs. Under the SSP (Shared Socioeconomic Pathways) scenarios developed by the Intergovernmental Panel on Climate Change (IPCC), BFT prices might also be affected by steady global trends; for example, the SSP5 scenario (rapid growth in the long run), characterized by a global economic growth rate of 3.5% per year up to 2040 [45], is not considered in this study. The unitary scale flexibility indicates that preference for fresh imported bluefin and fresh southern bluefin tunas is homothetic, which means that the sales shares are constant and that consumption of these two products is independent of the level of total expenditure [24]. Moreover, at the margin, normalized price is proportional to marginal utility. Therefore, as consumption of all goods increases by 1%, the marginal utility of necessities declines more than proportionately (f_i < -1) and the marginal utility of luxuries declines less than proportionately (-1 < f_i < 0). Scale flexibilities are less than unity for both frozen BFT (-0.91) and frozen southern BFT (-0.90), which are thereby scale inflexible. Prices would decrease proportionately less than proportionate increases in aggregate supply, so that producer revenues for these two frozen products would climb. Whether profits rose or fell would depend upon whether revenue increases outpaced expected cost increases. Conversely, conservation through lower aggregate supply and lower TACs for all products would reduce revenues for these two frozen products, with lower expected operating costs only partially offsetting the effect on profits. Japanese consumers are still willing to pay a premium for sashimi-grade frozen BFT, which is a luxury good (-1 < f_i < 0).
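The revenue arithmetic behind these classifications is first order: with scale flexibility f, a q% change in aggregate supply moves price by roughly f*q% and revenue by roughly (1+f)*q%. A small sketch using the point estimates quoted above (illustrative only):

```python
def revenue_response(f: float, supply_change_pct: float) -> dict:
    """First-order arithmetic for scale flexibility f: a q% change in
    aggregate supply moves price by ~f*q% and revenue by ~(1+f)*q%."""
    if f < -1.0:
        category = "scale flexible: necessity (f < -1)"
    elif f > -1.0:
        category = "scale inflexible: luxury (-1 < f < 0)"
    else:
        category = "unitary: homothetic (f = -1)"
    return {"price_change_pct": f * supply_change_pct,
            "revenue_change_pct": (1.0 + f) * supply_change_pct,
            "category": category}

# Point estimates quoted in the text, for a 1% increase in aggregate supply.
for name, f in [("fresh BFT, Japanese fleet", -1.15),
                ("fresh BFT, non-Japanese fleets", -1.00),
                ("frozen BFT", -0.91),
                ("fresh bigeye", -1.22)]:
    print(name, revenue_response(f, 1.0))
```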
Prices for the two frozen products are less responsive to changes in aggregate supply than the four fresh products; conversely, the prices for fresh products are more responsive than the prices for frozen products when aggregate supply changes (scale flexibilities for frozen products are smaller in absolute value than for fresh products). This pattern is perhaps influenced by frozen BFT's significantly longer shelf life compared with fresh BFT and bigeye, and the related capability of storing inventories and releasing them when profitable or to keep markets supplied.
The revenue and profit impacts of increases in aggregate supply and TACs for all products from across the globe, or conversely declines in both, are unevenly spread among fisheries and product forms. Some gain while others lose or remain the same.
Because of the greater than unity scale elasticities for fresh BFT supplied by the Japanese fleet and for bigeye tunas supplied by all fleets, an increase in the TACs of global BFT would cause a decline in gross revenue for fishermen currently targeting BFT. In addition, even though fresh BFT supplied by other fleets shows a unitary scale flexibility, i.e., total revenue would remain the same when total supply increases, the fishermen could still incur a loss since the increase in supply may come at higher operating expense. More importantly, there is a negative spillover effect on fishermen operating in the Pacific and western Atlantic Oceans when only those operating in the eastern Atlantic increase their landings, because the decrease in the auction price would lead to a reduction in revenue for Pacific and Western Atlantic fishermen without a parallel chance to increase landings to overcome the decrease in price.

Table 4. The impact of a greater global supply of bluefin and bigeye tuna on the price of each product.

Product | Scale Flexibility (Absolute Value) | Resulting Total Revenue Change
Atlantic & Pacific bluefin, fresh (Japanese fleet) | 1.15 | Decrease
Atlantic & Pacific bluefin, fresh (non-Japanese fleets) | 1.00 | No change
Atlantic & Pacific bluefin, frozen | 0.91 | Increase
Southern bluefin, fresh | 0.90 | No change
Southern bluefin, frozen | 0.90 | Increase
Bigeye, fresh | 1.22 | Decrease
For three of the five products examined, prices would decrease more rapidly than aggregate supply increases, meaning that the revenue of the fishermen supplying these products would decline even if their landings increased when global supply of all species and product forms increases. This poses even more of a problem for western Atlantic bluefin fishermen, whose quotas stabilized as eastern Atlantic bluefin quotas rose annually by 20 percent from 2015 to 2017, and for Pacific BFT and fresh/frozen SBT fishermen, whose quotas declined or stayed the same when eastern bluefin quotas increased. Conversely, fishermen's revenue would increase proportionately more than any proportionate reductions in supply, creating incentives for conservation. Profits would likely increase as well due to likely lower operating costs with the reduced fishing, boosting the conservation incentives.
The only sector not expected to be harmed by a greater aggregate tuna supply from all parts of the globe is the frozen Atlantic industry. For example, if the ICCAT quota increases, fishermen catching bluefin in the Pacific will see the price fall without receiving any of the extra quota. However, there are still many factors that could limit an individual fisherman's ability to profit from this circumstance, and much depends on whether individual fishermen's actions are consistent with country-level and RFMO-level actions. Unless each country implements individually assigned quotas, for example, fishermen selling these products are not guaranteed to increase their own catch, even if their own fishery's quotas are raised. Additionally, if quotas are capped, as they were for SBT in 2016 and 2017, fishermen could simply face lower prices, as they would not be able to boost their individual catch while aggregate supply increases.
Own- and cross-quantity price flexibility
The uncompensated own-quantity price flexibilities of demand (the shaded diagonal elements of Column 3) are significantly negative, smaller than the corresponding scale flexibility (in absolute value) for all products and are significantly different from zero at the 5% level of significance.
The uncompensated own-price flexibilities capture the combined effect of compensated price and scale flexibilities, i.e., they account for both the price and scale (expansion in the same proportion) effects. The uncompensated cross-price flexibilities (off-diagonal elements) all indicate q-substitutes and are statistically significant. The q-substitution means that, with a price rise, consumers in the Tsukiji market substitute away from the product toward another. The q-substitution counters the impact of the product's scale flexibility, leaving the uncompensated own-price flexibilities (which capture the combined effect of compensated price and scale flexibilities) smaller in value than the scale flexibilities.
These own-quantity price flexibility estimates are all less than unity (i.e., they are inelastic), implying that own prices are inflexible to changes in own consumption, i.e., own prices demonstrate responses proportionately smaller than own-quantity changes (allowing for both changes in the scale of consumption and responses to price in consumption of the commodity bundle). These inelastic own-quantity flexibilities create weak producer incentives to individually reduce wild or farmed bluefin tuna supply in each region, because a 1% fall in supply leads to a less than 1% rise in price and hence a decline in total revenue from the product. These inelastic own-quantity flexibilities also imply that the corresponding price elasticities of demand are elastic; sashimi-grade BFT is commonly recognized as a luxury good for which demand increases more than proportionally as income rises.
The uncompensated own-quantity price flexibilities of the fresh BFT supplied by domestic Japanese fleets (-0.478) and fresh bigeye tuna (-0.439), while inflexible, are larger than the own-quantity price flexibilities of the other products. The results suggest that prices for these products will fall more than the rest of the commodities with an increase in own supply, implying 0.478% and 0.439% reductions in marginal values with a 1% increase in own supply.
Tokunaga [46] also utilized Tsukiji market data, in that instance directly estimating an ordinary demand equation with various supply shocks as instrumental variables, and found own-price elasticities of demand all greater than unity (in absolute value), also implying elastic demand for bluefin tuna. However, Huang [47] argued that using directly inverted elasticities to represent flexibilities, or vice versa, may introduce sizable measurement errors, and that only elasticities from directly estimated ordinary demand systems should be used to evaluate the quantity effects of price changes. Note that Tokunaga [46] stated that it was beyond the scope of their paper to identify the substitutability across various bluefin tuna (and other tuna species) products, which is an advantage of the approach employed here.
Reciprocals of the price flexibilities provide a lower bound on the p-elasticities of substitution between products in direct demand by Japanese buyers in the Tokyo Market [48]. The p-elasticities of substitution measure the responsiveness in the quantity directly demanded of a product, q_it, to a change in either that product's own price, p_it, giving the own-price elasticity of direct demand ∂ln q_it/∂ln p_it, or the price of another product, p_jt, giving the cross-price elasticity of direct demand ∂ln q_it/∂ln p_jt. For p-substitutes, ∂ln q_it/∂ln p_jt > 0, so that as the price p_jt of the substitute product q_jt increases, the quantity directly demanded of product q_it increases. For p-complements, ∂ln q_it/∂ln p_jt < 0, so that as the price p_jt of the complement product q_jt increases, the quantity directly demanded of product q_it decreases. P-elasticities correspond to direct demand, in which quantity demanded depends upon price, and q-elasticities correspond to inverse demand, in which price depends upon quantity supplied.
Since the reciprocal of the price flexibility forms the lower limit, in absolute terms, of the price elasticity [48], the difference between the true price elasticity of direct demand and the reciprocal of the price flexibility from inverse demand depends on the entire matrix of substitution and complementarity price flexibilities with other commodities [47]. Table 5 provides these reciprocals, taken by inverting the matrix in Table 4. Because these are lower bounds, the p-elasticities should be at least as large as indicated by Table 5. Own-price direct demand is very elastic, indicating considerable responsiveness in the quantity demanded of a product to changes in its own price. Pervasive positive cross-price flexibilities and the q-complementarity found in Table 4 indicate p-substitution in direct demand, which at first glance is largely confirmed by the widespread positive and often large entries in Table 5. The pervasive p-substitutability among the different products is also highly elastic in most instances, as indicated by the large absolute values. Thus, buyers ("demanders") in the Tokyo Market seem to easily substitute one product for another when there are relative price changes.
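The lower bounds in Table 5 come from inverting the full flexibility matrix rather than taking element-wise reciprocals, so that substitution with every other product is accounted for. A minimal numpy sketch using a hypothetical 3x3 flexibility matrix (the paper's actual 6x6 matrix is not reproduced here):

```python
import numpy as np

# Hypothetical 3x3 matrix of uncompensated price flexibilities, with
# own-quantity flexibilities (negative, inelastic) on the diagonal and
# cross-quantity flexibilities off the diagonal.
F = np.array([
    [-0.48,  0.10,  0.05],
    [ 0.08, -0.35,  0.12],
    [ 0.06,  0.14, -0.40],
])

# Inverting the whole matrix (rather than taking 1/diagonal) accounts for
# substitution with every other product; the entries of inv(F) bound the
# p-elasticities of direct demand from below in absolute value.
bounds = np.linalg.inv(F)
print(np.round(bounds, 2))
```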
Exceptions to pervasive p-substitutability, however, start to emerge upon closer examination. One exception is fresh BFT from the Japanese fleet, for which the p-elasticities are small in absolute value, and there are even complementary products (given by negative signs). These results indicate that buyers in the Tokyo Market distinguish fresh BFT from the Japanese fleet from other products, i.e., the other products are not considered close p-substitutes. Another exception is frozen products, which do not readily substitute in buyers' direct demand for some fresh products, as indicated by p-elasticities with positive signs that are small in absolute value, and which are complements with other products, as indicated by p-elasticities with negative signs. Thus, p-substitution in direct demand is more concentrated on fresh SBT and non-Japanese BFT. Frozen products (SBT and BFT) are highly substitutable for each other in Tokyo Market buyers' direct demand (reciprocal values of 5.35 and 3.36 in Table 5) when there are changes in the relative prices of the frozen products. Tokyo Market buyers seem largely indifferent to the source of frozen products, i.e., frozen products are not differentiated products in the eyes of buyers. The Morishima elasticities of complementarity (MEC), reported in Table A1 in S1 Appendix, are all positive, indicating pervasive q-complementarity consistent with the widespread p-substitutability in Table 5. All values are inelastic, indicating that demand price ratios change proportionately less than the change in one of the quantities, due to high substitution. The market for all three species of BFT is integrated and is considered highly q-substitutable between different product forms (fresh/frozen and Japanese/non-Japanese) and species of BFT (indicated by small and positive MECs). Nonetheless, the column and row MECs between the Japanese and non-Japanese BFT products are generally larger than the column and row MECs for the other BFT products (but not for fresh bigeye). The implications of the demand system's scale flexibilities for market behavior and incentives are important for fishery management and should be taken into greater consideration by local regulatory bodies.
Simulation of the impact of increasing the TAC of EABFT
This section examines the fishery for Atlantic BFT in greater detail. This fishery is complicated by the presence of two different stocks (that mix); the number of members of ICCAT, including the many States comprising the European Union; and the different types of fishing gear used to catch BFT, ranging from coastal longliners to high-seas longliners to purse seiners. As shown in Fig 1(B), global BFT landings increased by 16.92%, from 43,895 mt in 2014 to 51,322 mt in 2017. This is due to an increase in landings of EABFT of 75.22% between 2015 and 2017.
EABFT's TAC was renegotiated by ICCAT in November 2017 and set for the following three years, with a TAC of 28,200 mt for 2018, 32,240 mt for 2019, and 36,000 mt for 2020 [49]. Prior to the meeting, discussion suggested the TAC could increase to as much as 40,000 mt; for example, the Spanish industry group Balfego had been calling for an increase to 40,000 mt, approaching the catch levels at which the stock became severely depleted during the 1990s. With bluefin landings elsewhere in the world expected to remain essentially unchanged, the increased landings of EABFT will push global landings of BFT up and are, therefore, expected to lower the BFT, SBT, and BET auction prices in Japan. We now project the impact of these increases in EABFT TAC on the BFT auction price in the Tokyo Market, based on the GSIDS demand system estimated in this study. Recent data already illustrate the effect of increased landings (or TAC; the two can be considered interchangeable in this discussion) of EABFT on the auction price. For example, the auctioned quantity of fresh BFT supplied by non-Japanese fleets doubled from 792 mt in 2012 to 1,632 mt in 2016. Over the same period, the average annual auction price for fresh BFT supplied by non-Japanese fleets decreased by 24.35%, from 3,434 yen/kg in 2012 to 2,864 yen/kg in 2014 and further to 2,598 yen/kg in 2016. A similar downward trend is observed for the auction price of frozen BFT, from 3,826 yen/kg in 2012 to 3,400 yen/kg in 2016.
Simulated impact of increases in EABFT TAC in 2020 on the auction price in Japan
Given the integrated market and the high substitutability between sashimi-grade tuna products, the estimated scale flexibilities from the synthetic IDS demand system are used to project the potential impact of the increases in the TAC of EABFT in 2020, capturing the complexity of substitution across the various sources of BFT supply and product forms.
The range of the shocks was specified to simulate an increase in the quota of EABFT to 30,000 mt or 40,000 mt in 2020 as EABFT quota scenarios #1 and #2, respectively. Under EABFT quota scenario #1, the annual global BFT landings will be 57,917 mt in 2020, i.e., 31.94% more than the global landings in 2014. If the eastern Atlantic BFT quota increases to 40,000 mt in 2020 under EABFT quota scenario #2, annual global BFT landings will reach 67,917 mt in 2020, a 54.73% increase from the global landings in 2014.
The percentage of global supply accounted for by EABFT increased from 33.89% in 2014 to 49.99% in 2017. Under EABFT quota scenarios #1 and #2, the EABFT share of global supply will increase further to 55.68% and 62.21% in 2020, respectively. Table 6 shows how different sectors/fisheries would be affected by the TAC increases for EABFT, using the estimated scale flexibilities to accommodate the potential substitution effects. This quantifies the potential negative price impacts of the proposed TACs relative to supply in 2014 and 2017 and the projected supply in 2020.
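The scenario shocks fed into the demand system can be reproduced directly from the landings figures quoted in the text:

```python
base_2014 = 43_895.0  # global BFT landings in 2014 (mt)

# Projected global BFT landings in 2020 under the two EABFT quota scenarios (mt).
projected_2020 = {"#1 (30,000 mt EABFT quota)": 57_917.0,
                  "#2 (40,000 mt EABFT quota)": 67_917.0}

for scenario, total in projected_2020.items():
    growth = 100.0 * (total / base_2014 - 1.0)
    print(f"Scenario {scenario}: {total:,.0f} mt, {growth:.2f}% above 2014")
# -> 31.94% and 54.73%, the supply shocks fed into the estimated demand system
```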
Do higher landings guarantee higher revenue for EABFT fisheries?
Hypothesis testing (1): Impacts on the price of fresh BFT landed by Japanese Fleets. The scale flexibility of fresh BFT supplied by Japanese fleets from either the Atlantic or Pacific Ocean is estimated to be over 1, at the 5% significance level for a one-sided hypothesis test. This means total revenue will in fact decrease slightly, with a price that drops proportionally more than the supply increases. Given that any increase in global BFT landings will consist mainly of landings associated with the increased TAC for EABFT, the auction price for fresh BFT landed by the Japanese fleet is projected to decline by 36.80% and 63.04% in 2020 under EABFT quota scenarios #1 and #2, respectively. If some of these Japanese fleets are constrained by fishing area and cannot increase their landings of fresh BFT from the Eastern Atlantic (for example, for vessels that operate exclusively in the Pacific), they will suffer negative price effects and see their revenue fall as a direct result of the EABFT quota shock.
Hypothesis testing (2) Impacts on the price of fresh BFT landed by non-Japanese fleets. The scale flexibility of fresh BFT supplied by non-Japanese fleets is not significantly different from 1, which means the total revenue likely remains the same with an increase in the global supply of BFT. Once again, however, if those fishermen are constrained by an inability to increase their landings of fresh BFT (for example, if they do not have the ability to fish in the Eastern Atlantic), they will also incur negative price effects.
Hypothesis testing (3) Impacts on the price of frozen BFT. Only frozen BFT exhibits a scale flexibility of less than unity (-0.911) at the 5% significance level for a one-sided hypothesis test, though it is close to 1 (unity). Thus, given a 1% increase in the global supply of BFT, the price will drop by 0.911%, and total revenue will increase overall by just 0.089%, implying a slight gain in total revenue for those able to increase their landings of EABFT. Note (again) that this result only applies to those landing bluefin from the Eastern Atlantic; those unable to acquire any increased landings will simply see a decline in price.
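The projections in these hypothesis tests are first-order products of a scale flexibility and a percentage supply shock. The sketch below approximately reproduces the reported price declines from the point estimates implied by the text (the published tables presumably carry more decimal places than the rounded values -1.15, -1.22, -0.90, and -0.91):

```python
# Percentage increases in global BFT supply under the two quota scenarios.
shocks = {"#1": 31.94, "#2": 54.73}

# Scale-flexibility point estimates implied by the reported projections.
flexibilities = {
    "fresh BFT, Japanese fleet": -1.152,
    "fresh bigeye":              -1.224,
    "fresh/frozen southern BFT": -0.899,
    "frozen BFT":                -0.911,
}

# Projected price change (%) = scale flexibility x supply shock (%).
for product, f in flexibilities.items():
    d1, d2 = -f * shocks["#1"], -f * shocks["#2"]
    print(f"{product}: projected declines of {d1:.1f}% and {d2:.1f}%")
# fresh Japanese-fleet BFT -> roughly 36.8% and 63.0%, matching the text
```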
While the frozen EABFT sector may be the only "winner" if potential EABFT supply increases come to pass (given, again, that price still decreases, just at a lesser rate than supply increases), there is absolutely no guarantee that an individual fisherman's revenue will go up, as the absence of individual harvesting rights prevents any individual from securing a guaranteed share of the TAC/catch. Additionally, the risk to the health of the stock and the number of "losers" may outweigh any net benefit accrued by the frozen EABFT sector. Fishermen of Japanese BFT, western bluefin, Atlantic and Pacific bigeye and SBT, and exporters of fresh BFT all likely stand to lose: non-Japanese fleets will experience a loss in total revenue, as the price for their products drops slightly more than proportionally as supply increases; the price of frozen BFT goes down proportionally less than supply goes up, which may benefit the EU fishermen who catch more than in previous years as a result of the overall quota increase (although, absent individual quotas, there is no guarantee that this will be the case for individual fishermen); and BFT landed by fishermen in either the Pacific or the Western Atlantic will be worse off, as their quota will not be allowed to rise while they still experience the price shock. Furthermore, supply chain characteristics complicate the small benefit enjoyed by the fresh EABFT sector.
The 2016 EU trade data (EUMOFA) shows that EABFT exports from the EU totaled 15,760 mt and that 78.3% of these exports were sold fresh and, most importantly, 80.0% of the EU's EABFT exports to Japan were fresh (a loser under the demand system specified). Thus, there is a strong argument that a solid portion of the EU industry/export sales, overall, is harmed financially by the increased supply of EABFT in recent years.
In summary, the price for fresh BFT is more responsive than the price for frozen BFT. There is concern in the industry about the impact of the increase in landings (potentially 31.94% to 54.73% above the 2014 base year under EABFT quota scenarios #1 and #2, respectively). Such increases will drive EU BFT export values in line with the sector-specific impacts outlined above. Since 80% of BFT exports to Japan are fresh and only 20% are frozen, only those exporting the relatively smaller proportion of frozen BFT may see any positive change in export value. However, the losses to the 80% of harvesters who export fresh BFT will certainly not be compensated by the gains to the 20% who export frozen BFT, raising important questions about the objectives of state trade policies.
Because ranching fattens fish before sale, the surface purse-seine fleet can add 30-60% more tuna quantity at the auction market per unit of quota than fleets using other types of fishing gear, so the impact it has on global prices may be somewhat magnified, depending on how the quota is utilized. If the surface purse-seine fleet uses the EABFT quotas for ranching, it will add more pounds at the auction per percentage increase in quota than other sectors. The same holds for the SBT quota used by the Australian surface purse-seine fleet to cage-culture juvenile SBT, so there is a need to calculate the whole-round-equivalent quota weight auctioned in Japan.
Hypothesis testing (4) Impacts on the price of fresh and frozen southern bluefin tuna. Fresh SBT exhibits a scale flexibility that is not significantly different from unity (-0.899) at the 5% significance level, while frozen SBT exhibits a scale flexibility of less than unity (-0.900) at the 5% significance level for a one-sided hypothesis test, albeit close to one. The SBT TAC is currently set to remain at 14,637 mt between 2015 and 2020, and the price is thus projected in this study to decline by between 28.72% and 49.20% if the EABFT supply shock continues through 2020. Thus, SBT fishermen will be worse off, whether selling fresh or frozen product, due both to the price flexibilities and to the declining prices caused by the increases in quota from another region.
Hypothesis testing (5) Impacts on the price of fresh bigeye tuna. Fresh bigeye tuna auctioned in the Tokyo Market may be supplied by landings from the Pacific, Indian, and Atlantic Oceans. Bigeye tuna landings, both globally and in each of these oceans, have trended down since 2000. Aggregated landings, which peaked at 533,192 mt in 2000, declined to 376,360 mt in 2010 [50]. The Atlantic bigeye industry has already been subject to significant losses due to the decrease in landings from 85,865 mt in 2011 to 67,986 mt in 2014, with an estimated 67% chance that the stock had been overfished and with overfishing still occurring in 2014 [51]. Even if the quota is stabilized, as expected, at the lower level of 65,000 mt in 2020, the auction price of fresh bigeye is projected to decline by 39.10% to 66.98% in 2020 due to the EABFT quota shock ranging from 30,000 mt to 40,000 mt under scenarios #1 and #2, respectively. In this case, whether the bigeye tunas are supplied from the Pacific, Indian, or Atlantic Ocean, revenues will all decline dramatically.
Discussion and concluding remarks
This study presents the first comprehensive view of the global BFT market and of the incentives created for the conservation and management of regional and global BFT by changes in supply and through the management of quotas. We simulate the response of BFT prices and revenues to changes in supply by using the price and scale flexibilities estimated from an inverse demand system based on data from the central seafood auction market in Tokyo (the Tokyo Market). Because the quantities, and not the prices, of fish closely related in demand are held constant, the price flexibilities (quantity elasticities) account for adjustments in related markets, i.e., the flexibilities have a general equilibrium interpretation.
The results show several broad patterns. First, when there are increases in aggregate bluefin (and bigeye) tuna on the global market, the prices of two major bluefin products decline proportionately more, leading to a loss in revenue; two product prices decline proportionately less, leading to a rise in revenue; and two product prices decline proportionately the same, leading to no change in revenue (scale flexibilities greater than, less than, or equal to one in absolute value, respectively). Second, product prices are inflexible to changes in their own supply, i.e., these prices demonstrate responses proportionately smaller than the quantity changes (in other words, their own-price flexibilities of inverse demand are inflexible). This inflexibility implies that the corresponding price elasticities of direct demand are elastic (i.e., quantity demanded is highly responsive to own-price changes), due to the generally high substitution of products in direct demand. Third, the price for fresh product is more responsive than the price for frozen product (for both BFT and bigeye) in the Tokyo Market when aggregate supply changes (i.e., frozen scale flexibilities are smaller in absolute value than those of fresh products).
Fourth, buyers further distinguish fresh from frozen products, as indicated by the estimated scale flexibilities for frozen BFT (-0.91) and frozen southern BFT (-0.90), which are significantly less than unity and more inflexible than those of fresh bluefin tuna products. Fifth, buyers in the Tokyo Market distinguish Japanese producers' BFT from non-Japanese producers' BFT on several fronts: (1) the scale flexibility for fresh BFT from Japanese producers is the only flexible BFT scale flexibility (bigeye's is also flexible, but it is not BFT); (2) the own price of fresh BFT from Japanese producers, while not very responsive to changes in its own quantity, is still more responsive than the own prices of other BFT producers to changes in their own quantities (the own-quantity price flexibility for fresh BFT from Japanese producers, while inelastic, is larger than the inelastic own-quantity price flexibilities for other BFT products); (3) the other products are not considered close p-substitutes in direct demand (as indicated by the large positive reciprocals of the price flexibilities); and (4) the price of Japanese producer-supplied BFT relative to other BFT prices is more responsive to changes in the quantities supplied of other products (as indicated by larger positive Morishima Elasticities of Complementarity than for other product forms).
Sixth, relative prices for two different products, even for Japanese producer-supplied BFT, are comparatively stable with changes in supply of one of the other products (as indicated by small and positive Morishima Elasticities of Complementarity). Seventh, the market for all three species of BFT is integrated and is considered highly substitutable between different forms (fresh/frozen and Japanese/non-Japanese) and species of BFT (small and positive Morishima Elasticities of Complementarity). Prices and quantities of other species and product forms absorb any small shocks due to sudden changes in a species price. This absorption smooths out market responses and stabilizes consumer welfare. And finally, eighth, regional context matters in a globally integrated market; as shown in this study, even a scale flexibility less than unity can result in lost revenue, if the quota increases are captured entirely by a different region or sector.
Unfortunately, these results imply that there is little incentive for individual regional suppliers of bluefin tuna, or even for the individual tuna Regional Fisheries Management Organizations (t-RFMOs) that manage the fisheries, to individually reduce their catches of either wild-caught or ranched bluefin tuna, as this would lower their revenue due to the inflexibility of their own-quantity price flexibilities. In fact, the individual regional incentives across the board are to increase supply and Total Allowable Catches in order to increase revenue. However, this is not the case if the total supply of all product forms and species changes. In four of the six cases considered here (all fresh products), the unitary or flexible scale flexibilities imply price responses that are proportional, or even more than proportional, to reductions in total supply, at worst maintaining and at best increasing fishermen's revenue and operating profits. Put another way, the global incentives for these four fresh product forms (two of which are staples of the Tokyo Market) are to reduce supply to increase operating profit. In the other two cases, both of which are frozen product forms and luxury goods in the Tokyo Market, increases in aggregate supply will raise revenue and reductions in aggregate supply will lower revenue.
In sum, a uniform reduction in aggregate supply through TAC reductions creates conflicting incentives for different actors in the short run (setting aside, for the moment, differing fleet abilities to take advantage of increased quotas). Short-run incentives for individual BFT stocks counter conservation for some stocks. Reductions in aggregate supply and conservation for the four fresh products create positive short-run conservation incentives through no loss in revenue and increases in operating profit, but concomitantly create negative short-run conservation incentives for the stocks that support the two frozen products. Eventually, reductions in catch and supply would allow stocks to rebuild and total revenues to climb for all BFT species, but short-run incentives through revenues do not uniformly support this approach through the RFMOs.
The price response of the global BFT market to an exogenous supply shock from one region justifies the call for consistent management measures across all of the Regional Fisheries Management Organizations together [52]. For a globally traded commodity such as highly migratory tuna, localized decisions (such as those in just one ocean) that restrict allowable catch do not lead to the same responses or create the same incentives for stakeholders in other regions, despite many of these stakeholders in different regions ostensibly representing the same overall economic interests of the same countries, just in different fora. The mixed requirements of increased, constant, or decreased aggregate supply for different products from different fisheries managed by different RFMOs, and the consequent requirement for coordination across RFMOs to create positive conservation incentives, make this a difficult task indeed.
That economic incentives count when conserving renewable resources and managing fisheries is beyond doubt. However, the primary focus so far has been on incentives created by property rights. The relationship between TACs (or any other total production cap) and revenues and profits, all conjoined by globally linked prices, has not received sufficient attention. First, local fishermen's revenue and the license fees collected by coastal states can fluctuate substantially because of higher catches in other tuna fisheries elsewhere in the world. The developing economies that are highly dependent on tuna fisheries could be strongly affected [53] [7]. Second, the investment cycle, and the resulting dynamics of the fleet, might be influenced, with lagged effects, by both the availability of resources and price levels. If fishermen could collectively lower their operating costs through a quota-trading mechanism monitored by the tuna RFMOs, for example, they would be in a better position to contribute to the conservation effort by avoiding overcapacity and "the race to fish." Setting aside the context of RFMO negotiations, the tuna fishing industry would still benefit by recognizing that a stable global supply of bluefin tuna helps the industry maintain total landing values in the long run, since increases in quantity could be more than offset by decreases in price.
Is there another way to create conservation incentives as an alternative to globally coordinated reductions in TACs by all of the t-RFMOs? One option is to shift the focus from producers' revenues to producers' profits by reducing capacity, thereby reducing operating and fixed costs and leaving more bluefin tuna available to each producer. Higher producer profits from reduced capacity and lower fixed and operating costs would compensate for lower overall revenues. Other than bankruptcy or voluntary exit from the fisheries, some form of individual or group right, such as transferable quotas or limits on catch or effort, provides one of the few other methods by which to reduce capacity and hence increase profits. Many tuna fisheries comprise multi-vessel companies, and transferable quotas or limits would allow these companies to reallocate fishing opportunities among their vessels and even remove vessels from production.
Rights could also lower the comparatively high private discount rate, created by the "Tragedy of the Commons" and the race to fish under the absence of property rights, to a lower social discount rate, thereby boosting the net present value of higher future revenues and catches that eventually follow through rebuilding stocks with higher biomass and numbers. Moreover, the long-lived and slow growing life history of BFT aggravates the incentives from the current high private discount rate to emphasize current catches at the expense of larger future catches, which require the rebuilding of stocks through lower current production. The other main alternative to reduce capacity, and thereby increase producer profits (to counter the lower immediate revenues from conservation), is vessel buybacks; however, these are ineffective without first altering the incentives away from the "race-to-fish" in the absence of property rights [54]. A different type of approach allocates TAC across Contracting Parties to the Conventions, which several tuna and non-tuna RFMOs follow [55]. However, allocating TAC across RFMO CPCs merely shifts the "race-to-fish" incentives down to the CPC level from the RFMO level, and unless individual CPCs introduce incentive-based policies, nothing substantive changes [56]. The Commission for the Conservation of Southern Bluefin Tuna (CCSBT) is the apparent exception, due to the small number of CPCs and the composition of the CPCs, both of which collectively favor property rights, profits, and conservation.
Society draws many benefits from the existence of healthy tuna stocks, including non-market economic values for biodiversity, existence, and ecosystem services. Unless and until conservation-oriented actions begin to pay short-term benefits for fishermen, who receive direct use value from tuna in the form of profits, the current incentives fishermen face seem unlikely to motivate support for a reduction in bluefin tuna quota. This study, however, raises the possibility that the price response within the global bluefin sashimi market itself could provide the short-term incentives needed to achieve the dual management goals of a sustainable industry and a healthy tuna population. With the multilateral cooperation among nations required for self-enforcing agreements through the international tuna bodies likely to remain elusive, an alliance of economic and conservation-motivated stakeholders may just be bluefin tuna's best shot at recovery.
|
2019-05-20T13:02:43.645Z
|
2019-08-23T00:00:00.000
|
{
"year": 2019,
"sha1": "fe2c8e518cb179338191e76b64ea4128cafbbd98",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0221147&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b7bf7d73a0612afea874dd538c33e9828467421",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"extfieldsofstudy": [
"Economics",
"Medicine"
]
}
|
260286083
|
pes2o/s2orc
|
v3-fos-license
|
Updating “Dataset of transcriptomic changes that occur in human preadipocytes over a 3-day course of exposure to 3,3′,4,4′,5-Pentachlorobiphenyl (PCB126)” with additional data on exposure to 2,2′,5,5′-tetrachlorobiphenyl (PCB52) or its 4-hydroxy metabolite (4-OH-PCB52)
Polychlorinated biphenyls (PCBs) were used extensively in building materials, including those used in schools. PCBs accumulate in fat, and exposure to PCBs is associated with the development of cancer, neurodevelopmental disorders, cardiovascular disease, obesity, and diabetes. The non-dioxin-like PCB congener, PCB52 (2,2′,5,5′-tetrachlorobiphenyl), is found at one of the highest levels of any congener in school air. PCB52 is oxidized in the liver to hydroxylated forms, mainly 4-OH-PCB52 (2,2’,5,5’-tetrachlorobiphenyl-4-ol). In a previous study, we reported on RNAseq data generated from exposure of human preadipocytes to the dioxin-like PCB congener, PCB126. In this new dataset, we used identical techniques to examine alterations in gene transcript levels in human preadipocytes exposed to PCB52 or 4-OH-PCB52 over a time course. This updated set of data provides a comprehensive transcriptional profile of changes that occur in preadipocytes exposed to PCB52 or 4-OH-PCB52 over time and allows for comparison of these changes between the parent compound and its hydroxy metabolite. The datasets will allow others to explore how PCB52 and 4-OH-PCB52 impact biological pathways in preadipocytes. Further studies can be performed to determine how these changes might lead to disease.
© 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Specifications Table (excerpt). Subject: Health, Toxicology and Mutagenesis. Specific subject area: Temporal gene expression changes in preadipocytes caused by exposure to PCB52 or its hydroxylated metabolite 4-OH-PCB52.
Value of the Data
• The updated dataset represents the first RNAseq data reported for exposure of human cells to PCB52, an important non-dioxin-like persistent organic pollutant that is present at high levels in school air, or to its oxidized metabolite 4-OH-PCB52. The data can be mined to reveal novel pathways and genes activated directly or secondarily upon exposure of preadipocytes to PCB52 or 4-OH-PCB52.
• The new data add value to and build upon the previous report of RNAseq data for exposure of preadipocytes to PCB126, a dioxin-like PCB. This will allow for comparison of the effects of dioxin- and non-dioxin-like PCBs on human preadipocytes.
Data Description
Normal human preadipocytes (NPADs) derived from subcutaneous adipose tissue were exposed to 10 μM PCB52, 4-OH-PCB52, or DMSO over a time course and then subjected to RNAseq. The 10 μM concentration was chosen based on our studies demonstrating that this concentration is non-cytotoxic and on studies using other PCB congeners and mixtures showing that this dose causes phenotypic changes in preadipocytes/adipocytes/adipose mesenchymal stem cells [2][3][4]. To define the temporal changes in gene expression that occur after PCB52 or 4-OH-PCB52 exposure, the specific time points of 9 hours, 24 hours, and 72 hours were chosen. This time frame is based on our previous findings with another congener, PCB126, demonstrating induction of genes at the early time point of 9 hours with further changes at 24 hours and 72 hours [5]. The exposures to PCB52 or 4-OH-PCB52 were done at the same time as the previously reported dataset using PCB126, and the DMSO control samples used are the same as for that dataset [1]. Table 1 outlines each raw data file and describes the treatment condition, either DMSO or 10 μM PCB52 or 4-OH-PCB52, as well as the duration of exposure to the treatment condition: 9, 24, or 72 hours. The number of aligned reads for each sample is reported in Table 2. After alignment, differentially expressed genes (DEGs) were identified by comparing the 10 μM PCB52- or 4-OH-PCB52-treated NPAD data to the DMSO-treated NPAD data for each of the 3 exposure durations. The list of DEGs was filtered to include only genes that showed an absolute log fold change ≥ 0.3 and an FDR-adjusted p-value ≤ 0.05. Lists of raw counts for every gene and of filtered DEGs for each exposure duration, with their corresponding log fold changes and p-values, are available in the files listed in Table 3 and can be found under GEO Accession number GSE205813. Venn diagrams were created in iPathwayGuide to display the overlap of DEGs between PCB52- and 4-OH-PCB52-treated cells at the same time points (Fig. 1).
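The DEG filter described above is straightforward to reproduce on the gene-level statistics deposited at GEO. A minimal pandas sketch; the column names and rows are hypothetical and would need to be matched to the actual files:

```python
import pandas as pd

# Hypothetical gene-level results table, as might be exported from a
# differential-expression analysis (column names are assumptions).
degs = pd.DataFrame({
    "gene":  ["GENE_A", "GENE_B", "GENE_C", "GENE_D"],
    "logFC": [ 2.10,     0.12,    -0.45,     0.80],
    "fdr_p": [ 0.001,    0.300,    0.020,    0.070],
})

# Filter used for this dataset: |log fold change| >= 0.3 and FDR-adjusted p <= 0.05.
filtered = degs[(degs["logFC"].abs() >= 0.3) & (degs["fdr_p"] <= 0.05)]
print(filtered)  # keeps GENE_A and GENE_C
```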
We used immortalized human preadipocytes called NPADs (Normal PreADipocytes) and cultured them as previously described [1, 2, 10, 11]. Cells were treated as described previously [1, 2, 11] using dimethyl sulfoxide (DMSO), 10 μM PCB52, or 10 μM 4-OH-PCB52 dissolved in DMSO. The level of DMSO was held constant in all conditions at 0.1% (v/v). The DMSO- or toxicant-containing media remained on the cells until RNA harvesting. RNA was isolated as previously described [2, 11]. Treatment conditions and treatment durations were repeated 4 times to provide biological replicates. RNA quality was assessed using an Agilent Bioanalyzer. Any samples with RNA integrity numbers (RIN) below 8 were excluded from further analysis. This resulted in one sample of 4-OH-PCB52-treated cells at the Day 3 time point being eliminated from further analysis.
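The RNA-quality exclusion amounts to a threshold filter on the Bioanalyzer RIN values; a sketch with made-up sample identifiers:

```python
# Made-up RIN values from the Bioanalyzer, keyed by sample identifier.
rin = {
    "4-OH-PCB52_day3_rep1": 9.1,
    "4-OH-PCB52_day3_rep2": 7.4,  # below the cutoff: excluded
    "4-OH-PCB52_day3_rep3": 8.6,
    "4-OH-PCB52_day3_rep4": 8.9,
}

RIN_CUTOFF = 8.0
kept = sorted(s for s, v in rin.items() if v >= RIN_CUTOFF)
excluded = sorted(s for s, v in rin.items() if v < RIN_CUTOFF)
print("kept:", kept)
print("excluded:", excluded)  # one day-3 4-OH-PCB52 sample, as in the text
```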
RNA library preparation, RNA sequencing, data processing, and differential gene expression analysis were performed exactly as described previously in the original Data in Brief article [1] . To generate Venn diagrams, DEGs were exported to iPathwayGuide (Advaita). Meta-Analysis in the iPathwayGuide software was used to determine what DEGs overlapped between treatments and time points to generate Venn diagrams.
Ethics Statements
This manuscript complies with ethical publishing guidelines and does not involve human subjects. The NPAD cell line utilized in this study is an immortal cell line that has been previously published [9] and was developed from de-identified primary preadipocytes that were obtained by consent and purchased from Lonza.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2023-07-16T15:16:44.388Z
|
2023-07-01T00:00:00.000
|
{
"year": 2023,
"sha1": "09f620a9dcaa939b5c6bc587eef3a44c6481df62",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b2a4a3027375ae0b9bbce7c22a785d8414d53db1",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225099801
|
pes2o/s2orc
|
v3-fos-license
|
Aortic Cannula Tip Dislodgement: A Rare Complication
Cardiac surgery involves the use of cardiopulmonary bypass, which usually requires a circulatory circuit containing numerous cannulae and tubings, draining blood from major vessels (such as the superior and inferior venae cavae) and returning it to the systemic circulation (via the aorta, femoral artery, axillary artery, etc.). Establishment of this circuit not only requires good surgical skill for the technical procedures but also requires stringent vigilance and awareness about the working of these disposable items. Surgeons concentrating on the technical aspects might miss minor manufacturing defects in these disposable items, and the anesthesiologist as well as the perfusionist can contribute in this respect by including a systematic precheck of these items to avoid future complications. In this case report, we discuss a simple case of mitral valve replacement in which, during aortic decannulation, the metallic tip became dislodged and migrated to the abdominal aorta. This is a rare complication which none of us were expecting. Prechecking the various components of the cardiopulmonary bypass circuit could have been expected to avoid this complication.
Introduction
Aortic cannulation is one of the major steps, from both the anesthetic and surgical points of view, in cardiac surgery, and many complications have occurred during this stage in the past, including tears of the aortic wall, dissection of the aortic wall, bleeding, and posterior wall puncture leading to trauma to the oesophagus, and even cardiac arrest. Like cannulation of the aorta, decannulation is also an important step while coming off bypass toward the later stage of surgery. Most of the time, it is bleeding from the aortic cannulation site which is encountered during decannulation. We wish to discuss a rare yet avoidable complication which we never expected to happen.
Case History
A 42-year-old male patient weighing 60 kg with severe mitral stenosis underwent mitral valve replacement on cardiopulmonary bypass after receiving standard general anesthesia. The patient was weaned off bypass successfully and was hemodynamically stable during this phase. However, while the pump blood was being returned through the aortic line, there was excessive bleeding from the site of entry of the aortic cannula. Hence, it was decided to remove the aortic cannula. However, to our surprise, while the aortic cannula was being removed, the tubing suddenly gave way [Figure 1] and the metallic tip became dislodged inside the aorta. The entire thoracic part of the aorta was checked by palpation, but we could not trace the metallic tip. An intraoperative abdominal X-ray was done and, to our dismay, the metallic tip was in the abdominal aorta at the level of the third lumbar vertebra [Figure 2]. Thereafter, under fluoroscopic guidance, an attempt was made to extract the cannula tip through the femoral route; however, the tip, being larger in size, could not be negotiated beyond the aortic bifurcation into the iliac arteries [Figure 3]. Finally, the patient underwent laparotomy and, after the cannula tip was located in the abdominal aorta, a partial clamp was applied and the cannula tip was successfully extracted [Figure 1]. The patient required additional inotropic support (adrenaline and dopamine in titrated doses) along with 2 units of packed RBCs and 2 units of fresh frozen plasma due to blood loss during the procedure. The patient was extubated the next day, and the rest of the postoperative stay was uneventful.
Discussion
It is said that the consequences of mistakes made in the prebypass phase can be seen clearly in the postbypass period. Complications associated with aortic cannulation for bypass, although rare, are well described. These mainly include aortic rupture, dissection, and pseudoaneurysm formation. [1] A much rarer reported complication is embolism of the adjustable "position stop" that comes with some aortic cannulae. [2] However, the complication we describe had never happened before in our institute. In the English literature also, there is only one such report of disconnection of the tip. [3] After looking for reasons and possible ways of avoiding it, we came across various possibilities.
Repeated use of the same arterial cannula after repeated sterilization
Although repeated use of cannulae has never been encouraged, it has been seen that even after limited reuse, the properties of the cannula are not affected beyond safety limits. [4] In our case, we used a fresh, disposable, single-use 20 Fr aortic cannula with a curved metallic tip (manufactured by Doctor Surgicals, batch number 201712156).
Manufacturing defect
The cannula which we were using consisted of two parts: a flexible body and a metallic tip. The body of the cannula is a wire-reinforced flexible tubing. The tip, on the other hand, is curved, metallic, and nonflexible. The two are joined with a flange that holds them together. The bonding process is specific to the manufacturer and type of cannula and is unknown to us, but if this bond between the tip and the body is not tight enough, it might lead to complications like the one we faced.
Change in temperature
During cardiopulmonary bypass, the patient is cooled to a relatively low temperature and then rewarmed. This change in temperature may affect the expansion and contraction of the cannula material, which may or may not be significant. [4] The metallic tip and the cannula tubing are made of different materials with different coefficients of thermal expansion, which may lead to differential size changes, loosening of the cannula tip from the tubing, and its disconnection. However, given the rarity of this event, this is less likely to be the cause.
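The differential-contraction argument can be made concrete with the linear thermal-expansion relation ΔL = αLΔT. The sketch below uses generic textbook coefficients for stainless steel and plasticized PVC and an assumed joint length and temperature swing; it is purely illustrative, and the micrometre-scale mismatch it yields is consistent with the view above that temperature change alone is an unlikely sole cause:

```python
# Linear thermal expansion/contraction: delta_L = alpha * L * delta_T.
ALPHA_STEEL = 12e-6   # 1/K, typical stainless steel (generic value)
ALPHA_PVC   = 70e-6   # 1/K, typical plasticized PVC tubing (generic value)

joint_length_mm = 10.0  # assumed length of the bonded joint
delta_T = -12.0         # illustrative cooling, e.g. ~37 C down to ~25 C

dl_steel = ALPHA_STEEL * joint_length_mm * delta_T   # mm
dl_pvc   = ALPHA_PVC * joint_length_mm * delta_T     # mm
mismatch_um = (dl_pvc - dl_steel) * 1000.0
print(f"steel: {dl_steel*1000:.1f} um, tubing: {dl_pvc*1000:.1f} um, "
      f"differential: {mismatch_um:.1f} um")
```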
Pinching effect of snugger tie
At our setup, the surgeons use a heavy silk thread (3-0 or 2-0) or linen to secure the snugger to the aortic cannula. We believe that there may be a pinching effect if this thread is tied close to the junction between the metallic tip and the cannula tubing. [5] This pinching effect may cause slipping of the tubing from the metallic tip, especially when the tie is opened during decannulation. However, this step is performed in almost all bypass surgeries, and had it been the cause, this complication would have occurred more often. It is therefore probably a contributing factor, if not the cause.
Conclusion
Being an unusual and rare complication, we were, to be honest, not prepared for its effective management. Avoiding major temperature variations does not seem to be an effective and reliable preventive measure, and in some situations it is practically impossible as well. Since many of the contributing factors may not be modifiable, such possibilities should necessarily be kept in mind. We suggest a precheck of all such cannulae, as well as other disposables, before considering them for use. The implications of even the tiniest defect in these products are serious complications that may add to the morbidity as well as the mortality of the patient. Moreover, even after considering the cost-benefit of reusing cannulae, one must, wherever possible, choose a fresh, unused product over a reused one.
Finally, we would conclude by saying that, as anesthesiologists, we must remain vigilant and alert to such rare and unexpected complications.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
|
2020-10-29T13:06:02.857Z
|
2020-10-01T00:00:00.000
|
{
"year": 2020,
"sha1": "e9d2330c135f1cfdc5dc85b4e4665e5a8c4b7bc1",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/aca.aca_122_19",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70a8d6e3c3c50b6c78bf576dbad233057e657acd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
132345441
|
pes2o/s2orc
|
v3-fos-license
|
Optimal control policies for an inventory system with commitment lead time
We consider a firm which faces a Poisson customer demand and uses a base‐stock policy to replenish its inventories from an outside supplier with a fixed lead time. The firm can use a preorder strategy which allows the customers to place their orders before their actual need. The time from a customer's order until the date a product is actually needed is called commitment lead time. The firm pays a commitment cost which is strictly increasing and convex in the length of the commitment lead time. For such a system, we prove the optimality of bang‐bang and all‐or‐nothing policies for the commitment lead time and the base‐stock policy, respectively. We study the case where the commitment cost is linear in the length of the commitment lead time in detail. We show that there exists a unit commitment cost threshold which dictates the optimality of either a buy‐to‐order (BTO) or a buy‐to‐stock strategy. The unit commitment cost threshold is increasing in the unit holding and backordering costs and decreasing in the mean lead time demand. We determine the conditions on the unit commitment cost for profitability of the BTO strategy and study the case with a compound Poisson customer demand.
INTRODUCTION
The consequences of demand and supply uncertainties and the eventual mismatch between demand and supply are well known to many companies. The need for designing company operations such that this mismatch is minimized or avoided has motivated many researchers and resulted in a rich literature on demand and supply management. Among the various methods, information sharing has received a lot of attention. The benefits of acquiring and providing information about future demand are undeniable. Having information on future customer demand helps companies in reducing their inventory levels without sacrificing high service levels. Customers who provide information on the timing and quantity of their future demand receive a high-quality service.
One form of advance demand information (ADI) is a preorder strategy in which customers place orders ahead of their actual need. The preorder strategy is characterized by a commitment lead time, which is defined as the time that elapses between the moment an order is communicated by the customer and the moment the order must be delivered to the customer. Although in today's competitive market firms cannot force their customers to place orders before their actual need, they can tempt them to follow the preorder strategy by giving a bonus. In order to make long commitment lead times acceptable and attractive, companies should propose bonuses which increase with the length of the commitment lead time. Commitment lead time contracts reduce the companies' demand uncertainty risk and the customers' inventory unavailability risk (Lutze & Özer, 2008). Although the preorder strategy is a form of ADI, it is different from the form of ADI that is usually used in the existing literature. In the existing literature, ADI helps to make a better forecast of the future customer demand. In this paper, the demand distribution is known. ADI is utilized operationally to reduce demand-supply mismatch by reducing the lead time demand uncertainty. Under the optimal solution, lower lead time demand uncertainty results in lower holding and backordering costs. This form of ADI works for service and custom-production companies, where service customers can make reservations and customers of custom products order in advance of their needs (Hariharan & Zipkin, 1995).
In the business-to-business (B2B) environment this is typically the case when the customer is planning production to efficiently exploit resources, whereby plans are frozen a few weeks ahead of time. Thus the moment of need for materials with short shipment lead times is known earlier and the supplier can be informed about it. By doing so, the supplier can save on inventory holding cost and exploit the early demand information to produce more efficiently. The latter impact is out of the scope of this paper. Also in the business-to-consumer (B2C) environment many online purchases are not time-critical, such as books and electronic devices, and a bonus may seduce the customer into accepting a later moment of delivery. Once the possible mutual benefit of early order placement has been identified, the question arises as to what commitment lead time should be chosen. Thus, providing incentives for customers to inform their supplier earlier than the point in time determined by the moment of need and the shipment lead time may substantially reduce cost for the supplier, while hardly having an impact on the customer cost. This paper discusses both the potential benefits of preordering and the optimal commitment lead time choice.
We study the preorder strategy of a firm in a single item, single location setting. The firm faces random customer demand and uses a continuous-review base-stock policy to replenish its inventory from an uncapacitated supplier with a deterministic lead time. Under the aforementioned setting, the firm offers a preorder strategy to its customers and, consequently, the customers are paid a commitment cost. The commitment cost function is strictly increasing and convex in the length of the commitment lead time. Since a commitment lead time longer than the replenishment lead time does not have any effect on reducing the lead time demand uncertainty, the commitment lead time is bounded by zero and the length of the replenishment lead time. The firm aims to evaluate this preorder strategy and find the optimal length of the commitment lead time and the optimal base-stock level, which minimize the total long-run average cost. This cost is the sum of the long-run average holding, backordering, and commitment costs. We formulate the total long-run average cost and answer the following questions: 1. When and how should the firm use the preorder strategy?
Based on the structure of the commitment cost, we find the sufficient conditions under which the firm should offer the preorder strategy. More specifically, assuming a linear commitment cost per time unit, we find a unit commitment cost threshold such that for any unit commitment cost below the threshold it is better for the firm to offer the preorder strategy. The threshold is a function of the mean lead time demand and the holding and backordering unit costs. By means of this unit commitment cost threshold the firm can decide whether offering the preorder strategy is cost effective or not. When the preorder strategy is beneficial to the firm, the firm should choose a strategy that is similar in spirit to a make-to-order production strategy. In our context, we call this a buy-to-order (BTO) strategy. This strategy works for car dealers, expensive furniture manufacturers, and so on. When preordering is not beneficial, the firm should use a strategy that is similar in spirit to a pure make-to-stock production strategy. In our context, we call this a buy-to-stock (BTS) strategy. This strategy works for grocery products, clothing, and so on.
2. What are the optimal commitment lead time and its corresponding optimal base-stock level?
The optimal commitment lead time and the optimal base-stock level are not independent of each other. We characterize the optimal base-stock level and its corresponding optimal commitment lead time. We prove the optimality of bang-bang and all-or-nothing policies for the commitment lead time and the base-stock policy, respectively. We show that the optimal commitment lead time is either zero or equal to the replenishment lead time. We show that when the commitment lead time is zero, the corresponding base-stock level is the solution of the well-known Newsvendor problem with a deterministic lead time, and when the commitment lead time equals the replenishment lead time, the corresponding optimal base-stock level is zero. Consistent with the literature, we call this policy an all-or-nothing policy (Lutze & Özer, 2008).
3. Which factors have an impact on the benefits of the preorder strategy?
Through exact sensitivity analysis on the unit commitment cost threshold, we provide insights on the benefits of the preorder strategy. We find that the preorder strategy helps with high demand uncertainty, even when the unit commitment cost is high. Similarly, the preorder strategy benefits the firm when the unit holding and backordering costs increase, even when the unit commitment cost is high. We also find that when demand uncertainty is low, the unit commitment cost threshold is more robust to changes in the unit holding and backordering costs.
Scholars have studied inventory management with ADI broadly from different perspectives. They have considered several bonus conditions for providing ADI. We study the impact of commitment cost as a function of the commitment lead time in a firm with perfect ADI, continuous-review, deterministic replenishment lead time, and Poisson demand. A similar setting has only been studied in Hariharan and Zipkin (1995) but the authors do not assign a cost to the commitment lead time. Assuming a commitment cost strictly increasing and convex in the commitment lead time, we contribute to the literature by characterizing the optimal preordering strategy and the corresponding optimal replenishment strategy. The results of this study can serve as a building block for characterizing the optimal preorder and replenishment strategies for more complicated assemble-to-order systems. In addition, firms can use our results to evaluate the potential of preorder strategies and can make decisions on rejecting or accepting a preorder strategy. The rest of this paper is organized as follows. In Section 2, we provide a brief review of related literature. In Section 3, we formulate the problem. In Section 4, we find a lower bound for the minimum cost function and characterize the optimal policies in terms of the optimal commitment lead time and the corresponding optimal base-stock level. In Section 5, we study a linear commitment cost and determine the conditions under which the preorder strategy is optimal. In Section 6, we extend our results to the compound Poisson demand case. We provide our concluding remarks in Section 7. We defer the proofs to the Appendix.
LITERATURE REVIEW
The literature on ADI assumes either perfect or imperfect demand information available ahead of the realization of actual demand. This literature can be broadly classified into two categories based on the accuracy of the demand information. These categories are perfect ADI and imperfect ADI.
When a firm has perfect ADI, customers place orders ahead of time in specific quantities to be delivered at specified due dates. Hariharan and Zipkin (1995) are the first to study the perfect ADI situation in a continuous-review setting. After this seminal work, many researchers assume perfect ADI and study different problems. We provide a summary of the most relevant literature on perfect ADI in Table 1.
When a firm has imperfect ADI, customers place their orders in advance but they provide only an estimate of either the actual due dates or order sizes (Gayon, Benjaafar, & De Véricourt, 2009). We provide a summary of the most relevant literature on imperfect ADI in Table 2.
Multiple studies consider both perfect and imperfect ADI. These are listed in Table 3.
TABLE 2 (excerpt) Wang and Tomlin (2009): a single-location periodic-review system with forecast updating and lead time uncertainty; the firm becomes less sensitive to lead time variability as the forecast updating process becomes more efficient. Iida and Zipkin (2010): a two-echelon serial system in competitive and cooperative settings; the impact of sharing forecasts on profit could be negative in the competitive setting, but it is always positive in the cooperative setting. Benjaafar, Cooper, and Mardan (2011): a capacitated supplier with stochastic production times; a state-dependent base-stock policy is optimal. Benbitour and Sahin (2015): a capacitated single-location periodic-review system; the imperfectness of demand information reduces the benefits of ADI.

Our work belongs to the category of perfect ADI. The closest study to our research is by Hariharan and Zipkin (1995). The authors study a continuous-review, single item, single-location inventory system with perfect ADI. Demand is according to a Poisson process and each customer order has a delivery due date. The firm uses a preorder strategy, which requires that customers place their orders before their actual need. The time between an order placement and its due date is called commitment lead time. Assuming that this commitment lead time is the same for all customers, Hariharan and Zipkin (1995) prove the optimality of a base-stock policy. The authors consider three different settings for both replenishment and commitment lead times: constant, independent stochastic, and sequential stochastic. For each case, they formulate an equivalent inventory model by replacing the actual replenishment lead time with the difference between the replenishment and commitment lead times. We consider the same setting as Hariharan and Zipkin (1995). Different from them, we have an additional cost component, which is the commitment cost. In addition to the base-stock level, we also optimize the commitment lead time. We prove the optimality of a so-called bang-bang policy for the commitment lead time, that is, it is either 0 or equal to the replenishment lead time, and we show that the corresponding optimal base-stock policy is an all-or-nothing policy.
PROBLEM FORMULATION
We consider a firm managing the inventory of a single item. The firm uses a continuous-review base-stock policy with base-stock level s ≥ 0 to replenish its inventory from an uncapacitated supplier. The replenishment lead time, L, is constant. Customer orders/demands follow a Poisson process with rate λ. Each customer orders a single unit. The firm uses a preorder strategy, which requires that customers place their orders τ time units before their actual need. We say that the demand occurs τ time units after the corresponding order. We call τ the commitment lead time. We assume that τ is continuous. However, our results would still hold for a discrete τ. If τ > L, the firm can meet every demand without holding inventory by placing a replenishment order τ − L time units after a customer order occurs. This results in zero holding and backordering costs. In this paper, we analyze the case with 0 ≤ τ ≤ L. We define Ψ as Ψ = [0, L] and write τ ∈ Ψ.
The firm pays a commitment cost CC(τ) per customer. We assume that CC(0) = 0 and CC(τ) is strictly increasing and convex in the length of the commitment lead time τ. In addition to the commitment cost, the firm pays an inventory holding cost of h per unit per time unit. Moreover, there is a commitment to deliver each customer order by the end of the commitment lead time τ; otherwise, the demand is backordered and a backordering cost of p per unit per time unit is paid to the customer. The customer does not take delivery before the end of the commitment lead time since the product is not needed before that. The firm's objective is to find the commitment lead time, τ, and the base-stock level, s, which minimize the total average cost.
Consistent with Hariharan and Zipkin (1995), we define D⁻(t) and D(t) as the cumulative customer orders through time t and the cumulative customer demands through time t, respectively. We have D(t) = D⁻(t − τ). We call the system analyzed in this paper the current system.
Under the assumptions outlined above, the base-stock policy is optimal (Hariharan & Zipkin, 1995). According to the base-stock policy, each customer order triggers a replenishment order. If a customer order occurs at time t, the corresponding item arrives at time t + L. In a conventional base-stock policy a replenishment order is triggered by actual customer demands, which occur τ time units later here. If a customer order occurs at time t, the actual demand in the conventional policy occurs at time t + τ and the corresponding item arrives at t + τ + (L − τ) = t + L. Hence, the supply lead time of the corresponding conventional system is L − τ. As a result, the supply and demand processes of the current and conventional systems are identical. This is why the equilibrium net inventories are equivalent and expressed as s − D⁻(L) = s − D(L + τ). We refer to Hariharan and Zipkin (1995) for the details.
For practical purposes, from now on we use X to indicate the demand during L − τ. X has a Poisson distribution with rate λ(L − τ). We use P_X(x, τ) to represent the probability mass function of X. The dependency on τ is made explicit since this helps in subsequent analysis. We define C(s, τ) as the total average cost as a function of the decision variables s and τ. C(s, τ) can be written as follows:

C(s, τ) = h E[(s − X)⁺] + p E[(X − s)⁺] + λ CC(τ). (1)

The first term in (1) is the average holding cost, the second term is the average backordering cost, and the final term is the average commitment cost. Defining F_X(x, τ) as the cumulative distribution function of X, we obtain the following alternative expression for C(s, τ):

C(s, τ) = (h + p) ∑_{x=0}^{s−1} F_X(x, τ) − p(s − λ(L − τ)) + λ CC(τ). (2)

Refer to Appendix A.1 for the derivation of this alternative expression. The firm aims to solve the optimization problem min_{s∈ℕ₀, τ∈Ψ} C(s, τ) to find the optimal commitment lead time, τ*, and the optimal base-stock level, s*.
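Equation (1) is straightforward to evaluate numerically. The sketch below is our illustration rather than part of the original analysis; it uses the base-case parameters of Section 5.2 (L = 10, λ = 1, h = 4, p = 20, c = 2), an arbitrary base-stock level s = 8, and truncates the Poisson expectation once the tail mass is negligible.

```python
from scipy.stats import poisson

def average_cost(s, tau, lam, L, h, p, cc):
    """C(s, tau) = h*E[(s - X)^+] + p*E[(X - s)^+] + lam*CC(tau),
    with X ~ Poisson(lam * (L - tau)); see Eq. (1)."""
    mu = lam * (L - tau)
    # Truncate the support where the remaining tail mass is below 1e-12.
    x_max = int(poisson.ppf(1 - 1e-12, mu)) if mu > 0 else 0
    cost = lam * cc(tau)  # average commitment cost
    for x in range(x_max + 1):
        cost += (h * max(s - x, 0) + p * max(x - s, 0)) * poisson.pmf(x, mu)
    return cost

# Linear commitment cost CC(tau) = c * tau with c = 2
print(average_cost(s=8, tau=0.0, lam=1.0, L=10.0, h=4.0, p=20.0, cc=lambda t: 2.0 * t))
```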
ANALYSIS
In this section, we initially analyze the properties of C(s, τ), prove its convexity with respect to the decision variables, and construct a lower bound on it (Section 4.1). Then, we prove the structure of the optimal policy (Section 4.2).
Analysis of the cost function
C(s, τ) is a continuous function with respect to τ and a discrete function with respect to s. In this section, we investigate the properties of C(s, τ), which help in proving the structure of the optimal policy. The proofs of the results can be found in Appendix A.
In Lemmas 1 and 2 we prove the convexity of C(s, τ) with respect to the commitment lead time and the base-stock level, respectively.
These results imply that we can find the optimal value of each decision variable by fixing the other one. Initially, for a given value of τ, we minimize C(s, τ) with respect to s. Using the first order conditions, we find that the optimal base-stock level for a given value of τ ∈ Ψ is the base-stock level s that satisfies the following inequalities:

F_X(s − 1, τ) ≤ p/(h + p) ≤ F_X(s, τ). (3)

With the following theorem, we prove that the optimal base-stock level is nonincreasing in the length of the commitment lead time. Hence, the maximum value of the optimal base-stock level can be found by setting the commitment lead time to zero. Similarly, the minimum value of the optimal base-stock level, which is 0, occurs when the commitment lead time is at its maximum value, that is, L.
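The critical-fractile condition in (3) can be evaluated directly with a Poisson quantile function: the optimal base stock is the smallest s with F_X(s, τ) ≥ p/(h + p). A minimal sketch (ours; the parameter values repeat the Section 5.2 base case), which also previews the monotonicity established in the theorem below:

```python
from scipy.stats import poisson

def optimal_base_stock(tau, lam, L, h, p):
    """Smallest s with F_X(s, tau) >= p/(h + p), i.e., the solution of (3)."""
    mu = lam * (L - tau)
    return int(poisson.ppf(p / (h + p), mu)) if mu > 0 else 0

# The optimal base stock drops from S at tau = 0 down to 0 at tau = L.
for tau in [0.0, 2.5, 5.0, 7.5, 10.0]:
    print(tau, optimal_base_stock(tau, lam=1.0, L=10.0, h=4.0, p=20.0))
```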
Theorem 3 The optimal base-stock level is nonincreasing in τ ∈ Ψ. The set of optimal base-stock levels can be written as S = {S, S − 1, …, 2, 1, 0}, where S is the optimal base-stock level corresponding to τ = 0 and 0 is the optimal base-stock level corresponding to τ = L.
We refer to Appendix A.4 for the proof. This result implies that the whole interval Ψ can be partitioned into S + 1 subintervals Ψ s , where subinterval Ψ s covers all the commitment lead times for which the corresponding optimal base-stock level is s.
Define C(τ) as C(τ) = min_{s∈ℕ₀} C(s, τ). According to Theorem 3, there is a finite sequence of continuous convex functions C(s, τ), each defined on τ ∈ Ψ, for which C(τ) = C(s, τ) whenever τ ∈ Ψ_s. Hence, the real-valued function C(τ) is continuous piecewise-convex on Ψ. In fact, C(τ) constitutes a tight lower bound for C(s, τ) (refer to Figure 1 for an example).
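Numerically, the lower bound C(τ) is obtained by pairing each τ with its newsvendor base stock and evaluating the cost; plotting the result over a grid of τ values reproduces the piecewise-convex shape of Figure 1. A self-contained sketch (ours) under the same illustrative parameters:

```python
import numpy as np
from scipy.stats import poisson

def C_lower(tau, lam, L, h, p, cc):
    """C(tau) = min_s C(s, tau); the minimizer is the newsvendor base stock."""
    mu = lam * (L - tau)
    s = int(poisson.ppf(p / (h + p), mu)) if mu > 0 else 0
    x_max = int(poisson.ppf(1 - 1e-12, mu)) if mu > 0 else 0
    cost = sum((h * max(s - x, 0) + p * max(x - s, 0)) * poisson.pmf(x, mu)
               for x in range(x_max + 1))
    return cost + lam * cc(tau)

taus = np.linspace(0.0, 10.0, 201)
C_tau = [C_lower(t, lam=1.0, L=10.0, h=4.0, p=20.0, cc=lambda u: 2.0 * u) for t in taus]
print(C_tau[0], C_tau[-1])  # endpoint costs C(0) and C(L) = lam * c * L
```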
4.2 The structure of the optimal policy

In this section, we prove the main result of this paper. We assume a general structure for the commitment cost, CC(τ), and under a very mild condition we show the optimality of a so-called "bang-bang policy" for the commitment lead time. This policy implies that the optimal commitment lead time is either zero or the maximum possible value, which is L.
Hence, it is one of the endpoints of the interval Ψ. In addition, we show that the corresponding optimal base-stock policy is of "all-or-nothing" type. When the optimal commitment lead time is zero, the optimal base-stock level is at its maximum value S (refer to Theorem 3). Otherwise, the optimal commitment lead time is equal to the replenishment lead time L and the corresponding optimal base-stock level is zero, that is, "nothing".
Theorem 4 Let CC(τ) = ∫₀^τ ψ(x) dx be the commitment cost function, where ψ(x) is a positive and differentiable function such that Φ(x) = λψ(x) − C(0)/L is either a constant function or a nonconstant function without a root in the interval (0, L). Then the optimal commitment lead time policy on Ψ is a "bang-bang" policy and its corresponding optimal base-stock policy is of "all-or-nothing" type.
We refer to Appendix A.5 for the proof. Φ(x) and ψ(x) are two auxiliary functions. Φ(x) is used for presenting the sufficient condition for the validity of Theorem 4 and all the other relevant proofs. It does not have any specific meaning. ψ(x) is used to build the amount of commitment cost paid to each customer as CC(τ) = ∫₀^τ ψ(x) dx. Generally speaking, ψ(x) has no meaning; however, when ψ(x) is a constant function ψ(x) = c, it can be interpreted as the unit commitment cost per time unit per customer. For a linear commitment cost, that is, CC(τ) = cτ, we have ψ(x) = c and Φ(x) = λc − C(0)/L. Given that Φ(x) is a constant function, the condition in Theorem 4 is satisfied. Hence, the bang-bang policy is optimal for linear commitment costs.
For a nonlinear commitment cost, if the condition in Theorem 4 does not hold, the result may or may not hold. Theorem 4 provides a sufficient condition but not a necessary condition. We provide two numerical examples to clarify this further. We use the same parameters in both examples with different commitment costs (please refer to Figure 2). In both examples, we have C(0)/(λL) = 12.69/4 = 3.17.
We have ψ₁(x) = x and ψ₂(x) = (2/5)x + C(0)/5. Both Φ₁(x) and Φ₂(x) have a root in (0, L). Therefore, for both cases, the sufficient condition does not hold. However, as can be seen in Figure 2, the bang-bang policy may or may not hold. It does not hold for the first example, while it holds for the second one. The proof of Theorem 4 relies on constructing a monotone lower bound for C(τ) on Ψ. We need the conditions in Theorem 4 to hold for constructing this lower bound. Let C_LB(τ) be the lower bound. We construct it so that the endpoints of C(τ) and C_LB(τ) coincide, that is, C(0) = C_LB(0) and C(L) = C_LB(L). Therefore, the minimum of C(τ) on Ψ is equal to the minimum of C_LB(τ) on Ψ. Given that C_LB(τ) is a monotone function on the closed interval Ψ, the minimum of C(τ) always happens at the endpoints of Ψ. Hence, τ* ∈ {0, L}, that is, the optimal commitment lead time policy is a bang-bang policy (refer to Appendix A.5 for the details). In Figure 3, we provide two examples for the monotonic lower bounds; one where τ* = 0 and another where τ* = L.
With Corollary 5, we provide sufficient conditions for the optimality of the BTS and BTO strategies.

Corollary 5 If λψ(τ) > C(0)/L for all τ ∈ (0, L), then τ* = 0; if λψ(τ) < C(0)/L for all τ ∈ (0, L), then τ* = L.
This corollary implies that the firm either uses the standard base-stock policy without any commitment lead time (BTS strategy) or holds no inventory and uses a commitment lead time of L time units (BTO strategy).
LINEAR COMMITMENT COST
In this section, we consider a linear commitment cost function. We define c as the commitment cost per time unit and write CC(τ) = cτ. With the following corollary, we prove that the sufficient condition in Theorem 4 is satisfied and, therefore, the bang-bang policy is optimal under the linear commitment cost. In addition, we show that there is a threshold value c₀ such that if c > c₀, the optimal strategy is the BTS strategy and if c ≤ c₀, it is the BTO strategy.
Corollary 6 Let the commitment cost function be CC(τ) = cτ, then
1. The optimal commitment lead time policy on Ψ is a "bang-bang policy" and its corresponding optimal base-stock policy is of "all-or-nothing" type.
2. There exists a unit commitment cost threshold c₀ = C(0)/(λL) such that the BTO strategy (τ* = L, s* = 0) is optimal when c ≤ c₀ and the BTS strategy (τ* = 0, s* = S) is optimal when c ≥ c₀.
The intuition behind this result is that, under the linear commitment cost structure, when a firm has two decisions to make, one on the base-stock level and another on the commitment lead time, it has to optimize the total cost by considering the trade-off of having more/less inventory or a longer/shorter commitment lead time. If c ≥ c₀, that is, the unit commitment cost is higher than the threshold, it is cheaper to hold inventory. This is why a solution with no investment in commitment cost but the highest investment in inventory is chosen. On the other hand, if c ≤ c₀, that is, the unit commitment cost is lower than the threshold, it is cheaper to have a long commitment lead time. This is why a solution with maximum investment in commitment cost but no investment in inventory is chosen.
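With the bang-bang result, the strategy choice reduces to comparing the two endpoint costs: the BTS cost C(0) against the BTO cost λcL, which is equivalent to comparing c with c₀ = C(0)/(λL). The sketch below is our illustration of this comparison and assumes that endpoint form of the threshold:

```python
from scipy.stats import poisson

def bts_cost(lam, L, h, p):
    """Newsvendor base stock S and optimal cost C(0) at tau = 0."""
    mu = lam * L
    S = int(poisson.ppf(p / (h + p), mu))
    x_max = int(poisson.ppf(1 - 1e-12, mu))
    C0 = sum((h * max(S - x, 0) + p * max(x - S, 0)) * poisson.pmf(x, mu)
             for x in range(x_max + 1))
    return S, C0

def choose_strategy(c, lam, L, h, p):
    S, C0 = bts_cost(lam, L, h, p)
    c0 = C0 / (lam * L)  # unit commitment cost threshold
    # Returns (strategy, optimal base stock, optimal commitment lead time).
    return ("BTO", 0, L) if c <= c0 else ("BTS", S, 0.0)

print(choose_strategy(c=2.0, lam=1.0, L=10.0, h=4.0, p=20.0))
```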
In the rest of this section, we perform sensitivity analysis on the unit commitment cost threshold, and we illustrate the behavior of C(τ) through numerical examples. In addition, we consider the profit maximization version of the problem and determine the conditions on the unit commitment cost for profitability of the BTO strategy.
Sensitivity analysis on the unit commitment cost threshold
The unit commitment cost threshold c₀ plays a critical role in the determination of the optimal strategy. According to Corollary 6, c₀ is a function of the unit holding cost h, the unit backordering cost p, the mean lead time demand μ, and the distribution function of the lead time demand F_X(x, τ) for the following pairs of x and τ: (S, 0) and (S − 1, 0). Since the values of F_X(S, 0) and F_X(S − 1, 0) also depend on h, p, and μ, the effect of a change in one of these parameters on c₀ is not straightforward. In Lemma 7 we characterize the effect of each parameter on c₀.
Lemma 7
The unit commitment cost threshold c₀ is
1. increasing in the unit holding cost h,
2. increasing in the unit backordering cost p, and
3. decreasing in the mean lead time demand μ.
We refer to Appendix A.8 for the proof. According to Lemma 7, the constraint c < c₀ is likely to hold more often when h and p increase and μ decreases. Hence, the BTO strategy becomes preferable (refer to Corollary 6). Inventory management is difficult under high demand uncertainty. Firms may keep excess inventory to protect against a stock-out situation or they might keep low inventory to prevent a surplus situation. Under Poisson demand, a low mean lead time demand implies a high coefficient of variation (1/√μ) and, therefore, high demand uncertainty. The BTO, that is, preorder, strategy becomes more beneficial as the demand uncertainty increases. This holds even for high values of the commitment cost. When the unit holding and backordering costs increase, the surplus and stock-out situations become more expensive compared to paying a commitment cost. This is why the preorder strategy becomes more beneficial. Figure 4 confirms this conclusion. Figure 4 also illustrates that when the firm has less demand uncertainty, the unit commitment cost threshold is more robust to changes in the unit holding and backordering costs. In addition, the unit commitment cost threshold is more sensitive to changes in the unit holding cost compared to changes in the unit backordering cost.

FIGURE The unit commitment cost threshold as a function of the mean lead time demand
Consistent with Lemma 7, Figure 5 shows that the firm prefers the BTO strategy as demand uncertainty increases; inventory related costs outweigh the commitment cost, and the preorder strategy becomes preferable even for high values of the unit commitment cost.
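The sensitivity pattern is easy to reproduce numerically by sweeping the mean lead time demand and recomputing the threshold. A short sketch (ours; it again assumes the endpoint form c₀ = C(0)/(λL), so the threshold is the optimal newsvendor cost per unit of mean lead time demand):

```python
from scipy.stats import poisson

def threshold(mu, h, p):
    """c0 = C(0)/mu with X ~ Poisson(mu): newsvendor cost per unit of mean demand."""
    S = int(poisson.ppf(p / (h + p), mu))
    x_max = int(poisson.ppf(1 - 1e-12, mu))
    C0 = sum((h * max(S - x, 0) + p * max(x - S, 0)) * poisson.pmf(x, mu)
             for x in range(x_max + 1))
    return C0 / mu

# c0 decreases in the mean lead time demand (Lemma 7, part 3).
for mu in [1, 2, 5, 10, 20]:
    print(mu, round(threshold(mu, h=4.0, p=20.0), 3))
```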
5.2 Numerical analysis on C(τ)

Next, we conduct multiple numerical analyses to illustrate the behavior of the function C(τ) and the optimality of the bang-bang policy when the commitment cost is linear in τ. As a base case, we consider the following parameter values: L = 10, λ = 1, h = 4, p = 20, and c = 2. In Figure 6, the parameter values are chosen such that we observe the following three optimal structures:
1. a single optimal solution (s*, τ*) = (S, 0), indicated by a dotted line,
2. two alternative optimal solutions (s*₁, τ*₁) = (S, 0) and (s*₂, τ*₂) = (0, L), indicated by a continuous line, and
3. a single optimal solution (s*, τ*) = (0, L), indicated by a dashed plot.
In Figure 6, the behavior of C(τ) with respect to its argument for different parameter values is depicted. For each parameter combination, C(τ) is a continuous piecewise function of τ. Each "piece" is for a specific base-stock level and each piece is convex (Lemma 1). From Theorem 3, we know that the optimal base-stock level is nonincreasing in τ. Since the base-stock levels take integer values, by increasing τ the corresponding optimal base-stock level may decrease by one unit. This is why the optimal base-stock level remains the same for an interval of τ values and in each convex piece the optimal base-stock level is the same. As soon as the optimal base-stock level decreases (as a response to increasing τ), another convex piece emerges. Figure 6 suggests that for low values of h, τ* is 0 and, hence, the firm follows a BTS strategy. For sufficiently large h values, τ* becomes L and the firm follows a BTO strategy.
This result follows not only intuitively but also directly from Corollary 6. Note that as h → 0, S → ∞ and c₀ → 0. Hence, c > c₀ holds and Corollary 6 implies the optimality of the BTS strategy.
As the unit backordering cost p increases, we observe a similar behavior, that is, the strategy changes from BTS to BTO. In the classical inventory theory, the effects of h and p are usually opposite. However, in our problem, their effects are similar. The reason is that the optimality of a policy depends on the value of c₀. As p → ∞, S → ∞ and c₀ → ∞. Hence, c₀ > c holds and Corollary 6 implies the optimality of the BTO strategy. Therefore, a BTS strategy is optimal for low values of p and a BTO strategy becomes optimal as p gets sufficiently large. This implies the same trend in the optimal policy structure as h changes. Figure 6 confirms this conclusion.
As the mean lead time demand μ increases, the strategy changes from BTO to BTS. This is in line with what we observe in the classical inventory theory. We also observe that as μ increases, C(τ) approaches a smooth concave function. As explained above, in each convex piece of C(τ) the optimal base-stock level is the same. When μ increases, the optimal base-stock level becomes more sensitive to τ. Hence, the interval of τ for which a base-stock level remains optimal gets shorter, that is, the convex pieces become smaller. As observed in Figure 7, when the mean lead time demand increases, the number of convex pieces increases, and for high values of μ, C(τ) looks like a smooth concave function.
Profitability of the optimal policy
In the previous sections, we study the cost minimization problem min_{s∈ℕ₀, τ∈Ψ} C(s, τ). We prove that the optimal strategy is either BTO or BTS. In this section, we concentrate on the BTO strategy and analyze the average total profit. If the firm follows the BTO strategy, the only positive cost component is the commitment cost cL paid per customer. Define v as the unit selling price and Θ as the average total profit of the firm per time unit. Then we formulate the long-run average profit of the firm under the BTO strategy as follows:

Θ = λ(v − cL).

The long-run average profit of the firm is nonnegative when c < v/L. In addition, the optimality of the BTO strategy requires c ≤ c₀. Hence, the unit commitment cost should satisfy the following inequality:

c ≤ min{v/L, c₀}.

In Figure 8, we consider the unit commitment cost and the mean lead time demand and determine the region where the BTO strategy is profitable. Figure 8 helps managers in making strategic decisions when, in addition to the inventory-related and commitment costs, the revenue from selling the product is considered. This figure suggests that the profitability of the BTO strategy depends on the value of c and, although it is cost optimal, the firm should not follow this strategy for high values of c as it is nonprofitable. When the optimal strategy is the BTS strategy, the unit commitment cost does not play a role in the total profit of the firm.
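The profitability condition combines with the cost-optimality condition into the single check c ≤ min{v/L, c₀}, which is what Figure 8 depicts. A small sketch (ours; the selling price v and the threshold value are illustrative, with c₀ computed as in the earlier listing):

```python
def bto_viable(c, v, c0, L):
    """BTO is cost-optimal iff c <= c0 and profitable iff Theta = lam*(v - c*L) >= 0,
    i.e., c <= v / L; both conditions must hold."""
    return c <= min(v / L, c0)

# Example: v = 25, L = 10, and a threshold c0 = 3.17 as in the Section 4.2 example.
print(bto_viable(c=2.0, v=25.0, c0=3.17, L=10.0))
```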
COMPOUND POISSON DEMAND
In this section, we assume that the customer demand follows a compound Poisson process. By definition, compound Poisson demand means that the size of a customer demand is a stochastic variable. The demand size is independent of other customer demands and of the customer arrival process. Similar to our previous assumption, we assume that the customer arrival process is a Poisson process with parameter λ. We define D as a random variable representing the total demand in the time interval L − τ. P_D(x, τ) is the probability that D takes the value x when the commitment lead time is τ. We define F_D(s, τ) = ∑_{x=0}^{s} P_D(x, τ) as the cumulative distribution function of D and write the long-run average total cost Γ(s, τ), consisting of holding, backordering, and commitment costs, as follows:

Γ(s, τ) = h E[(s − D)⁺] + p E[(D − s)⁺] + λ CC(τ).

Here E[D] equals λθ(L − τ), where θ is the mean demand size. Refer to Appendix B.1 for expressions and detailed derivations. Similar to the pure Poisson demand case, we investigate the behavior of Γ(s, τ) with respect to the decision variables. In Lemma 8 we prove the convexity of Γ(s, τ) with respect to the base-stock level.
Lemma 8 For each τ ∈ Ψ, the cost function Γ(s, τ) is convex in s.
Refer to Appendix B.2 for the proof. Lemma 8 implies that for a given value of τ ∈ Ψ, Γ(s, τ) can be minimized with respect to s. For a given τ ∈ Ψ, the optimal base-stock level satisfies the following inequalities:

F_D(s − 1, τ) ≤ p/(h + p) ≤ F_D(s, τ).

Similar to the pure Poisson customer demand case, the state space Ψ is divided into multiple subintervals with the optimal base-stock level being different in each subinterval.
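The compound Poisson distribution of D can be built numerically by conditioning on the number of arrivals and convolving the demand-size distribution, in the spirit of the recursion in Appendix B.1. The sketch below is our illustration; the two-point demand-size pmf f is an arbitrary example, and the symbols follow the reconstruction above.

```python
import numpy as np
from scipy.stats import poisson

def compound_pmf(lam, L, tau, f, x_max):
    """P_D(x, tau) for x = 0..x_max: arrivals ~ Poisson(lam*(L - tau)),
    i.i.d. demand sizes with pmf f (f[0] = 0, as in Appendix B.1)."""
    mu = lam * (L - tau)
    f = np.asarray(f + [0.0] * max(0, x_max + 1 - len(f)), dtype=float)[: x_max + 1]
    pmf = np.zeros(x_max + 1)
    pmf[0] = np.exp(-mu)                      # no arrivals => zero demand
    f_n = np.zeros(x_max + 1); f_n[0] = 1.0   # 0-fold convolution: point mass at 0
    n_max = int(poisson.ppf(1 - 1e-12, mu)) if mu > 0 else 0
    for n in range(1, n_max + 1):
        f_n = np.convolve(f_n, f)[: x_max + 1]  # f^n = f^(n-1) * f
        pmf += poisson.pmf(n, mu) * f_n
    return pmf

def optimal_base_stock_compound(lam, L, tau, f, h, p, x_max=500):
    """Smallest s with F_D(s, tau) >= p/(h + p)."""
    cdf = np.cumsum(compound_pmf(lam, L, tau, f, x_max))
    return int(np.searchsorted(cdf, p / (h + p)))

# Demand size 1 or 2 with equal probability (f[0] = 0 by assumption)
print(optimal_base_stock_compound(lam=1.0, L=10.0, tau=0.0, f=[0.0, 0.5, 0.5], h=4.0, p=20.0))
```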
Next with Conjecture 9, we claim that the convexity result also holds with respect to the commitment lead time.
Conjecture 9 For each s ∈ ℕ₀, the cost function Γ(s, τ) is convex in τ.
Our proof of the optimality of the bang-bang policy relies on constructing a monotone lower bound whose endpoints coincide with the endpoints of the cost function. We aim to do the same for the compound Poisson demand case. We define Γ̃(s, τ) = Γ(s, τ) − λCC(τ) and Γ̃(τ) = min_{s∈ℕ₀} Γ̃(s, τ). We have the following conjecture:

Conjecture 10 For all τ ∈ Ψ, d/dτ (Γ̃(τ)/(L − τ)) ≥ 0.

If Conjecture 10 holds, the optimality of the "bang-bang policy" follows. We are unable to provide a formal proof. We summarize our attempt and findings in Appendix B.3. Our extensive numerical analysis confirms the correctness of Conjectures 9 and 10. In Appendix B.4 we present some of our numerical results.
CONCLUDING REMARKS
In this paper, we investigated the impact of the commitment cost on the replenishment strategy of a firm under a continuous-review setting. The firm faces a Poisson customer demand and uses a preorder strategy which requires that customers place their orders ahead of the actual need based on a predetermined time window called the commitment lead time.
The firm pays a commitment cost which is strictly increasing and convex in the length of the commitment lead time. The firm uses a base-stock replenishment policy with a deterministic lead time. The firm's objective is to find the commitment lead time and the base-stock level which minimize the total average cost consisting of inventory holding, backordering and commitment costs. This is a nonlinear mixed-integer optimization problem. We have a continuous commitment lead time and a discrete base-stock level as our decision variables and they are dependent on each other. We proved the convexity of the average cost function in each decision variable. We showed that the optimal base-stock level is nonincreasing in the length of the commitment lead time. The average cost as a function of a commitment lead time and the corresponding optimal base-stock level served as a continuous piecewise-convex lower bound on the original average cost function. We constructed another monotone lower bound on this piecewise-convex lower bound. This monotone lower bound implied the optimality of the bang-bang policy for the commitment lead time. Under this policy the commitment lead time is either zero or the maximum possible value, which is the replenishment lead time. In addition, we showed that the corresponding optimal base-stock policy is of all-or-nothing type. Hence, the optimal base-stock level is either zero or the solution of the well-known Newsvendor problem with complete replenishment lead time.
The optimality of the bang-bang policy holds for general commitment cost structures under a very mild condition. As a specific case, we studied a linear commitment cost. For this case, we found a unit commitment cost threshold which dictates the optimality of either a BTO or a BTS strategy. More specifically, we showed that when the unit commitment cost is less than the unit commitment cost threshold, the optimal ordering strategy is a BTO strategy (the optimal commitment lead time is equal to the replenishment lead time and the optimal base-stock level is zero) and when the unit commitment cost is more than the unit commitment cost threshold, the optimal strategy is a pure BTS strategy (the optimal commitment lead time is zero and the optimal base-stock level is nonzero). Moreover, we showed that the unit commitment cost threshold is increasing in the unit holding and backordering costs and decreasing in the mean lead time demand. For a given base-stock level, we developed a simple and accurate approximation for the corresponding optimal commitment lead time. We determined the conditions on the unit commitment cost for profitability of the BTO strategy and studied the case with a compound Poisson customer demand.
In this paper, we studied a system with a single location and a single item with a cost minimization objective. A natural extension is to replace the backordering cost with a service level and/or waiting time constraints and check whether the optimality of a bang-bang policy remains. Another extension is to consider a system where each customer places an order for a product which needs to be assembled from multiple components with different replenishment lead times. Then, the firm needs to find the optimal replenishment policy for each component and also the optimal commitment lead time (Ahmadi, 2019; Ahmadi, Atan, de Kok, & Adan, 2019; Atan, Ahmadi, Stegehuis, de Kok, & Adan, 2017). More general assembly structures can also be analyzed. The combination of planned lead time and commitment lead time decisions can also be an interesting and challenging problem (Atan, de Kok, Dellaert, Janssen, & van Boxel, 2016; Jansen, Atan, Adan, & de Kok, 2018, 2019).
A.2 Proof of Lemma 1
We show that for all s ∈ ℕ₀, the second derivative of C(s, τ) with respect to τ ∈ Ψ is nonnegative. For this purpose, we initially calculate the derivatives of P_X(x, τ) and F_X(n, τ) = ∑_{x=0}^{n} P_X(x, τ) with respect to τ. Since X has a Poisson distribution with mean λ(L − τ), the derivative of P_X(x, τ) with respect to τ is

dP_X(x, τ)/dτ = λ(P_X(x, τ) − P_X(x − 1, τ)),

and the derivative of F_X(n, τ) is

dF_X(n, τ)/dτ = λP_X(n, τ).
Using these results, we obtain the first and second derivatives of C(s, τ) with respect to τ. Using the fact that λ(L − τ)P_X(s − 1, τ) = sP_X(s, τ), and since CC(τ) is convex so that d²CC(τ)/dτ² ≥ 0, we find that d²C(s, τ)/dτ² ≥ 0 for all s ∈ ℕ₀ and τ ∈ Ψ. It means that for all s ∈ ℕ₀, C(s, τ) is convex with respect to τ. ▪
A.3 Proof of Lemma 2
The concept of convexity for a real valued function on a discrete domain is not common. According to Van Houtum and Kranenburg (2015) this concept can be defined as follows.
Definition 11 Let f(x) be a function on ℤ. Then f is convex if f(x + 2) − 2f(x + 1) + f(x) ≥ 0 for all x ∈ ℤ.

The function C(s, τ) is convex in s ∈ ℕ₀ if for all τ ∈ Ψ we have Δ²_s C(s, τ) ≥ 0, where Δ²_s C(s, τ) is defined as Δ²_s C(s, τ) = C(s + 2, τ) − 2C(s + 1, τ) + C(s, τ). Next, we determine the expressions for C(s + 1, τ) and C(s + 2, τ) and verify that Δ²_s C(s, τ) = (h + p)P_X(s + 1, τ) ≥ 0. ▪
A.4 Proof of Theorem 3
We use the following lemma in proving the result.
Lemma 12
Let F_X(s, τ) be the Poisson cumulative distribution function of X with mean λ(L − τ). Then F_X(s, τ) is increasing in both τ and s.

Proof For all s ∈ ℕ₀, F_X(s, τ) is a continuous and differentiable function with respect to τ; from Appendix A.2, we have dF_X(s, τ)/dτ = λP_X(s, τ). For each s ∈ ℕ₀ and τ ∈ Ψ, P_X(s, τ) ≥ 0. Then, for all τ ≥ 0, F_X(s, τ) is increasing in τ.
For all τ ∈ Ψ, F_X(s, τ) is a discrete function with respect to s; by calculating the first-order forward difference, we have F_X(s + 1, τ) − F_X(s, τ) = P_X(s + 1, τ) ≥ 0, so F_X(s, τ) is increasing in s. ▪

Let S be the base-stock level corresponding to τ = 0. It means that F_X(S − 1, 0) ≤ p/(h + p) ≤ F_X(S, 0). Since F_X(s, τ) is increasing and continuous in τ (Lemma 12), by increasing τ, both functions F_X(S − 1, τ) and F_X(S, τ) increase until a τ_{S,S−1} > 0 such that F_X(S − 1, τ_{S,S−1}) = p/(h + p). Hence, for 0 ≤ τ < τ_{S,S−1} and s* = S, inequality (3) holds. Since F_X(s, τ) is increasing in s (Lemma 12), we can write F_X(S − 2, τ) ≤ F_X(S − 1, τ). In a similar way, by increasing τ, we can find τ_{S−1,S−2} such that τ_{S−1,S−2} > τ_{S,S−1} and F_X(S − 2, τ_{S−1,S−2}) = p/(h + p). Hence, for τ_{S,S−1} ≤ τ < τ_{S−1,S−2} and s* = S − 1, inequality (3) holds. Continuing this process, we find S = {S, S − 1, …, 2, 1, 0}; corresponding to each s* ∈ S, there is a subinterval Ψ_s ⊂ Ψ on which s* is the optimal base-stock level. From the last expression, it is clear that s* depends on τ and s* is nonincreasing in τ. ▪
A.5 Proof of Theorem 4
We need an additional result to prove this theorem. We define C̃(s, τ) on Ψ as

C̃(s, τ) = C(s, τ) − λCC(τ) ≥ 0. (A6)

Remember that S is the set of optimal base-stock levels (refer to Theorem 3). In Appendix A.1, we provide an alternative expression for C(s, τ). Using this alternative, we obtain the corresponding expression for C̃(s, τ) and then the derivative of C̃(s, τ) with respect to τ, which we simplify using the fact that for the Poisson distribution we have λ(L − τ)P_X(s − 1, τ) = sP_X(s, τ). Since we subtract λCC(τ) from C(s, τ) and CC(τ) does not depend on s, we have the same set S as we had before for the lower bound C̃(τ) = min_{s∈ℕ₀} C̃(s, τ). Also, from inequality (3), we obtain

C̃(τ) ≥ (C̃(0)/L)(L − τ). (A7)

By adding λCC(τ) to both sides of (A7), we have

C(τ) ≥ (C̃(0)/L)(L − τ) + λCC(τ). (A8)

We know that CC(0) = 0. Also, from expression (A6), we have C̃(τ) = C(τ) − λCC(τ). Then, C̃(0) = C(0) and we can rewrite expression (A8) as follows.
Let C_LB(τ) = (C(0)/L)(L − τ) + λCC(τ). Then, C_LB(τ) is a lower bound for C(τ) on Ψ. Since C(0) = C_LB(0) and C(L) = C_LB(L), C(τ) and C_LB(τ) coincide at the endpoints of the interval Ψ. In addition, C_LB(τ) is a continuous function by construction. Now, we want to show that C_LB(τ) is a monotone function on Ψ. To do this, we show that for all τ ∈ (0, L), dC_LB(τ)/dτ ≠ 0.
By taking the first derivative of C_LB(τ), we have

dC_LB(τ)/dτ = λψ(τ) − C(0)/L = Φ(τ).

For all τ ∈ (0, L), we have |Φ(τ)| > 0, so dC_LB(τ)/dτ ≠ 0. It means that C_LB(τ) is a monotone lower bound for C(τ) on Ψ. Then, the minimum of C(τ) on Ψ is equal to the minimum of C_LB(τ) on Ψ. Since C_LB(τ) is a monotone function on Ψ and Ψ is a closed interval, the minimum of C(τ) always happens at the endpoints of Ψ. Then, τ* ∈ {0, L}. The optimality of the "all-or-nothing" base-stock policy follows directly from Theorem 3. ▪
A.6 Proof of Corollary 5
From expression (A10), it is obvious that when λψ(τ) > C(0)/L for all τ ∈ (0, L), C_LB(τ) is a strictly increasing function in τ and its minimum occurs at zero, that is, τ* = 0. And when λψ(τ) < C(0)/L for all τ ∈ (0, L), C_LB(τ) is a strictly decreasing function in τ and its minimum occurs at L, that is, τ* = L.
A.7 Proof of Corollary 6

1. This part follows directly from Theorem 4.
2. Under the linear commitment cost CC(τ) = cτ, the lower bound becomes C_LB(τ) = (C(0)/L)(L − τ) + λcτ = C(0) + (λc − C(0)/L)τ, which is a straight line. Then, for all τ ∈ Ψ, C_LB(τ) is a monotone function. Also, C_LB(0) = C(0) and C_LB(L) = C(L). Then the optimal τ on Ψ is either zero or L.
From this expression, when λc − C(0)/L > 0, C_LB(τ) is increasing in τ and C_LB(0) is its minimum value on Ψ. Otherwise, C_LB(τ) is decreasing in τ and C_LB(L) is its minimum value on Ψ. Also, when λc − C(0)/L = 0, C_LB(τ) = C(0). So, C_LB(τ) is a constant function on Ψ and both 0 and L are optimal commitment lead times.
A.8 Proof of Lemma 7
For each h, p, and μ ≠ 0, c₀ is continuous and differentiable.
1. The first derivative of c₀ with respect to h is

∂c₀/∂h = (S/μ)F_X(S, 0) − F_X(S − 1, 0), (A11)

where μ = λL. Now we need to show that F_X(S, 0) ≥ (μ/S)F_X(S − 1, 0). For x = 0, 1, 2, …, S − 1, we can write S ≥ x + 1. Then, 1/S ≤ 1/(x + 1). We can multiply both sides of the last inequality by μP_X(x, 0) and use the Poisson identity μP_X(x, 0)/(x + 1) = P_X(x + 1, 0). Summing over x = 0, 1, …, S − 1, we write (μ/S)F_X(S − 1, 0) ≤ F_X(S, 0) − P_X(0, 0). Hence, (μ/S)F_X(S − 1, 0) < F_X(S, 0). Then, from expression (A11), we have ∂c₀/∂h > 0. As a result, c₀ is increasing in h.
2. The first derivative of c₀ with respect to p is ∂c₀/∂p = E[(X − S)⁺]/μ ≥ 0, where X has mean μ. As a result, c₀ is increasing in p.
3. The first derivative of c₀ with respect to μ is obtained using the equalities ∂F_X(S, 0)/∂μ = −P_X(S, 0) and ∂F_X(S − 1, 0)/∂μ = −P_X(S − 1, 0). For the Poisson probability distribution we know that μP_X(S − 1, 0) = SP_X(S, 0). Then, the last expression reduces to

∂c₀/∂μ = (S/μ²)(p − (h + p)F_X(S, 0)).

Since S ∈ S, F_X(S, 0) ≥ p/(p + h), so ∂c₀/∂μ ≤ 0. As a result, c₀ is decreasing in μ.
B.1 Derivation of the cost function
By definition, compound Poisson demand means that the size of customer demand X is a stochastic variable. It is independent of other customer demands and of the customer arrival process. Similar to our previous assumption, we assume that the customer arrival process is a Poisson process with parameter λ. Then the number of customers N in a time interval of length L − τ has a Poisson distribution with mean λ(L − τ). We define P_N(n, τ) as the probability that the random variable N takes the value n when the commitment lead time is τ:

P_N(n, τ) = e^{−λ(L−τ)} (λ(L − τ))ⁿ / n!. (B1)

Let f_x be the probability that a customer demands x units, that is, the probability that X takes the value x = 1, 2, … units. Assume that the mean demand size is θ. When f_1 = 1, the demand process is a pure Poisson process; then the total demand in a given interval is equal to the number of customer arrivals. For handling the general case with varying demand sizes, we define fⁿ_x as the probability that n = 1, 2, … customers demand x units in total. Since a customer demand of size zero is not rational, without loss of generality we assume that f_0 = 0 (Feeney and Sherbrooke (1966)). Knowing that f_0 = 0 and f¹_x = f_x, fⁿ_x can be calculated recursively as follows (see Axsäter, 2015, p. 79):

fⁿ_x = ∑_{k=n−1}^{x−1} fⁿ⁻¹_k f_{x−k}. (B2)

We define D as a random variable representing the total demand in the time interval L − τ. In addition, we define P_D(x, τ) as the probability that D takes the value x when the commitment lead time is τ. Using expressions (B1) and (B2), we write P_D(x, τ) as follows:

P_D(0, τ) = P_N(0, τ), and P_D(x, τ) = ∑_{n=1}^{x} fⁿ_x P_N(n, τ) for x ≥ 1.

Then, the cumulative distribution function of D is calculated as F_D(s, τ) = ∑_{x=0}^{s} P_D(x, τ). Based on these definitions, the long-run average total cost Γ(s, τ), consisting of holding, backordering, and commitment costs, can be calculated as

Γ(s, τ) = h E[(s − D)⁺] + p E[(D − s)⁺] + λ CC(τ).

Note that E[D] = λθ(L − τ).

B.2 Proof of Lemma 8

We show that Δ²_s Γ(s, τ) = Γ(s + 2, τ) − 2Γ(s + 1, τ) + Γ(s, τ) ≥ 0. Knowing that Γ(s + 1, τ) = (h + p)F_D(s, τ) − p + Γ(s, τ) and Γ(s + 2, τ) = (h + p)F_D(s + 1, τ) − p + Γ(s + 1, τ), we have Δ²_s Γ(s, τ) = (h + p)P_D(s + 1, τ). Since for all s ∈ ℕ₀ and τ ∈ Ψ, P_D(s + 1, τ) ≥ 0, we obtain Δ²_s Γ(s, τ) ≥ 0. Hence, for all τ ∈ Ψ, Γ(s, τ) is convex in s. ▪
B.3 Attempt to prove Conjecture 10

The expression for Γ̃(s, τ) = Γ(s, τ) − λCC(τ) follows from the cost function in Appendix B.1. Then we need to derive the expression for d/dτ (Γ̃(τ)/(L − τ)) and show that it is nonnegative. However, we could not prove this analytically.
B.4 Numerical results for the compound demand case

We confirm the correctness of Conjectures 9 and 10 through numerical analysis. Without loss of generality, we assume that the demand size X has a Poisson distribution shifted by one unit, that is, X = Y + 1, where Y ∼ Poisson(λ_Y). Then, P_D(x, τ) can be written as follows:

P_D(0, τ) = P_N(0, τ),
P_D(x, τ) = ∑_{n=1}^{x} P(X₁ + X₂ + ⋯ + Xₙ = x)P_N(n, τ) = ∑_{n=1}^{x} P(Y₁ + Y₂ + ⋯ + Yₙ = x − n)P_N(n, τ) for x ≥ 1.

The expressions for ∑_{x=0}^{s} xP_D(x, τ) and F_D(s, τ) rely on derivations in which, since n ≤ x, the order of the sums can be changed. For the parameter setting shown in Figure A1, we can see that Γ(s, τ) is convex in τ. We also observe that, consistent with our previous analysis for pure Poisson demand, the optimal base-stock level is nonincreasing in τ.
For the same parameter setting, in Figure A2 we observe that d/dτ (Γ̃(τ)/(L − τ)) ≥ 0. We ran numerous similar experiments with different parameter settings and observed the same results. Therefore, we conjecture the optimality of the bang-bang commitment lead time policy.
|
2019-04-26T14:16:17.907Z
|
2019-03-28T00:00:00.000
|
{
"year": 2019,
"sha1": "0a2aa76f958d53808e394694694648d02ae3bec4",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nav.21835",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "15a42ebf20d80e54c01270ea7188fcc35f479aee",
"s2fieldsofstudy": [
"Mathematics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
9721968
|
pes2o/s2orc
|
v3-fos-license
|
Hirsutism in Saudi females of reproductive age: a hospital-based study
BACKGROUND Hirsutism among women of fertile age is commonly seen in clinical practice, but the pattern of the disease in Saudi Arabs has not been studied. The aim of the study was to determine the clinical, biochemical and etiologic features of hirsutism in Saudi females. METHODS 101 Saudi Arab women presenting with hirsutism at King Khalid University Hospital, Riyadh, Saudi Arabia, from 1 January 2000 to 31 December 2005 were prospectively assessed using the recently approved diagnostic guidelines for hyperandrogenic women with hirsutism. RESULTS Polycystic ovary syndrome (PCOS) was the cause of hirsutism in 83 patients (82%) followed by idiopathic hirsutism (IH) in 11 patients (11%). Other causes of hirsutism included late-onset congenital adrenal hyperplasia in 4 patients (4%), microprolactinoma in 2 (2%) and Cushing's syndrome in 1 (1%) patient. Age at presentation of PCOS was 24.5±6.6 years (mean±SD) and 51% of the subjects were obese. Furthermore, 74 (89%) of the patients with PCOS had an oligo/anovulatory cycle while the remaining 9 patients (11%) maintained a normal regular menstrual cycle. Luteinizing hormone and total testosterone were significantly higher in patients with PCOS than in those with IH (P<.05). CONCLUSIONS The present data show PCOS to be the commonest cause of hirsutism in our clinical practice and PCOS is prominent amongst young obese females. However, further studies on a larger scale are needed to verify our findings.
Hirsutism is a common endocrine disorder among women of fertile age. 1,2 Of the various etiologies of hirsutism, polycystic ovary syndrome (PCOS) is reported to be the commonest cause worldwide. 3,4 In Saudi Arabia, the prevalence of PCOS is still unknown but it is the authors' belief that it might be similar to other reports. 4 The lack of international consensus on the definition of PCOS prior to the year 2003 accounted for the widely variable prevalence in the world. For instance, using ultrasound according to European criteria, 91% of cases of hirsutism were due to PCOS in the United Arab Emirates, 5 yet others reported a rate as low as 59% in the same region. 6 As far as we know, no study has been conducted using the recently approved consensus opinion on diagnostic criteria for diagnosis of PCOS 7 in Saudi patients. Furthermore, there is no study on the pattern of hirsutism in the local population despite it being commonly seen in clinical practice, with its associated infertility and metabolic syndrome. 8,9 The aim of the study therefore was to determine the clinical, biochemical and etiologic features of hirsutism in Saudi females.

METHODS

We studied a consecutive series of 148 Arab Saudi women presenting with hirsutism to the endocrinology clinic at King Khalid University Hospital, Riyadh, Saudi Arabia, from 1 January 2000 to 31 December 2005. Patients were examined for the severity of hirsutism according to the modified Ferriman and Gallwey scale. 10 Women who scored 8 or more were included in the study. Using this scale, we assessed the growth of terminal hairs on the upper lip, sideburn area, chest, upper abdomen, and lower abdomen. Acne was also assessed. The presence of comedones on the face, neck, upper chest, upper back, or upper arms was classed as acne. Patients who were taking drugs that might interfere with the results (oral contraceptives, prolactin-lowering drugs, and any drug given for hirsutism) and those who failed to report at any scheduled follow-up visit were excluded from the study. Other exclusion criteria included pregnancy, breastfeeding, known liver or kidney disease, alanine aminotransferase >60 IU/L, creatinine >130 µmol/L, and known alcohol abuse. Only 101 subjects were evaluable at the end of the study. The clinical evaluation consisted of a detailed history, including the rapidity of onset of symptoms, the presence of symptoms of virilization or other endocrinopathies or metabolic disorders, menstrual and reproductive history, and drug and family history. Subjects were assessed for the presence of other signs of hyperandrogenism and virilization and for stigmata of endocrinopathies, and abdominal masses. Height and weight were measured to calculate body mass index (BMI) in kg/m 2 .
PCOS was diagnosed based on the presence of 2 out of 3 of the following: 1) oligo- or anovulation and exclusion of other etiologies (congenital adrenal hyperplasia, androgen secreting tumors, Cushing's syndrome), 2) clinical and/or biochemical signs of hyperandrogenism, and 3) polycystic ovaries as recommended by the 2003 consensus diagnostic criteria. 7 Ultrasonographic examinations were performed on average 4 weeks after the clinical evaluation. Patients were examined in a supine position with a 6-MHz probe, and a polycystic ovary was considered based on the presence of 12 or more follicles in each ovary measuring 2-9 mm in diameter, and/or increased ovarian volume (>10 mL) according to the guideline. 7 A regular menstrual cycle was defined as one between 21 and 35 days with no more than a 4-day variation. Oligomenorrhea was defined as menstrual cycles >35 days in length and amenorrhea was defined as an absence of a menstrual period in more than 6 months. Without the presence of menstrual disturbances and any other signs or symptoms of hyperandrogenism, except hirsutism, the diagnosis of idiopathic hirsutism (IH) was made. Hyperandrogenemia plus hirsutism (HH) was defined by the presence of elevated androgen levels and hirsutism but normal ovulation. 11 Patients with evidence of ovulatory dysfunction underwent measurements of serum prolactin and thyroid-stimulating hormone levels to exclude a prolactin secreting adenoma and thyroid dysfunction, respectively. Screening for Cushing's syndrome was performed by either an overnight dexamethasone suppression test (i.e. measurement of a cortisol level the morning after the administration of 1 mg dexamethasone orally at bed time) or by measuring 24 hr urine free cortisol content. All patients had a 2-day laboratory evaluation on days 20-22 of the menstrual cycle (when applicable) in order to characterize ovulatory dysfunction and hormonal profile in the luteal phase. On day 1 a random blood sample was obtained for total testosterone, androstenedione (A), dehydroepiandrosterone sulphate (DHEAS), prolactin (PRL), progesterone (P), luteinizing hormone (LH), follicle stimulating hormone (FSH), thyroid stimulating hormone (TSH), T3, and T4. 21-hydroxylase deficiency was excluded by a basal follicular phase 17-hydroxyprogesterone (17-OHPG) level <6.0 ng/mL. Subjects with a basal 17-OHPG level equal to or higher than 6.0 ng/mL underwent an acute ACTH stimulation test, in which 250 ug of cortrosyn (alpha 1-24 corticotrophin) was administered and 17-OHPG levels were determined immediately before and again after 30 and 60 minutes. ACTH-stimulated 17-OHPG levels >10 ng/mL were considered the criteria for 21-hydroxylase deficient late-onset congenital adrenal hyperplasia (LOCAH) while values >30 ng/mL were the criteria for classical congenital adrenal hyperplasia. 12 All statistical procedures were performed using SPSS. Values were reported as mean and standard deviation and, when applicable, standard error of the mean. Differences between groups were evaluated with two-tailed t-tests for independent samples or the Mann-Whitney z-test, where normality could not be assumed. Pearson correlation coefficients were calculated for correlation analyses. Two-tailed P-values <0.05 were considered significant. Simple linear regression analyses were performed for the investigation of linear trends.
RESULTS
PCOS was the cause of hirsutism in 83 patients (82%), followed by idiopathic hirsutism (IH) with 11 patients (11%) (Figure 1). Other causes of hirsutism in our study included late-onset congenital adrenal hyperplasia in 4 patients (4%). Of the miscellaneous causes, 2 patients (2%) had microprolactinoma and 1 patient (1%) had Cushing's syndrome. Of the two major groups identified, PCOS and IH, there were no significant differences between the groups in terms of age at presentation, distribution of hirsutism, body weight or BMI (Table 1). Overall, 51% of PCOS and 45% of IH patients were obese. Hirsutism scores were 20.1±7.8 in PCOS and 16.6±6.2 in patients with IH; the differences were not significant (Table 1). Seventy-four patients (89%) with PCOS had oligo/anovulatory cycles, while the remaining 9 patients (11%) maintained normal regular menstrual cycles.
The distribution of BMI among patients with PCOS is shown in Table 2. Biochemical characteristics of the patients based on etiology are shown in Table 3.
There were no significant differences between groups for metabolic tests such as fasting blood glucose, 2-hour postprandial blood glucose and cortisol. As would be expected, LH was higher in the PCOS group compared to the IH group, but the difference was not statistically significant. Furthermore, there was a significant difference in the LH-FSH gradient (serum LH level subtracted from serum FSH level), whereby the values were higher for the PCOS patients compared with the other two groups (P<.05). In contrast, IH subjects had significantly higher FSH levels when compared to PCOS (P<.05). Total testosterone was found to be significantly higher in PCOS patients compared to IH patients (P<.05). Serum levels of progesterone, estradiol, prolactin, androstenedione, and DHEAS did not differ between the two main etiologic groups. According to our diagnostic guideline for LOCAH, 12 four patients met the criteria for 21-hydroxylase deficiency.
DISCUSSION
We have reported our experience evaluating 101 Saudi Arab patients presenting to our endocrine clinic with hirsutism. Of the patients included in the study, 82% had hirsutism caused by PCOS. This is similar to large studies reported by others. 1,11,15 However, a close look at our data revealed that 75% of the subjects studied were less than 30 years of age at the time of their initial visit and 51% were obese (BMI ≥30 kg/m 2 ). Thus, the prevalence of obesity among our relatively young patients was higher than in the general population. For example, the Saudi National Survey indicated that the prevalence of obesity in women was 44%, 16 a lower prevalence than in our PCOS subjects. In population studies, 10% to 38% of women with PCOS were obese. 17-20 Thus, the high prevalence of obesity in our PCOS patients may reflect an overall pattern of obesity in our general female population. 16 Insulin resistance with compensatory hyperinsulinemia has been associated with PCOS and is thought to contribute to other features of the metabolic syndrome. 9 Hyperandrogenism has been found to manifest clinically by frontal balding, acne, hirsutism, and clitoromegaly. In our series, we found that over half of the patients with both PCOS and IH had acne, an observation similar to others. 6 Nevertheless, there were no differences in metabolic data amongst the groups, which was similar to the report by Taponen et al 21 but different from others. 22 This disparity might be due to a higher prevalence of metabolic syndrome in our study population. 23 The prominence of frontal balding in patients with IH as compared to PCOS is not explained. However, it might be due to the small sample size of the IH group. Further studies in a larger population are needed to characterize this finding.
In our series, 9% of patients diagnosed with PCOS had hirsutism in addition to biochemical hyperandrogenism, yet they maintained a normal menstrual cycle. Azziz et al reported and introduced a new term, hyperandrogenism plus hirsutism (HH), for the category of hirsute women with normal regular menses who are also hyperandrogenic. 11 The controversy over the disparity between the clinical manifestations of hyperandrogenism, including menstrual irregularity, and the metabolic data suggestive of hyperandrogenism was settled in the recently reviewed international consensus on diagnostic criteria for PCOS. 7,15 Based on the new guideline, we included HH in PCOS. In retrospect, Gatee et al reported a higher prevalence of HH, 26%, in the same racial group in the neighboring United Arab Emirates. 5 Previously, most hirsute women were labeled as having 'idiopathic hirsutism', but up to 60% of these women do have some disturbance in androgen metabolism. 11,24 Furthermore, more than 90% of patients with IH were proved to have PCOS. 25,26 Lack of uniformly agreed guidelines for the diagnosis of PCOS might account for the higher figure of IH in older series. 27,28 In an attempt to unify the diagnosis of PCOS, a joint European and American group came up with a consensus opinion on the diagnostic criteria we used in our study. 7 According to the new guideline, PCOS is diagnosed if two of the following three criteria are present, after the exclusion of other etiologies: 1) oligo- and/or anovulation, 2) clinical and/or biochemical signs of hyperandrogenism, and 3) polycystic ovaries on ultrasonography. 15 Other etiologies of hirsutism were not common in our study group, which is reflected in the observations of others in the region. 5,6 Of the less common causes, LOCAH was found in 4% of patients, higher than the 1.6% reported in Whites but lower than the 9.5% reported in patients of Mediterranean descent. 29 The relatively higher prevalence of LOCAH in this study may reflect the pattern of referrals received by our unit, which is a referral center for all of Saudi Arabia, and may differ from the prevalence in the community. However, the overall pattern of PCOS being the commonest cause is the same as seen all over the world. 3,5,11 Two patients had hyperprolactinemia, which may be part of the syndrome of PCOS. 30 However, these patients lacked the full-blown picture of PCOS, which includes biochemical hyperandrogenism, and one patient did not have menstrual disturbances. Furthermore, the typical polycystic ovaries were lacking. Both were proven on pituitary MRI to have an adenoma with a normal thyroid function test, and both responded well to bromocriptine. Thus, our data show that PCOS is the commonest cause of hirsutism in our clinical practice and that it is prominent among young obese females, which reflects the worldwide pattern. Our findings call for an early intervention strategy to prevent or reduce metabolic syndrome in this subgroup of the population. Further prospective studies on a larger scale are needed, however, to verify our findings.
High Fat High Calories Diet (HFD) Increase Gut Susceptibility to Carcinogens by Altering the Gut Microbial Community
Objective: To investigate the risk of colorectal cancer and its relationship with the colonic flora and microenvironment under a high-fat, high-calorie diet. Methods: Wistar rats were given a normal diet or a high-fat diet, with or without dimethylhydrazine (DMH) to induce colorectal cancer, and differences in tumor formation and the relationships among the microbial community, inflammatory factors and metabolism were observed. Results: No tumors were found in the normal diet group (G1) or the high-fat diet group (G3). Four nodules were found in four rats in the normal diet + DMH group (G2), and 8 cancerous nodules formed in 7 rats (70%) in the high-fat diet + DMH group (G4). In the high-fat diet group, cholesterol and TNF-α increased while IL-1, IL-6 and LEP decreased; these differences were statistically significant. In the cancer-induction groups, only the difference in cholesterol was statistically significant. Between the normal diet group (G1) and the high-fat diet group (G3), gut bacterial abundance did not differ significantly, but the gut flora structure changed significantly. At the phylum level, the abundance of Candidatus Saccharibacteria in the intestinal tract of rats in the high-fat diet group was reduced (P = 0.015), while the abundance of Verrucomicrobia increased (P = 0.035). At the genus level, Ruminococcus, Candida, Saccharibacteria genera incertae sedis, Enterobacter, Clostridium IV, Enterococcus, Enterorhabdus, Acetivibrio, Adlercreutzia, Lactococcus, etc., decreased significantly, while Akkermansia, Warthococcus, Staphylococcus, Butyricimonas, Clostridium XVIII, etc., increased significantly. Conclusion: This study found that a high-fat, high-calorie diet can increase the susceptibility of the intestine to carcinogenic factors. The reason may be that the high-fat diet places the body in an inflammatory state and unbalances the microbial community; in particular, the reduction of genera such as Ruminococcus, Candida, Saccharibacteria, Enterobacter, Clostridium IV, Enterococcus, Enterorhabdus and Acetivibrio appears to be an important link. Exploring ways to restore these flora is an important factor in improving the resistance of the intestinal tract to cancer-inducing agents.
Introduction
A high-fat, high-calorie diet is an important factor in the development of colorectal cancer, but the exact mechanism is still controversial. It is generally believed that such a diet causes hyperlipidemia and obesity, puts the body in a chronic inflammatory state, and may promote intestinal inflammation and thus stimulate the occurrence of tumors [1][2]. Recent studies have shown that high-fat, high-calorie diets can change the abundance and composition of the microbial community, which may be an important cause of colorectal cancer [3][4]. In our previous study, 80% of 159 colorectal cancer patients had hyperlipidemia [4]. We also used a tea extract to treat patients with hyperlipidemia and found that serum lipid levels, insulin resistance and inflammatory factors such as TNF-α and IL-1 decreased [5][6].
However, the exact pathway linking a high-fat, high-calorie diet, the body's inflammatory state, changes in the intestinal flora, and colorectal cancer remains unknown. Is colorectal cancer caused by changes in the intestinal flora itself? Or do changes in the intestinal flora alter the body's resistance to carcinogens, thereby making colorectal cancer more likely? There is no conclusion yet. This study investigated the relationship between changes in the intestinal flora of rats under a high-fat, high-calorie diet and the tumorigenicity of carcinogens, and further explored the relationship between flora changes under a high-fat diet and the carcinogenicity of carcinogens, to provide new ideas for the prevention and treatment of colorectal cancer.
Feeding and grouping of experimental animals
Forty SPF Wistar rats, male and female, 5-6 weeks old, weighing approximately 200 g (animal certificate number: NO.11400700154060; experimental animal production license number: SCXK (Su) 2011-0003) were used. Animal experiments were approved by the Animal Ethics Committee of the School of Medicine, Southeast University. Rats were divided into four groups of 10: normal diet (G1), normal diet + DMH (G2), high-fat diet (G3), and high-fat diet + DMH (G4).

The basal diet was cobalt-60 radiation-sterilized pellets for rats and mice (Nanjing Jiangning Qinglongshan Feed Company); the high-fat feed formulation was 20 g lard, 5 g cholesterol, and 10 g Tween 80, with water added to 50 ml. The high-fat diet groups received the high-fat feed daily by gavage; the normal diet groups were gavaged daily with an equal volume of warm saline.
Animal treatment
At the 4th week of feeding, the rats in the DMH groups were injected subcutaneously with dimethylhydrazine (DMH) at 30 mg/kg twice a week, and dosing was continued for 8 weeks. The other groups were given weekly subcutaneous injections of the same amount of normal saline. Rats were weighed once a week to adjust the DMH dose. At the 20th week of feeding, the rats were sacrificed.
Detection of lipid metabolism and inflammatory factors
Blood was collected via the retro-orbital (eyeball) route, plasma was separated by centrifugation, and blood glucose and blood lipid metabolism indicators (total cholesterol, triglycerides, HDL, LDL) were measured.
Collection of fecal and tissue specimens
Before the animals were put to death, 0.1-0.3 g of fresh feces was obtained by massaging the abdomen, immediately placed into liquid nitrogen, and then stored in a -80 °C freezer.
The rats were dissected and the large intestine was freed completely. The mucosa of the ascending colon, transverse colon, descending colon, and rectum was separated and stored in a -80 °C freezer. The number and size of colonic tumors were counted and measured. Tumors or suspected tumors were completely removed and divided into 2 equal parts: one part was stored in 10% formaldehyde and the other was snap-frozen and transferred to a -80 °C freezer for storage. The intestine was cut open longitudinally, flattened between two layers of filter paper, and fixed in 10% neutral buffered formalin solution. Tumor size was measured and sections were HE stained for pathological analysis.
Analysis of the colonic flora [8-9]
Frozen fresh rat feces samples were quickly weighed, and 0.1 g samples were taken and centrifuged to extract DNA after rewarming. The quality of the DNA samples was quantified with an MD microplate reader (MD Company, United States). After extraction, the DNA samples were amplified by 16S PCR (primers: 16s-341F: CCTACGGGNGGCWGCAG; 16s-341F: CCTACGGGNGGCWGCAG) and sequenced according to the Illumina MiSeq high-throughput sequencer usage guide. After a 3-5 day run, the raw data were converted to Fastq format, with more than 80% of bases reaching a quality of Q30. In the microbial diversity analysis, alpha diversity was measured by species richness (Richness, Chao, ACE) and diversity indices (Shannon, Simpson); beta diversity analysis compared the similarity of species composition among samples, including cluster analysis and ordination analysis. Species abundance was tabulated for each sample at the kingdom, phylum, class, order, family, genus, and species levels of the taxonomic hierarchy. Kruskal-Wallis/Wilcoxon rank-sum tests, similarity analysis (ANOSIM), and PERMANOVA permutation tests were used to analyze abundance and differences between taxa.
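As a rough illustration of the alpha-diversity indices and rank-sum comparisons named above, the following Python sketch computes Shannon and Simpson diversity from per-sample OTU count vectors and compares the groups with a Wilcoxon rank-sum test. The count tables here are hypothetical placeholders, not the study's sequencing output.

```python
import numpy as np
from scipy.stats import ranksums

def shannon(counts):
    """Shannon diversity index H = -sum(p * ln p) over non-zero taxa."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    """Gini-Simpson diversity index 1 - sum(p^2)."""
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

# Hypothetical OTU count tables: rows = samples, columns = taxa
rng = np.random.default_rng(0)
g1 = rng.poisson(50, size=(10, 200))  # normal diet samples
g3 = rng.poisson(40, size=(10, 200))  # high-fat diet samples

h1 = np.array([shannon(s) for s in g1])
h3 = np.array([shannon(s) for s in g3])
s1 = np.array([simpson(s) for s in g1])
s3 = np.array([simpson(s) for s in g3])

# Wilcoxon rank-sum test comparing diversity (or any taxon's abundance)
stat, p = ranksums(h1, h3)
print(f"Shannon G1 = {h1.mean():.2f}, G3 = {h3.mean():.2f}, rank-sum p = {p:.3f}")
print(f"Simpson G1 = {s1.mean():.3f}, G3 = {s3.mean():.3f}")
```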
Statistical methods
SPSS 18.0 statistical software was used to analyze the data. Measurement data are expressed as mean ± SD.

Comparisons among multiple groups were performed by one-way analysis of variance, with between-group comparisons by the LSD-t test. Repeated measurements were analyzed by repeated-measures analysis of variance. P < 0.05 was considered statistically significant.
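For readers without SPSS, the sketch below shows an equivalent one-way ANOVA with LSD-style pairwise t-tests in Python. The cholesterol values are simulated placeholders; only the four-group, 10-rats-per-group design follows the study.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical serum cholesterol values (mmol/L), n = 10 rats per group
groups = {name: rng.normal(mu, 0.4, 10)
          for name, mu in [("G1", 1.8), ("G2", 1.9), ("G3", 2.6), ("G4", 2.7)]}

# One-way (single-factor) analysis of variance across the four groups
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA F = {F:.2f}, p = {p:.4f}")

# LSD-t pairwise comparisons share the pooled within-group mean square error
vals = list(groups.values())
df_err = sum(len(v) - 1 for v in vals)
mse = sum(((v - v.mean()) ** 2).sum() for v in vals) / df_err
for (a, va), (b, vb) in combinations(groups.items(), 2):
    t = (va.mean() - vb.mean()) / np.sqrt(mse * (1 / len(va) + 1 / len(vb)))
    p_pair = 2 * stats.t.sf(abs(t), df_err)
    print(f"{a} vs {b}: p = {p_pair:.4f}")
```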
Tumor formation characteristics of rats in each group
Rats gradually gained weight, with no significant difference among the 4 groups. There was no tumor formation in the normal diet group (G1) or the high-fat diet group (G3). Four rats (40%) in the normal diet + DMH group (G2) formed 4 nodules, with a tumor weight of 0.06 ± 0.05 g. Seven rats (70%) in the high-fat diet + DMH group (G4) formed 8 cancerous nodules, with a tumor weight of 0.05 ± 0.03 g. Compared with the normal-diet cancer-induction group, the high-fat-diet cancer-induction group showed increased intestinal tumor formation (70% vs. 40%) (Figure 1).
Effects of high-fat diet on metabolism and inflammatory factors
Animals in the high-fat diet group showed increased cholesterol and TNF-α, while IL-1, IL-6, and LEP decreased; these differences were statistically significant. In the DMH-induction groups, only the difference in cholesterol was statistically significant (Table 1).
Effect of high-fat diet on gut flora
Between the normal diet group (G1) and the high-fat diet group (G3), the intestinal bacterial abundance of rats was not significantly different, but the intestinal flora structure differed significantly (ANOSIM R = 0.2884, P = 0.011; PERMANOVA F = 3.750, P = 0.001). At the phylum level, the content of Candidatus Saccharibacteria in the high-fat diet group (G3) was significantly lower than in the normal diet group (G1) (P = 0.015), and the content of Verrucomicrobia was significantly higher than in the G1 group (P = 0.035) (Figure 2).
Effect of carcinogen DMH on the gut flora of animals in normal diet group and high-fat diet group
Compared with the G4 group, the G2 group showed no significant difference in gut bacterial abundance, but the intestinal flora structure changed significantly (ANOSIM R = 0.292, P = 0.001; PERMANOVA F = 3.697, P = 0.001). In the comparison of bacterial categories, the contents of Actinobacteria and Candidatus Saccharibacteria in the G4 group were significantly lower than those in the G2 group (Figure 4).
Discussion
High-fat and high-calorie diet plays an important role in the development of colorectal cancer, but there is no clear evidence on how high-fat and high-calorie diet can cause colorectal cancer [10][11][12]. High-fat and high-calorie diets cause hyperlipidemia in the body, which in turn causes the body to be in a chronic inflammatory state. But can high-fat and high-calorie diets alone cause the occurrence of colorectal tumors?
In the present study, compared with the normal diet group, the high-fat, high-calorie diet group developed hyperlipidemia and increased inflammatory factors such as TNF-α. However, after 20 weeks of observation, these animals did not develop colorectal cancer, suggesting that a high-fat, high-calorie diet alone does not cause colorectal tumors within a short period. In the literature, there have been few reports of a high-fat diet alone inducing colorectal tumors [13][14]. Current research suggests that a high-fat, high-calorie diet can place the body in an inflammatory state, which may make it susceptible to cancer [15][16][17]. Tuominen et al. [18] studied AOM-induced colorectal cancer in animals fed high-fat, high-calorie diets: if the high-fat, high-calorie diet was switched to a normal diet at the beginning of the cancer-induction experiment, the risk of colorectal cancer was not significantly reduced. The authors therefore believed that the contribution of a high-fat, high-calorie diet to colorectal tumors may be an early-stage event. Xiu et al. [3] also considered that a high-fat, high-calorie diet can promote tumor growth in tumor-bearing animals. In general, hyperlipidemia caused by a high-fat, high-calorie diet may put the body in a chronic inflammatory state, increasing its sensitivity to carcinogenic factors and making it more prone to tumors [15][16][17]. In our study, the incidence of colorectal cancer in the normal diet group given DMH was 40% (4/10), while that in the high-fat diet group given DMH was 70% (7/10), a markedly higher tumor formation rate. This implies that a high-fat, high-calorie diet places the body in a chronic inflammatory state and may increase sensitivity to carcinogens.
Modern research has shown that a chronic inflammatory state leading to imbalance of the colonic flora may be an important cause of susceptibility to colorectal cancer [17][18]. In our study, although the bacterial abundance in the high-fat diet group did not change significantly, the types and distribution of bacteria changed markedly. In the gut microbiome comparison, the content of Candidatus Saccharibacteria in the intestine of high-fat diet rats was significantly reduced compared with the normal diet group, while the content of Verrucomicrobia was significantly increased. Verrucomicrobia are bacteria enriched in the intestinal mucosa and have been reported to decrease in high-fat diets, obesity and other diseases. Our study is broadly similar to reports in the literature [19].
At the genus level, Ruminococcus, Candida, Saccharibacteria, Enterobacter, Clostridium IV, Enterococcus, Enterorhabdus, Acetivibrio, and others were reduced in animals of the high-fat, high-calorie diet group. Most of these are reported probiotic or beneficial genera. Their reduction may cause intestinal microenvironment disorders and weaken the intestinal mucosal barrier against external invasion, leading to increased susceptibility to carcinogenic factors [20][21]. After DMH was administered, the proportions of Ruminococcus, Candida, Clostridium IV, and Enterobacter in the high-fat diet group decreased further, further indicating that the decreased proportion of these bacteria is an important reason for the increased susceptibility of the intestine to carcinogens.
In the high-fat, high-calorie diet group, genera such as Akkermansia, Warthococcus, and Staphylococcus increased. The increased proportion of these genera may be related to the decreased proportion of other genera. In our results, the content of Akkermansia increased under the high-fat, high-calorie diet, whereas the literature reports that the proportion of Akkermansia may decrease in high-fat, high-calorie diets and metabolic diseases [22][23]. The reason for this discrepancy is not clear; it may be that the marked decrease in the proportion of other bacteria led to a relative increase in Akkermansia in this study. Further research will be conducted to confirm our results. After DMH was used to induce cancer, bacteria such as Clostridium XI, Peptostreptococcus, Akkermansia, and Clostridium XVIII increased, suggesting that these bacteria play a harmful role in the intestine and may contribute to its susceptibility to carcinogens.
Due to the complexity of the intestinal flora, a series of bacteria rise and fall after a high-fat, high-calorie diet, but how these bacteria act and what roles they play are currently unclear. From the perspective of clinically regulating the intestinal microecology, selective fecal transplantation can be used to supplement depleted bacteria, but there is currently no good way to remove enriched bacteria [24]. Therefore, it is important to identify the depleted beneficial bacteria and find ways to restore them.
In summary, this study shows that a high-fat, high-calorie diet can increase the susceptibility of the intestine to carcinogenic factors, possibly because the diet leads to an inflammatory state and intestinal flora imbalance; in particular, reductions in genera such as Ruminococcus, Candida, Saccharibacteria, Enterobacter, Clostridium IV, Enterococcus, Enterorhabdus and Acetivibrio appear to be important links. Exploring ways to restore these flora is an important factor in improving the resistance of the intestinal tract to cancer-inducing agents.
Semi-automated software improves interrater reliability and reduces processing time of magnetic resonance imaging-based exocrine pancreatic assessments in pediatric patients
Objectives: Magnetic resonance (MR) imaging with secretin stimulation (MR-PFTs) is a non-invasive test for pancreatic exocrine function based on assessing the volume of secreted bowel fluid in vivo. Adoption of this methodology in clinical care and research is largely limited to qualitative assessment of secretion, as current methods for quantifying the secretory response require manual thresholding and segmentation of MR images, which can be time-consuming and prone to interrater variability. We describe novel software (PFTquant) that preprocesses and thresholds MR images, performs heuristic detection of non-bowel-fluid objects, and provides the user with intuitive semi-automated tools to segment and quantify bowel fluid in a fast and robust manner. We evaluate the performance of this software on a retrospective set of clinical MRIs. Methods: Twenty MRIs performed in children (<18 years) were processed independently by two observers using a manual technique and using PFTquant. Interrater agreement in measured secreted fluid volume was compared using intraclass correlation coefficients, Bland-Altman difference analysis, and Dice similarity coefficients. Results: Interrater reliability of measured bowel fluid secretion using PFTquant was 0.90 (95% C.I. 0.76-0.96) with -4.5 mL mean difference (95% limits of agreement -39.4 to 30.4 mL), compared to 0.69 (95% C.I. 0.36-0.86) with -0.9 mL mean difference (95% limits of agreement -77.3 to 75.5 mL) for manual processing. Dice similarity coefficients were higher using PFTquant (0.88 +/- 0.06) than with manual processing (0.85 +/- 0.10), but not significantly so (p = 0.11). Time to process was significantly (p < 0.001) faster using PFTquant (412 +/- 177 s) compared to manual processing (645 +/- 305 s). Conclusion: Novel software provides fast, reliable quantification of secreted fluid volume in children undergoing MR-PFTs. Use of the novel software could facilitate wider adoption of quantitative MR-PFTs in clinical care and research.
Introduction
Exocrine pancreatic function, wherein bicarbonate and digestive enzymes are secreted from the pancreas in response to food within the duodenum, is required for digestion and absorption of food particles and certain micronutrients, including fat-soluble vitamins. Exocrine pancreatic insufficiency (the inability of the pancreas to make enough digestive enzymes) is associated with weight loss and steatorrhea in children and adults, and with poor growth and development in children [1]. Reference-standard testing for exocrine function requires either collection of stool for quantification of fecal elastase (indirect test) or endoscopy with enteric fluid collection (endoscopic pancreatic function testing [ePFT]) to measure bicarbonate concentration and/or enzyme function (direct test) [2][3][4].
MR-pancreatic function testing (MR-PFT) [5], wherein T2-weighted images are acquired subsequent to administration of a secretagogue and the volume of fluid secreted is subjectively graded or quantitatively measured, has been described as a non-invasive method of exocrine pancreatic function testing. The most common method of assessment of pancreatic function based on MR-PFT relies on the Matos criteria, with qualitative assessment of the degree of duodenal filling [5]. Matos grade has been linked to pancreatic exocrine function in adults [6]. Although assessment of pancreatic function via the Matos criteria is clinically utilized in day-to-day practice, the assessment is subjective and may not be appropriate for pediatric patients [7]. Instead, quantitation of secreted fluid volume in response to administration of a secretagogue may represent a more accurate and generalizable test for exocrine pancreatic function in both children and adults [6,[8][9][10]. Threshold values for normal secreted fluid volume have been defined for both adults and children [8,11], but this quantitative approach has yet to enjoy widespread adoption. This, in part, reflects the manual effort and time required to segment images pre- and post-secretagogue, and likely also reflects a lack of validation of diagnostic thresholds for exocrine insufficiency, particularly for children. Validation of previously defined diagnostic thresholds necessitates tools to rapidly and reliably quantify secretory function measured by MR-PFTs.
Current methods of quantifying the secretory response in MR-PFTs rely on image thresholding or other means of segmenting fluid pixels within MRI images [8,10], a largely manual, time-consuming task. Manual image segmentation is also associated with interrater variability [10]. This variability encompasses differences in image windowing and leveling, in the threshold applied, and in region-of-interest placement, among other factors. With two experienced radiologists segmenting images after careful co-training, the average difference in measured fluid volume was approximately 2 mL, but with 95% limits of agreement of +/- 40 mL [10]. In real-world clinical practice, clinically employed image segmentation tasks are often performed by clinical image analysts in so-called "3D labs." Variability in measured fluid volume in this environment is expected to be greater.
To increase the utility of MR-PFTs and to enable their broader application, we set out to develop software to facilitate fluid volume quantification by reducing the time required to perform this task and reducing interrater variability in measured fluid volume. Our software solution aimed to accomplish this by (1) automating initial thresholding of the images, (2) automatically detecting and removing hyperintense voxels that are not bowel fluid, and (3) providing semi-automated interactive tools for refinement that encourage consistent, data-driven contours. Herein, we detail the processing steps for (A) current standard methods of quantifying secretory response and (B) our proposed software solution. We also compare the performance of the two methods using existing clinical MRI examinations obtained with MR-PFT.
Methods
Institutional review board approval was received for this retrospective study with a waiver of documentation of informed consent.
MR exams
We searched our clinical picture archiving and communication system (PACS) (Merge PACS; Merative; Ann Arbor, MI) for clinically obtained MRI examinations for use in this work. Inclusion criteria were: (1) examination performed between January 15 and June 15, 2023; (2) examination performed on a Philips MRI machine; (3) examination performed with secretin administration; (4) patient age <18 years at the time of imaging. This query returned a set of 38 examinations (11 at 3T, 27 at 1.5T), from which 10 examinations at each MRI field strength were randomly selected for the comparative analyses herein. MRI examinations were routed from the clinical PACS server to a secure network storage location accessible to the study team.
All MR examinations had been acquired on Philips Ingenia scanners. The acquired MR-PFT series used a T2-weighted single-shot spin-echo sequence with respiratory triggering. Acquisition parameters were precisely matched for the pre- and 15-minute post-secretin images and were as follows for the analyzed examinations: TE = 140 ms, flip angle = 90°, slice thickness = 4 mm. Field of view ranged from 220 × 220 mm to 350 × 350 mm depending on patient size, acquisition matrices ranged from 256 × 256 to 400 × 400, and in-plane resolution ranged from 0.78 × 0.78 mm to 0.88 × 0.88 mm.
Standard image processing
For each pair of MR-PFT series, bowel fluid volume pre- and post-secretin administration was quantified via manual segmentation using ImageJ software (https://imagej.net/software/imagej) by a board-certified pediatric radiologist (ATT, 11 years' experience) and a PhD advanced image analyst (JAD, 11 years' experience). The raters will be referred to as R1standard and R2standard from here on. Each rater recorded their time to complete each exam using a stopwatch app on their phone. Segmentation was performed in accordance with the following instructions: (1) Import pre- and post-secretin DICOM series. (2) Manually adjust window level for a single image and apply to all images. (3) Combine image stacks. (4) Reduce image intensity to 8-bit depth. (5) Duplicate the stack. (6) Threshold the image, with manual selection guided by the subjective assessment that all fluid voxels are thresholded. (7) Remove areas that are not bowel, using the duplicated stack for anatomical reference. (8) Save the final segmented combined stack as a (lossless) multi-layer tif file. After all examinations had been processed, the segmented image stacks were loaded into MATLAB (MathWorks; Natick, MA) as binary arrays for quantification of fluid volumes. Pre- and post-secretin volumes were calculated as the sum of pixels in the left and right halves of the image stacks, respectively, multiplied by the product of the slice thickness and pixel spacing values obtained from the DICOM metadata tags.
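The volume arithmetic in the final step is simple; a minimal NumPy sketch is shown below, assuming a 3D boolean mask and DICOM-style spacing values. The function and variable names are hypothetical, not the study's actual MATLAB code.

```python
import numpy as np

def fluid_volume_ml(mask, pixel_spacing_mm, slice_thickness_mm):
    """Segmented fluid volume: voxel count x voxel volume, converted to mL.

    mask: 3D boolean array of segmented fluid voxels
    pixel_spacing_mm: (row, column) in-plane spacing from DICOM PixelSpacing
    slice_thickness_mm: DICOM SliceThickness
    """
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Example: a 30-slice 256x256 mask at 0.78 x 0.78 mm in-plane, 4 mm slices
mask = np.zeros((30, 256, 256), dtype=bool)
mask[10:15, 100:140, 100:140] = True
print(f"{fluid_volume_ml(mask, (0.78, 0.78), 4.0):.1f} mL")
```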
Semi-automated processing (PFTquant software)
For each examination, bowel fluid volume pre- and post-secretin administration was quantified via semi-automated segmentation by the same two raters described above. The raters will be referred to as R1PFTquant and R2PFTquant from here on. To minimize potential sources of systematic bias, the following conditions were set: (1) R1 performed manual segmentations prior to semi-automated segmentation, whereas R2 performed semi-automated segmentations prior to manual segmentations. (2) Neither rater could view any segmentation results prior to completing all segmentations using both approaches. (3) For each examination, an interval of at least 4 weeks separated the processing approaches of each rater in order to minimize recall of specific segmentation choices for a given examination.
(4) Three different MR exams that were not among the 20 selected for this study were used in the development of the software.
The user interface of the software is shown in Fig. 1. Time to complete processing of each examination was recorded by the software; the timer started the moment the user initiated a new case and stopped when the case was saved and closed (no case was re-opened for editing).
Upon selection of the pre- and post-secretin series, the software processes the data in the following manner. First, images and their corresponding metadata are loaded into memory from their DICOM files, and pre- and post-images are converted to single-precision grayscale images with the top 0.001% of voxel intensity values clipped to a value of 1. Next, pre- and post-secretin bowel-fluid candidate voxels are selected and stored in 3D binary arrays by thresholding at the intensity level that maximizes the inter-class entropy across all slices of both pre- and post-images. From this set of candidate voxels, clusters of volume less than 1 mL are removed. Finally, several non-bowel-fluid objects are identified and removed based on the morphometric properties of the clusters to which they belong. This step is accomplished by resampling candidate voxel arrays to isotropic spacing before calculating morphometric properties of each cluster (e.g., centroid, bounding boxes, sphericity, angle of the principal axis of the ellipsoid) and comparing those properties to sets of heuristically determined ranges for each of several non-bowel-fluid objects. Non-bowel-fluid objects targeted for identification are the spinal canal, intervertebral discs, renal pelvis and proximal ureter, bladder, gallbladder, and incompletely saturated fat signal.

Remaining candidate voxels are mapped onto the pre- and post-secretin images of the PFTquant interface (demarcated by marginal lines tracing their boundary) for manual refinement by the rater, as shown in Fig. 1. PFTquant allows the rater to scroll through slices using the mouse wheel or the left and right arrow keys. The rater may add voxels to the bowel fluid segmentation in two ways. (1) Single click: the rater uses the left mouse button to click on a point in the image within the bounds of an area of fluid. This initiates the Chan-Vese automated active-contouring algorithm, which detects object boundaries by minimization of an energy function using the selected point as a starting seed [12]. (2) Regional selection: the rater presses and holds the left mouse button to draw a freeform shape encompassing the fluid and any amount of background. This performs a maximum-entropy threshold operation limited to the voxels contained within the freeform shape. Similarly, the rater may remove voxels from the bowel fluid segmentation in two ways: (1) Single click: the rater uses the right mouse button to click on a point within the bounds of an area of fluid; this initiates the aforementioned automated active-contouring algorithm using the selected point as a seed and removes the identified voxels from the segmentation. (2) Regional selection: the rater presses and holds the right mouse button to draw a freeform shape; all voxels within the shape are removed from the segmentation. Pre- and post-secretin volumes are calculated as the sum of the segmented voxels of their respective image multiplied by the product of the slice thickness and pixel spacing values obtained from the DICOM metadata tags. Volumes are updated actively based on user modification of the fluid segmentation. Once the rater has completed the segmentation, they may save an analysis file that contains the raw image data, the image metadata, segmented bowel fluid masks, measured pre- and post-secretin bowel fluid volume, and analysis metadata (e.g., time to process, software version).
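To illustrate the flavor of the automated thresholding and small-cluster removal steps, the sketch below implements a Kapur-style maximum-entropy threshold plus a minimum-volume cluster filter in Python. It is a simplified stand-in written under our own assumptions, not the PFTquant source.

```python
import numpy as np
from scipy import ndimage

def max_entropy_threshold(img, nbins=256):
    """Return the intensity cut that maximizes the summed entropy of the
    background and foreground histogram classes (Kapur's method).
    Assumes intensities have already been scaled to [0, 1]."""
    hist, edges = np.histogram(img, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins - 1):
        p0, p1 = cum[t], 1.0 - cum[t]
        if p0 <= 0.0 or p1 <= 0.0:
            continue
        w0, w1 = p[: t + 1] / p0, p[t + 1 :] / p1
        h = (-(w0[w0 > 0] * np.log(w0[w0 > 0])).sum()
             - (w1[w1 > 0] * np.log(w1[w1 > 0])).sum())
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t + 1]

def candidate_fluid_mask(img, voxel_ml, min_ml=1.0):
    """Threshold, then drop connected clusters smaller than min_ml (in mL)."""
    mask = img >= max_entropy_threshold(img)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes * voxel_ml >= min_ml) + 1
    return np.isin(labels, keep_labels)
```

The click-seeded refinement described above could similarly be prototyped with scikit-image's chan_vese segmentation function, though PFTquant's exact implementation may differ.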
Statistical analyses
Interrater reliability was assessed for the outcome measure of pre- to post-secretin change in bowel fluid volume (in mL) using the two-way random effects intraclass correlation coefficient for absolute agreement between single raters, also known by the convention ICC(2,1) [13]. Results were interpreted as: ICC < 0.5 indicating poor agreement, ICC = 0.5-0.75 indicating moderate agreement, ICC = 0.75-0.9 indicating good agreement, and ICC > 0.90 indicating excellent agreement [13]. Bland-Altman difference analyses were also performed to quantify the difference in measured fluid volume between observers.

While change in bowel fluid volume is the clinical metric of interest, it is conceivable that two raters could produce the same result for an exam despite identifying vastly different sets of voxels as bowel fluid. Accordingly, we also computed the Dice similarity coefficient (DSC) as an alternative metric of interrater agreement. DSC is a measure of spatial overlap ranging from 0 (indicating no overlap in segmentation results) to 1 (indicating perfect overlap in segmentation results). Statistical inferencing was performed using a paired-samples t-test on the logit-transformed DSC. The logit transform, defined as logit(DSC) = ln[DSC/(1-DSC)], maps values in (0, 1) onto the full real line (-∞, ∞), making the transformed coefficients more suitable for a t-test.
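For concreteness, here is a small NumPy implementation of ICC(2,1) from the two-way ANOVA mean squares, together with the Dice coefficient and its logit transform. It is an illustrative sketch with made-up example numbers; the study itself performed these computations in MATLAB.

```python
import numpy as np
from scipy import stats

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects) x (k raters) matrix (Shrout & Fleiss conventions)."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical secretory-response measurements (mL): 2 raters, 20 exams
rng = np.random.default_rng(2)
truth = rng.uniform(50, 250, 20)
Y = np.column_stack([truth + rng.normal(0, 15, 20),
                     truth + rng.normal(0, 15, 20)])
print(f"ICC(2,1) = {icc_2_1(Y):.2f}")

# Paired t-test on logit-transformed Dice values from the two methods
d_manual = rng.uniform(0.70, 0.95, 20)
d_semi = rng.uniform(0.80, 0.95, 20)
logit = lambda d: np.log(d / (1 - d))
t, p = stats.ttest_rel(logit(d_manual), logit(d_semi))
print(f"paired t-test on logit(DSC): p = {p:.3f}")
```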
Agreement analyses were performed for the following analysis pairs: R1standard:R2standard and R1PFTquant:R2PFTquant, to characterize agreement between observers performing manual processing and observers performing semi-automated processing, respectively.
Finally, we tested whether the PFTquant software enabled more efficient processing of the images by comparing the mean time to process images for each method via an independent t-test, with p < 0.05 considered statistically significant. All statistical tests were performed using MATLAB R2022b (MathWorks, Natick, MA).
Results
A total of 20 exams from 20 unique patients were included. Patient median age was 11.2 years (IQR: 8.7-15.2). The youngest patient was 2.7 years old. Twelve (60%) of the 20 patients were female. All patients underwent MRCP with secretin for clinical indications, and 18 of 20 were diagnosed with some form of pancreatic disease: 7 acute pancreatitis, 2 acute recurrent pancreatitis, 6 chronic pancreatitis, 2 exocrine pancreatic insufficiency, 1 fatty pancreas, 1 Caroli's disease, and 1 choledochal cyst.
Table 1 displays summary statistics for the quantification of bowel fluid by each rater. Agreement statistics are detailed in Table 2, including agreement between manual and semi-automated image analysis. Interrater reliability, expressed as intraclass correlation coefficients, of measured secretory response based on standard processing was moderate at 0.69 (95% C.I. 0.36-0.86), compared to excellent at 0.90 (95% C.I. 0.76-0.96) using the PFTquant software. Bland-Altman difference analysis demonstrated mean differences in measured secreted fluid volume of -0.9 mL (95% limits of agreement: -77.3 to 75.5 mL) and -4.5 mL (95% limits of agreement: -39.4 to 30.4 mL) for standard and PFTquant processing, respectively (Fig. 2). DSC were not significantly different (p = 0.11) between standard processing (0.85 +/- 0.10) and PFTquant-processed cases (0.88 +/- 0.06) (Fig. 3). The mean time to process examinations using the standard technique (645 +/- 305 s) was significantly longer (p < 0.001) than with the PFTquant technique (412 +/- 177 s) (Fig. 4), reflecting an average time saving of 234 s, or approximately 36%.
Discussion

MR pancreatic function testing (MR-PFT) allows noninvasive assessment of pancreatic exocrine function by subjectively characterizing or objectively quantifying fluid secreted in response to administration of a secretagogue. Quantitation of the secretory response eliminates subjectivity and is particularly important in children, where secreted fluid volume is known to increase with age, necessitating application of age-specific diagnostic thresholds [8]. Further, quantitation presents an opportunity to monitor changes in secretory response over time. Among the barriers to adoption of quantitative MR-PFTs is the need for time-intensive manual image segmentation and the potential for interrater variability that may impact diagnostic performance. To address these limitations and to facilitate adoption of quantitative MR-PFTs, we developed a software solution (PFTquant) designed to minimize interrater variability and reduce processing time by (1) automating initial thresholding of the MR images, (2) automatically detecting and removing hyperintense voxels that are not bowel fluid, and (3) providing semi-automated interactive tools for segmentation refinement that encourage consistent, data-driven contours. Application of PFTquant yielded substantial and significant improvements in both interrater reliability (as measured by intraclass correlation coefficients) and efficiency (as measured by time to process examinations). Importantly, these improvements were observed in a clinical population, demonstrating the relevance of this work to pancreatic disease, and should encourage the wider adoption of MR-PFTs in clinical care and research. The simplicity of the workflow should also reduce barriers to adoption: a 10-minute tutorial video covers all functionality and served as the entirety of training in this work. This low learning curve should allow the quantitation to be performed by clinical image analysts in so-called "3D labs" instead of board-certified radiologists. The PFTquant software is freely available for non-commercial use under the Creative Commons Attribution-NonCommercial license version 4.0 (CC BY-NC 4.0) and can be downloaded from the author's GitHub (https://github.com/duddb3/PFTquant).

In our study of a sample of 20 clinically obtained MRI examinations with MR-PFTs, we showed that application of PFTquant could achieve excellent interrater agreement with a negligible mean difference (-4.5 mL) in quantified secreted fluid volume. This compared to moderate interrater agreement for manual processing of the same data sets. Further, use of PFTquant reduced the time for analysis of patient data sets by 36%. Agreement between observers in the current study for manual segmentation was less than previously demonstrated in a sample of 31 pediatric patients [10]. In that study, which used a similar manual segmentation process (ImageJ), Trout et al. showed observers could achieve strong correlation (r = 0.92) with negligible mean bias (2 mL) in measured secreted fluid volume. Notably, the observers in that prior study were highly experienced with the technique, with results likely reflecting a best-case scenario. Further, the time required to perform the manual segmentations in that prior study was not reported. To our knowledge, there are no other studies with which to compare the current work.
Our study is limited by the fact that it is a single-center study using a small number of MRI examinations, performed on a single-vendor MRI platform, with analysis by a small number of users/observers. Use of PFTquant requires matched image series pre- and post-secretin, acquired to optimize conspicuity of the fluid content of bowel. Performance of the software using other image sets is unknown. The small number of examinations and small number of observers included may inadequately characterize the performance of the software. However, our results show meaningful improvements in efficiency and interrater reliability with application of the software in this small study.
In conclusion, we have developed a software solution to facilitate quantitative analysis of MR-PFTs which reduces the time required to process these examinations and improves interrater agreement. These are important steps toward the wider adoption of MR-PFTs in clinical care and research, allowing exploration of the potential benefit of quantitative secretory measures in broader clinical populations.
Declarations
Conflict of interest The authors declare that there are no disclosures relevant to the subject matter of this article.
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

Fig. 4: Time to process examinations was significantly shorter using the semi-automated (PFTquant) software compared to the manual (standard) technique. Each dot represents the time taken to process each exam for the given rater and method; lines connect the same exams across raters and methods. Superimposed box-whisker plots show the median (red), interquartile range (blue), and +/- 2.7 times the standard deviation of time to process for each rater and method. The set of outlying dots, fully above the whiskers, reflects processing times for a patient with abundant ascites. Separating fluid in bowel from ascites required extra time with both methods but was shorter using PFTquant.
Fig. 1: PFTquant software interface. Coronal fat-saturated images from the T2-weighted scans prior to (left image) and 15 min following (right image) secretin administration, with segmented bowel fluid outlined in yellow. The upper left panel is for the display of patient information. R1 = rater 1 (ATT), R2 = rater 2 (JAD); standard = manual image analysis, PFTquant = semi-automated image analysis.
Fig. 3: Distributions of Dice similarity coefficient (DSC) for the manual (standard) and semi-automated (PFTquant) methods of quantifying bowel fluid secretion. Each dot corresponds to the DSC between the two raters for a single examination processed using the given method; lines connect the same examinations across methods.
Fig. 2: Bland-Altman difference plots for the manual (A) and semi-automated PFTquant (B) methods of quantifying bowel fluid secretion. Plots are shown with the same scale. The dashed line indicates the mean difference between raters for secreted fluid volume.
Funding: 1 R01 DK13246-01, Multi-parametric quantitative MRI for assessment of pancreas health in children.
Table 1
Bowel fluid volumes pre- and post-secretin and calculated secretory response (ΔVolume) derived from manual (standard) and semi-automated (PFTquant) MR image analysis. Results are expressed as mean +/- standard deviation (in mL) with ranges shown in parentheses.
Table 2
Interrater reliability statistics for pre-secretin fluid volume, post-secretin fluid volume, and change in secreted fluid volume (ΔVolume). Results are expressed as intraclass correlation coefficients with 95% confidence intervals in parentheses.
Segmental dataset and whole body expression data do not support the hypothesis that non-random movement is an intrinsic property of Drosophila retrogenes
Background Several studies in Drosophila have shown excessive movement of retrogenes from the X chromosome to autosomes, and that these genes are frequently expressed in the testis. This phenomenon has led to several hypotheses invoking natural selection as the process driving male-biased genes to the autosomes. Metta and Schlötterer (BMC Evol Biol 2010, 10:114) analyzed a set of retrogenes where the parental gene has been subsequently lost. They assumed that this class of retrogenes replaced the ancestral functions of the parental gene, and reported that these retrogenes, although mostly originating from movement out of the X chromosome, showed female-biased or unbiased expression. These observations led the authors to suggest that selective forces (such as meiotic sex chromosome inactivation and sexual antagonism) were not responsible for the observed pattern of retrogene movement out of the X chromosome. Results We reanalyzed the dataset published by Metta and Schlötterer and found several issues that led us to a different conclusion. In particular, Metta and Schlötterer used a dataset combined with expression data in which significant sex-biased expression is not detectable. First, the authors used a segmental dataset where the genes selected for analysis were less testis-biased in expression than those that were excluded from the study. Second, sex-biased expression was defined by comparing male and female whole-body data and not the expression of these genes in gonadal tissues. This approach significantly reduces the probability of detecting sex-biased expressed genes, which explains why the vast majority of the genes analyzed (parental and retrogenes) were equally expressed in both males and females. Third, the female-biased expression observed by Metta and Schlötterer is mostly found for parental genes located on the X chromosome, which is known to be enriched with genes with female-biased expression. Fourth, using additional gonad expression data, we found that autosomal genes analyzed by Metta and Schlötterer are less up regulated in ovaries and have higher chance to be expressed in meiotic cells of spermatogenesis when compared to X-linked genes. Conclusions The criteria used to select retrogenes and the sex-biased expression data based on whole adult flies generated a segmental dataset of female-biased and unbiased expressed genes that was unable to detect the higher propensity of autosomal retrogenes to be expressed in males. Thus, there is no support for the authors’ view that the movement of new retrogenes, which originated from X-linked parental genes, was not driven by selection. Therefore, selection-based genetic models remain the most parsimonious explanations for the observed chromosomal distribution of retrogenes.
Background
In Drosophila, there is an excess of retrogenes moving from the X chromosome to autosomal regions [1]. Interestingly, those retrogenes are frequently expressed in testis [1]. Both observations have been reported several times in Drosophila melanogaster [1][2][3], as well as in other species, including mammals [4] and mosquitoes [5,6]. In addition, a comparative study of the genomes of twelve Drosophila species revealed excessive movement out of the X chromosome for both retrogenes and DNA-based duplications in the Drosophila genus [7,8]. Further, older genes that originated before the split of the Drosophila and Sophophora subgenera, and whose expression is greater in males than in females, are underrepresented on the X chromosome [9][10][11][12]. The gene movement off the X chromosome likely contributed, along with other mechanisms, to the paucity of X-linked male-biased genes found in Drosophila [11].
Several hypotheses have been proposed to explain the excessive movement of genes out of the X chromosome and the paucity of male-biased X-linked genes [1, [13][14][15][16][17][18][19]. These hypotheses include (i) meiotic sex chromosome inactivation (MSCI), (ii) dosage compensation, (iii) meiotic drive, and (iv) sexual antagonism, and they all assume that natural selection has favoured accumulation of male-biased genes on the autosomes [1, [13][14][15][16][17][18][19]. Two of those hypotheses, MSCI and dosage compensation, have been tested and shown to play a role in the genomic relocation of retrogenes expressed in testis [15,16,20]. MSCI is predicated on the hypothesis that retrogenes located on autosomes continue functioning during male meiosis whereas otherwise they would be subjected to inactivation [1, 17,20]. Indeed, in meiosis where MSCI occurs, autosomal retrogenes have higher expression than their parental X-linked genes, presumably to compensate for their inactivation [20]. In Drosophila, the dosage compensation hypothesis also predicts a decrease in the number of male-biased genes in the X chromosome relative to autosomes [15,16]. Upregulation in males is less effective for X-linked genes since they are already hypertranscribed during dosage compensation [15,16]. Consistent with this hypothesis, autosomal retrogenes are often derived from X-linked parental genes that reside close to components of the dosage compensation machinery [16].
The recent study by Metta and Schlötterer [21] proposed a new interpretation which negated the need for selection-based hypotheses to understand the out-of-the-X movement pattern of Drosophila retrogenes. To test the general role of natural selection, Metta and Schlötterer [21] identified retrogenes for which the parental gene has been lost or degenerated. In other words, the parental genes and retrogenes are never found in the same species. This innovative approach differed from previous studies that analyzed both parental and retrogene copies of the same species [1-3]. A key argument used for their analysis was that the remaining retrogenes assumed and maintained the ancestral function(s) of the parental gene [21]. This unique set of parental genes and retrogenes (Table 1) displayed differences neither in their patterns of DNA sequence evolution nor in sex-biased expression. However, these retrogenes still showed excessive movement out of the X chromosome, suggesting no selection on these genes based on differential gene expression in males. Moreover, the genes studied by Metta and Schlötterer [21] displayed female-biased or unbiased (non-sex-biased) expression profiles. Therefore, the authors suggest that such gene movement in Drosophila is not related to male-biased expression and is therefore a general non-adaptive property of retrotransposition [21].
We revisited the analyses and sex-biased expression data presented by Metta and Schlötterer [21] and found several issues with the retrogene dataset and expression data that call their arguments into question. First, we found that the set of retrogenes was a segmental dataset from which the majority of genes with male-biased expression had been excluded. Second, we observed that the generally unbiased expression they reported was actually a consequence of using expression data from whole animals. Sex-biased gene expression (particularly male-biased expression) is poorly revealed when RNA is obtained from whole-body samples rather than dissected tissues (gonads) [6,7,22]. Third, we found that most of the observed female-biased expression derives from X-linked parental genes. The dataset provided by Metta and Schlötterer [21] shows an excess of X→A movement and therefore contains a significant number of parental genes located on the X chromosome, which is known to be enriched for genes with female-biased expression. Fourth, we analyzed additional gonad expression data that support the evidence that autosomal genes show higher male-related expression than X-linked genes. In the following four sections, we report our analyses of Metta and Schlötterer's [21] data that led to conclusions different from their previous ones.
The segmental dataset underestimated male-biased expression
We analyzed the dataset of positionally relocated genes for 12 Drosophila species [23], used by Metta and Schlötterer [21]. Bhutkar et al. [23] identified 46 cases of inter-chromosomal retrotransposition for which the parental copy had degenerated or had been lost (see Methods). Metta and Schlötterer [21] further filtered the dataset by several criteria, such as high coverage between orthologous sequence alignments and the absence of introns, to control the data quality (removing 26 cases) [21]. Therefore, for the remaining 20 cases together with a previously identified retrogene (RplP2) (herein named the segmental dataset), each of the 12 Drosophila species has only one orthologous gene, corresponding either to the parental gene or to the retrogene. In Metta and Schlötterer's study [21], D. melanogaster expression was retrieved from FlyAtlas [24] (which is based on comparisons of gonad expression).
Metta and Schlötterer [21] found that none of the 21 cases of inter-chromosomal retroposition showed testis-biased expression in D. melanogaster. However, the pattern of testis-biased expression changes significantly between the segmental dataset (21 cases) and the initial dataset of 46 retrogenes from Bhutkar et al. [23]. Nine of the 26 removed cases (herein called the excluded dataset) show testis-biased expression in D. melanogaster [21], which is significantly different from the expression pattern found in their segmental dataset (Figure 1; Fisher exact test, P = 0.0025; Additional file 1).
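For concreteness, the contingency-table comparison above can be reproduced in a few lines. The following is a minimal sketch using SciPy; the counts are those quoted in the text, while the original analysis used Fisher's exact test as implemented in R (see Methods):

```python
# Minimal sketch of the Fisher exact test on the counts quoted in the text.
from scipy.stats import fisher_exact

#                  [testis-biased, not testis-biased]
excluded_dataset = [9, 26 - 9]    # 9 of the 26 excluded cases
segmental_dataset = [0, 21]       # 0 of the 21 retained cases

odds_ratio, p = fisher_exact([excluded_dataset, segmental_dataset])
print(f"Fisher exact test: p = {p:.4f}")   # ~0.0025, as reported
```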
Nonetheless, Metta and Schlötterer [21] were aware that the testis expression data limited the analysis to D. melanogaster genes (no gonad expression data were available or used for the other species). For the cases of retroposition where the parental gene had been lost, the copy present in D. melanogaster corresponds either to the parental gene or to the retrogene, depending on the species or branch in which the duplication occurred. Using the segmental dataset and the expression criteria of [25], Metta and Schlötterer [21] found that only one out of five retrogenes located on an autosome is expressed (at very low levels) in the testis, which supported their argument for general female-biased or unbiased expression of retrogenes. However, this result is not consistent with FlyAtlas [24], in which three of the five retrogenes (CG14286, CG12375, CG4918) are expressed in testis. Moreover, in the excluded dataset, the only case of an autosomal retrogene (CG10934) in D. melanogaster with an X-linked parental gene does indeed show testis-biased expression [21].
The difference in sex-biased expression between the excluded and segmental datasets could have compromised their final conclusions [21], as one would expect that data subsets should not show drastic differences in expression patterns. One possibility is that the conservative sequence-similarity threshold used to construct the segmental dataset biased it against male-biased expressed genes, since in Drosophila this class of genes is known to be more divergent than female-biased or unbiased expressed genes [26,27].
However, sequence-similarity conservation was not the only threshold used to remove genes from the segmental dataset [21]. Other criteria, such as the absence of introns, were also implemented [21]. It is therefore possible that the segmental dataset represents an even more confident set of relocated retrogenes. We thus conducted a full analysis of the excluded dataset (26 cases, see Additional file 2). We found no evidence to exclude the following cases: CG32119, CG14077, CG7557, CG8928, CG4904, CG14026 and CG12010. Note that three of those genes are male-biased expressed. Thus, the highly confident relocated genes contained in the excluded dataset still show a significantly higher frequency of male-biased genes than the segmental dataset (3 out of 7 vs. 0 out of 21, or 43% vs. 0%; Fisher exact test, p = 0.0107). Nonetheless, we focused our further analyses only on the segmental dataset used by Metta and Schlötterer [21]. In the following three sections, we present several points that led us to maintain a different conclusion.
Whole-body gene expression comparison between males and females underestimated male-biased expression
In order to test for functional equivalence among duplicate copies, Metta and Schlötterer [21] compared the sex-biased gene expression between retrogenes and parental copies. They used the available gene expression data from whole bodies of males and females in D. simulans, D. yakuba, D. ananassae, D. pseudoobscura, D. virilis and D. mojavensis [26] to classify those genes into different categories of sex-biased expression. They found that retrogenes and parental genes usually show similar expression. Indeed, almost 50% (10/21) of cases have the same sex-biased expression across all species (see Table 1, reproduced from Table 2 in [21]). However, our re-analysis of the data (Figure 2, Additional file 1) revealed that approximately 80% of those cases (8 out of 10) with the same sex-biased expression show no significant evidence for male- or female-biased expression. Note that we used the same source to obtain information regarding male- or female-biased expression [26] (see Methods). All of them are equally expressed in males and females (unbiased expression, or "No sex-biased" in Figure 2). Note that our re-analysis has shown that one additional case of relocation (CG2227, Additional file 1) has unbiased expression in D. simulans [21,26]. The sex-biased expression data used by Metta and Schlötterer [21] came from a previously published article that compared whole-body expression of males and females [11,26], whereas previous analyses of gene movement with male expression in Drosophila utilized expression data from testes and ovaries [1-3]. It was reported that the number of genes with sex-biased expression is drastically reduced in the whole-body expression data of D. melanogaster [9]. We have also previously observed that analysis of gene duplicates using whole-body expression data recovered only 30% of the male-biased gene expression found in D. melanogaster gonads [7]. This low coverage of male-biased genes in whole-body data was also observed in Anopheles gambiae [6,22]. In that case an even smaller proportion of male-biased genes is observed when compared to the proportion of female-biased genes: only 7% of testis-biased expression is recovered using male whole-body RNA. In contrast, 50% of ovary-biased expression is recovered when using whole bodies of females [22]. Moreover, the number of female-biased genes can also be underestimated using whole-body RNA. Since those genes are widely expressed [24], the introduction of somatic tissues into the RNA pool may dilute the relative excess found in the ovary. Therefore, the use of whole-body RNA generally underestimates the sex-biased expression detected by gonadal tissue comparisons.

Figure 1: Percentage of Drosophila melanogaster testis-biased and non-testis-biased expressed genes in two different gene expression datasets. Testis-biased expression profiles for D. melanogaster genes were obtained from Metta and Schlötterer [21]. The segmental dataset corresponds to the 21 movement cases selected by Metta and Schlötterer [21] from the original 46 cases in [23]. The excluded dataset corresponds to the remaining 26 cases. The number of testis-biased genes is significantly higher in the excluded dataset (**Fisher exact test, p = 0.0025), which implies that the filter used by Metta and Schlötterer [21] disproportionally selected fewer testis-biased genes into the segmental dataset.

Metta and Schlötterer [21] also claimed that 60% of the genes have heterogeneous sex-biased expression, i.e. cases in which orthologs of the same gene in different species have different sex-biased expression. Moreover, they found that sex-biased expression among species shows no particular pattern associated with retrogenes or parental copies (Table 1). However, this result is not unexpected, as only 11 out of the 41 retrogenes (27%) displayed sex-biased expression for all species/gene combinations (Figure 2). We therefore reason that any conclusions regarding the relationship between sex-biased expression and chromosomal locations of retrogenes without parental genes must await additional studies using comparisons between gonads of males and females (see "Additional gonad expression data supports selection hypothesis for movement out of the X chromosome" below).
Female-biased expression is associated with X-linkage of parental genes

Metta and Schlötterer [21] claimed that genes in their dataset show a high frequency of female-biased expression, in contrast to the male-biased expression usually found for retrogenes moving out of the X chromosome. They interpreted this lack of association, together with the apparently non-random gene traffic off the X, to reflect a non-adaptive process. However, we found that this level of female-biased expression (29/116 species/gene combinations; Table 1 and Figure 2) is a consequence of the large number of X-linked parental genes present in the dataset and is therefore not unexpected even under selection-driven models. In other words, there is an excess of X-linked gene movement to the autosomes in their dataset. If all orthologs of the 21 retrogenes across the twelve Drosophila species are analyzed, it is clear that there is an enrichment of X-linked parental genes in the total expression profile (80% vs. 20%, n = 119; Fisher's exact test, p < 0.0001). As the X chromosome in Drosophila is enriched with female-biased genes [9], it is reasonable to expect a high frequency of this class of gene.
Indeed, we found that most of those female-biased genes are parental genes located on the X chromosome: 18 (grey boxes, Figure 2) out of the 27 female-biased genes are located on the X chromosome, and only two are retrogenes (Figure 2). Note that two of the 29 genes previously found to be female-biased expressed are actually unbiased between males and females (see notes in Additional file 1). In other words, a high fraction of the female-biased genes, 60% (16/27), are X-linked parental genes, and the X chromosome is known to be enriched with parental genes and female-biased expressed genes [1,9]. This association can be clearly seen as an enrichment of X-linked female-biased genes for parental copies but not for retrogenes (Table 2; Fisher exact test, p = 0.0061). Removal of female-biased X-linked genes from Table 1 (grey boxes, Figure 2) results in a noticeable decrease in sex-biased expression, particularly for retrogenes: 3 male-biased and 6 female-biased expressed genes remain. Therefore, the large number of female-biased genes associated with X-linkage of parental genes is expected from various forms of sexual antagonism models [13,14,28] and is consistent with the known deficit of male-biased genes on the X chromosome and its enrichment of female-biased genes [9,12]. In other words, their finding of an excess of female-biased genes is actually in agreement with the proposed selection-based hypotheses connected to sex-biased expression [9-14].

Figure 2: Female-biased expressed genes located on the X chromosome are shown in grey boxes. Retrogenes and parental genes are marked "R" and "P", respectively. Same sex-biased expression can be divided into: no sex-biased expression and female-biased expression for all orthologs analyzed. "-" corresponds to cases where orthologs do not show the same sex-biased expression. "na" refers to no expression data available.
Additional gonad expression data supports selection hypothesis for movement out of the X chromosome

We searched for additional gonad expression data for the specific group of retrogenes and their parental counterparts analyzed by Metta and Schlötterer [21]. If selection is driving the retrogene movement out of the X chromosome, we should be able to detect lower expression in ovaries and higher expression in testis for those genes located on the autosomes in comparison to X-linked genes. However, if the movement out of the X chromosome is an intrinsic property of retrogenes, no differences in sex-related expression should be expected. Although such an assessment is not trivial given the small sample size (entire dataset = 47; segmental dataset = 21 [21]), we were able to find significant differences in at least two independent analyses. First, using FlyAtlas [24] expression data for the segmental dataset of D. melanogaster (n = 21), we found that parental genes are more upregulated in the ovary than retrogenes (Table 3 and Additional file 1; 93% vs. 43%; Fisher exact test, p = 0.0251). This pattern is not a result of the large number of X-linked genes found in the group of parental genes, as none of the X-linked retrogenes is upregulated in the ovary (Table 3). This is in contrast to the expression profile of X-linked parental genes, which are all upregulated in the female organ (Fisher exact test, p = 0.001).
Second, using two different spermatogenic expression profiles [20,29], we found that the D. melanogaster autosomal genes described by Metta and Schlötterer [21] (entire dataset, n = 47) were more likely expressed in meiosis than in mitosis. Additional file 3: Figure S1 plots the correlation between the two available expression profiles of D. melanogaster spermatogenesis [20,29]. One profile corresponds to the expression fold difference found between bag-of-marbles (bam) mutant and wild-type testes [29]. The bam mutation prevents entry into the meiotic stage and results in the accumulation of pre-meiotic cells [30]. The other profile corresponds to the expression fold difference found between mitotic and meiotic cells dissected from wild-type testes [20]. The two expression profiles are significantly correlated and therefore should reproduce the expression differences between the first two phases of spermatogenesis (r² = 0.41, p = 2.3 × 10⁻⁶). In the latter profile [20], the X-linked genes in Metta and Schlötterer's [21] sample show a higher mitotic/meiotic expression ratio when compared to genes located on the autosomes (t-test, t = 2.03, p = 0.048). This result suggests that autosomal genes are more frequently expressed in meiotic cells of the testis.
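The two statistical comparisons just described can be sketched as follows. The arrays below are stand-in placeholders (the real fold-difference values live in Additional file 1), so only the structure of the tests, not the numbers, should be taken from this snippet:

```python
# Sketch of the correlation and t-test analyses; the data are placeholders.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(42)
bam_fold = rng.normal(size=47)                      # bam-mutant/wild-type fold differences
mito_meio_fold = 0.6 * bam_fold + rng.normal(scale=0.7, size=47)

r, p = pearsonr(bam_fold, mito_meio_fold)           # correlation of the two profiles
print(f"profile correlation: r^2 = {r**2:.2f}, p = {p:.1e}")

# Compare mitotic/meiotic expression of X-linked vs. autosomal genes.
is_x_linked = rng.random(47) < 0.3                  # placeholder chromosome labels
t, p = ttest_ind(mito_meio_fold[is_x_linked], mito_meio_fold[~is_x_linked])
print(f"X vs. autosome t-test: t = {t:.2f}, p = {p:.3f}")
```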
These independent analyses have shown that the autosomal and X-linked genes analyzed by Metta and Schlötterer [21] are not equally expressed in sex-related tissues: the autosomal genes tend to be less ovary-expressed and to show more male expression, specifically in the meiotic phases of the testis. This result is therefore in agreement with the hypothesis that selective forces such as MSCI, dosage compensation and sexual antagonism are involved in the retrogene movement out of the X chromosome [1,13-17]. It is important to notice that the selective model does not necessarily require male-biased expression, but only higher male expression of autosomal retrogenes than of their X-linked parental counterparts.
Discussion
Numerous studies have shown increased testis expression of retrogenes that have moved out of the X chromosome in D. melanogaster [1-3,7,8]. Those findings are associated with several evolutionary hypotheses in which autosomal male-biased genes have been favoured by natural selection [1,13-19]. However, the recent study of Metta and Schlötterer [21] found no evidence of male-biased expression among retrogenes for which the parental copy has been lost. On the contrary, the genes analyzed mostly have female-biased or unbiased expression [21]. As those genes also show the excessive movement out of the X chromosome, Metta and Schlötterer [21] suggested that such a trend is an intrinsic property of retrogenes in Drosophila and not part of an adaptive process. However, the segmental dataset used by Metta and Schlötterer [21] did not show the same proportion of testis-biased expressed genes observed in the entire dataset of retrogenes in which the parental gene was subsequently lost [23]. Thus it is clear that the segmental dataset, which the authors took as evidence against selection-based hypotheses [21], was not representative of the entire dataset of retrogenes for which the parental copy has been lost.
In addition, statistical analysis of gene movement and sex chromosome evolution can only be performed using tissue-specific expression profiles across species, particularly from male gonads [1-3,6,7,9,20]. However, such studies are complicated in cases where the parental copy has degenerated or has been lost. In those instances, movements of parental genes and retrogenes can only be inferred using genomic comparisons and phylogenetic inference between different Drosophila species [7,8,21,23]. Unfortunately, expression data derived from gonad analysis do not yet exist for all genome-sequenced Drosophila species (only whole-body expression data has been assembled in [26]).
Although a previous whole-body expression analysis successfully detected the non-random chromosomal distribution of sex-biased genes [11], it failed to recover the known extensive male-biased expression obtained using tissue-specific data in D. melanogaster [7]. That means whole-body expression analyses lack the statistical power needed to detect the tissue-specific basis of retrogene movement out of the X chromosome [7,8], probably due to the smaller sample size of this dataset in comparison to genome-wide analyses. In a previous study [7], we approached this problem by using a conservative analysis of gene movement in D. melanogaster, for which gonad expression data are available [7,24]. Although the number of retrogenes was too small to conduct a statistical test, it was possible to show that X-linked parental genes for which the corresponding retrogene had moved to the autosomes were generally under-expressed in testis, in agreement with the sexual antagonism, MSCI and dosage compensation models [7]. Thus, hypotheses concerning the generality of retrogene movements off the X (with or without parental genes) cannot be tested with existing expression data. We must await the acquisition of appropriate tissue-specific expression data from across the Drosophila clade.
However, we were able to show that there is an association between sex-biased expression and movement out of the X chromosome within the group of retrogenes analyzed by Metta and Schlötterer [21]. First, using D. melanogaster gonad data from FlyAtlas [24], we found that the X-linked parental genes tend to be more upregulated in ovaries than retrogenes located on the autosomes. Second, autosomal genes tend to be more expressed in meiotic cells of the testis in comparison to X-linked genes. Those results are in agreement with the hypothesis that autosomal regions provide a favourable environment for male expression [1,13-19,31].
Nevertheless, it is important to notice that even if tissue-specific data across the Drosophila clade provide evidence for reduced testis-biased expression of retrogenes without parental genes compared to that of retrogenes with parental copies, this will not necessarily rule out the MSCI, sexual antagonism, meiotic drive and dosage compensation models [1,13-19]. The current sex-biased expression of retrogenes without parental genes does not necessarily reflect expression levels at the time the duplication occurred. In this model of retrotransposition, it is reasonable to assume that before the parental gene is lost, the retrogene would either complement the parental gene's function, or undergo neo- or sub-functionalization [21]. Only after degeneration of the parental copy could selection favour mutations in the retrogene that gradually restore the parental function [21]. Therefore, under the selection-driven hypothesis, male-biased expression is only expected at the time the inter-chromosomal movements occurred.
In addition, there are several other lines of evidence supporting hypotheses in which excessive gene movement off the X chromosome is driven by natural selection. First, the excessive gene movement out of the X chromosome is not exclusive to retrogenes. Genes created by DNA-based mechanisms also show excessive out-of-the-X movement, which suggests that natural selection, rather than mutational processes intrinsic to retrotransposition, played an essential role in distributing male-biased genes [7,8]. Second, chicken and silkworm, which have ZW sex-determining systems, also present an association between sex-biased gene expression and chromosomal gene movement. In those cases, a pattern symmetrical to that of XY sex-determining systems is observed: genes that move out of the Z chromosome tend to be ovary-biased expressed [32,33]. The phenomenon therefore does not depend on mutational processes intrinsic to testis expression and is more likely to be driven by natural selection. Third, a recent population genomic analysis of the copy-number variants of Drosophila retrogenes found that there are more fixed than polymorphic retrogenes originating from the X chromosome, which provides direct and strong population genetic evidence for the positive selection hypotheses [34]. Fourth, it is worth mentioning that several autosomal retrogenes that moved out of the Drosophila X chromosome and show clear testis-specific functions have been identified and extensively described. Examples of such genes are Drosophila nuclear transport factor-2-related (Dntf-2r), Rcd-1 related (Rcd-1r) and gasket (gskt) [1,35-37].
Conclusions
Our re-analysis of Metta and Schlötterer's [21] data mainly revealed that whole-body expression analyses are unable to accurately assess the sex-biased expression of retrogenes. A similar issue has recently been resolved in mosquitoes [5,6]. The association between male-biased expression and Anopheles gambiae retrogene movement out of the X chromosome was obscured by whole-body data [5,38] but revealed in experiments using dissected testes [6]. The available evidence argues against Metta and Schlötterer's [21] results and interpretations, and re-analysis of their data suggests that retrogenes whose parental copies have been lost do not generally tend to be female-biased or unbiased in their expression. We therefore conclude that the excessive movement out of the X chromosome is not an intrinsic property of retrogenes in Drosophila but instead the result of selective forces acting on males.
In closing, we note that the conclusions of Metta and Schlötterer [21] have been cited by others [39,40]. We hope that our re-analysis of their work will serve to re-focus and clarify the importance of biological relevance in database construction and in the analysis of gene traffic in Drosophila. This is a crucial element for moving forward in understanding the role of selection-driven hypotheses such as MSCI, dosage compensation, meiotic drive and sexual antagonism in sex chromosome evolution [1,13-19].
Retrogene and parental gene identification
We retrieved the 47 genes analyzed by Metta and Schlötterer [21] from their Additional file 5. Those genes correspond to D. melanogaster genes involved in inter-chromosomal retrotransposition for which the parental copy had degenerated or had been lost, previously identified in [23]. Following Metta and Schlötterer's [21] classification, we separated those 47 inter-chromosomal gene movements into two sub-datasets, here named the segmental and the excluded datasets. The former contains the 21 cases which Metta and Schlötterer [21] selected by several criteria in order to control data quality (see details in Additional file 1). The excluded dataset corresponds to the remaining 26 cases. In order to search for orthologs of the segmental dataset genes in other Drosophila species, we used the 21 D. melanogaster CGs as FlyBase queries [41]. Using the resulting genome-wide drosophilid orthologs, we searched for GLEANR identifiers through the FlyBase FBgn-GLEANR ID correspondence table. The GLEANR identifiers are listed in our Additional file 1.
Gene expression analysis
For the 21 gene movements in the segmental dataset, we searched for sex-biased expression patterns in male vs. female whole-body comparisons in six Drosophila species [26]. In order to reproduce the expression data of Metta and Schlötterer [21] for non-D. melanogaster species, we used the GLEANR identifiers to search for male- and female-biased genes identified in Supplemental Tables 5-16 of [26]. Genes not present in those tables were considered unbiased (equally expressed between males and females).
Testis-biased expression profiles for D. melanogaster genes were obtained from Metta and Schlötterer's [21] analysis and are marked in red both in our Additional file 1 and in Additional file 5 of [21]. We re-analyzed the presence of expression in testis for all five retroposed copies in the segmental dataset that are located on the autosomes in D. melanogaster. Using the Affymetrix present-call classification in FlyAtlas (4 out of 4 arrays), we observed that 3 out of the 5 retrogenes are expressed in testis in D. melanogaster, as opposed to only one described in [21]. Upregulation in ovary or testis relative to the whole body in D. melanogaster was also obtained from FlyAtlas [24] and is described in Additional file 1 (Additional expression sheet).
Expression data for specific stages of D. melanogaster spermatogenesis were obtained both from bam mutant whole testes and from mitotic and meiotic phases of wild-type testes [20,29]. Normalized expression data for the 47 D. melanogaster genes involved in gene movement were obtained from Tables S1 of [20,29] by cross-matching Oligo identifiers and are described in Additional file 1.
Statistical Analysis
In-house Perl scripts and Unix commands were used to analyze the different groups of data. The significance of differences in 2×2 contingency tables was always assessed with Fisher's exact tests as implemented in R.
Additional files
Additional file 1: List of retrogenes and their sex-biased information. Modified from Additional file 5 in Metta and Schlötterer [21]. Sex-biased and spermatogenic expression and movement direction for candidate genes were obtained from [11,20,21,24,26,29].
Additional file 2: Detailed analysis of the 26 relocated cases contained in the excluded dataset.
Additional file 3: Figure S1. Correlation between two expression datasets from Drosophila spermatogenesis [1,2]. The X-axis represents the fold differences between bam mutant and wild-type testes from [1]. The Y-axis represents the fold differences between mitotic and meiotic expression
|
Axion-like Relics: New Constraints from Old Comagnetometer Data
The noble-alkali comagnetometer, developed in recent years, has been shown to be a very accurate measuring device of anomalous magnetic-like fields. An ultra-light relic axion-like particle can source an anomalous field that permeates space, allowing for its detection by comagnetometers. Here we derive new constraints on the interactions of relic axion-like particles with neutrons and electrons from old comagnetometer data. We show that the decade-old experimental data place the most stringent terrestrial constraints to date on ultra-light axion-like particles coupled to neutrons. The constraints are comparable to those from stellar cooling, providing a complementary probe. Future planned improvements of comagnetometer measurements, through altered geometry, constituent content and data analysis techniques, could enhance the sensitivity to axion-like relics coupled to nucleons or electrons by many orders of magnitude.
Our limits utilize an experimental device called a helium-potassium (³He-K) comagnetometer [47-49,51-55]. The comagnetometer is sensitive to the difference between the magnetic fields measured by two strongly interacting magnetometers. The first measures the magnetic field via the spin of helium-3 atoms, which is dominated by the spin of their neutrons. The second magnetometer is sensitive to the spin of potassium atoms, which is dominated by the spin of their outermost electron. The comagnetometer resonantly couples the two magnetometers, and the result is a device that is sensitive to low-frequency, ≲ O(100) sec⁻¹, spin-dependent interactions that couple differently to neutrons and electrons. The basic idea is as follows. The sensitivity of the comagnetometer is optimized at the so-called 'compensation point'. There, the response of the helium spins is tuned such that they cancel the effect of magnetic fields on the alkali (potassium) spins, making the alkali magnetometer insensitive to regular magnetic fields. Anomalous magnetic fields, which couple differently to neutrons and electrons compared to regular magnetic fields, would not be canceled by the helium gas, and will have a measurable effect on the alkali. For an ALP, the ratio of its coupling to neutrons, g_aNN, to its coupling to electrons, g_aee, should generically differ from the neutron-to-electron gyromagnetic ratio, and so the comagnetometer is a sensitive instrument for detecting the new magnetic-like fields that an ultra-light ALP would induce.
As a result, the ³He-K comagnetometer can be used as a tool to measure the interactions of ALPs with neutrons and electrons. As we will show, this setup enhances the signal from the ALP-neutron coupling compared to that of the ALP-electron coupling, yielding moderate sensitivity to the latter and excellent sensitivity to the former. The bounds we recast from the published data of Refs. [47-49] place the strongest terrestrial constraints on the coupling of ALPs to neutrons over a broad range of masses, comparable and complementary to known astrophysical bounds.
We note that Ref. [56] (as well as Ref. [57]) suggested performing an analysis such as the one presented in this paper, and Ref. [42] (discussed in further detail in Ref. [58]) has implemented the analysis for the case where the ALP's inverse mass is much larger than the total measurement time, placing limits for m_a ≲ 2 × 10⁻²² eV. Our analysis lays out the machinery (distinct from that presented in Ref. [42]) needed to explore higher masses, extending the limits up to m_a ≲ 4 × 10⁻¹³ eV. We further discuss the near-future prospects of these experiments. This paper is organized as follows. We begin in Section II by describing the comagnetometer and its basic principle of operation. Section III describes the dynamical equations of the comagnetometer. We discuss how the comagnetometer can be used to detect relic ALPs in Section IV. The data we use is presented in Section V, followed by our newly derived limits in Section VI. We end by outlining possible improvements for future experiments in Section VII, followed by a summary. The many appendices expand on the calculations and derivations performed throughout the paper. In Appendix A we give a more complete derivation of the steady-state solution of the comagnetometer. Appendix B expands on the dynamical response of the system and its steady-state response to an oscillating signal. Appendix C describes how the direction of the ALP wind affects the signal and how we treated it in our analysis, while Appendix D presents the treatment of the implications of the stochastic nature of the ALP field. Appendix E discusses two effects related to the nuclear spin structure, justifying choices we make in our analysis. Finally, Appendix F unites the results of all previous appendices and provides an explicit derivation of the 95% C.L. bounds, accounting for the effects of noise.
II. THE ³He-K COMAGNETOMETER
The concept of the helium-potassium comagnetometer was originally proposed in Ref. [55] and further developed in Refs. [47-49]. Below, we briefly describe the principles of its operation (for further details see Appendices A and B).
The ³He-K comagnetometer is depicted in Fig. 1. It is a hybrid of two magnetometers that occupy the same space and interact with each other. The setup typically includes a spherical glass cell containing potassium (K) vapor and highly pressurized helium-3 gas (³He). The glass cell is illuminated by two laser beams, referred to as the 'pump' and the 'probe'. The pump beam is used to initialize the comagnetometer by polarizing the potassium atoms along its direction, while the probe measures the spin of the potassium atoms. The glass cell is surrounded by magnetic coils, which are themselves surrounded by magnetic shields, so that the magnetic field inside the cell remains under control to a high degree. The density of the potassium vapor is determined by the temperature of the cell, which is controlled using an oven.
The alkali K magnetometer. The spins of the potassium magnetometer are initialized along a certain direction, ẑ, via the pump beam. Further stabilization of the polarization in this direction is achieved by applying a magnetic field aligned along the ẑ direction. Such a magnetic field has two crucial additional roles, to be discussed below: (i) it is used for mitigating magnetic noise in the ³He system, and (ii) by tuning this field to a specific value one may strongly couple the two magnetometers to one another. A weak transverse magnetic or anomalous field that changes more slowly than the decay rate of the alkali's transverse polarization (induced mostly by the pump) adiabatically tilts the spins and induces a measurable change in the direction of the alkali's polarization. Since the alkali can only partially follow fields that change too fast, its sensitivity is reduced when the typical time scale of changes in the magnetic fields is shorter than the inverse decay rate. The probe beam measures the projection of this polarization along its direction, while minimally affecting the alkali spins. The resulting magnetometer is sensitive to fields perpendicular to both the pump and the probe beams.¹

Dynamics of helium-3 atoms. Helium-3 is a spin-1/2 atom with its two electrons in the singlet state. Consequently, its spin originates entirely from the nucleus. Since the pump and probe beams are at a wavelength of 770 nm, they have practically no interaction with the nuclear energy levels associated with the helium-3 spins. The helium-3 dynamics benefit from two important effects that stem from spin-conserving collisions with the alkali metal. First, these collisions polarize the helium-3 gas, operating as an effective pumping force that generates a macroscopic helium-3 magnetization and acts to (slowly) decay any spin component that is not aligned with the alkali polarization along the ẑ direction. Second, the collisions induce mutual effective magnetic fields. The magnetic field induced by the alkali is significantly smaller than the external magnetic field in the ẑ direction; however, it plays a crucial role in the dynamics of the helium-3 spins in the transverse directions, as discussed below.
The primary goal is to measure an anomalous field transverse to the ẑ direction, which oscillates slowly in time (much like an ultra-light axion). To do so, timescales play an important role. For simplicity, it is easier to think of the anomalous field as though it only interacts with either electrons or neutrons, and correspondingly affects only the potassium or the helium. As mentioned above, the response of the alkali is damped when the field oscillates much faster than the alkali's decay rate. In a generic situation, the helium-3 decay rate is small or, equivalently, the lifetime of its transverse nuclear spin excitations is very long. Consequently, if an anomalous field interacting only with neutrons oscillates on a timescale much shorter than this lifetime, its oscillations will effectively average out before the helium-3 spins have time to follow it by decaying toward the direction of the net magnetic field. To solve this problem (as well as to probe the helium-3 spin), the system must be brought into resonance, which significantly shortens the transverse lifetime of the helium-3 spin. We now discuss the method to achieve this.
Figure 1. Center: Schematic illustration of the principles of operation of the comagnetometer, including the pump laser, probe laser, polarization measurement, glass cell, K droplet (indicated by the silver sphere), K atoms and ³He atoms. The pump laser in the ẑ direction polarizes the K atoms, which themselves polarize the ³He along the ẑ direction. Measuring the outgoing probe laser beam's polarization allows one to measure the x̂ projection of the alkali spin. In this illustration an anomalous field b_n is present (e.g. sourced by an ALP) along the ŷ direction and affects only the ³He atoms. Side panels: three-dimensional axes depicting the spins of the ³He (left) and K (right) and the different fields (anomalous as well as magnetic). B_K−He (B_He−K) is the magnetic field the K (³He) spins induce on the ³He (K) atoms. Both atoms are in the presence of an external magnetic field B_ext, which has a small deviation from the large, controlled ẑ magnetic field due to magnetic noise, here assumed along the x̂ axis. The overall magnetizations of the K (³He) are depicted by the dotted vectors and marked as −λM_K (−λM_He). Tuning the ẑ component of B_ext to what is called the compensation point ensures that the effect of the ³He's magnetization on the K spins, B_He−K, has a projection on the x̂ axis which exactly cancels the effect of B_ext on the K. The rotation induced by b_n on the ³He induces a transverse polarization in the perpendicular direction on the K spin. This implies that the comagnetometer is sensitive to anomalous fields, while it is insensitive to regular magnetic noise. See main text for further details.

Interactions of the two magnetometers. With the two magnetometers placed in the same glass cell, the system exhibits two modes: one that is mostly aligned with the short-lived alkali spins, and another, much longer-lived, mode that is mostly aligned with the spin of the noble gas. The interactions between the two gases induce an effective coupling that triggers both the pumping effect in the helium-3 and the mutual effective magnetic fields.² The mixing, however, is a priori insufficient to significantly affect the lifetime of the helium-3 (of order a few hours; see R_He in Table I), unless the two modes are in resonance. Since the pump and the external magnetic field are both aligned with the ẑ direction, the noise in the pumping rate and in the B_z amplitude would dominate over any new anomalous field in the ẑ direction. Therefore, sensitive measurements cannot be implemented in the ẑ direction, and one only measures the transverse spins.

² Note that the effective pumping of the alkali due to the presence of the helium is negligible compared to the direct pumping from the pump beam. Conversely, the source of the helium polarization is none other than the pumping achieved by the presence of the alkali.
By tuning the magnetic field in the ẑ direction, one can tune the energy splittings due to ẑ magnetic fields in the two spin species to be identical, putting the two magnetometers in resonance. At this point, the two previously separable magnetometers become mixed, allowing sensitivity to the nuclear spins through the measurement of the alkali spins. Moreover, the lifetimes become similar and, in particular, the effective lifetime of the helium-3 is reduced by orders of magnitude compared to the non-resonant mode, to order ∼ 100 msec.
A very important effect occurs close to the resonance regime, which significantly enhances the comagnetometer sensitivity. Under steady-state conditions, the nuclear polarization of the helium-3 can be made to follow external magnetic fields, thus canceling the net magnetic field felt by the alkali (in the transverse directions). This specific choice of magnetic field is called the compensation point. It is usually O(1%) away from the resonance point, thus reaping most of the sought-after benefits of the latter as well. At the compensation point, the alkali spins, which interact with the total external and nuclear magnetic fields, feel a vanishing overall magnetic force. Consequently, the comagnetometer cancels out regular magnetic fields, leaving excellent sensitivity to anomalous ones.
We can now expand on the schematic depiction of the comagnetometer in Fig. 1. In the center panel, the large circle represents the glass cell, which houses pressurized ³He gas as well as a silvery liquid droplet that generates a vapor of K atoms. The probe laser passes through the glass cell, and its linear polarization is modified by the alkali spins, allowing the projection of the potassium spin along the direction of the probe beam propagation (≡ x̂) to be measured. The pump laser is circularly polarized and interacts with the alkali atoms, giving them a macroscopic polarization in the direction of the pump beam propagation (≡ ẑ). This macroscopic polarization is passed (to some degree) to the ³He atoms, giving them a macroscopic magnetization in the pump beam's direction (ẑ) as well. The left panel of vectors represents the fields acting on the ³He atoms, with b_n the anomalous field interacting with the neutrons, B_K−He the magnetic field the alkali induces on the ³He, and B_ext the external magnetic field. Note that B_ext is not precisely along the ẑ direction due to possible experimental noise. We have chosen to depict only b_n (and not b_e as well) in order to simplify the illustration. The right panel of vectors represents the fields acting on the K atoms, with B_He−K the magnetic field the ³He atoms induce on the K atoms. λM_K = B_K−He (= 2λμ_K S_K in later equations) and λM_He = B_He−K (= 2λμ_He S_He in later equations) represent the effective magnetization of the alkali as felt by the noble gas and of the noble gas as felt by the alkali, respectively. Note that the vector −λM_K (−λM_He) is proportional to the direction of the spins of the K (³He) atoms with a positive proportionality factor; −λM_K (−λM_He) is therefore not a field felt by the K (³He) spins, but rather indicates the direction of the K (³He) spins.
The compensation point is illustrated in Fig. 1 by the three-dimensional axes showing (λM_He + B_ext) · x̂ = 0, since the noble gas exactly cancels the transverse component of the external magnetic field (which in Fig. 1 is directed along the x̂ axis). On the other hand, the ŷ projection of the noble spins is non-vanishing and proportional to the nuclear anomalous field (which in Fig. 1 is along the ŷ axis). As a consequence, the ŷ anomalous field induces, through the nuclear spins, a measurable tilt of the alkali spins along the x̂ axis.
III. DYNAMICAL EQUATIONS
Much of the dynamics of the comagnetometer described above can be captured, schematically, by the coupled time-evolution equations for the helium-3 nuclear spin vector, S_He, and the alkali spin vector, S_K (the complete form, of which the following is a schematic reconstruction, is derived in Appendix A):

dS_He/dt = [γ_He (B + 2λμ_K S_K) + γ_n b_n] × S_He − R_He S_He + R_pu^eff (S_K − S_He),

dS_K/dt = (γ_e/q) (B + 2λμ_He S_He + b_e) × S_K − (R_e/q) S_K + (R_pu/q) (s_pu − S_K).

The typical sizes of the different variables are shown in Table I. The first line in each of the equations describes the action of the effective total field on the corresponding spins. Here B is the total external magnetic field, namely the controlled magnetic field in the ẑ direction together with any magnetic noise penetrating the magnetic shielding or generated by thermal noise in the shield. b_n (b_e) is an anomalous field³ that interacts with the neutrons (electrons). μ_K and μ_He are the spin-normalized alkali and noble-gas magnetizations respectively, while the factor λ [50] is related to the cross section of a nuclear-alkali collision and depends upon the overlap of the alkali and nuclear wave-functions during a collision.⁴ Under typical conditions, |μ_K S_K| ≪ |μ_He S_He|. γ_e (γ_n) is the gyromagnetic ratio of a free electron (neutron), while q is the so-called 'slowing-down factor', which arises from integrating over the spin-3/2 degrees of freedom of the potassium nucleus and is a dimensionless constant of order O(4−6), depending on the precise experimental setup. Finally, γ_He is the ³He gyromagnetic ratio.
The decays of the spins are described by the first term on the second line of each equation. R_e and R_He are the decay rates of the electron and ³He spins, respectively. Finally, the effect of the external pump for the alkali and of the effective pump due to spin-exchange interactions for the helium-3 are described by the last terms. s_pu is the spin of the circularly polarized pump beam, s_pu = ẑ, while R_pu and R_pu^eff are the external and effective pumping rates, respectively. As can be seen, the effective pump drives the helium-3 spin to align with that of the alkali. Note that the probe beam, which can be thought of as the ability to measure the alkali's spin projection along the direction of the probe's propagation, does not appear in the above equations, as it has a negligible effect on the dynamics of the potassium spins and none at all on the ³He atoms.
The system can be understood in a simple manner under the assumption of a steady-state equilibrium, dS_{K,He}/dt = 0. The pumping terms dominate the steady-state solution of the ẑ projections, greatly impairing the sensitivity to all fields in that direction. Conversely, significant sensitivity can be achieved in the perpendicular directions at the so-called compensation point. As we show in Appendix A, in the absence of an anomalous field, b_{e,n} = 0, the transverse spin polarization of the alkali gas can be made to vanish (even for finite B_⊥), S_K^⊥ = 0, by tuning the ẑ component of the external magnetic field to B_z = B_c ≃ −2λ(μ_K S_K^z + μ_He S_He^z), where B_c is the compensation point of the magnetic field. Correspondingly, one often defines the compensation frequency, ω_c ≡ γ_He B_c, which usually sets the typical time scale that characterizes the compensation point. At this point, the alkali gas feels no (non-anomalous) external magnetic fields in the perpendicular direction. We stress that this ability to cancel external magnetic fields is achieved by tuning only the controlled magnetic field along the ẑ direction, allowing any additional noise in the system to be canceled. As a consequence of working at the compensation point, the sensitivity to anomalous fields acting on the neutrons is maximized; as derived in Appendix A, the transverse alkali polarization acquires a component proportional to (γ_e/γ_n) b_n^⊥.
The above shows an enhanced sensitivity to b_n^⊥ due to the large numerical coefficient, γ_e/γ_n ∼ O(1000). The compensation point occurs within the resonance regime, where the decay rate of the helium-3 is highly enhanced.
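To build intuition for the compensation-point behavior, one can integrate the schematic coupled equations of this section numerically. The following is a minimal sketch using purely illustrative, dimensionless toy parameters (not the experimental values of Table I); it simply scans B_z for the point where the alkali's transverse response to ordinary magnetic noise is minimized:

```python
# Minimal numerical sketch of the schematic coupled spin equations above.
# All parameter values are illustrative, dimensionless placeholders.
import numpy as np
from scipy.integrate import solve_ivp

gamma_e, gamma_He, gamma_n = 100.0, 1.0, 1.0   # gyromagnetic ratios (toy values)
q = 5.0                                        # slowing-down factor
lam_mu_K, lam_mu_He = 0.05, 1.0                # 2*lambda*mu_K and 2*lambda*mu_He
R_e, R_He = 1.0, 0.01                          # spin decay rates
R_pu, R_pu_eff = 1.0, 0.01                     # external and effective pump rates
s_pu = np.array([0.0, 0.0, 1.0])               # pump beam spin along z

def rhs(t, y, B, b_n):
    S_He, S_K = y[:3], y[3:]
    # b_e is set to zero here for simplicity.
    dS_He = (np.cross(gamma_He * (B + lam_mu_K * S_K) + gamma_n * b_n, S_He)
             - R_He * S_He + R_pu_eff * (S_K - S_He))
    dS_K = (np.cross(gamma_e / q * (B + lam_mu_He * S_He), S_K)
            - R_e / q * S_K + R_pu / q * (s_pu - S_K))
    return np.concatenate([dS_He, dS_K])

def alkali_x_response(Bz, Bx_noise=1e-4):
    """Late-time |S_K^x| under a small transverse magnetic perturbation."""
    B = np.array([Bx_noise, 0.0, Bz])
    y0 = np.concatenate([s_pu, s_pu])          # both spins initialized along z
    sol = solve_ivp(rhs, (0.0, 400.0), y0, args=(B, np.zeros(3)), rtol=1e-7)
    return abs(sol.y[3, -1])                   # x-component of S_K at late time

# Scan B_z for the compensation point, where the alkali response to ordinary
# transverse magnetic noise is minimized.
Bz_grid = np.linspace(-1.5, 0.0, 31)
responses = [alkali_x_response(Bz) for Bz in Bz_grid]
print("compensation point near B_z =", Bz_grid[int(np.argmin(responses))])
```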
IV. MEASURING ALPS WITH THE COMAGNETOMETER
Having established the basic concept of comagnetometers, we move on to discuss their sensitivity to new physics. In particular, we focus on the ALP Lagrangian terms [50]

L ⊃ g_aNN (∂_μ a) N̄ γ^μ γ⁵ N + g_aee (∂_μ a) ē γ^μ γ⁵ e,

where a, N and e are the ALP, neutron and electron fields, respectively, while g_aNN and g_aee are the ALP-neutron and ALP-electron couplings.
The non-relativistic limit of the above results in spin-dependent interactions that are analogous to the interactions of magnetic fields with spins in the SM. In analogy to magnetic fields, we define an ALP-induced field b, which couples to the alkali's spin (dominated by its electronic configuration) and to the helium-3 spin (governed by the spin of its neutron) through the Hamiltonian

H ⊃ −g_aee b · σ_e − g_aNN b · σ_N.

As mentioned previously, such a field is called anomalous if the ratio of the above couplings, g_aee/g_aNN, does not match the Standard Model (SM) ratio, γ_e/γ_n. Microscopically this is the case for any force mediator that couples differently to electrons and neutrons. (For this reason, comagnetometers are not sensitive to relic dark photons, which would couple with the same ratio as the SM photon.) As described above, comagnetometers are excellent detectors of such anomalous fields, with the best sensitivity demonstrated for anomalous couplings to nucleons. Refs. [48,49] performed a thorough search, with the results mostly interpreted in the context of anomalous fields sourced by Lorentz violation, thereby considering only time-independent anomalous fields, b_n ≡ g_aNN b.
In this work, we show that the same data can be used to place constraints on anomalous fields that are sourced by the presence of relic ALPs, which induce an effective time-dependent b_n oscillating at a frequency set by their mass, m_a. Bounds can similarly be placed on ALP-electron anomalous fields, b_e ≡ g_aee b, though these are somewhat weaker.
Figure 2. The spectral noise density as a function of the angular frequency, collected from three experiments and used to derive the limits in this work. Dataset I (purple) is taken from Figure 5.3 of Ref. [47], dataset II (blue) is taken from Figure 4.17 of Ref. [48], and dataset III (orange and green) is taken from Figure 5.11 of Ref. [49]. Note that we present the data as a function of the angular frequency, ω = 2πf, instead of as a function of the frequency f itself.

When a spin-1/2 neutron, N, is in the presence of a coherent ALP field, a, assuming both are non-relativistic, the Hamiltonian of their interaction is given by [32,50,60]

H ≃ g_aNN √(2ρ_a) sin(E_a t + θ_0) v · σ_N,   (7)

where ρ_a is the energy density of the ALPs in the vicinity of the neutron, θ_0 is a random initial relative phase, σ_N is the spin of the neutron, and E_a is the energy of the ALP, which for a non-relativistic particle is roughly its mass, E_a ≃ m_a. The relative velocity between the neutron and the ALP field is v, and for DM ALPs we have on average |v| ∼ 7.7 × 10⁻⁴ in natural units. The Hamiltonian of the interaction between an ALP and electrons is similar, with the replacements g_aNN → g_aee, σ_N → σ_e.⁵ As is evident, relic ALPs act as an anomalous field: they couple to the spin of the particles with an oscillating strength, giving an effective anomalous field

b_n(t) = g_aNN √(2ρ_a) sin(E_a t + θ_0) v,   (8)

with a corresponding equation for electrons.
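As a quick sanity check of scales, one can evaluate the amplitude of the effective anomalous field for an illustrative coupling. The density and velocity below are the benchmark values used in this work, while the coupling g_aNN is a placeholder; the output lands in the same ballpark as the |b_n^⊥| bounds quoted below:

```python
# Back-of-the-envelope sketch of b_n ~ g_aNN * sqrt(2 rho_a) * |v|, in natural
# units; g_aNN is an illustrative placeholder, not a derived limit.
hbar_c_GeV_cm = 1.9733e-14            # 1 cm^-1 expressed in GeV

rho_a = 0.4 * hbar_c_GeV_cm**3        # 0.4 GeV/cm^3 converted to GeV^4
v = 0.77e-3                           # typical relative DM velocity (c = 1)
g_aNN = 1e-9                          # illustrative coupling [GeV^-1]

b_n = g_aNN * (2.0 * rho_a) ** 0.5 * v
print(f"b_n ~ {b_n:.1e} GeV")         # a few x 10^-33 GeV for these inputs
```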
V. DATA
In this work we analyze existing data and show how it can be used to place new bounds on ALP couplings. The data come from three experiments, each measuring the spins of potassium atoms in a ³He-K comagnetometer for a period of several days. The data, reproduced in Fig. 2, are given in the form of the magnetic field spectral noise density A(ω). Here A(ω) is the amplitude of the signal in units of magnetic field per square-root bandwidth (where bandwidth is measured in units of Hz), ω is the frequency at which the signal is measured, t_tot is the total measurement time, σ̂ is the direction to which the measurement is sensitive, and b_n is the neutron anomalous field. A similar equation can be written if one assumes a signal from an electron anomalous field (b_n → γ_n b_e/γ_e). The details of the three datasets we use are as follows: 1. Dataset I: Vasilakis et al., Ref. [47] (some of the data is only shown in a plot in Ref. [49]), performed a search for long-range spin-dependent interactions using a comagnetometer, over a total integration time of 36.2 days. In this experiment, relic ALPs would appear as a background field, and thus the noise spectrum they provide can be used to constrain such a relic.
The available data, depicted by the purple curve of Fig. 2, present the measured noise spectrum for the entire experiment, for frequencies in the range 0.04 sec⁻¹ ≲ ω ≲ 400 sec⁻¹. The data above 315 sec⁻¹ are filtered and therefore cannot be used to derive bounds.
The experiment was split into 7 separate runs, each testing different configurations which affect the long-range spin-dependent interaction search but do not affect the sensitivity to relic ALPs. However, since the different measurements have been summed incoherently, and with long breaks between them, there are non-trivial effects, discussed in further detail in Appendices D and F. Throughout the experiment, the sensitive direction of the comagnetometer was aligned with the radial direction of the earth. This dataset will be used to derive new limits on ALP-neutron and ALP-electron couplings for masses in the range 2.4 × 10⁻¹⁷ eV ≲ m_a ≲ 2 × 10⁻¹³ eV.
2. Dataset II: The second dataset was presented by Kornack et al. in Ref. [48] and is available only for a measurement period of 6 days, for frequencies 3 × 10⁻⁶ sec⁻¹ ≲ ω ≲ 600 sec⁻¹. Throughout, the sensitive direction of the comagnetometer was aligned with the radial direction of the earth.
The data are presented by the blue curve of Fig. 2. This dataset is the oldest of those we use and is noisier over most of its covered frequencies. Additionally, its resolution over the frequency range uncovered by other measurements is too poor to detect daily modulation, which, combined with its single measurement source, further suppresses its reach (see Appendix F for further details). These data are used to cast new bounds in mass regions not covered by the other datasets, for 3 × 10⁻¹⁸ eV ≲ m_a ≲ 4 × 10⁻¹⁷ eV and 2 × 10⁻¹³ eV ≲ m_a ≲ 4 × 10⁻¹³ eV. (While this dataset could be used to cast limits on arbitrarily low ALP masses, it is not competitive with results derived from dataset III below, and so we do not pursue this further.) 3. Dataset III: The third dataset was presented by Brown et al. in Ref. [49] and is available only for their longest uninterrupted measurement, which lasted 21.81 days out of 143 days of total run time. Every 7−10 seconds, the sensitive direction of the comagnetometer was rotated by 90°. The sensitive directions of the available measurement in this case are therefore both north-south and east-west.
The measured noise spectral density is depicted by the green (east-west sensitive direction) and red (north-south sensitive direction) curves of Fig. 2. The data span the frequency range 6 × 10⁻⁶ sec⁻¹ ≲ ω ≲ 5 × 10⁻³ sec⁻¹. These data will be used to cast limits on ALPs with masses m_a ≲ 3 × 10⁻¹⁸ eV.
In addition to the data shown in Fig. 2, Ref. [49] also provides a bound on the amplitude of a constant anomalous field. A constant anomalous field would be interpreted as a nearly massless ALP, m_a < 1.8 × 10⁻²² eV, so that b_n of Eq. (8) remains nearly constant throughout the measurement. This bound relies on the full 143 days of exposure, and is therefore stronger than the one cast from the 22 days of exposure of the longest uninterrupted measurement. Indeed, Ref. [42] has already cast a bound on the neutron coupling of ultra-light ALPs from this result. However, Ref. [42] did not account for the stochastic nature of the ALP field, which was recently shown to weaken bounds when taken into account [61] (see Sec. VI for a brief discussion, and Appendices D and F for a more complete one). We therefore recalculate that bound here, obtaining weaker results due to this effect.
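Since all three datasets enter the analysis as amplitude spectral densities, it may help to sketch how such a quantity is estimated from a time series. The sampling rate, injected signal, and normalization convention below are assumptions for illustration, not those of the actual experiments:

```python
# Sketch of estimating an amplitude spectral density A(omega) from a
# simulated time series of the projected field; all numbers are illustrative.
import numpy as np

fs = 10.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0.0, 86_400.0, 1.0 / fs)      # one day of mock data
f_a = 0.01                                  # assumed ALP frequency [Hz]
rng = np.random.default_rng(1)
series = 1e-12 * np.sin(2 * np.pi * f_a * t) + 1e-11 * rng.normal(size=t.size)

# One-sided amplitude spectral density: |FFT| normalized per sqrt(bandwidth),
# i.e. field units per sqrt(Hz).
asd = np.abs(np.fft.rfft(series)) * np.sqrt(2.0 / (fs * t.size))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

peak = freqs[1:][np.argmax(asd[1:])]        # skip the DC bin
print(f"peak at f = {peak:.4f} Hz (omega = {2 * np.pi * peak:.4f} rad/sec)")
```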
VI. ANALYSIS AND RESULTS
We now describe our analysis method for obtaining new constraints on the ALP parameter space using the comagnetometer measurements described above, and the results derived from the existing data. While there is no commonly accepted model for the ALP DM density and velocity distribution (see Ref. [62] for several models), in order to cast bounds one must choose a specific model. Here we take the average velocity of the ALPs relative to us to be |⟨v⟩| = 232 km/sec ∼ 0.77 × 10⁻³ in natural units. For the density profile, we assume that the ALPs comprise all of the DM in the galaxy, with an average ALP density of ρ_a = ρ_DM = 0.4 GeV/cm³.
Following Eq. (7), the ALP-induced anomalous field points in the direction of the ALP velocity and depends linearly on its size. The experimental sensitivity depends on the direction of the anomalous field relative to the detector's sensitive direction (encoded in σ̂ of Eq. (7)), which differs between the datasets. The rotation of the earth rotates the direction of sensitivity of the detector, creating an O(1) daily modulation of the signal. The data of Ref. [49] are the only ones with fine-enough resolution to measure this effect. The details of our daily-modulation treatment are given in Appendix C, and their application to the dataset of Ref. [49] is described in Appendix F.
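The geometry behind this daily modulation can be sketched directly: project a fixed ALP-wind direction (in equatorial coordinates) onto the earth-radial sensitive axis of a rotating laboratory. The latitude and wind declination below are illustrative assumptions:

```python
# Sketch of the O(1) daily modulation from the earth's rotation; the
# latitude and wind direction are illustrative placeholders.
import numpy as np

lat = np.deg2rad(40.35)                # assumed laboratory latitude
dec = np.deg2rad(-48.0)                # assumed declination of the ALP wind
Omega_SD = 2.0 * np.pi / 86_164.0      # sidereal-day angular frequency [rad/sec]

t = np.linspace(0.0, 2 * 86_164.0, 2001)          # two sidereal days
wind = np.array([np.cos(dec), 0.0, np.sin(dec)])  # unit wind vector (RA = 0)

# Earth-radial sensitive axis of the lab, rotating about the celestial pole.
radial = np.stack([np.cos(lat) * np.cos(Omega_SD * t),
                   np.cos(lat) * np.sin(Omega_SD * t),
                   np.sin(lat) * np.ones_like(t)])
projection = wind @ radial             # modulated geometric factor in [-1, 1]

print(f"projection oscillates between {projection.min():.2f} and {projection.max():.2f}")
```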
In the limit m_a → 0, the signal can be constrained from the signal at the sidereal-day frequency, ω = Ω_SD ≡ 2π/day. ALPs in this limit can be thought of as a source of the original anomalous daily-modulated field searched for in Refs. [48,49]. Indeed, the bound calculated by Ref. [42] assumes this. As we have mentioned, independently of the data in Fig. 2, Ref. [49] presents a final value for a constant anomalous field, |b_n^⊥| < 3.7 × 10⁻³³ GeV at 68% C.L., corresponding to |b_n^⊥| < 5.5 × 10⁻³³ GeV at 95% C.L. Due to a long break in the middle of data-taking, the experiment spanned a period of 270 days, so that for masses m_a < 2π/(270 days) ≃ 1.8 × 10⁻²² eV the anomalous field would appear constant throughout the experiment of Ref. [49] (up to the effect of the daily modulation), and the result above can be used.
The naive plugging-in of ρ_a = ρ_DM, v_a = ⟨v_a⟩, θ_0 = 0, and E_a = m_a in Eq. (8) to find the appropriate bound on the coupling neglects the stochastic nature of the ALPs, and is inaccurate by a factor of O(20). We will now briefly discuss the effects of the stochastic nature of the ALP field, though we leave the full discussion to Appendices D and F.
Non-relativistic ALPs are coherent over a period of τ_a = 2π/(m_a v_stochastic^2), where we took v_stochastic = v_virial = 220 km/sec. For any measurement shorter than the coherence time, we can assume that a single value was sampled for the velocity, the energy, the relative phase, and the density of the ALP field (the distributions can be found in Appendix D). For a measurement time t_tot = nτ_a, we assume that n random samples of the stochastic distributions should be summed over. To get the bounds, we therefore run a simple Monte Carlo (MC) simulation in which we sample the distributions appropriately and derive a 95% C.L. bound. The stochastic nature of the ALPs allows for the possibility that we are in a region where the ALP field is uncharacteristically small. However, it is enough to draw a few samples from the distributions to decrease this effect. This implies an O(3) improvement of the sensitivity when transitioning from t_tot = τ_a to t_tot ∼ 5τ_a. As explained in Appendix D, a more detailed analysis could be made with more complete data, achieving a further improvement of the bound which scales as t_tot^{1/4} for the signal-to-noise ratio (SNR) at periods longer than the coherence time; however, with the current data available, that additional improvement could not be achieved.
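The following is a minimal numpy sketch of the logic of such an MC, under simplifying assumptions of our own: only the Rayleigh-distributed field amplitude is sampled, with an arbitrary normalization, while the full analysis also samples velocities and phases (Appendix D). It is meant to illustrate the O(3) effect described above, not to reproduce the actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_signal_power(n_coh):
    """One MC realization of the average signal power over n_coh
    coherence-time patches; only the Rayleigh-distributed field amplitude
    is sampled here (the full MC also samples velocities and phases)."""
    sqrt_rho = rng.rayleigh(scale=1.0, size=n_coh)   # sqrt(rho_a) per patch
    return np.mean(sqrt_rho**2)

def g_bound_95(power_measured, n_coh, n_mc=50_000):
    """95% C.L. bound on the coupling: g is excluded if g^2 times the sampled
    power exceeds the measured power in 95% of realizations."""
    x = np.array([mean_signal_power(n_coh) for _ in range(n_mc)])
    return np.sqrt(power_measured / np.percentile(x, 5.0))

# A single unlucky patch can be very small; averaging ~5 patches removes most
# of that tail, tightening the bound by ~O(3) (cf. the text):
print(g_bound_95(1.0, 1) / g_bound_95(1.0, 5))   # ~3
```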
The 7 runs of Ref. [47] were spread over a period of ∼100 days, and the longest of them took about ∼8 days. Therefore, for masses 10^−15 eV ≲ m_a ≲ 10^−14 eV, the signal would be incoherent over the 100 days, despite being coherent for the entirety of any specific run. Another non-trivial detail is the calibration procedures that were performed during the measurements, which took about O(50%) of the total measurement time (during which no data was recorded). In general, all three datasets shown in Fig. 2 have gone through several processing procedures which we do not know in full detail. These complications give rise to some uncertainties, which we discuss in further detail in Appendix F.
In general, more of the technical details of the procedure that we use to derive our constraints are described in the Appendices, with the final procedure found in Appendix F. Our results for the constraints on the ALP-neutron and ALP-electron couplings are shown as blue shaded regions in Figs. 3 and 4, respectively.
In Fig. 3, the region labeled as 'long-range' represents the merging of two separate bounds from the non-observation of new long-range interactions [43,47]. The 'ν_n/ν_Hg' region is excluded by the absence of anomalous fields in a system of mercury atoms and free neutrons [44], and the 'CASPEr (ZULF)' region is excluded by the phase I run of this low-frequency NMR experiment [45]. The bound from the CASPEr ZULF comagnetometer experiment is presented as the 'CASPEr (comag.)' region [46]. The last three exclusion regions were recently corrected by Ref. [61], which accounts for the previously ignored stochastic nature of the ALP field, and we use their corrected results in this figure. The 'neutron star' and 'SN' shaded regions indicate the stellar constraints from neutron star cooling [63] and supernova SN1987a [64] (recently recalculated in Ref. [68]). The 'meson' shaded region is a model-dependent constraint, arising from the new decay channels that axions would introduce in meson decays [65]. The dotted orange, the dashed magenta, and the dot-dashed red curves show our future projections, as explained in the next section.
In Fig. 4, the 'white dwarfs' and 'solar axions' shaded regions indicate astrophysical constraints coming from the new cooling mechanism axions would introduce in white dwarfs [67], and the non-observation of solar axions by the LUX experiment [66], respectively. The 'long-range' region presents the bound from looking for long-range spin-dependent interactions [40]. The 'torsion-pendulum' region presents the bound from the search for the anomalous field sourced by ALPs interacting with the polarized electrons of a so-called "spin-pendulum" [35]. The magenta dashed curve shows our future projection for a dedicated comagnetometer experiment, as explained in the next section.
As is evident, for ALP-neutron couplings, our newly derived bounds from old data provide the strongest terrestrial constraints to date over a broad range of masses, providing a complementary probe to stellar constraints. Further improvements and deep reach into uncharted parameter space should be possible with future experimental improvements, as we now detail.
VII. FUTURE IMPROVED EXPERIMENTS
The concept of the alkali-noble comagnetometer has existed for over a decade and shows great promise; however, relatively little work on the topic discussed here has been performed. We now outline several possible directions for future improvement, which could enhance the sensitivity of these systems to relic ALPs. We describe three realistic experimental setups for improved sensitivity. The Hot Vapors Laboratory of the Quantum Optics Center in Israel is currently building two comagnetometers that will implement some of the ideas presented below. Our projected sensitivity curves for these realistic future experimental setups are depicted by the dashed curves in Fig. 3 and Fig. 4.
A. Dedicated DM Search
The simplest way to improve the bounds extracted in this paper is to improve the detector and to perform a dedicated DM analysis. The expected reach is shown by the dashed magenta curves in Fig. 3 and Fig. 4. The predicted constraint realistically assumes a 30-day dedicated run with an O(5−10) improvement in the signal-to-noise ratio (SNR) of the detector compared to Ref. [47], i.e. assuming a 1.4 picoG/√Hz noise spectral density. We further assume the noise to increase by an order of magnitude at the lower frequencies. We have assumed a moderate increase in the polarization of the helium atoms, leading to a compensation frequency (ω_c ≡ γ_He B_c) of 100 sec^−1, as well as a small increase in the alkali decay rate (R_e/q ≳ ω_c = 100 sec^−1). As discussed in Appendix B, when the frequency of the anomalous fields, ω, increases, the SNR of the detector decreases as a function of ω/(R_e/q) and ω/ω_c. In this neutron-ALP interaction projection, we have therefore taken the SNR of the detector to decrease linearly after reaching 100 sec^−1. For the electron-ALP interactions, the loss of SNR is less steep, and we have therefore neglected this effect. All of these improvements rely on advanced techniques which are currently being tested.

FIG. 3. Constraints and projected reach for ALP-neutron couplings. The shaded blue regions represent the 95% C.L. bounds derived in this paper from datasets I [47], II [48] and III [49]; the bound derived from dataset III continues to arbitrarily small ALP masses. The sudden increase of the bound at ultra-light masses is due to the longer measurement time available at those masses, and not an increased sensitivity at low frequencies. The shaded 'long-range' region comes from the non-observation of deviations from the gravitational 1/r^2 law at short distances [43], together with the bound from long-range spin-dependent interactions [47]. The 'ν_n/ν_Hg' shaded region comes from Ref. [44], which compared the effect of anomalous DM axion fields on Hg and neutrons. Similarly, the 'CASPEr (comag.)' region is excluded by the non-observation of the effect of anomalous DM axion fields on 1H and 13C [46]. The 'CASPEr (ZULF)' shaded region indicates the phase-I bound of that experiment [45], which looks for anomalous fields utilizing NMR methods. The last three bounds were recently corrected by Ref. [61], which accounts for the previously ignored stochastic nature of the ALP field, and we use their corrected results in this figure. The 'neutron star' band indicates the constraints from neutron star cooling considerations [63]. The 'SN' band depicts cooling bounds from supernova SN1987a [64]. The 'meson' band is the model-dependent bound from searching for invisible meson decays [65]. The dashed magenta, dotted orange and dot-dashed red curves indicate the future reach of our proposed improved experimental setups; for further details, see the main text.
One of the detectors being built has two probe beams, and it is planned to implement a control measurement (e.g. by exiting the compensation point for short intervals, during which sensitivity to noise from magnetic fields is present). We believe that such background subtraction can introduce significant additional improvements compared to what is currently shown in Fig. 3 and Fig. 4. We also expect to be able to achieve a marginal improvement at times longer than τ_a, scaling as t_tot^{1/4}, as discussed in Appendix D. Ref. [47] has already shown that even a partial control measurement can introduce an O(10) improvement for some frequencies, and it is therefore likely that the final reach of such an experiment could be even greater than presented here.
An additional improvement is expected through complete 3D knowledge of the directionality of the measured anomalous field. Since the directional properties of the experimental noise are currently unknown, we do not include the potential improvement from a multi-directional search in our projected reach. Techniques to measure the entire 3D vector of the anomalous field are, however, currently being studied and will enable complete knowledge of the ALP field directionality. If a sharp peak is found at some frequency, directional detection schemes in different laboratories would allow testing whether the measured signal is sourced outside of the earth, which would inform us on the question of its DM origin.

FIG. 4. Constraints and projected reach for ALP-electron couplings. The shaded blue regions represent the 95% C.L. bounds derived in this paper from datasets I [47], II [48] and III [49], the third of which continues to arbitrarily small ALP masses. The sudden increase of the bound at ultra-light masses is due to the longer measurement time available at those masses, and not an increased sensitivity at low frequencies. The shaded 'long-range' region represents the constraint from searching for new long-range interactions [40]. The 'torsion-pendulum' region represents the bound from the search for the anomalous field sourced by ALPs interacting with the polarized electrons of a so-called "spin-pendulum" [35]. The shaded 'solar axions' region is excluded by the solar axion search of the LUX collaboration [66]. The shaded 'white dwarfs' region is excluded by considering the effects axions would have on white dwarfs as a new cooling mechanism [67]. The magenta dashed curve describes the future reach of an improved comagnetometer setup we propose; see the main text for further details.
B. Change of Atoms
While the 3He-K comagnetometer can achieve strong bounds on the interactions between ALPs and neutrons, it cannot probe ALP-proton interactions due to the absence of a proton component in the 3He spin (see Appendix E for further details). Changing the identity of the atoms in the comagnetometer can not only affect the sensitivity but also enable probing of the ALP-proton coupling, which can be much larger than the ALP-neutron coupling [69].
Several options for variations of the atoms exist. One, currently under study, is the use of 21Ne as an alternative to 3He. A second, more readily available option, is the use of a xenon isotope, 129Xe or 131Xe, paired with Rb alkali atoms. The Xe-Rb interactions induce a large relaxation rate for the rubidium. As a consequence, in order to reach reasonable polarizations, a cell with xenon isotopes must have a significantly lower pressure compared to a cell with 3He atoms. Since the noise cancellation is also sensitive to the density of the noble gas, δB/B_c ∝ 1/n_noble, this would naively impede the cancellation. However, the interaction of Rb and Xe is about O(100) times stronger than that of K and 3He, and additionally the pumping of the Xe isotopes can reach O(10%) polarization, compared to the O(2%) polarization of the 3He atoms. Thus, an order of magnitude increase in the compensation frequency can be expected.
The decay rate of the electronic spin is increased by orders of magnitude compared to the 3He-K comagnetometer, which would naively suppress the signal, as can be seen from Eq. (4). However, the leading order contribution from magnetic noise is suppressed by the same factor, so the SNR due to magnetic noise is unaffected. Depending on the precise origin of the detector noise (whether sourced by SM magnetic fields or something else, such as noise in the lasers), this suppression of both signal and noise may or may not fully cancel. For this reason, we use a conservative projection assuming that the detector will experience a constant spectral density noise of 0.1 nG/√Hz. As in the dedicated DM search case, the increase in compensation field and decay rate has been translated into the dynamical suppression factors discussed in Appendix B appearing at higher frequencies compared to the existing experiments analyzed here. For the assumed ω_c = 10^3 sec^−1, this translates into a linear decrease in the sensitivity starting at m_a = 10^3 sec^−1.
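A schematic sketch of how such a projection curve can be assembled from these assumptions (a flat noise floor with the SNR degrading linearly above the compensation frequency); the numeric values are the illustrative ones quoted above, and the overall normalization is arbitrary:

```python
import numpy as np

OMEGA_C = 1.0e3        # sec^-1, assumed compensation frequency from the text
NOISE_FLOOR = 0.1e-9   # G/sqrt(Hz), assumed flat spectral density noise

def relative_reach(m_a):
    """Relative projected bound vs ALP mass (in sec^-1); larger = weaker."""
    snr_loss = np.maximum(m_a / OMEGA_C, 1.0)   # linear SNR loss above omega_c
    return NOISE_FLOOR * snr_loss

for m_a in np.logspace(0, 5, 6):               # masses in sec^-1
    print(f"m_a = {m_a:9.1e} sec^-1 -> relative reach {relative_reach(m_a):.2e}")
```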
Our projection is shown by the orange dotted curve in Fig. 3. The noise of the detector at low frequencies is extremely hard to predict, and thus the projected bounds are given for masses m_a ≳ 1 sec^−1. Once again we assume a 30-day run period. The reach of the Xe-Rb comagnetometer described here can also be cast for ALP-proton couplings, with sensitivity similar to the ALP-neutron one, thus providing a complementary probe. The sensitivity of this detector to ALP-electron interactions is not competitive with existing bounds and is therefore not shown. Finally, we comment that the xenon detector may be further improved by the simultaneous use of two xenon isotopes, as will be demonstrated in future work.
C. Long Range Spin-Dependent Interaction Search
In the experiment of Ref. [47], whose data was analyzed in this work, a helium-potassium comagnetometer is used to measure long-range spin-dependent interactions via an independent sample of highly polarized 3He gas placed in proximity to the detector. This is equivalent to searching for the effective anomalous field generated by the highly polarized source of 3He. In this case, however, the interaction is only modulated if the directionality of the source sample is modulated, giving control over the frequency of the sought signal. A future improved long-range spin-dependent interaction search would enable significant reach into the ALP parameter space. We note that this experiment probes the existence of ALP interactions regardless of whether they are a component of DM or not.
The O(10) improvement in the SNR of the planned future 3He-K comagnetometer would give an O(3) improvement on the bounds. Changing the geometry can further enhance the reach. A factor of O(10) improvement on the bound can be achieved by placing the sample source at a distance of 10 cm from the center of the comagnetometer, rather than the 50 cm distance of Ref. [47]. The specialized detector currently being built is much smaller than that of Ref. [47] and should thus allow for this close placement of the source sample. Such a setup would then allow probing masses a factor of 5 heavier than those probed in Ref. [47].
A final planned improvement, which should yield an additional enhancement of the reach by a factor of O(20), is the use of xenon in one of its non-gaseous phases (xenon ice, liquid xenon, or xenon snow) as the source material. By taking a cell of 5 cm radius filled with non-gaseous xenon, with an achievable polarization of 50%, a substantially increased number of polarized spins can be obtained. We note that the typically constructed cells of such polarized material are usually much smaller than this. However, Ref. [32] has discussed the use of such cells in one of their future phases, and cells of size 300 mL (oddly shaped, but close in volume to a 5 cm radius spherical cell) have already been used in Ref. [70] with 34% polarization of Xe (and since the thawed gas is said to lose about 50−80% of its initial polarization, we expect substantial improvement is possible in the absence of thawing). The greatest challenge of using the existing cell technology is that these cells are commonly housed in strong magnets, which could ruin the comagnetometer's shields, making the placement of the comagnetometer 10 cm away from the anomalous spin source a challenging task. Preliminary investigation, however, implies that these issues may be solved in the future, and our projected reach for this future experiment is shown by the dot-dashed red curve in Fig. 3. We note that this type of experiment has no independent sensitivity to g_aee.
VIII. SUMMARY
Comagnetometers present an innovative and underutilized avenue to probe ultra-light ALPs. With current setups far from optimized, and sensitivity spanning many decades of ALP masses, down to fuzzy dark matter [71-73] masses of O(10^−22 eV), comagnetometers hold great promise to detect relic ALPs. In this paper we have presented the foundation for current and future searches using comagnetometers to constrain and detect such ultra-light ALPs. Using publicly available partial comagnetometer data, we are able to place meaningful constraints on ALP couplings to neutrons and electrons, including, in the case of ALP-neutron interactions, the strongest terrestrial constraints to date over a broad range of masses, demonstrating the power of our approach. With future improvements to the experimental setup, the implementation of which is already underway, many different and interesting searches can be performed, with prospects to cut deep into uncharted ALP parameter space in the near future.
TABLE II. Typical values of the relevant scales (Γ_K and the precession frequencies defined in Appendix A) for the setups of Refs. [47], [48] and [49].

Appendix A: The Comagnetometer's Steady State

This appendix delves into the detailed description of the comagnetometer's steady state equations, leaving the time-dependence of the system to Appendix B. Our goal is to present some of the details of the derivation of Eqs. (1) and (2), and then discuss the derivation of Eq. (4).
The spin of each individual potassium atom is composed of an electronic spin-1/2 and a nuclear spin-3/2 configuration. As a consequence, the Bloch equations, which describe the spin degrees of freedom as 3-vectors, cannot be naively used, and a more complex density matrix formalism seems necessary. To simplify the situation, it is possible to integrate over the nuclear degrees of freedom to reach an effective spin-1/2 system, which then allows one to use the Bloch equations with some of the constants modified to account for the integrated-out degrees of freedom. This is precisely the method used to arrive at Eq. (1) (and similarly, Eq. (6)), with q, the slowing-down factor, encapsulating the nuclear degrees of freedom. Generally, R_e is not isotropic, and there is a much faster decay rate in the directions perpendicular to the magnetic field; however, in the so-called SERF regime in which we are working, this anisotropy can be neglected [74]. Finally, the 3He are spin-1/2 atoms with their spin stemming entirely from the neutron in the nucleus [75], and thus the Bloch equations are immediately applicable to them.
Let us consider approximate solutions to Eqs. (1),(2). Six degrees of freedom are at play: 3 from S_K and 3 from S_He. In the standard operating procedure, all of the magnetic fields (external, as well as those induced by the atoms on each other) are approximately aligned with the ẑ direction (which is the pump beam direction as well), so there are no transverse polarizations. As a consequence, at leading order there are only 2 degrees of freedom, corresponding to the ẑ polarizations. Moreover, after a time t ∼ 3/R_pu^eff, the system reaches a steady state, with zeroth-order polarizations S_K^z(0) and S_He^z(0) along ẑ at leading order in the misalignments. As can be seen in Eqs. (1),(2), the next order corrections would only contribute to the transverse components, as the leading effect of a misalignment is to rotate the spins without changing their absolute value, i.e. we expect S^⊥(1) ∼ S^z(0) · sin(θ), with θ representing the misalignment, while the longitudinal component receives no correction at order O(θ).
We may thus conclude that the first order equations have four real degrees of freedom, corresponding to the four transverse components. These equations can be written more compactly by complexifying a general 3-vector v = (v_x, v_y, v_z), writing it as v_C = v_x + iv_y together with v_z. The first order equations for the transverse components can therefore be written as a 2 × 2 linear ordinary differential equation with constant coefficients and inhomogeneous terms, Eq. (A3). Here ω_K = γ_e B_z/q + 2γ_e λμ_He S_He^z(0)/q and ω_He = γ_He B_z + 2γ_He λμ_K S_K^z(0) are the precession frequencies of the transverse components around the ẑ direction for the potassium and helium atoms' spins, respectively. Γ_K = R_e/q is the typical decay rate of a precession of the potassium atoms' spins. The off-diagonal term ω_K−He = 2γ_e λμ_He S_K^z(0)/q (ω_He−K = 2γ_He λμ_K S_He^z(0)) represents the rotation of the zeroth order ẑ potassium (helium) polarization around the magnetic field generated by the transverse helium (potassium) polarization. The inhomogeneous terms come from the rotation of the zeroth order ẑ polarizations around the small transverse magnetic and anomalous fields. The typical values of the scales appearing in this equation can be found in Table II. In the above, all terms proportional to the rates R_He and R_pu^eff were neglected, as they are much slower than any other relevant rate. Additionally, anomalous fields in the ẑ direction were neglected, as they are significantly smaller than the external magnetic fields at play along this direction.
The goal of the detector is to measure the transverse anomalous fields, and Eq. (A3) implies that the transverse magnetic fields have a similar effect on the polarizations and therefore act as a background. However, because the terms are not exactly the same, and the detector only measures the potassium's spin, by tuning the magnetic fields along the ẑ direction it is possible to greatly decrease that background. To see this, let us consider the limit b_n^C = b_e^C = 0. The steady state solution, Ṡ_K^C = Ṡ_He^C = 0, then implies that, independently of the size and direction of the transverse magnetic fields, if the ẑ magnetic field is tuned to the value B_z = B_c defined in Eq. (A4), then S_K^C = 0. In other words, if the external magnetic field's ẑ component is tuned to B_c, then transverse magnetic fields have no first order effect on the steady-state transverse potassium polarization. When the system is in the state where B_z = B_c, it is said to be at the compensation point.
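The compensation-point mechanism can be illustrated with a small numeric sketch. The toy steady-state equations below are our own schematic rendering of a complexified first-order system of this type (with Γ_He → 0), and all parameter values are arbitrary illustrative numbers rather than those of any real apparatus; the point is only that the alkali response to an ordinary transverse field vanishes at B_z = B_c while the response to an anomalous field does not.

```python
import numpy as np

# Toy complexified steady-state equations for the coupled K-He transverse
# spins (Gamma_He -> 0). All values are arbitrary illustrative numbers.
ge, gHe = 2.8e6, 3.2e3      # effective gyromagnetic ratios (gamma_e/q, gamma_He)
GammaK = 2.0e3              # alkali spin decay rate
Skz, Shez = 0.5, 1.0        # zeroth-order longitudinal polarizations
lK, lHe = 1.0e-3, 2.0e-3    # spin-coupling field constants

def SK_transverse(Bz, BC, bn=0.0, be=0.0):
    """Steady-state complex transverse alkali spin for the given fields."""
    M = np.array([[-1j*ge*(Bz + lHe*Shez) - GammaK, 1j*ge*Skz*lHe],
                  [1j*gHe*Shez*lK, -1j*gHe*(Bz + lK*Skz)]])
    rhs = np.array([-1j*ge*Skz*(BC + be), -1j*gHe*Shez*(BC + bn)])
    return np.linalg.solve(M, rhs)[0]

Bc = -(lHe*Shez + lK*Skz)   # compensation field of this toy model
print(abs(SK_transverse(Bc, BC=1e-6)))          # ~0 (to floating-point error)
print(abs(SK_transverse(Bc, BC=0.0, bn=1e-6)))  # finite: anomalous field survives
```

In this toy model the ordinary field B_C drops out of the alkali solution exactly at B_z = B_c, leaving S_K^C ∝ (b_n − b_e), mirroring the structure of Eq. (4).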
We note here that away from the compensation point, the sensitivity to non-anomalous transverse magnetic fields is restored. For this reason, magnetic shielding is crucial, allowing for the stabilization of the system around that point. We also note that, as explained in section 3 of Ref. [49], the µ-metal shields used in such systems do not shield anomalous fields (in short, µ-metal magnetic shields do not respond to b_n, while their response to ordinary magnetic fields generates an oppositely directed magnetic field, which a comagnetometer tuned to the compensation point would be insensitive to). At the compensation point, the sensitivity to constant anomalous fields is easily found by taking the steady state condition again, Ṡ_K^C = Ṡ_He^C = 0, and solving for S_K^C and S_He^C; one finds the solution of Eq. (A5), where we kept only the leading order in 1/R_e, which is the fastest rate in this setup. Note that Eq. (A5) is equivalent to Eq. (4) from the main text, up to notation. From the above one sees that indeed the alkali's transverse magnetization is insensitive to the external magnetic field, while that of the helium-3 is sensitive (thereby allowing the cancellation in the alkali system).
Appendix B: The Dynamical Response of the Comagnetometer
Our goal is to understand the dynamical response of a system described by Eq. (A3). When an anomalous field is rapidly oscillating, the spins in the system are unable to follow the changes sufficiently fast, and the signal is therefore suppressed. Additionally, at the compensation point, the nuclear spin must follow the outside magnetic field in order to cancel the total magnetic field felt by the alkali. This does not occur for a rapidly varying field, and as a result the alkali spins will be affected by the external magnetic fields, implying a subpar noise cancellation. It is thus clear that the dynamical response to changing fields is crucial, and in this appendix we explain how the effects of abrupt changes and oscillating fields on the comagnetometer can be calculated, summarizing the main results of the calculation.
The solution to a linear non-homogeneous 2 × 2 ODE such as Eq. (A3) is composed of homogeneous and inhomogeneous contributions. In the steady-state limit and after a sufficiently long time (compared with the inverse decay rate of the system, to be discussed below), the homogeneous solution is exponentially small and the system is described by the inhomogeneous contribution, which in our case is controlled by the (possibly oscillating) fields. We stress that near the compensation point, and for low magnetic frequencies, this part of the alkali's solution is insensitive to the non-anomalous fields (see Eq. (A5); for the higher frequencies to be discussed below, this is no longer true; however, our treatment here of abrupt changes in the fields remains intact). Conversely, before reaching the steady-state regime, the homogeneous solutions, determined by initial conditions, play an important role.
Time dependence in the system therefore enters in two distinct ways: (i) abrupt changes drive the system away from the steady-state solution and can be described via initial conditions which alter the homogeneous solutions, and (ii) oscillatory fields, the response to which is described within the steady-state regime, show up in the inhomogeneous part of the solution. We now discuss each of these contributions separately.
The Homogeneous Solution: Response to Abrupt Changes
Relevant abrupt changes in the comagnetometer system would appear as sudden variations in the non-anomalous transverse magnetic fields, which show up in the first inhomogeneous terms of Eq. (A3). Such changes keep the compensation point intact [see Eq. (A4)]; however, at short time scales the helium-3 is too slow to align with the new magnetic fields, and hence its influence on the alkali (through an induced magnetic field) does not cancel the external magnetic field. During this time, the system is susceptible to these fields and the sensitivity to anomalous fields is impaired.
How is the above picture reflected in the solutions to Eqs. (1) and (2), and subsequently Eq. (A3)? While the numerical solution corresponding to the above discussion is easy to derive, the analytic solution is rather cumbersome and uninformative, and hence we do not reproduce it here. Instead, let us explain the important effects in the solution.
As discussed above, sufficiently close to the compensation point, the inhomogeneous part of the alkali's solution (which essentially describes the late-time steady-state behavior of the system) is largely independent of the non-anomalous fields, and therefore sudden changes (typically relevant for low-frequency magnetic modes) in those fields can only appear in the homogeneous contribution. This is not the case for the helium-3, whose inhomogeneous solution depends on all magnetic fields [Eq. (A6)] and therefore changes upon a sudden change in the external fields. Meanwhile, the homogeneous part of the solution (of both atoms) depends only on the parameters of the system and not on the external fields; however, their coefficients (describing the most general solution), which are determined via the initial conditions, may regain such dependence. Since the two magnetometers are coupled (as is apparent through the off-diagonal terms in Eq. (A3)), the homogeneous solutions of the two atoms are not aligned with the alkali and helium-3 modes. The dependence of the helium-3 solution on the non-anomalous fields therefore influences the coefficients and remains important so long as the homogeneous solution is not exponentially diluted (i.e. before the system reaches steady-state). From that point of view, the homogeneous part of the solution encodes the system's ability to respond to sudden changes in the inhomogeneous terms.
In the discussion so far, the system was described at short timescales, before it can reach its steady-state behavior. Let us now estimate this timescale. If not for the coupling of the two spin ensembles, there would be two distinct modes, one for the alkali and one for the noble gas. The rate with which the noble gas's mode decays in such a case is smaller than that of the alkali by many orders of magnitude. The interaction between the atoms mixes the two spin modes, and the resulting system is described by two new eigenmodes with two new respective eigenvalues. Since we mostly care about how long it takes the system to reach equilibrium, it is sufficient to discuss the slower decay rate, Γ_slow. Neglecting the rates R_K−He, R_He−K (which would have been the real components of the off-diagonal terms of Eq. (A3)) and R_He (which are mostly irrelevant in the systems at hand), one finds an expression for Γ_slow in terms of δω ≡ ω_K − ω_He. The result is only an order of magnitude smaller than the (mostly) alkali mode's decay rate in typical systems, for which Γ_K ∼ √(ω_He−K ω_K−He). We point out that at the compensation point, the higher order corrections in 1/δω can become important. While highly dependent on the precise details, one often finds the two eigenvalues' real parts to be of the same order of magnitude (see Table II for the values for Refs. [47-49]).
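Continuing the toy model from the previous sketch, the two decay rates are simply minus the real parts of the eigenvalues of the homogeneous 2 × 2 matrix; again, the numbers are purely illustrative:

```python
import numpy as np

ge, gHe, GammaK = 2.8e6, 3.2e3, 2.0e3      # same toy parameters as above
Skz, Shez, lK, lHe = 0.5, 1.0, 1.0e-3, 2.0e-3
Bz = -(lHe*Shez + lK*Skz)                  # sit at the compensation point

M = np.array([[-1j*ge*(Bz + lHe*Shez) - GammaK, 1j*ge*Skz*lHe],
              [1j*gHe*Shez*lK, -1j*gHe*(Bz + lK*Skz)]])
Gamma_slow, Gamma_fast = np.sort(-np.linalg.eigvals(M).real)
print(Gamma_slow, Gamma_fast)              # equilibration time ~ 1/Gamma_slow
```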
As an example of why the above discussion could be important, consider the case of Ref. [49], in which the detector changes its direction every few seconds. Sudden changes in the magnetic fields are then expected, due to possible field penetration as well as inner thermal noise of the magnetic shields. If the rate with which the system reaches equilibrium after each rotation is slower than the rate of rotations, the system never converges to its steady-state behavior. As a result, the homogeneous terms proportional to the magnetic fields can add significant contributions to the signal. Under realistic laboratory conditions, and even without such a clear intervention in the detector's environmental conditions, sudden changes in the magnetic fields occur, and unless the detector's response time is fast enough, they can impair the measurements. Fortunately, since Γ_slow ∼ 10 sec^−1, abrupt changes are treated rather efficiently in the systems studied here.
The Inhomogeneous Solutions: Response to Oscillatory Fields
Let us now discuss the steady-state response of the system to high-frequency magnetic fields. The inhomogeneous solutions have three contributions. The first two are from the electron anomalous field and the nuclear anomalous field; these terms relate the alkali's spin measurement to those anomalous fields. The third contribution comes from oscillating magnetic fields. It is this contribution that dictates the system's ability to cancel noise at a given frequency of oscillation.
Unlike the homogeneous solutions, whose frequency is determined by the linear system's parameters, an inhomogeneous term with a certain frequency will only induce an inhomogeneous solution of that frequency. As can be seen from Eq. (A3), an important change in the presence of an oscillating field with a frequency ω is that the steady-state solution is no longer found by taking Ṡ_K^C = Ṡ_He^C = 0, but rather Ṡ_K^C = iωS_K^C and Ṡ_He^C = iωS_He^C. Much like in the case of the homogeneous solutions, the actual results are easy to calculate, but have cumbersome formulas. Nonetheless, close to the compensation point, and neglecting R_He−K, R_K−He, R_He, one finds the approximate closed form solutions of Eqs. (B2) and (B3), where S_K,He^C(ω) are the inhomogeneous contributions of the fields, and P_1(ω), P_2(ω) are polynomials of degree one and two in ω, written using the notations of Eq. (A3) in Eqs. (B4) and (B5). Note that due to the ALP field oscillating as cos(m_a t + θ_0), with θ_0 an unknown phase, the negative and positive frequencies are mixed, and therefore the final dependence on the ALP field will be a symmetrized version of Eqs. (B2) and (B3).
While it is not yet entirely known what governs the noise spectrum of the comagnetometer at low frequencies (see e.g. Refs. [47-49, 76] for calculations of the noise from theoretical arguments, and compare with the results of Refs. [47-49]), at higher frequencies (usually ω ≳ Γ_K or |ω| ≳ |ω_c| ≃ |ω_He|) there are reasons to believe that magnetic noise is the dominant factor. Eq. (B2) shows that such magnetic noise would enter the (measured) alkali's magnetization, and thus one can approximate the ω-dependence of the signal-to-noise ratio due to the suppressed response to the ALP-neutron (ALP-electron) interaction by dividing the coefficient of b_n (b_e) by that of B in Eq. (B2). The conclusion is therefore that for ALP-neutron interactions we expect an approximately linear decrease in the signal-to-noise sensitivity at high frequencies (the ratio is ∝ 1/ω), while we do not expect such a decrease for ALP-electron interactions [the ratio is ∝ P_1(ω)/ω ∼ O(ω^0)].
Appendix C: Effects of Signal Directionality
Here we discuss in detail the procedure for treating signal directionality. For the datasets used in this paper, Refs. [47-49], a simplified treatment sufficed (see Appendix F); however, here we lay the groundwork for the formal treatment of velocity directionality, which will be relevant in the future with new independent high-resolution data. Throughout this appendix we assume a measurement time t_tot ≫ day, as the shortest data-taking session used in our bounds was 4 days long.
The data in Refs. [47,49] is given in the form of Eq. (9) (the data of Ref. [48] is given in a similar, yet not identical, form). The directional dependence on the relative ALP velocity is apparent, but the relative velocity of the ALPs with respect to earth is highly model dependent [62]. Different models can also change the local DM density significantly. Moreover, even for a given model, the local velocity and local density of the ALPs are statistical quantities, and thus need to be treated as such [61]. Our treatment of the effects of the non-deterministic properties of ALPs is presented in Appendix D. For the purposes of this appendix, we take the ALPs to have a constant relative velocity v and a constant density ρ_DM.
Under these assumptions, we look at the result of plugging the anomalous field implied by the Hamiltonian of Eq. (7) into the integrand of Eq. (9), finding the expression of Eq. (C1), where we took the square of the absolute value of the amplitude, as we are interested in the root mean square over different parameters such as the relative velocity's direction and initial phases. We defined c ≡ 2ρ_a |v|^2 g_aNN^2/(γ_n^2 t_tot) in order to make the equations more tractable. t_tot is the total measurement time. E_a is the ALP energy, and because the ALP is non-relativistic, E_a = m_a + m_a v^2/2 ≃ m_a (we will address the importance of deviations from that assumption in Appendix D). σ̂ is the sensitive direction of the detector. We allowed an initial relative phase for the ALP field, θ_0, which we will also discuss in further detail in Appendix D.
The measurement itself is of the change of polarization of the probe beam behind the cell, rather than of b_n·σ̂, and while the change of polarization is proportional to S_K^x (which is proportional to b_n·σ̂), there are calibration factors. These factors are measured individually by the different experiments, with the data given after calibration. The calibration is done by checking the low frequency response, and therefore at higher frequencies a correction is necessary, as was discussed in Appendix B. For this appendix, however, we shall assume that the data given is after the necessary additional corrections for the higher frequencies were made, and thus Eq. (C1) can be taken as the signal we are given.
To find σ̂, let us use the coordinate system where ẑ is the direction of earth's rotation axis. We define the x−z plane so that at t = 0, the observer of an experiment on earth is described by (R_⊕ sin(θ), 0, R_⊕ cos(θ)), where R_⊕ is the earth's radius and the observer's latitude coordinate is π/2 − θ. At time t, the observer's position is therefore r(t) = R_⊕(sin(θ) cos(Ω_SD t), sin(θ) sin(Ω_SD t), cos(θ)), (C2) with Ω_SD ≃ 2π/day the sidereal day frequency.
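A minimal sketch of the resulting time-dependent sensitive directions follows; the parametrization of the local north and east unit vectors is our own, implied by Eq. (C2), while the precise σ̂ used in the analysis are those of Eqs. (C3)-(C5).

```python
import numpy as np

OMEGA_SD = 2.0 * np.pi / 86164.0           # sidereal day frequency, rad/sec

def sigma_NS(theta, t):
    """Local north unit vector: -d r_hat / d theta (theta is the colatitude)."""
    return np.array([-np.cos(theta) * np.cos(OMEGA_SD * t),
                     -np.cos(theta) * np.sin(OMEGA_SD * t),
                     np.sin(theta)])

def sigma_EW(theta, t):
    """Local east unit vector: the normalized tangent to the earth's rotation."""
    return np.array([-np.sin(OMEGA_SD * t), np.cos(OMEGA_SD * t), 0.0])

# O(1) daily modulation of the projection of a fixed ALP velocity direction:
v_hat = np.array([0.0, 1.0, 0.0])          # assumed fixed direction, illustration
theta = np.pi / 2 - np.deg2rad(40.0)       # an observer at latitude 40 deg
for t in np.linspace(0.0, 86164.0, 5):
    print(f"t = {t/3600:5.1f} h  NS: {v_hat @ sigma_NS(theta, t):+.2f}"
          f"  EW: {v_hat @ sigma_EW(theta, t):+.2f}")
```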
For the experiment of Ref. [49], the detector's sensitive direction alternated between the north-south (NS) and the east-west (EW) directions every few seconds. Thus, from this experiment we have low-frequency measurements for two different σ̂, one for the NS measurements and one for the EW measurements. For each of the three directions, we plug the appropriate one of Eqs. (C3)-(C5) into Eq. (C1) to get the expected signal. The resulting signal is a complicated function of the many different parameters, and we thus do not show it here. However, as we do want to examine the expected form of the signal, it is useful to look at |A(ω, σ̂)|^2 averaged over all possible directions v̂ and over the initial phase θ_0. For any of the three directions, the resulting averaged signal squared takes the form of Eq. (C6), where the coefficients a_i(σ̂) do not depend on the frequencies or the mass, and are in fact only dependent on σ̂(t = 0). The frequencies ω_i have six possible values: the sum of one of the two {m_a, −m_a} with one of the three {Ω_SD, −Ω_SD, 0}. This form is reasonable, as when t_tot → ∞ these terms become delta functions (up to normalization), and as we have not yet included the velocity smearing that will be discussed in Appendix D, the ALPs are indeed infinitely sharp in frequency, albeit possibly shifted due to the earth's rotation. As long as m_a is not within ∼2π/t_tot of 0, Ω_SD, or Ω_SD/2, we have |ω_i − ω_j| ≳ 2π/t_tot for all 1 ≤ i < j ≤ 6 [Eq. (C7)]. When Eq. (C7) holds, A(ω) takes a similar form to Eq. (C6), albeit with a_i(σ̂) → a_i(σ̂, v̂). When this condition is not met, the signal might smear between different ω_i's and take a more complicated form. When specifically m_a ≲ 2π/t_tot, the effects of θ_0 are not negligible, and its stochastic nature must be accounted for (see Appendix D and Appendix F). We note that had we not assumed t_tot ≫ day, the device's longitude coordinate and the hour at which measurements started would have played a role as well.
Appendix D: Effects of Non-Deterministic Signal
Eq. (8) presents us with the expected average field of the ALPs throughout the galaxy. However, due to the stochastic nature of the ALP field, E_a, v, θ_0 and ρ_a should not be treated as their average values throughout the galaxy when measured only for a short time. Indeed, as we move in the galactic plane, we pass through spatial gradients of the ALP field [77], and the local properties of the ALP field should be thought of as random variables sampled from a distribution centered around the average values. While there is debate in the astrophysics literature as to the size of these gradients, here we take the conservative approach of Ref. [78], taking the typical scale of these gradients to be the de Broglie wavelength of the ALPs, ∼2π/(m_a v_virial).
Recently, Ref. [61] has shown how to treat the effect of the stochastic nature of the ALP field, and we base the methods presented in this appendix on theirs. For the ALP velocity distribution, we also use Ref. [79]. While Ref. [79] discussed the detection of DM scattering via direct detection, their general formulas for finding the relative velocities of virialized DM are useful for our discussion as well.
We will now discuss the different variables that were taken as non-deterministic, and the distributions of these variables. After that, we discuss the coherence time of the signal. The coherence time plays an important role in our treatment of the stochastic nature of the ALP field: we take an independent sample of each of the non-deterministic variables every coherence time.
The stochastic nature of the ALP field

Here we discuss, one by one, the non-deterministic variables of Eq. (8) (identical to the non-deterministic variables that affect b_e), and the distribution chosen for each.

The Initial Signal Phase. As we have already briefly mentioned in Appendix C, when E_a t_tot ≳ 2π, the initial phase θ_0 becomes of little importance, as one goes through at least one oscillation of cos(E_a t + θ_0). Conversely, when E_a t_tot ≪ 1, the signal can be highly dependent on that phase. As this phase is entirely random, we sample it from a uniform distribution between 0 and 2π.
The ALP Density. The anomalous field (Eq. (8)) depends on the square root of the DM energy density. The square root of the ALP energy density, √ρ_a, is Rayleigh distributed [61] around √ρ_DM, with ρ_DM = 0.4 GeV/cm^3. We therefore sample √ρ_a from the corresponding Rayleigh probability density function, Eq. (D1).

The Velocity Distribution. Following Ref. [79], we take a Standard Halo Model (SHM), where our relative velocity compared to the DM is given in Eq. (D2) in terms of v_⊙ = (11, 232, 7) km/sec, the velocity of the sun with respect to the galaxy in galactic coordinates, and v_SHM, the randomly sampled DM velocity in the SHM with respect to the galactic rest-frame. We also use Ref. [79] for their formulas (which we do not reproduce here) to transform Eqs. (C3)-(C5) to the galactic coordinates in which v_⊙ is given. We have neglected the velocity of the earth with respect to the sun, which would introduce ∼10% annual modulation, and we have neglected the velocity from the rotation of the detector around the earth's axis, which introduces sub-percent daily modulation. We emphasize that the daily modulations discussed in Appendix C come from the rotation of the detector's sensitive direction, and not from the small change in the detector's velocity due to earth's rotation. The SHM velocity's probability distribution function is f(v_SHM) = (1/Z) exp(−|v_SHM|^2/v_virial^2) Θ(v_esc − |v_SHM|), where Z is a normalization constant, Θ is the Heaviside function, v_esc = 550 km/sec is the galactic escape velocity, and v_virial = 220 km/sec is the virial velocity.

The Energy Distribution. The energy of the non-relativistic ALPs, E_a = m_a(1 + v_a^2/2), should be entirely determined by the sampled velocity discussed in the previous paragraph. However, as the smearing of the searched frequency introduces a finite coherence time of the ALP oscillations, it requires a more thorough discussion, which we perform below.
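A minimal numpy sketch of sampling the three distributions just described; the Rayleigh scale is chosen so that ⟨ρ_a⟩ = ρ_DM, and the truncated-Gaussian construction of the SHM as well as the sign convention of the relative velocity are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

RHO_DM = 0.4                                 # GeV/cm^3
V_VIR, V_ESC = 220.0, 550.0                  # km/sec
V_SUN = np.array([11.0, 232.0, 7.0])         # km/sec, galactic coordinates

def sample_theta0(n):
    return rng.uniform(0.0, 2.0 * np.pi, n)  # uniform initial phase

def sample_sqrt_rho(n):
    # Rayleigh with scale sqrt(RHO_DM/2), so that <rho_a> = RHO_DM
    return rng.rayleigh(scale=np.sqrt(RHO_DM / 2.0), size=n)

def sample_v_shm(n):
    # f(v) ~ exp(-|v|^2 / V_VIR^2) Theta(V_ESC - |v|): per-component Gaussian
    # of width V_VIR/sqrt(2), redrawing any draw beyond the escape velocity
    v = rng.normal(0.0, V_VIR / np.sqrt(2.0), size=(n, 3))
    bad = np.linalg.norm(v, axis=1) > V_ESC
    while bad.any():
        v[bad] = rng.normal(0.0, V_VIR / np.sqrt(2.0), size=(bad.sum(), 3))
        bad = np.linalg.norm(v, axis=1) > V_ESC
    return v

n = 200_000
print(np.mean(sample_sqrt_rho(n) ** 2))            # ~0.4 = RHO_DM
v_rel = V_SUN - sample_v_shm(n)                    # relative velocity (assumed sign)
print(np.mean(np.linalg.norm(v_rel, axis=1)))      # a few hundred km/sec
```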
Effects of finite signal coherence time
Neglecting the small corrections due to the finite galactic escape velocity in the SHM, the spread of velocities gives rise to a coherence time τ_a = 2π/(m_a v^2) ≃ 10^7/m_a [32].¹⁰ If a data-taking session is significantly shorter than the coherence time, we assume that the signal is entirely coherent throughout the measurement, i.e. only a single value should be sampled from the distributions discussed in this appendix.
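Numerically, with v = v_virial = 220 km/sec, a small arithmetic sketch of the coherence times at the masses relevant for the datasets above:

```python
import numpy as np

HBAR_EV_S = 6.582e-16          # hbar in eV*sec
V = 220.0 / 299_792.458        # v_virial in units of c

for m_ev in (1e-16, 1e-15, 1e-14):
    omega = m_ev / HBAR_EV_S                     # ALP frequency in rad/sec
    tau_days = 2 * np.pi / (omega * V**2) / 86400.0
    print(f"m_a = {m_ev:.0e} eV -> tau_a ~ {tau_days:8.1f} days")
```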
A coherent signal should scale linearly with t_tot, the measurement time, while the random noise scales as √t_tot, giving rise to an SNR [for S(ω)] that scales as √t_tot. This is why the data of Fig. 2 is given in the seemingly odd units of Gauss/√Hz. It is therefore expected that even if t_tot is increased, the noise spectrum will look the same, while any contribution of the signal will peak over the noise as t_tot increases.
Conversely, if t_tot > τ_a, for every τ_a that passes since the beginning of the measurements, we sample the distributions one more time. Following Ref. [77], we can also sketch how to understand the dependence of the SNR on t_tot once t_tot > τ_a. Assuming n coherence times have passed, t_tot = nτ_a, this implies adding incoherently (with random relative phases) n coherent measurements of length τ_a. When n ≫ 1, the expected measured amplitude of this incoherent summation is approximately the addition in quadrature (as in a random walk) of the measurements. Therefore, the signal would scale as ∼√n τ_a = √(t_tot · τ_a). This would imply that after the coherence time passes, there is no longer an advantage in taking longer measurements (as the signal-to-noise ratio no longer increases for t_tot > τ_a). In a dedicated experiment, as explained in Ref. [77], and in analogy to the prescribed procedures of Ref. [81], it can be possible to increase the sensitivity even after the coherence time passes using curve-fitting of the signal to smeared gaussians. However, since the data analyzed in this paper was not given with sufficient resolution and has gone through several processing procedures, we have not attempted such procedures.
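The random-walk scaling of the incoherent sum can be seen in a short numeric check (unit phasors with uniformly random phases; the mean summed amplitude approaches √(nπ)/2 ≈ 0.89√n):

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (1, 10, 100, 1000):
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(20_000, n))
    amp = np.abs(np.exp(1j * phases).sum(axis=1))
    print(n, np.mean(amp) / np.sqrt(n))   # ~0.89 for large n (exactly 1 at n=1)
```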
Despite the above discussion, there is an O(3) improvement in the 95% C.L. bound in the transition from t_tot ∼ τ_a to t_tot ∼ 5τ_a. This improvement arises because when we only have a single sample of the stochastic distribution, the signal might be unusually small (due to a small ρ_a, or a cancellation of v_⊙ and v_SHM). Conversely, when t_tot ≳ 5τ_a, that probability drops, as we sample 5 different values of our distribution, and while one of them might be small, on average they would not be consistently small. However, as was discussed in the above paragraph, after this improvement at t_tot ∼ 5τ_a, the SNR stops improving if one does not use curve fitting.

¹⁰ Since the ALP kinetic energy is (1/2) m_a v_virial^2, some authors use τ_a = 2π/(m_a v_virial^2/2), which is twice as long as the coherence time we use. Our shorter coherence time is conservative, and coincides with Ref. [80], which shows that τ_a = 2π/(m_a v_virial^2) gives the correct frequency spread from Doppler broadening considerations. Regardless, since the bounds depend only weakly on the exact coherence time, this factor of 2 does not affect the results significantly.
In the case of Ref. [47], we are given data that were averaged from multiple measurements. The measurements were taken over a period of ∼100 days. We have assumed that when τ_a(m_a) = 100 days, all non-deterministic variables were sampled only once. However, when τ_a ∼ 8 days, most measurements are spaced far enough apart to be considered independent samples of the stochastic distribution. We have used a simple interpolation to predict the suppression of the bound due to the stochastic nature of the ALP field between τ_a(m_a) = 100 days and τ_a(m_a) = 8 days. Similar, more complicated methods have yielded similar results.
We have used MC simulations for our final bounds and projections presented in Figs. 3 and 4. The procedure for finding the bounds and projections, after treating the effects discussed in all other appendices, is described in further detail in Appendix F.

... probes at different directions. At low frequencies, P_2(ω) and P_1(ω)/P_2(ω) (corresponding to the alkali's magnetization due to anomalous electron and neutron magnetic fields respectively; see Eq. (B2)) are almost purely imaginary; however, at higher frequencies they have a real contribution as well, leading to sensitivity to ALPs in the direction parallel to the probe beam. This requires us to specify, for the datasets which we analyze at high frequencies, the direction of the probe beam, which for both Refs. [47,48] was at 60° to the NS direction (and with no component in the direction of gravity).
By fitting to a function that is smeared, the effects of the ALP incoherence on the SNR described in Appendix D become apparent, as different sampled energies widen the expected signal. However, as no curve fitting was attempted, the effect of the finite coherence time has prevented any improvement in the SNR after τ_a passed, as described in the second part of Appendix D. For the future analysis, however, it was assumed that once the coherence time has been reached, the reach still improves as t_tot^{1/4}. At this point, we have a well defined procedure to extract the predicted signal of an ALP. What remains now is to account for the noise in order to find the 95% C.L. limit for a given m_a and either g_aNN or g_aee. The problem here is that, unlike the more common cases of direct detection experiments, in our setup the noise may in fact theoretically cancel the signal. Therefore, without any model for the noise, a specific measurement could be the remains of a cancellation to an unknown degree between the signal and the noise.
To solve this problem, we need to understand how the noise can affect the measurement. Assume that the noise at a given frequency ω, in a given experiment, oscillates with some unknown amplitude A_noise(ω) and an unknown initial phase. In this case, in order for a complete cancellation of the ALP signal at the same frequency, A(ω), to occur, not only do we need A(ω) = A_noise(ω), but also the two phases to match exactly. As these two phases are entirely independent, we expect the relative phase to be a uniformly distributed random variable between 0 and 2π. Therefore, for a given A_noise(ω), we can easily extract the 95% C.L. on A(ω) from the measured amplitude at ω. While we have no way of knowing A_noise(ω), we simply take the conservative approach and assume the value that gives the weakest bound. Note that the result is always bounded from below, and for any given A_noise(ω), the probability of a complete cancellation of signal and noise is infinitesimal.
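A sketch of this extraction, with the stochastic signal amplitude modeled, for illustration, as Rayleigh distributed with a random phase, and the bound defined as the smallest signal amplitude that would exceed the measurement for 95% of the random draws; the grid, the unit measured amplitude, and the Rayleigh model are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def p_below(g, a_noise, a_meas, n_mc=100_000):
    """P(signal + noise stays at or below the measured amplitude), with a
    Rayleigh-distributed stochastic signal amplitude and random phases."""
    sig = g * rng.rayleigh(1.0, n_mc) * np.exp(1j * rng.uniform(0, 2*np.pi, n_mc))
    noise = a_noise * np.exp(1j * rng.uniform(0, 2*np.pi, n_mc))
    return np.mean(np.abs(sig + noise) <= a_meas)

def g_bound_95(a_meas, a_noise):
    """Smallest g that would overshoot the measurement 95% of the time."""
    for g in np.linspace(0.01, 10.0 * a_meas, 1000):
        if p_below(g, a_noise, a_meas) <= 0.05:
            return g
    return np.inf

# A stochastic signal cannot cancel the noise consistently, so assuming more
# noise does not weaken the bound (it mildly tightens it); the conservative
# choice is therefore A_noise -> 0:
for a_noise in (0.0, 0.5, 1.0):
    print(a_noise, g_bound_95(1.0, a_noise))
```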
We find that nearly always, the weakest bounds are found when A_noise(ω) → 0. The main reason for this is the stochastic nature of the signal. The possibility given in the previous paragraph of A_noise(ω) = A_signal(ω) is impossible when the signal is stochastic in nature: even if the amplitudes are equal for a given v, ρ_a, when we sample the non-deterministic variables, they would not cancel consistently.
The treatment of the dataset of Ref. [49] is a bit more complicated than that of the other two, and there is more freedom in the choice of statistical test. An ALP of mass m_a would have a measurable amplitude at 5 different data points: the |m_a ± Ω_SD| frequencies in both the EW and the NS searches, and the m_a frequency in the NS search. We have taken the mean of these five measurements, though in the future we expect smarter choices can be made that could teach us further about the data (e.g. using the standard deviation of the five measurements to estimate the noise).
We note that when analyzing the data of Ref. [49], the three masses m_a = (0, Ω_SD/2, Ω_SD) require special treatment, since for such cases some of the different frequencies at which we attempt to find a signal coincide (e.g. for m_a = Ω_SD/2, m_a = −m_a + Ω_SD), or else we need the zero frequency data which we do not have. The analysis is a simple extension of the previously described procedures, so we do not reproduce it here.
Before moving to discuss the future projections, we finally note our efficiency estimates. For the data from Ref. [47], we are told that about 35% of the time, the detector was not actively measuring (e.g. due to the calibration routines), so the effective measurement time is only 65% of the reported t tot . As Ref. [49] uses the same procedures, we have taken its efficiency to be 0.65 × 0.5, as each of the two directions is actively measuring only half the time. For Ref. [48] it is written that the efficiency was between 0.2 and 0.6, so we have conservatively taken 0.2.
Future projections were calculated with a much simpler procedure compared to the bounds, since we cannot be sure of the precise experimental apparatus we will have. The reach is not to be thought of as an expected 95% C.L. bound, but as the expected measurable signal. The reach is taken as the sensitivity described in the text, for a single month of exposure, and under the assumption that the bound improves as t_tot^{1/4} when the measurement time is longer than the coherence time (instead of √t_tot for t_tot < τ_a). To account for the effects of the dynamical response, we assume the sensitivity weakens linearly for the b_n search at ω > ω_c (with ω_c given explicitly in Sec. VII).
The calculation of the long range spin-dependent interaction bound was written explicitly in Sec. VII, and it is effectively a rescaling of the bounds presented by Ref. [47] for their similar experiment.
Overseas Job Opportunities among Fresh Graduate Healthcare Workers: A SWOT Analysis
The chronic reluctance of Indonesian healthcare workers to work abroad is allegedly due to inadequate English language proficiency and a lack of family encouragement. The demand, opportunities, and benefits of working abroad are increasing from year to year. Unfortunately, this opportunity is poorly anticipated. This research highlights the opportunities for entry-level healthcare professionals and involves nursing, midwifery, and environmental health, which is what distinguishes it from previous research. The objective is to explore the current demand and challenges and to offer strategies to elevate the interest of Indonesian healthcare workers in working abroad. This study used a quantitative method with a Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis, supported by the PICOT (Participants, Intervention, Comparison, Outcome, Time) model, to differentiate the demand, interest, and challenges of the professions. A mixed questionnaire was distributed among a population of 148 students from 3 majors, nursing (n=84), midwifery (n=23), and environmental health (n=61), collected through purposive sampling; the population consisted of graduate degree holders (n=87) and diploma holders (n=61). Results showed that the nursing profession was the most dominant and in demand abroad. The participants were interested if training was provided (n=141 or 84%). The weaknesses were lack of language proficiency (n=93) and lack of preparation (n=76). The challenges were mainly due to inadequate preparation (language, family support, and finances). The main finding of this research is that the increasing demand for overseas jobs is not matched by language preparation and family support. The study recommends a structured preparation program with an integrated approach during college time.

INDEX TERMS Overseas jobs, Indonesian healthcare workers, SWOT Analysis.
I. INTRODUCTION
Issues related to the migration of healthcare workers from one country to another have been widely discussed in the last two decades [1]. Many studies have elaborated on the causes and problems behind the migration of healthcare workers (HCW) on an international scale [2]. These phenomena show how important the roles of healthcare workers in health services are [3]. During the Covid-19 era, for example, the shortage of nurses reached 20% [4]. Yet, in many rich and developed countries, the tendency of people to take up the healthcare professions is decreasing, which is worsening the healthcare service conditions faced by the modern world [5]. The International Council of Nurses (ICN) reported that 13 million nurses will be needed to fill the gap in the future [6]. In general, the demand for healthcare workers from third-world countries seems endless [7]. As a result, the high demand for HCW from countries such as India, the Philippines, and Indonesia keeps increasing. Unfortunately, due to some problems, there is an imbalance between demand and supply during the recruitment process [8]. It is acknowledged worldwide that working abroad offers various benefits, ranging from economic, social, cultural, and educational to religious interests [9]. The benefits vary depending on the country of residence, the institution of employment, the form and size of the healthcare services, the position offered, work experience, and the candidate's specialization [10]. In general, in terms of profession, the demand for working abroad as a HCW has never subsided. Nurses are the most needed healthcare professionals, besides midwives and environmental health professionals [11]. The demand for Indonesian HCW began in the late 80s with placements in Saudi Arabia, followed by other countries, i.e. the United Arab Emirates (UAE), Malaysia, Singapore, Brunei, the Netherlands, Qatar, Oman, Bahrain, Australia, Canada, the USA, Japan, and Germany [12]. The opportunities were not well anticipated [13]. Allegedly, the main causes were lack of language proficiency and lack of support from family, parents, or spouses for those who are married [14]. The number of workers needed is in the hundreds to thousands. Nevertheless, the trend of sending Indonesian HCW abroad over the last three decades has not shown a significant increase [15]. At the same time, domestically, Indonesian HCW face employment problems due to overproduction, lack of opportunities, and low remuneration [16].
Previous studies have revealed much about the correlation between wages and job satisfaction, which is closely related to the welfare of HCWs [17]. The relationship between the competence and well-being of HCW migrants is also widely discussed [8]. Research in Australia and Canada explored how stress was experienced by migrant workers in the workplace [8], [18]. More than a few studies have discussed HCWs' contributions in detail, for example in the USA [19], Japan [20], and Norway [21], to countries that are members of the OECD (Organisation for Economic Co-operation and Development) [22]. Migrant workers have expectations [8], the most common of which are gaining experience [21], professional development [8], and improving economic conditions [8]. It must be admitted that achieving those expectations and goals is not easy, because migrant workers are always faced with challenges, as experienced by HCWs from India and the Philippines in the UK [18].
Researchers agree that despite its challenges and drawbacks, working abroad is highly profitable [23]. Healthcare workers from developed countries such as the USA, Canada, and Australia spreading to other continents, namely the Middle East, Africa, Europe, and even Southeast Asia, is a concrete example [24]. Those facts prove that working abroad has its own appeal. In the nursing profession, for instance, there is the professional role of the Travel Nurse, a nurse whose job is to accompany mobile clients from one country to another. It is therefore interesting to study why the interest of Indonesian HCWs seems to remain constant amid the incessant demand for overseas placement over the years.
This article analyzes the strengths, weaknesses, opportunities and threats of Indonesian fresh-graduate HCWs' interest in working overseas by applying a SWOT Analysis. The PICOT formula was also used to help the researchers differentiate the demand, interest and challenges of each healthcare profession. The implication of the study is to complement the results of previous research that has not yet addressed fresh graduates who are interested in working abroad, as well as to inform healthcare education providers, lecturers and authorized government agencies. The objective is to explore the current situation, demand, and challenges faced by Indonesian entry-level HCWs and to offer the right strategy to fill the gap in overseas job opportunities. Many studies have discussed the opportunities, advantages, and challenges of HCWs, but the majority focus on the global nursing profession. The fundamental difference of this research is that the emphasis is on entry-level healthcare professionals and it also involves midwives and environmental health professionals.
II. METHODS
This research is quantitative with a SWOT analysis design. The method was chosen because it enabled us to identify the Strengths and Weaknesses, possible Opportunities and potential Threats related to fresh-graduate HCWs' interest [25]. A similar method was used in nursing by a previous researcher [26].
A. DATA COLLECTION
The primary data collection instrument was a questionnaire distributed with the assistance of Google Forms. It was used to ensure that information on the variables of interest was gathered systematically and allowed respondents to answer questions and evaluate results. This initial data collection stage was carried out during a webinar at the Institute of Health Sciences (Stikes) of Widyagama Husada Malang on 16 October 2021. The questionnaire was a mixed questionnaire extracted from a validated questionnaire [27]. The population was purposively taken from the 168 students (n=168) taking part in the webinar, of whom 14% (n=23) were midwifery students, 36% (n=61) were environmental health students, and 50% (n=84) were nursing students. The survey was carried out online, after obtaining ethical clearance from the institute.
B. DATA GROUPING
In the second stage, we conducted data grouping, i.e. separating data based on inclusion and exclusion criteria. These criteria are based on the results of classifying 8 questions: 2 questions on educational background, 2 questions on overseas work opportunities, 2 questions on preparedness to work abroad, and 2 questions on the challenges.
C. DATA MEASUREMENT
The third step was data measurement, in which we used nominal (numeric) scales to label the variables. The independent variable is the job, and the dependent variable is the nursing profession. The measurements were carried out using a self-completed questionnaire. The SWOT Analysis diagram is as follows.
The above diagram projects two groups of analysis, positive and negative. The positive group covers the Strengths (S) and Opportunities (O), and the negative group covers the Weaknesses (W) and Threats (T). The strengths are internal factors that include the advantages possessed by Indonesian HCWs that may attract users from other countries. Meanwhile, the weaknesses comprise the shortcomings of the HCWs that hinder the recruitment process.
The Opportunities (O) and Threats (T) are the external factors. The opportunities consist of the identified factors that provide openings for the HCWs. The threats are the potential external risks that burden the recruitment process. The summary was then combined with the data selected according to the research focus, namely interest in and work opportunities abroad, with a sample population of 168 students of Stikes Widyagama Husada Malang. To differentiate the demand or requirements, interest and challenges of each healthcare profession, we used the PICOT (Population, Intervention, Comparison, Outcome, Time) formula as the instrument. The Population was fresh graduates of the healthcare professions, the Intervention was the interest in working abroad, the Comparison was nurses, midwives and environmental health professionals, the Outcome was healthcare professionals who fulfill the requirements, and the Time was after the graduation year of 2021-2022. After entering the data into the formula, we identified the gaps and suggested possible solutions.
A. Study Selection
The study selection was taken from the tabulated questionnaire, i.e. a self-completed questionnaire processed using a numbering system. The number of students participating in the study was 168 (n=168), respectively from the departments of midwifery (n=23 or 14%), environmental health (n=61 or 36%) and nursing (n=84 or 50%) of the Health Institute (Stikes) of Widyagama Husada Malang. They were undergraduate (Strata 1) students (87%) and diploma students (13%). Most of them believed that overseas job opportunities were always available (n=136 or 81%), particularly for HCWs (n=53 or 91%). However, 75 respondents (45%) felt that HCWs in Indonesia were ready to take up the job opportunities, 66 (39%) felt they were not ready, and 27 (17%) did not know. Almost half of them (n=82 or 49%) were interested in working abroad. Their obstacles were language skills (n=76 or 45%), family support (n=45 or 27%), and feeling financially less well off (n=27 or 17%). Nonetheless, 84% (n=141) were confident they could take up the opportunity if training were provided. As summarized in the figures (not reproduced here), the biggest obstacle faced by healthcare workers was language acquisition (n=84 or 50%).
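For transparency about how the percentages above follow from the raw tallies, the short Python sketch below reproduces the arithmetic. The variable names are ours and the counts are taken directly from the text; note that the source rounds some percentages slightly differently.

# Respondents by major (n = 168 in total)
counts = {"nursing": 84, "environmental_health": 61, "midwifery": 23}
total = sum(counts.values())  # 168

for major, n in counts.items():
    print(f"{major}: n={n} ({100 * n / total:.1f}%)")

# Self-reported obstacles among the same respondents
obstacles = {"language_skills": 76, "family_support": 45, "financial": 27}
for obstacle, n in obstacles.items():
    print(f"{obstacle}: n={n} ({100 * n / total:.1f}%)")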
B. SWOT Analysis
The following SWOT diagram content was extracted from the questionnaire tabulation, supported by scientific journals. Regarding the Opportunities (O), according to the government agency (BP2MI, 2021), the demand is still high. Choices are available in different countries, from the Netherlands, Germany and Middle Eastern countries to Japan [28]. As for the Threats (T), India and the Philippines are the two countries that send the most nurses abroad, and they act as major global competitors [18]. Other threats are technological change [29] and the current Covid-19 pandemic [30]. The figures above (3 and 4) portray the current condition of fresh-graduate HCWs, who are academically capable and competent because of their majors in healthcare expertise, namely nursing, midwifery, and environmental health, produced by an accredited university, but who at the same time face a language barrier, minimal family support and a lack of funds. They are in high demand and needed by many developed countries.
C. PICOT Formula
The table above is the result of data identification entered into the PICOT formula. The same formula was implemented by a previous author to support his evidence-based change of nursing practice project, in which he used SWOT Analysis to measure the effect of teamwork with the support of the PICOT formula [31]. The results show that the nursing profession is dominant among HCWs (50%). All HCWs expressed that the biggest obstacle was the language barrier (50%). However, if trained, their interest in going abroad is enormous (84%). From the three data groups processed by SWOT Analysis and mapped using the PICOT formula, with the support of journals and various sources related to the issue of migrant health workers, the need for healthcare workers is very promising. Indonesia has the potential to meet this demand due to the high number of its healthcare education institutions [32]. However, the data of this study show that the readiness of healthcare workers in Indonesia is relatively lacking and still stagnant.
III. DISCUSSION
The basis for the discussion below is each aspect contained in the SWOT Analysis (Figures 3 and 4), in which each aspect (strengths, weaknesses, opportunities, threats) contains problems that need to be solved. In addition, in the PICOT formula (Table 2), the aspect discussed is the Outcome (O) point regarding the problems faced by respondents.
A. STRENGTHS
Some of the potential strengths of Indonesian HCWs as professionals, as shown in the results of the questionnaire, are the number of graduates (86.9%), their interest (45.8%), and their professional education background, especially nursing (50%). The main requirements for working abroad are a minimum of a diploma or bachelor's degree, a minimum of two or three years of work experience, possession of a Registration Certificate (STR), passing an interview/written test, and a medical check-up [33]. In a SWOT analysis, the Strengths category includes human resources, infrastructure, training, support, and technology (hardware and software) [34]. As the fourth largest country in terms of population, Indonesia has a large pool of human resources in healthcare [35]. From the viewpoint of educational institutions, more than 3,000 public and private campuses are available [36]. Nursing and midwifery majors number more than 1,400 [26]. At the diploma education level, 38 Polytechnic of Health (Poltekkes) campuses of the Ministry of Health, with more than 54,700 students from various majors, are accessible [32]. Students from various health majors have a wide practice area in the country, from laboratory technician, pharmacy, physiotherapy, radiology and nutrition to nursing. Indonesia has more than 2,800 hospitals and more than 9,900 Public Health Centers (Puskesmas) [37]. This is not to mention other practical facilities and infrastructure, including the large number of patients as case study material for students. Indonesia possesses a Health Manpower Council (MTKI) that regulates the health workforce system and acts as a regulator that determines National Health Service standards [38]. The curriculum applied in educational institutions is regulated by the Ministry of Higher Education, Research and Technology and the Ministry of Health. They standardize the studies of health professionals so that graduates can compete on the international stage, especially in the global era [39]. Therefore, HCWs are expected to be competent and able to compete in the global market [40]. Some campuses with an "A" accreditation are equipped with adequate facilities, highly qualified lecturers who graduated from overseas campuses, and modern laboratory practice tools [41]. The modernization of those campuses strives to answer the challenges of the era of globalization and industry. The Indonesian Healthcare Workers Council (MTKI) is a regulator that provides registration status for professional healthcare workers, whose certificates are needed not only for practice purposes domestically but also abroad [37]. The registration status of healthcare workers is also required to ensure that HCWs are competent [42]. Moreover, Indonesia has a non-ministerial institution called the Indonesian Migrant Workers Protection Agency (BP2MI) that directs, bridges and protects Indonesian healthcare workers who are interested in working abroad [43]. Given all these strengths, it is puzzling that so many overseas job opportunities available every year remain unfilled.
B. WEAKNESSES
The data processing results in this study showed three main problems faced by Indonesian HCWs, namely lack of mastery of foreign languages (55.4%), poor family or parental support (27.4%) and insufficient preparation (45.2%). Only some felt that they were economically unable to finance the recruitment process (14%). According to SWOT Analysis theory, the Weaknesses category includes all internal elements that hinder the achievement of goals, such as lack of support, resources, technology, and infrastructure [44]. Weaknesses in foreign language proficiency and in parental or family support were dominantly felt by respondents who were interested in working abroad. The language barrier occurs because the curriculum refers to the national curriculum without maximizing 'local wisdom' content (muatan lokal). The portion of semester credits for foreign languages such as English, Japanese or Arabic in the curriculum is small. Although several campuses organize foreign language training, it has not been effective [45]. The reasons can be an unsupportive speaking practice environment and the unavailability of competent lecturers. Meanwhile, the encouragement of parents or family is still low because of their traditional understanding of the concept of professional globalization [46]. Those two main problems can be overcome through an organized program. For example, an introduction to working abroad for the health professions can be provided early in the study program (semester one). At that stage, students are gradually provided with study materials related to overseas work programs, such as transcultural care, effective language development, and bringing in practitioners experienced abroad or foreign exchange students to share their experiences. Special guidance can also be given to students who are interested in the program by involving their parents or guardians. Thus, after graduation, they will have received sufficient preparation without having to undergo a lengthy and expensive overseas job preparation training program.
C. OPPORTUNITIES
Most respondents in this study share the same perception of the abundant overseas job opportunities (91.7%). They acknowledged that there were always work opportunities abroad (81%). The evidence shows that information on job opportunities abroad has spread widely in the information technology era. Overseas programs organized by the government or the private sector, both Government to Government (G to G) and Private to Private (P to P), can be accessed very easily and quickly through various media [47]. Attending the selection process no longer requires traveling unnecessarily to the capital city of Jakarta. Medical check-ups can also be carried out in the nearest city that has international medical check-up facilities. Several countries have provided financial assistance schemes for language training programs, document processing, and ticketing [48]. Certain countries offer attractive packages, i.e. free accommodation, transportation, and even food. Indonesian HCWs in many countries, such as the Netherlands, Kuwait, Qatar, the Kingdom of Saudi Arabia, Australia and the USA, can study while working. These opportunities prove that welfare insurance and professional development for HCWs working abroad have received considerable attention. These golden opportunities should have been introduced early, because the demand for Indonesian HCWs has existed since the late 80s. Colleges with health-related departments should take a major role in fostering the young generation to prepare world-class healthcare workers. Working abroad for HCWs is not merely about improving welfare, but also about the introduction of, and equality with, world-class professionals.
D. THREATS
The data of this study indicate that not all healthcare professionals are interested in working abroad. Challenges around family or parental support may pose a major obstacle if not anticipated from the start. From year to year, the challenges and hindrances keep changing. From one country to another, policies towards migrant healthcare workers also change. Domestic employment policies and procedures have also developed. The working conditions and requirements abroad will become more and more complicated in the future. A concrete example is the Covid-19 pandemic, during which the recruitment process changed to require at least a multilevel Polymerase Chain Reaction (PCR) test [49]. This is not to mention the mastery of foreign languages and of medical technology, which are not always delivered in English. Competition from other countries' healthcare workers keeps increasing [50]. Those challenges need to be anticipated from the beginning, so that new aspirants can prepare themselves better. The existence of institutions such as BP2MI in Indonesia is a concrete example of anticipating the various challenges faced by migrant workers in the future. Yet, more important still is the inner interest of the HCWs themselves.
In short, apart from the positive and negative sides of this SWOT analysis of Indonesian HCWs, what must be emphasized is that the increasingly stringent requirements of foreign demand require a concrete strategy. It would be simpler if an international program were established at campuses interested in providing opportunities for students to pursue employment careers abroad. There would then be no need for various screening systems to capture candidates, because the educational objective would be clear from the outset. An international program is a concrete solution to the job demand for HCWs who are interested in working abroad: the candidates' strong interest exists from the start, their families and parents have already agreed, and mastery of foreign languages is no longer in question. The weakness of this research is that, due to the Covid-19 pandemic and government restrictions, it could not be carried out directly with more potential candidates among registered healthcare professionals across the country. The research could have involved more respondents from many campuses, lecturers, government organizations, and manpower agencies. The difference from previous research is that most earlier studies focused on nurse recruitment, especially in Indonesia, rather than on other entry-level health professions.
IV. CONCLUSION
The objective of this study was to explore the persistent interest of Indonesian HCWs in working abroad, in which the language barrier and lack of family support were found to be the major challenges among fresh graduates, and to offer some solutions. The findings of this study indicate that the biggest challenge faced by Indonesian HCWs is language acquisition, followed by lack of family support and financial condition. These three problems were present while the respondents were in the final stage of their studies. At the same time, they have high potential and enthusiasm for working abroad. The study suggests a structured program during college that introduces an overseas preparatory program earlier and involves parents, manpower agencies and the government, so that graduates are better prepared. To measure its effectiveness, further studies will certainly be required in the future.
The Human Uncoupling Protein-3 Gene
Uncoupling protein-3 (UCP3) is a recently identified candidate mediator of adaptive thermogenesis in humans. Unlike UCP1 and UCP2, UCP3 is expressed preferentially and at high levels in human skeletal muscle and exists as short and long form transcripts, UCP3S and UCP3L. UCP3S is predicted to encode a protein which lacks the last 37 C-terminal residues of UCP3L. In the present study, we have defined the intron-exon structure for the human UCP3 gene and determined that UCP3S is generated when a cleavage and polyadenylation signal (AATAAA) located in the last intron prematurely terminates message elongation. In addition we have mapped UCP3 to the distal segment of human chromosome 11q13 (between framework markers D11S916 and D11S911), adjacent to UCP2. Of note, UCP2 and UCP3 in both mice and humans colocalize in P1 and BAC genomic clones, indicating that these two UCPs are located within 75-150 kilobases of each other and most likely resulted from a gene duplication event. Previous studies have noted that mouse UCP2 maps to a region of chromosome 7 which is coincident with three independently mapped quantitative trait loci for obesity. Our study shows that UCP3 is also coincident with these quantitative trait loci, raising the possibility that abnormalities in UCP3 are responsible for obesity in these models.
The control of body weight involves a regulated balance between energy intake and expenditure. Energy expenditure can be divided into three components (1): resting metabolic rate, physical activity, and adaptive thermogenesis, the latter being defined as the component of energy expenditure that changes in response to environmental stimuli such as cold exposure or chronic dietary excess. In rodents, an important site of adaptive thermogenesis is brown adipose tissue (reviewed in Ref. 2), where uncoupling protein-1 (UCP1), expressed exclusively in brown adipocytes (3,4), promotes proton transport across the mitochondrial inner membrane. UCP1 decreases the proton electrochemical potential gradient, uncoupling fuel oxidation from ADP availability (reviewed in Refs. 5 and 6). Activation of UCP1, therefore, causes increased consumption of calories and generation of heat. UCP1-mediated effects on energy expenditure are regulated by changes in the level of sympathetic nervous system activity in brown fat. Cold exposure and overfeeding cause increased sympathetic stimulation of brown fat, stimulating UCP1-mediated uncoupling and energy expenditure. The importance of this is demonstrated by the fact that mice lacking UCP1 are cold-intolerant (7). UCP1 is also regulated by purine di- and trinucleotides (ATP, ADP, GTP, and GDP) and free fatty acids, which inhibit and stimulate uncoupling activity, respectively (reviewed in Refs. 5 and 6).
UCP1 may be of lesser importance in humans, in whom the mass of brown adipose tissue is limited. Instead, skeletal muscle is thought to be a major site of adaptive thermogenesis (8-12). UCP2 (13-15) is a recently described UCP1 homologue which, unlike UCP1, is expressed in most tissues. Because of its wide tissue distribution, UCP2 could have important effects on metabolic rate in humans. However, as UCP2 is expressed at high levels in many sites not thought to mediate adaptive thermogenesis, such as spleen, lymph node, thymus, and gastrointestinal tract (13-16), its role in mediating regulated energy expenditure is unclear.
UCP3 is a third member of the uncoupling protein family (15,16). It was identified by Boss et al. (15) using a homology-based screening method and by the present authors (16) as an expressed sequence tag (EST) deposited into the Washington University, St. Louis-Merck & Co. EST data base. UCP3 is distinguished from other UCPs by its relatively selective, high level expression in skeletal muscle (15,16) and the existence of two RNA transcripts (15), UCP3L and UCP3S, which are predicted to encode long (312 amino acids) and short (275 amino acids) UCP3 proteins, differing only by the presence or absence of the 37 C-terminal residues. This difference could be significant because the region in question is homologous to a domain in UCP1 thought to mediate inhibition of uncoupling activity by purine nucleotides (17,18). The abundant and relatively selective expression of UCP3 in skeletal muscle suggests that it may be a mediator of adaptive thermogenesis in humans. Here we define the intron-exon structure of the human UCP3 gene, establish its chromosomal localization at 11q13, within 75-150 kb of UCP2, and define the genetic basis for the two UCP3S and UCP3L mRNA transcripts.
EXPERIMENTAL PROCEDURES
Intron-Exon Structure-Six sense and antisense PCR primer pairs corresponding to cDNA sequence were used to amplify genomic fragments from human genomic DNA (see Table I). The genomic PCR fragments were subcloned using the TA cloning system (pCR2.1 plasmid, Invitrogen, Carlsbad, CA) and were subjected to restriction enzyme digestion plus agarose gel electrophoresis and dideoxy sequencing using M13, T7, and internal UCP3 gene-specific primers. 3'RACE (rapid amplification of cDNA ends) was used to clone the 3' ends of UCP3S and UCP3L. 3'RACE was performed using the Marathon cDNA Amplification Kit, human skeletal muscle Marathon-Ready cDNA (both from CLONTECH) and a sense UCP3 primer (TCAGCCCCCTCGACTGTA) located in exon 6 (cDNA position relative to ATG = +761 to +778).
Analysis of UCP3S and UCP3L mRNA Transcripts by RNase Protection Assay-RNase protection assays were performed as described previously (19) using two in vitro transcribed 32P-labeled RNA antisense probes, one corresponding to UCP3L, spanning exons 6 and 7 (+631 to +925 relative to ATG), and the other corresponding to UCP3S, spanning exon 6 and the immediately adjacent UCP3S 3'UTR (+623 to +900 relative to ATG).
UCP3 Chromosomal Localization-The Genebridge 4 Radiation Hybrid Panel (20-22) was screened for the presence of hUCP3 (Research Genetics, Inc., Huntsville, AL) using the following PCR primer pair: sense = GCGACAGAAAATACAGCGGGACTA (exon 4, cDNA position relative to ATG = +464 to +487) and antisense = GCAAAGGGCTGGTAAAATGAACTG (intron 4, 192 to 169 bp downstream of the exon 4 splice donor). These primers amplified a 269-bp band from human genomic DNA and failed to amplify any signal from control hamster genomic DNA.
Analysis of P1 and BAC Human and Mouse Genomic Clones for Colocalization of UCP2 and UCP3-P1 (human and mouse 129/ola) and BAC (mouse 129/SvJ) genomic libraries were screened (Genome Systems, St. Louis, MO) using gene-specific primers shown in Table II (P1 libraries) or a 32P-labeled mUCP3 cDNA clone (BAC library). P1 and BAC DNA was isolated and analyzed for the presence of UCP2 and UCP3 using PCR (specific primer sets shown in Table II).

RESULTS

Intron-Exon Structure-As shown in Fig. 1, the human UCP3 coding sequence was found to be distributed over six exons (exons 2-7) spanning ~5.25 kb of genomic DNA. To obtain 5' upstream cDNA sequence of human UCP3, 5'RACE on human skeletal muscle Marathon cDNA was performed (16). Different clones were obtained and sequenced, and the longest ones were found to contain 183 bp 5' upstream of the ATG. Thus, at least one exon containing UCP3 5'-untranslated sequence was detected (exon 1). Sequence analysis indicated that the 3'-UTR of UCP3S and the intron region between exon 6 and the AATAAA-S signal in intron 6 were identical (see Fig. 1). The protein predicted to be generated by the UCP3S transcript is truncated by an in-frame stop codon (tga) which follows a preserved glycine (G) codon (GGg) at residue position 275. This glycine codon in UCP3L (GGA) is located at the splice junction between exons 6 and 7.
Analysis of UCP3S and UCP3L mRNA Transcripts by RNase Protection Assay-An RNase protection assay probe corresponding to the UCP3L transcript, spanning exons 6 and 7, was prepared. This probe contained 193 bp of exon 6 sequence and 100 bp of exon 7 sequence. As is shown in Fig. 2, two bands were protected, one of ~290 bp representing UCP3L and another of ~190 bp representing UCP3S. Additional RNase protection assays were performed using a probe corresponding to UCP3S (data not shown). This probe contained 200 bp of exon 6 and 77 bp of adjacent 3' sequence corresponding to the UCP3S 3'-untranslated region (3'UTR-S, see Fig. 1). As would be predicted, two protected bands were obtained, one of ~280 bp representing UCP3S and another of ~200 bp representing UCP3L (data not shown). Quantitation of RNase protection assay results, using in vitro transcribed sense UCP3 transcripts as a standard curve and total RNA extracted from five lean subjects (rectus abdominis muscle), revealed that there were ~15 amol (per μg of total RNA) of UCP3L transcripts and ~18 amol (per μg of total RNA) of UCP3S transcripts.
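The quantitation step above relies on a linear standard curve built from known amounts of in vitro transcribed sense transcript. A minimal Python sketch of that calculation follows; the signal values are hypothetical placeholders, and numpy's polyfit stands in for whatever curve-fitting the original densitometry used.

import numpy as np

# Standard curve: known amounts of sense UCP3 transcript (amol)
# versus protected-band signal (arbitrary units). Values are
# hypothetical placeholders, not data from the paper.
std_amol = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
std_signal = np.array([110.0, 265.0, 540.0, 1050.0, 2110.0])

slope, intercept = np.polyfit(std_amol, std_signal, 1)

def amol_per_ug(signal, ug_total_rna):
    # Interpolate the transcript amount from the standard curve and
    # normalize per microgram of total RNA loaded in the assay.
    return (signal - intercept) / slope / ug_total_rna

# Example: a band signal of 4000 units measured from 5 ug total RNA
print(f"{amol_per_ug(4000.0, 5.0):.1f} amol per ug total RNA")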
UCP3 Chromosomal Localization-A hUCP3 PCR primer set (see "Experimental Procedures") was applied to the Genebridge 4 Radiation Hybrid Panel (20-22), generating the following data set for hybrid clones 1 through 93 (0 = no amplification, 1 = amplification, and 2 = ambiguous results): 1001012001 0000010101 0000010000 0200112000 1110000001 0000100001 0000000000 0100100000 001. These data were submitted to the MIT Center for Genome Research STS mapping server (http://www-genome.wi.mit.edu/cgi-bin/contig/rhmapper.pl). UCP3 was mapped to chromosome 11q13 (distal portion), 1.31 cR (lod > 3.0) below framework marker WI-6189. WI-6189 maps to 387.58 cR from the top of the Chr 11 linkage group on the Whitehead Institute Center for Genome Research radiation hybrid map, between framework markers D11S916 (384 cR) and D11S911 (391 cR). D11S916 and D11S911 have also been positioned on the Généthon human genetic linkage map, 85 and 89 cM from the top of the Chr 11 linkage group, respectively (23). It has previously been noted (13) that two ESTs representing UCP2, WI-13873 (accession number R49188) and WI-16720 (accession number T80845), have been independently mapped to this region (385.84 and 387.58 cR, respectively, Whitehead Institute Center for Genome Research). See Fig. 3 for the order of markers in this region and the position of framework markers on the genetic map.
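The retention vector above is simply a per-clone score string, and basic sanity checks on it are easy to automate. The Python sketch below validates the alphabet and tallies retention; note that, as transcribed here, the vector holds 83 scores although the text refers to clones 1 through 93, so the length check is reported rather than enforced (the discrepancy may be a transcription artifact).

# Retention scores for the Genebridge 4 hybrid panel:
# 0 = no amplification, 1 = amplification, 2 = ambiguous result
vector = ("1001012001" "0000010101" "0000010000" "0200112000"
          "1110000001" "0000100001" "0000000000" "0100100000" "001")

assert set(vector) <= {"0", "1", "2"}, "scores must be 0, 1 or 2"

expected = 93  # clones 1 through 93 per the text
if len(vector) != expected:
    print(f"warning: {len(vector)} scores transcribed, expected {expected}")

retained = vector.count("1")
ambiguous = vector.count("2")
print(f"marker retained in {retained} clones; {ambiguous} ambiguous scores")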
Analysis of P1 and BAC Human and Mouse Genomic Clones for Colocalization of UCP2 and UCP3-Given the proximity of human UCP2 and UCP3 by radiation hybrid mapping, we investigated whether human UCP2 and UCP3 might be found together on P1 genomic clones. P1 genomic clones generally have genomic inserts of ~75-100 kb. In addition, we also investigated whether mouse UCP2 and UCP3 might be found together on P1 and BAC mouse genomic clones. BAC genomic clones generally have a genomic insert of ~150 kb. Analysis for mUCP3 was possible because we had recently cloned its corresponding cDNA. Mouse UCP3 is 87% identical to human UCP3 at the amino acid level, but is only 55% identical to mUCP1 and 72% identical to mUCP2. As is shown in Table II, UCP2 and UCP3 colocalized on the human and mouse P1 and BAC genomic clones.
DISCUSSION
In the present study we have analyzed the human UCP3 gene. It contains at least 7 exons spread over ~8.5 kb and is located on chromosome 11 (11q13), adjacent to UCP2. The UCP3 gene generates two mRNA transcripts, UCP3L and UCP3S, which are predicted to encode long and short UCP3 proteins, differing only by the presence or absence of 37 residues on the C terminus (15). These 37 residues are encoded by exon 7, which is missing from UCP3S. Intron 6 contains a cleavage and polyadenylation signal (designated AATAAA-S in Fig. 1). The AATAAA-S signal terminates message elongation ~50% of the time, thus generating UCP3S. When the AATAAA-S signal is bypassed, which seems to occur ~50% of the time, message elongation continues until the AATAAA-L signal (located ~1.1 kb downstream of exon 7) is reached, thus generating UCP3L.
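Since the two transcripts arise from alternative use of AATAAA cleavage and polyadenylation signals, the underlying idea can be illustrated with a trivial motif scan; the sequence below is a made-up placeholder, not actual UCP3 genomic DNA.

# Scan a genomic sequence for candidate AATAAA signals.
# Placeholder sequence only; not real UCP3 sequence.
genomic = "GGCTAATAAACCGT" + "A" * 20 + "TTAATAAAGG"

signal = "AATAAA"
positions = []
start = 0
while (hit := genomic.find(signal, start)) != -1:
    positions.append(hit)
    start = hit + 1  # continue past this hit, allowing overlaps

print("AATAAA found at positions:", positions)  # [4, 36]

In the real gene, which of the two signals is used determines whether elongation stops in intron 6 (yielding UCP3S) or continues through exon 7 (yielding UCP3L).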
The domain encoded by exon 7 is highly homologous to C-terminal residues found in UCP1 and UCP2; thus UCP3S is unique in lacking these residues. Since this region is believed to participate in purine nucleotide-mediated inhibition of UCP1 uncoupling activity (17,18), UCP3S may have increased uncoupling activity. Alternatively, UCP3S could have reduced activity or no activity due to the possible absence of critical residues. The biological significance of UCP3S will need to be the focus of future investigations.
The human UCP3 gene maps to the distal segment of 11q13, adjacent to UCP2. Human UCP1, on the other hand, is located on chromosome 4 (24). In this context, it is noteworthy that UCP2 and UCP3 are more similar to each other than to UCP1.

FIG. 1. Human UCP3 gene structure. The human UCP3 gene with start codon (ATG), stop codons (TGA-S for UCP3S and TGA-L for UCP3L), and cleavage and polyadenylation signals (AATAAA-S for UCP3S and AATAAA-L for UCP3L) is shown above. Exons are coded from 1 through 7. 3'-Untranslated regions for UCP3S and UCP3L are shown as UTR-S and UTR-L, respectively. The GenBank accession numbers for each exon and flanking intronic sequences are consecutive from exon 1 to 7: AF012196, AF012197, AF012198, AF012199, AF012200, AF012201, and AF012202. Schematic cDNAs are shown below the gene structure. On the bottom is the exact location of the splice donors and splice acceptors (uppercase letters refer to exon sequence, lowercase letters refer to intron sequence). Amino acids adjacent to the splice sites are shown below the nucleotide sequence.
FIG. 2. UCP3 RNase protection assay.
A probe spanning exons 6 and 7 (see "Experimental Procedures" for details) was used to assess UCP3S and UCP3L mRNA expression in human skeletal muscle total RNA (isolated from quadriceps muscle, male subject, age 32, RNA purchased from CLONTECH, catalog number 64033-1). Total RNA in amounts ranging from 0 to 10 μg was assessed. A cyclophilin probe was used to control for quality of RNA. The expected size of each signal is shown.
Both mouse and human UCP2 and UCP3 genes colocalize on P1 and BAC genomic clones, indicating that these two UCPs are within 75-150 kb of each other. Given this and the high degree of similarity between UCP2 and UCP3 (~70% at the nucleotide level), it is likely that one UCP gene arose from the other via a duplication event. However, despite their common origin and similar sequence, the two UCPs are unique, being distinguished by their different patterns of expression and the existence of a short form for UCP3, but not UCP2. The close proximity of UCP2 and UCP3 and the similarity in nucleotide sequence have additional implications. Unequal crossovers during meiosis could generate alleles with deletions, duplications, or gene conversions of UCP2 and/or UCP3, as observed with the α-globin (25), 21-hydroxylase (26), and 11β-hydroxylase (27) genes. Also, the close proximity of UCP2 and UCP3 will prevent genetic linkage studies from discriminating between UCP2 and UCP3. Of interest, a prior study mapped mUCP2 to chromosome 7 (13), tightly linked to the tubby mutation. This region is coincident with a quantitative trait locus for obesity in three mouse models (28-30) and one congenic strain (31). Since mUCP2 and mUCP3 are adjacent, it is possible that an abnormality in one or both of these genes is responsible for obesity. In humans, the Bardet-Biedl syndrome (BBS1, MIM #209901), consisting of retinal degeneration, polydactyly, hypogonadism, mental retardation, and obesity, has been linked to 11q13 (32-34) (significant lod scores with markers D11S1883 and D11S913, see Fig. 3). However, in this case UCP2 and/or UCP3 are unlikely candidate genes given that they are positioned at least 12 cM distal to BBS1 (no significant linkage between BBS1 and marker D11S916).
In summary, the genes for UCP2 and UCP3 are highly homologous and are located in close proximity on chromosome 11q13. UCP3 is distinguished from UCP2 and UCP1, however, by its selective and high level expression in skeletal muscle and the expression of a short form transcript, UCP3S, generated by a cleavage and polyadenylation signal (AATAAA) located in the last intron. Given its proximity to the UCP2 gene, the mouse UCP3 gene is also coincident with 3 independently mapped quantitative trait loci for obesity (28-30), raising the possibility that abnormalities in UCP3 are responsible for obesity in these models. Thus, human linkage studies for the UCP2/UCP3 locus along with mutational analyses of mouse and human UCP2 and UCP3 genes should be the focus of future investigations.
Within amygdala: Basolateral parts are selectively impaired in premature-born adults
Highlights
• Amygdala volume is reduced in premature-born adults.
• Particularly accessory basal nuclei volumes are selectively reduced.
• Structural covariance within basolateral amygdala is altered after premature birth.
• Data suggest that prematurity has lasting and distinct effects on amygdala nuclei.
• Basolateral amygdala development seems to be specifically impaired.
However, the amygdala is not a homogenous structure. Instead, it consists of several grey matter nuclei which are typically divided into three groups based on their distinct developmental pathways (Pape and Pare, 2010; Sah et al., 2003; Swanson and Petrovich, 1998): first, a superficial division (SFA), which is part of the cortex with a corresponding cortical developmental trajectory for particular glutamatergic neurons; second, a centromedial division (CMA), which is thought of as a ventromedial expanse of the striatum with a corresponding striatal developmental trajectory for particular GABAergic neurons; and third, a basolateral division (BLA), which is derived from a ventromedial extension of the claustrum anlage with a corresponding claustral developmental trajectory for particular subplate and glutamatergic neurons (Bruguier et al., 2020; Medina et al., 2004; Pape and Pare, 2010; Puelles, 2014; Sah et al., 2003; Swanson and Petrovich, 1998; Watson and Puelles, 2017). These three groups within the amygdala are also distinct based on cytoarchitectonic mapping, with different dominating cell types within each group (Amunts et al., 2005; Heimer et al., 1999): for example, while BLA contains mainly glutamatergic projection cells and secondly local-circuit GABAergic cells, CMA mainly consists of GABAergic cells similar to striatal neurons (McDonald, 1982; Pape and Pare, 2010; Sah et al., 2003). Critical for the current study is that advances in automated brain segmentation have made it possible to identify distinct amygdala nuclei by in-vivo structural MRI in humans (Saygin et al., 2017), facilitating nuclei-sensitive analysis of the adult amygdala. The assignment of each nucleus to SFA, CMA or BLA is illustrated in Fig. 1. Furthermore, it is important for our approach that these superficial, centromedial, and basolateral nuclei groups are associated with distinct cortical, striatal, and claustral developmental trajectories, as described above (Amunts et al., 2005; Pape and Pare, 2010; Sah et al., 2003; Swanson and Petrovich, 1998), which might in turn reflect distinct vulnerability to impaired development in prematurity. For example, perinatal adverse events, such as transient hypoxia/ischemia, particularly affect subplate neurons (SPNs), which are distinctively involved in claustrum and thereby BLA development (McClendon et al., 2017). It is still unknown, however, whether these developmental trajectories might lead to differential effects of prematurity on the individual amygdala nuclei in humans.

Fig. 1. Segmentation of amygdala nuclei. T2-weighted and T1-weighted images of the amygdala and amygdala nuclei as segmented by FreeSurfer. *Not illustrated. Abbreviations: AAA, anterior amygdaloid area; AB, accessory basal nucleus; Ba, basal nucleus; BLA, basolateral amygdala; CAT, corticoamygdaloid transition area; Ce, central nucleus; CMA, centromedial amygdala; Co, cortical nucleus; La, lateral nucleus; Me, medial nucleus; PL, paralaminar nucleus; SFA, superficial amygdala.
Furthermore, premature birth is associated with an increased risk for social impairments (Johnson, 2007; Pesonen et al., 2008; Wolke et al., 2019). Accordingly, we recently found significantly higher scores on an avoidant personality scale in a cohort of premature-born young adults, reflecting increased social anxiety trait. Still, although the amygdala has been linked with social anxiety in general, and although social impairments were associated with altered functional connectivity of the amygdala in preterm-born adolescents (Davidson, 2002; Davis and Whalen, 2001; Johns et al., 2019), we did not find evidence that social anxiety was correlated with global amygdala volume alterations. However, as mentioned above, the amygdala is not a homogenous structure but consists of several nuclei, which can be assigned to three subdivisions based on their distinct developmental pathways. Within the amygdala, functional specialization and parallel processing take place (Balleine and Killcross, 2006; Janak and Tye, 2015). For example, evidence regarding differential roles of amygdala subdivisions in humans comes from studies investigating Urbach-Wiethe patients (Hortensius et al., 2017). These studies suggest deficits in the processing of ambiguous social information and impaired learning from social experience in humans with BLA damage (de Gelder et al., 2014; Rosenberger et al., 2019). It remains unclear whether distinct amygdala nuclei might mediate social impairments in premature-born adults.

Following up on previous work, in the present study we investigated 101 very premature-born adults and 108 full-term controls at 26 years of age using automated FreeSurfer segmentation of amygdala nuclei (Saygin et al., 2017) in structural MRI to address the following questions: First, are there differential effects of prematurity on individual amygdala nuclei structure? As a proxy for amygdala nuclei structure, we used both nuclei volume and structural covariance among nuclei. And second, are structural differences in distinct amygdala nuclei linked with social anxiety?
Participants
Our study sample was previously described in (Riegel et al., 1995; Schmitz-Koep et al., 2021; Wolke et al., 1994; Wolke and Meyer, 1999): All subjects were part of the Bavarian Longitudinal Study (BLS), a geographically defined, whole-population sample of neonatal at-risk children and healthy full-term (FT) controls who were born between January 1985 and March 1986 and followed from birth into adulthood (Eryigit Madzwamuse et al., 2015; Reyes et al., 2021). 682 infants were born VP (<32 weeks of gestation) and/or with very low birth weight (VLBW, birth weight <1500 g). Informed consent from a parent and/or legal guardian was obtained. From the initial 916 FT infants born at the same obstetric hospitals who were alive at 6 years, 350 were randomly selected as control subjects within the stratification variables of sex and family socioeconomic status in order to be comparable with the VP/VLBW sample. Of these, 411 VP/VLBW individuals and 308 controls were alive and eligible for the 26-year follow-up assessment. 260 from the VP/VLBW group and 229 controls participated in psychological assessments (Breeman et al., 2015). All subjects were screened for MR-related exclusion criteria including (self-reported): claustrophobia, inability to lie still for >30 min, unstable medical conditions (e.g. severe asthma), epilepsy, tinnitus, pregnancy, non-removable MRI-incompatible metal implants and a history of severe CNS trauma or disease that would impair further analysis of the data. However, the most frequent reason not to perform the MRI exam was that subjects declined to participate. Finally, 101 VP/VLBW subjects and 111 FT controls underwent MRI at 26 years of age. The MRI examinations took place at two sites: the Department of Neuroradiology, Klinikum rechts der Isar, Technische Universität München (n = 145), and the Department of Radiology, University Hospital of Bonn (n = 67). The study was carried out in accordance with the Declaration of Helsinki and was approved by the local ethics committees of the Klinikum rechts der Isar, Technische Universität München, and the University Hospital Bonn. All study participants gave written informed consent. They received travel expenses and a small payment for participation.
Birth variables
Gestational age (GA) in weeks was estimated from maternal reports on the first day of the last menstrual period and serial ultrasounds during pregnancy. In cases in which the two measures differed by more than 2 weeks, clinical assessment at birth with the Dubowitz method was applied (Dubowitz et al., 1970). Birth weight (BW) in grams was obtained from obstetric records. Duration of mechanical ventilation in days was computed from daily records by research nurses.
Variables related to anxiety
To assess behavioral and emotional outcomes related to anxiety, we used the German version of the Young Adult Self Report (YASR), which includes six DSM-IV-oriented scales (Depressive, Anxiety, Somatic, Avoidant personality, Attention deficit/hyperactivity problems, and Antisocial personality) (Achenbach, 1997). In a previous study of the present cohort, we found a significantly higher T score for avoidant personality in VP/VLBW individuals compared to FT controls, indicating increased social anxiety trait. Therefore, we chose the avoidant personality score for these analyses.
MRI processing and amygdala segmentation
Images saved as DICOMs were converted to Nifti format using dcm2nii (Li et al., 2016). The FreeSurfer image analysis suite, version 6.0 (http://surfer.nmr.mgh.harvard.edu/), was used, which includes an automated segmentation of the amygdala nuclei (Saygin et al., 2017). Recently, Armio et al. (2020) used this algorithm to investigate amygdala subnucleus volumes in psychosis high-risk state and first-episode psychosis. They assessed the reliability of the segmentation method by scanning five subjects twice and showed excellent test-retest reliability (Armio et al., 2020). Using both high-resolution T1-weighted and T2-weighted images, nine amygdala nuclei were labeled per hemisphere: anterior amygdaloid area (AAA), corticoamygdaloid transition area (CAT), basal nucleus (Ba), lateral nucleus (La), accessory basal nucleus (AB), central nucleus (Ce), cortical nucleus (Co), medial nucleus (Me) and paralaminar nucleus (PL). Segmentation outputs were inspected visually. Examples of amygdala segmentation are shown in Fig. 1. Successful amygdala segmentations were available in 101 VP/VLBW subjects and 109 FT subjects. These nine amygdala nuclei were assigned to one of the three amygdala subdivisions as visualized in Fig. 1. SFA includes AAA, CAT and Co; CMA includes Ce and Me; and BLA includes Ba, La, AB and PL.
However, segmentation of amygdala nuclei is challenging due to small regional volumes and the limited availability of a clear ground truth. Buser et al. (2020) assessed the spatial and numerical reliability of the segmentation of amygdala nuclei in FreeSurfer. While numerical reliability was mostly high within the amygdala, the medial nucleus and paralaminar nucleus showed poor spatial reliability (Buser et al., 2020). Therefore, we decided to exclude both the medial and paralaminar nucleus from our analyses.
Statistical analysis
All statistical analyses were performed using IBM SPSS Version 26 (IBM Corp., Armonk, NY, USA). To detect possible outliers, we used a method proposed by Hoaglin and Iglewicz (1987), which multiplies the interquartile range by the factor 2.2 to determine outlier fences. One FT subject was excluded from the analyses because their data contained multiple outlier values. Finally, 101 VP/VLBW subjects and 108 FT subjects were included in the analyses. Age was not included as a covariate in our analyses, as VP/VLBW subjects and FT controls had the same mean age of 26 years (p = 0.165).
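A minimal Python sketch of this outlier fence, assuming a 1-D numpy array of nucleus volumes (the 2.2 multiplier follows Hoaglin and Iglewicz, in place of Tukey's conventional 1.5):

import numpy as np

def iqr_outliers(x, k=2.2):
    # Flag values outside the quartile fences Q1 - k*IQR and Q3 + k*IQR.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

volumes = np.array([210.0, 225.0, 198.0, 240.0, 410.0, 219.0])  # toy data
print(iqr_outliers(volumes))  # only the implausible 410 is flagged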
Outcome measures

Amygdala nuclei volumes after premature birth
Our first outcome measure of amygdala nuclei structure was nuclei volume. To test whether specific nuclei of the amygdala are particularly affected by alterations in volume after premature birth, we used general linear models. We performed 14 separate tests entering the respective amygdala nucleus (left La, left Ba, left AB, left AAA, left Ce, left Co, and left CAT, as well as these volumes in the right hemisphere) as dependent variables, group membership as fixed factor and whole amygdala volume of the left and right hemisphere, respectively, sex and scanner as covariates.
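Each of these tests amounts to an ordinary least squares model with group as a categorical factor and the covariates entered additively. A hedged Python sketch using statsmodels follows; the column names and values are hypothetical stand-ins, not the study's actual dataset.

import pandas as pd
import statsmodels.formula.api as smf

# Toy data frame; column names are hypothetical stand-ins.
df = pd.DataFrame({
    "ab_left":   [310.0, 295.0, 330.0, 280.0, 305.0, 340.0, 290.0, 325.0],
    "group":     ["VP", "VP", "FT", "VP", "VP", "FT", "VP", "FT"],
    "amyg_left": [1650.0, 1580.0, 1720.0, 1540.0, 1600.0, 1750.0, 1570.0, 1690.0],
    "sex":       ["m", "f", "m", "f", "f", "m", "m", "f"],
    "scanner":   ["A", "A", "B", "B", "A", "B", "A", "B"],
})

# Left accessory basal volume ~ group + whole-amygdala volume + sex + scanner
model = smf.ols("ab_left ~ C(group) + amyg_left + C(sex) + C(scanner)",
                data=df).fit()
print(model.params)  # the C(group) coefficient is the adjusted group effect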
We conducted two control analyses: First, to investigate the effect of adjusting for whole amygdala volume, we repeated general linear model analyses with TIV, sex and scanner, not left or right whole amygdala volume, as covariates.
Second, there were few subjects with intraventricular hemorrhage in the neonatal period (see Table S4). To investigate whether removing these subjects impacts the results, we repeated general linear model analyses for left and right accessory basal nucleus between the remaining subjects of the VP/VLBW group and the FT group. We entered volumes of left and right accessory basal nucleus as dependent variables, group membership as fixed factor and whole amygdala volume, sex and scanner as covariates.
To test whether differences in amygdala nuclei volumes between VP/VLBW subjects and FT controls are specifically related to premature birth, we conducted a two-tailed partial correlation analysis in the VP/VLBW group. If group differences were found, then nuclei volumes were correlated with GA, BW and duration of ventilation as variables of premature birth. TIV, sex and scanner were entered as covariates.
Amygdala nuclei structural covariance
Our second outcome measure of amygdala nuclei structure was structural covariance. It has been proposed that linked brain regions may develop in concert, and that coordinated development, for example of cortical regions, is altered after premature birth. Therefore, we investigated interrelated development within the amygdala, i.e., structural covariance (Alexander-Bloch et al., 2013;Nosarti et al., 2011;Scheinost et al., 2017). To explore structural covariance within the amygdala, we tested the correlation between amygdala nuclei volumes in the VP/VLBW and FT group. Since we found reduced volumes of accessory basal nuclei in premature-born adults, we focused our structural covariance analysis on these nuclei. More specifically, we entered volumes of the left and right accessory basal nucleus and ipsilateral amygdala nuclei, respectively, as variables of interest into a two-tailed partial correlation analysis in each group (VP/VLBW and FT) separately. TIV, sex and scanner were entered as covariates of no interest. We tested for differences in structural covariance between VP/VLBW subjects and FT controls using Fisher r-to-z transformation and calculating z-scores and p-values to assess the significance of the difference (Eid et al., 2013).
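For reference, the two statistical steps described here can be sketched in a few lines of Python: a partial correlation obtained by residualizing both variables on the covariates, and a two-sided z test on the Fisher-transformed coefficients of two independent groups. The inputs below are toy values; only the group sizes match the study, and categorical covariates are numerically coded for simplicity.

import numpy as np
from scipy import stats

def partial_corr(x, y, covars):
    # Correlate x and y after regressing out the covariates (n x k array).
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)[0]

def compare_corrs(r1, n1, r2, n2):
    # Fisher r-to-z comparison of correlations from independent samples.
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2.0 * stats.norm.sf(abs(z))

# Toy usage: residualized correlation within one group ...
rng = np.random.default_rng(0)
n = 101
covars = rng.normal(size=(n, 3))  # e.g. TIV, sex, scanner (coded)
x = covars @ [0.5, 0.2, 0.1] + rng.normal(size=n)
y = 0.6 * x + covars @ [0.3, 0.1, 0.0] + rng.normal(size=n)
print(partial_corr(x, y, covars))

# ... and comparing one nucleus-pair covariance between the two groups
z, p = compare_corrs(0.80, 101, 0.55, 108)
print(f"z = {z:.2f}, p = {p:.4f}")

A fully faithful implementation would also reduce the effective sample sizes by the number of covariates regressed out; the sketch omits that adjustment for brevity.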
Thresholding and correction for multiple testing
All analyses were FDR corrected for multiple comparisons using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995). Statistical significance was defined as p < 0.05.
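The Benjamini-Hochberg step-up rule used throughout can be written compactly; statsmodels' multipletests offers equivalent functionality, so this Python sketch is purely illustrative.

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    # Return a boolean mask of hypotheses rejected at FDR <= alpha.
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Find the largest k (1-indexed) with p_(k) <= (k / m) * alpha,
    # then reject the k smallest p-values.
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.22, 0.49]))
# [ True  True False False False False]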
Correlation between amygdala nuclei volumes and anxiety
We used the avoidant personality T scores to study the relationship between altered amygdala nuclei volumes and social anxiety. In the VP/VLBW group, we entered amygdala nuclei volumes (i.e., left and right accessory basal nucleus volumes), respectively, and anxiety scores (i.e., the avoidant personality T scores) as variables of interest and TIV, sex and scanner as covariates into a two-tailed partial correlation analysis. Results were FDR corrected for multiple comparisons using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995). Statistical significance was defined as p < 0.05.
Data availability statement
Patient data used in this study are not publicly available but stored by the principal investigators of the Bavarian Longitudinal Study.
Sample characteristics
Demographic and clinical background variables are presented in Table 1. Sex (p = 0.894) and age at scanning (p = 0.165) did not differ significantly between the VP/VLBW group and the FT group. By design of the study, GA (p < 0.001) and BW (p < 0.001) were significantly lower in the VP/VLBW group compared to FT controls. Furthermore, TIV was significantly smaller in VP/VLBW individuals compared to FT controls (p = 0.001).
Altered structure of basolateral amygdala in premature-born adults
Automated segmentation of the amygdala nuclei in structural MRI data is visualized in Fig. 1. To investigate whether specific nuclei of the amygdala are particularly affected by alterations in volume after premature birth, we used general linear models. After FDR correction for multiple comparisons, both left and right accessory basal nucleus showed significantly lower volume in VP/VLBW subjects compared to controls. Fig. 2 and Table 2 present estimated marginal means and p-values. Table S1 presents raw amygdala nuclei volumes.
We conducted two control analyses: First, to investigate the effect of adjusting for whole amygdala volume, we repeated general linear model analyses without left or right whole amygdala volume, but with TIV, sex and scanner as covariates. After FDR correction for multiple comparisons, all amygdala nuclei showed significantly lower volume in VP/VLBW subjects compared to FT controls. Table S2 presents estimated marginal means and p-values. The results indicate that while premature birth has an effect on all amygdala nuclei, the accessory basal nucleus is particularly affected in relation to whole amygdala volume. Second, in order to control for an impact of intraventricular hemorrhage on our results, we removed these subjects (see Table S4) and repeated general linear model analyses for the accessory basal nucleus between the remaining subjects of the VP/VLBW group and the FT group. We found significantly lower volumes of left and right accessory basal nucleus in VP/VLBW subjects without intraventricular hemorrhage compared to FT controls (see Table S5), verifying that the accessory basal nucleus, as part of BLA, seems to be particularly affected. These results indicate that our main findings of amygdala nuclei volume reductions were not affected by effects of intraventricular hemorrhage.
To support the notion that volume reductions of both left and right accessory basal nucleus were specifically related to prematurity, we conducted a partial correlation analysis (Fig. 3, Table 3). We observed significant positive correlations between GA and volumes of both left (r = 0.356, p < 0.001) and right accessory basal nucleus (r = 0.279, p = 0.006). While there was no significant relationship between BW and left accessory basal nucleus volume (r = 0.123, p = 0.252), BW and right accessory basal nucleus volume showed a significant positive correlation (r = 0.232, p = 0.013). We found significant negative correlations between duration of ventilation and volumes of both left (r = -0.449, p < 0.001) and right accessory basal nucleus (r = -0.427, p < 0.001), possibly reflecting vulnerability of BLA to stress exposure induced by premature birth. Fig. 3 and Table 3 present correlation coefficients and p-values from the partial correlation analysis between volumes of accessory basal nuclei and variables of premature birth.
Furthermore, we investigated coordinated structural development within the amygdala using structural covariance. In order to explore structural covariance for the accessory basal nucleus, as this part of BLA was significantly smaller in premature-born adults, we tested the correlation between the volumes of the left and right accessory basal nucleus and the ipsilateral amygdala nuclei, respectively. Table 4 presents correlation coefficients from the partial correlation analyses, and p-values from comparing correlation coefficients between VP/VLBW subjects and FT controls. In summary, our results showed that the accessory basal nucleus, as part of the basolateral subdivision, was significantly reduced in volume when whole amygdala volume was entered as a covariate. Furthermore, structural covariance between parts of the left BLA was significantly higher in VP/VLBW individuals compared to FT controls. Hence, these results support the hypothesis that BLA is particularly affected by premature birth.
Increased social anxiety is not associated with reduced amygdala volumes in premature-born adults
To investigate whether reduced amygdala nuclei volumes are linked with increased social anxiety in premature-born adults, we investigated the relationship between accessory basal nuclei volumes and the YASR avoidant personality score using a partial correlation analysis (Fig. 5, Table 5). There was no significant correlation between the T score for avoidant personality and left (r = 0.092, p = 0.372) or right accessory basal nucleus volume (r = 0.009, p = 0.928). Correlation coefficients and p-values are presented in Fig. 5 and Table 5.
These results suggest that, while part of the BLA is specifically reduced in volume after premature birth, there seems to be no association with social anxiety.
Discussion
Based on structural MRI, we demonstrated specifically reduced volumes of the accessory basal nucleus, as part of BLA, and altered structural covariance within the amygdala in VP/VLBW subjects compared to FT controls at 26 years of age. There seems to be no association between these specific volumetric reductions and increased social anxiety. These results indicate, to the best of our knowledge for the first time, that prematurity specifically affects subnuclei of the amygdala, namely the BLA. The data suggest that BLA development is specifically impaired after premature birth, possibly due to disturbance of its distinct claustral developmental pathway.
Altered structure of basolateral amygdala after premature birth
All amygdala nuclei showed significantly lower volume in VP/VLBW subjects compared to FT controls; however, amygdala composition differed in VP/VLBW adults, since the accessory basal nucleus showed significantly lower volumes adjusted for whole amygdala volume (Fig. 2). The accessory basal nucleus is one of four nuclei (together with the basal, lateral and paralaminar nucleus) composing BLA. It integrates input from cortical and subcortical regions as well as from within the amygdala (for example, lateral and basal nucleus) and projects to the central and medial nucleus as parts of autonomic pathways (Aggleton et al., 1980; Pitkänen et al., 1995; Sah et al., 2003; Savander et al., 1995). Previous studies suggested that BLA and claustrum, a thin sheet of grey matter between the external and extreme capsule, are both of pallial origin (Medina et al., 2004; Waclaw et al., 2010). This is supported by evidence for presumably glutamatergic projection neurons in BLA similar to those of the cerebral cortex and the claustrum, and by expression of a major glutamate transporter gene of the cerebral cortex in both BLA and claustrum (J. B. Smith et al., 2019; Swanson and Petrovich, 1998). In particular, BLA development may partly depend on subplate neuron (SPN) development: the subplate zone, mainly consisting of SPNs, is a largely transient structure that plays a particularly important role in the structural and functional organization of the cortex during its developmental peak between 22 and 34 weeks of gestation (Kostović et al., 1989; McConnell et al., 1989). A common developmental origin of subplate and claustrum has been proposed, as gene expression patterns suggest part of the claustrum to be subplate-like (Bruguier et al., 2020; Puelles, 2014; Watson and Puelles, 2017). As mentioned above, BLA is derived from a ventromedial extension of the
claustrum anlage. Hence, first, BLA is tightly linked to SPN development, and second, SPN damage is a key mediator of aberrant brain development after premature birth (McClendon et al., 2017). Therefore, BLA may be particularly vulnerable to disturbances in brain development after premature birth. However, the expression of a subplate-specific gene in the BLA has to date only been observed in the mouse (Wang et al., 2010). Other potential explanations for BLA vulnerability after premature birth may include injury of other neurons involved in the development of BLA, and especially of the accessory basal nucleus. However, the discussion of potential mechanisms behind BLA vulnerability has to be interpreted with care, since it is not clear whether findings reported for BLA can be generalized to all of its nuclei, including the accessory basal nucleus. Furthermore, since the accessory basal nucleus integrates input from cortical and subcortical regions, volume reductions may also be secondary to altered input projections. Lastly, amygdala volume reductions after premature birth have been associated with greater exposure to neonatal pain/stress (Chau et al., 2019), and in the present study we found significant negative correlations between duration of ventilation and volumes of both left and right accessory basal nucleus, possibly reflecting vulnerability to stress exposure induced by premature birth. Hence, volumetric differences could also stem from early injury or early differences related to prenatal, perinatal and early postnatal stress that remain in spite of superimposed later postnatal development. Furthermore, it has been proposed that concerted development and connectivity of linked brain regions may be reflected by structural covariance (Alexander-Bloch et al., 2013; Lerch et al., 2006; Mechelli, 2005). After premature birth, both increased and decreased covariance has been reported between cortical and subcortical regions and the cerebellum in adolescents and young adults (Nosarti et al., 2011; Scheinost et al., 2017). More specifically, we previously investigated structural covariance of whole amygdala volume across hemispheres in premature-born adults. We observed that while the correlation did not differ significantly between the VP/VLBW group and the FT controls, it approached statistical significance towards a stronger correlation in the VP/VLBW group, possibly suggesting that related whole amygdala development across hemispheres could be affected by prematurity. In the present study, we investigated structural covariance within the amygdala. We found a significantly increased correlation between parts of left BLA in the prematurity group, supporting the hypothesis that development of BLA might be particularly affected by premature birth. Moreover, we found increased structural covariance between subregions of the amygdala, namely BLA, SFA and CMA, after premature birth.
Increased structural covariance has previously been reported after premature birth between grey matter regions including cortical regions, caudate, thalamus and cerebellum and may reflect potential neuroplastic compensatory mechanisms or differences in structural and functional connectivity (Nosarti et al., 2011;Scheinost et al., 2017).
In conclusion, we found decreased BLA nuclei volumes and altered structural covariance within the amygdala. It follows that prematurity has complex effects on amygdala nuclei development persisting into adulthood, particularly for BLA nuclei. Our data support the hypothesis that the BLA is particularly affected by premature birth, possibly due to its developmental dependency on SPNs.
Increased social anxiety is not associated with reduced amygdala volumes in premature-born adults
While we previously found significantly increased avoidant personality T scores in VP/VLBW individuals compared to FT controls, reflecting an increased social anxiety trait, this trait was not correlated with whole amygdala volume alterations. In general, studies linking anxiety or personality traits to amygdala volume provide heterogeneous results, both in healthy subjects and in patients with anxiety disorders (Avinun et al., 2020; De Bellis et al., 2000; J. C. Gray et al., 2018; Hayano et al., 2009; Qin et al., 2014; Schienle et al., 2011; Spampinato et al., 2009). Animal studies provide ample evidence for functional specializations and parallel processing within amygdala subdivisions (Balleine and Killcross, 2006; Janak and Tye, 2015). Previous morphometric studies in humans (including premature-born populations) often did not differentiate amygdala nuclei, leaving the possibility that differential changes in amygdalar subcircuits remained undetected. Therefore, in the present study, we investigated whether volumetric differences in distinct amygdala nuclei are linked with social anxiety. We did not find an association between the specifically reduced volumes of BLA and social anxiety. Possibly, structural changes of amygdalar circuits, as measured by volume changes, do not directly translate into behavioral deficits but are mediated by changes in functional connectivity (Baur et al., 2013; Hahn et al., 2011; Jalbrzikowski et al., 2017; Johns et al., 2019): for example, in preterm-born adolescents, social impairments were associated with functional connectivity of the amygdala, supporting a possible relationship between prematurity, social anxiety and the amygdala (Johns et al., 2019). Furthermore, multiple brain systems are involved in the mediation of anxiety and social anxiety behavior, such as other regions of the limbic system and the prefrontal cortex (J. A. Gray, 1982; Martin et al., 2009; Spampinato et al., 2009; Wu et al., 1991).
In conclusion, the neural correlates of social anxiety after premature birth remain unclear, and further investigations including other brain regions as well as other structural and functional measures are necessary.
Strengths and limitations
One of the strengths of our study is that a relevant impact of patient age on amygdala volumes at the time of the MRI scan can be excluded, as VP/VLBW subjects and FT controls had the same mean age of 26 years and the age range was very limited. Another strength is the large sample size (101 VP/VLBW individuals and 108 FT controls), which enhances the generalizability of our findings. Third, segmentation quality was improved as both high-resolution T1-weighted and T2-weighted images were used for amygdala segmentation.
However, one important limitation of this study is that amygdala segmentation is challenging due to small regional volumes and the limited availability of a clear ground truth. While the reliability of the segmentation method used here has been investigated and mostly allows for reliable parcellation of amygdala nuclei, the validity of the segmentation remains unclear. To address this limitation, we first reviewed the general reliability of the applied parcellation scheme in previous studies. While Armio et al. (2020) reported excellent test-retest reliability of this segmentation method, Buser et al. (2020) found that the medial and paralaminar nucleus showed poor numerical and/or spatial reliability. Therefore, we decided to exclude both the medial and the paralaminar nucleus from our analyses. Relatively small standard errors and narrow 95% confidence intervals, presented in Table 2 and Table S1, indicate that uncertainty in the estimation of amygdala nuclei volumes is relatively low, and that the sample mean of our data is likely to be close to the 'true' population mean.
Another limitation is that individuals with more birth complications in the initial Bavarian Longitudinal Study sample were more likely to be excluded in the initial screening due to the exclusion criteria for MRI. Therefore, the current sample is biased towards VP/VLBW adults with less severe neonatal complications, and the observed differences in amygdala volumes between VP/VLBW subjects and FT controls are conservative estimates of the true differences. However, as mean GA and BW were not significantly different in VP/VLBW subjects with MRI data compared to subjects without MRI data (see Table S3), the sample with MRI data was still representative of the full cohort in terms of GA and BW. There were few subjects with intraventricular hemorrhage in the neonatal period (see Table S4). To investigate whether removing subjects with intraventricular hemorrhage impacts the results, we repeated the general linear model analyses for the left and right accessory basal nucleus (see Table S5) between the remaining subjects of the VP/VLBW group and the FT group. These results indicate that our main findings of amygdala nuclei volume reductions were not affected by intraventricular hemorrhage.
Conclusions
The basolateral amygdala seems to be specifically impaired after premature birth, possibly due to disturbance of its distinct claustral developmental pathway. The present study might motivate further investigations of brain systems with subplate-dependent development related to the BLA, such as the claustrum and insula. Furthermore, future studies should investigate further neural correlates of social anxiety after premature birth, including other brain regions as well as other structural and functional measures of the amygdala nuclei.
In summary, results suggest lasting and distinct effects of prematurity on amygdala nuclei and their development.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
KELLER: estimating time-varying interactions between genes
Motivation: Gene regulatory networks underlying temporal processes, such as the cell cycle or the life cycle of an organism, can exhibit significant topological changes to facilitate the underlying dynamic regulatory functions. Thus, it is essential to develop methods that capture the temporal evolution of the regulatory networks. These methods will be an enabling first step for studying the driving forces underlying the dynamic gene regulation circuitry and predicting the future network structures in response to internal and external stimuli. Results: We introduce a kernel-reweighted logistic regression method (KELLER) for reverse engineering the dynamic interactions between genes based on their time series of expression values. We apply the proposed method to estimate the latent sequence of temporal rewiring networks of 588 genes involved in the developmental process during the life cycle of Drosophila melanogaster. Our results offer the first glimpse into the temporal evolution of gene networks in a living organism during its full developmental course. Our results also show that many genes exhibit distinctive functions at different stages along the developmental cycle. Availability: Source codes and relevant data will be made available at http://www.sailing.cs.cmu.edu/keller Contact: epxing@cs.cmu.edu
INTRODUCTION
Many biological networks bear remarkable similarities in terms of global topological characteristics, such as scale-free and small-world properties, to various other networks in nature, such as social networks, albeit with different characteristic coefficients (Barabasi and Albert, 1999). Furthermore, it was observed that the average clustering factor of real biological networks is significantly larger than that of random networks of equivalent size and degree distribution (Barabasi and Oltvai, 2004); and biological networks are characterized by their intrinsic modularities (Vászquez et al., 2004), which reflect the presence of physically and/or functionally linked molecules that work synergistically to achieve a relatively autonomous functionality. These studies have led to numerous advances towards uncovering the organizational principles and functional properties of biological networks, and even the identification of new regulatory events (Basso et al., 2005).
However, most such results are based on analyses of static networks, i.e. networks with invariant topology over a given set of molecules. One example is a protein-protein interaction (PPI) network over all proteins of an organism, regardless of the conditions under which individual interactions may take place. * To whom correspondence should be addressed.
Another example is a single-gene network inferred from microarray data even though the samples may be collected over a time course or multiple conditions. A major challenge in systems biology is to understand and model, quantitatively, the dynamic topological and functional properties of cellular networks, such as the rewiring of transcriptional regulatory circuitry and signal transduction pathways that control behaviors of a cell.
Over the course of a cellular process, such as a cell cycle or an immune response, there may exist multiple underlying 'themes' that determine the functionalities of each molecule and their relationships to each other, and such themes are dynamic and stochastic. As a result, the molecular networks at each time point are context-dependent and can undergo systematic rewiring, rather than being invariant over time, as assumed in most current biological network studies. Indeed, in a seminal study by Luscombe et al. (2004), it was shown that the 'active regulatory paths' in a gene-regulatory network of Saccharomyces cerevisiae exhibit dramatic topological changes and hub transience during a temporal cellular process, or in response to diverse stimuli. However, the exact mechanisms underlying this phenomenon remain poorly understood. We refer to this time- or condition-specific 'active part' of the biological circuitry as the active time-evolving network, or simply, the time-varying network. Our goal is to recover the latent time-evolving network of gene interactions from a microarray time course.
What prevents us from an in-depth investigation of the mechanisms that drive the temporal rewiring of biological networks during various cellular and physiological processes? A key technical hurdle we face is the unavailability of serial snapshots of the time-evolving rewiring network during a biological process. Current technology does not allow for experimentally determining a series of time-specific networks for a realistic dynamic biological system, based on techniques such as yeast two-hybrid or ChIP-chip systems; on the other hand, the use of computational methods, such as structural learning algorithms for Bayesian networks, is also difficult because we can only obtain a few observations of gene expression at each time point, which leads to serious statistical issues in the recovered networks.
How can one derive a temporal sequence of time-varying networks for each time point based on only one or at most a few measurements of node-states at each time point? If we follow the naive assumption that each temporal snapshot of gene expressions is from a completely different network, this task would be statistically impossible because our estimator (from only the observations at the time point in question) would suffer from extremely high variance due to sample scarcity. Previous methods would instead pool observations from all time points together and infer a single 'average' network (Basso et al., 2005;Friedman et al., 2000;Ong, 2002), which means they choose to ignore network rewiring and simply assume that the observations are independently and identically distributed. To our knowledge, no method is currently available for genome-wide reverse engineering of time-varying networks underlying biological processes, with temporal resolution up to every single time point based on measurements of gene expressions.
In this article, we propose kernel-reweighted logistic regression (KELLER), a new machine learning algorithm for recovering time-varying networks on a fixed set of genes from time series of expression values. KELLER stems from the acronym KERWLLOR, which stands for KErnel ReWeighted ℓ1-regularized LOgistic Regression. Our key assumption is that the time-evolving networks underlying biological processes vary smoothly across time; therefore, temporally adjacent networks are likely to share more common edges than temporally distal networks. This assumption allows us to aggregate observations from adjacent time points by reweighting them, and to decompose the problem of estimating time-evolving networks into one of estimating a sequence of separate, static networks. Extending the highly scalable optimization algorithms of ℓ1-regularized logistic regression, we are able to apply our method to reverse engineer genome-wide interactions with a temporal resolution up to every single time point.
It is worth emphasizing that earlier algorithms, such as the structure learning algorithms for dynamic Bayesian networks (DBNs) (Ong, 2002), learn a time-homogeneous dynamic system with fixed node dependencies, which is entirely different from our approach, which aims at snapshots of the rewiring network. Our approach is also very different from earlier approaches that start from an a priori static network and then trace time-dependent activities. For example, the trace-back algorithm (Luscombe et al., 2004) that enables the revelation of network changes over time in yeast is based on assigning time labels to the edges of an a priori static summary network. The Achilles' heel of this approach is that edges that are transient over a short period of time may be missed by the summary static network in the first place. The DREM program (Ernst et al., 2007) reconstructs dynamic regulatory maps by tracking bifurcation points of a regulatory cascade according to ChIP-chip data over a short time course. This is also different from our method, because KELLER aims at recovering the entire time-varying networks, not only the interactions due to protein-DNA binding, from long time series with arbitrary temporal resolution. One related approach is the Tesla algorithm by Ahmed et al. (2008). However, Tesla aims at recovering bursty rather than smoothly varying networks.
We apply our method to reverse engineer the time-evolving network between 588 genes involved in the developmental process during the life cycle of Drosophila melanogaster. These genes are a subset of the 4028 genes whose expression values are measured in a 66-step time series documented in Arbeitman et al. (2002). We validate the biological plausibility of the estimated time-evolving network from various aspects, ranging from the activity of functionally coherent gene sets, to previously experimentally verified interactions between genes, to a regulatory cascade involved in nervous system development, and to gene functional enrichment. More importantly, the availability of time-evolving networks gives us the opportunity to further study the rich temporal phenomena underlying biological processes that are not attainable using traditional static networks. For instance, such a downstream analysis can be a latent functional analysis of the genes in the time-evolving network, as in Fu et al. (2008).
The remainder of the article is structured as follows. In Section 2, we will introduce our kernel reweighted method. In Section 3, we will use synthetic data and a time series of gene expression data collected during the life cycle of D.melanogaster to show the advantage as well as biological plausibility of estimating a dynamic network. We conclude the article with a discussion and outlook on future work in Section 4.
METHODS
First, we introduce our time-evolving network model for gene expression data, then explain our algorithm for estimating the time-evolving network and finally discuss the statistical property and parameter tuning for our algorithm.
Modeling time series of gene expression
Microarray profiling can simultaneously measure the abundance of transcripts from tens of thousands of genes. This technology provides a snapshot into the cell at a particular time point in a genome-wide fashion. However, microarray measurements are far from the exact values of the expression levels. First, the samples prepared for microarray experiments are usually a mixture of cells from different tissues and, possibly, at different points of a cell cycle or developmental stage. This means that microarray measurements are only rough estimates of the average expression levels of the mixture. Other sources of noise can also be introduced into the microarray measurements, e.g. during the stages of hybridization, digitization and normalization. Therefore, it is more robust if we only consider the qualitative level of gene expression rather than its actual value. That is, we model gene expression as either being upregulated or downregulated. For this reason, we binarize the gene expression levels into X := {−1, 1} (−1 for downregulated and 1 for upregulated). For instance, for cDNA microarrays, we can simply threshold the log ratio of the expression levels to those of the reference at 0, above which a gene is declared to be upregulated and otherwise downregulated.
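As a concrete illustration, the binarization step reduces to a single threshold operation; the sketch below assumes a hypothetical (time points x genes) array of log expression ratios.

```python
# Sketch: binarize cDNA log-ratios at 0 into {-1, +1}
# (+1 = upregulated, -1 = downregulated); `log_ratios` is hypothetical data.
import numpy as np

log_ratios = np.random.randn(66, 588)            # placeholder for real data
X = np.where(log_ratios > 0, 1, -1).astype(int)  # threshold the log ratio at 0
```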
At a particular time point t, we denote the microarray measurements for p genes as a vector of random variables X^{(t)} := (X^{(t)}_1, ..., X^{(t)}_p) ∈ X^p, where we have adopted the convention that subscripts index the genes and bracketed superscripts index the time point. We model the distribution of the expression values for these p genes at any given time point t as a binary pair-wise Markov Random Field (MRF):

    P_{θ^{(t)}}(x) = (1 / Z(θ^{(t)})) exp( Σ_{(u,v) ∈ E^{(t)}} θ^{(t)}_{uv} x_u x_v ),    (1)

where θ^{(t)}_{uv} ∈ R is the parameter indicating the strength of the undirected interaction between genes u and v; θ^{(t)}_{uv} = 0 means that the expression values for genes u and v are conditionally independent given the values of all other genes. Therefore, an MRF is also associated with a network G^{(t)} with a set of nodes V and a set of edges E^{(t)}: V corresponds to the invariant set of genes and hence carries no superscript for time; each edge in E^{(t)} corresponds to an undirected interaction between two genes (and a non-zero θ^{(t)}_{uv}). The difference between E^{(t)} and θ^{(t)}_{uv} can be viewed as follows: E^{(t)} only encodes the structure of the model, while θ^{(t)}_{uv} contains all information about the model. Finally, the partition function Z(θ^{(t)}) in an MRF normalizes the model to a distribution.
The dynamic interactions between genes underlying temporal biological processes are reflected in the change of the magnitude of the parameter θ^{(t)}_{uv} across time. In particular, increased values of θ^{(t)}_{uv} indicate strengthened or emerging interaction between genes u and v, and decreased values indicate weakened or disappearing interaction. Furthermore, we assume that the dynamic interactions between genes vary smoothly across time. Mathematically, this means that the change of θ^{(t)}_{uv} is small across time, i.e. the difference |θ^{(t+1)}_{uv} − θ^{(t)}_{uv}| is upper bounded by a small constant C_θ. In other words, the networks at adjacent time points, G^{(t)} and G^{(t+1)}, are very similar, i.e. |E^{(t)} ∩ E^{(t+1)}| / |E^{(t)}| is lower bounded by a large constant C_E (here, we use |·| to denote the cardinality of a set).
Given time series of gene expression data measured at n time points, D := {x^{(t_1)}, ..., x^{(t_n)}}, our goal is to estimate a temporal sequence of networks G := {G^{(t_1)}, ..., G^{(t_n)}}, with one network per time point. Note that we will focus on estimating the structures of the interactions between genes (G^{(t)}) rather than the detailed strengths of these interactions (θ^{(t)}). We hope that by restricting our attention to estimating the structure, we can obtain better guarantees in terms of the ability of our algorithm to recover the true underlying interactions between genes. In Sections 2.2 and 2.4, we will provide further explanation of the advantage of focusing on G^{(t)}.
Another important point of clarification is that the interactions between genes we are modeling are the statistical dependencies between their expression levels. This is a common choice for many existing methods, such as the methods by Friedman et al. (2000) and Ong (2002). Note that statistical dependency is different from causality, which focuses on directed statistical relations between random variables. In other words, it is more appropriate to view networks from our model as the co-regulation relations between genes. That is, if there is an edge between two genes in the dynamic network at time point t, then the changes of the expression levels of these two genes are likely to be regulated by the same biological process.
Estimating time-varying network
Two questions need to be addressed when we estimate a time-evolving network. First, what is the objective to optimize and second, what is the algorithmic procedure for the estimation? The first question is addressed in this section and it concerns both the consistency and efficiency of our method while the second question only concerns the efficiency of the algorithm, which we will discuss more in Section 2.3.
First, estimating the parameter vector θ^{(t)} by maximizing the log-likelihood is not practically feasible, since the evaluation of the partition function Z(θ^{(t)}) involves a summation over an exponential number of terms. Another approach to this problem is to use a surrogate likelihood function, which can be tractably optimized. However, there is no statistical guarantee on how close an estimate obtained through maximization of a surrogate likelihood is to the true parameter (Banerjee et al., 2008). Therefore, we adapt the neighborhood selection procedure of Wainwright et al. (2006) to estimate the time-evolving network G^{(t)} instead.
Overall, we have designed a method that decomposes the problem of estimating the time-evolving network along two orthogonal axes. The first axis is along the time, where we estimate the network for each time point separately by reweighting the observations accordingly; and the second axis is along the set of genes, where we estimate the neighborhood for each gene separately and then joining these neighborhoods to form the overall network. The additional benefit of such decomposition is that the estimation problem is reduced to a set of identical atomic optimization tasks in Equation (3). In the next section, we will discuss our procedure to solve this atomic optimization task efficiently.
In this new approach, estimating the network G^{(t)} is equivalent to recovering, for each gene u ∈ V, its neighborhood of genes that u is interacting with, i.e.

    N^{(t)}(u) := {v ∈ V | (u,v) ∈ E^{(t)}}.
It is intuitive that if we can correctly estimate the neighborhood for all genes u in V, we can recover the network G^{(t_i)} by joining these neighborhoods. In this alternative view, we can decompose the joint distribution in Equation (1) into a set of conditional distributions P_{θ^{(t)}_{\u}}(x_u | x_{\u}), each of which is the distribution of the expression value of gene u conditioned on the expression values of all other genes (we use \u to denote the set of genes except gene u, i.e. \u := V \ {u}). Each conditional distribution P_{θ^{(t)}_{\u}}(x_u | x_{\u}) takes the form of a logistic regression:

    P_{θ^{(t)}_{\u}}(x_u | x_{\u}) = exp(2 x_u ⟨θ^{(t)}_{\u}, x_{\u}⟩) / (1 + exp(2 x_u ⟨θ^{(t)}_{\u}, x_{\u}⟩)),    (2)

where ⟨a,b⟩ = a^T b denotes the inner product and θ^{(t)}_{\u} := {θ^{(t)}_{uv} | v ∈ \u} is the (p−1)-dimensional sub-vector of parameters associated with gene u. The neighborhood N^{(t)}(u) can be estimated from the sparsity pattern of the sub-vector θ^{(t)}_{\u}. Therefore, estimating the network G^{(t)} at time point t can be decomposed into p tasks, one for the sub-vector θ^{(t)}_{\u} corresponding to each gene. For later exposition, we denote the log-likelihood of an observation x under Equation (2) by ℓ(θ^{(t)}_{\u}; x). Recall that we assume that the time-evolving network varies smoothly across time. This assumption allows us to borrow information across time by reweighting the observations from different time points and then treating them as if they were i.i.d. observations. Intuitively, the weighting should place more emphasis on observations at or near time point t, with weights becoming smaller as the observations move further away from time point t. Such a reweighting technique has been employed in other tools for time series analysis, such as the short-time Fourier transformation, where observations are reweighted before applying the Fourier transformation to capture transient frequency components (Nawab and Quatieri, 1987). In our case, at a given time point t, the weighting is defined as

    w^{(t)}(t_i) := K_{h_n}(t_i − t) / Σ_{j=1}^{n} K_{h_n}(t_j − t),

where K_{h_n}(·) is a symmetric nonnegative kernel and h_n is the kernel bandwidth. We used the Gaussian RBF kernel, K_{h_n}(t) = exp(−t²/h_n), in our later experiments. Note that multiple measurements at one time point can be trivially handled by assigning them the same weight. We consider multiple measurements to be i.i.d. observations.
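The kernel reweighting itself is only a few lines of code. The sketch below, a minimal illustration under the Gaussian RBF kernel defined above, computes the normalized weights w^{(t)}(t_i) for one target time point.

```python
# Sketch: normalized Gaussian RBF kernel weights w_t(t_i), as defined above.
import numpy as np

def kernel_weights(time_points, t, h):
    """Weights centered at time point t; largest near t, decaying with distance."""
    k = np.exp(-((time_points - t) ** 2) / h)
    return k / k.sum()

time_points = np.arange(1.0, 67.0)          # e.g., the 66 Drosophila time points
w = kernel_weights(time_points, t=15.0, h=10.0)
```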
Additionally, we will assume that the true network is sparse, or that the interactions between genes can be approximated with a sparse model. This sparsity assumption holds well in most cases. For example, a transcription factor only controls a small fraction of target genes under a specific condition (Davidson, 2001). Then, given a time series of gene expression data measured at n time points, D = {x^{(t_1)}, ..., x^{(t_n)}}, we can estimate θ^{(t)}_{\u}, or the neighborhood N^{(t)}(u) of gene u at time point t, using an ℓ1-penalized log-likelihood maximization. Equivalently, the estimator θ̂^{(t)}_{\u} is the solution of the following minimization problem:

    θ̂^{(t)}_{\u} = argmin_{θ_{\u}} { −Σ_{i=1}^{n} w^{(t)}(t_i) ℓ(θ_{\u}; x^{(t_i)}) + λ ‖θ_{\u}‖_1 },    (3)

where λ ≥ 0 is a user-specified regularization parameter that controls the size of the estimated neighborhood, and hence the sparsity of the network. Then, the neighborhood for gene u can be estimated as N̂^{(t)}(u) = {v ∈ V | θ̂^{(t)}_{uv} ≠ 0}, and the network can be estimated by joining these neighborhoods:

    Ê^{(t)} = {(u,v) | v ∈ N̂^{(t)}(u) or u ∈ N̂^{(t)}(v)}.    (4)
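For readers who want to experiment with the idea, the atomic problem in Equation (3) can be approximated with off-the-shelf tools. The sketch below uses scikit-learn's ℓ1-penalized logistic regression with kernel-derived sample weights; it is an approximation for illustration (scikit-learn's C plays the role of 1/λ and its loss parameterization differs slightly from Equation (2)), not the authors' implementation.

```python
# Sketch: per-gene, per-time-point neighborhood estimation, approximating
# Eq. (3) with scikit-learn's weighted L1-penalized logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_neighborhood(X, u, weights, lam):
    """X: (n, p) array in {-1,+1}; weights: kernel weights w_t(t_i);
    returns indices of genes estimated to interact with gene u."""
    y = X[:, u]
    X_rest = np.delete(X, u, axis=1)          # expression of all other genes
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
    clf.fit(X_rest, y, sample_weight=weights)
    nonzero = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
    return np.where(nonzero >= u, nonzero + 1, nonzero)  # undo column removal

# Joining the estimated neighborhoods over all genes u gives the edge set (4).
```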
Efficient optimization
Estimating time-evolving networks using the decomposition scheme described in the previous section requires solving a collection of optimization problems of the form given in Equation (3). In a genome-wide reverse engineering task, there are tens of thousands of genes and hundreds of time points, so one can easily face a million optimization problems. Therefore, it is essential to develop an efficient algorithm for solving the atomic optimization problem in Equation (3), which can then be trivially parallelized across different genes and different time points. The optimization problem in Equation (3) is an ℓ1-penalized logistic regression with observation reweighting. This optimization problem has been an active research area in the machine learning community and various methods have been developed, including interior point methods (Koh et al., 2007), trust-region Newton methods (Lin et al., 2008) and projected gradient methods (Duchi et al., 2008). In this article, we employed a projected gradient method due to its simplicity and efficiency.
The optimization problem in Equation (3) can be equivalently written in a constrained form:

    min_{θ^{(t)}_{\u}} L(θ^{(t)}_{\u})   subject to   ‖θ^{(t)}_{\u}‖_1 ≤ C_λ,    (5)

where C_λ is an upper bound on the ℓ1 norm of θ^{(t)}_{\u} and defines the region Ω in which the parameter lies. There is a one-to-one correspondence between C_λ in Equation (5) and λ in Equation (3). In this formulation, the objective L(θ^{(t)}_{\u}) = −Σ_{i=1}^{n} w^{(t)}(t_i) ℓ(θ^{(t)}_{\u}; x^{(t_i)}) is a smooth and convex function, and its gradient with respect to θ^{(t)}_{\u} can be computed simply as

    ∇L(θ^{(t)}_{\u}) = −Σ_{i=1}^{n} w^{(t)}(t_i) · 2 x^{(t_i)}_u x^{(t_i)}_{\u} (1 − P_{θ^{(t)}_{\u}}(x^{(t_i)}_u | x^{(t_i)}_{\u})).

The key idea of a projected gradient method is to update the parameter along the negative gradient direction. After the update, if the parameter lies outside the region Ω, it is projected back into Ω; otherwise, we move to the next iteration. The essential step in the algorithm is the efficiency with which we can project the parameter onto the region Ω:

    Π_Ω(a) := argmin_{b ∈ Ω} ‖a − b‖,

i.e. the Euclidean projection of a vector a onto the region Ω. We employed an approach by Duchi et al. (2008) which involves only simple operations, such as sorting and thresholding, for this projection step.
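The ℓ1-ball projection can be implemented with a sort and a soft-threshold. The sketch below follows the standard sort-and-threshold scheme in the spirit of Duchi et al. (2008); it is an illustration, not the authors' code.

```python
# Sketch: Euclidean projection of a vector onto the l1-ball {b : ||b||_1 <= z},
# via sorting and soft-thresholding (in the spirit of Duchi et al., 2008).
import numpy as np

def project_l1_ball(a, z):
    if np.abs(a).sum() <= z:
        return a.copy()                      # already inside the ball
    u = np.sort(np.abs(a))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.max(np.nonzero(u - (css - z) / ks > 0)[0])
    theta = (css[rho] - z) / (rho + 1.0)     # soft-threshold level
    return np.sign(a) * np.maximum(np.abs(a) - theta, 0.0)
```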
Algorithm 1 gives a summary of the projected gradient method for the optimization problem in Equation (3). Note that the projected gradient algorithm has several internal parameters (the step-size and line-search constants α and σ), which, in our experiments, we set to typical values given in the literature (Bertsekas, 1999).
Statistical property
The main topic we discuss here is whether the algorithm described in Section 2.2 can estimate the true underlying time-evolving network correctly. In order to study the statistical guarantees of our algorithm, we need to take three aspects into account. First, a genome-wide reverse engineering task can involve tens of thousands of genes, while the number of observations in a time series can be quite limited (hundreds at most). Therefore, it is important to study the case in which the dimension p scales with respect to the sample size n but still allows for recovery of the networks. Second, the time-evolving nature of the networks adds extra complication to the estimation problem, so we have to take the amount of change between adjacent networks, C_θ, into account. Third, the intrinsic properties of the interactions between genes will also affect the correct recovery of the networks. Intuitively, the more complicated the interactions, the more difficult it is to recover the networks, e.g. when each gene interacts with a large fraction of the other genes. In other words, the maximum size of the neighborhood of a gene, C_N := max_{u∈V} |N(u)|, is also a deciding factor. To our knowledge, none of the earlier methods (Basso et al., 2005; Friedman et al., 2000; Ong, 2002) provide a statistical guarantee for the recovered networks or are amenable to such analysis.
In contrast, the method we presented in Section 2.2 is highly amenable to a rigorous statistical analysis. Statistical guarantees have been provided for estimating static networks under the model in Equation (1) (Wainwright et al., 2006), and we can extend them to the time-varying case. A detailed proof of a similar result for our approach is beyond the scope of this article and deserves a full treatment in a separate paper. At a high level, we can show that under a set of suitable conditions on the model, C_θ, C_N, h_n and λ, with high probability, we can recover the true underlying time-evolving network even when the number of genes p is exponential in the number of observations n [for details of the proof, see M. Kolar and E. Xing (submitted for publication)]. A different analysis has been provided for time-varying Gaussian graphical models (Zhou et al., 2008), in which the consistency of the interaction strengths is addressed, but not the consistency of the network topology.
Parameter tuning
The regularization parameter λ controls the sparsity of the estimated networks. Large values of λ result in sparse networks, while small values result in dense networks that have a higher log-likelihood, but more degrees of freedom. We employ the Bayesian Information Criterion (BIC) for choosing a λ that trades off between the fit to the data and the model complexity. More specifically, we use an average of the BIC score defined below for each time point t and each gene u:

    BIC(t, u) := Σ_{i=1}^{n} w^{(t)}(t_i) ℓ(θ̂^{(t)}_{\u}; x^{(t_i)}) − (log n / 2) · Nz(θ̂^{(t)}_{\u}),

where Nz(·) counts the number of non-zero entries in θ̂^{(t)}_{\u}. Then the final score is BIC := (1 / (n|V|)) Σ_{u∈V} Σ_{j=1}^{n} BIC(t_j, u). A larger BIC score implies a better model.
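A sketch of the score for one (t, u) pair, following the fit-minus-complexity form above, is shown below; the exact penalty constant is our reading of the standard BIC, and the variable names are illustrative.

```python
# Sketch: weighted BIC score for one gene u at one time point t.
# `loglik_per_obs` holds the per-observation log-likelihood under Eq. (2)
# at the fitted theta_hat; `weights` are the kernel weights w_t(t_i).
import numpy as np

def bic_score(loglik_per_obs, weights, theta_hat):
    fit = np.dot(weights, loglik_per_obs)    # weighted log-likelihood
    penalty = 0.5 * np.log(len(weights)) * np.count_nonzero(theta_hat)
    return fit - penalty                     # larger is better
```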
The bandwidth parameter h_n controls the smoothness of the change in the time-evolving networks. Using wide bandwidths effectively incorporates more observations into estimating each network snapshot, but it also risks missing sharp changes in the network; using narrow bandwidths makes the estimate more sensitive to sharp changes, but also subject to larger variance due to the reduced effective sample size. In this article, we use a heuristic for tuning the initial scale of the bandwidth parameter h_n: we first form a matrix (d_{ij}) with entries d_{ij} := (t_i − t_j)², i, j ∈ {1, ..., n}. Then the scale of the bandwidth parameter is set to the median of the entries of (d_{ij}). Intuitively, the bandwidth parameter reflects the characteristic interval between time points. In our simulation experiments, we find that this heuristic provides a good initial guess for h_n, and it is quite close to the value obtained via a more exhaustive search.
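The median heuristic is equally simple to code; the sketch below takes the median over the off-diagonal squared distances (one natural reading of 'the entries in (d_{ij})').

```python
# Sketch: median heuristic for the initial bandwidth h_n.
import numpy as np

def initial_bandwidth(time_points):
    d = (time_points[:, None] - time_points[None, :]) ** 2
    return np.median(d[np.triu_indices_from(d, k=1)])  # off-diagonal pairs
```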
EXPERIMENTS
In this section, we use synthetic data to demonstrate the advantage of estimating a time-evolving network, and we use data collected from Drosophila to show that our method, KELLER, can estimate a biologically plausible time-evolving network and reveal some interesting properties of the dynamic interactions between genes.
Recovering synthetic networks
In this section, we compare KELLER with structural learning of a DBN (Friedman et al., 2000) and with ℓ1-regularized logistic regression for static network estimation, using synthetic networks. Note that ℓ1-regularized logistic regression can be obtained from KELLER: we only need to apply a uniform weight w(t_i) = 1/n to all observations and estimate a single network using the same objective as Equation (5). We evaluate the estimation procedures using an F1 score, which is the harmonic mean of precision (Pre) and recall (Rec), i.e. F1 := 2 · Pre · Rec / (Pre + Rec). Precision is calculated as Pre := (1/n) Σ_{i=1}^{n} |Ê^{(t_i)} ∩ E^{(t_i)}| / |Ê^{(t_i)}|, and recall as Rec := (1/n) Σ_{i=1}^{n} |Ê^{(t_i)} ∩ E^{(t_i)}| / |E^{(t_i)}|. The F1 score is a natural choice of performance measure as it balances precision and recall; only when both precision and recall are high can F1 be high. Furthermore, we use an initial bandwidth parameter h_n as explained in Section 2.5, then search over a grid of parameters (10^{−0.5:0.1:0.5} for λ and h_n × [0.5, 1, 2, 5, 10, 50] for the bandwidth), and finally choose the combination that optimizes the BIC criterion defined in Section 2.5. When estimating the static network, we use the same range to search for λ.
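A direct implementation of these scores over edge sets might look like the following sketch, where each time point's edges are represented as a set of unordered gene pairs.

```python
# Sketch: averaged precision, recall and F1 between estimated and true edge
# sets; each element of `estimated`/`true` is a set of frozenset({u, v}) edges.
def edge_f1(estimated, true):
    n = len(estimated)
    pre = sum(len(e & t) / max(len(e), 1) for e, t in zip(estimated, true)) / n
    rec = sum(len(e & t) / max(len(t), 1) for e, t in zip(estimated, true)) / n
    return 2 * pre * rec / (pre + rec) if (pre + rec) > 0 else 0.0
```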
The recovery results for the overall time-evolving network, and for the dynamic and static edges, are presented in Figure 1. From the plots, we can see that estimating a static network does not benefit at all from an increasing number of i.i.d. observations. In contrast, estimating a time-varying network always achieves better performance, and its performance also increases as more observations become available. Note that these results are not surprising, since our time-varying network model better fits the data-generating process. As time-evolving networks occur very often in biological systems, we expect our method to also have significant advantages in practice.
Recovering time-evolving interactions between genes in D.melanogaster
Over the developmental course of D.melanogaster, there exist multiple underlying 'themes' that determine the functionalities of each gene and their relationships to each other, and such themes are dynamic and stochastic. As a result, the gene-regulatory networks at each time point are context-dependent and can undergo systematic rewiring, rather than being invariant over time. In this section, we use KELLER to reverse engineer the dynamic interactions between genes of D.melanogaster based on a time series of expression data measured during its full life cycle. We use the microarray gene expression measurements collected by Arbeitman et al. (2002) as our input data. In that experiment, the expression levels of 4028 genes are simultaneously measured at various developmental stages. In particular, 66 time points are chosen during the full developmental cycle of D.melanogaster, spanning four different stages, i.e. embryonic (time points 1-30), larval (time points 31-40), pupal (time points 41-58) and adult (time points 59-66). In this study, we focused on 588 genes that are known to be related to the developmental process based on their gene ontologies. We use a regularization parameter of 10^{−2} and a bandwidth parameter of 0.5 × h_n in this experiment (h_n is the median distance as explained in Section 2.5).
In Figure 3a, we plot two different statistics of the reverse engineered gene-regulatory networks as a function of the developmental time point (1-66). The first statistic is the network size as measured by the number of edges; the second is the average local clustering coefficient as defined by Watts and Strogatz (1998). The first statistic measures the overall connectedness of the networks, while the latter measures the average connectedness of the neighborhood local to each gene. For comparison, we normalize both statistics to the range [0,1]. It can be seen that the network size and the local clustering coefficient follow very different trajectories during the developmental cycle. The network size exhibits a wave structure featuring two peaks at the mid-embryonic stage and the beginning of the pupal stage. A similar pattern of gene activity has also been observed by Arbeitman et al. (2002). In contrast, the clustering coefficients of the time-evolving networks drop sharply after the mid-embryonic stage, and they stay low until the start of the adult stage. One explanation is that at the beginning of the developmental process, genes have a more fixed and localized function, and they mainly interact with other genes with similar functions; however, after the mid-embryonic stage, genes become more versatile and involved in more diverse roles to serve the needs of rapid development; as the organism turns into an adult, its growth slows down and each gene may be restored to its more specialized role.
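Both statistics are standard graph measures; a minimal sketch using networkx, with hypothetical per-time-point edge lists and min-max normalization to [0,1], is given below.

```python
# Sketch: per-time-point network size and average local clustering coefficient
# (Watts and Strogatz, 1998), min-max normalized for plotting as in Fig. 3a.
import networkx as nx
import numpy as np

def network_statistics(edge_sets):
    sizes, clustering = [], []
    for edges in edge_sets:                  # one list of (u, v) pairs per t
        g = nx.Graph(edges)
        sizes.append(g.number_of_edges())
        clustering.append(nx.average_clustering(g))
    norm = lambda a: (np.asarray(a) - min(a)) / (max(a) - min(a))
    return norm(sizes), norm(clustering)
```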
To illustrate how the network properties change over time, we visualize two networks from mid-embryonic stage (time point 15) and mid-pupal stage (time point 45) in Figure 3b and 3c respectively. Although the size of the two networks are comparable, we can see that there are much clearer local clusters of interacting genes during mid-embryonic stage. To provide a better view of the evolving nature of these clusters, we cluster genes based on the network at time point 1 using spectral clustering, and visualize the gradual disappearance of these clusters in Figure 2. Note that our visualization does not indicate that genes do not form clusters in later developmental stage. Genes may cluster under different groupings, but these clusters cannot be revealed by the visualization since the positions of the genes have been fixed in the visualization.
To judge whether the learned networks make sense biologically, we zoom into three groups of genes functionally related to different stages of the developmental process. In particular, the first group (30 genes) is related to embryonic development; the second group (27 genes) is related to post-embryonic development; and the third group (25 genes) is related to muscle development. (The genes are assigned to their respective groups according to their ontology labels.) We use interactivity, which is the total number of edges a group of genes is connected to, to describe the activity of each group of genes. In Figure 4, we plotted the time courses of interactivity for the three groups, respectively. For comparison, we normalize all scores to the range [0,1]. We see that the time courses have a nice correspondence with their supposed roles. For instance, embryonic development genes have the highest interactivity during the embryonic stage, and post-embryonic genes increase their interactivity during the larval and pupal stages. The muscle development genes are less
specific to certain developmental stages, since they are needed across the developmental cycle. However, we see increased activity as the organism approaches its adult stage, where muscle development becomes increasingly important. The estimated networks also recover many known interactions between genes. In recovering these known interactions, the time-evolving networks also provide additional information as to when interactions occur during development. In Table 1, we list the recovered known interactions and the precise times at which they occur. [Table 1 caption: Each cell in the plot corresponds to one gene-pair interaction at one specific time point. The cells in each row are ordered according to their time point, ranging from the embryonic stage (E) to the larval stage (L), to the pupal stage (P) and to the adult stage (A). Cells colored blue indicate that the corresponding interaction listed in the right column is present in the estimated network; blank cells indicate that the interaction is absent.] This also provides a way to check whether the learned networks are biologically plausible given prior knowledge of the actual occurrence of gene interactions. For instance, the interaction between genes msn and dock is related to the regulation of embryonic cell shape and correct targeting of photoreceptor axons. This is consistent with the timeline provided by the time-evolving networks. A second example is the interaction between genes sno and Dl, which is related to the development of the compound eyes of Drosophila. A third example is between genes caps and Chi, which are related to wing development during the pupal stage. What is most interesting is that the time-evolving networks provide timelines for many other gene interactions that have not yet been verified experimentally. This information will be a useful guide for future experiments. We further study the relations between 130 transcription factors (TFs). The network contains several clusters of transcriptional cascades, and we present in detail the largest TF cascade, involving 36 TFs (Fig. 5). This cascade of TFs is functionally very coherent, and many TFs in this network play important roles in nervous system and eye development. For example, Zn finger homeodomain 1 (zfh1), brinker (brk), charlatan (chn), decapentaplegic (dpp), invected (inv), forkhead box, subgroup O (foxo), Optix, eagle (eg), prospero (pros), pointed (pnt), thickveins (tkv), extra macrochaetae (emc), lilliputian (lilli) and doublesex (dsx) are all involved in nervous system and eye development. Besides functional coherence, the networks also reveal the dynamic nature of gene regulation: some relations are persistent across the full developmental cycle, while many others are transient and specific to certain stages of development. For instance, five TFs, brk-pnt-zfh1-pros-dpp, form a long cascade of regulatory relations which are active across the full developmental cycle. Another example is the gene Optix, which is active across the full developmental cycle and serves as a hub for many other regulatory relations. As for the transience of the regulatory relations, TFs to the right of the Optix hub reduce their activity as development proceeds to later stages. Furthermore, Optix connects two disjoint cascades of gene regulation to its left and right side after the embryonic stage.
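The interactivity measure used above reduces to counting edges that touch the group; a minimal sketch:

```python
# Sketch: "interactivity" of a gene group at one time point, i.e., the total
# number of estimated edges incident to any gene in the group.
def interactivity(edges, group):
    """edges: iterable of (u, v) pairs; group: set of gene identifiers."""
    return sum(1 for u, v in edges if u in group or v in group)
```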
The time-evolving networks also provide an overview of the interactions between genes from different functional groups. In Figure 6, we grouped genes according to 58 ontologies and visualized the connectivity between groups. We can see that large topological changes and network rewiring occur between functional groups. Besides expected interactions, the figure also reveals many seemingly unexpected interactions. For instance, during the transition from pupa stage to adult stage, Drosophila is undergoing a huge metamorphosis. One major feature of this metamorphosis is the development of the wing. As can be seen from Figure 6r and s, genes related to metamorphosis, wing margin morphogenesis, wing vein morphogenesis and apposition of wing surfaces are among the most active groups of genes, and they carry their activity into adult stage. Actually, many of these genes are also very active during early embryonic stage (for example, Fig. 6b and c); though the difference is that they interact with different groups of genes. On one hand, the abundance of transcripts from these genes at embryonic stage is likely due to maternal deposit (Arbeitman et al., 2002); on the other hand, this can also be due to the diverse functionalities of these genes. For instance, two genes related to wing development, held out wings (how) and tolloid (td), also play roles in embryonic development.
CONCLUSION
Numerous algorithms have been developed for inferring biological networks from high-throughput experimental data, such as microarray profiles (Ong, 2002;Segal et al., 2003), ChIP-chip genome localization data (Bar-Joseph et al., 2003;Harbison et al., 2004;Lee et al., 2002) and PPI data (Causier, 2004;Giot et al., 2003;Kelley et al., 2004;Uetz et al., 2000), based on formalisms such as graph mining (Tanay et al., 2004), Bayesian networks (Cowell et al., 1999) and DBN (Friedman et al., 2000;Kanazawa et al., 1995). However, most of this vast literature focused on modeling static network or time-invariant networks, and much less has been done towards modeling the dynamic processes underlying networks that are topologically rewiring and semantically evolving over time. The method presented in this article represents a successful and practical tool for genome-wide reverse engineering dynamic interactions between genes based on their expression data.
Given the rapid expansion of categorization and characterization of biological samples and improved data collection technologies, we expect collections of complex, high-dimensional and feature-rich data from complex dynamic biological processes, such as cancer progression, immune response and developmental processes, to continue to grow. Thus, we believe our new method, KELLER, is a timely contribution that can narrow the gap between imminent methodological needs and the available data, and offer a deeper understanding of the mechanisms and processes underlying biological networks.

[Fig. 6 caption: (a) Average network. Each color patch denotes an ontological group, and the positions of these ontological groups remain the same from (a) to (u). The annotation in the outer rim indicates the function of each group. Interactions between gene ontological groups related to the developmental process undergo dynamic rewiring. The weight of an edge between two ontological groups is the total number of connections between genes in the two groups; in the visualization, the width of an edge is proportional to its weight. Edge weights are thresholded at 30 in (b)-(u) so that only interactions exceeding this number are displayed. The average network in (a) is produced by averaging the networks underlying (b)-(u); in this case, the threshold is set to 20 instead.]
Thermal hypesthesia in patients with complex regional pain syndrome related dystonia
The quantitative thermal test showed cold and warmth hypesthesia without increased heat pain sensitivity in the affected limbs of complex regional pain syndrome (CRPS) patients with tonic dystonia (n = 44) in comparison with healthy controls with a similar age and sex distribution (n = 35). The degrees of cold and warmth hypesthesia were strongly correlated. We conclude that dysfunction in small nerve fiber (i.e., C and Aδ) processing is present in patients with CRPS-related dystonia.
Introduction
Complex regional pain syndrome (CRPS) is characterized by various combinations of sensory, autonomic and motor disturbances and is usually preceded by a trauma. Patients with CRPS often experience spontaneous pain along with allodynia, hyperalgesia and hyperesthesia (Janig and Baron 2003; Veldman et al. 1993). In addition, negative sensory phenomena, such as hypesthesia and hypalgesia, may be present, especially in chronic cases with longer disease duration (Birklein et al. 2000; Janig and Baron 2003; Rommel et al. 1999; van Hilten et al. 2001). Autonomic signs include changes in skin temperature and color, and hyperhidrosis (Janig and Baron 2003; Veldman et al. 1993). About 25% of patients develop movement disorders, especially dystonia (Bhatia et al. 1993; Schwartzman and Kerrigan 1990; van Hilten et al. 2005). In contrast to the twisting and repetitive movements generally encountered in primary dystonia, dystonia in CRPS is typically characterized by fixed flexion postures of the distal extremities. Two types of CRPS are generally distinguished, depending on the presence (CRPS-2) or absence (CRPS-1) of major nerve damage (Merskey and Bogduk 1994).
In primary dystonia, there is compelling evidence of altered sensory processing (Tinazzi et al. 2009), including abnormalities in temporal and spatial discrimination and vibration-induced illusion of movement, as well as in higher-order sensory processing. In CRPS-related dystonia, sensory integration of proprioceptive afferent input was found to be normal (van Rijn et al. 2009b). By definition, there is no clear involvement of large nerve fibers in CRPS-1. Until now, the function of the small nerve fibers (i.e., C and Aδ), as opposed to large nerve fiber function, has not been studied in this type of dystonia.
The quantitative thermal test is a non-invasive clinical test which assesses the function of small fibers and their central connections (Verdugo and Ochoa 1992; Yarnitsky 1997). The technique quantifies temperature sensation by testing minimally detectable temperature changes ('thresholds') for cold (CDT) and warmth detection (WDT), as well as for heat-induced (HPT) and cold-induced pain (CPT).
We hypothesized that small nerve fiber dysfunction, in contrast to large nerve fiber dysfunction, is present in CRPS-related dystonia. Although disturbances in temperature sensation have previously been shown in CRPS patients without dystonia, their presence in those with dystonia is unknown. In this study we applied the quantitative thermal test to evaluate C and Aδ fiber dysfunction in CRPS patients with dystonia. Since these patients may sometimes have three or even four affected extremities, and because an unaffected extremity may be involved at a subclinical level, we chose to compare results primarily with those of healthy controls. Whenever possible, comparisons were also made between affected and unaffected sides.
Patients and methods
We studied 44 consecutive CRPS-1 patients (41 women; mean age ± SD: 36 ± 13 years; mean disease duration ± SD: 10 ± 6 years) who were candidates for a study on intrathecal baclofen treatment (Table 1). This study was published in detail elsewhere (van Rijn et al. 2009a). CRPS was diagnosed according to the diagnostic criteria for CRPS-1 of the International Association for the Study of Pain (Merskey and Bogduk 1994). Patients with peripheral neuropathy were excluded. Severity of pain was evaluated with a numeric rating scale (NRS) ranging from 0 (no pain) to 10 (worst imaginable pain). Severity of dystonia was assessed with the Burke-Fahn-Marsden (BFM) dystonia rating scale (Burke et al. 1985), which ranges from 0 to 120 with higher scores reflecting more severe dystonia.
For control purposes, 35 healthy controls (all women) with a similar age distribution (mean ± SD: 40 ± 13 years), who had no diseases of the nervous system and did not receive any neuroactive drugs, were also investigated. Controls were partners, relatives or friends of patients, or were recruited among the hospital staff. We used the data from the non-dominant control limbs in the primary analyses, because we hypothesized that if there were any differences in sensory acuity between the two sides, they would involve the non-dominant side. Informed consent was obtained from all subjects according to the Declaration of Helsinki. The study protocol was approved by the Institutional Review Board of the Leiden University Medical Center.
Quantitative thermal test
A TSA-II NeuroSensory Analyzer (Medoc Ltd., Ramat Yishai, Israel) was used to determine CDT, WDT, and HPT of both hands (thenar eminence) and both feet (dorsal aspect of the first metatarsal bone). CPT was not tested, to minimize discomfort. These tests were performed by trained technicians in a quiet room at a temperature of 20-22°C. Subjects were measured in the supine position and were not allowed to watch the computer screen. The 'method of levels' algorithm was used, in which the thermode returns to its baseline temperature (32°C) after each temperature change. After each stimulus period, subjects were asked whether a (painful) change had been perceived. The amplitude of the next temperature change was based on the response given after a stimulus: when no change of temperature had been perceived, the temperature change for the next step was doubled; if a change had been perceived, the amplitude for the next step was halved. The procedure was continued until the step size reached 0.1°C. To alert the subject that a stimulus was imminent, each stimulus was preceded by an auditory cue. Lower and upper temperature limits were 15.0°C and 50.0°C, respectively; the rate of temperature change was 1.0°C/s (CDT, WDT) and 4.0°C/s (HPT); stimulus duration was 5 s; the return rate was 10°C/s; and the interstimulus interval was 5 s (CDT, WDT) and 9 s (HPT).
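As an illustration of the staircase logic, here is a minimal Python sketch of a 'method of levels' run consistent with the description above. The perceives callable is a hypothetical stand-in for the subject's yes/no report, and the exact control loop of the TSA-II may differ; the sketch only mirrors the stated rule that the step is doubled after a missed stimulus, halved after a detected one, and the run stops once the step size falls below 0.1°C.

def method_of_levels(perceives, baseline=32.0, start_amp=1.0,
                     start_step=1.0, min_step=0.1, lo=15.0, hi=50.0):
    # Estimate a detection threshold as a temperature change from the
    # 32°C baseline. `perceives(temp)` is a hypothetical stand-in for
    # the subject's yes/no report; the thermode is assumed to return to
    # baseline between stimuli. For cold detection the amplitude would
    # be probed downward instead of upward.
    amp, step = start_amp, start_step
    for _ in range(64):                    # safety guard for the sketch
        if step < min_step:                # stop once step reaches 0.1°C
            break
        temp = min(max(baseline + amp, lo), hi)   # device limits
        if perceives(temp):
            step /= 2.0                    # perceived: halve, probe lower
            amp -= step
        else:
            step *= 2.0                    # missed: double, probe higher
            amp += step
    return amp

# Simulated subject whose true warmth detection threshold is 1.3°C:
wdt = method_of_levels(lambda t: t - 32.0 >= 1.3)   # returns ~1.31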
Statistical analysis
The data were not distributed normally (Kolmogorov-Smirnov statistics for raw and log-transformed CDT and WDT data, and raw HPT data, P < 0.05) and therefore non-parametric tests were used. The significance threshold was set at P < 0.05. For all tests, the SPSS software package version 14.0 (SPSS Inc., Chicago, IL) was used.

[Table 1 legend: BFM, Burke-Fahn-Marsden dystonia rating scale (range 0-120, with 0 = no dystonia) (Burke et al. 1985); CRPS, complex regional pain syndrome; F, female; IQR, interquartile range; M, male; NRS, numeric rating scale (range 0-10, with 0 = no pain).]
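By way of illustration, the sketch below reruns this pipeline (normality check, patient-control comparison, and the CDT-WDT correlation reported in the results) in Python with scipy on simulated placeholder arrays; the study data are not public and the analyses were actually performed in SPSS, and standardizing before the Kolmogorov-Smirnov test is a simplification of SPSS's procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cdt_patients = rng.lognormal(0.8, 0.6, 37)    # placeholder thresholds, °C
wdt_patients = cdt_patients * rng.normal(1.5, 0.3, 37)
cdt_controls = rng.lognormal(0.2, 0.4, 35)

# Kolmogorov-Smirnov normality check; P < 0.05 indicates departure
# from normality, motivating non-parametric tests.
z = (cdt_patients - cdt_patients.mean()) / cdt_patients.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, 'norm')

# Patient-versus-control comparison (Mann-Whitney U, two-sided).
u_stat, p_group = stats.mannwhitneyu(cdt_patients, cdt_controls,
                                     alternative='two-sided')

# Spearman rank correlation between CDT and WDT within patients.
rho, p_rho = stats.spearmanr(cdt_patients, wdt_patients)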
Patients versus controls
Thermal thresholds were evaluated in 37 hands of 28 patients, and in 48 feet of 37 patients; testing at the other sites was not feasible due to dystonia or pain. The CDT and WDT were abnormal in the patients' affected limbs in comparison with the controls' non-dominant limbs (Table 2). There was a strong positive correlation between CDT and WDT in patients (Spearman rho = 0.66, P < 0.001) and a trend towards a significant association in controls (Spearman rho = 0.33, P = 0.05). HPT did not differ between patients and controls (HPT hand: P = 0.50, HPT foot: P = 0.53). Compared with the non-dominant limbs of controls, CDT and WDT of patients' unaffected limbs were increased, although the difference was not significant (Table 2). There were no significant differences in thresholds between non-dominant and dominant limbs in controls (data not shown).
Within and between patients comparisons
Nine patients had one affected arm, and nine had one affected leg (Table 1). The affected limbs showed elevated CDT and WDT in comparison with their unaffected counterparts, but the difference was only significant for WDT in the hands (Table 2).
Relations between clinical characteristics and thermal thresholds
There was no significant correlation between the severity of pain (NRS) and any threshold (data not shown), nor between dystonia (BFM) and any threshold. Although disease duration varied considerably between patients, none of the thresholds showed significant associations with this variable. There were no significant differences in thermal thresholds between patients who used analgesics versus those who did not.
Discussion
Although thermal thresholds have previously been examined in CRPS patients without dystonia (Birklein et al. 2000; Huge et al. 2008; Kemler et al. 2000; Rommel et al. 2001), this issue has not been addressed in CRPS patients with dystonia. These earlier studies have yielded variable findings that are most likely explained by differences in applied methods and population characteristics. The general picture that arises from these studies is that CDT and WDT are elevated (i.e., reflecting thermal hypesthesia) in patients with disease durations up to 4 years, with the possible exception of CDT in patients with short disease duration (6 months); findings on CPT and HPT are contradictory. In the present study we found cold and warmth hypesthesia together with normal HPT in the affected arms and legs of CRPS patients with dystonia.
Thermal hypesthesia may be caused by disturbances at multiple levels of the nervous system. First, small fiber pathology has been demonstrated in CRPS (Albrecht et al. 2006; Oaklander et al. 2006; van der Laan et al. 1998) and may explain our findings. In addition, it is known that impairment of C and Aδ fibers typically leads to thermal hypesthesia while sparing heat-induced pain, due to differences in spatial summation requirements (Verdugo and Ochoa 1992). Second, C fiber activation by capsaicin injection elicited reversible tactile hyperalgesia and hypesthesia not only at the site of injection, but also in the adjacent tissue (Magerl and Treede 2004). This was attributed to rerouting of somatosensory input from non-nociceptive into nociceptive pathways in the spinal dorsal horn. Therefore, plasticity-related changes of sensory processing at the spinal level may also be an explanation for our findings. Third, in a population of 40 CRPS patients with one affected extremity, neurological examination showed hemisensory deficits including the face in 15 (38%) (Rommel et al. 2001). The authors suggested that functional changes in the thalamus may play an important role in the pathogenesis of sensory abnormalities. Fourth, a shrunken representation area of the affected hand was found in the primary somatosensory cortex of CRPS patients (Juottonen et al. 2002; Maihofner et al. 2003; Pleger et al. 2004). Reduced activation of the contralateral primary and secondary somatosensory cortex after tactile stimulation has also been reported in CRPS (Pleger et al. 2004), and similar cortical changes may underlie thermal hypesthesia.

[Table 2 legend: CDT, cold detection threshold (difference from baseline temperature); CRPS, complex regional pain syndrome; HPT, heat-induced pain threshold; WDT, warmth detection threshold (difference from baseline temperature); ΔT, difference from baseline temperature. (a) Most patients were excluded because they had two affected hands or two affected feet; the number of patients differs slightly from Table 1 because testing was impossible in two patients due to dystonia or pain (both for hands and feet).]
In conclusion, we found thermal hypesthesia in CRPS patients with dystonia. Apparently, dysfunction in small nerve fiber (i.e., C and Aδ) processing is present in these patients. Whether this sensory abnormality is a secondary phenomenon or is in fact involved in the causal pathway to dystonia is uncertain.
For a further understanding, clinical studies on the efficacy of sensory rehabilitation in CRPS-related dystonia are warranted.
Influence of the environment on the characteristics of asthma
Few studies have compared the prevalence of asthma in urban and rural settings or explored the issue of whether these two manifestations of the disease may represent different phenotypes. The aim of this study was (a) to establish whether the prevalence of asthma differs between rural and urban settings, and (b) to identify differences in the clinical presentation of asthma in these two environments. This was a descriptive epidemiological study involving individuals aged 18 or over from a rural (n = 516) and an urban population (n = 522). In the first phase, individuals were contacted by letter in order to organize the administration of a first validated questionnaire (Q1) designed to establish the possible prevalence of bronchial asthma. In the second phase, patients who had presented association patterns in the set of variables related to asthma in Q1 completed a second validated questionnaire (Q2), designed to identify the characteristics of asthma. According to Q1, the prevalence of asthma was 15% (n = 78) and 11% (n = 59) in the rural and urban populations, respectively. Sixty-five individuals with asthma from the rural population and all 59 individuals from the urban population were contacted and administered Q2. Thirty-seven per cent of the individuals surveyed had previously been diagnosed with bronchial asthma (35% in the rural population and 40% in the urban setting). In the urban asthmatic population there was a predominance of women, a greater personal history of allergic rhinitis and a greater family history of allergic rhinitis and/or eczema. Asthma was diagnosed in adulthood in 74.8% of the patients, with no significant differences between the two populations. Regarding symptoms, cough (morning, daytime and night) and expectoration were more frequent in the urban population. The prevalence of asthma does not differ between urban and rural settings. The differences in exposure that characterize each environment may lead to different manifestations of the disease and may also affect its severity.
Bronchial asthma is one of the most prevalent chronic diseases, with more than 350 million affected people in the world 1. Its prevalence varies between countries 2 and also between rural and urban areas, although in the latter case the results are inconsistent 3. In rural areas, it has been proposed that exposure to a greater number of infectious agents and endotoxins from nearby farms may prevent the onset of asthma 4, especially in children; however, in the adult population these same exposures may aggravate existing asthma 5. In contrast, in large cities, exposure to smoking and air pollution may predispose to a higher prevalence of asthma 6. It has also been postulated that the difference in prevalence may be due to differences in accessibility to health resources 3.
Few studies have compared the prevalence of asthma in urban or rural settings 7-10, and even fewer have sought to establish whether the clinical presentation differs in these environments and whether they may actually represent two different phenotypes of the disease. Currently, asthma patients tend to be grouped according to whether they present a T2 or a non-T2 response 11. In general, two phenotypes can be distinguished within the T2 response: one allergic, in which Th2 response mechanisms predominate, and the other eosinophilic, in which the response is mediated by ILC2s 12. The non-T2 response encompasses patients with neutrophilic inflammation or without apparent inflammation (known as paucigranulocytic asthma) 13. Different mechanisms could explain neutrophilic inflammation in patients with asthma. Some studies have shown a possible activation of the Th17 pathway 14,15, while others propose a dysregulation of the innate immune response associated with IL-1β or CXCR2 16. It has also been proposed that in patients in whom bronchial remodeling has led to the appearance of bronchiectasis, bacterial colonization may increase the number of neutrophils in the airways 17, or that corticosteroid treatment itself, which reduces the number of eosinophils, facilitates this neutrophilic inflammation 18. Finally, regardless of whether the response is T2 or non-T2, it has also been postulated that there may be a mixed Th2/Th17 response 19.
Whether an individual has one type of asthma or another basically depends on the interaction between genetics and the environment to which they are exposed 20. In this regard, the different exposures to which individuals living in rural or urban areas may be subject may lead to different forms of presentation of asthma. The objective of the present study is twofold: first, to establish whether there are differences in the prevalence of asthma between rural and urban settings and, second, to record any differences in the clinical presentation of the disease in these two environments.
Methods
Study population. One rural and one urban population were studied. The rural population consisted of all individuals over 18 years old living in Ribes de Freser, a mountain town in the Eastern Pyrenees; the group comprised 1,760 inhabitants (883 men/877 women), of whom 1,541 were over 18 years of age at the time of the study. The urban population consisted of 1,500 randomly selected individuals over 18 years of age from Horta-Guinardó, a district in the city of Barcelona with 170,249 inhabitants (Fig. 1). The district of Horta-Guinardó has 11 neighborhoods; for the randomization of the population of this area, 140 questionnaires were introduced at random into residential mailboxes in each of these neighborhoods to ensure an adequate representation of the area as a whole.
The study was approved by the local Ethics Committee (Hospital Vall d'Hebron Ethics Committee approval PR(AG)367/2011) and all subjects signed informed consent prior to participation. All methods were performed in accordance with the relevant guidelines and regulations.
Design of the study
Descriptive, epidemiological study carried out in two phases. First phase. A questionnaire for respiratory symptom screening (Q1), previously published by the group 21, was sent by post to the members of both populations. They were asked to complete it and return it to the investigators by prepaid postage. This questionnaire included questions on symptoms extracted from the ECRHS survey 22. In the rural population, the town council was responsible for sending the questionnaire to all inhabitants over 18. In the urban setting, 1,500 questionnaires were randomly introduced into the mailboxes of homes in the district. Briefly, this earlier study 21 used multiple correspondence analysis 23 to assess the association patterns in the set of variables related to respiratory symptoms [(a), (f), (i), (j), (k), (l), and (m)]. Asthma was defined on the basis of an affirmative answer to at least one of three questions: (a) Has a doctor ever told you that you have asthma?, (f) Have you had an asthma attack in the last 12 months?, or (m) Have you taken any asthma medication in the last 12 months? Chronic bronchitis was defined on the basis of a positive response to questions (k) Do you usually cough most days for at least three months of the year? and/or (l) Do you cough up phlegm during at least three months a year?, together with negative responses to the three asthma questions (a), (f) and (m). Rhinitis was established in the case of a positive answer to questions (c) Has a doctor ever told you that you have rhinitis? and/or (g) Have you had allergic rhinitis in the last 12 months?, and dermatitis in the case of a positive answer to questions (b) Has a doctor ever told you that you have dermatitis? and/or (h) Have you had eczema or skin allergies in the last 12 months?
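A compact restatement of these classification rules, assuming each Q1 response is encoded as a dictionary of yes/no answers keyed by question letter (a hypothetical encoding; the published analysis applied multiple correspondence analysis to the full variable set):

def classify_q1(ans):
    # Rules as stated in the text; `ans` maps question letters to booleans.
    asthma = ans.get('a') or ans.get('f') or ans.get('m')
    chronic_bronchitis = (ans.get('k') or ans.get('l')) and not asthma
    rhinitis = ans.get('c') or ans.get('g')
    dermatitis = ans.get('b') or ans.get('h')
    return {'asthma': bool(asthma),
            'chronic_bronchitis': bool(chronic_bronchitis),
            'rhinitis': bool(rhinitis),
            'dermatitis': bool(dermatitis)}

# Example: doctor-diagnosed asthma plus asthma medication in the last year.
print(classify_q1({'a': True, 'm': True, 'k': False}))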
In the Q1 questionnaire, patients were asked for their consent to participate in the second phase. They were not informed of the main hypothesis of the study, that is, the possible association between environmental exposure and respiratory symptoms. Second phase. The individuals who agreed to participate in this second phase and who were diagnosed with possible bronchial asthma on the basis of the Q1 questionnaire were contacted by telephone and administered a second questionnaire (Q2) designed to identify the characteristics of their asthma. This questionnaire, adapted from the European Community Respiratory Health Survey II (ECRHS II) 24, focused especially on patients' general characteristics and symptoms, exposure at work, in the home or in the environment, and the relationship of symptoms with these forms of exposure. The interviews were conducted by pulmonologists who are experts in asthma at the Vall d'Hebron University Hospital.
Statistical analysis. Categorical variables were expressed as percentages and continuous variables as means (standard deviation). The chi-square test was used for the analysis of the qualitative variables, Student's t-test for the grouped quantitative variables with a normal distribution, and the Mann-Whitney test for the grouped quantitative variables without a normal distribution (the Shapiro-Wilk test was used to assess normality of the quantitative variables). A two-sided p value < 0.05 was considered statistically significant. The statistical program STATA 16 was used for the analyses.
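The test-selection logic can be sketched in Python as follows; the arrays and counts are placeholders rather than study data, and the actual analyses were run in STATA.

import numpy as np
from scipy import stats

def compare_quantitative(x_rural, x_urban, alpha=0.05):
    # Shapiro-Wilk normality check on each group, then Student's t-test
    # if both look normal, otherwise Mann-Whitney, as described above.
    normal = (stats.shapiro(x_rural).pvalue > alpha and
              stats.shapiro(x_urban).pvalue > alpha)
    if normal:
        return stats.ttest_ind(x_rural, x_urban)
    return stats.mannwhitneyu(x_rural, x_urban, alternative='two-sided')

# Chi-square for a categorical variable (placeholder 2x2 counts,
# e.g. sex by setting).
table = np.array([[120, 150],
                  [396, 372]])
chi2, p, dof, expected = stats.chi2_contingency(table)

rng = np.random.default_rng(2)
res = compare_quantitative(rng.normal(45, 15, 516), rng.normal(47, 15, 522))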
Results. First phase. Five hundred and sixteen individuals from the rural population (response rate = 33%) and 522 individuals from the urban population (response rate = 35%) responded to the survey (Fig. 1). Table 1 details the characteristics of both populations. The prevalence of possible asthma in the rural population was 15% (i.e., 78 individuals responded positively to questions "a", "f" or "m") and 11% in the urban population (i.e., 59 individuals responded positively to questions "a", "f" or "m") (p = 0.320). One hundred and four patients in the rural population were classified as having possible chronic bronchitis (a prevalence of 20%), as were 96 in the urban population (a prevalence of 18%) (p = 0.215). No significant differences were found in the variables analyzed between rural and urban individuals in the population classified as asthmatic in Q1 (Table 2).
Second phase. The second survey was administered to 65 of the 78 individuals (83.3%) classified as asthmatic in the rural population in the first survey and to 50 of the 59 individuals (84.7%) of the urban population. Seven individuals (three rural) did not provide correct data and it was not possible to contact them. Twelve (five rural) refused to continue in the study and three (two rural) had died by the time of contact. Table 3 shows the general characteristics of the population finally included. In all, 37% of the individuals surveyed had previously been diagnosed with bronchial asthma (35% in the rural population and 40% in the urban). In the urban asthmatic population there was a predominance of women, more personal history of allergic rhinitis and more family history of allergic rhinitis and/or eczema; urban dwellers with asthma also presented a greater personal history of severe respiratory infection during childhood, were more likely to live either currently or during childhood with family members who smoke, and comprised a greater number of active smokers. Patients in this population also presented more symptoms in winter, used asthma control medication more frequently, had required a greater number of emergency room visits due to respiratory problems, and presented a greater number of exacerbations in the last year. Asthma had been diagnosed in adulthood in 74.8% of the patients, with the mean age of onset of symptoms being 44 years; there were no significant differences in this regard between the two populations. The most prevalent asthma-related symptoms were wheezing (58.3%), exertional dyspnea (54.8%), morning cough (40%), night cough (39.1%), and morning expectoration (31.3%) in both populations. No significant differences were observed in symptoms between the populations except in cough (morning, daytime and night) and expectoration, which were more frequent in the urban population. The percentage of patients with continuous symptoms was also higher in the urban population (Table 4).

[Table 1 caption: Demographic and clinical characteristics of the study populations (Phase I). *Based on positive answers to questions in Q1 (17). **Not exclusive to the area.]
Regarding occupational, domestic and environmental exposure (Table 5), 45.2% of individuals were working at the time of the interview. Occupational exposures that might affect respiratory health were recorded in 55% of the rural population and in 40% of the urban population (p = 0.078). Twenty-nine per cent were exposed to smoke and dust; 17.4% related their asthma symptoms to work and 7.8% had had to change their job for this reason. These events were more frequent in the rural population. Symptoms due to contact with animals and/or dust were reported by 45% of the study population, and were more frequent in the urban setting. Symptoms due to contact with pollen and/or in parks were recorded by 53% of respondents; 39.1% described symptoms when near irritating odors (bleach, perfume, gasoline, etc.) and 33.9% reported symptoms when noticing a subjective increase in environmental pollution. Exposure to irritants and environmental pollution generated more coughing, nasal congestion and eye irritation in the urban population, and more dyspnea in the rural population.
Discussion
The results of this study do not show differences in the prevalence of asthma between urban and rural areas, but they do show differences in the characteristics of asthma and probably also in its severity. The most relevant findings were the following: there was a predominance of women with asthma in the urban setting; urban asthma sufferers presented more allergic symptoms in contact with allergens than their rural counterparts; their major symptoms were cough, rhinitis and eye irritation; they required more treatment, presented more exacerbations and made more emergency room visits for respiratory problems than asthmatics in the rural population.
The objective of the current study was to establish whether there are differences in the prevalence of asthma between urban and rural areas and, if so, to identify the factors that cause them. It has been demonstrated that exposure to a microbial environment in early childhood, typical of rural environments, may play a role in the subsequent development of asthma. Based on data from a subpopulation of the European Community Respiratory Health Survey (ECRHS), Timm et al. 25 reported a prevalence of asthma of 8% in individuals who lived near farms and of 11% in those who lived in city centers in a northern European population. They also established an urban-rural gradient of asthma, according to which subjects growing up on a livestock farm had significantly less late-onset asthma than subjects growing up in cities. In contrast, a greater exposure to environmental pollutants might explain the higher incidence of asthma in individuals who live in cities, especially in city centers 26. However, even though one recent systematic review of 70 articles established that the prevalence of asthma seems to be higher in urban than in rural areas 27, it is difficult to reach firm conclusions: most of the studies carried out are very heterogeneous in terms of design, the definition of the condition, and the environmental exposures described, and very few studies take into account the possible underdiagnosis of asthma in rural areas due to logistical reasons 3. Indeed, in our study, only 35% of possible asthmatics in the rural population had previously been diagnosed with the disease; what is more, the studies that do not show differences, or that report a greater risk of asthma in the rural population, are the ones carried out more recently [27][28][29]. Finally, a recent study suggests that exposure even to low doses of indoor pollutants could equalize the incidence of asthma in children between rural and urban areas 30. Another possible reason for the differences observed in the prevalence of asthma between rural and urban populations, and one which could itself be the object of study hypotheses, is whether urban and rural asthma represent different phenotypes of the disease. This issue has received little attention, but there are grounds to think that it may indeed be the case. As noted above, whether an individual presents one type of asthma or another basically depends on the interaction between genetics and the environment to which he or she is exposed 20. In this regard, there is evidence in the field of occupational asthma that exposure to high or low molecular weight agents generates different clinical phenotypes of the disease without there being relevant inflammatory differences between the two types of exposure 31. Among the differences observed in the present study, we found that urban patients had more allergic rhinitis, more family history of allergic rhinitis and/or eczema, and more asthma symptoms with exposure to aeroallergens. These findings may be conditioned by the different exposures to which individuals are subjected in rural and urban settings 9. In fact, although the hygiene hypothesis cannot explain differences in the prevalence of asthma, it can account for the different levels of awareness between the rural and urban populations 32. Furthermore, the association of aeroallergens with city-specific environmental pollutants can contribute to exacerbating asthma, as our group has recently shown 19.
Although these observations do not necessarily reflect differences in the prevalence of asthma, they show that the asthma suffered by individuals in rural or urban areas is different.
It is also interesting that urban asthmatic patients presented more cough, both as a baseline symptom and when exposed to allergens, irritants, or environmental pollutants, than the rural population. To explain this finding, further studies are probably necessary to determine whether there are differences in lung function or distinctive types of bronchial inflammation between the two populations. The characteristics of this study do not allow us to establish the actual cause, although it is known that greater bronchial obstruction is more often associated with the presence of cough, and a greater degree of bronchial hyperresponsiveness with wheezing and chest tightness 33. Nor, based on the results obtained, can we establish with certainty whether cough is in fact a feature that differentiates the two types of asthma or is merely a finding that could be explained by a confounding factor such as tobacco exposure. Indeed, in the urban asthma population there may be greater exposure (both active and passive) to tobacco smoke, while in the rural population ex-smokers predominate. However, the fact that, in the first phase of the study, the diagnosis of chronic bronchitis was more frequent in the rural population, and that no differences were found when informants were specifically asked about chronic bronchitis in the second phase, would argue against this possibility. The relationship between sex and the consequences of smoking also raises doubts, since it has been shown that female smokers and ex-smokers in rural areas are more likely to be diagnosed with asthma than non-smoking urban women 34, especially taking into account that the proportion of women with asthma was higher in our urban population than in our rural population. Likewise, the possible relationship between asthma and/or asthma symptoms and exposure to secondhand smoke has also received little attention. A cross-sectional study using the Canadian National Population Health data, collected from 1994 to 2000, showed a higher prevalence of asthma among both smokers and nonsmokers in urban than in rural residents. Higher stress levels and the lack of open spaces compared with their rural counterparts may be reasons for this higher prevalence of asthma among smokers living in urban areas, while among nonsmokers in urban areas the reasons may be environmental factors and exposure to secondhand smoke 35. Asthma exacerbations have also been shown to be a differential factor between urban and rural asthma. The fact, for example, that rural asthmatic patients may present a higher incidence of exacerbations in spring could be related to a greater exposure to allergens in this season, while the greater number of exacerbations in winter in urban asthmatics might be due to a greater exposure to indoor pollutants caused by a decrease in air circulation between outdoor and indoor environments as windows tend to be closed at this time of year 26,30.

[Table 4 caption: Respiratory symptoms of asthmatic individuals included according to Q1. *Chronic cough: cough lasting longer than three months. **Chronic bronchitis: cough and expectoration lasting more than three months for two years in a row. ***mMRC: Modified Medical Research Council. SD: standard deviation. Significant values are in bold.]
However, more relevant is the fact that patients with urban asthma had made more visits to the emergency room for respiratory problems and presented more exacerbations in the last year. In this connection, Smith et al. 36 conducted a cross-sectional study in the US exploring the risk factors associated with healthcare utilization among 3,013 Arizona Medicaid patients with asthma. These authors observed that urban areas had higher rates of asthma-related hospital visits than rural counties, and that rates were higher in adults than in adolescents. Furthermore, several authors have pointed out that urban asthma may be associated with greater morbidity than rural asthma 20,25,26,28, and although these results may be affected by differences in accessibility to the health system in the two areas 3, it is generally agreed that exposure to environmental pollutants, more typical of urban areas, may well increase the number of exacerbations in these patients 37.
One of the most important limitations of the study is the low response rate (around 35%) in the first phase. However, the absolute number of responses obtained, close to 600 individuals in each population, probably validates the results obtained. Another limitation, inherent in all epidemiological studies, is the definition of asthma itself. In this regard, we decided to use the results obtained from a correspondence analysis from the first questionnaire previously published by the group, and which has demonstrated its validity 21 . Finally, the study design did not allow us to establish possible risk factors that might increase the differences observed between urban and rural asthma.
In conclusion, the results of this study establish two possible working hypotheses for future work: first, that the prevalence of asthma does not necessarily differ between urban and rural settings and, second, that the different characteristic exposures of each environment may lead to different manifestations of asthma and to different degrees of disease severity, as has already been shown, for example, in occupational asthma. Clinical, lung function and bronchial inflammation studies are needed to confirm that urban and rural asthma may actually be two different asthma phenotypes.
Data availability
All data generated or analysed during this study are included in this published article.
Michael O’Brien’s Theological Aesthetics
This essay introduces and examines aspects of the theological aesthetics of the contemporary Canadian artist Michael D. O'Brien (1948-). It also considers how his philosophy of the arts informs understandings of the Catholic imagination. In so doing, it focuses on his view that prayer is the primary source of imaginative expression, allowing the artist to operate from a position of humble receptivity to the transcendent. O'Brien studies is a nascent field, owing much of its development in recent years to the pioneering work of Clemens Cavallin. Apart from Cavallin, few scholars have focused on O'Brien's extensive collection of paintings (principally because the first catalogue of his art was only published in 2019). Instead, they have worked on his prodigious output of novels and essays. In prioritising O'Brien's paintings, this study will assess the relationship between his theological reflections on the Catholic imagination and his art practice. By focusing on the interface between theory and practice in O'Brien's art, this article shows that conversations about the philosophy of the Catholic imagination benefit from attending to the inner standing points of contemporary artists who see in the arts a place where faith and praxis meet. In certain instances, I will include images of O'Brien's devotional art to further illustrate his contemplative, Christ-centred approach to aesthetics. Overall, this study offers new directions in O'Brien studies and scholarship on the philosophy of the Catholic imagination.
Introduction
Painter and novelist Michael D. O'Brien (1948-) is among Canada's leading artists, furthering a tradition of making devotional art in the Canadian context, a tradition which was especially advanced by his friend and mentor, the Ukrainian-Canadian painter William Kurelek (1927-1977). O'Brien's art can be found in churches, monasteries, schools, chaplaincies, museums and private collections around the world. Owners or commissioners of his paintings come from diverse backgrounds, including the Missionaries of Charity in the Bronx, New York, the Congregation of Dominican Fathers in Kigali, Rwanda, the Institute of Christian Communities in Montréal, Québec, the Augustinian Fathers in Klosterneuburg, Austria, and beyond. Drawing upon Byzantine icons and photographic realism, cubism and expressionism, paleo-Christian symbology as well as aspects of Inuit art and other traditions, O'Brien uniquely adapts varied art forms to express the spiritual and metaphysical depths of human existence and experience (Cavallin 2019, p. 9).
In his reflections on the artistic process, O'Brien observes that the artist is called to contemplation. The artist, he says, is a "medium, though not in the sense of a tool or a mechanism or an indifferent conduit;" rather, "he is about a more difficult process: that of making manifest the mysteries and barely perceptible inner beauties of his subject. [The artist] . . . is a vehicle of perception, an interpreter . . . a contemplative" (O'Brien 2019, p. 15). This essay will examine key aspects of O'Brien's theological aesthetics, contributing new directions to the limited yet growing field of existing scholarship on his work. In so doing, it will show the degree to which contemporary conversations on Christian aesthetics benefit from a greater consideration of the roles which prayer and contemplation play in the exercise of the imagination.
The Rosary as a Guide for the Imagination
O'Brien's paintings and novels are imbued with the sense that art is meant to express, and mysteriously participate in, the divine plan of salvation as revealed in scripture. While his art meditates on various episodes from the Old and New Testaments, it often returns to the Gospels as well as Genesis (which chronicles the beginning of life) and the Book of Revelation (which prophesies the 'end' or consummation of life and history). To that end, a central focus in O'Brien's art is the contemplation of Christ's earthly ministry, especially as expressed in the scriptural meditations supplied by the rosary. For example, a significant art project he undertook in the 1980s and early '90s was a series of paintings depicting the Religions 2021, 12, 451 3 of 16 mysteries of the rosary. He then published a devotional book on the subject, accompanying each of his paintings with a combination of personal meditations and traditional prayers from the Roman Catholic tradition and incorporating elements of Eastern, Byzantine iconography as well as Western, devotional sensibilities along the way (O'Brien 1992).
These paintings blend a series of art traditions in a distinctive manner, thereby expressing the dynamic paradox of Christian Catholicity: namely, that the Christian message is simultaneously universal (catholic) and particular (personal), relevant to all times and places, operating within but also beyond cultural contexts and values. For instance, The Nativity (Figure 1) is a striking example of O'Brien's unique adoption of centuries of devotional, iconographic art for our times, an adoption profoundly inspired by decades of personal prayer and contemplation. Given its subject matter and blend of varied styles, the painting is at once timely and timeless, drawing together biblical symbolism, the spare style of iconography, and a reserved expressionism in which the dramatic yet sombre landscape of the painting suggests the existential plight of the fallen human condition, while also firmly hinting at the redemptive work of God incarnate. As with the rosary itself, the painting integrates a network of scriptural passages and images, all with a view to encouraging prayerful meditation on the central Christian mystery: Christ's incarnation, his entrance into human history.
These paintings blend a series of art traditions in a distinctive manner, thereby expressing the dynamic paradox of Christian Catholicity: namely that the Christian message is simultaneously universal (catholic) and particular (personal), relevant to all times and places, operating within but also beyond cultural contexts and values. For instance, The Nativity ( Figure 1) is a striking example of O'Brien's unique adoption of centuries of devotional, iconographic art for our times, an adoption profoundly inspired by decades of personal prayer and contemplation. Given its subject matter and blend of varied styles, the painting is at once timely and timeless, drawing together biblical symbolism, the spare style of iconography, and a reserved expressionism in which the dramatic yet sombre landscape of the painting suggests the existential plight of the fallen human condition, while also as firmly hinting at the redemptive work of God incarnate. As with the rosary itself, the painting integrates a network of scriptural passages and images, all with a view to encouraging prayerful meditation on the central Christian mystery: Christ's incarnation, his entrance into human history. The painting draws together a cluster of prophetic images from the Old Testament, emphasising Christ's status as the Messiah. The stag with a pierced side is doubly significant: he foreshadows Christ's passion and brings to mind the lyric cry of the psalmist The painting draws together a cluster of prophetic images from the Old Testament, emphasising Christ's status as the Messiah. The stag with a pierced side is doubly significant: he foreshadows Christ's passion and brings to mind the lyric cry of the psalmist who is on the lookout for the Messiah: "[a]s the hart panteth after water; so my soul pant-eth after thee, O God" (Psalm 42:1). The vibrantly coloured fruit tree represents not only the "tree of the knowledge of good and evil" (Genesis 2:17), but also the wood of the cross and Christ's status as the tree of life (Proverbs 11:30), the living vine (John 15:5). In the distant background, we see a triadic cluster of fir trees swaying in the wind. These trees, of course, represent the three crosses of Golgotha, foreshadowing Christ's passion and death. In including this allusion to Christ's future suffering, O'Brien seeks to imaginatively convey the Christian concept of time in which past, present and future meet and are fulfilled in the saving work of Christ's earthly ministry. This concept of time governs the prayers and patterns of the rosary and often appears as an organising feature in O'Brien's paintings and novels. The joyful, sorrowful, luminous and glorious mysteries of Christ's life offers O'Brien an imaginative, analogical trajectory in which the finite (human) and infinite (divine) meet in the midst of both the ordinary and dramatic dimensions of everyday life and living.
This Christian understanding of time is most fully expressed in liturgical theology and the liturgy, itself, as the memorial of, and participation in, Christ's passion, death and resurrection. As Joseph Ratzinger notes, the New Testament inaugurates a transformative temporal or liturgical consciousness, a "between-time" because Christ defeats death through his own death and offers the hope of everlasting life and the fulfilment of the cosmos through his resurrection, his sacramental, Eucharistic presence, and his promise to come again: "[t]hus the time of the New Testament is a peculiar kind of 'in-between', a mixture of 'already and not yet'. The empirical conditions of life in this world are still in force, but they have been burst open, and must be more and more burst open, in preparation for the final fulfilment already inaugurated in Christ" (Ratzinger 2000, p. 54). In expressing the "between-time" consciousness established by Christ's incarnation, O'Brien invites viewers of the painting into their own philosophical or prayerful meditations on the ways in which their personal lives can be transformed by the divine mysteries as represented in the rosary's scriptural meditations and rhythmic form.
In The Nativity, O'Brien not only uses biblical types or recurring images to reflect on the temporal transformations afforded by the incarnation. He also meditates on the holy family as an example of the kind of contemplation to which the devotional artist is called. Drawing from the iconographic tradition, he depicts the Christ-child wrapped in swaddling clothes and cradled in the intertwined arms of the Virgin Mary and St. Joseph, thereby improvising on the standard biblical account (in which Christ is placed in a manger) to highlight the degree to which divine love and human love worked together, in holy cooperation, to bring about the event of the incarnation. Both Mary and Joseph incline their heads towards the Christ child, contemplating his face and serving as models of prayer. Christ is no ordinary child in this painting; his swaddling cloth is also a burial shroud and he has the face of an old man, as is often the case in Byzantine and medieval depictions of the infant Christ. This fusion of infant and wizened man, of birth and death, seeks to express (through the means available to the limited, human imagination) the paradox of God's entrance into history and the mystery of the hypostatic union.
In The Nativity, we see the degree to which O'Brien's art stands in and carries forward a theological understanding of the arts and the artist, an understanding which has been at the heart of Catholicism throughout Church history and especially in Catholic theology and papal teachings throughout the twentieth and twenty-first centuries. In particular, The Nativity places in view the degree to which prayer and painting are inextricably linked for O'Brien: art is the fruit of contemplation, of what could be called a rosarian approach to time and daily life. As with his cycle of rosary paintings as a whole, The Nativity serves, then, as an example of O'Brien's visual theology, one which echoes aspects of Pope John Paul II's reflections on the irreplaceable role and value of the rosary in the life of Christian devotion. More specifically, O'Brien's artistic depictions of the rosary, and of the centrality of the holy family as a model for artists, are saturated with the spirit of John Paul II's writings on Christian devotion and therefore, at one level, serve as visual correlatives to the late pope's conviction that the rosary is one of the primary resources for cultivating a contemplative discernment of reality. For instance, in his 2002 apostolic letter, Rosarium Virginis Mariae, John Paul II writes that "[t]he Rosary belongs among the finest and most praiseworthy traditions of Christian contemplation. Developed in the West, it is a typically meditative prayer, corresponding in some way to the 'prayer of the heart' or 'Jesus prayer' which took root in the soil of the Christian East" and which "train[s]" Christians from a young age to "pause for prayer" (John Paul II 2002).
It is beyond the scope of this introductory essay to do justice to the many links between John Paul II's theological aesthetics and O'Brien's; however, it is an area which deserves significant attention and I hope it will feature in new directions taken in scholarship on O'Brien in the near future. That said, it is important to note the sympathies between both thinkers in order to show the degree to which O'Brien's art and thought draw deeply from the tradition of Catholic theology, especially the Catholic thought of the twentieth century (which has been profoundly shaped by the insights and papacy of Pope John Paul II). In the next section, we will consider the degree to which contemplation is at the heart of O'Brien's theological aesthetics. In so doing, we will gain a better sense of the way in which he understands the imagination-and, by extension, the practice of the arts-to be inherently religious.
Called to Contemplation: O'Brien's Theology of the Artist's Studio
Reflecting on the history of art, O'Brien writes that it is "inherently religious" (O'Brien [1997] 2017). From the pre-historic cave paintings in Lascaux to the present day, he sees artists searching for the transcendent, for a meaning that responds to personal, human desire and yet reaches beyond it (O'Brien [1997] 2017). O'Brien developed this perspective more explicitly in his years as a practicing artist. However, he shares that an inkling of the religious dimension to art and within the workings of the imagination first emerged in his childhood, especially in the years he and his family spent living in a "small Inuit (Eskimo) village" in Canada's high Arctic in the early 1960s (O'Brien [1997] 2017, p. 62). O'Brien's memories of First Nations art, especially Inuit carvings of animals, humans and Inuit mythology, left a deep and lasting impression on his imagination. In particular, he recalls being struck by the work of an elderly Inuit woman who used to sit in her tent along the shore of the Arctic Ocean, resting on "an empty packing crate scavenged from the Hudson Bay outpost" and carving animals of "heart-stopping" beauty by firelight (O'Brien [1997] 2017). In her stone carving called Man Wrestling with Polar Bear, the young O'Brien intuited a deep understanding of the existential (not just historical) aspects of the human condition: "[i]n the North", he writes, "men do not wrestle with bears. In such encounters men always lose. This image was a solid metaphor of the interior wrestling which is our abiding condition [ . . . ] It was pondering existential questions in the only language available to her, and it was for that reason", he concludes, "inherently religious" (O'Brien [1997] 2017). For O'Brien, art's inherent religiosity is especially bound up, then, with questions about the nature, conditions, and calling of the human person, and, in turn, it is fed by the artist's sensitivity to the "mysterious roots of life itself" (O'Brien [1997] 2017, p. 62). There is therefore a profound affordance of religious value in the very making of art, irrespective of the creedal or faith position of artists (Hopps 2020, pp. 79-94).
Although O'Brien's writing on the inherently religious nature of the arts emerges in some of his essays and novels, he tends to be more explicitly Christian and devotional in his theological aesthetics. While the expressly devotional standpoint of his writing is one of the most attractive features of his approach to aesthetics, his recollections of his childhood in the Arctic show the degree to which he is highly capable of implicit and more strictly philosophically oriented approaches to aesthetics. At times, scholars have expressed the desire for O'Brien to extend this more implicit mode of approach to his aesthetics, given how perceptive it is when it does emerge and, as importantly, given how much it would open up his art to people standing outside of Christianity. Those who appreciate O'Brien's occasionally subtler approach may find the paintings he classifies in the Ignatius art catalogue as 'implicit' or 'reflective' of particular interest (O'Brien 2019, pp. 119-35, 137-67). That noted, O'Brien sees his art in clearly vocational terms and has discerned the call to address his audiences from the express standpoint of belief, noting that Christian artists possess a particular responsibility to reveal the degree to which the innate longings of the artist are fulfilled in the mystery and person of Christ, who, in his redemptive work, is the artist par excellence (O'Brien 2019, pp. 65-80). For O'Brien, then, the practical and philosophical elements of imaginative creativity are most fully formed and informed by scripture, Christian doctrine and divine revelation.
Indeed, O'Brien has recently shared that it was only upon his (re)conversion to Christianity that he discovered his abilities as an artist and saw in this discovery a call to express his conversion in artistic forms (O'Brien 2019, pp. 17-18). In his conversion, he discerned in Christian revelation and doctrine a striking realism (as opposed to a vague set of ideals), which gave him a greater sense of meaning and purpose: "My conversion to Christ . . . was a pure gift from God", he confesses. "It was sudden, totally unexpected, instantaneous, like St. Paul's on the road to Damascus. It was a radical shock . . . and a revelation that everything the Church and Scripture had taught about God was, in fact, reality."

Incorporating a range of symbols, The Studio focuses on the process of painting and also considers the role of prayer in the making of art; it is therefore an intriguing fusion of the meta-critical with the metaphysical, of the self-reflexive with the self-transcending. That is to say, the painting understands the natural and supernatural, technique and prayer, in light of each other. Tubes of acrylic paint, like the ones O'Brien used to produce this very painting, are placed in clear view, located close to his other tools (water, brushes, an easel). There are a series of different light sources. From a naturalist perspective, the light of the moon (which classically represents the artist's imagination) illuminates the studio. However, the golden icon of St. Luke and the white dove descending over the artist's easel suggest additional (as opposed to alternative) light sources. Here, prayer and nature work in cooperation. The painter's easel is cruciform in shape, with the Latin inscription 'INRI' ('Iesus Nazarenus, Rex Judaeorum') mostly in view, spread across its horizontal axis (John 19:19). O'Brien transforms the idea of the artist's wooden easel, seeing it as not only a functional aid to the creation of the artwork but also an ennobled tool, a material good which doubles as a conduit of grace, a reminder of Christ's passion, death and resurrection.
Commenting on the relationship between artistic contemplation and artistic praxis, he observes that "[t]he artist must be tireless in perfecting his practical skills and knowledge of the art, for without the discipline of craftsmanship, the vision will be indistinct and may even fail altogether. Grace builds upon nature, says Aquinas, and thus the artist of faith must be as dedicated to prayer as he is to his tools" (O'Brien 2019, p. 16). To illustrate this point even further, O'Brien includes an explicit depiction of the Holy Spirit (who is more subtly introduced in some of his other devotional works) to drive home the invaluable roles that both divine inspiration and prayer play in the making of art.
Hovering in the upper left-hand corner of the painting, the Holy Spirit is represented as the source of imaginative ingenuity. Throughout the history of Christian art, the Holy Spirit has been described as the muse par excellence; this is discernible in classic, Christian artists-like Fra Angelico and Dante-as well as more recent and contemporary art forms, such as the nature sonnets of the Victorian poet, Gerard Manley Hopkins SJ, Toni Morrison's literary criticism, and, of course, O'Brien's own devotional art. As with the cruciform shape of the easel, the inclusion of the Holy Spirit serves as a reminder that artistic creativity, as with human life, is first and foremost made possible through divine gift; it does not originate from the resources of the artist only. If anything, the artist is a co-operator in the project of divine creativity, imitating the creative activity of the Trinity which loved the cosmos into existence. As with Jacques Maritain, who viewed the artist as a noble but 'poor god' (able to imitate and associate rather than originate ex nihilo), O'Brien sees the work of the artist as always already derivative, a response to the creation of the world as revealed in Genesis (Maritain 1953). In this way, the presence of God the Father is also implicit in this painting-especially given that O'Brien views the artist as a "co-creator", as someone who is called to prayerfully respond to and imitate God's own original creativity (O'Brien 2019, p. 16).
The artist's studio is, then, a place not unlike a contemplative's cell; it is a space in which the skilled work of the artist and the tools of his or her trade are means through which the words of scripture and the Catholic tradition are weighed and expressed. For, according to O'Brien, the imagination that is responsive to Catholic doctrine and teaching will draw strength and inspiration from contemplation and prayerful discernment. The artist, he observes, is called to be "ceaselessly concerned with the authenticity of the work and the good of those who will one day gaze upon it [ . . . ]" (O'Brien 2019, p. 16). As such, the artist (like the icon writer) is meant to be attentive to "the demands of ora et labora" (O'Brien 2019, p. 16). "This is no small task . . . no small vocation" he admits (O'Brien 2019, p. 16). Drawing from Maritain's theological aesthetics, O'Brien proposes that the artist's primary calling is neither popularity nor success. Rather, it is the call to love, to become a saint: "For most of us," he writes, "the path is one of long, hard labors combined with a spirit of exploration and, above all, a spirit of love, which is the means and motive of the growth. At the core of all genuine love is the willingness to sacrifice, to die to oneself so that others may live [ . . . ]" (O'Brien 2019, p. 16). Here, O'Brien is influenced by the sensibilities of the contemplative Christian tradition, especially the monastic one. In his theological account of the imagination, divine inspiration is not some flash from above but, rather, a mysterious unfolding (much like the phenomenon of life, itself): the "creative process", he observes, "is an experience of what I believe is the 'co-creative' mystery, that is, grace working together with my natural talents. For me, fiction is neither entirely nature nor entirely grace. It's neither purely rational nor purely intuitive" (Olsen 2020).
It is no wonder, then, that O'Brien chose St. Luke as the subject of the icon depicted in The Studio. Luke the evangelist is the patron saint of doctors and artists, reputed to have painted the first image of the Virgin Mary holding the Christ child. The Gospel of Luke emphasises Christ's evangelization of the Gentiles; it therefore stresses the universality or catholicity of Christ's redemptive work. By extension, as the patron of artists, St. Luke represents art's universal ability to draw imaginations from around the world and throughout history towards the transcendent. The significance of St. Luke for artists is also a theme in O'Brien's novels. For example, in Theophilus, he imaginatively explores how Luke's witness to belief in Christ sets his agnostic, adoptive father on an existential pilgrimage towards Christian belief (O'Brien 2010). In featuring St. Luke, The Studio testifies to art's ability to speak to the inner-most depths of those with or without faith, drawing them toward deeper encounters with the mysteries of existence and experience (O'Brien 2010, pp. 16-18). That noted, the often explicit way in which O'Brien incorporates spiritual imagery into his devotional paintings tends to make his art most immediately accessible to Christian believers or those with some religious or biblical literacy (something which, increasingly, cannot be taken as a given).
Having examined the centrality of prayerful contemplation in O'Brien's philosophy of art, the next section situates his thought on the Catholic imagination (although this is a term he does not tend to use expressly) within the wider context of theological aesthetics. In so doing, it emphasises the degree to which O'Brien offers an integrative approach. He not only draws from an astonishing number of art forms and styles in his paintings; he is also explorative and wide-ranging in his philosophical engagements with the Catholic theological tradition, covering and integrating centuries of Christian devotion and theological expression.
O'Brien and the Catholic Imagination
Imaginative responses to Catholicism date back to the early Christian era, evidenced in the writings, material culture and liturgical practices developing during that period. Throughout church history, theologians and church councils have clarified the status of the arts within the life of faith and culture. Imaginative responses to Catholicism have therefore been ongoing since the days of the early Church, and such responses began to receive systematic assessments in medieval, philosophical theology (Haldane 2013, pp. 25, 31-35). As John Haldane reminds us, the nature and "practice of art was a source of significant reflection within medieval thought", and the "representational arts" were carefully assessed in light of scripture, conciliar theology, and the wider tradition of Christian thought found in the Greek Fathers and the Latin West (Haldane 2013, pp. 25-27). However, the "concept of the aesthetic" as it is often used today principally stems from eighteenth-century thought, especially "philosophical psychology and investigations into judgments of taste" that are invigorated by "the question of how estimations of beauty, though expressing a personal response to nature or art, nevertheless seem to lay claim to truth" (Haldane 2013, p. 25). "[M]odern aesthetics" is therefore "a branch of philosophy of mind and theory of value," whereas during the medieval period aesthetics "belong[ed] to philosophical theology" (Haldane 2013, p. 25). It is especially thanks to Hans Urs von Balthasar and his recent inheritors that recuperations and extensions of the medieval understanding of art as a resource for theology and prayerful contemplation are under way, opening up a series of important conversations within Roman Catholic theology and across other Christian denominations as well. It is also thanks to Balthasar's work, especially his magnum opus, The Glory of the Lord: Theological Aesthetics (1961-1967), that the concept of 'the Catholic imagination' has increasingly surfaced in recent decades and features in theological aesthetics, philosophical theology and literary criticism in particular (Tracy 1981; Greeley 2000; Pfordresher 2008; Carpenter 2015).
Balthasar's contributions to theological aesthetics have, as we know, especially influenced Pope John Paul II (who elevated Balthasar to the cardinalate), and are discernible in his theological writings, such as his pastoral letter to artists, delivered at the Vatican on Easter Sunday, 1999. In this pastoral address, John Paul II reminds fellow artists that their "special vocation" is most fully realized through prayer (John Paul II 1999). "The more conscious they are of their 'gift'", he writes, the more artists "are led . . . to see themselves and the whole of creation with eyes able to contemplate and give thanks, and to raise to God a hymn of praise. This is the only way for them to come to a full understanding of themselves, their vocation and their mission" (John Paul II 1999). O'Brien's own theological aesthetics is greatly indebted to both John Paul II and Balthasar and is best understood as part of the recovery and development of medieval, theological aesthetics which has been ongoing throughout the twentieth century and into the twenty-first. Bearing this in mind, the rest of this section will consider key aspects of O'Brien's views on the role of art in the life of devotion and the way in which his philosophical approaches enrich the concept of 'the Catholic imagination', as it is currently discussed today.
O'Brien has written extensively on artistic and theological influences in his work, ranging from the pre-historic, Cro-Magnon artists who decorated the caves of Lascaux to Christopher Dawson, Jacques Maritain, Catherine Doherty, von Balthasar and William Kurelek, among others (O'Brien [1997] 2017). However, his cultivation of a Catholic imagination, of a contemplative way of seeing the world, also stems from decades of meditating on the work of Pope John Paul II. In his "Letter to Artists" (similar to aspects of Pope John Paul II's own), O'Brien sees the cultivation of a Christian imagination as a vocation, as the response to a divine calling; in this sense, art is the fruit of conversations with God and the practice of art itself is that of speaking with and about God. O'Brien's theological aesthetics is, then, deeply personal in character, and can be understood, to a certain extent, as an extension of the Christian personalism which flowered throughout the early decades of the twentieth century, principally in the phenomenological work of Max Scheler, Edith Stein, Dietrich von Hildebrand, and John Paul II, among others (Lamb 2016).
Specifically, for O'Brien, imaginative expressions of Christianity are rooted in a personal exchange between the self and the Triune God: "[t]here is always a mystery regarding each person's vocation in the works of the Lord," he writes. " . . . [God's] creation is not a machine but rather a vast work of art [ . . . ] 'We are God's work of art', says St Paul [Ephesians 2:10]. Growth in the vocation [of the artist] is usually a series of countless small steps of faith, usually blind steps, because what God wants to accomplish most in us is the increase of absolute trust in him, not so much successes [ . . . ] I believe his primary will is accomplished and is always more fruitful, to the degree that we have agreed to be very little instruments in his hands" (O'Brien 2018). Here, O'Brien distinguishes divine creativity and Christian art from machine-based or mechanistic modes of production. He makes this distinction to stress the degree to which a Catholic imagination is meant to see persons, nature and the created order as dignity-bearing values, instead of 'means' to be exploited or worshiped as idols (O'Brien [1997] 2017). For O'Brien, a persistent temptation for the artist is the forgetting of the call for a relationship with God and others; such a forgetting leads to the worship of lesser goods (talent, success, ingenuity) and becoming obsessed with one's own creative capacities. The artist's imagination therefore requires training, detachment and transformation through contemplative prayer and receptivity to grace.
O'Brien's art and theological writings often address the theme of temptation towards idol worship (of one sort or another), and one of his most extensive essays on the Catholic imagination, entitled "Historical Imagination and the Renewal of Culture," reflects on the significance of the theological debates held between iconoclasts and iconodules in the eighth and ninth centuries, the Reformation, and during our own times (albeit in subtler forms) (O'Brien [1997] 2017). The commandment against the making of images in the Old Testament was, according to O'Brien, part of a providential plan to lead people away from attempts to live according to their own terms: "[t]he Old Testament injunction against graven images was God's long process of doing the same thing with a whole people that He had done in a short time with Abraham. Few if any were as pure as Abraham. It took about two thousand years to accomplish it, and then only roughly, with a predominance of failure. Idolatry was a very potent addiction. And like all addicts ancient man thought he could not have life without the very thing that was killing him" (O'Brien [1997] 2017). From God's calling of Abraham to the events leading to Christ's incarnation, O'Brien discerns the gradual emergence of a theological aesthetic, a way of seeing and expressing the world according to right worship as opposed to idolatry. Jean-Luc Marion, Aidan Nichols and Rowan Williams, among others, have also reflected at length on the respective places of the icon and idol in the history of Christian aesthetics (Marion 2004; Nichols 2007; Williams 2003). As with O'Brien, they share the understanding that the imagination at the service of devotion calls for a kind of self-renunciation so that art itself becomes, as Williams puts it, not just the production of "a striking visual image" but rather the "open[ing] of a gateway for God" (Williams 2003, p. xvii).
Despite O'Brien's conceptual sympathies with Williams et al., it is important to stress that his views on the subject not only grow out of philosophical reflection and prayer but, as importantly, from the lived experience of making and producing art-an experience which he proposes demands a constant renewal of commitment to conversion of heart through meditation on scripture, the Christian tradition and the lives of the saints (O'Brien 2013). For O'Brien, this is because the artist deals with the material world in a very distinct and particular way and is therefore called to undergo the same pilgrimage of spiritual growth chronicled in the Old and New Testaments, a pilgrimage which leads towards the contemplation of God in the beatific vision. Such a contemplation begins on earth and in the Christian tradition culminates in heavenly adoration of the triune God who is revealed by Christ. Christ's incarnation therefore supplies the artist's imagination with an agapic as opposed to self-absorbed disposition towards the world and other persons. "Because the Lord had given himself a human face, the old injunction against images [can] . . . be reconsidered," O'Brien writes, and the gradual emergence of a Christian visual culture marks the gradual, spiritual renewal and transformation of artistic imaginations throughout history (O'Brien [1997] 2017). This process of spiritual renewal constitutes the Christian understanding of salvation history and features as a recurring theme throughout O'Brien's paintings. For example, in St. Francis Embracing the Leper (Figure 3), O'Brien explores the degree to which Christ's incarnation, and especially his passion, death and resurrection, transforms how the artist sees the value of other people-especially those on the margins of society who suffer with difficulties we instinctively wish to avoid or hide.
Following St. Francis' example, O'Brien represents the leper in persona Christi ('in the person of Christ'), thereby expressing a profound understanding of the dignity of human suffering when it is placed within the context of Christ's redemptive self-sacrifice. Incorporating the tradition of early Christian iconography, in which the left and right sides of Christ's face are made asymmetrical (so as to indicate the hypostatic union), O'Brien suggests that through the leper's disfigurement and suffering the love of the God-man and the mystery of the incarnation, itself, are uniquely found. In this way, the dignity of the human person, irrespective of circumstances or conditions, is made explicit. In the background of the painting, we spot a reference to Calvary, which is at once the site of Christ's profound suffering and the gateway to resurrection and spiritual transformation. It also serves as the key moment in salvation history, allowing Francis' encounter with the leper, centuries later, to take on new significance and depth. Given all this, the painting offers us a window into the ways in which the lives of the saints, throughout history, witness to and "manifest . . . the life of Christ in countless forms", making the "hidden face of divine love . . . visible" (O'Brien 2019, p. 103).
As with devotional icons, St. Francis Embracing the Leper serves as a mode, then, of visual theology, communicating the degree to which Christ's incarnation offers the artist a new way of viewing the world, one in which he or she learns to reverence and care for creation as opposed to dominating or worshiping it. The figure of St. Francis is the exact opposite of the ego-driven artist who has rejected the spiritual dimensions of existence. Often depicted as a kind of holy fool in literature and art, St. Francis is characteristic of a 'type' of character or personality O'Brien presents and represents in his paintings and novels. For instance, in The Fool of New York City, O'Brien (2016) orchestrates a series of important encounters between a disenchanted, post-modern artist (or aesthete), who is suffering from amnesia, and a quiet, giant of a man who lives like a humble Franciscan in the concrete jungle of New York city, transforming the lives of those who encounter him through his radical commitment to the beatitudes. Throughout this novel and O'Brien's paintings more generally, Christ and Christ-like figures abound, serving as representations of agapic, contemplative love in the midst of the world and its problems.
It is particularly in his imaginative depictions of Christ and Christ-like behaviour that O'Brien offers contemporary, scholarly conversations on the Catholic imagination concrete models of the integration between thought and action, philosophy and virtue, imaginative expression and the 'art of living'-to borrow a phrase from Dietrich von Hildebrand (Von Hildebrand 2017). Having considered the close relationship between contemplation of Christ and worship in O'Brien's theological aesthetics, the final section of this paper examines the degree to which O'Brien's contemplation of the cross enriches conceptualisations of the Catholic imagination.
O'Brien's Christological Aesthetics
In O'Brien's theological aesthetics the event of the incarnation gives the innate religious sensibility of the artist a wider horizon. The incarnation declares that God is not only provident but personal; he is intimately involved in the lives of each of his creatures. As importantly, Christ's incarnation declares that the natural and supernatural are not separated by a major gulf; rather, they are merged together. The finite and infinite are joined; the horizontal and vertical, the immanent and transcendent are met in Christ's hypostatic union. This point is, of course, central to what Balthasar argues in The Glory of the Lord but O'Brien's own meditations on the implications of the incarnation for the arts are helpful and timely contributions to the growing conversations on the Catholic imagination-especially when it comes to seeing an example of how theory and practice, aesthetics and the making of art, can meet and mutually inform each other.
For O'Brien, the Eucharist and participation in the liturgy and the sacramental life of the Church are irreplaceable resources for the imagination. Christ's entrance into history and his redemptive work inaugurates the sacrament of the Eucharist and it is this mystery of divine self-gift which, according to O'Brien, unites "word, image, spirit, flesh, God and man" into "one" community of believers (O'Brien [1997] 2017). Through the institution of the Eucharist, Christ draws all believers together in a fellowship rooted in divine self-donation, a gift finding its fullest expression in the liturgy of the Eucharist (O'Brien [1997] 2017). In the liturgy, the entire drama of Christ's life is remembered, and in this remembering the participating faithful are drawn into communion with Christ and each other. This key aspect of the liturgical theology of the Roman Catholic Church recurs throughout O'Brien's art, both explicitly and implicitly. Given this, O'Brien's philosophy of sacred art is especially expressive of two of the three kinds of "mystical experience[s] within the Christian church" as identified by Oliver Davies (Davies 1988, p. 4). According to Davies, the divisions are as follows: first, "a form which we may call the mysticism of the sacraments and of the liturgy"; second, a "Christocentric spirituality which is based upon imagery that is sometimes biblical and sometimes secular, and upon revelation . . . [which] in its more intense form [may include] visions in which a supernatural dimension entirely effaces everyday reality"; and, finally, the transcendence of imagery and an encounter with "the 'darkness' and the 'nothingness' of the Godhead itself in a journey which leads the soul to the shedding of all that is superfluous . . . to God" (Davies 1988, p. 4).
O'Brien's paintings and novels do occasionally and paradoxically involve the "transcendence of imagery" and it is an element of his aesthetics which could benefit from even further consideration in scholarship. In so far as his theology of aesthetics attends to the dark night of the soul, it usually finds fullest expression in his novels. However, there are instances when the transcendence of images is, paradoxically, a crucial subject of his paintings: note, for instance, his tendency to situate figures in dark, earthy or womb-like settings in an effort to image the image-lessness to which the pilgrimaging soul will be subjected. Likewise, in paintings such as John the Baptist in Prison (2001), The Prophet Elijah (2000) and Exodus (1982) we find him experimenting with a striking variety of styles, colours and expressions in order to communicate the mystical movements of the human soul, movements which transcend the very forms of expression which seek to talk about them in some way. These exceptions aside, O'Brien's art and reflections on art principally focus on the sacramental and Christological forms of mysticism, in which the inclusion of images is seen to help occasion closer contact with God incarnate. This positive theology of imagery is a central and abiding presence in O'Brien's paintings, from his earliest work in the 1970s to the present, and accounts for the wide-ranging series of passion paintings he has produced over the decades. For example, in his painting, Christ and Adam (from the early 1990s), we see O'Brien's Christological imagery yoked to a hope-filled, cruciform aesthetic (Figure 4).
Christ and Adam meditates on Adam's status as a precursor to Christ, the God-man, who is the 'New Man' or 'New Adam' and restorer of the union between God and humanity, a union which Adam and Eve damaged through the fall (Genesis 1: 1-3). By his life, death and resurrection, Christ becomes the New Adam who transforms the original Adamic relationship with God. The sombre, earthy palette of the painting invokes Adam's creation out of "the dust of the ground" and Christ's incarnation (Genesis 2: 7). Although Christ bears his stigmata and the wounds of his passion, he is the one supporting a weakened Adam. As the wounded healer, Christ enters into solidarity with Adam's fallen condition, drawing him into new life.
The painting brings together a constellation of images which reference Christ's suffering on earth. Once again, the three crosses O'Brien characteristically includes in the background refer to Christ's passion and, due to their triadic clustering, also gesture towards the Trinitarian nature of God (as we recall, this triadic clustering was also present in O'Brien's The Nativity and, indeed, is found in most of his paintings). Christ and Adam therefore transposes 1 Corinthians, chapter 15, in which St. Paul compares and contrasts Adam with Christ, reminding the faithful of Corinth that Adam's fall led to original sin and the punishment of death but Christ's resurrection undid the power of death, thereby transforming the meaning of suffering and reuniting fallen humanity with God: "For by a man came death, and by a man the resurrection of the dead. And as in Adam all die, so also in Christ all shall be made alive" (1 Corinthians 15: 21-22).
Given all this, the painting views the cross with reverential joy, seeing it as not just a brutal death or poison but also a cure: it is the holy pharmakon, as it were. In so doing, it celebrates the central paradox of Christian faith: the cross leads to new life. O'Brien is therefore not satisfied with just two references to Christ's passion. He not only includes three crosses and details of Christ's glorified wounds (note how the dotted imprints of the crown of thorns are ennobled in this painting, appearing like a scarlet constellation of stars on his forehead); he also incorporates the top portion of a saltire cross, upon which Christ props a weakened and world-weary Adam. In interweaving multiple depictions of the cross, O'Brien deftly incorporates an element of devotional imagery which emerged during the fourth and fifth centuries, following on from the Edict of Milan (313 CE), but which gained in momentum and popularity throughout the Middle Ages, particularly in Western churches. Early and late medieval art (frescos, mosaics, icons, acrostic poems, Gothic and Romanesque churches, and mystical writings or 'shewings') were encoded with a multiplicity of cruciform patterns and representations of the cross and the instruments of Christ's passion. For example, in the famous, twelfth-century apse mosaic located in the Basilica of San Clemente al Laterano, in Rome, Italy, the cross is depicted as the Tree of Life. Throughout the mosaic, there are a series of "allegorical repetition[s]" of the cross motif, including "the Sign of the cross itself: the monogram of Christ (chrismon) enclosed in an elliptical disc (clipeus) to symbolise the victory won over death by the death of the Cross" and, among other symbols, the lamb that was slain (a reference to Christ's crucifixion in Isaiah and the Book of Revelation, respectively) as well as coded depictions of the Eucharistic feast which is the memorial of Christ's sacrifice on the cross (Boyle 1989, p. 30).
The cruciform aesthetic found throughout the Christian art tradition is so prominent and celebrated that it led Erich Przywara SJ to observe that "the 'scandal of the folly of the Cross' appears as the origin, measure, and defining goal of Christian sacred art" (Przywara 2014, p. 554). As I have noted elsewhere, it is in observing and tracing this cruciform pattern in the Catholic imagination that Przywara finds in "the cruciform vision of the world [the fact] that Christianity offers an access to hope that can withstand human suffering [and] . . . reminds us that the witness of Christian art, throughout church history, upholds the cross as a sign of hope, contradicting the various fears [ . . . ] endured throughout the course of history" (Lamb 2021, p. 19). In O'Brien's art we discern the kind of cruciform aesthetics admired by Przywara and informing Roman Catholic philosophical and theological engagements with aesthetics throughout Church history. As importantly, we find O'Brien often fusing elements of the Byzantine iconographic tradition with aspects of realism in an effort to communicate the transcendent and immanent nature of Christ as fully God and fully man. This fusion is uniquely accomplished in his art and shows the degree to which his commitment to Christian doctrine widens (as opposed to limits) imaginative expression.
Before turning to the conclusion, it would be remiss if we did not pause to stress O'Brien's understanding that the cross is meaningful because it is the gateway to the resurrection. This is why, even in his starkest meditations on Christ's passion, his paintings tend to include glimpses or foreshadowing of the resurrection. This is accomplished in various ways, such as his depiction of Christ's wounds as stars or jewels, as seen in Christ and Adam. In this way, O'Brien's Christological aesthetics is keyed into the register of hope. Given this, his art exhibits what Christopher Wojtulewicz would call "eschatological transparency", a witness to the future glory of humanity resurrected. Speaking of devotional art (this side of heaven), Wojtulewicz writes: "[w]hen we see in the mirror the face of God, we do see it, but without the transparency that belongs to resurrected life, and when we try to express what we see, or merely the experience of seeing it when we do see it, we suffer the inability to clothe it in words or express it fully" (Wojtulewicz 2016, p. 8). Despite the vivid and concrete imagery which characterizes most of O'Brien's art (and therefore places him firmly in the first two 'ways' of mystical experience identified by Davies), it is nonetheless clear that his artwork, in true iconographic style, always already points beyond itself, aware of the limits of the artist's imagination to convey the depths of the mysteries it nonetheless invites us to contemplate.
Conclusions
In his meditations on the Apocalypse, O'Brien writes that Christians are called to use their gifts in a way that draws life and wisdom from the tradition, sacraments, and doctrine of the Catholic faith. "We will grow in . . . virtues", he says, "to the degree that we keep turning toward the light, hungering for the bread of life in the Eucharist, for the living word of God in sacred Scripture, and for continual encounter with Jesus in prayer" (O'Brien 2018, p. 154). Participation in the life of the sacraments is not only vital for the community of Christian believers. For O'Brien, the devotional artist, who seeks to cultivate a Catholic imagination, relies on contemplation and sacramental encounters with Christ in order to receive deeper insight into the reality of existence. Drawing on the thought of Maritain-who held that the artist is called to be a saint-O'Brien writes that " . . . the artist's task is to seek the face of God, to search out for his people an authentic image of Christ Jesus. To make visible what is invisible" by, as Maritain puts it, embracing "a steady struggle . . . [one] which has to pass through trials and 'dark nights' comparable, in the line of creativity of the spirit, to those suffered by mystics in their striving toward God" (O'Brien 2013, pp. 49-50).
Such a contemplative attention to the world and the practice of the arts characterises O'Brien's thought as well as his own artwork, contributing to our understanding of the nature and expression of imaginative engagements with Catholicism. Too often, scholars of theology and related disciplines do not give the interplay between prayer and art practice the focus it deserves. It is either assumed as a given or noted but then passed over for the sake of more systematic approaches to the question of the aesthetic in religion. Balthasar, John Paul II and O'Brien are among the thinkers and artists who have recently sought to redress this tendency within theology to separate thought from experience, philosophical inquiry from the spiritual life and art theory from art practice. In O'Brien's paintings and writings we therefore find an integrated and timely articulation of the Christian understanding of the relationship between contemplation and the imagination. As importantly, in his consideration of Christian doctrine and the long art tradition which it has inspired, we discern in O'Brien the development of a theological or devotional aesthetic which is at once highly stylized yet also personal, sensitive to the transcendent that is within and beyond the human scale of things, and ever attentive to Christ's analogical relation to the cosmos. It is therefore not only O'Brien's penetrating insights into theological aesthetics that make him an important interlocutor in the growing literature on the Catholic imagination; it is also his way of visually expressing theological and philosophical insights that makes him an especially fruitful conversation partner in contemporary scholarship on the nature and implications of the Catholic imagination. In his art O'Brien shows us that the Catholic imagination embraces the world but is not of it.
Soliciting organ donations by medical personnel and organ donation coordinators: A factor analysis
The literature on organ donation in Taiwan lacks a discussion of the roles of medical staff, organ donors, and transplant coordinators in organ donation. The greatest challenge facing organ donation is the shortage of donated organs. Reasons the donation procedure may not be completed include religious, traditional, and cultural beliefs; perceptions of disease; and the failure of persuasion or the loss of potential donors. A substantial body of research has shown that the attitudes of medical personnel influence willingness to donate and the success of solicitation. This study considered such personnel and their participation in organ donation, specifically analyzing factors influencing their effectiveness. Snowball sampling was adopted to recruit medical staff, organ donors, and transplant coordinators for an online survey. The results revealed that some participants were unclear as to how to initiate the organ donation process and what practical operations are involved. Even with the necessary qualifications, some participants remained passive when soliciting organ donations in clinical practice. Organ donation coordinators with experience in caring for organ donors who attended organ donation courses performed well in soliciting organ donations. The researchers recommend that training courses on clinical planning and organ donation be incorporated into intensive care training and that they serve as the basis for counsellors soliciting organ donations, to increase nurses' willingness to solicit organ donations.
Introduction
Much of the relevant literature in Taiwan and abroad argues that the attitude of medical staff toward potential organ donors affects actual donation or solicitation. When medical staff solicit donations with confidence and awareness, such that patients' families do not reject the idea outright, up to 84% of family members may agree to organ donation when proactively asked. Conversely, family members may unanimously refuse when asked without confidence or preparation. This demonstrates that the attitudes and thoughts of medical staff affect the decisions of patients' families, highlighting how medical staff members' past experiences, educational background, personal perception, willingness, and attitude toward organ donation and transplants may affect the discovery of potential donors [1,2].
Literature review
Organ transplants have a long history in Taiwan. In 1968, Professor Chun-jean Lee of National Taiwan University Hospital successfully completed the first kidney transplant in Asia and opened the door for organ transplants in Taiwan. Following the enactment of the 1987 Human Organ Transplant Act as well as procedures for determining brain death, Taiwan became the first country in Asia to establish regulations governing organ transplantation and its technology, which resulted in subsequent successes in liver, heart, and lung transplants. In contrast to Japan, which did not legislate for brain death until 1999, Taiwan is a pioneer of organ transplantation in Asia [2].
As organ transplant technology matured and developed, organ transplants became the last hope of many patients facing organ failure. With the invention of new immune suppressants and anti-rejection drugs in particular, transplants of organs such as the heart, liver, and kidneys have achieved 3-year survival rates of 70% and even up to 95%. According to the Taiwan Organ Registry and Sharing Center's statistics, over 8,000 patients were waiting for a successful organ match as of 2018, whereas only approximately 200 organ donors appear each year. This shortage causes a substantial bottleneck in organ donations.
Despite the high survival rate of organ transplant recipients and the increasing number of people with organ donor cards in Taiwan, organ donation rates in Taiwan remain low compared with European and American countries. Spain has the most comprehensive organ donation measures in the world. It had an organ donation rate of 17.8 per million people in 1990 and 35.1 per million in 2013 [3]. This rate is five times that in Taiwan. Organ donation levels are low in Taiwan primarily because of religious beliefs, traditional customs, fear of disease, failure to recruit organ donors, and loss of potential donors [4,5]. In addition to patients' and family members' self-worth affecting their decisions, the decisions of such patients can be influenced by attending clinical medical staff. Many studies have reported that clinical medical staff are often the first to discover potential organ donors but seldom proactively inform organ donation and transplantation teams or inquire with family members regarding organ donations [5,6]. Article 10-1, Paragraph 4 of Taiwan's Human Organ Transplant Act states the following: "To promote the ethos of organ donating, hospitals shall take initiatives to establish a donation soliciting mechanism to ask the family members of potential donors of suitable organs of their desire for organ donation, and hence expand the sources of organ donation." Therefore, soliciting organ donations is a legal expectation of medical staff.
At present, in Taiwan, hospitals are responsible for organ donation and its attendant procedures. Organ donation generally refers to the donation of organs or tissues following brain death or the end of life. In Taiwan, members of the organ transplant office of each hospital include transplant coordinators, social workers, and transplant nurse practitioners. Upon discovering a potential donor during clinical care, frontline medical staff immediately notify the organ transplant office to initiate the organ donation procedure. First, a transplant nurse practitioner of the office assesses whether the individual is suitable for organ donation. Subsequently, a social worker communicates with the individual's family to understand their opinions. For the individual to donate his or her organs, the consent of his or her family members must be obtained and an organ donation consent form must be signed before the organ donation application can be processed in the subsequent medical treatment. According to Article 4 of the Human Organ Transplant Act, "when performing a transplant operation by removing an organ from a corpse, the organ donor shall be certified dead by his/her attending physician before the operation can be performed." Brain death must be determined in accordance with the procedures stipulated by the Ministry of Health and Welfare. In addition, according to Article 12 of the Human Organ Transplant Act, "any organ for transplantation shall be provided or acquired free of charge." The stipulated organ donation procedures are as follows. First, frontline nursing staff find potential organ donors through an assessment by a doctor or a recommendation by family members. Second, frontline nursing staff notify members of the organ donation office in the hospital. Third, nursing staff members explain the organ donation procedures to the family members of a donor and confirm their willingness to accept the donation. Fourth, the nursing staff members guide the donor's (patient's) family members in signing a consent form and relevant documents. Fifth, the organ donation office initiates the requisite tests, takes care of the donor, and maintains the vital signs of the patient or donor. Sixth, the determination of the donor's brain death must be confirmed twice. Seventh, and finally, organ donation and transplant surgeries are performed.
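For readers who find it helpful to see the ordering constraints made explicit, the stipulated procedure can be modeled as a simple ordered checklist. The following Python sketch is purely illustrative (the step names and function are hypothetical, not part of any hospital system); it also encodes the rule, noted in the survey results below, that at least 4 hours must elapse between the two brain death determinations.

from datetime import datetime, timedelta

# Illustrative only: the seven stipulated steps, in the order described above.
DONATION_STEPS = [
    "find potential donor (doctor assessment or family recommendation)",
    "notify the hospital's organ donation office",
    "explain procedures to the family and confirm willingness",
    "guide the family in signing the consent form and documents",
    "run requisite tests and maintain the donor's vital signs",
    "confirm brain death twice",
    "perform organ donation and transplant surgeries",
]

def second_test_permitted(first_test: datetime, second_test: datetime) -> bool:
    """Brain death requires two determinations; the second test may only be
    conducted at least 4 hours after the first (per prescribed procedures)."""
    return second_test - first_test >= timedelta(hours=4)

# Example: a second test scheduled 3 hours after the first is not yet permitted.
assert not second_test_permitted(datetime(2018, 1, 1, 8, 0), datetime(2018, 1, 1, 11, 0))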
Research design
With the consent of the Organ Donation Association, snowball sampling was adopted to sample medical staff and organ donation and transplant coordinators.
Participants and eligibility criteria
The target group consisted of certified organ donation and transplant coordinators, who were asked to invite a doctor (of any level, i.e., intern, resident, or attending; of any discipline; and aged 20 years or older) and a nurse (with over 3 months of work experience and familiarity with clinical nursing services) to complete an online survey. The online survey was distributed with assistance from the Organ Donation Association. The survey explained the research purpose and content to the participants. A total of 192 valid responses were collected (response rate: 73.8%). The Research Ethics Committee of the National Taiwan University approved this study (IRB:201504ES002).
Research tools
The self-developed questionnaire was constructed with reference to the expert questionnaires used by Huang et al. [6], Shi et al. [7], Zhang et al. [8], and Cory et al. [9] and was divided into four sections. The design was based on questionnaire design principles that test hypotheses in terms easily comprehensible by interviewees to gain insight into their traits. Section 1 concerns demographic variables, and the organ donation attitude scale in Section 2 contains 20 items regarding participants' thoughts, beliefs, and behavioral tendencies toward organ donations, including their thoughts and views on organ donations and care. Section 3, the organ donation knowledge scale, comprises 14 yes/no questions on participants' experiences with organ donation, surveying organ donation and transplant coordinators' level of understanding of the definition and determination of brain death as well as related legal requirements. The organ donation efficacy scale of Section 4 focuses on participants' successful and failed experiences in soliciting organ donations.
Validity and reliability of research tools
Expert validity review. Five experts on organ donation were invited to test the validity of the first draft of the self-developed organ donation survey-an organ donation-soliciting physician, transplant surgical nurse, social worker, transplant coordinator, and family member of a patient who successfully received an organ donation. The content validity index of the survey reached 0.818.
Reliability analysis. A pretest was conducted with 35 participants who were drawn from the parent group and excluded from the formal sample. The Cronbach's α of the survey's internal consistency was 0.856 for the organ donation attitude scale and 0.704 for the organ donation knowledge scale, which indicated that the survey was sufficiently reliable and could be adopted.
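For reference, Cronbach's α can be computed directly from the item responses. The sketch below is a minimal illustration in Python, assuming the pretest responses are stored as a pandas DataFrame with one column per scale item (the variable and column names are hypothetical):

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    items = items.dropna()
    k = items.shape[1]                                # number of items on the scale
    item_var_sum = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical usage with the 20-item attitude scale from the pretest:
# attitude_items = pretest_df[[f"attitude_{i}" for i in range(1, 21)]]
# print(cronbach_alpha(attitude_items))  # the authors report 0.856 for this scale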
Statistical analysis
The coding, archival, and statistical analyses of the survey responses were processed in Excel and SPSS by using descriptive (frequency distribution and percentages) and inferential statistics. Descriptive statistics of the following variables were calculated: basic characteristics (gender, age, education level, marital status, religious belief, and explicit consent given to be an organ donor), job attributes (number of years working, type of occupation, department working at, and hospital type; indicated by frequency and percentage), scores on the organ donation attitude scale, scores on the organ donation knowledge scale, and variables for performance in promoting organ donation. Regarding inferential statistics, an independent samples t test, one-way analysis of variance, Pearson's product-moment correlation, and multiple regression were used to analyze the relationships between basic attributes, organ donation attitude, organ donation knowledge, and performance in soliciting organ donations.
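As an illustration of this analysis pipeline, the sketch below reproduces the four inferential procedures in Python with SciPy and statsmodels rather than SPSS; the file and column names are hypothetical placeholders for the variables described above, not the authors' actual dataset.

import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the 192 responses

# Independent-samples t test: attitude scores by organ donation course attendance.
yes = df.loc[df["attended_course"] == 1, "attitude_score"]
no = df.loc[df["attended_course"] == 0, "attitude_score"]
t_stat, t_p = stats.ttest_ind(yes, no)

# One-way ANOVA: attitude scores across education levels.
groups = [g["attitude_score"].to_numpy() for _, g in df.groupby("education_level")]
f_stat, f_p = stats.f_oneway(*groups)

# Pearson's product-moment correlation: knowledge score vs. attitude score.
r, r_p = stats.pearsonr(df["knowledge_score"], df["attitude_score"])

# Multiple regression: solicitation performance on candidate predictors.
predictors = df[["donor_care_experience", "is_coordinator", "attended_course"]]
model = sm.OLS(df["solicitation_performance"], sm.add_constant(predictors)).fit()
print(model.summary())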
Demographic distribution
The demographic information of the participating medical staff and organ donation and transplant coordinators was analyzed using descriptive statistics, expressed as both numbers and percentages. The results revealed that the majority of participants were female, worked in surgical disciplines or as organ donation and transplant coordinators, and were employed in medical centers. The average age range was 30-39 years, and the participants were mostly college or technical school graduates. Most participants were married and had no religious beliefs. Most participants had 5 or more years of clinical work experience, and most had organ donor cards and attended courses on organ donation.
Responses on the organ donation attitude scale
The highest scoring item was "I think that organ donation is meaningful" and had an average score of 4.53 (SD = 0.63). The second highest scoring item was "Organ donation continues the organ donor's life and gives the recipient a second chance at life" and had an average score of 4.43 (SD = 0.64). The third highest scoring item was "I think that coordinating the solicitation of organ donations is a meaningful task" and had an average score of 4.40 (SD = 0.67).
Responses on the organ donation knowledge scale

This section comprised 14 yes-no questions (1 point was awarded for each correct answer, for a maximum score of 14) on workplace organ donation procedures and regulations for determining brain death. The three highest scoring questions were "After the first test to determine brain death, at least 4 hours must elapse before the second test is conducted in accordance with prescribed procedures," "According to Taiwan's standards for determining brain death, brain death is defined as brain stem death," and "Central health and welfare agencies shall subsidize the funeral costs of organ donors as prescribed by those agencies." These results suggested a certain degree of understanding of the guidelines that determine brain death.
The survey revealed that 113 (58.9%) participants had engaged in organ donation solicitation, and 79 (41.1%) had not. Among participants who had never solicited organ donations, the more common reasons were "This is not the business of my current department," "I think organ donation solicitation from the organ donation and transplant team is more appropriate," and "I have not encountered appropriate patients to solicit donations." These results revealed that some participants were passive in advocating organ donations.
Among the reasons given for successful organ donations, whether arising from active solicitation by study participants or proposed by patients' families, the first and third most common reasons for both types of donation were "Organ donations can help other people" and "The organ donor wished to donate their organs or had signed an organ donor card." The second most common reason was "Organ donation is a means of continuing life" for donations successfully solicited by participants and "Organ donations are acts of kindness that will be rewarded" for unsolicited donations proposed by family members. These results revealed that most reasons for successful donations were derived from altruism.
The survey also indicated reasons for failed organ donations. The top three reasons participants gave for families' refusals were "The family members desired to keep the patient's body intact or were unwilling to have the patient suffer from operations again," "The family did not want to donate the patient's organs," and "Resuscitation was pursued to the full extent." These reasons were thus consistent with those reported in the literature [5,10-12].
Relationship between participant demographics and the organ donation attitude scale
Among the 13 variables for all demographic traits in Table 1, participants' attitudes toward organ donation exhibited significant differences in terms of "whether they are registered organ donors" and "whether they attended organ donation courses" (F = 15.353, p < 0.01; T = 2.675, p = 0.008), as confirmed by Bonferroni post hoc comparisons. Table 3 presents the relationship between nine variables (educational level, religious beliefs, work department, job title, type of hospital of employment, experience in caring for organ donors, experience in caring for organ recipients, organ donor registration status, and attendance at organ donation courses) compared between participants who had and had not engaged in organ donation solicitation (p < 0.05); the complete form is provided in S3 Table in the supporting information. The multiple regression analysis results in Table 4 indicated that the key factors for the successful solicitation of organ donations included "having experience in caring for organ donors," "an organ donation coordinator job title," and "having attended organ donation courses." Participants who were organ donation coordinators with experience in caring for organ donors and who attended organ donation courses performed well in soliciting organ donations. Table 5 suggests that among organ donation course participants, the highest proportion attended courses on soliciting organ donations, followed by family grief counseling courses, organ transplant courses, and brain death determination courses.
Discussion
1. The study participants were primarily medical staff and organ donation coordinators. The results revealed that those in surgical disciplines, women, organ donation coordinators, and medical center staff formed the majority of participants. Their ages ranged from 30 to 39 years, and they were mostly college or technical school graduates. Moreover, the average score for healthcare workers' and organ donation coordinators' attitudes toward organ donations was 78.1 ± 8.8, which signified that the participants generally had positive attitudes toward organ donation. In-depth discussions revealed that a majority of participants identified positively with organ donation but continued to worry about it. Clinical nursing staff may be reluctant to suggest organ donation to patients' families because of their own unclear knowledge of organ donation, unwillingness to become involved with families' grief and suffering, difficulty accepting that organ donations help other patients and their families, worries that they are not empowered to solicit organ donations, fear of being blamed or refused, or potential conflicts from believing that organ donation means giving up medical treatment [13,14].
2. The participants scored 9.27 (SD = 1.21) on average on their knowledge of organ donation, which was higher than the average score of the public [13,14]. This result reflected the relationship between participants' education and work experience as well as their continued learning with regard to organ donation. The question "I know the procedure for initiating the organ donation process" received the fewest yes responses, which suggested that participants were unclear on how to initiate the organ donation process and remained unfamiliar with practical operations. Second, clinical first-line caregivers, who are key stakeholders in discovering potential organ donors, had the second-highest rate of incorrect answers to the question "I know the standards for confirming organ donors." This point should be reinforced in organ donation education and advocacy.
3. The survey on successful solicitations of organ donations revealed that 79 participants (41.1%) had never engaged in solicitation. This reflects the current medical environment: because medical care regulations do not effectively protect practitioners or allay fears of medical disputes, many practitioners adopt conservative attitudes. Beginning the organ solicitation process from a medical perspective is suggested: once a patient is suspected to be brain dead, the hospital must report the potential organ donation to the Organ Donation Association or the Ministry of Health and Welfare. Then, an organ donation coordinator should be requested to provide medical and administrative assistance in the hospital where the allegedly brain-dead patient is hospitalized. Departments that declare organ donations should be rewarded at the end of the year to motivate medical staff and organ donation coordinators to discover potential donors.
4. Regarding participants' attitudes toward organ donation, 50.6% of participants had organ donor cards. This percentage was higher than that of nursing staff reported in the Taiwanese literature, which suggested high acceptance of the promotion of organ donation in recent years. Ke et al. [15] argued that because Taiwanese people are more restrained when expressing emotions and organ donation is mostly decided jointly by family members, families are more willing to choose donation if they know that the patient intended to donate their organs. This is why Taiwan is currently promoting organ donor cards and marking people's willingness to donate on health insurance cards to signify their support for organ donation.
5. Success in soliciting organ donations was positively correlated with nine variables: level of education, religious beliefs, work department, job title, type of hospital of employment, experience in caring for organ donors, experience in caring for organ recipients, organ donor registration status, and attendance at organ donation courses. These correlations were related to participants continuing to learn about organ donation through education and work experience. Courses on soliciting organ donations had the highest significance among organ donation course participants, followed by family grief counseling, organ transplant, and brain death determination courses. These results indicated that the courses were beneficial to soliciting organ donations. Including courses on soliciting organ donations in the compulsory course credits for medical staff and organ donation coordinators may improve organ donation solicitation.
Participants' demographic traits and other factors influenced organ donation solicitation.
To understand the factors influencing participants' organ donation solicitation results, multiple regression analysis was conducted on the nine variables of level of education, religious beliefs, work department, job title, type of hospital of employment, experience in caring for organ donors, experience in caring for organ recipients, organ donor registration status, and attendance at organ donation courses. The results revealed that the key factors in the successful solicitation of organ donations included "having experience in caring for organ donors," "an organ donation coordinator job title," and "having attended organ donation courses." That is, organ donation coordinators with experience in caring for organ donors who had attended organ donation courses performed well in soliciting organ donations, possibly because organ donation coordinators must undergo training and pass an exam to be certified and thus have more experience in soliciting donations. Medical staff should be encouraged to undergo such training.
Study limitations
This study was conducted using a structured questionnaire, and participants may have had reservations about or misinterpreted the questionnaire contents; this may have led to measurement errors. Moreover, owing to differences in hospital scale and environment, the results may be only superficial.
Suggestions
Hospitals soliciting organ donations should implement hospice care consultations for end-of-life patients, advocating for such patients' right to express their intention to donate their organs, before actively inquiring about patients' intentions regarding organ donation; this would spur major hospitals throughout Taiwan to actively join in organ donation solicitation. Medical staff are crucial to the organ donation process because they are on the front lines of discovering potential organ donors. The attitude of medical staff toward organ donation affects the push for organ donations, and care throughout the entire organ donation process is a vital link [16]. How nursing staff can be encouraged to engage in soliciting organ donations is a question deserving attention. Therefore, the researchers of this study recommend the following: (1) Apply these findings in administrative practice as a reference for clinical planning and organ donation training courses, to encourage intensive care nurses to proactively solicit organ donations and increase the organ donation rate. (2) Include training courses on organ donation in certifying exams for intensive care training and as the basis for counselors engaging in soliciting organ donations. This would reinforce intensive care nurses' knowledge of and positive attitudes toward organ donation and increase their willingness to solicit organ donations, which can ensure that patients on organ wait lists benefit from the generosity of donors and their families. (3) As caregivers, emergency and critical care nurses are generally passive in soliciting organ donations; however, they also play the roles of educators and organ donation solicitors. The results from nurses engaged in soliciting organ donations can inform future research on the attitudes and behavior of intensive care nurses toward soliciting organ donations. Such studies may improve the nursing and communication skills of nurses in emergency and critical care units, allowing them to allocate time, when patients' vital signs are stable, for family members to consider and decide on organ donation [17].
Conclusion
In Taiwan, because organ transplant technology has continually evolved, the one-year survival rate after transplant surgery now exceeds 80%, and transplantation has become a major treatment option for patients with organ failure. However, because of conservative mindsets in ethnic Chinese society, the lack of sources of transplanted organs is a major problem that must be overcome. Because changing people's conceptions of organ donation is a slow process, education plays a crucial role. With the rapid evolution of technology, young people have grown up with electronic products and spend more time in virtual worlds than with their families and friends. In a society dominated by utilitarianism, which emphasizes rapid, unemotional responses, insufficient care for others has become a widespread phenomenon. Medical and nursing students will face the most immediate problems of human life in their future careers. Therefore, life education plays a critical role in teaching ultimate concern and empathy to medical students, nursing students, and students of other types. Topics related to organ donation have been integrated into courses at schools of all levels to instill appropriate concepts, knowledge, and life education meanings regarding organ donation, thereby promoting the general public's approval of organ donation. Organ donation enables patients with organ failure to extend their life span and improves their quality of life; such is a new understanding of eternity. Moreover, practical clinical knowledge and skills must continue to systematically promote the idea of organ donation. Support groups related to life education can be formed, and people can be encouraged to share their life stories to continue reinforcing and sustaining the benefits of organ donation [18][19][20][21].
Forecasting the Trends of COVID-19 and the Causal Impact of Vaccines Using Bayesian Structural Time Series and ARIMA
Several researchers have used standard time series models to analyze future patterns of COVID-19 and the causal impact of vaccinations in various countries. Bayesian structural time series (BSTS) and ARIMA (Autoregressive Integrated Moving Average) models are used to forecast time series. The goal of this study is to examine a much more adaptable and effective methodology that breaks down the main components of a time series. Within the period of March 1, 2020, to June 30, 2021, we used these state-space models to explore the forecast patterns of COVID-19 in five afflicted nations. In addition, we used intervention analysis under BSTS models to examine the causal effect of vaccines in these countries, and we reached higher levels of accuracy than with ARIMA models. According to the forecasts, the number of confirmed cases in the United States, the United Kingdom, the United Arab Emirates, Bahrain, and India will climb by 1.17%, 19.4%, 15.5%, 13.8%, and 8%, respectively, during the next 60 days. On the other side, death rates in the United States, the United Kingdom, the United Arab Emirates, Bahrain, and India are expected to rise by 2.7%, 3.5%, 15.8%, 9.4%, and 14.8%, respectively. In addition, through effective and rapid vaccination, the United States, the United Kingdom, and the UAE have been able to reduce mortality. On the other hand, vaccination is currently unable to decrease the rate of cases and deaths in India; overall, the Indian healthcare system is likely to be seriously overburdened in the next month. Although the USA and UK have managed to cut down COVID-19 death rates, the numbers of confirmed cases in the UK and UAE are high compared to other nations, so serious efforts will be required to keep these controllable. To keep things under control, Bahrain and the other countries have to speed up vaccinations.
Introduction
In this era of big data, analysts use data to assist governments in decision making, as the availability of government-provided health, agricultural, energy, tourism, and economic data [1] has increased [2]. The Internet of Things revolves around real-time decision making; it is all about decision informatics and embraces Big Data's advanced technology [3]. Many methods are used to analyze, optimize, and forecast large data [4]. COVID-19 created a global emergency that affects many aspects of life, including health, economics, and politics [5]. It is an infectious disease caused by a coronavirus. Most people infected with the COVID-19 virus experience mild to moderate respiratory illness and recover without requiring special treatment. Older people and people with underlying medical problems such as cardiovascular disease, diabetes, chronic respiratory disease, and cancer are more likely to develop serious illness. Although COVID-19 has low mortality rates compared to severe acute respiratory syndrome (SARS) and the Middle East respiratory syndrome (MERS), this virus has higher transmissibility [6]. COVID-19 trends are unknown, and its end is also uncertain. Where specific information and computing resources are available, mechanistic models can be sufficient to predict coronavirus infection patterns and to model more accurately the impact of different intervention strategies to inform decision-makers and health care workers [7]. Many studies address the future trends of the epidemic in many regions, such as Nigeria [8], Iran [9], and the USA [10,11]. These papers used traditional methods to forecast the pandemic's future behavior, such as traditional ARIMA and regression models. These models face the problem of overfitting, especially where covariates are present [12]. Time series evaluation is regularly used for demand forecasting, which requires information on seasonality and trend, in addition to regression components. For small datasets, estimating these components with proper precision using traditional time series methods is hard.
Bayesian structural time series (BSTS) models are a viable option [13] owing to a number of intriguing features. These models can accommodate a large number of covariates and properly reflect stochastic behavior by allowing model parameters to fluctuate over time [14,15]. The Bayesian technique has the advantage of placing prior beliefs on the parameters, which is an advantage over the classical approach. This method is more transparent than ARIMA models and expresses uncertainty in a more general manner. It is more transparent because its representation does not depend on differencing, lags, and moving averages. These models can be useful for setting priorities in public health and for developing and implementing policies to address and avoid negative health situations [16]. These models have already been used to predict the health consequences of alcohol consumption [17] and to predict the negative effects on health and crime rates resulting from local alcohol licensing regulations [18]. It is also feasible to select appropriate variables through spike-and-slab priors using these models [19]. The BSTS models predicted future health consequences from alcohol consumption better than ARIMA. Even though the training dataset contained only eight data points, they were able to construct a reasonably accurate 1-5 year estimate [17].
BSTS models, in a nutshell, are stochastic state-space models that can examine trend, seasonality, and regression components individually. Spike-and-slab priors are utilized in these models to choose appropriate covariates, and the final projections are produced using Bayesian model averaging. The estimates from these models are the least dependent on particular assumptions. In these models, parameters are weighted according to their inclusion probabilities. A linear regression component is not required, since the models select the most informative parameters. These models improve the depiction of estimation certainty and of change over time [17]. Analytically computing the Bayesian posterior distribution is, however, rather complex. As a result, numerical calculations are carried out using Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling [20]. The Bayesian structural time series (BSTS) models [21] are implemented with the recently built bsts package of R.
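As a concrete illustration of the workflow just described, the following is a minimal R sketch using the bsts package; the object name covid_series, the 60-day horizon, and the choice of a local linear trend with weekly seasonality are illustrative assumptions, not the authors' exact specification:

    library(bsts)

    # cases: hypothetical vector of daily cumulative confirmed cases,
    # standing in for one of the study's country series.
    cases <- as.numeric(covid_series)

    # State specification: local linear trend plus weekly seasonality.
    ss <- AddLocalLinearTrend(list(), cases)
    ss <- AddSeasonal(ss, cases, nseasons = 7)

    # Fit the model by MCMC (Gibbs sampling under the hood).
    model <- bsts(cases, state.specification = ss, niter = 2000)

    # Forecast the next 60 days, discarding burn-in draws.
    pred <- predict(model, horizon = 60, burn = 500)
    plot(pred)  # posterior predictive mean with credible intervals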
In response to the severe COVID-19 epidemic, global vaccine development efforts have increased. Even with minimal protection against infection, vaccination can have a significant influence on preventing COVID-19 outbreaks in the United States [22]. Evaluating the effects of vaccination in the target countries is therefore highly interesting. Intervention analysis can be used to investigate the causal impact of vaccination in these nations. Causal impact analysis employing Bayesian structural time series models is used to assess the effect of an intervention in the post-intervention phase. These models, in contrast to traditional ARIMA models, give both absolute and relative effects and perform better than conventional models owing to their chronological method, as well as additional benefits such as the use of past data and a complex covariate structure [23].
The study's first goal is to develop BSTS models for researching COVID-19's future trends and to compare their predictive power with that of the most commonly utilized ARIMA models. This study aims to look into COVID-19's temporal dynamics in five affected countries: the United States, the United Kingdom, the United Arab Emirates, Bahrain, and India. We also looked into vaccination's causal effects in these nations. To attain this goal, we explored BSTS models and intervention analysis using Bayesian structural time series models. When contrasted with ARIMA models, the outcomes showed a higher level of accuracy. The methods suggested can be used to examine these trends in any other country. The data used are publicly available, and the study did not require any permission from authorities.
A time series intervention analysis can be conducted using the BSTS designs. The discrepancy between the actual time series and the time series predicted to have occurred had the treatment not taken place can be calculated using these methods. The following steps can be used to examine the causal effects of vaccination with these computations. In the first stage, the BSTS model is estimated using data up to the target date (the date vaccination started; here we consider February 15 as the vaccination start date for the five countries). In the next stage, the estimated model is used to forecast the vaccination period as if the intervention had not occurred (without vaccination). Finally, to determine the causal influence of vaccination, the difference between expected and actual data during the vaccination period is evaluated.
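The three steps above map directly onto the CausalImpact package cited later in the paper. The following is a minimal R sketch; the series name deaths, the zoo construction, and the exact period boundaries are illustrative assumptions:

    library(CausalImpact)
    library(zoo)

    # deaths: hypothetical vector of daily cumulative deaths for one
    # country, indexed by date over the study window.
    series <- zoo(deaths,
                  seq(as.Date("2020-03-01"), as.Date("2021-06-30"), by = "day"))

    # Pre-period: before vaccination; post-period: from the
    # February 15, 2021 intervention date used in the study.
    pre.period  <- as.Date(c("2020-03-01", "2021-02-14"))
    post.period <- as.Date(c("2021-02-15", "2021-06-30"))

    # A BSTS model is fitted on the pre-period, and its counterfactual
    # forecast is compared with the observed post-period data.
    impact <- CausalImpact(series, pre.period, post.period)
    summary(impact)  # absolute/relative effects and posterior probabilities
    plot(impact)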
The BSTS models were used to generate forecasts for various pandemic parameters. These models incorporate the Bayesian technique: the likelihood function (current data) is blended with prior information (such as a professional viewpoint) to update the existing information and construct the final Bayesian models, known as posterior distributions. These models employ Bayesian model averaging and Kalman filtering to generate more exact forecasts [16]. Due to the complexity of these models, closed-form estimators for the model parameters are not available [21]. We used the R language to estimate the model parameters numerically using the Markov Chain Monte Carlo (MCMC) approach; the MCMC method uses conditional distributions to draw random samples for the model parameters and then averages the results to obtain the final estimates. The Ljung-Box test has been used to perform diagnostic checks on the models. Various forecast accuracy measurements, such as root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE), have been used to compare the forecast accuracy of BSTS models with that of the most often used ARIMA models. Because enhanced forecast accuracy was observed, we have given projections for the different pandemic parameters using BSTS models only. To study the causal effects, we treated vaccination (in the aforementioned countries) as an intervention and conducted an intervention analysis using BSTS models. The numerical findings for the causal impacts were acquired using the R package CausalImpact.
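For the accuracy comparison, one way to compute the three reported metrics on a holdout window is sketched below in R; the 30-day train/test split and object names are illustrative assumptions, since the paper does not state its exact evaluation scheme:

    library(forecast)

    # Hold out the last 30 observations of the (hypothetical) cases
    # vector as a test window.
    n     <- length(cases)
    train <- cases[1:(n - 30)]
    test  <- cases[(n - 29):n]

    # ARIMA benchmark forecast for the test window.
    fit <- auto.arima(train)
    fc  <- as.numeric(forecast(fit, h = 30)$mean)

    # RMSE, MAE, and MAPE; the BSTS posterior predictive mean
    # (predict(model, horizon = 30)$mean) is scored identically.
    score <- function(actual, pred) {
      c(RMSE = sqrt(mean((actual - pred)^2)),
        MAE  = mean(abs(actual - pred)),
        MAPE = 100 * mean(abs((actual - pred) / actual)))
    }
    score(test, fc)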
Bayesian Structural Time-series Models
The time series is broken down into four factors in the basic structural model: a level, a local trend, seasonal impacts, and an error term. A pair of equations can be used to define a structural time series model [24]:

$$y_t = Z_t^{T} \alpha_t + \varepsilon_t \qquad (1)$$

$$\alpha_{t+1} = T_t \alpha_t + R_t \eta_t \qquad (2)$$

The observation equation is the first (1), and the state equation is the second (2); it ties the observed data $y_t$ to the state vector $\alpha_t$, where $\varepsilon_t \sim N(0, \sigma_t^2)$ and $\eta_t \sim N(0, Q_t)$ are independent of all other unknowns, and $\varepsilon_t$ and $\eta_t$ are the observation error and system error, respectively. The output vector, transition matrix, control matrix, and state-diffusion matrix are represented by $Z_t$, $T_t$, $R_t$, and $Q_t$, respectively.
Local Level Model
The local level model is the simplest structural time series model. It assumes the trend is a random walk:

$$y_t = \mu_t + \varepsilon_t, \qquad \mu_{t+1} = \mu_t + \eta_t.$$

In the local level model, the matrices $Z_t$, $T_t$, and $R_t$ in the equations above collapse to the scalar value 1. The parameters of the model are the variances of the error terms, $\sigma_\varepsilon^2$ and $\sigma_\eta^2$.
Local Linear Trend Model
The local linear trend model assumes that both the mean and the slope follow random walks. The equation for the mean is

$$\mu_{t+1} = \mu_t + \delta_t + \eta_{\mu,t},$$

and the equation for the slope is

$$\delta_{t+1} = \delta_t + \eta_{\delta,t}.$$

Because it quickly adapts to local variability, the local linear trend model is a common choice for modeling trends. This is useful when making short-term forecasts. When making longer-term forecasts, this kind of flexibility may be undesirable, as such predictions frequently have implausibly large uncertainty intervals.
ARIMA Models
Three parameters determine the ARIMA(p, d, q) model. The parameter p in the AR(p) part indicates that the current value depends on its own p previous values, and the parameter q in the MA(q) part indicates that the current deviation from the mean depends on the q previous deviations; d is the order of differencing. The ARIMA(p, d, q) model has the form [25]

$$\left(1 - \sum_{i=1}^{p} \phi_i B^i\right) (1 - B)^d Y_t = \left(1 + \sum_{j=1}^{q} \theta_j B^j\right) \varepsilon_t,$$

where $B$ is the backshift operator. Here $Y_t$ is the total number of reported daily COVID-19 cases, with the first difference $\Delta Y_t = Y_t - Y_{t-1}$ representing the daily number of infections. The Akaike Information Criterion (AIC) was used to determine the final values of p, d, and q. The open-source software R was used to perform all calculations.
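A minimal R sketch of this fitting procedure is given below, assuming a hypothetical vector y of cumulative daily cases; the lag used in the Ljung-Box check and the 60-day horizon are illustrative choices:

    library(forecast)

    # The first difference of cumulative cases gives daily infections.
    daily <- diff(y)

    # Select (p, d, q) by minimizing the AIC, as described above.
    fit <- auto.arima(y, ic = "aic")
    summary(fit)  # reports the selected ARIMA(p, d, q) and its AIC

    # Ljung-Box diagnostic check on the residuals.
    Box.test(residuals(fit), lag = 10, type = "Ljung-Box")

    # Sixty-day-ahead forecast, matching the study's horizon.
    fc <- forecast(fit, h = 60)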
Results and Discussions
We performed this research to find out COVID-19's future behavior in the five nations afflicted by the virus, as well as the causal influence of vaccination in these countries. With these data in hand, we set out to compare the predictive accuracy of the recommended BSTS models with that of the much more often utilized ARIMA models. This comparison was made using various forecasting accuracy measures, namely RMSE, MAE, and MAPE. The comparison of forecasting accuracy is shown in Table 1. Table 2 and Figs. 1-15 provide the projections for the following sixty days, including the projected numbers of cases, deaths, and total vaccinations. Finally, the causal effects of vaccines are summarised in Table 3 and Figs. 16-25. Table 1 contrasts the BSTS and ARIMA models across several prediction accuracy measures. The BSTS models delivered more accurate forecasts than the ARIMA models, according to these findings. There are a few deviations, which could be related to the data's unpredictable behavior. As a result, for projecting COVID-19 trends in these nations, BSTS models have proven to be a suitable alternative to ARIMA models, and only the BSTS models' forecasts have been presented in full. Table 2 presents two-month projections for the five countries involved. On August 30, 2021, the cumulative number of positive cases, deaths, and population that has received at least one dose of vaccine in the United States was predicted to grow by 1.17 percent, 2.7 percent, and 12.5 percent, respectively. In the United States, 54 percent of the population had received at least one dose of the COVID-19 vaccine and 46.61 percent was fully vaccinated as of June 30. Our projection is that nearly 66.5 percent of the population will have had at least one dose by August 30, with correspondingly fewer deaths and cases in the United States. If people become fully vaccinated and take precautions, the coronavirus could be brought under control in the United States in the coming months. The predicted rises in the cumulative number of positive cases, deaths, and population that has received at least one dose of vaccine in the UK are 19.4%, 3.5%, and 17%, respectively. In the United Kingdom, 66% of the population had received at least one dose of the COVID-19 vaccine and 48% was fully vaccinated as of June 30, and we expect approximately 83% to have had at least one dose by August 30. The increased vaccination rate aids the UK in controlling daily COVID-19 deaths, but due to a lack of sufficient measures, the UK continues to have a substantial number of daily cases.
Similarly, the cumulative number of positive cases, deaths, and vaccinations in the UAE is predicted to grow by 15.5%, 15.8%, and 25.2%, respectively. On June 30, 74% of the population in the UAE had received at least one dose of the COVID-19 vaccine, and 64% were fully vaccinated. These rapid vaccinations in the UAE keep daily deaths to a bare minimum and daily cases under control.
In Bahrain, the overall number of positive cases, deaths, and the population that has received at least one dose of immunizations are expected to increase by 13.8%, 9.4%, and 14%, respectively. As of June 30, 62.17 percent of the Bahrain population had received at least one dose of the COVID-19 vaccine, 58% had been fully vaccinated, and we predict that by August 30, about 76% of the population will have had at least one dose. If Bahrain accelerates its immunization program, it will be able to reduce COVID-19-related deaths and cases on a daily basis.
In India, the overall positive cases, deaths, and population that has received at least one dose of vaccines are expected to rise by 8%, 14.8%, and 15.3%, respectively. As of June 30, 15.48 percent of India's population had gotten at least one dose of the COVID-19 vaccine, while 19.66 percent had been fully vaccinated, and our prediction is that by August 30, about 31% of the population will have had at least one dose.
The next stage was to look into the role of vaccines in the development of cumulative cases and deaths in the five countries concerned. It should be noted that vaccination began in the United Kingdom on December 8, 2020, in the United States on December 14, 2020, in the UAE on December 14, 2020, in Bahrain on December 22, 2020, and in India on January 16, 2021. The immunization date (February 15, 2021) was employed as the intervention in the BSTS models' intervention analysis. We compared the current figures to what might have happened if these countries had not vaccinated their citizens. The validity of the findings was investigated utilizing posterior probabilities and the likelihood of causal impacts. The outcomes are presented in Table 3 and Figs. 16-25. As we can see, the posterior odds of these impacts occurring as random events are far too low, whereas the probabilities of causal effects are relatively high. This demonstrates the importance of the causal effects of immunization in each of the five countries involved. The vaccine reduced the number of cases by 9.7 percent, 10 percent, and 12.3 percent in the United States, the United Kingdom, and the United Arab Emirates, respectively. Likewise, these countries saw a decline in death rates, with 17.9 percent, 7.7 percent, and 3.8 percent for the United States, the United Kingdom, and the United Arab Emirates, respectively. As a result, these countries may have benefited from a high immunization rate in their populations. On the other hand, there is no evident impact of vaccination in terms of cases and deaths in Bahrain; in India, due to poor vaccination rates, just 15 percent of the population had received the vaccine by June 30, and vaccination is not having a visible impact in terms of cases and deaths. The current estimates for the overall numbers of illnesses and deaths are significantly higher than those predicted for the vaccine period. As a result, India must improve the speed with which vaccines are administered so that people can return to their normal lives.
Conclusion
According to a literature survey, there has not been any research into the separation of components in relation to the changing behavior of COVID-19 trends. The BSTS models disaggregate the COVID-19 trends into various components, which is an important aspect of this study. The proposed method also permits the coefficients to fluctuate over time, allowing for better detection of the data-generating process. We showed that BSTS models could help with early preparation, prioritization, and distribution of healthcare resources to mitigate COVID-19 effects in the nations studied. Furthermore, the causal effects of vaccination have been studied. With a few exceptions, the study's findings imply that the proposed models' forecasting accuracy is superior to that of commonly used ARIMA models. Among these countries, the percentage increase in the number of cases is predicted to be highest in the UK; the percentage rise in the total number of cases will be ranked in the following order: UK > UAE > Bahrain > India > United States of America. The rate of increase in the number of deaths is predicted to be most significant in India; the percentage rise in the total number of deaths will be ranked in the following order: India > UAE > Bahrain > UK > USA. Our research also suggests that the United States, the United Kingdom, and the United Arab Emirates have implemented successful vaccination plans to lower the number of cases and deaths. On the other hand, India is still battling to manage the number of deaths due to tardy immunization and a large population; India may need to rethink its immunization strategy. By using effective and quick vaccination, the United States and the United Kingdom have reduced mortality. However, the situation in India may become more problematic during the following sixty days. These findings, we believe, will assist these countries in efficiently prioritizing, devising, and implementing policies to prevent the pandemic's expected consequences.
There are several limitations to this study as well. We presume that the information obtained is correct; nevertheless, because not all patients are admitted to clinics, and others are asymptomatic, the data may be underreported. No risk factors have been evaluated due to a lack of corresponding data. Despite the fact that BSTS models gave better projections than ARIMA models, the precision of these forecasts may be harmed by the data's inherent uncertainty. However, the study's goal is not to produce 100 percent accurate projections but rather to provide key signals to stakeholders so that they may organize their strategies accordingly.
Author Contributions All authors contributed equally; all authors jointly wrote, reviewed, and edited the manuscript.
Funding agency No funding was received for conducting the study.
Data and Code availability The data link is provided in the manuscript, and code will be provided upon request.

Declaration of conflict of interest All authors declare that they have no conflict of interest.
Ethical approval The authors did not copy this work from any source, and this work does not cause harm to humans or society.
Confinement and electron correlation effects in photoionization of atoms in endohedral anions: Ne@C60^(z−)
Trends in resonances, termed confinement resonances, in the photoionization of atoms A in endohedral fullerene anions A@C60^(z−) are theoretically studied and exemplified by the photoionization of Ne in Ne@C60^(z−). Remarkably, above a particular nl ionization threshold of Ne in neutral Ne@C60 (I_nl^(z=0)), confinement resonances in the corresponding partial photoionization cross sections σ_nl of Ne in any charged Ne@C60^(z−) are not affected by a variation in the charge z of the carbon cage, as a general phenomenon. At lower photon energies, ω < I_nl^(z=0), the corresponding photoionization cross sections of charged Ne@C60^(z−) (i.e., those with z ≠ 0) develop additional, strong, z-dependent resonances, termed Coulomb confinement resonances, as a general occurrence. Furthermore, near the innermost 1s ionization threshold, the 2p photoionization cross section σ_2p of the outermost 2p subshell of the thus confined Ne is found to inherit the confinement resonance structure of the 1s photoionization spectrum, via interchannel coupling. As a result, new confinement resonances emerge in the 2p photoionization cross section of the confined Ne atom at photoelectron energies which exceed the 2p threshold by about a thousand eV, i.e., far above where conventional wisdom said they would exist. Thus, the general possibility for confinement resonances to resurrect in photoionization spectra of encapsulated atoms far above thresholds is revealed, as an interesting novel general phenomenon.
Introduction
Endohedral fullerenes A@C60, where the atom A is encapsulated inside the hollow interior of the carbon cage C60, are of the highest interest and importance to both the basic and applied sciences and technologies, as new modern building blocks of materials and devices with unique properties. Therefore, they have attracted much attention from many investigators in recent years. In particular, the photoionization, as a basic phenomenon in nature, of atoms A encapsulated in fullerenes A@C60 has become a topical research subject, both for theorists for some years now (see the review paper [1] as well as some latest works [2,3,4] on the subject and references therein) and, since only very recently, for experimentalists [5,6]. Among the performed photoionization studies of thus confined atoms, only works [1,7] have provided an initial understanding of how the photoionization spectrum of an atom could be modified by the environment of a negatively charged carbon cage C60^(z−). The understanding was exemplified by trends in the photoionization of the innermost 1s subshell of Ne in Ne@C60^(z−) with various z's. However, how these and other possible trends might show up, as general phenomena, in spectra of intermediate and outer subshells of atoms A in A@C60^(z−) has remained unstudied. The present paper expands the investigation started in [7] to the intermediate 2s and outermost 2p subshells of confined Ne, with the aim of revealing which new aspects of these spectra of endohedral anions are most interesting, as general phenomena.
Brief description of theoretical concepts
Following previous works, see, e.g., [1,2,7] and references therein, the neutral C60 cage is modelled by a short-range attractive spherical potential V_c(r) of inner radius r_0 = 5.8 a.u., depth U_0 = −8.2 eV, and finite thickness Δ = 1.9 a.u.:

$$V_c(r) = \begin{cases} U_0, & r_0 \le r \le r_0 + \Delta, \\ 0, & \text{otherwise}. \end{cases} \qquad (1)$$
A neutral endohedral fullerene A@C60 is formed by placing the atom A at the center of the cage. For small, compact atoms A there is no charge transfer to the cage, so that the confined atom A retains the general structure of the free atom A. Alternatively [1,7], an endohedral anion A@C60^(z−) is modelled by the sum of the potential V_c and the Coulomb potential V_z(r) of the excess negative charge on the cage C60. Assuming that the charge z is uniformly distributed over the entire outer surface of C60,

$$V_z(r) = \begin{cases} z/(r_0 + \Delta), & r \le r_0 + \Delta, \\ z/r, & r > r_0 + \Delta, \end{cases} \qquad (2)$$

in atomic units. Next, the sum of these two potentials is added to the nonrelativistic Hartree-Fock (HF) equations for the free atom. Solutions of these new HF equations, i.e., electronic energies and wavefunctions of the confined A atom, are used in the well-known expressions for nl photoionization amplitudes, angle-differential and/or angle-integrated nl photoionization cross sections, etc., for free atoms; see [8] for the latter. To account for interchannel coupling in the photoionization of a confined atom, the random phase approximation with exchange (RPAE) [8] is utilized. This is because RPAE, which uses the HF approximation as the zero-order approximation, has proven to be a very reliable methodology over the years. Accordingly, for the sake of "theoretical" consistency, HF values of the ionization thresholds of free and confined Ne atoms are used in the present study. In these calculations, interchannel coupling in the atom was accounted for at an intra-shell approximation level, as the first step of the study.

One can see from figures 1 and 2 that, above the 2p threshold I_2p^(z=0) ≈ 23 eV of Ne in neutral Ne@C60^(z=0) (i.e., to the right of the vertical line marked z = 0), all σ_2p's oscillate about the 2p photoionization cross section of free Ne, as do σ_2s above I_2s^(z=0) ≈ 53 eV and σ_1s above I_1s^(z=0) ≈ 892 eV, respectively. The oscillations are due to interference between the outgoing nl photoelectron wave and the waves scattered off the confining potential of C60^(z−). When constructive interference occurs, maxima emerge, i.e., resonances in the σ_nl's, termed confinement resonances [1,2,4,7]. Furthermore, one can see that the confinement resonances in all σ_nl's are nearly z-independent above the related I_nl^(z=0) thresholds (previously [7], the same was noted in the 1s spectrum of Ne in Ne@C60^(z−)). This is because the energy of an outgoing nl photoelectron in any charged Ne@C60^(z−) far exceeds the Coulomb potential barrier of the charged carbon cage at ω ≥ I_nl^(z=0). This is clearly seen in figure 3, where the direct (Hartree) parts of the potentials "seen" by the p and s photoelectrons (due to the 2p → p, s transitions) are displayed. Hence, the presence of the Coulomb potential barrier is inconsequential for the outgoing nl photoelectrons at photon energies ω ≥ I_nl^(z=0). Correspondingly, at ω ≥ I_nl^(z=0), the confinement resonances in any nl photoionization spectrum of the encapsulated atom are chiefly governed by the details of the confining potential well V_c, equation (1), as in neutral Ne@C60. Thus, the resonances are nearly z-independent. We term such confinement resonances conventional confinement resonances. To conclude, the above results reveal, and the given explanation proves, that conventional confinement resonances in the photoionization cross sections of inner, intermediate, and outer subshells of an atom A in A@C60^(z−) are almost z-independent. This is an interesting general phenomenon.
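To make the shape of the combined confining potential concrete, here is a minimal R sketch that evaluates and plots V_c(r) + V_z(r) using the cage parameters quoted above; the piecewise forms follow the reconstructed equations (1) and (2), and the sketch should be read as an illustrative assumption rather than the authors' actual code:

    # Cage parameters from the text (atomic units; U0 converted from eV).
    r0    <- 5.8            # inner radius (a.u.)
    Delta <- 1.9            # shell thickness (a.u.)
    U0    <- -8.2 / 27.211  # well depth, eV -> Hartree
    z     <- 1              # excess negative charge on the cage

    # Square-well cage potential, equation (1).
    Vc <- function(r) ifelse(r >= r0 & r <= r0 + Delta, U0, 0)

    # Coulomb potential of a uniformly charged shell, equation (2):
    # constant inside the outer surface, z/r outside (repulsive
    # for the photoelectron).
    Vz <- function(r) ifelse(r <= r0 + Delta, z / (r0 + Delta), z / r)

    r <- seq(0.1, 20, by = 0.05)
    plot(r, Vc(r) + Vz(r), type = "l",
         xlab = "r (a.u.)", ylab = "V(r) (Hartree)",
         main = "Combined confining potential for Ne@C60^(1-)")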
Another important observation (see figures 1 and 2) is that conventional confinement resonances vanish quite rapidly with increasing photon (or photoelectron) energy. This is in line with the theory of scattering of particles off a potential well/barrier. Indeed, starting at a sufficiently high energy of the outgoing photoelectron, the coefficient of reflection of the latter off a finite potential well/barrier decreases with increasing energy of the electron. As a result, the interference effect between the outgoing photoelectron wave and the wave scattered off the potential well/barrier weakens with increasing energy of the electron, and so do the associated conventional confinement resonances. In what follows, we will term this reasoning "conventional thinking".
Coulomb confinement resonances
Above, trends in the Ne nl photoionization cross sections of Ne@C60^(z−) were considered at photon energies ω beyond the corresponding I_nl^(z=0) thresholds of Ne in neutral Ne@C60. We now turn attention to the lower photon energy (ω < I_nl^(z=0)) parts of figures 1 and 2 (to the left of the line marked z = 0), where we consider the photoionization of charged Ne@C60^(z−) (z ≠ 0). There, an additional resonance in each of σ_2p, σ_2s, and σ_1s of Ne@C60^(z−) is seen to emerge. It owes its existence to the Coulomb potential V_z, equation (2), of the charged carbon cage. The V_z potential brings up a Coulomb potential barrier at the outer surface of C60. This engenders reflection of the low-energy continuum photoelectron wave from the Coulomb barrier, causing additional resonances, one of which is depicted in figures 1 and 2 at the photon energies under discussion. Originally, the emergence of this kind of resonance was noted in the Ne 1s photoionization of Ne@C60^(z−) [7], where it was named a Coulomb confinement resonance, in view of its association with the Coulomb potential barrier of the charged carbon cage. The present paper establishes that the phenomenon emerges in the photoionization of intermediate and outer subshells of the encapsulated atom as well, as a general occurrence. Coulomb confinement resonances appear to be z-dependent, as is clearly exemplified by the σ_2p's depicted in figure 1. This is because, in this instance, the energy of the outgoing photoelectron is near or, generally, below the top of the Coulomb potential barrier (see, as illustration, the energy line at 0.6 eV in figure 3). This makes the photoionization process sensitive to the details of the latter and, hence, to the charge state of the carbon cage as well.
To conclude, the established co-existence of z-dependent Coulomb and z-independent conventional confinement resonances in photoionization spectra of endohedral anions is an exclusive feature of these systems.
Correlation confinement resonances: the resurrection of confinement resonances far above thresholds
In the above, the discussion was related to RPAE-calculated data for the photoionization cross sections σ_nl(ω) of the encapsulated Ne atom which were obtained at an intra-shell interchannel coupling approximation level. However [1,2,9], the effect of inter-shell interchannel coupling in the encapsulated atom may result in the emergence of new confinement resonances, termed correlation confinement resonances. These resonances were previously interpreted as resonances which are induced in an outer-shell photoionization spectrum of the encapsulated atom by conventional confinement resonances in inner-shell photoionization transitions in the atom, via interchannel coupling. The earlier finding was illustrated by RPAE [1,9] and recently seconded by relativistic RRPA [2] calculated data for the Xe 5s photoionization of Xe@C60, where interchannel coupling between the 5s and 4d transitions was accounted for. However, the same effect may occur via interchannel coupling with Coulomb confinement resonances as well. It may even be bigger in this case, since Coulomb confinement resonances dominate over conventional confinement resonances; see figure 2 for the most illustrative supporting evidence. Furthermore, the effect of interchannel coupling may show up strongly in an outer-shell photoionization spectrum at photoelectron energies which are thousands of eV above the threshold, when interchannel coupling involves very deep inner-shell transitions. The effect is going to be strong because, at such big differences in the ionization thresholds of the inner and outer subshells, photoionization transitions from the former will be strong whereas those from the latter will be weak, at photon energies above the inner-shell threshold. As a result, both inner-shell Coulomb and conventional confinement resonances may be effectively "funneled" across a thousands-of-eV distance to the outer-shell spectrum. However, to what extent the "funneled" confinement resonances may indeed perturb the outer-shell spectrum is not clear. To clarify this point, we performed RPAE calculations of both the 2p photoionization cross section σ_2p and the dipole photoelectron angular-asymmetry parameter β_2p for Ne in Ne@C60^(1−), above the Ne 1s ionization threshold. This time, inter-shell interchannel coupling between the 1s, 2s, and 2p transitions was included in the calculations. The RPAE-calculated data thus obtained for σ_2p and β_2p for the encaged Ne are depicted in figure 4 along with the data for free Ne.
One can see that both σ_2p and β_2p(ω) for the encapsulated Ne atom possess a strong sharp resonance at about 890 eV, which is followed by a lower but broader resonance at about 905 eV. As a result, these "encapsulated" σ_2p and β_2p(ω) differ considerably from the free Ne σ_2p and β_2p(ω), far above threshold. Thus, confinement matters in this case, so that the two prominent resonances in the "encapsulated" σ_2p and β_2p(ω) are confinement resonances.
The striking novelty of the above finding is that the resonances emerge far above where conventional thinking said they would exist. Indeed, when considering confinement resonances, one normally thinks in terms of conventional confinement resonances, which occur due to interference between the directly outgoing photoelectron wave and the wave reflected off the confining potential. However, in line with "conventional thinking", as was discussed above, conventional confinement resonances fade away relatively rapidly with increasing energy and do vanish far above threshold. The discovered emergence, or better, resurrection of confinement resonances in the 2p photoionization spectrum of Ne@C60^(z−) far above threshold implies that, in contrast to "conventional thinking", a few-eV-deep/high confining potential well/barrier may, once again, be felt by an electron far above the potential barrier. The effect may as well be called the reemerging confinement effect for a high-energy scattering electron, as a general phenomenon. This general phenomenon may result in the emergence of far-above-threshold confinement resonances in the nl photoionization spectrum of a confined atom, as in the above-discussed particular example of the Ne 2p photoionization of Ne@C60^(1−). The latter effect may rightly be termed the resurrection-of-confinement-resonances effect. Both the reemerging confinement and the resurrection-of-confinement-resonances effects owe their existence to inter-shell interchannel coupling in the encapsulated multielectron atom. Indeed, a trial calculation for Ne@C60^(1−) showed that removal of the Ne 1s transition (and, thus, of the Coulomb and conventional confinement resonances associated with it) from the RPAE calculations of the Ne 2p photoionization leaves no traces of the two resonances in the "encapsulated" σ_2p and β_2p(ω). As a result, the 2p photoionization spectra of the confined and free Ne atoms become virtually identical far above threshold, as they had previously been thought to remain nearly identical at all high energies, on the basis of "conventional thinking". Hence, the resurrected confinement resonances in the "encapsulated" σ_2p and β_2p(ω), far above threshold, are due to interchannel coupling with the conventional and Coulomb confinement resonances in the 1s spectrum. The latter are "funneled" to the 2p spectrum via interchannel coupling. This appears to perturb the outer-shell spectrum of the confined atom dramatically.
Clearly, there is nothing particularly special about the Ne@C60^(z−) system. Therefore, both the reemerging confinement effect and the resurrection-of-confinement-resonances effect are expected to appear in, and be qualitatively similar for, the spectra of other endohedrally confined atoms as well. In other words, the two discovered effects step in as novel general features of the spectra of endohedrally confined atoms A@C60^(z−), the existence and significance of which has been convincingly proven in the performed study.
In conclusion, neither Coulomb nor conventional confinement resonances, not to mention the just-discovered resurrected confinement resonances far above threshold, have been experimentally observed yet, for technical reasons. We hope that the data presented herein will prompt experimentalists to look into the matter, thereby promoting such developments.
Sustaining Regional Advantages in Manufacturing: Skill Accumulation of Rural–urban Migrant Workers in the Coastal Area of China
Extant research pays little attention to unorganized migrant workers' skill accumulation/upgrading from the perspective of the labor supply. This paper takes China as an example to explore the factors influencing the skill accumulation of rural–urban migrant workers (RUMWs), with the purpose of discovering how to sustain or reshape regional competitive advantages by improving RUMWs' skill accumulation. Structured questionnaire surveys were adopted for data collection in Suzhou City, Jiangsu Province and Taizhou City, Zhejiang Province, located in the Yangtze River Delta in eastern China. In total, 700 questionnaires were issued and 491 effective questionnaires were recovered. The paper takes the perspective of individual laborers, with special regard to the effects of localization on laborers' skill accumulation within the context of globalization. It adopts a broad viewpoint including intra-firm skill-biased strategy (as a response to intense competition), inter-firm relationships, and the accessibility of local non-firm organizations. The findings indicate that firms' skill preference, which impacts employees' skills and innovation ability and stimulates them to learn with initiative, has a significant influence on RUMWs' skill accumulation. In terms of collective efficiency based on the co-competitive relationship between local firms, the more intensive the interactions are, the more opportunities RUMWs are afforded for skill accumulation. The accessibility of local institutions and favorable policies also benefits RUMWs' skill accumulation. In addition, the place itself, as a synthesized space of firms' internal labor-management relations and inter-organizational relations, also exerts an influence on, and causes regional differences in, RUMWs' skill accumulation.
Introduction
Human capital growth or skill accumulation/upgrading of labor (for ordinary laborers, skills and competencies are crucial components of human capital, and they can generate productivity and in turn drive economic growth [1]; therefore, in this paper, the authors regard workers' human capital as vocational skills) influences industrial transformation, upgrading, and reconstruction, and sustains the competitive advantage of less developed countries (LDCs), particularly in the era of globalization [2]. The extant literature on skill upgrading is mainly within the field of international economics, which is concerned with the relationship between trade and employment. Based on the view of comparative advantage, the Heckscher-Ohlin theorem and Stolper-Samuelson theorem (HOSS hereafter) predict that trade with, or foreign direct investment (FDI) from, developed countries would make developing countries specialize in labor-intensive activities and increase the demand for unskilled labor [3,4], meaning that LDCs benefit little in terms of skill upgrading from international trade. Contrary to this prediction, another line of literature on the interlinks among globalization (export, import, and FDI), technological transfer, innovation, employment, and skill upgrading argues that trade and FDI encourage firms in LDCs to engage in product innovation and hire workers with higher levels of skills, which sustains competitive advantages and leads to skill upgrading [5][6][7][8]. Furthermore, it is considered that, compared to accepting FDI, export trade cannot significantly promote skill upgrading for firms [9]. This in turn implies that different means of integration into the global economy may cause differences in workers' skill upgrading in different countries and regions. However, there is currently still insufficient discussion in this regard.
In addition, much of the relevant research focuses on the relationships between globalization and skill upgrading, but pays little attention to national or local absorptive capacities, which are related to the institutional setting, labor skills, technological capabilities, and competitiveness of domestic firms, and which affect the skill upgrading of a specific country or place [3]. Actually, the experiences of some LDCs have indicated that integration into global production networks could not help them realize a shift from lower-value-added activities to higher-value-added activities (let alone workers' skill upgrading), and these countries have turned to building up their national value chains [10]. Therefore, the national or local determinants influencing laborers' skill upgrading within the context of globalization need to be examined, which is an important reason for the authors to carry out this research. From the perspective of labor demand instead of the supply side, the findings show that the proportion of workers with higher skills significantly increases with the opening up of a country; however, little evidence indicates whether this improvement is a consequence of individual workers' skill accumulation or of the replacement of low-skill workers with high-skill workers. Furthermore, the process of globalization is believed to benefit skillful workers as a whole, while having an opposing effect on unskilled workers [11]. Therefore, different groups of laborers deserve further investigation in terms of their skill accumulation. The discussion of unorganized labor, especially migrant workers outside the mainstream labor market, is also currently insufficient.
Skill accumulation/upgrading of rural–urban migrant workers (RUMWs, labor with a census register in a rural area which migrates to cities for employment) in China is a topic worthy of discussion. In coastal areas in particular, foreign trade and FDI have developed rapidly, and the local industrialization process has been accelerating since China's reforms and opening up of the late 1970s. Consequently, abundant RUMWs have swarmed into such areas and become the main force of urban industries [12]. According to the national rural–urban migrant monitoring and investigation data of 2013, the number of RUMWs in secondary and tertiary industries already exceeded half of the total employed population [13]. Due to the restrictions of the urban–rural dual structure (due to the household registration system (HRS) in China, farmers do not have access to social welfare and full citizenship in cities, even if they have worked there for many years; the HRS actually segregates the rural and urban populations in geographical, social, economic, and political terms, and leads to two different social classes: the urban class, whose members work in the prioritized and protected industrial sector and who have access to social welfare and full citizenship, and the farmers, who are tied to the land in the agricultural sector and fend for themselves [14]), and to their low educational backgrounds and low labor skills, it is believed that most RUMWs have engaged in physical work with low technology and low wage levels for a long time, and therefore live at the bottom social level of cities [15]. With the transfer of labor-intensive industries to other countries and the economic transition from "Made in China" to "Created in China" in recent years, the coastal areas have begun to reconstruct their competitive advantage by relying more on high-quality human capital than on high quantities of labor. Compared with the increasing demand for high-level skilled workers, the competition for low-skill posts has intensified, and many RUMWs are facing an unemployment crisis. As a result, a structural labor shortage has emerged [16]. Under such circumstances, improving RUMWs' skills could not only sustain the industrial upgrading of developed regions but also help these workers adapt to the economic transformation and ensure the sustainability of their urban lives. The Chinese government has already realized the importance of improving RUMWs' vocational skills, and has adopted successive policy initiatives such as the "Spring Breeze Action" and the "Sunshine Project" since 2004, notably including this issue in the National New Urbanization Planning (2014-2020). Furthermore, the Rural-Urban Migrant Work Leading Group of the State Council has deployed the implementation of the "Rural-Urban Migrant Vocational Skill Improvement Plan" for the annual training of more than 20 million people nationwide since 2014 [17]. However, most of the current research focuses on the social integration of RUMWs into cities, and fails to discuss their skill upgrading or accumulation [15]. A small amount of research mainly focuses on institutional restraints from a nationwide standpoint, such as the urban–rural dual barrier and the household registration system [14,18], but lacks a survey that considers both the factors concerning firms and regions and the wider context of globalization. Therefore, this paper integrates the international economics literature with industrial clusters research (rather than focusing solely on
the perspective of globalization as international economics literature does), selects the Chinese eastern coastal area, and discusses the main factors influencing RUMWs' skill accumulation from the perspective of individual laborers on three levels: intra-firm strategy as a response to intense competition, local inter-firm relations, and the accessibility of local non-firm institutions.
Labor, Migration, and Human Capital Growth/Skill Upgrading
It is considered that human capital growth can be achieved through investments in medical treatment and health care, on-the-job training, regular education, and non-business adult education, and that such investments can yield beneficial returns, such as an increase in personal income [19-21]. Laborers' individual attributes, including age, gender, educational background, work experience, and career goals, influence the intention and capability for personal investment in human capital, and thus further influence human capital growth [22-24]. Migration is also regarded as improving the level of human capital through two promoting effects: forward stimulation before migration and the subsequent acquisition of relearning opportunities after migration [25]. With respect to the former effect, if opportunities for employment transfer to foreign countries increase, workers in underdeveloped regions will be stimulated to improve their human capital on their own initiative so as to raise their chances of employment transfer [26]. In terms of the latter effect, farmers who transfer from agriculture in rural areas to non-agricultural industry in cities can increase their human capital through "learning by doing", on-the-job training [27], and vocational skills training courses provided by local governments or NGOs [28,29]. Additionally, employment experience in cities integrates, to a certain degree, various kinds of capabilities and competencies, which also improves their human capital. As a result, the urban-rural labor flow is regarded as an important source of economic development for urban areas [30], and returning RUMWs can be considered an engine driving the development of their hometowns through the financial and human capital accumulated during migration [31].
However, another line of literature in China argues that the flow of RUMWs into cities does little to help the floating population with skill accumulation. Discrimination against RUMWs in wages, employment, and welfare exists in the urban labor market as a result of the urban-rural dual system (the household registration, education, employment, insurance, and labor systems formed on the basis of urban-rural separate administration). The marginalization of employment and living, together with frequent job-hopping, partly decomposes RUMWs' human capital after migration to cities [18]. Meanwhile, the new generation of RUMWs have more opportunities to choose jobs, can better prepare vocational skills before migration, and thus adapt to city life better than older generations; however, frequent career changes and employment flows make it difficult for them to invest sustainably in human capital and accumulate vocational skills [32]. Furthermore, there is a dispute regarding the influence of destination cities on RUMWs' human capital growth and skill accumulation. One line of literature believes that RUMWs in big cities have more opportunities for human capital growth and skill accumulation than those in small cities because of the favorable employment environments and better service facilities in big cities [33]. Another line holds that it is difficult for RUMWs to enjoy such treatment and to promote skill accumulation in big cities, due to restrictions on their leisure time and financial capital and to the low accessibility of urban public resources [34].
Trade, FDI, and Skill Upgrading
International economics discusses skill upgrading within the context of trade liberalization [35,36], openness [37], or globalization [4,38] in developing countries. According to the Heckscher-Ohlin model, both trade and FDI take advantage of the abundance of labor in developing countries and make these countries specialize in labor-intensive activities; as a result, their domestic employment increases. The Stolper-Samuelson theorem makes the further prediction that demand for domestic unskilled labor would increase relative to skilled labor in LDCs, and that wage inequality between unskilled and skilled labor would decrease. From the perspective of this HOSS prediction, increasing trade lowers the demand for skilled labor in LDCs, implying that openness provides little benefit to, or even hinders, skill upgrading in these countries.
However, the recent literature does not fully support the HOSS prediction. Export growth may raise employment, but imports may displace previously protected domestic industries and firms, thereby decreasing the demand for labor [3]. As Lall points out, the HOSS theorem is based on the endowments of two factors within perfect markets, but ignores the imperfections that determine industrial efficiency and competitiveness, such as technological leads and lags, scale and agglomeration economies, product differentiation, and so forth [38]. Competitive capabilities vary from country to country in the developing world, and the impact of increasing trade on employment and on wage equality between unskilled and skilled labor is complex and not necessarily positive [3,35,38]. For example, since the enactment of trade reform in Mexico, employment has not increased, but the skilled-unskilled wage gap has increased dramatically, mainly because Mexico has a relative scarcity of unskilled labor and an abundance of skilled labor compared with China and other countries [35]. In terms of skill upgrading, it has been shown that trade and technological change are complementary rather than alternative mechanisms [36]. International openness may increase the trade of capital goods, such as skill-intensive machinery embodying technological innovations, and facilitate technology diffusion from industrialized countries to LDCs. As Robbins's (1996) skill-enhancing trade hypothesis argues [39], imports of skill-biased machinery, probably outdated in developed countries but relatively advanced compared with the existing machinery in LDCs, can increase the demand for skilled labor in LDCs. International activities are also considered to induce and foster skill-biased technological change [36]. Owing to openness, domestic companies in LDCs have been exposed to increasingly serious competition, forcing them to adopt modern skill-intensive technologies to sustain the competitiveness of their products [40]. Moreover, international activities concerning imports, exports, and FDI establish cross-national relationships with foreign suppliers, customers with sophisticated tastes, and multinational companies. These relationships help domestic firms gain access to tacit knowledge, especially knowledge that is not easily transferred via market transactions. The interplay between trade openness and technology adoption plays a critical role in shifting labor demand towards more skilled workers, pushing forward skill upgrading in developing countries.
It is further argued that the effects of international activities on the demand for skilled labor vary from country to country [4]. In a comparative case study of Brazil and China, Fajnzylber and Fernandes obtained opposite results for the two countries: Brazil increased its demand for skilled labor, while China increased its employment of unskilled labor. The reasons go beyond the endowments of fundamental factors and lie in national absorptive capacities, which are related to the institutional setting, labor skills, technological capabilities, and the competitiveness of domestic firms, all of which condition the effect of globalization on skill upgrading in a specific country [3]. The prosperity of export-oriented and labor-intensive industries provided RUMWs with many low-skill, low-income job opportunities, but few opportunities for skill accumulation [15]. It is reported that RUMWs have been exposed to unpleasant or even unhealthy working conditions with overtime work and little leisure, and have little time to attend continuing education or cannot afford professional training [41]. With skill accumulation lacking, a structural labor shortage has emerged alongside the increasing demand for skilled workers, and, under these circumstances, labor-intensive industries have been transferred to other neighboring LDCs [16].
Innovation, Openness, and Skill Upgrading
Innovation is believed to have a strong skill bias [42]; that is to say, innovation and skill intensity usually complement one another. Empirical studies confirm that firms conducting R&D (usually taken as an important type of innovative activity) have a higher demand for skilled labor [4], and R&D expenditures are positively and significantly related to skill upgrading in some middle-income countries [36].
It is argued that innovation in developing countries is inherently connected with trade, FDI, and the consequent international transfer of technology [42]. Salomon and Shaver (2005) proposed a "learning by exporting" hypothesis (LEH) and argued that access to foreign markets provides exporting firms with feedback from their foreign partners, which is useful for their internal innovation [43]. Wagner put forward a self-selection hypothesis (SSH), which argues that export firms have superior performance characteristics compared with non-export firms [44]. In fact, firm characteristics such as size and age are important determinants of a firm's international activities, and the latter in turn affect employment and labor type, as analyzed above. Moreover, the education level of a firm's management is an important determinant of skilled labor demand [4]. Further evidence shows that LEH holds for groups of young firms, while SSH has proven relevant for groups of mature firms [45]. This means that domestic firms benefit from foreign firms unequally. It has been argued that spillovers from foreign firms are determined by the technological profiles, embeddedness, and linkage creation of both foreign and domestic firms [46]. In addition, firms' introduction of new product lines or adoption of foreign technology is not necessarily related to skill upgrading, because other attributes, such as financial constraints and managerial ability, affect a firm's ability to attract more skilled labor [47].
Openness and lower trade barriers increase foreign competition and force domestic firms to pursue technological and product innovation in order to decrease production costs or reduce the substitutability of their products [46,48]. Consequently, the demand for highly skilled labor is expected to increase. However, the extant literature pays more attention to the effects of firm innovation on skill demand than to its effect on promoting laborers' skill accumulation. With the acceleration of technological progress and the intensification of international competition, firms in LDCs are making a strategic transformation from a low-cost strategy that relies on large-scale manufacturing technologies to a differentiation-oriented strategy that depends on technological innovation and highly efficient flexible manufacturing systems [49]. In Pine's (1993) view, the low-cost strategy emphasizes operational efficiency, and workers are only a part of the production process. In this production system, workers are not required to be innovative; they are trained merely to perform standardized, simple labor, and it is difficult for them to obtain opportunities for skill accumulation or upgrading. By contrast, the differentiation strategy emphasizes process efficiency, pays attention to improving workers' learning ability and creativity, and encourages them to continually enhance the production process and overall efficiency [50]. For this reason, firms not only provide more opportunities for education and skill training but also advocate "learning by doing" and "team learning", which promotes workers' skill accumulation to a certain extent [51]. In particular, "learning by doing" has proven an effective approach for Chinese RUMWs to accumulate skills and experience, thus easing the financial constraints on human capital investment [8]. However, whereas the relevant literature concerns itself greatly with the relations between skill upgrading, technological innovation, and firm characteristics, as mentioned above, it fails to pay attention to the effects of such intra-firm skill-biased strategies on RUMWs' skill accumulation.
The authors define firms' behaviors of emphasizing and utilizing workers' skills and creativity as skill preference, and assume that such preference promotes RUMWs' skill accumulation by providing workers with training and other incentive measures. Based on the analysis above, the skill preference of firms in efficiency improvement is measured in three aspects, namely, the provision of on-the-job training, the encouragement of "learning by doing", and the encouragement of "team learning". Thus, the following hypothesis is put forward: Hypothesis 1 (H1). The skill preference of a firm is positively correlated with RUMWs' skill accumulation.
Localization and Skill Upgrading
As mentioned above, the effects of international trade/globalization on skill upgrading depend to some extent on the country or region itself, including national absorptive capability [3], domestic capabilities [52], or local assets and strategies [38]. The international trade literature contains little discussion of the effects of localization on skill upgrading, but industrial clusters research considers clustering conditions, based on the two important factors of intensive inter-firm/personal relationships and institutional thickness, to be the source of regional competitiveness or advantage [53], and holds that it is important for individuals or firms to gain access to external knowledge for innovation or skill accumulation through interactive learning [54].
Inter-Firm Relationships and RUMWs' Skill Accumulation
Inter-firm relationships comprise informal interpersonal relationships and formal cooperative, competitive, and controlling partnerships among local firms, such as input-output linkages, technical connections, and bargaining power between firms. Cooperation among competitors in the provision of industry-specific skills is beneficial for establishing training standards and also lowers the costs of skill investment and information sharing [55]. Such relationships can improve a firm's innovation capability [56-59] and are the source of "collective efficiency" [60], i.e., the competitive advantage generated by the local external economy and joint action. Firstly, firms of the same trade have similar demands for workers' skills. This enables workers to invest in skills stably and continuously and leads to the formation of a labor force with professional skills. Human capital is priced in the labor market, which rewards skills [21]. Industrial access and promotion space influence RUMWs' opportunities for vocational development, and these local market conditions affect the return on RUMWs' human capital investment [28], thereby influencing human capital growth. Secondly, the geographical agglomeration of firms of the same trade also makes it convenient for workers to acquire technologies, knowledge, and information through channels such as personnel flows and interpersonal relations, thus increasing their opportunities to learn skills. Thirdly, inter-firm cooperation is accompanied by personnel flows and information exchange. The effect of knowledge transfer is influenced by the social relationship between dispatched technical personnel and the employees of receiving firms, and such exchanges increase the opportunities for the employees of cooperating firms to learn, communicate, and receive training [61,62]. Furthermore, in order to control product quality, local leading firms usually select and assign technical personnel to supervise the quality of contracted firms and to provide necessary technical training and guidance. The strictness of leading firms' quality specifications urges contracted firms to enhance technological innovation and quality management, and finally promotes the extension of product supply cooperation into technical R&D cooperation between upstream suppliers and downstream customers. One-sided technical output is thus transformed into two-way technical communication, promoting the growth of human capital for the employees of both firms [63]. The first two points relate to the external economy, while the latter two involve inter-firm joint action.
In this paper, the authors consider enterprise cooperation and control on a regional rather than a global scale. Such cooperation and control reflect the collective efficiency of local production organizations, and the question is whether this relationship has a significant influence on RUMWs' skill upgrading. Referring to the relevant research, certain aspects, such as enterprise cooperation, enterprise competition, and the local labor market, are selected to measure inter-firm collective efficiency (factor selection is shown in Section 4), and the following hypothesis is put forward: Hypothesis 2 (H2). Collective efficiency among local firms is positively correlated with RUMWs' skill accumulation.
Institutional Thickness, Non-Firm Relations, and Skill Upgrading of RUMWs
Institutional thickness describes how localized institutional conditions (including a strong presence of local institutions such as firms, government agencies, training centers, research institutions, business associations, and communities) and the strength of shared rules, conventions, and knowledge can promote regional economic success [53]. As investors in laborers' human capital [1], these intermediary institutions limit the detrimental effects of market failures [55]. High levels of close interaction and coalition amongst many diverse institutions [64,65] produce a high innovative capacity [66] and can promote knowledge diffusion between firms and knowledge-intensive institutions [65]. RUMWs' relationship with these organizations may be direct individual-organization contact, or an indirect, inter-organizational connection through the firms in which the RUMWs work.
Government
As investors in human capital, local governments improve RUMWs' skill accumulation through human resources development projects, institutional environment design, and public services and facilities [28]. With the increasingly serious scarcity of skilled labor and the central government's great concern about RUMWs' human capital, local governments are beginning to pay more attention to improving the living and employment circumstances of the floating population.
NGOs
NGOs targeting RUMWs have emerged in China since the late 1990s. They mainly help RUMWs receive legal assistance, maintain their legitimate rights and interests, and collaborate with local governments, firms, training organizations, and volunteers to raise funds and provide training services for RUMWs [29]. However, due to disputes over the legality of their identities, some NGOs lack adequate resources and encounter difficulties in obtaining government support. Their effects on RUMWs' skill accumulation are not yet significant.
Vocational Training Organizations (VTOs)
Vocational education and training is considered an alternative to regular academic education and meets RUMWs' career development needs in China [30]. These organizations provide professional knowledge and skill training to improve RUMWs' employability and enhance their willingness for self-learning [67]. However, doubts have arisen from the fact that a large number of VTOs provide low-level skill training that does not meet the demands of firms, and that the training fees are too high for RUMWs.
Labor Unions
Labor unions play an important role in improving workers' welfare, policy consulting, labor legislation, and the supervision and coordination of labor-management relations in China. With regard to skill accumulation, labor unions provide workers with employment information and vocational training by establishing service agencies such as employment and training agencies [68]. In addition, through communication with firms and training organizations, labor unions arrange well-targeted training programs to improve employees' vocational skills. However, the extant literature rarely mentions whether RUMWs can be involved in local labor unions and thereby obtain opportunities for skill accumulation.
Rural-Urban Migrant Communities
As informal organizations, rural-urban migrant communities contain intensive personal relationships and are indispensable to RUMWs' daily life and career development. These communities not only solve accommodation problems but also provide employment information, offer job opportunities, safeguard RUMWs' economic interests, and help resolve labor-capital disputes. Moreover, social relationships based on similar career experiences or fellow townsman associations provide RUMWs with platforms to share sensitive information and advice, learn skills, and enhance their ability to negotiate with firms for higher income and improved working and living conditions [59,69]. However, due to a lack of elites, such communities have a limited effect on the improvement of RUMWs' human capital.
In summary, only when local non-firm organizations can provide skill-related services will RUMWs obtain opportunities that promote skill accumulation. The authors borrow the concept of "accessibility" to represent the possibility or convenience of establishing contact with non-firm organizations and thereby acquiring relevant services (including regulations and policies formulated by local governments), and put forward the following hypothesis: Hypothesis 3 (H3). Accessibility of local non-firm institutions is positively correlated with RUMWs' skill accumulation.
Research Areas
This paper selects Suzhou City, Jiangsu Province and Taizhou City, Zhejiang Province, located in the Yangtze River Delta in eastern China, as research areas (see Figure 1) for the following reasons. Firstly, the Yangtze River Delta is one of the manufacturing agglomerations of China, and two rural industrialization models, the "South Jiangsu Model" and the "Wenzhou Model", emerged in the late 1970s when China implemented a planned economy. The two models represent different means of rural industrialization in China at that time. The "South Jiangsu Model" originated in the southern area of Jiangsu Province in the 1960s, including Suzhou, Wuxi, Changzhou, Yixing, Zhenjiang, and Nanjing: local rural communes or production teams built up firms with collective ownership, promoted the development of the local economy, and sped up the process of industrialization in rural areas. The "Wenzhou Model" emerged at the same time, but relied on family-owned factories and used local rural markets to invigorate circulation channels and promote local industrialization [70]. Suzhou and Taizhou are typical regions of these two respective models. Secondly, in recent years, these two models have been undergoing dramatic changes. South Jiangsu has developed into a destination for transferred international industries and has become home to abundant foreign-funded firms; Suzhou Industrial Park (SIP) is viewed as a typical example of the "New South Jiangsu Model" [71]. The Wenzhou Model has quickly spread to its neighboring regions (especially adjacent Taizhou) and entered the era of the "Wenzhou-Taizhou Model". Based on its geographical adjacency to Wenzhou (see Figure 1), Yuhuan County of Taizhou benefited from the industrial outflow of Wenzhou at a relatively early stage, and is a representative region of the "Wenzhou-Taizhou Model". A group of family-owned firms has endeavored to transform into modern corporations and blend into global production networks, mainly through foreign trade; in general, however, they rely on the international market less than the firms in Suzhou. The permanent resident population of Yuhuan County is equivalent to that of SIP. Its gross regional production and industrial added value are both near 1/5 of those of SIP, but its economic indexes concerning foreign trade and foreign investment are far lower than those of SIP (see Table 1). Thirdly, both research areas are conglomeration places of RUMWs, who have sustained local development and reshaped local competitive advantage through skill upgrading. Data from the sixth demographic census indicate that the floating population of SIP approached 50% of the permanent population in 2010, while that of Yuhuan reached 38%.
Questionnaire Survey
The authors adopted a structured questionnaire survey for first-hand data collection. The questionnaire covers RUMWs' basic information, work changes before and after migration, vocational skill accumulation, the nature of the firm they work for, its competition and cooperation with other firms, employee training and incentives, local policies, cultural facilities, and skill training services from relevant organizations. The survey was conducted successively in SIP and Yuhuan County from July to September 2015. As RUMWs need to go through 3-6 months of apprenticeship or internship after entering a firm, this survey selected RUMWs who had worked in SIP or Yuhuan for at least half a year. It was difficult to obtain permission to conduct the survey inside firms in both research areas, so the authors learned from local residents the conglomeration places where RUMWs live, dine, seek entertainment, and go shopping, and then launched the survey there. To avoid concentrating the questionnaires in a handful of firms, only one RUMW was selected each time among the RUMWs with the same type of work or position in the same firm.
The authors issued 700 questionnaires and recovered 668 in the two areas in total; 491 of those 668 were valid, a validity rate of 73.5%. The invalid questionnaires had blank responses or came from respondents who had worked in SIP or Yuhuan for less than half a year. In SIP, the authors issued 400 questionnaires and recovered 388, a recovery rate of 97%; 235 were valid, a validity rate of 60.56%. In Yuhuan, the authors issued 300 questionnaires and recovered 280, a recovery rate of 93.33%; 256 were valid, a validity rate of 91.43%. In SIP, many respondents had worked for less than half a year, while in Yuhuan only a few respondents failed to meet the working-time requirement, which explains the difference in validity rates between the two areas.
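The recovery and validity rates quoted above follow directly from the questionnaire counts; the short Python check below recomputes them from the figures in the text (no other data are assumed):

```python
# Recomputing the questionnaire recovery and validity rates quoted above.
issued    = {"SIP": 400, "Yuhuan": 300}
recovered = {"SIP": 388, "Yuhuan": 280}
valid     = {"SIP": 235, "Yuhuan": 256}

for area in issued:
    recovery = recovered[area] / issued[area]
    validity = valid[area] / recovered[area]
    print(f"{area}: recovery = {recovery:.2%}, validity = {validity:.2%}")

# Overall: 491 valid out of 668 recovered, i.e., about 73.5%.
print(f"overall validity = {sum(valid.values()) / sum(recovered.values()):.2%}")
```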
Characteristics of Valid Samples
The basic information from the valid samples is shown in Table 2. In summary, these samples have the following general characteristics:
Demographic Characteristics
Among the total samples, the ratio of male to female respondents is 7:3. Compared with Yuhuan, SIP had more male respondents. According to the survey, SIP focuses on electronic product processing and manufacturing and on machinery manufacturing, which require more male workers. New-generation RUMWs aged below 35 make up approximately 88% of the total samples; compared with Yuhuan, the age structure of RUMWs in Suzhou is younger.
The number of unmarried migrants in the total samples is higher than that of married migrants. The ratio of married to unmarried migrants reaches 6:4 in Yuhuan, while in SIP it is nearly 3:7.
The surveyed RUMWs mainly have a secondary-school educational background, making up 77.2% of the total. Those with a junior high school background account for about one third of the total, while those with a senior high school, professional high school, or technical secondary school background exceed 40% of the total survey respondents. Compared with SIP, the surveyed RUMWs in Yuhuan have relatively low educational backgrounds; those with a junior high school education or below make up 55.47%, which is 38% higher than that in SIP. According to the survey, the firms in SIP are mostly foreign-funded and have relatively high requirements for the initial education level of their employees, whereas most privately-owned firms or family-owned workshops in Yuhuan have relatively low requirements.
Conditions of Occupations and Skills before and after Migrant Work
More than one third of the surveyed RUMWs were in-school students before migration, 27.3% were engaged in agriculture, and 23.8% previously worked in a firm. In terms of length of migration, 26.68% had worked in the destination place for 1-3 years, 21.59% for 3-5 years, and 20% for 5-10 years. Compared with Yuhuan, the surveyed RUMWs' length of migration in Suzhou is generally shorter: RUMWs with a length of service above five years make up 19.57% in SIP, 9.55% lower than that in Yuhuan. About 80% of the surveyed RUMWs were ordinary workers and about 10% were group leaders. More than half of the surveyed RUMWs had no vocational skill qualification certificate; with regard to this index, the situation in SIP is much better than that in Yuhuan. Note: "Junior college and above" does not include RUMWs with previous undergraduate and postgraduate education.
Characteristics of the Firms for Which the RUMWs Surveyed Work
In total, 58.25% of the surveyed RUMWs worked in medium-sized firms with 101-300 employees (according to the national standard of firm scale), and 26.88% worked in small firms with 21-100 employees. Almost half of the firms the surveyed RUMWs worked for in Yuhuan were small-scale firms with no more than 100 employees, while nearly 70% of the firms in SIP were large or medium-sized. Furthermore, most of the firms the surveyed RUMWs worked for in SIP are foreign-funded or joint ventures, while Yuhuan is dominated by private firms and is undergoing the transformation of traditional industries.
Selection and Measurement of Variables
The International Labour Organization categorizes skills into four aspects, namely, basic skills, core work skills, technical or vocational ability, and entrepreneurial and operation management capacity. Basic skills refer to basic language and computing abilities, as well as the application of such capacities in specific environments; core work skills refer to the general capacities that increase the possibility of employment in the labor market; and technical and vocational ability means the ability enabling individuals to complete specific tasks, such as carpentry, basket fabrication, metal work, and forging [72]. This research defines RUMWs' skills as "vocational ability" and uses "skill accumulation" as the dependent variable, which is specifically interpreted in the questionnaire as the "improvement of vocational skills after migration to the current place". The respondents could make a judgment based on their own condition, including the acquisition of a vocational qualification certificate, promotion of working post grade, and income increase due to skill accumulation. The Likert 5-score method is adopted for assignment (detailed in Table 3 below).
Three independent factors, namely, "firm attributes and skill preference", "collective efficiency", and "accessibility of non-firm institutions", containing 15 variables in total, are selected. Individual attributes are used as control variables. Meanwhile, a "regional factor" is introduced as a control variable to examine whether the two research areas differ in RUMWs' skill accumulation. The name, selection basis, meaning, and measurement method of each variable are shown in Table 3.
Model Selection and Results
Based on the data of the 491 valid samples, the authors used ordered logistic regression models, as the dependent variable "skill accumulation" has five ordered categories [73], to estimate the impact of the three independent factors on RUMWs' skill accumulation, with the "regional factor" and "individual factors" used as control variables in the three models. A correlation matrix of the independent and control variables is shown in Table 4. There is no multicollinearity among the variables, as the variance inflation factor (VIF) scores are under 10, so the regression models are not distorted by this problem [73].
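The estimation strategy described here can be reproduced with standard statistical software. The Python sketch below is offered as an illustration rather than the authors' actual code: the file name and column names (rumw_survey.csv, skill_accumulation, the predictor subset) are hypothetical stand-ins for the survey data.

```python
# Minimal sketch of the estimation strategy: an ordered logit for the
# five-category "skill accumulation" score plus a VIF multicollinearity screen.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("rumw_survey.csv")  # hypothetical survey file
predictors = ["INFC4", "INFC5", "ITFC1", "ITFC2", "NFR4", "NFR5"]  # illustrative subset

# VIF screen: scores under 10 suggest multicollinearity is not distorting the model.
X = df[predictors].astype(float)
for i, name in enumerate(predictors):
    print(name, variance_inflation_factor(X.values, i))

# Ordered logistic regression on the 1-5 Likert outcome.
model = OrderedModel(df["skill_accumulation"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```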
Models I, II, and III are set up to discuss the relationships between RUMWs' skill accumulation and the three kinds of factors, namely "firm attributes and skill preference", "collective efficiency", and "accessibility of non-firm institutions", respectively. All variables are involved in Model IV to discuss which variables influence RUMWs' skill accumulation when all factors are considered. The results are shown in Table 5. All models present a good fit: the models' chi-squares are significant at the 0.01 level, the test of parallel lines could not reject the null hypothesis (p > 0.05), and the Pearson chi-square goodness-of-fit measures are non-significant (p > 0.05).

The inter-firm and non-firm variables in Table 3 are defined as follows.

Collective efficiency:
ITFC1, inter-firm cooperation [61]: there are many local firms of the same trade, and economic and technical cooperation is frequent.
ITFC2, inter-firm competition [62]: local firms of the same trade often compete for technical workers.
ITFC3, job opportunity [28]: local job opportunities for RUMWs with higher vocational skills increase.
ITFC4, local skill demand [23]: the local requirements for RUMWs' vocational skills are generally rising.

Accessibility of non-firm institutions:
NFR1, supply of public cultural facilities [33]: local cultural facilities can be utilized to improve individuals' education level or vocational ability.
NFR2, government incentive policy [75]: those with a sufficient educational background or certain vocational technical qualification certificates can easily obtain local permanent registered residence.
NFR3, NGO [29]: there are local NGOs improving RUMWs' vocational skills.
NFR4, vocational training organization [67]: there are local private organizations providing vocational skill training for RUMWs.
NFR5, fellow townsman community [69]: fellow townsmen working in different firms can often learn from each other to improve their vocational skills.

All six variables concerning the factor "firm attributes and skill preference" are involved in Model I. The overall fit is good (χ² = 43.279, p < 0.01), which shows that this factor has a significant effect on RUMWs' skill accumulation. Specifically, INFC4 and INFC5 show significant positive correlations at the 0.1 and 0.01 levels. They represent "on-the-job training inside the firm" and "encouragement of learning by doing", respectively, and are the incentives through which firms promote RUMWs' skill accumulation. This proves that H1 is tenable, namely, that the skill-oriented preference of a firm is positively correlated with RUMWs' skill accumulation. However, INFC1, INFC2, INFC3, and INFC6 show no significant correlation with the dependent variable. INFC1, INFC2, and INFC3 represent the size, ownership, and trade of firms, respectively, indicating that these firm characteristics do not have a significant impact on RUMWs' skill accumulation.
All four variables concerning the factor "collective efficiency" are involved in Model II. The chi-square value of the ordered logistic regression equation is 64.194, significant at the 0.01 level, which means that the whole equation is significant. The four variables present significant positive correlations with RUMWs' skill accumulation. ITFC1 and ITFC2 are significantly positive at the 0.05 and 0.01 levels; they represent "inter-firm cooperation" and "inter-firm competition", respectively, indicating that inter-firm relationships significantly promote RUMWs' skill accumulation. When firms compete for technical workers, RUMWs have more motivation to enhance their vocational skills; moreover, abundant firms in the same sector can conduct economic and technical cooperation, which benefits the flow of different kinds of knowledge and skills among firms and further increases RUMWs' opportunities to learn skills. This proves that H2 is tenable: collective efficiency among local firms is positively correlated with RUMWs' skill accumulation. ITFC3 and ITFC4 are also significant at the 0.05 level, which means that more job opportunities and higher local skill demand resulting from the geographical agglomeration of firms in related sectors contribute to RUMWs' skill accumulation; in other words, external economies and consistent demand for higher-skilled labor in the local labor market promote RUMWs' skill accumulation.
All five variables concerning "accessibility of non-firm institutions" are involved in Model III, and the ordered logistic equation is significant, with a chi-square value of 73.830 at the 0.01 level. NFR1, NFR4, and NFR5 present significant positive correlations at the 0.05, 0.1, and 0.01 levels, respectively, indicating that non-firm institutions, including public cultural facilities, VTOs, and fellow townsman communities, help RUMWs accumulate vocational skills. However, NFR2 and NFR3 fail to pass the significance test. Local governments are very concerned with the employment and skill accumulation of local residents but pay little attention to RUMWs, so RUMWs' skill accumulation does not actually benefit much from local incentive policies. As NGOs are still developing in the two research areas, few RUMWs could benefit from such organizations. In general, Model III proves that H3 is tenable, namely, that the accessibility of local non-firm institutions is positively correlated with RUMWs' skill accumulation.
The control variable "regional factor" is significant at the 0.05 level in Models I, II, and III, and its regression coefficients are positive in each model. Since the measurement of this index takes Suzhou as the benchmark, this means that the regional difference is significant and that RUMWs' skill accumulation in Yuhuan is higher than that in Suzhou. The results of the questionnaire survey also confirm this point: 261 respondents consider that their skills have improved significantly (the average assignment of skill improvement is ≥4), making up 53.2% of the total respondents; this ratio is 62.1% in Yuhuan, much higher than in Suzhou (only 43.4%). Conversely, 114 respondents believe that their skills have not improved significantly (the assignment of skill improvement is ≤2), making up 23.2% of the total respondents; this ratio is 17.9% in Yuhuan, significantly lower than in Suzhou (28.9%).
ID1, a control variable among the "individual factors", has no significant effect in Models I, II, or III, indicating that gender has no significant effect on RUMWs' skill accumulation. ID3 is only significant at the 0.1 level in Model I and has no significant effect in Models II and III, meaning that length of service is not an important factor affecting RUMWs' skill accumulation. The "professional high school or technical secondary school" category of ID2 is significant at the 0.05 level, and its regression coefficient is negative, with "junior college and above" as the reference category. This indicates that RUMWs whose education level is junior college or above improve their vocational skills more than those whose education level is professional high school or technical secondary school. Although the other education levels show no significant correlation, the significant effect of "professional high school or technical secondary school" indicates that RUMWs' education level before migration actually influences their skill accumulation during migrant work. ID4 is significant at the 0.05 level in the three models, and its regression coefficients are positive. Since the measurement of this index takes "choosing the firm you work in for other reasons" as the benchmark, this result indicates that taking skill improvement as the main reason for job hunting significantly facilitates RUMWs' skill accumulation in the firms they work in; in other words, RUMWs' self-selection has a significantly positive effect on their skill accumulation. However, only about 30% of the total respondents viewed skill accumulation as the most important reason to accept their current job, 14% lower than the top reason, "for more income".
In Model IV, all variables are involved in the regression model, and the whole equation is significant, with a chi-square value of 116.703 at the 0.01 level. Eight of the 20 variables, RG, ID2, ID4, INFC4, INFC5, ITFC2, NFR4, and NFR5, show significant correlations with RUMWs' skill accumulation. These eight variables are also significant in Models I, II, and III, whereas five other variables, ID3, ITFC1, ITFC3, ITFC4, and NFR1, which present significant correlations in Models I, II, and III, fail to pass the significance test in Model IV. Among these eight significant variables, RG, NFR5, ID4, ITFC2, and INFC5, belonging to the regional factor, accessibility of non-firm institutions, individual factors, collective efficiency, and firm attributes and skill preference, respectively, are the top five factors influencing RUMWs' skill accumulation, with odds ratios of 1.433, 1.239, 1.266, 1.186, and 1.169, respectively. Compared with intra-firm factors, local non-firm institutions (especially local fellow townsman communities) and the inter-firm co-competitive relationship contribute more to RUMWs' skill accumulation. This implies that the "place", as a space of various kinds of relationships, plays an important role in RUMWs' skill accumulation beyond the economic entities themselves.
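In an ordered logit, a variable's odds ratio is exp(β): a one-unit increase in the variable multiplies the odds of reporting a higher skill-accumulation category by that factor. The short sketch below back-solves the coefficients implied by the odds ratios quoted above, purely for illustration:

```python
# Relating ordered-logit coefficients and odds ratios (OR = exp(beta)).
# The ORs are the values quoted in the text; the betas are implied by them.
import math

odds_ratios = {"RG": 1.433, "NFR5": 1.239, "ID4": 1.266, "ITFC2": 1.186, "INFC5": 1.169}
for name, oratio in odds_ratios.items():
    beta = math.log(oratio)  # implied regression coefficient
    print(f"{name}: beta = {beta:.3f}, OR = {oratio}")
```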
Conclusions and Discussion
The international economics literature is very concerned with the relationships between international activities, innovation, and labor skill upgrading in LDCs. However, individual workers' skill accumulation/upgrading is rarely considered, and the discussion of unorganized labor, especially migrant workers outside the mainstream labor market, is still insufficient. Focusing on RUMWs as unorganized laborers, this paper takes China, a developing country undergoing an economic transition from "Made in China" to "Create in China", as an example to explore the factors influencing their skill accumulation. Unlike the extant literature on skill upgrading, which generally focuses on the effects of global activities, this paper examines the effects of localization on laborers' skill accumulation in the context of globalization, using an analytical framework that combines the international economics literature with industrial clusters research, and goes beyond the view of national institutional barriers concerning household registration in China. It explores different regions inside a country instead of treating that country as a whole, and, rather than standing on the demand side, takes the perspective of individual laborers. It adopts a broad viewpoint encompassing intra-firm skill-biased strategy (as a response to intense competition), inter-firm relationships, and the accessibility of local non-firm organizations. In particular, it finds that inter-firm relationships, representing the collective efficiency of local value chains, are beneficial for promoting RUMWs' skill accumulation and for maintaining the sustainability of local development, which is seldom mentioned in the extant literature. Additionally, the place itself, as a synthesized space of labor-management relations inside a firm and inter-organizational relations, exerts an influence on, and causes regional differences in, RUMWs' skill accumulation. These findings contribute to a deeper understanding of how to sustain or reshape competitive advantage through the improvement of workers' skill accumulation, and suggest that the collective coordination and efficiency of local production systems (rather than only that of the local value chain mentioned above) should be taken into account when designing and implementing regional skill-upgrading policy.
This research draws several conclusions. Firstly, firms' skill-oriented preferences, which attend to employees' skills and innovation ability and stimulate them to learn on their own initiative, have a significant influence on RUMWs' skill accumulation. Secondly, collective efficiency, based on the co-competitive relationship between local firms, has a significant influence on RUMWs' skill accumulation; that is to say, the more intensive the interactions between local firms are, the more learning opportunities and knowledge spillovers occur to promote RUMWs' skill accumulation. Thirdly, the accessibility of local institutions and favorable policies is beneficial for RUMWs' skill accumulation; in particular, training programs from local VTOs and mutual learning inside local RUMW communities are two important channels of skill accumulation. Furthermore, RUMWs' skill accumulation in Yuhuan is higher than that in SIP, which demonstrates the importance of place as a synthesized space of multiple factors. In addition, although local firms in the two areas have directly or indirectly engaged in international trade, the significant effects of globalization on RUMWs' skill accumulation mentioned in the extant literature are not observed, and there is no significant difference in skill accumulation between foreign and private firms.
However, this paper has the following limitations. Firstly, the variable selection and the definitions in the questionnaire should be further improved. For example, standards in three aspects are set up in the paper to help respondents measure "skill improvement" in a quantified way, but a certain degree of subjectivity remains. Some variables' measurements, especially those concerning firms' international activities or global relationships, are quite rough and do not consider some important aspects, such as imported advanced machinery and the volume of imports and exports; the lack of significance of the two relevant independent variables (firm ownership and international trade) for skill accumulation could be caused by this. The depiction of inter-firm relationships also fails to distinguish relations with the upstream and downstream of the value chain from relations with peer firms. Secondly, the research focuses on RUMWs in the manufacturing industry, and the findings should be further tested in other sectors. For example, the positive influence of the "regional factor" is smaller in Suzhou than in Yuhuan, which was confirmed in the latest communication with local experts in Suzhou; however, it was also learned that the service quality and personnel quality of tertiary industries in Suzhou have been improving quickly due to the influx of abundant foreign investment, and the effect goes far beyond the manufacturing industry. In addition, the questionnaires were issued in only two areas, and the number of large firms covered by the valid questionnaires is still small. The research findings should be verified in other regions and with broader sample coverage.
Figure 1. Geographical Location of Research Areas.
Table 1. Main Social and Economic Indexes in SIP and Yuhuan in 2014. Other data were obtained from the statistical data of Suzhou Industrial Park in 2015 and the statistical bulletins of national economic and social development of Yuhuan County in 2015.
Table 2. Basic Information of Valid Samples in SIP and Yuhuan.
Table 4. Descriptive Statistics and Correlation Matrix of Independent Variables.
Table 5. Ordered Logistic Regression Results on the Factors Influencing RUMWs' Skill Accumulation. Standard errors are in parentheses. ***, **, and * indicate significance at the 0.01, 0.05, and 0.1 levels, respectively. OR = odds ratio. JC = junior college and above; PS = primary school and below; JHS = junior high school; RSHS = regular senior high school; PHS = professional high school or technical secondary school.
Application of plastic concrete cut-off wall in reinforcement of reservoir
Plastic concrete, which has a low elastic modulus and permeability, large ultimate deformation, and good workability, is a suitable material for the cut-off walls of earth dams. In this paper, a mix proportion of plastic concrete with 50% bentonite content, a water-binder ratio of 1.00, and a sand-aggregate ratio of 55% is proposed, and the plastic concrete cut-off wall has been successfully applied to the reinforcement project of the Huizhou Zengbo Lianhe reservoir. All the indexes meet the design requirements. The seepage of the main dam was reduced by 67%, and the piezometric level behind the wall was reduced by 6-9 m.
Introduction
More than 85,000 reservoirs have been built in China, of which 37,000 are unsafe; 14,600 dangerous reservoirs had been reinforced by 2012. A total of 3,484 dams broke in China from 1954 to 2003, seriously affecting the safety of people's lives and property. Seepage is the most common defect of homogeneous earth dams and the main cause of dam breaks [1,2], and the major technical measure for seepage prevention and reinforcement of earth dams is the cut-off wall.
Cut-off walls prepared with ordinary Portland cement concrete have been widely used in the reinforcement of homogeneous earth dams [3]. However, an ordinary concrete cut-off wall cannot deform synergistically with the surrounding soil, owing to its large elastic modulus and small ultimate deformation. Under the large negative friction of the surrounding soil, cracks are likely to occur, destroying the integrity and continuity of the cut-off wall [4,5]. The safety of the dam would then be threatened because of the reduced anti-seepage effect [6-9]. Moreover, the early strength of an ordinary concrete cut-off wall develops rapidly, making the stage-Ⅱ slot connection operation difficult, so the seepage control effectiveness of the cut-off wall would be reduced by hidden seepage danger areas [10].
Compared with ordinary concrete, plastic concrete contains bentonite or clay. The elastic modulus of plastic concrete is similar to that of the foundation, and its ultimate deformation is large, so it can adapt to foundation deformation better than ordinary concrete. Plastic concrete can work together with the foundation, giving it better crack resistance and earthquake resistance. In addition, plastic concrete has better workability, a longer final setting time, and lower mechanical strength, which is beneficial for improving the construction quality of the stage-Ⅱ slot connection. It is worth noting that incorporating an appropriate amount of bentonite or clay into plastic concrete not only improves the anti-permeability performance but also reduces the cement consumption, bringing environmental and economic benefits [11-13].
The Huizhou Zengbo Lianhe reservoir, located in the Dongjiang River basin, has a catchment area of 110.8 km² and a total storage capacity of 80.94 × 10⁶ m³ after reinforcement. A safety evaluation was carried out before the dam was reinforced. The permeability coefficient of the dam body, measured by water injection tests, reached 1.0 × 10⁻³ cm/s in several sections, exceeding the standard requirement. The permeability coefficient of the dam body fluctuates greatly and the compaction of the soil is insufficient overall, so seepage reinforcement is necessary. The foundation of the main dam is partially in contact with a silty sand soil layer, with a recommended permeability coefficient k of 5 × 10⁻³ cm/s; this layer should be reinforced because of the severe seepage. Moreover, seepage at the slope toe of the main dam occurs frequently: with the reservoir water level at 51.20 m, the seepage at the dam toe was about 0.0126 m³/s according to on-site inspection, which is unfavorable to the stability of the dam. In view of the severe seepage of the main dam, a plastic concrete cut-off wall was adopted for the reinforcement of the reservoir.
Mix proportion
The plastic concrete was prepared with 42.5R ordinary Portland cement, granite gravel of 16-31.5 mm, medium river sand and mud-type calcium bentonite. The mix proportion is shown in Table 1.
Construction technology
The cut-off wall is arranged along the axis of the dam crest, with a thickness of 600 mm. The bottom of the cut-off wall passes through the silty sand soil layer and the intensely weathered granite, and extends 1 m into the relatively impermeable layer. The axis length of the cut-off wall is 315.5 m, and the designed maximum wall depth is 50.40 m. The construction work quantity is about 9000 m². The profile of the plastic concrete cut-off wall of the main dam is shown in Figure 1, and Figure 2 is the flow chart of the construction technology, which mainly consists of construction of the concrete guide slot and platform, mud preparation and transportation, hole drilling, hole cleaning and mud displacement, and casting of the plastic concrete. It should be noted that bentonite mud was used in this project, and the mud should be recycled and reused. Figure 2. The construction technology of the plastic concrete cut-off wall.
Results of laboratory test
The properties of the plastic concrete tested in the laboratory are shown in Table 2. The experimental results meet the design requirements.
Results of field test
Field specimens were selected randomly for testing. The compressive strength was 3.2-4.6 MPa, the elastic modulus was 1382-1688 MPa, the permeability coefficient was 2.4-4.7 × 10⁻⁷ cm/s, and the seepage gradient was not less than 400. All the results meet the design requirements.
Seepage of main dam
The relationship between seepage and the reservoir water level of the main dam is shown in Figure 3, where '2017' represents the annual data of 2017, before the reinforcement with the cut-off wall, and '+2019' represents the data from October 2018 to October 2019, after the reinforcement. With the reservoir water level between 46.11 and 52.61 m, the seepage of the main dam was 1.34 × 10⁻² m³/s before the reinforcement and 0.62 × 10⁻² m³/s after the reinforcement, a reduction of 53.7%. It should be noted that no reinforcement measures were adopted for the dam foundation in this project. Assuming that the dam foundation accounted for 20% of the total seepage before the reinforcement and that the foundation seepage remained unchanged, the reduction reaches 67% after deducting the seepage contribution of the dam foundation; hence the seepage control effect of the plastic concrete cut-off wall is remarkable. In addition, the anti-permeability of plastic concrete increases with age, and the bentonite can absorb water and expand; therefore, the seepage control effect of the plastic concrete cut-off wall would be further enhanced with time. Figure 3. The relationship between seepage and reservoir water level.
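The 53.7% and 67% figures follow from the reported seepage rates and the stated 20% foundation assumption; the short Python sketch below reproduces the arithmetic:

```python
# Reproducing the 67% figure under the stated assumption that the dam
# foundation contributed 20% of pre-reinforcement seepage and was unchanged.
q_before = 1.34e-2  # total seepage before reinforcement, m^3/s
q_after = 0.62e-2   # total seepage after reinforcement, m^3/s

q_foundation = 0.20 * q_before          # assumed constant foundation seepage
body_before = q_before - q_foundation   # dam-body seepage before
body_after = q_after - q_foundation     # dam-body seepage after

print(f"raw reduction: {(q_before - q_after) / q_before:.1%}")                # 53.7%
print(f"dam-body reduction: {(body_before - body_after) / body_before:.1%}")  # 67.2%
```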
Piezometric level of dam
Nine piezometric tubes at three stakes were selected. The inlet sections of the piezometric tubes were arranged in the dam body, about 6 m away from the dam foundation, in order to avoid the influence of seepage through the dam foundation. Table 3 shows the arrangement of the piezometric tubes. Note: the axle distance is the distance between a piezometric tube and the cut-off wall; upstream is '-' and downstream is '+'.
The piezometric levels before and behind the plastic concrete cut-off wall are shown in Figure 4. The piezometric level obviously increases as the reservoir water level rises. Take B1 and B2 as an example to illustrate the change in piezometric level: B1 and B2 are arranged on the upstream and downstream sides of the cut-off wall, respectively, with a water level difference of 8.28-11.31 m, whereas before the construction of the cut-off wall the water level difference between two piezometric tubes in adjacent positions was only about 1.8-2.5 m.
As the reservoir water level rises, the water levels of B1 and B2 both rise, but the former rises faster. Therefore, the water level difference between B1 and B2 increases, which is consistent with the study of Tian [14]. This is because the permeability coefficient of the cut-off wall is smaller than that of the dam filling. As the reservoir water level rises, the area of the cut-off wall that plays a role in seepage control increases, so the overall seepage control effect improves. It also takes a long time for the seepage to reach the piezometric tubes behind the wall, which produces a certain hysteresis effect.
Conclusions
Plastic concrete, with its low elastic modulus and permeability, large ultimate deformation, and good workability, is a suitable material for cut-off walls. The mix proportion with 50% bentonite content, a water-binder ratio of 1.00, and a sand-aggregate ratio of 55% meets the design requirements of the plastic concrete cut-off wall. The seepage control effect is remarkable: the seepage of the main dam was reduced by 67%, and the piezometric level behind the wall was reduced by 6-9 m.
Orientation Dependence of Confinement-Deconfinement Phase Transition in Anisotropic Media
We study temporal Wilson loops with arbitrary orientation in anisotropic holographic QCD. Anisotropic QCD is relevant for describing the quark-gluon plasma (QGP) in heavy-ion collisions (HIC). We use anisotropic black brane solutions for a bottom-up anisotropic QCD approach in the 5-dim Einstein-dilaton-two-Maxwell model constructed in previous work. We calculate the minimal surfaces of the corresponding probing open string world-sheet in anisotropic backgrounds with various temperatures and chemical potentials. The dynamical wall (DW) locations, providing the quark confinement, depend on the orientation of the quark pairs, which gives a crossover transition between the confinement/deconfinement phases in the dual gauge theory.
Introduction
Study of the phase diagram in the temperature and chemical potential (µ, T)-plane is one of the most important questions in QCD. It is known that perturbative methods are inapplicable to this problem, and lattice QCD still has difficulties with the study of theories with non-zero chemical potential. The gravity/gauge duality provides an alternative tool to study this problem [1,2,3].
The phase diagram has been experimentally studied only for small µ and large T values (RHIC, LHC) on the one hand, and for low energies (small T) and finite chemical potential values (SPS) on the other hand. One of the FAIR and NICA tasks is the experimental study of this diagram between these two particular cases. For this purpose the results of the beam scanning in HIC are to be analyzed. In this context, noting that there is an obvious anisotropy in HIC (the nonequivalence of the longitudinal and transverse directions), one can say that the phase diagram is probed under anisotropic conditions. In fact it is believed that the QGP formed in HIC is initially in an anisotropic state; isotropisation occurs approximately 0.5-2 fm/c after a collision [4]. Therefore it seems natural to assume that the results of the beam scanning give indications of the phase transition in an anisotropic QCD. This anisotropy can be taken into account holographically. An additional argument for plasma anisotropy in HIC is the estimation of multiplicity, which is holographically supported by an anisotropic model [5]. Anisotropic lattice QCD is the subject of studies in [6,7,8,9].
Anisotropy on the gravity side can be taken into account by an anisotropic metric, which can be provided by adding a magnetic ansatz of a Maxwell field to the dilaton gravity action. A non-zero chemical potential is introduced via an electric ansatz for the second Maxwell field [10]. Thereby the 5-dimensional dilaton gravity with two Maxwell fields turns out to be the most suitable model. Such a model was considered in [10,11]. The simplest anisotropic model, characterized by the anisotropic parameter ν, has been investigated in [5].
In this paper we consider a 5-dim metric defined by the anisotropic parameter ν, a non-trivial warp factor, a non-zero time component of the first Maxwell field, and a non-zero longitudinal magnetic component of the second Maxwell field. We take the warp factor in the simplest form $b(z) = e^{cz^2/2}$, as this particular case allows one to construct an explicit solution [10]. We study the confinement/deconfinement phase transition line for a pair of quarks in the anisotropic QGP. It is natural to expect that the phase transition depends on the orientation of the quark pair relative to the anisotropy axis; the anisotropy axis in the QGP created in HIC is defined by the axis of the ion collisions. We show that the confinement/deconfinement phase transition line depends on the angle θ between the quark-pair line and the heavy-ion collision line. We calculate the expectation values of the rectangular temporal Wilson loop $W_{\vartheta t}$ for different orientations of the spatial part of the Wilson loop and find the conditions of the confinement/deconfinement phase transition for this line. For this purpose we introduce the effective potential $\mathcal{V}(z)$, which depends on the angle θ and describes the interquark interaction. Confinement takes place when the effective potential $\mathcal{V}$ has a critical point. We find the conditions under which the critical point exists, and study the dependence of the confinement/deconfinement phase transition temperature on the chemical potential µ and the angle θ.
The specific feature of the holographic description of the confinement/deconfinement transition is the position of the phase diagram associated with the Wilson loop behavior relative to the line of the Hawking-Page phase transition, characterized by the 5-dim background metric. It is evident that, unlike the confinement/deconfinement transition line, the position of the Hawking-Page transition line on the phase diagram does not depend on the angle θ. As a result, a change of this angle changes the mutual arrangement of the confinement/deconfinement transition line and the Hawking-Page transition line on the phase diagram. We find the critical value $\theta_{cr}$ for which the top of the Hawking-Page transition line and the top of the confinement/deconfinement transition line coincide.
The paper is organized as follows. In Sect. 2.1 we describe the 5-dim black brane solution in the anisotropic background. In Sect. 2.2 we calculate the expectation value of the temporal Wilson loop. In Sect. 3 we find the condition of the confinement/deconfinement phase transition for zero and non-zero temperature and present the phase diagrams depending on the angle θ.
The model
We consider a 5-dimensional Einstein-dilaton-two-Maxwell system. In the Einstein frame the action of the system is specified as
$$S = \frac{1}{16\pi G_5}\int d^5x\,\sqrt{-g}\left[R - \frac{f_1(\phi)}{4}\,F_{(1)}^2 - \frac{f_2(\phi)}{4}\,F_{(2)}^2 - \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - V(\phi)\right],$$
where $F_{(1)}^2$ and $F_{(2)}^2$ are the squares of the Maxwell fields $F^{(1)}_{\mu\nu}$ and $F^{(2)} = q\, dy^1 \wedge dy^2$, $f_1(\phi)$ and $f_2(\phi)$ are the gauge kinetic functions associated with the corresponding Maxwell fields, and $V(\phi)$ is the potential of the scalar field $\phi$.
To find the black brane solution in the anisotropic background we used the metric ansatz in the following form:
$$ds^2 = \frac{L^2\, b(z)}{z^2}\left[-g(z)\,dt^2 + dx^2 + z^{2-\frac{2}{\nu}}\left(dy_1^2 + dy_2^2\right) + \frac{dz^2}{g(z)}\right],$$
where b(z) is the warp factor and g(z) is the blackening function; we set the AdS radius L = 1. All the quantities in formulas and figures are presented in dimensionless units. Note that in [10] the following strategy to study the holographic model is used. First, one takes a warp factor suitable for phenomenological applications, in particular $b(z) = e^{cz^2/2}$. Second, the anisotropic multiplier $z^{2-2/\nu}$ is also fixed by phenomenological reasons [5]. Third, one takes a specific function $f_1$ for reasons of simplicity. Finally, using the E.O.M. following from (2), one finds the coupling function $f_2$ (Fig. 1), the potential V (Fig. 1), the Maxwell field potential $A_\mu$, and the blackening function g; the explicit form of g and the expansion of the function G(x) entering it are given in [10,12]. The potential V can be approximated by a sum of two exponents and a negative constant; for c = -1 and ν = 4.5 the coefficients of this approximation depend on the chemical potential µ. Note that in [13] an explicit isotropic solution for a dilaton potential given by a sum of two exponents, at zero chemical potential, has been constructed. It would be interesting to generalize this construction to the anisotropic and non-zero chemical potential cases.
The Wilson loop
The purpose of our consideration is to calculate the expectation value of the temporal Wilson loop oriented along the vector n, such that $n_x = \cos\vartheta$, $n_y = \sin\vartheta$.
Following the holographic approach [14,15,16] we have to calculate the value of the Nambu-Goto action for the test string in our background,
$$S = \frac{1}{2\pi\alpha'}\int d^2\xi\, \sqrt{-\det\left(G_{\mu\nu}\,\partial_\alpha X^\mu\, \partial_\beta X^\nu\right)},$$
where $G_{\mu\nu}$ is given by (2) and the world sheet shown in Fig. 2 is parameterized by the world-sheet coordinates $\xi^1, \xi^2$. Rewriting the action (10) in terms of the string profile and introducing the effective potential $\mathcal{V}(z)$ of [10], one obtains from (11) the integral representations (14) and (15) for the characteristic length $\ell$ of the string and for the action S, where $z_*$ denotes the top point of the string; a UV cut-off has to be introduced, since the integrand has singular behaviour near $z \sim 0$. From (14) and (15) we see that $\ell$ and S make sense if the potential is a decreasing function in the interval $0 < z < z_*$, where $z_{min}$ is the local minimum of $\mathcal{V}(z)$, $z_{min} < z_h$. We are interested in the asymptotics of S at large $\ell$. To get $\ell \to \infty$ and $S \to \infty$ we have to take $z_* = z_{min}$: indeed, substituting this into (14) and (15), we get $\ell \to \infty$ and $S \to \infty$ as $z \to z_{min} - 0$. The stationary point, $\mathcal{V}'|_{z=z_{min}} = 0$, is usually called a dynamical wall (DW) point; it satisfies the equation (21). Taking the top point $z_* = z_{DW}$, we obtain the linear, confining behaviour of the interquark potential at large distances.
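The DW condition $\mathcal{V}'(z) = 0$ can be located numerically. The sketch below is illustrative only: the exact effective potential of the model is not reproduced here, and the warp-factor coefficient c is chosen positive purely so that the toy potential has a stationary point; only the root-finding procedure is the point.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters only: c is chosen positive here so that this
# toy potential has a stationary point; it is not the model's value.
c, nu, theta = 1.0, 4.5, np.pi / 4

def V_eff(z):
    # Schematic effective potential: warp factor exp(c z^2 / 2) over z^2,
    # times an angle-dependent anisotropic factor; blackening g(z) = 1 (T = 0).
    b = np.exp(c * z**2 / 2)
    aniso = np.sqrt(np.cos(theta)**2 + z**(2 - 2 / nu) * np.sin(theta)**2)
    return b / z**2 * aniso

def dV(z, h=1e-6):
    # Numerical derivative of the effective potential.
    return (V_eff(z + h) - V_eff(z - h)) / (2 * h)

# Scan for a sign change of V'(z); each sign change brackets a stationary
# point, the candidate dynamical wall (DW) location.
zs = np.linspace(0.1, 5.0, 2000)
dvs = dV(zs)
for z1, z2, d1, d2 in zip(zs[:-1], zs[1:], dvs[:-1], dvs[1:]):
    if d1 * d2 < 0:
        z_dw = brentq(dV, z1, z2)
        print(f"candidate z_DW = {z_dw:.4f}, V(z_DW) = {V_eff(z_dw):.4f}")
```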
Confinement/deconfinement phase transition
In our case the effective potential depends on the warp factor, the scalar field, and the angle. To find the stationary points of $\mathcal{V}(z)$ we solve the equation (21) for the potential (13) with arbitrary angle; the particular cases θ = 0, π/2 [10] follow from the resulting expression (23). We first consider the case of zero temperature, i.e. g = 1 (Fig. 3); for non-zero temperature the equation (23) can be rewritten accordingly (Fig. 3), and it again reduces to the particular cases θ = 0, π/2. The expression for the temperature $T(z_h, \mu, c, \nu)$ is given in [10]. Let us recall that in [10] we also studied the thermodynamical properties of the constructed black hole background and found large/small black hole phase transitions (BB-transition) at the temperature $T_{BB}(\mu)$. The Hawking-Page phase transition takes place at $z_{h,HP}$, where the free energy equals zero. The particular value of $z_{h,HP}$ depends on the parameters c and ν and is larger for larger negative c, i.e. $z_{h,HP}(c_1, \nu) < z_{h,HP}(c_2, \nu)$ for $c_1 < c_2 < 0$. For the anisotropic background the Hawking-Page horizon is smaller than for the isotropic one with the same c < 0. For µ = 0 and $T < T_{HP}(0)$ the black hole dissolves into a thermodynamically stable thermal gas. If the system cools down with a non-zero chemical potential less than some critical value $\mu_{cr}$, the background undergoes a phase transition from a large to a small black hole. This is a generalization of the corresponding effect in the isotropic case [17,18,19,20,21]. We have found that in the anisotropic case the temperature of the large/small black hole phase transition is lower than in the isotropic one, i.e. $T^{(\nu)}_{BB}(\mu) < T^{(iso)}_{BB}(\mu)$, while the critical chemical potential, up to which this phase transition exists, is larger in the anisotropic case, $\mu^{(\nu)}_{cr} > \mu^{(iso)}_{cr}$. Also, we have found that the point $(\mu^{(\nu)}_{cr}, T^{(\nu)}_{cr})$ goes smoothly to $(\mu^{(iso)}_{cr}, T^{(iso)}_{cr})$ for ν → 1. In Figs. 4-6 we can see the dependence of the confinement/deconfinement phase transition on the Wilson loop orientation. We choose the intermediate angle values θ = 10°, 45°, 60°. In the boundary cases the graphs coincide with the curves for $W_{xT}$ (blue solid line) and $W_{yT}$ (magenta solid line) from [10]. In our consideration we take into account the Hawking-Page phase transition (dashed pink line).
Results and conclusions
We have found the dependence of the confinement/deconfinement phase transition line on the orientation of the quark pair. For this purpose we have studied the behavior of the temporal Wilson loops in the particular 5-dimensional anisotropic background supported by a dilaton and two Maxwell fields constructed in [10]. The diagram is defined in the (µ, T)-plane for arbitrary angles. We have studied the dependence of the behavior of the temporal Wilson loops on the orientation specified by an arbitrary angle θ in the background (3). This result is the generalization of the two particular cases of orientation considered in [10], which can be associated with the boundary values θ = 0, π/2. We demonstrated that the phase diagram depends on the orientation [11]. Taking into account the instability zones of the anisotropic background, we have found more complicated confinement/deconfinement phase diagrams for differently oriented temporal Wilson loops; the details are the following.
• For 0° ≤ θ < θ_cr = 22°, parts of the regions near zero values of the chemical potential enter the instability regions of our background, where the small black holes collapse to large ones. Here the horizon suddenly blows up, so that the confinement phase transforms into the deconfinement one via a Hawking-Page phase transition. After the chemical potential exceeds some critical value, the confinement/deconfinement phase transition is no longer determined by the background and the influence of the Wilson loop starts to dominate, analogous to the longitudinal orientation case, presented as $W_{xT}$ in [10] and associated with θ = 0°. The isotropic case can be regarded as a reduction to the described scenario as well.
• For θ_cr3 = 78° ≤ θ < 90°, the confinement/deconfinement phase transition is determined by the Wilson loop starting from zero values of the chemical potential and meets the instability of the background only in a small region of non-zero chemical potential. In this small region the Hawking-Page phase transition takes place, and afterwards the influence of the Wilson loop becomes dominant again. This picture is analogous to the transversal orientation case, presented as $W_{yT}$ in [10] and associated with θ = 90°.
In the isotropic background such a case is not realized.
We should also note that all these considerations are applicable at sufficiently high energies, since for T → 0 the effects of anisotropy disappear.
As to future investigations, the following natural questions concerning static and non-static properties of our model are worth noting. As has been mentioned, the anisotropic background constructed in [10] can be generalized to provide a more realistic model. In this case the solution can be given in terms of quadratures only, and we intend to generalize the Wilson loop calculations to this more realistic case. As to static properties, it is natural to
• investigate θ-oriented Wilson loops based on a more complicated factor b(z), in particular one such that in the isotropic limit it fits the Cornell potential known from lattice QCD;
• study the Regge spectrum for mesons, adding probe gauge fields to the backgrounds, and find its dependence on θ;
• consider estimations for direct photons and find their dependence on the orientation [11];
• evaluate transport coefficients and their dependence on the anisotropy;
• estimate the holographic entanglement entropy and find its dependence on θ; note that this has been done in [5] for zero chemical potential and θ = 0, π/2, while the isotropic case for non-zero chemical potential has been considered recently in [22].
As to the thermalization processes, which are the main motivation of our consideration of the anisotropic background (see details in [11,23]), it would be interesting to investigate the behavior of the temporal Wilson loop during thermalization. This problem has been studied for zero chemical potential in [24]. It is also interesting to generalize the result of paper [25] and consider the thermalization of spatial Wilson loops for non-zero chemical potential. This would give the dependence of the drag forces on the chemical potential.
LiteHAR: Lightweight Human Activity Recognition from WiFi Signals with Random Convolution Kernels
Anatomical movements of the human body can change the channel state information (CSI) of wireless signals in an indoor environment. These changes in the CSI signals can be used for human activity recognition (HAR), a predominant and unique approach because it preserves privacy and captures motions flexibly in non-line-of-sight environments. Existing models for HAR generally have a high computational complexity, contain a very large number of trainable parameters, and require extensive computational resources. This issue is particularly important for the implementation of these solutions on devices with limited resources, such as edge devices. In this paper, we propose a lightweight human activity recognition (LiteHAR) approach which, unlike the state-of-the-art deep learning models, does not require extensive training of a large number of parameters. This approach uses randomly initialized convolution kernels for feature extraction from CSI signals without training the kernels. The extracted features are then classified using a ridge regression classifier, which has a linear computational complexity and is very fast. LiteHAR is evaluated on a public benchmark dataset, and the results show its high classification performance in comparison with the complex deep learning models at a much lower computational complexity.
INTRODUCTION
The WiFi technology, based on the IEEE 802.11n/ac standards [1], uses Orthogonal Frequency Division Multiplexing (OFDM) which decomposes the spectrum into multiple subcarriers with a symbol transmitted over each subcarrier. Channel state information (CSI) reflects how each subcarrier is affected through the wireless communication channel.
CSI in a Multiple-Input Multiple-Output (MIMO) setup has spatial diversity due to using multiple antennas as well as frequency diversity due to using multiple subcarriers per antenna. Each CSI sample, measured at the baseband, is a vector of complex variables where the size of the vector is the number of subcarriers in the OFDM signal.
It has been shown that changes in the environment affect the CSI and hence human activities can be recognized by studying the CSI variations [1,2]. Anatomical movements of the human body can change the reflection of WiFi signals, hence change the CSI in the environment. By a proper study of the CSI variations certain human activities can be detected. Besides the availability of WiFi signals in homes and most indoor environments, the privacy issue makes the use of CSI for HAR an attractive proposition. Unlike cameras, WiFi-based activity recognition preserves privacy of users and does not require a line-of-sight access.
Some of the early models for human activity recognition (HAR) are based on random forest (RF) and the hidden Markov model (HMM). It has been shown that these methods have lower performance than deep learning approaches such as LSTM [1], SAE [3], ABLSTM [4], and WiAReS [5]. The WiAReS [5] model is an ensemble of multi-layer perceptron (MLP) networks, convolutional neural networks (CNNs), RF, and support vector machines (SVMs). Its performance is close to that of the ABLSTM [4] method. Unfortunately, the computational complexity of this approach is not reported and the code is not available.
Generally, deep learning approaches have a large number of trainable parameters, require tremendous training data, need major hyper-parameter tuning, and are resource hungry in training and inference [6]. Training deep learning models with limited-imbalanced data is challenging [7] and augmentation methods can partially reduce the overfitting of these models [8,9]. Deployment of these solutions generally requires access to graphical processing units (GPUs) which make their implementation expensive on resource-limited devices without access to cloud resources. However, it is possible to design structural-efficient networks [10] and prune unnecessary parameters to shrink the models [11,12].
It is possible to use a large number of random convolution kernels, without training them, for feature extraction from time series, as proposed in Rocket [13]. This is a fast and accurate time series classification approach which has shown significant performance improvements in classification tasks for various applications, such as driver distraction detection using electroencephalogram (EEG) signals [14] and functional near-infrared spectroscopy signal classification [15]. This paper proposes a novel approach for HAR based on Rocket [13]. Unlike the deep learning approaches [1,3,4,5], our method does not require training of a large number of parameters and is computationally very light, hence called lightweight human activity recognition (LiteHAR). LiteHAR also does not require a GPU setup and can be implemented on local devices without cloud access.
LiteHAR MODEL
Different steps of the proposed LiteHAR model are presented in Fig. 1. The steps are described in detail next.
Input Signals
Let $\{(X_1, y_1), ..., (X_N, y_N)\}$ represent a set of N training samples, where $X_n = (x_{n,1}, ..., x_{n,M})$ is the CSI amplitude signal of M subcarriers over time and $y_n$ is the corresponding activity label. As an example, for a MIMO receiver with three antennas and 30 subcarriers per antenna, $(x_1, ..., x_{30})$, $(x_{31}, ..., x_{60})$, and $(x_{61}, ..., x_{90})$ represent the CSI amplitude signals of the subcarriers on antennas one to three, respectively, as demonstrated in Fig. 2. Unlike most activity recognition methods (e.g. [1,4]), we do not perform any major pre-processing on the input signals.
The convolution outputs are then represented by two values per kernel: the proportion of positive values (ppv) and the maximum value (max) [13]. Hence, the extracted feature vector for each CSI subcarrier signal $x_m$ is $(k_{m,1}, ..., k_{m,D})$, where $k_{m,d} = (ppv_{m,d}, max_{m,d})$. As Fig. 1 shows, for M given subcarriers, M feature vectors are generated. An advantage of this feature representation is that it maps a variable-length time series to a fixed-length feature vector, which eliminates the padding of different-length signals.
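To make the feature-extraction step concrete, the following minimal sketch applies randomly initialized kernels to one subcarrier signal and computes the two summary statistics per kernel (ppv and max). The kernel sampling is simplified relative to the full Rocket scheme of [13], which also draws random lengths, dilations, and paddings; the fixed kernel length and the synthetic signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(num_kernels, length=9):
    # Simplified kernel sampling: weights ~ N(0, 1), mean-centred, random bias.
    # The full Rocket scheme [13] also samples lengths, dilations and paddings.
    W = rng.normal(size=(num_kernels, length))
    W -= W.mean(axis=1, keepdims=True)
    b = rng.uniform(-1, 1, size=num_kernels)
    return W, b

def transform(x, W, b):
    # For each kernel: proportion of positive values (ppv) and maximum (max).
    feats = []
    for w, bias in zip(W, b):
        conv = np.convolve(x, w, mode="valid") + bias
        feats.extend([(conv > 0).mean(), conv.max()])
    return np.array(feats)  # fixed length 2*D regardless of len(x)

W, b = random_kernels(num_kernels=100)   # the paper uses D = 10,000 kernels
x = rng.normal(size=5000)                # stand-in for one CSI subcarrier signal
features = transform(x, W, b)
print(features.shape)                    # (200,)
```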
Classifier
In the proposed model, we train one classifier ψ(·) per subcarrier. Therefore, M classifiers $\psi_1, ..., \psi_M$ are trained on the M extracted feature vectors. The predicted class for an input $X_n$ with target activity class $y_n$ is obtained by majority voting over the per-subcarrier predictions $\tilde{y} = (\tilde{y}_1, ..., \tilde{y}_M)$, as in (1). Commonly, the ridge regression classifier is used in Rocket; it is a very simple and significantly fast classifier, and it uses generalized cross-validation to determine the appropriate regularization [13]. This classifier is used as $\psi_m(\cdot)$ in this paper, but one is not limited to this choice.
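A minimal sketch of the per-subcarrier classification and voting stage is shown below, using scikit-learn's RidgeClassifierCV with the log-spaced regularization strengths quoted in the Setup section. The array shapes and the synthetic random data are placeholders, not the real CSI features.

```python
import numpy as np
from scipy.stats import mode
from sklearn.linear_model import RidgeClassifierCV

# Hypothetical shapes: N samples, M subcarriers, F features per subcarrier.
# features[n][m] stands in for the Rocket-style feature vector of subcarrier m.
N, M, F = 120, 90, 200
rng = np.random.default_rng(1)
features = rng.normal(size=(N, M, F))  # placeholder for real CSI features
y = rng.integers(0, 7, size=N)         # 7 activity classes

# One ridge classifier per subcarrier, with generalized cross-validation
# over 10 regularization strengths on a log scale in (-3, 3).
alphas = np.logspace(-3, 3, 10)
classifiers = [RidgeClassifierCV(alphas=alphas).fit(features[:, m], y)
               for m in range(M)]

def predict(feats):
    # feats: (M, F) for one sample; majority vote over subcarrier predictions.
    votes = np.array([clf.predict(feats[m][None])[0]
                      for m, clf in enumerate(classifiers)])
    return mode(votes, keepdims=False).mode

print(predict(features[0]))
```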
Data
The experiments were conducted on the CSI dataset provided in [1] (https://github.com/ermongroup/Wifi_Activity_Recognition). The CSI data was collected at a receiver with three antennas and 30 subcarriers per antenna at a sampling rate of 1 kHz. The length of each collected sample is 20 seconds. This dataset has 7 activity classes (Run, Pick up, Lie down, Fall, Sit down, Stand up, and Walk), collected in an indoor environment. Most proposed methods in the literature (e.g. [1,4,5]) have been evaluated on 6 activity classes of this dataset (i.e., the Pick up activity class has been excluded). For the sake of comparison, LiteHAR is evaluated on these 6 classes as well as on the entire dataset (i.e., 7 activity classes).
Setup
The proposed LiteHAR model is implemented in Python using the Numba high-performance compiler (http://numba.pydata.org/) and parallel, lightweight pipelining; our code is available online. The input CSI signals are down-sampled to 500 Hz and normalized by subtracting the mean and dividing by the ℓ2-norm. The number of random kernels is D = 10,000 [13]. The regularization strength is set to 10 evenly spaced numbers on a log scale in the range (-3, 3). The average results of 10 independent runs are reported, and the training dataset is shuffled in each run. A computational setup similar to [4] was used. Table 1 shows the classification accuracy results of the RF [1], HMM [1], LSTM [1], SAE [3], ABLSTM [4], and the proposed LiteHAR models. The confusion matrices of the top three models are presented in Table 2.
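The pre-processing described above can be sketched as follows; the plain stride-based decimation is an assumption, since the paper does not specify the down-sampling filter.

```python
import numpy as np

def preprocess(x, fs_in=1000, fs_out=500):
    # Down-sample from 1 kHz to 500 Hz by simple decimation (assumed; the
    # paper does not state the filter), then normalize: subtract the mean
    # and divide by the l2-norm.
    step = fs_in // fs_out
    x = x[::step]
    x = x - x.mean()
    return x / np.linalg.norm(x)

x = np.random.default_rng(2).normal(size=20_000)  # 20 s at 1 kHz
print(preprocess(x).shape)                        # (10000,) -> 20 s at 500 Hz
```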
Classification Performance Analysis
For the experiments on 6 activity classes, ABLSTM has the highest classification performance at 97%, with the best overall accuracy for three activity classes. The accuracy of the LiteHAR model is slightly lower than that of ABLSTM, and it achieves the best overall performance for two activity classes. However, LiteHAR has a training time of 157.8 s, while the training time of ABLSTM is about 82× longer. In inference, LiteHAR is 0.003 s faster than ABLSTM. Note that the current version of LiteHAR is implemented for parallel processing on CPUs; a GPU implementation could further accelerate the training and inference of the model.
LiteHAR has achieved a performance of 91% in classification of 7 activity classes with an accuracy of 93% for the Pick up activity class. Adding this class has dropped the accuracy of LiteHAR by 2% from the 6-class model. The confusion matrix of LiteHAR for all the activity classes is presented in Table 3.
Computational Complexity of LiteHAR
The LiteHAR model has two parts. The first part applies the random convolution transforms for feature extraction, which has a computational complexity of $O_T = O(D \cdot N \cdot l_{input})$, where $l_{input}$ is the length of the time series [13]; this complexity is a linear function of the number of kernels. The second part is the ridge regression classifier, which has a linear computational complexity and is very fast.
Spatial Diversity Analysis
The MIMO system with OFDM modulation offers spatial-frequency diversity in CSI data collection. Fig. 3 shows the classification performance of $\psi_1(k_1), ..., \psi_M(k_M)$ per subcarrier (30 subcarriers/antenna) of each antenna (3 antennas), for a single run over all activity classes, with an overall accuracy of 92%. Fig. 3 shows that not all antennas and subcarriers contribute to the performance of the model; some are redundant or even have a negative impact. From a spatial perspective, antenna 3 has a lower overall accuracy than the other antennas (subcarriers 3 to 30), whereas all subcarriers of antennas 1 and 2 have a competitive performance. Hence, one may detect and prune the redundant/destructive antennas/subcarriers from the voting mechanism in (1). This approach can enhance the classification performance of the model. During our experiments, we have observed that training LiteHAR with the signals captured from the first two antennas alone can increase its classification performance by about 1%.
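One way the pruning idea suggested by Fig. 3 could be realized is sketched below: subcarriers whose individual validation accuracy falls below a threshold are simply excluded from the vote. The threshold value and the random accuracies are illustrative assumptions, not values from the paper.

```python
import numpy as np

def select_subcarriers(per_subcarrier_acc, threshold=0.80):
    # Keep only subcarriers whose individual validation accuracy exceeds
    # a threshold before they enter the majority vote. The threshold is
    # an illustrative choice.
    keep = np.flatnonzero(np.asarray(per_subcarrier_acc) >= threshold)
    return keep

accs = np.random.default_rng(3).uniform(0.6, 0.95, size=90)  # stand-in accuracies
kept = select_subcarriers(accs)
print(f"{kept.size} of {accs.size} subcarriers retained for voting")
```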
CONCLUSIONS
WiFi-based solutions for human activity recognition (HAR) offer privacy and non-line-of-sight activity detection capabilities. Most recently proposed methods, which have achieved high classification performance on benchmark datasets, use complicated deep learning solutions with a very large number of trainable parameters. In order to have an affordable and practical HAR solution, particularly for implementation on resource-limited devices in a local setup, both the classification accuracy and the computational complexity of the model should be considered. In this paper, we have proposed a lightweight human activity recognition (LiteHAR) solution which has a very competitive classification performance in comparison with the state-of-the-art methods and a very low computational complexity. Unlike most deep learning solutions, LiteHAR does not require training of a large number of parameters and can be implemented on resource-limited devices without GPU access.
ACKNOWLEDGMENT
This work was partially supported by the Mobile AI Lab established between Huawei Technologies Co. LTD Canada and The Governing Council of the University of Toronto.
Semiclassical analysis of the Nonequilibrium Local Polaron
A resonant level strongly coupled to a local phonon under nonequilibrium conditions is investigated. The nonequilibrium Hartree-Fock approximation is shown to correspond to approximating the steady state density matrix by delta functions at field values to which the local dynamics relaxes in a semiclassical limit. If multiple solutions exist, all are shown to make nonvanishing contributions to physical quantities: multistability does not exist. Nonequilibrium effects are shown to produce decoherence, causing the standard expansions to converge and preventing the formation of a polaron feature in the spectral function. The formalism also applies to the nonequilibrium Kondo problem.
The quantum mechanics of nonequilibrium systems is a subject of fundamental importance and of great current interest, for example in the context of prospective 'single molecule devices' 1 . In equilibrium problems, nonperturbative analysis based on solutions of Hartree-Fock equations (which may be understood as saddle points of functional integrals) has led to important insights. In nonequilibrium problems, mean field equations may be formulated 2,3,4 and indeed exhibit a richer structure than in the corresponding equilibrium problems, but it has not been clear how to select the relevant solutions or to systematically compute corrections.
In this paper we investigate a simple model which indicates a resolution to these issues. We find that the relevant solutions are selected by the steady state density matrix, which in a semiclassical, weakly nonequilibrium limit is found to become very sharply peaked at field values corresponding to local minima of a 'pseudoenergy' which we define. The formalism also shows that if multiple minima exist, all contribute, with weights varying smoothly as parameters change and shows how departures from equilibrium lead to a decoherence which suppresses characteristically quantal effects such as the formation of a polaron resonance.
We consider a single level which may be occupied by 0 or 1 spinless electrons (creation operator $d^\dagger$) and is coupled to two leads j = L, R (creation operators $a^\dagger_{jk}$). The leads are assumed to be reservoirs specified by chemical potentials $\mu_j$ and inverse temperatures $\beta_j$ (which we typically set to $\beta_j = \infty$). The electrons interact with an oscillator (dimensionless displacement coordinate q and momentum p) of mass $M_{ph}$ and energy U(q). This is a local version of the familiar 'polaron problem' 5 and captures an important aspect of the physics of prospective single-molecule devices 6,7 . At $M_{ph} = \infty$ the model is analytically solvable, with properties determined by the Green functions, where $\Sigma = \Sigma_L + \Sigma_R$, $\Sigma_j(z) = \sum_p t^2_{p,j}/(z - \epsilon_p)$, and $a_j = \mathrm{Im}\,\Sigma_j(\omega - i\delta)\,/\,[(\omega - \epsilon_0 - \lambda q - \mathrm{Re}\,\Sigma(\omega))^2 + \mathrm{Im}\,\Sigma^2(\omega)]$. It is also useful to define $b = \mathrm{Re}\,g_R$ and the 'pseudoenergy' $\Phi(q)$, which in equilibrium becomes the q-dependent part of the total energy. Fig. 1 shows a possible form of $\Phi(q)$. The equilibrium physics of this model is very well understood, and is conveniently viewed as a functional integral over trajectories q(t). In the limits $M_{ph}, \beta \to \infty$ the integral is dominated by those paths for which q takes the definite value $q_a$ minimizing the energy. At finite $M_{ph}$, other paths become important: two crucial classes are the 'gaussian fluctuation' paths involving small excursions (characteristic frequency $\omega_a$) from the minima and, if $\Phi_{eq}(q)$ has the form shown in Fig. 1 and the barrier height H is sufficiently large, tunnelling (instanton) processes during which q goes rapidly from the vicinity of the global minimum ($E_1$ in the notation of Fig. 1) to the higher-energy minimum (here $E_2$), spends a time of order $\Delta E^{-1} \equiv (E_2 - E_1)^{-1}$ near the higher minimum, and then returns to the vicinity of the global minimum. If H is sufficiently large, then tunnelling processes are rare, but when $E_2 \approx E_1$ the time spent in the higher minimum becomes longer than the interval between tunnelling events and the dilute instanton approximation breaks down, signalling the formation of a polaron resonance in the density of states. In this paper we investigate the changes occurring when the system is driven out of equilibrium by application of a chemical potential difference $\Delta\mu$ between the two leads. We present a general discussion but focus most attention on the nonequilibrium polaron limit $H \gg \omega_a \gg \Delta\mu$, with $\beta_j = \infty$ but $\Delta E/\Delta\mu$ arbitrary. In our actual calculations we also assume that departures from equilibrium are small enough that we may neglect density-of-states variations: $(\Delta\mu)\,\partial \ln a/\partial\omega < 1$.
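To make the geometry of Φ(q) concrete, the following sketch numerically locates the two minima $E_1$, $E_2$, the splitting ΔE, and the barrier height H for an illustrative tilted double well; the actual pseudoenergy of the model depends on the electronic quantities a and b and is not this quartic.

```python
import numpy as np
from scipy.signal import argrelmax, argrelmin

# Illustrative double-well pseudoenergy; only the shape (two minima
# separated by a barrier) is taken from the text, not this functional form.
def phi(q):
    return 0.25 * q**4 - 0.5 * q**2 + 0.05 * q

q = np.linspace(-2, 2, 4001)
vals = phi(q)
minima = q[argrelmin(vals)[0]]        # the two local minima
barrier_top = q[argrelmax(vals)[0]]   # the barrier between them

E1, E2 = sorted(phi(m) for m in minima)
H = float(phi(barrier_top[0]) - E1)   # barrier height above global minimum
dE = E2 - E1                          # energy splitting of the two minima
print(f"E1={E1:.4f}, E2={E2:.4f}, dE={dE:.4f}, H={H:.4f}")
```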
To analyse the out-of-equilibrium behavior we use the standard 8,9 extension of Feynman diagrammatics to the Keldysh contour, which consists of a time-ordered (−) branch extending from t = −∞ to t = +∞, followed by an anti-time-ordered (+) branch extending from +∞ back to −∞. The Keldysh diagrammatics may be obtained by functional differentiation, with respect to source terms $\eta_a(t)$, of a generating functional $W[\{\eta_a\}]$, which can be formulated 2 as a coherent-state path integral (Eq. 8). Here the functional integral Dq(t) is over all paths beginning at $q_-$ at t = −∞ on the time-ordered contour and ending at $q_+$ at t = −∞ on the anti-time-ordered contour; the weight contains the time evolution operator on the Keldysh contour, and a(t) = ±1 according to whether the time is on the − or + branch. The contributions of paths with endpoints $q_+, q_-$ are weighted by the appropriate element of the steady-state density matrix ρ, which is obtained as the long-time limit of an evolution equation that may be expressed in path-integral language (Eq. 9). There the integral $D'q'(t')$ is over all paths which begin at $q'_-$ on the lower contour at time $t_0$ and end at $q_-$ at time t on the lower contour, together with the time-reversed paths which begin at $q'_+(t)$ on the upper contour and return to $q_+$ at time $t_0$. We require the long-time behavior after transients have decayed. We do not find non-steady long-time limits such as limit cycles or chaos: ρ in Eq. 8 is the time-independent solution of Eq. 9.
Because Eqs. 1, 2, 3 involve a finite system coupled to two infinite reservoirs, the trace over electron operators may be performed 7 . The combination of time evolution operators needed in Eqs. 8, 9 may be written in the interaction representation as $R(t_2, t_1)$, where the brackets denote the expectation value in the reservoir defined by the leads (note this trace depends on the entire trajectory q(t)) and $n_{+/-} = d^\dagger d$ on the indicated contour. In the physical problem the quantum-field coupling constant $\lambda_q = \lambda$. Differentiation with respect to $\lambda_q$ and application of the usual linked-cluster arguments leads to a representation of W as a phonon path integral, where $L_{ph+S}$ is the phonon action supplemented by source terms, and $G_{g'}$ solves Eq. 12 with $v(t) = \lambda q_c(t)\,\mathbf{1} + \lambda_q q_q(t)\,\sigma_x$. We have verified that an expansion in powers of λ about the weak-coupling limit λ = 0 reproduces results obtained 7 by standard Keldysh diagrammatic analysis. We now turn to the semiclassical analysis. In the large $M_{ph}$ limit, one expects stationary (time-independent) paths to dominate the physics. However, in the Keldysh formalism stationary paths have $q_q = 0$, so in the absence of source terms the time evolution operator in Eq. 8 is unity for all stationary paths, providing no basis for selection. Instead, the important paths are selected by the density matrix, i.e. by the solution of Eq. 9.
To understand the dynamics implied by Eq. 9 we first assume that ρ(q, q′) is strongly peaked near $q = q' = q_a$. We may then expand $\Phi = E_a + \frac{M_{ph}\,\omega_a^2}{2}(q - q_a)^2$ and analyse Eq. 9 by the usual perturbative methods 7 . In the large-mass, weakly nonequilibrium limit we find relaxation, with a rate of order $\lambda^2 \omega_a (a_L + a_R)^2$, to a sum of functions sharply peaked about the values $q_a$ which minimize the pseudoenergy.
To leading order in the electron-phonon coupling λ we find that $r_a(q, q')$ is a thermal-like distribution with effective temperature $T_{eff} = \Delta\mu\, a_L a_R/(a_L + a_R)^2$, peaked about $q_a$ with a width arising from quantum fluctuations (finite $M_{ph}$) and from departures from equilibrium (which act as an effective temperature).
Fixing the $\rho_a$ requires consideration of the exponentially small processes neglected above. Within the present formalism we find two: quantal (finite $M_{ph}$) effects, which lead to tunnelling through the barrier connecting the minima, and diffusion (finite Δµ) effects, which produce motion along the pseudoenergy surface. Diffusion may be analysed using the analogy between Δµ and temperature mentioned above; it leads 12 to motion between minima with a rate constant $R_{diff}$ of order $\ln R_{diff} \approx -H/\Delta\mu$ if $\Delta\mu \gg \omega_a$, and to a much smaller rate in the opposite limit. The standard equilibrium estimate of the rate due to tunnelling processes gives $\ln R_{tun} \approx -2\sqrt{H K_a (q_1 - q_2)^2}\,/\,\omega_a \sim -H/\omega_a$; corrections become important only for $\Delta\mu \sim \omega_a$. Thus we expect, roughly, that for $\Delta\mu > \omega_a$ the diffusion process dominates. We now consider in more detail the interesting quantum limit $\Delta\mu < \omega_a$ for large barrier height H. In this limit one would like to restrict attention to paths for which q(t) is almost always near one of the minima, with occasional tunnelling events in which q shifts from one minimum to another. The tunnelling amplitude $R_{tun}$ is exponentially small and (for small enough Δµ) is well approximated by its equilibrium value. However, in the real-time path integral the smallness of $R_{tun}$ arises from the nearly complete cancellation of a sum of many paths with large but oscillating amplitudes. We argue that for small Δµ the result of performing this complicated sum (which we do not treat directly) may be represented by the 'instanton' (kink) vertices shown in Fig. 2 with amplitude $R_{tun}$. A kink or antikink comes with a factor i and a sign σ = ±1 determined by the contour it is on. We find (see below) that the action of a single kink or antikink is infinite, so each kink must be followed by an antikink. The path integral Eq. 9 thus becomes Eq. 14, where the first term is the sum of the processes shown in panels a and b of Fig. 2, the second term is the sum of the processes shown in c and d, and we have denoted explicitly only the beginning and ending values of the classical component of the field.
Differentiating Eq. 14 with respect to time yields rate equations in which, to order $R^2_{tun}$, the scattering rates are given by Eq. 16. The integral in Eq. 16 yields a quantity $T_0$ with dimension of time; we may neglect higher-order terms if $R_{tun} T_0 \ll 1$. To evaluate Eq. 16 it suffices to approximate the potential in Eq. 12 by the 'telegraph' form shown in Fig. 2; the equations may then be solved by standard methods. In equilibrium a crucial role is played by the phase shift $\delta = \tan^{-1}[(a_L + a_R)V/(1 - bV)]$, with $a_{L,R}$ and b given by Eqs. 5, 6 at $q = q_a$. As shown by Ng 11 , out of equilibrium the behavior is described by complex phase shifts $\delta_{L,R}$. Here $\delta E_{neq} = (\delta'_L - \delta'_R)\,\Delta\mu$ is the change in electronic energy arising from the imbalance in chemical potential. $C_{eq}(\mu) - i\Delta E_{neq}$ is the equilibrium result evaluated at the mean chemical potential plus the nonequilibrium energy correction; it oscillates at frequency ΔE, providing convergence of the integral in Eq. 14 when ΔE = 0. The imaginary parts δ′′ of the phase shifts give rise to a time decay ('decoherence'), which is the manifestation, in the present formalism, of the decoherence introduced on semiphenomenological grounds by Rosch et al. 13 , and ensures that out of equilibrium the integral in Eq. 14 converges even when $\Delta E + \Delta E_{neq} = 0$. At short times $\Delta\mu\, t_{fi} \ll 1$, $\delta C_{dec}(t_{fi}) \sim -(\Delta\mu\, t_{fi})^2$; at long times $\delta C_{dec}(t_{fi}) \sim -|\Delta\mu\, t_{fi}|$. At intermediate times an analytical solution is not available and $\phi_2$ must be computed perturbatively or numerically. The term $\delta C_{orth}$ expresses the change in orthogonality effects arising because at nonvanishing Δµ there is destructive interference between the left and right leads 11 . We have used the perturbative expressions for the crossover functions to evaluate Γ (Eq. 16). In equilibrium and at $\beta_j = \infty$ we find that only scattering from the higher-energy to the lower-energy extremum occurs (the 'up-scattering rate' vanishes); under nonequilibrium conditions an up-scattering rate appears, suppressed by a factor of order $\Delta\mu/\Delta E$ when $|\Delta E|/\Delta\mu \gg 1$ but of the order of the down-scattering rate as $|\Delta E|/\Delta\mu \to 0$. The change in the weight $\rho_1$ of one of the two minima as Δµ/ΔE is varied is shown in the lower inset of Fig. 3. A similar result was found by Parcollet and Hooley in a 'pseudofermion' diagrammatic study of the nonequilibrium Kondo problem 14 .
Fig. 3. Spectral functions corresponding to the equilibrium state (solid curve: only minimum '1' occupied) and a strongly nonequilibrium state (dashed curve: incoherent superposition of the G's corresponding to the two minima). Lower inset: variation with pseudoenergy difference of the density matrix element corresponding to states near pseudoenergy minimum 1, computed for the parameters used in the main panel.
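For two minima, the rate equations obtained by differentiating Eq. 14 reduce to a two-state master equation whose steady state fixes the weights $\rho_1$, $\rho_2$. The sketch below illustrates this structure only; the activated form assumed for the up-scattering rate is a placeholder for the actual Γ of Eq. 16.

```python
import numpy as np

# Two-state master equation for the occupation weights rho_1, rho_2 of the
# two pseudoenergy minima. The rates below are schematic placeholders for
# the Gamma's of Eq. 16; only the structure (up-scattering switching on as
# |dE|/dmu decreases) is taken from the text.
def steady_state(gamma_down, gamma_up):
    # d rho_1/dt = gamma_down * rho_2 - gamma_up * rho_1 = 0, rho_1 + rho_2 = 1
    return gamma_down / (gamma_down + gamma_up)  # rho_1 (lower minimum)

gamma_down = 1.0
for ratio in [10.0, 3.0, 1.0, 0.3]:            # ratio ~ |dE| / dmu
    gamma_up = gamma_down * np.exp(-ratio)      # assumed activated form
    rho1 = steady_state(gamma_down, gamma_up)
    print(f"|dE|/dmu ~ {ratio:4.1f}: rho_1 = {rho1:.3f}")
# rho_1 -> 1 in the equilibrium-like limit (no up-scattering) and
# rho_1 -> 1/2 as the up- and down-scattering rates become comparable.
```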
The semiclassical Green's function follows as the ρ-weighted sum of the Green functions corresponding to the two minima (Eq. 21). The main panel of Fig. 3 shows the current, computed in the standard way by inserting the Green function given in Eq. 21 into Eq. 28 of Ref. 7 . The inset shows the evolution of the spectral density $A(\omega) = \mathrm{Im}\, G^R(\omega)$. The minimum condition $\partial\Phi/\partial q = 0$ is equivalent to the Hartree-Fock equations discussed in the recent literature 3,4 , but in those works it is assumed that at each set of parameter values only one of the minima is occupied ($\rho_a = 0$ or 1); preparation conditions are argued 3,4 to determine the state of the system, leading to multistability and switching, in contradiction to the results shown in Fig. 3. Although the bistability discussed in 3,4 does not occur, the slow dynamics governing equilibration between the minima will lead to 'telegraph' noise in the current.
Hamann 15 shows that interacting electron problems such as the Anderson model lead, after a Hubbard-Stratonovich transformation, to a model very like that studied here, but with the role of the phonon field played by the spin part of the decoupling field. Our analysis carries over directly to these problems, providing a different derivation 16 of the generally accepted results 13,14 that nonequilibrium effects suppress the formation of the Kondo resonance, and that the nonequilibrium Kondo effect is fundamentally a weak-coupling problem.
$N^*$ resonances from $K\Lambda$ amplitudes in sliced bins in energy
The two reactions $\gamma p\to K^+\Lambda$ and $\pi^-p\to K^0\Lambda$ are analyzed to determine the leading photoproduction multipoles and the pion-induced partial wave amplitudes in slices of the invariant mass. The multipoles and the partial-wave amplitudes are simultaneously fitted in a multichannel Laurent+Pietarinen model (L+P model), which determines the poles in the complex energy plane on the second Riemann sheet close to the physical axes. The results from the L+P fit are compared with the results of an energy-dependent fit based on the Bonn-Gatchina (BnGa) approach. The study confirms the existence of several poles due to nucleon resonances in the region at about 1.9\,GeV with quantum numbers $J^P = 1/2^+$, $3/2^+, 1/2^-, 3/2^-, 5/2^-$.
Introduction
The nucleon and its excited states are the simplest systems in which the non-abelian character of strong interactions is manifest. Three quarks is the minimum quark content of any baryon, and these three quarks carry the three fundamental colour charges of Quantum Chromodynamics (QCD), and combine to a colourless baryon. At present it is, however, impossible to calculate the spectrum of excited states from first principles, even though considerable progress in lattice gauge calculations has been achieved [1]. Models are therefore necessary when data are to be compared to predictions.
Quark models predict a rich excitation spectrum of the nucleon [2,3,4,5,6]. In quark models, the resonances are classified in shells according to the energy levels of the harmonic oscillator. The shell structure of the excitations is still seen in the data and reproduced in lattice calculations [1]. The first excitation shell is predicted to house five N* and two ∆* resonances with negative parity; all of them are firmly established. The second excitation shell contains missing resonances: 22 resonances (14 N*'s and 8 ∆*'s) are predicted, but only 15 are found in the mass range below 2100 MeV, and just 10 of them (5 N*'s and 5 ∆*'s) are considered established, with three or four stars in the notation of the Particle Data Group [7]. Thus 9 N*'s predicted to exist in the mass region between 1700 MeV and 2100 MeV are unobserved, or the evidence for their existence is only fair or even poor. This deficit is known as the problem of the missing resonances [8,9].
The search for missing resonances is one of the major aims of a number of experiments in which the interaction of a photon beam in the GeV energy range with a hydrogen and deuterium target is studied.
The production of Λ hyperons in pion- and photo-induced reactions, in contrast to πN elastic scattering, is ideally suited to search for new nucleon resonances and to confirm resonances that are not yet well established (see, e.g., [13,14] and references therein). Due to isospin conservation in strong interactions, only N* resonances decay into ΛK final states; there are no isospin I = 3/2 contributions. Second, the weak-interaction decay Λ → Nπ reveals the polarization P of the Λ; thus, the recoil polarization is measurable. In πN elastic scattering, the equivalent target polarization, also called P, requires the use of a polarized target. In photoproduction, a third advantage emerges: the process is not suppressed even when the πN coupling constants of N* resonances in the second excitation shell are small [13,14]. Photoproduction may hence reveal the existence of N* resonances coupling to πN only weakly. Indeed, a number of new resonances has been reported (or upgraded in the star rating) from a combined analysis of a large number of pion- and photo-induced reactions [15]. Some of the "new" resonances had been observed before [10,11,16,17,18] or were confirmed in later analyses [19,20,21]. The evidence for the existence of the new states stemmed from energy-dependent fits to the data using the BnGa approach [22,23,24,25]. The reaction γp → K⁺Λ proved to be particularly useful [26].
The ultimate aim of experiments is to provide sufficient information that the data can be decomposed into partial waves or multipoles of defined and unique spinparity. It can either be done through constructing an explicit theoretical model, or as we present here, through the reconstruction of partial-wave amplitudes and of multipoles in a truncated partial wave analysis. Limiting the partial wave series to low orbital angular momenta allows us to overcome issues with the still relatively large errors in the measurements of observable quantities.
The main goal of this paper is to test whether N* resonances in the fourth resonance region can be definitively confirmed from a fit to multipoles driving the excitation of partial waves with defined spin-parity, and to extract their properties. This is done in two ways: i.) in a standard way, where a theoretical model is constructed, its free parameters are estimated by fitting to the experimental data base, and the partial waves of the final solution are analytically continued into the complex energy plane to obtain poles; ii.) in a way which does not depend on detailed model assumptions, by using the Laurent+Pietarinen (L+P) method, where the solution of a theoretical model is replaced by a most general analytic function consisting of a number of poles and branch cuts, embodied by a fast-converging power series in a conformal variable. This variable is generated by a conformal mapping of the complex energy plane onto a unit circle: the first Riemann sheet is mapped to the outside of the unit circle, and the second Riemann sheet, where the poles are located, into the inside of the unit circle. In method ii.), poles are extracted by fitting to the single-energy partial wave decomposition, as opposed to a direct global fit to the data.
Method i.), coupled-channel energy-dependent fits, exploits the full statistical potential of the data. The effect of couplings to various other final states like Nπ, Nη, ΣK, ∆π, etc. is taken into account exactly, as well as all correlations between the different amplitudes. However, all partial waves need to be determined in one single fit, and it is difficult to verify the uniqueness of the results. In method ii.) we use single-channel L+P fits (SC L+P), where each channel is fitted individually, and multi-channel L+P fits (MC L+P), where two or more channels are fitted simultaneously. The main advantage of the model-independent approach is that we can fit one partial wave at a time, and that we avoid any dependence on the quality of the model. The drawback is that one first has to extract partial waves, and this procedure depends on the choice of higher partial waves, introducing some model dependence.
2 Construction of KΛ amplitudes in slices of their invariant mass
2.1 The partial wave amplitudes for π⁻p → K⁰Λ
Formalism
The differential cross section dσ/dΩ for the reaction π⁻p → K⁰Λ receives contributions from a spin-non-flip and a spin-flip amplitude, f and g, according to the relation
$$\frac{d\sigma}{d\Omega} = \frac{q}{k}\left(|f|^2 + |g|^2\right),$$
where q and k are the final and initial meson momenta, respectively, in the centre-of-mass frame [10]. Both amplitudes depend on the invariant mass W and on z = cos θ, with θ the scattering angle. The two amplitudes can be expanded into partial wave amplitudes,
$$f(W,z) = \sum_{l}\left[(l+1)\,A^+_l + l\,A^-_l\right]P_l(z),\qquad g(W,z) = \sum_{l}\left[A^+_l - A^-_l\right]\sin\theta\,P'_l(z),$$
where $P_l(z)$ are the Legendre polynomials. $J = |l \pm 1/2|$ is the total spin of the state; the sign in the relation for J defines the sign in $A^\pm_l$. The Λ → Nπ decay can be used to determine the decay asymmetry with respect to the scattering plane, called the recoil asymmetry P. Assuming that the target nucleon is fully polarized, P can be defined as
$$P = \frac{2\,\mathrm{Im}(f^* g)}{|f|^2 + |g|^2}.$$
When the target proton is polarized longitudinally (along the pion beam line), the spin transfer from the proton to the Λ yields the spin rotation angle β.
It is defined as β = arctan(−R/A), where A and R are the polarization components in the direction of the Λ and the orthogonal component in the scattering plane; R and A are given by bilinear combinations of f and g analogous to P [10]. The polarization variables are constrained by the relation
$$P^2 + R^2 + A^2 = 1.$$
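As an illustration of the relations above, the following sketch builds f and g from a few partial-wave amplitudes and evaluates the differential cross section (up to the flux factor q/k) and the recoil polarization P at one angle. The amplitude values are arbitrary stand-ins, not fitted ones, and the sign convention for P follows the form given above.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Toy partial-wave amplitudes A_l^{+/-}; the numbers are arbitrary
# stand-ins, not fitted values.
A_plus = {0: 0.3 + 0.1j, 1: 0.2 - 0.05j}   # J = l + 1/2
A_minus = {1: -0.1 + 0.2j}                  # J = l - 1/2

def f_g(z):
    # Partial-wave expansion of the spin-non-flip (f) and spin-flip (g)
    # amplitudes in Legendre polynomials and their derivatives.
    f = g = 0j
    lmax = max(list(A_plus) + list(A_minus))
    for l in range(lmax + 1):
        Pl = Legendre.basis(l)
        ap = A_plus.get(l, 0)
        am = A_minus.get(l, 0)
        f += ((l + 1) * ap + l * am) * Pl(z)
        g += (ap - am) * Pl.deriv()(z) * np.sqrt(1 - z**2)
    return f, g

z = np.cos(np.deg2rad(60.0))
f, g = f_g(z)
dsig = abs(f)**2 + abs(g)**2                # up to the flux factor q/k
P = 2 * np.imag(np.conj(f) * g) / dsig      # recoil polarization
print(f"dsigma/dOmega ~ {dsig:.4f}, P = {P:.4f}")
```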
Fits to the data
Data on the reaction π⁻p → K⁰Λ were taken in Chicago [27] and at the NIMROD accelerator at the Rutherford Laboratory [28,29,30]. From these data, the partial wave amplitudes $A^\pm_l$ defined in eqn. (2) should be derived. A detailed study showed that the data require angular momenta up to l = 3 or even l = 4 but do not have the precision to determine all partial wave amplitudes [31]. Therefore we try to determine at least the low-l amplitudes, in particular $A^-_1$ (= S11), $A^+_0$ (= P11), and $A^+_1$ (= P13), leading to $J^P = 1/2^-$, $1/2^+$, and $3/2^+$. The higher partial waves, those above $A^-_1$, $A^+_0$, $A^+_1$, are taken from our current BnGa fit. Figure 1 shows the data. The solid curves represent the final BnGa fit. It reproduces the data with $\chi^2/N_{data} = 570/916$; the number of free parameters is 75.
The fit returns the real and imaginary parts of the amplitudes for the S11, P11, and P13 partial waves. The S11 and P11 amplitudes are shown in Fig. 2; the P13 amplitude is shown in Ref. [31] only (since it could not be fitted with the L+P method). The solid line represents the L+P fit described below, and the energy-dependent solution BnGa2011-02 is shown as an error band. Note that the higher partial waves are kept fixed to the BnGa solution, while the other low-l amplitudes are free to adopt any values.
Formalism
The amplitude for the reaction γp → K⁺Λ can be written in the form
$$A = \bar\omega\,\varepsilon_\mu J^\mu\,\omega,$$
where ω and ω̄ are spinors representing the baryon in the initial and final state, $J^\mu$ is the electromagnetic current of the nucleon, and $\varepsilon_\mu$ characterizes the polarization of the photon. The amplitude can be expanded into four invariant (CGLN) amplitudes $F_i$ [32],
$$J^\mu \to \mathcal{F} = i\,(\boldsymbol\sigma\cdot\boldsymbol\varepsilon)\,F_1 + (\boldsymbol\sigma\cdot\hat{\mathbf q})\,\boldsymbol\sigma\cdot(\hat{\mathbf k}\times\boldsymbol\varepsilon)\,F_2 + i\,(\boldsymbol\sigma\cdot\hat{\mathbf k})(\hat{\mathbf q}\cdot\boldsymbol\varepsilon)\,F_3 + i\,(\boldsymbol\sigma\cdot\hat{\mathbf q})(\hat{\mathbf q}\cdot\boldsymbol\varepsilon)\,F_4,\qquad(8)$$
where q is the momentum of the Λ hyperon in the final state, k is the momentum of the nucleon in the initial state, calculated in the center-of-mass system of the reaction, and $\sigma_i$ are the Pauli matrices. These four functions $F_i$ depend on the invariant mass and on $z = (\mathbf{k}\cdot\mathbf{q})/(|\mathbf{k}||\mathbf{q}|) = \cos\theta$, with θ the scattering angle. A determination of these four amplitudes requires the measurement, with sufficient accuracy, of at least eight well-chosen observables [33,34,35,36,37]. For each slice in energy and angle one phase remains undetermined; it needs to be fixed from other sources. In π±p elastic scattering, the phase can be determined from the (calculable) Coulomb interference. In hyperon production, one could try to fix the phase to the phase of t-channel kaon exchange. Once the $F_i$ functions are known for each energy and angle, the results of all experiments can be predicted. The relations between the $F_i$ functions and the observables can be found, e.g., in [37]. For convenience we list the observables used in the fits: the differential cross section dσ/dΩ and the single polarization observables, the beam asymmetry Σ, the recoil asymmetry P, and the target asymmetry T (eqns. 9a-9f), which are bilinear forms in the $F_i$. The double polarization observables $O_x$, $O_z$ ($C_x$, $C_z$) define the spin transfer from linearly (circularly) polarized photons to the Λ hyperon, where the z axis is given by the meson direction; this is referred to as the primed frame. Experimentally, the data on the spin transfer from polarized photons to the Λ hyperon are sometimes presented in an unprimed frame, in which the photon momentum is chosen as the reference axis. Observables in the two frames are related by a simple rotation, e.g.
$$C_x = C_{x'}\cos\theta + C_{z'}\sin\theta,\qquad C_z = -C_{x'}\sin\theta + C_{z'}\cos\theta,$$
with similar relations holding for the quantities $O_x$ and $O_z$.
The double polarization observables $O_{x'}$, $O_{z'}$ ($C_{x'}$, $C_{z'}$) can likewise be written as bilinear forms in the $F_i$ [37].
Fig. 1. Differential cross sections dσ/dΩ, Λ recoil polarization P, and spin rotation angle β for the reaction π⁻p → K⁰Λ from ANL75 (blue) [27] and RAL (black) [28,29,30]. Note that a few differential cross sections from [27] fall into a single energy window. The angle β is 360-degree cyclic, which leads to the additional data points shown by empty circles. The solid (black) line corresponds to the L+P fit, the dashed (red) line to the fit from which the amplitudes of Fig. 2 are deduced, and the dotted (green) line to the BnGa 2011-02 fit.
Fig. 2. Real and imaginary part of the (dimensionless) S11 and P11 waves [31]. The energy-dependent solution BnGa2011-02 is shown as an error band; the band covers a variety of solutions obtained when the model space was altered. The solid curve represents an L+P fit. The narrow structure in the P11 wave is enforced by the photoproduction data.
When the F_i are known with sufficient statistical accuracy they can be expanded, for each slice in energy, into power series using Legendre polynomials P_L(z) and their derivatives:

F₁ = Σ_{L≥0} { [L M_{L+} + E_{L+}] P′_{L+1}(z) + [(L+1) M_{L−} + E_{L−}] P′_{L−1}(z) },
F₂ = Σ_{L≥1} [(L+1) M_{L+} + L M_{L−}] P′_L(z),
F₃ = Σ_{L≥1} { [E_{L+} − M_{L+}] P″_{L+1}(z) + [E_{L−} + M_{L−}] P″_{L−1}(z) },
F₄ = Σ_{L≥2} [M_{L+} − E_{L+} − M_{L−} − E_{L−}] P″_L(z).   (10)

Here, L corresponds to the orbital angular momentum in the K⁺Λ system, W is the total energy, P_L(z) are Legendre polynomials (z = cos θ), and E_{L±} and M_{L±} are electric and magnetic multipoles describing transitions to states with J = L ± 1/2. The M_{0+} and E_{1−} multipoles do not exist. Processes due to meson exchanges in the t channel may provide significant contributions to the reaction and may demand high-order multipoles. The minimal L required to describe the data can be determined by polynomial expansions of the data [38]. A more direct approach is to insert the F_i functions (eq. 10) into the expressions for the observables (eqs. 9a-9f) and to truncate the expansion at an appropriate value of L [39]. The observables are then functions of the invariant mass and the scattering angle, and the fit parameters are the electric and magnetic multipoles. In this method, the number of observables required to obtain the full information may be reduced if the number of contributing higher partial waves is not too large; still, high precision is mandatory for the expansion.

Fig. 4. (Color online) Fit to the data on dσ/dΩ, P [41] and Σ, T, O_x, O_z [43] for the γp → K⁺Λ reaction in the mass range from 1710 to 1850 MeV. The solid (black) line corresponds to the L+P fit, the dashed (red) line to the fit used to determine the multipoles of Fig. 8, and the dotted (green) line to the BnGa fit.
Fig. 5. (Color online) Same as Fig. 4, for the mass range from 1850 to 1990 MeV.
Fig. 6. (Color online) Same as Fig. 4, for the mass range from 1990 to 2130 MeV.
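The truncated expansion (10) is straightforward to evaluate numerically. The sketch below assumes the standard CGLN conventions reconstructed above; the multipole values at the bottom are arbitrary placeholders, not fit results.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_table(lmax, z):
    """P_l(z) and its first two derivatives for l = 0..lmax+1 at z = cos(theta)."""
    n = lmax + 2
    P, dP, d2P = np.zeros(n), np.zeros(n), np.zeros(n)
    for l in range(n):
        c = np.zeros(l + 1); c[l] = 1.0
        P[l] = leg.legval(z, c)
        if l >= 1: dP[l] = leg.legval(z, leg.legder(c, 1))
        if l >= 2: d2P[l] = leg.legval(z, leg.legder(c, 2))
    return P, dP, d2P

def cgln_from_multipoles(E, M, z, lmax):
    """Truncated expansion (10): CGLN F_1..F_4 from multipoles E[(l, s)],
    M[(l, s)] with s = +1 for J = l + 1/2 and s = -1 for J = l - 1/2.
    Absent multipoles (e.g. M_{0+}, E_{1-}) default to zero."""
    P, dP, d2P = legendre_table(lmax, z)
    g = lambda D, l, s: D.get((l, s), 0.0)
    F1 = F2 = F3 = F4 = 0.0j
    for l in range(lmax + 1):
        F1 += (l * g(M, l, +1) + g(E, l, +1)) * dP[l + 1]
        if l >= 1:
            F1 += ((l + 1) * g(M, l, -1) + g(E, l, -1)) * dP[l - 1]
            F2 += ((l + 1) * g(M, l, +1) + l * g(M, l, -1)) * dP[l]
            F3 += (g(E, l, +1) - g(M, l, +1)) * d2P[l + 1]
            F3 += (g(E, l, -1) + g(M, l, -1)) * d2P[l - 1]
        if l >= 2:
            F4 += (g(M, l, +1) - g(E, l, +1)
                   - g(M, l, -1) - g(E, l, -1)) * d2P[l]
    return F1, F2, F3, F4

# Placeholder multipoles (arbitrary complex numbers, not fit results):
E = {(0, +1): 0.8 + 0.3j, (1, +1): 0.1j, (2, -1): 0.05}
M = {(1, -1): -0.4 + 0.2j, (1, +1): 0.15}
print(cgln_from_multipoles(E, M, z=0.3, lmax=2))
```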
Fits to the data
From the results of the BnGa analysis we expect that, in the energy range considered here, E_{0+}, M_{1−}, and E_{1+} yield the largest contributions, followed by M_{1+} and E_{2−}; all others contribute with increasingly smaller importance, and higher multipoles become negligible. First fits showed that it is not possible, given the statistical and systematic accuracy of the data, to determine all significant partial waves: due to strong correlations between the parameters, the errors became large and the resulting multipoles showed large point-to-point fluctuations. Hence we decreased the number of freely fitted multipoles; the higher multipoles were fixed to the BnGa results. These multipoles are shown in Fig. 3. Reasonably small errors were obtained when the four multipoles E_{0+}, M_{1−}, E_{1+}, and M_{1+} were fitted. The errors increased only slightly when the multipoles E_{2−}, M_{2−}, and E_{2+} were fitted in addition but constrained to the BnGa solution by a penalty function.
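Schematically, constraining multipoles "by a penalty function" amounts to appending extra residuals that pull the softly constrained parameters toward a reference solution. The sketch below illustrates this with a linear toy model; all names, data, and the penalty weight are invented for illustration and do not reproduce the actual BnGa fit.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, model, data, sigma, p_ref, mask, w_penalty):
    """Data residuals plus penalty terms (params[mask] - reference) / w.
    mask selects the 'higher' multipoles that are only softly tied
    to the reference (e.g. an energy-dependent solution)."""
    r_data = (model(params) - data) / sigma
    r_pen = (params[mask] - p_ref[mask]) / w_penalty
    return np.concatenate([r_data, r_pen])

# Toy example: 'observables' are a linear map of 6 multipole parameters.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 6))
p_true = np.array([1.0, -0.5, 0.3, 0.1, 0.05, -0.02])
data = A @ p_true + 0.05 * rng.normal(size=40)
sigma = 0.05 * np.ones(40)
mask = np.arange(6) >= 4            # softly constrain the two smallest ones
p_ref = p_true.copy()               # stand-in for the reference solution

fit = least_squares(residuals, x0=np.zeros(6),
                    args=(lambda p: A @ p, data, sigma, p_ref, mask, 0.02))
print(fit.x)
```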
The reaction γp → K⁺Λ has been studied extensively by the CLAS collaboration. The early measurement of the differential cross sections dσ/dΩ [40] was later superseded by a new measurement reporting the differential cross sections and the recoil polarization [41]. The spin transfer from circularly polarized photons to the final-state Λ hyperon, the quantities C_x and C_z, was reported in [42]. The polarization observables Σ, T, O_x, O_z have been determined recently [43]. The data are shown in Figs. 4-6 and are used to determine the photoproduction multipoles in a truncated partial wave analysis. The structures in the resulting multipoles emerge reliably only when the multipole series is truncated and only a few multipoles are fitted freely. In Fig. 9 we show the results from one of our tests: here the seven largest multipoles, E_{0+}, M_{1−}, E_{1+}, M_{1+}, E_{2−}, M_{2−}, and E_{2+}, were all left free. In several mass bins the resulting multipoles show an erratic behavior; the results become unstable. Likewise, it was important to include the multipoles with large orbital angular momenta: even though they are individually small, neglecting them (by assuming that they are identically zero) leads to biased results. Furthermore, these multipoles fix the overall phase.
Sandorfi, Hoblit, Kamano, and Lee [37] have reconstructed the photoproduction amplitudes for the reaction γp → K⁺Λ. For the high partial waves, they used the Born amplitude. In part of their fits, they fitted all waves with L ≤ 3 freely and determined the phases as differences to the E_{0+} phase; in other fits, they left the E_{0+} phase free and fitted all waves with L ≤ 2. The resulting multipoles showed a wide spread. They concluded that a very significant increase in solid-angle coverage and statistics is required if all partial waves up to L = 3 are to be determined.
BnGa fits to the data
The BnGa partial wave analysis uses a K-matrix formalism to fit data on pion- and photo-induced reactions and to extract the leading singularities of the scattering or production processes. The formalism is described in detail in a series of publications [22,23,24,25]. Here we briefly outline the dynamical part of the method.
The pion-induced reaction π⁻p → K⁰Λ from the initial state i = π⁻p to the final state j = K⁰Λ is described by a partial wave amplitude A^(β)_{ij}. It is given by a K-matrix which incorporates a summation of resonant and non-resonant terms in the form

A_{ij} = [K (1 − i ρ K)^{−1}]_{ij}.

The multi-index β denotes the quantum numbers of the partial wave; it is suppressed in the following. The factor ρ represents the phase-space matrix for all allowed intermediate states; ρ_i and ρ_j are the phase-space factors for the initial and the final state. The K-matrix parametrizes resonances and background contributions,

K_{ab} = Σ_α g^α_a g^α_b / (M_α² − s) + f_{ab},

where g^α_{a,b} are the coupling constants of the pole α to the initial and the final state, and the background terms f_{ab} describe non-resonant transitions from the initial to the final state.
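A minimal numerical sketch of the K-matrix construction above, for a two-channel toy problem; the pole masses, couplings, background terms, and the constant phase-space factors are placeholders, not BnGa parameters.

```python
import numpy as np

def kmatrix_amplitude(s, poles, f_bg, rho):
    """Two-channel K-matrix amplitude A = K (1 - i rho K)^(-1).

    poles : list of (M, g) with g = couplings (g_1, g_2) of the pole
    f_bg  : constant background matrix f_ab
    rho   : diagonal phase-space matrix at this s (crude toy here)"""
    K = np.array(f_bg, dtype=complex)
    for M, g in poles:
        g = np.asarray(g)
        K += np.outer(g, g) / (M**2 - s)
    return K @ np.linalg.inv(np.eye(2) - 1j * rho @ K)

poles = [(1.65, (0.4, 0.3)), (1.90, (0.2, 0.5))]   # masses in GeV
f_bg = [[0.1, 0.05], [0.05, 0.0]]
for W in (1.6, 1.7, 1.8, 1.9, 2.0):
    rho = np.diag([1.0, 1.0])                       # constant, for simplicity
    A = kmatrix_amplitude(W**2, poles, f_bg, rho)
    print(W, A[0, 1])                               # piN -> K Lambda element
```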
For photoproduction reactions, we use the helicity (h) dependent amplitude a^h_b for the photoproduction of the final state b [44],

a^h_b = Σ_a P^h_a [(1 − i ρ K)^{−1}]_{ab},   with   P^h_a = Σ_α A^h_α g^α_a / (M_α² − s) + F_a,

where A^h_α is the photo-coupling of a pole α and F_a a non-resonant transition. The helicity amplitudes A^{1/2}_α and A^{3/2}_α are defined as residues of the helicity-dependent amplitude at the pole position and are complex numbers [45].
In most partial waves, a constant background term is sufficient to achieve a good fit; only the background in the meson-baryon S-wave required a more complicated, energy-dependent form. Further background contributions are obtained from the reggeized exchange of vector mesons [22].
Here, g(t) = g₀ exp(−bt) represents a vertex function and a form factor, α(t) describes the trajectory, ν = (s − u)/2, ν₀ is a normalization factor, and ξ is the signature of the trajectory. Pion and Pomeron exchange both have positive signature [22]. Additional Γ-functions eliminate the poles at t < 0. The Kaon trajectory is parametrized as α(t) = −0.25 + 0.85 t, with t given in GeV².

The data on partial wave amplitudes (Fig. 2) and on the photoproduction multipoles (Fig. 8) were included in the data base of the BnGa partial wave analysis. They are fitted jointly with data on Nη, ΛK, ΣK, Nπ⁰π⁰, and Nπ⁰η from both photo- and pion-induced reactions; thus inelasticities in the meson-baryon system are constrained by real data. A list of the data used for the fit can be found in [15,46,47,48] and on our website (pwa.hiskp.uni-bonn.de). In Fig. 2, the energy-dependent BnGa solution is shown as the error band.

The main task of the single-channel Laurent+Pietarinen expansion (SC L+P) is to extract pole positions from given partial waves for one reaction. The driving concept behind the method is to replace an elaborate theoretical model by a local power-series representation of the partial wave amplitudes [49]. The complexity of a partial-wave-analysis model is thus replaced by a much simpler, model-independent expansion which just exploits analyticity and unitarity. The L+P approach separates the pole and regular parts in the form of a Mittag-Leffler expansion¹ and, instead of modeling the regular part with some physical model, represents it effectively by a conformal-mapping-generated, rapidly converging power series with well-defined analytic properties, called a Pietarinen expansion². The method thus replaces the regular part calculated in a model by the simplest analytic function which has the correct analytic properties of the analyzed partial wave (multipole), and fits the given input. In such an approach the model dependence is minimized and is reduced to the choice of the number and location of the L+P branch points used in the model. The method is applicable to both theoretical and experimental input, and represents the first reliable procedure to extract pole positions directly from experimental data with minimal model bias. The L+P expansion based on the Pietarinen expansion was used in earlier papers in the analysis of pion-nucleon scattering data [51,52,53,54] and in several few-body reactions [21,55,56]. The procedure has recently been generalized to the multi-channel case [57]. The generalization to the multichannel L+P method used in this paper is performed in the following way: i) separate Laurent expansions are made for each channel; ii) pole positions are fixed for all channels; iii) residua and Pietarinen coefficients are varied freely; iv) branch points are chosen as in the single-channel model; v) the single-channel discrepancy function D_dp (see Eq. (5) in Ref. [56]), which quantifies the deviation of the fitted function from the input, is generalized to a multi-channel quantity D^a_dp by summing up all single-channel contributions; and vi) the minimization is performed over all channels in order to obtain the final solution.
The formulae used in the L+P approach are collected in Table 1. L+P is a formalism which can be used for extracting poles from any given set of data, either theoretically generated or produced directly from experiment. If the data set is theoretically generated, we can never reconstruct the analytical properties of the background put into the model; we can only give the simplest analytic function which, on the real axis, gives results that are in practice indistinguishable from the given model values. Analyzing partial waves coming directly from experiment is therefore much more favourable for L+P, because no such demands arise: the analytic properties are unknown, so there is no reason why the simplest perfect fit we offer should not be the true result. As in principle we do not care whether the input is generated by theory or otherwise, in the set of formulas given in Table 1 we denote any input fitted with the L+P function T_a(W) generically as T_{a,exp}(W).

¹ The Mittag-Leffler expansion [50] is the generalization of a Laurent expansion to a situation with more than one pole. From now on, for simplicity, we will refer to it simply as a Laurent expansion.

² A conformal-mapping expansion of this particular type was introduced by Ciulli and Fisher [51,52], was described in detail and used in pion-nucleon scattering by Esko Pietarinen [53,54], and was named a Pietarinen expansion by G. Höhler in [10].
In this paper we fit partial wave data: the discrete data points come from a semi-constrained single-energy fit to the KΛ photoproduction data, obtained by fixing the partial waves with L > 2 to the Bonn-Gatchina energy-dependent partial waves while leaving the lower ones free. We perform a multichannel fit (MC L+P) when possible by including single-energy data from the πN → KΛ process, and we fit both multipoles of the same angular momentum at the same time in the coupled-multipole fit (CM L+P). The regular background part is represented by three Pietarinen expansion series, and all free parameters are fitted. The first Pietarinen expansion, with branch point x_P, is restricted to an unphysical energy range and represents all left-hand cut contributions. The next two Pietarinen expansions describe the background in the physical range, with branch points x_Q and x_R respecting the analytic properties of the analyzed partial wave. The second branch point is mostly fixed to the elastic-channel branch point; the third one is either fixed to the dominant channel threshold or left free. Thus, only rather general physical assumptions about the analytic properties are made, such as the number of poles and the number and positions of branch points, and the simplest analytic function with a set of poles and branch points is constructed.
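The structure of an L+P amplitude, a pole part plus Pietarinen series with chosen branch points, can be sketched in a few lines. The branch-point positions, the α shape parameters, and all coefficients below are placeholders, not fitted values.

```python
import numpy as np

def pietarinen(W, xb, alpha, coeffs):
    """One Pietarinen series: sum_k c_k X(W)^k with the conformal
    variable X = (alpha - sqrt(xb - W)) / (alpha + sqrt(xb - W)),
    branch point xb; W may be complex."""
    root = np.sqrt(xb - W + 0j)
    X = (alpha - root) / (alpha + root)
    return sum(c * X**k for k, c in enumerate(coeffs))

def lp_amplitude(W, poles, series):
    """Laurent (pole) part plus a regular part built from several
    Pietarinen series; all parameter values below are placeholders."""
    T = sum(res / (W - Wp) for Wp, res in poles)
    T += sum(pietarinen(W, xb, a, c) for xb, a, c in series)
    return T

poles = [(1.66 - 0.05j, 0.02 + 0.01j)]             # pole position, residue
series = [(1.0, 1.0, [0.1, -0.05, 0.02]),           # left-hand cut stand-in
          (1.61, 1.2, [0.05, 0.01])]                # K Lambda threshold
W = np.linspace(1.62, 2.1, 5)
print(lp_amplitude(W, poles, series))
```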
In the compilation of our results we show the results of four fits: a) the BnGa coupled-channel fit to the complete data base, including the energy-independent solutions for π⁻p → K⁰Λ and γp → K⁺Λ presented here; b) a single-channel L+P fit to the energy-independent solution for π⁻p → K⁰Λ (SC^{πN,KΛ}_{L+P}); c) a single-channel L+P fit to the energy-independent solution for γp → K⁺Λ (SC^{γN,KΛ}_{L+P}); and d) a multi-channel L+P fit to the energy-independent solutions for π⁻p → K⁰Λ and γp → K⁺Λ (CC L+P).
L+P Fits
We have fitted the J^P = 1/2⁻ partial wave from the energy-independent amplitude for the reaction π⁻p → K⁰Λ in a SC^{πN,KΛ}_{L+P} fit. A χ² = 2.45 was obtained for the 28 data points with 23 parameters. We needed two poles, one at 1667 MeV and a second one at 1910 MeV. Due to the low statistics of the data, the results from the single-channel fit show large errors.
The 48 data points on the E_{0+} multipole from γp → K⁺Λ required only one pole, close to 1900 MeV. The strong peak at low mass in the imaginary part of the E_{0+} multipole is reproduced by the function Y_a(W) with a branch point at the K⁺Λ threshold. Note that the lowest mass bin for the E_{0+} multipole starts at 1700 MeV, significantly above the N(1650)1/2⁻ mass. The data were described with χ² = 0.48 and 19 parameters in a SC^{γN,KΛ}_{L+P} fit. Compared to the pion-induced reaction, the errors on the higher-mass resonance (at 1900 MeV) are considerably reduced.

Table 1. Formulae defining the Laurent+Pietarinen (L+P) expansion.
The common fit to both data sets (with 76 data points) used two poles; the fit resulted in χ² = 0.86 for 37 parameters. The results are shown in Table 2 and Figs. 8 and 10.
The real parts of the pole position of the N(1650)1/2⁻ resonance are nicely consistent when the three values are compared; the imaginary part is likely too narrow in the L+P fit. The magnitudes of the inelastic pole residue are consistent at the 2σ level when the BnGa and CC L+P fits are compared. The phases, however, seem to be inconsistent.
The N(1895)1/2⁻ pole positions are well defined, with acceptable errors, and consistent when the four analyses are compared; only the single-channel L+P fit to photoproduction data returns a slightly too narrow width. All four analyses yield compatible magnitudes of the inelastic pole residues; the phases disagree at the 2σ level. The magnitudes and the phases of the E_{0+} multipole determined by the BnGa fit agree well with the values of the L+P fits within the rather large uncertainties. Note that the errors in the CC L+P and BnGa fits have different origins: the L+P errors are of a statistical nature, while the BnGa errors are derived from the spread of results of a variety of different fits. Both approaches establish the need for N(1650)1/2⁻ and unquestionably require N(1895)1/2⁻.
J^P = 1/2⁺ wave
We have fitted the J^P = 1/2⁺ wave using the P11 energy-independent amplitude for the π⁻p → K⁰Λ reaction and the M_{1−} multipole from γp → K⁺Λ. The first data set, π⁻p → K⁰Λ, required two poles. The first pole was located near 1700 MeV; the second one was found near 2100 MeV, albeit with large error bars: the admitted range covers masses from ∼1790 to ∼2375 MeV. The photoproduction data required only one pole, close to 1900 MeV. The CC L+P fit to both data sets was performed with two poles.
The results are shown in Table 2 and Figs. 8 and 11. The 28 data points for π⁻p → K⁰Λ were fitted with 23 parameters and χ² = 0.67. The 48 data points on the M_{1−} multipole were described with χ² = 0.366 and 19 parameters. The common fit to both data sets resulted in χ² = 0.505 for 41 parameters. Both approaches, the BnGa and the CC L+P fit, establish the need for N(1710)1/2⁺ and unquestionably require N(1880)1/2⁺.
The N(1710)1/2⁺ mass is consistent between the CC L+P and the BnGa fits; its width tends to be smaller in the CC L+P fit (see Table 2), but the difference is only 1.7σ. The magnitudes of the inelastic residue for this resonance have large error bars in the L+P fits and cover zero; we therefore give upper limits only. The limits are compatible with the BnGa result. In spite of the large errors in the magnitudes, the phases are consistent.
The masses of the N(1880)1/2⁺ resonance from the BnGa and CC L+P fits are compatible, but not the widths, and the inelastic residues disagree slightly. Both the single-channel SC L+P and the coupled-channel CC L+P fits agree that the N(1880)1/2⁺ width should be smaller than ∼40 MeV, while BnGa finds a normal hadronic width. However, we have performed a CC L+P fit imposing a width of 150 MeV: when the result of this fit is compared to the observables with 674 data points (Figs. 4 to 7), the fit deteriorates only minimally; the χ² increases by 4.5 units. We conclude that the N(1880)1/2⁺ resonance is definitely required in this nearly model-independent analysis and that it has a normal hadronic width. The magnitudes of the inelastic residues and of the M_{1−} multipole agree reasonably well; the phases of the inelastic residues are again inconsistent, while the M_{1−} multipole phases agree well within their uncertainties.
J^P = 3/2⁺ wave
The J^P = 3/2⁺ wave was not derived from the pion-induced reaction π⁻p → K⁰Λ, so the two photoproduction multipoles E_{1+} and M_{1+} were fitted simultaneously in the coupled-multipoles L+P mode (CM L+P). The CM L+P fit required only one pole, close to 1900 MeV; no N(1720)3/2⁺ was needed. Due to the presence of important thresholds (ΣK, N(1520)3/2⁻π, N(1535)1/2⁻π), the N(1720)3/2⁺ resonance has a rather complicated pole structure, and we refrain from discussing this resonance here. The fit to the 96 data points in the two data sets is shown in Fig. 8. The fit returned χ² = 0.42 for 35 parameters. The results are shown in Table 3. The poles from the L+P and BnGa fits are fully consistent. We conclude that N(1900)3/2⁺ is definitely confirmed in this nearly model-independent analysis.
J^P = 3/2⁻ wave
Due to limited statistics, the J^P = 3/2⁻ wave could not be derived from the pion-induced reaction π⁻p → K⁰Λ. Thus, only the two photoproduction multipoles E_{2−} and M_{2−} were fitted, in the coupled-multipoles mode (CM L+P). The L+P fit to the 96 data points in the two data sets returned χ² = 0.55 for 36 parameters; the fit is shown in Fig. 8. The fit required only one pole, close to 1900 MeV; no N(1700)3/2⁻ was needed. A low-mass pole at about 1700 MeV is required in the BnGa fit, but due to the complicated pole structure in this mass region we again refrain from discussing its properties here. The results of the L+P and the BnGa fits are shown in Table 3. The poles from the two fits are found to be inconsistent: in the BnGa model, a mass of 1870±25 MeV is found, and there is a second pole (not discussed here) at 2150 MeV, while the L+P fit does not find evidence for a two-pole structure and places the mass of the one pole at 1977±41 MeV.
J^P = 5/2⁻ wave
The J^P = 5/2⁻ wave was not derived from the pion-induced reaction π⁻p → K⁰Λ, and in this case only the E_{2+} multipole could be determined from the data. The single-channel L+P mode (SC^{γN,KΛ}_{L+P}) was hence used to fit the data. The fit required one pole at about 2000 MeV. The fit to the 48 data points returned χ² = 0.60 for 25 parameters. The results are shown in Table 3 and Fig. 8. The pole positions from the L+P and BnGa fits are fully consistent. We conclude that N(2060)5/2⁻ is confirmed.

Comparison to other groups

Figure 12 shows the real and imaginary parts of the low-L partial-wave amplitudes from Refs. [31] and [58]. The amplitudes are similar in magnitude but differ in their shape. The JüBo fit does not contain N(1895)1/2⁻, the third resonance in the J^P = 1/2⁻ wave, which is confirmed here and in a recent analysis of γp → ηp, η′p [59]. Both the analysis in Ref. [58] and this work introduce N(1710)1/2⁺, a resonance not needed in Ref. [12], but here we find evidence for an additional resonance in this partial wave, N(1880)1/2⁺. Thus differences in the partial-wave amplitudes are to be expected.
There is a large number of papers devoted to partial wave analyses of the reaction γp → K⁺Λ. We discuss here only recently published papers which include at least one measurement of a double polarization variable.
A number of groups have analyzed pion- or photo-induced reactions with a Kaon and a Λ hyperon in the final state. Wu, Xie, and Chen [63] studied the reaction π⁻p → K⁰Λ up to W = 1.76 GeV in an isobar model; the isobars include hyperon exchanges in the u channel and K* exchange in the t channel. The leading s-channel contributions were found to be due to N(1535)1/2⁻, N(1650)1/2⁻, and N(1720)3/2⁺ formation. Xiao, Ouyang, Wang, and Zhong [64] studied the mass range below 1.8 GeV and emphasize the leading role of N(1535)1/2⁻ and N(1650)1/2⁻. The Jülich-Bonn (JüBo) group [58] described the data on π⁻p → K⁰Λ simultaneously with other pion-induced reactions in an analytic, unitary, coupled-channel approach. SU(3) flavor symmetry was used to relate both the t- and the u-channel exchanges. The authors fit the available data (see Fig. 1); all resonances found in the GWU analysis [12] were introduced in the fit, together with four further ones.
Mart, Clympton and Arifi [65,66] take into account the set of resonances used in the BnGa analysis [15]. They find that spin-5/2 resonances play an important role and have to be taken into account. In their best fit, the authors use 17 N* resonances. The three resonances N(1650)1/2⁻, N(1720)3/2⁺, and N(1900)3/2⁺ provide the most important contributions.
In Fig. 13, the photoproduction multipoles from the BnGa analysis are compared with those of Skoupil and Bydžovský [62] and of Mart, Clympton and Arifi [65]. There is not much similarity, even though partly the same resonances are used. But possibly this is not too surprising: in a comparison of the best-studied process, γp → πN, significant differences were observed in the multipoles obtained by the BnGa, JüBo, and GWU groups [67], even though all three groups were capable of describing the data reasonably well. However, new data enforced a considerable reduction of the spread of the three results. In any case, the comparison demonstrates that further work is needed before the γp → K⁺Λ reaction can be considered well understood.

Fig. 12. Real and imaginary part of the (dimensionless) S11 and P11 waves [31]. The energy-dependent solution BnGa2011-02 is shown as an error band. The solid curve represents an L+P fit. The dashed (green) curve is given by the solution JüBo2015-B of the JüBo group [58]. The BnGa and JüBo groups use different sign conventions; the JüBo amplitudes are shown with an inverted sign.
Summary
For a long time it has been anticipated that photoproduction experiments will provide measurements that are sufficient in number and statistical accuracy to construct the four complex amplitudes governing the photoproduction of an octet baryon and a pseudoscalar meson. A determination of these four amplitudes requires the measurement with sufficient accuracy of at least eight carefully selected observables [36], and one phase still remains undetermined. Alternatively, the multipoles driving the excitation of specific partial waves can be deduced from the data in a truncated partial wave analysis.
In this paper, we have performed such a truncated partial wave analysis of the reaction γp → K⁺Λ. The CLAS experiments studied this reaction and reported data on the differential cross section dσ/dΩ, on the polarization observables P, T, and Σ, and on the spin correlation parameters O_x, O_z, C_x, C_z. The data cover the resonance region from 1.71 to 2.13 GeV, mostly in 20 MeV wide bins. Thus, at the moment, these data offer the best chance to perform a truncated partial wave analysis.
In a first step, we determined the number of multipoles that can be deduced from the data. When the number of free multipoles is increased in the energy-independent analysis, the errors in the determination of the multipoles increase, and one has to balance precision on the one hand against the number of multipoles on the other. It turned out that only the four largest multipoles, E_{0+}, M_{1−}, E_{1+}, and M_{1+}, can be determined without constraints when good precision of the multipoles is required. Three further multipoles, E_{2−}, M_{2−}, and E_{2+}, could be derived from the data when a penalty function forced the fit not to deviate too much from an energy-dependent solution. In addition to the photoproduction multipoles, we also used partial wave amplitudes for the reaction π⁻p → K⁰Λ which had been determined earlier.
The energy-dependent solution was found within the BnGa approach. In this approach, a large number of data on pion- and photo-induced reactions is fitted in a coupled-channel analysis. The data base includes Nπ, Nη, ΛK, ΣK, Nππ, and Nπη final states and, in an iterative procedure, the partial wave amplitudes and photoproduction multipoles derived here. The higher photoproduction multipoles that could not be determined in the fits to the CLAS data were kept fixed to multipoles from the BnGa analysis.
All multipoles considered here, E_{0+}, M_{1−}, E_{1+}, M_{1+}, E_{2−}, M_{2−}, and E_{2+}, are fitted within a Laurent-Pietarinen expansion. This expansion exploits the analytic structure of the S-matrix: in the vicinity of a resonance position (and reasonably close to the real axis), the photoproduction amplitude is determined by poles and the opening of thresholds. When this analytic structure is imposed, fits to the photoproduction multipoles and partial wave amplitudes require no further dynamical input; the fits do not impose any model bias. The Laurent-Pietarinen fits were performed to the photoproduction multipoles, to the partial wave amplitudes from the π⁻p → K⁰Λ reaction, and to both in a coupled-channel fit. The results are then compared to those from the BnGa fit. The two resonances N(1895)1/2⁻ and N(1900)3/2⁺ are firmly established; the results on their masses, widths, and other properties agree well. The N(1880)1/2⁺ resonance is also definitely required, but there remains the question of its width: within the Laurent-Pietarinen expansion, its width is 40 MeV or less, while within the BnGa approach it is about 150 MeV. The statistical significance of the narrow width is, however, very small.

Fig. 13. The results of the energy-independent analysis are shown with error bars; the BnGa fit [15] to the real part is represented by (black) thick solid curves, to the imaginary part by thick dashed curves. The L+P fit, shown by (cyan) long-dashed and long-dash-dotted curves, often coincides with the BnGa fit. The fit of Ref. [62] is shown by thin (green) solid or dashed curves, and the fit of Refs. [65,66] by thin (magenta) dotted or dash-dotted curves, again for the real or imaginary parts, respectively. Refs. [65,66] use a different sign convention; these amplitudes are shown with an inverted sign.
The two resonances N(1875)3/2⁻ and N(2060)5/2⁻ are derived from photoproduction multipoles which are constrained to follow the BnGa solution. In the J^P = 3/2⁻ partial wave, BnGa finds two poles; in the Laurent-Pietarinen fit, only one pole is observed, at a mass between the two BnGa poles. The BnGa and Laurent-Pietarinen results on N(2060)5/2⁻ are nicely consistent.
Summarizing, we can claim that several resonances found in the BnGa energy-dependent multichannel analysis are confirmed by fits based on a Laurent-Pietarinen expansion with a minimal model dependence.
Knowledge structure and future research trends of body–mind exercise for mild cognitive impairment: a bibliometric analysis
Background Mild cognitive impairment (MCI) is a common neurodegenerative disorder that poses a risk of progression to dementia. There is growing research interest in body–mind exercise (BME) for patients with MCI. While we have observed rapid growth in interest in BME for MCI over the past 10 years, no bibliometric analysis has investigated the knowledge structure and research trends in this field. Consequently, the objective of this research is to conduct a bibliometric analysis of global publications on BME for MCI from 2013 to 2022. Methods A total of 242 publications in the field of BME for MCI were retrieved from the Web of Science Core Collection. Bibliometric analysis, including performance analysis, science mapping, and visualization, was performed using CiteSpace, VOSviewer, and Microsoft Excel. Results Publications and citations in the field of BME for MCI have shown a rapidly increasing trend over the last decade. Geriatrics & Gerontology and Neurosciences were the most frequently involved research categories. China (78 documents) and the USA (75 documents) contributed the largest numbers of publications and had the strongest international collaborative networks. Fujian University of Traditional Chinese Medicine contributed the largest number of publications (12 documents), and Chen, L of this institution was the most prolific author (12 documents). Frontiers in Aging Neuroscience (16 documents) and JOURNAL OF ALZHEIMER'S DISEASE (12 documents) were the most prolific journals. Tai Chi and Baduanjin, as specific types of BME, were the hotspots of research in this field, while evidence synthesis and guidelines might be future research trends. Conclusion In the last decade, there has been rapid growth in scientific activities in the field of BME for MCI. The results of this study provide researchers and other stakeholders with the knowledge structure, hotspots, and future research trends in this field.
Introduction
As global aging accelerates, cognitive decline, including mild cognitive impairment (MCI), has attracted growing concern from researchers, clinical practitioners, and a wide range of other stakeholders (1). As a state between normal cognitive aging and dementia, MCI has emerged as a priority in both research and clinical practice for delaying the development of neurodegenerative disease (2). MCI typically consists of memory impairments accompanied by abnormal memory test scores, affecting professional and social activities, but with maximum preservation of activities of daily living (3). The latest meta-analysis proposed that 15.6% of adults over 50 years old were suffering from MCI (4), while the incidence of MCI was 22.5, 40.9, and 60.1% in people aged 75-79 years, 80-84 years, and over 85 years, respectively (5). Due to the natural trajectory of cognitive decline, aging is a risk factor for MCI, while males are more prone to suffer from MCI than females (6). More importantly, MCI carries a varying cognitive developmental trajectory, including reversal to normal cognitive function, retention of stability, and progression to dementia (7). Previous research suggested that approximately 80% of patients with MCI may eventually convert to Alzheimer's disease, with annual conversion rates around 10-15% (8-10). Meanwhile, it was reported that 18% of patients reverted from MCI to normal cognitive function (11). Nevertheless, the risk of cognitive decline is still higher for patients with MCI than for people with a normal trajectory of cognitive decline (12). Furthermore, patients with MCI commonly present a variety of neuropsychiatric symptoms and a reduced ability to perform activities of daily living (13). The average annual direct medical costs for an individual with MCI in the USA were estimated at $6,499, substantially higher than for those without MCI ($2,969) (14). Therefore, MCI, as a prevalent cognitive disorder among middle-aged and elderly people, demands more active prevention and treatment, especially in an aging world.
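A quick compounding check connects the two figures quoted above: with a constant annual conversion rate r, the cumulative probability of converting within n years is 1 − (1 − r)^n. The 5- and 10-year horizons in the snippet are assumptions made purely for illustration, not figures from the cited studies.

```python
# Cumulative probability of converting from MCI to Alzheimer's disease
# within n years, assuming a constant annual conversion rate r.
for r in (0.10, 0.15):
    for n in (5, 10):
        p = 1 - (1 - r) ** n
        print(f"r = {r:.0%}, {n} years: {p:.0%} converted")
# With r around 10-15%, roughly 65-80% convert within a decade,
# consistent in magnitude with the ~80% figure cited above.
```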
Even though a number of pharmacological interventions have been developed to treat MCI (15-17), none of them has been reported to be effective (7). Meanwhile, exercise has shown potential value as a non-invasive and highly feasible non-pharmacological treatment for patients with MCI (18). Body–mind exercise (BME), which combines body movement, mental focus, and controlled breathing to improve strength, balance, flexibility, and overall health (19), has been reported to benefit cognitive disorders such as MCI among middle-aged and elderly adults (20-22). Fabre et al. (23) proposed that combined aerobic and mental training was more effective than separate aerobic or mental training for the memory quotient among healthy older adults. Similarly, Theill et al. (24) found that 10 weeks of combined cognitive and physical training was more effective for cognitive performance than cognitive training alone among older adults. More recently, Tai Chi, Yoga, Qigong, and other types of BME have been increasingly employed to enhance cognitive function and to manage MCI among older adults (25-27).
As mentioned above, researchers have conducted quite a few in-depth original investigations of BME for MCI, resulting in several evidence syntheses, such as systematic reviews and meta-analyses (28, 29). These syntheses employ systematic approaches that allow for the robust extraction of qualitative or quantitative information from publications and then identify the existing evidence on specific research questions (30). However, such syntheses are not well suited to a broad and rapidly developing research field and fail to deal with highly heterogeneous publications (30). For instance, in the field of BME for MCI, the original investigations include randomized controlled trials and laboratory mechanistic studies. Existing approaches, whether systematic reviews or meta-analyses, are unable to synthesize evidence on intervention effectiveness and evidence from imaging reports simultaneously.
Therefore, Nakagawa et al. (31) proposed a "Research Weaving" framework that combines bibliometrics and systematic mapping to reveal and visualize the knowledge structure and research trends in a research field. The major advantage of this approach is that it allows for the synthesis of a large number of heterogeneous scientific publications, thus providing better insights into the knowledge structure and future research trends (30). Considering these advantages, bibliometric analyses have become a standard instrument in science policy and research management (32). In research fields related to BME for MCI, such as exercise for Parkinson's disease (33) and neuroinflammation-induced MCI (34), bibliometric analysis has provided a one-stop overview of the field, as well as the identification of knowledge gaps. Nevertheless, there is an absence of bibliometric analysis in the field of BME for MCI. Therefore, a robust bibliometric analysis of BME for MCI, a rapidly growing field with highly heterogeneous publications, is required to support researchers in identifying future research directions and to provide quantitative evidence for policy makers to identify future funding priorities.
Accordingly, this research aims to assess the knowledge structure of the field of BME for MCI from 2013 to 2022 and to predict future research directions. Specifically, the present research will answer the following research questions: (1) Which countries, institutions, authors, disciplines, journals, and references significantly contributed to research in the field of BME for MCI? (2) What are the research trends in the field of BME for MCI from 2013 to 2022? (3) What are the hotspots of research in the field of BME for MCI?
Data source and search strategy
This study was conducted in rigorous accordance with the step-by-step guidelines for bibliometric analysis (30).
All eight indexes of the Web of Science Core Collection (WoSCC) were utilized as the electronic database for this bibliometric analysis. The rationale for WoSCC is that, as this study aims to perform a bibliometric analysis of the multidisciplinary field of BME for MCI, the usage of specialist field databases such as PubMed and SPORTDiscus might omit relevant records. Furthermore, using multiple databases can introduce bias into subsequent bibliometric analyses because of differences in data formats and in the information included by various databases (e.g., the different regulations of WoSCC and Scopus on author initials) (35). As WoSCC provides the most extensive information for bibliometric analysis compared to other multidisciplinary electronic databases such as Google Scholar and Scopus, WoSCC was chosen as the only database in this study (36). Moreover, WoSCC has been successfully used by researchers as a data source for bibliometric analysis in the field of Tai Chi for health (37). As recommended by the guidelines for conducting bibliometric analyses, the search strategy used for this study was defined by the keywords used in previous reviews in the field of BME for MCI, as well as by brainstorming among all the authors (30). Taking into account updates of the WoSCC electronic database, all data were retrieved on March 13, 2023, to eliminate search bias. Keywords related to BME were entered in the topic field and recorded as #1, while keywords related to MCI were entered in the topic field and recorded as #2. The final search strategy is available in the Supplementary material. The document type was set as "Article" and "Review Article," the timespan was set as 2013-2022, and the language of publication was limited to English. The reason for limiting the search to the last 10 years is that the BME for MCI research field has developed over a relatively short period and that 10-year bibliometric analyses have already demonstrated their ability to detect knowledge structures and predict research trends in previous studies (39). The flow chart of the literature search is shown in Figure 1. This search strategy resulted in 242 eligible documents, and basic information about each document was downloaded from the electronic database as plain text files and tab-delimited files.
Analysis tools
To perform this bibliometric analysis, three software programs (CiteSpace, VOSviewer, and Microsoft Excel) were used to analyze the data downloaded from WoSCC (40, 41). CiteSpace is a Java-based program that has been used extensively in bibliometric analysis and knowledge-map visualization. In this research, the reference co-citation network and the co-occurring author-keyword network in CiteSpace were applied to generate maps of the knowledge structure of BME for MCI. Citation bursts, a key feature of co-citation and co-occurring keyword networks that characterizes the intensity and frequency of mentions of an item over time (42), were employed to describe research trends in this study. Due to its advantages in generating visual graphs and handling big data in bibliometric analysis, VOSviewer was used in this study to generate the collaborative networks of countries, institutions, and authors publishing in the field of BME for MCI. VOSviewer automatically calculated the total link strength (TLS), which represents the number of co-occurrences of two items (countries/institutions/authors) in publications. Meanwhile, the average citations per item (ACI) was derived by dividing the total number of citations of an item, automatically generated by VOSviewer, by the item's number of publications. Microsoft Excel was used to generate publication and citation trends in the field of BME for MCI.
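As a concrete illustration of this last step, the sketch below reproduces annual publication and citation counts from a WoSCC tab-delimited export in pandas instead of Excel. The file name is a placeholder; PY (publication year) and TC (times cited) are the standard Web of Science field tags.

```python
import pandas as pd

# WoSCC tab-delimited export; PY = publication year, TC = times cited.
df = pd.read_csv("wos_export.txt", sep="\t", usecols=["PY", "TC"])

annual = df.groupby("PY").agg(publications=("PY", "size"),
                              citations=("TC", "sum"))
annual["avg_citations_per_item"] = annual["citations"] / annual["publications"]
print(annual)
print("overall ACI:", df["TC"].sum() / len(df))   # e.g. 4118 / 242 = 17.02
```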
Annual publication activity
From 2013 to 2022, a total of 242 eligible publications were retrieved in the field of BME for MCI, which received a total of 4,118 citations, or 17.02 citations per publication. Among the 242 publications, 143 were of type "Article" and 99 of type "Review Article." Figure 2 shows the annual distribution of publications and citations in this field.
Analysis of national and institutional productivity
According to the results from WoSCC, a total of 42 countries/regions and 635 institutions contributed at least one publication in the field of BME for MCI in the past decade. Table 1 shows the countries and institutions that contributed the largest numbers of publications in the field of BME for MCI from 2013 to 2022. China, with 78 publications (32.23%), and the USA, with 75 publications (30.99%), were the top two countries publishing the largest numbers of documents in the field of BME for MCI and were well ahead of other countries. Furthermore, of the 11 countries with the highest scientific productivity in the field, only China, Brazil, and Thailand are developing countries. Germany (ACI = 48.10), Thailand (ACI = 36.88), and Australia (ACI = 31.88) had the highest ACI, demonstrating the higher quality of their research in the field of BME for MCI. The USA and China held the strongest collaborative networks, and England, Canada, and Australia also had strong collaborations with other countries. Among the 12 institutions with the highest scientific productivity in the field of BME for MCI, eight are from the USA, three from China, and one from England. The Fujian University of Traditional Chinese Medicine in China contributed the largest number of publications (12 publications, 4.96%) in the field, followed by Harvard Medical School in the USA (10 publications, 4.13%). Massachusetts General Hospital (ACI = 57.71) and the University of California, Los Angeles (ACI = 38.17) in the USA had the highest ACI, suggesting that these two institutions had an early start and a high quality of research in the field of BME for MCI. Furthermore, the Fujian University of Traditional Chinese Medicine, the University of Florida, and the University of Washington had the strongest collaborative networks (TLS = 23) in the field, whereas some institutions had very low collaborative intensity. The inter-country collaborative network in the field of BME for MCI is presented in Figure 3A, while Figure 3B shows the inter-institutional collaborative network in this field.
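To make the TLS metric concrete, the following sketch tallies country co-occurrences per publication and sums them into a per-country total link strength. The counting convention is a plausible reading of the definition above; VOSviewer's exact weighting may differ, and the country lists are invented.

```python
from collections import Counter
from itertools import combinations

# One entry per publication: the set of countries on its author list.
papers = [
    {"China", "USA"},
    {"China", "USA", "Australia"},
    {"Germany"},
    {"China", "Australia"},
]

link = Counter()                       # co-occurrence count per country pair
for countries in papers:
    for a, b in combinations(sorted(countries), 2):
        link[(a, b)] += 1

tls = Counter()                        # total link strength per country
for (a, b), w in link.items():
    tls[a] += w
    tls[b] += w

print(link.most_common())
print(tls.most_common())
```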
Analysis of authors' productivity
A total of 1,357 authors contributed to the 242 publications in the field of BME for MCI in the past decade, and Table 2 shows the 13 active authors with the largest numbers of publications. Chen, L of Fujian University of Traditional Chinese Medicine was the most prolific author in this field (12 publications, 4.96%) and the only author with more than 10 publications. Khalsa, DS of the Alzheimer's Research & Prevention Foundation (ACI = 40.20) and Lavresky, H of the University of California, Los Angeles (ACI = 38.17) had the highest ACI in the field of BME for MCI, suggesting that their research was of high quality. Chen, L (TLS = 68), Li, M (TLS = 55), and Tao, J (TLS = 55) had the strongest collaborative networks. The inter-authorial collaborative network in the field of BME for MCI is illustrated in Figure 4.
Journal characteristics
A total of 242 publications in the field of BME for MCI in the past decade were published by 127 journals; the top 14 journals publishing the largest numbers of documents are presented in Table 3. Frontiers in Aging Neuroscience (16 publications, 6.61%) and JOURNAL OF ALZHEIMER'S DISEASE (12 publications, 4.96%) were the two journals publishing the largest numbers of documents in the field of BME for MCI. Meanwhile, Frontiers in Aging Neuroscience [impact factor (IF) = 5.702] and Frontiers in Psychiatry (IF = 5.435) were the only two journals with an IF above 5, suggesting that publishing research in the field of BME for MCI in high-quality journals remains challenging.

FIGURE 4 Inter-authorial collaborative network in the field of body-mind exercise for patients with mild cognitive impairment.
Keyword analysis
The occurrence of author keywords reflects the degree of interest in the field of BME for MCI and can predict future research trends. Meanwhile, the centrality of items was calculated automatically with CiteSpace; nodes with high centrality represent critical or turning points in a particular research field (43). The 14 most frequently occurring author keywords in the field of BME for MCI are shown in Table 4. MCI was the most frequently occurring keyword and had the highest centrality (0.88). Tai Chi, Older Adults, and Alzheimer's Disease also occurred frequently. Figure 5 shows the top 10 keywords with the strongest citation bursts in the field of BME for MCI in the past decade. The blue line represents the timeline from 2013 to 2022, while the red line represents the duration of each keyword's burst. In the last decade, body-mind exercise (burst = 1.66) and Alzheimer's Disease (burst = 1.36) were the keywords with the highest citation burst values, as well as the keywords with the longest burst durations. Since 2020, quality of life has been the keyword with the highest burst value in the field of BME for MCI.
Reference analysis
The five most cited publications in the field of BME for MCI are shown in Table 5. These five publications received a total of 719 citations, and only one of the five was an "Article." The most cited publication in this field is entitled "Effect of Tai Chi on Cognitive Performance in Older Adults: Systematic Review and Meta-Analysis" (220 citations). Wayne et al. (44) systematically reviewed the effectiveness of Tai Chi on overall cognitive function among older adults with normal or pathological cognitive decline (including MCI).

This study conducted an analysis of the reference co-citation network with CiteSpace to explore hotspots and trends in the field of BME for MCI. To measure the validity of the clustering strategy, the modularity value (Q) and the weighted average silhouette value (S) were automatically calculated with CiteSpace; Q > 0.5 and S > 0.7 are the recognized validity thresholds. The clustering strategy in this study was therefore valid and reasonable (Q = 0.7879, S = 0.9276). Figure 6 shows the cluster view of the knowledge map of the field of BME for MCI over the past decade, in which 13 clusters were found. Cluster #0 dual-task training had the largest size, followed by #1 Alzheimer's disease, #2 baduanjin, and #3 yoga. Dual-task training is another description of BME, and this terminology was also included in our search strategy (45). Alzheimer's disease is the most common type of dementia, and MCI has the potential to progress to Alzheimer's disease (46). Baduanjin and yoga are two common BMEs that have been increasingly investigated as potential ways to manage MCI. Meanwhile, a timeline view of the clusters, which describes their evolutionary process, is shown in Figure 7. The top 10 co-cited references with the strongest citation bursts are shown in Figure 8.
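The modularity threshold mentioned above can be checked on any weighted co-citation graph; the sketch below uses networkx on a toy network with two obvious clusters (node names and edge weights are invented). CiteSpace's silhouette S is computed from within-cluster homogeneity and is not reproduced here.

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Toy co-citation network: edge weight = number of times two references
# are cited together; clusters are given as sets of nodes.
G = nx.Graph()
G.add_weighted_edges_from([
    ("r1", "r2", 5), ("r1", "r3", 4), ("r2", "r3", 6),   # cluster A
    ("r4", "r5", 7), ("r4", "r6", 3), ("r5", "r6", 5),   # cluster B
    ("r3", "r4", 1),                                     # weak bridge
])
clusters = [{"r1", "r2", "r3"}, {"r4", "r5", "r6"}]

Q = modularity(G, clusters, weight="weight")
print(f"modularity Q = {Q:.3f}")        # Q > 0.5 per the threshold above
```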
Discussion
This is the first study to systematically analyze the productivity, research hotspots, and future research trends within the field of BME for MCI over the past decade using a robust bibliometric approach. The rapid increase over the past decade in both publications and citations in the field of BME for MCI indicates steadily increasing interest and input from researchers and other stakeholders. Review articles accounted for 40.91% of total publications, which is higher than the results of bibliometric analyses in related fields (33). One possible explanation is that in recent years, affected by the COVID-19 pandemic, patients with MCI were defined by some ethics committees as a vulnerable population and could not be enrolled in intervention studies or epidemiological investigations (51). The proportion of funded research in this area was similar to previous figures for related fields (52), indicating that the field of BME for MCI is favored by funding agencies and has large scientific and translational potential. Publications in this field were mainly classified as Geriatrics & Gerontology and Neurosciences, indicating that research in this field is multidisciplinary, at the intersection of geriatrics, neuroscience, and exercise science. China and the USA were far ahead in terms of national productivity of publications, and Brazil and Thailand also had a high number or quality of publications. Tai Chi, Baduanjin, and other BMEs are popular traditional Chinese exercises, and China also has the largest population of older adults and patients with MCI in the world (53). Therefore, the interest and investment in the field of BME for MCI in China is comprehensible and significant. Nevertheless, developed countries such as Germany and Australia produced higher-quality research than China. This suggests that developing countries such as China and Brazil need to continue to support research in the field of BME for MCI and to enhance the quality of research through international exchange and cooperation.

FIGURE 5 Top 10 keywords with the strongest citation bursts in the field of body-mind exercise for patients with mild cognitive impairment.
In terms of institutional productivity, although Fujian University of Traditional Chinese Medicine in China was the most productive institution, American institutions occupied eight of the 12 most productive institutions in the field of BME for MCI. Moreover, the level of cooperation between institutions was low, with some institutions in China having almost no inter-institutional cooperation. Therefore, more international and inter-institutional collaborations are urgently needed in subsequent research in this field. Chen, L was the most productive author in this field and had the strongest collaborative network. Nine of the top 13 most productive authors were from China, but the authors from the USA published higher-quality research. This suggests that Chinese researchers in this field need to improve the quality of their research while maintaining productivity and collaborating across institutions and countries. Furthermore, Frontiers in Aging Neuroscience and JOURNAL OF ALZHEIMER'S DISEASE are the most popular journals. However, there appear to be some challenges in publishing research in this field in high-quality journals compared to other related fields (33).
Through co-occurrence keyword networks and reference co-citation networks, we identified BME, including Tai Chi and Baduanjin, as the hotspots in the field of BME for MCI over the past decade. Tai Chi, a popular traditional Chinese exercise, is a moderate-intensity aerobic exercise that combines both physical and cognitive exercise (54). Research applying Tai Chi as an intervention for patients with MCI started relatively early, and Tai Chi has been demonstrated to be effective in enhancing general cognitive performance, memory, attention, and executive function in patients with MCI (55, 56). This may be because it combines the benefits of both mental and physical exercise. First, as a moderate-intensity aerobic exercise, Tai Chi slows age-related brain atrophy, increases cerebral blood circulation, and even alters brain plasticity (57, 58). Second, as a cognitively stimulating activity, Tai Chi requires learning, memorizing, and performing a series of sequential choreographed movements, which promote cognitive function in participants performing Tai Chi (59). Baduanjin is another traditional Chinese exercise that has been popular in China for more than a thousand years as an important part of the Qigong method (60). Compared to Tai Chi, Baduanjin is a more accessible type of BME; it was applied to patients with MCI later but has become a popular intervention in recent years. Previous meta-analyses proposed that Baduanjin also enhanced general cognitive function and executive function among patients with MCI (61, 62). As a BME, Baduanjin may have mechanisms for enhancing cognitive function in patients with MCI similar to those of Tai Chi. First, Baduanjin improves cardiopulmonary function by controlling breathing, thus enhancing the inhibitory-control sublevel of executive function (63).
Second, in the practice of Baduanjin, older adults need working memory and executive functions to learn, memorize, and perform motor movements, thus improving their cognitive function (64). Tai Chi and Baduanjin are hot research topics in the field of BME for MCI, and they have also greatly promoted the role of traditional Chinese exercise in healthy aging. However, given the difficulty of acquiring these BMEs, there are challenges in promoting and researching them in countries outside of China. This may explain why there is little international or even inter-institutional cooperation in this field. We identified the future research trends in the field of BME for MCI as evidence synthesis and guidelines through an analysis of bursts in co-occurrence keyword networks and reference co-citation networks. First, over 40% of the publications in this field were review articles. Apart from the limitations on interventions and investigations of patients with MCI during the COVID-19 pandemic, the large number of studies in this field published in Chinese may be another explanation.

FIGURE 6 The cluster view of the knowledge map based on reference co-citation analysis in the field of body-mind exercise for patients with mild cognitive impairment.

FIGURE 7 The timeline view of the knowledge map based on reference co-citation analysis in the field of body-mind exercise for patients with mild cognitive impairment.
However, systematic reviews allow researchers to synthesize evidence from studies published in both English and Chinese. For example, Lin et al. (61) included 13 randomized controlled trials published in Chinese in their meta-analysis. This is an advantage for Chinese researchers but also poses challenges. Mastering both English and Chinese uniquely positions them to synthesize evidence from original research in the field of BME for MCI, but it also limits their ability to internationalize and to enhance the quality of evidence synthesis through international collaboration. Furthermore, the large number of publications in this field that were published in Chinese with only English abstracts limits the development of open science in the field. Another frontier in this field is guidelines. Livingston et al. (49), in the 2017 Lancet Commission report "Dementia prevention, intervention, and care," stated that a combination of interventions is needed to delay the progression of MCI to dementia, such as treatment of vascular risk factors, diet, exercise, and cognitive and social stimulation. In the updated 2020 report, Livingston et al. (65) indicated that most meta-analyses on the effectiveness of cognitive training for patients with MCI were of low standard, positive, and mostly achieved statistical significance. Currently, there is also evidence that sleep plays an important role in the relationship between exercise and cognitive function (66-68). However, the clinical significance of the results remains uncertain due to the poor standards of the studies and the heterogeneity of the results. Furthermore, the American Academy of Neurology recommended that patients diagnosed with MCI should exercise twice a week and may receive cognitive interventions (50). However, this guideline also notes that the strength of evidence for exercise and cognitive interventions was insufficient and that heterogeneity in outcome measures needs to be reduced in subsequent studies, thus facilitating evidence synthesis and guideline development.
Limitations and further recommendations
Inevitably, there are limitations to this study. First, only one database, WoSCC, was utilized, which may have led to the omission of high-quality studies that exist in other databases but are not indexed by WoSCC. Therefore, software developers are encouraged to upgrade relevant bibliometric tools and algorithms in the future. Second, only English-language studies were included, which may have overlooked high-quality studies published in other languages such as Chinese and Spanish. With advances in bibliometric techniques as well as international cooperation, it may be possible in the future to minimize linguistic bias. Finally, some high-quality studies may have been overlooked because they were published late and had not yet generated hotspots.
Conclusion
This study performed a bibliometric analysis of BME for MCI over the past decade using a robust methodology, describing knowledge structures and hotspots and predicting future research trends in this field. The results help researchers to quickly grasp the knowledge structure in the field of BME for MCI, inform future research, and facilitate inter-institutional and international collaboration. Evidence synthesis and guidelines might be future research trends in this field. Future bibliometric research needs to reduce the limitations imposed by bibliometric techniques, language, and citation delays.
FIGURE 1 Flow chart of the literature search in the field of body-mind exercise for patients with mild cognitive impairment.
FIGURE 2 Annual distribution of publications and citations in the field of body-mind exercise for patients with mild cognitive impairment.
FIGURE 3 (A) Inter-country collaborative network in the field of body-mind exercise for patients with mild cognitive impairment. (B) Inter-institutional collaborative network in the field of body-mind exercise for patients with mild cognitive impairment.
Baduanjin is another traditional Chinese exercise that has been popular in China for more than a thousand years as an important part of the Qigong method (60). Compared to Tai Chi, Baduanjin is a more accessible type of BME; it was used later to intervene with patients with MCI but has certainly become a popular intervention in recent years. Previous meta-analyses proposed that Baduanjin also enhances general cognitive function and executive function among patients with MCI (61, 62). As a BME, Baduanjin may enhance cognitive function in patients with MCI through mechanisms similar to those of Tai Chi. First, Baduanjin improves cardiopulmonary function by controlling breathing, thus enhancing the inhibitory control sublevel of executive function (63).
FIGURE 8 Top 10 references with the strongest citation bursts in the field of body-mind exercise for patients with mild cognitive impairment.
TABLE 1 Top 11 active countries and top 12 active institutions in the field of body-mind exercise for patients with mild cognitive impairment.
%, percentage of the quantity of work per country/institution out of the 242; ACI, average citations per item; TLS, total link strength; =, same quantity of publications.
As shown in Table 3, Frontiers in Aging Neuroscience (16 publications, 6.61%) and the Journal of Alzheimer's Disease (12 publications, 4.96%) were the two journals publishing the largest number of documents in the field of BME for MCI. Meanwhile, Frontiers in Aging Neuroscience [impact factor (IF) = 5.702] and Frontiers in Psychiatry (IF = 5.435) were the only two journals with an IF above 5, suggesting that publishing research in the field of BME for MCI in high-quality journals is still challenging.
TABLE 2 Top 13 active authors in the field of body-mind exercise for patients with mild cognitive impairment.
%, percentage of the quantity of work per author out of the 242; ACI, average citations per item; TLS, total link strength; =, same quantity of publications.
TABLE 3 Top 14 active journals in the field of body-mind exercise for patients with mild cognitive impairment.
%, percentage of the quantity of work per journal out of the 242; IF, 2021 Impact Factors.
TABLE 4 Top 11 high-frequency keywords in the field of body-mind exercise for patients with mild cognitive impairment.
TABLE 5 Top 5 most-cited publications in the field of body-mind exercise for patients with mild cognitive impairment.
Bifurcation Analysis of a Delayed Worm Propagation Model with Saturated Incidence
This paper is concerned with a delayed SVEIR worm propagation model with saturated incidence. The main objective is to investigate the effect of the time delay on the model. Sufficient conditions for local stability of the positive equilibrium and existence of a Hopf bifurcation are obtained by choosing the time delay as the bifurcation parameter. Particularly, explicit formulas determining direction of the Hopf bifurcation and stability of the bifurcating periodic solutions are derived by using the normal form theory and the center manifold theorem. Numerical simulations for a set of parameter values are carried out to illustrate the analytical results.
Introduction
Worms, as one kind of malicious code, have become one of the main threats to the security of networks. Since the first Morris worm in 1988, new worms have appeared on networks frequently, including the Slammer worm [1], Commwarrior worm [2], Cabir worm [3], and Chameleon worm [4]. Each of them can cause enormous financial losses and social panic [5-7]. Therefore, it is significant to explore effective methods to counter worms. To this end, we need to accurately understand the dynamic behaviors of worm propagation in networks. Considering that the process of worm propagation in networks is similar to that of biological virus propagation in a population, mathematical models based on the theory of Kermack and McKendrick [8] have been important tools for analyzing the propagation and control of worms.
In [9], Kim et al. proposed the SIS (Susceptible-Infectious-Susceptible) model in order to analyze the dynamical behaviors of worm propagation on the Internet. However, the SIS model neglects the effect of antivirus software. Thus, the SIR (Susceptible-Infectious-Recovered) model was proposed [9]. Although the SIR model considers the immunity of the nodes in which the worms have been cleaned, it assumes that the recovered hosts have permanent immunity. This is not consistent with the reality in networks, because such hosts may be infected again by newly emerging worms. To overcome this drawback of the SIR model, Wang et al. investigated the SIRS (Susceptible-Infectious-Recovered-Susceptible) model for analyzing the dynamics of worm propagation in networks [10-12]. It should be pointed out that both the SIR model and the SIRS model assume that the susceptible nodes become infectious instantaneously. As we know, worms usually have a latent period. Based on this consideration, the SEIR (Susceptible-Exposed-Infectious-Recovered) model [13,14] and the SEIRS (Susceptible-Exposed-Infectious-Recovered-Susceptible) model [11,15] were proposed to describe the dynamics of worm propagation in networks. Considering the influence of the quarantine strategy and the vaccination strategy on the propagation of worms, some worm models with quarantine strategies [16-19] and vaccination strategies [20-25] have been formulated and analyzed.
It should be pointed out that all the models above use the bilinear incidence rate $\beta SI$. As stated in [26], the dynamics of a model system depend heavily on the choice of the incidence rate. Gan et al. considered different incidence rate functions in their work [27,28], and it was found that the saturated incidence rate $\beta SI/(1+\alpha I)$ is more general than the bilinear incidence rate $\beta SI$. Based on this, an SVEIR model with saturated incidence, system (1), was proposed, where $S(t)$, $V(t)$, $E(t)$, $I(t)$, and $R(t)$ denote the numbers of susceptible, vaccinated, exposed, infectious, and recovered hosts at time $t$, respectively. The meanings of the remaining parameters are described in the "Parameters of the Model and Their Meanings" section. Wang et al. [29] investigated the stability of system (1). One of the significant features of computer viruses is their latent characteristics [30,31]. In addition, time delays of one type or another can cause the numbers of hosts in system (1) to fluctuate, and worm propagation models with time delay have been investigated by several scholars [14,17,19]. Based on the above discussion, in this paper we extend system (1) by incorporating the time delay due to the latent period of the worms in the exposed hosts and obtain a delayed worm propagation model, system (2), where $\tau$ is the latent period of the worms in the exposed nodes.
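To make the difference between the two incidence rates concrete, the following minimal sketch evaluates both; the parameter values are illustrative assumptions, not values from the paper.

```python
def bilinear_incidence(beta, S, I):
    """Bilinear incidence rate: new infections proportional to S*I."""
    return beta * S * I

def saturated_incidence(beta, alpha, S, I):
    """Saturated incidence rate: the infection force saturates as I grows."""
    return beta * S * I / (1.0 + alpha * I)

# Illustrative values only: as I grows, the saturated rate flattens out,
# reflecting crowding/protection effects that the bilinear rate ignores.
beta, alpha, S = 3e-5, 0.02, 10_000
for I in (10, 100, 1000):
    print(I, bilinear_incidence(beta, S, I), saturated_incidence(beta, alpha, S, I))
```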
The remainder of this paper is organized as follows. Local stability of the positive equilibrium and existence of a Hopf bifurcation at the positive equilibrium are analyzed in the next section. Properties of the Hopf bifurcation, such as direction and stability, are investigated in Section 3. Numerical simulations are carried out in Section 4 to support the obtained theoretical results. Finally, conclusions are given in Section 5 to end our work.
Existence of Hopf Bifurcation
By direct computation, if condition (H1) holds, system (2) has a unique positive equilibrium $E^*(S^*, V^*, E^*, I^*, R^*)$, where $I^*$ is the positive root of an algebraic equation obtained from the equilibrium conditions. Linearizing system (2) at $E^*$ gives the Jacobian matrix, whose characteristic equation is a transcendental equation in $\lambda$ containing delay-dependent terms $e^{-\lambda\tau}$; call it (8). When $\tau = 0$, (8) reduces to a polynomial equation, and under condition (H2) all of its roots have negative real parts, so $E^*$ is locally asymptotically stable in the absence of delay. For $\tau > 0$, let $\lambda = i\omega$ ($\omega > 0$) be a root of (8). Substituting and separating real and imaginary parts yields a pair of equations in $\omega$; squaring and adding them eliminates the trigonometric terms. Let $v = \omega^2$; the result is a polynomial equation in $v$, denoted (16). Based on the discussion about the distribution of the roots of (16) in [32], we suppose that (H3): (16) has at least one positive root $v_0$.

If condition (H3) holds, then (16) has a positive root $v_0$, so $\omega_0 = \sqrt{v_0}$ and (8) has a pair of purely imaginary roots $\pm i\omega_0$. For $\omega_0$, the corresponding critical value $\tau_0$ of the delay can be computed from the real and imaginary parts of (8). Differentiating both sides of (8) with respect to $\tau$, we can obtain $[d\lambda/d\tau]^{-1}$ and verify the transversality condition $\mathrm{Re}\,[d\lambda/d\tau]^{-1}_{\tau=\tau_0} \neq 0$. Based on the discussion above and the Hopf bifurcation theorem in [33], we have the following results: the positive equilibrium $E^*$ is locally asymptotically stable for $\tau \in [0, \tau_0)$, and system (2) undergoes a Hopf bifurcation at $E^*$ when $\tau = \tau_0$.
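Numerically, finding $\omega_0$ amounts to locating the positive real roots of the polynomial (16) in $v = \omega^2$. The sketch below does this with NumPy for a hypothetical set of coefficients; the coefficients are placeholders, since the actual ones depend on the model parameters.

```python
import numpy as np

# Hypothetical coefficients of (16), a polynomial in v = omega^2,
# listed from the highest-degree term down to the constant term.
coeffs = [1.0, -2.3, 0.9, -0.1, -0.05]

roots = np.roots(coeffs)

# Keep only (numerically) real, strictly positive roots v0 > 0;
# each yields a candidate frequency omega0 = sqrt(v0).
v_pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
omegas = [np.sqrt(v) for v in v_pos]
print("positive roots v0:", v_pos)
print("candidate omega0:", omegas)
```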
Numerical Simulation
In this section, some numerical simulations are carried out for qualitative analysis using the Matlab software package. By extracting some values from [29] and considering the conditions for the existence of the Hopf bifurcation, we choose a set of parameter values for system (2).
By some computations, we obtain the critical value $\tau_0$ of the time delay from the equation in $\omega$ derived above. When $\tau < \tau_0$, the positive equilibrium $E^*(12723, 1571.3, 4107.5, 204.8103, 81924)$ is locally asymptotically stable; this property is shown in Figures 1 and 2. However, a Hopf bifurcation occurs and a family of periodic solutions bifurcates from $E^*$ when the value of $\tau$ passes through the Hopf bifurcation value $\tau_0$, as illustrated by Figures 3 and 4.
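The oscillations near the bifurcation can be reproduced by integrating the delay differential equations directly. Since the paper's full system (2) and its parameter values are not reproduced here, the sketch below integrates a generic delayed SEIR-type toy model with saturated incidence via simple Euler stepping with a constant initial history; the structure, parameters, and history are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameters for a toy delayed model with saturated
# incidence; none of these values come from the paper.
A, beta, alpha, mu, gamma = 100.0, 3e-6, 0.01, 0.001, 0.05
tau, dt, T = 30.0, 0.01, 2000.0

n_hist = int(tau / dt)            # steps spanned by the delay tau
n = int(T / dt)
S = np.empty(n); E = np.empty(n); I = np.empty(n)
S[0], E[0], I[0] = 99000.0, 500.0, 500.0

for k in range(n - 1):
    kd = max(k - n_hist, 0)       # constant history for t < 0
    # Hosts infected tau time units ago now become infectious.
    leaving_E = beta * S[kd] * I[kd] / (1.0 + alpha * I[kd])
    new_inf = beta * S[k] * I[k] / (1.0 + alpha * I[k])
    S[k+1] = S[k] + dt * (A - new_inf - mu * S[k])
    E[k+1] = E[k] + dt * (new_inf - leaving_E - mu * E[k])
    I[k+1] = I[k] + dt * (leaving_E - (gamma + mu) * I[k])

# Inspect the tail of I(t): sustained oscillations indicate that the
# chosen delay lies beyond the Hopf bifurcation threshold.
print("last values of I:", I[-5:])
```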
In addition, by some complicated computations we obtain $C_1(0) = -4.3990 + 2.9057i$ and $\lambda'(\tau_0) = 0.7014 - 0.0212i$. Thus, based on (34), we get $\mu_2 = 7.8776 > 0$, $\beta_2 = -8.798 < 0$, and $T_2 = -0.0713 < 0$. It follows from Theorem 2 that the Hopf bifurcation is supercritical and that the bifurcating periodic solutions are stable with decreasing period. Since the bifurcating periodic solutions are stable, the five classes of hosts in system (2) may coexist in an oscillatory mode, which, from the biological point of view, is not welcome in networks.
Conclusions
In this study, the dynamical behaviors of a delayed SVEIR worm propagation model with saturated incidence are discussed based on the work in [29]. The dynamical behaviors of the model are investigated from the point of view of local stability and Hopf bifurcation, both analytically and numerically. The threshold value $\tau_0$ of the time delay at which the model undergoes a Hopf bifurcation is obtained by the eigenvalue method. We found that the propagation of worms in the model can be predicted and controlled when the value of the delay is suitably small ($\tau \in [0, \tau_0)$). However, the propagation of the worms will be out of control once the value of the time delay exceeds the threshold value $\tau_0$. Accordingly, the propagation of worms in the model can be controlled by postponing the occurrence of the Hopf bifurcation. Moreover, the properties of the Hopf bifurcation are investigated by applying the normal form theory and the center manifold theorem. Numerical simulations are also presented to verify the obtained theoretical results.
Reduction of apoptosis by proanthocyanidin-induced autophagy in the human gastric cancer cell line MGC-803
Proanthocyanidins are flavonoids that are widely present in the skin and seeds of various plants, with the highest content in grape seeds. Many experiments have shown that proanthocyanidins have antitumor activity both in vivo and in vitro. Autophagy and apoptosis of tumor cells induced by drugs are two of the major causes of tumor cell death. However, reports on the effect of drug-induced autophagy in tumor cells are not consistent and suggest that autophagy can have synergistic or antagonistic effects with apoptosis. This research aimed to investigate whether proanthocyanidins induce autophagy and apoptosis in the human gastric cancer cell line MGC-803 and to identify the mechanism of proanthocyanidin action, in order to further determine the effect of proanthocyanidin-induced autophagy on apoptosis. An MTT assay was used to examine proanthocyanidin cytotoxicity against the human gastric cancer cell line MGC-803. Transmission electron microscopy and monodansylcadaverine (MDC) staining were used to detect autophagy. Annexin V-APC/7-AAD double staining and Hoechst 33342/propidium iodide (PI) double staining were used to explore apoptosis. Western blotting was used to determine the expression of proteins related to autophagy and apoptosis. Real-time quantitative PCR was used to determine the mRNA levels of Beclin1 and BCL-2. The results showed that proanthocyanidins exhibit a significant inhibitory effect on the proliferation of the human gastric cancer cell line MGC-803 in vitro and simultaneously activate autophagy and apoptosis to promote cell death. Furthermore, when proanthocyanidin-induced autophagy is inhibited, apoptosis increases significantly; thus, proanthocyanidins can be used together with autophagy inhibitors to enhance cytotoxicity.
Introduction
Gastric cancer is one of the most common malignant cancers. According to incomplete statistics, approximately one million new cases of gastric cancer are diagnosed worldwide each year, and gastric cancer is the second leading cause of cancer death, accounting for ~8% of cancer-related deaths (1). Since the early symptoms of gastric cancer are not obvious, the disease is typically in the middle or advanced stages by the time of diagnosis, and its five-year survival rate is less than 20% (2). In 2006 alone, there were 950,000 new cases of gastric cancer, which accounted for 9% of new cancer cases, behind only lung, breast and colon cancer (3). Therefore, gastric cancer is a malignant tumor that seriously endangers human health. Approximately 70% of gastric cancers occur in developing countries, which have fewer medical resources than developed countries (4). More than 40% of patients with gastric cancer are Chinese (5). Although research is committed to the development of emerging fields such as nano-medicine (6-8) and stem cell technology (9-11), surgical resection, radiotherapy and chemotherapy are still the main treatment methods for malignant tumors at present. However, current chemotherapy drugs have many issues. Although these drugs are able to delay tumor growth and extend survival, they are highly controversial in tumor treatment because of the lack of an ideal therapeutic effect, the frequent occurrence of drug resistance, and strong toxic side-effects (12-14). Therefore, the extraction of highly effective, low-toxicity active ingredients from natural products to replace or combine with existing chemotherapy drugs has become a new research trend.
Proanthocyanidins are flavonoids that are widely present in the skin and seeds of various plants, with the highest content in grape seeds (15,16). Their molecular formula is C 30 H 26 O 12 , and their molecular weight is 578.52 Da. Currently, proanthocyanidins can be extracted from many natural compounds and are also a major component of many Chinese medicines (17)(18)(19). Long-term studies have shown that proanthocyanidins possess anti-inflammatory properties, decrease blood pressure, and inhibit platelet aggregation, atherosclerosis and oxidation, among other functions (20)(21)(22)(23)(24). Since they are widely available and exhibit low toxicity and few side-effects, proanthocyanidins have received wide attention. In recent years, many experiments have shown that
proanthocyanidins have antitumor activity both in vivo and in vitro, and proanthocyanidins were shown to inhibit or kill many types of tumors (25)(26)(27)(28)(29). Autophagy and apoptosis of tumor cells induced by drugs are two of the major causes of tumor cell death. Autophagy is an important cellular metabolic process that is highly conserved throughout evolution and is widely present in eukaryotic cells. Autophagy is a programmed cell death that is different from apoptosis and is termed type II programmed cell death (30,31). Autophagy is characterized by the presence of a large number of autophagosomes in the cytoplasm, and various digestive enzymes in the lysosome digest and degrade the contents in the vacuoles to convert them into substances required by the body for energy (32,33). Several studies have confirmed that autophagy plays an important role in tumorigenesis and therapy (34)(35)(36). However, reports on the effect of autophagy induced by drugs in tumor cells are not consistent and suggest that autophagy can have synergistic or antagonistic effects with apoptosis (37,38). In the present study, we treated the human gastric cancer cell line MGC-803 with proanthocyanidins to determine whether proanthocyanidins induced autophagy and apoptosis in these cells and to identify the mechanism of proanthocyanidins action to further determine the effect of proanthocyanidin-induced autophagy on apoptosis.
Cell culture. The human gastric cancer cell line MGC-803 was purchased from Nanjing KeyGen Biology China. The cells were cultured in RPMI-1640 complete medium containing 10% calf serum (CS) at 37˚C in a 5% CO 2 incubator. Cells in the logarithmic growth phase were used for the experiments.
MTT assay for cell proliferation (IC50). A cell suspension with a concentration of 5×10^4 cells/ml was prepared, and 100 µl of the cell suspension was added to each well of a 96-well culture plate and incubated at 37˚C in a 5% CO2 incubator (Sanyo XD-101; Sanyo, Osaka, Japan) for 24 h. Complete medium was used to dilute the drug to the desired concentrations (400, 200, 100, 50, 25, 12.5, 6.25, 3.125, 1.5625 and 0.78125 µg/ml), and 100 µl of the corresponding drug-containing medium was added to each well. A negative control and a positive control group were also included. The 96-well plate was incubated at 37˚C in a 5% CO2 incubator for 48 h. The plate was then subjected to MTT staining, and the OD value was measured at λ=490 nm. The inhibition rate and drug IC50 value of each group were calculated.
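The inhibition rate is typically computed from the OD values relative to the untreated control, and the IC50 is then estimated by fitting a dose-response curve. The sketch below fits a four-parameter logistic model with SciPy; the inhibition values shown are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic: inhibition rises from bottom to top."""
    return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)

# Drug concentrations (µg/ml) from the two-fold dilution series.
conc = np.array([400, 200, 100, 50, 25, 12.5, 6.25, 3.125, 1.5625, 0.78125])

# Hypothetical inhibition rates (%), computed per concentration as
# (OD_control - OD_treated) / OD_control * 100.
inhibition = np.array([92, 85, 74, 58, 41, 27, 16, 9, 5, 3], dtype=float)

params, _ = curve_fit(four_pl, conc, inhibition,
                      p0=[0.0, 100.0, 40.0, 1.0], maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.1f} µg/ml")
```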
Annexin V-APC/7-AAD double staining to detect apoptosis. Cells growing in the logarithmic phase were trypsinized and seeded into a 6-well plate. The corresponding drug-containing medium was added (100, 20 or 4 µg/ml) after the cells were attached to the plate, and a negative control group was included at the same time. After treatment with the drug for 48 h, 0.25% trypsin (without EDTA) was used to trypsinize and collect the cells. The cells were washed twice with phosphate-buffered saline (PBS) (centrifugation at 2,000 rpm, 5 min), and 5×10^5 cells were collected. The cells were then resuspended in 500 µl of binding buffer. After 5 µl of Annexin V-APC was added and mixed well, 5 µl of 7-AAD was added and mixed well. The reaction was performed at room temperature for 5-15 min in the dark, and a flow cytometer (FACSCalibur; Becton-Dickinson, USA) was used to detect apoptosis.
Transmission electron microscopy. MGC-803 cells in the logarithmic growth phase were incubated in drug-containing medium (100, 20 and 4 µg/ml). A negative control group was included at the same time. All the cells were harvested 24 h later. Trypsin (0.25%) was used to detach the cells from the plate. The cells were then centrifuged at 1,000 rpm for 10 min. After the supernatant was discarded, the cells were washed twice with PBS and fixed with 2.5% glutaraldehyde for 90 min at 4˚C. After the cells were embedded, sectioned and stained with uranyl acetate and lead citrate, the autophagosomes were observed under a transmission electron microscope (JEM-1011, Japan).
MDC staining to detect autophagy. Cells in the logarithmic growth phase were trypsinized and seeded into a 6-well plate. The next day, after the cells attached to the walls, drug-containing medium was added (100, 20 and 4 µg/ml). A negative control group was included at the same time. After treatment with the drug for 48 h, 0.25% trypsin (without EDTA) was used to collect the cells. The cells were washed once with 300 µl of 1X wash buffer, and an appropriate amount of 1X wash buffer was added to resuspend the cells, with the cell concentration adjusted to 10^6 cells/ml. A total of 90 µl of cell suspension was transferred to a new microfuge tube, and 10 µl of MDC staining solution was added and gently mixed. After staining at room temperature for 15-45 min in the dark, the cells were collected by centrifugation at 800 x g for 5 min. Wash buffer was used to wash the cells three times, and the cells were resuspended in 100 µl of collection buffer. The cell suspension was dropped onto a slide and covered with a coverslip. The slide was then observed under a fluorescence microscope (Olympus IX51; Olympus, Japan).
Western blotting to determine protein expression. Cells in the logarithmic growth phase were trypsinized and seeded onto a 6-well plate. The next day, after the cells attached, drug-containing medium was added (100, 20 and 4 µg/ml). A negative control group was included at the same time. Pre-chilled lysis buffer (200 µl) was added to each group.
After mixing, the lysate was incubated on ice for 30 min. After vortexing, the lysate was centrifuged at 13,000 x g for 10 min at 4˚C. The supernatant was saved, and the BCA method was used to measure the protein concentration of the samples. The proteins were resolved on a 10% SDS-PAGE gel and transferred to a PVDF membrane. After the membrane was blocked overnight with 5% non-fat milk, the primary antibody (1:200) was added and incubated overnight at 4˚C in a sealed bag. TBST was used to wash the membrane three times for 10 min, and the membrane was then incubated with the secondary antibody (1:4,000) for 1 h. Finally, the membrane was incubated with chemiluminescence solution and exposed to film.
Hoechst 33342/PI double staining to detect apoptosis. Cells in the logarithmic growth phase were trypsinized and seeded into a 6-well plate. The next day, after the cells attached, drug-containing medium was added. A negative control group was included at the same time. After treatment with the drug for 48 h, 0.25% trypsin (without EDTA) was used to collect the cells. A total of 10^5-10^6 cells were resuspended in 1 ml of medium, 10 µl of Hoechst 33342 staining solution was added to the cells and mixed well, and the suspension was incubated at 37˚C for 5-15 min. The cells were centrifuged at 500-1,000 rpm for 5 min at 4˚C, and the supernatant was discarded. Buffer A (1 ml) was used to resuspend the cells, 5 µl of PI staining solution was added, and the suspension was incubated at room temperature for 5-15 min in the dark. The suspension was mixed well and observed under a fluorescence microscope (Olympus IX51).
Fluorescence quantitative PCR to detect gene expression. Total RNA was isolated from logarithmically growing MGC-803 cells, and the purity of the RNA was determined. The isolated RNA was reverse transcribed into cDNA using a kit from Thermo Fisher. Real-time quantitative PCR with fluorescent dye detection was performed on an ABI StepOne Plus instrument (USA). The primers were synthesized by Nanjing GenScript Technology Co., Ltd. with the following sequences: GAPDH (101-bp product), 5'-ACAACTTTGGTATCGTGGAAGG-3' (sense) and 5'-GCCATCACGCCACAGTTTC-3' (antisense); Beclin1 (140-bp product), 5'-ATGTCCACAGAAAGTGCCAA-3' (sense) and 5'-GGGTGATCCACATCTGTCTG-3' (antisense); and BCL-2 (114-bp product), 5'-AAATCCGACCACTAATTGCC-3' (sense) and 5'-TGCTCTTCAGATGGTGATCC-3' (antisense). The amplification conditions were pre-denaturation at 95˚C for 5 min, followed by 40 cycles of denaturation at 95˚C for 15 sec, annealing at 60˚C for 20 sec, and extension at 72˚C for 40 sec. The specificity of the amplified products was monitored by melting curves. Software was used to calculate the relative expression of the target genes in each group, with GAPDH as an internal reference.
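Relative expression from such qPCR runs is commonly calculated with the 2^(−ΔΔCt) method, normalizing each target gene to GAPDH and then to the untreated control. The paper does not state which algorithm its software used, so the sketch below is a generic illustration with made-up Ct values.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt): normalize target to the reference gene, then to control."""
    d_ct_sample = ct_target - ct_ref              # dCt of treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # dCt of untreated control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Made-up Ct values: Beclin1 vs GAPDH in treated and control cells.
fold_change = relative_expression(
    ct_target=24.1, ct_ref=18.0,            # treated cells
    ct_target_ctrl=26.3, ct_ref_ctrl=18.1,  # control cells
)
print(f"Beclin1 fold change vs control: {fold_change:.2f}")
```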
Statistical methods. The data are presented as the mean ± standard deviation. The SPSS 16.0 statistical software was used for data analysis. Analysis of variance (ANOVA) was used to compare the difference between groups under different conditions, and p<0.05 was considered to indicate a statistically significant result.
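For reference, the same between-group comparison can be reproduced with a one-way ANOVA in open-source tooling; the three groups and their values below are hypothetical.

```python
from scipy import stats

# Hypothetical inhibition rates (%) from three independent replicates
# per group, mirroring the mean ± SD reporting used in the paper.
control = [2.1, 3.0, 2.5]
low_dose = [25.8, 27.1, 26.3]
high_dose = [71.5, 73.2, 70.9]

f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# p < 0.05 would be reported as statistically significant.
```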
Results
The inhibitory effect of proanthocyanidins on the proliferation of MGC-803 cells. As shown in Fig. 1, proanthocyanidins inhibited the proliferation of MGC-803 cells in a dose-dependent manner.

The effect of proanthocyanidins on the microstructural morphology of MGC-803 cells. To verify whether the cytoplasmic vacuoles observed by inverted microscopy were related to autophagy, transmission electron microscopy was used to observe autophagosomes in MGC-803 cells treated with proanthocyanidins. As shown in Fig. 3, untreated cells had normal nuclei, cytoplasm and organelles, whereas proanthocyanidin-treated cells showed a high number of autophagosomes of various sizes, and autophagosomes containing mitochondria were also observed by electron microscopy. This suggests that autophagy occurred in the cells after proanthocyanidin treatment.
MDC staining for autophagosome labeling. An inverted fluorescence microscope was used to observe MDC-labeled autophagic vacuoles; clear punctate structures were observed in the cytoplasm and perinuclear region, and the changes of the particles inside the cells were used to determine the level of autophagy. As shown in Fig. 4, compared with the control group, proanthocyanidin-treated cells showed stronger punctate MDC fluorescence, indicating increased autophagy.

Proanthocyanidins induce LC3 and caspase 9 expression in MGC-803 cells. Western blot analyses were used to detect changes in the levels of LC3 and caspase 9 after MGC-803 cells were treated with proanthocyanidins. On SDS-PAGE gels, LC3-II runs faster than LC3-I, producing two bands by western blotting. Fig. 5A shows that the untreated cells exhibited only a faint LC3-I band, whereas the LC3-II band was not detected. In contrast, after treatment with proanthocyanidins, the level of LC3-II increased significantly in a dose-dependent manner. Fig. 5B shows that treatment with proanthocyanidins significantly increased the expression of caspase 9 compared with the control group, also in a dose-dependent manner.
Effect of proanthocyanidins on the phosphatidylinositol 3 kinase (PI3K)/protein kinase B (PKB/AKT)/mammalian target of rapamycin (mTOR) signaling pathway.
The PI3K/AKT/mTOR signaling pathway is the canonical pathway that negatively regulates the initiation of autophagy. It has been reported that inhibition of this pathway induces cell autophagy. As shown in Fig. 6, western blot analyses showed that proanthocyanidins inhibited the phosphorylation of PI3K, AKT and mTOR in the PI3K/AKT/mTOR signaling pathway.
Inhibition of autophagy increased the cytotoxicity of proanthocyanidins. Preliminary experiments determined that proanthocyanidin treatment of MGC-803 cells activated both autophagy and apoptosis and that the inhibition of MGC-803 cell proliferation by proanthocyanidins occurred in a dose-dependent manner. To understand whether the cytotoxicity exhibited by proanthocyanidins was mediated by autophagy, the autophagy inhibitor 3-MA was added, and MTT assays were used to examine the cytotoxicity. The results showed that, compared with cells treated with proanthocyanidins alone, the addition of 3-MA significantly increased the percentage of apoptotic MGC-803 cells (Fig. 7).
Hoechst 33342 and PI double fluorescence staining of live cells. Hoechst 33342 and PI double staining is able to distinguish live and dead cells. When cells are in the late apoptotic stage or the early necrotic stage, the nuclei are red, whereas the nuclei of live cells are blue. As shown in Fig. 8, MGC-803 cells treated with proanthocyanidins for 48 h exhibited nuclei with a bead-like shape, forming apoptotic bodies. There was no significant difference between treatment with 3-MA alone (Fig. 8B) and control cells (Fig. 8A). However, cells treated with proanthocyanidins (Fig. 8C) exhibited an increased percentage of apoptotic cells compared with the control cells (Fig. 8A) (p<0.001). Cells treated with proanthocyanidins + 3-MA (Fig. 8D) showed an increased percentage of apoptotic cells compared with cells treated with proanthocyanidins alone (Fig. 8C) (p<0.01). These results showed that apoptosis increased significantly when proanthocyanidin-induced autophagy was inhibited.
The effect of proanthocyanidins on apoptosis following inhibition of autophagy. Since we found that the cytotoxicity of proanthocyanidins was increased after the inhibition of autophagy, we asked whether autophagy inhibited the apoptotic effect of proanthocyanidins. Real-time quantitative PCR was used to investigate MGC-803 cells treated with proanthocyanidins. Using Beclin1 and BCL-2 as indicators, we investigated the effect of proanthocyanidins on autophagy and apoptosis when autophagy was inhibited. As shown in Fig. 9, after the addition of 5 mM 3-MA, the expression of Beclin1 decreased compared with that in control cells. After treatment with 40.7 µg/ml proanthocyanidins, the expression of Beclin1 increased significantly compared with that in control cells. However, when the proanthocyanidins were added after treatment with 3-MA, Beclin1 expression was significantly decreased compared with the level in cells treated with proanthocyanidins alone. After treatment with proanthocyanidins, BCL-2 expression decreased significantly compared with that in the control cells, whereas the addition of 3-MA significantly decreased BCL-2 expression compared with that in cells treated with proanthocyanidins alone. Thus, when autophagy was inhibited, the apoptotic effect of proanthocyanidins was increased.
Discussion
In the present study, MTT assays were used to determine the effect of proanthocyanidins on the human gastric cancer MGC-803 cells and to calculate the IC 50 . The results showed that proanthocyanidins significantly inhibit MGC-803 cells in a dose-dependent manner.
To test whether the inhibition of MGC-803 cells by proanthocyanidins is related to the induction of autophagy, we used MDC staining to label autophagic vacuoles and transmission electron microscopy to observe autophagosomes, which are published methods to confirm the presence of autophagy (39). MDC staining showed that proanthocyanidin-treated cells exhibited increased autophagic vacuoles, and transmission electron microscopy confirmed the existence of autophagosomes in the cells. Therefore, from the morphology, we can preliminarily confirm that proanthocyanidins induce autophagy in MGC-803 cells. Microtubule-associated protein 1 light chain 3 (LC3) plays a key role in autophagy in mammalian cells. LC3 consists of soluble LC3-I and lipidated LC3-II. Under various stresses, such as hypoxia and drug treatment, cells initiate autophagy, and LC3-I undergoes a ubiquitination-like modification and processing to form LC3-II. Therefore, the level of LC3-II is positively correlated with the number of autophagic vacuoles. When autophagy occurs inside the cells, LC3-II increases significantly, and the detection of the change in LC3-II levels can accurately determine the amount of autophagy (40-43). When western blot analyses were used to determine the levels of the autophagy marker LC3, LC3-I levels decreased, whereas LC3-II increased in proanthocyanidin-treated MGC-803 cells, and both of these effects were dose-dependent. This confirmed that proanthocyanidin treatment induced autophagy in MGC-803 cells.
We next used flow cytometry to observe apoptosis of MGC-803 cells after treatment with proanthocyanidins. The proanthocyanidins significantly induced apoptosis in MGC-803 cells. According to published studies, proanthocyanidins induce apoptosis in multiple tumor cell types, and the above result is consistent with these reports (44,45). Apoptosis is initiated by p53, which activates relevant proteins to form a channel in the mitochondria that allows cytochrome c to be released into the cytoplasm, eventually activating caspase family proteases (46). Caspase family proteases are aspartate-specific cysteine-containing proteases that are important for apoptosis. Caspases are divided into apoptosis initiation factors and apoptosis effectors according to their function (47). Caspase 9 is an important initiation factor and can be activated by other proteins or by itself to activate a series of downstream effectors. Eventually, the cells undergo biochemical and morphological changes that lead to apoptosis (48). Our experiments have shown that proanthocyanidins activate caspase 9 and induce apoptosis in MGC-803 cells.

Figure 9. The effect of proanthocyanidins on autophagy and apoptosis when autophagy was inhibited. After treatment with 5 mM 3-MA, the expression of Beclin1 was decreased compared with that in control cells. After treatment with 40.7 µg/ml proanthocyanidins, the expression of Beclin1 increased significantly compared with that in control cells, **p<0.01. However, when the proanthocyanidins were added after treatment with 3-MA, Beclin1 expression was significantly reduced compared with that in cells treated with proanthocyanidins alone, △△p<0.01. (A) Data are expressed as the mean ± standard deviation (n=3). After treatment with proanthocyanidins, the level of Bcl2 decreased significantly compared with that in control cells, **p<0.01, whereas simultaneous addition of 3-MA significantly decreased Bcl2 expression compared with that in cells treated with proanthocyanidins alone, △△p<0.01. (B) Data are expressed as the mean ± standard deviation (n=3).
Molecular signaling pathways are closely involved in autophagy and apoptosis. The phosphatidylinositol 3-kinase (PI3K)/protein kinase B (PKB/AKT)/mammalian target of rapamycin (mTOR) pathway is one of the most-studied pathways and is well accepted as being associated with cell autophagy and apoptosis. PI3K phosphorylates phosphatidylinositol (4,5)-bisphosphate [PtdIns(4,5)P2] in the cytoplasmic membrane to generate phosphatidylinositol (3,4,5)-trisphosphate [PtdIns(3,4,5)P3], which recruits AKT to the inner side of the cytoplasmic membrane. AKT is then phosphorylated and activated by another protein kinase, 3-phosphoinositide-dependent protein kinase 1 (PDK1). Activated AKT further activates mTOR by inhibiting the tuberous sclerosis complex (TSC1/2), which is an inhibitor of mTOR. mTOR is a serine/threonine kinase that inhibits autophagy when activated (49). Similarly, apoptosis is also affected by the PI3K/AKT pathway. Activated AKT binds to Ser184 of the BCL-2 family member BAX. After phosphorylation, BAX is inactivated and can no longer promote the release of mitochondrial cytochrome c, thereby blocking the activation of caspases and inhibiting apoptosis (50). Activated AKT also phosphorylates Ser196 of caspase 9, inactivating it and thus directly reducing apoptosis (51). Multiple studies have shown that autophagy and apoptosis induced by a variety of drugs are associated with the PI3K/AKT/mTOR pathway (52-55). Western blot analyses showed that although proanthocyanidins did not affect the total amounts of PI3K, AKT, and mTOR, these compounds significantly reduced the amounts of p-PI3K, p-AKT and p-mTOR; in other words, proanthocyanidins reduced the activation of the PI3K/AKT/mTOR pathway. This may be one of the reasons that proanthocyanidins induce autophagy and apoptosis and is also consistent with other reports (56).
The results above showed that both autophagy and apoptosis are activated when proanthocyanidins induce cell death in the human gastric cancer MGC-803 cells and that the activation mechanism is associated with the inhibition of the PI3K/AKT/mTOR pathway.
To further explore the relationship between proanthocyanidin-induced autophagy and apoptosis, we used the classical autophagy inhibitor 3-methyladenine (3-MA) to determine the effect of autophagy on apoptosis. MTT assays found that after the addition of 3-MA, the cytotoxic effect of proanthocyanidins increased. When Hoechst 33342/PI double fluorescence staining was used to observe apoptosis after the inhibition of autophagy, apoptosis increased significantly. These results suggest that the inhibition of autophagy may increase apoptosis. To verify this hypothesis, real-time quantitative PCR was used to determine the mRNA levels of Beclin1 and BCL-2. Beclin1 plays a key role in autophagy in mammalian cells. Beclin1 is the mammalian homologue of the yeast protein ATG6/Vps30 and is located on human chromosome 17q21 (57). Beclin1 promotes the formation of autophagosomes by forming a complex with class III PI3K (58). The expression of Beclin1 is positively correlated with autophagy in multiple malignant tumor cell types (59-61). BCL-2 is a key regulator among the known apoptosis proteins and is negatively correlated with apoptosis induced by various signals (62,63). When BCL-2 increases, the BCL-2/BAX heterodimer interferes with the release of cytochrome c, thereby blocking the activation of the upstream caspase protease and in turn inhibiting apoptosis (64-66). The inhibition of autophagy by 3-MA not only decreased Beclin1 mRNA but also further decreased BCL-2 mRNA, which further confirmed that apoptosis increases significantly when proanthocyanidin-induced autophagy is blocked.
In summary, proanthocyanidins exhibit a significant inhibitory effect on human gastric cancer cell (MGC-803) proliferation in vitro and simultaneously activate autophagy and apoptosis to promote cell death. The mechanism is associated with interference of the PI3K/AKT pathway by proanthocyanidins and a change in the amount of the downstream autophagy proteins LC3 and Beclin1, as well as in the apoptosis proteins BCL-2 and caspase 9. Furthermore, when proanthocyanidin-induced autophagy is inhibited, apoptosis increases significantly and tumor cells undergo cell death. Therefore, as an active ingredient of natural products with low toxicity, proanthocyanidins can be used together with autophagy inhibitors to enhance cytotoxicity.
A Plan for the Delivery of Nursing Homes in Korea
With its rapid modernization and the unparalleled rate at which its society is aging, South Korea faces the need for a dramatic increase in its supply of elderly care services. Among these, nursing homes are considered an essential alternative provision because Korea can no longer rely on traditional familism or medical facilities for the care of its older people. It is necessary, therefore, to prepare a plan for the delivery of nursing homes in Korea. This paper has identified the elderly care context and analyzed existing elderly care facilities of Korea in terms of the supply and utilization rates of nursing homes, according to region and type of facility. On the basis of this analysis, a plan for the delivery of nursing homes in Korea has been proposed in order to improve the welfare status of older people as well as the efficient utilization of health care resources.
Introduction
Compared to that of other developed countries, the proportion of older people in South Korea is growing at an unparalleled speed. People aged 65 and over were 7.2% (3,395,000) of the total population in 2000 and are expected to total over 14% by 2019 (Korea National Statistical Office, 2001). Therefore, a rapid increase is anticipated in the number of older people who will need care. However, as Korean society modernizes, the average family size is decreasing and women, who have been the major 'caregivers' within the family, are working outside the home in larger numbers. As a result, it has become more difficult to find proper caregivers for older people in the family. Medical facilities like hospitals are a poor alternative, because hospital service for older people who need nursing care is not cost-effective. It has to be taken into account that the national health expenditure of Korea is increasing sharply. Consequently, nursing homes are considered an essential alternative because Korea can no longer rely on traditional familism nor medical facilities for the care of its older people. However, the ratio of available nursing home beds to older people in Korea is very low compared to that of developed countries such as the U.S, U.K and Japan. It is necessary, therefore, to prepare a plan for the delivery of elderly care services in Korea including the supply of nursing homes. This paper first identifies the elderly care context and analyzes elderly care facilities of Korea in terms of the supply and utilization rates of nursing homes according to region and type of facility. On the basis of this analysis, a plan for the delivery of nursing homes in Korea is proposed in order to improve the welfare status of older people and efficient utilization of national resources.
The Elderly Care Context in Korea
2.1 Increase in elderly population
Life expectancy at birth in South Korea increased from 65.8 years in 1980 to 75.9 years in 2000. In the same period, the total fertility rate declined from 2.8 to 1.47, and the overall crude death rate fell from 7.3 to 5.3 per 1000. These demographic trends suggest that the proportion of older people, while small at the moment compared to that of other developed countries, will continue to grow rapidly. The percentage of the population aged 65 and over increased from 5.9% in 1995 to 7.2% in 2000, due to the increase in the average life span and reduction in the birth rate. From the 'population pyramid' graphs ( Figure 1), it is anticipated that there will be a 'graying revolution' in Korea as the large middle-aged group moves into the elderly range in the near future. Japan is already undergoing a graying revolution, while India has longer to wait before it becomes an aging society.
According to the United Nations' definition, 'aging society' means that the percentage of older people is more than 7%, and 'aged society' means it is at least 14%. Korea became an aging society in 2000, and will be an aged society by 2019 (Table 1); it will take only 19 years for Korean society to make this leap, which is the fastest aging speed of any developed country ( Figure 2). This unparalleled speed of aging will bring about many problems related to elderly care.
Growth in the proportion of older people living alone
Older people living alone are much more likely to require formal care services from professional care agencies than are those who live with their families. The proportion of older people living alone in Korea appears to be increasing ( Table 2). It increased from 8.9% in 1990 to 16.2% in 2000. In other words, one sixth of Koreans aged 65 and over could be said to be living alone in 2000. Though this proportion is still low compared to those of western countries such as the US and UK, it is higher than that of Japan, which is one of the most aged societies in Asia. A national survey conducted among Koreans insured under the National Pension Program revealed that 73% of the non-elderly respondents said they would prefer to live apart from their children after their children's marriages (Choi, 1996:7). All these indications point to Korea's need to increase rapidly the amount of formal care services available for its older people.
Women's increasing participation in occupational and social activities
The rate of women's participation in occupational and social activities has increased steadily and is likely to go on increasing (Table 3). This phenomenon tends to make it more difficult to care for older people within the family because, up to now, women have been the major caregivers for frail older people in their families. Table 4 shows that the employment rate among Korean women in 1997 was almost the same as that of Japan.
Economic difficulties of older people
The National Pension program covering most Korean workers, which only started in 1988 and is to begin paying pensions in 2008, cannot provide benefits for current Korean retirees. As shown in Table 5, more than 40% of Koreans aged 65 and over are dependent on their children for their living expenses, while the major income sources of older people in Japan, the US and the UK are pensions.
Most elderly Koreans take it for granted that they will receive economic support from their children. However, contrary to this expectation, an increasing number suffer from financial difficulties because of their children's unwillingness or inability to provide economic support. These factors further illustrate Korea's need for a public care system for older people.
Health care system and national health expenditure
The Korean health care system is characterized by the dominance of private provision and finance. More than 80% of the system's total beds are provided by the investment of the private sector. Of the total health expenditures, more than 65% are financed by the private sector, mostly in the form of direct patient payments. This is the case even under universal health insurance. Older people have to pay about 20-30% of their medical costs,¹ like any other group, without any special exceptions. This makes it more difficult for them to cope with their frail health and poor economic status.
Another feature of the Korean health care system is that medical charges are based on a fee-for-service scheme. The more services provided, the higher the fee; therefore, many institutions exploit this system to increase earnings by providing excessive diagnostics, treatments, and profitable service items, even to elderly patients. One weakness of this system is that it can lead to higher national health expenditures.
Expenditures on health have risen rapidly in Korea over the last twenty years, due to rising income levels and the gradual expansion of health insurance coverage since 1977, the year the government introduced mandatory health insurance. Measured as a proportion of GDP, health expenditures have almost doubled, from 2.7% in 1970 to 5.1% in 1999 (OECD Health Data 2001). The increase in medical expenditures for older people is one reason for this. The share of total medical expenses accounted for by older people was 5.4% in 1985 but rose to 10.3% in 1993 (Kim, 1996). Considering the fact that medical services for older people are not fully developed, medical expenditures are expected to grow steadily in the future as the number of older people and the supply of medical services for this group increase. It is necessary, therefore, to control the medical expenses of older people for the sake of the national economy.
Delivery Patterns for Nursing Homes
3.1 Patterns of care for older people
Care services for older people in Korea vary from medical services in acute care hospitals to home care. They can be divided into two categories: formal and informal care services. Formal services are delivered by trained professionals and require payment. Informal services, usually delivered by nonprofessional family members, are becoming less common due to rapid modernization and changes in the family structure in Korea. As a consequence of this change, formal care services are emerging as an important source of elderly care.
Formal care services are classified into two types: institutional care, which provides older people with care services for long periods in institutions such as hospitals or nursing homes; and community care, which is sporadic care provided in the home, day care centers, and nursing homes in the community. Nowadays, community care, often referred to as "aging in place", is considered a good alternative for elderly care, because institutional care deepens the dependency of older people and separates them from their families. Furthermore, institutional care is considered to be expensive. However, the expansion of community care is limited: if an older person is very dependent and frail, community care is neither sufficient nor cost-effective. Figure 3, showing the relationship between community and institutional care, illustrates why the one cannot fully substitute for the other.
Institutional care is composed mainly of hospital and nursing home services. As shown in Table 6 and Figure 4, the number of hospital beds in developed countries is decreasing, while the number of nursing home beds is increasing. This is because hospital care is more expensive than nursing home care, and the care environment in the hospital is not suitable for elderly people receiving long-term care. In many developed countries, the provision of continuing care in hospitals has been questioned in terms of both suitability and economic viability. As a consequence, large reductions have been made in the number of hospital beds devoted to continuing care (OECD, 1996:296). These lessons learned in western countries indicate that Korea has to expand nursing home services rather than hospital services for elderly care.
Nursing homes in Korea
In general, long-term care facilities for older people in Korea are regulated by Elderly Welfare Laws. These facilities vary in type, providing residential services, nursing care, leisure services, and/or home-based care (Table 7). Some facilities provide multiple services. This paper will focus on nursing care facilities, rather than those that provide residential or leisure services. Elderly Care Hospitals, though included among nursing facilities, have also been excluded from this discussion because they mainly provide medical care and are regulated as hospitals under Medical Laws.
A nursing home (nursing facility) is a kind of institutional facility for older people (65+) who require continuing care due to their disabilities. In 2000, there were 9,312 nursing home beds in Korea, accommodating only 0.28% of the elderly population (Korea Association of Senior Citizen Welfare Institutions, 2002). Nursing homes are classified in two groups, according to the dependency level of the older people using these facilities: Intermediate Nursing Homes (INH) serve those requiring general nursing/social services, and Special Nursing Homes (SNH) serve frail older people, usually suffering from stroke or dementia. Each group is categorized again into two or three types according to the fee scale: Charge-free, Low-fee charging and Fullfee charging facilities (Table 7). To be admitted to Charge-free or Low-fee charging nursing homes, patients must be of low income, as assessed by a means test.
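As a quick arithmetic check on these supply rates, the ratio of beds to the elderly population can be computed directly; the sketch below uses the elderly population figure cited in the introduction (3,395,000 in 2000) and the comparison rates quoted in the text.

```python
# Nursing home bed supply rate = beds / elderly population (aged 65+).
beds_korea = 9_312
elderly_korea_2000 = 3_395_000  # 2000 figure cited earlier in the paper

supply_rate = beds_korea / elderly_korea_2000 * 100
print(f"Korea 2000: {supply_rate:.2f}% of older people")  # ~0.27%, i.e. ~0.28%

# Cited comparison rates (%): Japan 1999, US 2000, UK 1993.
for country, rate in {"Japan": 1.34, "US": 5.2, "UK": 2.13}.items():
    print(f"{country}: {rate}% ({rate / supply_rate:.0f}x Korea)")
```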
Planning for the Provision of Nursing Homes in Korea
Over the last several decades, there has been a steady increase in the number of nursing home beds. However, the proportion of nursing home beds to the number of older people in Korea is still quite low compared to that of other developed countries such as Japan, the US, and the UK. In Korea, it was 0.28% in 2000, whereas in Japan it was 1.34%² in 1999 (MHW, 2001), in the US it was 5.2% in 2000, and in the UK it was 2.13% in 1993. Moreover, the occupancy rate of these facilities in Korea is approximately 84.8% (see Table 8), which is somewhat low compared to Japan's occupancy rate of 99% in 1999. There are many reasons for the low occupancy rate of nursing homes in Korea: the traditional value of filial piety still prevails; most existing facilities are of very poor quality³; and most facilities do not accept older people who have children caring for them, in accordance with the admission criteria of the Livelihood Protection program⁴. In any case, Korea needs to raise both the supply and the occupancy rate of nursing homes as soon as possible, for the sake of elderly welfare and the efficient use of national health care resources.
At the present time, there is a greater need for Charge-free or Low-fee charging nursing homes than for Full-fee charging ones. This is because most elderly Koreans are indigent. As shown in Table 8, the occupancy rates of Charge-free and Low-fee charging facilities are higher than that of Full-fee charging ones, probably for this reason. Therefore, the government and society must play active roles in the provision of care for older people. Figure 5 shows that Charge-free facilities have made up the majority of nursing homes, and this will likely continue to be the case in the near future. However, the public sector will not be able to bear all of the rapidly increasing financial burden of elderly care indefinitely. Even in developed countries, continuing care is effectively a personal or family responsibility, with the state's role confined to providing a safety net for those with insufficient resources and a lack of family support (OECD, 1996:299). In the long run, Korea can expect to see a similar combination of public and private funding for its elderly care services.
According to a national survey (n=30,000) conducted in 2002, 4.4 percent of the elderly aged 60 and over preferred to reside in an elderly housing facility or nursing facility (Table 9), and the percentage was higher among urban (5.2%) than rural residents (3.0%). This result indicates that the demand for elderly care facilities in Korea is increasing in urban areas, although the overall demand is not as high as that in developed countries. The percentages of the elderly wanting charging (full-cost or low-cost) and free facilities were 1.6% and 2.8%, respectively. This means that Charge-free facilities are preferred to Full-fee charging or Low-fee charging facilities for the care of older people. Especially in rural areas, the preference rate for Full-fee charging facilities was only 0.6 percent (Korea National Statistical Office, 2003:57), and their occupancy rate was very low compared to that of Charge-free care facilities (Table 8). A major reason for the low occupancy rate of Full-fee charging facilities, despite their low supply rate, has been the high cost ($1,150/month in 2002) of admission to those facilities. A number of residents in Full-fee charging facilities moved into Low-fee charging ones because they could not afford to pay the fee, although they were not eligible for Low-fee charging facilities. Consistent with this, a survey (n=202) conducted in 2001 in Daegu, one of the 7 metropolitan cities in Korea, revealed that 49 percent of the elderly expected monthly living expenses of $42-$830, while only 11.5% expected $1,250 or more (Moon, 2001:44). Full-fee charging facilities will be commonly used only when senior Koreans can afford high expenses based on national pension and long-term care insurance systems. Figure 5 also shows that the proportion of Special Nursing Homes has expanded sharply since they were introduced in 1997. This means that there are many older people with a high level of dependency. These facilities primarily serve very old people suffering from severe chronic diseases such as paralysis or dementia. The rapid increase in the oldest segment of Korea's population will require a significant increase in the number of Special Nursing Homes.
As shown in Table 8, the occupancy rate of nursing homes in metropolitan areas (90.2%) is higher than that in rural areas (61.7%), even though the metropolitan supply rate of nursing homes (0.31%) is also higher than the rural supply rate (0.26%). This is because housing problems⁵ and phenomena such as the rise of individualism and the nuclear family system have driven older people into these institutions. The rapid modernization of metropolitan areas suggests that the need for nursing homes there is even more urgent than in rural areas⁶. Of course, a large number of nursing facilities will be needed in rural areas as well; the number of elderly over the age of 70 living alone in rural areas (190,000) is larger than that in urban areas (165,000) (Korea National Statistical Office, 2001)⁷.
On the basis of the discussion above, a strategic plan for the delivery of nursing homes in Korea has been proposed, as shown in Table 10. The scheme is divided into a short-term plan and a long-term plan 8. The timing of provision is, of course, closely related to the types, providers and locations of care facilities for older people. The short-term plan calls for the provision of many Charge-free nursing homes, funded by the public sector, and Low-fee charging nursing homes, funded by the public and private sectors, in metropolitan areas. In the long run, many Charge-free nursing homes funded mainly by the public sector will be needed in rural areas, and Low-fee charging nursing homes funded by the public and private sectors will be needed in both areas. Full-fee charging nursing homes, on the other hand, will not be in great demand in Korea for the time being.
Conclusion
This study began with a discussion of the problems Korea faces in the care of its older people. As mentioned above, Korean society is experiencing rapid modernization and an unparalleled rate of aging, so it will be necessary to dramatically increase the supply of elderly care services in the near future. Among these services, nursing homes are considered an essential alternative provision, because Korea can no longer rely on traditional familism or on medical facilities for the care of its older people. The number of nursing home beds available today is extremely low, making the provision of a large number of nursing homes an important priority for Korea.
Based on the existing elderly care context, a strategic plan for the delivery of nursing homes in Korea has been proposed. The plan addresses the issues of progressive development (when), types of facilities (what), number of beds (how many), funding source (who) and service area (where) ( Table 10). Furthermore, the plan has been divided into its short-term and long-term components, because the timing of provision can be a precondition for the types, numbers, payers and locations of elderly care facilities.
Considering that most elderly Koreans are indigent, the government and society will have to participate actively in the provision of nursing home care. In the long run, however, market forces will lead to more private funding of elderly care services. Many developed countries have pursued the privatization of nursing home services for the sake of economic efficiency. Even in the UK, with its collective social security norms and entitlement approach to health care, the state has played a relatively minor role in the direct provision of long-term care since the emergence of the Thatcher government of the Conservative Party in 1979. In order to utilize national healthcare and welfare resources efficiently, privatization is necessary to a certain extent in providing elderly care services. Consequently, the Korean welfare state will have to develop by means of a shared responsibility on the part of the state, the community and the family for providing care for the elderly.
With regard to community care, Korean society needs to develop domiciliary care programs such as respite care or daycare services, while expanding institutional care programs. This would prevent unnecessary or premature institutionalization, in addition to ensuring more effective and efficient care for older people in many cases. Domiciliary care services that can supplement or support the family's care function would be especially desirable in Korean society, in that they would be more consonant with upholding the traditional value of filial piety. In urban areas, public daycare centers are needed most urgently, and in rural areas, public home-based care is more urgently needed. In the long term, however, rural areas require nursing homes to accommodate the large number of frail older people living alone there. In all areas, the concept of "ageing in place" or "community care" should be a priority.
As far as the types of nursing care facilities are concerned, special nursing homes, which serve frail older people suffering from chronic illness, are needed more than intermediate nursing homes. The history of care services for the elderly in developed countries like Japan, the US, and the UK can be summarized as one of specialization. As their economies developed and their societies aged, homes for poor older people were replaced by nursing homes delivering specialized care services for the elderly. At the same time, the burden of national health expenditures and the need for a better care environment led to the shift from a large number of hospital beds for older people to nursing home beds. These experiences indicate that Korea will also need a significant number of special nursing homes providing special care for frail older people.
Along with these considerations, the vitalization of nursing homes will be important in the Korean context. The degree of vitalization can be related to the occupancy rates of nursing homes: if the occupancy rate is not high enough, it is not necessary to increase the number of nursing home beds. As mentioned above, the current occupancy rate of Korean nursing homes is low; their vitality must therefore be addressed. For the proper development of nursing homes in Korea, government subsidies must be increased, the care environment must be improved, nursing homes must be advertised properly, admission criteria must be amended reasonably, and chronic elderly patients in hospitals must be transferred to nursing homes.

Table 10. Strategic Plan for the Provision of Nursing Homes in Korea
There is no overall blueprint for the allocation of elderly care facilities that would allow the efficient utilization of Korea's limited welfare resources. Individual researchers propose different aspects of elderly care facilities related only to their own fields of study, and there is no generally agreed supply plan. The present study proposes short-term and long-term supply plans for elderly care facilities according to type, provider, and location, and can serve as a preliminary criterion for setting priorities in allocating national welfare resources. Further research on the quality of care facilities is recommended in order to improve the care environment for the frail elderly. In all cases, the planning of care facilities for older people should be based on patient needs, cost effectiveness, and care efficiency.
Notes

1. As of 1994, 95.2% of the Korean population, covered by the compulsory medical insurance program, had to pay 20-30% of the medical fee when they used medical services, whereas the 4.8% covered by the Medical Assistance program did not need to pay at all. The Medical Assistance program is a public assistance program for the poor. As of 1994 this program covered 21% of all those aged 65 and over.
2. Full-fee-charging nursing home beds were not counted.
3. As indicated sardonically by Johnson and Grant (1985), these facilities are as good as 'human junkyards', 'houses of death' and 'warehouses for the dying' (Olson, 1994:33).
4. This is given to those who are of low income as judged by a means test, regardless of age. The level and types of benefits vary according to the status of the recipients.
5. The proportion of people owning their own houses in urban areas is no more than 67.7%, whereas that in rural areas is over 78.9% (Korea National Statistical Office, 2001).
6. Surveys on nursing homes in rural areas (2002) show that 50-60% of residents in Low-fee and Full-fee charging nursing facilities have come from urban areas. This means that nursing home beds in urban areas are in very short supply.
7. By contrast, the number of elderly people over 70 in rural areas (828,538) is less than in urban areas (1,167,146) (Korea National Statistical Office, 2001).
8. One of the most important factors in the supply of elderly care facilities is the affordability of the nursing home fee. In developed countries such as Japan, the U.K., and the U.S., the cost is typically shared by the individual and the government, so the growth of elderly income as well as the establishment of a national payment system for welfare costs is needed for the development of nursing facilities. The national pension program in Korea, which started in 1988 and will pay more than 2.5 million people by the year 2008, will contribute to the growth of the economic power of the elderly. Meanwhile, a long-term care insurance system, which could cover a large portion of nursing fees, is under consideration and is expected to start in 5-10 years. These two factors will underpin the development of the nursing industry. On this basis, the borderline between the short-term and long-term plans is set at 2008-2013 in this paper.
Journal of Cardiovascular Magnetic Resonance
Aims: Obese subjects with insulin resistance and hypertension have abnormal aortic elastic function, which may predispose them to the development of left ventricular dysfunction. We hypothesised that obesity, uncomplicated by other cardiovascular risk factors, is independently associated with aortic function. Methods and results: We used magnetic resonance imaging to measure aortic compliance, distensibility and stiffness index in 27 obese subjects (BMI 33 kg/m²) without insulin resistance and with normal cholesterol and blood pressure, and 12 controls (BMI 23 kg/m²). Obesity was associated with reduced aortic compliance (0.9 ± 0.1 vs. 1.5 ± 0.2 mm²/mmHg in controls, p < 0.02) and distensibility (3.3 ± 0.01 vs. 5.6 ± 0.01 × 10⁻³ mmHg⁻¹, p < 0.02), as well as a higher stiffness index (3.4 ± 0.3 vs. 2.1 ± 0.1, p < 0.02). Body mass index and fat mass were negatively correlated with aortic function. Leptin was higher in obesity (8.9 ± 0.6 vs. 4.7 ± 0.6 ng/ml, p < 0.001) and also correlated with aortic measures. In multiple regression models, fat mass, leptin and body mass index were independent predictors of aortic function. Conclusion: Aortic elastic function is abnormal in obese subjects without other cardiovascular risk factors. These findings highlight the independent importance of obesity in the development of cardiovascular disease.
Introduction
Obesity affects approximately 300 million people worldwide, and another 750 million are believed to be overweight [1], representing one of the largest health care challenges of our time. Obesity is associated with high levels of adiposity, significantly increased levels of adipokines such as leptin [2] and elevated levels of the inflammatory marker C-reactive protein (CRP) [3]. Landmark studies have linked obesity with a higher risk of developing heart failure [4].
Subjects with obesity have altered aortic function [3,5,6]. Physiologically, the aorta maintains low left ventricular after-load, promotes optimal sub-endocardial coronary blood flow [7], and transforms pulsatile into more laminar blood flow. Increased aortic stiffness leads to higher left ventricular systolic pressures and diminished sub-endocardial blood supply [7], and may ultimately contribute to left ventricular dysfunction [8,9]. These changes in arterial mechanics are also associated with coronary artery disease [10], hypertension [11,12], diabetes [13,14], and hypercholesterolaemia [15][16][17], disorders which are themselves more common in obesity. It has therefore been difficult to determine the independent effect of obesity on vascular function.
In this study, we employed the unique features of cardiovascular magnetic resonance imaging (direct visualisation of cardiac and aortic mechanics, with high temporal and spatial resolution, even in subjects with large subcutaneous thoracic fat deposits [18,19]) to test the hypothesis that obesity is independently associated with abnormal aortic function in adults without confounding factors such as diabetes, insulin resistance, hypertension, or coronary artery disease.
Subjects
Control and obese subjects were recruited from the general population of Oxfordshire via newspaper advertisements. The study was approved by the local ethics committee, and subjects gave their informed consent prior to participation.
Blood assays
Participants had fasting venous blood samples collected to assess hepatic and renal function, full blood count, lipid profile, insulin, glucose, C-reactive protein (CRP) and leptin. The lipid profile was based on total cholesterol, high density lipoproteins (HDL), triglycerides and a calculated low density lipoprotein (LDL) level [20]. Leptin (LINCO Research Inc., St. Charles, Missouri) and CRP (MP Biomedicals, Orangeburg, NY) were measured using commercially available ELISA techniques.
Exclusion criteria
To investigate the independent effect of obesity on aortic function, we excluded patients with cardiovascular risk factors or factors that might contribute to sub-optimal vascular function. Hypertensive subjects were identified and excluded based on the Joint National Council on Prevention, Detection, Evaluation and Treatment of High Blood Pressure definitions [21]. Diabetics were identified from medical history or a fasting venous blood glucose level ≥ 6.7 mmol/L [22]. Furthermore, the homeostasis model assessment (HOMA) formula was used to calculate an insulin resistance (IR) score [23]. Men with an IR score > 2.35 and women with a score > 1.88 were excluded based on the European Group for the study of Insulin Resistance (EGIR) guidelines [24]. Smokers, subjects with a history of cerebrovascular or coronary artery disease, those with total blood cholesterol levels > 6 mmol/L and those with abnormal renal, hepatic or haematological function were not included. Additionally, those with contraindications to CMR were not recruited.
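The screening rule described above can be sketched in a few lines of code. This is a minimal illustration, assuming the standard HOMA-IR formula (fasting insulin in µU/mL multiplied by fasting glucose in mmol/L, divided by 22.5) and the EGIR cut-offs quoted in the text; the function and variable names are ours, not from the study.

```python
def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Standard HOMA-IR estimate of insulin resistance."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

def excluded_for_insulin_resistance(sex: str, insulin: float, glucose: float) -> bool:
    """Apply the sex-specific EGIR cut-offs used in this study (>2.35 men, >1.88 women)."""
    score = homa_ir(insulin, glucose)
    threshold = 2.35 if sex == "male" else 1.88
    return score > threshold

# Example: a male volunteer with fasting insulin 8 uU/mL and glucose 5.0 mmol/L
print(excluded_for_insulin_resistance("male", 8.0, 5.0))  # HOMA-IR ~1.78 -> False
```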
Assessment of body size
All participants were weighed on an electronic Seca scale and height was measured on an adjustable Seca standing stadiometer. These measures were used to calculate body mass index. Waist and hip circumferences were measured using a tape measure. Bioelectric impedance using the Bodystat ® 1500 was used to assess fat mass.
Cardiovascular magnetic resonance imaging
CMR studies were performed on a 1.5 Tesla clinical MR system (Siemens Sonata, Erlangen, Germany) as previously described [25]. For aortic imaging, a 2-element array surface coil on the chest was combined with a spine-coil array. Aortic indices were assessed using TrueFISP cine sequences with the following parameters: TR/TE 2.8 ms/ 1.4 ms and 15 lines per phase with a temporal resolution of 24 frames per second. Sampling bandwidth was 930 Hz/pixel with a matrix of 192 × 118 over a FoV of 380 × 332 mm, resulting in an in-plane resolution of 1.97 × 2.81 mm. Aortic cine images were acquired in two transverse planes, based on sagittal-oblique pilots ( Figure 1a): at the pulmonary arch for the ascending and descending aorta and 10 cm below the diaphragm for the distal descending aorta (Figure 1b and 1c). All participants had their resting blood pressure taken immediately before the cardiac magnetic resonance study. For cardiac analysis localiser images were acquired followed by vertical long axis (VLA) and horizontal long axis (HLA) cine images. A short axis stack of contiguous images was then acquired (slice thickness 7 mm, inter-slice gap 3 mm).
Aortic cross-sections were manually contoured using CMR Tools® (Imperial College, London, UK). Vascular compliance, distensibility and stiffness index were calculated as described previously [18]. Aortic compliance is the absolute change in area per unit of pressure, whereas distensibility is the relative change per unit of pressure. The stiffness index examines the logarithmic relationship between pressure and the relative change in aortic cross-sectional area, which takes into account the variation in background arterial distending pressure. Mean aortic compliance, distensibility and stiffness index were calculated by averaging the regional measures. Left ventricular volumes and mass were obtained from the short axis stack by manually contouring end-diastolic and end-systolic endocardial and epicardial borders from base to apex, using Siemens analytical software (ARGUS©). Left ventricular end-diastolic volume (EDV), end-systolic volume (ESV), left ventricular mass (LVM), ejection fraction (EF), stroke volume (SV) and cardiac output (CO) were calculated and, where appropriate, normalised for body size.
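The three aortic indices follow directly from the definitions above. The sketch below assumes one common area-based formulation of the stiffness index, β = ln(P_sys/P_dia) divided by the relative area change; the exact formulation in reference [18] may differ, and the numerical values in the example are illustrative, not study data.

```python
import math

def aortic_indices(a_sys: float, a_dia: float, p_sys: float, p_dia: float):
    """Compute the three aortic elastic indices from systolic/diastolic
    cross-sectional areas (mm^2) and brachial pressures (mmHg)."""
    d_area = a_sys - a_dia
    pulse_pressure = p_sys - p_dia
    compliance = d_area / pulse_pressure                  # mm^2/mmHg: absolute change per unit pressure
    distensibility = d_area / (a_dia * pulse_pressure)    # mmHg^-1: relative change per unit pressure
    stiffness_index = math.log(p_sys / p_dia) / (d_area / a_dia)  # dimensionless
    return compliance, distensibility, stiffness_index

# Illustrative values: areas 640 vs 600 mm^2 at a pressure of 120/80 mmHg
c, d, beta = aortic_indices(640.0, 600.0, 120.0, 80.0)
print(f"compliance={c:.2f} mm^2/mmHg, distensibility={d*1e3:.2f} x10^-3 mmHg^-1, beta={beta:.2f}")
```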
Statistical analysis
Statistical analysis was carried out using SPSS 11.0. Arterial compliance, distensibility, stiffness index and myocardial mass were not normally distributed and were therefore investigated using the non-parametric Mann-Whitney test and Spearman's analysis for correlation. All values are reported as mean ± standard error of the mean (SEM), and a p value < 0.05 was considered significant. Multiple linear regression, correcting for gender and height, was carried out to determine predictors of aortic function.
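The same non-parametric comparisons can be reproduced in Python with scipy; this is a sketch with simulated stand-in data (the group sizes match the study, but the values are hypothetical, not the published measurements).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical compliance measurements (mm^2/mmHg) for obese and lean groups
obese = rng.normal(0.9, 0.3, 27)
lean = rng.normal(1.5, 0.4, 12)
bmi = rng.normal(33, 3, 27)

# Between-group comparison for a non-normally distributed index
u_stat, p_value = stats.mannwhitneyu(obese, lean, alternative="two-sided")

# Rank correlation between BMI and compliance within the obese group
rho, p_corr = stats.spearmanr(bmi, obese)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_value:.3f}; Spearman rho={rho:.2f}, p={p_corr:.3f}")
```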
Results
Demographic characteristics of the study groups are shown in Table 1. The ages and lean mass of obese and lean subjects were not different. The obese group was shorter, with a 1.4 times higher BMI (p < 0.01) and 1.9 times higher fat mass (p < 0.01). There was no significant difference between groups in waist-hip ratio, systolic (SBP) or diastolic blood pressure (DBP), or mean arterial pressure.
Blood parameters (Table 2)
There was no significant difference in total or LDL cholesterol levels between groups. Triglyceride levels were 31% higher in the obese (p < 0.05) and HDL levels 13% lower (p < 0.05). Leptin levels were 91% higher in the obese (p < 0.01) whereas CRP levels were not significantly different. Fasting glucose levels were similar between groups. Insulin levels were higher in the obese, but HOMA insulin resistance scores did not vary. BMI correlated positively with leptin (r = 0.8, p < 0.001) and CRP (r = 0.6, p < 0.001) and negatively with HDL (r = -0.7, p < 0.001). Fat mass showed positive correlations with leptin (r = 0.8, p < 0.001) and CRP (r = 0.5, p = 0.001), and negative correlation with HDL cholesterol (r = -0.5, p < 0.001).
Left ventricular function
There was no difference in ejection fraction between groups. The control cohort was taller and had larger cardiac volumes. However, there were no significant differences between cohorts in SV, EDV, ESV, and LVM indexed for height (Table 3).

Figure 1. CMR image in coronal-sagittal orientation indicating measurement levels in the aorta (a). AAO indicates ascending aorta; DAO-P, proximal descending aorta; DAO-D, distal descending aorta. Transverse CMR images demonstrating the ascending and proximal descending aorta (b, c) and the distal descending aorta (d, e) in systole and diastole.
Aortic function and obesity (Table 4)
Obesity was associated with a significant reduction in compliance in the proximal descending thoracic aorta and the distal descending abdominal aorta (Figure 1). Furthermore, there was a corresponding decrease in distensibility of the proximal descending aorta and distal descending aorta. Stiffness index (β) was significantly higher in the obese at the level of the distal descending aorta only. There was no significant difference between groups in aortic compliance, distensibility or stiffness index in the ascending aorta.
Discussion
In this study, cardiovascular magnetic resonance imaging revealed significant changes in aortic mechanical function in an obese population without hypertension, diabetes, insulin resistance or hypercholesterolaemia. The descending aorta had significantly lower compliance, distensibility and a higher stiffness index -all indicators of decreased mechanical and intrinsic elastic function. This functional abnormality strongly correlated with BMI, fat mass, leptin, waist circumference and HDL levels. Even after adjustment for the potential confounders of gender and height, fat mass emerged as the strongest predictor of decreased aortic elasticity, closely followed by leptin, BMI and HDL.
Figure 2. Mean aortic compliance had a negative correlation with (a) body mass index (BMI), (b) fat mass and (c) leptin.
Conflicting reports on the relationship between obesity and aortic stiffness have been published. Oren et al [30] reported increased aortic compliance in obese subjects compared with lean controls, and Raison et al [32] demonstrated reduced vascular peripheral resistance in obesity. More recent studies have evaluated arterial distensibility in peripheral vessels [3] and pulse wave velocities [33] and suggest a negative correlation between fat mass and aortic compliance, more consistent with an adverse impact of obesity on the vasculature.
Oren et al used diastolic blood pressure decay and pulse pressure relative to stroke volume as surrogate measures of compliance of the whole aorta. These were measured by placing a pressure catheter in the ascending aorta. Magnetic resonance imaging has the advantage of studying changes in aortic compliance in different segments of the aorta. Using CMR, Danias et al [5] studied the ascending aorta in an obese population with cardiac risk factors and reported no difference in compliance compared to controls. However, they did find a reduction in elasticity of the abdominal aorta. They hypothesised that the changes may have been due to physical compression by abdominal fat or structural changes in the vessel wall. As the study included subjects with cardiac risk factors these may also have independently influenced vascular function. Our study demonstrates that changes in distensibility occur in the descending thoracic aorta as well as the abdominal aorta and are independent of abdominal size. These findings suggest the changes in aortic function are less likely to be due to physical compression from abdominal fat. Furthermore, our cohort did not have cardiac risk factors, which suggests obesity has an independent impact on vascular function.
Similar to Danias et al [5] we found no change in function in the proximal aorta. The precise reason for the proximal sparing of the vessel remains unclear. It is possible that aortic dysfunction in obesity begins distally with an ascending pattern of progression. The aorta is a physiologically heterogeneous vessel with elastin:collagen ratios decreasing distally along its length. Regions with higher proportions of elastin have physiologically greater abilities to stretch and recoil. Impairment of vascular elasticity might commence in vessel sections physiologically less compliant [34], and this might then affect the entire arterial tree if obesity is sustained.
Although we excluded all subjects with raised glucose or insulin resistance, our population was hyperinsulinaemic. In work done by Ferrannini et al [35], it was recognised that although insulin hypersecretion can occur in adults with uncomplicated obesity, the prevalence of insulin resistance is low. Further, it was suggested that in the obese with no evidence of insulin resistance, the risk for the development of cardiovascular disease might differ from that seen in the insulin resistant patient. Additionally, work done during the San Antonio Heart Study [36] demonstrated that, during an eight year prospective trial, subjects with high HOMA scores (i.e. with evidence of insulin resistance) were the ones at highest risk for cardiovascular events.

Figure 3. Mean aortic distensibility correlated negatively with (a) body mass index (BMI) and (b) fat mass.
The reduced aortic compliance and distensibility seen in individuals with uncomplicated obesity was unrelated to inflammatory status, as CRP was not correlated with aortic function. Anthropometric parameters and leptin were the strongest predictors of aortic function and may therefore be more important in the pathogenesis of early aortic disease. Elevated leptin has been shown to increase atherosclerotic risk [37,38]. Knudson et al [39] demonstrated the presence of leptin receptors on coronary artery endothelium and showed that, through increased endothelial oxidative stress, hyperleptinaemia resulted in significant arterial endothelial dysfunction. Additionally, Zarkesh-Esfahani et al [40] have demonstrated that high leptin levels may lead to the activation of tumour necrosis factor alpha (TNFα). TNFα has been shown to decrease eNOS production and consequently increase vascular tone [41]. We did not measure TNFα, but it is conceivable that chronically elevated leptin levels indirectly impair vascular elastic function via TNFα.
Abnormal aortic function is an independent predictor of the development of coronary artery disease and stroke [42], as well as left ventricular dysfunction. Interestingly, cardiac changes are not yet evident in our obese cohort despite a mean age of forty-nine. The development of cardiac dysfunction may have been delayed by the absence of other risk factors, or the selection of subjects with uncomplicated obesity may have identified a specific group with adaptive processes that compensate for changes in aortic function. It would be of interest to determine whether cardiovascular disease and risk factors in obese individuals predispose them to further decline in aortic function, and how aortic dysfunction progresses over time in uncomplicated obesity.
Our study is limited by a relatively small sample size, and these findings need to be investigated further in larger cohorts with uncomplicated obesity. The lack of variation in left ventricular function between the obese and lean subjects has been demonstrated in other studies [43]. However, with larger sample numbers to facilitate gender and obesity subgroup analysis on the basis of increasing BMI, a pattern towards worsening left ventricular function might have been noted. As changes in aortic distensibility are seen so early, it is possible that genetic factors are relevant to changes in aortic distensibility in obesity. Data on family history of cardiovascular disease were not available in our cohort, and more detailed work will be required to investigate the possible contribution of inherited factors. Fat mass distribution is of interest with respect to cardiovascular disease risk [44] and can be assessed with magnetic resonance imaging. Future magnetic resonance research could incorporate these measures to determine how adiposity distribution contributes to changes in aortic function. This research could also study other indices of aortic function, such as pulse wave velocity, and more refined assessments of blood pressure, including use of central aortic pressure. As leptin is produced predominantly in adipocytes, a reduction in fat mass, rather than absolute weight reduction, might be more efficacious in restoring normal aortic function in this group of patients.
CMR is an excellent imaging modality for non-invasive quantitative assessment of vascular mechanics in a clinical study setting, but might prove impractical for screening for increased aortic stiffness in the general obese population. Our study suggests that fat mass and BMI have predictive potential for central arterial dysfunction. Unlike HDL and leptin measurements, which, though predictive, necessitate venepuncture and laboratory testing, BMI and fat mass are both easily measured with scales, callipers or bioelectric impedance. Earlier appreciation of the vascular risk posed by uncomplicated obesity encourages earlier and more aggressive treatment, thus reducing the morbidity and mortality associated with excess body weight.
Investigation of the Relationship between Academic Self-Concept and Academic Self-Efficacy of University Students Receiving Sports Education
In this study, it was aimed to examine the relationship between the Academic Self-Concept and Academic Self-Efficacy of university students studying in the field of sports sciences. The population of the research consists of a total of 619 students from the 2nd, 3rd, and 4th grades of Karabük University Hasan Doğan School of Physical Education and Sports, while the sample group consists of a total of 241 students, 88 of whom are female and 153 male. The "Personal Information Form", the "Matovu Academic Self-Concept Scale" developed by Liu and Wang (2005), later adapted for university students by Matovu (2014) and adapted into Turkish by Cantekin and Gökler (2019), and the "Academic Self-Efficacy Scale" developed by Kandemir (2010) were used as data collection tools. The data obtained were analyzed with the SPSS-24 package program, using Pearson correlation analysis, Independent-Samples t-test analysis, One-Way ANOVA (one-way analysis of variance), and Tukey multiple comparison. According to the gender of the students, there is a significant difference in the Academic Self-Efficacy Scale (ASES) and its sub-dimension Self-Efficacy for Academic Effort, but no significant difference in the other sub-dimensions. According to grade level, there is a significant difference in the total ASES score and its sub-dimension Self-Efficacy for Handling Academic Problems, while there is no significant difference in the other sub-dimensions.
Self-efficacy beliefs shape activities such as how long individuals can withstand obstacles and deterrents (Şeker, 2016; Yalmacı & Aydın, 2014). People with high self-efficacy may be more comfortable and productive when faced with highly difficult situations. People with low self-efficacy perceive difficult situations as even harder than they really are, and such thinking increases anxiety and stress and narrows the person's perspective on solving the problem. Accordingly, self-efficacy has been found to strongly affect the success of individuals (Pajares, 2002).
This concept, which has developed in the field of social psychology, has started to be used in other disciplines and fields over time. One of them is the field of education and learning. According to Bandura, who connects Academic Self-Efficacy to Self-Efficacy theory, Academic Self-Efficacy is the belief that an individual can be successful in an academic subject area (Bandura, 1997). Academic Self-Efficacy focuses on the ability of individuals to successfully fulfill their academic duties and responsibilities (Booth, Abercrombie, & Frey, 2017).
When the literature is examined, it is seen that studies on the Academic Self-Concept have mostly been conducted with students at the primary and secondary education levels, and that they address its relationships with academic achievement, attitude towards school and courses, and problem posing and solving skills (Korkmaz & Kaptan, 2002; Yüksel, 2003; Deringöl, 2019). With the Matovu Academic Self-Concept scale adapted in 2019, it has become possible to measure the Academic Self-Concepts of university students in Turkey (Cantekin & Gökler, 2019). On the other hand, studies on Academic Self-Efficacy have mostly been conducted with teacher candidates and address the relations between Academic Self-Efficacy and academic motivation, academic success, and academic procrastination (Albayrak, 2014; Şeker, 2016; Gündoğan & Koçak, 2017). It is very difficult to find studies in which Academic Self-Concept and Academic Self-Efficacy are used together in the national and international literature (Wang & Neihart, 2015). In the field of physical education and sports sciences, no study was found in which both concepts were used together. For this reason, it is thought that, owing to its originality, this study will shed light on future studies.
Purpose of the Research
The aim of the study is to examine the relationship between Academic Self-Concept and Academic Self-Efficacy of university students studying in the field of sports sciences in terms of some variables. For this purpose, answers to the following questions were sought: (1) Is there a relationship between the students' Academic Self-Concept and their Academic Self-Efficacy?
(2) Is there a significant difference between Matovu Academic Self-Concept and Academic Self-Efficacy according to the gender of the students?
(3) Is there a significant difference between Matovu Academic Self-Concept and Academic Self-Efficacy according to the departments of the students?
(4) Is there a significant difference between Matovu Academic Self-Concept and Academic Self-Efficacy according to the grade levels of the students?
Method
In this section, information about the research model, the sample group, the data collection tools and the analysis of the data are given.
Research Model
This research is an example of the relational screening model, which is one of the general screening models. In relational screening models, also called correlational, the co-variance of two or more variables is examined (Büyüköztürk, Çakmak, Akgün, Karadeniz, & Demirel, 2016).
Sample Group
The population of the research consists of a total of 619 students from the 2nd, 3rd, and 4th grades of Karabük University Hasan Doğan School of Physical Education and Sports, while the sample group consists of a total of 241 students, 88 of whom are female and 153 male. Table 1 presents the frequency and percentage distributions of the characteristics reflecting the students' personal information. Of the students participating in the research, 36.5% (n = 88) were female and 63.5% (n = 153) were male; 37.8% (n = 91) were physical education and sports teaching students, 29.0% (n = 70) sports management students, and 33.2% (n = 80) coaching education students; 51.0% (n = 123) were 2nd grade students, 29.1% (n = 70) 3rd grade students, and 19.9% (n = 48) 4th grade students.
Data Collection Tools
Within the scope of the research, data were collected through the "Personal Information Form", "Matovu Academic Self-Concept Scale" and "Academic Self-Efficacy Scale" Google Form.
Personal Information Form
The Personal Information Form developed by the researchers includes gender, department and grade level variables.
Matovu Academic Self-Concept Scale
The scale, developed by Liu and Wang (2005), later adapted for university students by Matovu (2014) and adapted into Turkish by Cantekin and Gökler (2019), consists of a total of 20 items collected in two dimensions: academic confidence and academic effort. The Cronbach alpha internal consistency coefficients for the dimensions were obtained as 0.960 and 0.964, respectively, by the researchers who adapted it into Turkish, and the coefficient for the entire scale was calculated as 0.930. Factor loadings ranged from .722 to .963, and adjusted item-total correlations ranged between .433 and .800. The scale contains reverse coded items.
Academic Self-Efficacy Scale
The scale developed by Kandemir (2010) consists of a total of 19 items collected in three sub-dimensions: self-efficacy for handling academic problems, self-efficacy for academic effort, and self-efficacy for academic planning. The Cronbach alpha internal consistency coefficients were found by the scale's developer to be .90 for the first factor, .78 for the second factor, .77 for the third factor, and .92 for the whole scale. There are no reverse coded items in the scale.
Analysis of Data
Within the scope of the research, the distribution of the data set was first examined in line with the students' answers to the data collection tools. To this end, the skewness and kurtosis coefficients were calculated. When the skewness-kurtosis coefficients calculated for the normality assumption of the variables were examined, it was determined that the coefficients took values between -2 and +2 (Matovu Academic Self-Concept Scale total: skewness = -0.590, kurtosis = 0.899; its sub-dimensions Academic Confidence: skewness = -0.440, kurtosis = 0.184 and Academic Effort: skewness = -0.652, kurtosis = 1.007; Academic Self-Efficacy Scale total: skewness = -0.505, kurtosis = 1.579; its sub-dimensions Self-Efficacy for Handling Academic Problems: skewness = -0.454, kurtosis = 1.169, Self-Efficacy for Academic Effort: skewness = -0.599, kurtosis = 1.782, and Self-Efficacy for Academic Planning: skewness = -0.465, kurtosis = 0.771). It was thus observed that the totals and sub-dimensions of both scales showed normal distribution. In order to examine the assumption of homogeneity of variance, Levene's test was applied.
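The moment-based normality screen described above can be reproduced in Python; this is a minimal sketch assuming the -2 to +2 acceptance band used in the study, with hypothetical scores standing in for the actual survey data.

```python
import numpy as np
from scipy import stats

def normality_by_moments(scores: np.ndarray) -> bool:
    """Flag a distribution as acceptably normal when both skewness and
    (excess) kurtosis fall within the -2..+2 band used in this study."""
    skew = stats.skew(scores)
    kurt = stats.kurtosis(scores)  # Fisher definition: 0 for a normal distribution
    return -2 <= skew <= 2 and -2 <= kurt <= 2

# Hypothetical total scores for 241 respondents
scores = np.random.default_rng(1).normal(75, 8, 241)
print(normality_by_moments(scores))  # expected: True
```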
Table 2 presents the results of Levene's statistic, conducted to determine whether the score distributions of the scales are homogeneously distributed; it is seen that the scores obtained from the scales are homogeneously distributed. When the structure of the score distributions obtained from the students' data is examined, it meets the assumptions of normality and homogeneity, is scaled at an interval level, and shows a continuous distribution, thereby meeting the assumptions of parametric tests (Köklü, Büyüköztürk, & Bökeoğlu, 2007).
The relationship between the students' scores on the scales was examined by Pearson correlation analysis. Independent-Samples t-test analysis was used to test the difference between the scores obtained from the scales according to the demographic characteristic of the students with two categories (gender). One-Way ANOVA (one-way analysis of variance) was used to test the difference between the scores according to the demographic characteristics with more than two categories (department, grade), and Tukey multiple comparison was used to determine the sources of the differences.

Table 3. Pearson correlation analysis results of the relationship between students' Matovu Academic Self-Concept Scale and Academic Self-Efficacy Scale and its sub-dimensions (N = 241)
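The analysis pipeline described above (correlation, two-group t-test, one-way ANOVA with Tukey follow-up) was run in SPSS; an equivalent sketch in Python with scipy and statsmodels is shown below. All scores and group labels here are simulated stand-ins, not the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
asc = rng.normal(70, 9, 241)                 # hypothetical self-concept totals
ases = 0.6 * asc + rng.normal(0, 6, 241)     # hypothetical self-efficacy totals
gender = rng.choice(["F", "M"], 241, p=[0.365, 0.635])
grade = rng.choice([2, 3, 4], 241, p=[0.51, 0.29, 0.20])

r, p_r = stats.pearsonr(asc, ases)                                   # scale-to-scale correlation
t, p_t = stats.ttest_ind(ases[gender == "F"], ases[gender == "M"])   # two categories (gender)
f, p_f = stats.f_oneway(*(ases[grade == g] for g in (2, 3, 4)))      # >2 categories (grade)
tukey = pairwise_tukeyhsd(ases, grade)                               # locate pairwise differences

print(f"r={r:.2f} (p={p_r:.3f}), t={t:.2f} (p={p_t:.3f}), F={f:.2f} (p={p_f:.3f})")
print(tukey)
```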
Table 3 examines whether there is a relationship between the students' Matovu Academic Self-Concept Scale and Academic Self-Efficacy Scale and its sub-dimensions. According to p = .000 < .05, there is a moderately significant positive relationship between the Matovu Academic Self-Concept Scale and the Academic Self-Efficacy Scale sub-dimensions "self-efficacy for handling academic problems" (r = 0.591), "self-efficacy for academic effort" (r = 0.516) and "self-efficacy for academic planning" (r = 0.576). According to p = .000 < .05, there is also a moderately significant positive relationship between the students' total Matovu Academic Self-Concept Scale and Academic Self-Efficacy Scale scores (r = 0.629).

Note. * p < .05. Categories for r: 0-0.30 = low relationship; 0.40-0.60 = moderate relationship; 0.70-1.00 = high level of relationship.
Table 4 examines whether there is a relationship between the students' Academic Self-Efficacy Scale and the Matovu Academic Self-Concept Scale and its sub-dimensions. According to p = .000 < .05, there is a moderately significant positive relationship between the Academic Self-Efficacy Scale and the Matovu Academic Self-Concept Scale sub-dimensions "academic confidence" (r = 0.619) and "academic effort" (r = 0.503). According to p = .000 < .05, there is a moderately significant positive relationship between the students' total Academic Self-Efficacy Scale and Matovu Academic Self-Concept Scale scores (r = 0.629).

Table 5 shows that, according to the gender of the students, no significant difference was found in the Matovu Academic Self-Concept Scale (t = -1.370, p = .172 > .05) or its sub-dimensions Academic Confidence (t = 0.800, p = .425 > .05) and Academic Effort (t = 1.659, p = .098 > .05). The Cohen d value calculated for the effect size was 0.1867 for the Matovu Academic Self-Concept Scale, 0.1089 for the Academic Confidence sub-dimension, and 0.2283 for the Academic Effort sub-dimension.

Table 6 shows that, according to the gender of the students, there is a significant difference in the Academic Self-Efficacy Scale (t = 2.133, p = .034 < .05) and its sub-dimension Self-Efficacy for Academic Effort (t = 2.094, p = .037 < .05). It was determined that there was no significant difference in the other sub-dimensions, Self-Efficacy for Handling Academic Problems (t = 1.953, p = .052 > .05) and Self-Efficacy for Academic Planning (t = 1.724, p = .086 > .05). The Cohen d value calculated for the effect size was 0.2930 for the Academic Self-Efficacy Scale, 0.2662 for the Self-Efficacy for Handling Academic Problems sub-dimension, 0.2864 for the Self-Efficacy for Academic Effort sub-dimension, and 0.2347 for the Self-Efficacy for Academic Planning sub-dimension.

Table 7 shows that the students' Academic Self-Concept did not differ significantly according to their departments (p > 0.05). In other words, the Academic Self-Concepts of the students studying in the physical education and sports teaching, sports management, and coaching education departments are similar.

Table 8 shows that the students' Academic Self-Efficacy does not differ significantly according to their departments (p > 0.05). In other words, the Academic Self-Efficacy of the students studying in the physical education and sports teaching, sports management, and coaching education departments is similar.

Table 9 (one-way ANOVA results on the difference in Matovu Academic Self-Concept according to the grade levels of students) shows that the students' Academic Self-Concept does not differ significantly according to grade level (p > 0.05). In other words, the Academic Self-Concepts of the 2nd, 3rd and 4th grade students are similar (Note. * p < .05; categories: 2nd Grade = 1; 3rd Grade = 2; 4th Grade = 3).

Table 10 shows that there is a significant difference between students' grade levels and their self-efficacy for handling academic problems, a sub-dimension of the Academic Self-Efficacy scale, according to F(2-240) = 3.403, p = .035 < .05.
This significant difference stems from the fact that students in the 4th grade (M = 42.47) have higher self-efficacy scores for handling academic problems than students in the 3rd grade (M = 39.58). There is no significant difference between the grade levels of the students and the other Academic Self-Efficacy scale sub-dimensions, self-efficacy for academic effort (F(2-240) = 2.521, p = .083 > .05) and self-efficacy for academic planning (F(2-240) = 2.475, p = .086 > .05). There is a significant difference between the grade levels of the students and the total Academic Self-Efficacy scale score according to F(2-240) = 3.458, p = .033 < .05. This significant difference stems from the fact that students in the 4th grade (M = 73.91) have higher Academic Self-Efficacy scores than students in the 3rd grade (M = 68.82).
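The effect sizes reported above are Cohen's d values. A minimal sketch of the usual pooled-standard-deviation computation is shown below; the group sizes match the study's gender split, but the scores themselves are hypothetical.

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d with the pooled standard deviation in the denominator."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1)
                  + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical ASES totals for the 88 female and 153 male respondents
rng = np.random.default_rng(3)
female = rng.normal(74, 10, 88)
male = rng.normal(71, 10, 153)
print(f"d = {cohens_d(female, male):.3f}")  # a value near 0.3 would indicate a small effect
```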
Discussion
In this study, it was aimed to examine the relationship between the Academic Self-Concept and Academic Self-Efficacy of university students studying in the field of sports sciences. At the same time, the study attempted to explain the distinction between the concepts of Academic Self-Concept and Academic Self-Efficacy, which causes conceptual confusion in the literature, and it is thought that awareness of this distinction has been created in the participants.
According to the gender variable, it was determined that there was no significant difference in the Academic Self-Concept of university students. In this case, it can be argued that, regardless of gender, students develop similar perceptions of themselves and of how talented they are compared to other students within the general academic field. A comparison could not be made, as there was no prior study on university students. According to the gender variable, a significant difference was found in the total Academic Self-Efficacy score and the self-efficacy for academic effort sub-dimension, but no difference was found in the other sub-dimensions. This difference is in favor of women, which may mean that women can focus better on successfully fulfilling their academic duties and responsibilities. While there are studies supporting this result (Durdukoca, 2010; Biricik, 2015; Uslu, 2018), there are also studies with opposite results (Çuhadar et al., 2013; Yalmacı & Aydın, 2014; Şeker, 2016).
It was determined that there was no significant difference in the Academic Self-Concept of university students according to department. Since there is no prior study on university students, a comparison could not be made. No significant difference was found in the Academic Self-Efficacy of university students according to department, either. It can be understood that students studying in the physical education teaching, coaching education and sports management departments show similarities in their interest in academic subjects specific to their fields, their ability to make comparisons, the way they take on academic duties and responsibilities, and so on. The Akdeniz University study conducted by Eroğlu et al. (2017) to examine the relationship between the Academic Self-Efficacy and academic motivation levels of students in the faculty of sports sciences supports our research. In Biricik's (2015) study, "Examination of the Academic Self-Efficacy of students studying in physical education and sports departments," a statistically significant difference was found according to the department variable, a result opposite to ours.
It was determined that there is no significant difference in the Academic Self-Concept of university students according to grade level. Since there is no prior study on university students, a comparison could not be made. While there is a significant difference in the total Academic Self-Efficacy scale score and its self-efficacy for handling academic problems sub-dimension according to grade level, there was no significant difference in the self-efficacy for academic effort and self-efficacy for academic planning sub-dimensions. When the literature is examined, however, there are studies that do not support this result with respect to the grade variable (Cihan, 2014; Eroğlu et al., 2017; Uslu, 2018).
As a result, this study examined the relationship between the Academic Self-Concept and Academic Self-Efficacy of university students receiving sports education in terms of the gender, department and grade level variables. In future research, the Academic Self-Concept and self-efficacy of students studying at different universities can be determined and compared with each other. In addition to the Academic Self-Concept and Academic Self-Efficacy of university students, the effects of other variables such as motivation, learning strategies, and academic achievement can also be investigated.
Claudin 1 in Breast Tumorigenesis: Revelation of a Possible Novel “Claudin High” Subset of Breast Cancers
Claudins are the major component of the tight junctions in epithelial cells and as such play a key role in the polarized location of ion channels, receptors, and enzymes to the different membrane domains. In that regard, claudins are necessary for the harmonious development of a functional epithelium. Moreover, defective tight junctions have been associated with the development of neoplastic phenotype in epithelial cells. Breakdown of cell-cell interactions and deregulation of the expression of junctional proteins are therefore believed to be key steps in invasion and metastasis. Several studies suggest that the claudins are major participants in breast tumorigenesis. In this paper, we discuss recent advances in our understanding of the potential role of claudin 1 in breast cancer. We also discuss the significance of a subset of estrogen receptor negative breast cancers which express “high” levels of the claudin 1 protein. We propose that claudin 1 functions both as a tumor suppressor as well as a tumor enhancer/facilitator in breast cancer.
Breast Cancer
Breast cancer remains one of the most commonly diagnosed cancers among women in North America [1]. Based on molecular, epidemiological, and histological observations, a morphological progression model for breast cancer has been assembled within the last decade [2][3][4]. This proposed model ( Figure 1) outlines a continuum of lesions describing a stepwise progression of breast cancer, from epithelial hyperplasia, through atypical hyperplasia and ductal carcinoma in situ, to invasive carcinoma and eventually metastatic disease [2][3][4]. Despite many advances in the diagnosis and treatment of breast cancer, metastasis remains an insurmountable challenge. About 40% of women currently fail primary management strategies for early breast cancer and ultimately succumb to the disease.
The complex nature of the disease presentation and the limitations in identifying clinically relevant subsets of patients create major difficulties for current breast cancer diagnostic and therapeutic strategies. A growing understanding of the heterogeneous nature of this disease has stemmed primarily from cDNA microarray and immunohistochemical studies, which have led to a redefinition of breast cancer subsets [5][6][7][8][9][10]. To date, 5 distinct breast cancer categories have been identified (Table 1, [5][6][7][8][9][10]) based on ER/PR status, Her2, CK5/6, and EGFR expression. Therapeutically, these subtypes have been shown to display a wide variety of responses to different treatments [6,11,12]. The luminal A subtype (Table 1), which is more sensitive to hormones, has the most favorable outcome whereas the Her2 and the basal subtypes, which are not sensitive to hormones, are more aggressive, demonstrate the worst prognosis, and have fewer therapeutic options [13][14][15]. Evidently, the further identification of different subtypes of breast cancer will provide more therapeutic opportunities to match the characteristics of individual breast cancer patients, enhancing our ability to begin to offer individualized treatment to patients.
Two hypothetical models have been proposed to explain the evolution of breast cancer subtypes [18]. In the first model, the linear model, the cell of origin is the same for different tumor subtypes (and thus, tumor subtype is determined by acquired genetic and epigenetic events). In the second model, distinct tumor subtypes are thought to arise from distinct cells of origin.
Epithelial Mesenchymal Transition (EMT)
The acquisition of the invasive phenotype is thought to mark the most significant change in breast cancer biology as it represents the first step towards the development of metastatic disease. As cells convert from the noninvasive to the invasive phenotype, they become anchorage independent and exhibit enhanced motility as well as increased aggressiveness, a process referred to as epithelial-mesenchymal transition (EMT). During this transition, epithelial cells acquire a mesenchymal-like phenotype via disruption of intercellular adhesion and enhanced motility (for review see [20]). It is believed that cells switch from a keratin (epithelial) rich network to a vimentin (mesenchymal) rich network to facilitate their motility [20,21]. As for mesenchymal cells, in contrast to epithelial cells, they can individually migrate, penetrate into surrounding tissues, and spread to distant sites [20,22,23]. The breakdown of cell-cell interactions and the deregulated expression of the junctional proteins are therefore believed to be key steps in invasion and metastasis [24,25].
Tight Junctions
Tight junctions are the most apical of intercellular junctions and appear as a network of continuous and anastomosing filaments on the protoplasmic face of the plasma membrane [27] (Figure 2). They contribute to the transepithelial barrier that controls the transport of ions and small molecules through the paracellular pathway, a property referred to as the "barrier" function [28,29]. Tight junctions are also crucial for the organization of epithelial cell polarity, separating the plasma membrane into apical and basolateral domains [30][31][32][33][34][35].

Figure 2. Tight junction proteins in conjunction with adherens junction proteins (cadherins, catenins) form epithelial cell junctional complexes [26]. The gap junction is located basal to the adherens junction.

As well, tight junctions are critical for the polarized location of ion channels, receptors, and enzymes to the membrane domains necessary for structurally and functionally developed epithelia, a function referred to as the "fence" function [28,36]. Tight junctions are therefore essential for the tight sealing of cellular sheets and maintaining homeostasis [35]. Aside from maintaining cell polarity and paracellular functions, tight junction proteins are involved in recruiting signaling proteins [37]. Primarily, three types of integral membrane proteins constitute the tight junctions (Figure 2): the claudins, occludin and the junctional adhesion molecule(s), with the claudins and occludin being the two main molecular components forming the tight junction strand. The junctional adhesion molecule is believed to function as the initial spatial cue for tight junction formation [32]. In conjunction with the adherens junction proteins, which are responsible for the mechanical adhesion between adjacent cells and the stabilization of the whole multicellular architecture, they constitute the apical junctional complex in epithelial tissue [33,38].
Tight Junction Proteins and Tumorigenesis
A strong association between tight junction proteins and cancer development has been established. Alterations in the structure and function of tight junctions have indeed been reported in adenocarcinomas of various organs [39][40][41]. An absence of tight junctions or defective tight junctions has also been associated with the development of the neoplastic phenotype in epithelial cells [29,33,42]. Such observations are consistent with the accepted idea that the disruption of tight junctions leads to loss of cohesion, invasiveness, and lack of differentiation, thereby promoting tumorigenesis. Currently, most knowledge regarding the role of junctional proteins in cancer, and more specifically breast cancer, has stemmed from studies on the major adherens junction protein, E-cadherin (for review see [20,33]). The downregulation of E-cadherin is believed to be an important molecular event during epithelial-mesenchymal transition [20]. In contrast to E-cadherin, the role of the tight junction proteins is not well understood in breast cancer.
The Claudins.
The claudins are the major component of the tight junction, and 24 members of this family of proteins have been identified to date [26,27]. They are small proteins ranging in size from 22 to 27 kD and are encoded by at least 17 human genes (Table 2), located on 12 different chromosomes [43,44]. The distribution of the loci for the claudin genes among so many different chromosomes is interesting as generally many gene families have most, if not all of their members located on one chromosome [45,46]. The wide distribution may reflect the multifunctional characteristics of these proteins. Three claudin gene clusters are readily apparent on chromosome 3 (3q28), 4 (4q35.1), and 7 (7q11.23) and it is very likely that members within these clusters may have similar function and tissue specificity [47]. Aside from these three claudin gene clusters, there appears to be no other obvious pattern ( Table 2). It is possible that the expansion of the claudin gene family in humans may have allowed for the acquisition of novel functions during evolution, as postulated for this gene family in teleost fish [48]. The claudins share a common transmembrane topology; each family member is predicted to possess four transmembrane domains with intracellular amino and carboxyl-termini in the cytoplasm and two extracellular loops [33,49]. The expression pattern of the claudin proteins is tissue specific [33]; however, most tissues express multiple claudins that can interact in either a homotypic or heterotypic fashion to form the tight junction strand [33,50]. The exact combination of the claudin proteins within a given tissue determines the selectivity, strength and tightness of the tight junction [33,51].
To date, only a few studies have addressed the role of claudins in breast cancer, and findings on their function remain controversial [47,52,53]. In several cancers, including breast cancer, altered protein expression of some claudin family members has been demonstrated (for review see [33]). For example, protein expression of claudin 3 and 4 has been shown to be upregulated in invasive breast cancer [47], whereas the expression of the claudin 1 and 7 proteins was downregulated, also in invasive breast cancer [47,53,54].
Claudin 1.
Knockout mouse experiments have established that the tight junction protein claudin 1, and not occludin, forms the backbone of the tight junction strand and is crucial for the epidermal barrier function [35,55].
Expression of claudin 1 has been examined in a number of cancers (for review see [56]). Both an increase and a decrease in claudin 1 protein expression have been shown to be associated with tumorigenesis. In some cancers, including prostate [57], breast [47,58], and melanocytic neoplasia [59], loss of claudin 1 has been associated with cancer progression, invasion, and the acquisition of the metastatic phenotype. In esophageal squamous cell carcinoma [60], decreased expression of claudin 1 correlated with recurrence and shorter disease-free survival, whereas in lung cancer, claudin 1 was shown to suppress the expression of invasion/metastasis enhancers and increase the expression of invasion/metastasis suppressors, providing supporting evidence that it functions as a cancer invasion/metastasis suppressor [61]. Conversely, in other cancers, such as papillary thyroid [62], oral squamous cell carcinoma [63], ovarian [64], colon [65,66], melanoma [67], and gastric [68], overexpression of claudin 1 has been associated with aggressiveness and an increased malignant phenotype. Further, functional studies have shown that claudin 1 can recruit and promote the activation of the metalloproteinase MMP-2 [63,69], leading to a more aggressive phenotype in oral and ovarian cancer.
Claudin 1 in Normal Breast and Breast Cancer
In the normal mammary gland, tight junction proteins have mainly been investigated in relation to lactogenesis [32]. Previous work from our laboratory has identified claudin 1 as a highly upregulated gene during early mammary gland involution [70], and its expression was found to be tightly regulated during different stages of normal mouse gland development [71]. Recently, there has been an increased interest in the potential role of claudin 1 in breast cancer.
Although studies are still relatively limited, there are a few critical reports which demonstrate a clear association between claudin 1 expression and breast cancer progression. The majority of studies point to a downregulation or complete loss of claudin 1 expression in malignant invasive human breast cancers [47,53] and in some human breast cancer cell lines [72]. A correlation between claudin 1 downregulation and disease recurrence was also recently reported [73]. Additionally, functional studies suggest that claudin 1 may be a key player in breast tumorigenesis. Downregulation of claudin 1 gene expression was shown to lead to neoplastic transformation of breast epithelial cells [74], and the re-expression of claudin 1 alone was shown to be sufficient to induce apoptosis in a human breast cancer cell line [75]. It has also been suggested that claudin 1 alone might be sufficient to exert a tight junction-mediated gate function in metastatic tumor cells even in the absence of other tight junction-associated proteins [52]. In addition, the subcellular localization of claudin 1 has been shown (by us [76] and others [47,77]) to be disrupted in invasive breast cancer, leading to detection of this protein in the cytoplasm. Interestingly, an association between claudin 1 and epithelial-mesenchymal transition has recently been established. As with E cadherin [78,79], the transcription factors slug and snail, key markers of epithelial-mesenchymal transition, were shown to bind to the claudin 1 promoter, resulting in the repression of its activation [80]. Additional work from our laboratory has provided evidence that claudin 1 expression in breast cancer is even more complex than originally thought. Using tissue microarray strategies, we showed that in a cohort of human invasive breast cancers exhibiting mixed pathological lesions (340 biopsies, the largest examined to date), only a small percentage of tumors expressed claudin 1 protein. The frequency of claudin 1 positive tumors was significantly lower than the frequency of tumors positive for claudin 3 and 4, two family members previously shown to be overexpressed in invasive human breast cancer [47].
Since ER status is often considered an important classifier of breast cancers (Table 1), we wanted to determine whether there was any association between estrogen/estrogen receptor and claudin 1. We showed that in ER+ve breast cancers (189 biopsies), a significantly small proportion of tumors (5%) were positive for claudin 1 expression, while in the ER−ve tumors (151 biopsies), the frequency of positive tumor staining for claudin 1 was significantly higher (39%) [76]. A positive association was also found with EGFR, a marker of poor prognosis. Surprisingly, a significant correlation was also found between claudin 1 and markers of the basal-like subtype of breast cancers [76,81], an aggressive subtype associated with the worst prognosis and reduced patient survival. We also demonstrated for the first time that, in the estrogen receptor positive (ER+ve) human cell line MCF7, claudin 1 expression was downregulated by estrogen in vitro (unpublished data).
Is Claudin 1 Much More than a Tumor Suppressor in Breast Tumorigenesis?
Both an over- and an underexpression of claudin 1 have been observed in different types of cancers [57,59,62,63,[65][66][67]69], outlining the complexity of its potential role in carcinogenesis. In breast cancer, the majority of studies published to date, though limited in number, show that partial or total loss of claudin 1 expression correlates with increased malignant potential, invasiveness, and recurrence of disease [47,73]. As well, the re-expression of claudin 1 in breast cancer cells was demonstrated to induce apoptosis [75]. Additionally, our tissue microarray studies showed that a significantly low frequency of human invasive breast cancers was positive for claudin 1 expression [76]. Altogether, these studies provide supporting evidence that claudin 1 functions as a tumor suppressor in breast tumorigenesis. Paradoxically, our laboratory has also provided evidence to suggest that the role of claudin 1 in breast cancer may be much more than that of a tumor suppressor. We showed in our TMA studies that the frequency of claudin 1 positive tumors was significantly higher in ER−ve breast cancers than in ER+ve breast cancers [76]. To our knowledge, these studies are the first to address claudin 1 expression in breast cancer in the context of ER status. We further showed that claudin 1 positivity (as well as claudin 4 positivity) was significantly associated with the basal-like subtype of breast cancers, one of the most aggressive subtypes [76]. Of note, a recent study by Kulka et al., 2008, demonstrated that claudin 4 expression was significantly higher in the basal-like subtype of breast cancers [81]. Since claudin 1 is generally considered to be a "tumor suppressor" in breast cancer, our observations were unexpected. How can such observations be rationalized? There are a few possible scenarios that may explain these findings. First, it is plausible that during tumorigenesis, not all tumor cells lose claudin 1 expression. In line with the proposed nonlinear model of breast cancer subtypes [18], it is possible that the cells which retain claudin 1 expression are the cells already predetermined to become ER−ve basal-like breast cancers. In these cells, the role of claudin 1 may then be that of a tumor promoter rather than a tumor suppressor. On the other hand, if one considers the linear model of breast cancer subtypes [18], tumor cells are believed to progress from ER+ve to ER−ve as the cancer advances. Is the increased frequency of claudin 1 positive tumors in the ER−ve cohort then attributable to a re-expression of claudin 1 in these tumors? Such a concept is supported by the significantly higher claudin 1 staining (an indicator of protein expression) observed in the ER−ve tumors. In this scenario, the tumor suppressor function of the re-expressed claudin 1 is eliminated and replaced by a tumor enhancing function, thereby facilitating breast tumorigenesis. This re-expression of claudin 1 could be attributed to a number of contributing factors, including genetic mutation in the claudin 1 gene or epigenetic modification of the protein. However, sequence analysis studies of the coding region of claudin 1 [53] in both sporadic and hereditary breast cancer patients failed to identify any significant mutation that might be responsible for altering claudin 1 gene expression in breast tumors.
One interesting possibility is that the higher level of the claudin 1 protein could be due to a defective interacting partner resulting in the inability of claudin 1 to be transported back to the membrane, where it may escape further downregulation by other factors, leading to an accumulation of claudin 1 in the cytoplasm. The latter has recently been demonstrated for E cadherin [82]. Furthermore, the exact combination of claudin proteins within a given tissue is thought to determine the strength of the tight junction [51]. Thus, one of the consequences of this aberrant accumulation of claudin 1 in ER−ve invasive breast cancers may be a redefinition of the makeup of the tight junction, further undermining the integrity of the junction and thereby further facilitating tumor progression. Taken together, it appears that the role of claudin 1 extends beyond that of a tumor suppressor. We would like to propose that claudin 1 functions both as a tumor suppressor and as a tumor enhancer/facilitator in breast cancer. We further propose that its tumor facilitating role is particularly associated with the ER−ve breast cancer subtypes (Figure 3).
Recently, a "claudin low" subtype of breast cancer has been identified that was primarily ER−ve and exhibited low expression of claudin 3, 4, and 7 [83]. We would further like to propose a "claudin high" subtype that is ER-ve and which exhibits a high expression of claudin 1 protein (and claudin 4). This subtype would include the basal-like breast cancers and exclude the ER+ve luminal subtypes. The clinical significance of a high expression of the claudin 1 protein in ER−ve breast cancers and its association with the basal-like subtype may identify its potential use as a diagnostic and prognostic indicator for a particular breast cancer subset.
Future Perspectives
Mounting evidence suggests that claudin 1 has a unique role in breast cancer and breast cancer progression. Since claudin 1 is a transmembrane protein with two large extracellular loops, it is a potentially attractive candidate for use in therapeutic strategies. As we begin to address the role of claudin 1 in breast cancer progression we are left with many unanswered questions. What is the expression level of the claudin 1 protein in different histological subtypes of breast cancer? What triggers its regulation and causes claudin 1 to switch its role at different stages of breast cancer progression? Why does claudin 1 accumulate in the cytoplasm? Is there a mutation in the claudin gene or protein in the basal-like subtype of breast cancer? Does claudin 1 work in concert with other claudins or junctional proteins such as E cadherin? Is there cross-talk with the estrogen receptor pathway? Clearly, more detailed functional analysis studies are warranted both in vitro and in vivo.
One can only predict that more breast cancer subtypes will be identified in the near future, and a clearer understanding of the cellular and molecular changes occurring during breast tumorigenesis will be critical for facilitating more effective patient management and directly reducing mortality rates. Only through such understanding will new biomarkers be identified that report on metastatic changes and breast cancer progression and serve as therapeutic targets, ultimately leading to specific and individualized patient management.
European consensus on patient contact shielding
Patient contact shielding has been in use for many years in radiology departments in order to reduce the effects and risks of ionising radiation on certain organs. New technologies in projection imaging and CT scanning such as digital receptors and automatic exposure control systems have reduced doses and improved image consistency. These changes and a greater understanding of both the benefits and the risks from the use of shielding have led to a review of shielding use in radiology. A number of professional bodies have already issued guidance in this regard. This paper represents the current consensus view of the main bodies involved in radiation safety and imaging in Europe: European Federation of Organisations for Medical Physics, European Federation of Radiographer Societies, European Society of Radiology, European Society of Paediatric Radiology, EuroSafe Imaging, European Radiation Dosimetry Group (EURADOS), and European Academy of DentoMaxilloFacial Radiology (EADMFR). It is based on the expert recommendations of the Gonad and Patient Shielding (GAPS) Group formed with the purpose of developing consensus in this area. The recommendations are intended to be clear and easy to use. They are intended as guidance, and they are developed using a multidisciplinary team approach. It is recognised that regulations, custom and practice vary widely on the use of patient shielding in Europe and it is hoped that these recommendations will inform a change management program that will benefit patients and staff. Supplementary Information The online version contains supplementary material available at 10.1186/s13244-021-01085-4.
• Shielding use in radiology has been re-evaluated.
• Major European bodies involved in imaging radiation safety have issued consensus-based recommendations.
• This paper represents multidisciplinary-based recommendations for shielding use.
• In the majority of cases, patient contact shielding use is not recommended.
Patient summary
Radiation used in radiology carries small risks of radiation damage. To minimise this damage to sensitive organs, contact shielding was used for many years. In contact shielding, a shielding object (blanket, rubber mat, etc.) made of radiation-absorbing material is placed in contact with the surface to be shielded. Recent technological advances in equipment and recent scientific knowledge have led to new guidelines, which show that there is rarely a need for shielding, although it can sometimes be allowed. In some cases, shielding can even be counterproductive, for example by interfering with the examination or increasing the patient dose.
Introduction
In the healthcare sector, radiation protection devices are frequently placed in contact with the human body to reduce the radiation exposure to radiosensitive organs of patients undergoing diagnostic and interventional X-ray examinations. Such patient contact shielding has been in widespread use for the last seventy years, aiming to protect against genetic effects, cancer and other detriment [1]. However, an increasing number of studies, position statements and recommendations have raised concerns regarding the utility and effectiveness of such shielding [2][3][4][5]. This has added to an unhelpful and undesirable inconsistency in regulation and recommendations of shielding use across Europe [6].
The growing need for a European consensus statement on patient contact shielding has been highlighted by Gilligan and Damilakis [7], with the main objective of supporting and promoting effective and harmonised clinical practice.
Representatives of the European Federation of Organisations for Medical Physics (EFOMP), European Federation of Radiographer Societies (EFRS), European Society of Radiology (ESR), European Society of Paediatric Radiology (ESPR), EuroSafe Imaging (ESI), European Radiation Dosimetry Group (EURADOS) and European Academy of DentoMaxilloFacial Radiology (EADMFR), as well as a representative from the Patient Advisory Group of the ESR, founded the GAPS (Gonad and Patient Shielding) group (chair: P Gilligan) with the purpose of proposing a European recommendation on the use of contact shielding.
Evidence review criteria
This consensus statement has involved examining the evidence-base provided in published data and guidance. The system of ranking the evidence is based on a user-friendly system developed by the European Heart Rhythm Association, EHRA [8] and here uses 'coloured shields' that provide an indication of the current status of the evidence and consequent guidance (see Table 1).
Thus, a green shield indicates a 'should do this' consensus statement or indicated risk assessment strategy based on strong evidence that it is beneficial and effective. An amber shield indicates general agreement and/or scientific evidence favouring a 'may do this' statement or the usefulness/efficacy of a risk assessment strategy or procedure. Risk assessment strategies for which there is scientific evidence of little or no benefit, or even potential harm, and which should not be used ('Not recommended to do this'), are indicated by a red-striped shield.
Guidelines for clinical practice
Research has previously reported dose reductions of 30-95% to individual organs using shielding [9][10][11]. However, there has been a growing body of evidence that patient contact shielding is ineffective in most situations and at times potentially hazardous. The use of contact shielding can provide false reassurance (to patients and staff ) and continued use can overemphasise the hazards of ionising radiation in the public mind.
This has led to an inconsistency of application of shielding. In some cases it has also led to conflict between patient expectations that shielding would be used and professionals judging it as unnecessary or even harmful.
The main aim of this consensus statement is to encourage and support good clinical practice by promoting harmonisation of application of patient contact shielding. This statement should be seen as a tool for making decisions in healthcare more rational and for improving the quality of healthcare delivery. However, it should not serve as a substitute for sound clinical judgement nor replace professional responsibility of providers.
This consensus statement is intended to help in the development of local policies and procedures by highlighting the reported limited utility and potential drawbacks of patient contact shielding. Section "Issues when using contact shielding" also considers scenarios and approaches where individual circumstances such as high cumulative dose, anxious or radiosensitive patients may indicate that the radiology professional chooses to use shielding.
Decrease in patient doses
While the number of X-ray imaging examinations has increased during the last decades, individual patient doses have decreased significantly since patient contact shielding was first introduced [12], limiting the potential benefit of this shielding in most cases. Although some patients may be exposed to high cumulative radiation doses due to multiple examinations [13], or in complex interventional procedures [14], the highest doses are absorbed by organs and tissues being imaged, which cannot generally be shielded (see Section "Shielding within the imaging field of view (FOV)"). Therefore, currently, only a small minority of patients might experience a real benefit from using contact shielding, which also comes with risks, as discussed next.
Past practice in radiation protection has been based on the dose range and associated risk estimates prevalent at the time. However, the levels of dose and the organ- and age-specific risk estimates have changed over the years (see Section "Patient radiation risk from imaging"), requiring continuous revision of local practice in line with current knowledge and advice [4].
Shielding within the imaging field of view (FOV)
There are several factors to be considered when applying patient contact shielding within the imaging FOV. These include:
• Incorrect placement of shielding by the operator, or unintended movement of the shield during the examination, can obscure important pathologies in the image, requiring repeat exposure [15].
• The operator may encounter difficulties in correct placement of shielding to cover the intended radiosensitive organs due to variation in patients' anatomy [16]. This may only be apparent after the image has been taken and can give rise to ineffective shielding.
• The highly attenuating material of the shielding may interfere with automatic exposure control systems and can lead to an increase rather than a decrease in patient dose [3,17].
• Beam hardening or streak artefacts caused by the applied shielding can reduce the image quality and may lead to the requirement to repeat the exposure [18].
Shielding outside the imaging FOV
The majority of scatter is internal and therefore cannot be shielded externally. Scatter doses are considerably smaller than the dose to anatomy within the area of interest or imaging FOV. As the patient doses have decreased over the years so too has the dose due to scattered radiation, which has now reduced to negligible levels in many cases. The probable benefits from the very small dose reduction due to contact shielding may not outweigh the potential risks of artefacts, infection and patient discomfort, as referenced below.
The placing of out-of-beam protection beyond the irradiated volume is not necessarily a simple, error-free task. For example, in helical CT scanning, there is a requirement to 'overscan' beyond the first and last image position in order to provide enough data to interpolate for those images. Since even a small amount of 'overscan' can extend a considerable distance beyond the image volume, placing patient contact shielding adjacent to the scan volume can interfere with the image reconstruction, leading to artefacts in the image [4].
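To give a rough sense of scale for this overscan effect, the short Python sketch below estimates the extra exposed length beyond each end of the imaged volume. The collimation, pitch, and number of extra rotations are illustrative assumptions only, not values taken from the cited references; actual overscan is scanner- and mode-dependent.

```python
# Rough, illustrative estimate of helical CT 'overscan' length, showing why
# shielding placed adjacent to the planned scan volume can end up inside the
# exposed region. All inputs below are assumptions for illustration.

collimation_mm = 40.0   # assumed total beam collimation
pitch = 1.0             # assumed helical pitch
extra_rotations = 1.0   # assumed extra rotations per end needed for interpolation

# Table travel per rotation is pitch x collimation, so the extra exposed
# length per end is roughly:
overscan_per_end_mm = extra_rotations * pitch * collimation_mm
print(f"Approximate overscan beyond each end of the imaged volume: "
      f"{overscan_per_end_mm:.0f} mm")
```

Under these assumed values, roughly 40 mm beyond each end of the planned volume is exposed, illustrating how a shield placed 'just outside' the scan range can still fall within the beam.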
Patient radiation risk from imaging
The primary concern when justifying a medical exposure is the risk-benefit balance. Therefore, the approach to deciding on adopting or avoiding patient contact shielding should centre on the change in radiation dose and risk. For example, in some cases the application of contact shielding is reported to show a large relative dose reduction to a specific organ, giving the impression of a significant improvement, whereas the absolute benefit may be small or non-existent [2].
In addition, the focus of patient radiation safety should be upon those organs deemed to be at risk from cancer induction due to radiation exposure.
However, when reviewing the need to protect a specific organ, it is important to take into account the fact that the radiation risk actually varies with age and sex of the patient, as illustrated in Fig. 1. This highlights the fact that paediatric patients can be at high risk and that the organ at highest risk can change with age.
Recommendations
These recommendations are divided into areas of the body where patient shielding may be used and assume that all other applicable justification and optimisation strategies have been employed before patient contact shielding is considered.
For example, in general radiography, with good collimation and using posterior anterior (PA) positioning for skull, spinal and chest X-rays, patient contact shielding is likely to have a negligible benefit and, in many instances, may obscure diagnostic information or lead to an overall increase in patient dose. A summary of the recommendations in this consensus document is provided as an appendix (Additional file 1).
Gonad shielding
Protection of the gonads is the longest-standing use of patient contact shielding, owing to the perception of the risk and the relative ease of use. However, genetic effects from radiation have not been observed in human studies, despite the public perception otherwise. Indeed, ICRP 103 [21] reduced the tissue weighting factor for the gonads to less than half its previous value (from 0.2 to 0.08). Therefore, gonad shielding is perhaps the least useful in terms of reducing the radiation risk for the patient. Heritable effects associated with typical dose ranges are likely to be negligible.
Within the FOV, there is a general consensus that it is difficult to position the shielding for female patients to ensure coverage of the ovaries while avoiding interference with the anatomy of interest and the automatic exposure control system. Current published evidence has shown inconsistent results and a disappointing impact on the accuracy of shield application following audit and training [16].
Outside the FOV the reduction in radiation risk for both male and female patients by using shielding is negligible, regardless of age [2].
For CT scanning of the abdomen, several papers have reported a range of measured testicular dose reductions (58% to 95%) through the use of outside-field-of-view wraparound and testicular shields in male adults and phantoms [10,20]. In terms of absolute risk reduction based on an LNT model (given the limitations of uncertainty), this is of the order of 0.5 in 10,000 [22]; a rough worked example follows. The benefit is small compared to other optimisation techniques, such as limiting the scan range in the area of the more radiosensitive organs as defined by the ICRP [21], and also comes with some risks. Yu et al. [23] showed that such shields also provided little benefit in paediatric chest CT, with the benefit diminishing further from the field of view. There are also risks of interfering with the automatic exposure control when using shielding outside the field of view, such as those found in embryo shielding [24].
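The order of magnitude of this absolute risk figure can be reproduced with a simple LNT calculation, sketched below in Python. The scatter dose, shield reduction fraction, and the population-averaged ICRP 103 nominal risk coefficient are illustrative assumptions, not values taken from the cited studies; organ-, age- and sex-specific coefficients differ and carry large uncertainties.

```python
# Back-of-envelope illustration of the absolute risk change from out-of-FOV
# gonad shielding under a linear no-threshold (LNT) model. Inputs are
# assumptions chosen for illustration only.

scatter_dose_mSv = 1.2    # assumed testicular scatter dose from an abdominal CT
shield_reduction = 0.80   # assumed fractional dose reduction (58-95% reported)
risk_per_mSv = 5.5e-5     # ICRP 103 nominal detriment-adjusted risk (~5.5%/Sv)

dose_averted_mSv = scatter_dose_mSv * shield_reduction
absolute_risk_reduction = dose_averted_mSv * risk_per_mSv

print(f"Dose averted: {dose_averted_mSv:.2f} mSv")
print(f"Absolute risk reduction: ~{absolute_risk_reduction * 10000:.1f} in 10,000")
```

With these assumed inputs the result is approximately 0.5 in 10,000, consistent with the magnitude cited above.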
Application: Male and female gonad contact shielding
Imaging modality: All X-ray
Inside or outside FOV: Both
Recommendation: 'Not recommended to use shielding'
Thyroid shielding
The thyroid gland has been highlighted as a radiosensitive organ. Since the relative sensitivity of the thyroid gland to radiation-induced cancer is strongly biased towards children and there is a longer time for any induced cancer to manifest itself, it is particularly important to consider this age group when deciding if thyroid protection is required, particularly when high cumulative radiation doses are expected due, for example, to multiple head CT examinations.
Since the shield should cover the front half of the neck, it can readily interfere with the imaging process within the FOV (see Section "Shielding within the imaging field of view (FOV)"). Outside the FOV, the effectiveness in reducing patient stochastic risk is minimal.
Whilst it is generally considered that patient contact shielding should not be used, exceptions may exist in the field of dental X-ray imaging due to the proximity of the thyroid to the FOV and the high percentage of paediatric patients examined [25][26][27].
In cephalometric radiography, a conventional thyroid collar can partially overlap with the FOV. However, thyroid shielding can be applied, if evaluation of the cervical spine is not needed [28,29] or custom protective devices that do not overlap with relevant anatomical regions are used [30].
If shielding were to be used, it is strongly recommended that a Medical Physics Expert (MPE) be consulted first, as there is the potential to introduce artefacts to the image should a thyroid collar enter the useful imaging volume. In addition, increased patient doses can arise from systems (e.g., CBCT) that incorporate an automatic exposure system [27].
Breast shielding
In a similar manner to the thyroid gland, breast tissue is highly sensitive to radiation, particularly for those less than 30 years of age.
Since the shield should cover the anterior surface of the chest, if it is within the FOV it could compromise the X-ray examination and give rise to an increased radiation dose to neighbouring organs and tissues. For example, in CT chest examinations of patients over 30 years old, the lung is the most radiosensitive organ (see Section "Patient radiation risk from imaging") and using breast contact shielding could lead to an increased lung dose, thus increasing, rather than decreasing, the overall risk to the patient.
Outside the FOV, the effectiveness in reducing patient stochastic risk is generally reported to be minimal [2].
Application: Breast contact shielding
Imaging modality: All X-ray
Inside or outside FOV: Both
Recommendation: 'Not recommended to use shielding'
Eye lens shielding
The lens of the eye is considered one of the most radiosensitive tissues in the body, with the primary concern being the development of cataracts and lens opacities following radiation exposure. However, in the case of CT, most recent studies suggest that dose reduction strategies are more effective than eye shielding (e.g., [31]). Due to the level of eye dose for some fluoroscopically guided cerebral interventional procedures [32,33], the consultation of a Medical Physics Expert is advised on a case-by-case basis.
Embryo/Fetal shielding
Studies have shown that radiation protection shields have limited value for the protection of the unborn child from examinations performed on pregnant patients because most of the embryo/fetal exposure results from internal scatter in the tissues of the mother [34,35]. In addition, if suitable optimisation strategies are adopted, the impact of patient contact shielding on the fetal dose is minimal [36]. Any discussion around this may require sensitive handling. Pregnant patients undergoing diagnostic radiology examinations may request abdominal protection, including situations where the examination is outside the pelvic region. In these cases, whether or not to provide extra shielding, usually in the form of lead/lead-equivalent material draped over the abdomen, should be in accordance with written procedures and at the discretion of the operator performing the imaging. If a decision is made to use contact shielding, then it is important that accurate collimation is used, and the shielding must not encroach on the automatic exposure control system. This includes taking account of any 'overscan' (see Section "Shielding outside the imaging FOV") beyond the first and last image position.
Issues when using contact shielding
It is not unreasonable to consider scenarios and approaches where individual circumstances such as high cumulative dose, anxious or radiosensitive patients may indicate to the radiology professional that the benefit of shielding could outweigh any risk associated with its use.
While not generally advised, any use of contact shielding should be considered carefully by a multi-disciplinary team, with the advice of a MPE, and should be written into examination protocols ahead of use. Its selection simply to reassure the apprehensive patient should be discouraged as this promotes mixed messages and an exaggeration of radiation risk to the patient and wider community. Instead, efforts should concentrate on explaining the risks from the use of contact shields to the patient [4].
Besides the risks of artefacts and interference with the AEC system, a disadvantage to using shielding is the potential discomfort experienced by the patient and the manual handling issues for the staff [9], as well as potential infection control issues [37,38]. Furthermore, the use of shielding may not be advisable for emergency patients, paediatrics or individuals with disabilities who are unable to tolerate the use of the shield (e.g., eye lens shielding).
Where it is agreed that shielding should be used, then staff should be trained in:
• The selection of appropriate shielding, including how to prevent shielding moving during a procedure due to patient or equipment movement (e.g., during dynamic imaging)
• The selection of appropriate radiographic techniques, including how to avoid interference with automatic exposure control systems
• How to perform quality control checks on patient contact shielding
• How to store shielding appropriately
• How to clean and disinfect shielding
• How to comply with local policies regarding patient dignity (e.g., transgender patients [39])
• Communication skills specific to discussions with patients, parents or caretakers of children undergoing radiological examinations, and healthcare professionals, on the use of patient contact shielding
• How to communicate benefit-risk to pregnant patients
Next steps
For some users of radiation, the implementation of this guidance and recommendations may represent a significant cultural change in practice and require development of a change management program, with stakeholder consultation.
Following the adoption of this consensus statement, there will be a need to review current practice and provide suitable information and education material for health professionals and the public.
The European Society of Radiology, through EuroSafe Imaging and with the assistance of the GAPS group (see Introduction), is currently planning the first step: a web-based survey of radiology departments to evaluate the current practice of contact shielding within Europe.
A concerted effort will be required by the relevant professional bodies to ensure the next steps of education and training to explain the changes in guidance are made readily available to European users. Some useful information on patient shielding is already available online, including from the British Institute of Radiology (https://www.bir.org.uk/education-and-events/patient-shielding-guidance.aspx) and the American Association of Physicists in Medicine CARES (Communicating Advances in Radiation Education for Shielding) group (https://w3.aapm.org/cares/).
Review of current guidelines
The technology used in X-ray imaging, the level of radiation doses absorbed by the patients and the knowledge on radiation dose effects due to ionising radiation may vary over time. Therefore, it is deemed necessary to review these guidelines periodically. In principle, these will be reviewed after a period of five years or sooner if new evidence or changes recommend so.
Teachers’ and school administrators’ perceptions of emergency distance education
This research was conducted to determine the perceptions of school administrators and teachers about Covid-19 and distance education. The research is a descriptive study conducted to reflect the specific characteristics of the participants; in this context, the research model is the survey model. The study group consisted of 31 school administrators and 156 teachers working in a province of Turkey in the 2020-2021 academic year, all of whom participated voluntarily. A convenience (easily accessible) sampling technique was used in determining the participants. Within the scope of the research, a distance education satisfaction questionnaire, developed on the basis of the researcher's own experiences, and an information form containing the personal information of the participants were used to collect data. The data were collected by sending the online data collection tool to school administrators and teachers; it was delivered to participants through WhatsApp groups via Google Forms. In analyzing the data, descriptive statistical analyses were conducted for all questions, and basic statistical values such as frequency, percentage, standard deviation, mode, and median were reported. At the end of the study, it was determined that half of the participants did not consider the distance education conducted in their schools during the epidemic period to be sufficient. Of the administrators and teachers, 49.7% stated that distance education could be partially benefited from while conducting the lessons, 40.1% stated that it was not appropriate to conduct the lessons with distance education, and 10.2% stated that all the lessons could be conducted by distance education.
Introduction
Today, information and communication technologies can offer a wide range of teaching alternatives, from supporting traditional teacher-centered classroom teaching activities to applications that can be customized according to each student's own learning pace and preferences regardless of time and space (Açıkgül et al., 2021; Becker, 2000; Boucher, 1998; Elyazgi et al., 2014; Isisag, 2012; Maryam et al., 2013; Pınar & Akgül, 2020; Postholm, 2007; Selwyn, 2007; Wenglinsky, 2005). The information age, which has risen in parallel with technological developments and influenced the world (Papadakis, 2021), has also significantly affected life skills, bringing to the fore a wide range of competencies based on information and communication technology-supported decision and solution processes, which we call 21st century skills (Bardakçı & Keser, 2017; Cuban, 2006). All these transformations highlight distance education as an alternative that can complement and strengthen formal education processes (Eroğlu & Kalaycı, 2020; Katsaris & Vidakis, 2021). As a result of the COVID-19 pandemic that has spread all over the world, the field of education has been affected as in all fields, and in this process, distance education has become an effective option that supports traditional education or can at times be used instead of it. The COVID-19 pandemic has affected the education process of 1.6 billion students from 200 countries, necessitating significant changes in education processes worldwide (Mohammed, 2022; UNESCO, 2020). Therefore, the whole world has urgently turned to distance education at all levels of education to minimize the negative effects of the pandemic on human health (Karadağ et al., 2021).
Distance education practices started in Turkey in 1982 with Anadolu University as open education (Bozkurt, 2017;Pınar & Akgül, 2020;Yamamoto & Altun, 2020). Distance education applications, which were previously given in radio and television environment (Bozkurt, 2017;Erturgut, 2010), were later moved to the computer environment with the advanced digital environments provided by internet technologies, and today it has gained a different dimension with the development of mobile devices. In 2012, the Ministry of National Education (MEB) designed the Education and Information Network (EBA) and started distance education activities within its structure. EBA, which has been enriched in terms of content since 2012, has gained a different dimension with the addition of the live lesson application in 2020 (YEĞİTEK, 2020). Today, both the Ministry of National Education and universities have paved the way for distance education activities independent of time and space. This has opened a new era in ensuring the continuity of education and training activities. Today, most educational institutions use distance education to conduct common compulsory or elective courses (Eroğlu & Kalaycı, 2020).
However, there are differences between the nature of distance education activities and emergency distance education activities (Bozkurt & Sharma, 2020;Hodges et al., 2020). Indeed, Hodges et al., (2020) stated that distance education conducted during a crisis is different from a typical distance education process. Accordingly, in the literature, the distance education process carried out in times of crisis without extensive preparation, such as during the COVID-19 pandemic, is referred to as "emergency distance education" (Bozkurt & Sharma, 2020;Hodges et al., 2020). The critical difference between the two practices is that distance education activities are well-planned learning activities (Hodges et al., 2020) and are characterized by distance between learners and learning resources in terms of time or space (Bozkurt & Sharma, 2020). On the other hand, emergency distance learning can be perceived as educational activities aimed at solving a sudden problem (Golden, 2020). Therefore, it is important to evaluate the distance education processes implemented during the pandemic and this study looks from the perspective of emergency distance education (Aguayo et al., 2022). Keegan (2003) states that distance education has six critical dimensions. These dimensions are the separation of teacher and student, the role of educational organization, the place of technological tools, two-way communication, the separation of teacher and learning group, and industrialization. Distance education offers numerous opportunities not only for students but also for educators for quality education. From this perspective, distance education enables the use of many different teaching materials such as virtual world applications, online conferencing environments, virtual reality applications, social media applications, offline communication applications, animations, simulations, teaching documents, virtual reality applications (Baker et al., 2009;Beldarrain, 2006;Dalgarno et al., 2009;Jin, 2011;Shih, 2002;Slykhuis et al., 2005;Veletsianos, 2010;Ventura & Martin-Monje, 2016). Therefore, distance education can be considered as a system that provides various learning environments for students who do not have access to face-to-face education (Liu & Ginther, 1999;Rovai & Barnum, 2003).
To summarize the purposes of distance education: it aims to spread the latest technologies used for distance education to the public, thereby maximizing information sharing and access, and to ensure standardization in education individually and collectively. In addition, distance education aims to shorten the time between training and practice, to improve individual skills and success, and to provide knowledge through continuous and intensive education (Ağır, 2007). There are four main elements at the basis of the concept of distance education. These elements can be listed as follows (Özarslan, 2008):
• Distance education provides a formal education opportunity through government institutions, and students can receive a diploma or certificate when they are successful.
• Through distance education applications, students and teachers can come together in various places and at various times.
• Distance education can be conducted both simultaneously (synchronously) and at various times (asynchronously). In addition, distance education offers the opportunity to interact through new communication technologies.
• Distance education provides a link between resources; thus, design, budgeting, and transmission planning can be created easily.
Distance education has many advantages over traditional education in economic, social, cultural, and psychological terms. These advantages are stated as follows (Demirbilek, 2021; Aguayo et al., 2022):
• Employees can access the internet from wherever they are and receive training remotely.
• Students' situations can be evaluated more objectively and accurately.
• Educational activities can be conducted with not only a national but also an international dimension.
• Education can be provided to a large audience in a healthy way without the need for a physical venue.
• Since distance education enriches the lessons audio-visually, students are motivated more quickly.
• Distance education increases the competition among trainers, so that more qualified trainers can be trained.
• Distance education reduces the economic expenditures of institutions and organizations and reduces costs.
However, with the COVID-19 pandemic, the sudden transformation of face-to-face courses into distance education without any planning process has brought many disadvantages in terms of student satisfaction (Altıparmak et al., 2011; Bakker & Wagner, 2020; Demirbilek, 2021). In particular, technical problems with the internet can cause connection problems between students and instructors (Altıparmak et al., 2011; Demirbilek, 2021). In a study conducted by the OECD on students' internet access, approximately 80% of students in Turkey have this opportunity (OECD, 2020a). This drops to 50% for socioeconomically disadvantaged students, while for the advantaged group it is around 90%. Turkey ranks 71st among 78 countries, against an OECD average of around 95%, indicating that students in Turkey are more disadvantaged in this respect than those in many other countries (OECD, 2020a). Students who lack financial means may not benefit from internet-based distance education for economic reasons because they cannot afford computers (Altıparmak et al., 2011; Demirbilek, 2021). Another technological requirement for students to participate in emergency distance education activities is access to a computer. When students' access to computers for schoolwork is analyzed, the average is around 90% in OECD countries, while this rate is around 65-70% in Turkey (OECD, 2020a). This situation may lead to inequalities among students in urgently implemented distance education activities and will mean an interruption of educational activities for some students (Bakker & Wagner, 2020).
When the literature is examined, it is seen that another critical issue in distance education activities is that students have a suitable environment where they can study at home. OECD data shows that approximately 92% of students worldwide have such an environment (OECD, 2020a). In this context, when we look at the situation in Turkey, it is seen that approximately 86% of students can receive education at home. When this situation is considered for students at lower socio-economic levels, it is seen that approximately 80% of students have such an environment, but 20% do not have such an environment (OECD, 2020a). It is seen that this situation, which is a prerequisite for urgent distance education activities to be carried out, may create disadvantages especially for low-income students.
Another critical component of educational activities is teachers. To manage teaching activities well during emergency distance education, teachers need the necessary technological infrastructure, knowledge, and pedagogical background (Demirbilek, 2021). They should also be able to prepare the necessary teaching materials for the emergency distance education activities and allocate time for this. While the OECD average for teachers not having enough time to prepare the necessary digital content is about 60%, this rate is about 85% in Turkey (OECD, 2020a). In this respect, teachers in Turkey face considerable time pressure. While the OECD average for teachers having the necessary technical knowledge and infrastructure is 65%, this rate is around 75% in Turkey (OECD, 2020a). Despite being above the OECD average, approximately 25% of teachers do not have the necessary technological equipment, which reveals the need to support teachers in this context (Lynch, 2020; Reich et al., 2020; Reimers & Schleicher, 2020; Worldbank, 2020). It does not seem easy for teachers to adapt to new online environments (Kong, 2020) because they lack experience in distance education (Lynch, 2020). Kong (2020) stated that teachers have problems expressing themselves during distance education lessons; the language they use in the teaching process is inflexible and flat, which does not attract students' attention. For this reason, teachers have problems involving students in the lesson, and teaching turns into a completely teacher-centered activity (Bakker & Wagner, 2020; Kong, 2020).
Studies reveal that synchronous distance education applications in particular cannot meet participants' expectations due to problems such as visual, sound, and communication issues and low interaction (Kaleli Yılmaz & Güven, 2015; Demirbilek, 2021). Özkul & Aydın (2012) surveyed students' views on distance education and found that half of the students preferred blended education to face-to-face or distance education alone. However, Barış (2015) found that university students' attitudes towards distance education were low. In Özgül and Uysal's (2016) study investigating student opinions on distance summer school practice, it was concluded that students found the distance summer school practice more beneficial than the formal summer school practice. Paydar and Doğan (2019) also revealed in their study that most pre-service teachers had a positive view of distance and open learning, found the course useful, and were willing to take the course. Pre-service teachers stated that distance learning environments can be both advantageous and disadvantageous depending on the situation.
The worldwide pandemic has had many impacts on teaching and learning activities. More than 94% of students worldwide have been affected by the pandemic, which shows the extent of the pandemic's impact on education worldwide (Mohammed, 2022). In this process, teachers, students, institutions and parents, who are the stakeholders of education, have been involved in a new education process and have entered distance education courses outside of the face-to-face education activities they are used to and have experienced many problems (Poultsakis et al., 2021). These problems were sometimes caused by the technology infrastructure, and sometimes by the negative emotions experienced by teachers, feelings of loneliness and communication problems with students.
The data for this study were collected at the end of the year when the COVID-19 pandemic emerged. At the time of data collection, all educational institutions were conducting compulsory courses in the form of distance education, asynchronous materials in the learning management system and synchronous live courses. The aim of this study is to determine the opinions of school administrators and teachers about the competence, changes, motivation and problems experienced in the online learning process and the distance education process.
Model of the research
This research is a descriptive study conducted to reflect the specific feelings and thoughts of the participants. In this context, the model of the research is the survey model. Survey studies are "studies that aim to collect data to determine certain characteristics of a group" (Büyüköztürk et al., 2013).
Population and sample
The personal characteristics of the participants are given in Table 1. Accordingly, 42.8% of the sample were women and 57.2% were men. In terms of their roles, teachers constituted the largest group (83.4%); in terms of school level, participants worked in secondary schools (51.9%), primary schools (31.6%), high schools (14.4%), and kindergartens (2.1%). In terms of the branch variable, the largest group consisted of Turkish teachers, and the smallest group (1.6%) consisted of music and philosophy group teachers. Finally, in terms of the professional seniority variable, the largest group (31.6%) consisted of participants with 1-5 years of seniority, and the smallest group (17.6%) consisted of participants with 11-15 years of seniority.
Data collection tools
In this research, which aims to examine the views of school administrators and teachers on distance education, a two-part form was created in the online environment. The first part was a personal information form asking about demographic characteristics; in the second part, participants were asked to complete the distance education satisfaction survey. The prepared forms were sent via e-mail and kept open for one month. In total, 187 school administrators and teachers working in Bingöl Province were reached.
Data collection and analysis
As with all scale tools in social science research, validity and reliability studies should be carried out for survey results. The validity of a questionnaire reflects its power to obtain appropriate answers to the subject and questions under investigation, while test-retest studies are widely used to establish reliability. The questionnaire development process takes place in four stages: defining the problem, writing the items (questions), obtaining expert opinion, and pre-application (Büyüköztürk et al., 2013).
Within the scope of the validity and reliability studies of the developed questionnaire, expert opinion was sought for face and content validity, and a pre-application was conducted to check whether the items were understandable and explanatory. Within the scope of reliability, the correlations of the answers given to the items ranged between 0.77 and 0.96 in a test-retest study conducted at a two-week interval. After these stages, the actual application of the questionnaire was started. In analyzing the data obtained within the scope of the study, descriptive statistical analyses were conducted for all questions, and basic statistical values such as frequency, percentage, standard deviation, mode, and median were reported.
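As a rough illustration of the analysis workflow described above, the Python sketch below computes the reported descriptive statistics for one questionnaire item and a test-retest correlation. The file name and column names are hypothetical placeholders; the study's actual data file and coding are not reproduced here.

```python
# Minimal sketch of the descriptive and test-retest analysis; 'responses.csv'
# and the column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # one row per participant

item = df["q1_distance_education_sufficient"]
summary = {
    "frequency": item.value_counts().to_dict(),
    "percentage": (item.value_counts(normalize=True) * 100).round(1).to_dict(),
    "mean": item.mean(),
    "median": item.median(),
    "mode": item.mode().iloc[0],
    "std": item.std(),
}
print(summary)

# Test-retest reliability for one item administered two weeks apart:
r, p = stats.pearsonr(df["q1_test"], df["q1_retest"])
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```

A correlation in the 0.77-0.96 range reported above would indicate acceptable test-retest stability for the items.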
Sub-problems of the research
1. Do you find the distance education conducted by your school sufficient?
2. What level of change has the Covid-19 outbreak caused in your life?
3. If you had to give a score between 0 and 7 for your general academic motivation before the Covid-19 epidemic, what score would you give?
4. If you had to give a score between 0 and 7 for your general academic motivation after the Covid-19 epidemic, what score would you give?
5. Choose the statement that suits you best from the opinions below regarding the suitability of the distance education method for teaching the courses at the school where you work.
6. How often have you had problems with the following technology-related issues since the transition to distance education?
6.1. Students' deficiencies/inadequacies regarding distance education technologies/applications.
6.2. Uncertainties about which technology or application we will use.
6.3. Not knowing how to use the necessary applications (e.g., Zoom, M. Teams, Google Meet) for distance education communication.
6.4. Lack of internet access at my place of residence.
6.5. Using a different technology/application to teach each lesson.
6.6. The lack of digital equivalents of the functional tools (e.g., blackboard) used in face-to-face education.
Findings regarding the first sub-problem
The first sub-problem of the research was expressed as: "Do you find the distance education conducted by your school sufficient?" The descriptive statistics for the answers given to this question are presented in Table 2.
When the values in Table 2 are examined, the mode, median, and mean values are remarkably close to each other. The participants' answers were divided almost evenly between the yes and no options, with the mean value slightly closer to the no option (X̄ = 4.01).
Findings related to the second sub-problem
The second sub-problem of the research was expressed as: "What level of change has the Covid-19 outbreak caused in your life?" The descriptive statistics for the answers given to this question are presented in Table 3.
Given that higher scores on this question correspond to the "completely changed" option and lower scores to the "never happened" option, the values in Table 3 show that the mean of the data set is closer to the "completely changed" option (X̄ = 8.51).
Findings regarding the third sub-problem
The third sub-problem of the research was expressed as "If you had to give a score between 0 and 7 for your general academic motivation before the Covid-19 epidemic, what score would you give?" The descriptive statistics for the answers given to this question are presented in Table 4. Looking at the values in Table 4, the mean of the data set (X̄ = 5.83) is closer to the "I am highly motivated" option.
Findings related to the fourth sub-problem
The fourth sub-problem of the research was expressed as "If you had to give a score between 0 and 7 for your general academic motivation after the Covid-19 epidemic, what score would you give?" The descriptive statistics for the answers given to this question are presented in Table 5.
Looking at the values in Table 5, the mean of the data set (X̄ = 3.75) is closer to the "I have no motivation" option.
Findings related to the fifth sub-problem
The fifth sub-problem of the research was expressed as "Which of the following views is most appropriate for you regarding the suitability of the distance education method for teaching the courses at the school where you work?" The descriptive statistics for the answers given to this question are presented in Table 6.
Considering the values in Table 6, the option with the highest participation rate (49.7%) was "Distance education applications can be partially benefited from in the conduct of our school lessons"; 40.1% chose "Distance education is not an appropriate method for conducting the courses in our school"; and 10.2% chose "All of the courses in the school can also be conducted with distance education."
Students' deficiencies/inadequacies regarding distance education technologies/applications
The first sub-heading of the sixth sub-problem was expressed as "How often have you had problems with students' deficiencies/inadequacies regarding distance education technologies/applications since the transition to distance education?" The descriptive statistics for the answers are presented in Table 7. The participants' average score on this item was 3.31, indicating agreement with the statement at the "sometimes" level.
Uncertainties about which technology or application to use
The second sub-heading of the sixth sub-problem was expressed as "How often have you had problems with uncertainties about which technology or application to use since the transition to distance education?" The descriptive statistics for the answers are presented in Table 8.
When Table 8 is examined, the participants' average score on this item was 2.72, indicating agreement with the statement at the "sometimes" level.
Not knowing how to use applications required for distance education and communication (e.g., Zoom, Microsoft Teams, Google Meet)
The third sub-heading of the sixth sub-problem was expressed as "How often have you had problems with not knowing how to use the applications required for distance education and communication (e.g., Zoom, Microsoft Teams, Google Meet) since the transition to distance education?" The descriptive statistics for the answers are presented in Table 9. The participants' average score on this item was 1.97, indicating agreement with the statement at the "rarely" level.
Lack of internet access at the place of residence
The fourth sub-heading of the sixth sub-problem was expressed as "How often have you had problems with the lack of internet access at your place of residence since the transition to distance education?" The descriptive statistics for the answers are presented in Table 10.
Looking at Table 10, the participants' average score on the item concerning lack of internet access at the place of residence was 2.11, indicating agreement with the statement at the "rarely" level.
Using a different technology/application to teach each lesson
The fifth sub-heading of the sixth sub-problem was expressed as "How often have you had problems with using a different technology/application to teach each lesson since the transition to distance education?" The descriptive statistics for the answers are presented in Table 11. The participants' average score on this item was 2.16, indicating agreement with the statement at the "rarely" level.
Absence/non-use of functional tools (e.g., blackboard) used in face-to-face education in the digital environment
The sixth sub-heading of the sixth sub-problem was expressed as "How often have you had problems with the absence or non-use in the digital environment of the functional tools (e.g., the blackboard) you use in face-to-face education since the transition to distance education?" The descriptive statistics for the answers are presented in Table 12. The participants' average score on this item was 2.87, indicating agreement with the statement at the "sometimes" level.
Conclusion, discussion and recommendations
At the end of the research, it was determined that half of the participants did not find the distance education conducted in their schools during the epidemic sufficient. Of the administrators and teachers, 49.7% said that distance education could be partially benefited from in the conduct of the courses; 40.1% stated that it is not appropriate to conduct the courses with distance education; and 10.2% stated that all of the courses could be conducted with distance education.
There are studies in the literature that overlap with the findings of this research. Classroom teachers found the distance education conducted during the epidemic insufficient, and it has been suggested that, where possible, a hybrid/blended system combining formal and distance education be used instead of fully distance education (Kantos, 2020). Teachers who teach synchronously at a distance cannot provide enough guidance in this process and their interaction with students is not sufficient (Başaran, Doğan, Karaoğlu, & Şahin, 2020); the level of participation in synchronous courses is low, communication problems with students occur, and adequate social support cannot be provided to students (Genç, 2020).
In the research of Bakioğlu & Çevik (2020), 26.6% of science teachers thought that they could complete the distance education curriculum, 30.6% thought that they could not, and 16% stated that the curriculum could be partially completed. Most of the teachers who thought the curriculum could not be completed stated that the distance education environment was not suitable, the duration of the lessons was insufficient, and the level of student participation in synchronous lessons was low. Yılmaz (2020) argued that the activities conducted under the name of distance education cannot replace formal education.
One of the prerequisites for an education and training institution to be sufficient and effective in distance education is that its employees are willing to conduct distance education activities (Canpolat & Canpolat, 2020). Teachers who are inadequate and inexperienced in using distance education technologies, and who therefore have a negative view of distance education, affect students negatively (Nenko, Kybalna, & Snisarenko, 2020; Genç, 2020).
At the end of the research, it was determined that school administrators and teachers had some uncertainties about which technology or application to use after the transition to distance education. They also rarely had problems stemming from not knowing which technologies/applications to use in which course and how to use them. Although rarely, the participants also had problems connecting to the Internet during the distance education process.
Since the teachers could not use some of the tools and materials they use in face-to-face education in the digital environment, they were sometimes worried about the efficiency of the lessons.
In the distance education process, teachers have had problems preparing and presenting sufficient and effective teaching materials for lessons (Genç & Gümrükçüoğlu, 2020). The limited number of course materials that can be used in distance education negatively affects students' learning; for a qualified distance education, the number and quality of digital course contents should therefore be increased and distance education systems such as EBA should be enriched (Başaran et al., 2020). The most important problems faced by teachers in this process were found to stem from internet connection problems and from not knowing how to use the hardware and software required for distance education. In addition, 58.6% of the teachers stated that some students could not be reached during the distance education process and worried that, for several such reasons, students could not obtain sufficient information and could not be taught as in formal education (Bakioğlu & Çevik, 2020).
The open and distance education system, which has been conducted professionally for nearly 40 years in our country, needs to be improved in quantity and quality at all levels, from pre-school to higher education (Can, 2020). For planning distance education courses and coordinating among employees, schools should have a content developer for distance education, an assessment and evaluation specialist, a quality monitoring and evaluation team, and a system administrator. Every school should also have the internet infrastructure and technological devices necessary for distance education (Can, 2020; Canpolat & Canpolat, 2020; Salleh et al., 2020).
At the end of the research, it was determined that some students did not have the technological devices needed for distance education and that some were insufficient in using distance education applications/programs; for these student-related reasons, administrators and teachers experienced some problems in the distance education process. The literature on emergency distance education indicates that access to online resources is crucial for many students, but this becomes a disadvantage for students with little or no access (Dubey & Pandey, 2020; OECD, 2020b). The problem stems from the lack of technological infrastructure in students' homes to enable internet access, and it is especially vital for students living in rural areas and for socio-economically disadvantaged students (Alvarez, 2020; Dubey & Pandey, 2020; Konstantopoulou et al., 2022; OECD, 2020c).
In the studies of Kantos (2020) and Salman (2020), it was determined that teachers used EBA mostly for sending homework and activities, but that only a limited number of students completed them. In the research of Bakioğlu & Çevik (2020), the motivation of teachers and students to participate in distance education was found to be insufficient. According to Can (2020), students' information technology literacy levels are low; if students with low digital literacy cannot get support from someone else, they either do not participate in distance education at all or lose motivation in the face of technological problems. In the research of Başaran, Doğan, Karaoğlu, and Şahin (2020), students and their parents stated that there were infrastructure problems in EBA live classes and that students could not attend distance education courses for reasons such as several siblings studying in the same house with only one television at home and inadequate technological devices. Similarly, in the studies of Bakioğlu & Çevik (2020) and Kantos (2020), it was determined that some students did not have the internet access and technological devices required for distance education, so their participation in synchronous and asynchronous courses was low.
At the end of the research, it was determined that the lives of administrators and teachers changed completely with the epidemic and that the participants, whose academic motivation was quite high before the epidemic, lost their motivation during it. Many studies have examined the emergency distance education process carried out during the COVID-19 pandemic. They revealed that students experienced various difficulties in time management, motivation, and independent learning while taking courses with the unfamiliar distance education method, and that the quality of the education they received deteriorated (Lee et al., 2021; Means & Neisler, 2021; Weidlich & Kalz, 2021). Studies of student satisfaction have likewise shown that students were not very satisfied with emergency distance education (Karadag et al., 2021; Şimşek et al., 2021; Turan & Gürol, 2020). Given the widespread use of distance education worldwide and the low satisfaction of students, the need for scientific research on distance education processes to design effective learning environments cannot be denied.
There are studies supporting the findings of this research in the literature. The professional satisfaction of teachers who regarded distance education during the epidemic as ineffective and insufficient decreased (Bakioğlu & Çevik, 2020), and teachers felt inadequate about controlling and supervising the teaching process (Kantos, 2020). Distance education also limits teachers' communication with colleagues and students (Djalilova, 2020). In formal education, administrators and teachers who come together in the teachers' room between classes socialize and support their professional development by talking about lessons and students; the motivation of participants whose socialization needs are not met in distance education may therefore decrease.
The stress level of teachers who stayed at home for a long time due to the epidemic and worried that they would be infected with the virus increased (Al Lily, Ismail, Abunasser, & Alqahtani, 2020). The frequent use of the internet and various distance education platforms in the emergency distance education process causes cyber security concerns for teachers. Negative news in the media about the theft of personal information and user accounts over the Internet negatively affects teachers who do not have enough knowledge about cyber security measures (Han, Demirbilek, & Demirtaş, 2021).
Teachers usually communicated with students via WhatsApp during the distance education period (Kantos, 2020). For this reason, branch teachers with many lessons in particular were included in many WhatsApp groups. With request and question messages arriving from the groups at any time of day, teachers' working hours spread across the whole day, including weekdays and weekends, and most teachers were uncomfortable with private messages and calls from students and parents, including late at night. When online lessons are added to teachers' other responsibilities at home, their stress and anxiety levels can increase even more.
Although school administrators' and teachers' view of the distance education process, which had to be conducted unprepared and urgently, is negative, this process has also made significant positive contributions (Han, Demirbilek, & Demirtaş, 2021). In the literature there are studies that show positive changes in the lives of teachers through distance education. In a study of science teachers, 84% of teachers thought that, since they could teach even in difficult conditions, their self-confidence increased, they could improve themselves during the epidemic, and their professional development was affected positively (Bakioğlu & Çevik, 2020). With the sudden and compulsory transition to distance education, teachers' skills in using educational technologies increased and they improved themselves in preparing digital course contents. Teachers adapted to distance education by frequently using EBA, which many had never used before, together with the educational videos and documents in it (Genç & Gümrükçüoğlu, 2020; Kantos, 2020; Bakioğlu & Çevik, 2020). In addition to these gains, the increase in research on distance education and the digital course contents produced during the epidemic will make significant contributions to our education system in the post-epidemic period (Yıldırım, 2020).
Suggestions
When the findings of the study are evaluated together with the literature, the following suggestions can be made.
• Since the data were collected during the distance education process carried out during the COVID-19 pandemic, the possibility that the crisis environment created by the pandemic may have affected the findings is a limitation of this study.
• Remote guidance services, like those organized by MEB for students with high levels of anxiety and stress during the pandemic, can also be provided to school administrators and teachers.
• The Education Information Network (EBA) infrastructure should be strengthened. In this way, teachers' search for different platforms and the resulting concerns can be reduced.
• Teachers need to be pedagogically prepared for distance education in order to carry out the teaching process more effectively. Improving teachers' readiness for distance education through in-service training will pave the way for more effective execution of subsequent processes, and digital education content preparation training can be organized for administrators and teachers.
• Another vital issue is the positive and negative emotions experienced by teachers and students during the pandemic-era education process. It is a known fact that emotions play a decisive role in academic success. For this reason, it is necessary to investigate the causes of negative emotions in depth, reinforce the situations that bring out positive emotions, and thus ensure that teachers and students continue their education processes in a more positive environment. Providing psychological support for both teachers and students is considered necessary.
• In conclusion, there were interaction problems among students and between teachers and students during the pandemic period. Teachers had difficulty motivating students to attend classes, and social ties between students decreased. The interaction was generally cold and one-way, from teacher to student. However, creating an interactive teaching environment is essential to help students construct their learning through experience rather than passive participation. Therefore, it is of great importance to plan better for distance education emergencies and to create a program outline that moves students from passive participation to an interactive process.
Data availability Data sharing for this study is not applicable as no datasets were generated.
Declarations
Conflict of interest Not Applicable.
Changes in the response of the AL Index with solar cycle and epoch within a corotating interaction region
Abstract. We use observations in the solar wind and on the ground to study the interaction of the solar wind and interplanetary magnetic field with Earth's magnetosphere. We find that the type of response depends on the state of the solar wind: coupling functions change as the properties of the solar wind change. We examine this behavior quantitatively with time-dependent linear prediction filters. These filters are determined from ensemble arrays of representative events organized by some characteristic time in the event time series. In our study we have chosen the stream interface at the center of a corotating interaction region as the reference time. To carry out our analysis we have identified 394 stream interfaces in the years 1995–2007. For each interface we have selected ten-day intervals centered on the interface and placed data for the interval in rows of an ensemble array. In this study we use Es, the rectified dawn-dusk electric field in gsm coordinates, as input and the AL index as output. A selection window of width one day is stepped across the ensemble, and for each of the nine available windows all events in a given year (~30) are used to calculate a system impulse response function. A change in the properties of the system as a consequence of changes in the solar wind relative to the reference time will appear as a change in the shape and/or the area of the response function. The analysis shows that typically only 45% of the AL variance is predictable in this manner when filters are constructed from a full year of data. We find that the weakest coupling occurs around the stream interface and the strongest well away from the interface. The interface is the time of peak dynamic pressure and strength of the electric field. We also find that coupling appears to be stronger during recurrent high-speed streams in the declining phase of the solar cycle than it is around solar maximum. These results are consistent with the previous report that both strong driving (Es) and high dynamic pressure (Pdyn) reduce the coupling efficiency. Although the changes appear to be statistically significant, their physical cause cannot be uniquely identified because various properties of the solar wind vary systematically through a corotating interaction region. It is also possible that the quality of the propagated solar wind data depends on the state of the solar wind. Finally, it is likely that the quality of the AL index during the last solar cycle may affect the results. Despite these limitations, our results indicate that the Es-AL coupling function is 50% stronger outside a corotating interaction region than inside.
Introduction
Very early in the space age it was shown that geomagnetic activity is related to the solar wind speed (Snyder et al., 1963) and controlled by the north-south component of the interplanetary magnetic field (Fairfield and Cahill, 1966). This result was interpreted as evidence of magnetic reconnection between the interplanetary magnetic field (IMF) and the Earth's dipole field. According to Dungey (1963) the rate per unit length at which southward IMF is transported to the subsolar magnetopause should be proportional to the dawn-dusk component of the electric field given by Ey = V Bz. Subsequent work suggests that the magnetosheath flow pattern and stagnation of the flow may modify this simple assumption. Early work examined the relation of averages of different solar wind parameters versus various magnetic indices, finding that
the larger is Bz, the stronger is magnetic activity (Schatten and Wilcox, 1967). This work also showed that Bz in gsm coordinates exhibits the highest correlation with magnetic activity (Hirshberg and Colburn, 1969). Arnoldy (1971) studied the relation between gsm Bs (Bs = 0 for northward Bz) and the hourly integral of AE (auroral electrojet index). The hourly integral was calculated as the sum of several samples multiplied by the time between samples (τ approximately 10 min), represented as Bsτ. He found that the highest correlation between Bsτ and AE occurred when the input was taken one hour ahead of the output, but there was correlation at other lags as well. This led him to express the output AE as a linear combination of input values at lags of 0, 1 and 2 h. The correspondence between the model predictions and the observations was remarkable. This model was actually a linear prediction filter. Meng et al. (1973) used 5-min resolution data to study the cross correlation between AE and IMF Bz and found that the peak correlation occurred at ∼40 min delay, a value somewhat less than the value obtained by Arnoldy. The first study to explicitly use linear prediction filters to investigate the relation between IMF Bz and various indices was performed by Iyemori et al. (1979). The authors used hourly averages to show that AL, AU, AE and Dst are all reasonably well predicted by IMF Bz. The auroral electrojet filters were all short, with only a few samples contributing to the output, but the Dst index depends on inputs for many hours prior to the current time. Clauer et al. (1981) extended this work with 2.5-min resolution data and consideration of three different coupling functions: epsilon, VBs and V²Bs. Epsilon is proportional to the product of the interplanetary Poynting vector and a "gating function" that depends on the clock angle of the IMF around the Earth-Sun line (Perreault and Akasofu, 1978). All of the coupling functions produced filters that rise rapidly to a peak in an hour or less and then decay more slowly for several hours. They found that the epsilon parameter produced considerably less accurate predictions than the other coupling functions and that its filter was much noisier. They also noted that moderate activity filters tended to peak at about 60 min while strong activity filters peaked near 30 min. They interpreted this as evidence of a possible nonlinear response of the magnetosphere to the solar wind. Clauer et al. (1983) used the same technique to determine the prediction filter relating Es (the component of Ey due to Bs) to the ring current asymmetry index. They demonstrated that this filter is very similar to the AL filter, suggesting a relation between the westward electrojet and the current system responsible for asymmetry in Dst.
The nonlinearity of the Es to AL response was investigated in greater detail by Bargatze et al. (1985). The authors selected isolated intervals of activity and then characterized each interval by its median value of AL. Prediction filters were created for successive intervals. They found that the filters consisted of two peaks, at about 20 and 60 min. The 60-min peak was highest for moderate activity while the 20-min peak was highest in strong activity. This result stimulated a long sequence of papers that utilized this dataset to study solar wind coupling to the westward electrojet.
A number of reviews of solar wind coupling to geomagnetic activity were written about this time. Reiff (1983), Baker (1986), and Baumjohann (1986) described a variety of standard statistical techniques for studying solar wind coupling. Clauer et al. (1986) provided a detailed description of the techniques of linear prediction filtering. McPherron et al. (1988) reviewed results of linear prediction filtering, noting that the Es-AL response function can be approximated by a Rayleigh function with a time constant of one hour. Since the maximum of a Rayleigh function occurs at a time equal to the time constant, this implies that the peak AL response to a delta function input will be delayed by this amount. Note, however, that although this result was quoted in the abstract it was not shown in the body of the published paper. In the results discussed below we demonstrate the truth of this statement. McPherron et al. (1988) also showed that the transfer function is a low pass filter with a cutoff frequency of about 0.1 mHz (2.8 h). This filter explained less than 45% of the variance in the dataset, suggesting that other factors beside the solar wind are important in the creation of AL. However, the authors showed that more than 90% of the variance in an individual event could be described by a very simple response function consisting of two delta functions of specified amplitude and time delay, provided the four parameters are varied from event to event. Their interpretation of this result was that the AL index contains two components: one directly driven by the solar wind through the measured impulse response; and another driven by energy stored in the magnetotail and unloaded in a substorm expansion. The surprising result was that the second component also appears to be directly related to the solar wind electric field. However, the time delay when this component begins is controlled by internal processes and hence on average does not correlate with the solar wind.
Subsequent to the work by McPherron et al. (1988), Weimer (1994) carried out a superposed epoch analysis of AL during the expansion phase and fit a slightly different function to the mean behavior during the expansion and recovery phase. This function was f = c_0 + c_1 t e^{pt}. He found that the decay time constant −1/p decreased from 0.56 to 0.41 h as activity increased. For this function these values imply that the maximum of the response function occurs earlier as activity increases.
Techniques of nonlinear dynamics have been applied to the AE index time series in attempts to identify the type of system represented by solar wind coupling to the auroral electrojets. Vassiliadis et al. (1990) considered the magnetosphere as an autonomous system, one driven by a low-level steady input. In this situation the transient response of the system disappears as the system approaches a semi-steady state governed by internal dynamics. They concluded that the magnetosphere is a low-dimensional chaotic system with a fractal dimension near 4. This implies that only four differential equations are needed to describe the relation of Es to AE. In an extension of this work Vassiliadis et al. (1991) concluded that the Lyapunov exponent of the system, the time to depart from a given state, is only 10 min. Prichard and Price (1992) disputed this result, arguing that the conclusion of low-dimensionality and short-lived chaotic behavior was an artifact of a long autocorrelation time in the AE index time series. They concluded instead that the index series usually represents random behavior with some nonlinear structure. Such time series are produced by a driven system with a random forcing function. They suggested that accurate dimensional estimates can only be obtained when the driver (Es in the solar wind) is steady for times long compared to the time constants of the system. Price and Prichard (1993) examined one such event and concluded that there was some evidence for deterministic nonlinear coupling. Further analysis of a longer data series by Prichard and Price (1993) again concluded that there is no evidence of low-dimensional chaotic behavior.
Singular spectrum analysis was applied to the AE index time series by Sharma et al. (1993), who again concluded that the magnetospheric system could be represented by a low dimensional system. Vassiliadis et al. (1993) demonstrated the feasibility of this by representing the system by an LRC circuit. Parameters in this model were determined by least square optimization. When driven by the solar wind, these simple low-dimensional models were able to reproduce the behavior in the AE time series about as well as linear prediction filters. Quantitatively the authors showed that, averaged over 1-2 d intervals, their LRC models usually predict more than 40% of the AE variance. This is very close to the average value of 45% obtained later in this paper.
The first attempt to use modern techniques of information theory to treat the magnetosphere as an input-output system was carried out by Price et al. (1994). The authors used "local linear filters" to predict the AE index some time ahead of the current time. This technique assumes that the current state of the system is defined by a state vector consisting of a sequence of previous values of the input and output, both normalized by their respective standard deviations. The historical record is searched to find previous examples of this state. An ensemble of these "nearest neighbors" is used to calculate a filter to advance the prediction one time step. Single-step prediction uses the last measured values to advance the state and prediction. Multi-step prediction uses the previously predicted values of the output and measured values of input to advance both the state and prediction. Price et al. (1994) found that their prediction errors do not stabilize until at least 500 nearest neighbors are used to advance the prediction. For single-step prediction about 95% of the variance in the next value is predictable. However, in multi-step prediction the prediction efficiency stabilizes at about 60% after 60 min. Twenty different coupling functions including gsm VBs and epsilon were considered, with single-step prediction efficiencies that were virtually identical and close to the result of persistence, i.e. the next value is the same as the last value. The authors conclude that there is little evidence for nonlinear coupling. They support the conclusion of Bargatze et al. (1985) and McPherron et al. (1988) that the AE time series contains a strong and unpredictable stochastic component.
An extension of input-output system analysis has been reported by Vassiliadis et al. (1995). The authors utilize the same basic principles as did Price et al. (1994), but with a number of differences. Among these were the use of the AL index rather than AE, which includes both the AU and AL indices; the use of the solar wind monitor IMP-8, closer to the Earth than ISEE-3; and the calculation of both moving average and autoregressive moving average filters. In addition they performed detailed optimization of the various parameters used in the analysis method, including the length of the filter, the separation of samples used in the filter, the number of nearest neighbors defining the state space, and the number of singular values used in the matrix inversion. The authors find that local moving average (MA) filters that depend only on the current state of the solar wind are optimum for a 2.5-h long filter when the input series is sampled at five minute intervals, with 100 nearest neighbors, and separated from the current state by at least 10 h. Such filters make single step predictions of AL with a prediction efficiency of order 75%. Autoregressive moving average (ARMA) filters that depend on the state of both the solar wind and the previous AL index do much better (∼90% of variance) with far fewer coefficients (4-6) in the two parts of the filter. When these filters are iterated for about 4 h using previous predictions of AL and the observed input series they still predict about 65% of the variance. In comparison, linear filters predict about 40% of the variance. (Note the authors report prediction quality with correlation coefficients, which are approximately the square root of prediction efficiency.) An important point made by the authors is that the AL predictions are stable against perturbations of the initial conditions used to start an iterated prediction.
In a later paper (Vassiliadis et al., 1996) these same authors reexamined the question of whether the VBs-AL coupling is nonlinear. They conclude that the answer to this question strongly depends on the details of the analysis. In particular, when filters are averaged over a large range of activity levels they are biased toward becoming linear prediction filters. They conclude that the magnetosphere is nonlinear and that this must be taken into account by the use of state-dependent prediction filters.
Until recently little additional work has been done on the problem of VBs-AL coupling. Attention has turned to other indices such as the PC index and the Dst index. Also an effort has been made to define a better input parameter than VBs; for example, Newell et al. (2007) proposed a nearly universal solar wind-magnetosphere coupling function. Pulkkinen et al. (2007), however, have investigated the VBs-AL coupling problem using superposed epoch analysis. A set of 150 electrojet activations defined by the onset of a negative bay in AL was selected. The dataset was divided into high and low driving, with Ey = 4 mV/m as the dividing line, and high and low dynamic pressure, with Pdyn = 3 nPa separating the two classes. They find that the ratio |AL|/Ey for low driving is ∼130 before onset and ∼180 after. For high driving the corresponding ratios are ∼110 and ∼140. Similar results were found for high and low dynamic pressure. For low pressure the ratio is ∼130 before onset and ∼180 after. For high pressure the corresponding values are lower, ∼75 and ∼130. Thus weak driving and low dynamic pressure both result in stronger coupling. These results seem to confirm the conclusion of Vassiliadis et al. (1996) that the AL index has a nonlinear response to the solar wind electric field.
The purpose of this paper is to determine whether the nonlinear results obtained by Pulkkinen et al. (2007) are evident in the properties of linear prediction filters. For this analysis we use a technique somewhat like the first method described by Vassiliadis et al. (1995) (see the preceding discussion). These authors used a state vector depending only on the solar wind input (Es) to define the system state, and for each state constructed a moving average filter from a number of very similar states. In our case we define the state of the solar wind differently. In particular we use the stream interface in a corotating interaction region (CIR) as a reference time. We assume that all CIRs exhibit similar values of solar wind parameters as a function of time relative to this reference time. Thus we expect that the linear prediction filter that transforms Es into AL is the same for all CIRs but varies with epoch time. We will show that there is a significant change with epoch time, with the weakest coupling occurring at the stream interface, when the solar wind electric field and dynamic pressure are strongest.
To illustrate why we might expect a change in the VBs-AL coupling function during the passage of a CIR, we briefly review the characteristics of corotating interaction regions. A CIR is formed when slow speed solar wind from one longitude on the Sun is overtaken by high-speed wind from a following longitude. The high-speed plasma cannot penetrate the slow speed plasma because of the imbedded magnetic fields. Consequently it compresses the plasma and magnetic field near the interface. This creates a spiral shaped interaction region between the two solar wind streams with the interface between the two streams at the center. Total pressure in the plasma reaches a peak along the interface with a gradient away from the ridge of high pressure. Ahead of the interface the pressure gradient deflects the solar wind to the west of the Earth-Sun line, and behind it the gradient deflects the plasma toward the east. With time the region of elevated pressure propagates away from the interface, broadening the interaction region. Inside the leading edge of the CIR the slow wind is accelerated, while behind the interface the fast wind is decelerated. In a frame of reference moving with the interface the solar wind flow on the two sides is tangential to the interface. The high-speed stream behind the interface contains large amplitude Alfvén waves propagating outward from the Sun. These waves rotate the IMF southward, antiparallel to the Earth's magnetic field, enabling dayside magnetic reconnection. The reconnection drives magnetospheric convection which in turn drives field-aligned current closing through the ionosphere. It is the Hall current produced by this closure that is evident in the AE indices. Since the electric field in the solar wind is the rate of flux transport per unit length, high speed wind creates a stronger electric field than low-speed wind. Combined with the fluctuations in Bz caused by the Alfvén waves, it is expected that geomagnetic activity is stronger after the interface than it is before.
An illustration of the average properties of a CIR derived from superposed epoch analysis is presented in Fig. 1. Zero time in this analysis is the time at which a stream interface, defined as the zero crossing of the azimuthal flow angle of the solar wind, passed the Earth. The right side of the figure displays important parameters derived from solar wind plasma and magnetic field measurements. In each panel the shaded regions bounded by the upper and lower quartiles define the range within which 50% of the data fall. The top panel shows that solar wind dynamic pressure begins to increase one day before the stream interface. This is the leading edge of the compression region. Dynamic pressure peaks at the stream interface, then decays over the following two days. The behavior of total pressure in the solar wind frame (panel 2) is very similar, although it starts its increase a little later. The solar wind electric field (Ey) (panel 3) begins to increase 12 h before the interface, peaks at the interface, and decays slowly over a period of three days. Beta (panel 4), the ratio of thermal to magnetic pressure in the solar wind, is nearly 2.0 just before the interface, falls to about 1.0 at the interface, and takes several days to return to normal. Solar wind Mach number (panel 5) is typically around 8.0, but inside the CIR it falls to 6.0 and then recovers over the next three days.
The left side of Fig. 1 shows measures of geomagnetic activity during the passage of a CIR. All panels show that activity indices begin to increase a few hours before the CIR stream interface, peak 6-12 h after the interface, and then take about five days to decay to the quiet levels present before the interface. The analysis presented in this paper utilizes Es, the rectified version of Ey, as input, and the AL index (a component of AE) as output. If coupling is stronger for low dynamic pressure and a weak driver, we would expect to find that the ratio between AL and Es is largest at the edges of the figure and weakest at the center. We will demonstrate that this is the case.
Datasets and preprocessing
The output parameter used in this study is the lower auroral electrojet index (AL) calculated and distributed by the World Data Center C2 in Kyoto, Japan. We have downloaded these data and interactively edited the AL time series to flag obvious spikes in the index. The input parameter is the rectified dawn-dusk electric field of the solar wind in gsm coordinates. To obtain this quantity we have processed all available solar wind and IMF data from the Wind and ACE spacecraft. The intervals covered by these spacecraft are 1995 to present for Wind and 1 February 1998 to present for ACE.
Rectified electric field is calculated from the solar wind speed and gsm Bz propagated to the subsolar bow shock by the Modified Minimum Variance Method (Bargatze et al., 2005; Weimer et al., 2003). This method uses a moving window to calculate the time-varying normal to discontinuities in the interplanetary magnetic field (IMF). For each window the mean field is calculated and a minimum variance analysis is performed on the projection of the field perpendicular to the mean field. The minimum variance direction is taken as the normal to discontinuities in the window. Time delays are calculated by projecting the satellite position vector and solar wind velocity onto this normal. The calculated time delays are added to the times of sequential points. When fast solar wind follows slow wind, some parcels of plasma will appear to overtake and pass slower parcels ahead of them. Of course this cannot happen in the solar wind because the magnetic field is frozen in the plasma; instead the plasma and magnetic field near the gradient in speed are compressed. In the usual propagation technique this situation is handled by simply eliminating the overtaken parcels. When the wind speed is decreasing, fast parcels leave slower ones behind. At the subsolar bow shock the original equally spaced time series is distorted into a time sequence with variable time delay between samples. This sequence is interpolated to the original grid of one-minute samples. If the normal is poorly determined (the eigenvalues are nearly equal) or the normal is close to orthogonal to the Earth-Sun line, the normal cannot be defined or is meaningless. In these cases the time delay is interpolated from adjacent values.
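A minimal sketch of the minimum variance step and the delay calculation described above is given below. The 14 RE subsolar bow shock standoff, the eigenvalue-ratio threshold, and the coordinate convention are illustrative assumptions; the full Modified Minimum Variance Method includes further bookkeeping (removal of overtaken parcels, interpolation of bad normals) that is omitted here.

```python
import numpy as np

RE = 6371.0  # km

def mvab_normal(B):
    """Minimum variance step sketched from the text.

    B is an (N, 3) window of one-minute IMF samples. The field is
    projected perpendicular to the window-mean field and the direction
    of minimum variance of that projection is taken as the normal to
    discontinuities in the window.
    """
    b0 = B.mean(axis=0)
    b0_hat = b0 / np.linalg.norm(b0)
    B_perp = B - np.outer(B @ b0_hat, b0_hat)   # remove mean-field component
    w, v = np.linalg.eigh(np.cov(B_perp, rowvar=False))  # ascending eigenvalues
    # w[0] ~ 0 lies along b0 by construction; the next eigenvector is the
    # candidate normal. Flag near-degenerate (poorly determined) normals;
    # the ratio threshold 2.0 is an illustrative choice.
    n = v[:, 1]
    well_determined = w[2] / max(w[1], 1e-12) > 2.0
    return n, well_determined

def delay_to_bow_shock(r_sc, v_sw, n, x_bs=14.0 * RE):
    """Convection delay (s) of a phase front with normal n from the
    spacecraft at r_sc (km) to an assumed subsolar bow shock on the
    Earth-Sun line. When v_sw is nearly tangential to n the delay is
    meaningless, mirroring the flagged cases described in the text."""
    r_target = np.array([x_bs, 0.0, 0.0])
    return np.dot(r_sc - r_target, n) / np.dot(v_sw, n)
```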
A combined one-minute solar wind and IMF dataset with properties somewhat similar to ours is available from the NASA National Space Science Data Center at http://cdaweb.gsfc.nasa.gov/istppublic/.
Unfortunately we were not able to use this data set for our analysis because of the high density of flags in the output. These data have been propagated with very conservative constraints on acceptable output. In addition to the eigenvalue and normal direction flags, the data are flagged whenever any parcel overtakes another parcel. Because of these constraints it is rare to have a flag-free interval longer than the duration of an Es-to-AL filter. We note, however, that this characteristic of the NSSDC dataset is not a problem in the generation of the dynamic cumulative distribution functions discussed below.
Linear prediction filters
In this work we utilize linear prediction filters (Wiener, 1942) to study the relation between the solar wind electric field and the AL index. Prediction filters represent the most general linear relation between two time series. In an ordinary linear regression the output of a system at a given time is represented as the sum of a constant and a fixed multiple of the input at that time. A finite impulse response filter differs only in the assumption that the output at one time is given by the sum of multiples of the input at that time and earlier times. Since a specific previous output depends on previous inputs, it is also possible to represent the system by a sum of multiples of previous outputs and previous inputs. This latter representation is generally more compact, having fewer multiplicative coefficients. In the first case the filter is called either a moving average (MA) or finite impulse response (FIR) filter. In the latter case it is called an autoregressive moving average (ARMA) or infinite impulse response (IIR) filter. For simplicity we use moving average filters in this work.
The phrase "impulse response" means that a plot of the filter versus time lag is the output that would be generated when the system is stimulated by a single pulse of unit amplitude.For example, we will show that the impulse response relating the rectified solar wind electric field to the AL index is a miniature negative bay with peak amplitude ∼1.5 nT/(mV/m) and duration of ∼2.5 h.(In our figures we have inverted this response to have positive area.)The Fourier transform of the impulse response is the system transfer function or frequency response.For the lower auroral electrojet index (AL) the transfer function is a low pass filter.
The mathematical representation of an ARMA filter is shown in Eq. (1):

  a_1 O_n + a_2 O_{n-1} + ... + a_{Na} O_{n-Na+1} = b_1 I_n + b_2 I_{n-1} + ... + b_{Nb} I_{n-Nb+1}    (1)

To obtain a moving average filter the autoregressive coefficients a_i are all set to zero except a_1, which is set to 1.0. In this case the output at the n-th sample point (time) is a sum of multiples (b_i) of inputs I_{n-i+1} at earlier time lags (i). (This is a convolution.) The set of equations obtained by allowing the index n to take on many successive values can be represented as illustrated in Eq. (2):

  [ O_n     ]   [ I_n       I_{n-1}     ...  I_{n-Nb+1}   ] [ b_1  ]
  [ O_{n+1} ] = [ I_{n+1}   I_n         ...  I_{n-Nb+2}   ] [ b_2  ]
  [   ...   ]   [   ...       ...       ...      ...      ] [ ...  ]
  [ O_{n+N} ]   [ I_{n+N}   I_{n+N-1}   ...  I_{n+N-Nb+1} ] [ b_Nb ]    (2)
The left hand side of this set of equations is a column vector of length N+1 corresponding to the segment of the output time series beginning with sample n and ending with sample n+N. The right hand side is the result of a matrix multiplication between a rectangular matrix (X) with N+1 rows and Nb columns and a column vector of Nb unknown filter coefficients. This set of equations has the simple matrix representation shown in Eq. (3):

  O = X b    (3)

The first column of the matrix X is the input time series corresponding to the output series on the left hand side. The second column is the same series shifted down by one sample, the third column is the input shifted down two samples, and so on until the last column, which is the input shifted down by Nb−1 samples. This matrix is often called the "design matrix". To determine all Nb coefficients the design matrix must have at least Nb rows. Usually there are many more rows than coefficients (N > Nb) and the coefficients are overdetermined by the data. The least square solution for the coefficient vector b is obtained by multiplying both sides by the transpose of the design matrix and then inverting the resulting square matrix, as shown in Eq. (4):

  b = (X^T X)^{-1} X^T O    (4)
It can be shown that the product matrix X^T X has rows and columns that represent the autocorrelation function of the input at various lags. Similarly, the product X^T O is the cross correlation between input and output as a function of lag.
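A minimal sketch of the design matrix construction and the least square solution of Eq. (4), with synthetic stand-ins for the Es input and AL output:

```python
import numpy as np

def design_matrix(inp, n_b):
    """Design matrix of Eqs. (2)-(3): column j is the input series
    shifted down by j samples, so row n holds inp[n], inp[n-1], ..."""
    N = len(inp)
    X = np.zeros((N, n_b))
    for j in range(n_b):
        X[j:, j] = inp[: N - j]
    return X

# Toy data standing in for Es (input) and AL (output).
rng = np.random.default_rng(0)
inp = rng.standard_normal(5000)
true_b = np.array([0.1, 0.5, 0.9, 0.6, 0.3])   # toy impulse response
out = design_matrix(inp, true_b.size) @ true_b
out += 0.1 * rng.standard_normal(out.size)

# Least square solution of Eq. (4); X^T X is the input autocorrelation
# matrix and X^T out the input-output cross correlation.
X = design_matrix(inp, true_b.size)
b_ls = np.linalg.solve(X.T @ X, X.T @ out)
```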
Often this solution vector is too noisy to be useful and a different solution method is required. An alternative solution technique is called singular value decomposition or SVD (Press et al., 1986). It is shown in matrix algebra that any real rectangular matrix can be represented as the product of three special matrices, X = U S V^T. Matrix U is column orthogonal, matrix S is diagonal with elements sorted in descending order, and matrix V is fully orthogonal. We use these facts to solve for the coefficient vector, obtaining Eq. (5):

  b = V S^{-1} U^T O    (5)

The reciprocal of the diagonal matrix, S^{-1}, is also diagonal, with elements in ascending order. Very small elements along the original diagonal become very large elements in the reciprocal matrix; these large elements are the source of noise in the least square solution. The secret of SVD is to set to zero all elements beyond a certain singular value in the reciprocal matrix. This eliminates the terms causing noise in the solution.
This procedure was modified to allow for the possibility of acausal filters. An acausal filter is one in which there is an output before the input is applied. This situation arises in our calculations if the solar wind has not been properly propagated from the upstream measurement point to the magnetopause. The required modification is to time shift the input data with both positive and negative lags. In our case we typically used 60 min before the expected output and 180 min afterwards, for a total of 240 lags.
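The acausal variant only changes how the design matrix columns are shifted; a sketch:

```python
import numpy as np

def design_matrix_acausal(inp, n_pre=60, n_post=180):
    """Design matrix with negative as well as positive lags (here 240
    columns, as in the text). Columns for negative lags hold future
    input, so an imperfect propagation shows up as an acausal response."""
    N = len(inp)
    lags = list(range(-n_pre, n_post))
    X = np.zeros((N, len(lags)))
    for k, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, k] = inp[: N - lag]
        else:
            X[: N + lag, k] = inp[-lag:]
    return X
```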
Ensemble matrices
To determine the impulse response for a particular state of the solar wind we must select a number of examples of the particular state and create an average filter for that state. We then systematically vary some parameter such as solar wind dynamic pressure and for each new range of values calculate a new filter from an ensemble of events satisfying these conditions. In this work we have used the Earth passage of a corotating interaction region (CIR) to establish a sequence of states and have calculated the filter relating the solar wind electric field to AL as a function of time relative to the stream interface at the center of the CIR. We then examine this sequence of filters and determine whether there is an observable change in the properties of the filter with epoch time.
The underlying assumption is that all CIRs create similar solar wind states at the same location in a CIR.
We begin by identifying all interfaces between low-speed and high-speed solar wind in the years 1995-2006 (McPherron et al., 2008a, b). For each interface we selected a 10-day interval centered on the interface and extracted a segment of solar wind or index data from our original database. The intervals were placed in the rows of an ensemble matrix. Spline interpolation was used to eliminate short gaps (ten minutes or less) in the original data. For the VBs-to-AL filter we constructed matrices of solar wind speed, interplanetary magnetic field (IMF) Bz in gsm coordinates, and the AL index. The matrices for solar wind speed (V) and Bz were multiplied element by element to construct a new ensemble of VBs (rectified V Bz). The matrices VBs and AL were then used as the input and output of the magnetospheric system. Note that southward Bz (Bz < 0) produces negative Es (as defined here), so that the impulse response between Es and AL is a mostly positive curve.
To calculate the ensemble average impulse response we utilize 24-h sequences of data centered at 00:00 UT on each day around the stream interface. The design matrix for a given day of data and specific stream interface was constructed from the appropriate row in the input ensemble array. For day −5 (the left end of a row) the time shifting required to construct the design matrix introduces flags (missing data) from outside the interval. The remaining days simply shift data from the preceding day into the matrix. For acausal filters flags are also shifted into the matrix from the right end of the arrays. In some cases, even after interpolation, there are missing data flags at arbitrary locations in the original data. These flags are shifted downward, introducing flags in succeeding rows of the design matrix for that day. At the same time we construct the vector of system output (AL). As a final step we horizontally concatenate the output vector and the design matrix and search each row of this concatenation for missing data flags. Any row containing one or more flags is eliminated from the final output vector and design matrix. In a few cases there were too few rows remaining to calculate a prediction filter.
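A sketch of the ensemble construction, assuming the raw data are held as one-minute pandas time series; the function and variable names are illustrative:

```python
import numpy as np
import pandas as pd

def ensemble_array(series, interface_times, half_width="5D", freq="1min"):
    """Stack ten-day windows of a one-minute series, centered on each
    stream interface, into rows of an ensemble array. Gaps of ten
    minutes or less are interpolated; longer gaps remain NaN flags."""
    rows = []
    for t0 in pd.to_datetime(interface_times):
        idx = pd.date_range(t0 - pd.Timedelta(half_width),
                            t0 + pd.Timedelta(half_width),
                            freq=freq, inclusive="left")
        seg = series.reindex(idx).interpolate(limit=10, limit_area="inside")
        rows.append(seg.to_numpy())
    return np.vstack(rows)

# Es ensemble from the element-by-element product of the speed and
# rectified Bz ensembles (Bz in gsm; southward Bz < 0):
# Es_arr = V_arr * np.where(Bz_arr < 0.0, -Bz_arr, 0.0)
```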
At this point the ensemble average prediction filter was calculated by two different methods. In the first we used the data available for a given day relative to a stream interface and SVD analysis to calculate a daily filter. These filters were then averaged over all interfaces in a year, producing an ensemble average filter for the year. Filters calculated in this manner were highly variable from day to day, but generally represent the data from which they are calculated with high prediction efficiency. Prediction efficiency is defined as shown in Eq. (6), where O is the observed output, P is the prediction, and the operator "Var" means variance, i.e. the mean square deviation of the argument from its mean:

  PE = 1 − Var(O − P) / Var(O)    (6)
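Eq. (6) translates directly into code; a minimal sketch that also drops flagged samples before forming the variances:

```python
import numpy as np

def prediction_efficiency(obs, pred):
    """Eq. (6): fraction of observed variance explained by the
    prediction; 1.0 is perfect, 0.0 is no better than the mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ok = ~(np.isnan(obs) | np.isnan(pred))   # drop flagged samples
    return 1.0 - np.var(obs[ok] - pred[ok]) / np.var(obs[ok])
```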
In the second method used for this study we vertically concatenate the design matrices for each day into one long matrix for an entire year. SVD was then used to invert this matrix. Typically this matrix had about 365 000 rows and 240 columns. The filter obtained in this manner is the optimum linear representation of the relation between input and output data for CIR events during a given year. Since there are many different events in a year, the prediction efficiency determined for an entire year is lower than the average prediction efficiency of the daily filters calculated in the first method. Typical values of prediction efficiency are about 45% of the variance with an annual average filter. Single day filters typically represent 65% of the variance.
Characterization of filters
In an ordinary linear regression a single number that characterizes the relation between the input and output is the multiplicative constant. In a linear filter the equivalent quantity is the area under the impulse response function. This can be clearly seen by considering an example of a constant input of unit amplitude to the system. After the transient response time of the filter (the length of the filter) the output stabilizes at a value given by the sum of the products of the input magnitude and the filter coefficients. Because the input is constant it can be taken outside the sum in Eq. (1), leaving the sum of the filter coefficients. In this work we use the area under the response function to quantify the response of the magnetosphere. If the coefficients change in a systematic way with the state of the system we conclude that the system is nonlinear. For stream interfaces we use time relative to a stream interface (epoch time) as the state variable.
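In code this characterization is just the sum of the filter coefficients; continuing the earlier sketch:

```python
import numpy as np

# The steady-state output for a constant unit input equals the sum of
# the filter coefficients, so the area under the impulse response plays
# the role of the regression constant. b_svd is the filter from the
# SVD sketch above, with a one-minute lag spacing.
area = np.sum(b_svd)
```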
Example data and filter
An illustration of the relation between the rectified solar wind dawn-dusk electric field and the AL index is presented in Fig. 2 for several hours in January 1999. The electric field in the top red trace is zero when IMF $B_z$ is positive. During these times there is no variation in the output. Note that in this analysis we removed mean values for a month prior to calculation of the prediction filters and then added the means back, so slight differences in the baseline of the observed and predicted values may be present on a given day. When $B_z$ is negative, $E_s = -V B_s$ takes on positive values and AL responds after a short time. The predicted response follows the general pattern of the observations but is much smoother, does not contain the extreme variations in AL, and often shows timing differences relative to the observations. These differences cause the prediction efficiency to be significantly less than 1.0. The heavy dashed line (black) in the upper panel is the result of fitting an offset Rayleigh function to the impulse response. This function is given by the equation $h(t) = A_0 + A_1\,(t/\tau)\,\exp[-t^2/(2\tau^2)]$. In this example the three model parameters have the values $[A_0, A_1, \tau] = [-0.0699, 0.0629, 41.9291]$. The amplitude constants have units of nT/(mV/m) while the time constant is in minutes. For a Rayleigh function the peak response occurs at a time equal to the time constant. After the peak the response dies away slowly, passing through zero at a lag of 120 min. The response beyond this lag is not usually present in monthly filters and is probably an artifact of detrending by only subtracting the mean. This filter was characterized by the area under the filter between zero and 180 min. We also tried to fit this response function with the Weimer (1994) function discussed in the introduction. We found that this function fits the slope of the rise of the filter somewhat better, but it overshoots the peak value by almost a factor of 2. During the decay this model compensates for the overshoot by being less than required to fit the prediction filter.
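A small sketch of such a fit with SciPy, assuming the offset Rayleigh form written above (the normalization by τ and the initial guesses are our choices, loosely based on the fitted values quoted in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def offset_rayleigh(t, a0, a1, tau):
    # Offset Rayleigh response; this form peaks at t = tau.
    return a0 + a1 * (t / tau) * np.exp(-t**2 / (2.0 * tau**2))

# lags: filter lags in minutes; h: impulse response coefficients
params, _ = curve_fit(offset_rayleigh, lags, h, p0=[-0.07, 0.06, 42.0])
```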
The transfer function (frequency response) is defined as the Fourier transform of the impulse response. We have calculated this for each of the filters shown in the top panel, except that we have not plotted the function for the least squares solution, which is quite noisy. The heavy dashed black line is the transform of the fitted Rayleigh function. The heavy red line is the solution retaining the largest number of singular values (30). Other traces shown by thin blue lines correspond to the use of fewer singular values in the matrix inversion. It is evident how the use of singular value decomposition suppresses noise (and high-frequency components) in the impulse response function.
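The transform itself is a one-liner; a sketch for 1-min lag spacing (the variable names are ours):

```python
import numpy as np

dt = 60.0                              # lag spacing in seconds (1-min data)
H = np.fft.rfft(h)                     # transfer function of filter h
freqs = np.fft.rfftfreq(len(h), d=dt)  # frequency axis in Hz
gain, phase = np.abs(H), np.unwrap(np.angle(H))
```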
The transfer function for the $VB_s$–AL impulse response function is a low-pass filter. Two arrows in the figure mark important cutoffs. The first, at low frequency, is physically meaningful and represents the behavior of the magnetosphere: electric field fluctuations with periods shorter than ∼4 h are attenuated while those with longer periods are not. The second cutoff, at higher frequency, depends on the number of singular values retained in the solution. The cutoff at ∼0.4 h corresponds to 30 singular values. Solutions using fewer singular values have lower frequency cutoffs.
We also examined the phase response of this filter (not plotted). From zero to 0.2 mHz the response is linear and given by the function $\varphi = af$, where $a = -1.6148 \times 10^4$ radians/Hz. This translates to a uniform time delay of 42 min for these low-frequency signals.
Properties of interplanetary electric field and AL index relative to CIR stream interfaces
Our emphasis in this work is an investigation of whether prediction filters for magnetic indices change systematically relative to a CIR stream interface. In work reported elsewhere (McPherron et al., 2008a) we determined 394 stream interfaces in the interval 1995–2007. The behavior of the dawn-dusk component of the gsm interplanetary electric field $E_y = VB_z$ is shown in Fig. 4. During quiet times several days before the interface, roughly half of all electric field values have magnitudes less than 1 mV/m. About 12 h before the interface the electric field strength begins to increase, reaching a value twice as high at the interface. Subsequently it decays slowly, reaching typical background strength after three days.
The AL index mimics its driver $E_y$, as can be seen in Fig. 5. AL begins to decrease 12–24 h before the interface and reaches a minimum value about 8 h after. The minimum median value of AL is only −150 nT; it is more negative than −300 nT less than 25% of the time. Note that activity is elevated for more than five days after the interface even though $E_y$ reaches background values in only three days.
$E_s$–AL filters as a function of CIR epoch
Prediction filters relating the rectified solar wind electric field to the AL index are plotted in Fig. 6. Each trace in this figure represents an ensemble average for a particular 24-h period around the CIR stream interface. Each filter is computed as an average of that 24-h period preceding or succeeding the zero epoch time at the stream interface, averaged over events observed during the years 1995–2007. However, the 13-year interval does not include 1996 because no AE indices are available for this year. Ensemble average filters were determined first for a given 24-h epoch and all data for events observed during a given year, and then filters from successive years were averaged. All filters have essentially the same shape, starting from zero at zero lag, rising to a peak at ∼20 min, decaying slowly to zero at 150 min, and then slightly overshooting with negative values. The filters are virtually identical except for the filter highlighted with red error bars, which shows the filter for the zero epoch time during the day of the stream interface crossing. The error bars on the interface filter are the standard error of the mean. The variations of the filter properties with epoch are summarized in Fig. 7. The top panel shows that the 3-day interval beginning two days before and including the interface exhibits the highest predictability, of about 45%. The day after the interface, during the high-speed stream, this value drops to its lowest value of 35% and then rises slowly to a value of 42%, somewhat less than the values before the interface. The bottom panel shows that the area under the interface filter (102) is significantly less than the values in the days before and after (125 and 112). The values of the filter area adjacent to the interface are slightly lower than those two days and more before or after. Even though the data are more predictable on the day of the interface than on any other day, the area under the filter is smaller. This suggests that the solar wind coupling becomes less efficient as a CIR passes the Earth. Note that the bottom panel indicates that the filters after the interface have a more variable area than they do before, a result consistent with the decreased prediction efficiency after the interface.
$E_s$–AL filters as a function of solar cycle
Since we have determined filters for each epoch of the CIR and for each year in the solar cycle, we can also display average filters versus phase of the solar cycle. Figure 8 displays prediction filters averaged over all CIR epoch times in each year. Each trace in the figure shows the average filter for a given year. Two traces have been highlighted and annotated. The red trace with the largest area (163) is for the year 1995, late in the declining phase of solar cycle #22. The black trace with the smallest area (88) is for the year 2001, close to the maximum of the solar cycle.
The variations of the filter properties, prediction efficiency and area, with solar cycle are presented in Fig. 9. As shown in the top panel, the annual filters typically predict about 45% of the variance, but in 2001 this dropped to a low of 35%. The data suggest a solar cycle variation in prediction efficiency, with the highest efficiency in the declining phase of the solar cycle and the lowest near solar maximum. The bottom panel shows the area of the impulse response function versus phase of the solar cycle. The traces suggest a solar cycle effect, with the strongest coupling in the declining phase (1995 and 2007) and the lowest around solar maximum in 2001.
Discussion and conclusions
Previous studies of the relation between the solar wind electric field and the AL index have used linear prediction filters, local linear filters, and neural networks. In general the nonlinear models obtain the highest prediction efficiency and the linear prediction filters the lowest. Clearly the relation between the two quantities is not linear. Two types of local linear filters have been utilized to approximate the nonlinear behavior. The most general defines the state of the magnetospheric system by a state vector constructed from the input and output time series. Filters are created from an ensemble of similar previous states at each time in the series being represented by the filter. In this case the prediction filter is continuously varying and adapts to represent the nonlinear system. A second type of filter uses the data immediately prior to the prediction point to define a filter to advance the prediction. Our work uses a variant of these methods. We assume that the solar wind establishes the state of the magnetosphere and that a linear filter can represent the input-output relation for each state. In our case we assumed that a corotating interaction region (CIR) establishes different magnetospheric states as a function of time relative to the stream interface. We then determined an average filter for each day of a 10-day period centered at the stream interface. The filters were determined using an ensemble of CIRs recorded during a given year. The collection of filters allows us to investigate whether there is solar cycle variation in solar wind coupling, or a variation with time within a CIR.
Our results show that there are significant differences in the $E_s$–AL coupling function, both with epoch within the CIR and with solar cycle. First, the prediction efficiency is reduced in the high-speed stream after the interface as compared with the values before and at the interface. Second, the area under the impulse response is lowest at the stream interface, slightly stronger the day before and after, and strongest on other epoch days. We find that there is little change in prediction efficiency with the solar cycle, averaging around 45% except for the years 2001–2002. The area under the response function is lowest during the rising phase of the solar cycle and again just after maximum. The largest areas are found near the end of the declining phases in 1995 and 2007.

The physical interpretation of these results is not obvious. As shown in Fig. 1 we know that the state of the solar wind changes during a CIR. Two days before a stream interface the density begins to increase as the solar wind speed reaches a minimum value. Twelve hours before the interface the solar wind speed begins to increase, reaching a maximum about 1.5 days after the interface. Thereafter the speed falls slowly. The IMF magnitude begins to increase one day before the interface, peaks at the interface, and then decreases over a period of about three days. The temperature of the solar wind is very low 12 h before the interface, a maximum at the interface, and then decays slowly for at least five days after the interface.
These properties of the solar wind affect various derived quantities important to magnetic reconnection. The dawn-dusk electric field is very asymmetric about the interface, as illustrated in Fig. 4. The combination of increased speed and strong magnetic field causes $E_y$ to maximize at the interface and to decay slowly as the field strength and speed decrease in the high-speed stream. The density increase before the interface and the speed increase after cause the dynamic pressure to peak at the interface. Plasma beta decreases rapidly from 2.5, beginning 12 h before the interface, to 1.5 for many days after the interface. The Alfvén Mach number of the solar wind behaves in a somewhat similar manner, dropping from a value of 10 to about 7 at the interface. It recovers more quickly, returning to normal values in about two days.
Our analysis procedure does not allow us to separate various possible causes of a change in coupling. Is the observed decrease in coupling around the CIR stream interface a result of a smaller magnetosphere due to the enhanced dynamic pressure? Alternatively, does the reduced Alfvén Mach number in the solar wind alter the efficiency of reconnection? The superposed epoch analysis reported by Pulkkinen et al. (2007) indicates that either high dynamic pressure or strong driving by $E_y$ can reduce the coupling efficiency. To answer these questions we would need to use a different protocol. In particular, we would need to create an ensemble of events in which each row is characterized by reasonably constant values of some parameter. We could then calculate impulse response functions for rows selected for a given range of the possible control parameter. Conceivably we could bin according to two different parameters and still have enough data to define the impulse response. To do this we must create ensembles with rows somewhat longer than the response functions, i.e. longer than four hours. This procedure will be difficult and time consuming. The fact that we find significant differences in coupling near the stream interface gives us confidence that the binning procedure would produce interesting results.
We also found apparent variations in $E_y$–AL coupling with solar cycle. The strongest coupling seems to occur in the declining phase, in association with recurrent high-speed streams. The weakest coupling appears to be around solar maximum. One possible explanation is that coronal mass ejections (CMEs) at solar maximum contain solar wind plasma with properties quite different from those found in CIRs, and these properties affect the size of the magnetosphere and the reconnection process. Magnetic field strength in CMEs is often higher, and density and temperature lower, than in CIRs. Another possibility is that the solar wind propagation algorithm works better during the moderate conditions associated with CIRs, and the decreased filter area and prediction efficiency near solar maximum are caused by a poor representation of the solar wind arriving at the magnetopause. A third possibility is that the AL index is of lower quality around the last solar maximum (1997–2002). This is a time when magnetometer data acquisition from the Siberian sector was very poor. Stations missing from the AE network will reduce the area of the impulse response. Data spikes associated with poor transmission and recording will decrease the prediction efficiency.
Unfortunately, no continuous high-quality solar wind data are available before 1995, and insufficient good AE index data are available until the last few years. Thus we cannot yet reach a definitive conclusion on whether the solar cycle changes we observe are real effects of the solar cycle, artifacts of the propagation algorithm, or artifacts of the index generation. In future work we intend to apply this procedure to the Thule Polar Cap (PC) index, which is based on one station and has been continuously calculated since 1975 with high-quality data and a consistent procedure. If similar trends are apparent in these data, we will be more confident that our AL results are physically meaningful.
The application of prediction filters or neural networks to the study of solar wind coupling is based on the assumption that there is a deterministic relation between the input and output variables, in our case $VB_s$ and the AL index. There is no reason to expect that all of the AL variance is directly controlled by $VB_s$. Some of the variance is likely to be caused by the viscous interaction, which presumably does not depend on the electric field. Some may be caused by changes in dynamic pressure. Also, internal dynamics of the magnetosphere, such as substorms in the tail and electron precipitation into the auroral oval, will cause changes in the westward electrojet that may have only a probabilistic relation to the solar wind electric field. As explained in the description of the analysis procedure, linear filters depend on the existence of a fixed correlation between the input and output variables. Only that portion of AL that is correlated with $VB_s$ is predictable with our analysis. It is likely that multi-channel prediction filters would predict somewhat more of the AL variance than a single-channel filter. The main result obtained in this work is that the coupling function relating $VB_s$ and AL changes significantly with the state of the solar wind, as defined by time relative to the interface between low-speed and high-speed solar wind streams. This coupling is weakest when the dynamic pressure and electric field are strongest inside the CIR. A secondary result suggests there is also a solar cycle variation in coupling that is consistent with the result obtained for CIRs, i.e. the coupling is weakest near solar maximum, when both dynamic pressure and electric field reach larger values than they do in CIRs. These results show that the $VB_s$–AL coupling function is nonlinear, as it changes significantly with the state of the solar wind. Further studies of the role of other variables, such as dynamic pressure, will provide additional insight into the mechanisms of solar wind coupling to the westward electrojet.
Fig. 1. Results of superposed epoch analysis of solar wind and magnetospheric parameters, plotted versus time relative to a stream interface at the center of a corotating interaction region (CIR). The five panels on the right respectively show solar wind dynamic pressure, total of thermal and magnetic pressure, gsm $E_y$, plasma beta, and plasma Mach number. The five panels on the left show the AE index, the Sym-H index, the Asym-H index, the 3-h ap index, and the PC index. Heavy black lines at the edges of the shaded regions define the range within which 50% of the data lie. The heavy red line at the center of the shaded regions defines the parameter median at each epoch.
Fig. 2. An illustration of the relation between time series of the solar wind electric field ($VB_s$) and the AL index for two days in January 1999. The top trace (red) is the rectified electric field. The bottom trace, denoted by a thin blue line, is the observed AL index for the same two days. The thicker black line is the output of a linear filter representing the relation between these quantities.
Fig. 3. The $VB_s$–AL prediction filter for the month of January 1999 and its frequency response, plotted in the upper and lower panels respectively. The heavy dashed lines in the two panels show the fit of a Rayleigh function to the filter and its transform (see text for details).
Fig. 4. A dynamic cumulative distribution function (cdf) of the gsm IMF dawn-dusk electric field ($VB_z$) for nine years of data measured at the ACE spacecraft. Heavy lines are quartiles of the distribution and thin lines are deciles. The vertical dashed line at zero epoch time is the time a stream interface at the center of a corotating interaction region (CIR) passed the spacecraft. We have reversed the sign of $E_y$ to correspond to $B_z$.
Fig. 5. A dynamic cdf of the AL index relative to the stream interface (vertical dashed line) within a CIR. Heavy lines are the quartiles as a function of epoch time. Note that the minimum AL index is more negative than −300 nT about 25% of the time.
Fig. 6. Ensemble average linear prediction filters relating $VB_s$ to the AL index during the interval 1995–2007. Each trace shows a different day relative to a stream interface. The heavy red line shows the error of the mean of all filters on the filter spanning the CIR stream interface. The filter has been inverted for display purposes.
Fig. 7. Results for 13 annual ensemble average filters, plotted versus time relative to CIR stream interfaces in the years 1995–2007. Blue lines show the annual averages at each daily epoch, while thick red lines are the means of all annual averages. Error bars are the standard error of the mean at each time. Note that the day-long intervals used in the analysis were centered on the beginnings of days.
Fig. 8. Ensemble average linear prediction filters relating $VB_s$ to the AL index during the interval 1995–2007. Each trace is an average over all CIR epochs for a full year. The heavy red line at the top is for the declining phase of the solar cycle, for the year 1995. The heavy black line at the bottom is from solar maximum in 2001.
Fig. 9. The prediction efficiency and filter area of annual average filters for each day-long epoch relative to the CIR stream interface, shown by blue circles. The heavy red lines with error bars show the average over all epochs and the standard error of the mean. No AE index data are available for 1996.
State Representation Learning from Demonstration
Robots could learn their own state and world representation from perception and experience, without supervision. This desirable goal is the main focus of our field of interest, state representation learning (SRL). Indeed, a compact representation of such a state is beneficial in helping robots grasp their environment for interaction. The properties of this representation have a strong impact on the adaptive capability of the agent. In this article we present an approach based on imitation learning. The idea is to train several policies that share the same representation to reproduce various demonstrations. To do so, we use a multi-head neural network with a shared state representation feeding task-specific agents. If the demonstrations are diverse, the trained representation will eventually contain the information necessary for all tasks, while discarding irrelevant information. As such, it will potentially become a compact state representation useful for new tasks. We call this approach SRLfD (State Representation Learning from Demonstration). Our experiments confirm that when a controller takes SRLfD-based representations as input, it can achieve better performance than with other representation strategies and promotes more efficient reinforcement learning (RL) than an end-to-end RL strategy.
Introduction
Recent reinforcement learning (RL) achievements can be attributed to a combination of (i) a dramatic increase in computational power and (ii) the remarkable rise of deep neural networks in many machine learning fields, including robotics, which take advantage of the simple idea that training with quantity and diversity helps. The core idea of this work is to leverage task-agnostic knowledge learned from several task-specific agents performing various instances of a task.
Learning is supposed to provide animals and robots with the ability to adapt to their environment. RL algorithms define a theoretical framework that is efficient on robots [Kober et al., 2013] and can explain observed animal behaviors [Schultz et al., 1997]. These algorithms build policies that associate an action to a state to maximize a reward. The state determines what an agent knows about itself and its environment. A large state space (raw sensor values, for instance) may contain the relevant information but would require too large an exploration to build an efficient policy. Well-thought feature engineering can often solve this issue and make the difference between the failure or success of a learning process. In their review of representation learning, Bengio et al. [2013] formulate the hypothesis that the most relevant pieces of information contained in the data can be more or less entangled and hidden in different representations. If a representation is adequate, functions that map inputs to desired outputs are somewhat less complex and thus easier to construct via learning. However, a frequent issue is that these adequate representations may be task-specific and difficult to design, and this is true in particular when the raw data consists of images, i.e. 2D arrays of pixels. One of the objectives of deep learning methods is to automatize feature engineering to make learning algorithms effective even on raw data. By composing multiple nonlinear transformations, the neural networks on which these methods rely are capable of progressively creating more abstract and useful representations of the data in their successive layers.

Figure 1. (a) Preliminary phase: for K different tasks, we assume we have access to oracle policies $(\pi_k)$ that solve each task and compute their outputs from an "unknown" state representation. (b) Pretraining phase: learning of one shared representation function $\phi$ by imitation learning of K specific heads $\psi_k$ observing $\pi_k$ from high-dimensional observations. Each head $\psi_k$ defines a sub-network that contains the parameters $\theta_\phi$ of $\phi$ and the parameters $\theta_{\psi_k}$ of $\psi_k$. The set of all network parameters is $\theta = \{\theta_\phi, \theta_{\psi_1}, \ldots, \theta_{\psi_K}\}$. (c) Transfer learning phase: the pretrained network $\phi$ provides representations to learn an unseen decision-making task $\psi_{\text{new}}$.
The intuition behind our work is that many tasks operated in the same environment share some common knowledge about that environment. This is why learning all these tasks with a shared representation at the same time is beneficial. The literature on imitation learning [Pastor et al., 2009; Kober et al., 2012] has shown that demonstrations can be very valuable for learning new policies. To the best of our knowledge, no previous work has focused on constructing reusable state representations from raw inputs solely from demonstrations; we therefore investigate the potential of this approach for SRL.
In this paper, we are interested in solving continuous control tasks via RL or supervised learning, using state estimates as inputs, without having access to any other sensor, which means in particular that the robot configuration, which we will call the ground truth representation, is unknown. We assume that at all times the consecutive high-dimensional observations $(o_{t-1}, o_t)$ contain enough information to know the ground truth state $q_t$, and that the controller/predictor only needs to rely on this representation to choose actions. Intuitively, $q_t$ would probably be a much better input for an RL algorithm than the raw images, but without prior knowledge, it is not easy to get $q_t$ from $(o_{t-1}, o_t)$. In robotics, SRL [Lesort et al., 2018] aims at constructing a mapping from high-dimensional observations to lower-dimensional representations which, similarly to $q_t$, can be advantageously used instead of $(o_{t-1}, o_t)$ to form the inputs of a policy.
Our proposed experimental setup consists of three different phases: 1. Preliminary phase (Fig. 1(a)): we have K controllers called oracle policies $\pi_k$, each solving a different task. For example, we could define them in laboratory conditions with better sensors (e.g. motion capture), the goal being to reproduce them with a different perception (e.g. images); in this setting, building a representation extracted from the raw inputs makes sense. For the sake of the experiments, we used simple synthetic tasks.
2. Pretraining phase (Fig. 1(b)): we derive a state representation that can be relied on to reproduce any of these oracle policies. We do so via imitation learning on a multi-head neural network consisting of a first part that outputs a common state representation $s_t = \big(\phi(o_{t-1}), \phi(o_t)\big)$, used as input to K heads $\psi_k$ trained to predict the actions $a^k_t$ executed by the oracle policies $\pi_k$ from the previous phase.
3. Transfer learning phase (Fig. 1(c)): we use the previously trained representation $s_t$ as input to a new learning process $\psi_{\text{new}}$ in the same environment.
This method, which we call SRLfD (State Representation Learning from Demonstration), is presented in more detail in Section 3, after an overview of the existing related work in Section 2. We show that using SRLfD-learned representations instead of raw images can significantly accelerate RL (using the popular SAC algorithm [Haarnoja et al., 2018]). When the state representation is chosen to be low-dimensional, the speed-up brought by our method is greater than the one resulting from state representations obtained with deep autoencoders or with principal component analysis (PCA).
Related Work
SRL for control is the idea of extracting from the sensory stream the information that is relevant to control the robot and its environment and representing it in a way that is suited to drive robot actions. It has been the subject of a lot of recent attention [Lesort et al., 2018]. It was proposed as a way to overcome the curse of dimensionality, to speed up and improve RL, to achieve transfer learning, to ignore distractors, and to make artificial autonomous agents more transparent and explainable [de Bruin et al., 2018; Lesort et al., 2018].
Since the curse of dimensionality is a major concern, many state representation techniques are based on dimension reduction [Kober et al., 2013; Bengio et al., 2013] and traditional unsupervised learning techniques such as principal component analysis (PCA) [Curran et al., 2015] or its nonlinear version, the autoencoder [Hinton and Salakhutdinov, 2006]. These techniques compress the observation into a compact latent space, from which it can be reconstructed with minimal error. Further developments led to variational autoencoders (VAEs) [Kingma and Welling, 2014] and their extension, the β-VAE [Higgins et al., 2017], which are powerful generative models able to learn a disentangled representation of the observation data. However, the goal of these methods is to model the observation data; they do not take actions into account, and the representation they learn is optimized to minimize a reconstruction loss, not to extract the information most relevant for control. In particular, their behavior is independent of physical properties or the temporal structure of transitions, and they cannot discriminate distractors.
To overcome this limitation, a common approach to state representation is to couple an autoencoder with a forward model predicting the future state [Watter et al., 2015]. A different approach is to forego observation reconstruction and to learn a representation satisfying some physics-based priors like temporal coherence, causality, and repeatability [Jonschkowski and Brock, 2015] or controllability [Jonschkowski et al., 2017]. These methods have been shown to learn representations able to speed up RL, but this improvement is contingent on the careful choice and weighting of priors suited to the task and environment.
Learning state representations from demonstrations of multiple policies solving different task instances, as we propose, has some similarities with multi-task and transfer learning [Taylor and Stone, 2009]. Multi-task learning aims to learn several similar but distinct tasks simultaneously to accelerate training or improve the generalization performance of the learned policies, while transfer learning strives to exploit the knowledge of how to solve a given task to then improve the learning of a second task. Not all multi-task and transfer learning works rely on explicitly building a common representation, but some do, either by using a shared representation during multiple-task learning [Pinto and Gupta, 2017] or by distilling a generic representation from task-specific features [Rusu et al., 2015]. The common representation can then be used to learn new tasks. However, all these techniques rely on the end-to-end RL approach, which is less sample-efficient than the self-supervised learning approach followed by SRLfD.
From another perspective, the learning-from-demonstration literature typically focuses on learning from a few examples and generalizing from those demonstrations, for example by learning a parameterized policy using control-theoretic methods [Pastor et al., 2009] or RL-based approaches [Kober et al., 2012]. Although these methods typically assume prior knowledge of a compact representation of the robot and environment, some of them learn and generalize directly from visual input [Finn et al., 2017] and do learn a state representation. However, the goal is not to reuse that representation to learn new skills but to produce end-to-end visuomotor policies generalizing the demonstrated behaviors in a given task space. Several works have also proposed using demonstrations to improve regular deep RL techniques [Večerík et al., 2017; Nair et al., 2018], but the goal is mostly to improve exploration in environments with sparse rewards. These works do not directly address the problem of state representation learning.
Demonstrations
Let us clarify the hierarchy of the objects that we manipulate and introduce our notations. This work focuses on simultaneously learning K different tasks sharing a common state representation function $\phi$, with K task-specific heads for decision making $(\psi_1, \psi_2, \ldots, \psi_K)$ (see Fig. 1(b)). For each task k, the algorithm has seen demonstrations in the form of paths $P^k_1, P^k_2, \ldots, P^k_P$ from an initial random position to the goal of task k, generated by running the oracle policy $\pi_k$ obtained in the preliminary phase (see Fig. 1(a)). Specifically, during a path $P^k_p$, the agent is shown demonstrations (or data points) of the form $(o^{k,p}_{t-1}, o^{k,p}_t, a^{k,p}_t)$, from which it can build its own world-specific representation. Here, $o^{k,p}_{t-1}$ and $o^{k,p}_t$ are consecutive high-dimensional observations (a.k.a. measurements), and $a^{k,p}_t$ is a real-valued vector corresponding to the action executed right after the observation $o^{k,p}_t$ was generated.
Imitation Learning from Demonstration
Following the architecture described in Fig. 1(b), we use a state representation neural network $\phi$ that maps high-dimensional observations $o^{k,p}_t$ to a smaller real-valued vector $\phi(o^{k,p}_t)$. This network $\phi$ is applied to consecutive observations $(o^{k,p}_{t-1}, o^{k,p}_t)$ to form the state representation $s^{k,p}_t$, as follows:

$s^{k,p}_t = \big(\phi(o^{k,p}_{t-1}), \phi(o^{k,p}_t)\big)$    (1)

This state representation $s^{k,p}_t$ is sent to the network $\psi_k$, one of the K independent heads of our neural network architecture. $\psi_1, \psi_2, \ldots, \psi_K$ are head networks with similar structure but different parameters, each one corresponding to a task k. Each head has continuous outputs with the same number of dimensions as the action space of the robot. We denote by $\psi_k(s^{k,p}_t)$ the output of the k-th head of the network on the input $(o^{k,p}_{t-1}, o^{k,p}_t)$. We train the global network to imitate all the oracle policies via supervised learning. Specifically, our goal is to minimize the quantities $\big\|\psi_k(s^{k,p}_t) - a^{k,p}_t\big\|_2^2$, which measure how well the oracle policies are imitated. The optimization problem we want to solve is thus the minimization of the following objective function:

$L(\theta) = \sum_{p=1}^{P} \sum_{t} \big\|\psi_k(s^{k,p}_t) - a^{k,p}_t\big\|_2^2$    (2)

for $k \in [\![1, K]\!]$, and where $\theta = \{\theta_\phi, \theta_{\psi_1}, \ldots, \theta_{\psi_K}\}$ as explained in Fig. 1(b). We give equal importance to all oracle policies by uniformly sampling $k \in [\![1, K]\!]$ and performing a training step on $L(\theta)$ to adjust $\theta$. Algorithm 1 describes this procedure.
The network of SRLfD is trained to reproduce the demonstrations, but without direct access to the ground truth representation of the robot. Each imitation can only be successful if the required information about the robot configuration is extracted by the state representation $\big(\phi(o^{k,p}_{t-1}), \phi(o^{k,p}_t)\big)$. However, a single task may not require knowledge of the full robot state. Hence, we cannot be sure that reproducing only one instance of a task would yield a good state representation. By learning a common representation for various instances of tasks, we increase the probability that the learned representation is general and complete. It can then be used as a convenient input for new learning tasks, especially for an RL system.
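As an illustration, here is a minimal PyTorch sketch of this multi-head imitation setup. The class and function names are ours, and the encoder is passed in as an argument rather than fixed to any particular architecture:

```python
import torch
import torch.nn as nn

class MultiHeadImitation(nn.Module):
    # Shared encoder phi feeding K task-specific heads psi_k.
    def __init__(self, encoder, state_dim, action_dim, n_tasks, hidden=256):
        super().__init__()
        self.phi = encoder  # maps one observation to a vector of size state_dim // 2
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, action_dim))
            for _ in range(n_tasks)])

    def forward(self, o_prev, o_curr, k):
        # Eq. (1): the state is the pair of encoded consecutive observations.
        s = torch.cat([self.phi(o_prev), self.phi(o_curr)], dim=-1)
        return self.heads[k](s)

def training_step(model, optimizer, batch, k):
    # One step on the imitation loss of Eq. (2) for a uniformly sampled task k.
    o_prev, o_curr, actions = batch
    loss = ((model(o_prev, o_curr, k) - actions) ** 2).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```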
Goal Reaching
In this section, we study a transfer learning phase (see Fig. 1(c)) corresponding to an RL optimization problem: solving a torque-controlled reaching task with image observations. This is a challenging problem despite the simplicity of the task. Indeed, when high-dimensional observations are mapped to a lower-dimensional space before feeding an RL system, a lot of information is compacted, and valuable information for control may be lost. The purpose is to verify that state representations learned with SRLfD are useful representations for RL algorithms. In this work, we only conduct experimental validations, but a fundamental question that we will not answer is at stake: what constitutes a good representation for state-of-the-art deep RL algorithms? Should it be as compact and as disentangled as possible, or, on the contrary, can redundancy of information or correlations be useful in the context of deep RL? A definitive answer seems beyond the current mathematical understanding of deep RL.
Algorithm 1: SRLfD
1: Input: a set of task instances $T_k$, $k \in [\![1, K]\!]$, and for each of them a set of paths $P^k_p$, $p \in [\![1, P]\!]$, of maximum length T.
2: Initialization: a randomly initialized neural network following the architecture described in Fig. 1(b).
3: repeat
4:   Pick uniformly a task k
5:   Predict the current state representations with Eq. (1)
6:   Compute $L(\theta)$ with Eq. (2) and perform a training step on $\theta$
7: until convergence

Fig. 2. The Reacher environment, with a reward of 1 when the end-effector reaches a position close to the goal, and 0 otherwise. For more challenging inputs (on the right), we add Gaussian noise with zero mean and standard deviation 10, and a ball distractor is added to the environment with random initial position and velocity.
We consider a simulated 2D robotic arm with 2 torque-controlled joints, as shown in Fig. 2. We use the Reacher environment, adapted from the OpenAI Gym benchmark [Brockman et al., 2016] to PyBullet [Coumans et al., 2018]. An instance of this continuous control task is parameterized by the position of a goal that the end-effector of the robot must reach within some margin of error (and in limited time). We use RGB images of 64×64 pixels as raw inputs. As the heart of our work concerns state estimation, we have focused on making perception challenging by adding, in some cases, a randomly moving distractor and Gaussian noise, as shown in Fig. 2. We believe that the complexity of the control part (i.e. the complexity of the tasks) is less important for validating our method, as it depends more on the performance of the RL algorithm. To solve even just the simple reaching task, the configuration of the robot arm is required and needs to be extracted from images for the RL algorithm to converge. Indeed, our results show that this is the case when SRLfD-learned representations are used as inputs to SAC [Haarnoja et al., 2018].
Experimental Setup
Baseline Methods. We compare state representations obtained with our method (SRLfD) to five other representation strategies: • Ground truth: as mentioned in Section 4.1, what we call the ground truth representation of the robot configuration is a vector of size four: the two joint angles and their velocities.
• Principal Component Analysis (PCA) [Jolliffe, 2011]: we perform PCA on the demonstration data, and the 8 or 24 most significant dimensions are kept, thus reducing observations to a compact vector that accounts for a large part of the input variability.
• Autoencoder-based representation [Hinton and Salakhutdinov, 2006]: ϕ is replaced by an encoder learned with an autoencoder. The latent space representation of the autoencoder (of size 8 or 24) is trained with the same demonstrations (but ignoring the actions) as in the SRLfD training.
• Random network representation [Gaier and Ha, 2019]: we use the same neural network structure for ϕ as with SRLfD, but instead of training its parameters, they are simply fixed to random values sampled from a Gaussian distribution of zero mean and standard deviation 0.02.
• Raw pixels: the policy network is modified to receive $(o_{t-1}, o_t)$ directly as input, with the same dimensionality reduction after $\phi$ as the other methods, but all of its parameters are trained simultaneously, in the manner of end-to-end RL.
The representations obtained with these methods use the same demonstration data as the SRLfD method, and share the same neural network structure for $\phi$, or replace it (for ground truth and PCA), in the architecture of Fig. 1(c), whose output size is 8 or 24.
Generating Demonstrations. For simplicity, the preliminary phase of training K oracle policies $\pi_k$ (see Fig. 1(a)) is done by running the SAC RL algorithm [Haarnoja et al., 2018]. Here, the "unknown" representations used as inputs are the ground truth representations. SAC also exploits the Cartesian coordinates of the goal position. It returns a parameterized policy capable of producing reaching trajectories to any goal position.
For the pretraining phase of SRLfD (see Fig. 1(b)), the previously learned parameterized policy generates K = 16 oracle policies $\pi_k$ (which represent different instances of the reaching task), each with 238 paths for training and 60 paths for validation, of maximal length T = 50, computed from various initial positions. We then train all the heads simultaneously for computational efficiency. Specifically, for each optimization iteration, we uniformly sample for every head $\psi_k$ a mini-batch of 64 demonstrations from the P paths corresponding to task k.
Implementation Details. For the SRLfD network architecture (adapted from the one used in [Mnih et al., 2013]), $\phi$ (see Fig. 1) sends its 3×64×64 input through a succession of three convolutional layers. The first convolves 32 8×8 filters with stride four. The second convolves 64 4×4 filters with stride two. The third convolves 32 3×3 filters with stride one. It ends with a fully connected layer with half as many output units as the chosen state representation dimension (because state representations have the form $\big(\phi(o_{t-1}), \phi(o_t)\big)$). The heads $\psi_k$ take the state representation as input and are composed of three fully connected layers, the first two of size 256 and the last of size two, which corresponds to the size of the action vectors (one torque per joint).
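A sketch of this encoder in PyTorch, assuming the stated strides with no padding (the flattened size of 32·4·4 then follows from the 64×64 input; the function name is ours):

```python
import torch.nn as nn

def make_phi(state_dim):
    # Encoder phi for 3x64x64 inputs; with these strides and no padding the
    # final feature map is 32 x 4 x 4 = 512 before the fully connected layer.
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 32, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 4 * 4, state_dim // 2),  # half the state dimension
    )
```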
For SAC network architecture we choose a policy network that has the same structure as the heads ψ k used for imitation learning, also identical to the original SAC implementation [Haarnoja et al., 2018], and use the other default hyperparameters.
Rectified linear units (ReLU) are used as the activation functions between hidden layers. We use ADAM [Kingma and Ba, 2014] with a learning rate of $10^{-4}$ to train the neural network $\phi$, and $10^{-3}$ to train all the heads $\psi_k$ and the policy network.
Results and Discussion
In this section, we report the quantitative evaluation of SRLfD when a goal-reaching task is used in the transfer learning phase (see Fig. 1(c)). Specifically, we evaluate the transferability of SRLfD-learned state representations used as inputs to an RL algorithm (SAC [Haarnoja et al., 2018]) solving a new, randomly chosen instance of the reaching task, and compare the success rates to those obtained with state representations originating from other methods. The performance of a policy is measured as the probability of reaching the goal from a random initial configuration in 50 time steps or less. We expect better representations to yield faster learning progress and better convergence (on average). Success rates were measured with representations of size 16 or 48 (except for the ground truth representation, of size 4), and on "clean", i.e. raw, observations (in the middle of Fig. 2) or observations with noise and a randomly moving ball distractor (on the right of Fig. 2). As expected, the best results are obtained with the ground truth representation, but we see that, of the five other state representations, only the SRLfD, PCA, and VAE representations can be successfully used by SAC to solve reaching tasks when noise and a distractor are added to the inputs. SAC fails to efficiently train (in an end-to-end manner) the large neural network that takes raw pixels as input, whether its representation is of size 16 or 48. Using fixed random parameters for the first part of its network (random network representation) is not a viable option either.
The results show that, with fewer dimensions, our method (SRLfD) leads to better RL performance than the observation compression methods (PCA and VAE). We conjecture that, with a small bottleneck, the observation reconstruction objective can filter out the information about the robotic arm (dramatically more so on the challenging observations). This explains why PCA and VAE tend to require more dimensions than the minimal number of dimensions of our robotic task (four dimensions: the two joint angles and their velocities). This clearly shows that, with a carefully chosen learning objective such as the one used for SRLfD, it is possible to compact into a minimal number of dimensions only the information necessary for robotic control.
Another surprising observation is that PCA outperforms the VAE in our results. By design, the VAE is trained to encode and decode with as few errors as possible, and it can generally do this better than PCA by exploiting the nonlinearities of neural networks. Moreover, as first explained by Bourlard and Kamp [1988] and Kramer [1991], the autoencoder is an extension of PCA that transforms correlated observations into nonlinearly uncorrelated representations. However, it is not clear that such uncorrelated input variables lead to better RL performance. When data are obtained from the transitions of a control system, the most important variables are those correlated with changes between transitions, which generally do not coincide with the directions of greatest variation in the data.
Ballistic Projectile Tracking
In this section, we study a transfer learning phase (see Fig. 1(c)) corresponding to a simple supervised learning system solving a ballistic projectile tracking task. Specifically, it consists in training a tracker, from learned representations, to predict the next projectile position. This task has the advantage of not needing K oracle policies $\pi_k$ in a preliminary phase (Fig. 1(a)). Instead, we derive the $\pi_k$ directly from the ballistic trajectory equations (Eq. 14). This enables us to easily perform an experimental study of the main hyperparameters of our SRLfD method: the state dimension $S_d$ and the number of oracle policies K. It also allows us to conduct a comparative quantitative evaluation against other representation strategies. Furthermore, we study the possibility of using a recursive loop for the state update instead of the state concatenation. In this way, SRLfD can handle partial observability by aggregating information that may not be estimable from a single observation. In particular, to solve even a simple projectile tracking task, projectile velocity information is required and must be extracted from past measurements for the supervised learning algorithm to converge. Mathematically, the observation (i.e. measurement) $o_t$ is concatenated to the previous state estimate $s_{t-1}$ to form the input of the SRLfD model $\phi$, which estimates the current state as follows:

$s_t = \phi(o_t, s_{t-1})$

Literally, this recursive loop conditions the current state estimate on all previous states.
An instance of this projectile tracking task is parameterized by the initial velocity and angle of the projectile. Specifically, a tracker receives as input the state estimated by $\phi$ and must predict the projectile's next position $\hat{o}_{t+1}$ as follows:

$\hat{o}_{t+1} = \psi_{\text{new}}(s_t)$

The tracker is then trained by supervised learning to minimize the objective function

$L = \sum_{p=1}^{P} \sum_{t} \big\|\hat{o}^{p}_{t+1} - o^{p}_{t+1}\big\|_2^2$

where the notations correspond to those defined in Section 3.1.
Experimental Setup
Baseline Methods. We compare the state representations learned with SRLfD to five other representation strategies: • the ground truth, a vector of size 4 formed from the 2D Cartesian positions and velocities of the projectile: $(x_t, y_t, v_{x,t}, v_{y,t})$; • the position, corresponding to the 2D Cartesian coordinates of the projectile: $(x_t, y_t)$; • a random network representation with the same $\phi$ architecture and state recursive loop, which is not trained; • an end-to-end representation learning strategy, which builds its state estimate with the same $\phi$ architecture and state recursive loop while solving the tracking task; • a Kalman filter estimated from positions, with unknown initial Cartesian velocities of the projectile. The Kalman filter, designed by Kalman [1960a], is a classical method for state estimation in state-linear control problems, where the ground truth state is not directly observable but sensor measurements are observed instead. Kalman et al. [1960] created a mathematical framework for the control theory of the LQR (Linear-Quadratic Regulator) problem and introduced for this purpose the notions of controllability and observability, refined later in [Kalman, 1960b]. Witsenhausen [1971] conducted one of the first attempts to survey the literature on the separation of state estimation and control. In particular, this led to the two-step procedure, composed of the resolution of Kalman filtering and then of LQR, known as LQG (Linear-Quadratic-Gaussian). The Kalman filter is also commonly used in recent RL applications [Ng et al., 2003, 2006; Abbeel et al., 2007; Abbeel, 2008].
The Kalman filter has undergone many extensions, including the popular extended Kalman filter (EKF), which can handle nonlinear transition models [Ljung, 1979]. However, a major drawback of these classical state estimation methods is that they require knowledge of the transition model. This constraint has been relaxed by feature engineering techniques (like SIFT [Lowe, 1999] and SURF [Bay et al., 2006]), which instead require knowledge of the subsequent task [Kober et al., 2013]. Indeed, good hand-crafted features are task-specific and therefore costly in human expertise. These drawbacks were then overcome by SRL methods, popularized by Jonschkowski and Brock [2013], which benefit from the autonomy of machine learning and the generalization power of deep learning techniques.
Kalman filter
We briefly describe below the operations of the Kalman filter; the reader wishing a complete presentation can refer to [Bertsekas, 2005]. The equations updating the positions $(x_t, y_t)$ and velocities $(v_{x,t}, v_{y,t})$ of the projectile relate them to their previous values, to the acceleration due to the gravitational force (g), and to the time elapsed between each update ($\Delta t$), as follows:

$x_t = x_{t-1} + v_{x,t-1}\,\Delta t, \quad y_t = y_{t-1} + v_{y,t-1}\,\Delta t - \tfrac{1}{2} g\,\Delta t^2, \quad v_{x,t} = v_{x,t-1}, \quad v_{y,t} = v_{y,t-1} - g\,\Delta t$

The ground truth state is defined as $s_t = [x_t, y_t, v_{x,t}, v_{y,t}] \in \mathbb{R}^4$. Thus, the update procedure of the transition model is described with a single linear equation:

$s_t = A s_{t-1} + w_t$

where Q is the process noise covariance matrix and $w_t$ is the process noise, assumed to be white (i.e. normally, independently, and identically distributed at each time step).
The observation of this transition model is the projectile position, defined as $o_t = [x_t, y_t] \in \mathbb{R}^2$, which is obtained from the ground truth state in the following way:

$o_t = H s_t + \nu_t$

where R is the measurement noise covariance matrix (a.k.a. the sensor noise covariance matrix) and $\nu_t$ is the measurement noise, also assumed to be white. R and Q are ignored in our experiments for simplicity.
The aim of the Kalman filter is to solve the problem of estimating the ground truth state $s \in \mathbb{R}^4$. To do this, the Kalman filter assumes knowledge of A, Q, R, and H. In this experiment, we assume there is no noise in the sensory inputs (i.e. R has only zero values) and no uncertainty in the transition model (i.e. Q has only zero values). Moreover, in this projectile tracking task, there is no control vector ($a_t \equiv 0$), because, as the force exerted by gravitation is constant, it can be incorporated into the matrix A. The state of the Kalman filter is initialized with the initial measurement (i.e. the initial 2D Cartesian position of the projectile) and zeros instead of the true initial velocities of the projectile. In order to let the Kalman filter fix the initial velocities of the projectile and to ensure its convergence, we initialize the diagonal coordinates corresponding to the velocities of the state covariance matrix (denoted P) to 100. The Kalman filter follows a twofold procedure: (i) a prediction step, which uses knowledge of the transition model; (ii) an update step, which combines the model and the measurement, knowing that both may be imperfect. The prediction step consists in the prediction of the state estimate, Eq. (9), and the prediction of the state covariance estimate, Eq. (10):

$\hat{s}^-_t = A \hat{s}_{t-1} + B u_{t-1}$    (9)

$P^-_t = A P_{t-1} A^\top + Q$    (10)

These are the a priori estimates.
The update step first updates the Kalman gain matrix, Eq. (11), then updates the state estimate by incorporating the measurement and the a priori state estimate, Eq. (12), and similarly updates the state covariance matrix, Eq. (13):

$K_t = P^-_t H^\top \big(H P^-_t H^\top + R\big)^{-1}$    (11)

$\hat{s}_t = \hat{s}^-_t + K_t \big(o_t - H \hat{s}^-_t\big)$    (12)

$P_t = (I - K_t H)\, P^-_t$    (13)
Therefore, at each iteration, the Kalman filter predicts the next a priori state estimate, which is then used to update the next state estimate, corresponding to an a posteriori estimate. This implies that the Kalman filter is a recursive linear state estimator, whereas SRLfD is a recursive nonlinear state estimator (thanks to the state recursive loop).
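For concreteness, here is a minimal NumPy sketch of this tracker. The function name is ours, and gravity is applied as a known constant input each step rather than folded into A, which is an equivalent formulation:

```python
import numpy as np

G = 9.81  # m/s^2

def kalman_track(positions, dt):
    # Ballistic state s = [x, y, vx, vy], observed through positions only.
    A = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    grav = np.array([0.0, -0.5 * G * dt**2, 0.0, -G * dt])
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    Q = np.zeros((4, 4))  # no model uncertainty, as in the experiments
    R = np.zeros((2, 2))  # no sensor noise, as in the experiments

    s = np.array([positions[0][0], positions[0][1], 0.0, 0.0])
    P = np.diag([0.0, 0.0, 100.0, 100.0])  # large initial velocity variance

    estimates = [s.copy()]
    for z in positions[1:]:
        # Prediction step (Eqs. 9-10): a priori estimates.
        s = A @ s + grav
        P = A @ P @ A.T + Q
        # Update step (Eqs. 11-13): a posteriori estimates.
        K = P @ H.T @ np.linalg.pinv(H @ P @ H.T + R)
        s = s + K @ (z - H @ s)
        P = (np.eye(4) - K @ H) @ P
        estimates.append(s.copy())
    return np.array(estimates)
```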
In particular, while SRLfD may use a linear network ϕ for simple tasks, it can still take advantage of nonlinear approximators such as multilayer perceptrons for its ψ_k heads, as in this ballistic projectile tracking task.
Generating Demonstrations The projectile tracking task does not require learned oracle policies π_k in its preliminary phase: it is the ballistic trajectory equations (Eq. 14) that allow us to define these oracle policies π_k analytically. As we ignore all forces except the gravitational one, the trajectory of the projectile corresponds to a ballistic trajectory. The temporal equations of the ballistic trajectory are defined with the force of gravity g = 9.81 m/s², the initial angle of launch of the projectile α₀, and its initial velocity v₀ as well as its initial y-coordinate y₀, as follows:

x(t) = v₀ cos(α₀) t,    y(t) = y₀ + v₀ sin(α₀) t − (1/2) g t²    (14)

The total horizontal distance x_max covered until the projectile falls back to the ground is given by:

x_max = (v₀ cos(α₀) / g) (v₀ sin(α₀) + sqrt((v₀ sin(α₀))² + 2 g y₀))    (15)

This allows us to calculate the corresponding time of flight t_max:

t_max = x_max / (v₀ cos(α₀))    (16)

These equations provide an oracle policy parameterized by v₀ and α₀, which can generate ballistic trajectories at any initial ordinate y₀. Each generated trajectory is of fixed length T = 12, which corresponds to 10 demonstration samples (as the first two are used during initialization) of the form (o_t^{k,p}, a_t^{k,p}), where the actions a_t^{k,p} correspond to the next positions of the projectile, i.e. a_t^{k,p} = o_{t+1}^{k,p}. To do this, we define for each trajectory the time between each update ∆t as:

∆t = t_max / (T − 1)    (17)

where t_max is defined in Eq. 16.
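A short Python sketch of this generator is given below; it follows Eqs. 14–17 as reconstructed above, so the uniform sampling with ∆t = t_max/(T − 1) is part of that reconstruction, and the function names are illustrative.

```python
import numpy as np

g = 9.81  # m/s^2

def ballistic_trajectory(v0, alpha0, y0, T=12):
    """Positions of one oracle trajectory, sampled uniformly over the flight.

    v0: initial speed (m/s); alpha0: launch angle (rad); y0: initial ordinate.
    """
    vx, vy = v0 * np.cos(alpha0), v0 * np.sin(alpha0)
    x_max = (vx / g) * (vy + np.sqrt(vy**2 + 2.0 * g * y0))  # Eq. 15
    t_max = x_max / vx                                       # Eq. 16
    dt = t_max / (T - 1)                                     # Eq. 17
    t = np.arange(T) * dt
    return np.stack([vx * t, y0 + vy * t - 0.5 * g * t**2], axis=1)  # Eq. 14

# Demonstration pairs (o_t, a_t) with a_t = o_{t+1}:
obs = ballistic_trajectory(v0=20.0, alpha0=np.deg2rad(45), y0=10.0)
demos = list(zip(obs[:-1], obs[1:]))
```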
For the pretraining phase of SRLfD (see Fig. 1(b)), each oracle policy π_k has a fixed initial velocity and angle (v₀^k, α₀^k) and generates P ballistic trajectories with different random initial ordinates such that y₀ ∈ [0, 30]. We uniformly pick K̄ tasks (such that K̄ ≤ K) whose corresponding heads ψ_k are trained simultaneously, for computational efficiency. Each of them is trained on demonstrations sequentially generated from P = B/K̄ different paths, where B is the desired batch size for training ϕ. This way, every optimization iteration of ϕ is performed on a fixed number B of demonstrations, independently of the total number of tasks K. We use B = 256 and K̄ = min(K, 6) in our experiments. For the tracker training, the batch size is also 256, which implies P = 256.
For the SRLfD training validation, we measure the average of tracking prediction errors over all oracle policies on initial ordinates defined as y 0 ∈ {0, 10, 20, 30} (in meters). For the tracker training validation, we measure the average of tracking prediction errors on fixed trajectories with the same initial velocity v 0 = 20 and the initial angles defined as α 0 ∈ {25, 30, 35, 40, 45, 50, 55} (in degrees) and on initial ordinates defined as y 0 ∈ {0, 10, 20, 30} (in meters). Fig. 4 shows some qualitative results of this validation with a tracker trained from 4-dimensional SRLfD representations with 6 oracle policies.
Implementation Details ϕ is a linear neural network of input dimension (2 + S_d) and output dimension S_d. When not specified, the default number of oracle policies K is 6. We use a state recursive loop instead of state concatenation. For the random-network and end-to-end baselines, the networks have the same structure as ϕ: in the former the parameters are kept fixed, while in the latter ϕ is trained jointly with the tracker ψ_new. The heads ψ_k and the tracker ψ_new are one-hidden-layer neural networks, with a hidden layer of size 32 and an output layer of size two, which corresponds to the size of the action vectors (i.e. the next projectile positions). These nonlinear networks are necessary because ∆t differs across ballistic trajectories: unlike the Kalman filter, which knows this value, the tracker and the SRLfD heads must relate their inputs to their outputs nonlinearly.
We use ADAM [Kingma and Ba, 2014] with a learning rate of 10⁻⁴ to train ϕ, the heads ψ_k, and the tracker ψ_new. For the SRLfD and tracker trainings, we use early stopping with a patience of 40 epochs. In one epoch, 10 000 iterations are performed; ϕ (during SRLfD training) or ψ_new (during tracker training) sees exactly 1 000×256 different trajectories, each composed of 10 demonstrations (i.e. data points or samples). The Leaky Rectified Linear Unit (Leaky ReLU) is used as the activation function [Xu et al., 2015].
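As an illustration of this setup, below is a minimal PyTorch sketch of ϕ, the heads ψ_k, and the optimizer (the framework choice and all identifiers are assumptions; only the dimensions, the recursive loop, the activation, and the Adam settings come from the text, and the MSE imitation loss is likewise our assumption).

```python
import torch
import torch.nn as nn

class SRLfD(nn.Module):
    """Linear encoder phi with a state recursive loop and MLP heads psi_k."""

    def __init__(self, obs_dim=2, state_dim=4, n_heads=6):
        super().__init__()
        self.state_dim = state_dim
        self.phi = nn.Linear(obs_dim + state_dim, state_dim)   # linear phi
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, 32),
                          nn.LeakyReLU(),
                          nn.Linear(32, 2))                    # action = next position
            for _ in range(n_heads))

    def forward(self, obs_seq, k):
        # obs_seq: (T, obs_dim). The previous state estimate is fed back
        # alongside each new observation (the state recursive loop).
        s = obs_seq.new_zeros(self.state_dim)
        preds = []
        for o in obs_seq:
            s = self.phi(torch.cat([o, s]))    # recursive state update
            preds.append(self.heads[k](s))     # imitate oracle policy pi_k
        return torch.stack(preds)

model = SRLfD()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # as stated above
loss_fn = nn.MSELoss()  # regression onto the oracle actions (an assumption)
```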
Results and Discussion
We learned the projectile tracking task from six representation strategies. Fig. 5(a) shows the boxplot of the average tracking errors obtained with all the different strategies at the same representation size of 4. Our SRLfD outperforms the end-to-end representation, confirming the benefit provided by "divide and conquer" techniques [Dasgupta et al., 2008]. Ground truth representations outperform our method because they encode the complete projectile configuration. With a random network, the supervised learning system fails to track the projectile, implying that state representation learning is required in this recursive state estimation context.
Regarding the Kalman filter, it uses knowledge of the real transition model to provide complete recursive state estimation. It is therefore not surprising that it achieves the same performance as the ground truth baseline. Unlike this classical method, SRLfD does not use a priori knowledge of the tracking task, only what is available in the oracle policies. The performance obtained with the SRLfD representations in this comparative evaluation shows that they extract the position and velocity of the projectile. In other words, there is enough diversity in the oracle policies imitated by the SRLfD network heads that their joint state space is close to the real state that makes the system fully observable. The comparative quantitative evaluation presented in Fig. 5(b) confirms this hypothesis, since the performance obtained with size-4 SRLfD representations increases with the number of oracle policies used during the pretraining phase of SRLfD. Table 2 reports the average tracking errors displayed in Fig. 5, with means and standard deviations for better insight. Fig. 5(c) shows that as the state dimension decreases, the information lost by SRLfD significantly degrades the performance of the trained trackers, although at size 2 it is still better than the position baseline. On the other hand, Fig. 5(d) shows that as the state dimension increases, the performance of the trained trackers improves, even outperforming ground truth from 6 dimensions onward (see Table 2). Although these results may be surprising, one can assume that adding redundancy to the representations makes them easier to build (since the dimension of the ground truth vector is 4). Indeed, larger state embeddings could be more regular and thus be built with simpler neural networks that are less subject to overfitting. However, the question of what constitutes an ideal representation for deep learning algorithms is far from settled. Recent works have started to investigate this question [Ota et al., 2020, 2021], but the search for a definitive answer leads far beyond the scope of this work.
Conclusion
We presented a method (SRLfD) for learning state representations from demonstrations, more specifically from runs of oracle policies on different instances of a task. Our results indicate that the learned state representations can advantageously replace raw sensory inputs to learn policies on new task instances via regular RL. By learning a shared representation end-to-end for several tasks with common useful knowledge, SRLfD forces the state representation to be general, provided that the tasks are diverse. Moreover, since the representation is trained together with heads that imitate the oracle policies, we believe that it is more appropriate for control than other types of representations (for instance, ones that primarily aim at enabling a good reconstruction of the raw inputs). Our experimental results tend to confirm this belief, as SRLfD state representations were exploited more effectively by the SAC RL algorithm and a supervised learning system than several other types of state representations.
Maternal Physical Activity and Neonatal Cord Blood pH: Findings from the Born in Bradford Pregnancy Cohort
Objective: Evidence suggests that physical activity whilst pregnant is beneficially associated with maternal cardiometabolic health and perinatal outcomes. It is unknown if benefits extend to objective markers of the neonate condition at birth. This study investigated associations of maternal pregnancy physical activity with cord blood pH. Methods: Cord blood pH was measured when clinically indicated in a subgroup of Born in Bradford birth cohort participants (n = 1,467). Pregnant women were grouped into one of four activity categories (inactive/somewhat active/moderately active/active) based on their self-reported physical activity at 26–28 weeks gestation. Linear regression was used to calculate adjusted mean differences in cord blood pH, and Poisson regression was used to quantify relative risks for moderate cord blood acidaemia (pH < 7.10), across physical activity groups. Results: More than half of pregnant women (52.0%) were inactive, one-fifth were somewhat active (21.7%), fewer were moderately active (14.6%) and active (11.7%), respectively. Pregnancy physical activity was favourably associated with higher cord blood pH. Compared to neonates of inactive women, there was some evidence that neonates of women who were at least somewhat active in pregnancy had lower relative risk of moderate cord blood acidaemia (for arterial blood: relative risk = 0.70 (95% confidence interval 0.48–1.03)). Conclusions: Modest volumes of mid-pregnancy maternal physical activity do not appear to adversely influence cord blood pH and may enhance the neonate condition at birth.
Introduction
Safety concerns are cited by pregnant women as a barrier to physical activity (Coll et al., 2017). This may have originated from cultural beliefs that women should 'rest' in pregnancy (Coll et al., 2017) alongside former recommendations that physical activity should be limited because of concerns for mother and offspring welfare, including risk of impaired oxygen supply to the fetus during supine exercise (American College of Obstetricians and Gynecologists (ACOG), 1985; Pivarnik and Mudd, 2009).
It is still good practice to avoid supine exercise, particularly late in pregnancy, but otherwise physical activity can enhance transportation of oxygenated blood to the developing fetus, via physiological adaptations such as increased placental surface area, and enhanced blood flow and perfusion balance between maternal and fetal circulations (Melzer et al., 2010; Ferraro, Gaudet and Adamo, 2012). Physical activity can also aid regulation of circulating blood glucose levels (Collings et al., 2020a), which could help to reduce the risk of placental dysfunction and fetal hyperinsulinemia, thereby impacting fetal metabolism, oxygen demand and lactate accumulation (Jarmuzek, Wielgos and Bomba-Opon, 2015). As well as acute neonatal stress, each of these factors can have implications for the cord blood pH, levels of which provide an objective marker of the neonate condition immediately prior to birth and closely relate to Apgar scores (Victory et al., 2004). Low cord blood pH has been shown to predict neonate morbidity and mortality and longer term neurodevelopmental impairment (Malin, Morris and Khan, 2010; Yeh, Emary and Impey, 2012; Kelly et al., 2018; Vesoulis et al., 2018). There is currently limited support for an association between pregnancy physical activity and cord blood pH, but the evidence-base is low quality, characterised by small studies that were likely underpowered to detect associations (Davenport et al., 2018).
The objective of this study was to investigate associations of maternal pregnancy physical activity with cord blood pH measured in neonates who were born in poor condition, or for whom there was a concern during labour or immediately following birth. Evidence for a positive association with cord blood pH could strengthen the case that physical activity is safe and potentially beneficial for healthy pregnant women and their neonates.
Study design
Born in Bradford (BiB) is a prospective birth cohort study of 12,453 pregnant women who were recruited between 2007-10 whilst attending routine antenatal appointments at Bradford Royal Infirmary, the only maternity unit serving the city. Bradford is the sixth largest metropolitan borough in England and is one of the most deprived and ethnically diverse cities in the country (Wright et al., 2013). In a subsample of BiB participants, cord blood samples were taken and pH was analysed on the labour ward. This occurred when there was 'concern about the baby during labour or immediately following birth' or if the baby was born in poor condition (National Institute for Health and Care Excellence, 2007). The final sample comprised 1,467 mother-neonate pairs. As cord blood pH was measured only when clinically indicated, the pH subgroup was skewed toward greater representation of nulliparous women (who are at higher risk of pregnancy complications (Bai et al., 2002)) and more caesarean section deliveries. The characteristics of the included subgroup were otherwise similar to all other BiB participants (n = 12,391 mother-neonate pairs; see Supplementary Table S1), who were broadly representative of the obstetric population in Bradford at the time of recruitment (Wright et al., 2013). The BiB study was approved by the Bradford Research Ethics Committee (ref 07/H1302/112) and all mothers provided written informed consent.
Maternal physical activity was assessed at 26-28 weeks gestation using the General Practice Physical Activity Questionnaire (GPPAQ), which has been validated against accelerometry and exhibits face validity in the BiB pregnancy cohort (National Health Service, 2009; Collings et al., 2020a, 2020b). Mothers were grouped into one of four activity levels (inactive/somewhat active/moderately active/active) based on their self-reported occupational physical activity level, physical exercise and walking. The active category is consistent with meeting the recommended minimum of 150 minutes per week of moderate intensity physical activity (Department of Health and Social Care, 2019). Full details of the GPPAQ, including the scoring system used to derive activity categories, are shown in Supplementary Figure S1.
Following clinical guidelines that were in operation at the time, in the event there was 'concern about the baby either in labour or immediately following birth', or if the baby was born in poor condition with a 1-minute Apgar score of 5 or less, the umbilical cord was double-clamped and venous and arterial blood samples were taken for gas analysis by clinical staff (Thorp, Dildy and Yeomans, 1996; National Institute for Health and Care Excellence, 2007). The data were retrieved from obstetric records and were used to derive a third variable that represented the lowest cord blood pH recorded from either umbilical sample (Kelly et al., 2018).
Women consented to the abstraction and use of their data from obstetric medical records and, at recruitment, completed an interviewer-administered questionnaire. Full details of all covariables have previously been described (Collings et al., 2020a).
Statistical analysis
Linear regression models were used to calculate differences in cord blood pH levels between the four groups of maternal physical activity (reference group: inactive); p-values from trend tests across physical activity categories are also presented. Models were initially adjusted for maternal age, ethnicity, early-pregnancy body mass index (measured at ~12 weeks gestation), socioeconomic status, parity, season of physical activity assessment, and neonate sex. Adjustments for maternal smoking, delivery mode, gestational age, birth weight, and neonate abdominal circumference were subsequently included because they changed β-coefficients between exposures and outcomes by ≥10% (Maldonado and Greenland, 1993). All dependent variables (arterial, venous and the lowest cord blood pH) were approximately normally distributed, and results from the linear regression analysis are presented as marginal means with 95% confidence intervals. Adjusted for the same covariates, Poisson regression was used to quantify the relative risk of moderate cord blood acidaemia (pH < 7.10 (Yeh, Emary and Impey, 2012; Vesoulis et al., 2018)) in neonates of women who were at least somewhat active in pregnancy (somewhat active, moderately active, and active groups combined) compared to neonates of inactive women. All analyses were performed in Stata/SE version 15.0 software and p < 0.05 was deemed statistically significant.
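For readers working outside Stata, the relative-risk analysis can be sketched in Python with statsmodels as below; the dataframe, column names, and abbreviated covariate list are hypothetical illustrations, not the study's code.

```python
import numpy as np
import statsmodels.formula.api as smf

# `df` is assumed to hold one row per mother-neonate pair with a binary
# outcome `acidaemia` (arterial cord pH < 7.10), a binary exposure `active`
# (at least somewhat active in pregnancy) and illustrative covariates.
model = smf.poisson(
    "acidaemia ~ active + age + bmi + parity + C(ethnicity) + C(neonate_sex)",
    data=df,
).fit(cov_type="HC1")  # robust errors: Poisson regression of a binary outcome

relative_risks = np.exp(model.params)
conf_intervals = np.exp(model.conf_int())  # 95% CIs on the RR scale
```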
Participant characteristics
Descriptive statistics for the study sample are presented in Table 1. More than half of women (52.0%) were inactive in pregnancy, one-fifth were somewhat active (21.7%) and fewer were moderately active (14.6%) and active (11.7%), respectively. Inactive women were more frequently of Pakistani-origin, multiparous and were from moderately or the most deprived socioeconomic strata. Nearly one-tenth (8.5%) of the lowest recorded cord blood pH samples were moderately acidaemic (Supplementary Table S2 shows the number of cases of acidaemia stratified by cord blood source and physical activity level). Cord blood pH was positively related to Apgar scores (Supplementary Figures S2 and S3). Table 2 shows cord blood pH levels stratified by maternal pregnancy physical activity. There was no evidence for effect modification by ethnicity (p ≥ 0.42), hence results are presented for the whole sample combined, adjusted for ethnic group. Significantly higher cord blood pH was observed in neonates of women who were somewhat active and moderately active in pregnancy compared to neonates of women who were inactive. The Poisson regression analysis provided some indication that, compared to neonates of inactive women, neonates of women who were at least somewhat active in pregnancy had lower risk of moderate cord blood acidaemia (arterial pH: relative risk (95% confidence interval) 0.70 (0.48–1.03), p = 0.071; venous pH: 0.62 (0.34–1.12), p = 0.11; lowest pH: 0.73 (0.51–1.05), p = 0.086). Additional adjustment for pregnancy complications (gestational diabetes, gestational hypertension, gestational weight gain, emergency caesarean and medical or surgical induction), for measurement factors related to maternal physical activity (gestational age at the time of reporting, whether women were feeling well and able to enjoy their normal daily activities, and whether physical activity was reported during Ramadan) and to cord blood pH (the time delay from birth to blood sampling; the median (IQR) delay to arterial and venous cord blood samples was 10 (10) and 12 (10) minutes, respectively) did not appreciably influence associations, so these variables were not retained in models. Models were rerun excluding samples that were missing data for sampling time (n = 61) and samples that were delayed by ≥30 minutes following delivery (n = 82); the results were substantively unchanged.
Discussion
This study found that maternal pregnancy physical activity was favourably associated with higher cord blood pH. The association was modest in size but may be underestimated due to errors in self-reported physical activity (Celis-Morales et al., 2012). It is also conceivable that any positive influence on cord blood pH may be clinically meaningful, as lower values are associated with higher risk of serious adverse neonatal outcomes in a dose-dependent manner (Malin, Morris and Khan, 2010; Yeh, Emary and Impey, 2012; Kelly et al., 2018; Vesoulis et al., 2018). A recent meta-analysis concluded that there was low-quality evidence from six randomised trials which together indicated no association between pregnancy physical activity and cord blood pH (Davenport et al., 2018). However, an observational study reported that arterial cord pH was significantly higher in neonates of women who performed high-intensity exercise throughout pregnancy compared to neonates of women who had discontinued exercising (Clapp et al., 1995). Furthermore, cohort analysis of a controlled trial found that objectively measured physical activity at 16 weeks gestation was associated with higher venous cord blood pH and arterial oxygen saturation (Baena-García et al., 2019). The results provided by Baena-García et al. may be of limited clinical relevance, however, as their analysis was restricted to participants with 'normal' cord blood pH values >7.20 (Baena-García et al., 2019).
This study is the first to report associations between pregnancy physical activity and the full range of cord blood pH in a large and diverse sample of mother-neonate pairs from a deprived location. Our results appear to provide evidence for a threshold-type effect. Compared to neonates of women who were inactive, only neonates of women who were somewhat and moderately active in pregnancy exhibited higher cord blood pH. This may be explained by fewer women being categorised as active, and hence insufficient statistical power at the higher end of the activity spectrum to detect what appear to be small associations. To account for this, and to help decipher the clinical relevance of associations, we calculated relative risks for moderate cord blood acidaemia between neonates of inactive women versus women who were at least somewhat active in pregnancy. We found some evidence that being at least somewhat active was associated with a lower risk of moderate acidaemia. The results only approached statistical significance, but all 95% confidence intervals were largely consistent with a protective effect (confidence limits indicated considerable risk reductions of ~40–50% associated with being at least somewhat active in pregnancy versus comparatively marginal risk increases). It is encouraging that modest volumes of mid-pregnancy physical activity do not appear to be detrimental and seem to be associated with some benefit in terms of higher cord blood pH, and possibly lower risk of moderate acidaemia. This is contrary to the popular belief of many women that physical activity in pregnancy is harmful (Coll et al., 2017). Our results support new UK guidelines which emphasise that 'every activity counts' and that inactive pregnant women should gradually accumulate physical activity throughout the week (Department of Health and Social Care, 2019).
The results of this study add to growing evidence that pregnancy physical activity is not only safe but beneficial for the short and long-term health prospects of both mother and child (Collings et al., 2020a, 2020b). Placental function likely underpins the link between maternal physical activity and higher neonatal cord blood pH. Biological adaptations to regular pregnancy physical activity include increased size and vascularity of the placenta, which aids oxygen perfusion to the fetus and removal of deoxygenated, lactate-filled blood (Melzer et al., 2010; Ferraro, Gaudet and Adamo, 2012). Activity-mediated improvements in maternal adiposity and glucose regulation (which we have previously observed in the BiB pregnancy cohort (Collings et al., 2020a)) may also assist prevention of placental dysfunction and fetal hyperinsulinemia, which could help curb adverse fetal metabolism and cord blood acidosis (Jarmuzek, Wielgos and Bomba-Opon, 2015; Aalipour et al., 2018). As with the results of any observational study, residual confounding by imperfectly measured covariates and unknown or unmeasured confounding cannot be excluded. Additional research is needed to replicate the current findings. Future studies should be carried out in unselected population-based samples of mother-neonate pairs, rather than clinically identified samples, which hinder the generalisability of results.
Conclusion
In neonates born in poor condition, or for whom there was concern during labour or immediately following birth, maternal physical activity was favourably associated with slightly higher neonatal cord blood pH. Modest volumes of mid-pregnancy physical activity do not appear to be harmful and may enhance the condition of neonates at birth.
Data Accessibility Statement
Data generated and analysed for the current study are available from the corresponding author on reasonable request.
Restrained eating in Lebanese adolescents: scale validation and correlates
Background Restrained eating disorder is prevalent worldwide across both ethnic and different cultural groups, and most importantly within the adolescent population. Additionally, comorbidities of restrained eating present a large burden on both the physical and mental health of individuals. Moreover, literature is relatively scarce in Arab countries regarding eating disorders, let alone restrained eating, and among adolescent populations; hence, the aim of this study was to (1) validate the Dutch Restrained Eating Scale in a sample of Lebanese adolescents and (2) assess factors correlated with restrained eating (RE), while taking body dissatisfaction as a moderator between body mass index (BMI) and RE. Methods This cross-sectional study, conducted between May and June 2020 during the lockdown period imposed by the Lebanese government, included 614 adolescents aged between 15 and 18 years from all Lebanese governorates (mean age of 16.66 ± 1.01 years). The scales used were: the Dutch Restrained Eating Scale, the body dissatisfaction subscale of the Eating Disorder Inventory-Second version, the Rosenberg Self-Esteem Scale, the Beirut Distress Scale (for psychological distress), the Hamilton Anxiety Rating Scale and the Patient Health Questionnaire (for depression). Results The factor analysis yielded a one-factor solution with eigenvalues > 1 (variance explained = 59.65%; αCronbach = 0.924). Female gender (B = 0.19), higher BMI (B = 0.49), higher physical activity index (B = 0.17), following a diet to lose weight (B = 0.26), starving oneself to lose weight (B = 0.13), more body dissatisfaction (B = 1.09), and higher stress (B = 0.18) were significantly associated with more RE, whereas taking medications to lose weight (B = −0.10) was significantly associated with less RE. The interaction body mass index (BMI) by body dissatisfaction was significantly associated with RE; in the group with low BMI, higher body dissatisfaction was significantly associated with more RE. Conclusions Our study showed that the Dutch Restrained Eating Scale is an adapted and validated tool to be used among Lebanese adolescents and revealed factors associated with restrained eating in this population. Since restrained eating has been associated with many clinically-diagnosed eating disorders, the results of this study might serve as a first step towards the development of prevention strategies targeted towards promoting a healthy lifestyle in Lebanese adolescents.
Background
The term "Eating disorders" represents multiple serious conditions characterized by disordered eating behaviors negatively impacting the physical and mental health of a person, as well as his/her ability to properly function [1]. Eating disorders rank third in terms of chronic diseases [2], are increasing in adolescents [3], are more present in Western compared to Asian countries [4], and are more frequent in females than in males [4].
Among eating disorders, restrained eating (RE) is defined as a behavior "to restrict food intake deliberately in order to prevent weight gain or promote weight loss" [5]. However, some studies indicated that episodes of restrained eating were followed by time intervals of disinhibition towards eating and, consequently, weight gain [6,7]. Moreover, other studies indicated the possibility of stress triggering these alternating restrained/disinhibited eating episodes [8,9], hence forming a vulnerable weight cycle [9]. Hormonal changes, along with physical and behavioral changes in adolescents, are important factors that might influence the development of restrained eating in these individuals, thus making it a frequently reported eating disorder in adolescents [10]. Multiple other factors (demographic, social, psychological, etc.) have been shown to be associated with restrained eating as well.
Sociodemographic factors
Females are generally influenced by their body image more than males [11], with adolescent females being less happy about their bodies than adolescent males [20]. When comparing boys and girls with the same body mass index (BMI), boys showed more satisfaction with their bodies, while girls were more likely to attempt weight loss maneuvers [11].
Physical activity
A correlation seems to exist between physical activity and restrained eating. The results of a previous study indicated that girls at high risk of developing an eating disorder performed more physical activity, with the goal of losing weight [12]. Indeed, more eating restriction was observed in adolescents who practiced more physical activity [12].
Social factors
Social factors include family, parents and media. A previous study demonstrated that teasing by family and friends, as well as internalized weight stigma, especially that related to weight and body shame and guilt, are correlated with eating disorders [12]. Parents, family and media can lead the individual to restrained eating by influencing him/her to reach the "perfect weight" [13,14]. An illustrative example is that excessive watching of reality and entertainment shows leads to disordered eating in women [15]. Media perfectionism and media pressure increase the occurrence of body dissatisfaction, which can lead to restrained eating [16]. Even though parents are a source of social support, they can increase teens' body dissatisfaction by criticizing their appearance, hence contributing to the development of restrained eating [17].
Psychological factors
Among those factors emerges body dissatisfaction. Its core "negative feeling about the body" affects restrained eating both directly and indirectly through different pathways [8,18,19]. Having thinner body ideals as a tool for better social recognition among peers is a well-established concept among adolescents, paving the way for more body dissatisfaction and increased efforts to lose weight [20]. Furthermore, adolescents with a high BMI are more likely to become dissatisfied with their current weight and prone to weight-decreasing maneuvers, particularly restrained eating [18].
Another psychological factor is depression. There is a positive correlation between depression and restrained eating in women, but not in men [18]. In women, the positive effect of depression on restrained eating suggests that focusing on food, eating or dieting may belong to a broader group of methods for escaping awareness of negative emotions [17]. In men, depression inhibits, rather than facilitates, restrained eating, suggesting that men do not use food to regulate their emotions and do not rely on diet to escape unwanted emotions [17,18].
Thus, based on previous studies [11,18,21,22], body dissatisfaction seems to play a moderating role between multiple factors (body mass index, physical activity, depression, self-esteem, gender) and restrained eating. Based on the aforementioned studies, the following trans-theoretical model of restrained eating was developed (Fig. 1).
Multiple scales are used for the assessment of restrained eating: the Dutch Eating Behavior Questionnaire, the Eating Inventory (EI), the Revised Restraint Scale (RS), and the Current Dieting Questionnaire [23]. The Dutch Eating Behavior Questionnaire (DEBQ), originally developed by Van Strien et al. in 1986, assesses restrained, emotional, and external eating behavior [24]. Subsequent exploratory and confirmatory factor analyses have generally supported the original three-factor structure [25]. The DEBQ has equivalent psychometric properties and factor structure in men and women and across the full range of weight categories. In this study, only the restrained eating subscale was used. The Dutch Restrained Eating Scale (DRES) has been validated in different languages in adolescent populations, mainly French [26], Maltese [27] and Spanish [28], with the last two studies validating the DRES exclusively in females [27,28]. A study conducted in Lebanon about restrained eating validated the Arabic version of the Dutch Restrained Eating Scale in adults [29].
Restrained eating disorder is prevalent worldwide across both ethnic and different cultural groups [30] and most importantly within the adolescent population [31]. Additionally, comorbidities of restrained eating present a large burden on both the physical and mental health of individuals [32]. Moreover, literature is relatively scarce in Arab countries regarding eating disorders, let alone restrained eating, and among adolescent populations; hence, the aim of this study was to (1) validate the Arabic version of the Dutch Restrained Eating Scale and (2) assess factors associated with restrained eating among a sample of Lebanese adolescents, while taking body dissatisfaction as a moderator between BMI and RE.
Study design
This cross-sectional study, conducted between May and June 2020 during the lockdown period imposed by the Lebanese government, included 614 adolescents aged between 15 and 18 years old from all Lebanese governorates (Beirut, Mount-Lebanon, South, North, Bekaa). Our sample was chosen using the snowball technique; the research team contacted adolescents in their contact lists from different schools; those students were instructed to forward the link to the questionnaire to their classmates via the WhatsApp application. The first page of the questionnaire included an explanation of the study topic and objective, a statement ensuring the anonymity of respondents and an instruction for the student to get his/her parents' approval before participation. The student had to select the option stating "I got my parents' approval and I consent to participation in this study" to be directed to the questionnaire.
The mean age of the participants was 16.66 ± 1.01 years, with 76.1% females. The mean house crowding index was 0.97 ± 0.51. More details about the students can be found in Table 1. The mean restrained eating score in the total sample was 26.32 ± 9.43.
Minimal sample size
Since the Dutch Restrained Eating Scale contains 10 items, a sample of 100 adolescents was deemed necessary to conduct a factor analysis (10 participants per 1 scale item according to Comrey and Lee) [33].
Questionnaire
The first part of the questionnaire contained sociodemographic information about the participants (age, gender, governorate, current weight and height). The household crowding index, reflecting the socioeconomic status of the family [34], is the ratio of the number of persons living in the house over the number of rooms in it (excluding the kitchen and the bathrooms). The physical activity index is computed by crossing the intensity, duration, and frequency of daily activity [35].
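Both derived measures are simple to compute; the sketch below shows one reading of them. The product form for the physical activity index is an assumption on our part, since the text describes it only as the cross result of intensity, duration, and frequency.

```python
def crowding_index(persons: int, rooms: int) -> float:
    """Household crowding: persons per room (kitchen and bathrooms excluded)."""
    return persons / rooms

def physical_activity_index(intensity: float, duration: float,
                            frequency: float) -> float:
    """Cross of intensity, duration and frequency, read here as a product
    (an assumption; the scoring of [35] may combine the terms differently)."""
    return intensity * duration * frequency

# Example: 5 people living in 4 rooms -> crowding index of 1.25
print(crowding_index(5, 4))
```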
The second part included the scales used in this study:
Dutch Restrained Eating Scale
It is composed of 10 questions [36] rated from never (1 point) to always (5 points). Higher scores reflect more restrained eating (Cronbach's α in this study = 0.924).
Body dissatisfaction subscale of the Eating Disorder Inventory-Second version (EDI-2)
It is composed of nine items, scored from 0 (sometimes/rarely/never) to 3 (always). Higher scores define more body dissatisfaction (Cronbach's α in this study = 0.812) [37].
Rosenberg Self-Esteem Scale
This scale is used to assess self-esteem [38]. It includes ten items measured on 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree). Higher scores reflected higher self-esteem (Cronbach's α in this study = 0.776).
Beirut Distress Scale (BDS-10)
This scale, developed in Lebanon [39], was used to assess the intensity of distress. It is composed of ten questions. The points range from 0 (never) to 4 (always). A higher score indicates higher perceived distress (Cronbach's α in this study = 0.826).
Hamilton Anxiety Rating Scale (HAM-A)
It is composed of fourteen items rated on a 5-point Likert scale ranging from 0 (not present) to 4 (very severe) [40]. Higher scores mean more anxiety. This scale is validated in the Lebanese population [41] (Cronbach's α in this study = 0.891).
Patient Health Questionnaire (PHQ-9)
This nine-item scale was used to assess depression [42] and is validated in Lebanon [43]. Scores range from 0 (not at all) to 3 (nearly every day). Higher scores indicate higher rates of depression (Cronbach's α in this study = 0.835).
The last part of the questionnaire included general questions retrieved from a previous study [44] about methods to lose weight, dieting, food, external pressures to go on a diet, abuse and family history of eating disorder (i.e. "do you always hear a comment about your weight?", "do your relatives comment on your weight?", "do you feel pressured to go on a diet?", "have you felt pressured by the media to change your diet?", "have you followed any diet to lose weight?"). These variables were classified as categorical variables (yes/no type of answer).
Forward and back translation
One bilingual psychologist, whose mother tongue is Arabic, completed the forward translation. The backward translation was performed by another psychologist. The original and back-translated English versions were compared by one healthcare professional (a psychiatrist) for discrepancies, which were resolved by consensus [45-51].
Statistical analysis
The SPSS software version 23 was used to conduct data analysis. Weighting to the general population was done based on age, gender and governorate. The total sample (n = 614) was divided into two separate subsamples for the validation of the Dutch Restrained Eating Scale (Subsample 1: n = 150 for the factor analysis (FA); Subsample 2: n = 464 for the confirmatory factor analysis (CFA)). However, the whole sample (n = 614) was used to evaluate factors correlated with restrained eating.
FA was first executed on subsample 1. The Kaiser-Meyer-Olkin (KMO) index and Bartlett's test of sphericity confirmed the sample's adequacy. Factors retained corresponded to those with an eigenvalue > 1. Then, a CFA was carried out on subsample 2 using the Statistica software, taking the solution that was obtained in the FA. Several goodness-of-fit indicators were reported: the relative chi-square (χ2/df), the Root Mean Square Error of Approximation (RMSEA), the Goodness of Fit Index (GFI) and the Adjusted Goodness of Fit Index (AGFI). The value of χ2 divided by the degrees of freedom (χ2/df) has low sensitivity to sample size and may be used as an index of goodness of fit (cut-off values: < 2–5). The RMSEA tests the fit of the model to the covariance matrix. As a guideline, values of < 0.05 indicate a close fit and values below 0.11 an acceptable fit. The GFI and AGFI are chi-square-based calculations independent of degrees of freedom. The recommended thresholds for acceptable values are ≥ 0.90 [52]. Cronbach's alpha values ensured internal reliability of the scales.
The normality of distribution of the restrained eating score was confirmed via calculation of the skewness and kurtosis; values for asymmetry and kurtosis between −2 and +2 are considered acceptable to establish normal distribution [53]. These conditions consolidate the assumptions of normality in samples larger than 300 [54]. The Student t and ANOVA tests were used to compare two and three or more means, respectively. Pearson correlation was used to correlate two continuous variables. A multi-stage set of linear regressions was conducted, taking the restrained eating score as the dependent variable and all variables that showed p < 0.2 in the bivariate analysis as independent variables. Sociodemographic characteristics were entered at the first step; at the second step, practices followed by the participants (vomiting/starving/medications to lose weight, etc.) were entered as independent variables; at the third step, anxiety, depression, stress and body dissatisfaction were entered into the model; finally, the interaction BMI by body dissatisfaction was entered as an independent variable. P < 0.05 was considered significant.
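The hierarchical models were fitted in SPSS; an equivalent sketch of the final step (Model 4) in Python with statsmodels is shown below, with hypothetical column names and the BMI-by-body-dissatisfaction interaction expressed in formula syntax.

```python
import statsmodels.formula.api as smf

# `df` is assumed to hold one row per adolescent; column names are illustrative.
model4 = smf.ols(
    "restrained_eating ~ female + bmi + activity_index + diet_to_lose_weight"
    " + starving_to_lose_weight + meds_to_lose_weight"
    " + body_dissatisfaction + stress"
    " + bmi:body_dissatisfaction",   # interaction entered at the final step
    data=df,
).fit()
print(model4.summary())  # coefficients analogous to the Bs in Table 5
```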
Results
Factor Analysis (FA) Subsample 1 (n = 150) was used for the factor analysis; all items of the restrained eating scale were extracted.
Confirmatory factor analysis
A confirmatory factor analysis was run on subsample 2 (n = 464), using the one-factor structure obtained in sample 1.
Bivariate analysis
Higher anxiety, depression, body dissatisfaction, higher physical activity index and BMI were significantly associated with more restrained eating (Table 3). Females, participants following a diet to lose weight, those who starved themselves to lose weight, and those who felt pressured by the media to lose weight had significantly more restrained eating (Table 4). (Restrained eating scores were calculated from the Dutch Restrained Eating Scale; numbers displayed in the table represent correlation coefficients obtained from the Pearson correlation test.)
Multivariable analysis
The results of the hierarchical linear regressions are shown in Table 5. In the final model that included all independent variables, female gender (B = 0.19), higher BMI (B = 0.49), higher physical activity index (B = 0.17), following a diet to lose weight (B = 0.26), starving oneself to lose weight (B = 0.13), more body dissatisfaction (B = 1.09), and higher stress (B = 0.18) were significantly associated with more restrained eating, whereas taking medications to lose weight (B = −0.10) was significantly associated with less restrained eating (Table 5, Model 4). The interaction BMI by body dissatisfaction was significantly associated with restrained eating; in the group with low BMI, high body dissatisfaction was significantly associated with more restrained eating (Fig. 2).
Scale validation
In our study, the DRES converged over a one-factor solution, similarly to the studies conducted among Lebanese adults [29] and Maltese female adults [27]. The Cronbach's alpha in our study was 0.924, reflecting excellent internal consistency. Equivalent results were observed by Van Strien et al. for the original DRES, with alphas between 0.8 and 0.95 [29]; Cronbach's alpha was also 0.87 in the Maltese version [27] and 0.9 in the Spanish version [28]. However, the RMSEA value obtained in the confirmatory factor analysis is borderline and might not indicate adequate fit. Thus, the Arabic version of the DRES should be considered preliminarily validated, pending further psychometric evaluation.
Correlates of restrained eating
To our knowledge, this is the first study done in Lebanon assessing factors associated with restrained eating in adolescents. Results showed that female gender, having a higher BMI, practicing more physical activity, following a diet to lose weight, starving oneself, and higher body dissatisfaction and stress were associated with more restrained eating. Taking medications to lose weight was associated with less restrained eating. The interaction BMI by body dissatisfaction turned out to be significantly associated with restrained eating.
Gender
This study's results showed that female gender was significantly associated with more restrained eating, in line with previous studies [11,55]. This might be due to the social pressure exerted on females, and the expectation of an ideal body, which leads girls to restrain from eating [11]. Moreover, with older age, a higher rate of body dissatisfaction was observed in females compared to their male counterparts [11]. While girls have more plans to decrease their weight, boys focus more on increasing weight and muscle building [11]. Girls might be more concerned with comparing their bodies to the ideal ones shown in media [11]. They seek the stereotype of beautiful thin women in order to confirm that they meet the social expectation of femininity, consequently leading them to more restrained eating [11].
Body Mass Index
In our study, a significant positive relationship between BMI and restrained eating was established, in line with previous findings [10]. The higher the BMI of an adolescent, the higher the risk of dieting [10]. The mechanism by which BMI leads to restrained eating is not fully explained [10]. When individuals reach puberty, weight gain happens due to hormonal changes; individuals will strive to become thinner and will go on a diet.
Body dissatisfaction
Our results showed a positive association between body dissatisfaction and restrained eating, in line with previous studies [8,18,19]. This could be explained by the fact that adolescents who have a high BMI are ashamed of their bodies, developing a negative image about themselves, resulting in restrained eating in order to decrease their weight [18]. In addition, our results showed that the interaction BMI by body dissatisfaction was significantly associated with restrained eating; in the group with low BMI, high body dissatisfaction was significantly associated with more restrained eating. To our knowledge, in adults, higher weight suppression (WS), defined as the difference between maximal and current weight, is associated with decreased leptin and loss of control eating [56] and more body dissatisfaction [57]. The social stigma associated with obesity may cause shame, guilt and body dissatisfaction [58]. This is clinically important since body dissatisfaction is an unpleasant result of obesity, which serves as a motivation to follow unhealthy eating behaviors and weight control practices [59].
Physical activity
Our results showed that higher physical activity was related to more restrained eating, in line with a previous study showing that exercise can favorably modify short-term appetite control [60]. It is important to note that physical activity does not always imply going to a gym; during the COVID-19 pandemic, adolescents more frequently exercise outdoors, or at home with online/internet videos. The interplay between restrained eating and physical activity in determining energy intake after exercise remains unclear and may be related to disinhibition (loss of restraint) levels [61]. Restrained eaters tend to decrease their energy intake after exercise, which creates a negative energy balance; the opposite is true of unrestrained eaters, who actually increase their energy intake after physical activity [62]. More studies are recommended to clarify this question.
Following a diet
Our study showed a positive association between following a diet and restrained eating, in line with previous findings [63]. People who follow a diet learn self-control and have experienced previous success in this regard, which makes them successful restrained eaters [64].
Starving oneself
Our study demonstrated a significant positive association between starving oneself and restrained eating. In the general population, starving oneself does not precede restrained eating; in fact, previous findings [65,66] showed that starving oneself and eating restriction are two behaviors that occur at the same time in order to lose weight. Conversely, another study demonstrated that dieting and restrained eating increase starvation and food cravings [67]. Therefore, our results should be interpreted with caution.
Stress
In our study, a positive relationship was established between stress and restrained eating. The literature is controversial in this regard; while previous studies showed that overeating can be used as compensation for stress and negative mood [68,69], other findings showed that stress can result in undereating [70]. More studies are needed to clarify this association.
Medication to lose weight
Our study showed a negative association between restrained eating and the use of medication to lose weight. No previous studies regarding this association have been conducted in adolescents. We hypothesize that this negative association could be due to the fact that when adolescents take medications to lose weight, they rely on them and do not restrict their eating, because they think that medications alone are more than enough to control their weight [71]. At the same time, unrestricted eating does not imply overeating. These results should also be interpreted with caution.
Limitations
This is a cross-sectional study; therefore, we cannot conclude causality. It is important to point out that the confirmatory factor analysis results did not show adequate fit indices. Diagnosis was made by means of a questionnaire (less accurate) rather than a clinical interview. Females outnumbered males; furthermore, the majority of adolescents recruited attended school and were from Mount Lebanon. The recruitment strategy (snowball technique) does not guarantee representativeness or the extrapolation of our results to the general population. The questions used in the chosen scales addressed females more than males (questions referring to parts of the body from the waist down). Some scales (self-esteem and body dissatisfaction) have not been validated in Lebanon so far. A residual confounding bias is also possible, since not all factors associated with restrained eating were considered in this study. Finally, we did not add another scale assessing restrained eating to help estimate the construct validity of the DRES in Lebanon. Future studies taking all these limitations into consideration are needed.
Conclusions
In conclusion, our study presents preliminary results for the validation of the Dutch Restrained Eating Scale among Lebanese adolescents and revealed factors associated with restrained eating in this group. This study, by validating the DRES in Lebanese adolescents, would help clinicians detect harmful eating practices (restrained eating) in persons within this age group. The results of this study might serve as a first step towards the development of prevention strategies targeted towards promoting a healthy lifestyle in Lebanese adolescents.
Attitudes of physicians towards COVID‐19 vaccines and reasons of vaccine hesitancy in Turkey
Abstract Aim The development of safe and effective vaccines against SARS‐CoV‐2 and successful implementation of a global vaccination programme are prerequisites for a return to normal living conditions. Despite intensive research efforts, vaccine hesitancy and misinformation in many countries present substantial obstacles to achieving sufficient coverage and community immunity. Here, we report the findings of a survey regarding the likelihood of COVID‐19 vaccine acceptance in a sample of physicians in Turkey. Materials and methods An anonymous web‐based survey was prepared and sent to medical doctors randomly selected from the seven regions of Turkey via a text message sent to their mobile phones. Demographic data were collected, including sex (male or female), medical specialty, age, professional experience, COVID‐19 history, knowledge of COVID‐19 vaccines and behaviours related to vaccines against COVID‐19 and other diseases. The survey was conducted over a 1‐week period in December 2020. Results A total of 1,557 medical doctors responded to the survey. A total of 1,065 (68.4%) respondents were considering COVID‐19 vaccination, 374 (24%) were undecided and 118 (7.6%) did not want to be vaccinated. In multivariate analysis, male gender, absence of a history of COVID‐19 infection, and having sufficient information about the vaccine were identified as predictors of willingness to be vaccinated. Conclusion Although trials tend to focus on the efficacy of vaccines, the results of this study indicated that the most important factor affecting the preference for a given vaccine among Turkish physicians is safety.
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first reported in Wuhan, China, in December 2019, has subsequently spread around the world and was declared a pandemic by the World Health Organisation in March 2020. 1
Although no specific treatments have yet been developed for coronavirus disease 2019 (COVID-19) caused by SARS-CoV-2, a number of adjunctive therapies have been used, such as antiviral agents, systemic corticosteroids, low-molecular-weight heparin (LMWH), convalescent plasma, and mesenchymal stem cell therapy, as well as investigational therapies such as interferon-α, ribavirin, intravenous immunoglobulin, etc. 2 Successful vaccination strategies have already provided significant protection against at least 31 human diseases, which has had an extraordinary impact on human health worldwide. 3 The development of safe and effective vaccines against SARS-CoV-2 and successful implementation of a global vaccination programme are prerequisites for a return to normal living conditions. 4 More than 90 vaccines against SARS-CoV-2 are currently under development by research teams in both academia and industry across the world. 5 Despite these intensive research efforts, vaccine hesitancy and misinformation present substantial obstacles to achieving sufficient coverage and community immunity in many countries. 6,7 Due to the accelerated vaccine approval processes necessitated by the urgency of this pandemic, anti-vaccine propaganda on social media may have led to increased suspicion and negative attitudes toward vaccination among both medical professionals and the general population.
Here, we report the findings of a survey regarding the likelihood of COVID-19 vaccine acceptance and hesitancy in a sample of physicians in Turkey.
Study design and data collection
An anonymous web-based survey was prepared and sent to medical doctors randomly selected from the seven regions of Turkey via a text message sent to their mobile phones.
Demographic data were collected, including sex (male or female), medical specialty, age (< 30, 30-40, 40-50, 50-60, > 60 years), geographic location, professional experience, type of hospital, COVID-19 history, knowledge of COVID-19 vaccines and behaviours related to vaccines against COVID-19 and other diseases. The participants were asked whether they intended to vaccinate themselves or their families (if applicable). The survey was conducted over a 1-week period in December 2020.
Statistical analysis
Data were collected via a web-based platform (Google Surveys®); percentage and frequency data were obtained.
Statistical analyses were performed using SPSS for Windows® software (version 22.0; SPSS Inc., Chicago, IL, USA). All variables were compared by the Chi-square test. In all analyses, p < 0.05 was taken to indicate statistical significance. Multinomial logistic regression analysis was performed to identify independent factors associated with acceptance of COVID-19 vaccination. Bonferroni-corrected post-hoc pairwise comparisons were made to determine from which group the significant relationship originated.
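For orientation, the two main analysis steps described above (a chi-square test of independence and a multinomial logistic regression) can be sketched in Python as below. This is a minimal illustration rather than the study's actual analysis code: the file name and the column names (gender, had_covid, willingness) are hypothetical placeholders.

import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# One row per respondent; 'willingness' coded as accept / undecided / refuse
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Chi-square test of independence between gender and willingness
table = pd.crosstab(df["gender"], df["willingness"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")

# Multinomial logistic regression: willingness as the outcome
y = df["willingness"].astype("category").cat.codes
X = sm.add_constant(
    pd.get_dummies(df[["gender", "had_covid"]], drop_first=True).astype(float)
)
print(sm.MNLogit(y, X).fit().summary())

# Post-hoc pairwise chi-square comparisons would then have their p values
# multiplied by the number of comparisons (Bonferroni correction).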
RESULTS
While 43.9% of the participants had received information about COVID-19 vaccine types and development technologies from the literature and lectures, 41% had received information from the press and 12% from the Ministry of Health. A total of 1,317 (84.6%) of the participants felt that phase III trials were required before commencing population vaccination programmes, while 156 (10%) felt that accelerated approval was sufficient in this case. A statistically significant relationship was found between willingness to be vaccinated and gender (χ2 = 24.331; p < 0.001), professional seniority in years (χ2 = 98.417; p < 0.001), and specialty branch (internal and surgical units versus basic sciences) (χ2 = 15.431; p < 0.001). Bonferroni-corrected post-hoc pairwise comparisons were made to determine from which group the significant relationship originated (Table 2).
A total of 1,065 (68.4%) respondents were considering COVID-19 vaccination, 374 (24%) were undecided and 118 (7.6%) did not want to be vaccinated (Figure). Bonferroni-corrected post-hoc pairwise comparisons were made to determine from which group the significant relationship originated; the rate of those who preferred a vaccine on the basis of its safety data was higher than in all other groups (Table 3).
Logistic regression analysis based on willingness to be vaccinated was used to construct the optimal model. In this model, males were 2.051 times more likely to be willing to be vaccinated than females (p = 0.001). Duration of professional experience was also a significant predictor of willingness to be vaccinated (p < 0.05).
Participants with 6-10 years of professional experience were 4.151 times more likely to be willing to be vaccinated than those with ≤5 years (p = 0.004), those with 11-15 years were 4.800 times more likely (p = 0.001), and those with >15 years were 8.540 times more likely (p = 0.001).
Participants without a history of COVID-19 infection were 3.262 times more likely to be willing to be vaccinated than those who had had COVID-19 (p < 0.001). Physicians who knew the vaccine content were 1.944 times more likely to be willing to be vaccinated than those who did not (p = 0.033).
Respondents who intended to vaccinate their families were 27.193 times more likely to be willing to be vaccinated themselves than those who were undecided (p < 0.001) (Table 4).
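For readers unfamiliar with the reporting convention, multipliers such as "2.051 times" are odds ratios, i.e., the exponential of the fitted logistic regression coefficient for that predictor. A minimal sketch, reusing the odds ratio reported above (the standard error below is a made-up placeholder):

import math

beta_male = math.log(2.051)   # an OR of 2.051 implies beta = ln(2.051)
print(math.exp(beta_male))    # -> 2.051, the odds ratio for male gender

se = 0.22                     # hypothetical standard error of beta
ci = (math.exp(beta_male - 1.96 * se), math.exp(beta_male + 1.96 * se))
print(ci)                     # 95% confidence interval for the odds ratio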
DISCUSSION
The main purpose of this study was to document and analyse the views of healthcare professionals in Turkey towards COVID-19 vaccines, with the ultimate goal of minimising anti-vaccination sentiments and prejudices. It will be necessary to determine the views of healthcare professionals regarding COVID-19 vaccines around the world, to better inform the public and to promote guidance by health authorities.
This study included 1,557 physicians, most of whom were senior specialists or lecturers, including professors and associate professors, which makes the findings presented here more compelling. In addition, the majority of the physicians had 15 years or more of clinical experience.
In a study conducted with 384 non-healthcare professionals in Turkey, the vaccine hesitancy rate was found to be 45.3%. 8 In our study, this rate was 31.6%, which was relatively low. However, considering that healthcare workers are at higher risk of COVID-19 than the general population, the vaccine hesitancy rate (31.6%) found in our study may still be higher than expected. A recently published Canadian study supported our prediction. 9 It reported that 19.1% of 2,761 healthcare workers scheduled by the government to receive the Pfizer-BioNTech mRNA vaccine refused to be vaccinated, 9 and that 74% of those who refused might change their opinions and accept vaccination in the future. 9 Janssens et al. evaluated willingness to vaccinate before and after the vaccination programme in a survey of healthcare workers. 10 They found that the rate of willingness to vaccinate increased significantly after vaccination compared to before (63.8% vs. 75.9%). 10 They also showed that the participants' concerns about side effects and long-term harm related to the vaccine decreased significantly after vaccination, which they considered to have contributed to the increased willingness to vaccinate. 10 Similarly, in the present study, the reason given by the majority of physicians for their opposition to vaccination was the low level of evidence and data quality in vaccine studies. We believe that this rate will decrease with the publication of the results of phase III trials.
Our study determined that a history of COVID-19 infection is an independent predictive factor that increases vaccine hesitancy. The rate of physicians with a history of COVID-19 infection was 17.1%, which may partly explain the relatively high vaccine hesitancy observed in our study.
The current study showed that among physicians, female gender might be a predictive factor for COVID-19 vaccine hesitancy. Dzieciolowska et al. obtained similar results in their study of healthcare workers, showing that the vaccine acceptance rate was higher among male healthcare workers. 9 Janssens et al. likewise showed that female gender was significantly associated with restricted willingness to vaccinate. 10 The results of all these studies suggest that female healthcare workers should be a particular focus of programs aimed at reducing vaccine hesitancy. Improving the design of clinical trials for existing vaccines, and sharing the data thereof instead of waiting for the results of new COVID-19 vaccine trials, will reduce the current uncertainty. In our study, the majority of physicians expressed a preference for the BioNTech and Sinovac vaccines, and it was striking that the most important factor affecting their preference was safety rather than efficacy data.
One limitation of this study was that it included a heterogeneous population of medical doctors from all surgical and internal specialties, rather than being limited to specialties related to COVID-19 treatment. In addition, other healthcare professionals and members of the general population were not included. Dror et al. reported that rates of vaccine hesitancy were higher among nurses, other medical workers and the general population than among physicians. 11 Therefore, studies including these populations are required. The best-known COVID-19 vaccines in Turkey (BioNTech, Sputnik V, Moderna, Sinovac, AstraZeneca and Oxford) were listed in the questionnaire. [12][13][14][15][16] No other vaccines were listed by name, which may represent a further limitation of the study. BioNTech and Sinovac were preferred by most of the respondents. These two vaccines were developed using completely different techniques, and the preference for them was most likely because the health authority procured the Sinovac vaccine from China and because the managers of BioNTech were of Turkish origin.
This study was planned before the commencement of a COVID-19 vaccination programme in Turkey, and the results revealed varying opinions about the vaccine among physicians; as mentioned above, the prevalence of prejudices and misconceptions is likely to be much higher in the general population. 11
CONCLUSION
This was the first study to evaluate attitudes towards COVID-19 vaccination among physicians in Turkey. By designing similar studies in other countries and evaluating the attitudes of healthcare professionals therein, health authorities will be able to develop more effective vaccination strategies and public education programmes pertaining to vaccination.
Health authorities must take measures to counteract anti-vaccine propaganda. Although trials tend to focus on the efficacy of vaccines, the results of this study indicated that the most important factor affecting the preference for a given vaccine among Turkish physicians is safety.
Table 2. Examination of the relationships between the status of the willingness to get vaccinated and some parameters.
Table 3. Examination of the relationships between the status of the willingness to get vaccinated and some parameters.
|
2021-05-27T06:19:25.881Z
|
2021-05-26T00:00:00.000
|
{
"year": 2021,
"sha1": "4d0b4d416988c3229974e38983e4a25b510064b2",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ijcp.14399",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a2ec86aa62343b20930666aa699530a4f3c65ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
256158002
|
pes2o/s2orc
|
v3-fos-license
|
A Dedicated 21-Plex Proximity Extension Assay Panel for High-Sensitivity Protein Biomarker Detection Using Microdialysis in Severe Traumatic Brain Injury: The Next Step in Precision Medicine?
Cerebral protein profiling in traumatic brain injury (TBI) is needed to better comprehend secondary injury pathways. Cerebral microdialysis (CMD), in combination with the proximity extension assay (PEA) technique, has great potential in this field. By using PEA, we have previously screened >500 proteins from CMD samples collected from TBI patients. In this study, we customized a PEA panel prototype of 21 selected candidate protein biomarkers, involved in inflammation (13), neuroplasticity/-repair (six), and axonal injury (two). The aim was to study their temporal dynamics and relation to age, structural injury, and clinical outcome. Ten patients with severe TBI and CMD monitoring, who were treated in the Neurointensive Care Unit, Uppsala University Hospital, Sweden, were included. Hourly CMD samples were collected for up to 7 days after trauma and analyzed with the 21-plex PEA panel. Seventeen of the 21 proteins from the CMD sample analyses showed significantly different mean levels between days. Early peaks (within 48 h) were noted with interleukin (IL)-1β, IL-6, IL-8, granulocyte colony-stimulating factor, transforming growth factor alpha, brevican, junctional adhesion molecule B, and neurocan. C-X-C motif chemokine ligand 10 peaked after 3 days. Late peaks (>5 days) were noted with interleukin-1 receptor antagonist (IL-1ra), monocyte chemoattractant protein (MCP)-2, MCP-3, urokinase-type plasminogen activator, Dickkopf-related protein 1, and DRAXIN. IL-8, neurofilament heavy chain, and TAU were biphasic. Age (above/below 22 years) interacted with the temporal dynamics of IL-6, IL-1ra, vascular endothelial growth factor, MCP-3, and TAU. There was no association between radiological injury (Marshall grade) or clinical outcome (Extended Glasgow Outcome Scale) with the protein expression pattern. The PEA method is a highly sensitive molecular tool for protein profiling from cerebral tissue in TBI. The novel TBI dedicated 21-plex panel showed marked regulation of proteins belonging to the inflammation, plasticity/repair, and axonal injury families. The method may enable important insights into complex injury processes on a molecular level that may be of value in future efforts to tailor pharmacological TBI trials to better address specific disease processes and optimize timing of treatments.
Introduction
Traumatic brain injury (TBI) is a complex and heterogeneous condition with a high mortality and burden of long-term sequelae. 1,2 Multi-modality monitoring, including intracranial pressure (ICP), cerebral perfusion pressure (CPP), brain tissue oxygenation, and brain energy metabolism (cerebral microdialysis; CMD), has been implemented in several neurointensive care (NIC) units to better understand the pathophysiological mechanisms and ultimately to give more refined treatments. [3][4][5] In addition to derangements in these physiological variables, several other post-traumatic injury processes may occur. In particular, the role of inflammation has received increased interest and understanding lately. [5][6][7][8] Severe TBI may induce both systemic and central nervous system (CNS) inflammatory responses, and these two phenomena may interact, for example, by activation and migration of systemic immune cells into the CNS, facilitated by concurrent blood-brain barrier (BBB) damage and activation of parenchymal astrocytes and microglia. 7,9,10 Neuroinflammation is considered a double-edged sword, with distinct beneficial, neuroprotective mechanisms as well as severe detrimental, neurotoxic effects. 7 There have been several attempts to influence the inflammatory response with pharmacological agents, such as steroids, sex hormones, statins, and cell-cycle inhibitors, but with limited therapeutic success. 11 There is now an increased interest in bedside monitoring of biomarker patterns, reflecting neuroinflammation, growth factors, and neuronal injury, to better understand the underlying disease processes on a molecular level and, ultimately, be able to give more tailored pharmacological agents taking into account both timing and molecular mechanisms. By using modern CMD catheters with larger cut-off membranes, it is possible to continuously sample proteins from cerebral interstitial fluid. [12][13][14][15][16] Together with the newly developed proximity extension assay (PEA) technique, 6 a very sensitive tool for detecting small quantities of protein in small volumes of fluid, it is now also possible to efficiently detect and study numerous brain protein biomarkers in patients with severe TBI. [17][18][19] In a recent study by our group, we analyzed 92 potential protein biomarkers of inflammation from CMD samples of 10 patients with severe TBI, using PEA technology. 6 In the present study, we customized a PEA panel prototype of 21 selected candidate proteins, involved in inflammation, neuroplasticity/-repair, and axonal injury, in another cohort of 10 TBI patients. The aim was to explore their temporal dynamics and relation to age, type of brain injury, and clinical outcome.
Patient population and management
The Department of Neurosurgery at the University Hospital in Uppsala, Sweden, provides neurosurgical care for a central part of Sweden with a population of ~2 million. Most patients are treated initially at local hospitals according to advanced trauma life-support principles and then transferred to Uppsala. This study included 10 patients with severe TBI, who were unconscious (Glasgow Coma Scale [GCS] Motor [GCS M] score <6), intubated and mechanically ventilated, and treated at our NIC unit with ICP and concurrent CMD monitoring of protein and metabolic biomarkers.
The management protocol has been described in detail in previous studies. 20,21 Treatment goals were ICP ≤20 mm Hg, CPP ≥60 mm Hg, systolic blood pressure >100 mm Hg, pO2 >12 kPa, arterial glucose 5-10 mmol/L, electrolytes within normal ranges, normovolemia, and body temperature <38°C. Unconscious (GCS M <6) patients were intubated, mechanically ventilated, and received an ICP monitor (intraparenchymal or an external ventricular drain [EVD]). Propofol was administered for sedation and morphine for analgesia. The head of the bed was elevated to 30 degrees. Intracranial lesions with significant mass effect were surgically evacuated. In situations of increased ICP despite basal NIC treatments, and when no mass lesion was present, cerebrospinal fluid (CSF) was drained with an EVD. If ICP was still refractorily elevated, a thiopental infusion was started and, thereafter, a decompressive craniectomy (DC) was performed as a last resort.
Clinical variables, radiological imaging, and outcome: data acquisition
Data on demographics, NIC admission variables, and treatments were extracted from the prospective Uppsala TBI register. 22 The first computed tomography (CT) scan after injury was analyzed according to the Marshall classification, 23 and a crude sorting of the most dominant cerebral injury visible on the first CT was also done for all patients. In cases where several types of injuries were equally present (e.g., traumatic subarachnoid hemorrhage, contusions, and subdural hematomas), the term "mixed" brain injury was used. Clinical outcome was evaluated 6 months post-injury, by specially trained personnel with structured telephone interviews, using the Extended Glasgow Outcome Scale (GOS-E), containing eight categories of global outcome, from death to upper good recovery. [24][25][26]
Cerebral microdialysis monitoring
The CMD procedure has been described in detail in previous studies by our group. 6,27 Briefly, the CMD catheter was inserted in conjunction with implantation of the ICP-monitoring device in the right frontal lobe (Fig. 1). The 71 High Cut-Off (100 kDa) CMD catheter was used with a membrane length of 10 mm (M Dialysis AB, Stockholm, Sweden). Artificial CSF was used as perfusion fluid, containing NaCl 147 mM, KCl 2.7 mM, CaCl2 1.2 mM, and MgCl2 0.85 mM, with the addition of 1.5% human serum albumin, at a perfusion rate of 0.3 µL/min delivered by a 106 Microdialysis pump (M Dialysis).
Sampling was started ~1 h after insertion of the CMD catheter to allow for normalization of changes caused by catheter implantation. Cerebral metabolites (glucose, pyruvate, lactate, glycerol, and glutamate) were measured hourly and continuously evaluated during routine clinical care. Further samples (~18 µL) for the multiplex PEA analysis were collected every third hour and stored at −70°C until protein biomarker analysis.
Protein biomarker analysis
Levels of 21 potential protein biomarkers in CMD samples were measured every third hour in pooled consecutive, hourly CMD samples by the multi-plex PEA panel that was designed, constructed, and validated by our group in collaboration with Olink Proteomics (Olink, 21-Plex Custom Made Panel; Olink Proteomics AB, Uppsala, Sweden), 28 based on the PEA results from our previously published TBI study. 6 In brief, 1 µL of CMD sample was incubated with a set of paired antibodies in which two oligonucleotide-conjugated antibodies bind to the same protein. The affinity binding of the antibodies brings the two attached oligonucleotides into proximity, allowing them to be extended by enzymatic DNA polymerization in the assay. The resulting double-stranded DNA was subsequently amplified and quantified by real-time quantitative polymerase chain reaction (qPCR) on a microfluidic PCR system (Fluidigm, San Francisco, CA).
Expression levels of the biomarkers were expressed as normalized protein expression (NPX). The NPX value is a relative unit of expression on a log2 scale, so that an increase of one unit represents a doubling of the protein concentration in the sample. Values less than or equal to the limit of detection (LoD) were set to the estimated LoD and included in all analyses. For each protein, the optimal dilution was used. Draxin, transforming growth factor alpha (TGF-α), granulocyte colony-stimulating factor (G-CSF), monocyte chemoattractant protein (MCP)-2, interleukin (IL)-1β, and urokinase-type plasminogen activator (uPA) were run at a 1:1 dilution and the others at 1:10. Characteristics of the selected proteins are described in Supplementary Table S1.
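Because NPX is a log2-scale relative unit, a difference in NPX converts directly into a fold change, and values at or below the LoD are floored as described. A minimal sketch of both conventions (the numbers are illustrative only):

import numpy as np

def npx_to_fold_change(npx_a, npx_b):
    """Fold change of sample A over sample B; +1 NPX = a doubling."""
    return 2.0 ** (npx_a - npx_b)

def floor_to_lod(npx_values, lod):
    """Values at or below the LoD are set to the estimated LoD."""
    return np.maximum(np.asarray(npx_values), lod)

print(npx_to_fold_change(7.0, 4.0))            # 8.0, i.e. 8-fold higher
print(floor_to_lod([1.2, 3.5, 0.4], lod=1.5))  # [1.5 3.5 1.5]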
Precision of the 21-plex panel, analyzed as average Intra %CV (percent coefficient of variation) and Inter %CV based on two control CMD samples measured in triplicate on each of the 10 PEA plates used, was found to be 8% (Intra %CV) and 13% (Inter %CV), respectively.
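The intra- and inter-assay precision figures quoted above can be approximated from the triplicate control measurements as sketched below; this is a simplified illustration of the idea (10 plates x 3 replicates), not Olink's exact validation procedure.

import numpy as np

def intra_cv(controls):
    """Mean within-plate CV (%), averaged across plates."""
    c = np.asarray(controls)
    return 100 * (c.std(axis=1, ddof=1) / c.mean(axis=1)).mean()

def inter_cv(controls):
    """Between-plate CV (%) of the plate means."""
    means = np.asarray(controls).mean(axis=1)
    return 100 * means.std(ddof=1) / means.mean()

# 10 plates x 3 replicates of one control sample (simulated demo data)
demo = np.random.default_rng(0).normal(100, 8, size=(10, 3))
print(f"Intra %CV: {intra_cv(demo):.1f}, Inter %CV: {inter_cv(demo):.1f}")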
Statistical analysis
The study aimed primarily 1) to describe the temporal dynamics post-injury of this subset of candidate biomarkers in cerebral interstitial fluid and, secondarily, 2) to describe their relation to age, structural injury, and clinical outcome.
The protein profile was determined as the first 40 samples within the first 7 days post-injury. Each sample was pooled from consecutive 3-h windows, and 40 samples hence represented 120 h (only three missing samples in total). For example, if the first CMD sample was acquired on the 23rd hour post-injury, then hours 23-26 made up the first of the 40 samples for that patient. The temporal dynamics for each biomarker were illustrated with line plots and statistically described using cubic beta splines. Cross-correlation analyses were performed to assess the inter-relation among the studied proteins. Time-series analyses, taking into account time, age, and outcome, were done with linear mixed models. Exploratory principal component analysis (PCA) was conducted to explore eventual biomarker clusters depending on age, structural injury (Marshall grade), and outcome. p values from each test were adjusted for multiple testing using the Benjamini-Hochberg approach. An adjusted p value <0.05 was considered statistically significant.
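Two steps of this pipeline lend themselves to a compact illustration: pooling hourly samples into consecutive 3-h windows anchored at each patient's first sample, and Benjamini-Hochberg adjustment of the p values. A minimal sketch, assuming a hypothetical column layout:

import numpy as np
import pandas as pd

def pool_3h_windows(samples, n_windows=40):
    """Average hourly values into consecutive 3-h windows, anchored at
    the patient's first available hour post-injury (as in the example)."""
    first = samples["hour_post_injury"].min()
    win = (samples["hour_post_injury"] - first) // 3
    pooled = samples.assign(window=win).groupby("window").mean(numeric_only=True)
    return pooled.head(n_windows)

def benjamini_hochberg(pvals):
    """Step-up Benjamini-Hochberg adjusted p values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    adjusted = np.empty(m)
    adjusted[order] = np.clip(scaled, 0, 1)
    return adjusted

print(benjamini_hochberg([0.001, 0.02, 0.03, 0.2]))  # [0.004 0.04 0.04 0.2]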
Ethics
The study was approved by the regional ethics board (2010/138 and 2010/138/1) and the Swedish Ethical Review Authority (2020-05462). Written informed consent was obtained either from the patients at follow-up or from their closest relative during NIC.
Results
Demographics, admission variables, treatments, and clinical outcome
The patient cohort is described in detail in Table 1. Briefly, 10 patients (8 males and 2 females) with severe TBI, with a mean age of 36 years (range, 11-71), were included. The injury panorama was equally divided between fall accidents and motor vehicle accidents (MVAs). Three of the patients were operated on because of epidural hematoma or contusions the first day post-injury. Three received an external ventricular drain on days 4-5 post-injury to lower ICP. Four patients developed a respiratory infection within the first 7 days.
[FIG. 1 caption fragment: The cerebral microdialysis catheter (red arrow) was located in the injured lobe, but in normal-appearing brain, distant from any hemorrhagic lesion (B). Case no. 9 exhibited a deep cerebral contusion (C). The CMD catheter (red arrow) was also located in the injured lobe, but in normal-appearing brain, distant from any hemorrhagic lesion (D). CMD, cerebral microdialysis.]
Temporal dynamics and cross-correlations of the cerebral microdialysis biomarkers
The temporal dynamics of the CMD proteins were evaluated during the first 7 days post-injury (Fig. 2; Table 2). Only cluster of differentiation 200 (CD200), macrophage inflammatory protein (MIP)-1β, repulsive guidance molecule A (RGMA), and vascular endothelial growth factor (VEGF) exhibited stable values throughout the study period. In general, levels varied between patients, but the overall temporal trends of the various biomarkers were strikingly similar (see Fig. 2 and Supplementary Figs. S1-S20). There was a significant decline in the log2-scaled ratio between IL-1β and IL-1ra (Fig. 4). Cerebral energy metabolites during this period are demonstrated in Supplementary Table S2.
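The peak categories summarized in Table 2 can also be expressed algorithmically. The sketch below classifies a single biomarker trajectory using the stated time windows; note that the prominence threshold is an assumption of this illustration, as the study's classification was descriptive rather than automated.

import numpy as np
from scipy.signal import find_peaks

def classify_trajectory(hours, values, prominence=1.0):
    """Early (<48 h), mid (48-96 h), late (96-150 h), biphasic (early and
    late), or stable (no peaks), following the definitions used in Table 2."""
    peaks, _ = find_peaks(np.asarray(values), prominence=prominence)
    t = np.asarray(hours)[peaks]
    early = bool(np.any(t < 48))
    mid = bool(np.any((t >= 48) & (t < 96)))
    late = bool(np.any((t >= 96) & (t <= 150)))
    if early and late:
        return "biphasic"
    return "early" if early else "mid" if mid else "late" if late else "stable"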
Cross-correlations among the CMD proteins were assessed, as demonstrated in Figure 5. IL-1β and IL-6 were strongly associated with each other. JAM-B, BCAN, and NCAN also exhibited strong mutual associations. DKK1 was inversely associated with IL-1β, IL-6, JAM-B, and G-CSF.
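A simplified stand-in for such a cross-correlation analysis is a zero-lag Pearson correlation matrix computed across the pooled time points; the published analysis may additionally have considered time lags, and the protein column names below are illustrative.

import pandas as pd

def protein_correlations(ts):
    """Pairwise Pearson correlations between protein time series.
    ts: DataFrame with one row per 3-h window, one NPX column per protein."""
    return ts.corr(method="pearson")

# corr = protein_correlations(ts)
# corr.loc["IL-1b", "IL-6"]  # expected strongly positive (cf. Fig. 5)
# corr.loc["DKK1", "IL-6"]   # expected negative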
Cerebral microdialysis biomarkers in relation to age, type of brain injury, and clinical outcome
In PCA analyses, there was no specific CMD protein pattern in relation to age, structural injury (Marshall grade), or clinical outcome (Fig. 6). In time-series analyses, there was an age interaction, such that younger patients tended to exhibit lower IL-1ra and IL-6 in the later course, but higher MCP-3 and VEGF in the early course and higher TAU in the late course (Fig. 7). In similar time-series analyses, there was no interaction of clinical outcome for any of the CMD proteins (data not shown).
[Table 1 fragment: MD-probe location: normal brain (n = 8), contusion (n = 2); GCS M at admission.]
Discussion
A second generation of membranes for CMD catheters, with larger pores, has made it possible to reliably capture proteins from inside the BBB. These proteins can be measured using multi-plex assays of the CMD samples so that these potential biomarkers can be studied in great detail. In this study, we evaluated the usefulness of a novel 21-plex protein assay for candidate protein biomarkers involved in neuroinflammation and cell survival in CMD samples from patients with severe TBI. The main findings are a description of interesting protein dynamics that are hypothesis-generating regarding different aspects of injury development and recovery in the acute phase after TBI. There was a striking similarity between individual patients regarding overall temporal protein level profiles, indicating that the results could be generalized to other patients with TBI. We believe that detailed protein profiling of cerebral CMD samples will increase our knowledge about important secondary injury mechanisms, which may be crucial for development of novel therapeutics and better disease characterization.
Cell adhesion molecules
Cell adhesion molecules are of importance in the CNS regarding neuronal migration, axon-bundle formation, synapse formation, and formation of glial networks. This complex network of adhesion lays the structure for brain morphology as well as coordination of brain functions. 29,30 Two cell adhesion molecules were investigated in this study. First, the well-known protein TAU was one of the proteins in our panel. TAU is known to be involved in microtubule assembly and stabilization of neurons and axons. 31 TAU has been proposed as a potential biomarker for brain injury in CSF 32 and CMD samples. 33,34 Consistent with previous studies, 34 we found a temporal pattern characterized by an early peak in most patients, and almost all dropped in mid-phase, with a slight increase during the last 2 days of measurement. This early increase likely reflected the immediate primary brain injury. Increased TAU has previously been associated with focal lesions, 33 but we did not find any association between TAU and Marshall grade. Magnoni and colleagues 34 previously found a significant association between higher CMD-TAU and worse outcome, but this was not observed in our study, possibly attributable to patient heterogeneity and a slightly smaller patient cohort in our material (n = 10 vs. 16). Second, we also studied JAM-B. JAM-B is an endothelium-specific adhesion molecule predominantly localized in the CNS. It has been associated with certain BBB functions, such as infiltration of cells into the parenchyma, 35 and may reflect a compromised BBB. 36 In this study, TBI patients exhibited early JAM-B elevations, which steadily decreased during the first 3 days in almost all patients, possibly reflecting gradual BBB recovery.
[Table 2 footnote: Early peak was defined as within 48 h post-injury; mid-peak, 48-96 h post-injury; late peak, 96-150 h post-injury; biphasic, peaks in both the early and late phases; stable, no peaks. BCAN, brevican; G-CSF, granulocyte colony-stimulating factor; IL, interleukin; JAM-B, junctional adhesion molecule B; NCAN, neurocan; TGF-α, transforming growth factor alpha; CXCL, C-X-C motif chemokine ligand; DKK1, Dickkopf-related protein 1; IL-1ra, interleukin-1 receptor antagonist; MCP, monocyte chemoattractant protein; CCL, chemokine (C-C motif) ligand; uPA, urokinase-type plasminogen activator; NFH, neurofilament heavy chain; CD200, cluster of differentiation 200; MIP, macrophage inflammatory protein; RGMA, repulsive guidance molecule A; VEGF, vascular endothelial growth factor.]
[FIG. 4 caption: Temporal course of the log2 IL-1β/IL-1ra ratio. NPX values for IL-1β and IL-1ra were centered on each patient's mean to better show the similarity of the decrease. The ratio between the centered IL-1β and IL-1ra values was calculated and plotted on a log2 scale. A fixed-effect regression line is shown in black. The ratio shows a strong, significant decrease post-injury (p < 10⁻¹⁶) of −0.037/h (log2 scale). IL, interleukin; IL-1ra, interleukin-1 receptor antagonist; NPX, normalized protein expression.]
Cytokines and chemokines
Cytokines are a large family of small proteins involved in cell signaling, which consist of interleukins, interferons, chemokines, lymphokines, tumor necrosis factors, and the colony-stimulating factors (hematopoietic growth factors).
Hematopoietic growth factors may exert neuroprotective effects, such as neural tissue repair and neurovasculogenesis. 37 G-CSF is one of these factors and is known to be involved in neuronal and endovascular regeneration after injury. 38 Here, we observed that G-CSF was immediately elevated and gradually decreased. The early increase in G-CSF could be a response to the initial injury, followed by a normalization. This temporal pattern is also consistent with earlier studies by Helmy and colleagues. 16,39 IL-1β is a proinflammatory cytokine, which seems to exert a negative effect on the brain after TBI. 40 Consistent with both experimental 41,42 and clinical studies, 16,43,44 IL-1β peaked early and gradually decreased in our material. IL-1ra is the endogenous antagonist that exerts opposite effects compared to IL-1α and IL-1β. IL-1ra is anti-inflammatory, and several studies suggest that it may ameliorate neuroinflammation 45,46 and benefit neurological recovery 40 after TBI. In this study, IL-1β and IL-1ra showed inverse trends over the temporal course, with the ratio between the two decreasing steadily with each day post-injury, indicating a gradual shift toward anti-inflammation. Although previous studies have associated IL-1β and IL-1ra with worse and better clinical outcome, respectively, this was not replicated in this study. However, we advocate further exploration of the usefulness of the IL-1β/IL-1ra ratio as an inflammatory index in NIC.
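The IL-1β/IL-1ra ratio underlying Figure 4 is straightforward to reproduce in principle: because NPX is already log2-scaled, a difference of NPX values is a log2 concentration ratio, and each patient's values are first centered on that patient's own mean. A minimal sketch, assuming a hypothetical data layout:

import pandas as pd

def centered_log2_ratio(df):
    """log2(IL-1b/IL-1ra) per sample after per-patient mean-centering.
    Assumed columns: patient, hour_post_injury, il1b_npx, il1ra_npx."""
    cols = ["il1b_npx", "il1ra_npx"]
    centered = df.groupby("patient")[cols].transform(lambda x: x - x.mean())
    return centered["il1b_npx"] - centered["il1ra_npx"]

# Regressing this ratio on hour_post_injury with a fixed effect for time
# would estimate the slope reported in Figure 4 (about -0.037 per hour).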
IL-6 mostly exerts a proinflammatory effect after TBI. Consistent with previous clinical reports, 14,16,39,44,47 IL-6 peaked early and gradually declined in most patients in the current study. IL-6 is known to be stimulated by IL-1β and, in line with this, exhibited a temporal trend similar to that of IL-1β; the cross-correlation analysis also demonstrated a strong positive association between these two interleukins. Reported associations between CNS IL-6 after TBI and outcome are inconsistent. One CMD study found that elevated IL-6 correlates with more favorable outcome, 48 whereas CSF studies generally indicate the opposite association, 5 and no association with outcome was found in our CMD data.
IL-8, also called CXCL8, is a chemokine that attracts primarily neutrophils, but also other granulocytes. High CMD levels have been observed in our previous TBI study, as well as in a study by Helmy and coworkers. 6,16 The current results confirm these earlier findings, and we found a biphasic temporal pattern. We interpret the early peak as signaling for chemotaxis and the later peak as connected to the phagocytosis of cellular debris. 6 CXCL10 is a chemokine that is produced in many cell types as a response to interferon-γ and is responsible for macrophage and microglia recruitment into injured tissue. 49 We found a peak around day 3 post-injury in the current study, which is in line with previous CMD studies. 6,16 This peak likely indicates the time point when the brain parenchyma is under stress and signals to the target cells to migrate to the injured area. 50 As a potential target for anti-inflammatory intervention in TBI, the delayed peak of CXCL10 appears attractive in comparison to, for example, IL-1β. 6 Members of the MCP family form a major component of the CC family of chemokines and are considered the principal chemokines involved in the recruitment of monocytes/macrophages and activated lymphocytes. 51 Especially MCP-2 (chemokine [C-C motif] ligand [CCL] 8), but also MCP-3 (CCL7), showed a steady increase over time. These two chemokines have not been widely studied in the context of brain injuries, but in an experimental model of endotoxin-induced encephalitis, the dynamic pattern was the opposite, with an early peak and rapid decrease in concentration. 52 In the present data, the increase over time could be a sign that the tissue continues to be under stress and is signaling to the immune system for help to resolve the situation.
MIP-1β (CCL4) is produced by macrophages and microglia in response to tumor necrosis factor alpha and IL-1β stimulation and may, among other things, act as a chemotactic factor. We observed a relatively stable trend during the monitoring period. In a mouse model of focal TBI, Ciechanowska and colleagues found a 75-fold increase in the messenger RNA expression of MIP-1β 24 h after TBI compared to sham controls, which translated into a 2-fold increase in protein concentration at 24 h. 53 The increase in protein concentration remained high in the injured cortex for at least 7 days after injury in their study. Together, these data suggest that MIP-1β is rapidly increased and stays elevated during both the acute and subacute phases of the injury process.
Growth factors and neurotrophins
TGF-α is a member of the epidermal growth factor family and is involved in proliferation, differentiation, and development. TGF-α exhibits neurotrophic properties that protect neurons from various neurotoxic insults. 54 In this study, TGF-α peaked early and showed a steady decrease, which could reflect an immediate protective response to the primary brain injury.
VEGF is primarily involved in the development and formation of blood vessels. It also acts as a chemotactic factor for macrophages and activates resting astrocytes. 8 Previous studies by our group 6 and by Mellergård and colleagues 14 have shown VEGF elevations after TBI, and Mellergård and colleagues found that patients >25 years of age exhibited higher values. 15 In the present study, we report generally lower VEGF levels and no temporal dynamics compared to our earlier study, 6 in which some patients showed distinct peaks. In addition, in contrast to Mellergård and colleagues, 15 older rather than younger patients exhibited lower VEGF levels. Heterogeneity in primary and secondary injury patterns and small patient cohorts could perhaps explain some of the conflicting results.
BCAN is a proteoglycan found on neurons, which is important during the development of the CNS, and is involved in the communication between neurons and the extracellular matrix (ECM) of the adult brain. 55 Minta and colleagues have studied fragments of BCAN after TBI in CSF samples collected from EVDs and found consistently higher levels in patients with unfavorable outcome. 56 Here, we found a gradual decrease over time, suggesting an early peak after the acute injury followed by a slow recovery. NCAN is, similarly to BCAN, a proteoglycan found in the ECM of the CNS and is involved in the development of the brain. 57 Minta and colleagues measured NCAN in CSF samples collected from EVDs in TBI patients, but found no relation to clinical outcome. 56 Our data show a consistent decline over time, very similar to the dynamics of BCAN. One interpretation could be that, as the brain recovers, less NCAN is shed from the ECM.
Draxin is a repulsive axonal guidance protein that is involved in the development of many CNS structures. 58 The importance of draxin in the adult brain has not yet been elucidated. The protein is clearly important during development, but knocking draxin out in mice is not lethal, although the animals exhibit axonal aberrations. 59 In the present study, draxin gradually increased and plateaued after 5 days post-injury. One explanation could be that, as the brain starts to recover, the need for proteins that repel axonal growth gradually increases in order to minimize the risk of faulty connections.
RGMA is another repulsive axonal guidance protein that is important during development. In a recent publication, Liu and colleagues showed that a decrease in methylation of RGMA in the CSF samples from TBI patients correlated with intracranial hypertension. 60 Our data showed fairly stable levels during the time studied, and it is possible that the methylation of RGMA is a better biomarker than the levels of the protein itself.
Other biomarkers
CD200 is expressed in the membrane of neurons and is an important "off-signal" for immune activation. 61 Our data show low and stable levels over time, suggesting either no regulation after injury or that the protein does not enter the interstitial space.
uPA is a serine protease that converts inactive plasminogen to active plasmin, which then degrades fibrin clots. It has been proposed as a biomarker in TBI, but it is ubiquitous throughout the body and may also be released after, for example, extracranial injuries. 62 Our results show an increase from day 1 to day 6 and then a decrease to day 7. This could reflect the resolution of microemboli in the injured brain, which requires plasmin activity initiated by the release of uPA.
DKK1 is a cysteine-rich protein involved in normal embryonic development, but it is also believed to be a marker of ongoing inflammation. DKK1 serum concentration has been reported to correlate with TBI severity and 30-day mortality. 63 Here, we observed a suggested biphasic dynamic of the protein, with a small early peak at day 1, followed by a small dip on day 2, and then a distinct increase from day 3 to day 6 that tapered off toward day 7. This could reflect a gradually increasing component of the immune response to the injury.
NFH is a subunit of the neurofilaments, the intermediate filaments that, together with microfilaments and microtubules, make up the neuronal cytoskeleton. The family of neurofilaments has been identified as promising biomarkers for several neurodegenerative conditions, given that an increase in CSF or blood indicates degradation of neurons. 64 Our data demonstrated a clear biphasic dynamic with an initial decrease from day 1 to day 3, followed by an increase that peaked on day 6. The initial decrease could reflect an early release of degraded cytoskeletons from necrotic neurons that resolves during the first days after injury. The later increase could reflect delayed apoptosis and/or greater production of the protein in the repair process.
Methodological considerations
The strength of this study is the description of a novel combination of techniques (large-pore CMD with PEA) for analysis of focal cerebral biochemistry to study the complex interplay of injury and recovery processes that occur on a molecular level. This method carries a great advantage in combining multiple samples of the interstitial cerebral fluid from each patient while requiring only a small fluid volume. This is beneficial, given that multiple samples increase the reliability as well as the sensitivity to detect temporal dynamic changes of the biomarkers in these patients. However, limitations of the study include the small number of patients; heterogeneity in demographics, primary injuries, clinical course during NIC, and medications; and the focal nature of the CMD measurements, with potentially limited global validity. Nevertheless, this field of study on multiple protein biomarkers is still growing, and our patient cohort was comparable to the limited numbers in previous studies. 5 Last, the relation between systemic inflammation and neuroinflammation is important; however, although we had plasma biomarker data for some patients, we abstained from further analyses because the sample size was too small.
Conclusion
There is a need to better understand the injury and recovery processes on a molecular level after severe TBI. The high cut-off CMD membrane combined with the PEA technique for sample analyses enabled us to use small volumes of cerebral interstitial fluid to study a large set of important protein biomarkers with high temporal resolution as well as high sensitivity and specificity. In this pilot study using a newly designed 21-plex PEA panel, several biomarkers known to be involved in, for example, neuronal/axonal injury and inflammation exhibited significant, temporally dynamic changes during the first 7 days post-injury. There was no clear association between the biomarker pattern and the degree of structural injury or clinical outcome, but age tended to interact with some of the biomarkers. The study was limited by the small patient cohort. In future efforts, we intend to proceed with data collection and include more patients to attain more reliable results and to more thoroughly explore the factors responsible for certain biomarker patterns and their relation to clinical course and patient outcome.
Acknowledgments
We are sincerely grateful to the nurses and staff of the NIC unit for running the bedside CMD and to Inger Ståhl Myllyaho for excellent technical assistance. Expert input on the statistical analysis was provided by Dr. Emil Nilsson, PhD, Data Scientist, Olink Proteomics, Uppsala.
Funding Information
The study was supported by Uppsala University Hospital and funded by the Uppsala Berzelii Technology Centre.
Author Disclosure Statement
No competing financial interests exist.
|
2023-01-24T16:42:54.031Z
|
2023-01-01T00:00:00.000
|
{
"year": 2023,
"sha1": "13fe8b389f1b3773878a14d5af80ab8d0c7323e5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1089/neur.2022.0067",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a3cfe0daa7df57b22b099ab2a19165dcd320f7f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|